
julia training slides for 2022-06-08

Merged Miroslav Kratochvil requested to merge mk-juliatraining into develop
3 files changed: +28 −25
@@ -25,10 +25,17 @@
 # Basic parallel processing
-**Using Threads:**
+**Using `Threads`:**
 1. start Julia with parameter `-t N`
-2. parallelize any loops with `Threads.@threads`
+2. parallelize (some) loops with `Threads.@threads`
+```julia
+a = zeros(100000)
+Threads.@threads for i = eachindex(a)
+    a[i] = hardfunction(i)
+end
+```
 **Using `Distributed`:**
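A quick note on the `Threads` route: a minimal runnable sketch, assuming Julia was started with e.g. `julia -t 4`, and with `sqrt` standing in for the slides' hypothetical `hardfunction`:

```julia
using Base.Threads   # brings nthreads() and @threads into scope

println(nthreads())  # how many threads this session actually got

a = zeros(100_000)
@threads for i in eachindex(a)
    a[i] = sqrt(i)   # stand-in for an expensive per-element computation
end
```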
@@ -38,6 +45,8 @@ addprocs(N)
 newVector = pmap(function, oldVector)
 ```
+We will use the `Distributed` approach.
 # How to design for parallelization?
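For reference, a self-contained `Distributed` sketch along the lines of the hunk above; the worker count and the `hardwork` function are illustrative assumptions, not part of the slides:

```julia
using Distributed
addprocs(4)  # assumption: 4 local worker processes

# functions used by pmap must be defined on every worker
@everywhere hardwork(i) = sum(sqrt(j) for j in 1:i)  # hypothetical workload

oldVector = 1:10_000
newVector = pmap(hardwork, oldVector)  # map distributed across the workers
```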
@@ -53,7 +62,7 @@ newVector = pmap(function, oldVector)
 - parallelize programs using `pmap` and `dmapreduce` (DistributedData.jl)
 - Decompose more advanced programs into *tasks with dependencies*
   - Dagger.jl
-- `make -jN` is a surprisingly good tool for parallelization!
+- `make -jN` may be a surprisingly good tool for parallelization!
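The `dmapreduce` mentioned above (from DistributedData.jl) follows a map-then-fold pattern; stock `Distributed` can express the same idea with `@distributed`, shown here as a minimal sketch (worker count is an assumption):

```julia
using Distributed
addprocs(4)  # assumption: 4 local worker processes

# map-then-fold: each i is mapped to i^2 on some worker,
# and the partial results are folded together with (+)
total = @distributed (+) for i in 1:1_000_000
    i^2
end
```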