```
sqrt(x)

Computes the square root .....
```
- *If you like notebooks*, Julia kernels are available too (though in comparison they are quite impractical)
- VSCode extension exists too (feels very much like RStudio)
# REPL modes
# Reminder: ULHPC (iris)

<center>
<img src="slides/img/iris.png" width="30%">
</center>
Start an allocation and connect to it.
After a short wait you should get a shell on a compute node. There you can load and start Julia as usual:
```tex
0 [mkratochvil@iris-131 ~](2696005 1N/T/1CN)$ module add lang/Julia
0 [mkratochvil@iris-131 ~](2696005 1N/T/1CN)$ julia
               _
   _       _ _(_)_     |  Documentation: https://docs.julialang.org
  (_)     | (_) (_)    |
   _ _   _| |_  __ _   |  Type "?" for help, "]?" for Pkg help.
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 1.8.5 (2023-01-08)
 _/ |\__'_|_|_|\__'_|  |  Official https://julialang.org/ release
|__/                   |
```
You start the script using:
```sh
$ sbatch runAnalysis.sbatch
```
<div class=leader>
<i class="twa twa-blueberries"></i>
<i class="twa twa-red-apple"></i>
<i class="twa twa-melon"></i>
<i class="twa twa-grapes"></i><br>
Questions?
</div>
Let's do some hands-on problem solving (expect around 15 minutes)
<div class=leader>
<i class="twa twa-rocket"></i>
<i class="twa twa-rocket"></i>
<i class="twa twa-rocket"></i><br>
<i class="twa twa-volcano"></i>
<i class="twa twa-mount-fuji"></i>
<i class="twa twa-snow-capped-mountain"></i>
<i class="twa twa-mountain"></i>
<i class="twa twa-sunrise-over-mountains"></i>
<br>
Utilizing GPUs
</div>
```julia
julia> data = randn(10000,10000);
julia> @time data*data;   # timed on the CPU
julia> using CUDA
julia> data = cu(data);
julia> @time data*data;   # timed on the GPU (the first call also compiles the kernel)
```
The "high-level" API spans most of the CU* helper tools:
- broadcasting numerical operations via translation to simple kernels (`.+`, `.*`, `.+=`, `ifelse.`, `sin.`, ...)
- matrix and vector operations using `CUBLAS`
- `CUSOLVER` (solvers, decompositions etc.) via `LinearAlgebra.jl`
- ML ops (in `Flux.jl`): `CUTENSOR`
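A small sketch of the first two layers (the arrays and sizes here are made up, and a CUDA-capable GPU is assumed):

```julia
using CUDA

a = CUDA.rand(1024)       # vectors allocated directly on the device
b = CUDA.rand(1024)
c = sin.(a) .+ 2 .* b     # broadcast fuses into a single generated GPU kernel

m = CUDA.rand(512, 512)
x = CUDA.rand(512)
y = m * x                 # matrix-vector product dispatched to CUBLAS
```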
CUDA kernels (`__device__` functions) are generated transparently from Julia code.
```julia
a = cu(someArray)

function myKernel(a)
    i = threadIdx().x
    a[i] += 1
    return
end

@cuda threads=length(a) myKernel(a)
```
Some Julia constructs are not feasible on the GPU (mainly allocating complex structures); these will trigger a compiler error from `@cuda`.
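For example, this hypothetical kernel allocates an `Array` inside device code, which the GPU compiler rejects:

```julia
function badKernel(a)
    tmp = [1.0, 2.0]            # heap allocation inside a kernel: not feasible
    a[threadIdx().x] += tmp[1]
    return
end

# @cuda threads=4 badKernel(a)  # fails with a kernel-compilation error
```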
# Programming kernels -- usual tricks
The number of threads and blocks is limited by the hardware; let's make a grid-stride loop to process a lot of data quickly!
```julia
a = cu(someArray)
b = cu(otherArray)

function kernel(a, b)
    index = threadIdx().x + blockDim().x * (blockIdx().x - 1)
    gridStride = gridDim().x * blockDim().x
    for i = index:gridStride:length(a)
        a[i] += someMathFunction(b[i])
    end
    return
end

@cuda threads=1024 blocks=32 kernel(a, b)
```
Typical CUDA trade-offs:
- too many blocks won't work; too few blocks won't cover all your SMs (the sketch below shows how to pick a configuration automatically)
- too many threads per block will fail to launch or spill to memory (slow); too few threads won't allow parallelization/latency hiding within an SM
- thread divergence destroys performance
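Instead of guessing, you can ask CUDA.jl's occupancy API for a suggestion. A minimal sketch, reusing `kernel`, `a`, and `b` from the grid-stride example above:

```julia
k = @cuda launch=false kernel(a, b)    # compile the kernel without launching it
config = launch_configuration(k.fun)   # hardware-derived suggestion
threads = min(length(a), config.threads)
blocks = cld(length(a), threads)
k(a, b; threads, blocks)               # launch with the chosen configuration
```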
# CUDA.jl interface
Functions available in the kernel:
*Why is it good to work in a compiled language?*
- Programs become much faster for free.
- Even if you use the language just as glue for packages, at least the glue is not slow.
*What do we gain by having types in the language?*
<i class="twa twa-red-circle"></i>
<i class="twa twa-green-circle"></i>
<i class="twa twa-purple-circle"></i><br>
<span style="color:#888">$LANG</span> to Julia in 15 minutes
</div>
- you can `Tab`-complete almost anything in the REPL
- functions have useful help with examples; try `?cat`
- `typeof(something)` may give good info (see below)
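For example:

```julia
julia> typeof(1:100)
UnitRange{Int64}

julia> typeof([1.0, 2.0])
Vector{Float64}
```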
- `Set{Int}`
- `Dict{Int,String}`
(the default type parameter is typically `Any`; see the examples below)
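A few constructor examples (the inferred types are shown in comments):

```julia
Set([1, 2, 3])                  # Set{Int64}
Dict(1 => "one", 2 => "two")    # Dict{Int64, String}
Dict()                          # nothing to infer from: Dict{Any, Any}
```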
# Basic functionality and other things you'd expect
Surprising parts:
- all functions can (and should) be overloaded
- simply add a type annotation to a parameter with `::` to distinguish between implementations for different types (see the sketch below)
- overloading is cheap
- *specialization to known simple types* is precisely the reason why compiled code can be *fast*
- adding type annotations to code and parameters helps the compiler to do the right thing
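A minimal sketch of overloading via type annotations (`describe` is a made-up function):

```julia
describe(x) = "something of type $(typeof(x))"                   # generic fallback
describe(x::Integer) = "an integer: $x"                          # specialization
describe(x::AbstractString) = "a string of length $(length(x))"

describe(3.14)   # "something of type Float64"
describe(42)     # "an integer: 42"
describe("hi")   # "a string of length 2"
```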
Using functional-style loops makes code *much less prone* to indexing errors.
- Transform an array. The original, index-based version:
```julia
for i in eachindex(arr)
    arr[i] = sqrt(arr[i])
end
```
Structured:
```julia
map(sqrt, [1,2,3,4,5])
map((x,y) -> (x^2 - exp(y)), [1,2,3], [-1,0,1])
```
# Overview
1. Why would you learn another programming language again?
2. `$OTHERLANG` to Julia in 15 minutes
3. Running distributed Julia on ULHPC
4. Easy GPU programming with CUDA.jl
<i class="twa twa-blue-book"></i>
<i class="twa twa-computer-disk"></i>
<i class="twa twa-chart-increasing"></i><br>
Packages for <br>doing useful things
</div>
# How do I do ... ?
- Structuring the data: `DelimitedFiles`, `CSV`, `DataFrames` (see the sketch below)
- Working with large data: `DistributedArrays`, `LabelledArrays`
- Stats: `Distributions`, `StatsBase`, `Statistics`
- Math: `ForwardDiff`, `Symbolics`
- Problem solving: `JuMP`, `DifferentialEquations`
- ML: `Flux`
- Bioinformatics: `BioSequences`, `GenomeGraphs`
- Plotting: `Makie`, `UnicodePlots`
- Writing notebooks: `Literate`
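As a quick taste, a tiny sketch of the data-handling combo (the file name and column are hypothetical):

```julia
using CSV, DataFrames, Statistics

df = CSV.read("measurements.csv", DataFrame)   # hypothetical input file
mean(df.value)                                 # assumes a column named `value`
```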
<div class=leader>
<i class="twa twa-blueberries"></i>
<i class="twa twa-red-apple"></i>
<i class="twa twa-melon"></i>
<i class="twa twa-grapes"></i><br>
Questions?
</div>
# Thank you!
<center><img src="slides/img/r3-training-logo.png" height="200px"></center>