Commit e5d61e22, authored by Laurent Heirendt
Merge branch 'mk-juliahpc' into 'develop': add julia hpc slides
See merge request R3/school/courses!176
<div class=leader>
<i class="twa twa-axe"></i><i class="twa twa-carpentry-saw"></i><i class="twa twa-screwdriver"></i><i class="twa twa-wrench"></i><i class="twa twa-hammer"></i><br>
Bootstrapping Julia
</div>
# Installing Julia
Recommended method:
- Download an archive from https://julialang.org/downloads/
- Execute `julia` as-is
- Link it to your `$PATH`
Distribution packages usually work well too:
- **Debian & Ubuntu**: `apt install julia`
- **Iris/Aion**: `module add lang/Julia`
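For the archive route, the steps can be sketched as follows (the version number and paths are illustrative; pick the current archive from the downloads page):

```sh
# illustrative version and paths -- check https://julialang.org/downloads/
tar xzf julia-1.8.5-linux-x86_64.tar.gz

# execute it as-is...
./julia-1.8.5/bin/julia --version

# ...or link it into a directory that is on your $PATH
ln -s "$PWD/julia-1.8.5/bin/julia" ~/.local/bin/julia
```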
# Life in the REPL
```julia
user@pc $ julia
julia> sqrt(1+1)
1.4142135623730951
julia> println("Well hello there!")
Well hello there!
julia> ?
help?> sqrt
  sqrt(x)

  Computes the square root .....
```
- *If you like notebooks*, Julia kernels are available too (though in comparison
  they are quite impractical)
- A VSCode extension exists too (feels very much like RStudio)
# REPL modes
Julia interprets some additional keys to make our life easier:
- `?`: help mode
- `;`: shell mode
- `]`: packaging mode (looks like a box!)
- `Backspace`: quits special mode
- `Tab`: autocomplete anything
- `\`... `Tab`: expand math characters
# Managing packages in the packaging mode
- Install a package
```julia
] add UnicodePlots
```
- Uninstall a package
```julia
] remove UnicodePlots
```
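The same operations are also available programmatically via the `Pkg` module, which is handy inside scripts:

```julia
using Pkg
Pkg.add("UnicodePlots")   # same as `] add UnicodePlots`
Pkg.rm("UnicodePlots")    # same as `] remove UnicodePlots`
Pkg.status()              # show the contents of the active environment
```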
# Loading libraries, modules and packages
- Load a local file (with shared functions etc.)
```julia
include("mylibrary.jl")
```
- Load a package, add its exports to the global namespace
```julia
using UnicodePlots
```
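As a sketch, such a local file could wrap the shared functions in a module (all names here are illustrative):

```julia
# mylibrary.jl -- module and function names are illustrative
module MyLibrary

export greet                  # exported names enter the caller's namespace

greet(name) = println("Hello, $name!")
helper() = 42                 # unexported; still reachable as MyLibrary.helper()

end # module
```

After `include("mylibrary.jl")`, write `using .MyLibrary` (note the leading dot for a locally defined module) to bring the exports into scope.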
# <i class="twa twa-light-bulb"> </i> How to write a standalone program?
*Your scripts should communicate well with the environment!*
(that means, among others, you)
```julia
#!/usr/bin/env julia

function process_file(filename)
    @info "Processing $filename..."
    # ... do something ...
    error_detected = false   # placeholder, set by your processing code
    if error_detected
        @error "something terrible has happened"
        exit(1)
    end
end

for file in ARGS
    process_file(file)
end
```
Correct processing of command-line arguments makes your scripts *repurposable*
and *configurable*.
# <i class="twa twa-light-bulb"></i> Workflow: Make a local environment for your script
- Enter a local project with separate package versions
```julia
] activate path/to/project
```
- Install dependencies of the local project
```julia
] instantiate
```
- Execute a script with the project environment
```sh
$ julia --project=path/to/project script.jl
```
(Project data is stored in `Project.toml`, `Manifest.toml`.)
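The same environment handling works programmatically via `Pkg`, which is convenient at the top of scripts:

```julia
using Pkg
Pkg.activate("path/to/project")   # same as `] activate path/to/project`
Pkg.instantiate()                 # same as `] instantiate`
# packages from the project environment can be loaded from here on
```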
<div class=leader>
<i class="twa twa-rocket"></i>
<i class="twa twa-rocket"></i>
<i class="twa twa-rocket"></i><br>
Parallel Julia
</div>
# Note about MPI
If you're into MPI, you can perfectly well use it from Julia via the `MPI.jl` package.
Here we show the `Distributed.jl` approach, because:
- it is slightly more user-friendly
- it is super easy to use for any Julia code
# Julia model of distributed computation
<center>
<img src="slides/img/distrib.svg" width="50%">
</center>
# Basic parallel processing
**Using `Threads`:**
1. start Julia with parameter `-t N`
2. parallelize (some) loops with `Threads.@threads`
```julia
a = zeros(100000)
Threads.@threads for i = eachindex(a)
    a[i] = hardfunction(i)
end
```
**Using `Distributed`:**
```julia
using Distributed
addprocs(N)
newVector = pmap(myFunction, myVector)
```
We will use the `Distributed` approach.
# Managing your workers
```julia
using Distributed
addprocs(4)
myid()
workers()
```
Running commands on workers:
```julia
@spawnat 3 @info "Message from worker"
@spawnat :any myid()
```
Getting results from workers:
```julia
job = @spawnat :any begin sleep(10); return 123+321; end
fetch(job)
```
Cleaning up:
```julia
rmprocs(workers())
```
# Processing lots of data items in parallel
```julia
datafiles = ["file$i.csv" for i=1:20]
@everywhere function process_file(name)
    println("Processing file $name")
    # ... do something ...
end
pmap(process_file, datafiles)
```
<i class="twa twa-light-bulb"></i><i class="twa twa-light-bulb"></i> Doing it manually:
```julia
@sync for f in datafiles
    @async @spawnat :any process_file(f)
end
```
# Gathering results from workers
```julia
items = collect(1:1000)
@everywhere compute_item(i) = 123 + 321*i
pmap(compute_item, items)
```
<i class="twa twa-light-bulb"></i><i class="twa twa-light-bulb"></i><i class="twa twa-light-bulb"></i> Doing manually with `@spawnat`:
```julia
futures = [@spawnat :any compute_item(item) for item in items]
fetch.(futures)
```
# How to design for parallelization?
**Recommended way:** *Utilize the high-level looping primitives!*
- use `map`, parallelize by just switching to `pmap`
- use `reduce` or `mapreduce`, parallelize by just switching to `dmapreduce` (DistributedData.jl)
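With that design, parallelizing is a one-word change (here `compute` is an illustrative placeholder for the per-item work):

```julia
using Distributed
addprocs(4)

@everywhere compute(x) = x^2 + 1   # placeholder for the real per-item work

results = map(compute, 1:100)      # serial version
results = pmap(compute, 1:100)     # parallel version -- same call shape
```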
# <i class="twa twa-light-bulb"></i> Parallel → distributed processing
It is very easy to organize *multiple computers* to work for you!
You need a working `ssh` connection:
```sh
user@pc1 $ ssh server1
Last login: Wed Jan 13 15:29:34 2021 from 2001:a18:....
user@server $ _
```
Spawning remote processes on remote machines:
```julia
julia> using Distributed
julia> addprocs([("server1", 10), ("pc2", 2)])
```
**Benefit:** No additional changes to the parallel programs!
<div class=leader>
<i class="twa twa-abacus"></i>
<i class="twa twa-laptop"></i>
<i class="twa twa-desktop-computer"></i>
<i class="twa twa-flag-luxembourg"></i><br>
Utilizing ULHPC <i class="twa twa-light-bulb"></i>
</div>
# Reminder: ULHPC (iris)
<center>
<img src="slides/img/iris.png" width="30%">
<br>
<tt>hpc-docs.uni.lu/systems/iris</tt>
</center>
# Running Julia on the computing nodes
Start an allocation and connect to it:
```sh
0 [mkratochvil@access1 ~]$ srun -p interactive -t 30 --pty bash -i
```
(You can also use `si`.)
After some brief time, you should get a shell on a compute node. There you can install and start Julia as usual:
```text
0 [mkratochvil@iris-131 ~](2696005 1N/T/1CN)$ module add lang/Julia
0 [mkratochvil@iris-131 ~](2696005 1N/T/1CN)$ julia
               _
   _       _ _(_)_     |  Documentation: https://docs.julialang.org
  (_)     | (_) (_)    |
   _ _   _| |_  __ _   |  Type "?" for help, "]?" for Pkg help.
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 1.8.5 (2023-01-08)
 _/ |\__'_|_|_|\__'_|  |  Official https://julialang.org/ release
|__/                   |

julia>
```
# Making an HPC-compatible Julia script
Main challenges:
1. discover the available resources
2. spawn worker processes at the right place
```julia
using ClusterManagers
addprocs_slurm(parse(Int, ENV["SLURM_NTASKS"]))
# ... continue as usual
```
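Putting the pieces together, a complete Slurm-aware analysis script might be sketched as follows (the per-item function is a placeholder):

```julia
#!/usr/bin/env julia
using Distributed, ClusterManagers

# 1. discover the resources, 2. spawn the workers where Slurm says
addprocs_slurm(parse(Int, ENV["SLURM_NTASKS"]))

@everywhere process_item(i) = i^2   # placeholder for the real work

results = pmap(process_item, 1:1000)
@info "All done" sum(results)

rmprocs(workers())   # clean up
```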
# Scheduling an analysis script
Normally, you write a "batch script" and add it to a queue using `sbatch`.
Script in `runAnalysis.sbatch`:
```sh
#!/bin/bash
#SBATCH -J MyAnalysisInJulia
#SBATCH -n 10
#SBATCH -c 1
#SBATCH -t 30
#SBATCH --mem-per-cpu 4G
julia runAnalysis.jl
```
You start the script using:
```sh
$ sbatch runAnalysis.sbatch
```
<div class=leader>
<i class="twa twa-volcano"></i>
<i class="twa twa-mount-fuji"></i>
<i class="twa twa-snow-capped-mountain"></i>
<i class="twa twa-mountain"></i>
<i class="twa twa-sunrise-over-mountains"></i>
<br>
Utilizing GPUs
</div>
# Note about CUDA
Julia can serve as an extremely user-friendly front-end for CUDA: it abstracts away the ugly steps you would otherwise need to do by hand, yet leaves enough flexibility to write high-performance low-level compute kernels.
The approach demonstrated here uses `CUDA.jl`.
There's also:
- `AMDGPU.jl`
- `Metal.jl` for <i class="twa twa-green-apple"></i>
- `Vulkan.jl` (less user friendly but works everywhere)
# Using your GPU for accelerating simple stuff
```julia
julia> data = randn(10000,10000);
julia> @time data*data;
julia> using CUDA
julia> data = cu(data);
julia> @time data*data;
```
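Caveat: GPU operations execute asynchronously, so a plain `@time` may measure only the kernel launch. Synchronizing (and warming up the compiler first) gives a fair comparison:

```julia
using CUDA

data = cu(randn(10000, 10000))
data * data                    # warm-up: the first call includes compilation
@time CUDA.@sync data * data   # synchronize so the full computation is timed
```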
# What's available?
The "high-level" API spans most of the CU* helper tools:
- broadcasting numerical operations via translation to simple kernels (`.+`, `.*`, `.+=`, `ifelse.`, `sin.`, ...)
- matrix and vector operations using `CUBLAS`
- `CUSOLVER` (solvers, decompositions etc.) via `LinearAlgebra.jl`
- ML ops (in `Flux.jl`): `CUTENSOR`
- `CUFFT`
- `CUSPARSE` via `SparseArrays.jl`
- limited support for reducing operations (`findall`, `findfirst`, `findmin`, ...) -- these do not translate easily to GPU code
- very limited support for array index processing
(See: https://github.com/NVIDIA/CUDALibrarySamples)
# Programming kernels in Julia!
CUDA kernels (`__device__` functions) are generated transparently directly from Julia code.
```julia
a = cu(someArray)

function myKernel(a)
    i = threadIdx().x
    a[i] += 1
    return
end

@cuda threads=length(a) myKernel(a)
```
Some Julia constructions will not be feasible on the GPU (mainly allocating complex structures); these will trigger a compiler message from `@cuda`.
# Programming kernels -- usual tricks
The number of threads and blocks is limited by hardware; let's make a
grid-stride loop to process a lot of data quickly!
```julia
a = cu(someArray)
b = cu(otherArray)

function applySomeMath(a, b)
    index = threadIdx().x + blockDim().x * (blockIdx().x - 1)
    gridStride = gridDim().x * blockDim().x
    for i = index:gridStride:length(a)
        a[i] += someMathFunction(b[i])
    end
    return
end

@cuda threads=1024 blocks=32 applySomeMath(a, b)
```
Typical CUDA trade-offs:
- too many blocks won't work, insufficient blocks won't cover your SMs
- too many threads per block will fail or spill to memory (slow), insufficient threads won't allow parallelization/latency hiding in SM
- thread divergence destroys performance
# CUDA.jl interface
Functions available in the kernel:
- `gridDim`, `blockDim`
- `blockIdx`, `threadIdx`
- `warpsize`, `laneid`, `active_mask`
- `sync_threads`, `sync_warp`, `threadfence`, ...
- `vote_all`, `vote_ballot`, `shfl_sync`, ...
Parameters for the `@cuda` spawn:
- `threads=nnn` per block
- `blocks=nnn` per grid
- `shmem=nnn` how much shared memory to request (available via `CuStaticSharedArray`)
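As a sketch of using the shared memory, a single-block tree reduction (simplified: assumes the input length equals the thread count and is a power of two):

```julia
using CUDA

# Sum a vector within one block using shared memory (simplified sketch).
function blocksum(a, out)
    i = threadIdx().x
    shared = CuStaticSharedArray(Float32, 1024)
    shared[i] = a[i]
    sync_threads()
    stride = blockDim().x ÷ 2
    while stride > 0             # tree reduction within the block
        if i <= stride
            shared[i] += shared[i + stride]
        end
        sync_threads()
        stride ÷= 2
    end
    if i == 1
        out[1] = shared[1]
    end
    return
end

a = cu(ones(Float32, 1024))
out = CUDA.zeros(Float32, 1)
@cuda threads=1024 blocksum(a, out)
# out[1] should now contain the sum of `a`
```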
# Julia for newcomers
## June 8th, 2022
<div style="top: 6em; left: 0%; position: absolute;">
<img src="theme/img/lcsb_bg.png">
</div>
<div style="top: 1em; left: 60%; position: absolute;">
<img src="slides/img/r3-training-logo.png" height="200px">
<img src="slides/img/julia.svg" height="200px">
<h1 style="margin-top:3ex; margin-bottom:3ex;">Julia on HPCs</h1>
<h4>
Miroslav Kratochvíl, Ph.D.<br>
Laurent Heirendt, Ph.D.<br>
R3 Team - <a href="mailto:lcsb-r3@uni.lu">lcsb-r3@uni.lu</a><br>
<i>Luxembourg Centre for Systems Biomedicine</i>
</h4>
</div>
<link rel="stylesheet" href="https://lcsb-biocore.github.io/icons-mirror/twemoji-amazing.css">
<style>
code {border: 2pt dotted #f80; padding: .4ex; border-radius: .7ex; color:#444; }
.reveal pre code {border: 0; font-size: 18pt; line-height:27pt;}
em {color: #e02;}
li {margin-bottom: 1ex;}
div.leader {font-size:400%; line-height:120%; font-weight:bold; margin: 1em;}
section {padding-bottom: 10em;}
</style>
# Motivation first!
*Why is it good to work in compiled language?*
- Programs become much faster for free.
- Even if you use the language as a package glue, at least the glue is not slow.
*What do we gain by having types in the language?*
- Generic programming, and lots of optimization possibilities for the compiler.
*Is Julia ecosystem ready for my needs? <i class="twa twa-thinking-face"></i>*
- Likely. If not, extending the packages is super easy.
- Base includes most of the functionality of MATLAB, R, and Python with numpy,
  plus many useful bits of C++.
# Why Julia?
<center><img src="slides/img/whyjulia.png" width="80%"></center>
(Source: JuliaCon 2016, Arch D. Robison)
<div class=leader>
<i class="twa twa-blue-circle"></i>
<i class="twa twa-red-circle"></i>
<i class="twa twa-green-circle"></i>
<i class="twa twa-purple-circle"></i><br>
<span style="color:#888">$OTHERLANG</span> to Julia<br>in 15 minutes
</div>
# Always remember
- you can `Tab` through almost anything in REPL
- functions have useful help with examples, try `?cat`
- `typeof(something)` may give good info
# Everything has a type that determines storage and value handling
- `Vector{Int}`
```julia
[1, 2, 5, 10]
```
- `Matrix{Float64}`
```julia
[1.0 2.0; 2.0 1.0]
```
- `Tuple`
```julia
(1, 2.0, "SomeLabel")
```
- `Set{Int}`
- `Dict{Int,String}`
# Basic functionality and expectable stuff
Most concepts from C, Python and MATLAB are portable as they are.
Surprising parts:
- arrays are indexed from `1` (for a relatively good reason)
- Arrays: `array[1]`, `array[2:5]`, `array[begin+1:end-1]`, `size`, `length`, `cat`, `vcat`, `hcat`, ...
- code blocks `begin` and `end` with keywords
- you can stuff everything on one line!
- all functions can (and should) be overloaded
- simply add a type annotation to a parameter with `::` to distinguish between implementations for different types
- overloading is cheap
- *specialization to known simple types* is precisely the reason why compiled code can be *fast*
- adding type annotations to code and parameters helps the compiler to do the right thing
# <i class="twa twa-light-bulb"></i> Structured cycles
Using functional-style loops makes code *much less prone* to indexing
errors.
- Transform an array, original:
```julia
for i = eachindex(arr)
    arr[i] = sqrt(arr[i])
end
```
Structured:
```julia
map(sqrt, [1,2,3,4,5])
map((x,y) -> (x^2 - exp(y)), [1,2,3], [-1,0,1])
```
- Summarize an array:
```julia
reduce(+, [1,2,3,4,5])
reduce((a,b) -> "$b $a", ["Use", "the Force", "Luke"])
reduce(*, [1 2 3; 4 5 6], dims=1)
```
**Tricky question (<i class="twa twa-light-bulb"></i><i class="twa twa-light-bulb"></i><i class="twa twa-light-bulb"></i>):** What is the overhead of the "nice" loops?
# Array-creating loops and generators
```julia
julia> [i*10 + j for i = 1:3, j = 1:5]
3×5 Matrix{Int64}:
 11  12  13  14  15
 21  22  23  24  25
 31  32  33  34  35

julia> join(sort([c for word in ["the result is 123", "what's happening?", "stuff"]
                  for c in word
                  if isletter(c)]))
"aaeeeffghhhiilnnpprssssttttuuw"

julia> Dict('a'+i => i for i=1:26)
Dict{Char, Int64} with 26 entries:
  'n' => 13
  'f' => 5
  ...
```
# Control flow: subroutines (functions)
- Multi-line function definition
```julia
function combine(a, b)
    return a + b
end
```
- "Mathematical" neater definition
```julia
combine(a,b) = a + b
```
- <i class="twa twa-light-bulb"></i> Definition with types specified (prevents errors, allows optimizations!)
```julia
function combine(a::Int, b::Int)::Int
    return a + b
end

function combine(a::Vector, b::Vector)::Vector
    return a .+ b
end

combine(a::String, b::String)::String = "$a and $b"
```
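Calling `combine` then picks the implementation by the argument types (multiple dispatch):

```julia
combine(a, b) = a + b                        # generic fallback
combine(a::Vector, b::Vector) = a .+ b
combine(a::String, b::String) = "$a and $b"

combine(1, 2)                # 3        -- generic method
combine([1, 2], [3, 4])      # [4, 6]   -- Vector method
combine("tea", "biscuits")   # "tea and biscuits" -- String method
```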
# Broadcasting over iterable things (aka The Magic Dot)
- Broadcasting operators by prepending a dot
```julia
matrix[row, :] .+= vector1 .* vector2
```
- Broadcasting a function
```julia
sqrt.(1:10)
maximum.(eachcol(rand(100,100)))
x = [1,2,3,4]
x' .* x
```
- Making generators
```julia
myarray_index = Dict(myarray .=> eachindex(myarray))
```
<i class="twa twa-light-bulb"></i> The "magic dot" is a shortcut for calling `broadcast(...)`.
```json
[
  { "filename": "index.md" },
  { "filename": "overview.md" },
  { "filename": "intro.md" },
  { "filename": "bootstrap.md" },
  { "filename": "language.md" },
  { "filename": "pkgs.md" },
  { "filename": "distributed.md" },
  { "filename": "gpu.md" },
  { "filename": "thanks.md" }
]
```
# Overview
1. Why would you learn another programming language again?
2. `$OTHERLANG` to Julia in 15 minutes
3. Running distributed Julia on ULHPC
4. Easy GPU programming with CUDA.jl
<div class=leader>
<i class="twa twa-bar-chart"></i>
<i class="twa twa-blue-book"></i>
<i class="twa twa-computer-disk"></i>
<i class="twa twa-chart-increasing"></i><br>
Packages for <br>doing useful things
</div>
# How do I do ... ?
- Structuring the data: `DelimitedFiles`, `CSV`, `DataFrames`
- Working with large data: `DistributedArrays`, `LabelledArrays`
- Stats: `Distributions`, `StatsBase`, `Statistics`
- Math: `ForwardDiff`, `Symbolics`
- Problem solving: `JuMP`, `DifferentialEquations`
- ML: `Flux`
- Bioinformatics: `BioSequences`, `GenomeGraphs`
- Plotting: `Makie`, `UnicodePlots`
- Writing notebooks: `Literate`
# Data frames
Package `DataFrames.jl` provides a work-alike of the data frames from
other environments (pandas, `data.frame`, tibbles, ...)
```julia
using DataFrames
mydata = DataFrame(id = [32,10,5], text = ["foo", "bar", "baz"])
mydata.text
mydata.text[mydata.id .>= 10]
```
Main change from `Matrix`: *columns are labeled and their types may differ*; entries may also be missing
# DataFrames
Popular way of importing data:
```julia
using CSV
df = CSV.read("database.csv", DataFrame) # can also do a Matrix
CSV.write("backup.csv", df)
```
Popular among computer users:
```julia
using XLSX
x = XLSX.readxlsx("important_results.xls")
XLSX.sheetnames(x)
DataFrame(XLSX.gettable(x["Results sheet"])...)
```
<small>(Please do not export data to XLSX.)</small>
# Plotting
<center>
<img src="slides/img/unicodeplot.png" width="40%" />
</center>
<div class=leader>
<i class="twa twa-blueberries"></i>
<i class="twa twa-red-apple"></i>
<i class="twa twa-melon"></i>
<i class="twa twa-grapes"></i><br>
Questions?
</div>
# Thank you!
<center><img src="slides/img/r3-training-logo.png" height="200px"></center>
Contact us if you need help:
<a href="mailto:lcsb-r3@uni.lu">lcsb-r3@uni.lu</a>