## 2. Via dedicated VM

Data can be transferred via a dedicated VM, which can be requested via ServiceNow.
Instead of transferring data between Atlas and UL HPC Clusters through your laptop as described above, the transfer goes through the dedicated VM. Once you are connected to the VM and Atlas is mounted, the `rsync` command can be used in the same way as described in the [UL HPC documentation](https://hpc-docs.uni.lu/data/transfer/#data-transfer-tofromwithin-ul-hpc-clusters). This method is recommended for **recurring transfers of very large datasets** that benefit from the high-speed network connection between the VM and the HPC clusters.
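For illustration, a transfer initiated from the VM might look like the sketch below. The mount point `/mnt/atlas`, the host alias `aion-cluster`, and the target path are assumptions; substitute the values that match your own setup.

```bash
# Hypothetical example: /mnt/atlas is assumed to be the Atlas mount point on
# the VM, and aion-cluster an ssh alias for a UL HPC login node (placeholders).
# -a preserves permissions/timestamps, -v is verbose, -z compresses in transit.
rsync -avz --progress \
    /mnt/atlas/project/dataset/ \
    aion-cluster:/scratch/users/<username>/dataset/
```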
**Note**: For larger transfers between Atlas and UL HPC Clusters, you may want to run the operations in the background using `screen` or `tmux`. These tools keep the transfer running even if your `ssh` connection is interrupted.
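A minimal `tmux` workflow for a long-running transfer could look as follows; the session name and paths are placeholders.

```bash
# Start a named tmux session for the transfer (the name is arbitrary).
tmux new -s atlas-transfer

# Inside the session, run the transfer as usual (paths are placeholders):
rsync -avz --progress /mnt/atlas/project/dataset/ aion-cluster:/scratch/users/<username>/dataset/

# Detach with Ctrl-b d; the transfer keeps running on the VM.
# Reattach later to check progress:
tmux attach -t atlas-transfer
```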
## 3. Via Large File Transfer (LFT)
An alternative is to use LFT, which can handle data volumes of several terabytes, for transfers between Atlas and UL HPC Clusters. However, LFT should only be used if the data is already on LFT (e.g., data received from external collaborators). In that case, you can make a copy of the data and upload it directly to the UL HPC Clusters for computational analysis. A master copy of the data must then be uploaded manually to Atlas for internal archiving.
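Assuming the LFT instance is reachable over SFTP and that you received credentials with the share invitation, a command-line download onto a cluster could look like the sketch below; the hostname `lft.uni.lu`, the username, and the paths are assumptions, so use the details from your own invitation.

```bash
# Hypothetical sketch: lft.uni.lu, <lft-username>, and the paths are
# placeholders; take the actual endpoint and credentials from your invitation.
# Open an SFTP session to the LFT share from a cluster login node:
sftp <lft-username>@lft.uni.lu
# At the sftp prompt, fetch the dataset recursively:
# sftp> get -r dataset/ /scratch/users/<username>/dataset/
```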