Useful links
Your first login at ETSIAE
- Log in to the computer.
- Open the file manager > navigate to the network drive temporal > folder MobaXterm.
- Copy the executable inside to your desktop.
- Double-click this executable.
- Select "create new session".
- Indicate your RUCHE user name and the hostname ruche.mesocentre.universite-paris-saclay.fr
- In the terminal tab, enter your RUCHE password (do not save it).
General terminal commands
- Directories
  - Print the name of the current directory: pwd
  - List the files in the current directory: ls
  - Change directory: cd path/to/other/folder
  - Go to your home directory: cd ~
  - Make a new folder: mkdir new_folder
  - Remove a folder: rm -r my_folder
  - Copy a folder: cp -r my_folder new_folder
- Files
  - Remove a file: rm my_file
  - Copy a file: cp my_file new_file
  - Move a file: mv my_file other_folder/new_name
- Text editors
  - emacs some_file &
  - vim some_file
  - nano some_file
- Get a file from the web: wget some_URL
Running Smilei on the RUCHE cluster
- Commands to manage jobs:
  - Submit a job: sbatch submission_script.sh
  - Check job status: squeue
  - Cancel a job: scancel job_id
  If your job stays in the "CF" (configuring) status for more than a minute, do not hesitate to cancel and resubmit it.
- Get Smilei:
  git clone https://github.com/SmileiPIC/Smilei.git --depth 1
- Prepare the post-processing tool happi:
  cd ~/Smilei
  source /gpfs/workdir/perezf/smilei_env.sh
  make happi
- Compile Smilei:
  - Copy the script below into a new file compile.sh in the Smilei folder.
  - Start the compilation with the command sbatch compile.sh
  - Follow the progress with tail -f smilei_c.out
#!/bin/bash
#SBATCH --job-name=smilei_c
#SBATCH --output=smilei_c.out
#SBATCH --error=smilei_c.err
#SBATCH --time=00:05:00
#SBATCH --ntasks=1 # total number of MPI processes
#SBATCH --ntasks-per-node=1 # number of MPI processes per node
#SBATCH --cpus-per-task=40 # number of threads per MPI process
##SBATCH --partition=cpu_short
##SBATCH --reservation=prouveurc_165
source /gpfs/workdir/perezf/smilei_env.sh
set -x
cd ${SLURM_SUBMIT_DIR}
make -j 40 machine=ruche_cascadelake_intel
- Run smilei_test on your input file:
  ~/Smilei/smilei_test input.py
  Do not forget to replace input.py with the name of your own input file (a minimal example namelist is sketched below).
- Run Smilei without MPI parallelization:
  export OMP_NUM_THREADS=4
  ~/Smilei/smilei input.py
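For reference, here is a minimal sketch of what such an input file (a Smilei namelist, written in Python) can look like. It is an illustrative example only, not part of the workshop material: the block names (Main, Species, DiagScalar) follow the Smilei documentation, but all numerical values are placeholders to adapt to your own case.

# my_namelist.py -- minimal, hypothetical 1D example for illustration only
Main(
    geometry = "1Dcartesian",
    interpolation_order = 2,
    cell_length = [0.1],        # cell size (normalized units)
    grid_length = [128.],       # box length, i.e. 1280 cells
    number_of_patches = [16],   # patches along x (a power of 2)
    timestep = 0.09,            # below the CFL limit (= cell_length in 1D)
    simulation_time = 100.,
    EM_boundary_conditions = [["silver-muller", "silver-muller"]],
)

Species(
    name = "electrons",
    position_initialization = "regular",
    momentum_initialization = "cold",
    particles_per_cell = 16,
    mass = 1.,
    charge = -1.,
    number_density = 1.,
    boundary_conditions = [["remove", "remove"]],
)

# Global scalar diagnostics every 10 timesteps
DiagScalar(every = 10)

You would first check such a file with smilei_test, as above, before submitting a real run.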
- Launch a parallel simulation:
  - Copy the script below into a new file run.sh in the folder of your input file.
  - Modify the values according to the needs of the simulation:
    - Replace my_input_file.py with the name of your input file.
    - ntasks = the number of MPI processes.
    - cpus-per-task = the number of threads per MPI process.
    - OMP_SCHEDULE can be set to static or dynamic.
  - Start the simulation, from the folder of your input file, with sbatch run.sh
  - The results will appear in the folder $WORKDIR/simulation_results
#!/bin/bash
#SBATCH --job-name=run_smilei
#SBATCH --output=smilei.out
#SBATCH --time=00:10:00
#SBATCH --ntasks=1 # total number of MPI processes
#SBATCH --ntasks-per-node=1 # number of MPI processes per node
#SBATCH --cpus-per-task=40 # number of threads per MPI process
##SBATCH --reservation=prouveurc_165
INPUT_FILE=my_input_file.py
source /gpfs/workdir/perezf/smilei_env.sh
set -x
cd ${SLURM_SUBMIT_DIR}
# Dynamic scheduling for the patch/species loop
export OMP_SCHEDULE=dynamic
# number of OpenMP threads per MPI process
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
# Binding OpenMP Threads of each MPI process on cores
export OMP_PLACES=cores
# execution
F=$WORKDIR/simulation_results
rm -fr $F
mkdir $F
cp $INPUT_FILE $F/input.py
cd $F
srun ~/Smilei/smilei input.py
- Start Python & happi for post-processing:
  If you haven't done so already, load the environment:
  source /gpfs/workdir/perezf/smilei_env.sh
  Then launch ipython and import happi (see the example below).
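As an illustration, a typical happi session looks like the sketch below. The results path is a hypothetical placeholder (use the folder where your simulation ran, e.g. $WORKDIR/simulation_results), and the Scalar/Field calls only work if the corresponding DiagScalar/DiagFields blocks were defined in your namelist.

import happi

# Open the folder containing the simulation results
# (hypothetical path; adjust to your own run)
S = happi.Open("/gpfs/workdir/your_login/simulation_results")

# Plot a scalar diagnostic, e.g. the total energy vs. time
S.Scalar("Utot").plot()

# Retrieve a field diagnostic as raw data instead of plotting it
Ey = S.Field(0, "Ey")
data = Ey.getData()

Note that displaying plots from the cluster requires an X server on your side; MobaXterm provides one.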
Using Paraview on RUCHE
Running a GPU simulation on RUCHE
- The following assumes you handle the directories yourself.
- Create a new directory in $WORKDIR and put your input file there.
- Copy the script below into a new file run.sh in the same folder, replacing input.py with the name of your input file.
- Start the simulation, from the same folder, with sbatch run.sh
- The results will appear in the same folder.
#!/bin/bash
#SBATCH --job-name=smileigpu
#SBATCH --output=%x.o%j
#SBATCH --time=00:20:00
#SBATCH --ntasks=1 # number of MPI processes (= total number of GPUs)
#SBATCH --ntasks-per-node=1 # number of MPI processes per node (= number of GPUs per node)
#SBATCH --gres=gpu:1
#SBATCH --partition=gpua100
##SBATCH --reservation=prouveurc_166
#SBATCH --exclude=ruche-gpu16
export INPUT=input.py
# Env for Smilei GPU
module purge
module load gcc/11.2.0/gcc-4.8.5
module load nvhpc/23.7/gcc-11.2.0
module load cuda/11.8.0/nvhpc-23.7
module load openmpi/4.1.5/nvhpc-23.7-cuda
#module load hdf5/1.12.0/nvhpc-23.7-openmpi
module load python/3.9.10/gcc-11.2.0 # works
export HDF5_ROOT_DIR=/gpfs/users/prouveurc/tools/hdfsrc/install #/gpfs/softs/spack_0.17/opt/spack/linux-centos7-haswell/nvhpc-23.7/hdf5-1.12.0-3em63nl4p5tmv37offfmuvz2uswvgwzv/
export LD_LIBRARY_PATH=/gpfs/users/prouveurc/tools/hdfsrc/install/lib/:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=/gpfs/softs/spack_0.17/opt/spack/linux-centos7-cascadelake/gcc-11.2.0/gettext-0.21-bppg5g6ijfrvi7sdylhhg3t5f6v2fh2x/lib/:$LD_LIBRARY_PATH
export NVHPC_CUDA_HOME=/gpfs/softs/spack_0.17/opt/spack/linux-centos7-cascadelake/nvhpc-23.7/cuda-11.8.0-j62qyr3fdv4uuxx3kln3ckwo4xoqrntx/
module load anaconda3/2022.10/gcc-11.2.0
# Run cuda code
LD_PRELOAD=/gpfs/softs/spack/opt/spack/linux-centos7-haswell/gcc-4.8.5/gcc-11.2.0-mpv3i3uebzvnvz7wxn6ywysd5hftycj3/lib64/libstdc++.so.6.0.29 /gpfs/workdir/prouveurc/workshop/smilei $INPUT
Compiling a GPU executable (if you want to)
- The following assumes you are in the Smilei directory.
# Env for Smilei GPU
module purge
module load gcc/11.2.0/gcc-4.8.5
module load nvhpc/23.7/gcc-11.2.0
module load cuda/11.8.0/nvhpc-23.7
module load openmpi/4.1.5/nvhpc-23.7-cuda
#module load hdf5/1.12.0/nvhpc-23.7-openmpi
module load python/3.9.10/gcc-11.2.0 # works
export HDF5_ROOT_DIR=/gpfs/users/prouveurc/tools/hdfsrc/install #/gpfs/softs/spack_0.17/opt/spack/linux-centos7-haswell/nvhpc-23.7/hdf5-1.12.0-3em63nl4p5tmv37offfmuvz2uswvgwzv/
export LD_LIBRARY_PATH=/gpfs/users/prouveurc/tools/hdfsrc/install/lib/:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=/gpfs/softs/spack_0.17/opt/spack/linux-centos7-cascadelake/gcc-11.2.0/gettext-0.21-bppg5g6ijfrvi7sdylhhg3t5f6v2fh2x/lib/:$LD_LIBRARY_PATH
export NVHPC_CUDA_HOME=/gpfs/softs/spack_0.17/opt/spack/linux-centos7-cascadelake/nvhpc-23.7/cuda-11.8.0-j62qyr3fdv4uuxx3kln3ckwo4xoqrntx/
make -j 4 machine="ruche_gpu2" config="gpu_nvidia"