-
Connect to the cluster
ssh -Y mylogin@ruche.mesocentre.universite-paris-saclay.fr
The -Y option enables X11 forwarding, which you will need to display graphics remotely.
-
Commands to manage jobs:
- Submit a job:
sbatch submission_script.sh
- Check job status:
squeue
- Cancel a job:
scancel job_id
If your job stays in the "CF" (configuring) state for more than a minute, don't hesitate to cancel it and resubmit.
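For example, a typical session might look like this (the job ID shown is illustrative):
sbatch submission_script.sh     # prints "Submitted batch job 12345"
squeue -u $USER                 # list only your own jobs
scancel 12345                   # cancel the job if needed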
-
Get Smilei
git clone https://github.com/SmileiPIC/Smilei.git --depth 1
-
Prepare the post-processing tool happi
cd ~/Smilei
source /gpfs/workdir/perezf/smilei_env.sh
make happi
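To check that happi was installed correctly, you can try importing it (assuming the environment script above is still sourced):
python -c "import happi"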
- Compile Smilei:
- Copy the following into a new file compile.sh in the Smilei folder.
- Start the compilation with the command
sbatch compile.sh
#!/bin/bash
#SBATCH --job-name=smilei_c
#SBATCH --output=smilei_c.out
#SBATCH --error=smilei_c.err
#SBATCH --time=00:05:00
#SBATCH --ntasks=1 # total number of MPI processes
#SBATCH --ntasks-per-node=1 # number of MPI processes per node
#SBATCH --cpus-per-task=40 # number of threads per MPI process
##SBATCH --partition=cpu_short # uncomment (single #) to request the cpu_short partition
source /gpfs/workdir/perezf/smilei_env.sh
set -x
cd ${SLURM_SUBMIT_DIR}
make -j 40 machine=ruche_cascadelake_intel
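Once the job finishes, you can verify that the compilation succeeded, e.g. by inspecting the log and checking that the executables were produced (paths assume the default locations above):
tail smilei_c.out
ls -l ~/Smilei/smilei ~/Smilei/smilei_test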
- Run smilei_test:
~/Smilei/smilei_test input.py
Do not forget to replace input.py by your input file!
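For instance, you may test it on one of the input files shipped in the repository's benchmarks folder (the exact file name can vary between versions):
~/Smilei/smilei_test ~/Smilei/benchmarks/tst1d_00_em_propagation.py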
- Run Smilei without MPI parallelization (OpenMP threads only):
export OMP_NUM_THREADS=4
~/Smilei/smilei input.py
- Launch a parallel simulation:
- Copy the following into a new file run.sh in the folder of your input file.
- Modify values according to the needs of the simulation (see the example after the script below):
- Replace my_input_file.py by the name of your input file
- ntasks = the number of MPI processes
- cpus-per-task = the number of threads per MPI process
- OMP_SCHEDULE can be set to static or dynamic
- Start the simulation, in the folder of your input file, with
sbatch run.sh
- The results will appear in the folder $WORKDIR/simulation_results
#!/bin/bash
#SBATCH --job-name=run_smilei
#SBATCH --output=smilei.out
#SBATCH --time=00:10:00
#SBATCH --ntasks=1 # total number of MPI processes
#SBATCH --ntasks-per-node=1 # number of MPI processes per node
#SBATCH --cpus-per-task=40 # number of threads per MPI process
INPUT_FILE=my_input_file.py
source /gpfs/workdir/perezf/smilei_env.sh
set -x
cd ${SLURM_SUBMIT_DIR}
# Dynamic scheduling for the loop over patches
export OMP_SCHEDULE=dynamic
# number of OpenMP threads per MPI process
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
# Bind the OpenMP threads of each MPI process to cores
export OMP_PLACES=cores
# execution
F=$WORKDIR/simulation_results
rm -fr $F    # remove any previous results with the same name
mkdir $F
cp $INPUT_FILE $F/input.py
cd $F
srun ~/Smilei/smilei input.py
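As an example of adapting the header, a run on 2 full nodes with 2 MPI processes per node and 20 OpenMP threads each (assuming the 40-core nodes used above) would use:
#SBATCH --ntasks=4 # total number of MPI processes
#SBATCH --ntasks-per-node=2 # number of MPI processes per node
#SBATCH --cpus-per-task=20 # number of threads per MPI process
Then submit from the folder of your input file and monitor the run:
sbatch run.sh
squeue -u $USER
tail -f smilei.out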
-
Start Python & happi for post-processing
If you haven't done so already, load the environment:
source /gpfs/workdir/perezf/smilei_env.sh
Then you may launch ipython, in which you can import happi.
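A minimal post-processing sketch, assuming your results are in $WORKDIR/simulation_results and that the run produced scalar diagnostics:
import happi
# open the results folder (replace with your own $WORKDIR path)
S = happi.Open("/gpfs/workdir/mylogin/simulation_results")
# plot the total energy in the simulation over time ("Utot" is a standard Smilei scalar)
S.Scalar("Utot").plot()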