Python on OLCF Systems
In high-performance computing, Python is heavily used to analyze scientific data. Some users require specific versions of Python or niche scientific packages, which may in turn depend on numerous other Python packages. Because of these dependency chains, getting different Python installations to “play nicely” with each other can be quite troublesome, especially on an HPC system where the system environment is complicated. Conda, a package and virtual environment manager from the Anaconda distribution, helps alleviate these issues.
Conda allows users to easily install different versions of binary software packages and any required libraries appropriate for their computing platform. The versatility of conda allows a user to essentially build their own isolated Python environment, without having to worry about clashing dependencies and other system installations of Python. Conda is available on OLCF systems, and loading the default Python module loads an Anaconda Python distribution. Loading this distribution automatically puts you in a “base” conda environment, which already includes packages that one can use for simulation, analysis, and machine learning.
For users interested in using Python with Jupyter, see our Jupyter at OLCF page instead.
For users interested in using the machine learning
open-ce module (formerly
ibm-wml-ce) on Summit, see our IBM Watson Machine Learning CE -> Open CE page.
OLCF Python Guides
Below are instructions and guides for using Python on OLCF systems.
To start using Python, all you need to do is load the module:
$ module load python
Loading the Python module on all systems will put you in a “base” pre-configured conda environment. This option is recommended for users who do not need custom environments and only require packages that are already installed in the base environment. This option is also recommended for users who just need a Python interpreter or standard packages like NumPy, SciPy, and Matplotlib.
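As a quick sanity check, you can confirm from within Python which of these packages the active environment provides. This is a minimal sketch; the `check_packages` helper is hypothetical, not an OLCF-provided tool:

```python
import importlib.util

def check_packages(names):
    """Map each package name to whether it can be imported in this environment."""
    return {name: importlib.util.find_spec(name) is not None for name in names}

# Packages expected in the base environment (as shown by `conda list`)
print(check_packages(["numpy", "scipy", "matplotlib"]))
```

If any entry is `False`, you are likely not in the base environment (or the module is not loaded).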
To see a full list of the packages installed in the base environment, use
conda list. A small preview from Summit is provided below:

$ module load python
$ conda list

# packages in environment at /sw/summit/python/3.8/anaconda3/2020.07-rhel8:
#
# Name                       Version            Build              Channel
_ipyw_jlab_nb_ext_conf       0.1.0              py38_0
_libgcc_mutex                0.1                main
alabaster                    0.7.12             py_0
anaconda                     2020.07            py38_0
anaconda-client              1.7.2              py38_0
anaconda-project             0.8.4              py_0
asn1crypto                   1.3.0              py38_0
astroid                      2.4.2              py38_0
astropy                      4.0.1.post1        py38h7b6447c_1
.
.
.
It is not recommended to try to install new packages into the base environment.
Instead, you can clone the base environment for yourself and install packages
into the clone. To clone an environment, you must use the
--clone <env_to_clone> flag when creating a new conda environment. An example for
cloning the base environment is provided in Best Practices below.
You can also create your own custom conda environment after loading the Python module. This option is recommended for users that require a different version of Python than the default version available, or for users that want a personal environment to manage specialized packages.
To create and activate an environment in a specific location using Python
version X.Y, use the -p flag:

$ module load python
$ conda create -p /path/to/my_env python=X.Y
$ source activate /path/to/my_env
To create and activate an environment with a specific name using Python
version X.Y, use the
--name flag (by default, this creates the environment in your $HOME directory):

$ module load python
$ conda create --name my_env python=X.Y
$ source activate my_env
It is highly recommended to create new environments in the “Project Home”
directory (/ccs/proj/<project_id>/<user_id>). This space avoids purges,
allows for potential collaboration within your project, and works better with
the compute nodes. It is also recommended, for convenience, that you use
environment names that indicate the hostname, as virtual environments created
on one system will not necessarily work on others.
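Since environments built on one system will not necessarily work on another, one convention is to tag the environment name with the machine you created it on. A small sketch; the `env_name` helper is hypothetical:

```python
import platform

def env_name(base):
    """Suggest a conda environment name tagged with this machine's short hostname."""
    host = platform.node().split(".")[0]  # e.g. a login-node name on Summit or Andes
    return f"{base}-{host}"

print(env_name("my_env"))  # e.g. "my_env-login1"
```

You could then pass the result to `conda create --name` so the system is obvious at a glance.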
It is always recommended to deactivate an environment before activating a new one. Deactivating an environment can be achieved through:
$ source deactivate # deactivates the current environment
How to Run
Remember, at larger scales both your performance and your fellow users’ performance will suffer if you do not run on the compute nodes. It is always highly recommended to run on the compute nodes (through the use of a batch job or interactive batch job).
The OS-provided Python is no longer accessible as python (i.e.,
/usr/bin/env python will not resolve); rather, you must specify it as
python3. If you are using Python from one of the module files rather than the
version in /usr/bin, this change should not affect how you invoke Python in
your scripts, although we encourage python3 as a best practice.
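In practice, this means using python3 explicitly in a script's shebang line; a minimal example:

```python
#!/usr/bin/env python3
# "python3" is explicit here because the OS no longer provides a bare "python".
import sys

print("Running Python %d.%d" % sys.version_info[:2])
```

Scripts invoked as `python3 script.py` (as in the batch examples below) are unaffected either way; the shebang only matters when the script is executed directly.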
Batch Script - Summit
To use Python on a Summit compute node, you must use
jsrun, even if
running in serial.
$PATH issues are known to occur after having loaded multiple
conda environments before submitting a batch script. Therefore, it is
recommended to use a fresh login shell before submission. The
-L flag for
bsub ensures that no previously set environment variables are passed into
the batch job.
$ bsub -L $SHELL submit.lsf
This means you will have to load your modules and activate your environment inside the batch script. An example batch script for Summit is provided below:
#!/bin/bash
#BSUB -P PROJECT_ID
#BSUB -W 00:05
#BSUB -nnodes 1
#BSUB -J python
#BSUB -o python.%J.out
#BSUB -e python.%J.err

cd $LSB_OUTDIR
date

module load python
source activate my_env

jsrun -n1 -r1 -a1 -c1 python3 script.py
Interactive Job - Summit
To use Python in an interactive session on Summit:
$ module load python
$ bsub -W 0:05 -nnodes 1 -P <PROJECT_ID> -Is $SHELL
$ source activate my_env
$ jsrun -n1 -r1 -a1 -c1 python3 script.py
Batch Script - Andes
On Andes, you are already on a compute node once you are in a batch job.
Therefore, you only need to use
srun if you plan to run parallel-enabled Python.
Similar to Summit (see above),
$PATH issues are known to occur if not
submitting from a fresh login shell, which can result in the wrong conda
environment being detected. To avoid this, you must use the --export=NONE
flag, which ensures that no previously set environment variables are passed
into the batch job:
$ sbatch --export=NONE submit.sl
This means you will have to load your modules and activate your environment inside the batch script. An example batch script for Andes is provided below:
#!/bin/bash
#SBATCH -A <PROJECT_ID>
#SBATCH -J python
#SBATCH -N 1
#SBATCH -p batch
#SBATCH -t 0:05:00

cd $SLURM_SUBMIT_DIR
date

module load python
source activate my_env

python3 script.py
Interactive Job - Andes
To use Python in an interactive session on Andes:
$ module load python
$ salloc -A <PROJECT_ID> -N 1 -t 0:05:00
$ source activate my_env
$ python3 script.py
Cloning the base environment:
It is not recommended to try to install new packages into the base environment. Instead, you can clone the base environment for yourself and install packages into the clone. To clone an environment, you must use the
--clone <env_to_clone> flag when creating a new conda environment. An example for cloning the base environment into a specific directory called
conda_envs/summit/ in your “Project Home” on Summit is provided below:
$ conda create -p /ccs/proj/<project_id>/<user_id>/conda_envs/summit/baseclone-summit --clone base
$ source activate /ccs/proj/<project_id>/<user_id>/conda_envs/summit/baseclone-summit
It is highly recommended to create new environments in the “Project Home” directory (
/ccs/proj/<project_id>/<user_id>). This space avoids purges, allows for potential collaboration within your project, and works better with the compute nodes. It is also recommended, for convenience, that you use environment names that indicate the hostname, as virtual environments created on one system will not necessarily work on others.
Adding known environment locations:
For a conda environment to be callable by a “name”, it must be installed in one of the
envs_dirs directories. The list of known directories can be seen by executing:
$ conda config --show envs_dirs
On OLCF systems, the default location is your
$HOME directory. If you plan to frequently create environments in a location other than the default (such as
/ccs/proj/...), then there is an option to add directories to the envs_dirs list.
For example, to track conda environments in a subdirectory called
summit in Project Home, you would execute:
$ conda config --append envs_dirs /ccs/proj/<project_id>/<user_id>/conda_envs/summit
This will create a
.condarc file in your
$HOME directory if you do not have one already, which will now contain this new envs_dirs location. This enables you to use the
--name env_name flag when using conda commands for environments stored in the
summit directory, instead of having to use the
-p /ccs/proj/<project_id>/<user_id>/conda_envs/summit/env_name flag and specify the full path to the environment. For example, you can do
source activate my_env instead of
source activate /ccs/proj/<project_id>/<user_id>/conda_envs/summit/my_env.
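For reference, after the append your .condarc will contain an entry like the following sketch (the path matches the example above, with the placeholders left as-is):

```yaml
envs_dirs:
  - /ccs/proj/<project_id>/<user_id>/conda_envs/summit
```

You can verify the file was updated by re-running `conda config --show envs_dirs`.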