.. _install:

Installation
============

The zip file containing *XCLASS* (version 1.4.3) can be downloaded `here `_.

*XCLASS* has only been tested on Linux and Mac systems. We have no experience with Windows!


Required packages
-----------------

*XCLASS* requires the following packages:

- gcc (version 7.4.0 or newer),
- gfortran (version 7.4.0 or newer) with OpenMP,
- OpenMPI (version 1.8.6),
- python 3.x (recommended >= 3.10),
- numpy (version 1.19.5 or newer),
- scipy (version 1.5.4 or newer),
- matplotlib (version 3.3.4 or newer),
- astropy (version 4.1 or newer),
- spectral_cube (version 0.6.3 or newer),
- regions (version 0.5 or newer),
- PyQt5 (version 5.14.1 or newer),
- h5py (version 3.1.0 or newer),
- lxml (version 4.9.2 or newer),
- emcee (version 3.1.4 or newer),
- ultranest (version 3.6.4 or newer),
- libraries: libz, libm, libdl, libcurl, libpthread, and libgomp.


PyPI
----

In order to install the *XCLASS* package, just execute

.. code:: shell

    python3 -m pip install xclass_pip_off/


Mac
---

(Thanks to David Friedlander)

On Apple Silicon (M1 Pro) hardware architectures, the following additional environment variables have to be defined:

.. code:: shell

    export FC=gfortran-mp-12
    export CC=gcc-mp-12
    export LDFLAGS="-L/opt/local/lib -lcurl -lz"

For some other environment variables we first need to determine where things are, as this will change as MacPorts packages get upgraded over the years. For each of these items we can search the list of contents of a given package and then run that output through ``dirname``, which gives us the enclosing directory. In each case, use the name of the actual package installed on *your* system.

"mpif90" compiler:

.. code:: shell

    dirname `port contents openmpi-gcc12 | grep mpif90$`

leads to:

.. code:: shell

    export PATH="/opt/local/libexec/openmpi-gcc12/:${PATH}"

"f951" binary (needed by gfortran):

.. code:: shell

    dirname `port contents gcc12 | grep f951`

leads to (don't just copy these -- the CPU architecture matters!):

.. code:: shell

    export PATH="/opt/local/libexec/gcc/x86_64-apple-darwin22/12.2.0:${PATH}"

or

.. code:: shell

    export PATH="/opt/local/libexec/gcc/arm64-apple-darwin22/12.2.0:${PATH}"

And the code needs to be able to find the ``omp_lib.mod`` module:

.. code:: shell

    dirname `port contents gcc12 | grep omp_lib.mod`

leads to:

.. code:: shell

    export FINCLUDE="-I/opt/local/lib/gcc12/gcc/x86_64-apple-darwin22/12.2.0/finclude"

or

.. code:: shell

    export FINCLUDE="-I/opt/local/lib/gcc12/gcc/arm64-apple-darwin22/12.2.0/finclude"


Init file
---------

During the installation process, *XCLASS* creates a new subdirectory called ``.xclass``, located in the user's home directory, i.e. ::

    /path-to-user-home-directory/.xclass/

This subdirectory contains a small text file called ``init.dat``, which describes some internal parameters used by *XCLASS*. Additionally, *XCLASS* creates a subdirectory called ``db/``, i.e. ::

    /path-to-user-home-directory/.xclass/db/

which contains the *XCLASS* database file ``cdms_sqlite.db``.


Parallelization
---------------

In order to use the parallelization option of the interface, the user should increase the stack size for OpenMP by adding the following lines to the .bashrc (or .bash_profile) file:

.. code:: shell

    ulimit -s unlimited
    export KMP_STACKSIZE='3999M'
    export OMP_STACKSIZE='3999M'
    export GOMP_STACKSIZE='3999M'

Please note: if more or less RAM is available, increase or decrease the value ``"3999"`` to a value suitable for your machine.
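
After opening a new shell, you can confirm that these settings are actually picked up by printing them back. A minimal check (the output simply mirrors whatever your .bashrc or .bash_profile exports):

.. code:: shell

    ## quick sanity check: show the current stack-size settings
    ulimit -s
    echo "KMP_STACKSIZE=${KMP_STACKSIZE}"
    echo "OMP_STACKSIZE=${OMP_STACKSIZE}"
    echo "GOMP_STACKSIZE=${GOMP_STACKSIZE}"
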

Job directories
---------------

The *XCLASS* interface creates so-called job directories for many *XCLASS* functions, in which all files created by a function call are stored. By default, all these job directories are stored in a so-called run directory, which is created within the .xclass subdirectory with the name "run", i.e. ::

    /path-to-user-home-directory/.xclass/run/

Sometimes it is useful to create the run directory outside the *XCLASS* subdirectory. By defining the environment variable **XCLASSJobDirectory**

.. code:: shell

    export XCLASSJobDirectory="run_somewhere_else"

the user can define another location for the run directory.


Temporary files
---------------

During the fit process, the MAGIX optimization package included in *XCLASS* creates many temporary files, which are written to the temporary directory (temp). By default, this directory is also located within the .xclass subdirectory with the name "temp", i.e. ::

    /path-to-user-home-directory/.xclass/temp/

By defining the environment variable **MAGIXTempDirectory**

.. code:: shell

    export MAGIXTempDirectory="temp_somewhere_else"

in the .bashrc (or .bash_profile) file, the user can define another location for this temporary directory. It is strongly recommended to use a so-called RAM drive, i.e. to set the environment variable to (Linux users)

.. code:: shell

    export MAGIXTempDirectory="/dev/shm/user-name/"

whenever possible. (The RAM drive is a common name for a temporary file storage facility on many Unix-like operating systems. Using a RAM drive improves the performance of *XCLASS* because the temporary files are not written to the hard drive but to RAM, which is orders of magnitude faster.)

For Mac users, add the following lines to your .bash_profile file to create a RAM drive (no guarantee):

.. code:: shell

    ## create RAM drive on Mac
    if [ -d /Volumes/RAMDisk/ ]; then echo ' '; else diskutil erasevolume HFS+ 'RAMDisk' `hdiutil attach -nomount ram://16777216`; fi

    ## create a subdirectory there for user 'user-name'
    if [ -d /Volumes/RAMDisk/user-name/ ]; then echo ' '; else mkdir -p /Volumes/RAMDisk/user-name/; fi

    ## set temp directory environment variable for MAGIX
    export MAGIXTempDirectory=/Volumes/RAMDisk/user-name/


Troubleshooting
---------------

* astropy 6.0 cannot be used with spectral-cube 0.6.3 (Status: 2024-01-25) -> downgrade to astropy 5.0 (?); a quick way to check the installed versions is sketched below.

* There is a problem when installing *XCLASS* in a virtual environment because numpy.f2py cannot be executed there -> ??

* The following error message may appear on Macs::

      --------------------------------------------------------------------------
      A system call failed during shared memory initialization that should
      not have.  It is likely that your MPI job will now either abort or
      experience performance degradation.

        Local host:  gs66-draco
        System call: unlink(2) /var/folders/v9/d5smf9694zjd9q3jtnykh1m46xv24k/T//ompi.gs66-draco.232622226/pid.27703/1/vader_segment.gs66-draco.232622226.caa90001.4
        Error:       No such file or directory (errno 2)
      --------------------------------------------------------------------------

  -> The error can be fixed by adding

  .. code:: shell

      export TMPDIR=/tmp

  to **.bash_profile** (see `https://github.com/open-mpi/ompi/issues/7393 <https://github.com/open-mpi/ompi/issues/7393>`_).

* ...
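
When chasing dependency problems such as the astropy/spectral-cube mismatch above, it can help to verify which versions are actually installed in the Python environment used to run *XCLASS*. A minimal check along these lines (adapt the package list to your setup):

.. code:: shell

    ## import the main Python dependencies and print their installed versions;
    ## an import error here points to a broken or incompatible package
    python3 -c "import numpy, scipy, matplotlib, astropy, spectral_cube; [print(m.__name__, m.__version__) for m in (numpy, scipy, matplotlib, astropy, spectral_cube)]"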