TELEMAC (Installation)#

Preface#

This tutorial guides you through the installation of open TELEMAC-MASCARET on Debian Linux-based platforms (i.e., it also works with Ubuntu and its derivatives, such as Mint or Lubuntu). Allow approximately 1-2 hours for installing TELEMAC and make sure to have a stable internet connection (>1.4 GB file download).


This section only guides through the installation of TELEMAC. A tutorial for running hydro(-morpho)dynamic models with TELEMAC is currently under construction for this eBook.

Before you start

  • This tutorial guides through the local installation of TELEMAC in a user folder on Debian Linux, which is OK for practicing, but not good for setting up a computing server. For installing TELEMAC on a server, or globally on a computer, check with your Admin: TELEMAC should be installed in the ROOT/opt/ directory.

  • Installing TELEMAC on a Virtual Machine (VM) is useful for getting started with TELEMAC and its sample cases, but not recommended for running a real-world numerical model (limited performance of VMs).

  • Familiarize yourself with the Linux Terminal to understand the underpinnings of compiling TELEMAC.

  • This tutorial refers to the software package open TELEMAC-MASCARET as TELEMAC because MASCARET is a one-dimensional (1d) model and the numerical simulation schemes in this eBook focus on two-dimensional (2d) and three-dimensional (3d) modeling.

A couple of installation options are available:

Continue to read and walk through the following sections.

If you are working with the Mint Hyfo Virtual Machine, skip the tutorials on this website because TELEMAC v8p3 is already preinstalled and you are good to go for completing the TELEMAC tutorials.

Load the TELEMAC environment and check if it works with:

cd ~/telemac/v8p3/configs
source pysource.hyfo.sh
config.py

TELEMAC is also available through the SALOME-HYDRO software suite, which is a spinoff of SALOME. However, the principal functionalities of SALOME-HYDRO may migrate to a new QGIS plugin. Therefore, this eBook recommends installing TELEMAC independently from any pre- or post-processing software.

The Austrian engineering office Flussplan provides a Docker container of TELEMAC v8 on their docker-telemac GitHub repository. Note that a Docker container represents an easy-to-install virtual environment that leverages cross-platform compatibility but affects computational performance. If you have the proprietary Docker software installed and computational performance is not the primary concern for your models, Flussplan’s Docker container might be a good choice. For instance, purely hydrodynamic models with small numbers of grid nodes and without additional TELEMAC modules will run efficiently in the Docker container.

Basic Requirements#

Working with TELEMAC requires some software for downloading source files, compiling, and running the program. The mandatory software prerequisites for installing TELEMAC on Debian Linux are:

  • Python 3.7 (or more recent) with NumPy >=1.15

  • GNU Fortran 95 compiler (gfortran)

Python3#

Estimated duration: 5-8 minutes.

The high-level programming language Python3 is pre-installed on Debian Linux 10.x and is needed to launch the compiler script for TELEMAC. To launch Python3, open Terminal and type python3. To exit Python, type exit().

TELEMAC requires the NumPy Python library that comes along with SciPy and matplotlib.

To install NumPy libraries, open Terminal and type (hit Enter after every line):

sudo apt install python3-numpy python3-scipy python3-matplotlib python3-distutils python3-dev python3-pip

To test if the installation was successful, type python3 in Terminal and import the three libraries:

Python 3.9.1 (default, Jul 25 2020, 13:03:44) [GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
>>> import scipy
>>> import matplotlib
>>> a = numpy.array((1, 1))
>>> print(a)
[1 1]
>>> exit()

None of the three library imports should return an ImportError message. To learn more about Python read the section on Packages, Modules and Libraries.
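The same check can also be run non-interactively. The one-liner below tests only numpy; the import list can be extended with scipy and matplotlib analogously:

```shell
# Non-interactive import test: prints the NumPy version if the import works;
# a failed import exits with a non-zero status and an ImportError message.
python3 -c "import numpy; print('NumPy', numpy.__version__)"
```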

GIT#

Estimated duration: Less than 5 minutes.

The installation of the git version control system and its usage are extensively described in the git section of this eBook. In addition to the git functionalities described there, the following packages are needed to manage the large files that come along with TELEMAC:

sudo apt install git-all git-lfs

GNU Fortran 95 Compiler (gfortran)#

Estimated duration: 3-10 minutes.

The Fortran 95 compiler is needed to compile TELEMAC through a Python3 script, which requires that gfortran is installed. Debian Linux retrieves gfortran from the standard package repositories. Thus, to install the Fortran 95 compiler, open Terminal and type:

sudo apt install gfortran

More Compilers and Essentials#

Estimated duration: 2-5 minutes. To enable parallelism, a C compiler is required so that Terminal recognizes the cmake command. Moreover, we will need build-essential for building packages and dialog for a comfortable dialogue environment. VIM is a text editor that we will use for bash file editing. As an alternative to VIM, consider gedit or Nano (remove unwanted editors from the below list). Therefore, open Terminal and type:

sudo apt install -y cmake build-essential dialog vim gedit gedit-plugins

Get the TELEMAC Repo#

Estimated duration: 25-40 minutes (large downloads).

Before getting more packages to enable parallelism and compiling, download the latest version of TELEMAC with git, in which additional packages will be embedded. To download (i.e., git clone) TELEMAC, open Terminal, which will by default start in your home directory (/home/USERNAME/). The following instructions assume you want to install TELEMAC directly in your home directory. However, it might make sense to create a new sub-folder (e.g., called /modeling) to better organize your file system (mkdir ~/modeling then cd ~/modeling). To download TELEMAC into the home (or a new) directory, type (enter no when asked for password encryption):

git clone https://gitlab.pam-retd.fr/otm/telemac-mascaret.git

This downloads TELEMAC into a sub-directory called /telemac-mascaret.

After downloading (cloning) the TELEMAC repository, list the available version tags to verify which is the latest:

cd telemac-mascaret
git tag --list

At the time of writing these lines, the latest version is v8p5r0. Check out the latest version as follows:

git checkout tags/v8p5r0
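To confirm which version is now checked out, query git (the || fallback only prevents an error message when the command is run outside a tagged repository):

```shell
# Print the tag of the currently checked-out commit; should report v8p5r0
git describe --tags || echo "not inside a tagged git repository"
```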

Optional Requirements (Parallelism and Others)#

This section guides through the installation of additional packages that enable parallelism, which substantially accelerates simulations. Make sure that Terminal recognizes gcc, which should be included in the Debian base installation (verify with gcc --help). The required packages are:

  • Message Passing Interface (MPI)

  • Metis

In addition, the MED file format for input meshes (e.g., created with SALOME) and computation results can be installed with TELEMAC, but versioning is tricky. To work with MED files, also check out the Q4TS QGIS plugin. Also, the AED2 water quality library can optionally be installed (see the compiler configuration notes below).

Parallelism: Install MPI#

Estimated duration: 5 minutes.

TELEMAC’s parallelism modules require that the Message Passing Interface (MPI) standard is installed either through the MPICH or the Open MPI library. Here, we opt for Open MPI, which can be installed via Terminal:

sudo apt install libopenmpi-dev openmpi-bin

To test if the installation was successful type:

mpif90 --help

The Terminal should print option flags for processing a gfortran file. The installation of MPI on Linux is also documented in the opentelemac wiki.

How to use MPICH in lieu of Open MPI

This tutorial uses the configuration file systel.edfHy.cfg, which includes parallelism compiling options that build on Open MPI. Other configuration files (e.g., systel.cis-ubuntu.cfg) use MPICH instead of Open MPI. To use those configuration files, install MPICH with sudo apt install mpich.

Parallelism: Install Metis#

Estimated duration: 10-15 minutes.

Metis is a software package for partitioning unstructured graphs, partitioning meshes, and computing fill-reducing orderings of sparse matrices, developed by the Karypis Lab (George Karypis) at the University of Minnesota. TELEMAC uses Metis as a part of Partel to split the mesh into multiple parts for parallel runs.

To install Metis, use the hydro-informatics/metis v5.1.1 fork of the Karypis Lab’s METIS GitHub repository, which is tweaked for the Telemac installation. The following code block changes to TELEMAC’s optionals directory (cd) and clones the fork (run in Terminal as a normal user - not as root):

cd ~/telemac/optionals
git clone https://github.com/hydro-informatics/metis.git
cd metis

This repository also includes a fork of the Karypis Lab’s GKlib, which still needs to be compiled (starting from the ~/telemac/optionals/metis folder):

cd GKlib
make config cc=gcc prefix=~/telemac/optionals/metis/GKlib openmp=set
make
make install
cd ..

Next, adapt Metis’ Makefile with any text editor (e.g., gedit) or with VIM (installed earlier through sudo apt install vim):

Open the metis Makefile (i.e., ~/telemac/optionals/metis/Makefile) by navigating through your system browser (also known as Explorer on Windows) and double-clicking on the Makefile (which opens, for example, in gedit). At the top of the Makefile, find prefix  = not-set and cc = not-set and replace them with:

prefix = ~/telemac/optionals/metis/build/
cc = gcc

Save and close the Makefile.

Alternatively, open the Makefile with VIM:

vim Makefile

VIM opens in the Terminal window, and the program may be a little confusing for someone who is used to Windows or macOS. If VIM/Terminal asks whether you want to continue {E}diting, confirm with the E key. Look for the prefix  = not-set and cc = not-set definitions, click in the corresponding lines, and press the i key to enable editing (-- INSERT -- will appear at the bottom of the window). Change both variables to:

prefix = ~/telemac/optionals/metis/build/
cc = gcc

Press Esc to leave the INSERT mode and then type :wq (the letters are visible on the bottom of the window) to save (write-quit) the file. Hit Enter to return to the Terminal.
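Alternatively, both placeholders can be set non-interactively with sed from the metis directory. The patterns assume the not-set defaults shown above (spacing in the Makefile may vary, hence the flexible match), and the guard keeps the commands harmless if no Makefile is present in the current directory:

```shell
# Set the two not-set placeholders in the metis Makefile with sed
if [ -f Makefile ]; then
  sed -i 's|^prefix *= not-set|prefix = ~/telemac/optionals/metis/build/|' Makefile
  sed -i 's|^cc *= not-set|cc = gcc|' Makefile
fi
```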

Back in Terminal, install Metis (make sure to be in the right directory, that is, ~/telemac/optionals/metis/):

make config
make
make install

To verify the successful installation, make sure that the file ~/telemac/optionals/metis/build/lib/libmetis.a exists (i.e., <install_path>/lib/libmetis.a ).
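This check can be scripted; check_lib is a hypothetical helper, not part of TELEMAC:

```shell
# check_lib reports whether a compiled library file exists at the given path
check_lib() {
  if [ -f "$1" ]; then echo "found: $1"; else echo "missing: $1"; fi
}
check_lib ~/telemac/optionals/metis/build/lib/libmetis.a
```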

Hdf5 and MED Format Handlers#

Estimated duration: 15-25 minutes (building libraries takes time).

HDF5 is a portable file format that incorporates metadata and communicates efficiently with C/C++ and Fortran on small laptops as well as massively parallel systems. The hdf5 file library is provided by HDFgroup.org. The manual installation of v1.10.6 is recommended because installing HDF5 with apt yields a version that is incompatible with Telemac:

Retrieve the package and install it (Terminal):

wget https://support.hdfgroup.org/ftp/HDF5/releases/hdf5-1.10/hdf5-1.10.6/src/hdf5-1.10.6.tar.gz
tar -xvf hdf5-1.10.6.tar.gz
cd hdf5-1.10.6
./configure --prefix=/usr/local/hdf5-1.10.6
make
sudo make install

To add this installation to the system paths, open your (hidden) user .bashrc file (/home/<user>/.bashrc) and add the following (text editor):

export PATH=/usr/local/hdf5-1.10.6/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/hdf5-1.10.6/lib:$LD_LIBRARY_PATH

To use these paths immediately, enter (Terminal):

source ~/.bashrc

Test with (Terminal):

h5cc -showconfig
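If h5cc is not found after sourcing .bashrc, a quick check shows whether the export lines took effect (a minimal sketch; the grep pattern assumes the hdf5 paths added above):

```shell
# List PATH entries that point to the hdf5 installation; print a hint
# instead of failing silently when the export lines were not loaded
echo "$PATH" | tr ':' '\n' | grep hdf5 || echo "hdf5 not on PATH"
```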

Untested workflow!

We could not yet verify this installation step, so it might be incompatible with Telemac.

Install the following packages, binding them to v1.10.6 (Terminal):

sudo apt install libhdf5-dev=1.10.6 hdf5-tools=1.10.6

MED FILE LIBRARY: The med file library is provided by salome-platform.org. Recently, an option to install MED through apt has become available, although we could not yet test how it works with Telemac (see the note below).

For the manual installation, use the file med-4.1.1.tar.gz to ensure compatibility with hdf5; other med file library versions will not work properly with hdf5. Moreover, the med file library requires that zlib is installed. To install zlib, open Terminal and type:

apt-cache search zlib | grep -i zlib
sudo apt install zlib1g zlib1g-dev

The following command block creates and switches to a temp folder in TELEMAC’s optionals directory, downloads, and unzips the med-4.1.1 archive (run in Terminal as normal user - not as root):

mkdir -p ~/telemac/optionals/temp
cd ~/telemac/optionals/temp
wget --referer='https://www.salome-platform.org/?page_id=2768' 'https://files.salome-platform.org/Salome/medfile/med-4.1.1.tar.gz'
gunzip med-4.1.1.tar.gz
tar -xvf med-4.1.1.tar
cd med-4.1.1

To compile the med file library type:

./configure --prefix=/home/USERNAME/telemac/optionals/med-4.1.1 --disable-python
make
make install

The flag --prefix defines where the med library will be installed.

The installation of the med file library on Linux is also documented in the opentelemac wiki.

Untested workflow!

We could not yet verify this installation step, so it might be incompatible with Telemac.

The following packages are available through apt and can potentially work with Telemac:

  • libmed1v5: MED file library runtime

  • libmed-dev: development files for the MED library

  • libmedc1v5 and libmedc-dev: C bindings for the MED library

To install them, enter (Terminal):

sudo apt install libmed1v5 libmed-dev libmedc1v5 libmedc-dev

If created, remove the temp folder to avoid storing garbage:

cd ~/telemac/optionals
sudo rm -r temp

Compile TELEMAC#

Adapt and Verify Configuration File (systel.x.cfg)#

Estimated duration: 2-20 minutes.

Two options are described in this section for setting up a configuration file: (i) a modification (reduced module availability) of the default-available ~/telemac/configs/systel.edf.cfg configuration file, and (ii) extended descriptions for setting up a custom configuration file. Option (i) provides a powerful HPC environment, but excludes the AED2 (waqtel), MUMPS, and GOTM (General Ocean Turbulence Model) modules.

To facilitate setting up the systel file, use our edf-based template, which was tested on Debian 10, Debian 11, and Linux Mint 21.3: right-click on this download of systel.edfHy.cfg > Save Link As… > ~/telemac/configs/systel.edfHy.cfg.

The systel.edfHy.cfg file is designed to be used with the S10.gfortran.dyn configuration, from which we removed all dependencies on AED2, MUMPS, and GOTM. That is, none of the flags [flags_mumps], [flags_aed], and [flags_gotm] is enabled, and they were removed from the S10.gfortran.dyn configuration, which is fully sufficient for running the Telemac tutorials in this eBook.

How to add AED2, MUMPS, and GOTM

To add AED2, MUMPS, and GOTM functionality, install the corresponding modules in the optionals/ directory and use the default systel.edf.cfg configuration file. The installation of AED2, MUMPS, and GOTM is described in the Telemac installation wiki, though it is not straightforward because multiple links and additional dependencies are outdated.

Python API Not Set Up

The following descriptions for setting up a custom systel.X.cfg configuration file do not enable Telemac’s Python API. For enabling the Python API, follow the template-based installation instructions, or use systel.edf.cfg.

The configuration file will tell the compiler how flags are defined and where optional software lives. Here, we use the configuration file systel.cis-debian.cfg, which lives in ~/telemac/configs/. In particular, we are interested in the following section of the file:

# _____                          ___________________________________
# ____/ Debian gfortran openMPI /__________________________________/
[debgfopenmpi]
#
par_cmdexec:   <config>/partel < partel.par >> <partel.log>
#
mpi_cmdexec:   /usr/bin/mpiexec -wdir <wdir> -n <ncsize> <exename>
mpi_hosts:
#
cmd_obj:    /usr/bin/mpif90 -c -O3 -DHAVE_MPI -fconvert=big-endian -frecord-marker=4 <mods> <incs> <f95name>
cmd_lib:    ar cru <libname> <objs>
cmd_exe:    /usr/bin/mpif90 -fconvert=big-endian -frecord-marker=4 -lpthread -v -lm -o <exename> <objs> <libs>
#
mods_all:   -I <config>
#
libs_all:   /usr/lib/x86_64-linux-gnu/openmpi/lib/libmpi.so /home/telemac/metis/build/lib/libmetis.a

The configuration file contains other configurations such as a scalar or a debug configuration for compiling TELEMAC. Here, we only use the Debian gfortran open MPI section that has the configuration name [debgfopenmpi]. To verify if this section is correctly defined, check where the following libraries live on your system (use Terminal with the cd and ls commands, or Debian’s file browser):

  • Open MPI’s include folder is typically located in /usr/lib/x86_64-linux-gnu/openmpi/include

  • The Open MPI library typically lives in /usr/lib/x86_64-linux-gnu/openmpi/libmpi.so
    The number 40.20.3 may need to be appended after libmpi.so (i.e., libmpi.so.40.20.3) when your system is based on Debian 10.

  • mpiexec is typically installed in /usr/bin/mpiexec

  • mpif90 is typically installed in /usr/bin/mpif90

  • If installed, AED2 typically lives in ~/telemac/optionals/aed2/, which should contain the file libaed2.a (among others) and the folders include, obj, and src.

Then open the configuration file in VIM (or any other text editor) to verify and adapt the Debian gfortran open MPI section:

cd ~/telemac/configs
vim systel.edfHy.cfg

Enable Parallelism

Make the following adaptations in Debian gfortran open MPI section to enable parallelism:

  • Remove par_cmdexec from the configuration file; that means delete the line (otherwise, parallel processing will crash with a message that says cannot find PARTEL.PAR):
    par_cmdexec:   <config>/partel < PARTEL.PAR >> <partel.log>

  • Find libs_all to add and adapt the following items:

    • metis (all metis-related directories to /home/USERNAME/telemac/optionals/metis/build/lib/libmetis.a).

    • openmpi (correct the library file to /usr/lib/x86_64-linux-gnu/openmpi/libmpi.so or wherever libmpi.so.xx.xx.x lives on your machine).

    • aed2 (~/telemac/optionals/aed2/libaed2.a).

libs_all:    /usr/lib/x86_64-linux-gnu/openmpi/lib/libmpi.so /home/USERNAME/telemac/optionals/metis/build/lib/libmetis.a /home/USERNAME/telemac/optionals/aed2/libaed2.a

  • Add the incs_all variable to point to the openmpi and aed2 include directories:

incs_all: -I /usr/lib/x86_64-linux-gnu/openmpi/include -I /home/USERNAME/telemac/optionals/aed2 -I /home/USERNAME/telemac/optionals/aed2/include

  • Search for cmd_obj: definitions, and add -cpp in front of the -c flags as well as -DHAVE_AED2. For example:

cmd_obj:    /usr/bin/mpif90 -cpp -c -O3 -DHAVE_AED2 -DHAVE_MPI -fconvert=big-endian -frecord-marker=4 <mods> <incs> <f95name>

An additional keyword in the configurations is options: that accepts multiple keywords including mpi, api (TelApy - TELEMAC’s Python API), hpc, and dyn or static. The provided cfg file primarily uses the mpi keyword. To use other installation options (e.g., HPC or dynamic), read the instructions for HPC installation on opentelemac.org and have a look at the most advanced default config file from EDF (~/telemac/configs/systel.edf.cfg).

Setup Python Source File#

Estimated duration: 4-20 minutes.

A Python source file lives in ~/telemac/configs, where a template called pysource.template.sh is also available. This section guides through either using our pysource.gfortranHPC.sh (without AED2 and MUMPS), or a custom source file.

Our template pysource.gfortranHPC.sh is designed for use with the above-described systel.edfHy.cfg configuration file and is based on the default-provided pysource.template.sh. Either download pysource.gfortranHPC.sh > Save Link As… > ~/telemac/configs/pysource.gfortranHPC.sh, or create a new pysource file with the following contents:

### TELEMAC settings -----------------------------------------------------------
###
# Path to telemac root dir
export HOMETEL=/home/USER-NAME/telemac/v8p5r0
# Adding python scripts to PATH
export PATH=$HOMETEL/scripts/python3:.:$PATH
# Configuration file
export SYSTELCFG=$HOMETEL/configs/systel.edfHy.cfg
# Name of the configuration to use
export USETELCFG=S10.gfortran.dyn
# Path to this file
export SOURCEFILE=$HOMETEL/configs/pysource.gfortranHPC.sh
### Python
# To force python to flush its output
export PYTHONUNBUFFERED='true'
### API
export PYTHONPATH=$HOMETEL/scripts/python3:$PYTHONPATH
export LD_LIBRARY_PATH=$HOMETEL/builds/$USETELCFG/lib:$HOMETEL/builds/$USETELCFG/wrap_api/lib:$LD_LIBRARY_PATH
export PYTHONPATH=$HOMETEL/builds/$USETELCFG/wrap_api/lib:$PYTHONPATH
###
### EXTERNAL LIBRARIES -----------------------------------------------------------
###
### METIS ----------------------------
###
### COMPILERS -----------------------------------------------------------
###
# Here are a few examples for external libraries
export SYSTEL=$HOMETEL/optionals

### MPI -----------------------------------------------------------
export MPIHOME=/usr/bin/mpifort.openmpi
export PATH=/usr/lib/x86_64-linux-gnu/openmpi:$PATH
export LD_LIBRARY_PATH=$PATH/lib:$LD_LIBRARY_PATH
###
### EXTERNAL LIBRARIES -----------------------------------------------------------
###
### MUMPS -------------------------------------------------------------
#export MUMPSHOME=$SYSTEL/LIBRARY/mumps/gnu
#export SCALAPACKHOME=$SYSTEL/LIBRARY/scalapack/gnu
#export BLACSHOME=$SYSTEL/LIBRARY/blacs/gnu
### METIS -------------------------------------------------------------
export METISHOME=$SYSTEL/metis/build/
export LD_LIBRARY_PATH=$METISHOME/lib:$LD_LIBRARY_PATH

Make sure to adapt the variable HOMETEL=/home/USER-NAME/telemac/v8p5r0 to your user name and install directory.

AED2, MUMPS, and GOTM deactivated

AED2 (waqtel), MUMPS, and GOTM (general ocean) are deactivated in our template. To activate them, uncomment the MUMPS variables (i.e., remove the leading #), add AED2 and GOTM variables, and use the systel.edf.cfg configuration file.

Here, we use the template to create our own Python source file called pysource.gfortranHPC.sh tailored for compiling the parallel version of TELEMAC on Debian Linux with the Open MPI library. The Python source file starts with the definition of the following variables:

  • HOMETEL: The path to the telemac/VERSION folder (<root>).

  • SYSTELCFG: The path to the above-modified configuration file (systel.edfHy.cfg) relative to HOMETEL.

  • USETELCFG: The name of the configuration to be used (debgfopenmpi). Configurations enabled are defined in the systel.*.cfg file, in the brackets ([debgfopenmpi]) directly below the header of every configuration section.

  • SOURCEFILE: The path to this file and its name relative to HOMETEL.

More definitions are required to define TELEMAC’s Application Programming Interface (API), the (parallel) compilers to build TELEMAC with Open MPI, and external libraries located in the optionals folder. The following code block shows how the Python source file pysource.gfortranHPC.sh should look. Make sure to verify every directory on your local file system, use your USERNAME, and take your time to get all directories right, without typos (critical task).

### TELEMAC settings -----------------------------------------------
###
# Path to Telemac s root dir
export HOMETEL=/home/USERNAME/telemac-mascaret
# Add Python scripts to PATH
export PATH=$HOMETEL/scripts/python3:.:$PATH
# Configuration file
export SYSTELCFG=$HOMETEL/configs/systel.edfHy.cfg
# Name of the configuration to use
export USETELCFG=debgfopenmpi
# Path to this Python source file
export SOURCEFILE=$HOMETEL/configs/pysource.openmpi.sh
# Force python to flush its output
export PYTHONUNBUFFERED='true'
### API
export PYTHONPATH=$HOMETEL/scripts/python3:$PYTHONPATH
export LD_LIBRARY_PATH=$HOMETEL/builds/$USETELCFG/wrap_api/lib:$LD_LIBRARY_PATH
export PYTHONPATH=$HOMETEL/builds/$USETELCFG/wrap_api/lib:$PYTHONPATH
###
### COMPILERS -----------------------------------------------------
export SYSTEL=$HOMETEL/optionals
### MPI -----------------------------------------------------------
export MPIHOME=/usr/bin/mpifort.openmpi
export PATH=/usr/lib/x86_64-linux-gnu/openmpi:$PATH
export LD_LIBRARY_PATH=$PATH/lib:$LD_LIBRARY_PATH
###
### EXTERNAL LIBRARIES ---------------------------------------------
###
### METIS ----------------------------------------------------------
export METISHOME=$SYSTEL/metis/build/
export LD_LIBRARY_PATH=$METISHOME/lib:$LD_LIBRARY_PATH
### AED ------------------------------------------------------------
export AEDHOME=$SYSTEL/aed2
export LD_LIBRARY_PATH=$AEDHOME/obj:$LD_LIBRARY_PATH

Compile#

Estimated duration: 20-30 minutes (compiling takes time).

The compiler is called through Python and the above-created bash script (pysource.gfortranHPC.sh or pysource.openmpi.sh). Thus, the Python source file knows where helper programs and libraries are located, as well as which configuration to use. With the Python source file, compiling TELEMAC becomes an easy task in Terminal. First, load the Python source file pysource.gfortranHPC.sh as source in Terminal, and then test if it is correctly configured by running config.py:

cd ~/telemac/configs
source pysource.gfortranHPC.sh
config.py

Running config.py should produce a character-based image in Terminal and end with My work is done. If error messages occur instead, read them attentively to identify the issue (e.g., a typo in a directory or file name, or a misplaced character somewhere in pysource.gfortranHPC.sh or systel.edfHy.cfg). Once config.py runs successfully, start compiling TELEMAC with the --clean flag to avoid any interference with earlier installations:

compile_telemac.py --clean

The compilation should run for a while (can take more than 30 minutes) and successfully end with the phrase My work is done.

Troubleshoot errors in the compiling process

If an error occurred in the compiling process, trace back the error messages and identify the component that did not work. Very thoroughly revise the setup of the concerned component in this workflow. Do not try to re-invent the wheel - the most likely problem is a tiny detail in the files that you created on your own. Troubleshooting may be a tough task, in particular because you need to question your own work.

Test TELEMAC#

Estimated duration: 5-10 minutes.

After closing the Terminal or rebooting the system, the TELEMAC source environment must be reloaded in Terminal before running TELEMAC:

cd ~/telemac/configs
source pysource.gfortranHPC.sh
config.py

To run and test if TELEMAC works, use a pre-defined case from the provided examples folder:

cd ~/telemac/examples/telemac2d/gouttedo
telemac2d.py t2d_gouttedo.cas

To test if parallelism works, install htop to visualize CPU usage:

sudo apt update
sudo apt install htop

Start htop’s CPU monitor with:

htop

In a new Terminal tab run the above TELEMAC example with the flag --ncsize=N (NCSIZE), where N is the number of processors (CPUs) to use for parallel computation (make sure that N CPUs are also available on your machine):

cd ~/telemac/examples/telemac2d/gouttedo
telemac2d.py t2d_gouttedo.cas --ncsize=4

Alternatively, the --nctile and --ncnode flags can be used to define the number of cores per node (NCTILE) and the number of nodes (NCNODE), respectively. The relationship between these flags is NCSIZE = NCTILE * NCNODE. Thus, the following two lines yield the same result (run in ~/telemac/examples/telemac2d/donau):

telemac2d.py t2d_donau.cas --nctile=4 --ncnode=2
telemac2d.py t2d_donau.cas --ncsize=8

When the computation is running, observe the CPU charge. If the CPUs are all working with different percentages, the parallel version is working well.

TELEMAC should start up, run the example case, and again end with the phrase My work is done. To assess the efficiency of the number of CPUs used, vary ncsize. For instance, the donau example (~/telemac/examples/telemac2d/donau) run with telemac2d.py t2d_donau.cas --ncsize=4 may take approximately 1.5 minutes, while telemac2d.py t2d_donau.cas --ncsize=2 (i.e., half the number of CPUs) takes approximately 2.5 minutes. The computing time may differ depending on your hardware, but note that doubling the number of CPUs does not cut the calculation time in half. Thus, to optimize system resources, it can be reasonable to start several simulation cases on fewer cores each rather than one simulation on many cores.

Generate Telemac Docs#

TELEMAC comes with many application examples in the subdirectory ~/telemac/examples/ and the documentation plus reference manuals can be generated locally. To this end, make sure to source the TELEMAC environment:

source ~/telemac/configs/pysource.gfortranHPC.sh

To generate the user manual type (takes a while):

doc_telemac.py

To generate the reference manual type:

doc_telemac.py --reference

To create the documentation of all example cases use:

validate_telemac.py

Note

The validate_telemac.py script essentially runs through all examples, but some of them are broken and will cause the script to crash. This may also happen if not all modules are installed (e.g., if Hermes is missing).

Utilities (Pre- & Post-processing)#

More Pre- and Post-processing Software

More software for dealing with Telemac pre- and post-processing is available in the form of SALOME and ParaView.

QGIS (Linux and Windows)#

Estimated duration: 5-10 minutes (depends on connection speed).

QGIS is a powerful tool for viewing, creating, and editing geospatial data, which is useful for pre- and post-processing. Detailed installation guidelines are provided in the QGIS installation instructions and the QGIS tutorial in this eBook. The Q4TS plugin enables pre- and post-processing of files for running simulations with TELEMAC, and it can also be linked with SALOME for running TELEMAC directly with a GUI.

To get the Q4TS plugin, follow the developer’s instructions at https://gitlab.pam-retd.fr/otm/q4ts:

  • In QGIS, open the Plugin Manager (Plugins > Manage and Install Plugins…).

  • Go to Settings > Add… and enter https://otm.gitlab-pages.pam-retd.fr/q4ts/plugins.xml in the URL field. Enter a Name (e.g., q4ts), and leave all other fields as they are. Click OK.

  • Click on Reload all Repositories.

  • Go to the All tab, enter Q4TS and install the plugin.

After the installation, Q4TS enables MED to SLF conversion (and vice versa), mesh refinements, boundary creation, friction table editing, and many more options (in the QGIS Toolbox).

BlueKenue (Windows or Linux+Wine)#

Estimated duration: 10 minutes.

BlueKenueTM is a pre- and post-processing software provided by the National Research Council Canada, which is compatible with TELEMAC. It provides similar functions as the Fudaa software featured by the TELEMAC developers and additionally comes with a powerful mesh generator. It is particularly because of the mesh generator that you will want to install BlueKenueTM after downloading the latest version (login details in the Telemac Forum). There are two options for installing BlueKenueTM, depending on your platform:

  1. On Windows: directly use the BlueKenue (.msi) installer.

  2. On Linux: use Wine amd64 through PlayOnLinux to install BlueKenueTM on Linux. For Ubuntu (Debian)-based Linux, the PlayOnLinux section in this eBook provides detailed instructions. Direct installation of BlueKenue through Wine alone is discouraged because of severe compatibility issues.

Note that the typical installation directories of the BlueKenueTM executable are:

  • 32-bit version is typically installed in "C:\\Program Files (x86)\\CHC\\BlueKenue\\BlueKenue.exe"

  • 64-bit version is typically installed in "C:\\Program Files\\CHC\\BlueKenue\\BlueKenue.exe"

Additionally, the Canadian Hydrological Model Stewardship (CHyMS) provides more guidance for installing BlueKenueTM on other platforms than Windows on their FAQ page in the troubleshooting section (direct link to how to run Blue Kenue on another operating system).

Fudaa-PrePro (Linux and Windows)#

Estimated duration: 5-15 minutes (upper time limit if java needs to be installed).

Get ready with the pre- and post-processing software Fudaa-PrePro:

  • Install java:

    • On Linux: sudo apt install default-jdk

    • On Windows: Get java from java.com

  • Download the latest version from the Fudaa-PrePro repository

  • Un-zip the downloaded file and proceed depending on the platform you are working with (see below)

  • cd to the directory where you un-zipped the Fudaa-PrePro program files

  • Start Fudaa-PrePro from Terminal or Prompt

    • On Linux: type sh supervisor.sh

    • On Windows: type supervisor.bat

There might be an error message such as:

Error: Could not find or load main class org.fudaa.fudaa.tr.TrSupervisor

In this case, open supervisor.sh in a text editor and correct $PWD Fudaa to $(pwd)/Fudaa. In addition, you can edit the default random-access memory (RAM) allocation in the supervisor.sh (or .bat) file. Fudaa-PrePro starts with a default RAM allocation of 6 GB, which might be too small for grid files with more than 3·10⁶ nodes, or too large if your system’s RAM is small. To adapt the RAM allocation and/or fix the above error message, open supervisor.sh (or, on Windows, supervisor.bat) and find the tag -Xmx6144m, where 6144 defines the RAM allocation in MB. Modify this value to an even-number multiple of 512; for example, set it to 4·512=2048 and correct $PWD Fudaa to $(pwd)/Fudaa:

#!/bin/bash
cd `dirname $0`
java -Xmx2048m -Xms512m -cp "$(pwd)/Fudaa-Prepro-1.4.2-SNAPSHOT.jar" org.fudaa.fudaa.tr.TrSupervisor $1 $2 $3 $4 $5 $6 $7 $8 $9
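Both fixes can also be applied non-interactively with sed; the patterns assume the -Xmx6144m tag and the $PWD Fudaa classpath fragment described above, and the guard skips the commands if supervisor.sh is not in the current directory:

```shell
# Shrink the Java heap from 6144 MB to 2048 MB and fix the broken classpath
if [ -f supervisor.sh ]; then
  sed -i 's/-Xmx6144m/-Xmx2048m/' supervisor.sh
  sed -i 's|\$PWD Fudaa|$(pwd)/Fudaa|' supervisor.sh
fi
```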