TELEMAC (Installation)

Requirements

This tutorial guides you through the installation of open TELEMAC-MASCARET on Debian Linux.

Allow approximately 1-2 hours for installing TELEMAC and make sure to have a stable internet connection (>1.4 GB of downloads).

Preface

This page only guides you through the installation of TELEMAC. A tutorial for running hydro(-morpho)dynamic models with TELEMAC is currently under construction for this eBook.

Good to Know

  • Installing TELEMAC on a Virtual Machine (VM) is useful for getting started with TELEMAC and its sample cases, but not recommended for running a real-world numerical model (limited performance of VMs).

  • Familiarize yourself with the Linux Terminal to understand the underpinnings of compiling TELEMAC.

  • This tutorial refers to the software package open TELEMAC-MASCARET as TELEMAC because MASCARET is a one-dimensional (1d) model and the numerical simulation schemes in this eBook focus on two-dimensional (2d) and three-dimensional (3d) modelling.

Mint Hyfo VM users

If you are working with the Mint Hyfo Virtual Machine, skip the installation instructions on this page because TELEMAC is already preinstalled and you are good to go for completing the TELEMAC tutorials.

Load the TELEMAC environment and check if it works with:

cd ~/telemac/v8p3/configs
source pysource.hyfo.sh
config.py

TELEMAC is also available through the SALOME-HYDRO software suite, which is a spinoff of SALOME. However, the principal functionalities of SALOME-HYDRO will migrate to a new QGIS plugin. Therefore, this eBook recommends installing TELEMAC independently from any pre- or post-processing software.

TELEMAC Installation Workflow

TELEMAC Docker image

The Austrian engineering office Flussplan provides a Docker container of TELEMAC v8 in their docker-telemac GitHub repository. Note that a Docker container represents an easy-to-install virtual environment that leverages cross-platform compatibility, but affects computational performance. If you have the proprietary Docker software installed and computational performance is not the primary concern for your models, Flussplan’s Docker container might be a good choice. For instance, purely hydrodynamic models with a small number of grid nodes and without additional TELEMAC modules will run efficiently in the Docker container.
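
If you opt for the Docker route, the workflow is roughly the following. This is only a sketch with an arbitrary local image tag (telemac:v8); follow the README of the docker-telemac repository for the authoritative build and run instructions:

# assumes a local clone of Flussplan's docker-telemac repository (see GitHub)
cd docker-telemac
# build the image from the provided Dockerfile under an arbitrary local tag
docker build -t telemac:v8 .
# start an interactive container for test runs (removed again on exit)
docker run -it --rm telemac:v8 bash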

Prerequisites

Working with TELEMAC requires some software for downloading source files, compiling, and running the program. The mandatory software prerequisites for installing TELEMAC on Debian Linux are:

  • Python 3.7 (or more recent) with NumPy >= 1.8

  • GNU Fortran 95 compiler (gfortran)

Admin (sudo) rights required

Superuser (sudo, short for superuser do) rights are required for many actions described in this workflow. Read more about how to set up and grant sudo rights for a user account on Debian Linux in the tutorial for setting up Debian Linux.
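
For example, on a fresh Debian system where your account is not yet a sudoer, the following commands (run as root) typically grant sudo rights; USER-NAME is a placeholder for your account name:

su -                          # switch to the root account
apt install sudo              # only needed if sudo is not installed yet
usermod -aG sudo USER-NAME    # add your user to the sudo group
exit                          # leave the root shell; log out and back in to apply the group change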

Python3

Estimated duration: 5-8 minutes.

The high-level programming language Python3 is pre-installed on Debian Linux 10.x and is needed to launch TELEMAC's compiler script. To launch Python3, open Terminal and type python3. To exit Python, type exit().

TELEMAC requires the NumPy Python library, which is installed here together with SciPy and matplotlib.

To install NumPy libraries, open Terminal and type (hit Enter after every line):

sudo apt install python3-numpy python3-scipy python3-matplotlib python3-distutils python3-dev python3-pip

To test if the installation was successful, type python3 in Terminal and import the three libraries:

Python 3.8.2 (default, Jul 25 2020, 13:03:44) [GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy, scipy, matplotlib
>>> a = numpy.array((1, 1))
>>> print(a)
[1 1]
>>> exit()

None of the three library imports should return an ImportError message. To learn more about Python, read the section on Packages, Modules and Libraries.
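
The same check can also be run non-interactively. For example, the following one-liner imports all three libraries and prints the installed NumPy version (which should be >= 1.8 as stated above):

python3 -c "import numpy, scipy, matplotlib; print(numpy.__version__)"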

GIT

Estimated duration: Less than 5 minutes.

The installation and usage of the git version control system are extensively described in the git section of this eBook. In addition to the git functionalities described there, the following packages are needed to manage the large files that come along with TELEMAC:

sudo apt install git-all git-lfs
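
To verify the installation (and, depending on the package version, initialize Git LFS once for your user account), the following commands can be used; git lfs install is harmless to re-run if LFS is already set up:

git --version
git lfs install    # set up Git LFS hooks for your user account
git lfs version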

GNU Fortran 95 Compiler (gfortran)

Estimated duration: 3-10 minutes.

The Fortran 95 compiler (gfortran) is needed to compile TELEMAC through a Python3 script. Debian Linux retrieves gfortran from its standard package repositories. Thus, to install the Fortran 95 compiler, open Terminal and type:

sudo apt install gfortran
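
To verify that the compiler is available on the PATH, print its version:

gfortran --version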

Compilers and Other Essentials

Estimated duration: 2-5 minutes.

To enable parallelism, a C compiler and cmake are required. Moreover, we will need build-essential for building packages and dialog for a comfortable dialogue environment. VIM is a text editor that we will use for editing bash files. Therefore, open Terminal and type (superuser rights are required):

sudo apt install -y cmake build-essential dialog vim
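
A quick check that the build tools are in place (the reported versions will differ depending on your Debian release):

cmake --version
gcc --version
dpkg -s build-essential | grep Status    # should report: install ok installed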

Download TELEMAC

Estimated duration: 25-40 minutes (large downloads).

Before getting more packages to enable parallelism and compiling, download the latest version of TELEMAC with git, into which additional packages will later be embedded. To download (i.e., git clone) TELEMAC, open Terminal in the /home/USERNAME/ directory (either type cd ~ or use the File browser to navigate to your home directory and right-click in the empty space to open Terminal).

The following installation instructions assume that you are installing TELEMAC in the directory /home/USERNAME/telemac/, which can be created in Terminal with the command mkdir /home/USERNAME/telemac/ (corresponds to mkdir ~/telemac/). Do not forget to change into that folder (e.g., cd ~/telemac). You may also choose another directory and adapt the installation directories accordingly. Now, to download TELEMAC into this directory, type (enter no when asked for password encryption):

git clone https://gitlab.pam-retd.fr/otm/telemac-mascaret.git v8p3

This downloads TELEMAC v8p3 into the directory /home/USERNAME/telemac/v8p3.

After downloading (cloning) the TELEMAC repository, switch to (check out) the latest version:

cd v8p3
git checkout tags/v8p3r0
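
If the checkout fails because the tag name differs (for example, when a newer release has been published), list the available release tags first and pick the one you want:

git tag --list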

Compile TELEMAC

Adapt and Verify Configuration File (systel.*.cfg)

Estimated duration: 15-20 minutes.

Facilitate compiling with our templates

To facilitate setting up the systel file, use the template provided with this eBook (AED2 is not enabled by default).

The configuration file will tell the compiler how flags are defined and where optional software lives. Here, we use the configuration file systel.cis-debian.cfg, which lives in ~/telemac/v8p3/configs/. In particular, we are interested in the following section of the file:

# _____                          ___________________________________
# ____/ Debian gfortran openMPI /__________________________________/
[debgfopenmpi]
#
par_cmdexec:   <config>/partel < partel.par >> <partel.log>
#
mpi_cmdexec:   /usr/bin/mpiexec -wdir <wdir> -n <ncsize> <exename>
mpi_hosts:
#
cmd_obj:    /usr/bin/mpif90 -c -O3 -DHAVE_MPI -fconvert=big-endian -frecord-marker=4 <mods> <incs> <f95name>
cmd_lib:    ar cru <libname> <objs>
cmd_exe:    /usr/bin/mpif90 -fconvert=big-endian -frecord-marker=4 -lpthread -v -lm -o <exename> <objs> <libs>
#
mods_all:   -I <config>
#
libs_all:   /usr/lib/x86_64-linux-gnu/openmpi/lib/libmpi.so.40.20.3 /home/telemac/metis-5.1.0/build/lib/libmetis.a

The configuration file contains other configurations such as a scalar or a debug configuration for compiling TELEMAC. Here, we only use the Debian gfortran open MPI section that has the configuration name [debgfopenmpi]. To verify that this section is correctly defined, check where the following libraries live on your system (use Terminal and cd + ls commands or Debian’s File browser):

  • Metis is typically located in ~/telemac/v8p3/optionals/metis-5.1.0/build (if you used this directory for <install_path>), where libmetis.a typically lives in ~/telemac/v8p3/optionals/metis-5.1.0/build/lib/libmetis.a

  • Open MPI’s include folder is typically located in /usr/lib/x86_64-linux-gnu/openmpi/include

  • Open MPI’s library typically lives in /usr/lib/x86_64-linux-gnu/openmpi/lib/libmpi.so.40.20.3
    The number 40.20.3 may differ depending on the operating system version. Make sure to adapt the number after libmpi.so accordingly.

  • mpiexec is typically installed in /usr/bin/mpiexec

  • mpif90 is typically installed in /usr/bin/mpif90

  • If installed, AED2 typically lives in ~/telemac/v8p3/optionals/aed2/, which should contain the file libaed2.a (among others) and the folders include, obj, and src.
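
A quick way to check these locations from Terminal is to list the candidate files directly; the paths below follow the defaults listed above and may need adapting to your system:

ls /usr/lib/x86_64-linux-gnu/openmpi/lib/libmpi.so*
ls ~/telemac/v8p3/optionals/metis-5.1.0/build/lib/libmetis.a
ls ~/telemac/v8p3/optionals/aed2/libaed2.a
which mpiexec mpif90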

Then open the configuration file in VIM (or any other text editor) to verify and adapt the Debian gfortran open MPI section:

cd ~/telemac/v8p3/configs
vim systel.cis-debian.cfg

Make the following adaptations in Debian gfortran open MPI section to enable parallelism:

  • Remove par_cmdexec from the configuration file; that means delete the line (otherwise, parallel processing will crash with a message that says cannot find PARTEL.PAR):
    par_cmdexec:   <config>/partel < PARTEL.PAR >> <partel.log>

  • Find libs_all to add and adapt:

    • metis (point all metis-related entries to /home/USER-NAME/telemac/v8p3/optionals/metis-5.1.0/build/lib/libmetis.a).

    • openmpi (correct the library file to /usr/lib/x86_64-linux-gnu/openmpi/lib/libmpi.so.40.20.3 or wherever libmpi.so.xx.xx.x lives on your machine).

    • med including hdf5 (~/telemac/v8p3/optionals/).

    • aed2 (~/telemac/v8p3/optionals/aed2/libaed2.a).

libs_all:    /usr/lib/x86_64-linux-gnu/openmpi/lib/libmpi.so.40.20.3 /home/USER-NAME/telemac/v8p3/optionals/metis-5.1.0/build/lib/libmetis.a /home/USER-NAME/telemac/v8p3/optionals/aed2/libaed2.a /home/USER-NAME/telemac/v8p3/optionals/med-3.2.0/lib/libmed.so /home/USER-NAME/telemac/v8p3/optionals/hdf5/lib/libhdf5.so
  • Add the incs_all variable to point to the include directories of openmpi, med, and aed2:

incs_all: -I /usr/lib/x86_64-linux-gnu/openmpi/include -I /home/USER-NAME/telemac/v8p3/optionals/aed2 -I /home/USER-NAME/telemac/v8p3/optionals/aed2/include  -I /home/USER-NAME/telemac/v8p3/optionals/med-3.2.0/include
  • Double-check the openmpi entry in libs_all, then search for the cmd_obj: definitions and add -cpp in front of the -c flag, as well as the -DHAVE_AED2 and -DHAVE_MED flags. For example:

cmd_obj:    /usr/bin/mpif90 -cpp -c -O3 -DHAVE_AED2 -DHAVE_MPI -DHAVE_MED -fconvert=big-endian -frecord-marker=4 <mods> <incs> <f95name>

An additional keyword in the configurations is options:, which accepts multiple values including mpi, api (TelApy, TELEMAC’s Python API), hpc, and dyn or static. The provided cfg file primarily uses the mpi keyword. To use other installation options (e.g., HPC or dynamic), read the instructions for HPC installation on opentelemac.org and have a look at the most advanced default config file from EDF (~/telemac/v8p3/configs/systel.edf.cfg).

Setup Python Source File

Estimated duration: 15-20 minutes.

Facilitate setting up the pysource with our templates

To facilitate setting up the pysource file, use the template provided with this eBook.

The Python source file lives in ~/telemac/v8p3/configs, where there is also a template available called pysource.template.sh. Here, we will use the template to create our own Python source file called pysource.openmpi.sh tailored for compiling the parallel version of TELEMAC on Debian Linux with the Open MPI library. The Python source file starts with the definition of the following variables:

  • HOMETEL: The path to the telemac/VERSION folder (<root>).

  • SYSTELCFG: The path to the above-modified configuration file (systel.cis-debian.cfg) relative to HOMETEL.

  • USETELCFG: The name of the configuration to be used (debgfopenmpi). Configurations enabled are defined in the systel.*.cfg file, in the brackets ([debgfopenmpi]) directly below the header of every configuration section.

  • SOURCEFILE: The path to this file and its name relative to HOMETEL.

More definitions are required for TELEMAC’s Application Programming Interface (API), the (parallel) compilers to build TELEMAC with Open MPI, and the external libraries located in the optionals folder. The following code block shows how the Python source file pysource.openmpi.sh should look. Make sure to verify every directory on your local file system, use your USER-NAME, and take your time to get all directories right, without typos (this is a critical task).

### TELEMAC settings -----------------------------------------------
###
# Path to TELEMAC's root dir
export HOMETEL=/home/USER-NAME/telemac/v8p3
# Add Python scripts to PATH
export PATH=$HOMETEL/scripts/python3:.:$PATH
# Configuration file
export SYSTELCFG=$HOMETEL/configs/systel.cis-debian.cfg
# Name of the configuration to use
export USETELCFG=debgfopenmpi
# Path to this Python source file
export SOURCEFILE=$HOMETEL/configs/pysource.openmpi.sh
# Force python to flush its output
export PYTHONUNBUFFERED='true'
### API
export PYTHONPATH=$HOMETEL/scripts/python3:$PYTHONPATH
export LD_LIBRARY_PATH=$HOMETEL/builds/$USETELCFG/wrap_api/lib:$LD_LIBRARY_PATH
export PYTHONPATH=$HOMETEL/builds/$USETELCFG/wrap_api/lib:$PYTHONPATH
###
### COMPILERS -----------------------------------------------------
export SYSTEL=$HOMETEL/optionals
### MPI -----------------------------------------------------------
export MPIHOME=/usr/bin/mpifort.mpich
export PATH=/usr/lib/x86_64-linux-gnu/openmpi:$PATH
export LD_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu/openmpi/lib:$LD_LIBRARY_PATH
###
### EXTERNAL LIBRARIES ---------------------------------------------
### HDF5 -----------------------------------------------------------
export HDF5HOME=$SYSTEL/hdf5
export LD_LIBRARY_PATH=$HDF5HOME/lib:$LD_LIBRARY_PATH
### MED  -----------------------------------------------------------
export MEDHOME=$SYSTEL/med-3.2.0
export LD_LIBRARY_PATH=$MEDHOME/lib:$LD_LIBRARY_PATH
export PATH=$MEDHOME/bin:$PATH
export LD_RUN_PATH=$HDF5HOME/lib:$MEDHOME/lib:$LD_RUN_PATH
### METIS ----------------------------------------------------------
export METISHOME=$SYSTEL/metis-5.1.0/build/
export LD_LIBRARY_PATH=$METISHOME/lib:$LD_LIBRARY_PATH
### AED ------------------------------------------------------------
export AEDHOME=$SYSTEL/aed2
export LD_LIBRARY_PATH=$AEDHOME/obj:$LD_LIBRARY_PATH
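
Before compiling, a simple sanity check helps to spot typos in the exported paths: source the new file and print or list a few of the variables it defines:

source ~/telemac/v8p3/configs/pysource.openmpi.sh
echo "HOMETEL:   $HOMETEL"
echo "SYSTELCFG: $SYSTELCFG"
echo "USETELCFG: $USETELCFG"
ls $METISHOME/lib $MEDHOME/lib $HDF5HOME/lib    # these directories should exist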

Compile

Estimated duration: 20-30 minutes (compiling takes time).

The compiler is called through Python, using the above-created bash script (pysource.openmpi.sh). This Python source file knows where helper programs and libraries are located and which configuration is to be used, so compiling TELEMAC becomes an easy task in Terminal. First, load pysource.openmpi.sh as source in Terminal, and then test if it is correctly configured by running config.py:

cd ~/telemac/v8p3/configs
source pysource.openmpi.sh
config.py

Running config.py should produce a character-based image in Terminal and end with My work is done. If that is not the case and error messages occur, read the error messages attentively to identify the issue (e.g., there might be a typo in a directory or file name, or a misplaced character somewhere in pysource.openmpi.sh or systel.cis-debian.cfg). Once config.py has run successfully, start compiling TELEMAC with the --clean flag to avoid any interference with earlier installations:

compile_telemac.py --clean

The compilation should run for a while (can take more than 30 minutes) and successfully end with the phrase My work is done.

Troubleshoot errors in the compiling process

If an error occurred in the compiling process, trace back the error messages and identify the component that did not work. Then, thoroughly revise the setup of that component in this workflow. Do not try to re-invent the wheel: the most likely problem is a tiny detail in the files that you created yourself. Troubleshooting can be a tough task, in particular because you need to question your own work.

Test TELEMAC

Estimated duration: 5-10 minutes.

After closing Terminal or after any clean system start-up, the TELEMAC source environment needs to be loaded in Terminal before running TELEMAC:

cd ~/telemac/v8p3/configs
source pysource.openmpi.sh
config.py

To run and test if TELEMAC works, use a pre-defined case from the provided examples folder:

cd ~/telemac/v8p3/examples/telemac2d/gouttedo
telemac2d.py t2d_gouttedo.cas

To test if parallelism works, install htop to visualize CPU usage:

sudo apt update
sudo apt install htop

Start htop’s CPU monitor with:

htop

In a new Terminal tab, run the above TELEMAC example with the flag --ncsize=N, where N is the number of CPUs to use for parallel computation (make sure that N CPUs are available on your machine):

cd ~/telemac/v8p3/examples/telemac2d/gouttedo
telemac2d.py t2d_gouttedo.cas --ncsize=4

While the computation is running, observe the CPU load. If all requested CPUs show activity (typically at varying percentages), the parallel version is working well.

TELEMAC should start up, run the example case, and again end with the phrase My work is done. To assess the efficiency of the number of CPUs used, vary ncsize. For instance, the donau example (cd ~/telemac/v8p3/examples/telemac2d/donau) run with telemac2d.py t2d_donau.cas --ncsize=4 may take approximately 1.5 minutes, while telemac2d.py t2d_donau.cas --ncsize=2 (i.e., half the number of CPUs) takes approximately 2.5 minutes. The computing time may differ depending on your hardware, but note that doubling the number of CPUs does not cut the calculation time in half. Thus, to optimize system resources, it can be more reasonable to run several simulation cases on few cores each than one simulation on many cores.
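
To compare run times for different numbers of CPUs, the donau example can be wrapped in a simple timing loop. This is only a sketch that assumes GNU time is installed and that four CPUs are available:

# requires the TELEMAC environment (pysource.openmpi.sh) to be loaded
cd ~/telemac/v8p3/examples/telemac2d/donau
for n in 1 2 4; do
  /usr/bin/time -f "ncsize=$n finished in %E" telemac2d.py t2d_donau.cas --ncsize=$n
done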

Run Sample Cases (Examples)

TELEMAC comes with many application examples in the sub-directory ~/telemac/v8p3/examples/. To generate the documentation and verify the TELEMAC installation, load the TELEMAC environment and validate it:

cd ~/telemac/v8p3/configs/
source pysource.openmpi.sh
cd ..
config.py
validate_telemac.py

Note

The validate_telemac.py script may fail to run when not all modules are installed (e.g., Hermes is missing).

Utilities (Pre- & Post-processing)

More Pre- and Post-processing Software

More software for TELEMAC pre- and post-processing is available in the form of SALOME and ParaView.

QGIS (Linux and Windows)

Estimated duration: 5-10 minutes (depends on connection speed).

QGIS is a powerful tool for viewing, creating, and editing geospatial data that can be useful in pre- and post-processing. Detailed installation guidelines are provided in the QGIS installation instructions and the QGIS tutorial in this eBook. For working with TELEMAC, consider installing the following QGIS plugins (Plugins > Manage and Install Plugins…):

  • BASEmesh enables creating an SMS 2dm file that can be converted to a SELAFIN geometry for TELEMAC (read more in the QGIS pre-processing tutorial for TELEMAC).

  • PostTelemac visualizes *.slf (and others such as *.res) geometry files at different time steps.

  • DEMto3D enables exporting STL geometry files for working with SALOME and creating 3D meshes.

Note that DEMto3D will be available in the Raster menu: DEMto3D > DEM 3D printing.

BlueKenue (Windows or Linux+Wine)

Estimated duration: 10 minutes.

BlueKenueTM is pre- and post-processing software provided by the National Research Council Canada that is compatible with TELEMAC. It provides similar functions to the Fudaa software featured by the TELEMAC developers and additionally comes with a powerful mesh generator. It is in particular for the mesh generator that you will want to install BlueKenueTM. The only drawback is that BlueKenueTM is designed for Windows, so there are two options for installing it:

  1. TELEMAC is running on a Debian Linux VM and your host system is Windows:
    Download (login details in the Telemac Forum) and install BlueKenueTM on Windows and use the shared folder of the VM to transfer mesh files.

  2. Use Wine (compatibility layer in Linux that enables running Windows applications) to install BlueKenueTM on Linux.

Here are the steps for installing BlueKenueTM on Debian Linux with wine:

Note

The latest 64-bit version (or any 64-bit version) will not install with wine. Make sure to use the 32-bit installer.

  • Install BlueKenueTM using Wine: in Terminal, type wine control.

  • After running wine control in Terminal, a Windows-like control panel window opens.

  • Click on the Add/Remove… button in the window, which opens up another window (Add/Remove Programs).

  • Click on the Install… button and select the downloaded msi installer for BlueKenueTM.

  • Follow the instructions to install BlueKenueTM for Everyone (all users) and create a Desktop Icon.

After the successful installation, launch BlueKenueTM with Wine (read more about starting Windows applications through wine in the Virtual Machines chapter):

  • In Terminal, type wine explorer.

  • In the Wine Explorer window, navigate to Desktop and find the BlueKenue shortcut.

  • Start BlueKenue by double-clicking on the shortcut.

  • Alternatively, identify the installation path and the BlueKenueTM executable.

    • The 32-bit version is typically installed in "C:\\Program Files (x86)\\CHC\\BlueKenue\\BlueKenue.exe".

    • The 64-bit version is typically installed in "C:\\Program Files\\CHC\\BlueKenue\\BlueKenue.exe".

    • Start BlueKenueTM with wine "C:\\Program Files\\CHC\\BlueKenue\\BlueKenue.exe".
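
For convenience, a shell alias can be appended to ~/.bashrc so that BlueKenueTM starts with a single command; the path below assumes the default 32-bit installation location shown above:

# append to ~/.bashrc and reload with: source ~/.bashrc
alias bluekenue='wine "C:\Program Files (x86)\CHC\BlueKenue\BlueKenue.exe"'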

The Canadian Hydrological Model Stewardship (CHyMS) provides more guidance for installing BlueKenueTM on platforms other than Windows on their FAQ page in the troubleshooting section (direct link to how to run Blue Kenue on another operating system).

Fudaa-PrePro (Linux and Windows)

Estimated duration: 5-15 minutes (upper time limit if java needs to be installed).

Get ready with the pre- and post-processing software Fudaa-PrePro:

  • Install java:

    • On Linux: sudo apt install default-jdk

    • On Windows: Get java from java.com

  • Download the latest version from the Fudaa-PrePro repository

  • Un-zip the downloaded file and proceed depending on the platform you are working with (see below)

  • cd to the directory where you un-zipped the Fudaa-PrePro program files

  • Start Fudaa-PrePro from Terminal or Prompt

    • On Linux: type sh supervisor.sh

    • On Windows: type supervisor.bat

There might be an error message such as:

Error: Could not find or load main class org.fudaa.fudaa.tr.TrSupervisor

In this case, open supervisor.sh in a text editor and correct $PWD Fudaa to $(pwd)/Fudaa. In addition, you can edit the default random-access memory (RAM) allocation in the supervisor.sh (or .bat) file. Fudaa-PrePro starts with a default RAM allocation of 6 GB, which might be too small for grid files with more than 3·10⁶ nodes, or too large if your system’s RAM is small. To adapt the RAM allocation and/or fix the above error message, open supervisor.sh (or on Windows: supervisor.bat) in a text editor and find the tag -Xmx6144m, where 6144 defines the RAM allocation in MB. Modify this value to an even-number multiple of 512. For example, set it to 4·512=2048 and correct $PWD Fudaa to $(pwd)/Fudaa:

#!/bin/bash
cd `dirname $0`
java -Xmx2048m -Xms512m -cp "$(pwd)/Fudaa-Prepro-1.4.2-SNAPSHOT.jar" org.fudaa.fudaa.tr.TrSupervisor $1 $2 $3 $4 $5 $6 $7 $8 $9