TELEMAC (Installation)

Requirements

This tutorial guides you through the installation of open TELEMAC-MASCARET on Debian Linux.

Allow approximately two hours for installing TELEMAC and make sure you have a stable internet connection.

Preface

This page only guides through the installation of TELEMAC. A tutorial for running hydro(-morpho)dynamic models with TELEMAC is currently under construction for this eBook.

Good to Know

  • Installing TELEMAC on a Virtual Machine (VM) is useful for getting started with TELEMAC and its sample cases, but not recommended for running a real-world numerical model (limited performance of VMs).

  • Familiarize yourself with the Linux Terminal to understand the underpinnings of compiling TELEMAC.

  • This tutorial refers to the software package open TELEMAC-MASCARET as TELEMAC because MASCARET is a one-dimensional (1d) model and the numerical simulation schemes in this eBook focus on two-dimensional (2d) and three-dimensional (3d) modelling.

Mint Hyfo VM users

If you are working with the Mint Hyfo Virtual Machine, skip the tutorials on this website because TELEMAC is already preinstalled and you are good to go for completing the TELEMAC tutorials.

Two Installation Options

This page describes two ways for installing TELEMAC:

  • Option 1 (recommended): Stand-alone installation of TELEMAC

    • All software and packages needed to run a TELEMAC model are installed manually

    • Advantages:

      • Full control over modules to be installed (high flexibility)

      • Latest version of TELEMAC is installed and can be regularly updated

      • Up-to-date compilers and all libraries exactly match the system

    • Disadvantages:

      • The variety of install options may cause errors when incompatible packages are combined

      • Challenging installation of optional modules such as AED2, HPC and parallelism

  • Option 2: Installation of TELEMAC within the SALOME-HYDRO software suite.

    • All pre-processing tasks are managed with SALOME-HYDRO

    • TELEMAC is launched through the HYDRO-SOLVER module

    • Post-processing is performed with ParaView

    • Advantages:

      • All-in-one solution for pre-processing

      • Integrated HPC installation of TELEMAC v8p2

      • Efficient for MED-file handling

    • Disadvantages:

      • Common input geometry file formats such as SLF (selafin) require additional software

      • Only works without errors on old Debian 9 (stretch)

      • The pre-compiled version of TELEMAC and other modules were built with outdated gfortran compilers and cannot run on up-to-date systems.

      • Frequent problems with the GUI and a high risk of simulation crashes because of invalid library links.

So which option should you choose? To leverage the full capacities of TELEMAC, use both: SALOME-HYDRO (Linux) is a powerful pre-processor for preparing simulations, and the Stand-alone Installation of TELEMAC provides maximum flexibility, system integrity, and computational stability.

Stand-alone Installation of TELEMAC

TELEMAC Docker image

The Austrian engineering office Flussplan provides a Docker container of TELEMAC v8 on their docker-telemac GitHub repository. Note that a Docker container is an easy-to-install virtual environment that provides cross-platform compatibility but reduces computational performance. If you have the proprietary Docker software installed and computational performance is not the primary concern for your models, Flussplan's Docker container might be a good choice. For instance, purely hydrodynamic models with small numbers of grid nodes and no additional TELEMAC modules will run efficiently in the Docker container.

Prerequisites

Working with TELEMAC requires some software for downloading source files, compiling, and running the program. The mandatory software prerequisites for installing TELEMAC on Debian Linux are:

  • Python (use Python3 in the latest releases)

  • Subversion (svn)

  • GNU Fortran 95 compiler (gfortran)

Admin (sudo) rights required

Superuser (sudo) rights are required for many actions described in this workflow. Read more about how to set up and grant sudo rights for a user account on Debian Linux in the tutorial for setting up Debian Linux.

Python3

Estimated duration: 5-8 minutes.

The high-level programming language Python3 is pre-installed on Debian Linux 10.x and is needed to launch the compiler script for TELEMAC. To launch Python3, open Terminal and type python3. To exit Python, type exit().

TELEMAC requires the following additional Python libraries: NumPy, SciPy, and Matplotlib.

To install the three libraries, open Terminal and type (hit Enter after every line):

sudo apt install python3-numpy python3-scipy python3-matplotlib python3-distutils python3-dev python3-pip

To test if the installation was successful, type python3 in Terminal and import the three libraries:

Python 3.7.7 (default, Jul  25 2020, 13:03:44) [GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
>>> import scipy
>>> import matplotlib
>>> a = numpy.array((1, 1))
>>> print(a)
[1 1]
>>> exit()

None of the three library imports should return an ImportError message. To learn more about Python read the section on Packages, Modules and Libraries.

Subversion (svn)

Estimated duration: Less than 5 minutes.

We will need the version control system Subversion for downloading (and keeping up to date) the TELEMAC source files. Subversion is installed through the Debian Terminal (read more in the Debian Wiki) with:

sudo apt install subversion

After the installation, verify that it went well by typing svn --help (should print an overview of svn commands). The Debian Wiki provides a tutorial for working with Subversion.

GNU Fortran 95 Compiler (gfortran)

Estimated duration: 3-10 minutes.

The Fortran 95 compiler is needed to compile TELEMAC through a Python3 script. Debian Linux retrieves gfortran from the standard package repositories. Thus, to install the Fortran 95 compiler, open Terminal and type:

sudo apt install gfortran

Compilers and Other Essentials

Estimated duration: 2-5 minutes.

To enable parallelism, a C compiler is required so that the command cmake is recognized in Terminal. Moreover, we will need build-essential for building packages, dialog for a comfortable dialogue environment, and the text editor VIM for bash file editing. Therefore, open Terminal and type:

sudo apt install -y cmake build-essential dialog vim

Download TELEMAC

Estimated duration: 15-30 minutes.

We will need more packages to enable parallelism and compiling but, before installing them, download the latest version of TELEMAC through Subversion (svn). The developers (irregularly) announce the newest public release on their website and the absolute latest release can be read from the svn-tags website (use the username and password shown in the command block below). To download TELEMAC, open Terminal in the Home directory (either use cd or use the Files browser to navigate to the Home directory and right-click in the empty space to open Terminal) and type (enter no when asked for password encryption):

svn co http://svn.opentelemac.org/svn/opentelemac/tags/v8p2r1  ~/telemac/v8p2 --username ot-svn-public --password telemac1*

This downloads TELEMAC v8p2 to the directory /home/USER-NAME/telemac/v8p2.

Compile TELEMAC

Adapt and Verify Configuration File (systel.*.cfg)

Estimated duration: 15-20 minutes.

Facilitate compiling with our templates

To facilitate setting up the systel file, use our template (AED2 disabled by default):

  • Right-click on this download > Save Link As… > ~/telemac/v8p2/configs/systel.cis-debian.cfg > Replace Existing.

  • Make sure to verify the directories described in this section and replace the USER-NAME with your user name in the downloaded systel.cis-debian.cfg file.

  • To use AED2, download systel.cis-debian-aed2.cfg.

  • For dynamic compiling, download systel.cis-debian-dyn.cfg (rather than the above systel.cis-debian.cfg file).
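
The USER-NAME placeholders in the downloaded template can also be replaced in one go with sed. This is a sketch: it assumes the template was saved to the path below and that your Linux user name matches the home directory name.

```shell
# Replace every USER-NAME placeholder with the current user name (hypothetical path)
sed -i "s/USER-NAME/$(whoami)/g" ~/telemac/v8p2/configs/systel.cis-debian.cfg
# Check the result: no USER-NAME should be left
grep "USER-NAME" ~/telemac/v8p2/configs/systel.cis-debian.cfg || echo "all placeholders replaced"
```

Still verify the resulting paths manually, because the optionals may live elsewhere on your system.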

The configuration file will tell the compiler how flags are defined and where optional software lives. Here, we use the configuration file systel.cis-debian.cfg, which lives in ~/telemac/v8p2/configs/. In particular, we are interested in the following section of the file:

# _____                          ___________________________________
# ____/ Debian gfortran openMPI /__________________________________/
[debgfopenmpi]
#
par_cmdexec:   <config>/partel < partel.par >> <partel.log>
#
mpi_cmdexec:   /usr/bin/mpiexec -wdir <wdir> -n <ncsize> <exename>
mpi_hosts:
#
cmd_obj:    /usr/bin/mpif90 -c -O3 -DHAVE_MPI -fconvert=big-endian -frecord-marker=4 <mods> <incs> <f95name>
cmd_lib:    ar cru <libname> <objs>
cmd_exe:    /usr/bin/mpif90 -fconvert=big-endian -frecord-marker=4 -lpthread -v -lm -o <exename> <objs> <libs>
#
mods_all:   -I <config>
#
libs_all:    /usr/lib64/openmpi/lib/libmpi.so.0.0.2 /home/telemac/metis-5.1.0/build/lib/libmetis.a

The configuration file contains other configurations, such as a scalar or a debug configuration, for compiling TELEMAC. Here, we only use the Debian gfortran open MPI section, which has the configuration name [debgfopenmpi]. To verify that this section is correctly defined, check where the following libraries live on your system (use Terminal with the cd and ls commands, or Debian's File browser):

  • Metis is typically located in ~/telemac/v8p2/optionals/metis-5.1.0/build (if you used this directory for <install_path>), where libmetis.a typically lives in ~/telemac/v8p2/optionals/metis-5.1.0/build/lib/libmetis.a

  • Open MPI’s include folder is typically located in /usr/lib/x86_64-linux-gnu/openmpi/include

  • Open MPI library typically lives in /usr/lib/x86_64-linux-gnu/openmpi/libmpi.so.40.10.3
    The number 40.10.3 may be different depending on the latest version. Make sure to adapt the number after libmpi.so..

  • mpiexec is typically installed in /usr/bin/mpiexec

  • mpif90 is typically installed in /usr/bin/mpif90

  • If installed, AED2 typically lives in ~/telemac/v8p2/optionals/aed2/, which should contain the file libaed2.a (among others) and the folders include, obj, and src.
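
The locations listed above can be checked quickly from Terminal. The following is a sketch; the paths are typical defaults and may differ on your system:

```shell
# Verify that the MPI wrappers are on the expected paths
ls -l /usr/bin/mpiexec /usr/bin/mpif90
# Find the actual libmpi.so version number to use in the cfg file
find /usr/lib/x86_64-linux-gnu/openmpi -name "libmpi.so.*"
# Check that metis (and optionally AED2) were built
find ~/telemac/v8p2/optionals -name "libmetis.a" -o -name "libaed2.a"
```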

Then open the configuration file in VIM (or any other text editor) to verify and adapt the Debian gfortran open MPI section:

cd ~/telemac/v8p2/configs
vim systel.cis-debian.cfg

Make the following adaptations in Debian gfortran open MPI section to enable parallelism:

  • Remove par_cmdexec from the configuration file; that means delete the line (otherwise, parallel processing will crash with a message saying that it cannot find PARTEL.PAR):
    par_cmdexec:   <config>/partel < PARTEL.PAR >> <partel.log>

  • Find libs_all to add and adapt:

    • metis (set all metis-related paths to /home/USER-NAME/telemac/v8p2/optionals/metis-5.1.0/build/lib/libmetis.a).

    • openmpi (correct the library file to /usr/lib/x86_64-linux-gnu/openmpi/libmpi.so.40.10.3 or wherever libmpi.so.xx.xx.x lives on your machine).

    • med including hdf5 (~/telemac/v8p2/optionals/).

    • aed2 (~/telemac/v8p2/optionals/aed2/libaed2.a).

libs_all:    /usr/lib/x86_64-linux-gnu/openmpi/lib/libmpi.so.40.10.3 /home/USER-NAME/telemac/v8p2/optionals/metis-5.1.0/build/lib/libmetis.a /home/USER-NAME/telemac/v8p2/optionals/aed2/libaed2.a /home/USER-NAME/telemac/v8p2/optionals/med-3.2.0/lib/libmed.so /home/USER-NAME/telemac/v8p2/optionals/hdf5/lib/libhdf5.so
  • Add the incs_all variable to point to the include directories of openmpi, med, and aed2:

incs_all: -I /usr/lib/x86_64-linux-gnu/openmpi/include -I /home/USER-NAME/telemac/v8p2/optionals/aed2 -I /home/USER-NAME/telemac/v8p2/optionals/aed2/include  -I /home/USER-NAME/telemac/v8p2/optionals/med-3.2.0/include
  • Search for cmd_obj: definitions and add -cpp in front of the -c flag, plus -DHAVE_AED2 and -DHAVE_MED. For example:

cmd_obj:    /usr/bin/mpif90 -cpp -c -O3 -DHAVE_AED2 -DHAVE_MPI -DHAVE_MED -fconvert=big-endian -frecord-marker=4 <mods> <incs> <f95name>

An additional keyword in the configurations is options:, which accepts multiple keywords including mpi, api (TelApy, TELEMAC's Python API), hpc, and dyn or static. The provided cfg file primarily uses the mpi keyword. To use other installation options (e.g., HPC or dynamic), read the instructions for HPC installation on opentelemac.org and have a look at the most advanced default config file from EDF (~/telemac/v8p2/configs/systel.edf.cfg).
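
For illustration, enabling the Python API on top of MPI would only require extending the options keyword in the [debgfopenmpi] section. This is a sketch; the api keyword triggers building TELEMAC's wrap_api libraries:

```
options: mpi api
```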

Setup Python Source File

Estimated duration: 15-20 minutes.

Facilitate setting up the pysource with our templates

To facilitate setting up the pysource file, use our template.

The Python source file lives in ~/telemac/v8p2/configs, where there is also a template available called pysource.template.sh. Here, we will use the template to create our own Python source file called pysource.openmpi.sh tailored for compiling the parallel version of TELEMAC on Debian Linux with the Open MPI library. The Python source file starts with the definition of the following variables:

  • HOMETEL: The path to the telemac/VERSION folder (<root>).

  • SYSTELCFG: The path to the above-modified configuration file (systel.cis-debian.cfg) relative to HOMETEL.

  • USETELCFG: The name of the configuration to be used (debgfopenmpi). Configurations enabled are defined in the systel.*.cfg file, in the brackets ([debgfopenmpi]) directly below the header of every configuration section.

  • SOURCEFILE: The path to this file and its name relative to HOMETEL.

More definitions are required to define TELEMAC's Application Programming Interface (API), (parallel) compilers to build TELEMAC with Open MPI, and external libraries located in the optionals folder. The following code block shows how the Python source file pysource.openmpi.sh should look. Make sure to verify every directory on your local file system, use your USER-NAME, and take your time to get all directories right, without typos (critical task).

### TELEMAC settings -----------------------------------------------
###
# Path to TELEMAC's root dir
export HOMETEL=/home/USER-NAME/telemac/v8p2
# Add Python scripts to PATH
export PATH=$HOMETEL/scripts/python3:.:$PATH
# Configuration file
export SYSTELCFG=$HOMETEL/configs/systel.cis-debian.cfg
# Name of the configuration to use
export USETELCFG=debgfopenmpi
# Path to this Python source file
export SOURCEFILE=$HOMETEL/configs/pysource.openmpi.sh
# Force python to flush its output
export PYTHONUNBUFFERED='true'
### API
export PYTHONPATH=$HOMETEL/scripts/python3:$PYTHONPATH
export LD_LIBRARY_PATH=$HOMETEL/builds/$USETELCFG/wrap_api/lib:$LD_LIBRARY_PATH
export PYTHONPATH=$HOMETEL/builds/$USETELCFG/wrap_api/lib:$PYTHONPATH
###
### COMPILERS -----------------------------------------------------
export SYSTEL=$HOMETEL/optionals
### MPI -----------------------------------------------------------
export MPIHOME=/usr/bin/mpifort.mpich
export PATH=/usr/lib/x86_64-linux-gnu/openmpi:$PATH
export LD_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu/openmpi/lib:$LD_LIBRARY_PATH
###
### EXTERNAL LIBRARIES ---------------------------------------------
### HDF5 -----------------------------------------------------------
export HDF5HOME=$SYSTEL/hdf5
export LD_LIBRARY_PATH=$HDF5HOME/lib:$LD_LIBRARY_PATH
### MED  -----------------------------------------------------------
export MEDHOME=$SYSTEL/med-3.2.0
export LD_LIBRARY_PATH=$MEDHOME/lib:$LD_LIBRARY_PATH
export LD_RUN_PATH=$HDF5HOME/lib:$MEDHOME/lib:$LD_RUN_PATH
export PATH=$MEDHOME/bin:$PATH
### METIS ----------------------------------------------------------
export METISHOME=$SYSTEL/metis-5.1.0/build/
export LD_LIBRARY_PATH=$METISHOME/lib:$LD_LIBRARY_PATH
### AED ------------------------------------------------------------
export AEDHOME=$SYSTEL/aed2
export LD_LIBRARY_PATH=$AEDHOME/obj:$LD_LIBRARY_PATH
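
Once the file is saved, a quick sanity check after sourcing it verifies that the key variables are exported (a sketch):

```shell
source pysource.openmpi.sh
# These should echo the values defined above, not empty strings
echo "HOMETEL=$HOMETEL"
echo "USETELCFG=$USETELCFG"
echo "SYSTELCFG=$SYSTELCFG"
```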

Compile

Estimated duration: 20-30 minutes (compiling takes time).

The compiler is called through Python and the above-created bash script (pysource.openmpi.sh). Thus, the Python source file knows where helper programs and libraries are located, and it knows the configuration to be used. With the Python source file, compiling TELEMAC becomes an easy task in Terminal. First, load pysource.openmpi.sh as source in Terminal, and then test if it is correctly configured by running config.py:

cd ~/telemac/v8p2/configs
source pysource.openmpi.sh
config.py

Running config.py should produce a character-based image in Terminal and end with My work is done. If error messages occur instead, read them attentively to identify the issue (e.g., there might be a typo in a directory or file name, or a misplaced character somewhere in pysource.openmpi.sh or systel.cis-debian.cfg). When config.py has run successfully, start compiling TELEMAC with the --clean flag to avoid any interference with earlier installations:

compile_telemac.py --clean

The compilation should run for a while (can take more than 30 minutes) and successfully end with the phrase My work is done.

Troubleshoot errors in the compiling process

If an error occurred in the compiling process, trace back the error messages and identify the component that did not work. Revise the setup of the concerned component in this workflow very thoroughly. Do not try to re-invent the wheel - the most likely problem is a tiny detail in the files that you created on your own. Troubleshooting may be a tough task, in particular because you need to question your own work.

Test TELEMAC

Estimated duration: 5-10 minutes.

After closing Terminal or after any clean system start-up, load the TELEMAC source environment in Terminal before running TELEMAC:

cd ~/telemac/v8p2/configs
source pysource.openmpi.sh
config.py
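
To avoid typing these three lines after every restart, a shortcut can be appended to ~/.bashrc (a sketch; the alias name telemac is an arbitrary choice):

```shell
# Appending this line once makes "telemac" load the environment in any new Terminal
echo "alias telemac='source ~/telemac/v8p2/configs/pysource.openmpi.sh'" >> ~/.bashrc
```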

To run and test if TELEMAC works, use a pre-defined case from the provided examples folder:

cd ~/telemac/v8p2/examples/telemac2d/gouttedo
telemac2d.py t2d_gouttedo.cas

To test if parallelism works, install htop to visualize CPU usage:

sudo apt update
sudo apt install htop

Start htop’s CPU monitor with:

htop

In a new Terminal tab, run the above TELEMAC example with the flag --ncsize=N, where N is the number of CPUs to use for parallel computation (make sure that N CPUs are available on your machine):

cd ~/telemac/v8p2/examples/telemac2d/gouttedo
telemac2d.py t2d_gouttedo.cas --ncsize=4

While the computation is running, observe the CPU load. If all CPUs are working at varying percentages, the parallel version is working well.

TELEMAC should start up, run the example case, and again end with the phrase My work is done. To assess the efficiency of the number of CPUs used, vary ncsize. For instance, the donau example (cd ~/telemac/v8p2/examples/telemac2d/donau) run with telemac2d.py t2d_donau.cas --ncsize=4 may take approximately 1.5 minutes, while telemac2d.py t2d_donau.cas --ncsize=2 (i.e., half the number of CPUs) takes approximately 2.5 minutes. The computing time may differ depending on your hardware, but note that doubling the number of CPUs does not cut the calculation time in half. To optimize system resources, it can therefore be reasonable to run several simulation cases on fewer cores each rather than one simulation on many cores.

Run Sample Cases (Examples)

TELEMAC comes with many application examples in the sub-directory ~/telemac/v8p2/examples/. To generate the documentation and verify the TELEMAC installation, load the TELEMAC environment and validate it:

cd ~/telemac/v8p2/configs/
source pysource.openmpi.sh
cd ..
config.py
validate_telemac.py

Note

The validate_telemac.py script may fail when not all modules are installed (e.g., if Hermes is missing).

Utilities (Pre- & Post-processing)

BlueKenue (Windows or Linux+Wine)

Estimated duration: 10 minutes.

BlueKenueTM is pre- and post-processing software provided by the National Research Council Canada that is compatible with TELEMAC. It provides similar functions to the Fudaa software featured by the TELEMAC developers and additionally comes with a powerful mesh generator. It is in particular for the mesh generator that installing BlueKenueTM is worthwhile. The only drawback is that BlueKenueTM is designed for Windows. So there are two options for installing BlueKenueTM:

  1. TELEMAC is running on a Debian Linux VM and your host system is Windows:
    Download (login details in the Telemac Forum) and install BlueKenueTM on Windows and use the shared folder of the VM to transfer mesh files.

  2. Use Wine (compatibility layer in Linux that enables running Windows applications) to install BlueKenueTM on Linux.

Here are the steps for installing BlueKenueTM on Debian Linux with Wine:

Note

The latest 64-bit version (or any 64-bit version) will not install with wine. Make sure to use the 32-bit installer.

  • Install BlueKenueTM using Wine: in Terminal, type wine control.

  • After running wine control in Terminal, a Windows-like window opens.

  • Click on the Add/Remove… button in the window, which opens up another window (Add/Remove Programs).

  • Click on the Install… button and select the downloaded msi installer for BlueKenueTM.

  • Follow the instructions to install BlueKenueTM for Everyone (all users) and create a Desktop Icon.

After the successful installation, launch BlueKenueTM with Wine (read more about starting Windows applications through wine in the Virtual Machines chapter):

  • In Terminal type wine explorer

  • In the Wine Explorer window, navigate to Desktop and find the BlueKenue shortcut.

  • Start BlueKenue by double-clicking on the shortcut.

  • Alternatively, identify the installation path and the BlueKenueTM executable.

    • The 32-bit version is typically installed in "C:\\Program Files (x86)\\CHC\\BlueKenue\\BlueKenue.exe".

    • The 64-bit version is typically installed in "C:\\Program Files\\CHC\\BlueKenue\\BlueKenue.exe".

    • Start BlueKenueTM with wine "C:\\Program Files\\CHC\\BlueKenue\\BlueKenue.exe".

The Canadian Hydrological Model Stewardship (CHyMS) provides more guidance for installing BlueKenueTM on other platforms than Windows on their FAQ page in the troubleshooting section (direct link to how to run Blue Kenue on another operating system).

Fudaa-PrePro (Linux and Windows)

Estimated duration: 5-15 minutes (upper limit if Java needs to be installed).

Get ready with the pre- and post-processing software Fudaa-PrePro:

  • Install Java:

    • On Linux: sudo apt install default-jdk

    • On Windows: Get Java from java.com

  • Download the latest version from the Fudaa-PrePro repository

  • Un-zip the downloaded file and proceed depending on the platform you are working with (see below)

  • cd to the directory where you un-zipped the Fudaa-PrePro program files

  • Start Fudaa-PrePro from Terminal or Prompt

    • On Linux: type sh supervisor.sh

    • On Windows: type supervisor.bat

There might be an error message such as:

Error: Could not find or load main class org.fudaa.fudaa.tr.TrSupervisor

In this case, open supervisor.sh in a text editor and correct $PWD Fudaa to $(pwd)/Fudaa. In addition, you can edit the default random-access memory (RAM) allocation in the supervisor.sh (or .bat) file. Fudaa-PrePro starts with a default RAM allocation of 6 GB, which might be too small for grid files with more than 3·10⁶ nodes, or too large if your system's RAM is small. To adapt the RAM allocation and/or fix the above error message, open supervisor.sh (or on Windows: supervisor.bat) and find the tag -Xmx6144m, where 6144 defines the RAM allocation in MB. Set this value to an even-number multiple of 512. For example, set it to 4·512=2048 and correct $PWD Fudaa to $(pwd)/Fudaa:

#!/bin/bash
cd `dirname $0`
java -Xmx2048m -Xms512m -cp "$(pwd)/Fudaa-Prepro-1.4.2-SNAPSHOT.jar" \
  org.fudaa.fudaa.tr.TrSupervisor $1 $2 $3 $4 $5 $6 $7 $8 $9
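
The even-number multiple of 512 can be computed directly in the shell; for example (a sketch):

```shell
# 4 multiples of 512 MB -> 2048 MB for the -Xmx flag
N=4
XMX=$((N * 512))
echo "-Xmx${XMX}m"   # prints -Xmx2048m
```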

SALOME-HYDRO (Linux)

SALOME-HYDRO is a spinoff of SALOME (see the description in the modular installation) with full capacities to create and run a numerical model with TELEMAC. The program is distributed on salome-platform.org as a specific EDF contribution.

Linux

SALOME-HYDRO also works on Windows platforms, but most applications and support are provided for Debian Linux.

Outdated library links of the SALOME-HYDRO installer

On any system that is not Debian 9 (stretch), SALOME-HYDRO can only be used as a pre-processor (Geometry & Mesh modules) and as a post-processor (ParaVis module) for med-file handling. The HydroSolver module, which potentially enables running TELEMAC, does not work properly on Debian 10 or any system that is not Debian 9. Therefore, the Stand-alone Installation of TELEMAC is still required to run models developed with SALOME-HYDRO.

Prerequisites

  • Install required packages (verify the latest version of libssl and, if necessary, correct the version):

sudo apt install openmpi-common gfortran mpi-default-dev zlib1g-dev libnuma-dev xterm net-tools
  • Install earlier versions of libssl:

    • Open the list of sources
      sudo editor /etc/apt/sources.list

    • Ubuntu users: In sources.list, add Ubuntu’s Bionic security as source with
      deb http://security.ubuntu.com/ubuntu bionic-security main
      Using Nano as text editor, copy the above line into sources.list, then press CTRL+O, confirm writing with Enter, then press CTRL+X to exit Nano.

    • Debian users: In sources.list, add Debian Stretch source with
      deb http://deb.debian.org/debian/ stretch main contrib non-free
      deb-src http://deb.debian.org/debian stretch main contrib non-free
      Using Nano as text editor, copy the above lines into sources.list, then press CTRL+O, confirm writing with Enter, then press CTRL+X to exit Nano.

    • Back in Terminal, type
      sudo apt update && apt-cache policy libssl1.0-dev
      sudo apt install libssl1.0-dev libopenblas-dev libgeos-dev unixodbc-dev libnetcdf-dev libhdf4-0-alt libpq-dev qt5ct libgfortran3

  • Debian 9 users will need to add and install nvidia drivers as described in the virtual machine / Debian Linux installation section to Enable OpenGL.

Debian 10 (buster) Users

Potentially harmful action

The following steps for renaming system libraries are potentially harmful to your system. Only continue if you absolutely know what you are doing. Otherwise, go back to the Stand-alone Installation of TELEMAC section.

SALOME-HYDRO uses some outdated libraries, which requires that newer versions (e.g., of the openmpi library) be copied and the copies renamed to match the outdated library names. Therefore, open Terminal and type:

sudo cp /usr/lib/x86_64-linux-gnu/libmpi.so.40 /usr/lib/x86_64-linux-gnu/libmpi.so.20
sudo cp /usr/lib/x86_64-linux-gnu/libicui18n.so.63 /usr/lib/x86_64-linux-gnu/libicui18n.so.57
sudo cp /usr/lib/x86_64-linux-gnu/libicuuc.so.63 /usr/lib/x86_64-linux-gnu/libicuuc.so.57
sudo cp /usr/lib/x86_64-linux-gnu/libicudata.so.63 /usr/lib/x86_64-linux-gnu/libicudata.so.57
sudo cp /usr/lib/x86_64-linux-gnu/libnetcdf.so.13 /usr/lib/x86_64-linux-gnu/libnetcdf.so.11
sudo cp /usr/lib/x86_64-linux-gnu/libmpi_usempif08.so.40 /usr/lib/x86_64-linux-gnu/libmpi_usempif08.so.20
sudo cp /usr/lib/x86_64-linux-gnu/libmpi_java.so.40 /usr/lib/x86_64-linux-gnu/libmpi_java.so.20
sudo cp /usr/lib/x86_64-linux-gnu/libmpi_cxx.so.40 /usr/lib/x86_64-linux-gnu/libmpi_cxx.so.20
sudo cp /usr/lib/x86_64-linux-gnu/libmpi_mpifh.so.40 /usr/lib/x86_64-linux-gnu/libmpi_mpifh.so.20
sudo cp /usr/lib/x86_64-linux-gnu/libmpi_usempi_ignore_tkr.so.40 /usr/lib/x86_64-linux-gnu/libmpi_usempi_ignore_tkr.so.20
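
The openmpi-related copies above can equivalently be written as a loop (same effect; the icu and netcdf libraries have different version numbers and therefore stay as explicit commands):

```shell
# Copy the .so.40 openmpi libraries to the .so.20 names that SALOME-HYDRO expects
cd /usr/lib/x86_64-linux-gnu
for f in libmpi libmpi_usempif08 libmpi_java libmpi_cxx libmpi_mpifh libmpi_usempi_ignore_tkr; do
  sudo cp "$f.so.40" "$f.so.20"
done
```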

In addition, the Qt library of the SALOME-HYDRO installer targets outdated libraries on Debian 10. To troubleshoot this issue, open the file explorer and:

  • Go to the directory /usr/lib/x86_64-linux-gnu/

  • Find, highlight, and copy all lib files that contain the string libQt5 (or even just Qt5).

  • Paste the copied Qt5 library files into /SALOME-HYDRO/Salome-V2_2/prerequisites/Qt-591/lib/ (confirm replace existing files).

Both procedures for copying library files are anything but a clean solution. However, this is currently the only way to get SALOME-HYDRO working on Debian 10.

Install SALOME-HYDRO

Open Terminal, cd into the directory where you downloaded Salome-HYDRO-V2_2-S9.run (or Salome-V1_1_univ_3.run), and type:

chmod 775 Salome-HYDRO-V2_2-S9.run
./Salome-HYDRO-V2_2-S9.run

During the installation process, define a convenient installation directory such as /home/salome-hydro/. The installer guides you through the installation and prompts how to launch the program at the end.

Attention

If you get error messages such as ./create_appli_V1_1_univ.sh/xml: line [...]: No such file or directory., there is probably an issue with the version of Python. In this case, run update-alternatives --install /usr/bin/python python /usr/bin/python2.7 1 and re-try.

Try to launch SALOME-HYDRO:

cd /home/salome-hydro/appli_V2_2/
./salome

If there are issues such as Kernel/Session in the Naming Service ([Errno 3] No such processRuntimeError: Process NUMBER for Kernel/Session not found), go to the troubleshooting page.

If the program is not showing up properly (e.g., empty menu items), read more about Qt GUI support on the troubleshooting page.

ParaView (ParaVis) through SALOME-HYDRO

ParaView serves for the visualization of model results in the SALOME-HYDRO modelling chain. The built-in ParaViS module essentially corresponds to ParaView, but using ParaView separately provides a better experience for post-processing results. The installation of SALOME-HYDRO already includes an older version of ParaView that can manipulate MED files. To start ParaView through SALOME-HYDRO, open Terminal, cd to the directory where SALOME-HYDRO is installed, launch the environment, and then launch ParaView:

cd /home/salome-hydro/appli_V2_2/
. env.d/envProducts.sh
./runRemote.sh paraview

Tip

If the ParaVis module continuously crashes in SALOME-HYDRO, consider installing the latest version of SALOME (e.g., as described with the installation of OpenFOAM).

Alternatively, ParaView is freely available on the developer’s website and the latest stable release can be installed on Debian Linux, through the Terminal:

sudo apt install paraview

In this case, to run ParaView, type paraview in Terminal. If you are using a virtual machine, start ParaView with the --mesa-llvm flag (i.e., paraview --mesa-llvm). To enable MED file handling, MED coupling is necessary, which requires following the installation instructions on docs.salome-platform.org.

Start SALOME-HYDRO

To start SALOME-HYDRO, open Terminal and type:

/home/salome-hydro/appli_V1_1_univ/salome

QGIS (Linux and Windows)

Estimated duration: 5-10 minutes (depends on connection speed).

QGIS is a powerful tool for viewing, creating, and editing geospatial data that can be useful in pre- and post-processing. Detailed installation guidelines are provided in the QGIS installation instructions in this eBook.

For working with TELEMAC, consider installing the following QGIS Plugins (Plugins > Manage and Install Plugins…):

  • BASEmesh enables creating an SMS 2dm file that can be converted to a selafin geometry for TELEMAC (read more in the QGIS pre-processing tutorial for TELEMAC).

  • PostTelemac visualizes *.slf (and others such as *.res) geometry files at different time steps.

  • DEMto3D enables exporting STL geometry files for working with SALOME and creating 3D meshes.

Note that DEMto3D will be available in the Raster menu: DEMto3D > DEM 3D printing.