Build FVCOM4.1 (MPI) on ITO Subsystem-A using Intel Compilers


This page is a memorandum on building and running [cci]FVCOM4.1[/cci] with [cci]MPI[/cci] using Intel compilers on Subsystem-A of the ITO supercomputer at Kyushu University. The serial ([cci]Series[/cci]) build is introduced in this page, and the build environment prepared for [cci]Series[/cci] is assumed here. The source files are supposed to exist in [cci]FVCOM4.1/FVCOM_source[/cci], and the test case is [cci]Estuary[/cci] in [cci]FVCOM4.1/Examples/Estuary[/cci].

Preparation for METIS

The [cci]METIS[/cci] library must be installed for the [cci]MPI[/cci] build. First, [cci]load[/cci] the Intel compilers:
[cc]
$ module load intel/2018
[/cc]
Set the environment variables for the Intel C compiler as follows:
[cc]
$ export CC=icc
$ export CPP="icc -E"
[/cc]
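As an optional sanity check, you can confirm that the module is loaded and that the variables point at the Intel C compiler:
[cc]
$ module list          # intel/2018 should appear in the list
$ echo $CC $CPP        # should print: icc icc -E
$ $CC --version        # should report the Intel C compiler (icc)
[/cc]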
Move to the [cci]FVCOM4.1/METIS_source[/cci] directory, where the [cci]METIS[/cci] source archive exists, and extract the source code:
[cc]
$ tar xf metis.tgz
$ cd metis
[/cc]
Open [cci]makefile[/cci] and edit [cci]MOPT[/cci] at line 12 as follows:
[cc]
MOPT = -O3 -no-prec-div -fp-model fast=2 -xHost
[/cc]
The [cci]MOPT[/cci] options above are the values recommended by the vendor. This [cci]makefile[/cci] should read the [cci]make.inc[/cci] in [cci]FVCOM4.1/FVCOM_source[/cci] that was prepared for the [cci]Series[/cci] build (the [cci]FLAG[/cci] settings there do not matter for building METIS). For this purpose, correct the [cci]include[/cci] line in this [cci]makefile[/cci] as follows:
[cc]
include ../../FVCOM_source/make.inc
[/cc]
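If you prefer to apply these two edits non-interactively, a sketch like the following should work (it assumes the original [cci]include[/cci] line already refers to a [cci]make.inc[/cci]; check the resulting [cci]makefile[/cci] afterwards):
[cc]
$ sed -i 's|^MOPT *=.*|MOPT = -O3 -no-prec-div -fp-model fast=2 -xHost|' makefile
$ sed -i 's|^include .*make\.inc|include ../../FVCOM_source/make.inc|' makefile
$ grep -n -e '^MOPT' -e '^include' makefile    # verify both changes
[/cc]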

Building METIS

To build [cci]METIS[/cci], simply run [cci]make install[/cci] in the source directory.
[cc]
$ make install
[/cc]
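If the build succeeds, the METIS library is installed to the location specified in [cci]make.inc[/cci]. A simple way to confirm this (the exact path depends on your [cci]make.inc[/cci]) is to search the FVCOM4.1 tree:
[cc]
$ find ../.. -name 'libmetis*' 2>/dev/null
[/cc]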

Edit make.inc

The [cci]make.inc[/cci] for [cci]Series[/cci] created in (here) requires slight modification of [cci]FLAG_4[/cci] and [cci]PARLIB[/cci] as follows:
[cc]
FLAG_1 = -DDOUBLE_PRECISION
FLAG_3 = -DWET_DRY
FLAG_4 = -DMULTIPROCESSOR
PARLIB = -lmetis #-L/usr/local/lib -lmetis
FLAG_8 = -DLIMITED_NO
FLAG_10 = -DGCN
FLAG_14 = -DRIVER_FLOAT
[/cc]
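Before moving on, it can be worth listing the active (uncommented) [cci]FLAG[/cci] and [cci]PARLIB[/cci] lines to confirm the edit took effect:
[cc]
$ grep -E '^(FLAG_|PARLIB)' make.inc
[/cc]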
Then comment out the Intel compiler settings for [cci]Series[/cci] in [cci]make.inc[/cci] and add the settings for Intel [cci]MPI[/cci] as follows:
[cc]
#--------------------------------------------------------------------------
# Intel/MPI Compiler Definitions (ITO-A@kyushu-u)
#--------------------------------------------------------------------------
CPP = icc -E
COMPILER = -DIFORT
CC = mpiicc
CXX = mpiicpc
CFLAGS = -O3 -no-prec-div -fp-model fast=2 -xHost
FC = mpiifort
DEBFLGS = #-check all -traceback
OPT = -O3 -no-prec-div -fp-model fast=2 -xHost
#--------------------------------------------------------------------------
[/cc]
This setting is for [cci]FLAT MPI[/cci], and the optimization options are the values recommended by the vendor.
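Before compiling, it may also be worth checking that the Intel MPI wrappers resolve to the expected compilers (the exact version strings depend on the loaded module):
[cc]
$ which mpiifort mpiicc mpiicpc
$ mpiifort -v    # should report ifort from the intel/2018 module
$ mpiicc -v      # should report icc
[/cc]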

makefile for building FVCOM4.1

For the [cci]Series[/cci] build, [cci]mod_esmf_nesting.F[/cci] was removed from [cci]makefile[/cci] to avoid link errors. For the [cci]MPI[/cci] build, this modified [cci]makefile[/cci] remains valid, while the original one also causes no problem.
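If you start from the original [cci]makefile[/cci] and want to apply the same change, first locate the references (line numbers differ between FVCOM releases) and then remove or comment out the corresponding entries by hand:
[cc]
$ grep -n 'mod_esmf_nesting' makefile
[/cc]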

Minor correction of a source file

As in the [cci]Series[/cci] case introduced here, the trailing comment at line 131 of [cci]wreal.F[/cci] should be deleted to avoid a warning during compilation:
[cc]
# endif !!ice_embedding yding
[/cc]
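One way to apply this edit non-interactively is sketched below; it assumes the offending text is still at line 131 of your copy of [cci]wreal.F[/cci], so confirm the line first:
[cc]
$ sed -n '131p' wreal.F                       # should show: # endif !!ice_embedding yding
$ sed -i '131s/\(# *endif\).*/\1/' wreal.F    # keep the directive, drop the trailing comment
[/cc]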

Build FVCOM4.1 for MPI

In the [cci]FVCOM_source[/cci] directory, build [cci]FVCOM4.1[/cci] for [cci]MPI[/cci] as follows:
[cc]
$ make
[/cc]
The build takes some time; when it finishes, the executable [cci]fvcom[/cci] should be created.
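If the build finishes without errors, the MPI-enabled executable appears in [cci]FVCOM_source[/cci]. Should you later change [cci]make.inc[/cci] or the [cci]FLAG[/cci]s, a clean rebuild is the safer option:
[cc]
$ ls -lh fvcom          # the MPI-enabled executable
$ make clean && make    # full rebuild after changing make.inc (if needed)
[/cc]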

Executing the Estuary test case as a batch job

The [cci]MPI[/cci] run may only work as a batch job (not interactively). First move to [cci]FVCOM4.1/Examples/Estuary/run[/cci], open [cci]tst_run.nml[/cci], and set the number of rivers to [cci]0[/cci] (originally 3, which is a bug in the test case):
[cc]
&NML_RIVER_TYPE
RIVER_NUMBER = 0,
[/cc]
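A quick way to check or apply this change from the shell (assuming the namelist formatting matches the default [cci]tst_run.nml[/cci]) is:
[cc]
$ grep -n 'RIVER_NUMBER' tst_run.nml
$ sed -i 's/RIVER_NUMBER *= *3,/RIVER_NUMBER = 0,/' tst_run.nml    # set the number of rivers to 0
[/cc]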
Copy the [cci]fvcom[/cci] executable to [cci]FVCOM4.1/Examples/Estuary/run[/cci], move to that directory, and create a script named, e.g., [cci]mpi.sh[/cci] containing the following:
[cc]
#!/bin/bash
#PJM -L "rscunit=ito-a"
#PJM -L "rscgrp=ito-ss-dbg"
#PJM -L "vnode=1"
#PJM -L "vnode-core=36"
#PJM -L "elapse=10:00"
#PJM -j
#PJM -X

module load intel/2018

NUM_NODES=${PJM_VNODES}    # number of allocated nodes
NUM_CORES=36               # cores per node
NUM_PROCS=36               # total MPI processes (nodes x cores/node)

export I_MPI_PERHOST=$NUM_CORES    # place this many MPI processes on each node
export I_MPI_FABRICS=shm:ofa       # shared memory within a node, OFA between nodes

export I_MPI_HYDRA_BOOTSTRAP=rsh
export I_MPI_HYDRA_BOOTSTRAP_EXEC=/bin/pjrsh     # launch remote processes via pjrsh
export I_MPI_HYDRA_HOST_FILE=${PJM_O_NODEINF}    # node list provided by the batch system

mpiexec.hydra -n $NUM_PROCS ./fvcom --casename=tst
[/cc]
This script is for [cci]FLAT MPI[/cci] and assumes one node with 36 cores per node. The number of cores per node should be specified in [cci]NUM_CORES[/cci], and [cci]NUM_PROCS[/cci] is [cci]the number of nodes × the number of cores per node[/cci]. Further information is given in this page (in Japanese).
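For example, to run the same flat-MPI job on two nodes (72 processes), the relevant lines would change roughly as follows; this is a sketch, and the limits of the chosen resource group on ITO-A should be checked before scaling up:
[cc]
#PJM -L "vnode=2"
#PJM -L "vnode-core=36"

NUM_CORES=36
NUM_PROCS=72    # 2 nodes x 36 cores/node
[/cc]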
To submit the batch job, use the [cci]pjsub[/cci] command as follows:
[cc]
$ pjsub mpi.sh
[/cc]
To check the status of the job, invoke the [cci]pjstat[/cci] command in the terminal. Further information about batch jobs is given in this page and in this page (in Japanese).
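A typical workflow after submission therefore looks like the following, where the job ID is the one reported by [cci]pjsub[/cci] and [cci]pjdel[/cci] cancels a job if necessary:
[cc]
$ pjsub mpi.sh     # submit; note the reported job ID
$ pjstat           # check queued/running jobs
$ pjdel JOBID      # cancel the job if necessary
[/cc]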
