Dalhousie Geodynamics Group


Department of Oceanography
Dalhousie University
1355 Oxford Street
PO Box 15000
Halifax, NS, B3H 4R2
Canada





Branches

Each release of Sopale Nested is identified by its branch name in the Subversion repository. Branch names often include the pattern yyyy-mm-dd, which is the date of the release. On the scrXm family and the stable, sopale_nested is installed in

~dguptill/software/sopale_nested_branchname

The directory structure in each branch now includes

  • bin
  • blkfct
  • config
  • mpi
  • mpimod
  • src
  • test

The binaries are in bin.

What Is A Branch?

Good question. Every so often, depending on new user requirements, the phase of the moon, and other imponderables, the source code in the trunk of the Subversion repository is copied and given a name. Bingo! A new branch.

How can I find out what branches are installed?

On scrXm and the stable, type this:

  ls -ld ~dguptill/software/sopale_nested*

Sopale Nested directories without a branch name, and those with the branch name "trunk", are development and/or testing releases. They should not be used for production runs.

How do I know what is new in a branch?

See the wiki.

What is the most recent Branch/Release?

See the wiki.

MPI on Linux

The MPI implementation used on Linux is OpenMPI. The OpenMPI documentation suggests that the MPI application (in this case Sopale Nested) should be built with the same compiler suite, and with the same options if they change data type sizes, as was used to compile OpenMPI. Consequently, when running Sopale Nested, use an MPI build that matches the Sopale Nested binary. For versions prior to 2009-04-29, the script run_sopale_nested will guide the user through the available builds.

OpenMPI user-written routines

The LS and SS copies of Sopale Nested are closely coupled: most of the time, one of them is waiting for results from the other. The wait happens in the routines MPI_Recv and MPI_Send. The standard implementations of these routines do a busy wait, which creates a 100% CPU load. Sopale Nested built this way imposes a load of 2.0 (1.0 for the LS and 1.0 for the SS) on the computer.

We have written special implementations of MPI_Recv and MPI_Send. These implementations loop on a wait-and-test, where the wait time is increased (up to a maximum) in each loop iteration. Sopale Nested built with them imposes a load of 1.0 on the computer. The result is that a given computer will handle twice as many runs of Sopale Nested when it is built this way.

sopalepc has 4 CPUs, so it should be able to handle four runs of ompimod builds of Sopale Nested, each with an SS, without a significant decrease in the speed of any one run.
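
The following is a minimal sketch, in C, of the wait-and-test idea described above: an immediate receive is posted and then polled, with a sleep between polls that grows (up to a maximum) on each iteration, so the waiting process stays idle instead of spinning. The function name and backoff constants are illustrative and are not taken from the Sopale Nested or modified OpenMPI source.

  /* Sketch only: a polling receive with a growing sleep between polls. */
  #include <mpi.h>
  #include <unistd.h>                        /* usleep */

  int backoff_recv(void *buf, int count, MPI_Datatype type, int source,
                   int tag, MPI_Comm comm, MPI_Status *status)
  {
      MPI_Request request;
      int done = 0;
      useconds_t wait_us = 100;              /* first sleep: 0.1 ms */
      const useconds_t max_wait_us = 100000; /* cap the sleep at 0.1 s */

      int rc = MPI_Irecv(buf, count, type, source, tag, comm, &request);
      if (rc != MPI_SUCCESS)
          return rc;

      /* Poll for completion; sleep between polls so the CPU stays idle
         instead of busy-waiting, doubling the sleep up to the cap. */
      while (!done) {
          MPI_Test(&request, &done, status);
          if (!done) {
              usleep(wait_us);
              wait_us = (wait_us * 2 < max_wait_us) ? wait_us * 2 : max_wait_us;
          }
      }
      return MPI_SUCCESS;
  }

A send-side wrapper can be built the same way with MPI_Isend and MPI_Test.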

Compilers

Sopale Nested has been ported from the p690 to sopalepc, a Linux computer. sopalepc has two compiler suites: gnu and intel. The performance of Sopale Nested (both speed and numerical results) varies depending on which compiler and which compiler options are used.

At present the gnu builds are slower than the intel builds, and intel-noopt is about twice the speed of intel-std. The ompimod builds are preferred, since they load the computer much less. For the ultimate in speed, use ompimod-intel-noopt.

Build Identifiers

The binaries of Sopale Nested are located in the bin sub-directory. They have names which indicate aspects of how they were created. Most names have the following format:

sopale_nested-<branchname>-<which MPI>-<which compiler>-<compiler options>-<grid sizes>

branchname
  • 2009-02-10 - in the example below.
which MPI
  • nopoe - LS region only; built without MPI on the p690.
  • nompi - LS region only; built without MPI on GNU/Linux.
  • poe - both LS and SS regions if requested; built with poe on the p690.
  • ompi - both LS and SS regions if requested; built with standard OpenMPI.
  • ompimod - both LS and SS regions if requested; built with modified OpenMPI.
which compiler
  • xlf - the IBM compilers on the p690 (xlf, cc)
  • gnu - gnu compilers (gcc, gfortran)
  • intel - intel compiler suite (ifort, icc)
compiler options

indicates which compiler options were used

  • noopt
    • for gnu: -Wall -ffixed-line-length-132 -fconvert=big-endian
    • for intel: -warn all -assume byterecl -extend-source 132 -convert big_endian
  • std
    • for xlf - compiler options as described here
    • for gnu - noopt + -fdefault-double-8 -fdefault-real-8
    • for intel - noopt + -O2 -fp-model strict
grid sizes

indicates the maximum Eulerian grid size, the maximum Lagrangian grid size, and the space allowed, per cell, for injected Lagrangian particles. For example:

  • -501x301-1201x301-40

For example, one of the binaries on sopalepc is called:
  sopale_nested_2009-02-10-ompimod-intel-noopt-501x301-1201x301-40

But... How Do I Figure Out What Build Size I Need?

This is easy when running a model with no nest: use any build with grid sizes larger than, or equal to, your model grid sizes.

When running a model with a nest, a calculation is required. On scrXm and the stable, there is help:


dguptill@scram:sopale_nested$ ~dguptill/software/bin/calc-build-size sopale_nested_i
LS is 201x131-7501x391
nest parameters are 96 186 1 80
     and            5 1
nest is 451x80
build size is 451x131-7501x391
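
The C sketch below reproduces arithmetic consistent with the example output above. It assumes the first line of nest parameters gives the LS index range covered by the nest (i1 i2 j1 j2), the second line gives the refinement factors (ri rj), and the required Eulerian build size is the larger of the LS grid and the nest grid in each direction, with the Lagrangian size unchanged. These assumptions are inferred from this single example; calc-build-size remains the authoritative tool.

  /* Sketch of the build-size arithmetic implied by the example above.
     The meaning of the nest parameters is an assumption; use the
     calc-build-size script for real runs. */
  #include <stdio.h>

  static int max_int(int a, int b) { return a > b ? a : b; }

  int main(void)
  {
      /* Values copied from the example output. */
      int ls_ni = 201, ls_nj = 131;            /* LS Eulerian grid: 201x131 */
      int i1 = 96, i2 = 186, j1 = 1, j2 = 80;  /* nest parameters, line one */
      int ri = 5, rj = 1;                      /* nest parameters, line two */

      /* Refined node counts over the index range covered by the nest. */
      int nest_ni = (i2 - i1) * ri + 1;        /* (186 - 96) * 5 + 1 = 451 */
      int nest_nj = (j2 - j1) * rj + 1;        /* (80 - 1) * 1 + 1   = 80  */

      /* The build must hold whichever grid is larger in each direction;
         the Lagrangian size (7501x391 here) is not affected by the nest. */
      printf("nest is %dx%d\n", nest_ni, nest_nj);
      printf("build size is %dx%d-7501x391\n",
             max_int(ls_ni, nest_ni), max_int(ls_nj, nest_nj));
      return 0;
  }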

But... The Build I Want Is Not There!?

Ask Douglas

The compiler, the compiler options, the maximum Eulerian grid size, the maximum Lagrangian grid size, and the space allowed for injected Lagrangian particles are now all variable at build time. If you do not see the combination you want, please ask. In most cases, a new build can be prepared in less than half a day.



This page was last modified on Wednesday, 07-Dec-2011 08:23:39 AST
Comments to geodynam at dal dot ca