5 Dynamic Shared Objects and Libraries (DSLs)

5.1 Introduction

Cray supports dynamically linking applications with shared objects and libraries. Dynamic shared objects allow multiple programs to map the same library code into their address spaces rather than having that code copied into each executable at compile and link time. This functionality enables many previously unavailable applications to run on Cray systems, may reduce executable size, and makes better use of system resources. Also, when shared libraries are changed or upgraded, users do not need to recompile dependent applications. The Cray Linux Environment (CLE) uses the Cray Data Virtualization Service (Cray DVS) to project the shared root onto the compute nodes. Thus, each compute node transparently accesses shared libraries kept at a central location through its DVS-projected file system.
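
The reduction in executable size can be seen by building the same source file both ways and comparing the results. The following sketch is illustrative only: the source file name is hypothetical, static linking is assumed to be the default on this system, and the actual sizes depend on the libraries involved.

% cc my_app.c -o my_app_static           # statically linked (default)
% cc -dynamic my_app.c -o my_app_dynamic # dynamically linked
% ls -lh my_app_static my_app_dynamic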

5.2 About the Compute Node Root Run Time Environment

CLE facilitates compute node access to the Cray system shared root by projecting it through Cray DVS. DVS is an I/O forwarding mechanism that provides transparent access to remote file systems while reducing client load, giving users and applications running on compute nodes access to remote POSIX-compliant file systems.

ALPS supports applications that use read-only shared objects. When a user runs an application, ALPS launches it against the compute node root. After installation, the compute node root is enabled by default; however, an administrator can define the default case (DSO support enabled or disabled) per site policy. Users can override the default setup by setting the CRAY_ROOTFS environment variable.
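
As a quick sketch, a user can check which compute node root is the system default and override it for the current login session. The file contents shown are illustrative and depend on the site's roots.conf, described in Configuring DSL below.

% grep DEFAULT /etc/opt/cray/cnrte/roots.conf
DEFAULT=/dsl
% export CRAY_ROOTFS=INITRAMFS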

5.2.1 DSL Support

CLE supports DSLs for the following cases:

  • Linking and loading against programming environments supported by Cray

  • Use of standard Linux services usually found on service nodes

Launching terminal shells and other programming language interpreters by using the compute node root is not currently supported by Cray.

5.3 Configuring DSL

The shared root file /etc/opt/cray/cnrte/roots.conf contains site-specific values for custom root file systems. To specify a different pathname for roots.conf, edit the configuration file /etc/sysconfig/xt.conf and change the value of the variable CRAY_ROOTFS_CONF. In the roots.conf file, the system default compute node root is specified by the symbolic name DEFAULT. If no default value is specified, / is assumed. In the following example segment of roots.conf, the default case uses the root mounted on the compute nodes at /dsl:

DEFAULT=/dsl
INITRAMFS=/
DSL=/dsl
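
If a site keeps the configuration file in a nonstandard location, the corresponding entry in /etc/sysconfig/xt.conf might look like the following sketch; the path shown is hypothetical, and only the CRAY_ROOTFS_CONF variable name comes from the description above.

CRAY_ROOTFS_CONF=/etc/opt/cray/custom/roots.conf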

A user can override the system default compute node root value by setting the environment variable, CRAY_ROOTFS, to a value from the roots.conf file. This setting effectively changes the compute node root used for launching jobs. For example, to override the use of /dsl, a user would enter something similar to the following example at the command line on the login node:

% export CRAY_ROOTFS=INITRAMFS

If the system default is using initramfs, enter something like the following at the command line on the login node to switch to using the compute node root path specified by DSL:

% export CRAY_ROOTFS=DSL

An administrator can modify the contents of this file to restrict user access. For example, if the administrator wants to allow applications to launch only by using the compute node root, the roots.conf file would read as follows:

% cat /etc/opt/cray/cnrte/roots.conf
DEFAULT=/dsl

For more information, see Managing System Software for Cray XE and Cray XK Systems.

5.4 Building, Launching, and Workload Management Using Dynamic Objects

5.4.1 Linker Search Order

Search order is an important detail to consider when compiling and linking executables. The dynamic linker uses the following search order when loading a shared object:

  • Value of LD_LIBRARY_PATH environment variable.

  • Value of the DT_RUNPATH dynamic section entry of the executable, which is set using the ld -rpath option. You can add a directory to the run time library search path this way (see the sketch after this list); however, when a supported Cray programming environment is used, the library search path is set automatically. For more information, see the ld(1) man page.

  • The contents of the cache file /etc/ld.so.cache, which is not human readable. The cache is generated from /etc/ld.so.conf, which contains a list of comma- or colon-separated path names to which custom paths can be appended.

  • The paths /lib and /usr/lib.
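
As referenced in the DT_RUNPATH item above, the following sketch shows one way to embed an additional library directory in an executable and then inspect the result. The source file name and directory path are hypothetical, and whether the entry is recorded as RPATH or RUNPATH depends on the linker defaults in use.

% cc -dynamic my_app.c -o my_app -Wl,-rpath,/lus/nid00008/crayusername/mylibs
% readelf -d my_app | grep -i path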

Loading a programming environment module before compiling sets the LD_LIBRARY_PATH environment variable appropriately. Conversely, unloading modules clears the stored value of LD_LIBRARY_PATH. Other useful environment variables are listed in the ld.so(8) man page. When launching an executable that uses dynamic shared objects, the programming environment module that is loaded should be the same one used to build the executable. For example, if a program was built using the PathScale compiler, the user should load the PrgEnv-pathscale module when setting up the environment to launch the application.
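
For example, a user can confirm that the loaded programming environment has populated the library search path; the output of echo is omitted here because the directories listed depend on the modules loaded.

% module load PrgEnv-pgi
% echo $LD_LIBRARY_PATH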

Example 1. Compiling an application

Compile the following program, reduce_dyn.c, dynamically by including the compiler option -dynamic.

The C version of the program, reduce_dyn.c, looks like:

/* program reduce_dyn.c */
#include <stdio.h>
#include "mpi.h"

int main (int argc, char *argv[])
{
	int i, sum, mype, npes, nres, ret;

	ret = MPI_Init (&argc, &argv);
	ret = MPI_Comm_size (MPI_COMM_WORLD, &npes);
	ret = MPI_Comm_rank (MPI_COMM_WORLD, &mype);
	nres = 0;
	sum = 0;

	/* Each PE sums its share of the integers 0..100. */
	for (i = mype; i <= 100; i += npes)
	{
		sum = sum + i;
	}
	(void) printf ("My PE:%d My part:%d\n", mype, sum);

	/* Combine the partial sums on PE 0; MPI_INT matches the C int buffers. */
	ret = MPI_Reduce (&sum, &nres, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

	if (mype == 0)
	{
		(void) printf ("PE:%d Total is:%d\n", mype, nres);
	}
	ret = MPI_Finalize ();
	return 0;
}

Invoke the C compiler using cc and the dynamic option:

% cc -dynamic reduce_dyn.c -o reduce_dyn

Alternatively, you can use the environment variable, XTPE_LINK_TYPE, without any extra compiler options:

% export XTPE_LINK_TYPE=dynamic
% cc reduce_dyn.c -o reduce_dyn

You can tell if an executable uses a shared library by executing the ldd command:

% ldd reduce_dyn
	libsci.so => /opt/xt-libsci/10.3.7/pgi/lib/libsci.so (0x00002b1135e02000)
	libfftw3.so.3 => /opt/fftw/3.2.1/lib/libfftw3.so.3 (0x00002b1146e92000)
	libfftw3f.so.3 => /opt/fftw/3.2.1/lib/libfftw3f.so.3 (0x00002b114710a000)
	libsma.so => /opt/mpt/3.4.0.1/xt/sma/lib/libsma.so (0x00002b1147377000)
	libmpich.so.1.1 => /opt/mpt/3.4.0.1/xt/mpich2-pgi/lib/libmpich.so.1.1 (0x00002b11474a0000)
	librt.so.1 => /lib64/librt.so.1 (0x00002b114777a000)
	libpmi.so => /opt/mpt/3.4.0.1/xt/pmi/lib/libpmi.so (0x00002b1147883000)
	libalpslli.so.0 => /opt/mpt/3.4.0.1/xt/util/lib/libalpslli.so.0 (0x00002b1147996000)
	libalpsutil.so.0 => /opt/mpt/3.4.0.1/xt/util/lib/libalpsutil.so.0 (0x00002b1147a99000)
	libportals.so.1 => /opt/xt-pe/2.2.32DSL/lib/libportals.so.1 (0x00002b1147b9c000)
	libpthread.so.0 => /lib64/libpthread.so.0 (0x00002b1147ca8000)
	libm.so.6 => /lib64/libm.so.6 (0x00002b1147dc0000)
	libc.so.6 => /lib64/libc.so.6 (0x00002b1147f15000)
	/lib64/ld-linux-x86-64.so.2 (0x00002b1135ce6000)

There are shared object dependencies listed for this executable. For more information, please consult the ldd(1) man page.
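
As an additional quick check, the file(1) utility also reports whether an executable is dynamically linked; the output below is abridged and illustrative.

% file reduce_dyn
reduce_dyn: ELF 64-bit LSB executable, x86-64, ... dynamically linked ...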

Example 2. Running an application in interactive mode

If the system administrator has set up the compute node root run time environment for the default case, then the user executes aprun without any further argument:

% aprun -n 6 ./reduce_dyn 

However, if the administrator sets up the system to use initramfs, then the user must set the environment variable appropriately:

% export CRAY_ROOTFS=DSL
% aprun -n 6 ./reduce_dyn | sort 
Application 1555880 resources: utime 0, stime 8
  My PE:0 My part:816
  My PE:1 My part:833
  My PE:2 My part:850
  My PE:3 My part:867
  My PE:4 My part:884
  My PE:5 My part:800
  PE:0 Total is:5050

Example 3. Running an application using a workload management system

Running a program interactively using a workload management system such as PBS or Moab and TORQUE with the compute node root is essentially the same as running with the default environment. One exception is that if the compute node root is not the default execution option, you must set the environment variable after you have run the batch scheduler command, qsub:

% qsub -I -lmppwidth=4
% export CRAY_ROOTFS=DSL

Alternatively, you can use the -V option to pass environment variables to the PBS or Moab and TORQUE job:

% export CRAY_ROOTFS=DSL
% qsub -V -I -lmppwidth=4

Example 4. Running a program using a batch script

Create the following batch script, reduce_script, to launch the reduce_dyn executable:

#!/bin/bash
#reduce_script
# Define the destination of this job
# as the queue named "workq":
#PBS -q workq
#PBS -l mppwidth=6
# Tell WMS to keep both standard output and
# standard error on the execution host:
#PBS -k eo
cd /lus/nid00008/crayusername
module load PrgEnv-pgi
aprun -n 6 ./reduce_dyn
exit 0

Then launch the script using the qsub command:

% export CRAY_ROOTFS=DSL
% qsub -V reduce_script
1674984.sdb
% cat reduce_script.o1674984
Warning: no access to tty (Bad file descriptor).
Thus no job control in this shell.
My PE:5 My part:800
My PE:4 My part:884
My PE:1 My part:833
My PE:3 My part:867
My PE:2 My part:850
My PE:0 My part:816
PE:0 Total is:5050
Application 1747058 resources: utime ~0s, stime ~0s

5.5 Troubleshooting

5.5.1 Error While Launching with aprun: "error while loading shared libraries"

If you encounter an error such as:

error while loading shared libraries: libsci.so: cannot open shared object file: No such file or directory

your environment may not be configured to launch applications using shared objects. Set the environment variable CRAY_ROOTFS to the appropriate value as prescribed in Example 2.
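
For example, assuming the site's roots.conf defines the symbolic name DSL as shown in Configuring DSL, the following resolves the error for the current session:

% export CRAY_ROOTFS=DSL
% aprun -n 6 ./reduce_dyn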

5.5.2 Running an Application Using a Non-existent Root

If you erroneously set CRAY_ROOTFS to a file system not specified in roots.conf, aprun will exit with the following error:

% export CRAY_ROOTFS=WRONG_FS
% aprun -n 4 -N 1 ./reduce_dyn
aprun: Error from DSL library: Could not find shared root symbol WRONG_FS,
specified by env variable CRAY_ROOTFS, in config file: /etc/opt/cray/cnrte/roots.conf

aprun: Exiting due to errors. Application aborted

5.5.3 Performance Implications of Using Dynamic Shared Objects

Using dynamic libraries may introduce delays in application launch times because of shared object loading and remote page faults. Such delays are an inherent result of linking taking place at execution time and of the relative inefficiency of symbol lookup in DSOs. Likewise, dynamically linked binaries may show a small but measurable performance degradation during execution. If this delay is unacceptable, link the application with static libraries.
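
A minimal sketch of reverting to static linking follows, assuming XTPE_LINK_TYPE is not set so that the compiler wrappers link statically by default; ldd then reports that the result is not a dynamic executable.

% unset XTPE_LINK_TYPE
% cc reduce_dyn.c -o reduce_dyn_static
% ldd reduce_dyn_static
	not a dynamic executable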