FAQ:
OS X

This FAQ is for Open MPI v4.x and earlier.
If you are looking for documentation for Open MPI v5.x and later, please visit docs.open-mpi.org.

Table of contents:

  1. How does Open MPI handle HFS+ / UFS filesystems?
  2. How do I use the Open MPI wrapper compilers in XCode?
  3. What versions of Open MPI support XGrid?
  4. How do I run jobs under XGrid?
  5. Where do I get more information about running under XGrid?
  6. Is Open MPI included in OS X?
  7. How do I not use the OS X-bundled Open MPI?
  8. I am using Open MPI v2.0.x / v2.1.x and getting an error at application startup. How do I work around this?


1. How does Open MPI handle HFS+ / UFS filesystems?

Generally, Open MPI does not care whether it is running from an HFS+ or UFS filesystem. However, the C++ wrapper compiler has historically been called mpiCC, which is, of course, the same file as mpicc on case-insensitive HFS+. During the configure process, Open MPI attempts to determine whether the build filesystem is case sensitive and assumes that the install filesystem behaves the same way. Generally, this is all that is needed to deal with HFS+.

However, if you are building on UFS and installing to HFS+, you should specify --without-cs-fs to configure to make sure Open MPI does not build the mpiCC wrapper. Likewise, if you build on HFS+ and install to UFS, you may want to specify --with-cs-fs to ensure that mpiCC is installed.
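For example, a configure invocation for the first case might look like the following (a sketch; the install prefix is illustrative):

```shell
# Building on case-sensitive UFS but installing to case-insensitive HFS+:
# tell configure not to build the mpiCC wrapper (it would collide with mpicc).
./configure --without-cs-fs --prefix=/opt/openmpi
```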


2. How do I use the Open MPI wrapper compilers in XCode?

XCode has a non-public interface for adding compilers to XCode. A friendly Open MPI user sent in a configuration file for XCode 2.3 (MPICC.pbcompspec), which will add support for the Open MPI wrapper compilers. The file should be placed in /Library/Application Support/Apple/Developer Tools/Specifications/. Upon starting XCode, this file is loaded and added to the list of known compilers.

To use the mpicc compiler: open the project, get info on the target, click the rules tab, and add a new entry. Change the process rule for "C source files" and select "using MPICC".

Before moving the file, the ExecPath parameter should be set to the location of the Open MPI install. The BasedOn parameter should be updated to refer to the compiler version that mpicc will invoke — generally gcc-4.0 on OS X 10.4 machines.

Thanks to Karl Dockendorf for this information.


3. What versions of Open MPI support XGrid?

XGrid is a batch-scheduling technology that was included in some older versions of OS X. Support for XGrid appeared in the following versions of Open MPI:

  Open MPI series     XGrid supported
  v1.0 series         Yes
  v1.1 series         Yes
  v1.2 series         Yes
  v1.3 series         Yes
  v1.4 and beyond     No


4. How do I run jobs under XGrid?

XGrid support will be built if the XGrid tools are installed.

We unfortunately have little documentation on how to run with XGrid at this point, other than a fairly lengthy e-mail that Brian Barrett wrote on the Open MPI user's mailing list.

Since Open MPI 1.1.2, we also support authentication using Kerberos. The process is essentially the same, but there is no need to specify the XGRID_PASSWORD field. Open MPI applications will then run as the authenticated user, rather than nobody.


5. Where do I get more information about running under XGrid?

Please write to us on the user's mailing list. Hopefully any replies that we send will contain enough information to create proper FAQs about how to use Open MPI with XGrid.


6. Is Open MPI included in OS X?

Open MPI v1.2.3 was included in some older versions of OS X, starting with version 10.5 (Leopard). It was removed in more recent versions of OS X (we're not sure in which version it disappeared — *but your best bet is to simply download a modern version of Open MPI for your modern version of OS X*).

Note, however, that OS X Leopard does not include a Fortran compiler, so the OS X-shipped version of Open MPI does not include Fortran support.

If you need/want Fortran support, you will need to build your own copy of Open MPI (presumably after you have installed a Fortran compiler). The Open MPI team strongly recommends not overwriting the OS X-installed version of Open MPI, but rather installing it somewhere else (e.g., /opt/openmpi).


7. How do I not use the OS X-bundled Open MPI?

There are a few reasons you might not want to use the OS X-bundled Open MPI, such as wanting Fortran support, upgrading to a new version, etc.

If you wish to use a community version of Open MPI, you can download and build it on OS X just like on any other supported platform. We strongly recommend not replacing the OS X-installed Open MPI, but rather installing to an alternate location (such as /opt/openmpi).

Once you successfully install Open MPI, be sure to prefix your PATH with the bindir of Open MPI. This will ensure that you are using your newly-installed Open MPI, not the OS X-installed Open MPI. For example:

shell$ wget https://www.open-mpi.org/.../open-mpi....
shell$ tar xf openmpi-<version>.tar.bz2
shell$ cd openmpi-<version>
shell$ ./configure --prefix=/opt/openmpi 2>&1 | tee config.out
[...lots of output...]
shell$ make -j 4 2>&1 | tee make.out
[...lots of output...]
shell$ sudo make install 2>&1 | tee install.out
[...lots of output...]
shell$ export PATH=/opt/openmpi/bin:$PATH
shell$ ompi_info
[...see output from newly-installed Open MPI...]

Of course, you'll want to make your PATH changes permanent. One way to do this is to edit your shell startup files.
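For example (a sketch assuming bash; zsh users would edit ~/.zshrc instead):

```shell
# Prepend Open MPI's bindir in your shell startup file so that every new
# shell finds the newly-installed wrappers before the OS X-installed ones.
echo 'export PATH=/opt/openmpi/bin:$PATH' >> ~/.bash_profile
```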

Note that there is no need to add Open MPI's libdir to LD_LIBRARY_PATH; Open MPI's shared library build process uses the "rpath" mechanism to automatically find the correct shared libraries (i.e., the ones associated with this build vs., for example, the OS X-shipped Open MPI shared libraries). Also note that we specifically do not recommend adding Open MPI's libdir to DYLD_LIBRARY_PATH.
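To confirm that the rpath mechanism picked up the libraries you expect, you can inspect a binary with macOS's otool (shown here on ompi_info, assuming the /opt/openmpi prefix used above):

```shell
# List the shared libraries the executable will load at run time;
# the libmpi line should point into /opt/openmpi/lib, not /usr/lib.
otool -L /opt/openmpi/bin/ompi_info | grep libmpi
```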

If you build static libraries for Open MPI, there is a library-ordering problem: /usr/lib/libmpi.dylib will be found before $libdir/libmpi.a, so MPI applications linked with mpicc (and friends) will use the "wrong" libmpi. This can be fixed by telling Open MPI's wrapper compilers to force the use of the right libraries, e.g., by passing the following flag when configuring Open MPI:

shell$ ./configure --with-wrapper-ldflags="-Wl,-search_paths_first" ...


8. I am using Open MPI v2.0.x / v2.1.x and getting an error at application startup. How do I work around this?

On some versions of Mac OS X / macOS (e.g., Sierra), the default temporary directory location is sufficiently long that it is easy for an application to create temporary file names that exceed the maximum allowed length. With Open MPI v2.0.x, this can lead to errors like the following at application startup:

shell$ mpirun ... my_mpi_app
[[53415,0],0] ORTE_ERROR_LOG: Bad parameter in file ../../orte/orted/pmix/pmix_server.c at line 264
[[53415,0],0] ORTE_ERROR_LOG: Bad parameter in file ../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line

Or you may see something like this (v2.1.x):

shell$ mpirun ... my_mpi_app
PMIx has detected a temporary directory name that results
in a path that is too long for the Unix domain socket:
 
    Temp dir: /var/folders/mg/q0_5yv791yz65cdnbglcqjvc0000gp/T/openmpi-sessions-502@anlextwls026-173_0/53422
 
Try setting your TMPDIR environmental variable to point to
something shorter in length.

The workaround for the Open MPI v2.0.x and v2.1.x release series is to set the TMPDIR environment variable to /tmp (or another short directory name).
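For example:

```shell
# Point TMPDIR at a short path before launching your MPI application.
export TMPDIR=/tmp
# Then run as usual, e.g.:
# mpirun ... my_mpi_app
```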