Intel MPI does not support Mellanox ConnectIB Adapter using DAPL transport for Native InfiniBand - IBM System Cluster 1350 (1410)



Source

RETAIN tip: H21417

Symptom

There is no entry in '/etc/dat.conf' for a 'mlx5*' device (also referred to as a ConnectIB card) in Intel Message Passing Interface (MPI) 4.1.0.030.

This means that, when using the Direct Access Programming Library (DAPL) transport, the ConnectIB card cannot be specified as the DAPL device; the only alternative is the 'ib0' device, which requires Internet Protocol over InfiniBand (IPoIB) to be configured.
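For reference, DAPL providers are enumerated by name in '/etc/dat.conf', and Intel MPI is pointed at one of them through its DAPL provider setting. The lines below are only an illustrative sketch of that mechanism; the provider names, library paths, and variable values are assumptions that depend on the installed OFED/uDAPL packages, and no comparable working entry exists for an 'mlx5*' (ConnectIB) device at this Intel MPI level.

  # Illustrative /etc/dat.conf entries (names and library paths are assumptions
  # and vary with the installed OFED/uDAPL packages):
  ofa-v2-ib0      u2.0 nonthreaded default libdaplofa.so.2 dapl.2.0 "ib0 0" ""
  ofa-v2-mlx4_0-1 u2.0 nonthreaded default libdaplofa.so.2 dapl.2.0 "mlx4_0 1" ""
  # There is no comparable entry for an mlx5 (ConnectIB) device.

  # Intel MPI selects a provider from this file by name, for example:
  export I_MPI_FABRICS=shm:dapl
  export I_MPI_DAPL_PROVIDER=ofa-v2-ib0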

Affected configurations

The system is configured with one or more of the following IBM Options:

  • IBM System Cluster 1350, type 1410, any model

This tip is not system specific.

This tip is not software specific.

Solution

This behavior will be corrected in a future release of Intel MPI.

This release is targeted for the second quarter of 2014.

The file is or will be available from IBM Support's Fix Central web page by selecting the appropriate Product Group, type of System, Product name, Product machine type, and Operating system.

Workaround

Currently, there is no direct workaround for this limitation. Until Intel MPI is updated, the only way to use DAPL transport with the ConnectIB adapter is to run it in IPoIB mode, as sketched below.
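The following is a minimal sketch of that workaround. It assumes the IPoIB interface is named 'ib0' and that an 'ofa-v2-ib0' provider entry exists in '/etc/dat.conf'; the address, host names, and application name are placeholders to be replaced for the actual cluster.

  # Configure IPoIB on each node (address is a placeholder; repeat per node).
  ip addr add 10.10.0.1/24 dev ib0
  ip link set ib0 up

  # Run Intel MPI over DAPL using the IPoIB-based provider from /etc/dat.conf.
  export I_MPI_FABRICS=shm:dapl
  export I_MPI_DAPL_PROVIDER=ofa-v2-ib0
  mpirun -n 4 -hosts node1,node2 ./mpi_app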

Additional information

The Intel MPI Library supports multiple network fabric types, including DAPL, which is the preferred, higher-performance network fabric for InfiniBand.

As of Intel MPI version 4.1.0.030, the Mellanox ConnectIB adapter is not supported as a DAPL-capable device. However, InfiniBand devices are supported over the Transmission Control Protocol/Internet Protocol (TCP/IP) fabric by running them in IPoIB mode.
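For example, the TCP/IP fabric can be selected and steered onto the IPoIB interface with settings along these lines. This is a sketch only; the interface name supplied to the netmask variable is an assumption about the cluster's IPoIB naming.

  # Use the TCP/IP fabric and bind it to the IPoIB interface.
  export I_MPI_FABRICS=shm:tcp
  export I_MPI_TCP_NETMASK=ib0
  mpirun -n 4 -hosts node1,node2 ./mpi_app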

Applicable countries and regions

 


Document id:  MIGR-5093201
Last modified:  2013-12-10
Copyright © 2014 IBM Corporation