Intel MPI does not support Mellanox ConnectIB Adapter using DAPL transport for Native Infiniband - IBM System Cluster 1350 (1410)


RETAIN tip: H21417


Intel Message Passing Interface (MPI) finds no entry in '/etc/dat.conf' for a 'mlx5*' device (also referred to as a ConnectIB card).

As a result, when using the Direct Access Programming Library (DAPL) transport, the ConnectIB card cannot be specified as the DAPL device, aside from using the 'ib0' device, which requires Internet Protocol over InfiniBand (IPoIB) to be configured.
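To confirm this on a given node, the DAPL provider list in '/etc/dat.conf' can be inspected directly. This is a minimal sketch assuming a standard OFED DAPL installation; the sample entry shown in the comment is illustrative of the usual dat.conf format, not output from an affected system:

```shell
# List the InfiniBand-related DAPL providers known to the system.
# On affected setups, only mlx4 (ConnectX) and ib0 (IPoIB) entries
# appear; there is no matching mlx5 (ConnectIB) line.
grep -E 'mlx|ib0' /etc/dat.conf

# Typical ConnectX (mlx4) entry for comparison -- an analogous
# mlx5 line is what is missing for ConnectIB:
# ofa-v2-mlx4_0-1 u2.0 nonthreaded default libdaplofa.so.2 dapl.2.0 "mlx4_0 1" ""
```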

Affected configurations

The system is configured with one or more of the following IBM Options:

  • IBM System Cluster 1350, type 1410, any model

This tip is not system specific.

This tip is not software specific.


This behavior will be corrected in a future release of Intel MPI.

The target date for this release is fourth quarter 2014.

The file is or will be available by selecting the appropriate Product Group, type of System, Product name, Product machine type, and Operating system on IBM Support's Fix Central web page, at the following URL:


Currently, there is no direct workaround for this limitation. Until Intel MPI is updated, the only option for using DAPL transport with the ConnectIB adapter is to run it in IPoIB mode over the 'ib0' device.
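A sketch of this workaround, assuming IPoIB is already configured on 'ib0' and that the corresponding '/etc/dat.conf' provider is named 'ofa-v2-ib0' (both assumptions; check the local dat.conf for the exact provider name):

```shell
# Select the DAPL fabric and point Intel MPI at the IPoIB provider.
export I_MPI_FABRICS=shm:dapl
export I_MPI_DAPL_PROVIDER=ofa-v2-ib0   # provider name from /etc/dat.conf (assumed)

# Launch as usual; ./my_app is a placeholder application name.
mpirun -n 4 ./my_app
```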

Additional information

The Intel MPI Library supports multiple network fabric types, including DAPL, which is the preferred, higher-performance fabric for InfiniBand.

As of the current Intel MPI version, the Mellanox ConnectIB adapter is not supported as a DAPL-capable device. However, InfiniBand devices can still be used through the Transmission Control Protocol/Internet Protocol (TCP/IP) fabric by running them in IPoIB mode.
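As an illustration of the TCP/IP fallback over IPoIB, the TCP fabric can be selected and steered onto the IPoIB interface. The interface name 'ib0' and the application name are assumptions; consult the Intel MPI reference for the exact I_MPI_TCP_NETMASK syntax on the installed version:

```shell
# Use the TCP fabric and restrict TCP traffic to the IPoIB interface.
export I_MPI_FABRICS=shm:tcp
export I_MPI_TCP_NETMASK=ib0    # IPoIB interface name (assumed)

mpirun -n 4 ./my_app            # ./my_app is a placeholder application
```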

Applicable countries and regions


Document id:  MIGR-5093201
Last modified:  2014-08-12
Copyright © 2015 IBM Corporation