MVAPICH2 - supported network types - mpi

Can MVAPICH2 be installed on a normal Ethernet network rather than InfiniBand or another HPC networking technology?

The very first line on the MVAPICH2 web site reads:
MVAPICH2 (MPI-3 over OpenFabrics-IB, OpenFabrics-iWARP, PSM, uDAPL and TCP/IP)
Then from the list of supported interfaces:
TCP/IP-CH3: The standard TCP/IP interface (provided by MPICH2) to work with a range of network adapters supporting TCP/IP interface. This interface can be used with IPoIB (TCP/IP over InfiniBand network) support of InfiniBand also. However, it will not deliver good performance/scalability as compared to the other interfaces.
Therefore - yes.
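For concreteness, here's a minimal MPI program you could use to sanity-check a plain-Ethernet install. It assumes MVAPICH2 was built with its TCP/IP channel (e.g. configured with --with-device=ch3:sock), and node1/node2 are placeholder hostnames:

    /*
     * hello_tcp.c - sanity check for an MVAPICH2 build using the TCP/IP
     * channel. Build and run (hostnames are placeholders):
     *   mpicc hello_tcp.c -o hello_tcp
     *   mpirun -np 2 -hosts node1,node2 ./hello_tcp
     */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, len;
        char name[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        MPI_Get_processor_name(name, &len);
        printf("rank %d of %d on %s\n", rank, size, name);
        MPI_Finalize();
        return 0;
    }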

Related

How to know the network device type from lldp captured packet?

I have a captured LLDP packet.
LLDP has a list of enabled capabilities (Router, Bridge, etc.), but none of the capabilities in the list is Switch. The question is: how can I tell whether the device the packet came from is a switch?
If there is no concrete answer, an approximate assumption will do.
Disclaimer: I cannot actively address the switch, I'm sniffing packets...
LLDP has a list of enabled capabilities (Router, Bridge, etc.), but none of the capabilities in the list is Switch.
A switch is a bridge. The original bridges only had a few interfaces (usually two), and bridging was done with software. When technology advanced to bridging in hardware, and the electronics became cheap enough to increase the interface density on a bridge, a vendor coined the marketing term, "switch."
Modern switches are transparent bridges (all interfaces use the same protocol). There are also translating bridges, e.g. a WAP (Wireless Access Point), which translates between Ethernet and Wi-Fi.
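So if you're parsing the frame yourself, the System Capabilities TLV (type 7) carries two big-endian 16-bit bitmaps, and the "Bridge" bit is what marks a switch. A minimal sketch, assuming tlv already points at that TLV's 4-byte payload:

    /*
     * Sketch: test the "Bridge" bit in an LLDP System Capabilities TLV
     * (type 7). Assumes tlv points at the TLV's 4-byte payload: 2 bytes
     * of system capabilities followed by 2 bytes of enabled capabilities,
     * both big-endian (IEEE 802.1AB).
     */
    #include <stdint.h>

    #define LLDP_CAP_BRIDGE 0x0004  /* bit 2: MAC bridge, i.e. a switch */
    #define LLDP_CAP_ROUTER 0x0010  /* bit 4: router */

    static int is_probably_switch(const uint8_t *tlv)
    {
        uint16_t enabled = (uint16_t)((tlv[2] << 8) | tlv[3]);
        return (enabled & LLDP_CAP_BRIDGE) != 0;
    }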

Will Netmap bridging break ipfw rule on FreeBSD

I am working on setting up a netmap-enabled, high-performance bridging firewall.
The question: if I use netmap's bridging tool to bridge em0 and em1, and set up ipfw rules to block some kinds of traffic on em0, will it work?
Kernel bridging works fine with ipfw, but it is slow (not netmap-enabled). My worry is that netmap bridging short-circuits the firewall rules: looking at the implementation, it doesn't do anything about packet filtering; once em0 receives packets, it forwards them to em1 immediately.
The netmap bridging tool is bridge.c.
https://www.freebsd.org/cgi/man.cgi?query=netmap&sektion=4
While a NIC is in netmap mode, the OS will still believe the interface is up and running. OS-generated packets for that NIC end up into a netmap ring, and another ring is used to send packets into the OS network stack. A close(2) on the file descriptor removes the binding, and returns the NIC to normal mode (reconnecting the data path to the host stack), or destroys the virtual port.
NICs without native support can still be used in netmap mode through emulation. Performance is inferior to native netmap mode but still significantly higher than sockets, and approaching that of in-kernel solutions such as Linux's pktgen.
PS:
You can do bridging and filtering with ng_ipfw + ng_bridge - it's a fast, kernel-based solution.

TCP - LRO/TSO techniques

Why must all interfaces (routers and bridges) involved support the LRO/TSO technique?
Routers don't. Bridges do.
External routers, hubs, switches or anything else that is externally connected to the network will not see the effects of TSO; only interfaces inside the device with TSO will experience any effects - it's a software thing.
A router is an external device which is connected to the network by Ethernet cables, fibre-optic cables, wireless links, etc. These communication media adhere to international standards such as IEEE 802.3 for Ethernet or 802.11 for wireless - they're hardware devices, and hardware devices have very strict rules on how they communicate.
A bridge is an internal software construct and is specific to your OS.
Let's use 802.3 (Ethernet) and a Linux host for an example.
An application calls for a socket to be created and then pushes a large data chunk into the socket. The Linux kernel determines which interface this data should be transmitted on. The kernel will next interrogate the driver for this interface to determine its capabilities; if the interface is TSO-capable, the kernel will pass an sk_buff with a single "template" header and a huge chunk of data (more than one packet's worth) to the interface driver.
Let's consider a standard interface straight to a hardware NIC first:
Some interfaces have fake TSO (they segment the packet in the driver) and some have true TSO (the template header and data are passed to the hardware with minimal alterations). At this point either the driver or the NIC hardware will convert this large segment of data into multiple standards-compliant 802.3 Ethernet frames, and it is these compliant frames that an external device, such as a router, hub, switch, modem or other host, will see on the wire.
Now let's consider several NICs behind a software bridge:
Although the kernel is aware of each NIC at a low level, the network stack is only aware of the bridge, thus only capabilities that ALL of the underlying NICs have should be passed up to the bridge. If an sk_buff is passed to a bridge, then ALL the interfaces in the bridge will receive the same sk_buff. We'll assume that the kernel has once again passed our large TSO sk_buff to a bridge; if any of the underlying interfaces does not support TSO, then the packet will most likely be dropped by the hardware NIC in question.
In summary:
The worst-case scenario is that the bridge will repeatedly retry sending the same data chunk on the broken interface and the whole bridge will lock up until the application decides to give up. In the best-case scenario, the non-TSO NIC will simply appear to be dead.
That said, if the NIC has unsafe code in its driver then this could cause a segmentation fault that could bring the whole system down.
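On Linux you can do the same capability query yourself. A minimal sketch using the legacy (but still supported) ETHTOOL_GTSO ioctl, where "eth0" is a placeholder interface name:

    /*
     * Sketch: ask a Linux driver whether TSO is enabled on an interface,
     * via the legacy ETHTOOL_GTSO ioctl - roughly the capability query
     * described above. "eth0" is a placeholder name.
     */
    #include <linux/ethtool.h>
    #include <linux/sockios.h>
    #include <net/if.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        struct ethtool_value ev = { .cmd = ETHTOOL_GTSO };
        struct ifreq ifr;
        int fd = socket(AF_INET, SOCK_DGRAM, 0);

        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);
        ifr.ifr_data = (char *)&ev;

        if (ioctl(fd, SIOCETHTOOL, &ifr) == 0)
            printf("TSO %s\n", ev.data ? "enabled" : "disabled");
        else
            perror("SIOCETHTOOL");
        close(fd);
        return 0;
    }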

Difference between IPoIB and TCP over Infiniband

Can someone explain the concepts of IPoIB and TCP over InfiniBand? I understand the overall concept and the data rates provided by native InfiniBand, but I don't quite understand how TCP and IPoIB fit in. Why do you need them, and what do they do? What is the difference when someone says their network uses IPoIB or TCP with InfiniBand? Which one is better? I am not from a strong networking background, so it would be nice if you could elaborate.
Thank you for your help.
InfiniBand adapters ("HCAs") provide a couple of advanced features that can be used via the native "verbs" programming interface:
Data transfers can be initiated directly from userspace to the hardware, bypassing the kernel and avoiding the overhead of a system call.
The adapter can handle all of the network protocol of breaking a large message (even many megabytes) into packets, generating/handling ACKs, retransmitting lost packets, etc. without using any CPU on either the sender or receiver.
IPoIB (IP-over-InfiniBand) is a protocol that defines how to send IP packets over IB; for example, Linux has an "ib_ipoib" driver that implements this protocol. This driver creates a network interface for each InfiniBand port on the system, which makes an HCA act like an ordinary NIC.
IPoIB does not make full use of the HCA's capabilities; network traffic goes through the normal IP stack, which means a system call is required for every message, and the host CPU must handle breaking data up into packets, etc. However, it does mean that applications that use normal IP sockets can work on top of the full speed of the IB link (although the CPU will probably not be able to run the IP stack fast enough to saturate a 32 Gb/sec QDR IB link).
Since IPoIB provides a normal IP NIC interface, one can run TCP (or UDP) sockets on top of it. TCP throughput well over 10 Gb/sec is possible using recent systems, but this will burn a fair amount of CPU. To your question, there is not really a difference between IPoIB and TCP with InfiniBand -- they both refer to using the standard IP stack on top of IB hardware.
The real difference is between using IPoIB with a normal sockets application versus using native InfiniBand with an application that has been coded directly to the native IB verbs interface. The native application will almost certainly get much higher throughput and lower latency, while spending less CPU on networking.
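To make the IPoIB side concrete: because ib_ipoib presents an ordinary NIC, completely unmodified socket code runs over it. A minimal sketch, where 10.0.0.1:5000 is a placeholder assumed to be an address assigned to the remote host's ib0 interface:

    /*
     * Sketch: an unmodified TCP client over IPoIB. The only "InfiniBand"
     * part is the assumption that 10.0.0.1 (placeholder) is an address
     * assigned to the remote host's ib0 interface.
     */
    #include <arpa/inet.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        struct sockaddr_in sa;
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        memset(&sa, 0, sizeof(sa));
        sa.sin_family = AF_INET;
        sa.sin_port = htons(5000);            /* placeholder port */
        inet_pton(AF_INET, "10.0.0.1", &sa.sin_addr);

        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) == 0)
            write(fd, "hello over IPoIB\n", 17);
        close(fd);
        return 0;
    }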

Linux TCP stack packet injection

Could I inject packets into the Linux TCP stack without modifying the Ethernet driver? Could I do this using a library or something similar?
Thank you,
If by 'inject packets to Linux TCP stack' you mean send some data that the Linux kernel will treat as a frame coming from an Ethernet interface, then you can use a 'tap' device. If an IP packet (layer 3) is good enough, then use a 'tun' device.
http://en.wikipedia.org/wiki/TUN/TAP
http://www.kernel.org/pub/linux/kernel/people/marcelo/linux-2.4/Documentation/networking/tuntap.txt
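As a minimal sketch of the tap approach (device name and frame contents are placeholders; needs CAP_NET_ADMIN):

    /*
     * Sketch: create a tap device and write one frame into the kernel
     * stack, per the tuntap documentation above. Needs CAP_NET_ADMIN;
     * the frame contents here are a dummy placeholder, and tap0 must be
     * brought up before the kernel will process what we write.
     */
    #include <fcntl.h>
    #include <linux/if_tun.h>
    #include <net/if.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int main(void)
    {
        unsigned char frame[64] = { 0 };   /* stand-in for a real frame */
        struct ifreq ifr;
        int fd = open("/dev/net/tun", O_RDWR);

        memset(&ifr, 0, sizeof(ifr));
        ifr.ifr_flags = IFF_TAP | IFF_NO_PI;  /* layer 2; IFF_TUN for layer 3 */
        strncpy(ifr.ifr_name, "tap0", IFNAMSIZ - 1);

        if (ioctl(fd, TUNSETIFF, &ifr) < 0) {
            perror("TUNSETIFF");
            return 1;
        }
        write(fd, frame, sizeof(frame));   /* kernel sees a received frame */
        close(fd);
        return 0;
    }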
Libnet
Libnet is a generic networking API that provides access to several protocols. It is not designed as an 'all-in-one' solution to networking. Currently, many features that are common in some network protocols are not available with Libnet, such as streaming via TCP/IP. We feel that Libnet should not provide specific features that are possible in other protocols. If we restrict Libnet to the minimum needed to communicate (datagrams/packets), then this allows it to support more interfaces.
Otherwise, if you're just wondering about injecting hand-crafted packets into the network, read the man pages and look for online help with raw sockets. Some good places to start are man 7 raw and man 7 packet, and there are some OK tutorials at security-freak.net, though the code there is not written particularly well for my tastes.
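And a minimal raw-socket sketch of that second approach; the packet buffer and destination address are placeholders that a real program would fill with a valid IP header, checksum and payload:

    /*
     * Sketch: inject a hand-crafted IPv4 packet through a raw socket
     * (see man 7 raw). With IPPROTO_RAW the kernel expects a complete
     * IP header from us; pkt here is a placeholder that a real program
     * would fill in (and checksum). Needs root.
     */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        unsigned char pkt[40] = { 0x45 };  /* version 4, IHL 5; rest dummy */
        struct sockaddr_in dst;
        int fd = socket(AF_INET, SOCK_RAW, IPPROTO_RAW);

        if (fd < 0) { perror("socket"); return 1; }
        memset(&dst, 0, sizeof(dst));
        dst.sin_family = AF_INET;
        inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr);  /* example addr */

        sendto(fd, pkt, sizeof(pkt), 0, (struct sockaddr *)&dst, sizeof(dst));
        close(fd);
        return 0;
    }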
