I am using Hazelcast 3.3 in my software for caching in clusters. I based my code on the following example:
Stackoverflow tcp hazelcast example (Update 2)
Now I face the following problem:
The host I run my program on has several network cards (and therefore multiple IP addresses). I would like to start several instances of my program on the same machine, each using a different interface (IP address).
The TCP config for this seems to be:
network.getInterfaces().setEnabled(true).addInterface("<MY IP NUMBER>");
No matter what IP I give here, on the OS side Hazelcast always binds to 0.0.0.0 (all IPs).
Is this intended? I would expect Hazelcast to bind only to a specific IP.
Does Hazelcast do the packet filtering on its own and therefore bind to all interfaces at the same time?
That would mean I cannot use the same port number for my various program instances, since binding to 0.0.0.0 will of course fail when the second instance starts (which is exactly what happens).
Studying the Hazelcast documentation (Networking), I found that it explicitly states that Hazelcast binds to ALL network interfaces by default. To change that, there is this system property:
hazelcast.socket.bind.any
The documentation says: set it to false and Hazelcast will only bind to the specified interfaces.
I have not checked it out yet, but it sounds like the solution to my problem.
EDIT: I have now tried it and it works. Hazelcast binds only to the given interface.
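For reference, here is a minimal programmatic sketch of that combination, assuming the Hazelcast 3.x API; the interface address 192.168.1.21, the port 5701 and the class name are just placeholders of mine:

import com.hazelcast.config.Config;
import com.hazelcast.config.NetworkConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class BindToOneInterface {
    public static void main(String[] args) {
        Config config = new Config();

        // Without this property Hazelcast binds the server socket to 0.0.0.0.
        config.setProperty("hazelcast.socket.bind.any", "false");

        NetworkConfig network = config.getNetworkConfig();
        network.setPort(5701); // placeholder port
        // Placeholder address: use the IP of the interface this instance should own.
        network.getInterfaces().setEnabled(true).addInterface("192.168.1.21");

        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
        System.out.println("Started member: " + hz.getCluster().getLocalMember());
    }
}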
run "ipconfig" on windows (or "ifconfig" on Linux, etc) to see all network interfaces. You should see at least 127.0.0.1 and some others. If your machine is multi-homed (has multiple network cards connected to multiple networks), make sure to select the correct one.
Bottom line, put the interface IP and not YOUR IP in:
network.getInterfaces().setEnabled(true).addInterface("<INTERFACE IP>");
For XML configs, it would be like this:
<network>
    ... snip ...
    <join>
        ... snip ...
    </join>
    ... snip ...
    <interfaces enabled="true">
        <interface><INTERFACE IP></interface>
    </interfaces>
    ... snip ...
</network>
ALSO: be careful to put the <interfaces> element directly under the <network> element. An <interface> element can also be placed inside the <tcp-ip> element, but there it means something else entirely: inside <tcp-ip>, <interface> is treated as an alias/synonym for the <member> tag, which is something totally different! So put it under <network>, otherwise it won't work!
I am porting some Windows code to Linux. Some of the Windows object properties are not implemented in .NET Core's Linux implementation. UnicastIPAddressInformation.PrefixOrigin is one of them.
The .NET Core docs define it as:
value that identifies the source of a unicast IP address prefix.
MSDN defines it as:
Specifies how an IP address network prefix was located.
I searched the .NET Core repo browser for the implementation of this property, which is of the following enumeration type:
public enum PrefixOrigin
{
    Other = 0,
    Manual,
    WellKnown,
    Dhcp,
    RouterAdvertisement,
}
I could not find a class in the .NET Core repo browser that implements UnicastIPAddressInformation. In the .NET Framework repo browser, I understand that the struct IpAdapterUnicastAddress is assigned a PrefixOrigin by marshaling OS data into C# classes/types. In any case, I do not know at this point how to determine which enumeration value should be applied to a given IP.
Knowing next to nothing about computer networks, I am researching what an IP prefix is and how to figure it out. The most practical example I could find was this one. As far as I understand, however, it only provides a way to calculate the prefix length. I still need to know how to determine the PrefixOrigin enumeration value for a given IP.
Is it something that can be done by simply taking the prefix length into account? If not, how do I figure out which PrefixOrigin value a given IP should be assigned?
This field's value tells you how a configured (or automatically configured) IP address on the system was determined.
Manual: Somebody keyed it into the adapter configuration GUI in Control Panel, or set it using e.g. netsh or similar.
Well Known: From a well-known source. I'm not really sure whether Windows uses this value. It might be used when a 169.254.x.x address is assigned in the absence of any other configuration and when no DHCP server is present.
DHCP: When a DHCP server automatically assigns an IP address, which is the case in almost all home and office networks (but sometimes not on datacenter networks!), this is how you can tell.
Router Advertisement: IPv6 has an automatic configuration system (SLAAC, driven by router advertisements) which was supposed to replace DHCP. To keep things simple, think of this as being functionally the same as the field's DHCP value.
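To see these values on a concrete machine, here is a minimal C# sketch along these lines; it is just an illustration and assumes a platform where the property is actually implemented (e.g. Windows), since on .NET Core on Linux it may throw, as the question notes:

using System;
using System.Net.NetworkInformation;

class PrefixOriginDump
{
    static void Main()
    {
        foreach (NetworkInterface nic in NetworkInterface.GetAllNetworkInterfaces())
        {
            foreach (UnicastIPAddressInformation addr in nic.GetIPProperties().UnicastAddresses)
            {
                // PrefixOrigin tells you how the address was configured
                // (Manual, Dhcp, RouterAdvertisement, WellKnown, Other).
                Console.WriteLine($"{nic.Name}: {addr.Address} -> {addr.PrefixOrigin}");
            }
        }
    }
}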
I wish to install a VM on my Xen Project machine that will run a Zentyal firewall. My machine has three network cards: one integrated, and two discrete, similar cards (they have the same Realtek chip, but are from different manufacturers). For the firewall to work optimally, what I want to do is assign and dedicate the two discrete NICs to my firewall VM, and use the integrated card for Dom0 and the other VMs. I have been able to do similar things with other virtualisation software in the past, but have not been able to find a way to do it with Xen Project.
This page provides many useful configurations, but I don't think any of them match what I want to do. Is this at all possible, or must I give up hope of virtualising my firewall computer?
I think the best way to solve this would be using PCI passthrough in Xen. What this means is that you can leave one of your NICs attached to dom0 (which can then be bridged to allow the other VMs to connect through the same interface; look at one of the Xen articles on network configuration for examples of how to set this up, as it will be the same as if you only had a single NIC) and give the firewall VM full control over the other two NICs.
The process for this is somewhat involved and can vary by distribution, so I would advise you to check the first article I linked, but I will describe the basic process.
Check the PCI addresses of the two network cards you want to pass through using lspci. The lines of output for your cards will look something like the following (although the details will be different, the structure will be the same):
00:19.0 Ethernet controller: Intel Corporation 82579LM Gigabit Network Connection (rev 04)
00:19.1 Ethernet controller: Intel Corporation 82579LM Gigabit Network Connection (rev 04)
Make a note of the first column (00:19.0 and 00:19.1 in this example). Add this to the config for your firewall VM in the following format:
pci=['00:19.0','00:19.1']
On its own this will cause the VM to fail to boot, as it will be unable to pass through the devices. In order for the devices to be passed through, they will need to be bound to the pciback driver on dom0 with commands like:
xl pci-assignable-add 00:19.0
xl pci-assignable-add 00:19.1
This may not be possible in all situations, but there are other methods if it is not. I strongly advise you to read the article I mentioned before to fully understand the best way to do this in your case.
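As a quick sanity check (assuming the xl toolstack, as in the commands above), something like the following should confirm the NICs have been detached from dom0 and are available for passthrough:

# list devices currently available for passthrough
xl pci-assignable-list

# check which kernel driver each NIC is now bound to (it should be pciback / xen-pciback)
lspci -k -s 00:19.0
lspci -k -s 00:19.1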
I am trying to find the proper way of accomplishing the following.
I would like to provide 2Gbps access for clients accessing a file server guest VM on an ESXi server, which itself accesses the datastore over iSCSI. Therefore the ESXi server needs a 2Gbps connection to the NAS. I would also like to provide 2Gbps directly on the NAS.
It looks like there are three technologies which can help: link aggregation (802.3ad, LAG, trunking), Multipath I/O (MPIO), and iSCSI Multiple Connections per Session (MC/S).
However, each has its own purpose and drawbacks. Aggregation provides 2Gbps in total, but a single connection (I think the hashing is based on source/destination MAC address) can only get 1Gbps, which is useless for something like iSCSI, which is a single stream. MPIO seems a good option for iSCSI as it balances any traffic over two connections, but it seems to require two IPs on the source and two IPs on the destination. I am unsure about MC/S.
Here is what I would like to achieve; however, I am not sure which technology to employ on each pair of 1Gbps NICs.
I also think this design is flawed, because doing link aggregation between the NAS and the switch would prevent me from using MPIO on the ESXi host, as MPIO also requires two IPs on the NAS, and I think link aggregation will give me a single IP.
Maybe using MC/S instead of MPIO would work?
Here is a diagram:
If you want to achieve 2Gbps to a VM in ESXi, it is possible using MPIO and iSCSI, but as you say you will need two adapters on the ESXi host and two on the NAS. The drawback is that your NAS will need to support multiple connections from the same initiator, and not all of them do. The path policy will need to be set to round-robin so you can use active-active connections. In order to get ESXi to use both paths at roughly 50% each, you will need to adjust the round-robin balancing to switch paths every 1 IOPS instead of every 1000. You can do this by SSHing to the host and using esxcli (if you need full instructions on how to do that I can provide them).
After this you should be able to run IOMeter on a VM and see a data rate of over 1Gbps: maybe 150MB/s with a 1500 MTU, and around 200MB/s if you are using jumbo frames.
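For reference, on reasonably recent ESXi builds the esxcli side of this looks roughly like the following from an SSH session; the naa.xxxx device identifier is a placeholder for your iSCSI LUN, and the exact esxcli namespaces vary between ESXi versions:

# find the iSCSI LUN's device identifier and its current path selection policy
esxcli storage nmp device list

# switch the round-robin limit type to IOPS and change paths after every I/O
esxcli storage nmp psp roundrobin deviceconfig set --device naa.xxxxxxxxxxxxxxxx --type iops --iops 1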
On another note (which might prove useful to your setups in the future), it is possible to achieve 2Gbps with two adapters on the source and a bonded adapter on the NAS (so 2 to 1) when using the MPIO iSCSI initiator that comes with Server 2008. This initiator works slightly differently to VMware's and doesn't require your NAS to support many connections from one initiator; from what I can tell it spawns multiple initiators instead of multiple sessions.
I am trying to use Boost for some IPv6 and multicast network communication. I need to construct an IPv6 multicast socket that uses a specific network interface index.
I was able to find the correct multicast option to set the network interface index in boost/asio/ip/detail/socket_option.hpp:
explicit multicast_request(const boost::asio::ip::address_v6& multicast_address, unsigned long network_interface = 0)
The problem is, I don't know how to find the correct value for the "network_interface" parameter. Is there a way to get the network_interface value using a local IPv6 address that I can provide? I looked in the documentation and examples, but couldn't find anything.
-- Dylan
Each platform provides APIs to enumerate the network interfaces, e.g. getifaddrs for many Unixes and GetAdaptersAddresses for Windows. Note that on Windows there is a separate numerical space for IPv4 and IPv6 adapters, which makes the API call if_nametoindex quite confusing.
You may wish to inspect the methods I employed in OpenPGM for portability, considering Windows doesn't really have useful adapter names:
http://code.google.com/p/openpgm/source/browse/trunk/openpgm/pgm/getifaddrs.c
http://code.google.com/p/openpgm/source/browse/trunk/openpgm/pgm/nametoindex.c
http://code.google.com/p/openpgm/source/browse/trunk/openpgm/pgm/indextoaddr.c
http://code.google.com/p/openpgm/source/browse/trunk/openpgm/pgm/indextoname.c
I don't think there's a platform-independent way to figure this out, just as there is no portable solution to enumerating the local addresses.
On Linux, you can find what you want in the second column of /proc/net/if_inet6, which is also available more robustly through the rtnetlink(7) interface.
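To make the index lookup concrete, here is a rough C++ sketch (mine, not from either answer) that uses if_nametoindex() on a POSIX system to turn an interface name into an index and then hands it to Boost.Asio's join_group option; the interface name "eth0", the group address ff02::1234 and port 30001 are placeholders:

#include <iostream>
#include <boost/asio.hpp>
#include <net/if.h>   // if_nametoindex()

int main()
{
    // Placeholder values: pick the interface, group and port for your setup.
    const char* interface_name = "eth0";
    const unsigned long if_index = if_nametoindex(interface_name);
    if (if_index == 0) {
        std::cerr << "unknown interface: " << interface_name << "\n";
        return 1;
    }

    boost::asio::io_service io_service;
    boost::asio::ip::udp::endpoint listen_endpoint(boost::asio::ip::address_v6::any(), 30001);

    boost::asio::ip::udp::socket socket(io_service);
    socket.open(listen_endpoint.protocol());
    socket.set_option(boost::asio::ip::udp::socket::reuse_address(true));
    socket.bind(listen_endpoint);

    // Join the multicast group on that specific interface index.
    boost::asio::ip::address_v6 group = boost::asio::ip::address_v6::from_string("ff02::1234");
    socket.set_option(boost::asio::ip::multicast::join_group(group, if_index));

    std::cout << "joined " << group << " on interface index " << if_index << "\n";
    return 0;
}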
In IPv6 networking, the IPV6_V6ONLY flag is used to ensure that a socket will only use IPv6, and in particular that IPv4-to-IPv6 mapping won't be used for that socket. On many OSes IPV6_V6ONLY is not set by default, but on some OSes (e.g. Windows 7) it is set by default.
My question is: what was the motivation for introducing this flag? Is there something about IPv4-to-IPv6 mapping that was causing problems, and thus people needed a way to disable it? It would seem to me that if someone didn't want to use IPv4-to-IPv6 mapping, they could simply not specify an IPv4-mapped IPv6 address. What am I missing here?
Not all IPv6-capable platforms support dual-stack sockets, so the question becomes: how do applications that need to maximize IPv6 compatibility either know that dual-stack is supported, or bind separately when it is not? The only universal answer is IPV6_V6ONLY.
An application that ignores IPV6_V6ONLY, or that was written before dual-stack-capable IP stacks existed, may find that binding separately to IPv4 fails in a dual-stack environment, because the IPv6 dual-stack socket also binds to IPv4 and prevents the IPv4 socket from binding. The application may also not be expecting IPv4 over IPv6, due to protocol- or application-level addressing concerns or IP access controls.
This or similar situations most likely prompted MS et al. to default to 1, even though RFC 3493 declares 0 to be the default. A default of 1 theoretically maximizes backwards compatibility. Specifically, Windows XP/2003 does not support dual-stack sockets.
There is also no shortage of applications which unfortunately need to pass lower-layer information to operate correctly, so this option can be quite useful for planning an IPv4/IPv6 compatibility strategy that best fits the requirements and existing codebases.
The reason most often mentioned is for the case where the server has some form of ACL (Access Control List). For instance, imagine a server with rules like:
Allow 192.0.2.4
Deny all
It runs on IPv4. Now, someone runs it on a machine with IPv6 and, depending on some parameters, IPv4 requests are accepted on the IPv6 socket, mapped as ::ffff:192.0.2.4, and then no longer matched by the first ACL rule. Suddenly, access would be denied.
Being explicit in your application (using IPV6_V6ONLY) would solve the problem, whatever default the operating system has.
I don't know why it would be the default, but it's the kind of flag that I would always set explicitly, no matter what the default is.
As for why it exists in the first place, I guess it allows you to keep existing IPv4-only servers running and just run new ones on the same port, but only for IPv6 connections. Or maybe the new server can simply proxy clients to the old one, making IPv6 functionality easy and painless to add to old services.
For Linux, when writing a service that listens on both IPv4 and IPv6 sockets on the same service port, e.g. port 2001, you MUST call setsockopt(s, SOL_IPV6, IPV6_V6ONLY, &one, sizeof(one)); on the IPv6 socket. If you do not, the bind() operation for the IPv4 socket fails with "Address already in use".
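A minimal C sketch of that dual-listener pattern (same port 2001 as in the example above; most error handling omitted, and using the portable IPPROTO_IPV6 level, which on Linux is the same value as SOL_IPV6):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int one = 1;

    /* IPv6 listener: restrict it to IPv6 so it does not also claim the IPv4 port. */
    int s6 = socket(AF_INET6, SOCK_STREAM, 0);
    setsockopt(s6, IPPROTO_IPV6, IPV6_V6ONLY, &one, sizeof(one));

    struct sockaddr_in6 a6;
    memset(&a6, 0, sizeof(a6));
    a6.sin6_family = AF_INET6;
    a6.sin6_addr = in6addr_any;
    a6.sin6_port = htons(2001);
    if (bind(s6, (struct sockaddr *)&a6, sizeof(a6)) < 0)
        perror("bind v6");

    /* Separate IPv4 listener on the same port; this bind fails with
       "Address already in use" if IPV6_V6ONLY was not set above. */
    int s4 = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in a4;
    memset(&a4, 0, sizeof(a4));
    a4.sin_family = AF_INET;
    a4.sin_addr.s_addr = htonl(INADDR_ANY);
    a4.sin_port = htons(2001);
    if (bind(s4, (struct sockaddr *)&a4, sizeof(a4)) < 0)
        perror("bind v4");

    listen(s6, 16);
    listen(s4, 16);
    /* ... accept() loops omitted ... */
    close(s6);
    close(s4);
    return 0;
}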
There are plausible ways in which the (poorly named) "IPv4-mapped" addresses can be used to circumvent poorly configured systems or bad stacks, or which, even on a well-configured system, might just require onerous amounts of bug-proofing. A developer might wish to use this flag to make their application more secure by not utilizing this part of the API.
See: http://ipv6samurais.com/ipv6samurais/openbsd-audit/draft-cmetz-v6ops-v4mapped-api-harmful-01.txt
Imagine a protocol that includes a network address in the conversation, e.g. the data channel for FTP. When using IPv6 you are going to send the IPv6 address; if the recipient happens to be at an IPv4-mapped address, it will have no way of connecting to that address.
There's one very common example where the duality of behavior is a problem. The standard getaddrinfo() call with the AI_PASSIVE flag offers the possibility to pass a nodename parameter and returns a list of addresses to listen on. A special value in the form of a NULL string is accepted for nodename and implies listening on the wildcard addresses.
On some systems 0.0.0.0 and :: are returned, in this order. When dual-stack sockets are enabled by default and you don't set IPV6_V6ONLY on the socket, the server binds to 0.0.0.0 and then fails to bind to the dual-stack ::, and therefore (1) only works on IPv4 and (2) reports an error.
I would consider the order wrong, as IPv6 is expected to be preferred. But even when you first attempt the dual-stack :: and then the IPv4-only 0.0.0.0, the server still reports an error for the second call.
I personally consider the whole idea of a dual-stack socket a mistake. In my projects I would rather always explicitly set IPV6_V6ONLY to avoid that. Some people apparently saw it as a good idea, but in that case I would probably explicitly unset IPV6_V6ONLY and translate NULL directly to the :: wildcard, bypassing the getaddrinfo() mechanism.
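To tie this back to the getaddrinfo() discussion, here is a hedged C sketch (my own, not from the answer) of a server that walks the AI_PASSIVE result list and forces IPV6_V6ONLY on the AF_INET6 socket, so that both wildcard binds succeed regardless of the order in which the addresses are returned:

#include <stdio.h>
#include <string.h>
#include <netdb.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    struct addrinfo hints, *res, *ai;
    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_UNSPEC;     /* ask for both IPv4 and IPv6 */
    hints.ai_socktype = SOCK_STREAM;
    hints.ai_flags = AI_PASSIVE;     /* NULL node => wildcard addresses */

    int rc = getaddrinfo(NULL, "2001", &hints, &res);
    if (rc != 0) {
        fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rc));
        return 1;
    }

    for (ai = res; ai != NULL; ai = ai->ai_next) {
        int s = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
        if (s < 0)
            continue;

        if (ai->ai_family == AF_INET6) {
            /* Keep this socket IPv6-only so the separate 0.0.0.0 bind
               cannot clash with it, whichever order getaddrinfo used. */
            int one = 1;
            setsockopt(s, IPPROTO_IPV6, IPV6_V6ONLY, &one, sizeof(one));
        }

        if (bind(s, ai->ai_addr, ai->ai_addrlen) == 0)
            listen(s, 16);
        else
            perror("bind");
        /* sockets are intentionally left open for the accept() loops (omitted) */
    }

    freeaddrinfo(res);
    return 0;
}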