UDP connections in Julia

using Sockets
sock = UDPSocket()
bind(sock, ip"x.x.x.x", port)
I am new to Sockets.jl, and when I run this code in the REPL, the bind() function returns false. Does anyone know what this means? I'm sure I have the right IP address and port number. There isn't much documentation for the Sockets.jl package.

Checking the Sockets.jl code, bind returns false only if one of these three errors occurs:
UV_EACCES = permission denied
UV_EADDRINUSE = address already in use
UV_EADDRNOTAVAIL = address not available
Do you have a firewall that might be blocking UDP access? If you're sure about the IP address being correct, then UV_EACCES seems like the most likely reason for failure here.
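Since these are OS-level errors surfaced through libuv, they can be reproduced in any language. A minimal sketch in Go (Go rather than Julia, purely for illustration) that provokes the EADDRINUSE case by binding the same UDP port twice:

```go
package main

import (
	"fmt"
	"net"
)

// BindTwice binds a UDP socket on addr, then attempts a second bind on
// the same address. The second attempt should fail with "address already
// in use" (EADDRINUSE), one of the three errors that make Julia's bind
// return false.
func BindTwice(addr string) (error, error) {
	udpAddr, err := net.ResolveUDPAddr("udp", addr)
	if err != nil {
		return err, nil
	}
	conn, err := net.ListenUDP("udp", udpAddr)
	if err != nil {
		return err, nil
	}
	defer conn.Close()
	// Reuse the concrete address (with the kernel-assigned port) so the
	// second bind collides with the first.
	conn2, err2 := net.ListenUDP("udp", conn.LocalAddr().(*net.UDPAddr))
	if err2 == nil {
		conn2.Close() // shouldn't happen without SO_REUSEADDR
	}
	return nil, err2
}

func main() {
	first, second := BindTwice("127.0.0.1:0") // port 0 = any free port
	fmt.Println("first bind error: ", first)
	fmt.Println("second bind error:", second)
}
```

The same double-bind in the Julia REPL would make the second bind(sock, ...) return false.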

Related

How to fake a connection refused in rust?

I want to program something quite similar to a firewall: a firewall that only lets a request through if it's the second try from the same IP.
But in order to do that, I have to inspect the packet header without opening the TCP connection/stream and returning an ACK. The Rust standard library doesn't have any way, as far as I know, to do that.
So how could I refuse a connection depending on the IP in Rust?
Any help would be appreciated.

sendto() return error code ENETDOWN

I have run into a very strange situation.
In my program, the sendto() function returns the error code ENETDOWN (Network is down) even though the network is up and a ping succeeds.
It happens only when the UDP stream connects to another network through several gateways, and it is intermittent rather than constant.
If I run the same code within the same subnetwork, there is no error like ENETDOWN.
So I traced the sendto() call into the kernel.
The neigh_hh_output() function in ip_finish_output2() of ip_output.c calls hh->hh_output(), and it returns the ENETDOWN error code.
Under normal operation, hh->hh_output() is assigned to dev_queue_xmit() of dev.c and the packet is sent to the network.
When the issue happened, it seems to have been assigned to the neigh_blackhole() function in neigh_destroy() of neighbour.c. neigh_blackhole() returns -ENETDOWN.
But I don't know when neigh_destroy() is called, or why.
I've been struggling with this problem for several weeks.
My test machine is set up as described below:
Test machine --- gateway(1.1.1.1) --- firewall(1.1.1.2) --- network ---- Destination.
First, a UDP connection is established between my test machine and the destination; the gateway address of my test machine is 1.1.1.1.
Traffic flows without problems between the test machine and the destination. After some time, or sometimes right away, transmission suddenly fails with a "Network is down" error (error number 100, ENETDOWN).
At this point, if I ping the destination from my test machine, the ping succeeds.
When I capture packets in front of my test machine, an ICMP redirect message arrives from the gateway (1.1.1.1). Its information is "Redirect for Host", and the new gateway address is 1.1.1.2.
When my test machine's OS (Linux 3.0.35) receives the ICMP redirect message, it changes the virtual function pointer hh->hh_output() from dev_queue_xmit() to neigh_blackhole(). Eventually, neigh_blackhole() returns -ENETDOWN.
So I changed the gateway address of my test machine to 1.1.1.2. After that, the "Network is down" error did not happen again.
I think this is strange behavior: according to the man page, sendto() shouldn't return ENETDOWN under these circumstances, but it does.
Anyway, if sendto() returns -ENETDOWN even though the network interface is up, how do I overcome this error? Do I re-create the UDP socket?
I wonder whether this issue is a bug in Linux kernel 3.0.35.
If I learn or find anything more about this issue, I will post an update here.
Please refer to my case if anyone hits a similar issue.
It is claimed that a neighbor will be deleted for a variety of reasons including the host changed its layer 2 address while retaining its layer 3 address or is no longer reachable. See this. It also can be deleted if the gateway for the neighbor sends an ICMP redirect and the processing of redirects is enabled in the kernel.
If the neighbor is in the process of being deleted, then the packet is dispatched to neigh_blackhole which unconditionally returns -ENETDOWN. See the code here.
The man page for sendto() would lead you to believe that you shouldn't get -ENETDOWN under such circumstances, but this appears to be incorrect.
I would try to get a network capture when this occurs and look for ICMP messages indicating your destination is not reachable or for a change in the MAC address for the destination (or possibly a duplicate IP address) via ARP packets or the MAC addresses on the arriving packets from the destination.

proper way to handle missing IPv6 connectivity

I'm currently looking for a way to properly handle missing IPv6 connectivity.
The use case is that I resolve a DNS record which might contain AAAA records and connect to each of the resolved IPs. The system running that code might not have IPv6 connectivity.
So I'm looking for the proper way to handle this and ignore those records, but only if the host can't connect anyway.
My current approach is:
if ip.To4() == nil && err.(*net.OpError).Err.(*os.SyscallError).Err == syscall.EHOSTUNREACH {
    log.Info("ignoring unreachable IPv6 address")
    continue
}
But I'm not sure if there is a better way.
A simple solution would be to use a net.Dialer with DualStack set to true and just Dial() using a name and let the library handle the "happy eyeballs" for you.

is there any way to change a socket's family after bind? (IPv6-related problem)

I'm trying to retrofit an API to be compatible with IPv6. Basically, the API at one stage creates a socket and then calls bind() to open a port for listening. The port is specified by passing a sockaddr returned by getaddrinfo(), specifying the port in the service parameter. Later, the caller has the choice of assigning a multicast group, calling an API function which sets IP_ADD_MEMBERSHIP on the socket.
The problem is that with IPv6 (i.e., family hint for getaddrinfo is AF_UNSPEC instead of AF_INET as it previously was), IP_ADD_MEMBERSHIP fails when the user asks for an IPv4 multicast group. This is because the system apparently defaults to providing an IPv6 address when no hint is provided.
The solution is obviously to know ahead of time whether the user will want to specify an IPv4 or IPv6 multicast group. However, since I'm trying not to change the API itself, this information is simply not known.
Do I have any other options?
I tried closing, recreating, and rebinding the socket before IP_ADD_MEMBERSHIP but my second bind() fails for some reason. (I tried specifying SO_REUSEADDR but this didn't help.)
Is there any way to simply "unbind" a socket and rebind it to a new family? Or just change the family, period?
Not possible. The usual kludgey solution is to keep two sockets, one for AF_INET, one for AF_INET6.
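A sketch of the two-socket kludge, in Go for illustration (the original API is presumably C, where the equivalent is two socket()/bind() pairs, one per family):

```go
package main

import (
	"fmt"
	"net"
)

// listenBoth opens two UDP sockets: one AF_INET ("udp4") and one AF_INET6
// ("udp6"). Each can later join multicast groups of its own family, which
// sidesteps not knowing the group's family at bind time. Go sets
// IPV6_V6ONLY on the "udp6" socket, so with a fixed nonzero port both
// binds can share the same port number on most platforms.
func listenBoth(port int) (*net.UDPConn, *net.UDPConn, error) {
	v4, err := net.ListenUDP("udp4", &net.UDPAddr{IP: net.IPv4zero, Port: port})
	if err != nil {
		return nil, nil, err
	}
	v6, err := net.ListenUDP("udp6", &net.UDPAddr{IP: net.IPv6unspecified, Port: port})
	if err != nil {
		v4.Close()
		return nil, nil, err
	}
	return v4, v6, nil
}

func main() {
	v4, v6, err := listenBoth(0) // port 0: the kernel picks a free port per socket
	if err != nil {
		panic(err)
	}
	defer v4.Close()
	defer v6.Close()
	fmt.Println("bound", v4.LocalAddr(), "and", v6.LocalAddr())
}
```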

Determining when to try an IPv6 connection and when to use IPv4

I'm working on a network client program that connects to public servers, specified by the user. If the user gives me a hostname to connect to that has both IPv4 and IPv6 addresses (commonly, a DNS name with both A and AAAA records), I'm not sure how I should decide which address I should connect to.
The problem is that it's quite common for machines to support both IPv4 and IPv6, but only to have global connectivity over IPv4. The most common case of this is when only IPv6 link-local addresses are configured. At the moment the best alternatives I can come up with are:
Try the IPv6 address(es) first - if the connection fails, try the IPv4 address(es); or
Just let the user specify it as a config setting ("prefer_ipv6" versus "prefer_ipv4").
The problem I can see with option 1 is that the connection might not fail straight away - it might take quite a while to time out.
Please do try IPv6. In the significant majority of installations, trying to create an IPv6 connection will fail right away if it can't succeed for some reason:
if the system doesn't support IPv6 sockets, creating the socket will fail
if the system does support IPv6, and has link-local addresses configured, there won't be any routing table entry for the global IPv6 addresses. Again, the local kernel will report failure without sending any packets.
if the system does have a global IP address, but some link necessary for routing is missing, the source should be getting an ICMPv6 error message, indicating that the destination cannot be reached; likewise if the destination has an IPv6 address, but the service isn't listening on it.
There are of course cases where things can break, e.g. if a global (or tunnel) address is configured, and something falsely filters out ICMPv6 error messages. You shouldn't worry about this case - it may be just as well that IPv4 connectivity is somehow broken.
Of course, it's debatable whether you really need to try the IPv6 addresses first - you might just as well try them second. In general, you should try addresses in the order in which they are returned from getaddrinfo. Today, systems support configuration options that let administrators decide in what order addresses should be returned from getaddrinfo.
Subsequent to the question being asked the IETF has proposed an answer to this question with RFC6555, a.k.a. Happy Eyeballs.
The pertinent point being the client and server may both have IPv4 and IPv6 but a hop in between may not so it is impossible to reliably predict which path will work.
You should let the system-wide configuration decide thanks to getaddrinfo(). Just like Java does. Asking every single application to try to cater for every single possible IPv6 (mis)configuration is really not scalable! In case of a misconfiguration it is much more intuitive to the user if all or none applications break.
On the other hand, you want to log annoying delays and time-outs profusely, so users can quickly identify what to blame. Ideally, do this for every other kind of delay too, including (very common) DNS time-outs.
This talk has the solution. To summarize;
Sometimes there are problems with either DNS lookups or the subsequent connection to the resolved address
You don't want to wait for connecting to an IPv6 address to timeout before connecting to the IPv4 address, or vice versa
You don't want to wait for a lookup for an AAAA record to timeout before looking for an A record or vice versa
You don't want to stall while waiting for both AAAA and A records before attempting to connect with whichever record you get back first.
The solution is to lookup AAAA and A records simultaneously and independently, and to connect independently to the resolved addresses. Use whatever connection succeeds first.
The easiest way to do this is to allow the networking API do it for you using connect-by-name networking APIs. For example, in Java:
InetSocketAddress socketAddress = new InetSocketAddress("www.example.com", 80);
SocketChannel channel = SocketChannel.open(socketAddress);
channel.write(buffer);
The slide notes say at this point:
Here we make an opaque object called an InetSocketAddress from a host
and port, and then when we open that SocketChannel, that can complete
under the covers, doing whatever is necessary, without the
application ever seeing an IP address.
Windows also has connect-by-name APIs. I don’t have code fragments for
those here.
Now, I’m not saying that all implementations of these APIs necessarily
do the right thing today, but if applications are using these APIs,
then the implementations can be improved over time.
The difference with getaddrinfo() and similar APIs is that they
fundamentally can’t be improved over time. The API definition is that
they return you a full list of addresses, so they have to wait until
they have that full list to give you. There’s no way getaddrinfo can
return you a partial list and then later give you some more.
Some ideas:
Allow the user to specify the preference on a per-site basis.
Try IPv4 first.
Attempt IPv6 in parallel upon the first connection.
On subsequent connections, use IPv6 if the connection was successful previously.
I say to try IPv4 first because that is the protocol which is better established and tested.
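The "remember what worked last time" idea from the list above can be sketched as a small per-host cache; all names here are hypothetical:

```go
package main

import (
	"fmt"
	"sync"
)

// famCache remembers, per hostname, which address family last produced a
// successful connection: the "use IPv6 on subsequent connections if it
// worked before" idea.
type famCache struct {
	mu sync.Mutex
	ok map[string]string // host -> "ipv6" or "ipv4"
}

func newFamCache() *famCache {
	return &famCache{ok: make(map[string]string)}
}

// Prefer returns the family to try first for host, defaulting to IPv4
// until IPv6 has succeeded at least once.
func (c *famCache) Prefer(host string) string {
	c.mu.Lock()
	defer c.mu.Unlock()
	if fam, hit := c.ok[host]; hit {
		return fam
	}
	return "ipv4"
}

// Record stores the family that just succeeded for host.
func (c *famCache) Record(host, fam string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.ok[host] = fam
}

func main() {
	c := newFamCache()
	fmt.Println(c.Prefer("example.com")) // ipv4 (no history yet)
	c.Record("example.com", "ipv6")
	fmt.Println(c.Prefer("example.com")) // ipv6 (remembered success)
}
```

A real client would also want an expiry policy, since a host's connectivity can change over time.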
