localhost TCP connections randomly disconnecting. Running TCP client/server apps that do simple echoing, the server sees the local client connection breaking, but any client connection over the network stays up. All localhost connections get severed. One can immediately reconnect with no problem. This happens randomly and relatively rarely, but it is still problematic in my environment. Has anyone observed this pattern?
Instead of "localhost", use 127.0.0.1.
"localhost" works fine on Linux; Windows tends to behave better with the explicit 127.0.0.1.
Beats me why; Google for an explanation, or just use the knowledge.
This is not a hacking question.
Imagine I have an application running on a local machine which has a TCP connection to some remote server. There are numerous ways to see the packets; the obvious one is Wireshark. But is there a way to send packets to both the application and the server without getting in the middle of the two? That is, without running a proxy between them, can you programmatically send packets to the application as if they were coming from the server, and to the server as if they were coming from the application?
Requesting the local machine fails right away:
telnet localhost 65535
Trying 127.0.0.1...
telnet: Unable to connect to remote host: Connection refused
while it hangs forever when requesting Google or any other remote machine:
telnet www.google.com 65535
Trying 2a00:1450:4007:812::2004...
I suppose non-standard ports should be closed on web servers. If so, shouldn't telnet end up with a "Connection refused" right away as well?
Actually, this can have multiple causes. A lesser-known but common one:
Some firewalls let you delay responses to requests on specific ports.
Think about this: scanning one IP address for all ports only takes a few seconds.
If you delay the response (in case the port is not open, for instance), it will take a potential attacker much longer to scan all ports.
You could argue that the attacker could count anything taking, let's say, 5 seconds or longer as a timeout, but there are applications, SMTP servers for instance, that often actually respond only after 20 seconds or so for exactly this reason.
Many protocols are common attack targets, and if you configure your mail server to respond only after 20 seconds, that doesn't really matter for mail in most cases: most attack tools will count this as a timeout and won't even notice a mail server is running there, whereas legitimate clients are configured with timeouts of 30 seconds or so and can still connect.
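To illustrate the difference this firewall behavior makes, here is a minimal POSIX C sketch (the target address and the 5-second timeout are placeholder assumptions, not anything standard) that uses a non-blocking connect() to tell an active refusal (RST, like the localhost case above) apart from a silent drop or delay (like the filtered remote port):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/select.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    /* Probe ip:port and report "refused" (an RST came back) vs
     * "filtered/delayed" (no answer within the timeout). */
    int probe(const char *ip, int port, int timeout_sec)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return -1; }

        fcntl(fd, F_SETFL, O_NONBLOCK);          /* non-blocking connect */

        struct sockaddr_in sa;
        memset(&sa, 0, sizeof sa);
        sa.sin_family = AF_INET;
        sa.sin_port   = htons(port);
        inet_pton(AF_INET, ip, &sa.sin_addr);

        if (connect(fd, (struct sockaddr *)&sa, sizeof sa) == 0) {
            printf("%s:%d open (connected immediately)\n", ip, port);
            close(fd);
            return 0;
        }
        if (errno == ECONNREFUSED) {             /* localhost refuses instantly */
            printf("%s:%d closed (refused immediately)\n", ip, port);
            close(fd);
            return 0;
        }
        if (errno != EINPROGRESS) { perror("connect"); close(fd); return -1; }

        fd_set wfds;
        FD_ZERO(&wfds);
        FD_SET(fd, &wfds);
        struct timeval tv = { timeout_sec, 0 };

        if (select(fd + 1, NULL, &wfds, NULL, &tv) == 0) {
            /* Nothing came back: a firewall silently dropped or delayed us. */
            printf("%s:%d filtered or delayed (timeout)\n", ip, port);
        } else {
            int err = 0; socklen_t len = sizeof err;
            getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &len);
            if (err == ECONNREFUSED)
                printf("%s:%d closed (refused, RST received)\n", ip, port);
            else if (err == 0)
                printf("%s:%d open\n", ip, port);
            else
                printf("%s:%d error: %s\n", ip, port, strerror(err));
        }
        close(fd);
        return 0;
    }

    int main(void)
    {
        probe("127.0.0.1", 65535, 5);   /* placeholder target from the question */
        return 0;
    }

Against the local machine this prints "closed (refused immediately)", while a firewalled remote port reports "filtered or delayed", matching the telnet behavior described above.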
Another common thing is that the Windows telnet client won't actually display output until you press a key.
(I have not put code in this question since the actual code probably doesn't matter here. If you say it does, though, I can edit the question to include it.)
I'm new to using winsock2 or any other networking API for that matter. I have a very simple server application and client application in which the server sends a string to the client and then disconnects.
The applications work fine when I use localhost or 127.0.0.1 as the inet_addr() argument, but when I use my "real" IP, the client application just gets WSAECONNREFUSED and the server doesn't see the connection attempt. I made sure that the port was the same for both applications and that the protocol was also the same.
[Edit] I have come back to this issue after abandoning networking for a while. I think this may actually have something to do with the fact that I am using a router, and not something in my code.
WSAECONNREFUSED is an active refusal of the connection by the peer or by an intermediate firewall. If it was the peer that issued it, it means you got the IP address or the port wrong, or else you got it right but the server isn't actually running; either way, nothing is listening at that IP:port. If it was the firewall, adjust it.
Did you use htons() on the port number?
inet_addr() only works with IP address strings, you have to use gethostbyname() or getaddrinfo() to resolve localhost or any other hostname string to an IP address.
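As a rough sketch of that, assuming Winsock 2 and a placeholder port of 5000, resolving "localhost" with getaddrinfo() and then connecting might look like this:

    #include <winsock2.h>
    #include <ws2tcpip.h>
    #include <stdio.h>
    #pragma comment(lib, "ws2_32.lib")

    int main(void)
    {
        WSADATA wsa;
        if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0) return 1;

        /* getaddrinfo() resolves a hostname ("localhost", "example.com", ...)
         * or a numeric string to a sockaddr; the port is passed as a string
         * and comes back already in network byte order. */
        struct addrinfo hints, *res = NULL;
        ZeroMemory(&hints, sizeof hints);
        hints.ai_family   = AF_INET;
        hints.ai_socktype = SOCK_STREAM;
        hints.ai_protocol = IPPROTO_TCP;

        if (getaddrinfo("localhost", "5000", &hints, &res) != 0) {  /* 5000 is a placeholder */
            WSACleanup();
            return 1;
        }

        SOCKET s = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        if (s == INVALID_SOCKET ||
            connect(s, res->ai_addr, (int)res->ai_addrlen) == SOCKET_ERROR) {
            printf("connect failed: %d\n", WSAGetLastError());  /* e.g. WSAECONNREFUSED */
        } else {
            printf("connected\n");
        }

        /* If you fill in a sockaddr_in by hand instead, remember htons():
         *     addr.sin_port = htons(5000);
         * and use inet_addr() only for dotted-quad strings like "127.0.0.1". */
        if (s != INVALID_SOCKET) closesocket(s);
        freeaddrinfo(res);
        WSACleanup();
        return 0;
    }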
WSAECONNREFUSED means the connection was actively refused on the remote end that you are trying to connect to.
If the server machine is refusing, that means either there is no socket listening on the requested IP:Port, or that there is one but its queue of pending client connections is full so it cannot accept a new connection at that moment.
If a router is refusing, that usually means the router is not configured to forward inbound connections for the requested IP:Port to a machine on the router's network. If you have a server running behind a router and are trying to connect to it using the router's public IP address, then the router has to be set up for port forwarding.
If a firewall is refusing, that usually means the requested port is not open.
Either way, there is no way for the client to know in code why the connection was refused. All it can do is wait for a period of time and then try again.
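A minimal Winsock C sketch of that wait-and-retry pattern (the backoff delays are my own placeholder choice, and it assumes WSAStartup() was already called):

    #include <winsock2.h>
    #include <ws2tcpip.h>
    #include <stdio.h>
    #pragma comment(lib, "ws2_32.lib")

    /* Try to connect; on WSAECONNREFUSED, wait and retry with a capped
     * exponential backoff. Returns a connected socket or INVALID_SOCKET.
     * Assumes WSAStartup() has already been called. */
    SOCKET connect_with_retry(const char *host, const char *port, int max_tries)
    {
        DWORD delay_ms = 500;                      /* initial backoff (placeholder) */
        for (int attempt = 1; attempt <= max_tries; attempt++) {
            struct addrinfo hints, *res = NULL;
            ZeroMemory(&hints, sizeof hints);
            hints.ai_family   = AF_INET;
            hints.ai_socktype = SOCK_STREAM;
            if (getaddrinfo(host, port, &hints, &res) != 0) return INVALID_SOCKET;

            SOCKET s = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
            if (s != INVALID_SOCKET &&
                connect(s, res->ai_addr, (int)res->ai_addrlen) == 0) {
                freeaddrinfo(res);
                return s;                          /* connected */
            }
            int err = WSAGetLastError();
            if (s != INVALID_SOCKET) closesocket(s);
            freeaddrinfo(res);

            if (err != WSAECONNREFUSED) break;     /* some other failure: give up */
            printf("attempt %d refused, retrying in %lu ms\n", attempt, delay_ms);
            Sleep(delay_ms);
            if (delay_ms < 8000) delay_ms *= 2;    /* cap the backoff */
        }
        return INVALID_SOCKET;
    }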
I have two applications that talk via TCP, both of which run on Windows XP machines. The client is a third-party application for which I have only the executable, no source. The IP address of the server it connects to is set in a text configuration file. The server is an application I am writing.
All netmasks are 255.255.255.0.
In all cases, the client runs on 192.168.142.202.
I am seeing a case where if I run my server on 192.168.142.207, everything works, but if I move my server over to another machine on the same subnet (192.168.142.105), things do not work. Specifically, the connection does not seem to get properly established. I have looked at what's going on in Wireshark and would like to request assistance interpreting what I see.
On the server side, I see the 3-way handshake: SYN, SYN/ACK, ACK. I get no error codes on the return of accept(), and netstat shows the connection as established.
On the client side, the connection does not seem to be established properly. This causes the client to reconnect periodically, and it will also occasionally close all of the not-correctly-connected sockets that get created as a result. When I look at the client side in Wireshark, I most often see a SYN, SYN, SYN pattern, rather than the expected 3-way handshake. Occasionally, the 3-way handshake does appear, but even then, the client doesn't seem to be happy with the connection because it closes it.
I will note that there are actually two TCP connections between the client and server. The other connection (i.e. not the problematic connection I described above) works just fine. The problematic connection has listening port 5004; the good connection has listening port 1234.
I have placed both .txt and .pcap versions of the client and server Wireshark captures at this link: https://skydrive.live.com/redir.aspx?cid=c5beaf58ac752bb0&resid=C5BEAF58AC752BB0!105&parid=root
As far as the physical network setup goes, there is one switch in between the client and server in the case that works, and there are two switches in between the client and server in the case that doesn't work. All ping tests are successful. There are no wireless connections involved; everything is wired.
All firewalls are off.
Does anybody have any thoughts on either what the problem is or what further data I could gather to solve the problem?
Well, it appears this is not a network or network programming problem at all. I've figured out by trial and error that the third-party software that connects to me wants the machine it runs on to have a smaller IP address than the machine my software runs on. This seems completely arbitrary to me, but empirically, this very strongly appears to be the case. Arghhhh............
Thanks to any and all who may have spent time poring over the Wireshark dumps I provided...
If I understand right, applications sometimes use HTTP to send messages, since using other ports is liable to cause firewall problems. But how does that work without conflicting with other applications such as web browsers? In fact, how do multiple browsers running at once not conflict? Do they all monitor the port and get notified... can you share a port in this way?
I have a feeling this is a dumb question, but not something I ever thought of before, and in other cases I've seen problems when 2 apps are configured to use the same port.
There are 2 ports: a source port (browser) and a destination port (server). The browser asks the OS for an available source port (let's say it receives 33123) then makes a socket connection to the destination port (usually 80/HTTP, 443/HTTPS).
When the web server receives the request, it sends a response that has 80 as the source port and 33123 as the destination port.
So if you have 2 browsers concurrently accessing stackoverflow.com, you'd have something like this:
Firefox (localhost:33123) <-----------> stackoverflow.com (69.59.196.211:80)
Chrome (localhost:33124) <-----------> stackoverflow.com (69.59.196.211:80)
Outgoing HTTP requests don't happen on port 80. When an application requests a socket, the OS usually assigns it a random available port. This is the Source port.
Port 80 is for serving HTTP content (by the server, not the client). This is the Destination port.
Each browser uses a different Source to generate requests. That way, the packets make it back to the correct application.
It is the 5-tuple of (IP protocol, local IP address, local port, remote IP address, remote port) that identifies a connection. Multiple browsers (or in fact a single browser loading multiple pages simultaneously) will each use destination port 80, but the local port (which is allocated by the O/S) is distinct in each case. Therefore there is no conflict.
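You can observe this directly from code. Here is a minimal POSIX C sketch (the destination IP is a placeholder) that opens two connections to the same server port and prints the distinct local ports the OS allocated:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    /* Connect to ip:port and report the local (ephemeral) port the OS chose. */
    static int connect_and_report(const char *ip, int port)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in dst, local;
        socklen_t len = sizeof local;

        memset(&dst, 0, sizeof dst);
        dst.sin_family = AF_INET;
        dst.sin_port   = htons(port);
        inet_pton(AF_INET, ip, &dst.sin_addr);

        if (connect(fd, (struct sockaddr *)&dst, sizeof dst) < 0) {
            perror("connect");
            close(fd);
            return -1;
        }
        getsockname(fd, (struct sockaddr *)&local, &len);
        printf("local port %d <-> %s:%d\n", ntohs(local.sin_port), ip, port);
        return fd;
    }

    int main(void)
    {
        /* Both connections target the same destination port; the 5-tuples
         * still differ because each connection gets its own local port. */
        int a = connect_and_report("93.184.216.34", 80);   /* placeholder IP */
        int b = connect_and_report("93.184.216.34", 80);
        if (a >= 0) close(a);
        if (b >= 0) close(b);
        return 0;
    }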
Clients usually pick a port between 1024 and 65535.
How this is handled depends on the operating system. I think Windows clients increment the value for each new connection, while Unix clients pick a random port number.
Some services rely on a static client port, like NTP (UDP 123).
A browser is a client application that you use in order to see content on a web server which is usually on a different machine.
The web server is the one listening on port 80, not the browser on the client.
You need to be careful in making the distinction between "listening on port 80" and "connecting to port 80".
When you say "applications sometimes use HTTP to send messages, since using other ports is liable to cause firewall problems", you actually mean "applications sometimes send messages to port 80".
The server is listening on port 80, and can accept multiple connections on that port.
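To make "listening" concrete, here is a minimal POSIX C sketch of the server side (port 8080 is a placeholder; a real web server listens on 80): one socket listens, and accept() returns a new socket per client connection:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    int main(void)
    {
        int srv = socket(AF_INET, SOCK_STREAM, 0);
        int on = 1;
        setsockopt(srv, SOL_SOCKET, SO_REUSEADDR, &on, sizeof on);

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family      = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port        = htons(8080);          /* one listening port... */

        bind(srv, (struct sockaddr *)&addr, sizeof addr);
        listen(srv, SOMAXCONN);

        for (;;) {
            struct sockaddr_in peer;
            socklen_t len = sizeof peer;
            /* ...but accept() hands back a NEW socket per client, each one
             * distinguished by the client's IP and port in the 5-tuple. */
            int client = accept(srv, (struct sockaddr *)&peer, &len);
            if (client < 0) break;
            printf("client connected from port %d\n", ntohs(peer.sin_port));
            close(client);                           /* sketch: just close */
        }
        close(srv);
        return 0;
    }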
The port 80 you're talking about here is the remote port on the server; locally, the browser opens a high port for each connection it establishes.
Each connection has port numbers on both ends; one is called the local port, the other the remote port.
The firewall will allow traffic to the browser's high port because it knows the connection was established from your computer.