Injecting packets into an open TCP connection

This is not a hacking question.
Imagine I have an application running on a local machine that holds a TCP connection to some remote server. There are numerous ways to see the packets; the obvious one is Wireshark. But is there a way to send packets to both the application and the server without getting in the middle of the two? That is, without running a proxy between them, can I programmatically send packets to the application as if they came from the server, and to the server as if they came from the application?
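In principle, yes: if you can learn the connection's 4-tuple and its current sequence/acknowledgement numbers (for example by sniffing, as with Wireshark above), you can forge raw segments that an endpoint will accept. Below is a minimal sketch using Scapy; every address, port, and sequence number is a placeholder, not a real value:

```python
# A rough sketch using Scapy (https://scapy.net). All addresses, ports,
# and sequence/acknowledgement numbers below are placeholders: to inject
# into a real connection you must first learn them, e.g. by sniffing.
from scapy.all import IP, TCP, Raw, send

spoofed = (
    IP(src="203.0.113.5", dst="192.168.1.10")   # forge the server's address
    / TCP(sport=443, dport=51500,               # must match the real 4-tuple
          flags="PA",                           # PSH+ACK, like ordinary data
          seq=1000, ack=2000)                   # must match what the app expects
    / Raw(load=b"injected data")
)
send(spoofed, verbose=False)
```

Note that a one-off injection tends to desynchronize the connection: the receiving endpoint advances its acknowledgement state past the injected bytes, while the real peer knows nothing about them, so the two endpoints' sequence numbers no longer agree.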

Related

At a low level, how does the Websocket protocol detect the status of a connection?

I was discussing Websocket overhead with a fellow engineer and we were both uncertain of how a Websocket actually detects the status of a client connection.
Is there a "status" packet sent to the client/server periodically?
Does it have anything to do with ping or pong in the low level API?
What about frames?
How does the WebSocket detect that it's disconnected on the client? On the server?
I'm surprised I couldn't find this answered on SO, but that may be my mistake. I found this answer addresses scalability, but that's not what I'm asking here. This answer touches on implementation, but not at the depth I'm pursuing here.
A webSocket connection is a TCP connection that uses the webSocket protocol. By default, a server or client only knows the connection has disappeared when the underlying TCP layer realizes the connection has been closed; the webSocket layer listens for a close event on the connection and is informed that way.
The webSocket protocol does not, by itself, require heartbeat packets that regularly test whether the connection is still working. The TCP socket may still appear to be alive, but the connection may not actually work: the other end could have disappeared or the path could have been interrupted, and one or both endpoints might not know that at any given time.
Socket.io, which is built on top of webSocket, uses ping and pong packets to implement a heartbeat that regularly tests the connection; it will, in fact, detect a non-functioning connection at the client, close the socket, and then reconnect automatically.
Is there a "status" packet sent to the client/server periodically?
Not by default for a regular webSocket connection.
Does it have anything to do with ping or pong in the low level API?
It is up to a client or server to send ping or pong packets themselves if they want to implement some sort of connection validation.
What about frames?
webSocket frames are the data format for sending data over a webSocket. They don't have anything to do with this issue.
How does the WebSocket detect that it's disconnected on the client? On the server?
Described above. Unless the client/server implement their own ping/pong system to detect when a connection has gone awry, they just rely on TCP signaling to know when a connection has been closed by the other end. A webSocket connection can be non-functional without the client or server knowing it until they try to send.
When a browser window/tab opens a webSocket connection and the window/tab is then retargeted to a new URL, the browser will close all resources associated with that window/tab, including any webSocket connections. If the link between client and server is functional at that point, the server will be told that the underlying TCP connection (and thus the webSocket) has been closed. If the network link goes down and then the user moves that window/tab to a new URL, the server will not necessarily know that the connection is non-functional without relying on ping/pong-type signaling to regularly test the connection.
If the browser crashes, the OS should close any sockets opened by that process.
If the computer/OS crashes, the sockets will likely not get closed normally (though this may be somewhat OS dependent and may be crash-dependent too).
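To make the ping/pong approach concrete, here is a minimal client-side heartbeat sketch using the third-party Python websockets library; the URI, interval, and timeout are placeholder values, not anything from the question:

```python
import asyncio
import websockets  # third-party: pip install websockets

async def heartbeat(uri, interval=20, timeout=10):
    # Disable the library's built-in keepalive so this loop is the
    # only liveness check in play.
    async with websockets.connect(uri, ping_interval=None) as ws:
        while True:
            try:
                pong_waiter = await ws.ping()        # send a ping frame
                await asyncio.wait_for(pong_waiter, timeout)
            except asyncio.TimeoutError:
                # No pong in time: the TCP socket may still look alive,
                # but the connection is effectively dead.
                await ws.close()
                break
            await asyncio.sleep(interval)

# asyncio.run(heartbeat("ws://example.com/socket"))  # placeholder URI
```

This is exactly the pattern described above: the connection can look fine at the TCP level, so the application layer probes it and treats a missing pong as a dead link.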

TCP Retransmission after a Reset (RST) flag

I have around 20 clients communicating with a central server on the same LAN. The clients can make transactions with the server simultaneously. The server forwards each transaction to an external appliance on the network. Sometimes it works; sometimes my application shows a "time out" message on a client's screen (randomly).
I mirrored all traffic and found TCP retransmissions after TCP Reset packets for the first TCP sequence. I immediately suspected packet loss, but all my cables/NICs are fine, and I do not see duplicate ACKs in the capture.
It seems that RST packets can have different meanings.
What causes those TCP Resets?
Where should I focus my investigation: network or application design?
I would appreciate any help. Thanks in advance.
Judging by the capture, I assume your central server is 137.56.64.31. What's happening is the clients are initiating a connection to the server with a SYN packet and the server responds with a RST. This is typical if the server has no application listening on that particular port e.g. the webserver application isn't running and a client tries to connect to port 80.
The clients are all connecting to different ports on the server, which is unusual for a central server, but not unheard of. The destination ports the clients are connecting to on the server are: 11007, 11012, 11014, 11108, and 11115. Is that normal for the application? If not, the clients should be connecting to whatever port the application server is listening on.
The reason for the retransmits is that instead of giving up on the connection upon receiving a RST from the server, the client tries to initiate the connection again, so Wireshark marks it as a retransmission.
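You can see the same behavior from the client side without a capture. A minimal sketch (the address below is a placeholder; the port is one from the capture): a SYN answered by RST surfaces as an immediate "connection refused", while a filtered or dead host times out instead, and a naive retry loop reproduces the repeated SYNs that Wireshark flags as retransmissions:

```python
import socket
import time

def connect_with_retries(host, port, attempts=3, delay=1.0):
    for n in range(1, attempts + 1):
        try:
            # each attempt sends a fresh SYN
            with socket.create_connection((host, port), timeout=2):
                print("connected")
                return True
        except ConnectionRefusedError:
            # the SYN was answered with RST: nothing is listening on that port
            print(f"attempt {n}: connection refused (RST)")
        except socket.timeout:
            # no answer at all: the SYN was lost or silently filtered
            print(f"attempt {n}: timed out")
        time.sleep(delay)
    return False

# connect_with_retries("192.0.2.10", 11007)  # placeholder address
```

If the refusals are intermittent, that points at the application on the server (a listener that isn't running, or isn't yet listening, on the expected port) rather than at the network.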

UDP Client-Server application and Firewalls/NATs

I have a simple client application that sends a UDP datagram to a server with a known IP address and port and waits for a response. The client can be started on any computer or mobile device, and it can sit behind several routers and firewalls. The server application listens on a certain port for the client's datagram and replies to the client's endpoint with an answer. The server runs on a Windows machine with a properly configured firewall, etc. So, as I understand it, this simple scheme should work regardless of the client's location and firewall settings. But it doesn't work in about 75% of configurations. The server receives the request from the client in 100% of cases, but in 75% of cases the client can't receive the response from the server; it looks like the reply is always blocked by something (the server attempts to send an answer to the client 10 times, without luck, i.e. the client never receives anything). I tried many different client configurations to figure out the cause of these issues, and here is what I've found:
In some cases the plain Windows firewall can block the response packets. (But how is that possible? As I understand it, all response packets should be forwarded back to the client regardless of firewall settings.)
Some hardware firewalls or NATs can also block the response UDP packets. Again, I can't understand why that happens.
The question is: is there any reliable method to deliver the answer to the client? As far as I know, many programs, such as Skype, work fine over UDP despite all these network "obstacles".
Thank you!
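One likely culprit, though this is an assumption since the server code isn't shown: the reply has to leave from the very socket, and therefore the very source port, that received the request. A NAT or stateful firewall creates a mapping when the client's datagram goes out, and only a reply whose source ip:port exactly matches the request's destination flows back through it; a reply sent from a second socket gets a different source port and is silently dropped. A minimal server-side sketch (the port number is arbitrary):

```python
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("0.0.0.0", 5005))          # arbitrary example port

while True:
    data, client_addr = srv.recvfrom(2048)
    # Reply from the SAME socket that received the datagram, so the
    # reply's source port matches the mapping the client's NAT and
    # firewall created for the outgoing request. Replying from a
    # second socket would use a different source port, and stateful
    # middleboxes would drop that reply.
    srv.sendto(b"response", client_addr)
```

The client should likewise keep its own socket open and reuse it while waiting, since the NAT mapping is keyed to that socket's port, and UDP mappings expire quickly. That expiry is also why programs like Skype send periodic keep-alive datagrams and fall back to STUN/TURN-style relays when direct replies still fail.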

What is the cost of establishing a connection using Unix Domain sockets versus TCP sockets?

Oddly, I didn't find this info by googling. What is the cost of establishing a connection using Unix Domain sockets versus TCP sockets?
Right now I have to do connection pooling with TCP sockets because reconnecting is quite expensive. I wonder whether I can simplify my client by switching to Unix Domain sockets and getting rid of the connection pooling.
If you look into the code, you'll see that Unix Domain sockets execute far less code than TCP sockets.
Messages sent through TCP sockets have to go all the way through the networking stack to the loopback interface (which is a virtual network interface device typically called "lo" on Unix-style systems), and then back up to the receiving socket. The networking stack code tacks on TCP and IP headers, makes routing decisions, forwards a packet to itself through "lo", then does more routing and strips the headers back off. Furthermore, because TCP is a networking protocol, the connection establishment part of it has all kinds of added complexity to deal with dropped packets. Most significantly for you, TCP has to send three messages just to establish the connection (SYN, SYN-ACK, and ACK).
Unix Domain sockets simply look at the virtual file system (or the "abstract namespace") to find the destination socket object (in RAM) and queue the message directly. Furthermore, even if you are using the file system to name your destination socket, if that socket has been accessed recently, its file system structures will be cached in RAM, so you won't have to go to disk. Establishing a connection for a Unix Domain socket involves creating a new socket object instance in RAM (i.e., the socket that gets returned by accept(), which is something that has to be done for TCP too) and storing a pointer in each of the two connected socket objects, so each has a pointer to the other socket later when it needs to send. That's pretty much it. No extra packets are needed.
By the way, this paper suggests that Unix Domain sockets are actually faster than even Pipes for data transfers:
http://osnet.cs.binghamton.edu/publications/TR-20070820.pdf
Unfortunately, they didn't do specific measurements of connection establishment costs, but as I have said, I've looked at the Linux source code and it's really quite a lot simpler than the TCP connection establishment code.
Connecting to a server using TCP sockets may involve network traffic, as well as the TCP three-way handshake.
Local sockets (formerly known as Unix domain sockets) are all local, but need to access a physical file on disk.
If you only do local communication then local sockets might be faster as there is less overhead from the protocol. If your application needs to connect remotely then you can't use local sockets.
By the way, if you're only communicating locally, and not over a network, a pair of named pipes (or anonymous pipes if you're forking) might be even better.
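If you'd rather measure than reason from the source, connection-setup cost is easy to benchmark. A minimal sketch (socket path, loopback port, and iteration count are arbitrary choices) that times N connect/accept cycles over loopback TCP and over a Unix domain socket:

```python
import os
import socket
import threading
import time

N = 2000                             # connect/accept cycles to time
UDS_PATH = "/tmp/conn_bench.sock"    # arbitrary path for the Unix socket

def bench(family, bind_addr):
    listener = socket.socket(family, socket.SOCK_STREAM)
    listener.bind(bind_addr)
    listener.listen(128)
    addr = listener.getsockname()    # actual port for TCP, path for UDS

    def serve():
        # accept and immediately close N connections
        for _ in range(N):
            conn, _ = listener.accept()
            conn.close()

    t = threading.Thread(target=serve)
    t.start()
    start = time.perf_counter()
    for _ in range(N):
        s = socket.socket(family, socket.SOCK_STREAM)
        s.connect(addr)
        s.close()
    elapsed = time.perf_counter() - start
    t.join()
    listener.close()
    return elapsed

print("TCP loopback: %.3fs" % bench(socket.AF_INET, ("127.0.0.1", 0)))
if os.path.exists(UDS_PATH):
    os.remove(UDS_PATH)
print("Unix domain:  %.3fs" % bench(socket.AF_UNIX, UDS_PATH))
os.remove(UDS_PATH)
```

This isolates connection establishment (no data is transferred), which is the cost that matters for the pooling question above; relative numbers will of course vary by OS and kernel version.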

TCP Connect Fails

I have two applications that talk via TCP, both of which run on Windows XP machines. The client is a third-party application for which I have only the executable, no source. The IP address of the server it connects to is set in a text configuration file. The server is an application I am writing.
All netmasks are 255.255.255.0.
In all cases, the client runs on 192.168.142.202.
I am seeing a case where if I run my server on 192.168.142.207, everything works, but if I move my server over to another machine on the same subnet (192.168.142.105), it does not work. Specifically, the connection does not seem to get properly established. I have looked at what's going on in Wireshark and would like to request assistance interpreting what I see.
On the server side, I see the 3-way handshake: SYN, SYN/ACK, ACK. I get no error codes on the return of accept(), and netstat shows the connection as established.
On the client side, the connection does not seem to be established properly. This causes the client to reconnect periodically, and it will also occasionally close all of the not-correctly-connected sockets that get created as a result. When I look at the client side in Wireshark, I most often see a SYN, SYN, SYN pattern, rather than the expected 3-way handshake. Occasionally, the 3-way handshake does appear, but even then the client doesn't seem to be happy with the connection, because it closes it.
I will note that there are actually two TCP connections between the client and server. The other connection (i.e. not the problematic connection I described above) works just fine. The problematic connection has listening port 5004; the good connection has listening port 1234.
I have placed both .txt and .pcap versions of the client and server Wireshark captures at this link: https://skydrive.live.com/redir.aspx?cid=c5beaf58ac752bb0&resid=C5BEAF58AC752BB0!105&parid=root
As far as the physical network setup goes, there is one switch in between the client and server in the case that works, and there are two switches in between the client and server in the case that doesn't work. All ping tests are successful. There are no wireless connections involved; everything is wired.
All firewalls are off.
Does anybody have any thoughts on either what the problem is or what further data I could gather to solve the problem?
Well, it appears this is not a network or network programming problem at all. I've figured out by trial and error that the third-party software that connects to me wants the machine it runs on to have a smaller IP address than the machine my software runs on. This seems completely arbitrary to me, but empirically, this very strongly appears to be the case. Arghhhh............
Thanks to any and all who may have spent time poring over the Wireshark dumps I provided...
