Why is "fork" needed by socat when connecting to a web server? - tcp

I am trying to understand TCP connections between a browser and a web server. I have a web server running on my local machine, and can browse to it just fine, as expected, using localhost:3000 or 127.0.0.1:3000. (I am running "rails s" and WEBrick.)
I wanted to put a software intermediary between the browser and the web server, and so began experimenting with socat. The following works just fine:
socat TCP-LISTEN:8080,fork TCP:localhost:3000
I can browse to localhost:8080 and things work as expected. However, if I omit the ",fork" argument like so,
socat TCP-LISTEN:8080 TCP:localhost:3000
the local Rails site looks quite broken in the browser.
Why is that fork argument necessary? Why wouldn't a browser <--> web server connection work without it?

Without the fork, socat will accept a single TCP connection, forward data bidirectionally between the two endpoints for as long as that connection remains open, then exit. You can see this yourself easily:
Run socat in one terminal window
Telnet to localhost 8080 in another terminal window. It connects to the socat instance.
Telnet to localhost 8080 in a third terminal window. You get a connection refused error, because socat is not listening for new connections anymore: it has moved on to servicing the one it already got.
Type an HTTP request into the second terminal window. You'll get an HTTP response, and then socat will exit as the connection is closed.
The fork option just makes socat fork a new child to handle each newly accepted connection while the parent goes back to waiting for new connections. That matters here because a browser typically opens several TCP connections in parallel to fetch a page and its assets (CSS, JavaScript, images); without fork only the first of those connections gets serviced, which is why the page renders broken rather than just slowly.
socat's use of fork() rather than something more sophisticated like preforking or connection pooling is one reason you wouldn't want to implement high-performance middleware with socat!
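To make the difference concrete, here is a minimal sketch of such a forwarder in Python (not socat's actual implementation, and Unix-only because of os.fork()); flip FORK to False to reproduce the single-connection behaviour described above:

import os
import socket
import threading

BACKEND = ("localhost", 3000)   # the Rails/WEBrick server from the question

def relay(src, dst):
    # Copy bytes from src to dst until src signals end-of-stream.
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)
    try:
        dst.shutdown(socket.SHUT_WR)
    except OSError:
        pass

def serve(client):
    # Relay bidirectionally between the accepted client and the backend.
    backend = socket.create_connection(BACKEND)
    t = threading.Thread(target=relay, args=(backend, client))
    t.start()
    relay(client, backend)
    t.join()
    client.close()
    backend.close()

listener = socket.socket()
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("", 8080))
listener.listen()

FORK = True   # False reproduces "socat TCP-LISTEN:8080 TCP:localhost:3000"

if FORK:
    while True:                      # equivalent of TCP-LISTEN:8080,fork
        client, _ = listener.accept()
        if os.fork() == 0:           # child: relay just this one connection
            listener.close()
            serve(client)
            os._exit(0)
        client.close()               # parent: immediately accept() again
        # (a real version would also reap exited children)
else:
    client, _ = listener.accept()    # accept exactly one connection,
    serve(client)                    # relay it until it closes,
    listener.close()                 # then exit; later connections are refused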

Related

How to simulate tcp connection loss on localhost

I have an ASP.NET Core web API that uses WebSockets. I am trying to find out whether the server correctly handles an internet connection loss on the client side.
However, since both the server and the client run on my single machine, I know that localhost traffic does not go through the network interface, so the handler I want to test is not triggered when I cut off the internet.
How can I have a server running on localhost, and a client also running on localhost but going through the network interface, so that I can cut the connection and see how the server behaves for that client?
I use TCPView to do this sort of testing. You can find the connection in the list, then right-click it and close it.

[asyncssh]: Reverse ssh tunnels with python asyncssh

I am looking to use asyncssh with Python 3.7 (asyncio).
Here is what I want to build:
A remote device would be running a client that does a call-home to a centralized server. I want the server to be able to execute commands on the client using reverse SSH tunnels over the incoming connection. I cannot use forward SSH (regular SSH) because the client could be behind NAT and the server might not know the client's address. I prefer the client doing a call-home and the server then managing the client.
The program for a POC should use Python 3 plus an async implementation of SSH. I see asyncssh as the only viable choice (please suggest an alternative if you know of one):
Client: Connects to the server and accepts reverse SSH tunnels to be opened on the same outbound connection.
Server: Accepts the connection from the client and keeps the session open. The server then opens reverse SSH tunnels to the client. For example, the server program should open 3 reverse SSH tunnels on the incoming connection, each running one command: ['ls', 'sleep 30 && date', 'sleep 5 && cat /proc/cpuinfo'].
The server program should print the received response for each of these commands (one should come back almost immediately, another after 5 seconds, and another after 30).
I looked at the documentation, and I could not see examples of using multiple reverse ssh tunnels.
Does anyone have experience using this? Can you point me to examples?
The developer of asyncssh has provided an example:
As of now, this is in the develop branch. I have tested it and it does the job perfectly!
https://asyncssh.readthedocs.io/en/develop/#reverse-direction-example
[If you are checking this after a while, you might find it in master documentation.]
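For a rough idea of the shape, here is a sketch of the two sides under the reverse-direction model as I understand it from that documentation. Treat the host names, port, key file names, and even the exact option names (connect_reverse, listen_reverse, process_factory, acceptor) as assumptions to verify against the linked example, not a definitive implementation.

# device.py -- the remote device "calls home", then acts as the SSH *server*
# on that outbound TCP connection (host and key values are placeholders).
import asyncio, asyncssh

async def handle_process(process):
    # The central server's exec request arrives here; run it locally and
    # stream the output back over this SSH session.
    proc = await asyncio.create_subprocess_shell(
        process.command,
        stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE)
    stdout, stderr = await proc.communicate()
    process.stdout.write(stdout.decode())
    process.stderr.write(stderr.decode())
    process.exit(proc.returncode or 0)

async def main():
    conn = await asyncssh.connect_reverse(
        'central.example.com', 8022,                # placeholder server address
        server_host_keys=['ssh_host_key'],          # placeholder key file
        authorized_client_keys='authorized_keys',   # placeholder
        process_factory=handle_process)
    await conn.wait_closed()

asyncio.run(main())

# server.py -- the central server accepts the call-home connection, then acts
# as the SSH *client* on it and runs the three commands concurrently, each in
# its own session over that single connection.
import asyncio, asyncssh

COMMANDS = ['ls', 'sleep 5 && cat /proc/cpuinfo', 'sleep 30 && date']

async def run_commands(conn):
    results = await asyncio.gather(*(conn.run(cmd) for cmd in COMMANDS))
    for cmd, result in zip(COMMANDS, results):
        print('---', cmd, '---')
        print(result.stdout)

async def main():
    server = await asyncssh.listen_reverse(
        port=8022,
        known_hosts=None,                # placeholder: no host checking in a POC
        client_keys=['ssh_user_key'],    # placeholder key file
        acceptor=run_commands)
    await server.wait_closed()

asyncio.run(main())

With something along these lines, the 'ls' output should print almost immediately, the cpuinfo output after about 5 seconds, and the date after about 30, all over the one call-home connection.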

How to forward data from desktop application through fiddler or mitmproxy?

I am using a Windows 10 desktop app which I know communicates with its server by sending TCP packets. The payloads are encrypted. There is a chance that, if the app is using TLS, a proxy like mitmproxy or Fiddler will be able to decrypt the data.
The app also gets assigned a different port every time it launches. So far the only promising lead was to use netsh:
netsh interface portproxy add v4tov4 listenport=appPort listenaddress=appLocalIP connectport=fiddlerListeningPort connectaddress=fiddlerLocalIP
I ran this command after the app was already running, because I cannot determine its local port beforehand, but it did nothing. I was unable to find any other way to force the app to route its traffic through Fiddler / mitmproxy.

Multiple processes listening on same port or not?

This is on a Windows system. I have Tomcat started on 8080. I have a Node.js program started which is also listening on 8080, so now I have two PIDs. When I do a netstat, I find both PIDs on the same port, so everything is clearly being shown, and the two processes ran without showing any error. What baffled me is that when I access the URL localhost:8080 in the browser, it sometimes shows the Tomcat home page and the rest of the time it shows the Node.js response. It looks like there is a race between the processes, as in whichever catches the connection first sends back a response. Next, seeing that no error is thrown on reusing the same port, I try to start another Node.js program listening on 8080. But this time it throws an error saying EADDRINUSE. This is confusing: if it had to throw such an error, why did it allow Node.js and Tomcat to both listen on 8080 in the first place? Any factual inputs and no conjectures would be helpful.
You either:
have a proxy in front of your servers,
run the servers on different network interfaces (or address families), or
have some sort of port sharing set up on that machine.
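To make the second and third possibilities concrete, here is a small sketch in Python showing one common way two listeners end up sharing "port 8080" without an error. Whether this is what Tomcat and Node.js actually did on your machine is an assumption (the local-address column in netstat would confirm it): one binds the IPv4 wildcard and the other the IPv6 wildcard, and which server the browser reaches then depends on whether localhost resolves to 127.0.0.1 or ::1, which would explain the sometimes-Tomcat, sometimes-Node behaviour.

import socket

# Listener 1: IPv4 wildcard on port 8080
s4 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s4.bind(("0.0.0.0", 8080))
s4.listen()

# Listener 2: IPv6 wildcard on the same port, kept IPv6-only so the two
# don't overlap. Both binds succeed: no EADDRINUSE, yet netstat now shows
# two sockets on port 8080 (they could belong to two different processes).
s6 = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
s6.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 1)
s6.bind(("::", 8080))
s6.listen()

# Listener 3: a second bind to the *same* address family and address fails,
# which is the EADDRINUSE your second Node.js program ran into.
dup = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
dup.bind(("0.0.0.0", 8080))   # raises OSError (EADDRINUSE / WSAEADDRINUSE)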

Socket server that handles system commands and HTTP requests

I've been searching and trying for weeks now to find a solution to my issue that I can understand and easily implement, but I've had no joy, so I would be very grateful if someone could put me out of my misery.
I'm building an iphone app similar in functionality to apps like "Air Video" and "Air Playit". The app should communicate with a server running on a remote host. This server should be able to execute a command sent by the iphone to encode a video and stream it over http.
In my case, my iPhone app sends commands to be executed on a remote host. The remote host is running a Python socket server listening, for example, on port 3333.
On the iPhone, I'm simply using
CFStreamCreatePairWithSocketToHost, CFWriteStreamOpen and CFReadStreamOpen
to connect, write and read data.
My remote host successfully intercepts the commands and starts the encoding.
To serve the content, I'm having to run a separate HTTP server (I'm using Python's SimpleHTTPServer), which listens on another port.
What I would like to do is use the same port for both system commands and HTTP requests.
The apps I mentioned above seem to do it that way, and I've noticed they have their own built-in web server.
I'm sure I'm missing something but please bear with me this is my first attempt at building an app.
Encode your system commands into special HTTP requests. Decide which thing to do (execute a command or serve content) based on the HTTP request, not on the incoming port. If you need to keep separate HTTP servers (as you mentioned), consider having a front layer that receives everything from the devices and dispatches to the other servers (or ports) based on the request.
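As a rough illustration of that idea, here is a sketch using Python 3's http.server (the original post mentions the Python 2 SimpleHTTPServer, but the idea is the same). The /exec path, the cmd query parameter, and the port are made up for illustration, and running arbitrary commands from a request is of course only acceptable for a trusted proof of concept:

import shlex
import subprocess
from http.server import HTTPServer, SimpleHTTPRequestHandler
from urllib.parse import urlparse, parse_qs

class CommandAwareHandler(SimpleHTTPRequestHandler):
    def do_GET(self):
        parsed = urlparse(self.path)
        if parsed.path == "/exec":
            # Treat /exec?cmd=... as a "system command" request.
            cmd = parse_qs(parsed.query).get("cmd", [""])[0]
            result = subprocess.run(shlex.split(cmd), capture_output=True)
            body = result.stdout + result.stderr
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            # Everything else is served as a normal static file (the content).
            super().do_GET()

if __name__ == "__main__":
    # One port (3333, as in the question) for both commands and HTTP content.
    HTTPServer(("", 3333), CommandAwareHandler).serve_forever()

The iPhone side would then send its commands as ordinary HTTP requests to the same port it fetches the content from, instead of writing raw bytes to a separate command socket.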
