Receiving an HTTP response inside a Docker container

I am running a process, process A, inside a Docker container; it sends an HTTP request to a process, process B, running natively on the host. Process B receives the request and, after processing, sends a response, which process A never receives. I believe this could be because I am not exposing the port on which the reply is sent. My understanding is that this source port is randomly chosen, and I am not sure how I can expose it. Is there a way to overcome this issue?

My understanding is that this source port is randomly chosen and I am not sure how I can expose this port
That is only the case if your container runs with the -P option.
If you run with the -p host_port:container_port option (with container_port being a port EXPOSE'd in the Dockerfile of the image from which the container is running), the host port is fixed and that will work.
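For example, a minimal sketch of the two options (the image name, container name, and port numbers are assumptions, not from the question):

# fixed mapping: host port 8080 always forwards to container port 80
docker run -d -p 8080:80 my-image
# -P instead maps every EXPOSE'd port to a random high host port;
# "docker port" shows which host ports were chosen
docker run -d --name my-container -P my-image
docker port my-container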

Related

[asyncssh]: Reverse ssh tunnels with python asyncssh

I am looking to use asyncssh with python3.7 (asyncio)
Here is what I want to build:
A remote device would be running a client that does a call-home to a centralized server. I want the server to be able to execute commands on the client using reverse SSH tunnels on the incoming connection. I cannot use forward SSH (regular SSH) because the client could be behind NAT and the server might not know the client's address. I prefer the client doing a call-home and the server then managing the client.
The program for a POC should use python3 + an async implementation of SSH. I see asyncssh as the only viable choice (please suggest if you have an alternative):
Client: connects to the server and accepts reverse SSH tunnels to be opened on the same outbound connection.
Server: accepts the connection from the client and keeps the session open. The server then opens reverse SSH tunnels to the client. For example, the server program should open 3 reverse SSH tunnels on the incoming connection, each running one command: ['ls', 'sleep 30 && date', 'sleep 5 && cat /proc/cpuinfo'].
The server program should print the received response for each of these commands (one should come back almost immediately, another after 5 seconds, and another after 30).
I looked at the documentation, and I could not see examples of using multiple reverse ssh tunnels.
Does anyone have experience using this? Can you point me to examples?
The developer of asyncssh has provided an example:
As of now, this is in the develop branch. I have tested it and it does the job perfectly!
https://asyncssh.readthedocs.io/en/develop/#reverse-direction-example
[If you are checking this after a while, you might find it in the master documentation.]
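For reference, a condensed sketch of that example, under the assumption that the develop-branch API (connect_reverse() / listen_reverse()) works as documented there; the server address and key file names are placeholders:

# device.py: the remote device dials out, then acts as an SSH *server*
# on that outbound connection (python 3.7+)
import asyncio
import asyncssh

async def call_home():
    conn = await asyncssh.connect_reverse(
        'central.example.com', 8022,          # placeholder server address
        server_host_keys=['ssh_host_key'],
        authorized_client_keys='ssh_user_ca')
    await conn.wait_closed()

asyncio.run(call_home())

The central server accepts the call-home and then drives the device as an SSH client; the three "tunnels" are just three channels multiplexed over the single inbound connection:

# server.py: accept the call-home, then run commands over it
import asyncio
import asyncssh

async def run_commands(conn):
    commands = ('ls', 'sleep 30 && date', 'sleep 5 && cat /proc/cpuinfo')
    tasks = [asyncio.create_task(conn.run(cmd)) for cmd in commands]
    for task in asyncio.as_completed(tasks):
        result = await task               # printed as each command finishes
        print(result.command, '->', result.stdout)

async def main():
    server = await asyncssh.listen_reverse(
        port=8022, client_keys=['ssh_user_key'],
        known_hosts=None,                 # demo only: skips host key checking
        acceptor=run_commands)
    await server.wait_closed()

asyncio.run(main())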

Bind docker container port to path

Docker noob here. I have set up a dev server with Docker containers, and I am able to run basic containers.
For example
docker run --name node-test -it -v "$(pwd)":/src -p 3000:3000 node bash
Works as expected. But as soon as I have many small projects, I would like to bind/listen on an actual HTTP localhost path instead of a port. Something like this:
docker run --name node-test -it -v "$(pwd)":/src -p 3000:80/node-test node bash
Is it possible? Thanks.
EDIT. Basically I want to type localhost/node-test instead of localhost:3000 in my browser window
It sounds like what you want is for your Docker container to respond to a URL like http://localhost/some/random/path by somehow specifying that path in Docker's -p/--publish option.
The short answer to that is no, that is not possible. The reason is that a port is not related to a path in any way - an HTTP server listens on a port, and serves resources that are found at a path. Note that there are many different types of servers and all of them listen on some port, but many (most?) of them have no concept of a path at all. For example, consider an SMTP (mail transfer) server - it often listens on port 25, but what does a path mean to it? All it does is transfer mail from one server to another.
There are two ways to accomplish what you're trying to do:
write your application to respond to particular paths. For example, if you're using the Express framework in your node application, create a route for the path you want.
use a proxy server to accept requests on one path and relay them to a server that's listening on another port (a minimal sketch follows below).
Note that this has nothing to do with Docker - you'd be faced with the same two options if you were running your application on any server.
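For example, a minimal sketch of the proxy option using nginx (the path and port are taken from the question; everything else is an assumption):

server {
    listen 80;

    # relay localhost/node-test/... to the container published on port 3000;
    # the trailing slash on proxy_pass strips the /node-test prefix
    location /node-test/ {
        proxy_pass http://localhost:3000/;
    }
}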

Sharing a network port between two docker containers

I am running a binary-protocol TCP server in a container. In order to facilitate zero-downtime upgrades, I have constructed a flow where an instance can forward its server socket to the server in the new container by way of a Unix domain socket. This works like a charm until the moment the first container shuts down. Since it is the container which published the port, the port is de-published once the container closes. I'm trying to figure out the best way to handle this case.
Here's the basic rundown of what I'm doing:
# start the first container, starts listening on 3290
docker run -p 3290:3290 --name first /my/server/app
# start the second container, "steals" the server socket on 3290 from first
docker run --net container:first /my/server/app
# the second container, at this point, is handling connections from 3290
# when the first container is killed below, the port is de-published
# and the second container stops receiving connections
docker rm first
At first, I thought that a user-defined network would work best, but I cannot find a way to publish a port on a user-defined network. Another option I am considering is to construct a separate container which handles the publishing of ports, then have all other containers borrow the network from that running container. I think that approach will work; I just don't like the idea of having an extra container lying around for no other purpose. Though perhaps that is the only solution. Thoughts?
https://docs.docker.com/engine/swarm/:
Load balancing: You can expose the ports for services to an external load balancer. Internally, the swarm lets you specify how to distribute service containers between nodes.
The video at the bottom of this post may also be interesting for you: https://blog.docker.com/2016/06/docker-1-12-built-in-orchestration/.
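For illustration, a hedged sketch of the swarm approach (the service and image names are assumptions): in swarm mode the routing mesh, not any single container, owns the published port, so replacing the service's containers does not de-publish it.

docker swarm init
docker service create --name app --publish 3290:3290 \
    --update-order start-first my/server/app:v1
# rolling upgrade: the new task starts and accepts on 3290 before
# the old one stops, so the port never disappears
docker service update --image my/server/app:v2 app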

Can "Monit" monitor the processes running on remote servers?

I want to set up Monit on a server which is going to be a centralized server to monitor processes running on remote servers. I checked many docs related to setting up Monit, but could not find how to set it up for remote server processes. For example, a centralized Monit server should monitor nginx running on server A, mongod running on server B, and so on. Any suggestions on how to do this?
According to the documentation, Monit can test connections remotely, using TCP or UDP. What you can do is provide a small status file that gets refreshed for each technology you intend to monitor, and let Monit fetch that status file over HTTP, as follows:
check host nginxserver with address www.nginxserver.com
    if failed port 80 protocol http
        and request "/some_file"
    then alert
Since you are testing a web server, that can easily be accomplished with the above. For reference, below is the part of the Monit manual about connection testing:
CONNECTION TESTING

Monit is able to perform connection testing via networked ports or via Unix sockets. A connection test may only be used within a check process or within a check host service entry in the Monit control file.

If a service listens on one or more sockets, Monit can connect to the port (using either TCP or UDP) and verify that the service will accept a connection and that it is possible to write and read from the socket. If a connection is not accepted, or if there is a problem with socket I/O, Monit will assume that something is wrong and execute a specified action. If Monit is compiled with OpenSSL, then SSL-based network services can also be tested.
The full syntax for the statement used for connection testing is as follows (keywords are in capitals and optional statements in [brackets]):

IF FAILED [host] port [type] [protocol | {send/expect}+] [timeout]
[retry] [[<n>] CYCLES] THEN action [ELSE IF SUCCEEDED [[<n>] CYCLES] THEN action]

or, for Unix sockets,

IF FAILED [unixsocket] [type] [protocol | {send/expect}+] [timeout]
[retry] [[<n>] CYCLES] THEN action [ELSE IF SUCCEEDED [[<n>] CYCLES] THEN action]
host: HOST hostname. Optionally specify the host to connect to. If the host is not given, then localhost is assumed if this test is used inside a process entry. If this test is used inside a remote host entry, then the entry's remote host is assumed. Although host is intended for testing name-based virtual hosts in an HTTP server running on a local or remote host, it does allow the connection statement to be used to test a server running on another machine. This may be useful; for instance, if you use Apache httpd as a front-end and an application server as the back-end running on another machine, this statement may be used to test that the back-end server is running and, if not, raise an alert.

port: PORT number. The port number to connect to.

unixsocket: UNIXSOCKET PATH. Specifies the path to a Unix socket. Servers based on Unix sockets always run on the local machine and do not use a port.
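Applied to the example in the question, a sketch could look like this (the hostnames are assumptions; Monit ships protocol tests for HTTP, MONGODB and many others):

# on the centralized Monit server: one "check host" entry per remote service
check host nginx-a with address a.example.com
    if failed port 80 protocol http then alert

check host mongod-b with address b.example.com
    if failed port 27017 protocol mongodb then alert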

Why is "fork" needed by socat when connecting to a web server?

I am trying to understand TCP connections between a browser and a web server. I have a web server running on my local machine, and can browse to it just fine, as expected, using localhost:3000 or 127.0.0.1:3000. (I am running "rails s" and WEBrick.)
I wanted to put a software intermediary between the browser and the web server, and so began experimenting with socat. The following works just fine:
socat TCP-LISTEN:8080,fork TCP:localhost:3000
I can browse to localhost:8080 and things work as expected. However, if I omit the ",fork" argument like so,
socat TCP-LISTEN:8080 TCP:localhost:3000
the local Rails site looks quite broken in the browser.
Why is that fork argument necessary? Why wouldn't a browser <--> web server connection work without it?
Without the fork, socat will accept a single TCP connection, forward data bidirectionally between the two endpoints for as long as that connection remains open, then exit. You can see this yourself easily:
Run socat in one terminal window
Telnet to localhost 8080 in another terminal window. It connects to the socat instance.
Telnet to localhost 8080 in a third terminal window. You get a connection refused error, because socat is not listening for new connections anymore: it has moved on to servicing the one it already got.
Type an HTTP request into the second terminal window. You'll get an HTTP response, and then socat will exit as the connection is closed.
The fork option just makes it fork a new child to process the newly accepted connection while the parent goes back to waiting for new connections.
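As an illustration of that pattern (a sketch, not socat's actual implementation), here is a minimal forking relay in Python using the same ports as the socat example:

# relay.py: accept in a loop, fork a child per connection, and keep the
# parent listening, which is what ",fork" buys you in socat
import selectors
import socket
import socketserver

class Relay(socketserver.BaseRequestHandler):
    def handle(self):
        # each forked child proxies one browser connection to the backend
        upstream = socket.create_connection(('localhost', 3000))
        sel = selectors.DefaultSelector()
        sel.register(self.request, selectors.EVENT_READ, upstream)
        sel.register(upstream, selectors.EVENT_READ, self.request)
        try:
            while True:
                for key, _ in sel.select():
                    data = key.fileobj.recv(4096)
                    if not data:      # one side closed: tear down this pair
                        return
                    key.data.sendall(data)
        finally:
            upstream.close()

socketserver.ForkingTCPServer.allow_reuse_address = True
with socketserver.ForkingTCPServer(('', 8080), Relay) as server:
    server.serve_forever()   # the parent never stops accepting on 8080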
socat's use of fork() rather than something more sophisticated like preforking or connection pooling is the reason you wouldn't want to implement high-performance middleware with socat!
