Is there a way to make nginx terminate a websocket connection and pass only the socket stream to a server? - nginx

Basically what I'm trying to do is have a secure WebSocket connection start life at a client, go through nginx, where nginx would terminate the TLS, and then, instead of just proxying the WebSocket connection to a server, have nginx handle the WebSocket upgrade itself and send only the raw socket stream data to a TCP server or a unix domain socket.
Is that possible with the existing nginx modules and configuration?
proxy_pass can connect to a server via a unix domain socket
proxy_pass http://unix:/tmp/backend.socket:/uri/;
But the implication is that it still speaks HTTP over the unix domain socket, and the server is responsible for handling the WebSocket upgrade. I'm trying to get nginx to do the upgrading so that only the raw socket stream data gets to my server.
Sorta like a mix between proxy_pass and fastcgi_pass.
Do I have to modify one of these modules to make that possible or is there some way to configure this to work?
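For reference, the usual WebSocket passthrough over a unix domain socket, which still leaves the upgrade handshake to the backend, would look roughly like this (the location and socket path are placeholders borrowed from above):
location /ws/ {
    proxy_pass http://unix:/tmp/backend.socket:/uri/;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}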

So what I eventually came to realize is that proxies just proxy and don't parse protocols. There's nothing built into nginx (although mod_ws in Apache might do it) that can actually process the WebSocket protocol; the nginx proxy function just forwards the stream to the backend server. I'm working on another approach for this, as the hope of having the web server do the heavy lifting is not going to work out easily.

Related

nginx tcp stream (k8s) - keep client connection open when upstream closes

I have an application that accepts TCP traffic (not HTTP) and I'd like the ability to have the traffic load balanced to it. However, one requirement is that when a client makes a connection, we do not close that connection under any circumstances (ideally) since we are dealing with some clients with older technology.
I've set up the kubernetes nginx ingress controller, but it isn't behaving how I'm hoping/expecting. What I would like is: If the connection to one of the upstream servers closes, then the client connection remains open for some amount of time while nginx picks a new upstream server and starts sending data to it. I am not concerned about the stream's data being split across different upstream servers, I just need the connection to stay open from the client's perspective during something like a redeploy.
What is actually happening is that from my client's perspective, currently when the upstream server app closes the connection, my connection is closed and I have to reconnect.
The ingress controller has this configuration, which I thought would accomplish what I want, but it doesn't seem to be working as expected:
server {
    preread_by_lua_block {
        ngx.var.proxy_upstream_name="tcp-my-namespace-my-service-7550";
    }
    listen 7550;
    proxy_timeout 600s;
    proxy_next_upstream on;
    proxy_next_upstream_timeout 600s;
    proxy_next_upstream_tries 3;
    proxy_pass upstream_balancer;
}
Any help at all is greatly appreciated and I'm happy to provide more info.
What you describe is how nginx works out of the box with HTTP. However:
Nginx has a detailed understanding of HTTP.
HTTP is a message-based protocol, i.e. it uses requests and replies.
Since nginx knows nothing about the protocol you are using, even if it uses a request/reply mechanism with no implied state, nginx does not know when it has received a complete request that it could safely replay to another upstream.
You need to implement a protocol-aware MITM.
Unfortunately I haven't been able to get this functionality working with nginx. What I've ended up doing is writing my own basic TCP reverse-proxy that does what I need - if a connection to a backend instance is lost, it attempts to get a new one without interrupting the frontend connection. The traffic that we receive is fairly predictable in that I don't expect that moving the connection will interrupt any of the "logical" messages on the stream 99% of the time.
I'd still love to hear if anyone knows of an existing tool that has this functionality, but at the moment I'm convinced that there isn't one readily available.
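For what it's worth, a minimal sketch in Go of the kind of proxy described above; the listen and backend addresses, buffer size, and retry policy are placeholder assumptions, and a real implementation would also need to handle backend-to-client traffic across reconnects:
package main

import (
	"io"
	"log"
	"net"
	"time"
)

const (
	listenAddr  = "0.0.0.0:7550"   // where clients connect (placeholder)
	backendAddr = "127.0.0.1:9000" // upstream application (placeholder)
)

// handleClient copies client->backend and redials the backend if the
// current backend connection fails, without closing the client.
func handleClient(client net.Conn) {
	defer client.Close()
	var backend net.Conn
	defer func() {
		if backend != nil {
			backend.Close()
		}
	}()
	buf := make([]byte, 32*1024)
	for {
		n, err := client.Read(buf)
		if err != nil {
			return // client went away; nothing more to do
		}
		for { // deliver the chunk, redialing the backend on failure
			if backend == nil {
				backend, err = net.DialTimeout("tcp", backendAddr, 5*time.Second)
				if err != nil {
					time.Sleep(time.Second) // backend down, e.g. mid-redeploy
					continue
				}
				go io.Copy(client, backend) // relay backend->client in the background
			}
			if _, err = backend.Write(buf[:n]); err == nil {
				break
			}
			backend.Close()
			backend = nil // force a redial on the next attempt
		}
	}
}

func main() {
	ln, err := net.Listen("tcp", listenAddr)
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			log.Fatal(err)
		}
		go handleClient(conn)
	}
}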
I think you need to configure your Nginx Ingress to enable the keepalive options as listed in the documentation here. For instance, in your nginx configuration:
...
keepalive 32;
...
This will activate the keepalive functionality with a cache of up to 32 idle connections kept open at a time.
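For reference, in a plain nginx configuration (HTTP context) the keepalive directive lives inside an upstream block, roughly like this (the upstream name and server address are placeholders):
upstream backend {
    server 127.0.0.1:7550;
    keepalive 32;
}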

Can I use a reverse proxy for direct database connection?

Is it possible to set up a reverse proxy that would allow a database client to use an SSL connection on port 443 and redirect it to port 1521? I suspect it would not work. Can someone explain why or why not?
I'm assuming Oracle database based on port 1521.
There is no problem setting up an Nginx TCP (L4) proxy for any TCP backend. Look here https://www.nginx.com/blog/tcp-load-balancing-udp-load-balancing-nginx-tips-tricks/ for an example configuration.
When it comes to terminating SSL (L5) and sending the decrypted data to a TCP backend, it's also technically possible with ngx_stream_ssl_module, but I have never tested it, and from what I can read people have had problems setting this up for PostgreSQL:
http://nginx.org/en/docs/stream/ngx_stream_ssl_module.html
Can nginx do TCP load balance with SSL termination
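A minimal sketch of what such a stream configuration might look like (untested, in line with the caveat above; the certificate paths and backend address are placeholders):
stream {
    upstream oracle_backend {
        server 10.0.0.5:1521;
    }

    server {
        listen 443 ssl;
        ssl_certificate     /etc/nginx/certs/proxy.crt;
        ssl_certificate_key /etc/nginx/certs/proxy.key;
        proxy_pass oracle_backend;
    }
}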
I have never seen Nginx set up as a proxy for databases. Instead, connection poolers (e.g. pgbouncer for PostgreSQL) are often used, not only for pooling but also as an SSL offloading service. They are in fact L7 proxies for databases.
The Oracle equivalent of pgbouncer seems to be Oracle Connection Manager, and it supports SSL, so I'd strongly recommend using it instead of Nginx or any other general-purpose reverse proxy server:
https://docs.oracle.com/en/database/oracle/oracle-database/18/netag/configuring-oracle-connection-manager.html#GUID-AF8A511E-9AE6-4F4D-8E58-F28BC53F64E4

NGINX - Websocket client support

A quick question: does Nginx support WebSocket clients?
I have a web server that uses NGINX, and I use a WebSocket server for which NGINX acts as a proxy. On the same port, can I use a WebSocket client to initiate a connection with the external WebSocket server?
Yes, it does (since 1.3.13).
Have a look at the docs here and an example setup here
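The configuration pattern from those docs looks roughly like this (the backend address is a placeholder); the map lets the same server block handle both plain HTTP requests and WebSocket upgrades:
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    listen 80;

    location /ws/ {
        proxy_pass http://127.0.0.1:8010;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}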

HTTP over AF_UNIX: HTTP connection to unix socket

We have an HTTP server, for which we have an HTTP client-based application (on Linux) working fine.
But now we need to use Unix domain sockets from our client application.
So is it possible to send/receive HTTP request and HTTP response packets over the unix domain socket?
Scenario 1: When connecting to localhost, it is required to eliminate the SSL overhead by connecting over HTTP to the unix socket instead of HTTPS to the local port.
Basically, I am looking for a standard way of encoding a unix socket path in an HTTP URL.
Many thanks in advance.
So long as your socket is a stream socket (SOCK_STREAM rather than SOCK_DGRAM) then it's technically possible. There's nothing in HTTP that requires TCP/IP, it just requires a reliable bidirectional stream.
However I've never seen an HTTP client that knows how to connect to such a socket. There's no URL format that I know of that would work, should you actually need to use a URL to talk to the server.
Also note that some things that normal web servers depend on (such as getpeername(), to identify the client) make no sense when you're not using TCP/IP.
EDIT: I just saw your edit about mapping localhost to the UNIX socket. This is perfectly feasible; you just need to ensure that the client knows how to find the path of the UNIX socket that should be used instead of connecting to 127.0.0.1:xxx.
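To illustrate that approach, here is a small sketch in Go of an HTTP client that dials a unix domain socket instead of 127.0.0.1; the socket path and URL are placeholder assumptions:
package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
)

func main() {
	transport := &http.Transport{
		// Ignore the host/port from the URL and always dial the unix socket.
		DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
			var d net.Dialer
			return d.DialContext(ctx, "unix", "/tmp/backend.socket")
		},
	}
	client := &http.Client{Transport: transport}

	// "localhost" only ends up in the Host header; the bytes actually
	// travel over the unix socket configured above.
	resp, err := client.Get("http://localhost/uri/")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}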

How to use nginx or apache to process tcp inbound traffic and redirect to specific php processor?

This is the main idea: I want to use the NGINX or Apache web servers as a TCP processor, so that they manage all the threads, connections and client sockets. All packets received on a port, let's say port 9000, would be redirected to a program written in PHP or Python, and that program would process each request, storing the data in a database. The big problem is that this program also needs to send data back to the client socket that is currently connected to the NGINX or Apache server. I've been told I should do something like this instead of creating my own TCP server, which is too difficult and very hard to maintain, since socket communication under heavy load could lead to memory faults or even crash the server. I have done it before, and in fact the server crashed.
Any ideas how to achieve this?
Thanks.
Apache/nginx is a web server and can be used to serve static content to your customers and to forward application service requests to other application servers.
I only know about Django, and here is a sample nginx configuration from Configuration for Django, Apache and Nginx:
location / {
    # proxy / requests to apache running django on port 8081
    proxy_pass http://127.0.0.1:8081/;
    proxy_redirect off;
}

location /media/ {
    # serve static media directly from nginx
    root /srv/anuva_project/www/;
    expires 30d;
    break;
}
Based on this configuration, nginx serves local static data for URLs under /media/*
and forwards other requests to the Django server located at localhost, port 8081.
I have the feeling HAProxy is a tool better suited to your needs, which apparently have to do with TCP and not HTTP. You should at least give it a try.
