Is it possible to forward non-HTTP connection requests to some other port in nginx? - http

I have nginx running on my server, listening on ports 80 and 443. I know nginx has a number of ways of forwarding requests that allow me to forward a request like http://myserver:80/subdir1 to some address like http://myserver:8888.
My question: is it possible to configure nginx so that I can forward non-HTTP requests (plain TCP connections) to some other port? It's easy to test whether a connection is HTTP, because the first bytes will be either "GET" or "POST". Here's an example.
The client connects to nginx.
The client sends:
a. An HTTP GET request, e.g. "GET / HTTP/1.1": apply some rule for HTTP.
b. Any bytes that can't be recognized as an HTTP header: forward the connection to some other port, say 888, 999, etc.
Is this technically possible? Or can you suggest a way to do this?

It is possible since nginx 1.9.0:
http://nginx.org/en/docs/stream/ngx_stream_core_module.html
Something along these lines (this goes at the top level of nginx.conf):
stream {
    upstream backend {
        server backend1.example.com:12345;
    }

    server {
        listen 12345;
        proxy_pass backend;
    }
}

This is certainly technically possible.
You can modify an open-source TCP proxy such as the nginx module nginx_tcp_proxy_module, or HAProxy.
Or you can write an nginx module similar to the one above to do this for you.
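For illustration, here is a minimal HAProxy sketch of the "inspect the first bytes" idea, using HAProxy's built-in content inspection. The backend addresses are placeholders; port 999 is taken from the question and 8080 is just an assumed HTTP backend port.
frontend mixed_in
    mode tcp
    bind *:80
    # wait up to 5s for the client's first bytes so they can be inspected
    tcp-request inspect-delay 5s
    # HTTP is a predefined ACL that matches when the buffered bytes parse as HTTP
    tcp-request content accept if HTTP
    use_backend http_servers if HTTP
    default_backend raw_tcp_servers

backend http_servers
    mode tcp
    # placeholder: wherever the real HTTP server listens
    server web1 127.0.0.1:8080

backend raw_tcp_servers
    mode tcp
    # placeholder: the non-HTTP service, e.g. the question's port 999
    server raw1 127.0.0.1:999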

If nginx is remote proxying with HTTP, your client could use the HTTP CONNECT method; nginx would then connect to the remote port and forward all data as raw bytes (or at least I think so).

Related

NGINX Forwarding a request

I have an NGINX server set up, and I'd like to take a request and forward it to another application on a TCP port.
Let's say I have the following JSON payload
{
    "someKey1": 1234,
    "someKey2": "a string"
}
This is sent inside query parameters like the following
https://mywebsite.com?payload=%7B%0A%20%22someKey1%22%3A%201234%2C%0A%20%22someKey2%22%3A%20%22a%20string%22%0A%7D
Is there a way to forward that JSON payload to TCP port 1234 natively with NGINX?
Additionally, can I do any pre-processing of the above payload prior to it being forwarded to TCP port 1234? For example, I'd like to convert the above JSON to
someKey1=1234,someKey2="a string"
And then forward this data to TCP port 1234
I understand I'd have to create some sort of REST endpoint using something like Spring Boot to do this, but I'd really like to try to accomplish the above natively with NGINX if possible.
Nginx's primary purpose is to be an HTTP server/proxy.
It can be scripted via ngx_http_lua_module, but for your task it is much simpler to make an app/microservice that listens for HTTP and forwards to your custom protocol, or to modify the app listening on the mentioned port so that it understands HTTP.
Once your endpoint talks HTTP, nginx can be used for routing:
location /some_path/ {
    proxy_pass http://localhost:1234/;
}

location /some_other_path/ {
    proxy_pass http://localhost:1235/;
}
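For completeness, if the ngx_http_lua_module route were taken anyway, a rough, untested sketch might look like the following. It assumes an OpenResty-style build with the Lua module and lua-cjson available; the /forward location name is made up, while port 1234 and the key names come from the question.
location /forward {
    content_by_lua_block {
        local cjson = require "cjson.safe"

        -- read the ?payload=... query argument and decode the JSON
        local args = ngx.req.get_uri_args()
        local data = args.payload and cjson.decode(args.payload)
        if not data then
            return ngx.exit(ngx.HTTP_BAD_REQUEST)
        end

        -- re-encode in the custom key=value format from the question
        local line = string.format('someKey1=%d,someKey2="%s"',
                                   data.someKey1, data.someKey2)

        -- open a raw TCP connection and forward the re-encoded payload
        local sock = ngx.socket.tcp()
        local ok, err = sock:connect("127.0.0.1", 1234)
        if not ok then
            return ngx.exit(ngx.HTTP_BAD_GATEWAY)
        end
        sock:send(line)
        sock:close()
        ngx.say("forwarded")
    }
}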
NGINX is a simple web server: it accepts HTTP requests, forwards them to a configured location (which may be an application server or any other web server), and sends the response back to the requester over HTTP. Data can't really be processed inside NGINX itself (beyond what extra modules such as the Lua module mentioned above provide).
You can configure forwarding rules in the default file under the sites-available directory in the NGINX configuration directory.
Here is a nice tutorial on NGINX configuration which might help you.

Haproxy Appending Port to `HTTP_HOST` Header in Backend Request

I am using HAProxy in front of my web server for SSL termination.
I am forwarding the request to port 81 if the request is HTTPS and to port 80 if it is plain HTTP:
backend b1_http
    mode http
    server bkend_server

backend b1_https
    mode http
    server bkend_server:81
The problem is, when HAProxy sends the request to the back-end, it sends the HTTP_HOST header as request.domain.com:81.
Is it possible in HAProxy to send the HTTPS request to the back-end on a specific port without the port being appended to the HTTP_HOST request header?
There are two issues here.
First, there is no HTTP_HOST header. The header is Host:. It sounds like HTTP_HOST is something being generated internally by your web server or framework.
Second, HAProxy doesn't modify the Host: header just because your back-end is listening on a port other than 80. It doesn't actually modify the Host: header at all, unless explicitly configured to, using a mechanism like reqirep ^Host: ... or http-request set-header host ....
You can confirm this with a packet capture. You should find that whatever HTTP_HOST is, the value is necessarily being generated internally on the back-end system itself, because it's not coming from HAProxy.
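For illustration, if you ever did want HAProxy to rewrite the header, the second mechanism mentioned above would look roughly like this (the header value and server address are placeholders):
backend b1_https
    mode http
    # hypothetical: explicitly force the Host header sent to the back-end
    http-request set-header Host request.domain.com
    server bkend_server 127.0.0.1:81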

Reverse proxy Elasticsearch transport port

In my environment, elasticsearch sits on a server that only has standard ports (80, 443, etc.) open. All the other ports are firewalled off. I currently have a reverse proxy on port 80 that reroutes all the elasticsearch HTTP requests to elasticsearch's http port.
I would also like to reroute TCP requests to elasticsearch's transport port, so that my local client can directly query elasticsearch as a client node. Nginx 1.9.0 recently allowed TCP load balancing, which is what I would like to utilize for this, but I'm having some trouble getting my system to work. Here is my nginx.conf file (removed the HTTP context to isolate the issue):
worker_processes 1;

events {
    worker_connections 1024;
}

stream {
    server {
        listen 80;
        proxy_pass 127.0.0.1:9300;
    }
}
My client node is set up to talk to mydomain.com:80, so it should ideally route all traffic to the internal transport port. However, I am getting the following exception: org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available
Is there something else I need to configure on my client node or the TCP proxy?
EDIT 1:
Some additional information: I changed Elasticsearch's transport port from 9300 to 8030, which is a port that is open. When I correspondingly changed my nginx.conf to proxy_pass to 127.0.0.1:8030, my local client node started working and got appropriate responses to my queries.
So the issue seems to be that if I proxy_pass to an already-open port it works, but if the port is closed (9300), the proxy_pass fails. Does anyone know why this would be and how to fix it? I'd prefer to stick to using port 9300 if possible.
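For reference, the working arrangement described in this edit amounts to something like the sketch below (the elasticsearch.yml key name is version-dependent; transport.tcp.port was the setting in Elasticsearch releases of that era):
# elasticsearch.yml (sketch): move the transport port onto an already-open port
#   transport.tcp.port: 8030
# nginx.conf: point the stream proxy at the new transport port
stream {
    server {
        listen 80;
        proxy_pass 127.0.0.1:8030;
    }
}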

nginx non http port redirection

There's a server at a customer site that runs nginx, a salt master daemon from SaltStack, and a secret web app that does secret things.
Considerations:
In this scenario, there's only one IP, only one server, and multiple DNS records available;
I have nginx running on port 80;
And the salt master running on 6453;
domain.example.com is bound to that IP, exposed through my nginx port 80, and points to the secret web app;
otherdomain.example.com is bound to the same IP, also exposed through my nginx port 80, and is the one I want to use to proxy the salt port.
The customer has a machine in another location that needs to connect to the salt master, and its internet connection is provided by a secret organization that only allows connections to port 80; no negotiation is possible.
My question:
Is it possible to use nginx to redirect otherdomain.example.com port 80 to port 6453? I tried the following:
server {
    listen 80;
    server_name otherdomain.example.com;
    proxy_pass 127.0.0.1:6453;
}
But that doesn't work as expected. Is it possible? Is there some way to do this using nginx?
The error I got from the log was:
"proxy_pass" directive is not allowed here
proxy_pass needs to be specified within a location context, and is fundamentally a Web Thing. It only comes into play after the web headers are sent and interpreted.
Things like what you're trying to accomplish are commonly done using HAProxy in tcp mode, although there is a tcp proxy module that also does similar things.
However, I don't think you're going to be very successful, as ZMQ does not speak the protocol (HTTP, with its Host: headers) that would easily let you tell the web requests apart from the non-web requests coming in on the same port.
My recommendation is to either find some way to use another port for this, use a second IP address, or write a tricky TCP proxy that identifies incoming HTTP and/or ZMQ connections and transparently forwards them to the correct local port.
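To illustrate the second-IP option with the nginx stream module (available since 1.9.0, as noted at the top of this page): nginx cannot listen for both HTTP and raw TCP on the same IP:port pair, but if a second IP were available, a sketch like this (the 192.0.2.2 address is a placeholder) could forward its port 80 straight to the salt port:
stream {
    server {
        # hypothetical second IP dedicated to raw TCP forwarding
        listen 192.0.2.2:80;
        # forward everything arriving here to the salt master port
        proxy_pass 127.0.0.1:6453;
    }
}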

HAProxy connect to backend with source IP

I am running HAProxy on a machine with multiple interfaces and I want the connection to the backend to be made from the source IP of the interface on which the client request came in. Using the source directive from the documentation in the listen blocks didn't seem to do it as all connections seem to come from the first interface. My configuration is as follows:
listen f_192.168.1.10_http
    bind 192.168.1.10:80
    source 192.168.1.10
    mode http
    option httplog
    capture request header Host len 30
    use_backend b_domain1_http if { hdr(host) -i domain1.com }

listen f_192.168.1.20_http
    bind 192.168.1.20:80
    source 192.168.1.20
    mode http
    option httplog
    capture request header Host len 30
    use_backend b_domain1_http if { hdr(host) -i domain1.com }

backend b_domain1_http
    mode http
    option httplog
    server srv1 domain1.com:80 check inter 30s
I.e. I am struggling to get connections coming in on interface 192.168.1.10 to have their source IP be 192.168.1.10 when connecting to the backend. Right now, regardless of whether the connection comes in on 192.168.1.10 or 192.168.1.20, the outgoing connection to the backend is initiated from 192.168.1.10. I thought that using source in the listen blocks would accomplish this, but when I look at the output of netstat -at, all originating connections to the backend come from one interface.
Does anyone have any idea how I can ensure the source IP of the connection to the backend is the same as the interface of the original client request?
I believe you can use source as a parameter for a server.
backend be1
    ...
    server srv1 domain1.com:80 source ${frontend_ip} check inter 30s
I believe it is possible to substitute %fi for ${frontend_ip}, and you may also use %fp or ${frontend_port} to specify the port. This way you can remove the source statements from the frontends.
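If the %fi substitution doesn't work in that position, an alternative (hedged) sketch is to duplicate the backend and pin each copy's source address, using the IPs from the question, with each listen block pointing use_backend at its matching copy:
backend b_domain1_http_from_10
    mode http
    option httplog
    source 192.168.1.10
    server srv1 domain1.com:80 check inter 30s

backend b_domain1_http_from_20
    mode http
    option httplog
    source 192.168.1.20
    server srv1 domain1.com:80 check inter 30s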
