HTTP relay / proxy / mapping server - http

I'm trying to set up a simple relay / mapping server locally and feel there has to be some off the shelf solution, but I can't seem to find it.
I'm debugging an application of mine that needs to connect to host_A. Instead of connecting to host_A I want to configure it to connect to local_proxy. I don't want to use proxying protocols, but instead want to configure it to connect to http://localhost:80 and then have local_proxy connect to host_A and have local_proxy simply relay all messages back and forth.
I would expect to have to configure local_proxy to tell it which server it is supposed to be relaying to.
Then there is one particular endpoint whose responses I want to be able to intercept and modify, so I can better debug my application.
I thought I should be able to do this with Charles Proxy, but I couldn't figure out how.
At the moment, this doesn't need to support SSL (though that would always be nice).

I think what you are trying to build here is known as a reverse proxy. There are a variety of solutions available for that, but you will probably get results fastest with nginx, which can not only be configured for reverse-proxy duty but also has some powerful SSL capabilities. A minimal solution to your problem would look like this:
server {
    listen 80;

    # Adjust for expected hostname. Space-separated list of hostnames possible.
    server_name host_a;

    location / {
        # Forward all incoming requests to host_a
        proxy_pass http://host_a;
    }
}
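If you also want to stub out the one endpoint you mentioned, you could add a more specific location inside that same server block, above the catch-all location /. The path and JSON body below are only placeholders for whatever your application actually requests:

    location = /the-endpoint-to-debug {
        # Hypothetical endpoint, stubbed out so you can see how the
        # application reacts to a modified response.
        default_type application/json;
        return 200 '{"debug": "stubbed response"}';
    }

Any request to that exact path gets the canned reply; everything else still falls through to the proxy_pass catch-all.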

Related

nginx tcp stream (k8s) - keep client connection open when upstream closes

I have an application that accepts TCP traffic (not HTTP) and I'd like the ability to have the traffic load balanced to it. However, one requirement is that when a client makes a connection, we do not close that connection under any circumstances (ideally) since we are dealing with some clients with older technology.
I've set up the kubernetes nginx ingress controller, but it isn't behaving how I'm hoping/expecting. What I would like is: If the connection to one of the upstream servers closes, then the client connection remains open for some amount of time while nginx picks a new upstream server and starts sending data to it. I am not concerned about the stream's data being split across different upstream servers, I just need the connection to stay open from the client's perspective during something like a redeploy.
What is actually happening is that from my client's perspective, currently when the upstream server app closes the connection, my connection is closed and I have to reconnect.
The ingress controller has this configuration, which I thought would accomplish what I want, but it doesn't seem to be working as expected:
server {
    preread_by_lua_block {
        ngx.var.proxy_upstream_name="tcp-my-namespace-my-service-7550";
    }

    listen 7550;

    proxy_timeout 600s;
    proxy_next_upstream on;
    proxy_next_upstream_timeout 600s;
    proxy_next_upstream_tries 3;

    proxy_pass upstream_balancer;
}
Any help at all is greatly appreciated and I'm happy to provide more info.
What you describe is how nginx works out of the box with HTTP. However:
Nginx has a detailed understanding of HTTP.
HTTP is a message-based protocol, i.e. it uses requests and replies.
Since nginx knows nothing about the protocol you are using, even if it is a request/reply protocol with no implied state, nginx cannot tell whether it has received a complete request that it could safely replay elsewhere.
You need to implement a protocol-aware MITM.
Unfortunately I haven't been able to get this functionality working with nginx. What I've ended up doing is writing my own basic TCP reverse-proxy that does what I need - if a connection to a backend instance is lost, it attempts to get a new one without interrupting the frontend connection. The traffic that we receive is fairly predictable in that I don't expect that moving the connection will interrupt any of the "logical" messages on the stream 99% of the time.
I'd still love to hear if anyone knows of an existing tool that has this functionality, but at the moment I'm convinced that there isn't one readily available.
I think you need to configure your Nginx Ingress to enable the keepalive options as listed in the documentation here. For instance, in your nginx configuration:
...
keepalive 32;
...
This will activate the keepalive functionality with a cache of up to 32 connections active at a time.

Nginx catch "broken header" when listening to proxy_protocol

I need to use HTTP health checks on an Elastic Beanstalk application with proxy protocol turned on. That is currently not possible, and the health check fails with an error: *58 broken header while reading PROXY protocol
I figured I have two options:
Perform the health check on another port, and setup nginx to listen to http requests on that port and proxy to my app.
If it is possible to catch the broken header errors, or detect regular http requests in the proxy_protocol server block, then redirect those requests to a port that listens to http.
I would prefer the latter (#2), if possible. So is there any way to do this?
Ideally, I would prefer not to have to do any of this. A feature request to fix this has been submitted to AWS, but it has no ETA.
The proxy protocol specification says:
The receiver MUST be configured to only receive the protocol described in this
specification and MUST not try to guess whether the protocol header is present
or not. This means that the protocol explicitly prevents port sharing between
public and private access. Otherwise it would open a major security breach by
allowing untrusted parties to spoof their connection addresses.
I think this means that option 2 is a sufficiently bad idea that it's not even supported by conforming implementations of the proxy protocol.
Option 1, on the other hand, seems pretty reasonable. You can set up a security group so that only legitimate health checks can come in on the port without proxy protocol enabled.
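A rough sketch of option 1 (the ports and app address are assumptions; lock the health-check port down with a security group):

    # Normal traffic arrives through the load balancer with the PROXY header.
    server {
        listen 80 proxy_protocol;

        location / {
            proxy_pass http://127.0.0.1:5000;
        }
    }

    # Plain-HTTP listener used only by the health checker.
    server {
        listen 8080;

        location /health {
            proxy_pass http://127.0.0.1:5000;
        }
    }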
Another couple of options spring to mind too:
Simply point your health checks at the thing that's adding the header (i.e. ELB?), rather than directly at your Nginx instance. Not sure if this is possible with Elastic Beanstalk; it's not a service I use.
Use something else to add the proxy protocol header before forwarding the health-check traffic on to your Nginx, which would avoid having to duplicate your Nginx config. For instance a HAProxy running on the same machine as your Nginx could do this. Again, use security groups to ensure that only legitimate traffic gets through.

Go web server with nginx server in web application [duplicate]

This question already has answers here:
What are the benefits of using Nginx in front of a webserver for Go?
(4 answers)
Closed 7 years ago.
Sorry, I cannot find an answer to this from Google searches, and nobody seems to explain clearly the difference between a pure Go webserver and an nginx reverse proxy. Everybody seems to use nginx in front for web applications.
My question is, while Go has all the HTTP-serving functions, what is the benefit of using nginx over a pure Go web server?
In most cases, we set up the Go webserver to handle all routes and put the nginx configuration in front.
Something like:
limit_req_zone $binary_remote_addr zone=limit:10m rate=2r/s;

server {
    listen 80;

    log_format lf '[$time_local] $remote_addr';
    access_log /var/log/nginx/access.log lf;
    error_log /var/log/nginx/error.log;

    set_real_ip_from 0.0.0.0/0;
    real_ip_header X-Forwarded-For;
    real_ip_recursive on;

    server_name 1.2.3.4 mywebsite.com;
}
While the Go side is something like this:
package main

import (
    "fmt"
    "net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintf(w, "Hi there, I love %s!", r.URL.Path[1:])
}

func main() {
    http.HandleFunc("/", handler)
    http.ListenAndServe(":8080", nil)
}
Is the traffic to nginx and to the Go web server different?
If not, why do we have two layers of web server?
Please help me understand this.
Thanks,
There's nothing stopping you from serving requests from Go directly.
On the other hand, there are some features that nginx provides out-of-the box that may be useful, for example:
handle many virtual servers (e.g. have Go respond on app.example.com and a different app on www.example.com)
http basic auth in some paths, say www.example.com/secure
access logs
etc
All of this can be done in Go but would require programming, while in nginx it's just a matter of editing a .conf file and reloading the configuration. Nginx doesn't even need a restart for these changes to take effect.
(From a "process" point of view, nginx could be managed by an ops employee, with root permissions, running in a well known port, while developers deploy their apps on higher ones.)
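As a rough sketch of the first two items in that list (the hostnames, ports and htpasswd path are made up), the nginx side could be as simple as:

    server {
        listen 80;
        server_name app.example.com;

        location / {
            proxy_pass http://127.0.0.1:8080;   # first Go app
        }
    }

    server {
        listen 80;
        server_name www.example.com;

        # HTTP basic auth on one path only.
        location /secure {
            auth_basic           "restricted";
            auth_basic_user_file /etc/nginx/htpasswd;
            proxy_pass           http://127.0.0.1:8081;
        }

        location / {
            proxy_pass http://127.0.0.1:8081;   # second app
        }
    }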
The general idea of using nginx in this scenario is to serve up static resources via nginx and allow Go to handle everything else.
Search for "try_files" in nginx. It allows for checking the disk for the existence of a file and serving it directly instead of having to handle static assets in the Go app.
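A minimal sketch of that pattern (the document root and app port are assumptions):

    server {
        listen 80;
        root /var/www/static;

        location / {
            # Serve the file from disk if it exists,
            # otherwise pass the request to the Go app.
            try_files $uri @goapp;
        }

        location @goapp {
            proxy_pass http://127.0.0.1:8080;
        }
    }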
This has been asked a few times before[1] but for posterity:
It depends.
Out of the box, putting nginx in front as a reverse proxy is going to give you:
Access logs
Error logs
Easy SSL termination
SPDY support
gzip support
Easy ways to set HTTP headers for certain routes in a couple of lines
Very fast static asset serving (if you're serving off S3/etc. though, this isn't that relevant)
The Go HTTP server is very good, but you will need to reinvent the wheel to do some of these things (which is fine: it's not meant to be everything to everyone).
I've always found it easier to put nginx in front—which is what it is good at—and let it do the "web server" stuff. My Go application does the application stuff, and only the bare minimum of headers/etc. that it needs to. Don't look at putting nginx in front as a "bad" thing.
Further, to extend on my answer there, there's also the question of crash resilience: your Go application isn't restricted by a configuration language and can do a lot of things.
Some of these things may crash your program. Having nginx (or HAProxy, or Varnish, etc.) as a reverse proxy can give you some request buffering (to allow your program to restart) and/or serve stale content from its local cache (i.e. your static home page), which may be better than having the browser time out and show a "cannot connect to server" error.
On the other hand, if you're building small internal services, 'naked' Go web servers with your own logging library can be easier to manage (in terms of ops).
If you do want to keep everything in your Go program, look at gorilla/handlers for gzip, logging and proxy header middleware, and lumberjack for log rotation (else you can use your system's logging tools).
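A rough sketch of that all-Go setup, assuming gorilla/handlers and natefinch/lumberjack are what you reach for (the log path and rotation size are made up):

    package main

    import (
        "fmt"
        "net/http"

        "github.com/gorilla/handlers"
        "gopkg.in/natefinch/lumberjack.v2"
    )

    func handler(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintf(w, "Hi there, I love %s!", r.URL.Path[1:])
    }

    func main() {
        // Rotating access log, so no external web server is needed for logging.
        accessLog := &lumberjack.Logger{
            Filename: "/var/log/myapp/access.log", // made-up path
            MaxSize:  100,                         // megabytes before rotation
        }

        mux := http.NewServeMux()
        mux.HandleFunc("/", handler)

        // Wrap the mux with gzip compression and Apache-style access logging.
        http.ListenAndServe(":8080", handlers.CompressHandler(handlers.LoggingHandler(accessLog, mux)))
    }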

Assign IPs to programs/processes

I need to assign different IP addresses to different processes (mostly PHP & Ruby programs) running on my Linux server. They will be making queries to various servers, including the situation where processes connecting to the same external server should have different IPs.
How can this be achieved?
Any option (system wide, or PHP/Ruby-specific, using proxy servers etc) will suit me.
The processes bind sockets (both incoming and outgoing) to an interface (or multiple interfaces), addressable by IP address, with various ports. In order to have them directly addressable by different IP addresses, you must have them bind their sockets to different NICs (virtual or hardware).
You could point each process to a proxy (configure the hostname of the server to be queried to be a different proxy for each process), in which case the external server will see the different IPs of the proxies. Otherwise, if you could directly configure the processes to use different NICs for their communications, that would be ideal.
You may need to make changes to the code to make this configurable (very often, programmers create outgoing TCP connections with convenience functions without specifying the NIC they will use, as they typically don't care). In PHP, you can use socket_bind to bind the endpoint to a specific NIC; see the first example in the docs for socket_bind.
As per @LeonardoRick's request, I'm providing the details of the solution that I ended up with.
Say, I have a server with 172.16.0.1 and 172.16.0.2 IP addresses.
I set up nginx (on the same machine) with the configuration that was looking somewhat like this:
server {
    # NEVER EXPOSE THIS SERVER TO THE INTERNET, MAKE SURE PORT 10024 is not available from outside
    listen 127.0.0.1:10024;

    # block access from outside on nginx level as well
    allow 127.0.0.1;
    deny all;

    # actual proxy rules
    location ~* ^/from-172-16-0-1/http(s?)\:\/\/(.*) {
        proxy_bind 172.16.0.1;
        proxy_pass http$1://$2?$args;
    }

    location ~* ^/from-172-16-0-2/http(s?)\:\/\/(.*) {
        proxy_bind 172.16.0.2;
        proxy_pass http$1://$2?$args;
    }
}
(Actually, I cannot remember all the details now; this code is 'from the whiteboard' rather than an actual working config, but it should represent all the key ideas. Check the regexes before deployment.)
Double-check that port 10024 is firewalled and not accessible from outside, and add extra authentication if necessary, especially if you are running Docker.
This nginx setup makes it possible to run HTTP requests like http://127.0.0.1:10024/from-172-16-0-2/https://example.com/some-URN/object?argument1=something
Once it receives a request, nginx fetches the HTTP response from the requested URL using the IP specified by the corresponding proxy_bind directive.
Then - as I was running in-house or open-source software - I simply configured it (or altered its code) so it would perform requests like the one above instead of the original https://example.com/some-URN/object?argument1=something.
All the management - which IP should be used at the moment - was also done by 'my' software; it simply selected the necessary /from-172-16-0-XXX/ endpoint according to its business logic.
That worked very well for my original question/task. However, this may not be suitable for other applications where it is not possible to alter the request URLs, although a similar approach with some kind of proxy may still work for those cases.
(If you are not familiar with nginx, there are some starting guides here and here)

How can I tell nginx to silently ignore requests that don't match and let them time out instead of giving 404

I have a server block listening on port 80 for requests to a specific server name, along with some location directives. How can I make nginx treat any request that doesn't match as if it hadn't received it, i.e. let it time out? Currently those requests are answered with a 404 error.
There is a way to ignore every request and tell nginx to respond with nothing:
server {
    listen 80 default_server;
    return 444;
}
Documented here: http://nginx.org/en/docs/http/ngx_http_rewrite_module.html
The non-standard code 444 closes a connection without sending a response header.
How can I make nginx treat any request that doesn't match as if it hadn't received it, that is, let it time out?
You can't unwind the clock. To find out what server name is being requested (from TLS-SNI and/or Host header), nginx must first accept the connection (which will cause the operating system's TCP/IP stack to send a SYN-ACK packet back to the requestor, in order to advance the TCP handshake). Once the connection has been accepted, the other side knows something is listening. So you've already foregone the opportunity for a connection timeout.
Having irrevocably lost that opportunity, the next opportunity you have is to hit the socket receive timeout. Since the client will be waiting for the server to respond with something, you could theoretically trigger its receive timeout by not responding at all. There is no mechanism in standard nginx currently to tell it to sit there and do nothing for a while, but if you are willing to involve another piece of software as well, then you could use the proxy_pass directive or similar to offload that responsibility to another server. Beware, however, that this will keep some amount of operating system resources in use by your server for the life of the connection. Nginx operates on sockets, not raw packets, and the operating system manages those sockets.
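For illustration only, an untested sketch of that offloading idea; 127.0.0.1:9999 is a made-up address where you would have to run something that accepts connections and then never answers:

    server {
        listen 80 default_server;

        location / {
            # Hand the request to a local "tarpit" that never responds;
            # raise the read timeout so nginx itself doesn't give up and
            # return a 504 to the client too early.
            proxy_pass http://127.0.0.1:9999;
            proxy_read_timeout 600s;
        }
    }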
Having answered the question as asked, I think it's better to question the premise of the question. Why do you want the client to time out? If your goal is to mess with or "pay back" a malicious attacker, nginx is the wrong tool. This is the space of computer security and you're going to need to use tools that operate at a lower layer on the networking stack. Nginx is designed to be a webserver, not a honeypot.
If your goal is simply to hide the fact that there's a server listening at all, then your goal is impossible to achieve with the way HTTP works currently. Firewalls are only able to achieve ghosting like that by rejecting the connection before it's accepted. Thus, the TCP handshake never proceeds, no SYN-ACK packet gets sent back to the client, and as far as anyone can tell that entire port is a black hole. But it's not possible to do this after determining what server name has been requested. The IP address and port are available from the very first packet; the server name is not.
Someday in the not-so-distant future, HTTP may use UDP instead of TCP, and the request parameters (such as the server name) may be presented up front on the first packet. Unless and until that becomes the norm, however, the situation I describe above will remain.
I'm assuming you are trying to deflect malicious requests. Something like this might work (untested).
Set up a default server (catches any requests that don't match an existing server name), and then redirect the client back to itself:
server {
    listen 80 default_server;
    rewrite ^ http://127.0.0.1/;
}
You'd have to set up a similar catch-all for invalid locations inside your valid server blocks, something like the sketch below. Might be more of a headache than you want.
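For instance, something along these lines inside a real server block (the /app/ location and upstream address are placeholders):

    server {
        listen 80;
        server_name example.com;

        # Known-good locations.
        location /app/ {
            proxy_pass http://127.0.0.1:8080;
        }

        # Anything that doesn't match a known location gets bounced
        # back at the client, like the default_server above.
        location / {
            rewrite ^ http://127.0.0.1/;
        }
    }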
I don't know how useful this would really be in practice. Might be better to consider fail2ban or some other tool that can monitor your logs and ban clients at the firewall.
