I'm trying to set up a proxy_pass while also using a socks5 proxy. I can access my backing service with curl using the following:
curl -x socks5h://localhost:8001 -svo /dev/null -I http://[abcd:1234::]:8000
So what I've currently got in my nginx config which doesn't work is:
location / {
proxy_pass http://[abcd:1234::]:8000;
proxy_redirect http://localhost:8001 /;
}
It also seems like nginx has no notion of the ALL_PROXY/HTTP(S)_PROXY environment variables that other applications can use.
Any idea how I can get this to work?
I did find a related question - socks5 proxy/tunnel for nginx upstream? - but it's now 6 years old and I'm not sure it still works.
Why doesn't it work?
To my knowledge, proxy_pass, proxy_redirect and other functionality in the ngx_http_proxy_module is meant to act as an HTTP/HTTPS proxy only. This seems to be confirmed by the 'As a protocol, “http” or “https” can be specified.' note in the proxy_pass documentation (no mention of SOCKS).
The proxy_pass directive tells NGINX to take whatever requests it receives at a specific location and blindly send them to another HTTP server, wait for the response from that server, and return the response to the client. Other directives from the module (for example proxy_redirect) allow slight modifications to requests/responses. What is important is that the entire process is very simple and there is no tunneling (aside from TLS when the upstream is https) or wrapping in additional protocols.
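A minimal sketch of that plain-HTTP forwarding (backend.internal is a hypothetical upstream host):
location / {
    # forward the request as plain HTTP and relay the upstream's response
    proxy_pass http://backend.internal:8000;
    # pass through the Host header the client sent
    proxy_set_header Host $host;
}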
In contrast, SOCKS proxies require implementation of the SOCKS proxy protocol and using it to wrap all the connections. This additional work cannot be performed using the ngx_http_proxy_module.
How to make it work?
Unfortunately, using SOCKS proxies in NGINX does not seem to be supported by any of the core modules (listed here below 'Module reference'). It also does not seem to be a popular use case, so I would not expect support for it in the NGINX core anytime soon. In the question you linked, one of the answers references a third-party nginx module which is also listed on the nginx.com website (the list has no anchors, so Ctrl+F for "SOCKS" and you will find it). The last commit is from 2016, but it may still work.
If you can't change the way you access your backend service, I would say your best bet is either using the module mentioned above (and trying to fix it if it does not work) or writing your own module. Alternatively, you could establish port forwarding to the backend service over the SOCKS proxy and just proxy_pass to your local port. If you have an SSH server running on your backend service host, you could set up a simple proof of concept like this:
ssh <YOUR-SSH-LOGIN>@<BACKEND-HOST> \
-L 8081:localhost:80 \
-o "ProxyCommand=nc -X 5 -x <YOUR-SOCKS-PROXY-IP>:<YOUR-SOCKS-PROXY-PORT> %h %p"
The -L argument creates port forwarding between your local port 8081 and port 80 (http) on the backend host. The -o argument adds a ProxyCommand option which uses netcat to forward traffic over a SOCKS proxy (not all netcat versions support the -X and -x arguments; the one I am using is openbsd-netcat on Arch Linux). After that, you should be able to just proxy_pass to localhost:8081 in NGINX, as in the sketch below. This setup is not very performant and serves only as a proof of concept; if you decide to go this way, you should find another method of forwarding ports over the proxy.
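A minimal location for that (assuming the tunnel above is running):
location / {
    # hand requests to the local end of the SSH tunnel
    proxy_pass http://127.0.0.1:8081;
}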
Finally, in my personal opinion, if you can, you should change the way you access your backend service. If you were the one to set up the connection, then a SOCKS proxy is overkill when all you want to do is connect to a few hosts. If it is a proxy put in place by your company or someone else above you, then I would discuss it with the network administrators.
Related
I have an application that needs to use a proxy (call it proxy1) to access some https endpoints outside of its network. The application doesn't support proxy settings, so I'd like to provide it with a reverse proxy URL, and I would prefer not to provide TLS certs for proxy1, so I would use http for application -> proxy1.
I don't have access to the application host or forward proxy mentioned below, so I cannot configure networking there.
The endpoints the application needs are https, so proxy1 must make its outbound connections via https.
Finally, this whole setup is within a corporate network that requires a forward proxy (call it proxy2) for outbound internet, so my proxy1 needs to chain to proxy2 / use it as a parent.
I tried squid and it worked well for http only, but I couldn't get it to accept http inbound while using https outbound. Squid easily supported the parent proxy2.
I tried haproxy, but had the same result as with squid.
I tried nginx and it did what I wanted with http -> proxy -> https, but it doesn't support a parent proxy. I considered setting up socat as in this answer, or using proxy_pass and proxy_set_header as in this answer, but I can't shake the feeling there's a cleaner way to achieve the requirements.
This doesn't seem like an outlandish setup, is it? Or is there a preferred approach for it? Ideally one using squid or nginx.
You can achieve this without the complexity by using a port forwarder like socat. Just install it on a host to do the forwarding (or locally on the app server if you wish) and create a listener that forwards connections through the proxy server. Then, on your application host, use a local name resolution override to map the FQDN to the forwarder.
So the final config should be the app server using a URI that points to the forwarding server (using its address if no name resolution exists), which has a socat listener that points to the corporate proxy. No reverse proxy required.
socat TCP4-LISTEN:443,reuseaddr,fork \
PROXY:{proxy_address}:{endpoint_fqdn}:443,proxyport={proxy_port}
Just update with your parameters.
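For example, if the endpoint is api.example.com and the forwarder sits at 10.0.0.5 (both hypothetical values), the override on the application host would be a single line in /etc/hosts:
10.0.0.5 api.example.com
The application then connects to https://api.example.com as usual, and socat relays the TCP stream through the corporate proxy using HTTP CONNECT.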
Let's say I have this DNS entry: mysite.sample. I am developing, and have a copy of my website running locally at http://localhost:8080. I want this website to be reachable using the (fake) DNS name http://mysite.sample, without being forced to remember on which port this site is running. I can set up /etc/hosts and nginx to do proxying for that, but... Is there an easier way?
Can I somehow setup a simple DNS entry using /etc/hosts and/or dnsmasq where also a non-standard port (something different than :80/:443) is specified? Without the need to provide extra configuration for nginx?
Or phrased in a simpler way: Is it possible to provide port mappings for dns entries in /etc/hosts or dnsmasq?
DNS has nothing to do with the TCP port. DNS is there to resolve names (e.g. mysite.sample) into IP addresses - kind of like a phone book.
So it's a clear "NO". However, there's another solution, and I'll try to explain it.
When you enter http://mysite.sample:8080 in your browser's URL bar, your client (e.g. the browser) will first try to resolve mysite.sample (via OS calls) to an IP address. This is where DNS kicks in, as DNS is your name resolver. Once that has happened, the job of DNS is finished and the browser continues.
This is where the "magic" in HTTP happens. The browser connects to the resolved IP address on the desired port (by default 80 for http and 443 for https), waits for the connection to be accepted, and then sends the following headers:
GET <resource> HTTP/1.1
Host: mysite.sample:8080
Now the server reads those headers and acts accordingly. Most modern web servers have something called "virtual hosts" (e.g. Apache) or "sites" (e.g. nginx). You can configure multiple vhosts/sites - one for each domain. The web server will then serve the site matching the requested host (which is retrieved by the browser from the URL bar and passed to the server via the Host HTTP header). This is pure HTTP and has nothing to do with TCP.
If you can't change the port of your origin service (in your case 8080), you might want to set up a new web server in front of your service. This is also called a reverse proxy. I recommend reading the NGINX Reverse Proxy docs, but you can also use Apache or any other modern web server.
For nginx, just set up a new server block and proxy it to your service:
server {
    listen 80;
    server_name mysite.sample;
    # forward everything to the local development server
    location / { proxy_pass http://127.0.0.1:8080; }
}
There is a mechanism in DNS for discovering the port that a service uses: the Service Record (SRV), which has the form
_service._proto.name. TTL class SRV priority weight port target.
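For illustration, a record advertising an HTTP service on port 8080 could look like this (all names hypothetical):
_http._tcp.mysite.sample. 3600 IN SRV 10 5 8080 devhost.sample.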
However, to make use of this record you would need to have an application that referenced that record prior to making the call. As Dominique has said, this is not the way HTTP works.
I have written a previous answer that explains some of the background to this, and why this isn't part of the HTTP standard (the article discusses WS, but the underlying discussion suggested adding this to the HTTP protocol directly).
Edited to add -
There was actually a draft IETF document exploring an official way to do this, but it never made it past draft stage.
This document specifies a new URI scheme called http+srv which uses a DNS SRV lookup to locate a HTTP server.
There is a specific SO answer here which points to an interesting post here.
I have this installed from this guide: https://www.linode.com/docs/websites/varnish/use-varnish-and-nginx-to-serve-wordpress-over-ssl-and-http-on-debian-8
Is there any advantage to using the PROXY protocol from
https://info.varnish-software.com/blog/five-steps-to-secure-varnish-with-hitch-and-lets-encrypt
with this setup? (I have Varnish 5.)
If so, what modifications are needed to the setup in the Linode link above?
best.
To begin with, it will not be possible to use PROXY protocol in the linked setup.
Nginx supports the PROXY protocol only on the client side (i.e. when there is another proxy forwarding requests to it). It doesn't support the PROXY protocol with proxy_pass, where it would make more sense for a Varnish + Nginx SSL setup. Sorry about that.
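For completeness, the supported (client-side) half looks roughly like this in nginx - a minimal sketch, with a hypothetical balancer subnet:
server {
    # accept connections prefixed with a PROXY protocol header
    listen 8080 proxy_protocol;
    # trust and use the client address carried in that header
    set_real_ip_from 192.168.0.0/16;
    real_ip_header proxy_protocol;
}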
I'm running uWSGI behind Nginx and have been using proxy_pass to get Nginx to talk to uWSGI. Is there any benefit to switching to uwsgi_pass? If so, what is it?
uwsgi_pass uses the uwsgi protocol. proxy_pass uses normal HTTP to contact the uWSGI server. The uWSGI docs claim that this protocol is better and faster, and that it can benefit from all of uWSGI's special features.
Are there any real benefits? Yes. You can tell uWSGI what type of data you are sending and which uWSGI plugin should be invoked to generate the response. With plain HTTP (proxy_pass) you won't get that. You can find more on that in the uWSGI docs.
But even if there aren't any documented benefits of using the uwsgi protocol instead of HTTP for your case, you should use the uwsgi protocol if you can, because uwsgi is the native protocol of the uWSGI server and it simply fits better here.
If you want to use the uwsgi protocol, you must change the http-socket parameter in your uWSGI start script to socket.
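A minimal sketch of both sides (127.0.0.1:3031 and myapp:app are hypothetical placeholders):
# uWSGI side: listen with the uwsgi protocol instead of HTTP
uwsgi --socket 127.0.0.1:3031 --module myapp:app

# nginx side:
location / {
    include uwsgi_params;       # parameter set shipped with nginx
    uwsgi_pass 127.0.0.1:3031;  # same address as the uWSGI socket above
}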
I need to assign different IP addresses to different processes (mostly PHP & Ruby programs) running on my Linux server. They will be making queries to various servers, including the situation where processes connecting to the same external server should have different IPs.
How can this be achieved?
Any option (system wide, or PHP/Ruby-specific, using proxy servers etc) will suit me.
Processes bind sockets (both incoming and outgoing) to an interface (or multiple interfaces), addressable by IP address, on various ports. To have them directly addressable by different IP addresses, you must have them bind their sockets to different NICs (virtual or hardware).
You could point each process to a proxy (configure the hostname of the server to be queried to be a different proxy for each process), in which case the external server will see the different IPs of the proxies. Otherwise, if you could directly configure the processes to use different NICs for their communications, that would be ideal.
You may need to make changes to the code to make this configurable (very often, programmers create outgoing TCP connections with convenience functions without specifying the NIC they will use, as they typically don't care). In PHP, you can use socket_bind to bind the endpoint to a NIC, e.g. see the first example in the docs for socket_bind.
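As a quick illustration of the concept from the shell (outside PHP), curl can bind its outgoing connection to a specific local address in the same way (172.16.0.1 is a hypothetical local IP):
# make the request leave from the 172.16.0.1 address
curl --interface 172.16.0.1 https://example.com/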
As per @LeonardoRick's request, I'm providing the details for the solution that I ended up with.
Say, I have a server with 172.16.0.1 and 172.16.0.2 IP addresses.
I set up nginx (on the same machine) with the configuration that was looking somewhat like this:
server {
    # NEVER EXPOSE THIS SERVER TO THE INTERNET, MAKE SURE PORT 10024 is not available from outside
    listen 127.0.0.1:10024;

    # block access from outside on nginx level as well
    allow 127.0.0.1;
    deny all;

    # actual proxy rules
    location ~* ^/from-172-16-0-1/http(s?)\:\/\/(.*) {
        proxy_bind 172.16.0.1;
        proxy_pass http$1://$2?$args;
    }

    location ~* ^/from-172-16-0-2/http(s?)\:\/\/(.*) {
        proxy_bind 172.16.0.2;
        proxy_pass http$1://$2?$args;
    }
}
(I cannot remember all the details now - this code is 'from the whiteboard', not an actual working config - but it should convey all the key ideas. Check the regexes before deployment.)
Double-check that port 10024 is firewalled and not accessible from outside, and add extra authentication if necessary - especially if you are running Docker.
This nginx setup makes it possible to make HTTP requests like http://127.0.0.1:10024/from-172-16-0-2/https://example.com/some-URN/object?argument1=something
Once it receives a request, nginx fetches the HTTP response from the requested URL using the source IP specified by the corresponding proxy_bind directive.
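For example, from the same machine:
curl 'http://127.0.0.1:10024/from-172-16-0-2/https://example.com/some-URN/object?argument1=something'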
Then - as I was running in-house or open-source software - I simply configured it (or altered its code) so it would perform requests like the one above instead of the original https://example.com/some-URN/object?argument1=something.
All the management of which IP should be used at any moment was also done by 'my' software; it simply selected the necessary /from-172-16-0-XXX/ endpoint according to its business logic.
That worked very well for my original question/task, though it may not be suitable for applications where the request URLs cannot be altered. In those cases, a similar approach of setting up some kind of proxy may still work.
(If you are not familiar with nginx, there are some starting guides here and here)