ActiveMQ and NGINX

I can't get my head wrapped around this: we have a requirement to run ActiveMQ hidden behind an NGINX proxy, but I have no idea how to set it up.
For ActiveMQ I've set up different ports for all the protocols:
<transportConnectors>
    <!-- DOS protection, limit concurrent connections to 1000 and frame size to 100MB -->
    <transportConnector name="openwire" uri="tcp://0.0.0.0:62716?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
    <transportConnector name="amqp" uri="amqp://0.0.0.0:5782?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
    <transportConnector name="stomp" uri="stomp://0.0.0.0:62713?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
    <transportConnector name="mqtt" uri="mqtt://0.0.0.0:1993?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
    <transportConnector name="ws" uri="ws://0.0.0.0:62714?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
</transportConnectors>
And the nginx configuration looks like this:
server {
    listen *:61616;
    server_name 192.168.210.15;
    index index.html index.htm index.php;
    access_log /var/log/nginx/k1.access.log combined;
    error_log /var/log/nginx/k1.error.log;

    location / {
        proxy_pass http://localhost:62716;
        proxy_read_timeout 90;
        proxy_connect_timeout 90;
        proxy_redirect off;
        proxy_method stream;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Proxy "";
    }
}
(and the same for the other redefined ports)
I thought that this would expose the default ActiveMQ ports and nginx would map them to the new definitions, but this doesn't work.
For communication we're using the Node.js library amqp10, version 3.1.4.
All the ports are open on the server; when using the standard ports without the nginx proxy, it works.
Any idea what I am missing? Thanks for any thoughts.

You can hide ActiveMQ behind an nginx proxy, even if you are trying to proxy OpenWire for an AMQP client.
If you are adding your configuration inside the http block, it's bound to fail.
But note that nginx supports not only http but also a stream (TCP) block.
If you proxy ActiveMQ over TCP, then whatever happens at the HTTP level won't matter and you will still be able to proxy.
Of course, you lose the flexibility that comes with HTTP.
Open your nginx.conf (at /etc/nginx/nginx.conf).
This will have an http block, which in turn will have some include statements.
Outside this http block, add another include statement.
$ pwd
/etc/nginx
$ cat nginx.conf | tail -1
include /etc/nginx/tcpconf.d/*;
The include statement directs nginx to look for additional configuration in the directory /etc/nginx/tcpconf.d/.
Add desired configuration in this directory. Let's call it amq_stream.conf.
$ pwd
/etc/nginx/tcpconf.d
$ cat amq_stream.conf
stream {
    upstream amq_server {
        # activemq server
        server <amq-server-ip>:<port, e.g. 61616>;
    }
    server {
        listen 61616;
        proxy_pass amq_server;
    }
}
Restart your nginx service.
$ sudo service nginx restart
You are done.
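As a sketch for the specific ports in the question (assuming the broker runs on the same host as nginx, as the question's proxy_pass http://localhost:62716 suggests, and that you want to expose the standard ActiveMQ ports externally), the stream block could cover all five connectors like this:
stream {
    # upstreams point at the redefined broker ports from the question
    upstream amq_openwire { server localhost:62716; }
    upstream amq_amqp     { server localhost:5782;  }
    upstream amq_stomp    { server localhost:62713; }
    upstream amq_mqtt     { server localhost:1993;  }
    upstream amq_ws       { server localhost:62714; }

    # listen on the standard ActiveMQ ports and tunnel the raw TCP streams
    server { listen 61616; proxy_pass amq_openwire; }
    server { listen 5672;  proxy_pass amq_amqp;     }
    server { listen 61613; proxy_pass amq_stomp;    }
    server { listen 1883;  proxy_pass amq_mqtt;     }
    server { listen 61614; proxy_pass amq_ws;       }
}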

Nginx is an HTTP server that is capable of proxying WebSocket and HTTP.
But you are trying to proxy OpenWire for an AMQP client, which does not work with Nginx or Node.js.
So if you really need to use Nginx, you need to change the client protocol to STOMP or MQTT over WebSocket, then set up a WebSocket proxy in Nginx.
Here is an Nginx example with TLS; more details at https://www.nginx.com/blog/websocket-nginx/
upstream websocket {
    server amqserver.example.com:62714;
}

server {
    listen 8883 ssl;
    ssl_certificate /etc/nginx/ssl/certificate.cer;
    ssl_certificate_key /etc/nginx/ssl/key.key;

    location / {
        proxy_pass http://websocket;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 120s;
    }
}
However, since you would have to rewrite all the client code, I would rethink the Nginx idea. There is other software and hardware that can front TCP-based servers and do TLS termination and whatnot.

Related

With NGINX upstreams, is it possible to proxy pass to both HTTP and HTTPS backends in the same upstream?

Suppose I want to proxy some portion of my traffic to a remote backend instead of the local listener on the server. For example:
upstream backends {
    server 127.0.0.1:8080 weight=20;                       # local process (HTTP)
    server other-remote-backend.company-internal.com:443;  # remote server (HTTPS)
}

location / {
    # ...other stuff...
    proxy_pass http://backends;
}
In the above configuration, roughly one in every 20 requests will be routed to other-remote-backend.company-internal.com:443, which is only listening for SSL, so those plain-HTTP proxied requests fail.
Is there a way for an upstream server to define its own protocol scheme? Right now this seems undoable without changing the local listener process to be SSL as well (which is a less than desirable change to make).
Thanks
As is usually the case, I've figured out my own problem, and it's quite obvious. If you're trying to accomplish the above, the trick is quite simple.
First create a new NGINX virtual host that listens on HTTP and proxy_passes to your remote HTTPS backend, like so:
/etc/nginx/sites-available/remote_proxy
upstream remote {
    server other-remote-backend.company-internal.com:443;
}

server {
    # other-remote-backend.company-internal.com:443;
    listen 8181;
    server_name my_original_server_name;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_pass https://remote;
    }
}
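One caveat: nginx does not send SNI to HTTPS upstreams by default, and it does not verify the upstream certificate unless told to. If the remote backend serves multiple certificates, something like this inside the location block above may also be needed (a sketch, using the same backend name):
location / {
    proxy_set_header Host $host;
    proxy_ssl_server_name on;                                  # send SNI to the HTTPS upstream
    proxy_ssl_name other-remote-backend.company-internal.com;  # name to present during the TLS handshake
    proxy_pass https://remote;
}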
You can now use just http for your upstreams in the original configuration listening on 443:
/etc/nginx/sites-available/default
upstream backends {
    server 127.0.0.1:8080 weight=20;  # local process (HTTP)
    server 127.0.0.1:8181;            # local nginx proxying to the HTTPS remote
}

location / {
    # ...other stuff...
    proxy_pass http://backends;
}
Now just enable your new site and restart nginx:
$ ln -s /etc/nginx/sites-available/remote_proxy /etc/nginx/sites-enabled/ && systemctl restart nginx

How to NGINX reverse proxy to backend server which has a self signed certificate?

I have a small network with a webserver and an OpenVPN Access Server (with its own web interface). I have only 1 public IP and want to be able to point subdomains to websites on the webserver (e.g. website1.domain.com, website2.domain.com) and point the subdomain vpn.domain.com to the web interface of the OpenVPN Access Server.
After some googling I think the way to go is to set up a proxy server. NGINX seems to be able to do this with its proxy_pass function. I got it working for HTTP backend URLs (websites), but it does not work for the OpenVPN Access Server web interface, which forces the use of HTTPS. I'm fine with HTTPS and prefer to use it also for the websites hosted on the webserver. By default a self-signed certificate is installed, and I want to use self-signed certificates for the other websites as well.
How can I "accept" self-signed certificates for the backend servers? I found that I need to generate a certificate and define it in the NGINX reverse proxy config, but I do not understand how this works, as for example my OpenVPN server already has an SSL certificate installed. I'm able to visit the OpenVPN web interface via https://direct.ip.address.here/admin but get a "This site cannot deliver a secure connection" page when I try to access the web interface via Chrome.
My NGINX reverse proxy config:
server {
    listen 443;
    server_name vpn.domain.com;
    ssl_verify_client off;

    location / {
        # app1 reverse proxy follows
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass https://10.128.20.5:443;
        proxy_ssl_verify off;
    }

    access_log /var/log/nginx/access_log.log;
    error_log /var/log/nginx/access_log.log;
}

server {
    listen 80;
    server_name website1.domain.com;

    location / {
        # app1 reverse proxy follows
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://10.128.11.20:80;
    }

    access_log /var/log/nginx/access_log.log;
    error_log /var/log/nginx/access_log.log;
}
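One thing I notice while re-reading my config: the vpn.domain.com block listens on 443 but never enables TLS on the listener itself, which may be what Chrome is complaining about. My guess (untested) is that NGINX has to terminate TLS with its own certificate, something like this (certificate paths are placeholders):
server {
    listen 443 ssl;  # enable TLS on the listener itself
    server_name vpn.domain.com;
    ssl_certificate /etc/nginx/ssl/vpn.domain.com.crt;       # placeholder path
    ssl_certificate_key /etc/nginx/ssl/vpn.domain.com.key;   # placeholder path

    location / {
        proxy_pass https://10.128.20.5:443;
        proxy_ssl_verify off;  # accept the backend's self-signed certificate
    }
}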
A nearby thought...
Maybe NGINX is not the right tool for this at all (now or in the long term)? Let's assume I can fix the certificate issue I currently have and we need more backend web servers to handle the traffic: is it possible to scale the NGINX proxy as well, like a cluster or load balancer or something? Should I look for a completely different tool?

Icecast2 running under nginx not able to connect

I want to start by saying that I've looked all over the place to find an answer to this problem, and it just seems like either nobody else ran into it or nobody is doing it. I recently installed icecast2 on my Debian server. I'm completely able to broadcast to my server from my local network by connecting to its local IP on port 8000, and I can hear the stream over the internet on radio.example.com since I proxy it with nginx; so far no problems at all. The problem arises when I want to broadcast to the domain I set up in nginx, stream.example.com.
I have two theories: one is that the proxy is not giving the source IP to icecast, so it thinks the broadcast is coming from 127.0.0.1; the other is that nginx is doing something strange with the data stream and thus not delivering the correct format to icecast.
Any thoughts? Thanks in advance!
Here is the nginx config
server {
    listen 80;
    listen [::]:80;
    server_name radio.example.com;

    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Server $host;
    proxy_set_header X-Real-IP $remote_addr;

    location / {
        proxy_pass http://127.0.0.1:8000/radio;
        subs_filter_types application/xspf+xml audio/x-mpegurl audio/x-vclt text/css text/html text/xml;
        subs_filter ':80/' '/' gi;
        subs_filter '#localhost' '#stream.example.com' gi;
        subs_filter 'localhost' $host gi;
        subs_filter 'Mount Point ' $host gi;
    }
}

server {
    listen 80;
    listen [::]:80;
    server_name stream.example.com;

    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Server $host;
    proxy_set_header X-Real-IP $remote_addr;

    location / {
        proxy_pass http://localhost:8000/;
        subs_filter_types application/xspf+xml audio/x-mpegurl audio/x-vclt text/css text/html text/xml;
        subs_filter ':8000/' ':80/' gi;
        subs_filter '#localhost' '#stream.example.com' gi;
        subs_filter 'localhost' $host gi;
        subs_filter 'Mount Point ' $host gi;
    }
}
And this is what I get in the icecast error.log:
[2018-08-10 14:15:45] INFO source/get_next_buffer End of Stream /radio
[2018-08-10 14:15:45] INFO source/source_shutdown Source from 127.0.0.1 at "/radioitavya" exiting
Not sure how much of this is directly relevant to the OP's question, but here are a few snippets from my config.
These are the basics of my block for serving streams to clients over SSL on port 443.
In the first location block, any requests with a URI other than /ogg, /128, /192 or /320 are rewritten to prevent clients accessing any output from the Icecast server other than the streams themselves.
server {
    listen 443 ssl http2;
    server_name stream.example.com;
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    location / {
        rewrite ~*(ogg) https://stream.example.com/ogg last;
        rewrite ~*([0-1][0-5]\d) https://stream.example.com/128 last;
        rewrite ~*(?|([1][6-9]\d)|([2]\d\d)) https://stream.example.com/192 last;
        rewrite ~*([3-9]\d\d) https://stream.example.com/320 break;
        return https://stream.example.com/320;
    }

    location ~ ^/(ogg|128|192|320)$ {
        proxy_bind $remote_addr transparent;
        set $stream_url http://192.168.100.100:8900/$1;
        types { }
        default_type audio/mpeg;
        proxy_pass_request_headers on;
        proxy_set_header Access-Control-Allow-Origin *;
        proxy_set_header Host $host;
        proxy_set_header Range bytes=0-;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_buffering off;
        tcp_nodelay on;
        proxy_pass $stream_url;
    }
}
Setting proxy_bind with the transparent flag:
allows outgoing connections to a proxied server originate from a
non-local IP address, for example, from a real IP address of a client
This addresses the issues of local IP addresses in your logs/stats instead of client IPs, for this to work you also need to reconfigure your kernel routing tables to capture the responses sent from the upstream server and route them back to Nginx.
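A hedged sketch of that kernel-side plumbing, following the pattern in NGINX's IP-transparency documentation (the Icecast address/port match the config above; adapt the rest to your network):
# run as root on the Nginx host
# mark reply packets coming back from the Icecast upstream (192.168.100.100:8900)
iptables -t mangle -A PREROUTING -p tcp -s 192.168.100.100 --sport 8900 -j MARK --set-xmark 0x1/0xffffffff
# deliver marked packets to the loopback so Nginx receives the responses
# addressed to the (spoofed) client IPs
ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100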
This requires root access and a reasonable understanding of Linux networking configuration, which I appreciate not everyone has. I also appreciate not everyone who uses Icecast and might want to reverse proxy will read this. A much better solution would be making Icecast more Nginx friendly, so I had a go.
I cloned Icecast from github and had a look over the code. I've maybe missed some but these lines looked relevant to me:
./src/logging.c:159: client->con->ip,
./src/admin.c:700: xmlNewTextChild(node, NULL, XMLSTR(mode == OMODE_LEGACY ? "IP" : "ip"), XMLSTR(client->con->ip));
For servers which do not support the PROXY protocol, the usual Nginx method of passing the client IP upstream is via the X-Real-IP header. Icecast seems to be using the value of client->con->ip for logging listener IPs. Let's change things up a bit. I added this:
const char *realip;

/* prefer the client IP passed along by the proxy; fall back to the socket IP */
realip = httpp_getvar (client->parser, "x-real-ip");
if (realip == NULL)
    realip = client->con->ip;
And changed the previous lines to this:
./src/logging.c:163: realip,
./src/admin.c:700: xmlNewTextChild(node, NULL, XMLSTR(mode == OMODE_LEGACY ? "IP" : "ip"), XMLSTR(realip));
Then I built Icecast from source as per the docs. The proxy_set_header X-Real-IP $remote_addr; directive in my Nginx conf passes the client IP. If you have additional upstream servers also handling the request, you will need to add some set_real_ip_from directives specifying each server's IP, set real_ip_recursive on; and use $proxy_add_x_forwarded_for, which captures the IP address of each server that handles the request.
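A sketch of what that can look like on the Icecast-facing Nginx (the trusted proxy address is a placeholder):
# trust X-Forwarded-For from the intermediate proxy at 192.168.1.10 (placeholder)
set_real_ip_from 192.168.1.10;
real_ip_header X-Forwarded-For;
real_ip_recursive on;

# pass the resolved client IP and the full proxy chain upstream
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;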
Fired up my new Icecast build and this seems to work perfectly. If the X-Real-IP header is set then Icecast logs it as the listener IP, and if not then it logs the client request IP, so it should work for both reverse proxy and normal setups. Seems too simple, maybe I missed something #TBR?
OK, so you should now have working listener streams served over SSL with correct stats/logs. You have done the hard bit. Now let's stream something to them!
Since the addition of the stream module to Nginx, handling incoming connections is simple regardless of whether or not they use PUT/SOURCE.
If you specify a server within a stream directive Nginx will simply tunnel the incoming stream to the upstream server without inspecting or modifying the packets. Nginx streams config lesson 101 is all you need:
stream {
    server {
        listen pub.lic.ip:port;
        proxy_pass ice.cast.ip:port;
    }
}
I guess one problem unsuspecting people may encounter with SOURCE connections in Nginx is specifying the wrong port in their Nginx config. Don't feel bad, Shoutcast v1 is just weird. The point to remember is:
Instead of the port you specify in the client encoder it will actually attempt to connect to port+1
So if you were using port 8000 for incoming connections, either set the port to 7999 in client encoders using the Shoutcast v1 protocol, or set up your Nginx stream directives with two blocks, one for port 8000 and one for port 8001, as sketched below.
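A minimal sketch of the two-block variant (IPs and ports are placeholders, as above):
stream {
    # normal Icecast SOURCE/PUT connections
    server {
        listen pub.lic.ip:8000;
        proxy_pass ice.cast.ip:8000;
    }
    # Shoutcast v1 encoders actually connect to port+1
    server {
        listen pub.lic.ip:8001;
        proxy_pass ice.cast.ip:8001;
    }
}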
Your Nginx install must be built with the stream module, it's not part of the standard build. Unsure? Run:
nginx -V 2>&1 | grep -qF -- --with-stream && echo ":)" || echo ":("
If you see a smiley face you are good to go. If not you'll need to build Nginx and include it. Many repositories have an nginx-extras package which includes the stream module.
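On Debian/Ubuntu, for example, that would be something like (package name may vary by repository):
sudo apt-get install nginx-extras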
Almost finished, all that we need now is access to the admin pages. I serve these from https://example.com/icecast/ but Icecast generates all the URIs in the admin page links using the root path, not including icecast/ so they won't work. Let's fix that using the Nginx sub filter module to add icecast/ to the links in the returned pages:
location /icecast/ {
    sub_filter_types text/xhtml text/xml text/css;
    sub_filter 'href="/' 'href="/icecast/';
    sub_filter 'url(/' 'url(/icecast/';
    sub_filter_once off;
    sub_filter_last_modified on;
    proxy_set_header Accept-Encoding "";
    proxy_pass http://ice.cast.ip:port/;
}
The trailing slash at the end of proxy_pass http://ice.cast.ip:port/; is vitally important for this to work.
If a proxy_pass directive is specified just as server:port then the full original client request URI will be appended and passed to the upstream server. If the proxy_pass has anything URI appended (even just /) then Nginx will replace the part of the client request URI which matches the location block (in this case /icecast/) with the URI appended to the proxy_pass. So by appending a slash a request to https://example.com/icecast/admin/ will be proxied to http://ice.cast.ip:port/admin/
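To illustrate the difference (a sketch with the same placeholder address):
# no URI part: /icecast/admin/ is passed upstream unchanged as /icecast/admin/
proxy_pass http://ice.cast.ip:port;

# URI part "/": the matched /icecast/ prefix is replaced, so /icecast/admin/ becomes /admin/
proxy_pass http://ice.cast.ip:port/;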
Finally I don't want my admin pages accessible to the world, just my IP and the LAN, so I also include these in the location above:
allow 127.0.0.1;
allow 192.168.1.0/24;
allow my.ip.add.ress;
deny all;
That's it.
sudo nginx -s reload
Have fun.
tl;dr - Don't reverse proxy Icecast.
Icecast, for various reasons, is better not reverse proxied. It is a purpose-built HTTP server, and generic HTTP servers tend to have significant issues with the intricacies of continuous HTTP streaming.
This has been repeatedly answered. People like to try anyway and invariably fail in various ways.
If you need it on port 80/443, then run it on those ports directly.
If you already have something running on port 80/443, then use another of the remaining 2^64 IPv6 addresses in your /64, and if you are still using legacy IP, get another address, e.g. by spinning up a virtual server in the cloud.
Need HTTPS? Icecast supports TLS (on Debian and Ubuntu make sure to install the official Xiph.org packages, as the distro packages come without OpenSSL support).
Make sure to put both the private and public key into one file.
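For example, a sketch of building that combined file (all paths are placeholders for your actual key, certificate and Icecast config location):
# concatenate private key and certificate into the single PEM file Icecast expects,
# then point <ssl-certificate> in icecast.xml at it
cat /etc/ssl/private/stream.key /etc/ssl/certs/stream.crt > /etc/icecast2/icecast.pem
chmod 600 /etc/icecast2/icecast.pem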
This line....
subs_filter '#localhost' '#stream.example.com' gi;
Should probably be....
subs_filter '#localhost' '#example.com' gi;
I am not familiar with nginx, so my best guess is that this line links radio.example.com to the main site at example.com. By adding stream.example.com you are confusing it by directing it to a site that doesn't exist.
I got this from a config file posted here. Anyway, it wouldn't hurt to try it.

nginx reverse proxy for docker service

I have a simple reverse proxy nginx.conf:
events {
    worker_connections 1024;
}

http {
    gzip on;

    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $host:$server_port;

    server {
        listen 80;
        server_name app.local;

        location / {
            proxy_pass http://localhost:3000;
        }
    }
}
localhost:3000 is a Docker Swarm (1.13) service running a Node app. Everything works great initially when I request app.local. However, whenever I update the service (containers are redeployed):
docker service update --force app
Nginx will temporarily think something is wrong and doesn't respond to requests to app.local for 30 seconds or so. This is all running on a CentOS 7 server.
I've configured my docker service to redeploy via rolling updates, so from the outside, port 3000 never appears to go down. I can continually request app.local:3000, bypassing nginx, without any perceived downtime.
Nginx is NOT running in a docker container. I've gotta be missing some sort of configuration option.
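For reference, this is the variant I'm currently experimenting with; my (unverified) understanding is that max_fails and fail_timeout control how long nginx treats a backend as unavailable after a failed connection:
upstream app {
    # retry sooner after a failure instead of marking the backend down for long
    server localhost:3000 max_fails=3 fail_timeout=5s;
}

server {
    listen 80;
    server_name app.local;

    location / {
        proxy_pass http://app;
    }
}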

How does supervisord and nginx handle what tornado port is used?

I am using supervisord to spool up 2 instances of tornado on different ports, and I use nginx as a reverse proxy to these ports. I have noticed that all traffic is being directed to only one port. How does supervisord or nginx decide which instance of tornado is used when a user makes a request to the web service?
nginx config:
http {
    upstream frontends {
        server xx.xxx.x.xxx:8001;
        server xx.xxx.x.xxx:8002;
    }

    server {
        listen 80;
        server_name xx.xxx.x.xxx;

        location / {
            proxy_pass_header Server;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Scheme $scheme;
            proxy_pass http://frontends;
        }
    }
}
From the nginx docs:
Requests are distributed according to the servers in round-robin manner with respect of the server weight.
By default, servers are given equal weight. Are you sure all requests are going to one port?
Also note that supervisord's role is simply process management; only nginx decides how to distribute traffic across the ports you've configured.
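For example, weights skew the round-robin distribution (a sketch using the upstream from the question):
upstream frontends {
    server xx.xxx.x.xxx:8001 weight=3;  # receives roughly 3 of every 4 requests
    server xx.xxx.x.xxx:8002;           # default weight is 1
}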
