NGINX Reverse Proxy a Port Between 2 Servers

Apologies if this is an obvious question; I've been reading up on NGINX and am hoping it's something I can use with my Icecast server.
Essentially I have the following setup:
ipAddress:8080 - Icecast Server (mount point is /stream)
domain.tld - Server running NGINX & hosting a PHP site.
What I'd like to do is take any request to, for example, domain.tld:8000/stream and have it return what is actually served at ipAddress:8080/stream.
Is this something NGINX can handle? Forgive me if I am missing something obvious; presently all I can find are guides on redirecting files to ports, etc.
Thanks!

It is generally not advisable to reverse-proxy Icecast. It breaks an array of things and if not configured properly can bring down your web server.
If you want to run Icecast on port 80, then I've explained this for Debian (and derivatives like Ubuntu) here: http://lists.xiph.org/pipermail/icecast/2015-February/013198.html
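That said, if you do decide to experiment with what the question asks for, a minimal sketch of the nginx side might look like the following (ipAddress stands in for the real Icecast host, and proxy_buffering is switched off because buffering a continuous stream is a common source of trouble):

server {
    listen 8000;
    server_name domain.tld;

    location /stream {
        proxy_pass http://ipAddress:8080/stream;
        proxy_buffering off;           # do not buffer the continuous audio stream
        proxy_set_header Host $host;
    }
}

Keep the caveats above in mind before putting this in front of real listeners.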

Related

How to reverse proxy a TFS server

We're using TFS Server 2017 Update 2 in our local office.
But we need to access the server from home. We tried to use nginx to build a reverse proxy to the TFS server, but failed.
Apache also isn't able to pass the NTLM authorization of TFS.
Does someone know how to do that?
Neither nginx nor Apache can handle NTLM authentication properly. Even using a "stream" server in nginx, it still throws exceptions at times. So the best way that I've found is to write a pure reverse proxy with sockets in NodeJS.
Please try the code here: https://gist.github.com/gekowa/7fdd6fa6db51a7671de5469b3943a9da
The implementation is pretty straightforward: it just double-pipes the local and remote sockets, and everything works fine.
node tcpproxy.js 8080 your_internal_server_address your_internal_port
Best choice: your_internal_port = 8080
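The linked gist is the real code; purely as an illustration of the double-pipe idea, a stripped-down version (argument and error handling kept minimal) looks roughly like this:

// minimal sketch of a double-piped TCP proxy - see the gist above for the full version
// usage: node tcpproxy.js <listen_port> <target_host> <target_port>
const net = require('net');

const listenPort = Number(process.argv[2]);
const targetHost = process.argv[3];
const targetPort = Number(process.argv[4]);

net.createServer((client) => {
  // one outbound connection to the internal server per incoming client
  const remote = net.connect(targetPort, targetHost);
  // pipe both directions so the traffic passes through untouched
  client.pipe(remote);
  remote.pipe(client);
  client.on('error', () => remote.destroy());
  remote.on('error', () => client.destroy());
}).listen(listenPort);

Because nothing inspects or rewrites the HTTP traffic, the NTLM handshake stays bound to a single TCP connection end to end, which is exactly what nginx and Apache break.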

How to keep the session when using nginx as the reverse proxy to many servers with different projects

I am new to Nginx, and I am having trouble with it. We have many projects in different languages and frameworks, and they are deployed on different servers. How do I keep the session for each project separately?
The question is not quite clear, but from what I understood I will try to guide you a bit...
Nginx is a web server which, when used as a reverse proxy, basically just sits in front of your project's appserver. When some client tries to connect to your appserver, it will first connect to nginx, and then nginx will forward that request to your appserver.
eg.
client -Req-> nginx (port 8080) -Req-> appserver(jetty, port 9000)
Now, if you are trying to use a single nginx instance and direct requests to multiple app servers from nginx, you will either have to make nginx listen on different ports and forward them to different appservers, or let nginx identify which request is meant for which appserver by routes.
Here is a source which can help you learn how to configure Nginx to do this... Please ask again if you need further help.
https://www.digitalocean.com/community/tutorials/how-to-set-up-nginx-server-blocks-virtual-hosts-on-ubuntu-14-04-lts
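As a rough illustration of the routes approach (all hostnames, ports and addresses below are made up), a single nginx instance can pick the appserver from the Host header:

# one nginx instance, two hypothetical projects routed by host name
server {
    listen 8080;
    server_name projecta.example.com;
    location / {
        proxy_pass http://127.0.0.1:9000;   # e.g. a Jetty appserver
        proxy_set_header Host $host;
    }
}

server {
    listen 8080;
    server_name projectb.example.com;
    location / {
        proxy_pass http://127.0.0.1:9001;   # a second appserver
        proxy_set_header Host $host;
    }
}

Since each project is reached under its own hostname, its session cookies stay scoped to that hostname and do not interfere with the other projects.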

Go websocket, nginx proxy: is this correct?

I have a RESTful server in Go, and it's behind nginx. Everything is fine and we are happy with this setup (nginx and Go), but now we have a websocket route for this application. (It currently works OK with nginx on our staging server, but with no real load yet.)
The questions:
Is it good for my websocket route to be behind nginx too? Is there any good reason for or against this?
Is there any way to bypass the nginx proxy for this route and serve it directly with Go? Not on another subdomain or in another binary.
Thanks!
I am no nginx expert, but given that nobody else has answered, I will present some of my research.
1) Yes, nginx is definitely a good choice for that. You can find some benchmarks here. Possible caveats are mentioned in this (older) post. The most important point to consider is the timeout aspect. These two answers give helpful information in that regard.
2) Not exactly sure what you want to achieve by that, but you could simply use a different port, as websockets are not subject to the same-origin policy, or use the TCP forwarding module that is described in one of the answers above.
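On point 1, the usual nginx snippet for a proxied websocket location (the path, upstream address and timeout below are placeholders to adapt) looks something like this, and proxy_read_timeout is the setting the timeout caveat is about:

location /ws {
    proxy_pass http://127.0.0.1:8080;        # the Go server
    proxy_http_version 1.1;                  # websockets require HTTP/1.1
    proxy_set_header Upgrade $http_upgrade;  # pass the Upgrade handshake through
    proxy_set_header Connection "upgrade";
    proxy_read_timeout 3600s;                # idle connections are closed after this
}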

IMAP Proxy that can connect to multiple IMAP servers

What I am trying to achieve is to have a central webmail client that I can use in an ISP environment but that has the capability to connect to multiple mail servers.
I have now been looking at Perdition, NGINX and Dovecot.
But most of the articles have not been updated for a very long time.
The one that I am really looking at is the NGINX IMAP proxy, as it can do almost everything I require.
http://wiki.nginx.org/ImapAuthenticateWithEmbeddedPerlScript
But firstly, the issue I have is that you can no longer compile NGINX from source with those flags.
And secondly, the Git repo for this project, https://github.com/falcacibar/nginx_auth_imap_perl,
does not give detailed information about the updated project.
So all I am trying to achieve is to have one webmail server that can connect to any one of my mail servers, where the location is stored in a database. But the location is a hostname and not an IP.
You can tell Nginx to do auth_http with any HTTP URL you set up.
You don't need an embedded Perl script specifically.
See http://nginx.org/en/docs/mail/ngx_mail_auth_http_module.html to get an idea of the header-based protocol Nginx uses.
You can implement the protocol described there in any language, even a CGI script with Apache if you like.
You do the auth and database query and return the appropriate backend servers in this script.
(Personally, I use a python + WSGI server setup.)
Say you set up your script on apache at http://localhost:9000/cgi-bin/nginx_auth.py
In your Nginx config, you use:
auth_http http://localhost:9000/cgi-bin/nginx_auth.py;
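To give a feel for the protocol, the exchange (abridged, with made-up values) goes roughly like this; note that Auth-Server expects an IP address, so your script should resolve the hostname it finds in the database before replying:

Request from nginx to the auth script:
    GET /cgi-bin/nginx_auth.py HTTP/1.0
    Auth-Method: plain
    Auth-User: someuser
    Auth-Pass: somepassword
    Auth-Protocol: imap
    Auth-Login-Attempt: 1
    Client-IP: 192.0.2.10

Response from the script on success:
    HTTP/1.0 200 OK
    Auth-Status: OK
    Auth-Server: 203.0.113.25
    Auth-Port: 143

On failure, the script returns Auth-Status with an error message instead of OK.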

How do I separate traffic by domain?

Background
I currently have multiple low power servers running on my LAN. They all run different services, but some of them are similar. I also own 3 domains and have many sub-domains for those domains.
So, what's your problem?
Well, as I said before, some of my services are REALLY similar and run on the same port (I have an Owncloud server on one, and my website is hosted on another). This means that if I want owncloud.mydomain.com to go to my Owncloud server, and www.mydomain.com to go to my web server, I have a little bit of an issue. Both sub-domains just go to my house, and the services use the same port. I can't really separate the traffic per subdomain.
edit: It also needs to be able to direct many types of traffic like SSH, HTTPS, and FTP
Possible Solutions
I've thought about just running the different services on different ports, but that would not be optimal AT ALL. It means it's weird to look at, people will have a harder time using any of my services, and it will generally be something I do not like.
I've thought about putting similar services on the same server, but these are some pretty dinky servers. I'd rather not have to do anything like that at all. Also, since the servers are a little old, it's nice to know that if one of them dies, at least I'll still have my other services. I don't think this option is good at all.
Best possible solution: I've heard that there's a service with the exact functionality I'm looking for, called haproxy. My only issue with this is that I don't know how to use it, and I especially don't know how to get the use I want out of it.
My final question
I would love to get haproxy working; I just need to know how to set it up the way I need. If anyone has a link to a tutorial on how to do what I want specifically (I've already found out how to get haproxy running, just not the way I want), then I would be really grateful. I would look for this myself, but I already have, and I don't even know what to search for. Can anyone help me out?
Thank you
Make your own config file, say haproxy.cfg, containing something like the following:

defaults
    mode http

frontend my_web_frontend
    bind 0.0.0.0:80
    timeout client 86400000

    acl is_owncloud hdr_end(host) -i owncloud.mydomain.com
    acl is_webserver hdr_end(host) -i www.mydomain.com

    use_backend owncloud if is_owncloud
    use_backend webserver if is_webserver

backend owncloud
    balance source
    option forwardfor
    option httpclose
    timeout queue 500000
    timeout server 500000
    timeout connect 500000
    server server1 10.0.0.25:5000 weight 1 maxconn 1024 check inter 10000

backend webserver
    balance source
    option forwardfor
    option httpclose
    timeout queue 500000
    timeout server 500000
    timeout connect 500000
    server server1 10.0.0.30:80 weight 1 maxconn 1024 check inter 10000
And then run haproxy on one of your servers.
./haproxy -f ~/haproxy.cfg
Point all your domains and subdomain to this machine. They'll route according to the config.
You only need one IP address, but you need to configure the virtual hosts correctly. This link provides step-by-step details for Ubuntu virtual host configuration. This is the easiest way, and everyone else will agree it's the cheapest if you insist on using your personal network.
https://www.digitalocean.com/community/articles/how-to-set-up-nginx-virtual-hosts-server-blocks-on-ubuntu-12-04-lts--3
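If you go the nginx route instead of haproxy, the same split can be expressed as two server blocks that proxy to the internal machines (addresses reused from the haproxy example above; treat this as a sketch, not a drop-in config):

server {
    listen 80;
    server_name owncloud.mydomain.com;
    location / {
        proxy_pass http://10.0.0.25:5000;
        proxy_set_header Host $host;
    }
}

server {
    listen 80;
    server_name www.mydomain.com;
    location / {
        proxy_pass http://10.0.0.30:80;
        proxy_set_header Host $host;
    }
}

Name-based routing like this only works for protocols that carry the hostname (HTTP/HTTPS); SSH and FTP traffic still has to be separated by port or by IP.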
