I have 5 backend servers. I want nginx to forward the POST request for /myapp/refresh to all 5 backend servers. For any other request, it can do load balancing. Is this possible? Can you please give a sample configuration?
I'm not aware of a ready-to-use solution that does what you want.
It is definitely possible to implement such behavior in C or Lua.
You could develop an nginx C module, but that is not a trivial task and comes with a serious learning curve.
Alternatively, you could use https://github.com/openresty/lua-nginx-module with something like https://github.com/openresty/lua-nginx-module#ngxlocationcapture_multi.
In both cases you have to implement some logic deciding when, and with which response, to reply.
A question to think about: do you still need to respond with 200 OK if one of the backends times out or responds with an error?
You can try the ngx_http_mirror_module module (available since 1.13.4), which implements mirroring of an original request by creating background mirror subrequests. Responses to the mirror subrequests are ignored. https://nginx.org/en/docs/http/ngx_http_mirror_module.html
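For the original question, a minimal sketch of that approach might look like this (the backend addresses 127.0.0.1:8001-8005 are placeholders; only the first backend's response is ever returned to the client):
location /myapp/refresh {
    # The "real" request goes to one backend and its response is returned.
    proxy_pass http://127.0.0.1:8001;

    # Each mirror fires a background subrequest whose response is discarded.
    mirror /refresh_mirror_2;
    mirror /refresh_mirror_3;
    mirror /refresh_mirror_4;
    mirror /refresh_mirror_5;
}

location = /refresh_mirror_2 {
    internal;
    # Replay the original request (including the POST body) on another backend.
    proxy_pass http://127.0.0.1:8002$request_uri;
}

# ...and analogous locations for /refresh_mirror_3 through /refresh_mirror_5.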
You should be able to use nginx as a load balancer using a simple config such as:
http {
    upstream myproject {
        server 127.0.0.1:8000 weight=3;
        server 127.0.0.1:8001;
        server 127.0.0.1:8002;
        server 127.0.0.1:8003;
        server 127.0.0.1:8004;
    }

    server {
        listen 80;
        server_name www.domain.com;

        location / {
            proxy_pass http://myproject;
        }
    }
}
docs:
https://www.nginx.com/resources/admin-guide/load-balancer/
This will load-balance all requests, including the POST you mentioned, across the upstream servers. Note, though, that each request goes to a single backend, not to all of them at once.
My issue is that I have a web server running on port 80. I want to use an nginx proxy (not the ingress) to redirect the connection. I want to use the link www.example.com. How should I tell nginx to proxy the connection on www.example.com (which is a different app)? I tried using a service with a load balancer, but it changes the hostname (to some AWS link); I need it to be exactly www.example.com.
If I understood your request correctly, you may just use the return directive in your nginx config:
server {
    listen 80;
    server_name www.some-service.com;
    return 301 $scheme://www.example.com$request_uri;
}
If you need something more complex, check the nginx docs.
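If you actually want nginx to proxy the connection rather than redirect it (so the browser keeps showing www.example.com), a minimal sketch would be the following; the upstream address 127.0.0.1:8080 is an assumption:
server {
    listen 80;
    server_name www.example.com;

    location / {
        # Forward to the app while preserving the original Host header
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
    }
}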
I'm using the config below in nginx to proxy an RDP connection:
server {
    listen 80;
    server_name domain.com;
    location / {
        proxy_pass http://192.168.0.100:3389;
    }
}
but the connection doesn't go through. My guess is that the problem is the http in proxy_pass. Googling "Nginx RDP" didn't yield much.
Does anyone know if it's possible and, if yes, how?
Well, actually, you are right that http is the problem, but not exactly the one in your code block. Let's explain it a bit:
In your nginx.conf file you have something similar to this:
http {
    ...
    ...
    ...
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
So everything you write in your conf files is inside this http block/scope. But RDP is not HTTP; it is a different protocol.
The only workaround I know of for nginx to handle this is to work at the TCP level.
So inside your nginx.conf, outside the http block, you have to declare a stream block like this:
stream {
    # ...
    server {
        listen 80;
        proxy_pass 192.168.0.100:3389;
    }
}
The above configuration just proxies to your backend at the TCP layer, with a cost of course: as you may notice, the server_name directive is missing, since you can't use it in the stream scope, and you also lose all the logging functionality that comes at the http level.
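One more caveat: a stream server cannot share port 80 with an http server in the same nginx instance (a port can only be bound once), so in practice you would pick a dedicated TCP port for it and point the RDP client there; the port 3389 below is an assumption:
stream {
    server {
        # Dedicated TCP port for the proxied RDP traffic;
        # it must not collide with any http-level listen directive.
        listen 3389;
        proxy_pass 192.168.0.100:3389;
    }
}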
For more info on this topic, check the docs: https://nginx.org/en/docs/stream/ngx_stream_core_module.html
For anyone looking to load-balance RDP connections using Nginx, here is what I did:
Configure nginx as you normally would, to reroute HTTP(S) traffic to your desired server.
On that server, install myrtille (it needs IIS and .Net 4.5) and you'll be able to RDP into your server from a browser!
I have a server with CentOS, and on it I will have at least 4 Golang applications running; each one is a different site that I should be able to access in the browser via domains/subdomains as follows:
dev00.mysite.com
dev01.mysite.com
dev02.mysite.com
dev03.mysite.com
So, I need to configure some kind of software that routes the requests to the correct Golang process. Every site will be running on a different port, so for example if someone calls dev00.mysite.com, I should be able to send that request to the process of the dev00 site (this is for development purposes, not production). From what I've read, I'm starting to believe that I need Nginx or Caddy, but I have no experience with either of them.
Can someone confirm that this is the way to fix this problem? And where can I find some example configuration for either of those servers routing to Golang applications?
And, in the future, if I have a lot (really a lot) of domains running on the same server, which of those servers is better? Which handles high load better?
Yes, Nginx can solve your problem:
Start a web server using Go's standard library (or use Caddy).
Route requests to each Go application using Nginx.
Example Nginx configuration:
server {
    listen 80;
    server_name dev00.mysite.com;
    ...

    location / {
        proxy_pass http://localhost:8000;
        ...
    }
}

server {
    listen 80;
    server_name dev01.mysite.com;
    ...

    location / {
        proxy_pass http://localhost:8001;
        ...
    }
}
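If the subdomain-to-port mapping stays as regular as above (dev00 on 8000, dev01 on 8001, and so on), the per-site blocks could optionally be collapsed into a single server block with a regex server_name. This is just a sketch of that idea, assuming the pattern holds for every site:
server {
    listen 80;
    # Capture the two digits of the subdomain into $num
    server_name ~^dev(?<num>\d\d)\.mysite\.com$;

    location / {
        # dev00 -> 127.0.0.1:8000, dev01 -> 127.0.0.1:8001, ...
        # (an IP literal is used because proxy_pass with variables
        # would otherwise require a resolver)
        proxy_pass http://127.0.0.1:80$num;
    }
}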
Goal: Stand up a service that will accept requests to
http://foo.com/a
and turn around and proxy that request to two different services
http://bar.com/b
http://baz.com/c
The background is that I'm using a service that can integrate with other 3rd-party services by accepting POST requests, and it then posts event callbacks to the 3rd-party service via a POST to a URL. The trouble is that it only supports a single URL in its configuration, so it becomes impossible to integrate more than one service this way.
I've looked into other services like webhooks.io (waaaay too expensive for a moderate amount of traffic) and reflector.io (beta; falls over under a moderate amount of traffic), but so far nothing meets my needs. So I started poking around at standing up my own service, and I'm hoping for something as hands-off as possible. It feels like nginx ought to be able to do this...
I came across the following snippet, which someone else classified as a bug, but it feels like the start of what I want:
upstream apache {
    server 1.2.3.4;
    server 5.6.7.8;
}
...
location / {
    proxy_pass http://apache;
}
Rather than round-robin requests to apache, that will apparently send the same request to both apache servers, which sounds promising. The trouble is that it sends it to the same path on both servers. In my case, the two services have different paths (/b and /c), and neither is the same as the inbound request's path (/a).
So... Any way to specify a destination path on each server in the upstream configuration, or some other clever way of doing this?
You can create local servers, where each local server's proxy_pass adds the different path (/b, /c).
upstream local {
    server 127.0.0.1:8000;
    server 127.0.0.1:8001;
}

# Front-end server (the listen port is an assumption)
server {
    listen 80;
    location / {
        proxy_pass http://local;
    }
}

server {
    listen 8000;
    location / {
        proxy_pass http://1.2.3.4/b;
    }
}

server {
    listen 8001;
    location / {
        proxy_pass http://5.6.7.8/c;
    }
}
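Note that an upstream normally delivers each request to exactly one of its servers. If you really need the same request sent to both /b and /c, the ngx_http_mirror_module mentioned earlier in this thread can do that without the local-server indirection. A sketch, reusing the hosts from the question:
location /a {
    # The client gets this service's response...
    proxy_pass http://bar.com/b;
    # ...while a copy of the request is fired at the mirror location.
    mirror /mirror_c;
}

location = /mirror_c {
    internal;
    # Background subrequest to the second service; its response is discarded.
    proxy_pass http://baz.com/c;
}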
We use Nginx as a load balancer for our websocket application. Every backend server keeps session information, so every request from a client must be forwarded to the same server. So we use the ip_hash directive to achieve this:
upstream app {
    ip_hash;
    server 1;
}
The problem appears when we want to add another backend server:
upstream app {
    ip_hash;
    server 1;
    server 2;
}
New connections go to server 1 and server 2, but this is not what we need in this situation, as the load on server 1 continues to increase. We still need sticky sessions, but with the least_conn algorithm enabled too, so that our two servers receive approximately equal load.
We also considered using Nginx-sticky-module, but the documentation says that if no sticky cookie is available, it will fall back to nginx's default round-robin algorithm, so it also does not solve the problem.
So the question is: can we combine sticky and least-connections logic using Nginx? Do you know which other load balancers solve this problem?
Probably using the split_clients module could help:
upstream app {
    ip_hash;
    server 127.0.0.1:8001;
}

upstream app_new {
    ip_hash;
    server 127.0.0.1:8002;
}

split_clients "${remote_addr}AAA" $upstream_app {
    50% app_new;
    *   app;
}
This will split your traffic and create the variable $upstream_app, which you can then use like:
server {
    location /some/path/ {
        proxy_pass http://$upstream_app;
    }
}
This is a workaround for combining least_conn-like distribution with a load balancer that keeps sessions sticky; the "downside" is that if more servers need to be added, a new split entry needs to be created, for example:
split_clients "${remote_addr}AAA" $upstream_app {
    30% app_another_server;
    30% app_new;
    *   app;
}
For testing:
for x in {1..10}; do
    curl "0:8080?token=$(LC_ALL=C tr -dc 'a-zA-Z0-9' < /dev/urandom | fold -w 32 | head -n 1)"
done
More info about this module can be found in the docs: https://nginx.org/en/docs/http/ngx_http_split_clients_module.html (nginx also has an article on using it for A/B testing).
You can easily achieve this using HAProxy, and I suggest going through its documentation thoroughly to see how your current setup can benefit.
With HAProxy, you'd have something like:
backend nodes
    # Other options above omitted for brevity
    cookie SRV_ID prefix
    server web01 127.0.0.1:9000 cookie web01 check
    server web02 127.0.0.1:9001 cookie web02 check
    server web03 127.0.0.1:9002 cookie web03 check
This simply means that the proxy is tracking requests to and from the servers by using a cookie.
However, if you don't want to use HAProxy, I'd suggest you change your session implementation to use an in-memory DB such as Redis or Memcached. This way, you can use leastconn or any other algorithm without worrying about sessions.
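For reference, once sessions live in Redis/Memcached, the nginx side reduces to a plain least_conn upstream; a minimal sketch with placeholder ports:
upstream app {
    least_conn;
    server 127.0.0.1:9000;
    server 127.0.0.1:9001;
    server 127.0.0.1:9002;
}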