I have multiple upstreams pointing at the same two servers on different ports for different apps, and I need each client to stick to the same server across all of them.
Example:
upstream APP {
    ip_hash;
    server 10.10.10.1:1111;
    server 10.10.10.2:1111;
}
upstream APP_HTTP {
    ip_hash;
    server 10.10.10.1:2222;
    server 10.10.10.2:2222;
}
upstream APP_WS {
    ip_hash;
    server 10.10.10.1:3333;
    server 10.10.10.2:3333;
}
....
location /APP {
    proxy_pass http://APP;
}
location /APP_HTTP {
    proxy_pass http://APP_HTTP;
}
location /APP_WS {
    proxy_pass http://APP_WS;
}
So if a user is sent to server 10.10.10.1 when first hitting /APP, I need to guarantee that requests to /APP_HTTP and /APP_WS also go to 10.10.10.1.
Is it possible? How?
ip_hash does not seem to work the way I would expect here.
Thanks
Best regards
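For what it's worth, ip_hash only looks at the first three octets of an IPv4 address, and if nginx sits behind another proxy or load balancer, $remote_addr is that proxy's address rather than the client's, which can break the expected mapping. A minimal sketch of an alternative using the generic hash directive (nginx 1.7.2+), assuming the server lists keep the same order and weights in every upstream block so a given client hashes to the same slot everywhere (the consistent parameter is deliberately left out, since consistent hashing keys on the server addresses, which differ per upstream here):
upstream APP {
    hash $remote_addr;          # same key and same server order in every block
    server 10.10.10.1:1111;
    server 10.10.10.2:1111;
}
upstream APP_HTTP {
    hash $remote_addr;
    server 10.10.10.1:2222;
    server 10.10.10.2:2222;
}
upstream APP_WS {
    hash $remote_addr;
    server 10.10.10.1:3333;
    server 10.10.10.2:3333;
}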
Related
I have these upstreams declared:
upstream upstream_1 {
    server some_container_1:8000;
}
upstream upstream_2 {
    server some_container_2:8001;
}
and this server:
server {
    listen 7000;
    server_name localhost;

    location /path {
        uwsgi_pass upstream_1;
    }
}
Both some_container_1 and some_container_2 are based on the same image (and thus offer the same APIs on the same paths) but differ in environment variables and other unrelated things. I want to fork 1% of all traffic from localhost:7000/path so that it is delivered as-is to upstream_2 while 99% remains on upstream_1. In both cases the request should be kept as received, altering neither path nor headers.
With split_clients I can fork which path will be set before forwarding the request to a single upstream, which is not my case.
Here the fork is done inside an upstream between servers, not inside a location splitting between upstreams, as I need.
Can I define an upstream of upstreams like
upstream compound_upstream_1 {
    upstream upstream_1 weight=99;
    upstream upstream_2;
}
to use it in:
server {
    listen 7000;
    server_name localhost;

    location /path {
        uwsgi_pass compound_upstream_1;
    }
}
Is it possible to do this with nginx? If so, what would be the standard way to accomplish it?
I don't understand; what stops you from using the server names in the upstream block directly?
upstream compound_upstream_1 {
    server some_container_1:8000 weight=99;
    server some_container_2:8001;
}
server {
    listen 7000;
    server_name localhost;

    location /path {
        uwsgi_pass compound_upstream_1;
    }
}
Or maybe I misunderstand your question?
It might be possible to accomplish this using a load balancer: https://docs.nginx.com/nginx/admin-guide/load-balancer/http-load-balancer/
I'm not sure what the weights would be for your '1%' scenario, but you can toy with them and adjust to your liking.
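Another route sometimes used for this kind of canary split is split_clients mapping straight to the upstream names; this sketch assumes uwsgi_pass resolves a variable against named upstreams the way proxy_pass does, and that $request_id is available (nginx 1.11.0+):
# http{} context
split_clients "${request_id}" $chosen_upstream {
    1%  upstream_2;
    *   upstream_1;
}

server {
    listen 7000;
    server_name localhost;

    location /path {
        # the request is forwarded untouched; only the target group changes
        uwsgi_pass $chosen_upstream;
    }
}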
I have the following nginx.config file:
events {}

http {
    # ...
    # application version 1a
    upstream version_1a {
        server localhost:8090;
    }

    # application version 1b
    upstream version_1b {
        server localhost:8091;
    }

    split_clients "${arg_token}" $appversion {
        50% version_1a;
        50% version_1b;
    }

    server {
        # ...
        listen 7080;

        location / {
            proxy_set_header Host $host;
            proxy_pass http://$appversion;
        }
    }
}
I have two Node.js servers listening on ports 8090 and 8091, and I am hitting the URL http://localhost:7080. My expectation is that Nginx will randomly split the traffic between the version_1a and version_1b upstreams, but all of the traffic is going to version_1a. Any insight into why this might be happening?
(I want to have this configuration for the canary traffic)
Validate that the variable you are using to split the traffic is actually set: with the key "${arg_token}", a request to http://localhost:7080 without a token query parameter leaves $arg_token empty, every request hashes into the same bucket, and all traffic ends up on version_1a. The key's values should also be uniformly distributed, or the traffic will not be split evenly.
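As a quick check (the token values below are arbitrary), requests carrying different tokens should now be spread across both upstreams:
curl "http://localhost:7080/?token=alice"
curl "http://localhost:7080/?token=bob"
Alternatively, switch the split key to something that varies on its own, for example "${remote_addr}${http_user_agent}".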
If a user request does not have a certain set of headers, the reverse proxy should route it to one backend server; if the request does have those headers, it must go to a different server.
Is this possible in NGINX, and how do we do that?
Let's say that you're using x-backend-pool as your request header. You can use the following NGINX module to get what you want: http://nginx.org/en/docs/http/ngx_http_map_module.html#map
The map directive allows you to set variables based on the values of other variables. I've provided an example for you below:
upstream hostdefault {
    server 127.0.0.1:8080;
}
upstream hosta {
    server 127.0.0.1:8081;
}
upstream hostb {
    server 127.0.0.1:8082;
}

map $http_x_backend_pool $backend_pool {
    default "hostdefault";
    a       "hosta";
    b       "hostb";
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://$backend_pool;
    }
}
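To sanity-check the mapping (the hostname and port are just the ones from the snippet above), a request without the header should land on hostdefault, while one carrying the header goes to the matching pool:
curl http://example.com/
curl -H "x-backend-pool: a" http://example.com/
curl -H "x-backend-pool: b" http://example.com/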
I am using nginx-sticky-module to add upstream server persistence using cookies. But I have two upstreams like this:
upstream upstreamA {
    sticky;
    server server1:8080;
    server server2:8080;
}
upstream upstreamB {
    sticky;
    server server1:9080;
    server server2:9080;
}

location /requestA {
    proxy_pass http://upstreamA;
}
location /requestB {
    proxy_pass http://upstreamB;
}
When a user requests nginx:port/requestA, Nginx can hold the request and keep distributing it to the same server. But if the user then requests nginx:port/requestB, Nginx will give it a new Set-Cookie value (route=xxx) generated by nginx-sticky-module for upstreamB. Can Nginx use just one cookie across the two upstreams?
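One workaround sketch, not using the sticky module itself: if the application already sets its own session cookie, both upstreams can be keyed on it with the generic hash directive, so the same cookie value maps to the same machine as long as the server order and weights are identical in both blocks (JSESSIONID below is only a placeholder for whatever cookie your app actually sets):
upstream upstreamA {
    hash $cookie_JSESSIONID;    # same key and same server order in both blocks
    server server1:8080;
    server server2:8080;
}
upstream upstreamB {
    hash $cookie_JSESSIONID;
    server server1:9080;
    server server2:9080;
}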
I'm trying to A/B (split) test two web pages on a single web application with a single hostname and instance.
Here's what I'm trying to achieve:
HTTP request for /
Request gets proxied to backend.app.com
Request is either proxied to backend.app.com/a or backend.app.com/b
Ideally this would be a sticky session that keeps a user on /a or /b for the duration of their session, similar to what can be achieved with application pools.
Possible? Ideas?
You are looking for the split_clients directive.
Example from https://www.nginx.com/blog/performing-a-b-testing-nginx-plus/
http {
    # ...
    # application version 1a
    upstream version_1a {
        server 10.0.0.100:3001;
        server 10.0.0.101:3001;
    }

    # application version 1b
    upstream version_1b {
        server 10.0.0.104:6002;
        server 10.0.0.105:6002;
    }

    split_clients "${arg_token}" $appversion {
        95% version_1a;
        *   version_1b;
    }

    server {
        # ...
        listen 80;

        location / {
            proxy_set_header Host $host;
            proxy_pass http://$appversion;
        }
    }
}
The arg_token in this case can be pretty much any variable you want.
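One more note on the sticky-session part of the question: split_clients is deterministic, so the same key string always falls into the same bucket; if the key is stable for a given user (for example a cookie rather than a query argument), the assignment is effectively sticky for that user. A sketch keyed on a cookie, where ab_id is a hypothetical cookie the application would set once per visitor:
split_clients "${cookie_ab_id}" $appversion {
    95% version_1a;
    *   version_1b;
}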