I need to load balance requests based on the requested URI. E.g.:
requests to http://website.com/* should go to web-server1, 2 and 3.
requests to http://website.com/api should go to api-server1, 2 and 3.
Currently, no matter the path/URI, all requests go to web-server-1 to 3. This is how it is set up on all three of my HAProxy hosts:
frontend fe
    default_backend web-servers

backend web-servers
    balance leastconn
    server web-server-1 1.1.1.1:80 check weight 1
    server web-server-2 1.1.1.2:80 check weight 1
    server web-server-3 1.1.1.3:80 check weight 1
Both the web and api services run on the same hosts (i.e., web-server-1 to 3), under JBoss. Recently, I decided to split the web and api services so I could load balance according to the URI, as I mentioned in the beginning.
So, now I have a total of 6 servers:
web-server-1 to 3 (1.1.1.1-3:80)
api-server-1 to 3 (1.1.1.4-6:8088)
To do this I came up with 2 different options:
1) Add 3 nginx hosts. The HAProxy configuration would look like this:
backend nginx-servers
    balance leastconn
    server nginx-1 1.1.1.7:80 check weight 1
    server nginx-2 1.1.1.8:80 check weight 1
    server nginx-3 1.1.1.9:80 check weight 1
And now each nginx host routes based on the URI, such as:
upstream web-servers {
    server 1.1.1.1:80;
    server 1.1.1.2:80;
    server 1.1.1.3:80;
}

upstream api-servers {
    server 1.1.1.4:8088;
    server 1.1.1.5:8088;
    server 1.1.1.6:8088;
}

server {
    location ~ "/" {
        proxy_pass http://web-servers;
        proxy_set_header Host $host;
    }

    location ~ "/api" {
        proxy_pass http://api-servers;
    }
}
2) The alternative, using only HAProxy, would be:
frontend fe
    acl website_domain req.hdr(host) -i website.com
    acl route_api path -i -m beg /api
    use_backend api-servers if route_api
    use_backend web-servers if website_domain !route_api

backend web-servers
    balance leastconn
    server web-server-1 1.1.1.1:80 check weight 1
    server web-server-2 1.1.1.2:80 check weight 1
    server web-server-3 1.1.1.3:80 check weight 1

backend api-servers
    balance leastconn
    server api-server-1 1.1.1.4:8088 check weight 1
    server api-server-2 1.1.1.5:8088 check weight 1
    server api-server-3 1.1.1.6:8088 check weight 1
However, with this second option, when I access http://website.com/ all my API requests return HTTP 404. How is this second approach different from the first one (which actually works)?
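One way to compare the two setups on the wire is to log which backend and server each request hits. A minimal logging sketch (not part of my config above; assumes a standard Linux syslog socket):

global
    log /dev/log local0

defaults
    log global
    mode http
    option httplog

With option httplog, each access-log line includes the frontend and backend/server names plus the request line, so it is easy to see whether /api requests are actually matched by the route_api ACL.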
Using a rewrite in HAProxy 1.8, I need to redirect a URI to another domain (host) while keeping the Host header in the request.
Example:
www.mysite.com/api -> 104.4.4.4/api (rewrite) -> result: www.mysite.com/api (response)
I ran a lot of tests with various HAProxy parameters and managed to get some success, but with one problem.
This is my current scenario:
backend site1
    acl path_to_rw url_beg /api
    acl mysite hdr(host) -i www.mymainsite.com
    http-request set-header Host www.mymainsite.com if mysite path_to_rw
    reqirep ^Host Host:\ host_to_forward/api if mysite path_to_rw
    cookie SERVERID insert indirect nocache maxlife 1h
    server site1 myhost:80 check cookie site1
My backend is an IIS server, and my rewrite works. But I get the error below:
"HTTP Error 400. The request hostname is invalid"
It seems that my backend does not accept the Host header that I send. Has anybody run into this problem before?
I managed to fix this problem with a simple combination of ACLs and the "use_backend" directive.
E.g.:
Host header:
www.mysite.com
Path to the application on the other origin:
/api
acl myhost hdr(host) -i www.myhost.com
acl path_api url_reg -i /API(.*)
use_backend be_origin_servers if myhost path_api

backend be_origin_servers
    server myserver1 10.10.10.10 check cookie myserver1
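For completeness, the ACL and use_backend lines above live in a frontend. A fuller sketch of how the pieces fit together (the bind address, cookie line, default backend, and second server are placeholders, filled in to be consistent with the question's config):

frontend fe_main
    bind *:80
    acl myhost hdr(host) -i www.myhost.com
    acl path_api url_reg -i /API(.*)
    use_backend be_origin_servers if myhost path_api
    default_backend be_default

backend be_origin_servers
    cookie SERVERID insert indirect nocache
    server myserver1 10.10.10.10:80 check cookie myserver1

backend be_default
    server default1 10.10.10.20:80 check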
I can't get nginx working with the memcached module. The requirement is to query a remote service, cache the data in memcached, and never fetch the remote endpoint again until the backend invalidates the cache. I have 2 containers with memcached v1.4.35 and one with nginx v1.11.10.
The configuration is the following:
upstream http_memcached {
    server 172.17.0.6:11211;
    server 172.17.0.7:11211;
}

upstream remote {
    server api.example.com:443;
    keepalive 16;
}

server {
    listen 80;

    location / {
        set $memcached_key "$uri?$args";
        memcached_pass http_memcached;
        error_page 404 502 504 = @remote;
    }

    location @remote {
        internal;
        proxy_pass https://remote;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
I tried deliberately setting the memcached upstream incorrectly, but I get HTTP 499 instead, along with warnings:
*3 upstream server temporarily disabled while connecting to upstream
It seems that with the described configuration nginx can reach memcached but can't write to or read from it. I can write to and read from memcached with telnet successfully.
Can you help me, please?
My guesses on what's going on with your configuration
1. 499 codes
HTTP 499 is nginx's custom status code meaning the client terminated the connection before receiving the response (http://lxr.nginx.org/source/src/http/ngx_http_request.h#0120).
We can easily reproduce it: just run
nc -k -l 172.17.0.6 11211
and curl your resource. curl will hang for a while; press Ctrl+C, and you'll see a 499 in your access logs.
2. upstream server temporarily disabled while connecting to upstream
It means nginx didn't manage to reach your memcached server and removed it from the pool of upstreams. It suffices to shut down both memcached servers and you'll see this constantly in your error logs (I see it every time with error_log ... info).
Since you see these messages, your assumption that nginx can freely communicate with the memcached servers doesn't seem to be true.
Consider explicitly setting memcached_bind (http://nginx.org/en/docs/http/ngx_http_memcached_module.html#memcached_bind), and use the -b option with telnet so your availability test goes out over the same source address.
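For example, a sketch of the first location with an explicit source address (the 172.17.0.1 here is hypothetical; use whatever local IP the memcached containers actually accept):

location / {
    set $memcached_key "$uri?$args";
    memcached_bind 172.17.0.1;  # hypothetical outgoing source address
    memcached_pass http_memcached;
    error_page 404 502 504 = @remote;
}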
3. nginx can reach memcached successfully but can't write or read from it
Nginx can only read from memcached via its built-in module
(http://nginx.org/en/docs/http/ngx_http_memcached_module.html):
The ngx_http_memcached_module module is used to obtain responses from
a memcached server. The key is set in the $memcached_key variable. A
response should be put in memcached in advance by means external to
nginx.
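So something other than nginx has to write the values, under keys that match $memcached_key ("$uri?$args"). A minimal sketch in Python (assuming the pymemcache library; the key, value, and TTL are made up for illustration):

# populate the cache with a key nginx will later look up
from pymemcache.client.base import Client

# With no query string, "$uri?$args" evaluates to "<uri>?",
# e.g. "/?" for the root URI (as in the telnet session below).
client = Client(("172.17.0.6", 11211))
client.set("/api/users?", b'{"users": []}', expire=900)  # TTL of 900 s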
4. overall architecture
It's not fully clear from your question how the overall scheme is supposed to work.
nginx's upstream block uses weighted round-robin by default.
That means each lookup goes to just one of your memcached servers in turn, so a key stored on one server will simply be missing when the other one is picked.
You can change this by setting memcached_next_upstream not_found, so that a missing key is treated as an error and all of your servers are polled. That's probably OK for a farm of 2 servers, but it's unlikely to be what you want for 20 servers.
The same is ordinarily the case for memcached client libraries: they pick a server out of the pool according to some hashing scheme, so a given key ends up on only one server in the pool.
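If you'd rather have nginx behave like those client libraries, one option is the hash directive from ngx_http_upstream_module (available since nginx 1.7.2; this is a sketch, not part of the original question):

upstream http_memcached {
    hash $memcached_key consistent;  # pin each key to a single server
    server 172.17.0.6:11211;
    server 172.17.0.7:11211;
}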
5. what to do
I've managed to set up a similar configuration in 10 minutes on my local box, and it works as expected. To simplify debugging, I'd get rid of the Docker containers to avoid networking complications, run 2 memcached servers on different ports in single-threaded mode with the -vv option to see when requests reach them (memcached -p 11211 -U 0 -vv), and then play with tail -f and curl to see what's really happening in your case.
6. working solution
nginx config:
(https and http/1.1 are not used here, but it doesn't matter)
upstream http_memcached {
    server 127.0.0.1:11211;
    server 127.0.0.1:11212;
}

upstream remote {
    server 127.0.0.1:8080;
}

server {
    listen 80;
    server_name server.lan;

    access_log /var/log/nginx/server.access.log;
    error_log /var/log/nginx/server.error.log info;

    location / {
        set $memcached_key "$uri?$args";
        memcached_next_upstream not_found;
        memcached_pass http_memcached;
        error_page 404 = @remote;
    }

    location @remote {
        internal;
        access_log /var/log/nginx/server.fallback.access.log;
        proxy_pass http://remote;
        proxy_set_header Connection "";
    }
}
server.py:
This is my dummy server (Python):

from random import randint
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello: {}\n'.format(randint(1, 100000))
This is how to run it (you just need to install Flask first):
FLASK_APP=server.py flask run -p 8080
filling in my first memcached server:
$ telnet 127.0.0.1 11211
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
set /? 0 900 5
cache
STORED
quit
Connection closed by foreign host.
Checking (note that we get a result every time, although we stored the data only in the first server):
$ curl http://server.lan && echo
cache
$ curl http://server.lan && echo
cache
$ curl http://server.lan && echo
cache
this one is not in the cache so we'll get a response from server.py
$ curl http://server.lan/?q=1 && echo
Hello: 32337
Whole picture (screenshot omitted): the two windows on the right run memcached -p 11211 -U 0 -vv and memcached -p 11212 -U 0 -vv.
I'm wondering whether it is possible to reproduce nginx's proxy_next_upstream mechanism on an F5 BIG-IP.
As a reminder, here is how it works in nginx:
Given a pool of upstream servers, let's call it webservers, composed of 2 instances:
upstream webservers {
    server 192.168.1.10:8080 max_fails=1 fail_timeout=10s;
    server 192.168.1.20:8080 max_fails=1 fail_timeout=10s;
}
With the following instruction (proxy_next_upstream error), if a TCP connection fails on the first instance while routing a request (because the instance is down, for example), nginx automatically forwards the request to the second instance, and the user doesn't see any error.
Furthermore, instance 1 is blacklisted for 10 seconds (fail_timeout=10s).
After those 10 seconds, nginx routes one request to instance 1 (to find out whether the instance is back) and marks it available again if that succeeds; otherwise it waits another 10 seconds before trying again.
location / {
    proxy_next_upstream error;
    proxy_pass http://webservers/$1;
}
I hope I'm clear enough...
Thanks for your help.
Here is something interesting: https://support.f5.com/kb/en-us/solutions/public/10000/600/sol10640.html
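One way to approximate nginx's retry behaviour on BIG-IP is an iRule that reselects a pool member when load balancing fails. A minimal sketch (untested; the single-retry limit is an arbitrary choice):

when CLIENT_ACCEPTED {
    # allow at most one reselect per connection
    set retries 0
}
when LB_FAILED {
    if { $retries < 1 && [active_members [LB::server pool]] > 0 } {
        incr retries
        LB::reselect
    }
}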
I have a server with multiple IPs configured on it (as virtual IPs on eth0). I'm using HAProxy for load balancing. Each IP is pointed to by a different domain name, and all requests that come in on each IP address are forwarded to a different backend server by HAProxy.
The issue is that all outgoing traffic from HAProxy passes through the main interface IP by default. I just want to set the source IP for the backend connections.
I tried the config below, but it's not working. Any ideas?
backend web1
    server ss2 10.11.12.13:80 source ${frontend_ip}

frontend new1
    bind 10.11.13.15:8080
    mode tcp
    use_backend web1
You only have 1 IP in your question, so I can't say for sure. But if you have multiple virtual IPs and want to serve different backends, you need to create at least one frontend for each. Like this:
frontend new1
    bind 10.11.13.15:80
    ...
    acl is_new1domain hdr(host) -i new1.domain.com
    use_backend web1 if is_new1domain

frontend new2
    bind 10.11.13.16:80
    ...
    acl is_new2domain hdr(host) -i new2.domain.com
    use_backend web2 if is_new2domain

backend web1
    ...
    source 10.124.13.15

backend web2
    ...
    source 10.124.13.16
Actually, if you don't have any other rules to parse, just use layer 4 to proxy/balance, like this:
listen new1
    bind 10.11.12.15:80
    server ss1 10.11.12.90:8080 check
    server ss2 10.11.12.91:8080 check
    server ss3 10.11.12.92:8080 check
    source 10.124.12.15

listen new2
    bind 10.11.12.16:80
    server ss4 10.11.12.80:8080 check
    server ss5 10.11.12.81:8080 check
    server ss6 10.11.12.82:8080 check
    source 10.124.12.16
I would like to achieve the following situation:
I have the domains xxx.com, zzz.com, and yyy.com.
I have one server: xxx.yyy.zz.qq
I would like to configure GlassFish to listen on port 80 and, based on the URL, choose the proper base directory for my sites, i.e.:
Scenario 1: A visitor enters the URL xxx.com or www.xxx.com -> GlassFish receives the request on port 80 and picks the directory ./glassfish4/myXXXcom/, where the index.html for xxx.com is placed.
Scenario 2: A visitor enters the URL zzz.com or www.zzz.com -> GlassFish receives the request on port 80 and picks the directory ./glassfish4/anotherSite/, where the index.html for zzz.com is placed.
What I have done:
Installed GlassFish 4.1 on my server.
Changed the A records of my domains to point to my server's address.
Created a virtual server:
glassfish4/bin/asadmin create-virtual-server --hosts xxx.com xxx
Created an HTTP listener:
glassfish4/bin/asadmin create-http-listener --listeneraddress xxx.com --listenerport 80 --default-virtual-server xxx xxx
I think that I am doing something completely wrong here. How do I fix this problem?
If I understand correctly, what you need to do is create two domains in GlassFish, or create a cluster with two local GlassFish instances assigned to it: one running on port 28080 and the other on port 28081, with nginx as the load balancer forwarding requests to the appropriate port depending on which domain the request comes from. To make it clear, here it is step by step:
Create a new cluster in the GlassFish admin console.
Create and assign a new local GlassFish instance to the cluster. This instance will run on port 28080 and handle requests coming to example1.com.
Create another GlassFish instance on port 28081 for handling example2.com.
Install nginx; it acts as a proxy and forwards requests to the appropriate instance. Nginx will be running on port 80.
Start the cluster.
Configure nginx as below. This is the crucial part.
server {
    listen 80;
    server_name example1.com;

    location / {
        proxy_pass http://127.0.0.1:28080;
    }
}

server {
    listen 80;
    server_name example2.com;

    location / {
        proxy_pass http://127.0.0.1:28081;
    }
}
Start nginx
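To verify the routing before pointing DNS at the box, you can send requests straight to nginx with explicit Host headers, e.g.:

curl -H "Host: example1.com" http://127.0.0.1/
curl -H "Host: example2.com" http://127.0.0.1/

Each request should land on the corresponding GlassFish instance (ports 28080 and 28081 respectively).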
I hope you are familiar with creating clusters and domains in GlassFish. If you are unfamiliar with creating clusters on the command line, the GlassFish admin console is there, where you can achieve everything. If you need more info, please feel free to ask in the comments.