Hello fellow Overflowers,
I have two Nginx web servers in my OpenStack environment.
I'm trying to set up load balancing with HAProxy right now.
Ubuntu 18 is the OS on all servers.
I added the backend IPs to the default config. When I try to connect to my LB via browser, I get:
"503 Service Unavailable"
What I know so far:
Backends are available when I connect directly to them.
I opened the correct ports in the OpenStack GUI
I checked the HAProxy logs and found the following:
Oct 20 13:04:30 HA_Proxy haproxy[2361]: [ALERT] 293/130430 (2361) : Starting frontend haproxynode: cannot bind socket [91.250.78.208:80]
Oct 20 13:04:30 HA_Proxy haproxy[2361]: Proxy backendnodes started.
Oct 20 13:04:30 HA_Proxy haproxy[2361]: Proxy backendnodes started.
Oct 20 13:04:30 HA_Proxy haproxy[2361]: Proxy stats started.
Oct 20 13:04:30 HA_Proxy haproxy[2361]: Proxy stats started.
Oct 20 13:05:27 HA_Proxy haproxy[2399]: Proxy haproxynode started.
Oct 20 13:05:27 HA_Proxy haproxy[2399]: Proxy haproxynode started.
Oct 20 13:05:27 HA_Proxy haproxy[2399]: Proxy backendnodes started.
Oct 20 13:05:27 HA_Proxy haproxy[2399]: Proxy backendnodes started.
Oct 20 13:05:27 HA_Proxy haproxy[2399]: Proxy stats started.
Oct 20 13:05:27 HA_Proxy haproxy[2402]: Server backendnodes/node1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions acti$
I don't know what to do with the "cannot bind socket" message; maybe it's something in the config:
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private

    # Default ciphers to use on SSL-enabled listening sockets.
    # For more information, see ciphers(1SSL). This list is from:
    # https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
    # An alternative list with additional directives can be obtained from
    # https://mozilla.github.io/server-side-tls/ssl-config-generator/?server=haproxy
    ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
    ssl-default-bind-options no-sslv3

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 5000
    timeout client 50000
    timeout server 50000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

frontend haproxynode
    bind *:80
    mode http
    default_backend backendnodes

backend backendnodes
    balance roundrobin
    option forwardfor
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request add-header X-Forwarded-Proto https if { ssl_fc }
    option httpchk HEAD / HTTP/1.1\r\nHost:localhost
    server node1 192.168.0.77:8080 check
    server node2 192.168.0.76:8080 check

listen stats
    bind :32700
    stats enable
    stats uri /
    stats hide-version
    stats auth someuser:password
Anybody know what else I can check to solve the issue?
Also note that I started my apprenticeship in August and have almost no experience with load balancing or web servers at all =(
If you're getting a "cannot bind socket" error message, try running the command below:
setsebool -P haproxy_connect_any=1
Otherwise, kill the service that is already running on the port you want to use, and then restart HAProxy:
$ fuser -k <your_port>/tcp
$ sudo systemctl restart haproxy
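To see which process is already holding the port before killing anything (port 80 assumed here), something like this should show it:
$ sudo ss -ltnp | grep ':80'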
I had set the ports on the backends to 8080 in the config, but they should have been 80. That solved the issue.
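For reference, the corrected server lines in the backend section now look like this (the nginx backends listen on port 80):
    server node1 192.168.0.77:80 check
    server node2 192.168.0.76:80 check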
Related
I have already installed Mastodon, and now I'm at the step of setting up nginx according to this doc: https://github.com/mastodon/documentation/blob/master/content/en/admin/install.md
I already edited the mastodon file to put in my domain and uncommented the certificate lines, so it looks like this:
# Uncomment these lines once you acquire a certificate:
ssl_certificate /etc/letsencrypt/live/my.domain/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/my.domain/privkey.pem;
In the upper part of the file I have this:
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

upstream backend {
    server 127.0.0.1:3000 fail_timeout=0;
}

upstream streaming {
    server 127.0.0.1:4000 fail_timeout=0;
}
In /etc/nginx/sites-enabled/ there are three files: default, whose contents I don't know; and mastodon and my.domain.conf, which look exactly the same, as described above.
Once I had modified the mastodon file to put in my.domain wherever it appears, I was told to restart nginx, so I ran:
sudo systemctl restart nginx
I got this:
Job for nginx.service failed because the control process exited with error code.
See "systemctl status nginx.service" and "journalctl -xeu nginx.service" for details.
so I ran "systemctl status nginx.service" and got this:
× nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sat 2022-11-12 22:28:59 UTC; 15s ago
Docs: man:nginx(8)
Process: 144802 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on; master_process on; (code=exited, status=1/FAILURE)
CPU: 40ms
Nov 12 22:28:59 my-instance-1 systemd[1]: Starting A high performance web server and a reverse proxy server...
Nov 12 22:28:59 my-instance-1 nginx[144802]: nginx: [emerg] duplicate upstream "backend" in /etc/nginx/sites-enabled/my.domain.conf:6
Nov 12 22:28:59 my-instance-1 nginx[144802]: nginx: configuration file /etc/nginx/nginx.conf test failed
Nov 12 22:28:59 my-instance-1 systemd[1]: nginx.service: Control process exited, code=exited, status=1/FAILURE
Nov 12 22:28:59 my-instance-1 systemd[1]: nginx.service: Failed with result 'exit-code'.
Nov 12 22:28:59 my-instance-1 systemd[1]: Failed to start A high performance web server and a reverse proxy server.
I don't know why it says I have a duplicate upstream "backend" when it appears only once in the file. I need to be able to restart nginx.
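One way to see every place where that upstream is actually defined across the enabled sites (paths taken from the error message above) would be something like:
grep -n "upstream backend" /etc/nginx/sites-enabled/*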
UPDATE
The current state is shown in the picture. In the mastodon file in sites-AVAILABLE I changed all occurrences of example.com to my.domain (around lines 28, 37, 38). The cert lines are uncommented and nginx is throwing this error.
My current instance looks like this: I created a user, but the mail verification link returns ERR_CERT_COMMON_NAME_INVALID.
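To see which name the certificate was actually issued for (path taken from the config above), something like this should work:
sudo openssl x509 -in /etc/letsencrypt/live/my.domain/fullchain.pem -noout -subject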
I have a Python Flask application which outputs around 300 KB per request.
This application is hosted via uWSGI Emperor with the configuration below.
[uwsgi]
chdir = /var/www/%n
socket = /etc/uwsgi/sockets/%n.sock
chmod-socket = 660
vacuum = true
processes = 4
threads = 20
virtualenv = /var/www/%n/.venv
module = app:app
logto = /var/log/uwsgi/%n.log
The uWSGI log has the line below:
[pid: 16668|app: 0|req: 1/1] 127.0.0.1 () {46 vars in 700 bytes} [Wed May 2 04:56:24 2018] POST /context-path => generated 293595 bytes in 34172 msecs (HTTP/1.1 200) 2 headers in 75 bytes (3 switches on core 0)
announcing my loyalty to the Emperor...
And the nginx configuration is
location /context-path {
    include uwsgi_params;
    uwsgi_pass unix:/etc/uwsgi/sockets/app.sock;
    uwsgi_read_timeout 300;
}
At the end of 5 minutes, I see the following in nginx error.log
2018/05/02 05:13:42 [error] 16994#16994: *7 upstream timed out (110: Connection timed out) while reading upstream, client: 127.0.0.1, server: 127.0.0.1, request: "POST /context-path HTTP/1.1", upstream: "uwsgi://unix:/etc/uwsgi/sockets/app.sock:", host: "127.0.0.1"
I receive partial data at the end of 5 minutes.
Increasing uwsgi_read_timeout doesn't affect anything.
Help?
Stack configuration:
nginx version: nginx/1.10.3 (Ubuntu)
uwsgi 2.0.17
Python 2.7.12
Flask==0.12.2
As far as I know, this error message can occur with many of the different upstream options.
The ngx_http_upstream_module module is used to define groups of servers that can be referenced by the proxy_pass, fastcgi_pass, uwsgi_pass, scgi_pass, memcached_pass, and grpc_pass directives.
I am using nginx as a load balancer with other servers, using proxy_pass. So I needed one of the proxy_* timeout options, such as proxy_read_timeout, to fix this error.
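For example, in a proxy_pass setup the relevant part would look roughly like this (the upstream name and timeout values are just illustrative):
location / {
    proxy_pass http://backend;
    proxy_connect_timeout 75s;
    proxy_send_timeout 300s;
    proxy_read_timeout 300s;
}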
When requesting via https it looks like serf is funnelling the request via port 80 instead of 443?
[Mon Jan 16 10:25:48.007386 2017] [error] [pid 350] [mod_pagespeed 1.11.33.4-0 #350] Serf status 120171(APR does not understand this error code) polling for 1 threaded fetches for 0.05 seconds
[Mon Jan 16 10:25:48.007539 2017] [error] [pid 350] [mod_pagespeed 1.11.33.4-0 #350] Serf status 120171(APR does not understand this error code) polling for 1 threaded fetches for 0.05 seconds
[Mon Jan 16 10:25:53.021234 2017] [warn] [pid 350] [mod_pagespeed 1.11.33.4-0 #350] Fetch timed out: https://www.domain.com/assets/76dc6ad2/style.min.css (connecting to:10.33.12.222:80) (1) waiting for 50 ms
SSL terminates on the load balancer. SSL is also configured behind the load balancer, so HTTPS can be served from within the network.
ModPagespeedFetchHttps enable
ModPagespeedRespectXForwardedProto on
ModPagespeedEnableFilters prioritize_critical_css
How do I have Serf request HTTPS via port 443?
@dhaupin
I don't seem to notice that error anymore.
This is probably what fixed it, explicitly handling https requests.
ModPagespeedLoadFromFile "https://example.com" "/var/www/example/"
ModPagespeedRespectXForwardedProto on
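If LoadFromFile is not an option, another directive that can keep the fetches from going out over port 80 is ModPagespeedMapOriginDomain, which tells mod_pagespeed to fetch resources for a serving domain from a different origin (the domains below are placeholders), e.g.:
ModPagespeedMapOriginDomain localhost www.example.com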
I'm serving two websites through HAProxy and Varnish. There's a wiki site and a WordPress site. The wiki site works continuously and without problems. However, the WordPress site keeps showing a 504 error each time you reload the page.
If I spoof the WordPress site in my hosts file by using the IP of the Varnish server instead of HAProxy, the site comes back and works fine. It's only when WordPress goes through HAProxy that the site returns 504s.
I'd like to know how to turn on debug logging for HAProxy and also maybe get some help solving this problem.
This is all that I see in the logs for haproxy:
Apr 3 20:29:18 lb1.example.com haproxy[18501]: 52.21.231.226:52845 [03/Apr/2016:20:29:15.318] varnish-cluster varnish-cluster/varnish1 0/0/0/2786/2786 200 626 - - --NR 2/2/1/1/0 0/0 "HEAD / HTTP/1.1"
Apr 3 20:29:28 lb1.example.com haproxy[18501]: 61.174.10.22:18645 [03/Apr/2016:20:29:09.522] varnish-cluster varnish-cluster/varnish1 0/0/0/18206/19039 404 101736 - - --VN 0/0/0/0/0 0/0 "GET /groups/ HTTP/1.0"
Apr 3 20:29:34 lb1.example.com haproxy[18501]: 61.174.10.22:26372 [03/Apr/2016:20:29:31.045] varnish-cluster varnish-cluster/varnish1 0/0/0/3048/3048 301 549 - - --VN 0/0/0/0/0 0/0 "GET /members/pzwkathi09454/activity HTTP/1.0"
Apr 3 20:29:54 lb1.example.com haproxy[18501]: 61.174.10.22:27761 [03/Apr/2016:20:29:34.879] varnish-cluster varnish-cluster/varnish1 0/0/0/-1/20003 504 194 - - sHVN 0/0/0/0/0 0/0 "GET /activity/ HTTP/1.0"
And this is my config:
global
    log 127.0.0.1 local2 debug
    user root
    group root

defaults
    log global
    retries 2
    timeout connect 12000
    timeout server 20000
    timeout client 20000

listen varnish-cluster 0.0.0.0:80
    mode http
    stats enable
    stats uri /haproxy?stats
    stats realm Strictly\ Private
    stats auth admin:secret
    balance roundrobin
    option http-server-close
    timeout http-keep-alive 3000
    option forwardfor
    option httplog
    cookie PHPSESSID prefix
    server varnish1 xx.xx.xx.xx:80 cookie s1 check

listen mysql-master-cluster
    bind 0.0.0.0:3306
    mode tcp
    option mysql-check user haproxy_check
    balance roundrobin
    server mysql-master-1 xx.xx.xx.xx:3306 check
    server mysql-master-2 xx.xx.xx.xx:3306 check
I'd appreciate any advice you'd have in solving the 504 error with HAProxy!
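For the debug-logging part: with log 127.0.0.1 local2 debug in the global section, the messages only show up if a local syslog daemon is listening on UDP 514 and routing the local2 facility somewhere. A minimal rsyslog sketch (file name and log path assumed) would be:
# /etc/rsyslog.d/49-haproxy.conf
$ModLoad imudp
$UDPServerAddress 127.0.0.1
$UDPServerRun 514
local2.* /var/log/haproxy.log
Then restart rsyslog and HAProxy.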
I'm trying to host a Bottle application on NGINX using uWSGI.
Here's my nginx.conf:
location /myapp/ {
    include uwsgi_params;
    uwsgi_param X-Real-IP $remote_addr;
    uwsgi_param Host $http_host;
    uwsgi_param UWSGI_SCRIPT myapp;
    uwsgi_pass 127.0.0.1:8080;
}
I'm running uWSGI like this:
uwsgi --enable-threads --socket :8080 --plugin python --wsgi-file ./myApp/myapp.py
I'm sending a POST request with the Dev HTTP Client, and it hangs indefinitely when I send the request to
http://localhost/myapp
uWSGI server receives the request and prints
[pid: 4683|app: 0|req: 1/1] 127.0.0.1 () {50 vars in 806 bytes} [Thu Oct 25 12:29:36 2012] POST /myapp => generated 737 bytes in 11 msecs (HTTP/1.1 404) 2 headers in 87 bytes (1 switches on core 0)
but in the nginx error log I see:
2012/10/25 12:20:16 [error] 4364#0: *11 readv() failed (104: Connection reset by peer) while reading upstream, client: 127.0.0.1, server: localhost, request: "POST /myApp/myapp/ HTTP/1.1", upstream: "uwsgi://127.0.0.1:8080", host: "localhost"
What to do?
Make sure to consume your POST data in your application.
For example, if you have a Django/Python application:
from django.http import HttpResponse

def my_view(request):
    # make sure to read the POST data, even if you don't need it;
    # without this you get: failed (104: Connection reset by peer)
    data = request.body
    return HttpResponse("Hello World")
Some details: https://uwsgi-docs.readthedocs.io/en/latest/ThingsToKnow.html
You cannot post data from the client without reading it in your application. While this is not a problem for uWSGI itself, nginx will fail. You can 'fake' it using the --post-buffering option of uWSGI to automatically read data from the socket (if available), but you'd better "fix" your app (even if I don't consider this a bug).
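Since the question above uses Bottle rather than Django, the equivalent there would be roughly this (route path and handler name are just placeholders):
from bottle import Bottle, request

app = Bottle()

@app.post('/myapp')
def handle_post():
    # read the body even if it isn't needed, so the socket isn't left
    # with unconsumed data that makes nginx log "Connection reset by peer"
    _ = request.body.read()
    return "Hello World"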
This problem occurs when the body of a request is not consumed, since uwsgi cannot know whether it will still be needed at some point. So uwsgi will keep holding on to the data either until it is consumed or until nginx resets the connection (because upstream timed out).
The author of uwsgi explains it here:
08:21 < unbit> plaes: does your DELETE request (not-response) have a body ?
08:40 < unbit> and do you read that body in your app ?
08:41 < unbit> from the nginx logs it looks like it has a body and you are not reading it in the app
08:43 < plaes> so DELETE request shouldn't have the body?
08:43 < unbit> no i mean if a request has a body you have to read/consume it
08:44 < unbit> otherwise the socket will be clobbered
So to fix this, you need to make sure to always either read the whole request body, or not send a body if it is not necessary (e.g. for a DELETE).
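Alternatively, the --post-buffering option mentioned above makes uWSGI read the body itself; adapting the command line from the question, that would be something like (the buffer size is just an example):
uwsgi --enable-threads --socket :8080 --plugin python --wsgi-file ./myApp/myapp.py --post-buffering 4096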
Don't use threads!
I had the same problem with the Global Interpreter Lock in Python under uWSGI.
When I don't use threads, there is no connection reset.
Example of a uWSGI config (1 GB RAM on the server):
[root@mail uwsgi]# cat myproj_config.yaml
uwsgi:
  print: Myproject Configuration Started
  socket: /var/tmp/myproject_uwsgi.sock
  pythonpath: /sites/myproject/myproj
  env: DJANGO_SETTINGS_MODULE=settings
  module: wsgi
  chdir: /sites/myproject/myproj
  daemonize: /sites/myproject/log/uwsgi.log
  max-requests: 4000
  buffer-size: 32768
  harakiri: 30
  harakiri-verbose: true
  reload-mercy: 8
  vacuum: true
  master: 1
  post-buffering: 8192
  processes: 4
  no-orphans: 1
  touch-reload: /sites/myproject/log/uwsgi
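Applied to the command line from the question, avoiding threads would simply mean dropping --enable-threads, e.g.:
uwsgi --socket :8080 --plugin python --wsgi-file ./myApp/myapp.py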