I have a TCP service running on port 8080 on 3 servers behind HAProxy.
I would like to load balance the TCP traffic between these servers via HAProxy:
server1 192.168.10.1 8080
server2 192.168.10.2 8080
server3 192.168.10.3 8080
Let's say the HAProxy server IP is 192.168.10.10.
1.
What HAProxy configuration can I use to achieve this?
What will be the endpoint to access the load-balanced TCP traffic after the config is made active?
2.
One other thing: is it possible to proxy that endpoint to a URL without a port, similar to HTTP-based routing? In other words, can I take that TCP endpoint and route an HTTP endpoint to it by hostname?
So let's say I want to access the service at http://tcp-app.example.com and have it routed to the load-balanced TCP service.
For question 1, can you use this as a starting point?
listen tcp-in
bind :8080
mode tcp
log stdout format raw daemon
option tcplog
timeout connect 5s
timeout client 30s
timeout server 30s
server server1 192.168.10.1:8080
server server2 192.168.10.2:8080
server server3 192.168.10.3:8080
You can then reach the load balancer at 192.168.10.10:8080.
For a better understanding of HAProxy, this blog post is a good starting point IMHO:
https://www.haproxy.com/blog/the-four-essential-sections-of-an-haproxy-configuration/
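Since the listen section above sets no explicit balance directive, HAProxy falls back to its round-robin default, spreading successive connections over the three servers in turn. A minimal Python sketch of that behaviour (the server list mirrors the config; everything else is illustrative):

```python
from itertools import cycle

# Illustrative sketch: with no "balance" directive, HAProxy defaults to
# round-robin, so successive connections to 192.168.10.10:8080 are
# handed to the three backend servers in turn.
servers = ["192.168.10.1:8080", "192.168.10.2:8080", "192.168.10.3:8080"]
rr = cycle(servers)

def next_server():
    """Return the server the next connection would be sent to."""
    return next(rr)

if __name__ == "__main__":
    for _ in range(6):
        print(next_server())
```

The real distribution also depends on health checks and server weights; this only shows the rotation order.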
For question 2, you should switch to Server Name Indication (SNI), because plain TCP has no concept of "hostnames".
I have described how SNI routing works in HAProxy in this blog post https://www.me2digital.com/blog/2019/05/haproxy-sni-routing/
Here is an example HAProxy config for SNI routing between the TCP and HTTP protocols. It's a little bit complex because you need to check the TCP routing before the HTTP routing.
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
log stdout format raw daemon debug
maxconn 5000
tune.ssl.default-dh-param 3072
# https://mozilla.github.io/server-side-tls/ssl-config-generator/?server=haproxy-1.8.0&openssl=1.1.0i&hsts=yes&profile=modern
# set default parameters to the intermediate configuration
ssl-default-bind-ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
ssl-default-bind-options ssl-min-ver TLSv1.1 no-tls-tickets
ssl-default-server-ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
ssl-default-server-options ssl-min-ver TLSv1.1 no-tls-tickets
# https://www.haproxy.com/blog/dynamic-configuration-haproxy-runtime-api/
stats socket ipv4@127.0.0.1:9999 level admin
stats socket /var/run/haproxy.sock mode 666 level admin
stats timeout 2m
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
mode tcp
log global
option dontlognull
#option logasap
option srvtcpka
option log-separate-errors
retries 3
timeout http-request 10s
timeout queue 2m
timeout connect 10s
timeout client 5m
timeout server 5m
timeout http-keep-alive 10s
timeout check 10s
maxconn 750
#---------------------------------------------------------------------
# main frontend which proxys to the backends
#---------------------------------------------------------------------
##
## Frontend for HTTP
##
frontend http-in
bind :::80 v4v6
mode http
option httplog
tcp-request inspect-delay 5s
tcp-request content accept if HTTP
# redirect http to https .
http-request redirect scheme https unless { ssl_fc }
##
## Frontend for HTTPS
##
frontend public_ssl
bind :::443 v4v6
option tcplog
tcp-request inspect-delay 5s
tcp-request content capture req.ssl_sni len 25
tcp-request content accept if { req.ssl_hello_type 1 }
# https://www.haproxy.com/blog/introduction-to-haproxy-maps/
use_backend %[req.ssl_sni,lower,map(tcp-domain2backend-map.txt)]
default_backend be_sni
##########################################################################
# TLS SNI
#
# When using SNI we can terminate encryption with dedicated certificates.
##########################################################################
backend be_sni
server fe_sni 127.0.0.1:10444 weight 10 send-proxy-v2-ssl-cn
backend be_sni_xmpp
server li_tcp-in 127.0.0.1:8080 weight 10 send-proxy-v2-ssl-cn
# handle https incoming
frontend https-in
# terminate ssl
bind 127.0.0.1:10444 accept-proxy ssl strict-sni alpn h2,http/1.1 crt haproxy-certs
mode http
option forwardfor
option httplog
option http-ignore-probes
# Strip off Proxy headers to prevent HTTpoxy (https://httpoxy.org/)
http-request del-header Proxy
http-request set-header Host %[req.hdr(host),lower]
http-request set-header X-Forwarded-Proto https
http-request set-header X-Forwarded-Host %[req.hdr(host),lower]
http-request set-header X-Forwarded-Port %[dst_port]
http-request set-header X-Forwarded-Proto-Version h2 if { ssl_fc_alpn -i h2 }
http-request add-header Forwarded for=\"[%[src]]\";host=%[req.hdr(host),lower];proto=%[req.hdr(X-Forwarded-Proto)];proto-version=%[req.hdr(X-Forwarded-Proto-Version)]
# Add hsts https://www.haproxy.com/blog/haproxy-and-http-strict-transport-security-hsts-header-in-http-redirects/
# http-response set-header Strict-Transport-Security "max-age=16000000; includeSubDomains; preload;"
# https://www.haproxy.com/blog/introduction-to-haproxy-maps/
use_backend %[req.hdr(host),lower,map(http-domain2backend-map.txt)]
#---------------------------------------------------------------------
# backends
#---------------------------------------------------------------------
## backend for cloud.DOMAIN
backend nextcloud-backend
mode http
option httpchk GET / HTTP/1.1\r\nHost:\ BACKEND_VHOST
server short-cloud 127.0.0.1:81 check
## backend for dashboard.DOMAIN
backend dashboard-backend
mode http
server short-cloud 127.0.0.1:82 check
## backend for upload.DOMAIN
backend httpupload-backend
log global
mode http
server short-cloud 127.0.0.1:8443 check
listen tcp-in
bind :8080 accept-proxy ssl strict-sni crt haproxy-certs
mode tcp
log stdout format raw daemon
option tcplog
timeout connect 5s
timeout client 30s
timeout server 30s
server server1 192.168.10.1:8080
server server2 192.168.10.2:8080
server server3 192.168.10.3:8080
File tcp-domain2backend-map.txt
tcp-service.mydomain.im be_sni_xmpp
File http-domain2backend-map.txt
# http backends
nextcloud.MyDomain.com nextcloud-backend
dashboard.MyDomain.com dashboard-backend
jabupload.MyDomain.com httpupload-backend
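The use_backend lines above look the captured SNI (or Host header) up in those map files and fall back to the default_backend when nothing matches. A hypothetical Python sketch of that lookup (map contents taken from the file above; function names are mine):

```python
# Illustrative sketch of what
#   use_backend %[req.ssl_sni,lower,map(tcp-domain2backend-map.txt)]
# does: lowercase the SNI value, look it up in a "hostname backend"
# map file, and fall back to a default backend on a miss.
MAP_TXT = """\
# tcp-domain2backend-map.txt
tcp-service.mydomain.im be_sni_xmpp
"""

def load_map(text):
    entries = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # HAProxy also ignores blanks and comments
        key, value = line.split()
        entries[key] = value
    return entries

def pick_backend(sni, mapping, default="be_sni"):
    return mapping.get(sni.lower(), default)

MAPPING = load_map(MAP_TXT)
```

The same pattern applies to http-domain2backend-map.txt, keyed on the Host header instead of the SNI.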
Related
In my use case, I need to set up a load balancer (nginx or something similar that supports TCP load balancing) connected to my backend service.
I want to do this in an active/passive manner. I can have 5 load balancer instances in a Docker environment and 5 backend service instances, let's say NFS (maybe not in Docker).
Now I want my lb1 (load balancer 1) to route requests to nfs1 only, unless it is down, in which case requests should go to nfs2, then nfs3, and so on.
lb1 ----- nfs1
lb2 ----- nfs2
:
:
lb5 ----- nfs5
I have tried this with nginx, but it only supports 2 servers in active/passive mode with the backup keyword.
events {
worker_connections 1024;
}
stream {
upstream stream_backend {
server 172.17.0.5:2049;
server 172.17.0.7:2049 backup;
}
server {
listen 80;
proxy_connect_timeout 1s;
proxy_timeout 3s;
proxy_pass stream_backend;
}
}
Any help will be great.
By using HAProxy I was able to solve my issue.
defaults
mode tcp
frontend haproxy
bind *:80
mode tcp
timeout client 1s
default_backend nfs
backend nfs
mode tcp
timeout connect 1s
timeout server 1s
server nfs1 172.17.0.7:2049 check
server nfs2 172.17.0.5:2049 check backup
server nfs3 172.17.0.8:2049 check backup
credits: https://www.haproxy.com/blog/failover-and-worst-case-management-with-haproxy/
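The check/backup combination above gives strict failover order: traffic goes to the first healthy non-backup server, and backups are tried in config order only when every active server is down. A small Python sketch of that selection rule (tuple layout and function name are mine):

```python
# Illustrative sketch of HAProxy's active/backup selection: the first
# healthy non-backup server wins; backups are used, in config order,
# only when every active server is down.
def pick_server(servers):
    """servers: list of (name, is_backup, is_up) tuples, in config order."""
    active = [s for s in servers if not s[1] and s[2]]
    if active:
        return active[0][0]
    backups = [s for s in servers if s[1] and s[2]]
    return backups[0][0] if backups else None
```

With this rule, nfs2 only ever sees traffic while nfs1 is failing its checks, which is exactly the active/passive behaviour asked for.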
Hi, I'm trying to implement TCP passthrough based on SNI. It works for SSL but it's not working for port 80.
The configuration is below:
global
log 127.0.0.1 local2
chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
maxconn 4000
user haproxy
group haproxy
daemon
stats socket /var/lib/haproxy/stats
defaults
timeout client 30s
timeout server 30s
timeout connect 5s
frontend https
bind *:443
mode tcp
tcp-request inspect-delay 5s
tcp-request content accept if { req_ssl_hello_type 1 }
acl mytonicssl req_ssl_sni -i staging.mytonic.com
use_backend mytonic-ssl if mytonicssl
backend mytonic-ssl
mode tcp
balance roundrobin
stick-table type binary len 32 size 30k expire 30m
acl clienthello req_ssl_hello_type 1
acl serverhello rep_ssl_hello_type 2
tcp-request inspect-delay 5s
tcp-request content accept if clienthello
tcp-response content accept if serverhello
stick on payload_lv(43,1) if clienthello
stick store-response payload_lv(43,1) if serverhello
option ssl-hello-chk
server server1 10.10.17.222:8443 check
frontend http
bind *:80
mode tcp
acl mytonic_http hdr_dom(host) -i staging.mytonic.com
use_backend mytonic_nonssl if mytonic_http
backend mytonic_nonssl
mode tcp
balance roundrobin
server server1 10.10.17.222:8080 check
If I add a default backend then it works, but that is not a virtual host solution. My HAProxy version is HA-Proxy version 1.5.18 2016/05/10. Any help is appreciated.
SNI is a TLS extension which contains the target hostname. Since it is a TLS extension, it can only be used with SSL/TLS traffic. The matching mechanism for plain HTTP (i.e. no SSL/TLS) is the HTTP Host header. But to balance based on this header you need to use mode http (the default) and not mode tcp. See also: How to divert traffic based on hostname using HAProxy?
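To make the distinction concrete: for plain HTTP the routing key sits in the Host header of the request, which only an HTTP-aware proxy (mode http) parses, while for TLS the key is the SNI field of the ClientHello, readable without decrypting. A sketch of the Host-header side (function name and parsing details are mine):

```python
# Illustrative sketch: extract the Host header an HTTP-mode proxy would
# route on. In mode tcp HAProxy never parses this, which is why the
# port-80 hdr_dom(host) ACL above cannot match.
def host_from_http_request(raw: bytes):
    """Return the lowercased Host header of a raw HTTP/1.x request, or None."""
    head = raw.split(b"\r\n\r\n", 1)[0]
    for line in head.split(b"\r\n")[1:]:  # skip the request line
        name, _, value = line.partition(b":")
        if name.strip().lower() == b"host":
            return value.strip().decode("ascii").lower()
    return None
```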
Using HAProxy, I'm trying to (TCP) load balance Rserve (a service listening on a TCP socket for calling R scripts) running on port 6311 on 2 nodes.
Below is my config file. When I run HAProxy, it starts without any issues. But when I connect to the balanced nodes, I get the error below. Anything wrong with the config?
Handshake failed: expected 32 bytes header, got -1
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
log 127.0.0.1 local2
chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
maxconn 4000
user haproxy
group haproxy
daemon
# turn on stats unix socket
stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
mode tcp
log global
option httplog
option dontlognull
option http-server-close
#option forwardfor except 127.0.0.0/8
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
timeout check 10s
maxconn 3000
listen haproxy_rserve
bind *:81
mode tcp
option tcplog
timeout client 10800s
timeout server 10800s
balance leastconn
server rserve1 rserveHostName1:6311
server rserve2 rserveHostName2:6311
listen stats proxyHostName:8080
mode http
stats enable
stats realm Haproxy\ Statistics
stats uri /haproxy_stats
stats hide-version
stats auth admin:password
I tried the below frontend/backend way of balancing as well. Same result.
frontend haproxy_rserve
bind *:81
mode tcp
option tcplog
timeout client 10800s
default_backend rserve
backend rserve
mode tcp
option tcplog
balance leastconn
timeout server 10800s
server rserve1 rserveHostName1:6311
server rserve2 rserveHostName2:6311
After struggling for a week, the below (fully free/open-source software stack) solution worked for load balancing R.
If more people are referring to this, I'll post a detailed blog from installation to configuration.
I was able to load balance R script requests coming to Rserve via the HAProxy TCP load balancer with the below config. It is pretty similar to the config in the question section, but with frontend and backend separated.
#Load balancer stats page access at hostname:8080/haproxy_stats
listen stats <load_balancer_hostname>:8080
mode http
log global
stats enable
stats realm Haproxy\ Statistics
stats uri /haproxy_stats
stats hide-version
stats auth admin:admin#rserve
frontend rserve_frontend
bind *:81
mode tcp
option tcplog
timeout client 1m
default_backend rserve_backend
backend rserve_backend
mode tcp
option tcplog
option log-health-checks
option redispatch
log global
balance roundrobin
timeout connect 10s
timeout server 1m
server rserve1 <rserve hostname1>:6311 check
server rserve2 <rserve hostname2>:6311 check
server rserve3 <rserve hostname3>:6311 check
If SELinux is enabled, the below command will allow HAProxy to make remote connections:
/usr/sbin/setsebool -P haproxy_connect_any 1
The firewall ports might need opening too:
firewall-cmd --permanent --zone=public --add-port=81/tcp
firewall-cmd --permanent --zone=public --add-port=8080/tcp
Also, enable remote connections in Rserve with remote enable in the Rserve config file.
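For context on the original error: on connect, Rserve greets the client with a 32-byte ID string beginning "Rsrv" (e.g. "Rsrv0103QAP1..."), and "expected 32 bytes header, got -1" means the socket was closed before that greeting arrived, i.e. the proxy never reached a live backend. A hypothetical sketch of validating such a greeting (function name and error text are mine):

```python
# Illustrative sketch based on the Rserve wire protocol: the server's
# first bytes are a 32-byte ID string ("Rsrv" + protocol version +
# "QAP1" + attributes). "got -1" in the client error means EOF: the
# connection was dropped before any greeting was received.
def check_rserve_greeting(data: bytes):
    if len(data) < 32:
        raise ValueError("expected 32 bytes header, got %d" % len(data))
    ident = data[:32]
    if not ident.startswith(b"Rsrv"):
        raise ValueError("not an Rserve greeting")
    return ident[4:8].decode("ascii")  # protocol version, e.g. "0103"
```

With the working config above, the greeting reaches the client intact because HAProxy passes the TCP stream through unmodified and only connects to backends that pass their health check.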
I have 2 virtual hosts:
app.example.com:80 on IP address xxx.xxx.xxx.xxx
app2.example.com:80 on IP address yyy.yyy.yyy.yyy
My HAProxy IP address is sss.sss.sss.sss.
This is the HAProxy configuration:
global
log 127.0.0.1 local0 notice
maxconn 2000
user haproxy
group haproxy
defaults
log global
mode http
option httplog
option dontlognull
retries 3
option redispatch
timeout connect 5000
timeout client 10000
timeout server 10000
frontend www-http
mode http
bind *:80
default_backend appname
stats enable
stats uri /haproxy?stats
stats auth admin:password
stats show-node
backend appname
balance roundrobin
option httpclose
option forwardfor
server lamp1 app.example.com:80 check
server lamp2 app2.example.com:80 check
When I try to access the site using the HAProxy IP address, the web browser returns the XAMPP dashboard instead of the backend content.
How can I make HAProxy route to the backend content?
I do believe that functionality is now available in 1.6:
http://blog.haproxy.com/2015/10/14/whats-new-in-haproxy-1-6/
What you have configured will simply load balance the requests between those two instances:
server lamp1 app.example.com:80 check
server lamp2 app2.example.com:80 check
If they are 2 separate apps, rather try:
frontend www-http
mode http
bind sss.sss.sss.sss:80
stats enable
stats uri /haproxy?stats
stats auth admin:password
stats show-node
acl app01 hdr(Host) -i app.example.com
acl app02 hdr(Host) -i app2.example.com
use_backend app01 if app01
use_backend app02 if app02
backend app01
balance roundrobin
option httpclose
option forwardfor
server lamp1 xxx.xxx.xxx.xxx:80 check
backend app02
balance roundrobin
option httpclose
option forwardfor
server lamp2 yyy.yyy.yyy.yyy:80 check
If you now hit your HAProxy with app.example.com you will be forwarded to lamp1, and app2.example.com will take you to lamp2.
If you want to forward everything on that IP to the backend and don't care about extra matching and mapping, then I'd use a straight listen instead of a frontend:
listen SOMENAME sss.sss.sss.sss:80
balance leastconn
mode http
server lamp1 xxx.xxx.xxx.xxx:80
server lamp2 yyy.yyy.yyy.yyy:80
If I remember correctly, "default_backend" expects a "backend" as its value, not a "listen".
https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#4.2-default_backend
So I suggest changing to:
backend appname
balance roundrobin
option httpclose
option forwardfor
server lamp1 app.example.com:80 check
server lamp2 app2.example.com:80 check
I am trying to set up RabbitMQ so it can be accessed externally (from non-localhost) through nginx.
nginx-rabbitmq.conf:
server {
listen 5672;
server_name x.x.x.x;
location / {
proxy_pass http://localhost:55672/;
}
}
rabbitmq.conf:
[
{rabbit,
[
{tcp_listeners, [{"127.0.0.1", 55672}]}
]
}
]
By default the guest user can only connect from localhost, so we need to create another user with the required permissions, like so:
sudo rabbitmqctl add_user my_user my_password
sudo rabbitmqctl set_permissions my_user ".*" ".*" ".*"
However, when I attempt a connection to RabbitMQ through pika, I get a ConnectionClosed exception:
import pika
credentials = pika.credentials.PlainCredentials('my_user', 'my_password')
pika.BlockingConnection(
pika.ConnectionParameters(host=ip_address, port=55672, credentials=credentials)
)
--[raises ConnectionClosed exception]--
If I use the same parameters but change the host to localhost (connecting to RabbitMQ directly, bypassing nginx), then I connect OK:
pika.ConnectionParameters(host='localhost', port=55672, credentials=credentials)
I have opened port 5672 in the GCE web console, and communication through nginx is happening: the nginx access.log file shows
[30/Apr/2014:22:59:41 +0000] "AMQP\x00\x00\x09\x01" 400 172 "-" "-" "-"
which is a 400 (bad request) status code response.
So by the looks of it, the request fails when going through nginx, but works when we hit RabbitMQ directly.
Has anyone else had similar problems, or got RabbitMQ working for external users through nginx? Is there a RabbitMQ log file where I can see each request, to help with further troubleshooting?
Since nginx 1.9 there is a stream module for TCP or UDP (not compiled in by default).
I configured my nginx (1.13.3) with an SSL stream:
stream {
upstream rabbitmq_backend {
server rabbitmq.server:5672;
}
server {
listen 5671 ssl;
ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
ssl_ciphers RC4:HIGH:!aNULL:!MD5;
ssl_handshake_timeout 30s;
ssl_certificate /path/to.crt;
ssl_certificate_key /path/to.key;
proxy_connect_timeout 1s;
proxy_pass rabbitmq_backend;
}
}
https://docs.nginx.com/nginx/admin-guide/security-controls/terminating-ssl-tcp/
You have configured nginx as an HTTP reverse proxy, but RabbitMQ is configured to use the AMQP protocol (see the description of tcp_listeners at https://www.rabbitmq.com/configure.html).
For nginx to do anything meaningful you will need to reconfigure RabbitMQ to use HTTP, for example http://www.rabbitmq.com/web-stomp.html.
Of course, this may have a ripple effect, because any clients that access RabbitMQ via AMQP must be reconfigured/redesigned to use HTTP.
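The access.log line in the question shows exactly this mismatch: "AMQP\x00\x00\x09\x01" is the fixed 8-byte protocol header an AMQP 0-9-1 client sends first, and nginx's HTTP parser rejects it as a malformed request line, hence the 400. A sketch of telling the two preambles apart (function name is mine):

```python
# Illustrative sketch: an AMQP 0-9-1 connection opens with the fixed
# bytes b"AMQP\x00\x00\x09\x01"; an HTTP request opens with
# "METHOD PATH HTTP/x.y". An HTTP proxy seeing the AMQP header fails
# to parse a request line and answers 400 Bad Request.
AMQP_HEADER = b"AMQP\x00\x00\x09\x01"

def classify_preamble(data: bytes):
    if data.startswith(AMQP_HEADER):
        return "amqp"
    first_line = data.split(b"\r\n", 1)[0]
    parts = first_line.split(b" ")
    if len(parts) == 3 and parts[2].startswith(b"HTTP/"):
        return "http"
    return "unknown"
```

This is why the stream-module answers below work: a TCP (stream) proxy forwards the AMQP bytes untouched instead of trying to parse them.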
You can try proxying at the TCP level by installing a TCP proxy module for nginx that works with AMQP:
https://github.com/yaoweibin/nginx_tcp_proxy_module
Give it a go.
Nginx was originally only an HTTP server. I also suggest looking into the TCP proxy module referred to above, but if you would like a proven load balancer that is a general TCP reverse proxy (not just HTTP, able to handle any protocol), you might consider using HAProxy.
Since AMQP operates at the TCP level, you need to configure nginx for TCP/UDP connections:
https://docs.nginx.com/nginx/admin-guide/load-balancer/tcp-udp-load-balancer
I might be late to the party, but I am quite sure my article will help a lot of people in the coming days.
In the article I have explained how to install a Let's Encrypt certificate for the RabbitMQ management GUI, with NGINX as a reverse proxy on port 15672, which runs on the HTTP protocol.
I have also used the same SSL certificates to power the RabbitMQ server that runs on the AMQP protocol.
Kindly go through the following article for a detailed description:
https://stackcoder.in/posts/install-letsencrypt-ssl-certificate-for-rabbitmq-server-and-rabbitmq-management-tool
NOTE: Don't configure the RabbitMQ server running on port 5672 as an HTTP reverse proxy. If you must proxy it, use NGINX streams. But I highly recommend sticking with adding the certificate paths in the rabbitmq.conf file, as RabbitMQ works at the TCP level.