Session stickiness for ejabberd TCP sessions via ELB and HAProxy

I have an ejabberd cluster in AWS that I want to load balance. I initially tried putting an ELB in front of the nodes, but that makes the sessions non-sticky. I then enabled proxy protocol on the ELB and introduced an HAProxy node between the ELB and the ejabberd cluster. My assumption was that the HAProxy instance would proxy the TCP connections and keep the sessions sticky to the ejabberd servers.
However, that still does not seem to be happening! Is this even possible in the first place? Adding the cookie config to haproxy.cfg gives an error saying cookies are only supported in HTTP mode, so how can I keep TCP sessions sticky to a server?
Please help, as I seem to be out of ideas here!
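For what it's worth, HAProxy can do stickiness in pure TCP mode without cookies by using a stick-table keyed on the client's source address. A minimal, untested sketch (frontend/backend names, ports, and server addresses are placeholders):

    frontend xmpp_in
        bind *:5222
        mode tcp
        default_backend ejabberd_nodes

    backend ejabberd_nodes
        mode tcp
        balance roundrobin
        # cookies are HTTP-only; in tcp mode, pin clients by source IP instead
        stick-table type ip size 200k expire 30m
        stick on src
        server ejabberd1 10.0.1.10:5222 check
        server ejabberd2 10.0.1.11:5222 check

Note that with an ELB in front, src would be the ELB's address unless HAProxy is told to parse the proxy protocol header (bind *:5222 accept-proxy).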

ejabberd does not require sticky load balancing, so you do not need to implement it. Just use an ejabberd cluster with an ELB or HAProxy in front, without stickiness.

Thanks @Michael-sqlbot and @Mickael. It turned out to be the idle timeout on the ELB: it was set to 60 seconds, so the TCP connection was being recycled whenever I didn't push any data from the client to the ejabberd server. After tuning that along with the health check interval, I can see the ELB giving me a long-running connection. Thanks.
I still have to figure out how to get the client IPs captured in ejabberd (I believe enabling proxy protocol on the ELB would help), but that is a separate investigation...
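In case it helps that investigation: recent ejabberd versions can parse the proxy protocol header themselves via the use_proxy_protocol listener option. A rough ejabberd.yml fragment (untested; the port and module are assumptions based on a standard c2s setup):

    listen:
      -
        port: 5222
        module: ejabberd_c2s
        # assumes proxy protocol is enabled on the ELB/HAProxy in front
        use_proxy_protocol: true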

Related

Can I use a reverse proxy for direct database connection?

Is it possible to set up a reverse proxy that would allow a database client to connect over SSL on port 443 and be redirected to port 1521? I suspect it would not work. Can someone explain why or why not?
I'm assuming an Oracle database, based on port 1521.
There is no problem setting up Nginx as a TCP (L4) proxy for any TCP backend. See https://www.nginx.com/blog/tcp-load-balancing-udp-load-balancing-nginx-tips-tricks/ for an example configuration.
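Along the lines of that article, a minimal stream-block sketch (untested; the hostname and ports are placeholders):

    stream {
        upstream oracle_db {
            server db.internal:1521;
        }
        server {
            listen 443;            # database clients connect here
            proxy_pass oracle_db;  # raw TCP pass-through, no SSL handling
        }
    }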
As for terminating SSL (L5) and sending the decrypted data to a TCP backend, that is also technically possible with ngx_stream_ssl_module, but I have never tested it, and from what I can read people have had problems setting it up for PostgreSQL (a hedged sketch follows after these links):
http://nginx.org/en/docs/stream/ngx_stream_ssl_module.html
Can nginx do TCP load balance with SSL termination
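If you do want to experiment with SSL termination in the stream block, the general shape would be something like the untested sketch below (certificate paths and the backend address are placeholders):

    stream {
        server {
            listen 443 ssl;
            ssl_certificate     /etc/nginx/certs/proxy.crt;
            ssl_certificate_key /etc/nginx/certs/proxy.key;
            # decrypted traffic is forwarded to the backend as plain TCP
            proxy_pass db.internal:1521;
        }
    }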
I have never seen Nginx set up as a proxy for databases. Instead, connection poolers (e.g. pgbouncer for PostgreSQL) are often used, not only for pooling but also as an SSL offloading service. They are in fact L7 proxies for databases.
The Oracle equivalent of pgbouncer seems to be Oracle Connection Manager, and it supports SSL, so I'd strongly recommend using it instead of Nginx or any other general-purpose reverse proxy:
https://docs.oracle.com/en/database/oracle/oracle-database/18/netag/configuring-oracle-connection-manager.html#GUID-AF8A511E-9AE6-4F4D-8E58-F28BC53F64E4

Load balancing go servers in Beanstalk

I'm trying to load balance Go servers in AWS Beanstalk that use gRPC/Protobuf for data serialization. Beanstalk offers nginx as a reverse proxy for client-server communication, and it uses the HTTP/1.1 protocol. This results in bogus messages being exchanged between the proxy and the server, and client messages never seem to reach the server as intended. Any clean ideas would help here.
Nginx doesn't support HTTP/2 to the backend yet. Some of us are working on a fix for this, but it will take another quarter before we can upstream it. You can either wait for that or put Envoy (https://github.com/lyft/envoy) in front, which supports gRPC and HTTP/2 natively. Hope this helps.
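To make that concrete, here is a rough envoy.yaml sketch that keeps HTTP/2 end-to-end so gRPC frames reach the backend intact. It uses Envoy's current v3 config API (the lyft/envoy repo linked above predates this schema, so field names may differ by version); the listener port and backend address are placeholders:

    static_resources:
      listeners:
      - address:
          socket_address: { address: 0.0.0.0, port_value: 8080 }
        filter_chains:
        - filters:
          - name: envoy.filters.network.http_connection_manager
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
              stat_prefix: grpc_ingress
              codec_type: AUTO          # accept HTTP/1.1 and HTTP/2 from clients
              route_config:
                virtual_hosts:
                - name: backend
                  domains: ["*"]
                  routes:
                  - match: { prefix: "/" }
                    route: { cluster: grpc_backend }
              http_filters:
              - name: envoy.filters.http.router
                typed_config:
                  "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
      clusters:
      - name: grpc_backend
        connect_timeout: 1s
        type: STRICT_DNS
        # speak HTTP/2 to the upstream so gRPC framing survives the hop
        typed_extension_protocol_options:
          envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
            "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
            explicit_http_config:
              http2_protocol_options: {}
        load_assignment:
          cluster_name: grpc_backend
          endpoints:
          - lb_endpoints:
            - endpoint:
                address:
                  socket_address: { address: backend.internal, port_value: 50051 }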

nginx stream mode reconnect to upstream without close downstream connection

Hi, I am using nginx stream mode to proxy TCP connections. If I restart my app on the upstream, could nginx automatically reconnect to the upstream without losing the TCP connection on the downstream side?
I found a clue in this HiveMQ blog post comment; hope this helps. I've copied it below:
Hi Sourav,
the load balancer doesn’t have any knowledge of MQTT; at least I don’t know any MQTT-aware load balancer.
HiveMQ replicates its state automatically in the cluster. If a cluster node goes down and the client reconnects (and is assigned to another broker instance by the LB), it can resume its complete session. The client does not need to resubscribe.
Hope this helps, Dominik from the HiveMQ Team

Load balancer for websockets

I know how load balancers work for HTTP requests: a client opens a connection with the LB, the LB forwards the request to a backend server, the LB gets the response and sends it back to the client over the same connection, then closes the connection. I want to know the internal details of load balancers for WebSockets: how the connections are maintained and how responses are sent to the client. I've read many questions on Stack Overflow, but none of them gave a clear picture of the internal implementation of an LB.
The LB just routes the connection to a server behind it,
so as long as you keep the connection open you will stay connected to the same server and will not go through the LB's balancing decision again.
Depending on the client, you could be routed to another server on reconnection.
I'm not sure how it works when some libraries fall back to JSON-P, though.
Implementations of load balancers vary greatly. There are load balancers that support WebSockets, like F5's BIG-IP (https://support.f5.com/kb/en-us/solutions/public/14000/700/sol14754.html), and LBs that I don't think support WebSockets, like AWS ELB (there is a thread where somebody says they made it work with ELB, but I suppose they added some other component behind the ELB: How do you get Amazon's ELB with HTTPS/SSL to work with Web Sockets?).
Load balancers don't only act as terminators of HTTP connections; they can also terminate HTTPS, SSL, and TCP connections. They can implement stickiness based on different parameters, like cookies, origin IP, etc. (like F5 does). ELBs use only cookies, either application-generated or LB-generated (both only with HTTP or HTTPS). Stickiness can also be kept for a certain defined time, which is sometimes configurable.
Now, in order to forward WebSocket data, they need to terminate and forward connections at the SSL or TCP level (not HTTP or HTTPS), unless they understand the WebSocket protocol (I don't know of any that do). Additionally, they need to keep stickiness to the server with which the connection was opened. This is not possible with ELB, but it is with more complex LBs like BIG-IP.
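As a concrete example of the simpler L7 case, nginx can proxy WebSocket upgrades by forwarding the Upgrade/Connection headers, and it can approximate stickiness with ip_hash. An untested sketch with placeholder backend names:

    upstream ws_backend {
        ip_hash;                     # stickiness by client source IP
        server app1.internal:8080;
        server app2.internal:8080;
    }
    server {
        listen 80;
        location /ws/ {
            proxy_pass http://ws_backend;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_read_timeout 1h;   # keep long-lived sockets from timing out
        }
    }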

Freezing haproxy traffic with maxconn 0 and keepalive connections

Since HAProxy v1.5.0 it has been possible to temporarily stop reverse-proxying traffic to frontends using the
set maxconn frontend <frontend_name> 0
command.
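For context, that command goes over HAProxy's runtime API; this assumes a stats socket is configured, and the socket path and frontend name below are placeholders:

    # in haproxy.cfg: stats socket /var/run/haproxy.sock level admin
    echo "set maxconn frontend fe_main 0" | socat stdio /var/run/haproxy.sock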
I've noticed that if HAProxy is configured to maintain keepalive connections between HAProxy and a client, those connections will continue to be served, whereas new ones will keep waiting for the frontend to be "un-paused".
The question is: is it possible to terminate the current keepalive connections gracefully, so that clients are required to establish new connections?
I've only found the shutdown session and shutdown sessions commands, but they are obviously not graceful at all.
The purpose of all of this is to make some changes on the server seamlessly; otherwise, with the current configuration, it would require a scheduled maintenance window.
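One workaround sometimes suggested (an untested sketch; the map file, frontend, and backend names are placeholders): have the frontend add Connection: close to responses while a runtime-toggled map flag is set, so keepalive clients disconnect after their next exchange and then queue behind the paused frontend:

    # haproxy.cfg (flags.map must exist at startup, e.g. containing: drain off)
    frontend fe_main
        bind *:80
        # 'draining' is true when flags.map maps the key "drain" to "on"
        acl draining str(drain),map(/etc/haproxy/flags.map) -m str on
        http-response set-header Connection close if draining
        default_backend be_app

    # at maintenance time, over the admin socket:
    #   set map /etc/haproxy/flags.map drain on   (use 'add map' if the key is absent)
    #   set maxconn frontend fe_main 0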
