Kubernetes: Using the same service (RabbitMQ) for both HTTP and WS traffic - nginx

I am trying to set up a RabbitMQ cluster within Kubernetes. Clients can connect to RabbitMQ using the AMQP protocol, which rides on TCP, and web clients connect using WebSockets. As mentioned in WebSocket support for kubernetes, I need to list WebSocket services in the "nginx.org/websocket-services" annotation of the Ingress configuration. In this case RabbitMQ acts as both a WebSocket and a non-WebSocket service.
Will load balancing still work for AMQP clients if I put the RabbitMQ service name in "nginx.org/websocket-services"?
In short, can a service be both non-WS and WS at the same time?
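For reference, a minimal sketch of an Ingress using that annotation, assuming the nginxinc ingress controller, a RabbitMQ Service named "rabbitmq", and RabbitMQ's default Web STOMP port; the host name is a placeholder:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rabbitmq-ingress
  annotations:
    # Tells the nginxinc ingress controller which backend
    # services carry WebSocket traffic.
    nginx.org/websocket-services: "rabbitmq"
spec:
  rules:
  - host: rabbitmq.example.com        # placeholder host
    http:
      paths:
      - path: /ws
        pathType: Prefix
        backend:
          service:
            name: rabbitmq            # placeholder service name
            port:
              number: 15674           # RabbitMQ Web STOMP default
```

Note that an Ingress only routes HTTP(S) traffic, so the annotation only affects the WebSocket side; plain AMQP on port 5672 never passes through this Ingress at all.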
EDIT [ 05-02-2018 ]
It seems there is a different flow for TCP load balancing. I implemented that. At least the routing part is happening, but I am not sure about the load balancing of TCP; I need to debug that part further.
And there is one more reference for WebSocket load balancing, which seems to say "No special configuration required".
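The separate TCP flow mentioned above, if the controller in use is the community kubernetes/ingress-nginx one, is typically a ConfigMap passed to the controller via --tcp-services-configmap, mapping an exposed port to a namespace/service:port (names below are placeholders):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # exposed port -> <namespace>/<service>:<service port>
  "5672": "default/rabbitmq:5672"
```

The controller then proxies the raw TCP stream and balances connections across the service's endpoints, independently of any HTTP/WebSocket Ingress rules.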
Kiran

Related

How to proxy a request from a client to an Apache Pulsar broker?

I'm attempting to connect a client running in a Kubernetes cluster to an Apache Pulsar cluster hosted by StreamNative. Specifically, I'm attempting to use the logstash-input-pulsar plugin, which doesn't support auth. One option is to fork logstash-input-pulsar and add authentication; however, a more general option would be to create a proxy between Logstash and Pulsar, where the proxy is able to handle authentication. (For example, the proxy could be a sidecar on the Kubernetes pod where Logstash is running.) I looked into using the Pulsar Proxy; however, this proxy is intended to run on the same Kubernetes cluster as the Pulsar broker(s). If the Pulsar client were using the HTTP protocol, I could set up NGINX as a proxy between the client and broker, and NGINX could add the appropriate auth - a header, for example. Pulsar, however, uses its own protocol over TCP. Would there still be a way to have a proxy that handles adding auth between the Pulsar client and broker?

Putting NGINX in front of kafka

I have Elasticsearch, Filebeat and Kibana behind an NGINX server, and all three of them use SSL and the basic authentication of the NGINX reverse proxy. I want to place Kafka behind NGINX as well. Kafka is communicating with Filebeat. Is there any possible way that Filebeat (with SSL) and Kafka (without SSL) can communicate?
I mean, is there some kind of exception that we can add in the NGINX configuration?
There's not much benefit to using NGINX with Kafka beyond the initial client connection. In other words, yes, in theory you can use the stream directive and point bootstrap.servers at it, but Kafka will return its advertised.listeners after that, and clients will then bypass NGINX and communicate directly with the individual brokers (including for authentication).
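The stream directive mentioned above would look roughly like this (broker addresses and ports are placeholders); as noted, it only helps with the initial bootstrap connection:

```nginx
stream {
    upstream kafka_bootstrap {
        server kafka-0.example.internal:9092;   # placeholder brokers
        server kafka-1.example.internal:9092;
    }
    server {
        listen 9092;
        proxy_pass kafka_bootstrap;
    }
}
```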

How to make Kubernetes service load balance based on client IP instead of NGINX reverse proxy IP

I have configured NGINX as a reverse proxy with web sockets enabled for a backend web application with multiple replicas. The request from NGINX does a proxy_pass to a Kubernetes service which in turn load balances the request to the endpoints mapped to the service. I need to ensure that the request from a particular client is proxied to the same Kubernetes back end pod for the life cycle of that access, basically maintaining session persistence.
Tried setting sessionAffinity: ClientIP in the Kubernetes service; however, this routes based on the client IP, which is that of the NGINX proxy. Is there a way to make the Kubernetes service do the affinity based on the actual client IP the request originated from, and not the NGINX pod's internal IP?
This is not an option with NGINX. Or rather, it's not an option with anything in userspace like this without a lot of very fancy network manipulation. You'll need to find another option, usually app-specific proxy rules in the outermost HTTP proxy layer.
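One example of such app-specific rules at the outermost NGINX layer is hashing on the client address so that NGINX itself, rather than the Service, pins each client to one pod. This is a sketch assuming the pod endpoints are addressable directly (for example via a headless Service); the addresses are placeholders:

```nginx
upstream backend {
    ip_hash;                        # pin each client IP to one upstream
    server 10.0.0.11:8080;          # placeholder pod endpoints
    server 10.0.0.12:8080;
}
server {
    listen 80;
    location / {
        proxy_pass http://backend;
        # WebSocket upgrade headers
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

Since the hash runs in NGINX, which still sees the real client address, the affinity decision is made before the Service's load balancing is involved.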

How can I have both TCP and REST API on puma?

Is there an easy way to enable raw TCP connectivity on the default Puma Rails server?
My use case is to start my REST API Puma server and accept TCP connections as well, so that when I call a specific endpoint and use some TCP service inside the REST API, the connection would be kept alive.
Is this possible? I am not talking about WebSockets, but pure TCP sockets.

Apache Camel and Netty as a TCP sticky balancer

I'm trying to load balance TCP connections over multiple backend servers via Apache Camel and Netty.
I want to make each connection to the backend mapped to each connection to Camel. Something like this:
Client connects to Camel.
Camel selects a backend server and connects to it.
Client sends something to Camel.
Camel sends it to the associated backend server.
Backend server replies to Camel.
Camel sends it back to client.
...
My protocol is stateful and the connection between the client and Camel stays open. I also need messages originating from the backend and going to the client.
So far, so good. This is working quite nice.
My problem starts when I connect a new client that goes to the same backend server: Camel reuses the connection that is already open, so to the backend server it looks as if the first client sent the message; it doesn't receive a new connection request.
I've looked at Apache Camel Netty Component documentation and didn't find anything to configure this behaviour.
Is it possible to do this?
Sidenote: I'm using Camel because I need to inspect the messages in the protocol to select a backend server, i.e. I need a custom load-balancing strategy. The problem occurs with any load-balancing strategy provided by Camel, so it's not related to my code.
Camel has a sticky load balancer; you just need to set up an expression that tells Camel which value's hash code to use for stickiness:
from("direct:start")
    .loadBalance().sticky(header("source"))
    .to("mock:x", "mock:y", "mock:z");
