Google Endpoints + GKE + gRPC

I've been trying to deploy a gRPC application fronted by Google Endpoints on a GKE cluster, terminating TLS on the load balancer itself, for the better part of 3 days now, and I am very confused about how to get this working.
At first I tried a simple deployment without Google Endpoints to make sure the load balancer works. It is described in more detail here:
https://github.com/kubernetes/ingress-gce/issues/18#issuecomment-454047010
That did not work. I then followed up by trying to deploy the application from here:
https://github.com/salrashid123/gcegrpc/tree/master/gke_ingress_lb
That seems to have worked well, but I am not quite able to understand what makes it work. It seems to me (as suggested by someone else) that it might be because the application speaks TLS on the gRPC endpoint.
I have tried enabling TLS on my application's gRPC endpoint, including adding a gRPC health check as suggested by someone else, but that did not seem to help.
My ESP config was something as simple as:
- name: endpoints-proxy
  image: gcr.io/endpoints-release/endpoints-runtime:1
  args: [
    "--http2_port=8080",
    "--backend=grpc://127.0.0.1:50051",
    "--service=myapp.endpoints.myproject-34342.cloud.goog",
    "--rollout_strategy=managed",
    "--service_account_key=/etc/nginx/creds/endpoints-credentials.json"
  ]
How exactly does one go about terminating TLS on the GLB together with the ESP proxy and a gRPC application behind it? There seems to be a sweet spot that I am missing for getting all of those things to work together.

Figured out how to do it. It turns out there are a couple of not-so-well-documented things that need to be done.
See here for the details:
https://github.com/GoogleCloudPlatform/endpoints-samples/issues/52#issuecomment-454387373
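For anyone who can't follow the link: the usual ingredients for this combination (GCLB terminating TLS in front of ESP and a gRPC backend) are, as far as I understand it, that the GCLB only speaks HTTP/2 to a backend over TLS, so the backend service port has to be annotated as HTTP2 and ESP itself has to serve TLS on that port (--ssl_port, with nginx.crt/nginx.key mounted at /etc/nginx/ssl), and the load balancer health check needs something that answers 200, which ESP can expose via --healthz. Below is a minimal sketch of that idea, with placeholder names (myapp-esp, esp-ssl, port 9000, the secret names) rather than anything taken verbatim from the issue:

apiVersion: v1
kind: Service
metadata:
  name: myapp-esp                      # placeholder name
  annotations:
    # tell the GCLB to speak HTTP/2 (and therefore TLS) to this named port
    cloud.google.com/app-protocols: '{"esp-ssl":"HTTP2"}'
spec:
  type: NodePort
  selector:
    app: myapp                         # placeholder selector
  ports:
  - name: esp-ssl
    port: 443
    targetPort: 9000

# ESP container in the Deployment (fragment); ESP terminates TLS itself on 9000,
# because the GCLB only talks HTTP/2 to backends over TLS.
- name: endpoints-proxy
  image: gcr.io/endpoints-release/endpoints-runtime:1
  args: [
    "--ssl_port=9000",                 # expects /etc/nginx/ssl/nginx.crt and nginx.key
    "--backend=grpc://127.0.0.1:50051",
    "--service=myapp.endpoints.myproject-34342.cloud.goog",
    "--rollout_strategy=managed",
    "--healthz=healthz",               # ESP answers 200 on /healthz for the LB health check
    "--service_account_key=/etc/nginx/creds/endpoints-credentials.json"
  ]
  volumeMounts:
  - name: esp-ssl-cert                 # secret containing nginx.crt / nginx.key (placeholder name)
    mountPath: /etc/nginx/ssl
    readOnly: true
  - name: endpoints-credentials        # as in the original config
    mountPath: /etc/nginx/creds
    readOnly: true

Depending on the setup, the GKE ingress health check may also need to be pointed at the /healthz path that ESP exposes; treat the above as a sketch of the moving parts, not a verified manifest.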

Related

nginx ingress controller limit-rps does not seem to work

I'm seeing some strange behaviour when doing load testing.
Environment:
NGINX Ingress controller version: 0.44.0
Kubernetes version: 1.17.8
openidc.lua version: 1.7.4
Here's the situation:
The nginx ingress controller is deployed as a daemonset, and because of the openidc module I activated sessionAffinity: ClientIP.
I have a simple stateless REST service deployed with a basic ingress, which is what is being load tested (no sessionAffinity on that one).
When load testing the REST service without sessionAffinity: ClientIP, I get far beyond 25 req/s (about 130 req/s before the service's resources begin to crash, but that's another issue).
But with sessionAffinity activated, I only reach 25 req/s.
After some research, I found some interesting things, described here: https://medium.com/titansoft-engineering/rate-limiting-for-your-kubernetes-applications-with-nginx-ingress-2e32721f7f57
So, since the load test should always be served by the same nginx pod, the formula should be: successful requests = period * rate + burst.
So I tried adding the annotation nginx.ingress.kubernetes.io/limit-rps: "100" to my ingress, but no luck, still the same 25 req/s.
I also tried different combinations of the annotations documented here: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#rate-limiting, but no luck either.
Am I missing something?
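For reference, the ingress under test looked roughly like the sketch below. This is only an illustration: demo-ingress, rest-svc and the host are placeholder names, and the apiVersion matches a 1.17 cluster (Ingress only moved to networking.k8s.io/v1 in 1.19).

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: demo-ingress                            # placeholder
  annotations:
    kubernetes.io/ingress.class: nginx
    # 100 requests per second per client IP...
    nginx.ingress.kubernetes.io/limit-rps: "100"
    # ...with burst = limit-rps * multiplier (the multiplier defaults to 5)
    nginx.ingress.kubernetes.io/limit-burst-multiplier: "5"
spec:
  rules:
  - host: rest.example.com                      # placeholder
    http:
      paths:
      - path: /
        backend:
          serviceName: rest-svc                 # placeholder
          servicePort: 80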
In fact, it was more vicious than that.
It had nothing to do with the sessionAffinity, nor with rate limiting (in fact there is none by default; I didn't get that at first, the rate limit is only there if you want to limit for DDoS-protection purposes).
The problem was that I had added the ModSecurity AND OWASP rules options to the configmap.
Because of that, request processing was so slow that it capped the number of requests per second. When sessionAffinity was not set, I didn't see the problem, as the req/s were fine, being distributed among all the pods.
But with sessionAffinity, i.e. a load test against a single pod, the problem was clearly visible.
So I had to remove ModSecurity and OWASP, and the apps will have to be responsible for that instead.
A little sad, as I wanted more central security on nginx so the apps don't need to handle it, but not at that cost...
I'd be curious to understand what ModSecurity is doing exactly to be so slow.
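For anyone wondering which settings are meant: these are the ingress-nginx ConfigMap options that had been switched on and were then removed (the ConfigMap name and namespace below depend on how the controller was installed).

apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller       # name/namespace depend on your install
  namespace: ingress-nginx
data:
  # Enabling these globally runs ModSecurity plus the OWASP core rule set on every
  # request the controller proxies, which is what slowed request processing down here.
  enable-modsecurity: "true"
  enable-owasp-modsecurity-crs: "true"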

AMQP/RabbitMQ consumer on NGINX

Is it possible to have a RabbitMQ consumer listening to a queue for messages via the AMQP protocol? I am aware that nginx only supports HTTP/S; I was wondering if this can be achieved by using a TCP module extension.
I am using nginx as an API gateway and want to do a protocol translation from AMQP to HTTP, since all the backend services are exposed over HTTP.
It would definitely be possible by writing your own C extension. nginx is suitable for TCP proxying, so I don't see any reason why you couldn't send your own TCP packets to RabbitMQ from nginx and consequently use nginx as a RabbitMQ consumer. It's probably a lot of work to make it run, and even more work to make it stable and reliable, but it's doable. Do me a favor, though: don't do this. There will always be a better, more elegant and simpler solution.
HTTP is definitely not suitable for consuming from a queue (in the AMQP sense) because you have to keep the socket open while you consume. However, you could write a C extension to publish/retrieve messages to/from RabbitMQ (and apparently somebody has already done this). If you're not that much into C, or don't want to maintain your own nginx package, you could also write a Lua extension for lua-nginx-module (once again, somebody seems to have worked in this direction). These are PoCs for talking to a message queue from nginx, but they are not consumers. Both extensions seem to act in the HTTP context, so you need to answer (and close the socket) pretty quickly.
However, as far as I know, there isn't any community-driven, well-maintained project that serves this purpose directly or indirectly; you'd have to build and maintain your own extension/client. Moreover, nginx is your current API gateway: do take the risk into account. Things could go really wrong. Only you can tell whether it is worth the hassle or not, but it most likely isn't.
Since you didn't give much information on what exactly you're looking for, I've only answered the NGINX/AMQP part. But you might just be looking for an HTTP interface to RabbitMQ. In that case, the Management Plugin might be the way to go. It has a pretty cool HTTP API. Once again, you'd lose every stateful feature (like basic consuming and ack/nack/reject), but that's inherent to the way HTTP is designed.
Finally, if you really need a RabbitMQ "basic" consumer, I would recommend writing a proper consumer as a separate application and forgetting about doing this in nginx. That's definitely the best and most supported solution.

Sails.js distribution across multiple Google Compute Engine instances

Sails.js requires setup to handle scaling horizontally. There are multiple ways to do this, and I'm not sure whether I have done it correctly, given the poor performance during load testing. Please confirm that I understand the setup and am doing it correctly.
I've created a load balancer on the Google platform to handle the distribution of requests across the instances. Much is said about using Nginx for the distribution, but I understand Google's load balancer does all I need in this regard. Note that I use session affinity: Client IP.
I've set up config/session.js to use express-mysql-session, so MemoryStore is not used.
I haven't set up anything in config/sockets.js. My project doesn't use live chat etc. with socket.io; all requests go to Waterline for data from the db. But if this is an issue, please point me to a way to do this with a MySQL db rather than Redis (or memory).
I use pm2 to keep the app alive and to distribute processing across cores on an instance.
Those are the main factors I've found regarding horizontal scaling with Sails.js.

Load balanced SignalR fails on start. Will Redis backplane fix?

I'm having issues with SignalR failing to complete its connection cycle when running in a load-balanced environment. I'm exploring Redis as a means of addressing this, but want a quick sanity check that I'm not overlooking something obvious.
Symptoms:
Looking at the network traffic, I can see the negotiate and connect requests being made, via XHR and websockets respectively, which is what I'd expect. However, the start request fails with:
Error occurred while subscribing to realtime feed. Error: Invalid start response: ''. Stopping the connection.
And an error message of ({"source":null,"context":{"readyState":4, "responseText":"","status":200, "statusText":"OK"}})
As expected, this occurs when the connect and start requests are handled by different servers. It also works 100% of the time in a non-load-balanced environment.
Is this something that a Redis backplane will fix? It seems to make sense, but most of the rationale I've seen for adding a backplane is about hub messages getting lost, not about failing to make a connection, so I'm wondering if I'm overlooking something basic.
Thanks!
I know this is a little late, but I believe the backplane only lets you send messages between the user pools of the different servers; it doesn't have any effect on how connections are made or closed.
Yes, you are right. The backplane acts as a cache to pass messages on to the other servers behind the load balancer. Connection handling on the load balancer is a different and tricky topic, for which I am also looking for an answer.

Using NGINX to forward tracking data to Flume

I am working on providing analytics for our web property based on instrumentation data we collect via a simple image beacon. Our data pipeline starts with Flume, and I need the fastest possible way to parse the query-string parameters, form a simple text message, and shove it into Flume.
For performance reasons, I am leaning towards nginx. Since serving a static image from memory is already supported, my task is reduced to handling the query string and forwarding a message to Flume. Hence the question:
What is the simplest reliable way to integrate nginx with Flume? I am thinking about using syslog (Flume supports syslog listeners), but I'm struggling with how to configure nginx to forward custom log messages to a syslog (or plain TCP) listener running on a remote server and on a custom port. Is this possible with existing third-party modules for nginx, or would I have to write my own?
Separately, anything existing you can recommend for writing a fast $args parser would be much appreciated.
If you think I am on a completely wrong path and can recommend something better performance-wise, feel free to let me know.
Thanks in advance!
You should parse the nginx log file the way tail -f does and then pass the results to Flume. That will be the simplest and most reliable way. The problem with syslog is that it blocks nginx and may get completely stuck under high load or if something goes wrong (which is why nginx doesn't support it).
