Nginx proxy stops working after a while

I have 2 load-balanced frontend servers running on Amazon AWS, both with nginx installed. The load balancer used is Amazon ELB.
There are 2 load-balanced backend servers. The app is a Ruby on Rails app, served with nginx/unicorn.
The frontend servers proxy API calls to the backend servers. Everything works fine, but after some time the proxy stops working.
Here are the nginx confs of the frontend servers:
nginx.conf
vhost
and one more conf for setting up a variable.
Can someone explain what the issue is, and why the proxy stops working from both frontend servers after some time?
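
The actual conf files aren't reproduced here, but a frontend vhost of the kind described, including the variable mentioned above, often looks roughly like the following sketch (every hostname, IP, and port in it is an assumption, not the asker's real config). The nginx behavior worth knowing in this situation: a literal hostname in proxy_pass is resolved once at startup, whereas a hostname held in a variable (combined with a resolver) is re-resolved at request time, which matters when the upstream is an ELB whose IP addresses rotate.

    # Hypothetical frontend vhost sketch; all names, IPs, and ports are assumptions.
    server {
        listen 80;
        server_name frontend.example.com;

        # Re-resolve upstream DNS every 30s; the IP is a placeholder for the VPC resolver.
        resolver 10.0.0.2 valid=30s;

        # Holding the upstream host in a variable forces per-request DNS resolution.
        set $backend_host "internal-backend-elb.example.com";

        location /api/ {
            proxy_set_header Host $host;
            proxy_pass http://$backend_host;
        }
    }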

Related

EC2 nginx routing and setup

I have a React app (port 8080) and a backend Node app (port 3001) running on an AWS EC2 instance. I have nginx routing port 80 from the load balancer to port 8080 on localhost, but my frontend cannot connect to the backend with axios (localhost:3001).
I have one Node instance each for the frontend and the backend, in the folders client and server respectively.
I've tried connecting over both HTTP and HTTPS, adding a direct port 3001 listener all the way up to the domain/load balancer, and replacing all the paths with the domain (which didn't work, and would also be insecure if it had). I also tried without nginx.
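
One pattern that matches the described layout (all names and ports below are assumptions, since the actual config wasn't included) is to let nginx front both processes, so the browser only ever talks to port 80 and axios calls a relative /api path; localhost:3001 in browser code points at the visitor's own machine, not the EC2 instance.

    # Hypothetical sketch; server_name, paths, and ports are assumptions.
    server {
        listen 80;
        server_name example.com;

        # React app served by the Node process on port 8080
        location / {
            proxy_pass http://127.0.0.1:8080;
        }

        # Node backend on port 3001; the frontend calls /api/... instead of localhost:3001
        location /api/ {
            # The trailing slash strips the /api prefix before the request reaches the backend.
            proxy_pass http://127.0.0.1:3001/;
        }
    }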

Is it possible to reverse proxy to natively running applications from a containerized Nginx in K3s?

On my server I run some applications directly on the host. In parallel, I have a single-node K3s cluster that also contains a few applications. To manage traffic routing and HTTPS certificates for the individual services in one central place, I want to use Nginx. A Traefik ingress controller runs in the cluster and handles the routing in that context.
To be able to reverse proxy to each application, no matter whether it runs directly on the host or in a container in K3s, Nginx must be able to reach every application locally, without the traffic leaving the server. E.g., proxying myservice.mydomain.com to localhost:8080 from Nginx should end up at the web server of a natively running application, and myservice2.mydomain.com at the web server of a container in K3s.
Now, is this possible if Nginx runs in the K3s cluster, or do I have to install it directly on the host machine?
Yes, if you want to use Nginx that way you can do it, keeping Nginx in front of both the host and K3s.
You can expose your in-cluster service as a NodePort from K3s, while the local service you run on the host machine listens on its own port.
Nginx will then forward the traffic like this:
Nginx -> MachineIp:8080 -> application on K3s (NodePort)
Nginx -> MachineIp:3000 -> application running on the host
Example: https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/
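
Translated into nginx terms, that forwarding might look roughly like the following sketch. The domains mirror the question and the ports mirror the diagram above, but all of them are assumptions; note also that by default K3s allocates NodePorts from the 30000-32767 range unless configured otherwise.

    # Hypothetical sketch; 192.168.1.10 stands in for MachineIp, domains/ports are assumptions.
    server {
        listen 80;
        server_name myservice2.mydomain.com;
        location / {
            proxy_pass http://192.168.1.10:8080;   # NodePort exposed by K3s on the machine IP
        }
    }

    server {
        listen 80;
        server_name myservice.mydomain.com;
        location / {
            proxy_pass http://192.168.1.10:3000;   # application running natively on the host
        }
    }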

How to scale a web application in Kubernetes?

Let's consider a Python web application deployed under uWSGI behind Nginx:
HTTP client ↔ Nginx ↔ socket/HTTP ↔ uWSGI (web server) ↔ webapp
where nginx is used as a reverse proxy / load balancer.
How do you scale this kind of application in Kubernetes?
Several options come to my mind:
Deploy nginx and uWSGI in a single pod. Simple approach.
Deploy nginx + uWSGI in a single container? This violates the "one process per container" principle.
Deploy only uWSGI (speaking HTTP directly) and omit nginx.
Or is there another solution, involving nginx ingress / load balancer services?
It depends.
I see two scenarios:
Ingress is used
In this case there's no need to have an nginx server within the pod; ingress-nginx can do the balancing of traffic across the Kubernetes cluster instead. You can find a good example in this comment on a GitHub issue.
No ingress is used.
In this case I'd go with option 1, deploying nginx and uWSGI in a single pod. It's a simple approach: you can easily scale your application in and out and don't have any complicated or unnecessary dependencies.
In case you're not familiar with what an Ingress is, see the Kubernetes documentation on Ingress.
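
For option 1, the nginx container in the pod typically fronts uWSGI over loopback. Below is a minimal sketch of that server block, assuming uWSGI listens on the uwsgi protocol at 127.0.0.1:8000 (the port and the protocol choice are both assumptions).

    server {
        listen 80;

        location / {
            include uwsgi_params;        # standard parameter file shipped with nginx
            uwsgi_pass 127.0.0.1:8000;   # uWSGI in the same pod, so loopback works
        }
    }

Since both containers share the pod's network namespace, nothing extra is needed between them; the pod only exposes nginx's port to the cluster.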

nginx.conf file not getting updated with the latest service info when using Consul for service discovery

All,
We have an infrastructure where we have 1 Consul server, 2 nginx web servers and 2 application servers. The app servers connect to Consul to register their services. The nginx servers connect to Consul and update the nginx.conf file using the nginx.ctmpl (Consul Template config) file, so that they have the latest information on the services.
The problem I see is that the nginx.conf is not getting updated on the 2 nginx servers. The following are the agents/services running on each server:
Consul server:
consul
consultemplate
Nginx servers:
nginx
consultemplate
Application Servers:
consul
A couple of questions here:
Which agent/process/service on the nginx servers uses the nginx.ctmpl file to update nginx.conf with the latest status?
What could be the problem on my nginx servers?
Nginx does not use that template directly. consul-template should be configured to use nginx.ctmpl to template the config and reload nginx.
See Load Balancing with NGINX and Consul Template for an example of this configuration.
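A minimal nginx.ctmpl in the spirit of that tutorial renders an upstream from the registered services. In the sketch below, "app" is an assumed Consul service name and the layout is illustrative, not the asker's actual template.

    # Hypothetical nginx.ctmpl; "app" is an assumed service name.
    upstream app_backend {
      {{- range service "app" }}
      server {{ .Address }}:{{ .Port }};
      {{- end }}
    }

    server {
        listen 80;
        location / {
            proxy_pass http://app_backend;
        }
    }

consul-template is then pointed at this file through a template stanza whose destination is the rendered nginx config and whose command reloads nginx (e.g. nginx -s reload) after each render.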
Can you verify that the consul-template service is running, and perhaps provide any error logs it might be generating?
The problem is solved. I renamed the data directory for Consul on the nginx servers and ran the chef-client. I recreated the data directory, and the Consul server was able to identify the two nginx servers and the two application servers. Everything is back in working condition.

Handling CONNECT requests with Nginx Ingress on GCP GKE

I have a cluster of proxy servers on GKE, and I'm trying to figure out how to load balance CONNECT requests to these.
Without GKE, I'm using the nginx stream module (http://nginx.org/en/docs/stream/ngx_stream_core_module.html) which works perfectly.
GCP load balancers do not accept CONNECT requests, so I'm trying to take my existing nginx configuration file and apply it to an nginx ingress resource for GKE. Is this possible?
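
For reference, the kind of stream configuration being ported usually looks like the sketch below (addresses and ports are assumptions). Because the stream module forwards raw TCP, CONNECT requests pass through to the proxy servers untouched.

    # Hypothetical sketch of the existing stream config; IPs and ports are assumptions.
    stream {
        upstream connect_proxies {
            server 10.0.0.10:3128;
            server 10.0.0.11:3128;
        }

        server {
            listen 3128;
            proxy_pass connect_proxies;
        }
    }

Note that Ingress resources describe HTTP routing; with ingress-nginx, raw TCP passthrough like this is normally configured via its TCP services ConfigMap rather than an Ingress object.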
