I'm using the ELK stack with Filebeat to capture Nginx logs, with no special setup. But I have multiple domains in multiple virtual hosts, and from the logs in Kibana I can't tell which line is a request for which vhost; there is simply no field for that.
So how can I change the configuration to do that? Anyone please?
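I'm guessing it would take something like a custom log_format that includes $host (just a sketch of what I mean):

# default "combined" format plus $host at the front, so each line names its vhost
log_format vhost_combined '$host $remote_addr - $remote_user [$time_local] '
                          '"$request" $status $body_bytes_sent '
                          '"$http_referer" "$http_user_agent"';
access_log /var/log/nginx/access.log vhost_combined;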
I have a situation where I need to configure a different certificate for each of two applications on the nginx server. Requests for both applications are proxied from the nginx server to their respective running applications.
I have to configure this for the same server name and the same port.
Any suggestion will be appreciated.
Thanks
You can't do this with stock NGINX, because ssl_certificate cannot be set per-location.
You can achieve what you want with the Lua nginx module, in particular ssl_certificate_by_lua_block, writing logic that loads a different SSL certificate during the TLS handshake (note that only the SNI server name, not the request URI, is available at that point).
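A rough sketch of the idea, with placeholder paths and names (it relies on the ngx.ssl module shipped with lua-resty-core, and the Lua code runs during the handshake):

server {
    listen 443 ssl;
    server_name myapp.example.com;

    # placeholder certificate; the real one is selected in the Lua block below
    ssl_certificate     /etc/nginx/certs/fallback.crt;
    ssl_certificate_key /etc/nginx/certs/fallback.key;

    ssl_certificate_by_lua_block {
        local ssl = require "ngx.ssl"

        local function read_file(path)
            local f = assert(io.open(path, "r"))
            local data = f:read("*a")
            f:close()
            return data
        end

        -- wipe the placeholder cert configured above
        ssl.clear_certs()

        -- pick a certificate; only the SNI name is known at this point
        local name = ssl.server_name() or "default"
        local cert_pem = read_file("/etc/nginx/certs/" .. name .. ".crt")
        local key_pem  = read_file("/etc/nginx/certs/" .. name .. ".key")

        ssl.set_der_cert(assert(ssl.cert_pem_to_der(cert_pem)))
        ssl.set_der_priv_key(assert(ssl.priv_key_pem_to_der(key_pem)))
    }
}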
I'm working with a Kubernetes cluster where I have an Nginx instance and an ELK stack to collect the cluster logs.
I have the following configuration for nginx in order to send its logs to my logstash container:
access_log syslog:server=qa-logstash.monitoring.svc:5046,tag=nginx_access main;
error_log syslog:server=qa-logstash.monitoring.svc:5046,tag=nginx_error info;
This configuration seems to be fine, because when I start my nginx, the logs are sent correctly to my logstash.
The issue arises if, for some reason, the logstash container goes down or is restarted. If that happens, nginx stops sending its logs to my logstash, even after logstash is up and running again.
The only way I can get it to work again is to restart my nginx.
Does nginx have a mechanism to handle cases like this? Am I missing something in my configuration? I feel like this should be working out of the box and I've made some mistake on my end.
Thank you
I recommend using a log collection tool like fluentd or filebeat to ship your nginx logs. That way, even if the logstash instance fails, your nginx will keep working without needing a restart.
You can choose to deploy your log collection tool as a sidecar alongside your nginx container, or use a DaemonSet to collect logs from all pods in your cluster.
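With that approach nginx just writes its logs locally (the official nginx image symlinks them to the container's stdout/stderr) and the collector ships them, so a logstash restart never touches nginx itself. For example:

# let filebeat/fluentd pick the logs up from here instead of pushing to logstash directly
access_log /var/log/nginx/access.log main;
error_log  /var/log/nginx/error.log info;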
I'm looking for a way, if possible, to route a request for e.g. team.mysite.com to team.default.svc.cluster.local using nginx. This way I could have multiple WordPress sites using different subdomains of my domain, each working as described above. Basically, calling xyz.mysite.com would have the request forwarded to xyz.default.svc.cluster.local, provided the service exists.
Note:
I have the kube-dns service running at 10.254.0.2
Is this possible? And how exactly would I do this?
Thanks.
Edit:
Going over this again, I could possibly use variables in the nginx.conf, i.e. proxy to $subdomain.default.svc.cluster.local, where $subdomain is the xyz part of a request to xyz.mydomain.com.
I'd need a way to let nginx resolve the kube-dns services, and also a way to parse out the xyz in xyz.mydomain.com in the nginx.conf and assign it to $subdomain.
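Something along these lines is what I'm imagining (untested sketch; assumes kube-dns at 10.254.0.2 as noted above and the services living in the default namespace):

server {
    listen 80;
    # capture the xyz part of xyz.mydomain.com into $subdomain
    server_name ~^(?<subdomain>[^.]+)\.mydomain\.com$;

    location / {
        # point nginx at kube-dns so it can resolve *.svc.cluster.local names
        resolver 10.254.0.2 valid=30s;
        # using a variable makes nginx resolve the name at request time
        set $target http://$subdomain.default.svc.cluster.local;
        proxy_pass $target;
        proxy_set_header Host $host;
    }
}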
If your nodes have a public IP address you could use an Ingress resource to do exactly what you describe. An Ingress usually defines a list of paths and their target Service, so your service should be running before you try to expose it.
Ingress controllers then dynamically configure Web servers that listen on a defined list of ports for you (typically the host ports 80 and 443 as you may have guessed) using the Ingress configuration mentioned above, and can be deployed using the Kubernetes resource type of your choice: Pod, ReplicationController, DaemonSet.
Once your Ingress rules are configured and your Ingress controller is deployed, you can point the A or CNAME DNS records of your domain to the node(s) running the actual Web server(s).
I'm currently trying to run two containers on a single host, one being an application (Ruby on Rails) and the other Nginx as a reverse proxy and cache. The app is running on TCP port 80. What I want to be able to do is bring down my application container, remove it, and then bring it up again without having to restart Nginx. The problem is that Nginx only seems to look up the IP of the container once, so if it goes down and comes back up at a different address, Nginx just complains that there's nothing there.
I've tried a few things:
Using resolver 127.0.0.11 valid=5 to use Docker's DNS
Using an upstream block
Using a variable to try to get nginx to resolve at runtime.
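The variable-based attempt looked roughly like this (simplified; app stands for the application container's name on the shared Docker network):

resolver 127.0.0.11 valid=5s;   # Docker's embedded DNS

server {
    listen 80;

    location / {
        # a variable is supposed to make nginx re-resolve the name per request
        set $app_upstream http://app;   # the app container listens on port 80
        proxy_pass $app_upstream;
        proxy_set_header Host $host;
    }
}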
I'm not sure where else to look, but none of these options work if the application is brought up on a different IP address. Is there something I'm missing that makes this impossible?
Thanks.
I ended up reading through the 12-factor app, which inspired me to remove the Nginx proxying to the Rails upstream altogether and instead use it as a proxy cache whose upstream is the external DNS name.
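Roughly this shape of config, as a generic sketch (app.example.com stands in for the external DNS name):

proxy_cache_path /var/cache/nginx keys_zone=railscache:10m max_size=1g;

server {
    listen 80;

    location / {
        resolver 127.0.0.11 valid=30s;         # keep re-resolving instead of pinning one IP
        set $backend https://app.example.com;  # placeholder for the external DNS name
        proxy_pass $backend;
        proxy_cache railscache;
        proxy_set_header Host app.example.com;
    }
}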
I am new to Nginx, and I'm having trouble with it. We have many projects in different languages and frameworks, and they are deployed on different servers. How do I keep the session for each project separately?
The question is not quite clear, but from what I understood I will try to guide you a bit...
Nginx is a web server which, when used as a reverse proxy, basically just sits in front of your project's appserver. When a client tries to connect to your appserver, it first connects to nginx, and nginx then forwards that request to your appserver.
e.g.
client -Req-> nginx (port 8080) -Req-> appserver(jetty, port 9000)
Now, if you are trying to use a single nginx instance and direct requests to multiple appservers, you will either have to make nginx listen on different ports and forward each port to a different appserver, or let nginx identify which appserver a request is meant for by its route (hostname or path).
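For example, routing by hostname with two server blocks (names and ports below are just placeholders):

server {
    listen 8080;
    server_name app1.example.com;

    location / {
        proxy_pass http://127.0.0.1:9000;   # e.g. the jetty appserver above
    }
}

server {
    listen 8080;
    server_name app2.example.com;

    location / {
        proxy_pass http://127.0.0.1:9001;   # a second appserver
    }
}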
Here is a source which can help you to learn how to configure Nginx to do this... please ask again if you need further help.
https://www.digitalocean.com/community/tutorials/how-to-set-up-nginx-server-blocks-virtual-hosts-on-ubuntu-14-04-lts