Unable to configure and start nginx

My nginx server seemed to run fine, but when I run netstat -tupln, I can't see it bound to port 80.
When I fire an HTTP request, it gives me:
502 Bad Gateway
---
nginx/1.4.6 (Ubuntu)
The following is the nginx config I have written to both
/etc/nginx/sites-available/mysite.conf
and /etc/nginx/sites-enabled/mysite.conf
server {
    listen 80;
    server_name _;

    location ~ / {
        proxy_pass http://127.0.0.1:8001;
    }
}
I am able to run the following commands without any error:
nginx start/stop/restart
but making an HTTP request to the machine gives me the following error in /var/log/nginx/error.log:
08:39:26 [warn] 17294#0: conflicting server name "_" on 0.0.0.0:80, ignored
08:41:17 [error] 20186#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 192.123.123.123, server: _, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8001/", host: "123.123.123.123"
Even changing port 8001 to 8003 in the mysite.conf files in /etc/nginx/sites-* and restarting nginx doesn't make any difference in the error message above, which makes me believe it isn't picking up changes in the conf files.
Can anybody help me understand what I am missing?

It is an old issue, but I would like to put my finding here in case someone encounters the same problem in the future. The way I resolved it was by changing the permissions on /etc/nginx/:
sudo chmod -R 777 /etc/nginx/
That is probably more permissive than necessary, but it resolved my problem. Please let me know if anyone finds a more solid solution.
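For what it's worth, the "conflicting server name" warning in the question means nginx loaded two server blocks declaring the same server_name on the same port, i.e. it is seeing the block (or a duplicate of it) twice. A sketch of the conventional Debian/Ubuntu layout, which avoids duplicates: keep the real file only in sites-available and symlink it from sites-enabled (e.g. sudo ln -s /etc/nginx/sites-available/mysite.conf /etc/nginx/sites-enabled/), so the block is defined exactly once. Note this sketch also swaps the regex location for a plain prefix match, which is all that's needed here:

```nginx
# /etc/nginx/sites-available/mysite.conf
# Enable via symlink from sites-enabled, not a copy.
server {
    listen 80;
    server_name _;

    # plain prefix match; the "~" regex modifier isn't needed for "/"
    location / {
        proxy_pass http://127.0.0.1:8001;
    }
}
```

After any config change, sudo nginx -t followed by a reload confirms the new file is actually being picked up.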

Related

Nginx in Cloud Run with internal traffic works but gives connect errors

I'm running Nginx in a Cloud Run instance as a reverse proxy to a backend Cloud Run app (it will do more in the future, but let's keep it simple for this example).
The Nginx Cloud Run service requires authentication (via IAM), but the backend app doesn't. Nginx is connected to the same VPC and has vpc_access_egress = all-traffic set, and the backend app is set to Allow internal traffic only.
events {}

http {
    resolver 169.254.169.254;

    server {
        listen 8080;
        server_name mirror_proxy;

        location / {
            proxy_pass https://my-backend.a.run.app:443;
            proxy_cache off;
            proxy_redirect off;
        }
    }
}
The setup works: I send authenticated requests to Nginx and get the responses from the backend. However, I also get a lot of error messages from Nginx for each request.
2022/12/22 13:57:51 POST 200 1.76 KiB 1.151s curl 7.68.0 https://nginx.a.run.app/main
2022/12/22 13:57:50 [error] 4#4: *21 connect() to [1234:5678:4802:34::35]:443 failed
(113: No route to host) while connecting to upstream, client: 169.254.1.1,
server: mirror_proxy, request: "POST /main HTTP/1.1",
upstream: "https://[1234:5678:4802:34::35]:443/main", host: "nginx.a.run.app"
2022/12/22 13:57:50 [error] 4#4: *21 connect() to [1234:5678:4802:36::35]:443 failed
(113: No route to host) while connecting to upstream, client: 169.254.1.1,
server: mirror_proxy, request: "POST /main HTTP/1.1",
upstream: "https://[1234:5678:4802:36::35]:443/main", host: "nginx.a.run.app"
2022/12/22 13:57:50 [error] 4#4: *21 connect() to [1234:5678:4802:32::35]:443 failed
(113: No route to host) while connecting to upstream, client: 169.254.1.1,
server: mirror_proxy, request: "POST /main HTTP/1.1",
upstream: "https://[1234:5678:4802:32::35]:443/main", host: "nginx.a.run.app"
Why are there errors when the request succeeds?
Does the VPC router not know the exact IP address of the Cloud Run service yet, so that Nginx has to try addresses until one works? Any idea?
GCP only uses IPv4 inside the VPC network.
Since I forced Nginx to use the VPC network (vpc_access_egress = all-traffic), Nginx fails when it tries an IPv6 address and then falls back to IPv4.
With the following setting you can force Nginx to resolve only IPv4 addresses:
http {
    resolver 169.254.169.254 ipv6=off;
    ...
}

nginx forward proxy config is causing "upstream server temporarily disabled while connecting to upstream" error

I want to set up nginx as a forward proxy - much like Squid might work.
This is my server block:
server {
    listen 3128;
    server_name localhost;

    location / {
        resolver 8.8.8.8;
        proxy_pass http://$http_host$uri$is_args$args;
    }
}
This is the curl command I use to test, and it works the first time, maybe even the second time.
curl -s -D - -o /dev/null -x "http://localhost:3128" http://storage.googleapis.com/my.appspot.com/test.jpeg
The corresponding nginx access log is
172.23.0.1 - - [26/Feb/2021:12:38:59 +0000] "GET http://storage.googleapis.com/my.appspot.com/test.jpeg HTTP/1.1" 200 2296040 "-" "curl/7.64.1" "-"
However, on repeated requests I start getting these errors in my nginx logs (after, say, the 2nd or 3rd attempt):
2021/02/26 12:39:49 [crit] 31#31: *4 connect() to [2c0f:fb50:4002:804::2010]:80 failed (99: Address not available) while connecting to upstream, client: 172.23.0.1, server: localhost, request: "GET http://storage.googleapis.com/omgimg.appspot.com/test.jpeg HTTP/1.1", upstream: "http://[2c0f:fb50:4002:804::2010]:80/my.appspot.com/test.jpeg", host: "storage.googleapis.com"
2021/02/26 12:39:49 [warn] 31#31: *4 upstream server temporarily disabled while connecting to upstream, client: 172.23.0.1, server: localhost, request: "GET http://storage.googleapis.com/my.appspot.com/test.jpeg HTTP/1.1", upstream: "http://[2c0f:fb50:4002:804::2010]:80/my.appspot.com/test.jpeg", host: "storage.googleapis.com"
What might be causing these issues after just a handful of requests? (curl still fetches the URL fine)
The DNS resolver was returning both IPv4 and IPv6 addresses, and the IPv6 addresses seem to be causing an issue with the upstream servers.
Switching IPv6 resolution off made those errors disappear:
resolver 8.8.8.8 ipv6=off;
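Applied to the server block from the question, the whole config with the fix is a one-line change (a sketch, same values as above):

```nginx
server {
    listen 3128;
    server_name localhost;

    location / {
        # return only A records, so nginx never tries an IPv6 upstream
        resolver 8.8.8.8 ipv6=off;
        proxy_pass http://$http_host$uri$is_args$args;
    }
}
```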

Nginx+Gunicorn - reverse proxy not working

I am trying to set up a Python Flask application on a server following this guide: https://www.digitalocean.com/community/tutorials/how-to-serve-flask-applications-with-gunicorn-and-nginx-on-ubuntu-18-04. I have this working on my local machine by following the guide. However, when I implement the same config on the actual server, I run into an issue proxying requests back to the gunicorn server. I can serve static content from Nginx with no problem; when the static content makes a web service call back to Nginx, it should be proxied back to the gunicorn server.
For example, when I call 'http://example.com/rest/webService', I would expect Nginx to pass anything starting with /rest/ back to gunicorn. The error below is all I can see in the logs about what is happening:
2019/01/18 12:48:18 [error] 2930#2930: *18 open() "/var/www/html/rest/webService" failed (2: No such file or directory), client: ip_address, server: example.com, request: "GET /rest/webService HTTP/1.1", host: "example.com", referrer: "http://example.com/"
Here is the setup for python_app:
server {
    listen 80;
    server_name example.com www.example.com;

    root /var/www/html;
    index index.html;

    location ^/rest/(.*)$ {
        include proxy_params;
        proxy_pass http://unix:/home/username/python_app/python_app.sock;
    }
}
The only change to my nginx.conf file was changing 'include /etc/nginx/sites-enabled/*' to 'include /etc/nginx/sites-enabled/python_app'.
Please let me know if you have any ideas at all on what I may be missing! Thanks!
Not a solution, but some questions....
If you run
sudo systemctl status myproject
do you see confirmation that gunicorn is running, and which socket it is bound to?
And does
sudo nginx -t
come back clean, with no diagnostics?
The regex in the nginx location block: I don't see anything like it in the guide. I see that you're trying to capture everything after "rest/", but looking at the nginx documentation, a regex location needs the "~" modifier, and you'd have to use $1 to reference the captured part of the URL. Can you try without the "^/rest/(.*)$" and see whether nginx finds anything?
Is the group that owns your directory a group that nginx is part of? (A lot of setups use www-data.)
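Following up on the location-block point above, a sketch of that block with a plain prefix match instead of the malformed regex (keeping the socket path from the question; untested):

```nginx
# matches every URI starting with /rest/ and hands it to gunicorn
location /rest/ {
    include proxy_params;
    proxy_pass http://unix:/home/username/python_app/python_app.sock;
}
```

With a prefix location there is nothing to capture, so no $1 is needed; nginx passes the full original URI to the upstream.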

Wildfly with Nginx not working properly

We have had Wildfly installed and working correctly for some time. We have now configured Nginx as a reverse proxy for Wildfly.
We're getting 405 Method Not Allowed on the OPTIONS method. Here is the nginx configuration.
/etc/nginx/conf.d/wildfly.conf
upstream wildfly {
    server 127.0.0.1:8081;
}

server {
    listen 8080;
    server_name guest1;

    location / {
        proxy_pass http://wildfly;
    }
}
This is the error reported by nginx after it was installed:
2017/06/23 08:16:54 [crit] 1386#0: *9 connect() to 127.0.0.1:8081 failed (13: Permission denied) while connecting to upstream, client: 172.28.128.1, server: guest1, request: "OPTIONS /commty/cmng/users HTTP/1.1", upstream: "http://127.0.0.1:8081/commty/cmng/users", host: "guest1:8080"
What am I missing?
I've done the following to finally make it work on CentOS 7 + Wildfly.
Vagrant up
Install NGINX
yum install epel-release
yum install nginx
Configure /etc/nginx/nginx.conf (default configuration)
Configure /etc/nginx/conf.d/wildfly.conf (using port 80 for nginx and 8080 for wildfly)
upstream wildfly {
    server 127.0.0.1:8080;
}

server {
    listen 80;
    server_name guest1;

    location / {
        proxy_pass http://wildfly;
    }
}
Also set SELinux to permissive to let nginx work:
$ setenforce permissive
After that, Wildfly works properly through nginx.

uWSGI nginx error : connect() failed (111: Connection refused) while connecting to upstream

I'm experiencing 502 Bad Gateway errors when accessing my IP through nginx (http://52.xx.xx.xx/). The log simply says this:
2015/09/18 13:03:37 [error] 32636#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: xx.xx.xx.xx, server: xx.xx.xx.xx, request: "GET / HTTP/1.1", upstream: "uwsgi://127.0.0.1:8000", host: "xx.xx.xx.xx"
My nginx.conf file:
# the upstream component nginx needs to connect to
upstream django {
    # server unix:///path/to/your/mysite/mysite.sock; # for a file socket
    server 127.0.0.1:8000; # for a web port socket (we'll use this first)
}

# configuration of the server
server {
    # the port your site will be served on
    listen 80;

    # the domain name it will serve for
    server_name xx.xx.xx.xx; # substitute your machine's IP address or FQDN
    charset utf-8;

    access_log /home/ubuntu/test_django/nginx_access.log;
    error_log /home/ubuntu/test_django/nginx_error.log;

    # max upload size
    client_max_body_size 75M; # adjust to taste

    # Django media
    location /media {
        alias /home/ubuntu/test_django/static/media/; # your Django project's media files - amend as required
    }

    location /static {
        alias /home/ubuntu/test_django/static/; # your Django project's static files - amend as required
    }

    # Finally, send all non-media requests to the Django server.
    location / {
        uwsgi_pass django;
        include /home/ubuntu/test_django/uwsgi_params; # the uwsgi_params file you installed
    }
}
Is there anything wrong with the nginx.conf file? If I use the default conf, it works.
I resolved it by changing the socket configuration in uwsgi.ini from socket = 127.0.0.1:3031 to socket = :3031. I was facing this issue when I ran nginx in one Docker container and uWSGI in another. If you are starting uWSGI from the command line, use uwsgi --socket :3031.
Hope this helps someone stuck with the same issue during deployment of a Django application using Docker.
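For reference, a minimal uwsgi.ini sketch with that change (the file name and any other options are assumed, not from the original setup):

```ini
[uwsgi]
; bind on all interfaces, not just loopback, so nginx running
; in another container can reach the socket over the Docker network
socket = :3031
```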
Change this include path:
include /home/ubuntu/test_django/uwsgi_params;
to
include /etc/nginx/uwsgi_params;
I ran into this issue when setting up an environment with nginx + gunicorn, and solved it by adding '*' (or your specific domain) to ALLOWED_HOSTS.
In my case, on a Debian server, it worked after moving
include /etc/nginx/uwsgi_params;
into the location block in my nginx server config file, like this:
location /sistema {
    include /etc/nginx/uwsgi_params;
    uwsgi_pass unix://path/sistema.sock;
}
Also, check that you have the following package installed:
uwsgi-plugin-python
pip3 install uWSGI did the trick for me :D
