502 Bad Gateway for MVC Core app on CentOS - nginx

I made a website in MVC Core and tried to publish it to the web on a CentOS 7 VPS. It runs well; when I curl it, it responds. Then I installed nginx, and it showed the default page when I tried it from my computer. Then I changed nginx.conf to the one below, and all I get is 502 Bad Gateway. In the nginx log I only see that a GET request was received. Any ideas what I should check?
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    # include /etc/nginx/conf.d/*.conf;

    server {
        listen 80;

        location / {
            proxy_pass http://localhost:5000;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection keep-alive;
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }
    }
}

I tried Apache and had the same problem. Then I found the solution: you have to set httpd_can_network_connect.
http://sysadminsjourney.com/content/2010/02/01/apache-modproxy-error-13permission-denied-error-rhel/
I didn't find the error message in the audit log that the author was talking about, but I tried his solution and it worked.
I have used CentOS for 4 days now, and it's the second time I have had to set a bit to solve a problem. These solutions are quite hidden on the web, and most articles dealing with the area don't mention them, so I lost a lot of time. So I share the author's opinion about SELinux. I will probably try another Linux distribution.
What is also interesting is that I followed the official Microsoft tutorial "Set up a hosting environment for ASP.NET Core on Linux with Apache, and deploy to it". The operating system they use is CentOS too, and it doesn't mention this bit either.
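For anyone hitting the same thing, the fix from the linked article boils down to one SELinux boolean; the -P flag makes it persist across reboots, and ausearch (from the audit tools) is one way to look for the denial messages I couldn't find at first:
# allow nginx/httpd worker processes to open outbound network connections
sudo setsebool -P httpd_can_network_connect 1
# list recent SELinux denials, if any were logged
sudo ausearch -m avc -ts recent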

Related

CentOS displays NGINX page instead of flask app?

I am trying to deploy a Flask app on CentOS with NGINX. The Flask app is served on 127.0.0.1:5000 and is also accessible with curl 127.0.0.1:5000.
I tried to keep it simple and removed all NGINX config files in conf.d, so I just use nginx.conf. This is the whole content of the nginx.conf file:
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    #include /etc/nginx/conf.d/*.conf;

    server {
        listen 80;

        location / {
            proxy_pass http://127.0.0.1:5000/;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}
However, NGINX still shows its default page, and nothing in the nginx.conf file seems to change this. Is there anything in NGINX that might still require reconfiguration, or anything on CentOS that could lead to the problem?
I solved it. It was indeed just a matter of running the right NGINX commands.
After I ran the following commands it worked:
sudo systemctl stop nginx
sudo systemctl enable nginx
sudo systemctl restart nginx
Maybe it helps someone in the future.
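If only the configuration changed, a reload is usually enough instead of a full restart; a minimal sketch, assuming the config passes the syntax check:
sudo nginx -t
sudo systemctl reload nginx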

nginx vs kubernetes (as an external balancer) - fails to balance API servers

We are trying to build an HA Kubernetes cluster with 3 core nodes, each having the full set of vital components: etcd + APIServer + Scheduler + ControllerManager, plus an external balancer. Since etcd can form a cluster by itself, we are stuck with making the APIServers HA. What seemed an obvious task a couple of weeks ago has now become a "no way disaster"...
We decided to use nginx as a balancer for 3 independent APIServers. All the other parts of our cluster that communicate with the APIServer (Kubelets, Kube-Proxies, Schedulers, ControllerManagers...) are supposed to use the balancer to access it. Everything went well before we started the "destructive" tests (as I call them) with some pods running.
Here is the part of the APIServer config that deals with HA:
.. --apiserver-count=3 --endpoint-reconciler-type=lease ..
Here is our nginx.conf:
user nginx;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
worker_processes auto;

events {
    multi_accept on;
    use epoll;
    worker_connections 4096;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    gzip on;
    underscores_in_headers on;

    include /etc/nginx/conf.d/*.conf;
}
And apiservers.conf:
upstream apiserver_https {
    least_conn;
    server core1.sbcloud:6443; # max_fails=3 fail_timeout=3s;
    server core2.sbcloud:6443; # max_fails=3 fail_timeout=3s;
    server core3.sbcloud:6443; # max_fails=3 fail_timeout=3s;
}

map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    listen 6443 ssl so_keepalive=1m:10s:3; # http2;

    ssl_certificate     "/etc/nginx/certs/server.crt";
    ssl_certificate_key "/etc/nginx/certs/server.key";

    expires -1;
    proxy_cache off;
    proxy_buffering off;
    proxy_http_version 1.1;
    proxy_connect_timeout 3s;
    proxy_next_upstream error timeout invalid_header http_502; # non_idempotent # http_500 http_503 http_504;
    #proxy_next_upstream_tries 3;
    #proxy_next_upstream_timeout 3s;
    proxy_send_timeout 30m;
    proxy_read_timeout 30m;
    reset_timedout_connection on;

    location / {
        proxy_pass https://apiserver_https;
        add_header Cache-Control "no-cache";
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $http_host;
        proxy_set_header Authorization $http_authorization;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-SSL-CLIENT-CERT $ssl_client_cert;
    }
}
What came out after some tests is that Kubernetes seems to use single long-lived connections instead of traditional open-close sessions. This is probably due to SSL. So we had to increase proxy_send_timeout and proxy_read_timeout to a ridiculous 30m (the default value for the APIServer is 1800s). If these settings are under 10m, then all clients (like the Scheduler and ControllerManager) generate tons of INTERNAL_ERROR because of broken streams.
So, for the crash test I simply put one of the APIServers down by gently switching it off. Then I restart another one so nginx sees that the upstream went down and switches all current connections to the last one. A couple of seconds later the restarted APIServer comes back, and we have 2 APIServers working. Then I take the network down on the third APIServer by running 'systemctl stop network' on that server, so it has no chance to inform Kubernetes or nginx that it is going down.
Now the cluster is totally broken! nginx seems to recognize that the upstream went down, but it will not reset the already existing connections to the dead upstream. I can still see them with 'ss -tnp'. If I restart the Kubernetes services, they reconnect and continue to work; the same if I restart nginx - new sockets show up in the ss output.
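For reference, this is how the lingering sockets can be spotted on the balancer host (port 6443 as in the upstream config above; grepping is just one way to narrow the output):
ss -tnp | grep 6443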
This happens only if I make the APIServer unavailable by taking the network down (preventing it from closing existing connections to nginx and from informing Kubernetes that it is switching off). If I just stop it, everything works like a charm. But this is not a realistic case: a server can go down without any warning, just instantly.
What are we doing wrong? Is there a way to force nginx to drop all connections to an upstream that went down? Anything to try before we move to HAProxy or LVS and write off a week spent kicking nginx in our attempts to make it balance instead of breaking our not-so-HA cluster?

Setting up Spring boot application on Vagrant behind Nginx proxy

I have a Spring Boot application running on embedded Tomcat on a Vagrant CentOS box. It runs on port 8080, so I can access the application in a web browser.
I need to set up an Nginx proxy server that listens on port 80 and forwards requests to my application.
I'm getting this error in the Nginx log:
[crit] 2370#0: *14 connect() to 10.0.15.21:8080 failed (13: Permission denied) while connecting to upstream, client: 10.0.15.1, server: , request: "GET / HTTP/1.1", upstream: "http://10.0.15.21:8080/", host: "10.0.15.21"
All the setup examples look pretty much the same, and the only answer I could find that might help was this one. However, it doesn't change anything.
Here is my server config located in /etc/nginx/conf.d/reverseproxy.conf
server {
listen 80;
location / {
proxy_pass http://10.0.15.21:8080;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
And here is my nginx.conf file:
user root;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    include /etc/nginx/conf.d/*.conf;
}
I don't know if this is related, but under journalctl -u nginx I can see this log entry:
systemd1: Failed to read PID from file /run/nginx.pid: Invalid argument
CentOS has SELinux enabled by default. You need to allow the web server to make outbound network connections by running
setsebool httpd_can_network_connect on
There is more information about this on the internet if you want to learn more. To make it persistent across reboots, run
setsebool -P httpd_can_network_connect on
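To check the current value of the boolean, getsebool from the same SELinux tool set can be used; once set, it should print something like httpd_can_network_connect --> on:
getsebool httpd_can_network_connect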

Reverse proxy of kibana behind nginx - "upstream prematurely closed connection"

I have kibana listening on localhost:5601 and if I SSH tunnel to this port I can access kibana in my browser just fine.
I have installed nginx to act as a reverse proxy, but having completed the setup, all I get is 502 Bad Gateway. The more detailed error in the nginx error log is:
*1 upstream prematurely closed connection while reading response header from upstream,
client: 1.2.3.4,
server: elk.mydomain.com,
request: "GET /app/kibana HTTP/1.1",
upstream: "http://localhost:5601/app/kibana"
My nginx config is:
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;

# Load dynamic modules. See /usr/share/nginx/README.fedora.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    index index.html index.htm;
}
My kibana.conf file within /etc/nginx/conf.d/ is:
server {
    listen 80 default_server;
    server_name elk.mydomain.com;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade \$http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host \$host;
        proxy_cache_bypass \$http_upgrade;
    }
}
This is a brand new Amazon Linux EC2 instance with the latest versions of kibana and nginx installed.
Has anyone encountered this problem before? I feel like it's a simple nginx config problem but I just can't see it.
It turns out that the backslashes before the dollar signs in proxy_set_header Upgrade \$http_upgrade; were the result of a copy-paste from another configuration management tool.
I removed the unnecessary backslashes to get proxy_set_header Upgrade $http_upgrade; and reclaimed my sanity.
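For completeness, here is the same location block with the stray backslashes removed (otherwise identical to the config above):
    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }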

How to enable nginx proxy pass?

I have an inner server that runs my application. The application runs on port 9001. I want people to access this application through nginx, which runs on an Ubuntu machine in a DMZ network.
I have built nginx from source with the sticky and SSL modules enabled. It runs fine but does not do the proxy pass.
The DNS name for the server's outer IP is bd.com.tr, and I want people to see the page http://bd.com.tr/public/control.xhtml when they enter bd.com.tr, but even though nginx redirects the root request to my desired path, the application does not show up.
My nginx.conf file is:
worker_processes 4;
error_log logs/error.log;
worker_rlimit_nofile 20480;
pid logs/nginx.pid;

events {
    worker_connections 1900;
}

http {
    include mime.types;
    default_type application/octet-stream;
    server_tokens off;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    keepalive_timeout 75;
    rewrite_log on;

    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Ssl on;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_read_timeout 150;

    server {
        listen 80;
        client_max_body_size 300M;

        location = / {
            rewrite ^ http://bd.com.tr/public/control.xhtml redirect;
        }

        location /public {
            proxy_pass http://BACKEND_IP:9001;
        }
    }
}
What might I be missing?
It was a silly problem and I found it. The conf file is correct, so you can use it if you want. The problem was that port 9001 on BACKEND_IP was not forwarded, so nginx was not able to reach the inner service. After forwarding the port, it worked fine. I found the problem in error.log, so if you encounter such a problem, please check the error logs first :)
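As a quick sanity check in similar situations, you can verify from the nginx host that the backend actually answers before digging through the config (BACKEND_IP and the path are the placeholders/values used above):
curl -I http://BACKEND_IP:9001/public/control.xhtml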

Resources