I have a problem: when I request the URL, the page does not load properly.
This is my nginx configuration:
upstream servers {
    server 192.168.13.23:9075;
    server 192.168.13.24:9075;
}

server {
    listen 80;
    server_name proxyserver;

    location / {
        proxy_pass http://servers/servlet/com.openti.total.hlogin;
    }
}
The error is 500.
(Browser screenshot)
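One thing to be aware of with this configuration: because proxy_pass carries a URI part, the portion of the request path matched by the location (/) is replaced by /servlet/com.openti.total.hlogin, so a request for /anything is forwarded as /servlet/com.openti.total.hloginanything, and assets the page references by path get rewritten the same way. A quick way to compare what a backend returns directly with what comes back through the proxy (just a suggested check, not from the original post):

# Hit one backend directly at the servlet path...
curl -v http://192.168.13.23:9075/servlet/com.openti.total.hlogin
# ...then request the same page through nginx, forcing the configured server_name.
curl -v -H "Host: proxyserver" http://127.0.0.1/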
I have a very simple load balancing configuration, set up for PoC purposes. My app server 1 and the load balancer are on the same server. Below is my load balancer conf file content. Please help me: is this correct?
At the moment, all my requests go to IP1. I expect it to route traffic to IP2 as well whenever I hit IP1; please correct me if this understanding is wrong.
upstream myapp1 {
    server srv1.example.com;
    server srv2.example.com;
    server srv3.example.com;
}

server {
    listen 80;

    location / {
        proxy_pass http://myapp1;
    }
}
Your configuration is correct. Sending multiple requests to your NGINX proxy on port 80 will load-balance the traffic across your backend (upstream) servers using the default algorithm, round-robin.
Check this out:
https://www.nginx.com/resources/wiki/start/topics/examples/loadbalanceexample/
http {
    upstream myproject {
        server 127.0.0.1:8080;
        server 127.0.0.1:8081;
        server 127.0.0.1:8082;
    }

    server {
        listen 80;
        server_name www.domain.com;

        location / {
            proxy_pass http://myproject;
        }
    }
}
You can try this from any Linux command line:
for ((i=1;i<=10;i++)); do curl -v "http://localhost"; sleep 1; done
This should print AppServer1, AppServer2, AppServer3 and start again from 1.
A demo backend could look like this:
server {
    listen 8080;
    location / {
        return 200 "AppServer1\n";
    }
}

server {
    listen 8081;
    location / {
        return 200 "AppServer2\n";
    }
}

server {
    listen 8082;
    location / {
        return 200 "AppServer3\n";
    }
}
I have just tested this in a fresh nginx Docker container without any problem.
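For reference, one way to reproduce such a test (a sketch: it assumes the upstream/proxy server and the three demo backends above are saved as one nginx.conf inside a single http { } block, alongside a bare events { } block, which nginx requires in its main config):

# Run a throwaway nginx container with the demo config mounted as the main config.
docker run --rm -p 80:80 \
  -v "$PWD/nginx.conf:/etc/nginx/nginx.conf:ro" \
  nginx

The curl loop above can then be pointed at http://localhost to watch the backends rotate.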
Inside my NGINX config file:
http {
    server {
        listen 80;
        server_name sample.com;

        location / {
            proxy_pass http://127.0.0.1:8080;
        }
    }

    server {
        listen 80;
        server_name example.com;

        location / {
            proxy_pass http://127.0.0.1:8081;
        }
    }
}
The above config works fine, and the web browser is able to access both websites and show their content.
But when I change the listen 80; statement to listen 80 http2;, the web browser downloads a file rather than showing the web pages of sample.com and example.com. Why is that?
Content-Type: text/html should be present in the response headers.
Maybe you should configure the response headers for HTTP/2.
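Also worth noting: browsers only negotiate HTTP/2 over TLS (via ALPN), so listen 80 http2; exposes cleartext HTTP/2, which a browser will not understand; the usual setup enables HTTP/2 on the TLS listener. A sketch, with placeholder certificate paths that are not from the original post:

server {
    listen 443 ssl http2;
    server_name sample.com;

    # Placeholder certificate paths; substitute your own.
    ssl_certificate     /etc/ssl/certs/sample.com.crt;
    ssl_certificate_key /etc/ssl/private/sample.com.key;

    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}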
I have my Flask application deployed behind Nginx on my VM.
Everything is deployed OK and I can request my APIs at http://my.ip.number (I have a public IP).
But when I run ngrok (I need HTTPS and I don't have a domain name to generate an SSL certificate), the URL https://number.ngrok.io shows me the Nginx home page ("Welcome to nginx!") instead of my web app.
Why is this happening?
P.S.: When I run curl localhost I get the Nginx welcome page, but when I run curl -4 localhost I get my web app's home page.
/etc/nginx/sites-available/myproject:
server {
    listen 80;
    server_name 0.0.0.0;

    location / {
        include proxy_params;
        proxy_pass http://unix:/home/datascience/chatbot-cima/chatbot.sock;
    }
}

server {
    listen 80;
    server_name 127.0.0.1;

    location / {
        proxy_pass http://unix:/home/datascience/chatbot-cima/chatbot.sock;
    }
}

server {
    listen 80;
    server_name localhost;

    location / {
        proxy_pass http://unix:/home/datascience/chatbot-cima/chatbot.sock;
    }
}

server {
    listen 80;
    server_name public.ip;

    location / {
        proxy_pass http://unix:/home/datascience/chatbot-cima/chatbot.sock;
    }
}
Any request coming in from ngrok has the Host header set to the ngrok URL. The behaviour of nginx is to try to match one of the server blocks in your configuration above, and to default to the first one if no server_name matches the Host header.
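To see which server block answers for the ngrok hostname, you can reproduce what ngrok sends by overriding the Host header locally (just a suggested check, using the hostname from the question):

# Same request, but with the ngrok hostname as Host, versus a plain request.
curl -s -H "Host: number.ngrok.io" http://localhost/ | head
curl -s http://localhost/ | head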
However, I'm guessing there's another configuration file at /etc/nginx/conf.d/default.conf or /etc/nginx/sites-enabled/0-default which has a listen directive with default_server set. That will be catching these requests and serving the "Welcome to nginx!" page.
I suggest you look for that file and remove it, which should solve the issue.
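To locate the catch-all block, dumping the configuration nginx actually loads and searching it is usually enough (a possible check, not from the original answer):

# Print the full effective configuration and look for default_server listeners,
# then see which site files are enabled.
sudo nginx -T | grep -n "default_server"
ls /etc/nginx/sites-enabled/ /etc/nginx/conf.d/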
However, you could also simplify the above configuration to simply:
server {
    listen 80;
    server_name localhost;

    location / {
        include proxy_params;
        proxy_pass http://unix:/home/datascience/chatbot-cima/chatbot.sock;
    }
}
Provided there isn't another server block hiding somewhere else in the configuration with a directive like listen 80 default_server;, this should catch all requests.
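If you'd rather be explicit than rely on this being the only block, marking the listener as the default and using a catch-all server_name is a common pattern; a sketch reusing the socket path from the question:

server {
    listen 80 default_server;
    server_name _;   # catch-all name; the default_server flag does the actual catching

    location / {
        include proxy_params;
        proxy_pass http://unix:/home/datascience/chatbot-cima/chatbot.sock;
    }
}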
For more info see: How nginx processes a request
I've installed Nginx on one of my servers to act as a load balancer for my Rancher application.
I based my configuration on the one found here: https://rancher.com/docs/rancher/v2.x/en/installation/ha/create-nodes-lb/nginx/
And so my config is:
load_module /usr/lib/nginx/modules/ngx_stream_module.so;

worker_processes 4;
worker_rlimit_nofile 40000;

events {
    worker_connections 8192;
}

stream {
    upstream rancher_servers_http {
        least_conn;
        server <ipnode1>:80 max_fails=3 fail_timeout=5s;
        server <ipnode2>:80 max_fails=3 fail_timeout=5s;
        server <ipnode3>:80 max_fails=3 fail_timeout=5s;
    }

    server {
        listen 80;
        proxy_pass rancher_servers_http;
    }

    upstream rancher_servers_https {
        least_conn;
        server <ipnode1>:443 max_fails=3 fail_timeout=5s;
        server <ipnode2>:443 max_fails=3 fail_timeout=5s;
        server <ipnode3>:443 max_fails=3 fail_timeout=5s;
    }

    server {
        listen 443;
        proxy_pass rancher_servers_https;
    }
}
My configuration is working as expected, but I've recently installed Nextcloud on my cluster, which is giving me the following errors:
Your web server is not properly set up to resolve “/.well-known/caldav”. Further information can be found in the documentation.
Your web server is not properly set up to resolve “/.well-known/carddav”. Further information can be found in the documentation.
So I would like to add a "location" directive, but I'm not able to do it.
I tried to update my config as follows:
...
stream {
    upstream rancher_servers_http {
        ...
    }

    server {
        listen 80;
        proxy_pass rancher_servers_http;

        location /.well-known/carddav {
            return 301 $scheme://$host:$server_port/remote.php/dav;
        }
        location /.well-known/caldav {
            return 301 $scheme://$host:$server_port/remote.php/dav;
        }
    }

    upstream rancher_servers_https {
        ...
    }

    server {
        listen 443;
        proxy_pass rancher_servers_https;

        location /.well-known/carddav {
            return 301 $scheme://$host:$server_port/remote.php/dav;
        }
        location /.well-known/caldav {
            return 301 $scheme://$host:$server_port/remote.php/dav;
        }
    }
}
But it's telling me:
"location" directive is not allowed here in /etc/nginx/nginx.conf:21
Assuming the location directive is not allowed in a stream configuration, I tried to add an http block like this:
...
stream {
    ...
}

http {
    server {
        listen 443;

        location /.well-known/carddav {
            return 301 $scheme://$host:$server_port/remote.php/dav;
        }
        location /.well-known/caldav {
            return 301 $scheme://$host:$server_port/remote.php/dav;
        }
    }

    server {
        listen 80;

        location /.well-known/carddav {
            return 301 $scheme://$host:$server_port/remote.php/dav;
        }
        location /.well-known/caldav {
            return 301 $scheme://$host:$server_port/remote.php/dav;
        }
    }
}
But then I got this message:
bind() to 0.0.0.0:443 failed (98: Address already in use)
(same for the port 80).
Can someone help me with this? How can I add the location directive without affecting my current configuration?
Thank you for reading.
Edit
Well, it seems that the stream directive prevents me from adding other standard directives. I tried to add client_max_body_size inside server, but I'm having the same issue:
directive is not allowed here
Right now your setup uses nginx as a TCP proxy. Such a configuration passes traffic through without inspecting it; it could be SSH, RDP, or any other traffic, and it will work regardless of protocol because nginx does not try to interpret the stream content.
That is the reason the location directive does not work in the stream context: it is an HTTP-level feature.
To take advantage of higher-level protocol handling, nginx needs to be aware of the protocol going through it, i.e. it must be configured as an HTTP reverse proxy.
For this to work, the server directive should be placed in the http scope instead of the stream scope.
http {
    # The upstream referenced below must also be defined inside the http scope;
    # it can no longer live in the stream block.
    upstream rancher_servers_http {
        least_conn;
        server <ipnode1>:80 max_fails=3 fail_timeout=5s;
        server <ipnode2>:80 max_fails=3 fail_timeout=5s;
        server <ipnode3>:80 max_fails=3 fail_timeout=5s;
    }

    server {
        listen 0.0.0.0:443 ssl;
        include /etc/nginx/snippets/letsencrypt.conf;
        root /var/www/html;
        server_name XXXX;

        location / {
            proxy_pass http://rancher_servers_http;
        }
        location /.well-known/carddav {
            proxy_pass http://$host:$server_port/remote.php/dav;
        }
        location /.well-known/caldav {
            proxy_pass http://$host:$server_port/remote.php/dav;
        }
    }

    server {
        listen 80 default_server;
        listen [::]:80 default_server;

        location ^~ /.well-known/acme-challenge/ {
            default_type "text/plain";
            root /var/www/letsencrypt;
        }

        root /var/www/html;
        server_name xxxx;

        location / {
            proxy_pass http://rancher_servers_http;
        }
    }
}
The drawback of this approach for you would be the need to reconfigure certificate management.
But you would offload SSL encryption to nginx and gain intelligent balancing based on HTTP requests.
I'm a bit new to using nginx, so I'm likely missing something obvious. I'm trying to create an nginx server that will reverse-proxy to a set of web servers that use HTTPS.
I've been able to get it to work with one server like this:
server {
    listen $PORT;
    server_name <nginx server>.herokuapp.com;

    location / {
        proxy_pass https://<server1>.herokuapp.com;
    }
}
However, as soon as I try to add in the upstream configuration element, it no longer works.
upstream backend {
    server <server1>.herokuapp.com;
}

server {
    listen $PORT;
    server_name <nginx server>.herokuapp.com;

    location / {
        proxy_pass https://backend;
    }
}
I've tried adding in 443, but that also fails.
upstream backend {
    server <server1>.herokuapp.com:443;
}

server {
    listen $PORT;
    server_name <nginx server>.herokuapp.com;

    location / {
        proxy_pass https://backend;
    }
}
Any ideas what I'm doing wrong here?
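One thing worth checking in setups like this: when proxy_pass points at an upstream block, nginx sends the upstream's name ("backend") as the Host header and does not send SNI by default, and TLS backends that route requests by hostname typically reject that. A sketch of the directives usually involved, reusing the placeholders from the question:

upstream backend {
    server <server1>.herokuapp.com:443;
}

server {
    listen $PORT;
    server_name <nginx server>.herokuapp.com;

    location / {
        proxy_pass https://backend;
        # Send the real backend hostname rather than the upstream name "backend".
        proxy_set_header Host <server1>.herokuapp.com;
        # Enable SNI and present the backend hostname in the TLS handshake.
        proxy_ssl_server_name on;
        proxy_ssl_name <server1>.herokuapp.com;
    }
}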