I have a Sinatra application hosted with Unicorn, and nginx in front of it. When the Sinatra application errors out (returns 500), I'd like to serve a static page, rather than the default "Internal Server Error". I have the following nginx configuration:
server {
    listen 80 default;
    server_name *.example.com;
    root /home/deploy/www-frontend/current/public;

    location / {
        proxy_pass_header Server;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_connect_timeout 5;
        proxy_read_timeout 240;
        proxy_pass http://127.0.0.1:4701/;
    }

    error_page 500 502 503 504 /50x.html;
}
The error_page directive is there, and I have sudo'd as www-data (Ubuntu) and verified I can cat the file, so it's not a permissions problem. With the above config, and after service nginx reload, the page I receive on error is still the same "Internal Server Error".
What's my error?
error_page handles errors that are generated by nginx. By default, nginx returns whatever the proxied server returns, regardless of the HTTP status code.
What you're looking for is proxy_intercept_errors:
This directive decides if nginx will intercept responses with HTTP
status codes of 400 and higher.
By default all responses will be sent as-is from the proxied server.
If you set this to on then nginx will intercept status codes that are
explicitly handled by an error_page directive. Responses with status
codes that do not match an error_page directive will be sent as-is
from the proxied server.
You can set proxy_intercept_errors specifically for that location:
location /some/location {
    proxy_pass_header Server;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Scheme $scheme;
    proxy_connect_timeout 5;
    proxy_read_timeout 240;
    proxy_pass http://127.0.0.1:4701/;
    proxy_intercept_errors on; # see http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_intercept_errors
    error_page 400 500 404 ... other statuses ... =200 /your/path/for/custom/errors;
}
You can also replace the =200 with whatever status code you need.
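For example, a small sketch (the page URI here is a placeholder, not from the original config):
# Either force a specific status for the custom page:
error_page 502 503 504 =503 /maintenance.html;
# ...or omit the "=..." part entirely to serve the page with the original upstream status code.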
People who are using FastCGI as their upstream need the equivalent parameter turned on:
fastcgi_intercept_errors on;
For my PHP application, I am using it in the location block that passes to my PHP upstream:
location ~ \.php$ { ## Execute PHP scripts
    fastcgi_pass php-upstream;
    fastcgi_intercept_errors on;
    error_page 500 /500.html;
}
As mentioned by Stephen in this response, using proxy_intercept_errors on; can work.
Though in my case, as seen in this answer, using uwsgi_intercept_errors on; did the trick...
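For completeness, a minimal uwsgi sketch of the same idea (the socket path and error page are placeholders, not from the original setup):
location / {
    include uwsgi_params;
    uwsgi_pass unix:/tmp/app.sock;  # placeholder socket path
    uwsgi_intercept_errors on;      # let error_page handle upstream errors
    error_page 500 /500.html;
}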
Related
I'm using docker and running nginx alongside varnish.
Because I'm running docker, I've set the resolver manually at the top of the nginx configuration (resolver 127.0.0.11 ipv6=off valid=10s;) so that changes to container IPs will be picked up without needing to restart nginx.
This is the relevant part of the config that's giving me trouble:
location ~ ^/([a-zA-Z0-9/]+)$ {
    set $args ''; # clear out the entire query string
    set $card_name $1;
    set $card_name $card_name_lowercase;
    rewrite ^ /cards?card=$card_name break;
    proxy_set_header x-cache-key card-type-$card_name;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header Host $host;
    proxy_set_header REQUEST_URI $request_uri;
    proxy_http_version 1.1;
    set $backend "http://varnish:80";
    proxy_pass $backend;
    proxy_intercept_errors on;
    proxy_connect_timeout 60s;
    proxy_send_timeout 86400s;
    proxy_read_timeout 86400s;
    proxy_buffer_size 128k;
    proxy_buffers 4 256k;
    proxy_busy_buffers_size 256k;
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    error_page 503 /maintenance.html;
}
When I visit a URL for this, e.g. https://example.com/Test, I get a 500 Internal Server Error.
In the nginx error log, I see the following:
2022/04/27 23:59:45 [error] 53#53: *1 invalid URL prefix in "", client: 10.211.55.2, server: example.com, request: "GET /Test HTTP/2.0", host: "example.com"
I'm not sure what's causing this issue -- http:// is included in the backend, so it does have a proper prefix.
If I just use proxy_pass http://varnish:80, it works fine, but the backend needs to be a variable in order to force docker to use the resolver.
I've stumbled across a similar issue. I'm not sure why, but defining the
set $backend "http://varnish:80";
outside of the location block fixed it.
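A minimal sketch of that arrangement, using the same container names as in the question. The likely reason it helps: rewrite-module directives run in order, and rewrite ... break; stops processing the ones that follow, so a set placed after it never executes and the variable stays empty, hence the "invalid URL prefix" error:
resolver 127.0.0.11 ipv6=off valid=10s;

server {
    listen 80;
    set $backend "http://varnish:80";  # runs at server level, before the location's rewrite

    location ~ ^/([a-zA-Z0-9/]+)$ {
        rewrite ^ /cards?card=$1 break;
        proxy_pass $backend;           # $backend is populated by now
    }
}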
I am using Odoo 10. I have implemented subdomains using Nginx with the script below, and it is working fine. However, when I type the IP address with the port number, like http://444.444.444.44:8085/web/database/manager, I am still able to access this page. I want to force users to use the subdomain only, as provided at xxx.mydomain.com. How can I achieve this?
My script for each of my subdomain URLs is as follows:
server {
    listen 80;
    listen [::]:80;
    server_name xxx.mydomain.org;
    root /usr/share/nginx/html;
    include /etc/nginx/default.d/*.conf;

    location / {
        proxy_pass http://127.0.0.1:8085;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_connect_timeout 2000;
        proxy_send_timeout 2000;
        proxy_read_timeout 2000;
        send_timeout 2000;
    }

    location ~* /web/database/manager {
        deny all;
    }

    location ~* /web/database/selector {
        deny all;
    }

    error_page 404 /404.html;
    location = /40x.html {
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }
}
Instead of listening on every interface for the Odoo process, listen only on the localhost 127.0.0.1 interface. To achieve that, modify the Odoo configuration file *.conf and add the following:
xmlrpc_interface = 127.0.0.1
Save the conf file and restart the Odoo process. By default the Odoo process listens on all interfaces, but this line ensures that it listens on 127.0.0.1 only, so anyone trying to browse from http://444.444.444.44:8085 will not get any response.
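A short sketch of the relevant part of the Odoo config file; the xmlrpc_port line is shown only for context and assumes the instance runs on 8085:
[options]
; bind Odoo's HTTP listener to loopback only
xmlrpc_interface = 127.0.0.1
xmlrpc_port = 8085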
I'm no nginx professional, or even an amateur for that matter; I just know how to google a lot, and I need help finishing off a reverse proxy.
What I currently have is a main server that handles connections and then hands the client off to one of 8 backend servers that deliver the stream in either TS or HLS. I want to put a proxy at the front that acts as the main server but also delivers the stream (like an edge server, I guess, but with no caching) so that the origin servers are hidden.
I have got it to work with TS, but I can't for the life of me work out how to get it to work with HLS, no matter how much I packet capture. It pulls the manifest fine, but unlike with TS it isn't pulling the segments from the origin servers.
Here is the config I have so far (it could probably be cleaner, but this was all done with google):
server {
    listen 80;
    server_name proxy_IP_here;

    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_buffering off;

    location ~ \.(m3u8|mpd)$ {
        proxy_pass backend_IP_for_Main;
        proxy_intercept_errors on;
        error_page 301 302 307 = @handle_redirects;
    }

    location / {
        proxy_pass backend_IP_for_Main;
        sub_filter 'dns_i_have_it_fildering_here' 'proxy_IP_here';
        sub_filter_once off;
        sub_filter_types text/javascript application/json;
        proxy_intercept_errors on;
        error_page 301 302 307 = @handle_redirects;
    }

    location @handle_redirects {
        set $saved_redirect_location '$upstream_http_location';
        proxy_pass $saved_redirect_location;
    }
}
If I remove
proxy_intercept_errors on;
error_page 301 302 307 = @handle_redirects;
from the .m3u8 location block, HLS will work, but it will be delivered directly by the origin server to the end client and not through the proxy.
Any help greatly appreciated.
Thanks in advance.
I have the following configuration on a NGINX which is serving as a reverse proxy to my Docker machine located at: 192.168.99.100:3150.
Basically, I need to hit: http://localhost:8150 and the content displayed has to be the content from inside the Docker.
The configuration below is doing its job.
The point here is that when hitting localhost:8150 I'm getting HTTP status code 302, and I would like to get HTTP status code 200.
Does anyone know if it's possible to be done on Nginx or any other way to do that?
server {
    listen 8150;

    location / {
        proxy_pass http://192.168.99.100:3150;
    }
}
Response from a request to http://localhost:8150/products
HTTP Requests
-------------
GET /projects 302 Found
I have found the solution.
It looks like a simple proxy_pass doesn't work quite right with ngrok.
I'm using proxy_pass with an upstream block and it's working fine.
Below is my configuration.
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    upstream rorweb {
        server 192.168.99.100:3150 fail_timeout=0;
    }

    server {
        listen 8150;
        server_name git.example.com;
        server_tokens off;
        root /dev/null;
        client_max_body_size 20m;

        location / {
            proxy_read_timeout 300;
            proxy_connect_timeout 300;
            proxy_redirect off;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Frame-Options SAMEORIGIN;
            proxy_pass http://rorweb;
        }
    }

    include servers/*;
}
My environment is like this:
Docker (running a Rails project on port 3150)
Nginx (as a reverse proxy exposing port 8150)
Ngrok (exposing my localhost/nginx)
I tried to make a custom 404 page for Tornado and wanted to deploy it with nginx, but failed.
Here is my domain.conf (included by nginx.conf):
server {
    listen 80;
    server_name vm.tuzii.me;
    client_max_body_size 50M;

    location ^~ /app/static/ {
        root ~/dev_blog;
        if ($query_string) {
            expires max;
        }
    }

    location = /favicon.ico {
        rewrite (.*) /static/favicon.ico;
    }

    location = /robots.txt {
        rewrite (.*) /static/robots.txt;
    }

    error_page 404 /404.html;
    location /404.html {
        root /home/scenk;
        internal;
    }

    location / {
        proxy_pass_header Server;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_pass http://frontends;
    }
}
But after reloading nginx, nothing happens. It seems like Tornado catches the 404 error before nginx does.
I have no idea how to solve this problem.
PS: I just want the 404 error to be generated by nginx, not by overriding write_error in the Tornado source.
Environment: Ubuntu 12.04, Tornado 2.4.1 running the site under supervisor behind Nginx, 4 processes.
I ran into the same problem, and what you actually need to set is this:
proxy_intercept_errors on;
From nginx proxy module documentation:
proxy_intercept_errors
Syntax: proxy_intercept_errors on | off
Default: off
Context: http, server, location
This directive decides if nginx will intercept responses with HTTP status codes of 400 and higher.
By default all responses will be sent as-is from the proxied server.
If you set this to on then nginx will intercept status codes that are explicitly handled by an error_page directive. Responses with status codes that do not match an error_page directive will be sent as-is from the proxied server.
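Applied to the config in the question, a minimal sketch (same upstream name; only the proxy_intercept_errors line is new):
location / {
    proxy_pass_header Server;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Scheme $scheme;
    proxy_intercept_errors on;  # let the error_page 404 defined above serve /404.html
    proxy_pass http://frontends;
}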
Finally solved this problem. Because of
proxy_pass_header Server;
the real Tornado Server header was being sent. To hide the real server, simply change it to
proxy_pass_header User-Agent;
That's all.