Nginx won't serve JSON responses

I've got a single-page application running correctly on a server, serving pages across different URLs:
example.com
example.com/jsonendpoint
But when I try to access an endpoint meant to return JSON, I get an HTML response. The nginx config looks like:
server {
    root /home/myapplication/server/public;

    location / {
        proxy_pass http://myapplication/;
        proxy_redirect off;
        try_files $uri $uri/ /index.html;
    }

    location /jsonendpoint {
        proxy_pass http://myapplication/;
        default_type application/json;
        proxy_redirect off;
    }
}
If I comment out the try_files line, the JSON response works fine and I can still access the root URL at example.com, but when I try to access example.com/jsonendpoint, nginx returns a 404.
How do I fix the config to get both things to work?
EDIT:
When I curl the server from within the VPS it's hosted on:
curl -i -H "Accept: application/json" http://localhost:3000/jsonendpoint
I get a JSON response:
Content-Type: application/json; charset=utf-8
When I make the same curl request from my local machine (which means going through nginx), the response type is wrong:
Content-Type: text/html; charset=UTF-8
This rules out the possibility that the problem lies with the backend server.
The mime.types file does have the JSON MIME type in it and it is being included by nginx. I've also tried forcing a response type per location block (see the snippet above).

Check that in your mime.types you have something like:
application/json json;
Then try to query your site using something like this:
curl -i -H "Accept: application/json" http://your-site
Check the Content-Type header; if your application is returning JSON it should be:
content-type: application/json
If you get something like:
content-type: text/html; charset=utf-8
then check your backend and set the proper content type there.
If you would like to force the type, try this:
location / {
    default_type application/json;
    # ...
}
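A related point, in case the real conflict is between try_files and the proxy: when try_files and proxy_pass sit in the same location, try_files decides how the request is handled, so the proxy is effectively skipped whenever one of the file checks matches. A common pattern (just a sketch, not tested against this exact setup; the upstream name myapplication is taken from the question's config) is to move the proxy into a named location that try_files falls back to, and keep the JSON endpoint on its own proxied location:
server {
    root /home/myapplication/server/public;

    location / {
        # serve static files if they exist, otherwise hand off to the app
        try_files $uri $uri/ @app;
    }

    location @app {
        # named locations cannot take a URI part on proxy_pass
        proxy_pass http://myapplication;
        proxy_redirect off;
    }

    location /jsonendpoint {
        # API route goes straight to the app, bypassing try_files
        proxy_pass http://myapplication;
        proxy_redirect off;
    }
}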

Old question but wanted to share what fixed the issue for me:
The nginx user did not have access to my JSON files. Apparently the root user loads the HTML for you when you browse to the website, but other requests, like the JSON calls, are served by nginx worker processes running as the nginx user.
I added read and execute permissions for all users on the html folder to solve this problem (there may be better solutions, too):
chmod -R 755 /usr/share/nginx/html
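For what it's worth, the account the workers run as is whatever the top-level user directive in nginx.conf says, and everything under the web root has to be readable by that account. A minimal sketch (the user name is an assumption; Debian/Ubuntu packages default to www-data, RHEL-based ones to nginx):
# /etc/nginx/nginx.conf (top level): worker processes run as this user,
# so files under the document root must be readable by it
user www-data;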

Related

Does nginx automatically serve gzipped location matches when gzip_static is on?

I have a question about serving gzipped static files from nginx. I did gzip -k style.min.css to produce style.min.css.gz and I uploaded it to the server in the static-root directory. My location block is an exact match and looks like this:
location = /style.min.css {
    root /home/ubuntu/.../static-root/;
    gzip_static on;
    expires 100d;
    add_header Cache-Control "public";
    access_log off;
}
Will nginx just serve up the style.min.css.gz in place of the style.min.css automagically, or do I have to tweak that location block so that the gzipped version is served?
Given the exact match in that location block, this test returns a 404:
$ curl -I https://example.com/style.css.gz -H "Accept-Encoding: gzip"
HTTP/1.1 404 Not Found
Is the compressed version still getting served up or do I need to tweak the location block to something like this so that the .gz file gets served?
location ~ /(style.min.css.*) {...
Update ... I can confirm that a gzipped version of a static CSS file is returned, so it seems to happen automatically. I don't know whether the gzip on; in the server section or the gzip_static on; takes care of it, but it is working.
$ curl -H "Accept-Encoding: gzip" -I https://example.com/bsmin.css
HTTP/1.1 200 OK
Cache-Control: max-age=7673705, public
Cache-control: no-cache="set-cookie"
Content-Encoding: gzip
From the official documentation (https://docs.nginx.com/nginx/admin-guide/web-server/compression/), under Sending Compressed Files:
to service a request for /path/to/file, NGINX tries to find and send the file /path/to/file.gz. If the file doesn’t exist, or the client does not support gzip, NGINX sends the uncompressed version of the file.
Note that the gzip_static directive does not enable on-the-fly compression. It merely uses a file compressed beforehand by any compression tool. To compress content (and not only static content) at runtime, use the gzip directive.
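Put differently, gzip_static only substitutes a pre-built .gz file that already sits next to the original, while gzip compresses responses at request time. A minimal sketch of the two directives side by side (paths and types here are only illustrative):
http {
    # on-the-fly compression for responses that have no pre-built .gz file
    gzip on;
    gzip_types text/css application/javascript;

    server {
        location = /style.min.css {
            root /home/ubuntu/static-root;
            # serve style.min.css.gz instead, if it exists and the client
            # sent Accept-Encoding: gzip
            gzip_static on;
        }
    }
}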

Nginx Stripping POST body on proxy_pass

I have a server running behind a firewall, with a single external IP, which therefore has requests proxied by domain through an nginx box.
When I run cURL behind the firewall, everything goes to plan:
HTTP/1.1 200 OK
The cURL is:
curl -H "Content-Type: application/json" -X POST --data #test.json 111.111.111.111/endpoint/ -i
As soon as I run this in Postman/Hurl.it/whatever from outside the network, I get 400 errors. The code throws a 400 when it is missing the POST body (JSON). Echoing this out shows that no JSON is being received.
The relevant Nginx configuration is thus:
server {
    listen 80;
    server_name domain.co;

    location / {
        proxy_pass http://111.111.111.111/;
        proxy_set_header Host $host;
    }
}
The domain does sit behind CloudFlare, and I've switched it to DNS only and tried that - I'd be very surprised if that was the issue.
I've had a look at other solutions and tried a few things out, but I'm not really sure what I'm doing wrong here, unless I fundamentally misunderstand how proxy_pass works?
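One way to narrow this down (a debugging sketch only, not a fix; the log format name and log path are made up) is to log the body nginx actually receives: $request_body is populated for locations handled by proxy_pass once the body has been read, so comparing the log for an internal and an external request shows whether the body is lost before or after nginx:
# in the http context
log_format bodylog '$remote_addr "$request" len=$content_length body="$request_body"';

server {
    listen 80;
    server_name domain.co;
    access_log /var/log/nginx/body_debug.log bodylog;

    location / {
        proxy_pass http://111.111.111.111/;
        proxy_set_header Host $host;
    }
}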

Custom 404 in Nginx

I'm trying to specify a custom 404 page and preserve the URL; however, using the config below gives me the error nginx: [emerg] "proxy_pass" cannot have URI part in location given by regular expression, or inside named location, or inside "if" statement, or inside "limit_except" block in...
location / {
    error_page 404 @404;
}

location @404 {
    proxy_pass https://example.com/404.html;
}
Does anyone know a solution to this?
I have tested your config on nginx 1.6.2. The error message complains about the URL in proxy_pass having a path specified. Specifying only https://server would be OK, but that doesn't solve your issue.
Named locations (like @fallback) were introduced partly to avoid having to use non-existent locations (which could later come into existence and cause issues). But we can also use a location which should never exist on the server itself, for example:
location / { error_page 404 /xx-404-xx; }
location /xx-404-xx { proxy_pass https://example.com/404.html; }
Redirecting to a relative URI causes only an internal nginx redirect, which does not change the URL as the browser sees it.
EDIT: Here is the test result:
kenny:/etc/nginx/sites-enabled:# wget -S server.com/abc/abc
HTTP request sent, awaiting response...
HTTP/1.1 404 Not Found
Server: nginx/1.6.2
Date: Thu, 16 Jul 2015 07:01:38 GMT
Content-Type: text/html
Content-Length: 5
Connection: keep-alive
Last-Modified: Thu, 16 Jul 2015 07:00:59 GMT
ETag: "5037762-5-51af8a241434b"
Accept-Ranges: bytes
2015-07-16 09:01:38 ERROR 404: Not Found.
From apache access log:
1.2.3.4 - - [16/Jul/2015:09:01:38 +0200] "GET /test.html HTTP/1.0" 200 5 "-" "Wget/1.13.4 (linux-gnu)"
In nginx I have proxy_pass pointing to an HTML file on an Apache webserver with just "test\n" in it. As you can see, nginx fetched that, passed along the headers from Apache (Last-Modified, ETag) and also Content-Length: 5, so it did receive the HTML file from Apache; wget just doesn't save the content of 404 errors. Also, many browsers don't display 404 pages by default if they are smaller than 1 kB (they show their own error page instead). So either make your 404 page bigger, or configure nginx to serve it as normal HTML with a 200 result code (error_page 404 =200 /xx).
I have no idea why you are receiving external redirects with the same config. Try it with wget to see exactly which headers nginx sent. Also, mixing http and https for the proxy should be no issue. Try removing everything else from your config and testing only this error page; maybe some other directive is causing this (like another location being matched instead).
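For reference, the =200 variant mentioned above would look roughly like this (an untested sketch; /xx-404-xx is the same placeholder used earlier, and the internal directive is optional but keeps the location from being requested directly):
location / {
    error_page 404 =200 /xx-404-xx;
}

location /xx-404-xx {
    internal;                                 # only reachable via the error_page redirect
    proxy_pass https://example.com/404.html;
}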

NGINX Serve Precompressed index file without source

I have found an interesting problem.
I am trying to serve some gzipped files without the sources using NGINX's gzip_static module (I know the downsides to this). This means you can have gzipped files on the server that will be served with Content-Encoding: gzip. For example, if there's a file /foo.html.gz, a request for /foo.html will be served the compressed file with Content-Type: text/html and Content-Encoding: gzip.
While this usually works, it turns out that when nginx looks for index files in a directory, the gzipped versions are not considered.
GET /index.html  ->  200
GET /            ->  403
I was wondering if anyone knows how to fix this. I tried setting index.html.gz as an index file, but it is served as a gzip file rather than a gzip-encoded HTML file.
This clearly won't work this way.
This is part of the module source:
if (r->uri.data[r->uri.len - 1] == '/') {
    return NGX_DECLINED;
}
So if the URI ends in a slash, it does not even look for the gzipped version.
But you could probably hack around it using rewrite.
(This is a guess, I have not tested it)
rewrite ^(.*)/$ $1/index.html;
Edit: To make it work with autoindex (a guess), you can try using this instead of rewrite:
location ~ /$ {
    try_files ${uri}/index.html $uri;
}
It is probably better overall than using rewrite, but you will need to try it.
You can prepare your precompressed files and then serve them.
Below, the file is prepared by PHP and served without checking whether the client supports gzip.
// PHP prepare the precompressed gzip file
file_put_contents('/var/www/static/gzip/script-name.js.gz', gzencode($s, 9));
// where $s is the string containing your file to pre-compress
# NginX: serve the precompressed gzip file
location ~ "^/precompressed/(.+)\.js$" {
    root /var/www;
    expires 262144;
    add_header Content-Encoding gzip;
    default_type application/javascript;
    try_files /static/gzip/$1.js.gz =404;
}
# Browser requests a file - transferred 113.90 KB (uncompressed size 358.68 KB)
GET http://inc.ovh/precompressed/script-name.js
# Response from the server
Accept-Ranges bytes
Cache-Control max-age=262144
Connection keep-alive
Content-Encoding gzip
Content-Length 113540
Content-Type application/javascript; charset=utf-8
ETag "63f00fd5-1bb84"
Server NginX

Handling OPTIONS request in nginx

We're using HAProxy as a load balancer at the moment, and it regularly makes requests to the downstream boxes to make sure they're alive using an OPTIONS request:
OPTIONS /index.html HTTP/1.0
I'm working on getting nginx set up as a reverse proxy with caching (using ncache). For some reason, nginx is returning a 405 when an OPTIONS request comes in:
192.168.1.10 - - [22/Oct/2008:16:36:21 -0700] "OPTIONS /index.html HTTP/1.0" 405 325 "-" "-" 192.168.1.10
When hitting the downstream webserver directly, I get a proper 200 response. My question is: how do you make nginx pass that response along to HAProxy, or how can I set the response in nginx.conf?
I'm probably late, but I had the same problem, and found two solutions to it.
The first is tricking nginx into treating the 405 status as a 200 OK and then proxying the request on, like this:
error_page 405 =200 @405;

location @405 {
    root /;
    proxy_pass http://yourproxy:8080;
}
The second solution is simply to catch OPTIONS requests and build a response for them:
location / {
    if ($request_method = OPTIONS) {
        add_header Content-Length 0;
        add_header Content-Type text/plain;
        return 200;
    }
}
Just choose which one suits you better.
I wrote about this in a blog post where you can find more details.
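If you go with the second approach inside a reverse-proxy setup, note that the location as written handles only the OPTIONS case; a rough sketch of the same idea with the proxy included, so that all other requests still reach the backend (the upstream address reuses the placeholder from the first solution):
location / {
    if ($request_method = OPTIONS) {
        add_header Content-Length 0;
        add_header Content-Type text/plain;
        return 200;
    }
    proxy_pass http://yourproxy:8080;
}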
In HAProxy's httpchk option, you can specify the HTTP method like this:
httpchk GET http://example.com/check.php
You can also use POST, or a plain URI like /. I have it check PHP, since PHP runs external to Nginx.
