Nginx has been set up as a reverse proxy, but when requests are made, every other request returns a 404 error. The log of the application running on port 9000 shows that those requests never reach the application.
The configuration for the reverse proxy is:
server {
    listen 8088;
    listen [::]:8088;

    server_name www.example.com;

    access_log /var/log/nginx/www.example.com.access.log;
    error_log /var/log/nginx/www.example.com.com.error.log;

    location / {
        add_header Last-Modified $date_gmt;
        add_header Cache-Control 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
        if_modified_since off;
        expires off;
        etag off;

        proxy_pass http://localhost:9000/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
The part
add_header Last-Modified $date_gmt;
add_header Cache-Control 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
if_modified_since off;
expires off;
etag off;
has been added to try to tackle the problem.
The access log shows the 200 and the 404 alternating:
x.x.x.x - - [02/Feb/2023:11:49:28 +0100] "GET /ping HTTP/1.1" 200 92 "-" "curl/7.68.0"
x.x.x.x - - [02/Feb/2023:11:50:41 +0100] "GET /ping HTTP/1.1" 404 19 "-" "curl/7.68.0"
This looks like a load-balancer issue, but no load balancer is installed. Making multiple calls directly against the application with curl -i http://localhost:9000/ping doesn't show any problems.
Doing the same calls through the domain name, the first call gives:
HTTP/1.1 200 OK
Server: nginx/1.18.0 (Ubuntu)
Date: Thu, 02 Feb 2023 10:56:55 GMT
Content-Type: application/json; charset=UTF-8
Content-Length: 92
Connection: keep-alive
Access-Control-Allow-Origin: *
Vary: Origin
X-Frame-Options: DENY
Last-Modified: Thursday, 02-Feb-2023 10:56:55 GMT
Cache-Control: no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0
and then the second call (and every other call):
HTTP/1.1 404 Not Found
Server: nginx/1.18.0 (Ubuntu)
Date: Thu, 02 Feb 2023 10:56:56 GMT
Content-Type: text/plain; charset=utf-8
Content-Length: 19
Connection: keep-alive
Cache-Control: max-age=31536000
Vary: Accept-Encoding
X-Content-Type-Options: nosniff
X-Xss-Protection: 1; mode=block
404 page not found
There is a second Nginx instance running in a Docker container on port 80, but this should not be an issue: the 404 is definitely coming from the non-Docker Nginx running as a service on the machine.
I can't find the issue at the moment. Any advice on where to look or what could cause the problem?
Add $request_uri as follows:
proxy_pass http://localhost:9000$request_uri;
For proxying requests to FastCGI servers (such as PHP-FPM), use the fastcgi module and the fastcgi_pass directive instead of proxy_pass.
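For reference, a minimal sketch of the location block with the suggested change. It uses 127.0.0.1 instead of localhost, because once a variable such as $request_uri appears in proxy_pass, nginx resolves hostnames at run time and would need a resolver directive; an IP address avoids that.

location / {
    # pass the original request URI through unchanged
    proxy_pass http://127.0.0.1:9000$request_uri;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}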
Related
Currently, I am trying to make my nginx bypass the cache for any request that carries the header Cache-Control: no-cache. I only want nginx to bypass for Cache-Control: no-cache specifically and no other Cache-Control value. Is there any way I can check the value of the $http_cache_control header and then bypass based on it?
My current configuration is like this:
proxy_cache_bypass $http_restock_fridge;
How do I add Cache-Control: no-cache to the proxy_cache_bypass condition now?
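A sketch of one common approach (not from the original thread; $bypass_no_cache is a name invented here): map the incoming Cache-Control request header to a flag and pass that flag to proxy_cache_bypass alongside the existing one. The map block belongs in the http context.

map $http_cache_control $bypass_no_cache {
    default  0;
    no-cache 1;   # exact match: only a bare "Cache-Control: no-cache" triggers a bypass
}

location / {
    # the cache is bypassed if any of these variables is non-empty and not "0"
    proxy_cache_bypass $http_restock_fridge $bypass_no_cache;
}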
I'm working on a custom JSON API (no Twig involved). During development I need to keep making constant changes to the codebase, and every response gets cached for a few minutes or until I clear the Symfony cache.
I'm using a local nginx server, which should be properly configured since these are the headers I get:
Server: nginx/1.16.1
Content-Type: application/json
Transfer-Encoding: chunked
Connection: keep-alive
X-Powered-By: PHP/7.4.1
Cache-Control: max-age=0, private
Date: Fri, 24 Jul 2020 07:29:28 GMT
X-Debug-Token: f38aeb
X-Debug-Token-Link: http://localhost:8080/_profiler/f38aeb
X-Robots-Tag: noindex
Last-Modified: Friday, 24-Jul-2020 07:29:28 UTC
Cache-Control: private no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0
and responses are properly updated once I run bin/console c:c
I need to do this every time I change any class (controllers, services, models, whatever).
There must be something obvious I'm missing. Is there a way to disable class caching in my dev environment without having to clear the cache for every little change?
Edited: adding relevant configuration.
This is my nginx .conf file:
server {
    listen 80;
    server_name ~.*;

    location / {
        root /app;
        try_files $uri /index.php$is_args$args;
    }

    location ~ ^/index\.php(/|$) {
        client_max_body_size 50m;

        fastcgi_pass php:9000;
        fastcgi_read_timeout 1800;
        fastcgi_buffers 16 16k;
        fastcgi_buffer_size 32k;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME /app/public/index.php;

        # Disable cache
        add_header Last-Modified $date_gmt;
        add_header Cache-Control 'private no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
        expires off;
        etag off;
    }

    error_log /dev/stderr debug;
    access_log /dev/stdout;
}
I finally found the culprit; I'll leave this here in case someone faces the same issue. I had OPcache enabled in my dev environment, so the problem had nothing to do with Symfony or the nginx location block. I simply disabled OPcache and the issue was fixed:
opcache.enable=0
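If the change should apply only to this dev vhost rather than globally in php.ini, one option (a sketch; it assumes PHP-FPM, which honors the PHP_ADMIN_VALUE FastCGI parameter) is to set it from the nginx location block:

location ~ ^/index\.php(/|$) {
    fastcgi_pass php:9000;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME /app/public/index.php;
    # override php.ini for requests served through this location: turn OPcache off
    fastcgi_param PHP_ADMIN_VALUE "opcache.enable=0";
}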
I have two domains (site1.com, site2.com) running on a single server. site1 runs on port 80 of the server itself, while site2 runs on port 8082 from a WordPress Docker image. The container is started with the command:
docker run --privileged -itd --name wordpress2 -e WORDPRESS_DB_HOST=mysql -e WORDPRESS_DB_NAME=wp2 -e WORDPRESS_DB_USER=root -e WORDPRESS_DB_PASSWORD=pass -p 8082:80 --link mysql:mysql wordpress
Since site1.com already occupies port 80 and I can't ask users to visit site2.com with the 8082 port number appended, I set an nginx proxy_pass rule like the one below:
server {
    listen 80;
    server_name www.site2.com;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header REMOTE-HOST $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://127.0.0.1:8082;
    }
}
But it still jumps to port 8082 when I enter the URL http://www.site2.com in Chrome. Here is the curl result:
HTTP/1.1 301 Moved Permanently
Server: nginx
Date: Sat, 24 Aug 2019 09:29:40 GMT
Content-Type: text/html; charset=UTF-8
Content-Length: 0
Connection: keep-alive
X-Powered-By: PHP/7.3.7
Set-Cookie: PHPSESSID=ea2c8aa5c9dfbf79046440cc8f66d35e; path=/
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate
Pragma: no-cache
X-Redirect-By: WordPress
Location: http://www.site2.com:8082/
Could anybody please help explain the reason and how to make it behave like a normal website?
==========================
Solved: I just forgot to change the WordPress SITEURL and HOME settings. Thanks again to RichardSmith for the kind help!
I've been trying to get an HTTP-to-HTTPS redirect working via nginx for the better part of a day, and it's been a struggle. I've checked several Stack Overflow questions and a number of articles on the internet. I finally got the redirect working, but only for the direct IP address, not for the domain I'm trying to use.
So in other words, http://12.345.67.890 redirects to https://app.example.com, but http://app.example.com does not redirect to https://app.example.com.
Is this expected? What don't I understand here?
My site's config file:
upstream appupstream {
    server 0.0.0.0:3555;
}

server {
    error_log /var/log/nginx/error.log warn;

    listen [::]:80;
    listen 80;

    server_name app.example.com 12.345.67.890;
    return 301 https://$server_name$request_uri;

    access_log /var/log/nginx/access.log;
    root /home/ec2-user/app/public;

    proxy_set_header X-Forwarded-Proto $scheme;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass https://appupstream;
    }
}
When I curl these sites, the headers seem to support what I'm seeing in my browsers:
IP curl results
$ curl -I -L http://12.345.67.890
HTTP/1.1 301 Moved Permanently // <-- Note the permanent redirect on the ip
Server: nginx/1.12.1
Date: Sat, 03 Nov 2018 19:30:10 GMT
Content-Type: text/html
Content-Length: 185
Connection: keep-alive
Location: https://app.example.com/
HTTP/2 200
date: Sat, 03 Nov 2018 19:30:10 GMT
content-type: text/html; charset=utf-8
content-length: 4856
x-frame-options: SAMEORIGIN
x-xss-protection: 1; mode=block
x-content-type-options: nosniff
x-download-options: noopen
strict-transport-security: max-age=15778476; includeSubDomains
p3p: ABCDEF
Domain curl results
$ curl -I -L http://app.example.com
HTTP/1.1 200 OK // <-- No permanent redirect on domain
Date: Sat, 03 Nov 2018 19:30:39 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 4856
Connection: keep-alive
X-FRAME-OPTIONS: SAMEORIGIN
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
X-Download-Options: noopen
Strict-Transport-Security: max-age=15778476; includeSubDomains
P3P: ABCDEF
I've run nginx -t successfully, and I've used both nginx reload and nginx restart each time I've updated the file. I've cleared ALL browsing data (cookies, etc.) and revisited, but this behavior persists. Any suggestions/guidance would be much appreciated!
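For reference only (a sketch of the usual pattern, not a diagnosis of this setup): the redirect and the proxied application are normally split into two server blocks, with TLS terminated in the second one. The certificate paths below are placeholders.

server {
    listen 80;
    listen [::]:80;
    server_name app.example.com 12.345.67.890;
    # send all plain-HTTP traffic to the HTTPS site
    return 301 https://app.example.com$request_uri;
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name app.example.com;

    ssl_certificate     /etc/ssl/certs/app.example.com.crt;   # placeholder
    ssl_certificate_key /etc/ssl/private/app.example.com.key; # placeholder

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_pass https://appupstream;   # upstream block as defined above
    }
}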
I'm trying to use the following nginx configuration so that a cookie is shared across all subdomains. Unfortunately, it seems like the lines with X-Forwarded-For and proxy_cookie_domain are completely ignored (have no effect) by nginx. Any ideas what I'm doing wrong?
server {
    server_name discuss.mysite.com;
    error_log /var/log/nginx/discuss.log;

    location / {
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_cookie_domain ~^(.*)$ "$1; Domain=.discuss.mysite.com";
    }
}
This is the output from curl -I:
HTTP/1.1 200 OK
Server: nginx/1.4.6 (Ubuntu)
Date: Wed, 25 Mar 2015 18:14:45 GMT
Content-Type: text/html; charset=utf-8
Connection: keep-alive
Vary: Accept-Encoding
Status: 200 OK
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
Set-Cookie: destination_url=http%3A%2F%2Fdiscuss.mysite.com%2F; path=/
Set-Cookie: _forum_session=eTZVVGFzNWJDNjdnV0l0SGFlWDF2MDN2VUtQSnZ0NlN2MmVaR3NKR1A3VFB3MUZFVmRhbTlYNmwxS29TaWkvT05rQmtSaFQwbUhUVjNKeDEwV0JNRGc9PS0teXVLQU92YlRWalJ4WnhpTXNzNkxSdz09--1f472148823725a4e1ad45c0c3b48618c6560be3; path=/; HttpOnly
Set-Cookie: __profilin=p%3Dt; path=/
X-Request-Id: 1cb6fc64-f7b9-45d9-9647-94fbedc44345
X-Runtime: 0.367952
X-UA-Compatible: IE=edge
You are on the wrong track: the proxy_set_header directive sets additional headers on the request that nginx sends to the backend, not on the response that goes from the backend and nginx to your curl client; that is why you don't see them.
There are two options for sending additional headers to a client:
set them on the backend
use the add_header directive in the proper location
But those headers will be 'static' ones (except those sent from the backend, where you can construct them as you wish), which is not the right approach for cookies. To serve cookies correctly you should use the ngx_http_userid_module module and its directives.
To share a cookie across subdomains, use
userid_domain .mysite.com;
# or ".discuss.mysite.com" for fourth-level subdomains
userid_path /;
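A minimal sketch of how those directives could sit in the server block from the question (the cookie name and expiry below are assumptions; the directives themselves come from ngx_http_userid_module):

server {
    server_name discuss.mysite.com;

    # issue a uid cookie valid for every *.mysite.com subdomain
    userid         on;
    userid_name    uid;
    userid_domain  .mysite.com;
    userid_path    /;
    userid_expires 365d;

    location / {
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}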
The documentation says:
Adds the specified field to a response header provided that the response code equals 200, 201, 204, 206, 301, 302, 303, 304, or 307.
Try adding the headers on the backend that listens on port 8080.
Source: http://nginx.org/en/docs/http/ngx_http_headers_module.html#add_header
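As a side note (a sketch, not part of the quoted answer): since nginx 1.7.5, add_header also accepts an always parameter, which adds the header regardless of the response code:

add_header X-Custom-Header "value" always;   # hypothetical header name, for illustration only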