I'm trying to set up Graphite to work with Grafana in Docker, based on this project: https://github.com/kamon-io/docker-grafana-graphite
When I run my Dockerfile, I get a 403 Forbidden error from nginx.
My nginx configuration is almost the same as the project's. I run my Dockerfiles on a server and test them from my Windows machine, so the configurations are not exactly the same... for example, I have:
server {
    listen 80 default_server;
    server_name _;

    location / {
        root /src/grafana/dist;
        index index.html;
    }

    location /graphite/ {
        proxy_pass http://myserver:8000/;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header Host $host;
        client_max_body_size 10m;
        client_body_buffer_size 128k;
        proxy_connect_timeout 90;
        proxy_send_timeout 90;
        proxy_read_timeout 90;
        proxy_buffer_size 4k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
        add_header Access-Control-Allow-Origin "*";
        add_header Access-Control-Allow-Methods "GET, OPTIONS";
        add_header Access-Control-Allow-Headers "origin, authorization, accept";
    }
}
But I still keep getting 403 Forbidden. Checking the nginx error log shows:
directory index of "/src/grafana/dist/" is forbidden
Stopping and running it again, it says:
directory index of "/src/grafana/dist/" is forbidden
I'm very new to nginx... I was wondering if there's something in the configuration that I'm misunderstanding.
Thanks in advance.
That's because you are hitting the first location block and the index file is not found.
A request to '/' will look for 'index.html' in '/src/grafana/dist'.
Confirm that:
1. 'index.html' exists.
2. You have the right permissions: nginx needs read access to the entire directory tree leading up to 'index.html'. That is, it must be able to read the directories 'src', 'src/grafana', and 'src/grafana/dist', as well as 'index.html' itself.
A hacky quick-fix to achieve this would be to do 'sudo chmod -R 755 /src', but I don't recommend it.
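If the file and permissions check out, you can also make the lookup explicit with try_files, so a directory request falls back to the app's entry point instead of attempting a directory listing. A minimal sketch of the first location block, assuming the Grafana build really lives at /src/grafana/dist:

location / {
    root /src/grafana/dist;
    index index.html;
    # Serve the exact file if it exists; otherwise fall back to
    # index.html rather than attempting a directory index.
    try_files $uri /index.html;
}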
Related
I have configured my nginx to serve a Docker container on the /wagtail endpoint.
When I hit localhost/wagtail, it gives me 21 requests with a 302 status code.
I don't understand why it is doing that now, as it was working fine until I created 3 Docker containers and isolated nginx in one of them, instead of 2 containers with a multi-stage build for nginx. I've tried to put back the configuration that was working, and it no longer works!
When I try to access the /wagtail endpoint, it makes 21 requests in one second, all with a 302 status code, and there is no output in the browser.
What am I missing here?
My nginx.conf is as follows:
server {
    listen 80;
    server_name localhost;

    location / {
        proxy_pass http://172.20.128.3:3000;
        proxy_set_header Host $host;
        proxy_redirect off;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        client_max_body_size 20M;
        add_header X-Frame-Options "SAMEORIGIN";
    }

    location /wagtail {
        proxy_pass http://172.20.128.2:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Script-Name /wagtail;
        client_max_body_size 20M;
    }

    location /static/ {
        alias /app/static/;
    }

    location /media/ {
        alias /app/media/;
    }
}
I've changed the gunicorn --bind from 0.0.0.0:8000 to 127.0.0.1:8000 and I now get a Bad Gateway error...
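Binding gunicorn to 127.0.0.1 inside its own container makes it unreachable from the nginx container, which would explain the Bad Gateway; it needs to stay on 0.0.0.0. For the 302 loop, one thing worth trying is giving the location and proxy_pass matching trailing slashes, and addressing the backend by its Compose service name instead of a fixed IP. A sketch under those assumptions (the service name "wagtail" is hypothetical):

location /wagtail/ {
    # Container IPs like 172.20.128.2 can change when containers are
    # recreated; a Compose service name on a shared network is stable.
    proxy_pass http://wagtail:8000/;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Script-Name /wagtail;
    client_max_body_size 20M;
}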
Thank you, best.
I'm using Docker and running nginx alongside Varnish.
Because I'm running Docker, I've set the resolver manually at the top of the nginx configuration (resolver 127.0.0.11 ipv6=off valid=10s;) so that changes to container IPs are picked up without needing to restart nginx.
This is the relevant part of the config that's giving me trouble:
location ~ ^/([a-zA-Z0-9/]+)$ {
    set $args '';  # clear out the entire query string
    set $card_name $1;
    set $card_name $card_name_lowercase;
    rewrite ^ /cards?card=$card_name break;

    proxy_set_header x-cache-key card-type-$card_name;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header Host $host;
    proxy_set_header REQUEST_URI $request_uri;
    proxy_http_version 1.1;

    set $backend "http://varnish:80";
    proxy_pass $backend;
    proxy_intercept_errors on;
    proxy_connect_timeout 60s;
    proxy_send_timeout 86400s;
    proxy_read_timeout 86400s;
    proxy_buffer_size 128k;
    proxy_buffers 4 256k;
    proxy_busy_buffers_size 256k;
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    error_page 503 /maintenance.html;
}
When I visit a URL for this, e.g. https://example.com/Test, I get 500 internal server error.
In the nginx error log, I see the following:
2022/04/27 23:59:45 [error] 53#53: *1 invalid URL prefix in "", client: 10.211.55.2, server: example.com, request: "GET /Test HTTP/2.0", host: "example.com"
I'm not sure what's causing this issue -- http:// is included in the backend, so it does have a proper prefix.
If I just use proxy_pass http://varnish:80, it works fine, but the backend needs to be a variable in order to force docker to use the resolver.
I've stumbled across a similar issue. Defining the
set $backend "http://varnish:80";
outside of the location block fixed it. A likely explanation: the break flag on rewrite stops processing of rewrite-module directives, and set belongs to that same module, so a set placed after rewrite ... break never runs, leaving $backend empty; that empty value is what produces the invalid URL prefix in "" error.
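A minimal sketch of that workaround, keeping the resolver so container IP changes are still picked up; the assignment now happens at server level, before any location-level break can cut rewrite processing short:

server {
    resolver 127.0.0.11 ipv6=off valid=10s;
    # Assigned before the location's rewrite ... break runs, so it
    # is never left empty when proxy_pass evaluates it.
    set $backend "http://varnish:80";

    location ~ ^/([a-zA-Z0-9/]+)$ {
        set $card_name $1;
        rewrite ^ /cards?card=$card_name break;
        proxy_pass $backend;
    }
}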
I have a Nginx server configured as a reverse-proxy cache server to a remote Apache server. At this point, all is running fine. Here's a part of my configuration (I've left some irrelevant parts out):
server {
    listen 443 ssl http2;
    server_name www.mywebsite.com mywebsite.com;

    location / {
        proxy_pass https://123.123.123.123;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header REMOTE_ADDR $remote_addr;
        proxy_ignore_headers X-Accel-Expires Expires Vary;
        proxy_redirect off;
        proxy_cache_revalidate off;
        proxy_next_upstream off;
        proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504;
        proxy_cache_bypass $bypass $do_not_cache;
        proxy_no_cache $do_not_cache;
        proxy_cache_valid any 2880m;
        proxy_cache_valid 404 1m;
        proxy_cache my_cache;
        proxy_cache_min_uses 1;
        proxy_cache_lock on;
        proxy_http_version 1.1;
    }
}
Now what I want to do is to serve files from a specific directory from local files stored on the Nginx server. The rest of the content must still be cached from the source server:
//www.mywebsite.com => Serves cached content from //123.123.123.123
//www.mywebsite.com/local => Serves files stored locally on the Nginx server
Is it possible to include another location in the "server" section of the configuration? I tried something like this but it doesn't work:
location /local/ {
    root /home/user/public_html/local;
    try_files $uri $uri/ =404;
    add_header 'Access-Control-Allow-Origin' '*';
    add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
    add_header 'Access-Control-Max-Age' 1728000;
}
Sorry for my English, by the way.
You're on the right track; you should define another location with an alias directive:
location /local/ {
    alias /home/user/public_html/local/;
}
Now, when you request http://YOUR_DOMAIN/local/page.html, it will serve /home/user/public_html/local/page.html.
Keep in mind that nginx must have read permission on the specified directory.
Read this for the subtle difference between the root and alias directives.
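To illustrate the difference with the question's paths, assuming a request for /local/page.html (these are alternatives, not to be combined):

# With root, the full request URI is appended to the path:
location /local/ {
    root /home/user/public_html;           # -> /home/user/public_html/local/page.html
}

# With alias, the matched location prefix is replaced by the path:
location /local/ {
    alias /home/user/public_html/local/;   # -> /home/user/public_html/local/page.html
}

This also shows why the original attempt failed: root /home/user/public_html/local appends /local a second time, so nginx looked for /home/user/public_html/local/local/page.html.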
I'm trying to set up a new site on my server: I've updated the nginx settings to serve the new site, created the directories, set up the correct permissions, and created the DNS entries on my domain.
However when I restart nginx I get the following error:
nginx: [emerg] host not found in upstream "petproject" in /usr/local/nginx/conf/nginx-vhosts.conf:277
nginx: configuration file /usr/local/nginx/conf/nginx.conf test failed
Could someone tell me how to fix this? I have identical settings with a different domain and directory and they work fine, so I can't quite pinpoint what's wrong here.
I have included my nginx.conf below.
server {
    listen 217.23.14.107:80;
    server_name petproject.com;
    access_log /var/log/nginx/petproject.com_access.log;
    error_log /var/log/nginx/petproject_error.log;
    root /home/petproject/laravel/public;

    location ~* \.(jpg|jpeg|gif|css|js|ico|rar|gz|zip|pdf|tar|bmp|xls|doc|swf|mp3|avi|png|htc|txt|flv)$ {
        access_log off;
        expires 7d;
    }

    location / {
        index index.php index.html index.htm;
        proxy_pass http://petproject:8080;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        client_max_body_size 10m;
        client_body_buffer_size 128k;
        proxy_connect_timeout 90;
        proxy_send_timeout 90;
        proxy_read_timeout 90;
        proxy_buffer_size 4k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
    }

    # deny access to apache .htaccess files
    location ~ /\.ht {
        deny all;
    }
}
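The [emerg] means nginx could not resolve the hostname petproject when it loaded the configuration; proxy_pass hostnames are resolved at startup. A sketch of one common fix, defining an explicit upstream whose name matches (10.0.0.5:8080 is a placeholder for the backend's real address):

# "petproject" now resolves as an upstream group name rather than
# through DNS; point the server entry at the actual backend.
upstream petproject {
    server 10.0.0.5:8080;
}

With the upstream in place, the location can use proxy_pass http://petproject; (the port moves into the upstream's server line). Alternatively, add a petproject entry to /etc/hosts or use an IP address directly.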
I'm stuck configuring a simple reverse proxy on AWS.
Since we have one host (an nginx reverse proxy) serving public access, I decided to follow the rules and created the following configuration.
server {
    listen 9990;
    server_name project-wildfly.domain.me;
    access_log /var/log/nginx/wildfly.access.log;
    error_log /var/log/nginx/wildfly.error.log;
    proxy_buffers 16 64k;
    proxy_buffer_size 128k;
    root /var/www/;
    index index.html index.htm;

    location /console {
        proxy_set_header Host $server_addr:$server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
        add_header Cache-Control "no-cache, no-store";
        proxy_pass http://10.124.1.120:9990/console;
    }

    location /management {
        proxy_set_header Host $server_addr:$server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
        add_header Cache-Control "no-cache, no-store";
        proxy_pass http://10.124.1.120:9990/management;
    }
}
This will serve the admin console and I'm able to log in with the user. Then this message appears:
Access Denied
Insufficient privileges to access this interface.
There is nothing in the error log. Thanks for any hint!
I had the same issue when configuring WildFly 15 with nginx 1.10.3 as a reverse proxy.
The setup was very similar to the first post, redirecting /management & /console to wildflyhost:9990.
I was able to access the console directly via :9990, and when comparing the network traffic between direct and nginx-proxied requests, I noticed that the Origin and Host headers were different.
So in my case the solution was to force the Origin and Host headers in nginx to something that WildFly is expecting. I couldn't find this solution elsewhere, so I'm posting it here for future reference, although the thread is old.
location /.../ {
    proxy_set_header Host $host:9990;
    proxy_set_header Origin http://$host:9990;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_pass_request_headers on;
    proxy_pass http://wildflyhost:9990;
    ...
}
Maybe you need to turn on the management module. Try this:
sh standalone.sh -b 0.0.0.0 -bmanagement 0.0.0.0 &