I have an Nginx reverse proxy in front of my Grafana server.
I'm trying to use Nginx auth_basic to automatically log the user in to Grafana.
I would like to do this so that an embedded iframe graph placed in another web application (not on the same network) can be logged in automatically.
nginx.conf
server {
    server_name grafana.mydomain.com;
    ...

    location / {
        proxy_pass http://grafana.mydomain.com;
    }

    location /grafana/ {
        proxy_pass http://grafana.mydomain.com;
        auth_basic "Restricted grafana.mydomain.com";
        auth_basic_user_file /etc/nginx/htpasswd/grafana.mydomain.com;
        proxy_set_header X-WEBAUTH-USER $remote_user;
        proxy_set_header Authorization "";
    }
}
grafana.ini
[auth.basic]
enabled = true
[security]
allow_embedding = true
cookie_samesite = lax
root_url = https://grafana.mydomain.com/grafana/
[auth.proxy]
enabled = true
header_name = X-WEBAUTH-USER
header_property = username
auto_sign_up = true
sync_ttl = 60
enable_login_token = true
What is happening with this setup is that if I go to grafana.mydomain.com, the normal login page appears and everything works fine.
Whereas if I go to grafana.mydomain.com/grafana/ after logging in with Nginx, Grafana returns this:
If I try to click on any link on the page, a lot of unauthorized errors appear and I get logged out.
I've been playing with those settings a lot:
proxy_set_header X-WEBAUTH-USER
root_url
enable_login_token
cookie_samesite
But I was unable to make things work.
The user is created inside Grafana, so I have tried giving the created user full permissions:
But I still get unauthorized errors and 404 errors.
I'm not even sure this is the right path to achieve what I'm trying to do. Any suggestions?
I removed the two locations and put the authentication on the / location instead.
Then I switched cookie_samesite back to none and it started working as it was supposed to.
By doing this I lost the ability to log in to Grafana normally, though.
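For reference, based on the description above, the final configuration is roughly the following (a sketch assembled from the snippets in this question, not a verified config; the proxy_pass target is kept from the original snippet):

server {
    server_name grafana.mydomain.com;
    ...
    location / {
        auth_basic "Restricted grafana.mydomain.com";
        auth_basic_user_file /etc/nginx/htpasswd/grafana.mydomain.com;
        # Pass the basic-auth user to Grafana's auth.proxy and drop the
        # Authorization header so Grafana's own basic auth is not triggered.
        proxy_set_header X-WEBAUTH-USER $remote_user;
        proxy_set_header Authorization "";
        proxy_pass http://grafana.mydomain.com;
    }
}

with cookie_samesite = none in grafana.ini; presumably root_url also goes back to https://grafana.mydomain.com/ since the /grafana/ sub-path no longer exists, although that change is not stated explicitly above.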
I'm running Joplin Server on my Raspi4 at http://127.0.0.1:22300, and on the Raspi I can successfully access the web app.
Since I don't want to publish port 22300, I want Joplin Server to be accessible via https://myRaspi/joplinServer. Therefore I'm using Nginx.
At first I tried:
location /joplinServer {
    proxy_pass http://127.0.0.1:22300;
}
Now when calling https://myRaspi/joplinServer from any other machine, Nginx keeps the /joplinServer prefix, resulting in an internal request to http://127.0.0.1:22300/joplinServer, which does not exist, because Joplin Server itself knows nothing about the prefix and seems to have trouble handling it.
I also tried this:
location = /joplinServer {
    rewrite ^/joplinServer?$ http://127.0.0.1:22300 break;
}
But now every external request to https://myRaspi/joplinServer ends up as http://127.0.0.1:22300 on my machine, which obviously does not work.
So what do I have to configure in Nginx to make my setup work?
Thanks in advance!
This post gave me the solution, which looks like this:
location /joplinServer/ {
    proxy_redirect off;
    rewrite ^/joplinServer/(.*)$ /$1 break;
    proxy_pass http://127.0.0.1:22300;
}
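For what it's worth, the same prefix stripping can usually be achieved without a rewrite: when proxy_pass is given a URI part (here just a trailing /), Nginx replaces the matched location prefix with that URI before passing the request upstream. A minimal sketch of that variant:

location /joplinServer/ {
    proxy_redirect off;
    # the trailing slash on proxy_pass strips the /joplinServer/ prefix
    proxy_pass http://127.0.0.1:22300/;
}

Either way, absolute links generated by Joplin Server itself will not know about the /joplinServer/ prefix unless its base URL is configured accordingly.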
I have a use case where node_exporter is running behind a reverse proxy. Here is a snippet of my current configuration:
location /node_exporter {
    proxy_pass http://127.0.0.1:9100/metrics;
}
This is running fine, but I want to implement it without the /metrics subpath, so I made this change:
location /node_exporter {
    proxy_pass http://127.0.0.1:9100/;
}
It opens the initial page of node_exporter with the Metrics link, but clicking it redirects to /metrics instead of /node_exporter/metrics, which in turn gives a 404.
Please suggest how to use a rewrite rule for this use case.
The following site configuration should be enough:
location /node_exporter {
    proxy_pass http://127.0.0.1:9100/;
}
as long as the telemetry path is changed when starting node_exporter:
./node_exporter/node_exporter --web.telemetry-path="/node_exporter/metrics"
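If you would rather leave node_exporter's defaults alone and handle everything on the Nginx side, a rewrite in the spirit of the other answers here also works as a sketch; note that the link on the landing page will still point at /metrics, which is why changing --web.telemetry-path as above is the simpler fix:

location /node_exporter/ {
    # strip the /node_exporter prefix before handing the request to node_exporter
    rewrite ^/node_exporter/(.*)$ /$1 break;
    proxy_pass http://127.0.0.1:9100;
}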
I'm hoping someone can help me.
I am trying to re-deploy a set of Flask apps onto an Ubuntu 20 machine from an Ubuntu 18 machine, but they are behaving differently from earlier deployments.
They have successfully been deployed on Ubuntu 14, 16, and 18, including the conversion from Python 2 to 3 at the Ubuntu 18 deployment, but the latest deployment on Ubuntu 20 has me totally stumped.
They are running with the config described below (successfully on Ubuntu 18). When deploying on a new Ubuntu 20 machine, Flask is treating the script root as part of the request path (in addition to using it as the script root), which results in a 404.
The setup currently working on Ubuntu 18 is as follows (Nginx simplified for testing over a non-TLS connection).
The app running on :9999 is not under its own sub-path (it is served from /) and is working fine.
NGINX:
server {
    underscores_in_headers on;
    listen 80 default_server;
    server_name _;

    location /.well-known {
        root /var/www/html/;
    }

    location / {
        include uwsgi_params;
        uwsgi_pass 127.0.0.1:9999;
    }

    location /a {
        uwsgi_param SCRIPT_NAME /a;
        uwsgi_modifier1 30;
        include uwsgi_params;
        uwsgi_pass 127.0.0.1:10017;
    }
}
UWSGI:
[uwsgi]
socket = :10017
plugin = python3
wsgi-file = /home/webmaster/app/run
callable = app
master = true
enable-threads = true
processes = 1
chdir= /home/webmaster/app
uid = www-data
gid = www-data
I set the 404 handler to return some URL information, and instead of getting the app's landing page I get the 404 with the following (i.e. Nginx is passing the request to uWSGI, and the app is running):
URL:
http://192.168.0.250/a
Output:
url root http://192.168.0.250/a/
script root /a
request url http://192.168.0.250/a/a
request path /a
request full_path /a?
So url_root and script_root are as you'd expect, and what we want to see, but request_url (http://192.168.0.250/a/a) and request_path (/a) are not. For everything to have its own location, request_path should be "/".
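For context, the 404 handler that produced the output above was along these lines (a reconstruction, since the exact handler isn't shown here; it simply echoes Flask's request attributes):

from flask import request

@app.errorhandler(404)
def debug_not_found(e):
    # Echo the URL/WSGI routing information Flask sees for this request.
    return (
        "url root {}\nscript root {}\nrequest url {}\nrequest path {}\nrequest full_path {}\n".format(
            request.url_root,
            request.script_root,
            request.url,
            request.path,
            request.full_path,
        ),
        404,
    )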
What I've tried
I referred to previous questions, particularly this one:
Q: Serving flask app on subdirectory nginx + uwsgi
and this one:
A: How to host multiple flask apps under a single domain hosted on nginx?
I've tried the suggestions in those posts, including the following nginx configurations:
super simple, with uwsgi being asked to do more:
location /c {
    include uwsgi_params;
    uwsgi_pass 127.0.0.1:10017;
}
with UWSGI:
[uwsgi]
socket = :10017
plugin = python3
wsgi-file = /home/webmaster/app/run
callable = app
master = true
enable-threads = true
processes = 1
chdir= /home/webmaster/app
uid = www-data
gid = www-data
mount = /a=run
manage-script-name = true
This did not solve the problem.
I next tried a rewrite to try and trick uWSGI into thinking that it is running without the script root:
location /b {
    rewrite ^/b/(.*) /$1 break;
    include uwsgi_params;
    uwsgi_pass 127.0.0.1:10017;
}
This too was unsuccessful.
I have also been through the suggestions in the above questions' discussion sections, and whilst most work on modifying the script root, none removes the script root from the path, which would enable a Flask route of "/start" to serve the root of the app identified by www.app.com/a/start.
I am also aware that uwsgi_modifier1 30; is deprecated, although this is the first deployment of these Flask apps where that has been an issue.
I have also tried using the parameter script_name=None on the Flask side, as per this question:
How can I avoid uwsgi_modifier1 30 and keep my WSGI application location-independent
but nothing thus far has worked, and I'm totally stumped. It's probably something simple, but I don't know where to look from here.
Last thing: these apps run in emperor mode, and running uwsgi from the command line, either with the individual .ini files or with the emperor file, makes no difference.
uwsgi is running as a service installed via apt, not pip (but it appears to run fine with plugin = python3), in case anyone has had experience with this being problematic.
I'd like to stick with the apt installation if at all possible.
Thanks heaps in advance to anyone that can help.
[on edit]
This is my WSGI file, where I tried to let Flask handle the static route. I should have included it earlier, but it was an oversight.
Thanks
#!/usr/bin/python3
from application import app
app.config['SECRET_KEY'] = 'XXXXXXXXXX'
app.config["SESSION_COOKIE_SECURE"] = True
app.config["REMEMBER_COOKIE_SECURE"] = True
app.config["SESSION_COOKIE_HTTPONLY"] = True
app.config["REMEMBER_COOKIE_HTTPONLY"] = True
app.config['APPLICATION_ROOT'] = '/a'
app.static_url_path = '/a'
if __name__ == "__main__":
    app.run(debug=True, host='0.0.0.0', port=5000)
For others who stumble across this:
Use rewrite as per the other questions' advice; just check the expression you use for correctness.
This will fail:
location /foo {
    rewrite ^/foo/(.*) /$1 break;
    uwsgi_param SCRIPT_NAME /foo;
    include uwsgi_params;
    uwsgi_pass 127.0.0.1:10017;
}
as it only works for requests like /foo/bar (script name "foo", route "/bar"); the root of /foo itself will fail.
This will work:
rewrite ^/foo(.*) /$1 break;
e.g.
location /foo {
    rewrite ^/foo(.*) /$1 break;
    uwsgi_param SCRIPT_NAME /foo;
    include uwsgi_params;
    uwsgi_pass 127.0.0.1:10017;
}
It passes the script name and strips it from the request path, which makes it work.
The problem was the extra "/" in the first rewrite.
It was a simple mistake that took a lot of finding because I couldn't see the forest for the trees... so I hope this can assist someone else.
I have some problems with Gogs and Nginx on my local network.
Every time I write "domainname" I mean the hostname of the server.
I have a little server running OpenMediaVault (as a NAS), and I also want to run some other things like Gogs. It is running, but only at the URL "http://domainname:3000".
I want Gogs to be available under git.domainname or gogs.domainname.
I have tried so many things, but nothing is working.
I have added a config in sites-available (symlinked into sites-enabled):
server {
    listen 80;
    server_name git.domainname;

    location / {
        proxy_pass http://localhost:3000;
    }
}
In my Gogs configuration, I have the following [server] section:
[server]
PROTOCOL = http
DOMAIN = domainname
HTTP_PORT = 3000
ROOT_URL = http://domainname:%(HTTP_PORT)s/
DISABLE_SSH = false
SSH_PORT = 22
START_SSH_SERVER = false
OFFLINE_MODE = true
I don't have much experience with Nginx, so I hope someone can help me get this working. I'd also like to gain enough experience to expose other services on subdomains in a generic way.
If any information is missing, please let me know.
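For a Gogs-style setup behind Nginx, the server block commonly ends up looking something like the sketch below. The second server_name, the proxy_set_header lines, and the ROOT_URL change are assumptions on top of the config shown above, and git.domainname / gogs.domainname must actually resolve to this server (via local DNS or hosts entries) before the server_name match can ever happen:

server {
    listen 80;
    server_name git.domainname gogs.domainname;

    location / {
        proxy_pass http://localhost:3000;
        # forward the original host so Gogs sees git.domainname instead of localhost
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

With that in place, DOMAIN and ROOT_URL in the Gogs [server] section would typically be updated to git.domainname and http://git.domainname/ so the links Gogs generates match the proxied hostname.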
I am running a Rails 3 site on Ubuntu 8.04 with Nginx 1.0.0 and Passenger 3.0.7.
In my Nginx error.log I started seeing the message X-Accel-Mapping header missing quite a lot. Googling led me to the docs of Rack::Sendfile and to the Nginx docs.
Now, my app can be accessed through several domains and I am using send_file in my app to deliver some files specific to the domain they are requested from, e.g., if you come to domain1.com/favicon.ico I look up the favicon at public/websites/domain1/favicon.ico.
This works fine and I don't think I need/want to get Nginx involved and create some private area where I store those files, as the samples in the Rack::Sendfile docs suggest.
How can I get rid of the error message?
This message means that Rack::Sendfile has disabled X-Accel-Redirect for you because the configuration for it is missing in nginx.conf...
I'm using Nginx + Passenger 3 + Rails 3.1.
Gathering information from these pages, I figured it out:
http://wiki.nginx.org/X-accel
http://greenlegos.wordpress.com/2011/09/12/sending-files-with-nginx-x-accel-redirect
http://code.google.com/p/substruct/source/browse/trunk/gems/rack-1.1.0/lib/rack/sendfile.rb?r=355
Serving Large Files Through Nginx via Rails 2.3 Using x-sendfile
I have a controller which maps /download/1 requests to storage files that have their own directory structure, like this: storage/00/00/1, storage/01/0f/15, etc. So I need to pass the request through Rails, but then I need to use the send_file method, which will use X-Accel-Redirect to send the final file to the browser through Nginx directly.
Within the code I have this:
send_file(
  '/var/www/shared/storage/00/00/01',  # the absolute path to the file you want to send
  :disposition => :inline,
  :filename => @file.name
)
I replaced the filename for the purposes of this example.
Now I had to add these lines to my nginx.conf:
server {
    # ...
    passenger_set_cgi_param HTTP_X_ACCEL_MAPPING /var/www/shared/storage/=/storage/;
    passenger_pass_header X-Accel-Redirect;

    location /storage {
        root /var/www/shared;
        internal;
    }
    # ...
}
The path /storage is not visible from the outside world; it is internal only.
Rack::Sendfile gets the header X-Accel-Mapping, extracts the path from it and replaces /var/www/shared/storage with /storage.... Then it spits out the modified header:
X-Accel-Redirect: /storage/00/00/01
which is then processed by nginx.
I can see this works correctly as the file is downloaded 100x faster than before and no error is shown in the logs.
Hope this helps.
We used a similar technique to the one NoICE described, but I replaced the "hard-coded" directory containing all the files with a regular expression describing the directory tree that contains them.
Sounds hard, yeah? Just take a look at these (/etc/nginx/sites-available/my.web.site):
location ~ ^/assets/(.+-[a-z0-9]+\.\w+)$ {
    alias /home/user/my.web.site/public/assets/$1;
    internal;
}

location ~ ^/images/(.+)$ {
    alias /home/user/my.web.site/public/images/$1;
    internal;
}
This should be used with this check:
location / {
    # ...
    if (-f $request_filename) {
        expires max;
        break;
    }
    # ...
}
to prevent static files from being processed by Rails.
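As a side note, on current Nginx versions the same "serve the file if it exists, otherwise hand the request to the app" check is usually expressed with try_files instead of an if block. A rough sketch, where the @app named location and its proxy_pass target are placeholders for however the Rails app is actually reached in your setup (and the expires max behaviour would then be set separately, e.g. in a location for the asset paths):

location / {
    # serve an existing file from the document root directly, otherwise fall through
    try_files $uri @app;
}

location @app {
    # placeholder upstream; replace with your Passenger/app-server configuration
    proxy_pass http://127.0.0.1:3000;
}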
I did it by following this guide:
https://mattbrictson.com/accelerated-rails-downloads
My server sends the file path /private_upload/file/123/myfile.txt; the file is located at /data/myapp-data/private_upload/file/123/myfile.txt.
# Allow NGINX to serve any file in /data/myapp-data/private_upload
# via a special internal-only location.
location /private_upload {
    internal;
    alias /data/myapp-data/private_upload;
}

# ---------- BACKEND ----------
location @backend {
    limit_req zone=backend_req_limit_per_ip burst=20 nodelay;
    proxy_pass http://backend;
    proxy_set_header X-Sendfile-Type X-Accel-Redirect;
    proxy_set_header X-Accel-Mapping /=/; # this header is required, it does nothing
    include /etc/nginx/templates/myapp_proxy.conf;
}
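To make the flow concrete, a request in this setup goes roughly like this (reconstructed from the two location blocks above):

# 1. The client requests a download URL, which is proxied to the backend app.
# 2. Because of X-Sendfile-Type, the backend answers with the header
#    X-Accel-Redirect: /private_upload/file/123/myfile.txt instead of the file body.
# 3. Nginx intercepts that header, matches the internal /private_upload location,
#    and the alias turns it into /data/myapp-data/private_upload/file/123/myfile.txt,
#    which Nginx then serves directly.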