I'm using nginx as a front end for Mongrel. Mongrel is listening on port 3001, and nginx is listening on port 3000.
In my application, there is a redirection after creating a model. Say I POST a request to http://xxxx:3000/users; it should redirect to http://xxxx:3000/users/1 (1 being the id of the new user), but it is actually redirected to http://xxxx/users/1, which causes a 404 error.
Why is port 3000 missing?
Are you using proxy_pass? You should add this line:
proxy_set_header Host $host:3000;
You need to post your nginx config here for anything more specific.
====
A better solution:
proxy_set_header Host $http_host;
$host does not include the port, while $http_host is taken verbatim from the request's Host header, which the browser sets (port included).
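For reference, a minimal sketch of a complete proxy block using this, assuming a single Mongrel instance on 127.0.0.1:3001 (the addresses are inferred from the question):

server {
    listen 3000;

    location / {
        # $http_host is the Host header exactly as the browser sent it,
        # port included, so redirects from Mongrel keep the :3000 port
        proxy_set_header Host $http_host;
        proxy_pass http://127.0.0.1:3001;
    }
}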
I plan to run a pgAdmin4 instance behind an Nginx reverse proxy.
What I want is for requests coming to http://myhost.com/pgadmin4 to be forwarded to the upstream http://localhost:8087, where pgAdmin is listening.
To achieve this, I followed this nginx.conf recipe from pgAdmin's official doc. Here's the snippet:
server {
    listen 80;
    server_name myhost.com;

    location /pgadmin4/ {
        proxy_set_header X-Script-Name /pgadmin4;
        proxy_set_header Host $host;
        proxy_pass http://localhost:8087/;
        proxy_redirect off;
    }
}
Everything works fine except that after a successful login, the server sends an HTTP 301 response with the Location header set to the root path (i.e. "Location: /") and bam, the user agent is redirected to http://myhost.com/ where nothing is waiting (except the nginx default page, for now).
Retyping the URL as http://myhost.com/pgadmin4/ still works. The user agent's state, cookies and all are still set, and the user can continue as normal. It's just a mild annoyance for end users to have to retype the whole URL.
I know that I can alter the upstream's HTTP redirect response using the proxy_redirect directive, but I can't figure out what the value should be.
Is what I'm trying to do achievable through Nginx configuration alone? Is there a specific pgAdmin4 setting that I need to change?
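One hedged possibility to experiment with, assuming the upstream only issues this one redirect to the bare root path: replace proxy_redirect off; with a regex form that rewrites a Location header of exactly "/" back under the prefix, leaving all other redirects alone:

location /pgadmin4/ {
    proxy_set_header X-Script-Name /pgadmin4;
    proxy_set_header Host $host;
    proxy_pass http://localhost:8087/;
    # rewrite only "Location: /" to "Location: /pgadmin4/"
    proxy_redirect ~^/$ /pgadmin4/;
}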
I am deploying Airflow 1.10.10 on Kubernetes using the official Helm chart (v7.0.0), but I am running into issues with OAuth.
Here's my setup:
Airflow Webserver with RBAC enabled. Airflow uses Flask AppBuilder in this case.
Server runs behind an Nginx reverse proxy that does https termination.
I try to authenticate against an Azure AD tenant using OAuth.
My problem
- When I try to log in with the Microsoft account, I get the error message "The reply URL specified in the request does not match the reply URLs configured for the application".
- The error is caused by Airflow setting the redirect URL to http://airflow.example.com/oauth_authorized/azure instead of httpS://airflow.example.com/oauth_authorized/azure.
What I think the issue is
Since nginx sends plain http requests to Flask, Flask generates an http URL for the redirect instead of an https one.
So, from what I understand, I need to find a way to tell Airflow/Flask to use https when generating the redirect URL.
What I tried:
I have two angles of attack:
1. Setting the base URL to https explicitly in the webserver_config.py file
I tried putting environ['wsgi.url_scheme'] = 'https' in the config file, but I get an "environ is not defined" error.
Can I even set this in the config file? What would I need to import for it to work?
2. Setting proxy headers in nginx
I tried to set multiple headers in Nginx using Kubernetes annotations; my current settings are:
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Port $server_port;
I also tried to set
proxy_set_header Host $host;
but this leads to all traffic being redirected to a comma-separated list of domains
airflow.example.com,airflow.example.com
which obviously does not work.
I based these settings on the Flask documentation.
The rest of the Nginx config is the default of the official Nginx ingress controller I have running in my cluster.
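For reference, with those annotations the generated server block should end up looking roughly like this (a sketch only; the server name, certificate handling, and upstream address are assumptions, since the real config is produced by the ingress controller):

server {
    listen 443 ssl;
    server_name airflow.example.com;
    # ssl_certificate / ssl_certificate_key managed by the controller

    location / {
        # $scheme evaluates to "https" here because TLS terminates
        # in this server block
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_pass http://airflow-webserver:8080;   # assumed service address
    }
}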
Does anybody have an idea what the issue could be? Are my two angles of attack valid or is there a third one that I am missing?
Thanks a lot, any help is appreciated!
Through Jelastic's dashboard, I created this:
I just clicked "New environment", then I selected nodejs. I added a docker image (of mailhog).
Now, I would like port 80 of my environment to serve the nodejs application. This is the case by default, so there is nothing to do there.
In addition, I would like port 8080 (or any port other than 80, like port 5000 for example) of my environment to serve mailhog, hosted on the docker image. To do that, I added the following lines to nginx-jelastic.conf (right after the first server block, which serves the nodejs app):
server {
    listen *:8080;
    listen [::]:8080;
    server_name _;

    location / {
        proxy_pass http://mailhog_upstream;
    }
}
where I have also defined mailhog_upstream like this:
upstream mailhog_upstream {
    server 10.102.8.215; ### DEFUPPROTO for common ###
    sticky path=/; keepalive 100;
}
If I now browse my environment's port 8080, I see ... the nodejs app. If I try any port other than 80 or 8080, I see nothing. Using another server_name doesn't help. I have tried several things, but nothing seems to work. Why is that? What am I doing wrong here?
Then I tried to get rid of the above mailhog_upstream and instead write
server {
    listen *:5000;
    listen [::]:5000;
    server_name _;

    location / {
        proxy_pass http://10.102.8.215;
    }
}
Browsing the environment's port 5000 doesn't work either.
If I replace the nodejs app's IP with that of my mailhog service, then mailhog runs on port 80. I don't understand how I can make the nodejs app run on port 80 and the mailhog service on port 5000 (or any port other than 80).
Could someone enlighten me please?
After all those failures, I tried another ansatz. Assume my env's path is example.com/. What I tried above was to get mailhog to work when calling example.com:5000, which failed. So I then tried to make mailhog available through a call to example.com/mailhog. To do that, I got rid of all my modifications above and completed the current server block in nginx-jelastic.conf with
location /mailhog {
    proxy_pass http://10.102.8.96:8025/;
    add_header Set-Cookie "SRVGROUP=$group; path=/";
}
That works in the sense that if I now browse example.com/mailhog, I get something on the page, but not exactly what I want: it's mailhog's page without any styling. Also, when I call mailhog's API through example.com/mailhog/api/v2/messages, I get a successful response without a body, when I should have received
{"total":0,"count":0,"start":0,"items":[]}
What am I doing wrong this time?
Edit
To be more explicit, I put together a manifest that exhibits the second problem with the nginx location.
The full list of locations for your case is the following (please pay attention to the URIs in the proxy_pass directives; they are different):
location /mailhog {
    proxy_pass http://172.25.2.128:8025/;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}

location /mailhog/api {
    proxy_pass http://172.25.2.128:8025/api;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}

# mailhog's UI requests its assets via absolute paths,
# so these prefixes must be proxied to it as well
location /css { proxy_pass http://172.25.2.128:8025; }
location /js { proxy_pass http://172.25.2.128:8025; }
location /images { proxy_pass http://172.25.2.128:8025; }
This works for me with your application:
# curl 172.25.2.127/mailhog/api/v2/messages
{"total":0,"count":0,"start":0,"items":[]}
The following ports are opened by default: 80, 8080, 8686, 8443, 4848, 4949, 7979.
Additional ports can be opened using:
- endpoints - maps a container's internal port to a random external one via the Jelastic Shared LB
- Public IP - provides direct access to all ports of your container
Read more in the following article: "Container configuration - Ports". This one may also be useful: "Public IP vs Shared Load Balancer".
I have a setup where Apache listens on port 8080 behind Varnish 4 on port 80, but my client needs SSL working on the site, so I set up Nginx for SSL termination on port 443 using this guide.
Everything worked fine at first over http, but on attempting to load the site over https, scripts required by the site wouldn't load. So I decided to change the site URL in Settings > General to an https URL; on saving the changes, I got a redirect loop error in Chrome.
I couldn't access the site's WordPress dashboard to change the URL back, so I had to do it via phpMyAdmin. But now, when the site is accessed via https, the site breaks because the scripts it requires to render content won't load.
Someone else has the same issue here but it doesn't look like it was solved.
How can I have the site on just https without a redirect loop in Chrome?
After hours of trying to get this to work, I finally found a fix: all I had to do was add an HTTP_X_FORWARDED_PROTO check to my wp-config.php just before changing the site URL, this way:
if (isset($_SERVER['HTTP_X_FORWARDED_PROTO']) && $_SERVER['HTTP_X_FORWARDED_PROTO'] === 'https') {
    $_SERVER['HTTPS'] = 'on';
}
One more thing I had to do was add proxy_set_header X-Forwarded-Proto $scheme; (the header name has to match the HTTP_X_FORWARDED_PROTO check above) to my default file in /etc/nginx/sites-enabled/default, so it looks like this:
server {
    ...
    location / {
        ...
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
Hope this helps someone out there.
You can set this header in nginx:
proxy_set_header X-Forwarded-Proto $scheme;
and then in your apache vhost configuration:
SetEnvIf X-Forwarded-Proto "^https$" HTTPS=on
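For context, a minimal sketch of the nginx side of such a setup, assuming Varnish is on 127.0.0.1:80 (the domain and certificate paths are placeholders):

server {
    listen 443 ssl;
    server_name example.com;                       # placeholder
    ssl_certificate     /etc/ssl/example.com.crt;  # placeholder
    ssl_certificate_key /etc/ssl/example.com.key;  # placeholder

    location / {
        proxy_pass http://127.0.0.1:80;   # Varnish, per the setup above
        proxy_set_header Host $host;
        # tells the backend the original request arrived over https
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}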
I am using the following configuration for nginx 1.4.1:
server {
    listen 8000;
    server_name correct.name.gr;

    location /test/register {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://127.0.0.1;
    }
}
What I want is for users visiting http://correct.name.gr:8000/test/register/ to be proxied to the Apache instance that runs on port 80.
When I visit http://correct.name.gr:8000/test/register/ I get correct results (index.php).
When I visit http://correct.name.gr:8000/test/register/asd I get correct results (404 from apache).
When I visit http://correct.name.gr:8000/test/asd I get correct results (404 from nginx).
When I visit http://correct.name.gr:8000/test/register123 I get correct results (404 from apache).
The problem is when I visit http://correct.name.gr:8000/test/register. I get a 301 response and am redirected to http://localhost/test/register/ (notice the trailing slash and, of course, the 'localhost')!
I haven't made any other nginx configuration changes to add trailing slashes or anything similar. Do you know what the problem is? I want http://correct.name.gr:8000/test/register to work correctly by proxying to Apache (or, if that's not possible, at least to return a 404 error rather than a redirect to the user's own localhost).
Update 1: I tried http://correct.name.gr:8000/test/register from a different computer than the one where I saw the bad behavior yesterday. Well, it worked: I just got a 301 response that pointed me to the correct http://correct.name.gr:8000/test/register/! How is it possible for it to work from one computer but not the other (I'm using the same browser, Chrome, on both)? I will test from a third one tomorrow to check the behavior.
Thank you!
My guess is that your upstream server (either Apache or your script) triggered a redirect to the absolute URL http://localhost/test/register/. Because you use http://127.0.0.1 in your proxy_pass directive, nginx doesn't find a matching domain name and returns the Location header as is.
I think the right solution is to NOT use absolute redirects if the redirect points to an internal URL. This is always good practice.
However, without changing the upstream server, there are two quick fixes.
You can use
proxy_pass http://localhost;
This tells nginx that the upstream's domain name is localhost. nginx will then know to replace http://localhost with http://correct.name.gr:8000 when it finds that prefix in the Location header from the upstream.
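A sketch of this first fix in context (only the proxy_pass line changes from the question's config):

location /test/register {
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Server $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    # with a hostname here, nginx's default proxy_redirect rewrites
    # Location headers starting with http://localhost/ so the client
    # lands back on correct.name.gr:8000
    proxy_pass http://localhost;
}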
The second is to add a proxy_redirect line that forces nginx to rewrite any Location header with http://localhost/ in it:
proxy_pass http://127.0.0.1;
proxy_redirect http://localhost/ /;
I prefer the first solution because it's simpler. There is no DNS lookup overhead from using proxy_pass http://localhost; because nginx resolves the name once in advance, when the server starts.
reference: http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_redirect
Did you already try playing with server_name_in_redirect?
However, I found your question via Google because I ran into the same issue with the trailing slash: nginx forces a 301 to the same URL with a trailing slash.
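For reference, a quick sketch of that directive in the question's server block (it is off by default in modern nginx; when on, nginx uses server_name instead of the request's Host header when building its own redirects):

server {
    listen 8000;
    server_name correct.name.gr;
    # use the Host header, not server_name, in nginx-generated redirects
    server_name_in_redirect off;
    ...
}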