I'm trying to set up a server. A Flask app needs to run on api.domain.com, while other subdomains also point to the same machine. Two of the three subdomains work fine through nginx, but my Flask script also tries to bind to port 80 on the same machine and therefore fails. Is there a way I can bind my Flask REST script to port 80 ONLY for the subdomain 'api'?
My current config is:
server {
server_name api.domain.me;
location / {
error_page 404 /404.html;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_max_temp_file_size 0;
proxy_pass http://127.0.0.1:5050/;
proxy_cache off;
proxy_read_timeout 240s;
}
}
There's one remaining problem, though: nginx seems to turn all POST requests into GET requests. Any ideas?
Thanks!
There is no way to bind two different applications to port 80 at the same time.
I would set up your API like this:
Bind your Flask API to port 8080.
In nginx, configure your subdomain to proxy to your Flask application:
upstream flask_app {
server 127.0.0.1:8080;
}
server {
listen 80;
server_name api.domain.com;
location / {
proxy_pass http://flask_app/;
proxy_set_header Host $host;
}
}
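For completeness: the other subdomains can keep their own server blocks on the same listen 80; nginx picks the right block by the Host header, so only the Flask app has to move off port 80. A minimal sketch of one such sibling block (the server name and root here are placeholders, not taken from the question):
server {
    listen 80;
    server_name www.domain.com;    # placeholder for another subdomain
    root /usr/share/nginx/www;     # served directly by nginx, no proxy
}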
I actually found the cause after a bit of diagnosis.
server {
if ($host = api.domain.me) {
return 301 https://$host;
}
# managed by Certbot
had to become:
server {
if ($host = api.domain.me) {
return 497 '{"code":"497", "text": "The client has made a HTTP request to a port listening for HTTPS requests"}';
}
This is because the Certbot-managed block redirects the request to HTTPS with a 301, and when clients follow a 301 they typically change the HTTP method to GET, which is why the POSTs arrived as GETs.
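If you want to keep the HTTP-to-HTTPS upgrade rather than returning 497, a hedged alternative is a 308 redirect: unlike 301/302, a 308 (and 307) tells clients to repeat the same method and body against the new URL. A minimal sketch, not the Certbot-generated block:
server {
    listen 80;
    server_name api.domain.me;
    # 308 preserves the request method, so POSTs stay POSTs after the upgrade
    return 308 https://$host$request_uri;
}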
Related
I have a MediaWiki running in a kubernetes cluster. The kubernetes cluster is behind an nginx proxy with the following config:
worker_processes 4;
worker_rlimit_nofile 40000;
events {
worker_connections 1024;
}
http {
upstream rancher {
server 192.168.122.90:80;
}
map $http_upgrade $connection_upgrade {
default Upgrade;
'' close;
}
server {
listen 443 ssl http2;
server_name .domain;
ssl_certificate /etc/ssl/certs/nginx-selfsigned.crt;
ssl_certificate_key /etc/ssl/private/nginx-selfsigned.key;
location / {
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Port $server_port;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://rancher;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
# This allows the execute shell window to remain open for up to 15 minutes. Without this parameter, the default is 1 minute and it will automatically close.
proxy_read_timeout 900s;
proxy_connect_timeout 75s;
}
}
server {
listen 80 default_server;
server_name _;
return 301 https://$host$request_uri;
}
}
I can get to the main page of the wiki, but have to log in before using it. When I click to log in using OAuth2, I get a 502 status from the nginx proxy server (nginx reports that the upstream ended the connection prematurely). If I do the same request with curl, I get a 302 with the location of the authorization endpoint, as expected. I really don't understand why. Bypassing the proxy and accessing the cluster directly (from the VM host) works normally, but that isn't what I want.
So the issue was related to neither nginx nor Kubernetes. It was an issue with MediaWiki, where compression had some funny behaviour. See more here, if anyone encounters anything similar. :)
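For anyone debugging similar compression oddities behind an nginx proxy (this is not what fixed it here, since the root cause was inside MediaWiki), one common sketch is to ask the upstream not to compress, so nginx and anything in between see plain response bodies:
location / {
    proxy_pass http://rancher;
    # Strip the client's Accept-Encoding so the upstream replies uncompressed;
    # nginx can still gzip for clients if gzip is enabled.
    proxy_set_header Accept-Encoding "";
}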
I am new to nginx configuration and I am trying to set up a reverse proxy, using nginx load balancing to distribute the load equally across the two servers of the upstream custom-domains, i.e.
server 111.111.111.11;
server 222.222.222.22;
Shouldn't the distribution be round robin by default?
I have tried weights, but no luck yet.
This is what my server config looks like:
upstream custom-domains {
server 111.111.111.11;
server 222.222.222.22;
}
upstream cert-auth {
server 00.000.000.000;
}
server {
listen 80;
server_name _;
#access_log /var/log/nginx/host.access.log main;
location / {
proxy_pass http://custom-domains;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection upgrade;
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
location /.well-known/ {
proxy_pass http://cert-auth;
}
}
Right now all the load seems to be going to just the first server, i.e. 111.111.111.11.
Help is greatly appreciated! Thanks again.
The config you posted is fine and should work in round-robin balancing mode.
However, as you mentioned, your second web server is having issues. Once those are fixed, your requests will be load balanced across both servers.
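One thing that can make round-robin look broken: with the defaults of max_fails=1 and fail_timeout=10s, nginx temporarily takes a failing backend out of rotation, so all traffic lands on the healthy one. A minimal sketch making those knobs explicit (the numbers are illustrative, not recommendations):
upstream custom-domains {
    # round-robin is the default; weight, max_fails and fail_timeout only tune it
    server 111.111.111.11 weight=1 max_fails=3 fail_timeout=30s;
    server 222.222.222.22 weight=1 max_fails=3 fail_timeout=30s;
}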
I'm trying to use a proxy pass with nginx to a Kibana pod, using basic auth.
This worked for testing (it's another k8s cluster, but pretty similar: same namespace, kube-dns, the environment inside the pods matches, and they can see each other).
Context: I deploy this via Helm on k8s in AWS; the nginx has a Kubernetes LoadBalancer service type (which is basically an ELB at AWS with its CNAME at Route 53).
If I point the nginx pod at kibana-app.kube-system.svc.cluster.local:5601, I see the request from nginx at the Kibana pod, but it returns 404 while trying to go to server.basePath: /api/v1/proxy/namespaces/kube-system/services/kibana-app/
I can access the kibana-app pod by getting the URL from "kubectl cluster-info" and then checking the logs; the request goes like this:
"method":"get","statusCode":200,"req":{"url":"/app/kibana"
"x-forwarded-uri":"/api/v1/proxy/namespaces/kube-system/services/kibana-logging/app/kibana
I can't find what's going wrong while trying to reach the Kibana path from nginx (after doing basic auth).
server {
listen 80;
server_name localhost;
access_log /var/log/nginx/host.access.log;
location / {
auth_basic "simple auth";
auth_basic_user_file /var/kibana_config/htpasswd;
try_files KIBANA @kibana-app;
}
location @kibana-app {
return 301 http://kibana-app-url-from-route53/server.basePath;
}
location /api {
proxy_pass https://api.awszone.mydomain/api;
proxy_set_header Authorization "Basic ";
}
}
I also tried moving the proxy_pass statement, removing the return and just doing a proxy_pass to where Kibana's pod is listening, but that doesn't work either: the request never gets to the pod, or when it does reach the kibana-app pod, it returns a 404.
Any thoughts?
Thanks!
Update:
I'm almost there. Now I can see the "Kibana is loading" screen, but it never finishes loading the bundles, JSON and so on. nginx pod log:
GET /api/v1/proxy/namespaces/kube-system/services/kibana-logging/bundles/commons.style.css
The same request at the Kibana pod, returning 404:
"statusCode":404,"req":{"url":"/app/kibana/v1/proxy/namespaces/kube-system/services/kibana-logging/bundles/commons.bundle.js?v=10146","method":"get","headers":{"host":"kibana.app.env.com","referer":"http://kibana.app.env.com/api "referer":"http://kibana.app.env.com/api"},"res":{"statusCode":404,"responseTime":2,"contentLength":9},"message":"GET /app/kibana/v1/proxy/namespaces/kube-system/services/kibana-logging/bundles/commons.bundle.js?v=10146
my nginx conf:
server {
listen 80;
server_name localhost;
access_log /var/log/nginx/host.access.log;
location / {
auth_basic "simple auth";
auth_basic_user_file /var/kibana_config/htpasswd;
try_files KIBANA @kibana-app;
}
location @kibana-app {
return 301 kibana.app.env.com/server.basePath;
}
location /api {
proxy_pass http://kibana-logging.kube-system.svc.cluster.local:5601;
proxy_set_header HOST $host;
proxy_set_header Referer $http_referer;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Authorization "simple auth ";
}
}
"kibana.app.env.com" it's just the FQDN that kubernetes creates at route53 as a CNAME to an ELB which hits the nodes from where nginx/kibana pods are. That's the url I use at the browser and it should reach nginx, ask me for basic authorization and then take me to kibana pod with server.basePath: /api/v1/proxy/namespaces/kube-system/services/kibana-logging Please, ask me something if I'm not being clear, sorry that I can't just copy/paste everything.
I'm not sure how this is working on the other cluster. The base path you mentioned, /api/v1/proxy/namespaces/kube-system/services/kibana-app/, looks like a kube-apiserver base path; that's the path a proxy set up with kubectl proxy would use to talk to your applications and services in the cluster.
If you really want to talk from nginx to Kibana inside the cluster, you would have to add the kibana-app.kube-system.svc.cluster.local:5601 endpoint to your nginx backend.
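A minimal sketch of what that could look like, assuming the Service actually exposes port 5601 under that DNS name (both taken from the question):
upstream kibana {
    server kibana-app.kube-system.svc.cluster.local:5601;
}
server {
    listen 80;
    location / {
        auth_basic "simple auth";
        auth_basic_user_file /var/kibana_config/htpasswd;
        proxy_pass http://kibana;
        proxy_set_header Host $host;
    }
}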
Finally, it's working:
server {
listen 80;
server_name localhost;
access_log /var/log/nginx/host.access.log;
location / {
auth_basic "simple auth";
auth_basic_user_file /var/kibana_config/htpasswd;
try_files KIBANA @kibana-app;
}
location @kibana-app {
return 301 /api/v1/proxy/namespaces/kube-system/services/kibana-logging/;
}
location /api/v1/proxy/namespaces/kube-system/services/kibana-logging/ {
proxy_set_header Authorization "simple auth ";
proxy_pass http://kibana-logging.kube-system.svc.cluster.local:5601/;
proxy_set_header HOST $host;
proxy_set_header Referer $http_referer;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_cache_bypass $http_upgrade;
}
}
Going to the URL that K8s created at AWS as an ELB (kibana-app.env.com) redirects to /api/v1/proxy/namespaces/kube-system/services/kibana-logging/, which is proxied to the Kibana pod at http://kibana-logging.kube-system.svc.cluster.local:5601.
I'm setting up a web/app/db stack, and the nginx proxy configuration isn't working the way I thought it would.
Here is an example of the stack. The URL of the application is:
https://testapp.com
here is the nginx config:
server {
listen 8886;
server_name _;
root /usr/share/nginx/html;
include /etc/nginx/default.d/*.conf;
#ELB
if ($http_user_agent = 'ELB-HealthChecker/2.0') {
return 200 working;
}
#HTTP to HTTPS
if ($http_x_forwarded_proto != 'https') {
return 301 https://$host$request_uri;
}
location / {
set $proxy_upstream_name "testapp.com";
port_in_redirect off;
proxy_pass http://internal-alb.amazonaws.com:8083/;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $server_name;
proxy_set_header Access-Control-Allow-Origin $http_origin;
}
}
The app is proxied to an internal AWS ALB, which forwards it to a single (at this point) application server.
I'm able to get the site to serve. However, the application creates a redirect on login, and I get the following response.
Request URL:https://testapp.com/login
Request Method:POST
Status Code:302
Remote Address:34.192.444.29:443
Referrer Policy:no-referrer-when-downgrade
Response Headers
content-language:en-US
content-length:0
date:Mon, 11 Sep 2017 18:35:34 GMT
location:http://testapp.com:8083/testCode
server:openresty/1.11.2.5
status:302
The redirect fails because it's being served on 443, not 8083.
For some reason the app or the proxy isn't updating the port as it does its reverse proxying, so the redirect carries the proxied port (8083) instead of the actual application port 443.
What do I need to do with the nginx config to get it to redirect correctly?
thanks.
myles.
The normal behaviour of nginx is to rewrite the upstream address to the address the page was served from. It looks like instead of using your upstream address (http://internal-alb.amazonaws.com:8083/), your app is responding with a mixture of the two (http://testapp.com:8083). You can either change the app's behaviour or, to fix it at the nginx level, use the proxy_redirect directive.
I'm reasonably sure the directive to fix this is proxy_redirect http://testapp.com:8083/ https://testapp.com/;
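In context, the directive would sit next to the existing proxy_pass; a minimal sketch under that assumption (hostnames taken from the question):
location / {
    proxy_pass http://internal-alb.amazonaws.com:8083/;
    # rewrite Location headers that come back with the upstream's host:port
    proxy_redirect http://testapp.com:8083/ https://testapp.com/;
    proxy_set_header Host $host;
}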
I am a front-end developer and recently tried my hand at nginx configuration, which is working fine. Below is the configuration:
server {
listen 80;
server_name localhost;
access_log /var/log/nginx/localhost.access.log;
location / {
#By default route to node.js running on localhost:9000 port
proxy_pass http://localhost:9000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
#currently only one server but will have to redirect to n hosts based on a parameter
location /hosts.json {
proxy_pass http://app-host.net:3000;
}
#currently only one server but will have to redirect to n hosts based on a parameter
location /hosts/ {
proxy_pass http://app-host.net:3000;
}
}
Now I need to redirect to 4 different servers based on a parameter, i.e. if the city is Bangalore I need to redirect to bangalore.corp.net:3000, if the city is New York I need to redirect to newyork.corp.net:3000, and so on.
Here is roughly what I am expecting:
location /app1/hosts/ {
proxy_pass http://app1-host.net:3000;
}
#But the proxy pass should point to http://app1-host.net:3000/hosts and not http://app1-host.net:3000/app1/hosts
How can we handle such a proxy_pass in the nginx configuration file? Please let me know.
You have a URL of the form /app1/hosts/foo which should map to http://app1-host.net:3000/hosts/foo. This can be achieved by appending a URI to the proxy_pass directive, which will act like an alias.
location /app1/hosts/ {
proxy_pass http://app1-host.net:3000/hosts/;
}
See this document for details.
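For the city-based routing mentioned in the question, a hedged sketch could use a map plus a variable proxy_pass, assuming the city arrives as a query parameter such as ?city=bangalore (the resolver address is a placeholder; the backend hostnames are from the question):
# map lives at the http{} level
map $arg_city $city_backend {
    default     bangalore.corp.net:3000;
    bangalore   bangalore.corp.net:3000;
    newyork     newyork.corp.net:3000;
}
server {
    listen 80;
    location /hosts/ {
        # a resolver is required because the backend is chosen from a variable;
        # the original /hosts/... URI is passed through unchanged
        resolver 8.8.8.8;
        proxy_pass http://$city_backend;
    }
}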