I have an nginx configuration that looks like this:
location /textproxy {
    proxy_pass http://$arg_url;
    proxy_connect_timeout 1s;
    proxy_redirect off;
    proxy_hide_header Content-Type;
    proxy_hide_header Content-Disposition;
    add_header Content-Type "text/plain";
    proxy_set_header Host $host;
}
The idea is that this proxies to a remote URL and rewrites the Content-Type header to text/plain.
For example I would call:
http://nx/textproxy?url=http://foo:50070/a/b/c?arg=abc:123
And it would return the contents of http://foo:50070/a/b/c?arg=abc:123, but wrapped with a text/plain header.
This doesn't seem to work, though; I constantly get 'invalid port in upstream' errors:
2013/07/23 19:05:10 [error] 25292#0: *963177 invalid port in upstream "http://foo:50070/a/b/c?arg=abc:123", client: xx.xxx.xx.xx, server: ~^(?<h>nx)(\..+|)$, request: "GET /textproxy?url=http://foo:50070/a/b/c?arg=abc:123 HTTP/1.1", host: "nx"
Any ideas? I'm having a hard time trying to work it out.
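For reference, one variant I have been considering (untested, so just a sketch): since $arg_url already contains the http:// scheme, prefixing it again in proxy_pass produces an upstream string like http://http://foo:50070/..., which I suspect is what nginx then fails to parse as host:port. Passing the variable on its own needs a resolver directive, because a variable-based proxy_pass is resolved at request time; the resolver address below is only a placeholder.
location /textproxy {
    resolver 8.8.8.8; # placeholder DNS server, substitute one reachable from this box
    proxy_pass $arg_url; # $arg_url already carries the scheme, host and port
    proxy_connect_timeout 1s;
    proxy_redirect off;
    proxy_hide_header Content-Type;
    proxy_hide_header Content-Disposition;
    add_header Content-Type "text/plain";
    proxy_set_header Host $host;
}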
I have a Django app whose static files need to be served by nginx. I want the app to be accessible through OpenVPN. Both the nginx container and the Django container are in the same pod. My limited understanding is that it should be enough to run the VPN client in the background in the nginx container, and nginx should then be able to reach the backend over localhost because the two containers are in the same pod. But this doesn't seem to be working.
My VPN config looks like this:
client
dev tun
proto udp
remote <server_ip> 1194
# Push all traffic through the VPN - from stackoverflow answer
redirect-gateway def1
# except these two k8s subnets - from stackoverflow answer
route 10.43.0.0 255.255.0.0 net_gateway
route 10.42.0.0 255.255.0.0 net_gateway
resolv-retry infinite
nobind
persist-key
persist-tun
remote-cert-tls server
auth SHA512
cipher AES-256-CBC
ignore-unknown-option block-outside-dns
block-outside-dns
verb 3
Here I added @anemyte's suggestion from here about the routes (I am also using Calico).
Routing is configured in nginx using this config snippet:
upstream hello_django {
    server localhost:8080;
}
server {
    listen 80;
    server_tokens off;
    server_name _;
    # Django Static Files - routes beginning with /static/
    location /static {
        add_header Access-Control-Allow-Origin *;
        add_header Cache-Control public;
        add_header Pragma public;
        add_header Vary Accept-Encoding;
        #alias /app/web_static;
        root /var/www/;
    }
    location /static/admin/ {
        add_header Access-Control-Allow-Origin *;
        add_header Cache-Control public;
        add_header Pragma public;
        add_header Vary Accept-Encoding;
        root /var/www/;
    }
    location / {
        proxy_pass http://hello_django;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        add_header Cache-Control public;
        add_header Pragma public;
        add_header Vary Accept-Encoding;
    }
}
and the deployment file is similar to what @anemyte wrote.
I keep getting:
[warn] 13#13: *5 upstream server temporarily disabled while connecting to upstream, client: 11.254.0.15, server: _, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8080/"
[error] 13#13: *5 connect() failed (111: Connection refused) while connecting to upstream, client: 11.254.0.15, server: _, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8080/"
I am trying to access this URL from the browser:
http://example.com/validateResetPassword/?access_token=8E27UWYuamdf
My nginx location mapping is:
location ~* /validateResetPassword {
    proxy_pass http://localhost:8085/$request_uri;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
So whenever a request like this comes from the browser:
http://example.com/validateResetPassword/?access_token=8E27UWYuam
it should redirect to my spring boot application on port 8085.
I also tried removing ~*, but nothing works.
Also, I can see the errors below:
2019/11/25 13:19:29 [info] 13780#13780: *37 client closed connection while waiting for request, client: 10.68.104.173, server: 10.121.42.22:80
2019/11/25 13:19:29 [error] 13783#13783: *39 no resolver defined to resolve localhost, client: 10.68.104.173, server: mydoamin.com, request: "GET /validateResetPassword/dsfsdfsd HTTP/1.1", host: "example.com"
2019/11/25 13:19:30 [error] 13783#13783: *39 rewrite or internal redirection cycle while internally redirecting to "/index.html", client: 10.68.104.173, server: uc1f-bioinfocloud-algae-dev, request: "GET /favicon.ico HTTP/1.1", host: "uc1f-bioinfocloud-algae-dev", referrer: "http://example.com/validateResetPassword/dsfsdfsd"
This sounds quite critical to me:
*39 no resolver defined to resolve localhost
Did you try changing it to 127.0.0.1?
edit:
it should redirect to my spring boot application on port 8085.
If you use the proxy_* directives, you are not redirecting the client but proxying the request to another host...
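To illustrate, a rough sketch of that change (untested, and assuming the goal is simply to forward the original URI to the Spring Boot app on port 8085); note there is no slash before $request_uri, because that variable already begins with one:
location /validateResetPassword {
    # an IP address avoids the runtime DNS lookup that "localhost" triggers
    # once proxy_pass contains a variable such as $request_uri
    proxy_pass http://127.0.0.1:8085$request_uri;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}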
I would like to redirect specific subdomains of my domain to my backend as prefixes of the URL that is passed to the backend. This is because I have a single server and I do not want to handle multiple domains in the backend, due to the increased complexity.
Hence if I have:
sub1.domain.com => domain.com/sub1/
sub1.domain.com/pathname => domain.com/sub1/pathname
sub1.domain.com/pathname?searchquery => domain.com/sub1/pathname?searchquery
and so forth.
So far what I have come up with is the following:
server {
    charset utf8;
    listen 80;
    server_name
        domain.com,
        sub1.domain.com,
        sub2.domain.com,
        sub3.domain.com,
        sub4.domain.com,
        sub5.domain.com;
    # Default
    if ($host ~ ^domain\.com) {
        set $proxy_uri $request_uri;
    }
    # Rewrites
    if ($host ~ (.*)\.domain\.com) {
        set $proxy_uri $1$request_uri;
    }
    location / {
        expires 1s;
        proxy_pass http://node:8080$proxy_uri; #node is an internally listed host (docker container)
        proxy_set_header Host domain.com;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_cache_valid 200 1s;
    }
}
But unfortunately all I get is a 502: Bad Gateway with the following log:
2017/06/11 12:49:18 [error] 6#6: *2 no resolver defined to resolve node, client: 136.0.0.110, server: domain.com:8888,, request: "GET /favicon.ico HTTP/1.1", host: "sub1.domain.com:8888", referrer: "http://sub1.domain.com:8888/"
Any idea how I can achieve my goal? Any help would be greatly appreciated :)
Cheers!
It seems I wasn't so far from the answer: adding an upstream block before the server block was enough to get the desired effect.
upstream backend {
    server node:8080;
    keepalive 8;
}
I also had to slightly modify the proxy pass line to the following:
proxy_pass http://backend$proxy_uri;
The problem was most likely related to how nginx resolves the proxy_pass host name: when proxy_pass contains variables, the name has to be resolved at request time, which requires a resolver directive, whereas a name that matches an upstream block is resolved once at startup. If anyone reading this can add further insight, please edit this answer!
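For completeness, an alternative sketch that keeps proxy_pass pointing at node directly is to declare a resolver so the name can be looked up at request time. 127.0.0.11 is Docker's embedded DNS on user-defined networks, so whether it applies to this particular setup is an assumption:
server {
    listen 80;
    resolver 127.0.0.11 valid=10s; # Docker's embedded DNS - assumed to be available here

    location / {
        # with a resolver declared, the variable-based proxy_pass can resolve "node" per request
        proxy_pass http://node:8080$proxy_uri;
        proxy_set_header Host domain.com;
    }
}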
I have a really weird issue with NGINX.
I have the following upstream.conf file, with the following upstream:
upstream files_1 {
    least_conn;
    check interval=5000 rise=3 fall=3 timeout=120 type=ssl_hello;
    server mymachine:6006;
}
In locations.conf:
location ~ "^/files(?<command>.+)/[0123]" {
rewrite ^ $command break;
proxy_pass https://files_1 ;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
In /etc/hosts:
127.0.0.1 localhost mymachine
When I do: wget https://mymachine:6006/alive --no-check-certificate, I get HTTP request sent, awaiting response... 200 OK. I also verified with netstat that port 6006 is listening, and it's OK.
But when I send a request to the NGINX file server, I get the following error:
no live upstreams while connecting to upstream, client: .., request: "POST /files/save/2 HTTP/1.1", upstream: "https://files_1/save"
But the upstream is OK. What is the problem?
When defining an upstream, Nginx treats the destination server as something that can be up or down. Nginx decides if your upstream is down or not based on fail_timeout (default 10s) and max_fails (default 1).
So if you have a few slow requests that time out, Nginx can decide that the server in your upstream is down; and because you only have one, the whole upstream is effectively down, and Nginx reports no live upstreams. It is better explained here:
https://docs.nginx.com/nginx/admin-guide/load-balancer/http-health-check/
I had a similar problem, and you can prevent this by overriding those settings.
For example:
upstream files_1 {
    least_conn;
    check interval=5000 rise=3 fall=3 timeout=120 type=ssl_hello;
    server mymachine:6006 max_fails=0;
}
I had the same error no live upstreams while connecting to upstream
Mine was SSL related: adding proxy_ssl_server_name on solved it.
location / {
    proxy_ssl_server_name on;
    proxy_pass https://my_upstream;
}
I'm running my nginx as a reverse proxy to serve content from a remote URL. It was working fine for a while, but when I moved it to another host I started getting the following errors.
I tested internet access from the new host and all is fine; nginx even serves from the root location without issue. But when I request a location that acts as a reverse proxy, I get:
8583#0: *2 upstream timed out (110: Connection timed out) while connecting to upstream, client: 10.64.159.12, server: xxxxx.com, request: "GET /web/rest/v1/story/656903.json HTTP/1.1", upstream: "http://requestedurl.com:80/web/rest/v1/story/656903.json", host: "myurl.com"
Location config:
location /data {
    sub_filter 'http' 'https';
    sub_filter_once off;
    sub_filter_types application/json;
    proxy_read_timeout 300;
    proxy_pass http://url here ;
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header Accept-Encoding "";
}
Any advice?