Variable can't be used in proxy_pass in nginx

Here is my config below:
server {
    server_name www.xxx.com;

    location ~* ^/(unstable|staging|prod)-console/ {
        set $serverip "";

        rewrite_by_lua '
            if string.find(ngx.var.uri, "unstable") ~= nil then
                ngx.var.serverip = "10.17.21.123"
            elseif string.find(ngx.var.uri, "staging") ~= nil then
                ngx.var.serverip = "10.17.21.123"
            elseif string.find(ngx.var.uri, "prod") ~= nil then
                ngx.var.serverip = "10.17.21.123"
            end
        ';

        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
        proxy_pass http://$serverip:1234;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
When I visit a page matching this location, it gives me a 500 error. The nginx error log says:
2017/03/17 15:11:19 [error] 2638#2638: *6 no host in upstream ":1234", client: 10.19.35.20, server: console.allinmoney.com, request: "GET /unstable-console/?arg=ifconfig HTTP/1.1", host: "console.allinmoney.com"
It means the variable hasn't been set correctly by the Lua script. Could anyone help with this? Thanks
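As an aside, the same routing can be sketched without Lua at all. This is only a minimal sketch, reusing the (apparently redacted, identical) backend IPs from the question: a map is evaluated lazily at request time, so $serverip is guaranteed to hold a value by the time proxy_pass reads it.
map $uri $serverip {
    # map blocks must sit at http{} level, not inside server{} or location{}
    default                "";
    ~*^/unstable-console/  10.17.21.123;
    ~*^/staging-console/   10.17.21.123;
    ~*^/prod-console/      10.17.21.123;
}
The location can then keep its proxy_pass http://$serverip:1234; unchanged. Note that when proxy_pass contains a variable, nginx resolves the target at run time; with a literal IP address in the variable this works without a resolver directive.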

Related

java.lang.Exception: Host is not set (running a JakartaEE app on Payara micro, behind nginx)

This error trace is polluting my logs and I can't find what is causing it on SA or anywhere else:
[2022-01-11T04:15:00.144+0100] [] [SEVERE] [AS-WEB-CORE-00037] [javax.enterprise.web.core] [tid: _ThreadID=27428 _ThreadName=http-thread-pool::http-listener(331)] [timeMillis: 1641870900144] [levelValue: 1000] [[
An exception or error occurred in the container during the request processing
java.lang.Exception: Host is not set
at org.glassfish.grizzly.http.server.util.Mapper.map(Mapper.java:865)
at org.apache.catalina.connector.CoyoteAdapter.postParseRequest(CoyoteAdapter.java:496)
at org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:309)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:238)
at com.sun.enterprise.v3.services.impl.ContainerMapper$HttpHandlerCallable.call(ContainerMapper.java:520)
at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:217)
at org.glassfish.grizzly.http.server.HttpHandler.runService(HttpHandler.java:182)
at org.glassfish.grizzly.http.server.HttpHandler.doHandle(HttpHandler.java:156)
at org.glassfish.grizzly.http.server.HttpServerFilter.handleRead(HttpServerFilter.java:218)
at org.glassfish.grizzly.filterchain.ExecutorResolver$9.execute(ExecutorResolver.java:95)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(DefaultFilterChain.java:260)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(DefaultFilterChain.java:177)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:109)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:88)
at org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:53)
at org.glassfish.grizzly.nio.transport.TCPNIOTransport.fireIOEvent(TCPNIOTransport.java:524)
at org.glassfish.grizzly.strategies.AbstractIOStrategy.fireIOEvent(AbstractIOStrategy.java:89)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.run0(WorkerThreadIOStrategy.java:94)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.access$100(WorkerThreadIOStrategy.java:33)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy$WorkerThreadRunnable.run(WorkerThreadIOStrategy.java:114)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:569)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:549)
at java.base/java.lang.Thread.run(Thread.java:829)
]]
This is for a JakartaEE app with JSF 2.3 (Faces) running on Payara micro 5.2021.2. If this is of any relevance, here are the parts of the nginx config that redirect the traffic to the app:
upstream payara {
    least_conn;
    server localhost:8080 max_fails=3 fail_timeout=5s;
    server localhost:8181 max_fails=3 fail_timeout=5s;
}

location /jsf-app-1.0-SNAPSHOT/ {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-Proto https;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_no_cache $cookie_nocache $arg_nocache$arg_comment;
    proxy_no_cache $http_pragma $http_authorization;
    proxy_cache_bypass $cookie_nocache $arg_nocache $arg_comment;
    proxy_cache_bypass $http_pragma $http_authorization;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Host $host:$server_port;
    add_header Access-Control-Allow-Origin *;
    proxy_set_header Access-Control-Allow-Origin *;
    proxy_pass http://payara$request_uri;
}

location / {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-Proto https;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_no_cache $cookie_nocache $arg_nocache$arg_comment;
    proxy_no_cache $http_pragma $http_authorization;
    proxy_cache_bypass $cookie_nocache $arg_nocache $arg_comment;
    proxy_cache_bypass $http_pragma $http_authorization;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Host $host:$server_port;
    add_header Access-Control-Allow-Origin *;
    proxy_set_header Access-Control-Allow-Origin *;
    proxy_pass http://payara$request_uri$is_args$args;
}
Looks like Grizzly is trying to obtain the hostname from the Host header of the request. Since HTTP/1.1 the Host header is required, but if the Host header is set to an empty name, Grizzly cannot obtain the name and throws an exception.
The Host request header is set by the HTTP client, but even if the Host header exists, an empty value for whatever reason will still trigger the exception.
Grizzly code: the code that throws the exception.
According to the Javadocs for Grizzly, you can set the default hostname by calling the setDefaultHostName(String defaultHostName) method, but the Mapper instance inside the HttpHandlerChain instance is not exposed. The default hostname of the Mapper instance in HttpHandlerChain is set to "localhost".
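On the nginx side, a hedged observation rather than a confirmed fix: the config above sets the Host header twice (first to $http_host, later to $host:$server_port), and nginx applies each proxy_set_header independently, so the backend may receive duplicate or, when the client sends no Host header, surprising values. A minimal sketch using a single, never-empty value:
location /jsf-app-1.0-SNAPSHOT/ {
    proxy_http_version 1.1;
    # $host falls back to the matching server_name when the request
    # carries no Host header, so it can never expand to an empty string
    proxy_set_header Host $host;
    # no URI part after the upstream name: the original request URI
    # is forwarded unchanged
    proxy_pass http://payara;
}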

Nginx error upstream timed out (110: Connection timed out) while SSL handshaking to upstream

I have three docker containers in my project: Nginx, a Tornado app, and a DB. My Tornado app serves a WebSocket app (URLs /clientSocket and /gatewaySocket) and a Django app (every URL except the WebSocket ones). I use an upstream to serve the Tornado app (which runs on port 8000) with Nginx. My project worked fine for the last few months with no errors, until today, when I started getting strange 504 errors from Nginx. Here is my Nginx config file:
limit_req_zone $binary_remote_addr zone=one:10m rate=10r/s;
limit_req_zone $binary_remote_addr zone=sms:10m rate=1r/m;

upstream my_server {
    server web_instance_1:8000; # tornado app
}

map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    listen 80;
    server_name server.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name server.com;

    ssl on;
    ssl_certificate /etc/nginx/ssl/chained.crt;
    ssl_certificate_key /etc/nginx/ssl/nginx.key;

    location / {
        # limit_req zone=one burst=5;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
        proxy_pass https://my_server;
    }

    location /rest/register/gateway/phone_number {
        limit_req zone=sms burst=5;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
        proxy_pass https://my_server;
    }

    location ~ /.well-known {
        root /var/www/acme;
        allow all;
    }

    location ~ ^/(admin|main-panel) {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
        proxy_pass https://my_server;
    }

    location /gatewaySocket {
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_pass https://my_server;
    }

    location /clientSocket {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_pass https://my_server;
    }
}
and here are the strange upstream timeout errors:
2018/06/12 19:23:09 [error] 5#5: *154 upstream timed out (110: Connection timed out) while reading response header from upstream, client: x.x.x.x, server: server.com, request: "GET /admin/main/serverlogs/834591/change/ HTTP/1.1", upstream: "https://172.18.0.3:8000/admin/main/serverlogs/834591/change/", host: "server.com", referrer: "https://server.com/admin/main/serverlogs/"
2018/06/12 19:23:09 [error] 5#5: *145 upstream timed out (110: Connection timed out) while reading response header from upstream, client: x.x.x.x, server: server.com, request: "GET /robots.txt HTTP/1.1", upstream: "https://172.18.0.3:8000/robots.txt", host: "server.com"
2018/06/12 19:40:51 [error] 5#5: *420 upstream timed out (110: Connection timed out) while SSL handshaking to upstream, client: x.x.x.x, server: server.com, request: "GET /gatewaySocket HTTP/1.1", upstream: "https://172.18.0.3:8000/gatewaySocket", host: "server.com:443"
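One observation on the last line: "while SSL handshaking to upstream" means nginx itself is attempting a TLS handshake with the backend, because every proxy_pass above uses the https:// scheme against port 8000. A minimal hedged sketch, assuming (the question does not confirm this) that the Tornado container serves plain HTTP on that port:
location /gatewaySocket {
    proxy_http_version 1.1;                       # needed for WebSocket upgrades; the original block omits it
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
    proxy_pass http://my_server;                  # plain http://, so nginx performs no upstream TLS handshake
}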

socket.io-client with nginx -- fails with 404 for POSTs! Why?

I have an app that works fine on my development machine, but when it is moved to my nginx server I see:
0|app | ::ffff:127.0.0.1 - POST /socket.io-client/?EIO=3&transport=polling&t=LwCov_4 HTTP/1.1 404 157 - 0.344 ms
0|app | POST /socket.io-client/?EIO=3&transport=polling&t=LwCov_4 404 0.344 ms - 157
0|app | ::ffff:127.0.0.1 - GET /socket.io-client/?EIO=3&transport=polling&t=LwCowC_ HTTP/1.1 200 - - 1.045 ms
0|app | GET /socket.io-client/?EIO=3&transport=polling&t=LwCowC_ 200 1.045 ms - -
where ALL GETs succeed and ALL POSTs fail with 404.
The relevant part of the nginx config is:
location / {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Host $host;
    proxy_pass https://localhost:9000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-NginX-Proxy true;
    proxy_redirect https://xxxx.com:9000/ https://xxxx.com/;
}

location ~* \.io {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_set_header Connection "upgrade";
    proxy_pass https://localhost:9000;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header X-NginX-Proxy true;
}
The express piece has:
import { handleSocket } from './config/socketio';

let server = https.createServer(options, app);
let socketio = socket_io(server, {
    serveClient: localEnv.env !== 'production',
    path: '/socket.io'
});

server.listen(localEnv.port, () => {
    log('Express server listening on %d, in %s mode', localEnv.port, app.get('env'));
});
And when run, it gives:
2017-09-16T17:07:18-0400 app.js:192 (Server.) Express server listening on 9000, in production mode
AND everything works except for the 404s on the socket.io POSTs.
It turns out the issue came from some unknown problem on the server. Simply restarting the droplet on DigitalOcean took care of it.

Elasticsearch : Connection refused while connecting to upstream

I've set up an Elasticsearch server with Kibana to gather some logs.
Elasticsearch sits behind an Nginx reverse proxy; here is the conf:
server {
    listen 8080;
    server_name myserver.com;

    error_log /var/log/nginx/elasticsearch.proxy.error.log;
    access_log off;

    location / {
        # Deny Nodes Shutdown API
        if ($request_filename ~ "_shutdown") {
            return 403;
            break;
        }

        # Pass requests to ElasticSearch
        proxy_pass http://localhost:9200;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;

        # For CORS Ajax
        proxy_pass_header Access-Control-Allow-Origin;
        proxy_pass_header Access-Control-Allow-Methods;
        proxy_hide_header Access-Control-Allow-Headers;
        add_header Access-Control-Allow-Headers 'X-Requested-With, Content-Type';
        add_header Access-Control-Allow-Credentials true;
    }
}
Everything works well; I can curl -XGET "myserver.com:8080" to check, and my logs come in.
But every minute or so, in the nginx error logs, I get this:
2014/05/28 12:55:45 [error] 27007#0: *396 connect() failed (111: Connection refused) while connecting to upstream, client: [REDACTED_IP], server: myserver.com, request: "POST /_bulk?replication=sync HTTP/1.1", upstream: "http://[::1]:9200/_bulk?replication=sync", host: "myserver.com"
I can't figure out what it is. Is there any problem in the conf that would prevent some _bulk requests from coming through?
It seems an upstream block and a different keepalive setting are necessary for the ES backend to work properly. Note that the failing upstream in the log is http://[::1]:9200, so localhost was being resolved to the IPv6 loopback while Elasticsearch was presumably listening on IPv4 only; the working configuration pins 127.0.0.1 explicitly. I finally had it working using the following configuration:
upstream elasticsearch {
    server 127.0.0.1:9200;
    keepalive 64;
}

server {
    listen 8080;
    server_name myserver.com;

    error_log /var/log/nginx/elasticsearch.proxy.error.log;
    access_log off;

    location / {
        # Deny Nodes Shutdown API
        if ($request_filename ~ "_shutdown") {
            return 403;
            break;
        }

        # Pass requests to ElasticSearch
        proxy_pass http://elasticsearch;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;

        # For CORS Ajax
        proxy_pass_header Access-Control-Allow-Origin;
        proxy_pass_header Access-Control-Allow-Methods;
        proxy_hide_header Access-Control-Allow-Headers;
        add_header Access-Control-Allow-Headers 'X-Requested-With, Content-Type';
        add_header Access-Control-Allow-Credentials true;
    }
}
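A quick hedged check for this class of issue (the commands are illustrative, assuming a default local Elasticsearch): query the backend over each loopback address and see which one refuses connections.
curl -XGET "http://127.0.0.1:9200/"     # IPv4 loopback: should answer if ES binds 127.0.0.1
curl -g -XGET "http://[::1]:9200/"      # IPv6 loopback: refused if ES is IPv4-only (-g stops curl globbing the brackets)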

Nginx map doesn't use the arguments of my regular expression

I'm trying to use nginx's map directive, but the results aren't what I expect.
This is what I have:
map $uri $new {
    default "";
    ~*/cc/(?P<suffix>.*)$ test.php?suffix=$suffix;
}

location ~ [a-zA-Z0-9/_]+$ {
    proxy_pass http://www.domain.com:81/$new;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Host $host;
    proxy_set_header X-Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
When I go to www.domain.com/cc/abc, I see this in the logs
2012/03/29 17:27:53 [warn] 3382#0: *33 an upstream response is buffered to a temporary file /var/cache/nginx/proxy_temp/5/00/0000000005 while reading upstream, client: 1.2.3.4, server: www.domain.com, request: "GET /cc/abc HTTP/1.1", upstream: "http://127.0.0.1:81/test.php?suffix=$suffix", host: "www.domain.com"
The $suffix isn't replaced.
But when I do this:
map $uri $new {
    default "";
    ~*/cc/(?P<suffix>.*)$ $suffix;
}

location ~ [a-zA-Z0-9/_]+$ {
    proxy_pass http://www.domain.com:81/$new;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Host $host;
    proxy_set_header X-Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
And now, when I go to www.domain.com/cc/abc, the logs show me this:
2012/03/29 17:29:39 [warn] 5916#0: *26 an upstream response is buffered to a temporary file /var/cache/nginx/proxy_temp/2/00/0000000002 while reading upstream, client: 1.2.3.4, server: www.domain.com, request: "GET /cc/abc HTTP/1.1", upstream: "http://127.0.0.1:81/abc", host: "www.domain.com"
So, when the replacement is a string that contains the variable, the variable isn't substituted; but if the replacement is only the variable, it works.
What am I doing wrong?
As you've discovered, map replacements can only be a static string or a single variable. Since test.php?suffix=$suffix doesn't start with a $, nginx assumes it's just a static string. Instead of using a map, you'll need to use two rewrites to accomplish what you want:
location ~ [a-zA-Z0-9/_]+$ {
    rewrite ^/cc/(.*) /test.php?suffix=$1 break;
    rewrite ^ / break;
    proxy_pass http://www.domain.com:81;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Host $host;
    proxy_set_header X-Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
The first rewrite will strip any initial /cc/ from the URL and append the rest as the suffix argument, like your map was trying to do. The break flag tells nginx to stop processing rewrite directives. If the first rewrite doesn't match, then the second will always match and set the URL to /.
EDIT: As of nginx 1.11.0, map values can be complex values, so the original config would work.
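That is, on 1.11.0 or newer the map from the question should behave as intended (this sketch just restates the question's own map):
map $uri $new {
    default "";
    # with complex-value support, text and variables may be combined here:
    ~*/cc/(?P<suffix>.*)$ test.php?suffix=$suffix;
}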
