I have an Express app running behind Nginx, so when I try to get the user's IP I always get 127.0.0.1 instead of the real one, which Nginx sets in the X-Real-IP header. How do I read this header? Is there a way to get it via the socket object?
The code would basically look like this:
io.sockets.on( 'connection', function( socket ) {
    var ip = /* ??? */;
    /* do something with the IP…
       … some stuff …
    */
});
To get the IP when you're running behind NGINX or another proxy:
var ip = req.header('x-forwarded-for') || req.connection.remoteAddress;
or, for Socket.IO:
client.handshake.headers['x-forwarded-for'] || client.handshake.address.address;
From: http://www.hacksparrow.com/node-js-get-ip-address.html
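Putting the two together in the original handler, a minimal sketch (the header fallback chain and the logging are my additions; on old Socket.IO versions the address is an object, as in the snippet above):

io.sockets.on( 'connection', function( socket ) {
    var headers = socket.handshake.headers;
    // Prefer the proxy-supplied headers; X-Real-IP is what the asker's
    // Nginx sets, x-forwarded-for is the more common convention.
    var ip = headers['x-real-ip'] ||
             headers['x-forwarded-for'] ||
             socket.handshake.address;
    console.log( 'client connected from ' + ip );
});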
I am using nginx as a proxy to a Node.js application. I have the same application running multiple times, each on a different port. Requests are directed to the correct application/port based on the host name.
So:
test1.domain.com would be proxied to 127.0.0.1:8000
test2.domain.com would be proxied to 127.0.0.1:8001
test3.domain.com would be proxied to 127.0.0.1:8002
When I hard-code proxy_pass http://127.0.0.1:8000; everything works fine.
Now I have written an njs script that reads a file in a user's directory to get the port number based on the subdomain. Here is the script:
# inclusion of the js file
js_include sites-available/port_assign.js;
js_set $myPort port;
function port(r) {
    var host = r.headersIn.host;
    var subdomain = host.split('.');
    var fs = require('fs');
    var filename = '/home/' + subdomain[0] + '/port';
    var port = fs.readFileSync(filename);
    port.trim();    // note: trim() returns a new string; the result is discarded here
    return(port);
}
This does read the file and return the port number, which I have verified in the error logs, because I get:
2020/01/21 04:26:46 [error] 2729#2729: *6 invalid port in upstream "127.0.0.1:8001
", client: 96.54.17.234, server: *.foundryserver.com, request: "GET / HTTP/1.1", host: "test1.foundryserver.com"
Now when I try to use the directive proxy_pass http://127.0.0.1:$myPort; I get an internal server error and the error stated above.
I'm not sure what the difference between the two is. I can only think that the variable $myPort somehow picked up stray characters.
There was some extra information in the port variable. I was able to store the port number in JSON format and parse it in the js; the file contains {"port":"8000"}.
function port(r) {
    var host = r.headersIn.host;
    var subdomain = host.split('.');
    var fs = require('fs');
    var filename = '/home/' + subdomain[0] + '/myport';
    var jport = fs.readFileSync(filename);
    var port = JSON.parse(jport);
    return(port.port);
}
Parsing the JSON strips any unseen characters from the variable.
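For what it's worth, the "unseen" character was most likely the trailing newline from the file: in the original script the result of trim() was discarded, because JavaScript strings are immutable. So an alternative sketch, assuming the file again contains just the bare port number:

function port(r) {
    var fs = require('fs');
    var subdomain = r.headersIn.host.split('.');
    // trim() returns a new string, so return its result
    // instead of discarding it as the original script did.
    return fs.readFileSync('/home/' + subdomain[0] + '/port').toString().trim();
}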
I have an Nginx server to which clients make requests with a client certificate containing a specific CN and SAN. I want to be able to extract the CN (Common Name) and SAN (Subject Alternative Name) fields of that client cert.
Rough example config:
server {
    listen 443 ssl;
    ssl_client_certificate /etc/nginx/certs/client.crt;
    ssl_verify_client on; # 400 if request without valid cert
    location / {
        root /usr/share/nginx/html;
    }
    location /auth_test {
        # do something with the CN and SAN.
        # tried these embedded vars so far, to no avail
        return 200 "
            $ssl_client_s_dn
            $ssl_server_name
            $ssl_client_escaped_cert
            $ssl_client_cert
            $ssl_client_raw_cert";
    }
}
Using the embedded variables exposed by the ngx_http_ssl_module module I can access the DN (Distinguished Name), and therefore the CN etc., but I don't seem to be able to get at the SAN.
Is there some embedded variable, other module, or general Nginx foo I'm missing? I can access the raw cert, so is it possible to decode that manually and extract it?
I'd really rather do this at the Nginx layer as opposed to passing the cert down to the application layer and doing it there.
Any help much appreciated.
You can extract them with the built-in Nginx map, e.g. for the CN:
map $ssl_client_s_dn $ssl_client_s_dn_cn {
    default "";
    ~,CN=(?<CN>[^,]+) $CN;
}
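The mapped value can then be used like any other variable, for example to hand the CN to an upstream application (a sketch; the X-Client-CN header name is an arbitrary choice, not something Nginx defines):

location / {
    # Forward the CN extracted by the map block above.
    proxy_set_header X-Client-CN $ssl_client_s_dn_cn;
    proxy_pass http://upstream;
}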
I'm not a Lua expert, but here's what I got working:
local openssl = require('openssl')

dnsNames = {}
-- Walk every extension of the client certificate and collect the
-- dNSName entries of the subjectAltName extension.
for k, v in pairs(openssl.x509.read(ngx.var.ssl_client_raw_cert):extensions()) do
    for k1, v1 in pairs(v:info()) do
        if (type(v1) == 'table') then
            for k2, v2 in pairs(v1) do
                if (type(v2) == 'table') then
                    for k3, v3 in pairs(v2) do
                        if (k3 == 'dNSName') then
                            table.insert(dnsNames, v3:toprint())
                        end
                    end
                end
            end
        end
    end
end
ngx.say(table.concat(dnsNames, ':'))
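For reference, this snippet assumes an OpenResty context where ngx.var and ngx.say are available (e.g. inside a content_by_lua_block) and that the lua-openssl library is installed.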
You can do it through OpenResty + lua-openssl and parse the raw certificate to get it.
See: https://github.com/Seb35/nginx-ssl-variables/blob/master/COMPATIBILITY.md#ssl_client_s_dn_x509
Just like this:
local variableName = string.match(require("openssl").x509.read(ngx.var.ssl_client_raw_cert):issuer():oneline(), "/C=([^/]+)")
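Note that this one-liner matches against the issuer DN; for fields of the client's own subject DN you would presumably call :subject() instead of :issuer().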
I had the same problem when I tried to retrieve the subject DN at an upstream server.
Someone might find the following advice useful: fields such as the subject DN are accessible via embedded variables (see link1). Besides that, I had to pass this data to the upstream in a request header, which I did via proxy_set_header (see link2). It was possible without any extra Nginx extension (there is no need to rebuild with extra modules; the default ones are enough).
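A minimal sketch of that approach (the header name is an assumption, not from the original post):

# Pass the client cert subject DN to the upstream application.
proxy_set_header X-SSL-Client-S-DN $ssl_client_s_dn;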
This is an example of how a URI value can be extracted from the client certificate extensions and then forwarded to the upstream server as a header. This is useful when implementing WebID over TLS authentication, for example.
location / {
    proxy_pass http://upstream;
    set_by_lua_block $webid_uri {
        local openssl = require('openssl')
        webIDs = {}
        for k, v in pairs(openssl.x509.read(ngx.var.ssl_client_raw_cert):extensions()) do
            for k1, v1 in pairs(v:info()) do
                if (type(v1) == 'table') then
                    for k2, v2 in pairs(v1) do
                        if (type(v2) == 'table') then
                            for k3, v3 in pairs(v2) do
                                if (k3 == 'uniformResourceIdentifier') then
                                    table.insert(webIDs, v3:data())
                                end
                            end
                        end
                    end
                end
            end
        end
        return webIDs[1]
    }
    proxy_set_header X-WebID-URI $webid_uri;
}
Let me know if it can be improved.
I'm trying to make an HTTP request using lua-resty-http.
I created a simple GET API on https://requestb.in
I can make a request using the address: https://requestb.in/snf2ltsn
However, when I try to do this in nginx I get the error no route to host.
My nginx.conf file is:
worker_processes 1;
error_log logs/error.log;

events {
    worker_connections 1024;
}

http {
    lua_package_path "$prefix/lua/?.lua;;";
    server {
        listen 8080;
        location / {
            resolver 8.8.8.8;
            default_type text/html;
            lua_code_cache off; # enables live reload for development
            content_by_lua_file ./lua/test.lua;
        }
    }
}
and my Lua code is:
local http = require "resty.http"
local httpc = http.new()
--local res, err = httpc:request_uri("https://requestb.in/snf2ltsn", {ssl_verify = false,method = "GET" })
local res, err = httpc:request_uri("https://requestb.in/snf2ltsn", {
    method = "GET",
    headers = {
        ["Content-Type"] = "application/x-www-form-urlencoded",
    }
})
How can I fix this issue? Or is there any suggestion for making HTTP requests from nginx?
Any clue?
PS: There is a commented-out section in my Lua code. I also tried to make the request using that code, but nothing happened.
Change the package path like this:
lua_package_path "$prefix/resty_modules/lualib/?.lua;;";
lua_package_cpath "$prefix/resty_modules/lualib/?.so;;";
By default the nginx resolver returns both IPv4 and IPv6 addresses for a given domain.
The resty.http module uses the cosocket API.
The cosocket connect method, called with a domain name, selects one random IP address. You were not lucky and it selected an IPv6 address; you can check this by looking into the nginx error.log.
Very likely IPv6 doesn't work on your box.
To disable IPv6 for the nginx resolver, use the directive below within your location:
resolver 8.8.8.8 ipv6=off;
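Applied to the question's config, the fixed location would presumably look like this:

location / {
    # ipv6=off makes the resolver return only IPv4 addresses, so the
    # cosocket connect() cannot pick an unroutable IPv6 address.
    resolver 8.8.8.8 ipv6=off;
    default_type text/html;
    content_by_lua_file ./lua/test.lua;
}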
I've got a case where I need to use a different proxy_pass in Nginx depending on which CIDR block the client's IP address is part of.
So, for example, let's say I have the following CIDRs:
10.50.0.0/16
10.51.0.0/16
10.52.0.0/16
Each of those client ranges needs a different proxy_pass in Nginx. How would I go about doing this? I'm very new to Nginx, so achieving things like this is still a bit confusing.
You could use the geo module. Your configuration would then look something like this:
geo $upstream {
    default      default_upstream;
    10.50.0.0/16 some_upstream;
    10.51.0.0/16 another_upstream;
}

upstream default_upstream {
    server 192.168.0.1:80;
}

upstream some_upstream {
    server 192.168.0.2:80;
}

upstream another_upstream {
    server 192.168.0.3:80;
}

server {
    ...
    location ... {
        ...
        proxy_pass http://$upstream;
    }
    ...
}
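One thing worth knowing: although proxy_pass uses a variable here, no resolver directive is needed, because $upstream always expands to one of the named upstream blocks above. An entry for 10.52.0.0/16 can be added to the geo block in exactly the same way.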
I have apache2 and nginx. I set "trust proxy headers" to true in the configuration, but I still get an internal IP when calling $request->getClientIp(). What am I doing wrong?
If I call getClientIp with the parameter $proxy = true, then I get the correct IP. But there is a configuration option that enables proxy headers; isn't that enough?
Actually this is a known issue, fixed by this merge: https://github.com/symfony/symfony/commit/40599ec0a24e688ef5903e2bd3cfb29b5ab29a18
In short: you always need to pass $proxy = true if you plan to use some kind of reverse proxy. With this parameter set (and trustProxyData() enabled), $this->getClientIp() will return the correct IP from behind a reverse proxy.
Explanation: even when proxy headers are configured, the user's IP arrives in HTTP_X_FORWARDED_FOR or HTTP_CLIENT_IP, while REMOTE_ADDR returns the server's localhost address (most likely 127.0.0.1). $proxy = true makes the method check exactly those headers. Here's the source code for this function:
public function getClientIp($proxy = false)
{
    if ($proxy) {
        if ($this->server->has('HTTP_CLIENT_IP')) {
            return $this->server->get('HTTP_CLIENT_IP');
        } elseif (self::$trustProxy && $this->server->has('HTTP_X_FORWARDED_FOR')) {
            return $this->server->get('HTTP_X_FORWARDED_FOR');
        }
    }

    return $this->server->get('REMOTE_ADDR');
}
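So with this older Symfony API the call is simply (newer Symfony versions drop the $proxy argument in favour of Request::setTrustedProxies()):

// Returns the forwarded client IP instead of the proxy's 127.0.0.1.
$ip = $request->getClientIp(true);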