Crystal-lang serve index.html - http

I'm a bit new to the language, and I want to start hacking away at a very simple HTTP server. My current code looks like this:
require "http/server"
port = 8080
host = "127.0.0.1"
mime = "text/html"
server = HTTP::Server.new(host, port, [
HTTP::ErrorHandler.new,
HTTP::LogHandler.new,
HTTP::StaticFileHandler.new("./public"),
]) do |context|
context.response.content_type = mime
end
puts "Listening at #{host}:#{port}"
server.listen
My goal is to not list the directory, which is what this code currently does. I actually want to serve index.html if it is available under public/, without having to type index.html into the URL bar. Let's assume that index.html does exist in public/. Any pointers to docs that might be useful?

Something like this?
require "http/server"
port = 8080
host = "127.0.0.1"
mime = "text/html"
server = HTTP::Server.new(host, port, [
HTTP::ErrorHandler.new,
HTTP::LogHandler.new,
]) do |context|
req = context.request
if req.method == "GET" && req.path == "/public"
filename = "./public/index.html"
context.response.content_type = "text/html"
context.response.content_length = File.size(filename)
File.open(filename) do |file|
IO.copy(file, context.response)
end
next
end
context.response.content_type = mime
end
puts "Listening at #{host}:#{port}"
server.listen

Related

Dynamic routing with nginx, lua and redis

I am trying to make nginx perform proxying based on the URI with the help of lua and redis.
So far, I am able to successfully proxy a simple URI like '/hello' to the desired target. I was able to achieve this by saving the mappings in a Redis hashmap, something like:
HGETALL "127.0.0.1:8080"
1) "/demo1/test/hello4"
2) "example.com/demo1/test/hello4"
3) "/hello"
4) "example.com/hello"
nginx.conf
worker_processes 2;
error_log logs/error.log info;

events {
    worker_connections 1024;
}

http {
    server {
        listen 8080;

        location / {
            resolver 8.8.4.4; # use Google's open DNS server
            set $target '';
            access_by_lua '
                local http_host = ngx.var.http_host
                if not http_host then
                    ngx.log(ngx.ERR, "no http-host found")
                    return ngx.exit(400)
                end

                local redis = require "resty.redis"
                local red = redis:new()
                red:set_timeout(1000) -- 1 second

                local ok, err = red:connect("127.0.0.1", 6379)
                if not ok then
                    ngx.log(ngx.ERR, "failed to connect to redis: ", err)
                    return ngx.exit(500)
                end

                local fPath, err = red:hget(http_host, ngx.var.uri)
                if not fPath then
                    ngx.log(ngx.ERR, "No fPath: ", err)
                    return ngx.exit(500)
                end

                ngx.var.target = fPath
            ';
            proxy_pass $target;
        }
    }
}
However, I also want to handle dynamic URIs, for example:
user/id/1 -> "example.com/user/id/1",
user/id/2 -> "example.com/user/id/2",
user/id/3 -> "example.com/user/id/3",
and so on....
I am not sure how to create the key/value pairs in Redis, or the Lua logic, to handle the dynamic nature of the ids.
I tried looking but haven't been able to find the right direction or some resource to aid me in figuring this out.
Any help would be really great!
If you want to achieve this in production, I would recommend using a mature API gateway like Apache APISIX or Kong. To implement it yourself, you could store paths with wildcards or Lua patterns in Redis and later match them against the original URI. Applying some simple heuristics would help reduce the range of checking.
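To sketch that idea (purely as an illustration, not a drop-in config): suppose that, alongside the exact paths, the Redis hash also holds Lua-pattern fields such as "^/user/id/(%d+)$" whose value is the target prefix "example.com/user/id/" (this key layout is my assumption). The access-phase code could then try the exact lookup first and fall back to scanning the pattern entries:

local redis = require "resty.redis"
local red = redis:new()
red:set_timeout(1000) -- 1 second

local ok, err = red:connect("127.0.0.1", 6379)
if not ok then
    ngx.log(ngx.ERR, "failed to connect to redis: ", err)
    return ngx.exit(500)
end

local host = ngx.var.http_host
local uri = ngx.var.uri

-- cheap exact-path lookup first (same as the original code)
local target, err = red:hget(host, uri)

if not target or target == ngx.null then
    -- fall back to scanning the pattern entries for this host
    local entries, err = red:hgetall(host)
    if entries and entries ~= ngx.null then
        local map = red:array_to_hash(entries)
        for pattern, prefix in pairs(map) do
            -- only treat anchored entries ("^...") as patterns; plain paths
            -- were already handled by the exact lookup above
            if pattern:sub(1, 1) == "^" then
                local id = string.match(uri, pattern)
                if id then
                    target = prefix .. id -- e.g. "example.com/user/id/" .. "1"
                    break
                end
            end
        end
    end
end

if not target or target == ngx.null then
    return ngx.exit(404)
end

ngx.var.target = target

Putting this in an access_by_lua_file (rather than the inline string) avoids the quoting headaches, and since HGETALL fetches the whole hash on every miss, the scan only stays cheap while the per-host map is small; that is where the simple heuristics mentioned above come in.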

NGINX invalid port in upstream

I am using nginx as a proxy to a nodejs application. I have the same application running multiple times each on a different port. The request is directed to the correct application/port based on host name.
So
test1.domain.com would be proxied to 127.0.0.1:8000
test2.domain.com would be proxied to 127.0.0.1:8001
test3.domain.com would be proxied to 127.0.0.1:8002
When I hard-code "proxy_pass http://127.0.0.1:8000;", everything works fine.
Now I have written an njs script that reads a file in the user's directory to get the port number based on the subdomain. Here is the script.
#inclusion of js file
js_include sites-available/port_assign.js;
js_set $myPort port;
function port(r) {
    var host = r.headersIn.host;
    var subdomain = host.split('.');
    var fs = require('fs');
    var filename = '/home/' + subdomain[0] + '/port';
    var port = fs.readFileSync(filename);
    port.trim();
    return(port);
}
This does read the file and return the port number correctly. I have verified this in the error logs, because I get:
2020/01/21 04:26:46 [error] 2729#2729: *6 invalid port in upstream "127.0.0.1:8001
", client: 96.54.17.234, server: *.foundryserver.com, request: "GET / HTTP/1.1", host: "test1.foundryserver.com"
Now when I use the directive proxy_pass http://127.0.0.1:$myPort; I get an internal server error and the error shown above.
I am not sure what the difference is between the two. I can only think that the variable $myPort has somehow picked up stray characters or something.
There was some extra information in the port variable (the trailing newline visible in the error message). I was able to store the port number in JSON format and parse it in the js. The file now contains {"port":"8000"}.
function port(r) {
    var host = r.headersIn.host;
    var subdomain = host.split('.');
    var fs = require('fs');
    var filename = '/home/' + subdomain[0] + '/myport';
    var jport = fs.readFileSync(filename);
    var port = JSON.parse(jport);
    return(port.port);
}
Doing the JSON parsing removed any unseen characters from the variable.

NGINX read body from proxy_pass response

I have two servers:
NGINX (it exchanges a file id for a file path)
Golang (it accepts a file id and returns its path)
For example: when a browser client makes a request to https://example.com/file?id=123, NGINX should proxy this request to the Golang server https://go.example.com/getpath?file_id=123, which will return this response to NGINX:
{
    data: {
        filePath: "/static/..."
    },
    status: "ok"
}
Then NGINX should take the value of filePath and return the file from that location.
So the question is: how do I read the response (i.e. get filePath) in NGINX?
I assume you are a software developer and have full control over your application, so there is no need to force a square peg into a round hole here.
Various kinds of reverse proxies support ESI (Edge Side Includes), a technology that lets the developer replace parts of the response body with the content of static files or with response bodies from upstream servers.
Nginx has such a technology as well. It is called SSI (Server Side Includes).
location /file {
    ssi on;
    proxy_pass http://go.example.com;
}
Your upstream server can produce a body containing <!--# include file="/path-to-static-files/some-static-file.ext" --> and nginx will replace this in-body directive with the content of that file.
But you mentioned streaming...
That means the files will be of arbitrary sizes, and building the response with SSI would certainly eat precious RAM, so we need a Plan B.
There is a "good enough" method to feed big files to clients without exposing the static location of the file to the client.
You can use nginx's error handler to serve static files based on information supplied by the upstream server.
The upstream server can, for example, send back a 302 redirect with a Location header field containing the real path to the file.
This response does not reach the client; it is fed into the error handler instead.
Here is an example config:
location /file {
    error_page 302 = @service_static_file;
    proxy_intercept_errors on;
    proxy_set_header Host $host;
    proxy_pass http://go.example.com;
}

location @service_static_file {
    root /hidden-files;
    try_files $upstream_http_location 404.html;
}
With this method you will be able to serve files without overloading your system while keeping control over whom you give the file to.
For this to work, your upstream server should respond with status 302 and a typical "Location:" header field, and nginx will use the Location value to find the file under the "new" root for static files. For example, a (hypothetical) upstream reply of "302 Found" with "Location: /videos/123.mp4" never reaches the client; nginx intercepts it and try_files $upstream_http_location serves /hidden-files/videos/123.mp4 instead.
The reason this method is only of the "good enough" kind (instead of perfect) is that it does not support partial requests (i.e. Range: bytes ...).
Looks like you want to make an API call for data to run decisions and logic against. That's not quite what proxying is about.
The core proxying ability of nginx is not designed for what you are looking to do.
Possible workaround: extending nginx...
Nginx + PHP
Your PHP code would do the legwork:
it serves as a client that connects to the Golang server and applies additional logic to the response.
<?php
$response = file_get_contents('https://go.example.com/getpath?file_id='.$_GET["id"]);
preg_match_all("/filePath: \"(.*?)\"/", $response, $filePath);
readfile($filePath[1][0]);
?>
location /getpath {
    try_files /getpath.php;
}
This is just a pseudo-code example to get things rolling.
Some miscellaneous observations / comments:
The Golang response doesn't look like valid JSON; if it is valid JSON, replace preg_match_all with json_decode.
readfile is not super efficient. Consider being creative with a 302 response.
Nginx + Lua
sites-enabled:
lua_package_path "/etc/nginx/conf.d/lib/?.lua;;";

server {
    listen 80 default_server;
    listen [::]:80 default_server;

    location /getfile {
        root /var/www/html;
        resolver 8.8.8.8;
        set $filepath "/index.html";
        access_by_lua_file /etc/nginx/conf.d/getfile.lua;
        try_files $filepath =404;
    }
}
Test if lua is behaving as expected:
getfile.lua (v1)
ngx.var.filepath = "/static/...";
Simplify the Golang response body to just return a plain path, then use it to set filepath:
getfile.lua (v2)
local http = require "resty.http"
local httpc = http.new()
local query_string = ngx.req.get_uri_args()

local res, err = httpc:request_uri('https://go.example.com/getpath?file_id=' .. query_string["id"], {
    method = "GET",
    keepalive_timeout = 60,
    keepalive_pool = 10
})

if res and res.status == ngx.HTTP_OK then
    body = string.gsub(res.body, '[\r\n%z]', '')
    ngx.var.filepath = body;
    ngx.log(ngx.ERR, "[" .. body .. "]");
else
    ngx.log(ngx.ERR, "missing response");
    ngx.exit(504);
end
resty.http
mkdir -p /etc/nginx/conf.d/lib/resty
wget "https://raw.githubusercontent.com/ledgetech/lua-resty-http/master/lib/resty/http_headers.lua" -P /etc/nginx/conf.d/lib/resty
wget "https://raw.githubusercontent.com/ledgetech/lua-resty-http/master/lib/resty/http.lua" -P /etc/nginx/conf.d/lib/resty

Nginx - how to access Client Certificate's Subject Alternative Name (SAN) field

I have an Nginx server which clients make requests to with a Client certificate containing a specific CN and SAN. I want to be able to extract the CN (Common Name) and SAN (Subject Alternative Names) fields of that client cert.
rough example config:
server {
    listen 443 ssl;

    ssl_client_certificate /etc/nginx/certs/client.crt;
    ssl_verify_client on; # 400 if request without valid cert

    location / {
        root /usr/share/nginx/html;
    }

    location /auth_test {
        # do something with the CN and SAN.
        # tried these embedded vars so far, to no avail
        return 200 "
            $ssl_client_s_dn
            $ssl_server_name
            $ssl_client_escaped_cert
            $ssl_client_cert
            $ssl_client_raw_cert";
    }
}
Using the embedded variables exposed by the ngx_http_ssl_module module I can access the DN (Distinguished Name), and therefore the CN etc., but I don't seem to be able to get access to the SAN.
Is there some embedded variable / other module / general Nginx foo I'm missing? I can access the raw cert, so is it possible to decode that manually and extract it?
I'd really rather do this at the Nginx layer than pass the cert down to the application layer and do it there.
Any help much appreciated.
You can extract them with nginx's built-in map, e.g. for the CN:
map $ssl_client_s_dn $ssl_client_s_dn_cn {
    default "";
    ~,CN=(?<CN>[^,]+) $CN;
}
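The resulting $ssl_client_s_dn_cn variable can then be used like any other nginx variable, for example in the return 200 "..." block from the question, or forwarded to an upstream with proxy_set_header (the header name is up to you). Note that this map only parses the subject DN, so it gives you the CN but not the SAN.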
I'm not a lua expert, but here's what I got working:
local openssl = require('openssl')

dnsNames = {}
for k, v in pairs(openssl.x509.read(ngx.var.ssl_client_raw_cert):extensions()) do
    for k1, v1 in pairs(v:info()) do
        if (type(v1) == 'table') then
            for k2, v2 in pairs(v1) do
                if (type(v2) == 'table') then
                    for k3, v3 in pairs(v2) do
                        if (k3 == 'dNSName') then
                            table.insert(dnsNames, v3:toprint())
                        end
                    end
                end
            end
        end
    end
end
ngx.say(table.concat(dnsNames, ':'))
You can do it with OpenResty + Lua-OpenSSL by parsing the raw certificate.
Refer to this: https://github.com/Seb35/nginx-ssl-variables/blob/master/COMPATIBILITY.md#ssl_client_s_dn_x509
Just like this:
local variableName = string.match(require("openssl").x509.read(ngx.var.ssl_client_raw_cert):issuer():oneline(), "/C=([^/]+)")
I had the same problem when trying to retrieve the "subject DN" for an upstream server.
Someone might find the following advice useful: there is access
to such fields as the "subject DN" (and so on) as described at link1. Besides that, I had to put this data into a request header, which I did via 'proxy_set_header' (link2). It was possible without any extra Nginx extensions (no need to rebuild with extra modules; the default modules are enough).
This is an example of how a URI value can be extracted from the client certificate extensions and then forwarded to the upstream server as a header. This is useful when implementing WebID over TLS authentication, for example.
location / {
    proxy_pass http://upstream;

    set_by_lua_block $webid_uri {
        local openssl = require('openssl')
        webIDs = {}
        for k, v in pairs(openssl.x509.read(ngx.var.ssl_client_raw_cert):extensions()) do
            for k1, v1 in pairs(v:info()) do
                if (type(v1) == 'table') then
                    for k2, v2 in pairs(v1) do
                        if (type(v2) == 'table') then
                            for k3, v3 in pairs(v2) do
                                if (k3 == 'uniformResourceIdentifier') then
                                    table.insert(webIDs, v3:data())
                                end
                            end
                        end
                    end
                end
            end
        end
        return webIDs[1]
    }

    proxy_set_header X-WebID-URI $webid_uri;
}
Let me know if it can be improved.

nginx lua redis cookie not setting

I am trying to set a cookie with lua+nginx+redis. This is my idea: set the cookie if it doesn't exist and then save it to Redis.
local redis = require "resty.redis"
local red = redis:new()
local md5 = require "md5"

local ip = ngx.var.remote_addr
local secs = ngx.time()
local uid_key = ip .. secs
local uid = md5.sumhexa(uid_key)
local cookie = ngx.var.cookie_uid
local red_cookie = red:hget("cookie:"..uid, uid)

local ok, err = red:connect("127.0.0.1", 6379)
if not ok then
    ngx.say("failed to connect to Redis: ", err)
    return
end

local args = ngx.req.get_headers()
local date_time = ngx.http_time(secs)

if cookie == nil or cookie ~= red_cookie then
    ngx.header['Set-Cookie'] = "path=/; uid=" .. uid
    local res, err = red:hmset("cookie:".. uid,
        "uid", uid,
        "date_time", date_time,
        "user-agent", args["user-agent"]
    )
    if not res then
        ngx.say("failed to set cookie: ", err)
    end
end
and my nginx conf
...
location /cookie {
    default_type "text/plain";
    lua_code_cache off;
    content_by_lua_file /lua/test.lua;
}
I am not seeing the cookie set, however. I get: [error] 63519#0: *408 attempt to set ngx.header.HEADER after sending out response headers, client: 127.0.0.1, server: localhost, request: "GET /cookie HTTP/1.1", host: "localhost"
I can't seem to figure out why this doesn't work. I also thought I could set cookies with pure nginx. I need to track users who visit my page. Any thoughts?
Thanks!!
Update
I revised my idea to make redis requests from an upstream access point. Now I keep getting an invalid reply from redis.parser.
local redis = require "resty.redis"
local md5 = require "md5"
local parser = require "redis.parser"

local ip = ngx.var.remote_addr
local secs = ngx.time()
local uid_key = ip .. secs
local uid = md5.sumhexa(uid_key)
local args = ngx.req.get_headers()
local date_time = ngx.http_time(secs)

local test_cookie = ngx.location.capture("/redis_check_cookie", {args = {cookie_uid="cookie:"..uid}});
if test_cookie.status ~= 200 or not test_cookie.body then
    ngx.log(ngx.ERR, "failed to query redis")
    ngx.exit(500)
end

local reqs = {
    {"hmset", "cookie:"..uid, "path=/"}
}

local raw_reqs = {}
for i, req in ipairs(reqs) do
    table.insert(raw_reqs, parser.build_query(req))
end

local res = ngx.location.capture("/redis_set_cookie?" .. #reqs,
    { body = table.concat(raw_reqs, "") })

local replies = parser.parse_replies(res.body, #reqs)
for i, reply in ipairs(replies) do
    ngx.say(reply[1])
end
and my nginx conf now has:
upstream my_redis {
    server 127.0.0.1:6379;
    keepalive 1024 single;
}
and
location /redis_check_cookie {
    internal;
    set_unescape_uri $cookie_uid $arg_cookie_uid;
    redis2_query hexists $cookie_uid uid;
    redis2_pass my_redis;
}

location /redis_set_cookie {
    internal;
    redis2_raw_queries $args $echo_request_body;
    redis2_pass my_redis;
}
Maybe you forgot to show some other things.
I don't have an OpenResty environment, but our environments are similar.
The code below is my test, and it runs perfectly.
This is the nginx.conf:
location /cookie {
    default_type "text/plain";
    lua_code_cache off;
    content_by_lua_file test.lua;
}
This is the Lua script:
local redis = require "redis"
local red = redis.connect('192.168.1.51', 6379)

local ip = ngx.var.remote_addr
local secs = ngx.time()
local uid_key = ip .. secs
local uid = (uid_key)
local cookie = ngx.var.cookie_uid
local red_cookie = red:hget("cookie:"..uid, uid)

local args = ngx.req.get_headers()
local date_time = ngx.http_time(secs)

if cookie == nil or cookie ~= red_cookie then
    ngx.header['Set-Cookie'] = "path=/; uid=" .. uid
    local res, err = red:hmset("cookie:".. uid,
        "uid", uid, "date_time", date_time,
        "user-agent", args["user-agent"])
    if not res then
        ngx.say("failed to set cookie: ", err)
    end
end
Can you show more of your code?
Friends, yesterday I used my own environment and changed some of your code, and the program ran OK.
But you say your code is still not working.
Just now I also used resty.redis, and the code still runs OK.
So I used your environment and your code exactly as in your post, and the result is OK.
I can't provide any more help.
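For reference, here is a minimal content_by_lua sketch of the flow from the original question (the Redis address, cookie name and the md5 module are taken from the question's code; this is an illustration under those assumptions, not a verified fix). The two points it illustrates are that resty.redis needs connect() before any command, and that ngx.header (including Set-Cookie) must be set before the first ngx.say/ngx.print, because the first body output sends the response headers:

local redis = require "resty.redis"
local md5 = require "md5"

local red = redis:new()
red:set_timeout(1000)

-- connect before issuing any Redis commands
local ok, err = red:connect("127.0.0.1", 6379)
if not ok then
    ngx.log(ngx.ERR, "failed to connect to Redis: ", err)
    return ngx.exit(500)
end

local uid = md5.sumhexa(ngx.var.remote_addr .. ngx.time())
local cookie = ngx.var.cookie_uid
local red_cookie = red:hget("cookie:" .. uid, "uid")

if cookie == nil or cookie ~= red_cookie then
    -- headers (including Set-Cookie) must be set before any ngx.say/ngx.print,
    -- otherwise nginx logs "attempt to set ngx.header.HEADER after sending out
    -- response headers"; note the cookie is name=value first, then attributes
    ngx.header["Set-Cookie"] = "uid=" .. uid .. "; path=/"

    local res, err = red:hmset("cookie:" .. uid,
        "uid", uid,
        "date_time", ngx.http_time(ngx.time()),
        "user-agent", ngx.req.get_headers()["user-agent"])
    if not res then
        ngx.log(ngx.ERR, "failed to save cookie: ", err)
    end
end

ngx.say("ok") -- body output comes last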
