I configured the server as below:
Coturn-4.5.1.1 'dan Eider'
tls-listening-port=5349
fingerprint
use-auth-secret
server-name=turn.***.com
realm=turn.****.com
verbose
cert=/etc/coturn/certs/turn.***.com.fullchain.pem
pkey=/etc/coturn/certs/turn.***.com.privkey.pem
dh-file=/etc/coturn/certs/ssl-dhparams.pem
mobility
min-port=49152
max-port=65535
Nginx (the problem is not Nginx itself, because it persists even when I don't use Nginx):
stream {
    ...
    ...
    error_log /var/log/nginx/str.error.log;

    upstream turnTls {
        server turn_tls_IP:5349;
    }

    map $ssl_preread_server_name $upstream {
        ....
        ....
        ...
        turn.****.com turnTls;
    }

    server {
        error_log /var/log/nginx/xxx.err.log;
        listen 443;
        listen [::]:443;
        proxy_pass $upstream;
        ssl_preread on;
        proxy_buffer_size 10m;
    }
}
When I access the server from Android phones using the turns protocol, like:
{
    'urls': ['turns:turn.***.com:443?transport=tcp'],
    'username': $username,
    'credential': $password,
}
The server cannot get the user credentials, and the server log is as follows:
7: session 002000000000000001: closed (2nd stage), user <> realm <turn.****.com> origin <>, local ****:5349, remote ***:53712, reason: TLS/TCP socket buffer operation error (callback)
As you can see, the user information (user <>) is empty, and I got:
reason: TLS/TCP socket buffer operation error (callback)
With the Trickle ICE tool it sometimes works:
0.783 Done
0.782 relay 2831610 udp ***** 65082 0 | 31519 | 255 turns:turn.***.com:443?transport=tcp tls
Coturn log
session 000000000000000025: new, realm=<turn.****.com>, username=<1674486335:user_80_156>, lifetime=600, cipher=ECDHE-RSA-AES256-GCM-SHA384, method=TLSv1.2
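For reference, with use-auth-secret the credentials are ephemeral (TURN REST API style): the username is an expiry-timestamp:user-id pair, exactly as in the log line above, and the password is the base64-encoded HMAC-SHA1 of that username under the shared secret. A minimal Lua sketch of how such credentials are generated (the secret and user id are placeholder values; this must run inside OpenResty for the ngx.* helpers):

-- Sketch: generate ephemeral credentials for coturn's use-auth-secret
-- (placeholder values; the secret must match static-auth-secret in the
-- coturn configuration)
local secret = "my-static-auth-secret"
local ttl = 600  -- lifetime in seconds, matching lifetime=600 in the log
local username = tostring(os.time() + ttl) .. ":user_80_156"
-- ngx.hmac_sha1 and ngx.encode_base64 are OpenResty built-ins
local credential = ngx.encode_base64(ngx.hmac_sha1(secret, username))
-- hand { username = username, credential = credential } to the client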
I tried the following, but the problem was not solved:
Disabled some TLS protocols:
no-tlsv1
no-tlsv1_1
no-tlsv1_2
no-tlsv3
...
I copied the Let's Encrypt keys to /etc/coturn, chmodded to 600 and owned by turnserver:turnserver.
I stopped NGINX and contacted the TURN server directly via TLS on port 443.
With Nginx, I terminated TLS in the server block and then forwarded the decrypted traffic to the TURN server:
stream {
    server {
        listen 443 ssl;
        ssl_certificate ... fullchain.pem;
        ssl_certificate_key ... privkey.pem;
        ssl_dhparam ... dhparam.pem;
        proxy_ssl off;
        proxy_pass turn_Ip_NoTLS:3478;
    }
}
I tested on many Android devices with both the ISRG Root X1 and DST Root CA X3 chains.
Related
I'm trying to set up multiple cluster checks in OpenShift using the free ngx_http_upstream_check_module:
https://github.com/alibaba/tengine/blob/master/docs/modules/ngx_http_upstream_check_module.md
The HTTP status check request only succeeds when you explicitly specify the Host header in the check_http_send section.
Config example:
upstream FPK {
    server Host1:80;
    server Host2:80;
    check interval=3000 rise=2 fall=5 timeout=5000 type=http;
    check_http_send "GET /healthz/ready HTTP/1.1\r\nHOST:Host1\r\n\r\n";
    check_http_expect_alive http_2xx http_3xx;
}

server {
    listen 8090;
    location /healthz/ready {
        proxy_pass http://FPK;
    }
    location /nstatus {
        check_status;
        access_log off;
        allow all;
    }
}
In this example the check passes, but only for one host. I tried to set the host through a variable, but it seems that this module does not support variables, and I get this error:
2022/09/12 09:17:13 [error] 4123691#0: check protocol http error with peer:Host:80
How can I pass multiple hosts in the check_http_send section? Thanks in advance for any help.
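A possible workaround sketch, untested, assuming the module's check_* directives are strictly per-upstream: give each backend its own upstream block so each check can carry a matching Host header (at the cost of splitting the pool, so traffic then has to be distributed across the upstreams by other means):

upstream FPK_host1 {
    server Host1:80;
    check interval=3000 rise=2 fall=5 timeout=5000 type=http;
    check_http_send "GET /healthz/ready HTTP/1.1\r\nHost: Host1\r\n\r\n";
    check_http_expect_alive http_2xx http_3xx;
}

upstream FPK_host2 {
    server Host2:80;
    check interval=3000 rise=2 fall=5 timeout=5000 type=http;
    check_http_send "GET /healthz/ready HTTP/1.1\r\nHost: Host2\r\n\r\n";
    check_http_expect_alive http_2xx http_3xx;
}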
I am using OpenResty to generate SSL certificates dynamically.
I am trying to find out the user-agent of the request before ssl_certificate_by_lua_block runs, and decide if I want to continue with the request or not.
I found out that the ssl_client_hello_by_lua_block directive runs before ssl_certificate_by_lua_block, but if I try to execute ngx.req.get_headers()["user-agent"] inside ssl_client_hello_by_lua_block, I get the following error:
2022/06/13 09:20:58 [error] 31918#31918: *18 lua entry thread aborted: runtime error: ssl_client_hello_by_lua:6: API disabled in the current context
stack traceback:
coroutine 0:
[C]: in function 'error'
/usr/local/openresty/lualib/resty/core/request.lua:140: in function 'get_headers'
ssl_client_hello_by_lua:6: in main chunk, context: ssl_client_hello_by_lua*, client: 1.2.3.4, server: 0.0.0.0:443
I tried rewrite_by_lua_block, but it runs after ssl_certificate_by_lua_block.
Is there any directive that can let me access ngx.req.get_headers()["user-agent"] and that runs before ssl_certificate_by_lua_block as well?
My Nginx conf for reference.
nginx.conf
# HTTPS server
server {
    listen 443 ssl;

    rewrite_by_lua_block {
        local user_agent = ngx.req.get_headers()["user-agent"]
        ngx.log(ngx.ERR, "rewrite_by_lua_block user_agent -- > ", user_agent)
    }

    ssl_client_hello_by_lua_block {
        ngx.log(ngx.ERR, "I am from ssl_client_hello_by_lua_block")
        local ssl_clt = require "ngx.ssl.clienthello"
        local host, err = ssl_clt.get_client_hello_server_name()
        ngx.log(ngx.ERR, "hosts -- > ", host)
        -- local user_agent = ngx.req.get_headers()["user-agent"]
        -- ngx.log(ngx.ERR, "user_agent -- > ", user_agent)
    }

    ssl_certificate_by_lua_block {
        auto_ssl:ssl_certificate()
    }

    ssl_certificate /etc/ssl/resty-auto-ssl-fallback.crt;
    ssl_certificate_key /etc/ssl/resty-auto-ssl-fallback.key;

    location / {
        proxy_pass http://backend_proxy$request_uri;
    }
}
If someone is facing the same issue:
Here is the OpenResty email group thread that helped me.
I was not thinking correctly. The certificate negotiation happens before the client sends any user-agent data (the User-Agent header arrives with the HTTP request, only after the TLS handshake completes). So you can't avoid issuing the certificate in the process. Hard luck.
Once the handshake and the Client/Server Hello have happened, the server has the user-agent, and you can do the blocking in access_by_lua_block.
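A minimal sketch of that blocking approach, to be placed in the server block from the question (the blocked user-agent pattern here is a made-up example):

access_by_lua_block {
    -- runs after the TLS handshake, once request headers are available
    local user_agent = ngx.req.get_headers()["user-agent"] or ""
    -- block clients whose user-agent matches a (hypothetical) deny pattern
    if user_agent:find("BadBot", 1, true) then
        return ngx.exit(ngx.HTTP_FORBIDDEN)
    end
}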
I set up Nginx Plus to load-balance UDP syslog traffic. Here's a snippet from nginx.conf:
stream {
    upstream syslog_standard {
        zone syslog_zone 64k;
        server cp01.woolford.io:1514 max_fails=1 fail_timeout=10s;
        server cp02.woolford.io:1514 max_fails=1 fail_timeout=10s;
        server cp03.woolford.io:1514 max_fails=1 fail_timeout=10s;
    }
    server {
        listen 514 udp;
        proxy_pass syslog_standard;
        proxy_bind $remote_addr transparent;
        health_check udp;
    }
}
I was a little surprised to hear that NGINX Plus could perform health checks on UDP since UDP is, by design, unreliable. Since there is no acknowledgment in UDP, the messages effectively go into a black hole.
I'm trying to set up a somewhat fault-tolerant and scalable syslog ingestion pipeline. The loss of a node should be detected, by a health check, and be temporarily removed from the list of available servers.
This didn't work, despite the UDP health check. I think the UDP health check only works for services that respond to the caller (e.g. DNS). Since syslog doesn't respond, there's no way to check for errors, e.g. using match.
The process that's ingesting the syslog messages listens on port 1514 and has a REST interface on port 8073:
If the ingest process is healthy, a GET request to /connectors/syslog/status on port 8073 returns:
{
    "name": "syslog",
    "connector": {
        "state": "RUNNING",
        "worker_id": "10.0.1.41:8073"
    },
    "tasks": [
        {
            "id": 0,
            "state": "RUNNING",
            "worker_id": "10.0.1.41:8073"
        }
    ],
    "type": "source"
}
I'd like to create a custom check to see that ingest is running. Is that possible with NGINX Plus? Can we check the health on a completely different port?
This is what I did:
stream {
    upstream syslog_standard {
        zone syslog_zone 64k;
        server cp01.woolford.io:1514 max_fails=1 fail_timeout=10s;
        server cp02.woolford.io:1514 max_fails=1 fail_timeout=10s;
        server cp03.woolford.io:1514 max_fails=1 fail_timeout=10s;
    }
    match syslog_ingest_test {
        send "GET /connectors/syslog/status HTTP/1.0\r\nHost: localhost\r\n\r\n";
        expect ~* "RUNNING";
    }
    server {
        listen 514 udp;
        proxy_pass syslog_standard;
        proxy_bind $remote_addr transparent;
        health_check match=syslog_ingest_test port=8073;
    }
}
The match=syslog_ingest_test health check performs a GET request against port 8073 (i.e. the port that serves the health-check endpoint of the ingest process) and confirms that it's running.
I can toggle the service off/on and NGINX detects it and reacts accordingly.
I'm trying to make an HTTP request using lua-resty-http.
I created a simple GET API on https://requestb.in
I can make a request using the address: https://requestb.in/snf2ltsn
However, when I try to do this in nginx, I get the error "no route to host".
My nginx.conf file is:
worker_processes 1;
error_log logs/error.log;

events {
    worker_connections 1024;
}

http {
    lua_package_path "$prefix/lua/?.lua;;";
    server {
        listen 8080;
        location / {
            resolver 8.8.8.8;
            default_type text/html;
            lua_code_cache off; # enables live reload for development
            content_by_lua_file ./lua/test.lua;
        }
    }
}
and my Lua code is
local http = require "resty.http"
local httpc = http.new()

--local res, err = httpc:request_uri("https://requestb.in/snf2ltsn", {ssl_verify = false, method = "GET" })
local res, err = httpc:request_uri("https://requestb.in/snf2ltsn", {
    method = "GET",
    headers = {
        ["Content-Type"] = "application/x-www-form-urlencoded",
    }
})
How can I fix this issue? Or is there any other suggested way to make HTTP requests from nginx? Any clue?
PS: There is a commented-out section in my Lua code. I also tried to make the request using that code, but nothing happened.
Change the package_path like:
lua_package_path "$prefix/resty_modules/lualib/?.lua;;";
lua_package_cpath "$prefix/resty_modules/lualib/?.so;;";
By default the nginx resolver returns both IPv4 and IPv6 addresses for a given domain.
The resty.http module uses the cosocket API, and the cosocket connect method, when called with a domain name, selects one of the returned IP addresses at random. You were unlucky and it selected an IPv6 address; you can check this by looking into the nginx error.log.
Very likely IPv6 doesn't work on your box.
To disable IPv6 for the nginx resolver, use the directive below within your location:
resolver 8.8.8.8 ipv6=off;
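Putting it together with the location from the question, a sketch that also logs the error instead of failing silently (same URL and headers as above):

location / {
    resolver 8.8.8.8 ipv6=off;
    default_type text/html;
    content_by_lua_block {
        local http = require "resty.http"
        local httpc = http.new()
        local res, err = httpc:request_uri("https://requestb.in/snf2ltsn", {
            method = "GET",
            headers = {
                ["Content-Type"] = "application/x-www-form-urlencoded",
            },
        })
        if not res then
            -- surfaces errors such as "no route to host" in error.log
            ngx.log(ngx.ERR, "request failed: ", err)
            return ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
        end
        ngx.say(res.body)
    }
}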
I'm trying to serve two (OpenResty) Lua web applications as virtual hosts from NGINX, which both require their own unique lua_package_path, but I am having a hard time getting the configuration right.
# Failing example.conf
http {
    lua_package_path = "/path/to/app/?.lua;;"
    server {
        listen 80;
        server_name example.org;
    }
}

http {
    lua_package_path = "/path/to/dev_app/?.lua;;"
    server {
        listen 80;
        server_name dev.example.org;
    }
}
If you define the http twice (one for each host), you will receive this error: [emerg] "http" directive is duplicate in example.conf
If you define the lua_package_path inside the server block, you will receive this error: [emerg] "lua_package_path" directive is not allowed here in example.conf
If you define the lua_package_path twice in a http block (which does not make any sense anyway), you will receive this error: [emerg] "lua_package_path" directive is duplicate in example.conf
What is the best practice for serving multiple (OpenResty) Lua applications, each with their own lua_package_path, as virtual hosts on the same IP and port?
I faced this issue several months ago.
I do not recommend using debug and release projects on the same server. For instance, running one nginx application for both debug and release may lead to unexpected behaviour.
But, nevertheless, you can:
Set package.path = './mylib/?.lua;' .. package.path inside the Lua script.
Set up your own local DEBUG = false flag and manage it inside the app.
Obviously, use another machine for debugging. Imo, the best solution.
Or execute a different my.release.lua or my.debug.lua file:
http {
    lua_package_path "./lua/?.lua;/etc/nginx/lua/?.lua;;";
    server {
        listen 80;
        server_name dev.example.org;
        lua_code_cache off;
        location / {
            default_type text/html;
            content_by_lua_file './lua/my.debug.lua';
        }
    }
    server {
        listen 80;
        server_name example.org;
        location / {
            default_type text/html;
            content_by_lua_file './lua/my.release.lua';
        }
    }
}
Fixed it by removing the lua_package_path from the NGINX configuration (since the OpenResty bundle already takes care of loading packages) and pointing my content_by_lua_file to the absolute path of my app: /var/www/app/app.lua
# example.conf
http {
    server {
        listen 80;
        server_name example.org;
        location / {
            content_by_lua_file '/var/www/app/app.lua';
        }
    }
    server {
        listen 80;
        server_name dev.example.org;
        location / {
            content_by_lua_file '/var/www/app_dev/app.lua';
        }
    }
}
After that I included this at the top of my app.lua file:
-- app.lua

-- Get the current path of app.lua
local function script_path()
    local str = debug.getinfo(2, "S").source:sub(2)
    return str:match("(.*/)")
end

-- Add the current path to the package path
package.path = script_path() .. '?.lua;' .. package.path

-- Load the config.lua package
local config = require("config")

-- Use the config
config.env()['redis']['host']
...
This allows me to read config.lua from the same directory as my app.lua:
-- config.lua
module('config', package.seeall)

function env()
    return {
        env = "development",
        redis = {
            host = "127.0.0.1",
            port = "6379"
        }
    }
end
Using this I can now use multiple virtual hosts with their own package paths.
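One side note: module('config', package.seeall) is deprecated in modern Lua/LuaJIT. An equivalent config.lua in the return-a-table style (a sketch; the require("config") call above keeps working unchanged) would be:

-- config.lua, module-free style
local M = {}

function M.env()
    return {
        env = "development",
        redis = {
            host = "127.0.0.1",
            port = "6379"
        }
    }
end

return M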
@Vyacheslav Thank you for the pointer to package.path = './mylib/?.lua;' .. package.path! That was really helpful! Unfortunately, it also kept using the NGINX conf root instead of my application root, even with the . prepended to the path.