I set up NGINX Plus to load-balance UDP syslog traffic. Here's a snippet from nginx.conf:
stream {
    upstream syslog_standard {
        zone syslog_zone 64k;
        server cp01.woolford.io:1514 max_fails=1 fail_timeout=10s;
        server cp02.woolford.io:1514 max_fails=1 fail_timeout=10s;
        server cp03.woolford.io:1514 max_fails=1 fail_timeout=10s;
    }
    server {
        listen 514 udp;
        proxy_pass syslog_standard;
        proxy_bind $remote_addr transparent;
        health_check udp;
    }
}
I was a little surprised to hear that NGINX Plus could perform health checks on UDP, since UDP is, by design, unreliable: there is no acknowledgment, so the messages effectively go into a black hole.
I'm trying to set up a somewhat fault-tolerant and scalable syslog ingestion pipeline. The loss of a node should be detected by a health check, and that node should be temporarily removed from the list of available servers.
This didn't work, despite the UDP health check. I think the UDP health check only works for services that respond to the caller (e.g. DNS). Since syslog doesn't respond, there's no way to check for errors, e.g. using match.
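To illustrate, a stream match block only helps when the upstream actually answers the probe. A hypothetical sketch (the "ping"/"pong" payloads are invented here, not taken from any real protocol), which would not work for plain syslog precisely because syslog never replies:

match udp_echo_test {
    # only meaningful if the service sends something back to the probe
    send "ping";
    expect ~* "pong";
}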
The process that's ingesting the syslog messages listens on port 1514 and has a REST interface on port 8073:
If the ingest process is healthy, a GET request to /connectors/syslog/status on port 8073 returns:
{
    "name": "syslog",
    "connector": {
        "state": "RUNNING",
        "worker_id": "10.0.1.41:8073"
    },
    "tasks": [
        {
            "id": 0,
            "state": "RUNNING",
            "worker_id": "10.0.1.41:8073"
        }
    ],
    "type": "source"
}
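For reference, this is the request the health check needs to reproduce, and it can be tried by hand first (hostname taken from the upstream block above; any of the three nodes should answer if its ingest process is up):

curl -s http://cp01.woolford.io:8073/connectors/syslog/status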
I'd like to create a custom check to see that ingest is running. Is that possible with NGINX Plus? Can we check the health on a completely different port?
This is what I did:
stream {
    upstream syslog_standard {
        zone syslog_zone 64k;
        server cp01.woolford.io:1514 max_fails=1 fail_timeout=10s;
        server cp02.woolford.io:1514 max_fails=1 fail_timeout=10s;
        server cp03.woolford.io:1514 max_fails=1 fail_timeout=10s;
    }

    match syslog_ingest_test {
        send "GET /connectors/syslog/status HTTP/1.0\r\nHost: localhost\r\n\r\n";
        expect ~* "RUNNING";
    }

    server {
        listen 514 udp;
        proxy_pass syslog_standard;
        proxy_bind $remote_addr transparent;
        health_check match=syslog_ingest_test port=8073;
    }
}
The match=syslog_ingest_test health check performs a GET request against port 8073 (i.e. the port that serves the health-check endpoint of the ingest process) and confirms that the connector is RUNNING.
I can toggle the service off/on and NGINX detects it and reacts accordingly.
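A quick way to confirm which peers NGINX has marked down, assuming the NGINX Plus API module is also exposed (the port and API version here are assumptions; adjust to your setup):

curl -s http://localhost:8080/api/6/stream/upstreams/syslog_standard
# peers failing the match are reported with "state": "unhealthy"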
Related
I configured the server, Coturn 4.5.1.1 'dan Eider', as below:
tls-listening-port=5349
fingerprint
use-auth-secret
server-name=turn.***.com
realm=turn.****.com
verbose
cert=/etc/coturn/certs/turn.***.com.fullchain.pem
pkey=/etc/coturn/certs/turn.***.com.privkey.pem
dh-file=/etc/coturn/certs/ssl-dhparams.pem
mobility
min-port=49152
max-port=65535
Nginx (the problem is not Nginx, because the problem persists even when I don't use Nginx):
stream {
    ...
    ...
    error_log /var/log/nginx/str.error.log;

    upstream turnTls {
        server turn_tls_IP:5349;
    }

    map $ssl_preread_server_name $upstream {
        ....
        ....
        ...
        turn.****.com turnTls;
    }

    server {
        error_log /var/log/nginx/xxx.err.log;
        listen 443;
        listen [::]:443;
        proxy_pass $upstream;
        ssl_preread on;
        proxy_buffer_size 10m;
    }
}
When I access the server from Android phones using the turns protocol, like this:
{
    'urls': ['turns:turn.***.com:443?transport=tcp'],
    'username': $username,
    'credential': $password,
}
The server cannot get the user credentials, and the server log is as follows:
7: session 002000000000000001: closed (2nd stage), user <> realm <turn.****.com> origin <>, local ****:5349, remote ***:53712, reason: TLS/TCP socket buffer operation error (callback)
As you can see, the user information (user <>) is empty, and I got:
reason: TLS/TCP socket buffer operation error (callback)
With the Trickle ICE tool it sometimes works:
0.783 Done
0.782 relay 2831610 udp ***** 65082 0 | 31519 | 255 turns:turn.***.com:443?transport=tcp tls
Coturn log:
session 000000000000000025: new, realm=<turn.****.com>, username=<1674486335:user_80_156>, lifetime=600, cipher=ECDHE-RSA-AES256-GCM-SHA384, method=TLSv1.2
I did the following, but the problem was not solved:
disabled some TLS protocols:
no-tlsv1
no-tlsv1_1
no-tlsv1_2
no-tlsv3
...
I copied the Let's Encrypt keys to /etc/coturn, chmodded 600 and owned by turnserver:turnserver.
I stopped NGINX and contacted the TURN server directly via TLS on port 443.
With Nginx, I terminated TLS in the server block and then forwarded the plain connection to the TURN server:
stream {
    server {
        listen 443 ssl;
        ssl_certificate ... fullchain.pem;
        ssl_certificate_key ... privkey.pem;
        ssl_dhparam ... dhparam.pem;
        proxy_ssl off;
        proxy_pass turn_Ip_NoTLS:3478;
    }
}
I tested on many Android devices with ISRG Root X1 and DST Root CA X3.
I have the following nginx.conf file:
events {}

http {
    # ...
    # application version 1a
    upstream version_1a {
        server localhost:8090;
    }

    # application version 1b
    upstream version_1b {
        server localhost:8091;
    }

    split_clients "${arg_token}" $appversion {
        50% version_1a;
        50% version_1b;
    }

    server {
        # ...
        listen 7080;

        location / {
            proxy_set_header Host $host;
            proxy_pass http://$appversion;
        }
    }
}
I have two Node.js servers listening on ports 8090 and 8091, and I am hitting the URL http://localhost:7080. My expectation is that Nginx will randomly split the traffic between the version_1a and version_1b upstreams, but all the traffic is going to version_1a. Any insight into why this might be happening?
(I want to have this configuration for the canary traffic)
Validate that the variable you are using to split the traffic is actually set. split_clients hashes the variable's value, so that value also needs to be reasonably uniformly distributed, otherwise the traffic will not be split evenly. In your config the split is on ${arg_token}: every request that arrives without a token query parameter hashes the same empty string and therefore always lands in the same upstream.
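For example, against the config above (the token values are made up for illustration):

# no ?token= present: every request hashes the empty string and hits the same upstream
curl "http://localhost:7080/"

# distinct token values are hashed into the two buckets (~50/50 over many values)
curl "http://localhost:7080/?token=alice"
curl "http://localhost:7080/?token=bob"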
If a user request does not have a particular set of headers, then the reverse proxy should route it to one backend server; if the request does have those headers, then it must go to a different server.
Is this possible in NGINX, and how do we do it?
Let's say that you're using x-backend-pool as your request header. You can use the following NGINX module to get what you want: http://nginx.org/en/docs/http/ngx_http_map_module.html#map
The map directive allows you to set variables based on the values of other variables. I've provided an example for you below:
upstream hostdefault {
    server 127.0.0.1:8080;
}

upstream hosta {
    server 127.0.0.1:8081;
}

upstream hostb {
    server 127.0.0.1:8082;
}

map $http_x_backend_pool $backend_pool {
    default "hostdefault";
    a       "hosta";
    b       "hostb";
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://$backend_pool;
    }
}
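A quick way to exercise it (example.com and the pool names are just the placeholders from the config above):

# no x-backend-pool header: the map falls through to "default" and the request hits hostdefault
curl http://example.com/

# with the header set, the request is routed to the matching pool
curl -H "x-backend-pool: a" http://example.com/
curl -H "x-backend-pool: b" http://example.com/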
I have multiple upstreams with the same two servers on different ports for different apps, but I need requests to stick to the same server across all of them.
Example:
upstream APP {
    ip_hash;
    server 10.10.10.1:1111;
    server 10.10.10.2:1111;
}

upstream APP_HTTP {
    ip_hash;
    server 10.10.10.1:2222;
    server 10.10.10.2:2222;
}

upstream APP_WS {
    ip_hash;
    server 10.10.10.1:3333;
    server 10.10.10.2:3333;
}
....
location /APP {
    proxy_pass http://APP;
}

location /APP_HTTP {
    proxy_pass http://APP_HTTP;
}

location /APP_WS {
    proxy_pass http://APP_WS;
}
So if a user is sent to server 10.10.10.1 at the APP entry point, I need to guarantee that requests for APP_HTTP and APP_WS also go to 10.10.10.1.
Is it possible? How?
ip_hash doesn't seem to work the way I would expect.
Thanks
Best regards
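A sketch of one possible direction (this is an assumption, not something from the original post): use the hash directive with the client address as the key, so the pick depends only on the key and the server's position in the list; upstream blocks that list the servers in the same order and with the same weights should then choose the same machine:

upstream APP {
    # keep the server order identical in every block so the same key maps to the same host
    hash $remote_addr;
    server 10.10.10.1:1111;
    server 10.10.10.2:1111;
}

upstream APP_HTTP {
    hash $remote_addr;
    server 10.10.10.1:2222;
    server 10.10.10.2:2222;
}

upstream APP_WS {
    hash $remote_addr;
    server 10.10.10.1:3333;
    server 10.10.10.2:3333;
}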
I have an upstream list in my Nginx config, and I would like to proxy each request to two servers instead of one.
For example, if I have IP1, IP2 and IP3 in my upstream list and I receive a request on /process, I want to send this request to two of the three servers available in the upstream list (IP1 and IP2, for instance).
Thanks! :)
Here's how I think your config could look; you can create multiple upstreams:
upstream main_upstream {
    server IP1;
    server IP2;
    server IP3;
}

upstream process_upstream {
    server IP2;
    server IP3;
}

server {
    location /process {
        proxy_pass http://process_upstream;
    }

    location / {
        proxy_pass http://main_upstream;
    }
}