I need to configure MQTT load balancing in nginx, and I have tried the following configuration.
I added one .conf file at /usr/local/etc/nginx/mqtt.conf, and mqtt.conf contains:
log_format mqtt '$remote_addr [$time_local] $protocol $status $bytes_received '
                '$bytes_sent $upstream_addr';
upstream hive_mq {
    server 127.0.0.1:18831; #node1
    server 127.0.0.1:18832; #node2
    server 127.0.0.1:18833; #node3
    zone tcp_mem 64k;
}
match mqtt_conn {
    # Send CONNECT packet with client ID "nginx health check"
    send \x10\x20\x00\x06\x4d\x51\x49\x73\x64\x70\x03\x02\x00\x3c\x00\x12\x6e\x67\x69\x6e\x78\x20\x68\x65\x61\x6c\x74\x68\x20\x63\x68\x65\x63\x6b;
    expect \x20\x02\x00\x00; # Entire payload of CONNACK packet
}
server {
    listen 1883;
    server_name localhost;
    proxy_pass hive_mq;
    proxy_connect_timeout 1s;
    health_check match=mqtt_conn;
    access_log /var/log/nginx/mqtt_access.log mqtt;
    error_log /var/log/nginx/in.log; # Health check notifications
    #access_log /var/log/nginx/access.log;
    #error_log /var/log/nginx/error.log; # Health check notifications
}
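For reference, here is a byte-by-byte reading of the health-check packets above (my own annotation, assuming MQTT 3.1; note also that the stream-module match and health_check directives are NGINX Plus features):
# CONNECT: \x10 = packet type CONNECT, \x20 = remaining length 32,
#          \x00\x06 + "MQIsdp" = protocol name, \x03 = protocol version 3,
#          \x02 = clean-session flag, \x00\x3c = keep-alive 60s,
#          \x00\x12 = client-ID length 18, then "nginx health check"
# CONNACK: \x20 = packet type CONNACK, \x02 = remaining length 2,
#          \x00 = ack flags, \x00 = return code 0 (connection accepted)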
I have also added the following to nginx.conf:
worker_processes 1;
events {
    worker_connections 1024;
}
stream {
    include stream_conf.d/*.conf;
}
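One thing worth double-checking here: include paths are resolved relative to the nginx configuration prefix, so stream_conf.d/*.conf is only picked up if such a directory exists under that prefix. A minimal sketch that instead references the file exactly where it was placed above (adjust to your actual layout):
stream {
    # absolute path to the file created earlier; avoids any
    # ambiguity about the configuration prefix
    include /usr/local/etc/nginx/mqtt.conf;
}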
and I have tried to publish a message from mosquitto like this:
mosquitto_pub -d -h localhost -p 1883 -t "topic/test" -m "test123" -i "thing001"
but I am not able to connect to nginx. Can anyone help me configure this?
Thanks in advance :)
I'm trying to reverse proxy to another endpoint on /alertmanager but it fails to connect. Weirdly enough, I'm able to connect to the endpoint directly from inside the pod running nginx.
A quick overview of my application architecture is this:
nginx ingress on cluster -> nginx load balancer -> <many services on different endpoints>
This is a minimized nginx configuration that replicates the issue:
worker_processes 5; ## Default: 1
error_log /dev/stderr;
pid /tmp/nginx.pid;
worker_rlimit_nofile 8192;
events {}
http {
    client_body_temp_path /tmp/client_temp;
    proxy_temp_path /tmp/proxy_temp_path;
    fastcgi_temp_path /tmp/fastcgi_temp;
    uwsgi_temp_path /tmp/uwsgi_temp;
    scgi_temp_path /tmp/scgi_temp;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] $status '
                    '"$request" $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /dev/stderr main;
    sendfile on;
    tcp_nopush on;
    resolver kube-dns.kube-system.svc.cluster.local;
    server {
        listen 8080;
        proxy_set_header X-Scope-OrgID 0;
        location = / {
            return 200 'OK';
            auth_basic off;
        }
        location /alertmanager {
            proxy_pass http://mimir-distributed-alertmanager.mimir.svc.cluster.local:8080$request_uri;
        }
    }
}
I'm able to curl the mimir endpoint directly, but I can't reach /alertmanager through nginx without getting a 404 error. I can get to / fine, and if I put the proxy_pass inside the / location it does work.
Example of what I'm seeing:
/ $ curl localhost:8080/
OK
/ $ curl localhost:8080/alertmanager
the Alertmanager is not configured
Curling http://mimir-distributed-alertmanager.mimir.svc.cluster.local does in fact return the HTML of the page I'm expecting.
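For context, the two forms of proxy_pass handle the URI differently; a minimal sketch of both (same hostname as above, behaviour as documented for the http proxy module):
location /alertmanager {
    # with $request_uri the original URI, including /alertmanager,
    # is passed to the upstream unchanged
    proxy_pass http://mimir-distributed-alertmanager.mimir.svc.cluster.local:8080$request_uri;
}
location /alertmanager/ {
    # with a trailing URI part, the matched /alertmanager/ prefix is
    # replaced by / before the request is forwarded
    proxy_pass http://mimir-distributed-alertmanager.mimir.svc.cluster.local:8080/;
}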
My design is:
Media Server -> edge servers (multiple Nginx cache servers -> Nginx load balancer)
It is my private CDN system (for live content delivery).
I have a content source and multiple edges; in each edge there are multiple cache servers and a load balancer.
I started step by step, and at this stage I am facing a problem with the Nginx load balancer.
In this configuration, I am balancing between two servers, s1 and s2,
but when I check traffic with nload, I see heavy traffic on the primary server (the load balancer); for example, nload shows s1 = 1 Gbps, s2 = 1 Gbps, load balancer = 2 Gbps.
Note: my content is HLS (.m3u8).
user www-data;
worker_processes 5; ## Default: 1
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;
worker_rlimit_nofile 8192;
events {
    worker_connections 4096; ## Default: 1024
}
http {
    include mime.types;
    include /etc/nginx/proxy.conf;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] $status '
                    '"$request" $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    tcp_nopush on;
    # upstream
    upstream origins {
        server s1.ip;
        server s2.ip;
    }
    # default route
    server {
        listen 80;
        server_name example.com;
        access_log /var/log/nginx/example.com main;
        location / {
            proxy_set_header Host $host;
            proxy_pass http://origins;
        }
    }
}
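Worth noting: because the load balancer proxies every byte between clients and the origins, its interface is expected to carry roughly the sum of s1 and s2 (hence the 2 Gbps reading). If the goal is to offload the origins, one common approach is to cache the HLS segments on the balancer itself; a minimal proxy_cache sketch (cache path, zone name and TTL are illustrative assumptions):
proxy_cache_path /var/cache/nginx/hls levels=1:2 keys_zone=hls:10m max_size=10g inactive=1m use_temp_path=off;
server {
    listen 80;
    server_name example.com;
    location / {
        proxy_set_header Host $host;
        proxy_cache hls;
        proxy_cache_valid 200 10s;  # short TTL suits live playlists/segments
        proxy_pass http://origins;
    }
}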
My TCP client sits behind nginx.
The idea is to have the backend TCP client authenticate the external server.
Finally, I had this configuration:
#secured TCP part
stream {
    log_format main '$remote_addr - - [$time_local] protocol $status $bytes_sent $bytes_received $session_time "$upstream_addr"';
    server {
        listen 127.0.0.1:10515;
        proxy_pass REALIP:REALPORT;
        proxy_ssl on;
        #server side authentication (client verifying server's certificate)
        proxy_ssl_protocols TLSv1.2;
        proxy_ssl_certificate /f0/client.crt;
        proxy_ssl_certificate_key /f0/client.key;
        access_log /var/log/nginx.tcp1.access.log main;
        error_log /var/log/nginx.tcp1.error.log debug;
        #The trusted CA certificates in the file named by the proxy_ssl_trusted_certificate directive are used to verify the certificate on the upstream
        proxy_ssl_verify on;
        proxy_ssl_trusted_certificate /f0/client_ca.crt;
        proxy_ssl_verify_depth 1;
        proxy_ssl_session_reuse on;
        proxy_ssl_name localhost;
        ssl_session_timeout 4h;
        ssl_handshake_timeout 30s;
        ssl_prefer_server_ciphers on;
        #proxy_ssl_ciphers "ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384";
        proxy_ssl_ciphers "ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:EECDH+AESGCM:EDH+AESGCM:ECDHE-RSA-AES128-GCM-SHA256:AES256+EECDH:DHE-RSA-AES128-GCM-SHA256:AES256+EDH:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA";
        #ssl_ecdh_curve prime256v1:secp384r1;
        #ssl_session_tickets off;
        resolver 8.8.8.8 8.8.4.4 valid=300s;
        resolver_timeout 5s;
    }
}
Can you please review the above configuration and let me know if I am verifying the peer the right way?
If yes, my other question:
Is there a way we can supply the peer certificate (public key) itself, rather than its CA, and verify against that?
Please clarify.
Thanks,
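On the second question: stock nginx has no directive for pinning a raw public key; however, proxy_ssl_trusted_certificate simply holds the trust anchors used for verification, so if the upstream certificate is self-signed it can be listed there directly. A sketch under that assumption (/f0/server.crt is a hypothetical path):
proxy_ssl_verify on;
# the peer's self-signed certificate acts as its own trust anchor,
# so no separate CA certificate is needed
proxy_ssl_trusted_certificate /f0/server.crt;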
The question is quite generic, as it doesn't mention any specific requirements (apart from TLS).
Please visit the following links to get a good start:
https://docs.nginx.com/nginx/admin-guide/load-balancer/tcp-udp-load-balancer/
https://www.nginx.com/blog/tcp-load-balancing-udp-load-balancing-nginx-tips-tricks/
Quoting simple example configuration from the above sources:
stream {
    server {
        listen 12345;
        #TCP traffic will be forwarded to the "stream_backend" upstream group
        proxy_pass stream_backend;
    }
    server {
        listen 12346;
        #TCP traffic will be forwarded to the specified server
        proxy_pass backend.example.com:12346;
    }
}
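The stream_backend group referenced above is defined separately inside the same stream context; a minimal sketch with placeholder addresses:
upstream stream_backend {
    # placeholder backend addresses; replace with your own servers
    server backend1.example.com:12345;
    server backend2.example.com:12345;
}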
I'm trying to start an nginx server in a Heroku free environment. I have read many how-tos and tutorials, but I don't get it running.
First of all, I would like to start nginx as the default web server on port 80. Afterwards I would like to configure nginx as a proxy for the underlying Express server (another Heroku instance).
For 4 days I have been trying to start just nginx on my Heroku instance. I always get the error that it is not permitted to bind to port 80.
I forked the nginx-buildpack (https://github.com/moohkooh/nginx-buildpack) from (https://github.com/benmurden/nginx-pagespeed-buildpack) to adjust some configuration. If I run nginx via heroku bash on port 2555, nginx starts, but I get connection refused in the web browser.
If I start nginx via the dyno, I get this error message in the logs:
State changed from starting to crashed
The Procfile of my dyno:
web: bin/start-nginx
My nginx.config.erb
daemon off;
#Heroku dynos have at least 4 cores.
worker_processes <%= ENV['NGINX_WORKERS'] || 4 %>;
events {
    use epoll;
    accept_mutex on;
    worker_connections 1024;
}
http {
    gzip on;
    gzip_comp_level 2;
    gzip_min_length 512;
    server_tokens off;
    log_format l2met 'measure#nginx.service=$request_time request_id=$http_x_request_id';
    access_log logs/nginx/access.log l2met;
    error_log logs/nginx/error.log;
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    server {
        listen <%= ENV['PORT'] %>;
        server_name _;
        keepalive_timeout 5;
        root /app/www;
        index index.html;
        location / {
            autoindex on;
        }
    }
}
I also set the PORT variable to 80:
heroku config:get PORT
80
Some other configuration:
heroku config:get NGINX_WORKERS
8
heroku buildpacks
https://github.com/heroku/heroku-buildpack-multi.git
heroku stack
cedar-14
My .buildpack file
https://github.com/moohkooh/nginx-buildpack
https://codon-buildpacks.s3.amazonaws.com/buildpacks/heroku/ruby.tgz
My guess is that Heroku doesn't use the PORT variable I set to 80. What's wrong? Big thanks to anyone who can help.
Btw: my Express server runs without any error on port 1000 (as a test I also started it on port 80 without any errors).
I solved my problem with this configuration.
daemon off;
#Heroku dynos have at least 4 cores.
worker_processes <%= ENV['NGINX_WORKERS'] || 4 %>;
pid nginx.pid;
events {
    worker_connections 1024;
}
http {
    gzip on;
    gzip_comp_level 2;
    gzip_min_length 512;
    server_tokens off;
    log_format l2met 'measure#nginx.service=$request_time request_id=$http_x_request_id';
    access_log logs/nginx/access.log l2met;
    error_log logs/nginx/error.log;
    include mime.types;
    server {
        listen <%= ENV['PORT'] %>;
        server_name localhost;
        port_in_redirect off;
        keepalive_timeout 5;
        root /app/www;
        index index.html;
        location / {
            autoindex on;
        }
    }
}
For those who are trying to deploy to an NGINX container (React App in my case):
With the help of this docker image I was able to do it. You will need the following in a file called /etc/nginx/conf.d/default.conf.template inside your container:
server {
    listen $PORT;
    listen [::]:$PORT;
    server_name localhost;
    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }
}
Then in your Dockerfile, use:
CMD /bin/sh -c "envsubst '\$PORT' < /etc/nginx/conf.d/default.conf.template > /etc/nginx/conf.d/default.conf" && nginx -g 'daemon off;'
Now when you run:
> heroku container:push
> heroku container:release
NGINX will use Heroku's assigned PORT to serve your application.
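To make the substitution concrete: if Heroku assigned, say, PORT=56423 (an arbitrary example value), the rendered /etc/nginx/conf.d/default.conf would come out as:
server {
    listen 56423;
    listen [::]:56423;
    server_name localhost;
    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }
}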
I have two AP servers, and I want to set up NGINX as a proxy server and load balancer.
Here is my nginx.conf file:
#user nobody;
worker_processes 1;
#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
pid logs/nginx.pid;
events {
    worker_connections 1024;
}
http {
    include mime.types;
    default_type application/octet-stream;
    large_client_header_buffers 8 1024k;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    #tcp_nopush on;
    #keepalive_timeout 0;
    keepalive_timeout 650;
    send_timeout 2000;
    proxy_connect_timeout 2000;
    proxy_send_timeout 2000;
    proxy_read_timeout 2000;
    gzip on;
    #
    # Load config files from the /etc/nginx/conf.d directory
    # The default server is in conf.d/default.conf
    map $http_upgrade $connection_upgrade {
        default Upgrade;
        '' close;
    }
    upstream backend {
        server apserver1:8443;
        server apserver2:8443;
    }
    server {
        listen 8445 default ssl;
        server_name localhost;
        client_max_body_size 500M;
        client_body_buffer_size 128k;
        underscores_in_headers on;
        ssl on;
        ssl_certificate ./crt/server.crt;
        ssl_certificate_key ./crt/server.key;
        location / {
            proxy_pass https://backend;
            break;
        }
    }
}
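As an aside, the $connection_upgrade map above only takes effect if the proxied location also forwards the upgrade headers; the usual companion lines (per the nginx WebSocket proxying documentation, not present in this config) would be:
location / {
    proxy_pass https://backend;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
}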
apserver1 and apserver2 are my AP servers; in reality they are IP addresses.
When I visit nginx via https://my.nginx.server:8445, I get the AP container's default page. In my case, it is the Jetty server's default page, so NGINX works.
If everything works correctly, a user accessing https://my.nginx.server:8445/myapp should get the login page, and once logged in, my app should redirect the user to https://my.nginx.server:8445/myapp/defaultResource.
When I visit https://my.nginx.server:8445/myapp as a not-logged-in user, I get the login page correctly.
When I visit https://my.nginx.server:8445/myapp/defaultResource directly as a logged-in user, I get the correct page.
But when I visit https://my.nginx.server:8445/myapp as a logged-in user (which should redirect to https://my.nginx.server:8445/myapp/defaultResource), nginx translates the URL to https://backend/myapp/defaultResource, and Chrome gives me the following error:
The server at backend can't be found, because the DNS lookup failed....(omitted)
nginx seems not to resolve the upstream name backend. What's wrong with my configuration?
And if I use http instead of https, everything works fine.
Any help is appreciated.
Try to add the "resolver" directive to your configuration:
http://nginx.org/r/resolver
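A typical form of that directive, placed at http (or server) level, looks like this (name-server addresses are placeholders):
# name servers used to resolve upstream hostnames at runtime
resolver 10.0.0.2 8.8.8.8 valid=300s;
resolver_timeout 5s;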