Config settings for securing Elasticsearch with Nginx

I am running Kibana 1.3 and ElasticSearch 1.4 on the same host, and I have installed Nginx in an attempt to keep connections to ES local. To browse Kibana remotely, I have also registered a dynamic DNS domain name and bound it to the host on which Kibana and ElasticSearch are running, e.g. http://example.no-ip.org.
I think the use of the dynamic DNS domain name has caused problems with the connection between Kibana and ES, and I'm not sure how the configurations should be set so that:
1) only Kibana can communicate with ElasticSearch, and only locally;
2) the ElasticSearch API is not exposed to the world.
The guide I followed is: http://www.elasticsearch.org/blog/playing-http-tricks-nginx/
Here's the config for Nginx:
events {
    worker_connections 1024;
}
http {
    upstream elasticsearch {
        server 127.0.0.1:9200;
        server 127.0.0.1:9201;
        server 127.0.0.1:9202;
        keepalive 15;
    }
    server {
        listen 8080;
        location / {
            proxy_pass http://elasticsearch;
            proxy_http_version 1.1;
            proxy_set_header Connection "Keep-Alive";
            proxy_set_header Proxy-Connection "Keep-Alive";
        }
    }
}
And here are the config settings I added in elasticsearch.yml:
network.host: "127.0.0.1"
http.host: "127.0.0.1"
http.cors.allow-origin: "/.*/"
http.cors.enabled: true
As for Kibana, I have changed the default settings to use port 8080:
elasticsearch: "http://" + window.location.hostname + ":8080",
Thank you very much for your help in advance!
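For reference, a minimal sketch of one common way to keep the proxied Elasticsearch API from being wide open while it remains reachable through the dynamic DNS name: add HTTP basic auth to the existing nginx server block. This is only a sketch, not a full security setup; the /etc/nginx/.htpasswd path is an assumption, and Elasticsearch itself stays bound to 127.0.0.1 as in the elasticsearch.yml above.
server {
    listen 8080;
    # require credentials before anything reaches Elasticsearch;
    # the browser-based Kibana will prompt for them once
    auth_basic           "Restricted Elasticsearch";
    auth_basic_user_file /etc/nginx/.htpasswd;
    location / {
        proxy_pass http://elasticsearch;
        proxy_http_version 1.1;
        proxy_set_header Connection "Keep-Alive";
        proxy_set_header Proxy-Connection "Keep-Alive";
    }
}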

Related

Configure Nginx reverse proxy for MQTT

I'm trying to set up a reverse proxy that resolves localhost:8081 to a broker installed on another machine.
My Nginx config file is:
worker_processes 1;
events {
    worker_connections 1024;
}
server {
    listen 8081;
    server_name localhost;
    location / {
        proxy_pass tcp://192.168.1.177:1883;
    }
}
But when I try to connect to the broker (from the machine where I'm configuring Nginx) with the command
mosquitto_sub -h localhost -p 8081 -t "stat/tasmota_8231A8/POWER1"
I get the error Connection refused.
Edit:
Mosquitto broker config:
persistence true
persistence_location /var/lib/mosquitto/
include_dir /etc/mosquitto/conf.d
listener 1883
allow_anonymous true
Edit
I tried with this config file for nginx:
worker_processes 1;
events {
    worker_connections 1024;
}
stream {
    listen 8081;
    proxy_pass 192.168.1.77:1883;
}
This won't work for native MQTT.
What you have configured is an HTTP proxy, but MQTT != HTTP.
You need to configure nginx as a stream proxy, e.g.:
stream {
    server {
        listen 8081;
        proxy_pass 192.168.1.77:1883;
    }
}
https://docs.nginx.com/nginx/admin-guide/tcp-udp-load-balancer/
Or configure mosquitto to support MQTT over WebSockets (assuming the client supports this as well). Then you can use HTTP-based proxying, since WebSockets bootstrap via HTTP.
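A minimal sketch of that WebSocket alternative, assuming mosquitto is built with websockets support and gets an extra listener on port 9001 (the port number is an assumption), and that the client connects over WebSockets rather than raw TCP. Mosquitto side, e.g. in a file under /etc/mosquitto/conf.d/:
listener 9001
protocol websockets
And the nginx side as an ordinary HTTP proxy that passes the Upgrade handshake through:
http {
    server {
        listen 8081;
        location / {
            proxy_pass http://192.168.1.77:9001;
            # pass the WebSocket upgrade handshake through to mosquitto
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    }
}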

How to add healthcheck on multiple ports on nginx load balancer?

We are using nginx for load balancing our application. There are 4 nodes which need to be load-balanced in round-robin fashion.
The load balancing is working fine.
The runtime service is listening on port 9001, which internally redirects to another service on the same node.
So we have defined an upstream in nginx.conf, with the state file "cluster.state". Following is the excerpt from nginx.conf:
upstream cluster {
    zone cluster 64k;
    state /var/nginx/state/cluster.state;
}
Following is the excerpt from the "server" block that routes the calls:
location /apipattern {
    proxy_set_header Host $host:$server_port;
    proxy_read_timeout 300s;
    proxy_pass http://cluster/;
}
Following is the excerpt from the cluster.state file (FQDNs changed, but the port is correct):
server foobar1.com:9001 resolve;
server foobar2.com:9001 resolve;
server foobar3.com:9001 resolve;
server foobar4.com:9001 resolve;
The requirement is to put a healthcheck in place (for the nodes mentioned in cluster.state).
The healthcheck services (2 services) on these nodes are available on ports 8081 and 8082, with uri=/healthcheck/isup (and NOT on 9001).
How do we configure these healthchecks?
You can add multiple health_check directives under the location directive, each with a custom port and URI, to have compound/multiple monitors for the upstream. This is only possible with NGINX Plus, which offers active health checking.
location /juice {
    proxy_set_header Host $host;
    proxy_pass http://juice/;
    health_check port=800 uri=/custom.html;
    health_check port=8081 uri=/hello.html;
}

How to configure nginx to expose multiple services on Jelastic?

Through Jelastic's dashboard, I created an environment: I clicked "New environment", selected nodejs, and added a docker image (of mailhog).
Now, I would like port 80 of my environment to serve the nodejs application. This is the default behaviour, so there is nothing to do for that.
In addition to this, I would like port 8080 (or any port other than 80, like port 5000 for example) of my environment to serve mailhog, hosted on the docker image. To do that, I added the following lines to nginx-jelastic.conf (right after the first server block serving the nodejs app):
server {
    listen *:8080;
    listen [::]:8080;
    server_name _;
    location / {
        proxy_pass http://mailhog_upstream;
    }
}
where I have also defined mailhog_upstream like this:
upstream mailhog_upstream {
    server 10.102.8.215; ### DEFUPPROTO for common ###
    sticky path=/;
    keepalive 100;
}
If I now browse my environment's port 8080, then I see ... the nodejs app. If I try any port other than 80 or 8080, I see nothing. Putting another server_name doesn't help. I tried several things but nothing seems to work. Why is that? What am I doing wrong here?
Then I tried to get rid of the above mailhog_upstream and instead write:
server {
    listen *:5000;
    listen [::]:5000;
    server_name _;
    location / {
        proxy_pass http://10.102.8.215;
    }
}
Browsing the environment's port 5000 doesn't work either.
If I replace the nodejs app's IP with that of my mailhog service, then mailhog runs on port 80. I don't understand how I can make the nodejs app run on port 80 and the mailhog service on port 5000 (or any port other than 80).
Could someone enlighten me please?
After all those failures, I tried another approach. Assume my environment's URL is example.com. What I tried above was to get mailhog to work when calling example.com:5000, which failed. Then I tried to make mailhog available through a call to example.com/mailhog. In order to do that, I got rid of all my modifications above and completed the current server block in nginx-jelastic.conf with:
location /mailhog {
    proxy_pass http://10.102.8.96:8025/;
    add_header Set-Cookie "SRVGROUP=$group; path=/";
}
That works in the sense that if I now browse example.com/mailhog, then I get something on the page, but not exactly what I want: it's mailhog's page without any styling. Also, when I call mailhog's API through example.com/mailhog/api/v2/messages, I get a successful response with an empty body, when I should have received:
{"total":0,"count":0,"start":0,"items":[]}
What am I doing wrong this time?
Edit
To be more explicit, I put together the following manifest, which exhibits the second problem with the nginx location.
The full locations list for your case is the following (please pay attention to the URIs in the upstreams, they are different):
location /mailhog { proxy_pass http://172.25.2.128:8025/; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection " upgrade"; }
location /mailhog/api { proxy_pass http://172.25.2.128:8025/api; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection " upgrade"; }
location /css { proxy_pass http://172.25.2.128:8025; }
location /js { proxy_pass http://172.25.2.128:8025; }
location /images { proxy_pass http://172.25.2.128:8025; }
That works for me with your application:
# curl 172.25.2.127/mailhog/api/v2/messages
{"total":0,"count":0,"start":0,"items":[]}
The following ports are opened by default: 80, 8080, 8686, 8443, 4848, 4949, 7979.
Additional ports can be opened using:
endpoints - maps a container's internal port to a random external port via the Jelastic Shared LB
Public IP - provides direct access to all ports of your container
Read more in the following article: "Container configuration - Ports". This one may also be useful: "Public IP vs Shared Load Balancer".

Docker nginx proxy to host

Short description:
Nginx is running in Docker; how do I configure nginx so that it forwards calls to the host?
Long description:
We have one web application which communicates with a couple of backends (let's say rest1, rest2 and rest3). We are responsible for rest1.
Let's say I started rest1 manually on my PC and it is running on port 2345. I want nginx (which is running in Docker) to redirect all calls to rest1 to my own running instance (note: the instance is running on the host, not in any container and not in Docker), and calls to rest2 and rest3 to some other Docker node or maybe some other server (who cares).
What I am looking for is:
docker-compose.yml configurations (if needed).
nginx configuration.
Thanks in advance.
Configure nginx like the following (make sure you replace the IP of the Docker host) and save it as default.conf:
server {
    listen 80;
    server_name _;
    location / {
        proxy_pass http://<IP of Docker Host>;
        index index.html index.htm;
    }
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
Now bring up the container:
docker run -d --name nginx -p 80:80 -v /path/to/nginx/config/default.conf:/etc/nginx/conf.d/default.conf nginx
If you are using Docker Compose file version 3, you don't need any special config in the docker-compose.yml file at all; just use the special DNS name host.docker.internal to reach a host service, as in the following nginx.conf example:
events {
    worker_connections 1024;
}
http {
    upstream host_service {
        server host.docker.internal:2345;
    }
    server {
        listen 80;
        access_log /var/log/nginx/http_access.log combined;
        error_log /var/log/nginx/http_error.log;
        location / {
            proxy_pass http://host_service;
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $realip_remote_addr;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
        }
    }
}
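For completeness, a minimal docker-compose.yml that could go with this nginx.conf (a sketch; the file paths are assumptions). On Docker Desktop (macOS/Windows) host.docker.internal resolves out of the box; on Linux it may need to be mapped explicitly via extra_hosts (Docker 20.10+):
version: '3'
services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    # only needed on Linux, where host.docker.internal is not defined by default
    extra_hosts:
      - "host.docker.internal:host-gateway"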
Solution 1
Use network_mode: host; this will bind your nginx instance to the host's network interface.
This could result in conflicts when running multiple nginx containers: every exposed port is bound to the host's interface.
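As a sketch, Solution 1 in docker-compose terms (the file paths are assumptions); with network_mode: host no port mapping is needed, and nginx can reach rest1 on 127.0.0.1:2345 directly:
version: '3'
services:
  proxy:
    image: nginx:alpine
    # share the host's network stack; ports are not mapped, nginx listens on the host directly
    network_mode: host
    volumes:
      - ./default.conf:/etc/nginx/conf.d/default.conf:ro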
Solution 2
I'm running a separate nginx instance for every service I would like to expose to the outside world.
To keep the nginx configurations simple and avoid binding every nginx to the host, use the following container structure:
dockerhost - a dummy container with network_mode: host
proxy - an nginx container used as a proxy to the host service.
Link dockerhost to proxy; this will add an /etc/hosts entry in the proxy container, so we can use 'dockerhost' as a hostname in the nginx configuration.
docker-compose.yaml
version: '3'
services:
  dockerhost:
    image: alpine
    entrypoint: /bin/sh -c "tail -f /dev/null"
    network_mode: host
  proxy:
    image: nginx:alpine
    links:
      - dockerhost:dockerhost
    ports:
      - "18080:80"
    volumes:
      - /share/Container/nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
default.conf
location / {
    proxy_pass http://dockerhost:8080;
}
This method allows us to have automated Let's Encrypt certificates generated for every service running on my server. If interested, I can post a gist about the solution.
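Wrapped in a full server block and pointed at the rest1 port from the question, the default.conf could look like this sketch (the root location is an assumption):
server {
    listen 80;
    location / {
        # 'dockerhost' resolves thanks to the link to the network_mode: host container
        proxy_pass http://dockerhost:2345;
    }
}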
server {
    listen 80;
    server_name localhost;
    location / {
        proxy_pass http://host.docker.internal:3000;
    }
}
The Docker host address is host.docker.internal on macOS.
There are a couple of things you have to keep in mind:
Docker Compose (from version 3) by default uses the service name as the hostname for inter-container networking.
Nginx needs to know the upstreams first.
I strongly recommend mounting the default.conf directly via your docker-compose.yml.
Lastly, you have to dockerize your backend to make use of Docker's internal networking.
An example repo where I use nginx and docker-compose in a full-stack project: https://gitlab.com/datails/api.
The following example has some prerequisites:
you have a folder structure like:
- backend/
- frontend/
- default.conf
- docker-compose.yml
Secondly, the backend/ and frontend/ directories each have a Dockerfile that exposes an application on port 3000.
Example default.conf:
upstream backend {
    server backend:3000;
}
upstream frontend {
    server frontend:3000;
}
server {
    listen 80;
    location /api {
        proxy_pass http://backend;
    }
    location / {
        proxy_pass http://frontend/;
    }
}
Example docker-compose.yml:
version: '3.8'
services:
  nginx:
    image: nginx:1.19.4
    depends_on:
      - backend
      - frontend
    volumes:
      - ./default.conf:/etc/nginx/conf.d/default.conf
    ports:
      - '8080:80'
Then make sure your backend is dockerized and called (in this case) backend as a service, and a frontend (if needed) is called frontend as a service in your docker-compose:
version: '3.8'
services:
  nginx:
    image: nginx:1.19.4
    depends_on:
      - backend
      - frontend
    volumes:
      - ./default.conf:/etc/nginx/conf.d/default.conf
    ports:
      - '8080:80'
  frontend:
    build: ./frontend
  backend:
    build: ./backend
This is a bare minimum example to get started. Hope this will help future developers.
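With those files in place, a quick way to check that both routes are proxied (a sketch; the paths follow the config above):
docker-compose up -d
curl http://localhost:8080/        # proxied to the frontend service
curl http://localhost:8080/api     # proxied to the backend service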

How to configure IPython behind nginx in a subpath?

I've got nginx running, handling all the SSL stuff and already proxying / to a Redmine instance and /ci to a Jenkins instance.
Now I want to serve an IPython instance on /ipython through that very same nginx.
In nginx.conf I've added:
http {
    ...
    upstream ipython_server {
        server 127.0.0.1:5001;
    }
    server {
        listen 443 ssl default_server;
        ... # all SSL related stuff and the other proxy configs (Redmine+Jenkins)
        location /ipython {
            proxy_pass http://ipython_server;
        }
    }
}
In my .ipython/profile_nbserver/ipython_notebook_config.py I've got:
c.NotebookApp.base_project_url = '/ipython/'
c.NotebookApp.base_kernel_url = '/ipython/'
c.NotebookApp.port = 5001
c.NotebookApp.trust_xheaders = True
c.NotebookApp.webapp_settings = {'static_url_prefix': '/ipython/static/'}
Pointing my browser to https://myserver/ipython gives me the usual index page of all notebooks in the directory where I launched IPython.
However, when I try to open one of the existing notebooks or create a new one, I'm getting the error:
WebSocket connection failed: A WebSocket connection to could not be established. You will NOT be able to run code. Check your network connection or notebook server configuration.
I've tried the same setup with the current stable (1.2.1, via pypi) and development (Git checkout of master) version of IPython.
I also tried adjusting the nginx config according to nginx reverse proxy websockets, to no avail.
Due to an enforced policy I'm not able to allow connections to the server on other ports than 443.
Does anybody have IPython running behind an nginx?
I had the same problem. I updated nginx to the current version (1.6.0). It seems to be working now.
Server config:
location /ipython {
    proxy_pass http://ipython_server;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Origin "";
}
See: http://nginx.org/en/docs/http/websocket.html
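The nginx documentation linked above uses a map so that plain HTTP requests keep a normal Connection header while WebSocket requests get upgraded; a variant of the same location using that pattern could look like this sketch (the map block goes in the http context):
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

location /ipython {
    proxy_pass http://ipython_server;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    # "upgrade" for WebSocket requests, "close" otherwise
    proxy_set_header Connection $connection_upgrade;
}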
