I am trying to get nginx to proxy a WebSocket connection to a backend server. All services are linked via docker-compose.
When I create the WebSocket object in my frontend React app:
let socket = new WebSocket(`ws://engine/socket`)
I get the following error:
WebSocket connection to 'ws://engine/socket' failed: Error in connection establishment: net::ERR_NAME_NOT_RESOLVED
I believe the problem comes from the ws:// to http:// conversion, and that my nginx configuration does not seem to pick up the matching location correctly.
Here is my nginx configuration:
server {
    # listen on port 80
    listen 80;

    root /usr/share/nginx/html;
    index index.html index.htm;

    location ^~ /engine {
        proxy_pass http://matching-engine:8081/;
        proxy_http_version 1.1;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location / {
        try_files $uri $uri/ /index.html;
    }

    # Media: images, icons, video, audio, HTC
    location ~* \.(?:jpg|jpeg|gif|png|ico|cur|gz|svg|svgz|mp4|ogg|ogv|webm|htc)$ {
        expires 1M;
        access_log off;
        add_header Cache-Control "public";
    }

    # JavaScript and CSS files
    location ~* \.(?:css|js)$ {
        try_files $uri =404;
        expires 1y;
        access_log off;
        add_header Cache-Control "public";
    }

    # Any route containing a file extension (e.g. /devicesfile.js)
    location ~ ^.+\..+$ {
        try_files $uri =404;
    }
}
Here is part of my docker-compose configuration:
matching-engine:
  image: amp-engine
  ports:
    - "8081:8081"
  depends_on:
    - mongodb
    - rabbitmq
    - redis
  deploy:
    restart_policy:
      condition: on-failure
      max_attempts: 3
      window: 120s

client:
  image: amp-client:latest
  container_name: "client"
  ports:
    - "80:80"
  depends_on:
    - matching-engine
  deploy:
    restart_policy:
      condition: on-failure
      max_attempts: 3
      window: 120s
docker-compose resolves 'matching-engine' automatically (I can make normal HTTP GET/POST requests that nginx resolves correctly, and nslookup finds the matching-engine), so I believe the basic networking works for HTTP requests. This leads me to think the problem comes from the location match in the nginx configuration.
How can one pick up a request that originates from `new WebSocket('ws://engine/socket')` in a location directive? I have tried the following ones:
location ^~ engine
location /engine
location /engine/socket
location ws://engine
without any success.
I have also tried changing new WebSocket('ws://engine/socket') to new WebSocket('/engine/socket'), but this fails (only the ws:// and wss:// schemes are accepted).
What's the way to make this configuration work?
As you are already exposing port 80 of your client container to your host via docker-compose, you could just connect to your WebSocket proxy via localhost:
new WebSocket('ws://localhost:80/engine')
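If the page must also work when served from a host other than localhost, one option is to derive the WebSocket URL from the page's own location, so the browser always connects back through the nginx proxy that served it. This is a sketch: buildWsUrl is a hypothetical helper, and /engine/socket is the path from the question.

```javascript
// Hypothetical helper: build the WebSocket URL from the page's own
// location, so the browser connects back through the nginx proxy
// instead of trying to resolve the Docker-internal name "engine".
function buildWsUrl(path, loc) {
  // use wss:// when the page itself was served over https
  const scheme = loc.protocol === "https:" ? "wss:" : "ws:";
  return `${scheme}//${loc.host}${path}`;
}

// In the browser you would call:
//   const socket = new WebSocket(buildWsUrl("/engine/socket", window.location));
console.log(buildWsUrl("/engine/socket", { protocol: "http:", host: "localhost:80" }));
// → ws://localhost:80/engine/socket
```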
I have the following nginx.conf, and in the access.log I am getting the same remote_addr for every request: the IP of my VM.
events {}

# See below link for Creating NGINX Plus and NGINX Configuration Files
# https://docs.nginx.com/nginx/admin-guide/basic-functionality/managing-configuration-files/
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # The identifier "backend" is internal to nginx and names this specific upstream
    upstream backend {
        # BACKEND_HOST is the internal DNS name used by the backend service inside the
        # Kubernetes cluster or in the services list of the docker-compose file
        server ${BACKEND_HOST}:${BACKEND_PORT};
    }

    server {
        listen ${NODE_PORT};

        root /usr/share/nginx/html;
        index index.html;

        location / {
            try_files $uri $uri/ /index.html;
        }

        location /api/ {
            resolver 127.0.0.11;  # nginx will not crash if the host is not found
            # Proxy traffic to the upstream
            proxy_pass http://backend;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
However, I need the remote_addr field to contain the original client IP. I know that I can use the $realip_remote_addr variable, but I wanted to ask whether there is any configuration that changes remote_addr itself. Is this somehow possible?
EDIT: As I search more about that I think that it is important to mention that I use docker-compose to run the nginx as part of a frontend service. Maybe this is related to the network of docker.
Usually it is enough to add these two fields to the request headers:
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
See the documentation at proxy_set_header for more details.
In your case:
server {
    listen ${NODE_PORT};

    root /usr/share/nginx/html;
    index index.html;

    location / {
        try_files $uri $uri/ /index.html;
    }

    location /api/ {
        resolver 127.0.0.11;  # nginx will not crash if the host is not found
        # Proxy traffic to the upstream
        proxy_pass http://backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
We have to understand the importance of the remote_addr field: it tells the server where to send the response back, so if you overwrite this value the server will no longer answer the peer the request actually came from. For this use case, logging the real client IP, refer to the snippet below; it might help:
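To illustrate the point about X-Forwarded-For (a JavaScript sketch with hypothetical names; the original only discusses the nginx side): the upstream application recovers the original client address from the header, not from the socket address.

```javascript
// Sketch: recover the original client address on the upstream app server.
// nginx appends the peer address to X-Forwarded-For, so the left-most
// entry is the original client, assuming nginx is the only trusted
// proxy in front of the app (a hypothetical single-proxy setup).
function clientIp(xForwardedFor, socketAddr) {
  if (!xForwardedFor) return socketAddr; // direct connection, no proxy
  return xForwardedFor.split(",")[0].trim();
}

console.log(clientIp("203.0.113.7, 172.18.0.2", "172.18.0.5")); // → 203.0.113.7
console.log(clientIp(null, "172.18.0.5"));                      // → 172.18.0.5
```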
events {}

# See below link for Creating NGINX Plus and NGINX Configuration Files
# https://docs.nginx.com/nginx/admin-guide/basic-functionality/managing-configuration-files/
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # log_format belongs inside the http context
    log_format logs_requested '$remote_addr - $remote_user [$time_local] "$request" '
                              '$status $body_bytes_sent "$http_referer" '
                              '"$http_user_agent" "$request_time" "$upstream_response_time" "$pipe" "$http_x_forwarded_for"';

    # The identifier "backend" is internal to nginx and names this specific upstream
    upstream backend {
        # BACKEND_HOST is the internal DNS name used by the backend service inside the
        # Kubernetes cluster or in the services list of the docker-compose file
        server ${BACKEND_HOST}:${BACKEND_PORT};
    }

    server {
        listen ${NODE_PORT};
        access_log /var/log/nginx/access_logs.log logs_requested;

        root /usr/share/nginx/html;
        index index.html;

        location / {
            try_files $uri $uri/ /index.html;
        }

        location /api/ {
            resolver 127.0.0.11;  # nginx will not crash if the host is not found
            # Proxy traffic to the upstream
            proxy_pass http://backend;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
In the snippet above, logs_requested is a log_format defined according to one's requirements. The client IP can be seen in the $http_x_forwarded_for variable, and the access_log /var/log/nginx/access_logs.log logs_requested line is included in the server block to log requests in this format.
Check the nginx documentation on setting up your desired access log format:
https://docs.nginx.com/nginx/admin-guide/monitoring/logging/#access_log
Some more info:
https://docs.splunk.com/Documentation/AddOns/released/NGINX/Setupv2
I have the following Jenkins nginx configuration:
server {
    listen 80;
    listen [::]:80;

    server_name mysubdomain.maindomain.com;

    # This is the Jenkins web root directory (mentioned in the /etc/default/jenkins file)
    root /var/run/jenkins/war/;

    access_log /var/log/nginx/jenkins/access.log;
    error_log /var/log/nginx/jenkins/error.log;

    # pass through headers from Jenkins which are considered invalid by the nginx server
    ignore_invalid_headers off;

    location ~ "^/static/[0-9a-fA-F]{8}\/(.*)$" {
        # rewrite all static files into requests to the root,
        # e.g. /static/12345678/css/something.css becomes /css/something.css
        rewrite "^/static/[0-9a-fA-F]{8}\/(.*)" /$1 last;
    }

    location /userContent {
        # have nginx handle all the static requests to the userContent folder files
        # note: this is the $JENKINS_HOME dir
        root /var/lib/jenkins/;
        if (!-f $request_filename) {
            # this file does not exist, might be a directory or a /**view** url
            rewrite (.*) /$1 last;
            break;
        }
        sendfile on;
    }

    location @jenkins {
        sendfile off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        # Required for the new HTTP-based CLI
        proxy_http_version 1.1;
        proxy_request_buffering off;
        proxy_pass http://127.0.0.1:2021;
    }

    location / {
        # try_files $uri $uri/ =404;
        try_files $uri @jenkins;
    }
}
which is essentially a copy of this jenkins configuration and my current /etc/default/jenkins file:
NAME=jenkins
# location of java
JAVA=/usr/bin/java
JAVA_ARGS="-Djava.awt.headless=true"
# make jenkins listen on IPv4 address
JAVA_ARGS="-Djava.net.preferIPv4Stack=true"
PIDFILE=/var/run/$NAME/$NAME.pid
JENKINS_USER=$NAME
JENKINS_GROUP=$NAME
JENKINS_WAR=/usr/share/$NAME/$NAME.war
JENKINS_HOME=/var/lib/$NAME
RUN_STANDALONE=true
JENKINS_LOG=/var/log/$NAME/$NAME.log
MAXOPENFILES=8192
HTTP_PORT=2021
HTTP_HOST=127.0.0.1
# servlet context, important if you want to use apache proxying
PREFIX=/$NAME
JENKINS_ARGS="--webroot=/var/cache/$NAME/war --prefix=$PREFIX --httpListenAddress=$HTTP_HOST --httpPort=$HTTP_PORT"
a simple curl request shows a response from Jenkins running:
$ curl http://localhost:2021/jenkins/
<html><head><meta http-equiv='refresh' content='1;url=/jenkins/login?from=%2Fjenkins%2F'/><script>window.location.replace('/jenkins/login?from=%2Fjenkins%2F');</script></head><body style='background-color:white; color:white;'>
Authentication required
<!--
You are authenticated as: anonymous
Groups that you are in:
Permission you need to have (but didn't): hudson.model.Hudson.Administer
-->
</body></html>
However, I am unable to access the Web UI from the browser; whenever I try to, I get a 404. The following are the relevant versions of the installed software:
Nginx - 1.13.6
Jenkins - 2.73.2 (using java -jar path-to-warfile --version)
OS - ubuntu 16.04
JDK - openjdk version "1.8.0_131"
An inspection of sudo nginx -T revealed that my site config wasn't being loaded. After correcting the error in my nginx.conf (a spelling error in the include directive for the sites directory), the issue was resolved.
Thanks to SmokedCheese on IRC for his/her help with this issue.
There are 3 ingredients to this issue:
Docker container: I have a Docker container that is deployed on an EC2 instance. More specifically, I have the rocker/shiny image, which I have run using:
sudo docker run -d -v /home/ubuntu/projects/shiny_example:/srv/shiny-server -p 3838:3838 rocker/shiny
Shiny server: The standard Shiny server configuration file is untouched, and is set up to serve everything in the /srv/shiny-server folder on port 3838, and the contents of my local ~/projects/shiny_example are mapped to the container's /srv/shiny-server/.
In my local ~/projects/shiny_example, I have cloned a random Shiny app:
git clone https://github.com/rstudio/shiny_example
nginx: I have set up nginx as a reverse proxy and here are the contents of the /etc/nginx/nginx.conf in its entirety.
The issue is that with this setup, when I try to retrieve http://<ip-address>/shiny/shiny_example, I get a 404. The main clue I have as to what might be wrong is that when I do a:
wget http://localhost:3838/shiny_example
from the command line on my EC2 instance, I get:
--2016-06-13 11:05:08-- http://localhost:3838/shiny_example
Resolving localhost (localhost)... 127.0.0.1
Connecting to localhost (localhost)|127.0.0.1|:3838... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: /shiny_example/ [following]
--2016-06-13 11:05:08-- http://localhost:3838/shiny_example/
Reusing existing connection to localhost:3838.
HTTP request sent, awaiting response... 200 OK
Length: 3136 (3.1K) [text/html]
Saving to: ‘shiny_example.3’
100%[==============================================================================================================================>] 3,136 --.-K/s in 0.04s
2016-06-13 11:05:09 (79.6 KB/s) - ‘shiny_example.3’ saved [3136/3136]
where the emphasis (on the 301 Moved Permanently redirect) is mine.
I think that my nginx configuration does not account for the fact that when requesting a Docker mapped port, there is a 301 redirect. I think that the solution involves proxy_next_upstream, but I would appreciate some help in trying to set this up in my context.
I also think that this question can be shorn of the Docker context, but it would be nice to understand how to prevent a 301 redirect when requesting a resource from Shiny server that is in a Docker container, and whether this behavior is expected.
I can't be sure without more output, but suspect your error is in your proxy_redirect line:
location /shiny/ {
    rewrite ^/shiny/(.*)$ /$1 break;
    proxy_pass http://localhost:3838;
    proxy_redirect http://localhost:3838/ $scheme://$host/shiny_example;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
    proxy_read_timeout 20d;
}
Try changing that to:
location /shiny/ {
    rewrite ^/shiny/(.*)$ /$1 break;
    proxy_pass http://localhost:3838;
    proxy_redirect http://localhost:3838/ $scheme://$host/shiny/;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
    proxy_read_timeout 20d;
}
The reason is that when the 301 comes back from "http://localhost:3838" to add the trailing slash, its Location header gets rewritten to "http://localhost/shiny_example", which doesn't exist in your nginx config, plus it may also drop a slash from the path. This means the 301 from "http://localhost:3838/shiny_example" to "http://localhost:3838/shiny_example/" would get rewritten to "http://localhost/shiny_exampleshiny_example/", at which point you get a 404.
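The prefix substitution that proxy_redirect performs on the Location header can be sketched as follows (a hypothetical helper for illustration; example.com stands in for $scheme://$host):

```javascript
// Sketch of proxy_redirect's effect on an upstream 301's Location header:
// if the header starts with the "from" prefix, swap it for the "to" prefix,
// otherwise pass it through unchanged.
function proxyRedirect(location, from, to) {
  return location.startsWith(from) ? to + location.slice(from.length) : location;
}

// With the corrected directive, the upstream's trailing-slash redirect
// stays under /shiny/ on the public host:
console.log(proxyRedirect(
  "http://localhost:3838/shiny_example/",
  "http://localhost:3838/",
  "http://example.com/shiny/"
));
// → http://example.com/shiny/shiny_example/
```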
There was nothing wrong with anything. Basically, one of the lines in /etc/nginx/nginx.conf was include /etc/nginx/sites-enabled/*, which was pulling in the default file for enabled sites, which has the following lines:
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    root /usr/share/nginx/html;
    index index.html index.htm;

    # Make site accessible from http://localhost/
    server_name localhost;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
        # Uncomment to enable naxsi on this location
        # include /etc/nginx/naxsi.rules
    }
}
which was overriding my listen directives for port 80 and for location /. Commenting out the include directive for the default site config in /etc/nginx/nginx.conf resolved all issues for me.
Not sure if this is still relevant but I have a minimal example here: https://github.com/mRcSchwering/butterbirne
A service shinyserver (which is based on rocker/shiny) is started with a service webserver (based on nginx:latest):
version: '2'
services:
  shinyserver:
    build: shinyserver/
  webserver:
    build: webserver/
    ports:
      - 80:80
I configured nginx so that it redirects directly to the Shiny server root. In my case I added the app (called myapp here) as the root of shinyserver (so no /myapp prefix is needed). This is the whole nginx.conf:
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    keepalive_timeout 65;

    # apparently this is needed for shiny server
    map $http_upgrade $connection_upgrade {
        default upgrade;
        '' close;
    }

    # proxy shinyserver
    server {
        listen 80;

        location / {
            proxy_pass http://shinyserver:3838;
            proxy_redirect http://shinyserver:3838/ $scheme://$host/;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
            proxy_read_timeout 20d;
            proxy_buffering off;
        }
    }
}
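The map block in this config is what makes the WebSocket upgrade work. Its logic can be sketched as (illustrative only; the function name is hypothetical):

```javascript
// Sketch of `map $http_upgrade $connection_upgrade`: forward
// "Connection: upgrade" to the upstream only when the client actually
// sent an Upgrade header; an empty value maps to "close", everything
// else falls through to the default "upgrade".
function connectionUpgrade(httpUpgrade) {
  return httpUpgrade ? "upgrade" : "close";
}

console.log(connectionUpgrade("websocket")); // → upgrade
console.log(connectionUpgrade(""));          // → close
```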
I'm using boot2docker since I'm running Mac OS X. I can't figure out how to serve static files using nginx running inside a Docker container (which also contains the static assets, like my HTML and JS).
I have four docker containers being spun up with this docker-compose.yml:
web:
  build: ./public
  links:
    - nodeapi1:nodeapi1
  ports:
    - "80:80"
nodeapi1:
  build: ./api
  links:
    - redis
    - db
  ports:
    - "5000:5000"
  volumes:
    - ./api:/data
redis:
  image: redis:latest
  ports:
    - "6379:6379"
db:
  image: postgres:latest
  environment:
    POSTGRES_USER: root
  ports:
    - "5432:5432"
This is my nginx.conf:
worker_processes auto;
daemon off;

events {
    worker_connections 1024;
}

http {
    server_tokens off;

    upstream node-app {
        ip_hash;
        server 192.168.59.103:5000;
    }

    server {
        listen 80;
        index index.html;
        root /var/www;

        location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
            expires 1d;
        }

        location / {
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-NginX-Proxy true;
            proxy_http_version 1.1;
            proxy_pass http://node-app;
            proxy_cache_bypass $http_upgrade;
        }
    }
}
My Dockerfile for my web build (which contains my nginx.conf and static assets):
# Pull nginx base image
FROM nginx:latest
# Expose port 80
EXPOSE 80
# Copy custom configuration file from the current directory
COPY nginx.conf /etc/nginx/nginx.conf
# Copy static assets into var/www
COPY ./dist /var/www
COPY ./node_modules /var/www/node_modules
# Start up nginx server
CMD ["nginx"]
The contents of the ./dist folder is a bundle.js file and an index.html file. The file layout is:
public
-- Dockerfile
-- nginx.conf
-- dist (directory)
   -- bundle.js
   -- index.html
-- node_modules
   -- ...various node modules
It is properly sending requests to my node server (which is also in a docker container, which is why my upstream server points to the boot2docker ip), but I'm just getting 404s for attempts to retrieve my static assets.
I'm lost as to next steps. If I can provide any information, please let me know.
Your issue isn't related to docker but to your nginx configuration.
In your nginx config file, you define /var/www/ as the document root (I guess to serve your static files). But below that you instruct nginx to act as a reverse proxy to your node app for all requests.
Because of that, if you call the /index.html URL, nginx won't even bother checking the content of /var/www and will forward that query to nodejs.
Usually you want to distinguish requests for static content from requests for dynamic content by using a URL convention. For instance, all requests starting with /static/ will be served by nginx while anything else will be forwarded to node. The nginx config file would then be:
worker_processes auto;
daemon off;

events {
    worker_connections 1024;
}

http {
    server_tokens off;

    upstream node-app {
        ip_hash;
        server 192.168.59.103:5000;
    }

    server {
        listen 80;

        location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
            expires 1d;
        }

        location /static/ {
            alias /var/www/;
            index index.html;
        }

        location / {
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-NginX-Proxy true;
            proxy_http_version 1.1;
            proxy_pass http://node-app;
            proxy_cache_bypass $http_upgrade;
        }
    }
}
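The routing split that the config above implements can be sketched as (a hypothetical helper; /var/www and node-app are the names used in the config):

```javascript
// Sketch: URLs under /static/ are mapped onto the aliased directory and
// served by nginx from disk; everything else is proxied to the node app.
function route(path) {
  if (path.startsWith("/static/")) {
    // alias /var/www/ replaces the /static/ prefix in the lookup path
    return "file:/var/www/" + path.slice("/static/".length);
  }
  return "proxy:http://node-app";
}

console.log(route("/static/index.html")); // → file:/var/www/index.html
console.log(route("/api/users"));         // → proxy:http://node-app
```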
I have Gitlab 5.2 + nginx installed on a local machine in my university. Clone over HTTP works for machines within the internal network, but trying to clone from a machine on an external network results in a "fatal: Authentication failed" message, even though the exact same credentials are supplied. (I use the same credentials as the ones I use to log in to Gitlab via the web interface.)
The Gitlab web interface is accessible from external networks. It is only the clone over http that fails (clone over ssh is not possible because port 22 is blocked)
Here are some lines from the relevant configuration files:
from config/gitlab.yml
host: mydomain
port: 80
https: false
Here are the relevant lines from the nginx config file:
server {
    listen *:80 default_server;  # In most cases *:80 is a good idea
    server_name mydomain;        # e.g., server_name source.example.com;
    root /home/git/gitlab/public;

    # individual nginx logs for this gitlab vhost
    access_log /var/log/nginx/gitlab_access.log;
    error_log /var/log/nginx/gitlab_error.log;

    location / {
        # serve static files from the defined root folder;
        # @gitlab is a named location for the upstream fallback, see below
        try_files $uri $uri/index.html $uri.html @gitlab;
    }

    # if a file which is not found in the root folder is requested,
    # then the proxy passes the request to the upstream (gitlab unicorn)
    location @gitlab {
        proxy_read_timeout 2000;    # https://github.com/gitlabhq/gitlabhq/issues/694
        proxy_connect_timeout 2000; # https://github.com/gitlabhq/gitlabhq/issues/694
        proxy_redirect off;

        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://gitlab;
    }
}
Note: I have added this line to /etc/hosts: 127.0.0.1 mydomain but it doesn't really help. (based on https://github.com/gitlabhq/gitlabhq/issues/3483#issuecomment-15783597)
Any ideas on what the issue might be/how I might debug this?
I believe this is fixed in 5.3, so try updating. See:
https://github.com/gitlabhq/gitlabhq/blob/master/CHANGELOG#L41
https://github.com/gitlabhq/gitlabhq/blob/master/config/gitlab.yml.example#L151