I'm using boot2docker since I'm running Mac OS X. I can't figure out how to serve static files using nginx running inside a Docker container (which also contains the static assets, like my html and js).
I have four docker containers being spun up with this docker-compose.yml:
web:
  build: ./public
  links:
    - nodeapi1:nodeapi1
  ports:
    - "80:80"
nodeapi1:
  build: ./api
  links:
    - redis
    - db
  ports:
    - "5000:5000"
  volumes:
    - ./api:/data
redis:
  image: redis:latest
  ports:
    - "6379:6379"
db:
  image: postgres:latest
  environment:
    POSTGRES_USER: root
  ports:
    - "5432:5432"
This is my nginx.conf:
worker_processes auto;
daemon off;

events {
    worker_connections 1024;
}

http {
    server_tokens off;

    upstream node-app {
        ip_hash;
        server 192.168.59.103:5000;
    }

    server {
        listen 80;
        index index.html;
        root /var/www;

        location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
            expires 1d;
        }

        location / {
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-NginX-Proxy true;
            proxy_http_version 1.1;
            proxy_pass http://node-app;
            proxy_cache_bypass $http_upgrade;
        }
    }
}
My Dockerfile for my web build (which contains my nginx.conf and static assets):
# Pull nginx base image
FROM nginx:latest

# Expose port 80
EXPOSE 80

# Copy custom configuration file from the current directory
COPY nginx.conf /etc/nginx/nginx.conf

# Copy static assets into /var/www
COPY ./dist /var/www
COPY ./node_modules /var/www/node_modules

# Start up nginx server
CMD ["nginx"]
The ./dist folder contains a bundle.js file and an index.html file. The file layout is:
public
-- Dockerfile
-- nginx.conf
-- dist (directory)
   -- bundle.js
   -- index.html
-- node_modules
   -- ...various node modules
It is properly sending requests to my node server (which is also in a Docker container, which is why my upstream server points to the boot2docker IP), but I'm just getting 404s for any attempt to retrieve my static assets.
I'm lost as to next steps. If I can provide any information, please let me know.
Your issue isn't related to docker but to your nginx configuration.
In your nginx config file, you define /var/www/ as the document root (I guess to serve your static files). But below that you instruct nginx to act as a reverse proxy to your node app for all requests.
Because of that, if you call the /index.html URL, nginx won't even bother checking the content of /var/www and will forward that query to nodejs.
Usually you want to distinguish requests for static content from requests for dynamic content by using a URL convention. For instance, all requests starting with /static/ will be served by nginx while anything else will be forwarded to node. The nginx config file would then be:
worker_processes auto;
daemon off;

events {
    worker_connections 1024;
}

http {
    server_tokens off;

    upstream node-app {
        ip_hash;
        server 192.168.59.103:5000;
    }

    server {
        listen 80;

        location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
            expires 1d;
        }

        location /static/ {
            alias /var/www/;
            index index.html;
        }

        location / {
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-NginX-Proxy true;
            proxy_http_version 1.1;
            proxy_pass http://node-app;
            proxy_cache_bypass $http_upgrade;
        }
    }
}
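If you would rather keep the existing URLs (no /static/ prefix), another common pattern is to let nginx try the filesystem first and fall back to the proxy only when no file matches. A sketch of that variant, reusing the /var/www root and node-app upstream from above:

```nginx
server {
    listen 80;
    root /var/www;
    index index.html;

    location / {
        # Serve the file from disk if it exists, otherwise hand off to node
        try_files $uri $uri/ @node;
    }

    location @node {
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_pass http://node-app;
    }
}
```

With this approach, index.html and bundle.js are served straight from disk while everything else (API calls, websocket upgrades) still reaches the node upstream.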
Related
I have a NUXT js application on an Ubuntu 20.04 server. I used Nginx to serve my Nuxt application as follows:
server {
    client_max_body_size 300M;
    root /var/www/app/dist;
    server_name example.com;

    location / {
        proxy_pass http://localhost:8080;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
My NUXT application runs with npm on port 8080; the Nginx reverse proxy passes user requests to localhost:8080 and everything works fine.
Now I want to access the js service worker file (named p-sw.js), but when I try to access it via my website address (https://example.com/p-sw.js), the Nginx reverse proxy returns a 404. The file is in the dist folder (see the Nginx configuration).
Can anybody explain how to set up the Nginx reverse proxy so it keeps working as before, but also serves the service worker file when I enter its address (https://example.com/p-sw.js) in the browser?
Finally, I solved it!
the Nginx config must look like this:
upstream backend {
    server localhost:3000;
}

server {
    server_name example.com;
    client_max_body_size 300M;
    root /var/www/app/dist;

    location /p-sw.js {
        try_files $uri @backend;
        add_header 'Cache-Control' 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
        expires off;
        proxy_no_cache 1;
    }

    location @backend {
        proxy_pass http://backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
In this configuration, I solved the issue by defining the location /p-sw.js for my service worker file; for the Nuxt routes I use the same proxy_pass as before.
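If more root-level static files need the same treatment later (a manifest, favicon, robots.txt), the same try_files-with-fallback pattern generalizes to a single regex location. A sketch, where the extra file names are assumptions:

```nginx
# Hypothetical extension: serve several root-level files from dist,
# falling back to the backend if a file is missing.
location ~ ^/(p-sw\.js|manifest\.json|robots\.txt)$ {
    try_files $uri @backend;
    add_header 'Cache-Control' 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
    expires off;
}
```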
I have an ASP.NET Web API application in a Docker container, and it works well with the commands below:
docker build -t hello-aspnetcore3 -f Api.Dockerfile .
docker run -d -p 5000:5000 --name hello-aspnetcore3 hello-aspnetcore3
Browsing to http://localhost:5000/weatherforecast works perfectly.
Now I am trying to use Nginx as a reverse proxy running in another container. Here are my nginx config and container files:
nginx.conf
worker_processes 1;

events { worker_connections 1024; }

http {
    sendfile on;

    upstream web-api {
        server api:5000;
    }

    server {
        listen 80;
        server_name $hostname;

        location / {
            proxy_pass http://web-api;
            proxy_redirect off;
            proxy_http_version 1.1;
            proxy_cache_bypass $http_upgrade;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection keep-alive;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }
}
Nginx.Dockerfile
FROM nginx:latest
COPY nginx.conf /etc/nginx/nginx.conf
and here is my docker compose file,
docker-compose.yml
version: "3.7"

services:
  reverseproxy:
    build:
      context: ./Nginx
      dockerfile: Nginx.Dockerfile
    ports:
      - "80:80"
    restart: always
  api:
    depends_on:
      - reverseproxy
    build:
      context: ./HelloAspNetCore3.Api
      dockerfile: Api.Dockerfile
    expose:
      - "5000"
    restart: always
Building and running the containers works fine with docker-compose build and docker-compose up -d.
But when I try to browse http://localhost/weatherforecast, I get an HTTP Error 404. The requested resource is not found. What's wrong here?
Note: when I browse using the host IP address, http://192.168.0.103/weatherforecast, it works fine.
Docker-Compose ps output is here....
I'm trying to start up my node service on my nginx webserver, but I keep getting this error when I run nginx -t:
nginx: [emerg] "upstream" directive is not allowed here in /etc/nginx/nginx.conf:3
nginx: configuration file /etc/nginx/nginx.conf test failed
My current nginx.conf is like this:
upstream backend {
    server 127.0.0.1:5555;
}

map $sent_http_content_type $charset {
    ~^text/ utf-8;
}

server {
    listen 80;
    listen [::]:80;

    server_name mywebsite.com;
    server_tokens off;

    client_max_body_size 100M; # Change this to the max file size you want to allow

    charset $charset;
    charset_types *;

    # Uncomment if you are running behind CloudFlare.
    # This requires NGINX compiled from source with:
    #   --with-http_realip_module
    #include /path/to/real-ip-from-cf;

    location / {
        add_header Access-Control-Allow-Origin *;
        root /path/to/your/uploads/folder;
        try_files $uri @proxy;
    }

    location @proxy {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://backend;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
I tried to look up some solutions but nothing seem to work for my situation.
Edit: Yes, I did edit the paths and placeholders properly.
tl;dr: The upstream directive must be embedded inside an http block (the same is true of your map block).
nginx configuration files usually have events and http blocks at the top-most level, and then server, upstream, and other directives nested inside http. Something like this:
events {
    worker_connections 768;
}

http {
    upstream foo {
        server localhost:8000;
    }

    server {
        listen 80;
        ...
    }
}
Sometimes, instead of nesting the server block explicitly, the configuration is spread across multiple files and the include directive is used to "merge" them all together:
http {
    include /etc/nginx/sites-enabled/*;
}
Your config doesn't show us an enclosing http block, so you are most likely running nginx -t against a partial config. You should either a) add the enclosing blocks to this file, or b) rename it and issue an include for it within your main nginx.conf to pull everything together.
I am trying to get nginx to proxy a websocket connection to a backend server. All services linked via docker-compose.
When i create the WebSocket object in my frontend react app:
let socket = new WebSocket(`ws://engine/socket`)
I get the following error:
WebSocket connection to 'ws://engine/socket' failed: Error in connection establishment: net::ERR_NAME_NOT_RESOLVED
I believe the problem comes from converting ws:// to http://, and that my nginx configuration does not seem to pick up the location match correctly.
Here is my nginx configuration:
server {
    # listen on port 80
    listen 80;

    root /usr/share/nginx/html;
    index index.html index.htm;

    location ^~ /engine {
        proxy_pass http://matching-engine:8081/;
        proxy_http_version 1.1;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location / {
        try_files $uri $uri/ /index.html;
    }

    # Media: images, icons, video, audio, HTC
    location ~* \.(?:jpg|jpeg|gif|png|ico|cur|gz|svg|svgz|mp4|ogg|ogv|webm|htc)$ {
        expires 1M;
        access_log off;
        add_header Cache-Control "public";
    }

    # JavaScript and CSS files
    location ~* \.(?:css|js)$ {
        try_files $uri =404;
        expires 1y;
        access_log off;
        add_header Cache-Control "public";
    }

    # Any route containing a file extension (e.g. /devicesfile.js)
    location ~ ^.+\..+$ {
        try_files $uri =404;
    }
}
Here is part of my docker-compose configuration:
matching-engine:
  image: amp-engine
  ports:
    - "8081:8081"
  depends_on:
    - mongodb
    - rabbitmq
    - redis
  deploy:
    restart_policy:
      condition: on-failure
      max_attempts: 3
      window: 120s
client:
  image: amp-client:latest
  container_name: "client"
  ports:
    - "80:80"
  depends_on:
    - matching-engine
  deploy:
    restart_policy:
      condition: on-failure
      max_attempts: 3
      window: 120s
docker-compose resolves 'matching-engine' automatically (I can make normal HTTP GET/POST requests that nginx resolves correctly, and nslookup finds the matching-engine correctly), so I believe the basic networking is working for HTTP requests. That leads me to think the problem comes from the location match in the nginx configuration.
How can one pick up a request that originates from `new WebSocket('ws://engine/socket')` in a location directive? I have tried the following ones:
location ^~ engine
location /engine
location /engine/socket
location ws://engine
without any success.
I have also tried changing `new WebSocket('ws://engine/socket')` to `new WebSocket('/engine/socket')`, but this fails (only ws:// or wss:// prefixes are accepted).
What's the way to make this configuration work ?
As you are already exposing port 80 of your client container to your host via docker-compose, you could just connect to your websocket proxy via localhost:
new WebSocket('ws://localhost:80/engine')
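More generally, the browser can only resolve hosts that exist outside the Docker network, so it helps to derive the websocket URL from the page's own origin rather than hardcoding a compose service name. A small sketch (the wsUrl helper name is hypothetical):

```javascript
// Build a websocket URL from the page's own origin, so the same code works
// wherever the client container is published (localhost, a LAN IP, a domain).
function wsUrl(loc, path) {
  // Use wss: when the page itself was served over https:
  const proto = loc.protocol === "https:" ? "wss:" : "ws:";
  return `${proto}//${loc.host}${path}`;
}

// In the browser:
//   const socket = new WebSocket(wsUrl(window.location, "/engine/socket"));
```

nginx then sees a plain /engine/... request (the websocket handshake is HTTP with Upgrade headers), which the existing `location ^~ /engine` block proxies to the matching engine.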
I am new to webpack and am using the Angular 2 starter pack from AngularClass with RC6.
All is well, but I have questions about deploying to production, which I did, and it seemed to work with nginx and supervisor.
So I did the below to deploy:
npm run build:prod
npm run server:prod
Supervisor Conf
[program:angular]
directory=/var/my-angular2/
autostart=true
autorestart=true
process_name = angular-%(process_num)s
command = npm run server:prod
--port=%(process_num)s
--log_file_prefix=%(here)s/logs/%(program_name)s-%(process_num)s.log
[group:angular_server]
programs=angular
My Nginx Conf
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    root /var/my-angular2/dist;

    # Add index.php to the list if you are using PHP
    index index.html index.htm index.php index.nginx-debian.html;

    # Here we proxy pass only the base path
    location = / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://127.0.0.1:8080;
    }

    # Here we proxy pass all the browsersync stuff including
    # all the websocket traffic
    location /browser-sync {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://127.0.0.1:8080;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
Then I served the app out of the dist/ folder in nginx.
So in the process of building and running, does the webpack process automatically add the below, or is there another step?
enableProdMode()
What else am I missing for deploying a production Angular 2 app?