How to connect Docker, nginx, Express, and a React app on Windows

I found many tutorials on each of these, but none combining all three. I want to learn how to do that because I need to start deploying the React websites I made, and I want to deploy them on my domain, hosted on my computer for testing purposes.
In what order should I learn these to achieve that? What tutorials do you recommend? I found some tutorials about them but got really confused by the cascade of things to learn. I got Windows PowerShell working, did some Node tutorials and some Express ones, and managed to run nginx on Docker, but couldn't finish.
I am feeling OK with my CSS, JS, and React: I made a little game, got some things working, and did some practice, but now I am kind of stuck. I really appreciate any help or suggestions you can provide to keep going on my learning path.
Below are the video tutorials I watched:
CSS
https://www.youtube.com/watch?v=1Rs2ND1ryYc
React
https://www.youtube.com/watch?v=DLX62G4lc44

You can use a docker-compose.yml file to define and run multi-container Docker applications; with a single command you can build and start all your services. Docker runs each service in a lightweight, isolated container (on Windows, Linux containers run inside a small VM managed by Docker Desktop).
Here is an example of what you want. This is the folder structure:
|--client
|  |--Dockerfile
|  |--components
|  |--index.js
|--server
|  |--Dockerfile
|  |--index.js
|--nginx
|  |--Dockerfile
|  |--default.conf
|--docker-compose.yml
Dockerfile for the React client (a development setup: it starts the CRA dev server, which listens on port 3000 as the nginx config below expects):
FROM node:alpine
WORKDIR '/app'
COPY ./package.json ./
RUN npm install
COPY . .
CMD ["npm", "run", "start"]
Dockerfile for nginx
FROM nginx
COPY ./default.conf /etc/nginx/conf.d/default.conf
default.conf for nginx
upstream client {
    server client:3000;
}

upstream api {
    server api:5000;
}

server {
    listen 80;

    location / {
        proxy_pass http://client;
    }

    location /sockjs-node {
        proxy_pass http://client;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }

    location /api {
        rewrite /api/(.*) /$1 break;
        proxy_pass http://api;
    }
}
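With this config, API calls from the React code go through nginx under the /api prefix, which nginx strips before proxying to the Express service. A minimal sketch of such a call (the /values route is a hypothetical example, not part of the original answer):
// in the React client, e.g. inside a component or API helper
// nginx matches /api, strips the prefix, and forwards the request to the api service
fetch('/api/values')
  .then(res => res.json())
  .then(data => console.log(data));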
Dockerfile for node/express server
FROM node:alpine
WORKDIR '/app'
COPY ./package.json ./
RUN npm install
COPY . .
CMD [ "npm", "run", "start" ]
Finally, the docker-compose.yml file. You can swap mongo for any database you want to use for your API. You can build and run everything with docker-compose up --build from the main project directory (where the docker-compose.yml file is).
version: '3'
services:
  nginx:
    restart: always
    build:
      dockerfile: Dockerfile
      context: ./nginx
    ports:
      - '80:80'
  mongo:
    container_name: mongo
    image: mongo
    ports:
      - '27017:27017'
  api:
    restart: always
    build:
      dockerfile: Dockerfile
      context: ./server
    volumes:
      - /app/node_modules
      - ./server:/app
    links:
      - mongo
    ports:
      - '5000:5000'
    depends_on:
      - mongo
  client:
    build:
      dockerfile: Dockerfile
      context: ./client
    volumes:
      - /app/node_modules
      - ./client:/app
    links:
      - api
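To check that the routing works, you can hit nginx on port 80 from the host; for example (assuming the hypothetical /values route from the server sketch above):
docker-compose up --build
# React dev server, proxied by nginx:
curl http://localhost/
# Express API, proxied by nginx with the /api prefix stripped:
curl http://localhost/api/values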

Related

Nginx not routing browser request to wsgi (python server running)

I am running my Flask project with uwsgi behind nginx, but nginx is not routing requests to uwsgi when I hit localhost:80/.
My nginx.conf looks like this:
server {
    listen 80;
    # server_name would be localhost when running locally; I was running on WSL, so I put its IP
    server_name <your machine ip/domain>;
    location / {
        include uwsgi_params;
        # you might see suggestions of .sock files or of prefixing http:// or unix:,
        # but none worked for me; simply use your python server's service name
        # as defined in docker-compose
        uwsgi_pass web_app:5000;
    }
}
docker-compose looks like this
version: '3.7'
services:
  web_app:
    build: .
    container_name: kpi-dashboard
    ports:
      - 5000:5000
    depends_on:
      - db
  nginx:
    build: ./nginx
    container_name: nginx
    restart: always
    ports:
      - "80:80"
    depends_on:
      - web_app
  db:
    image: postgres:13-alpine
    container_name: postgresql
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=postgres
    ports:
      - 5432:5432
volumes:
  postgres_data:
nginx dockerfile
FROM nginx
# it is important to remove the default conf, otherwise nginx will not pick up
# your custom conf no matter where you copy it
RUN rm /etc/nginx/conf.d/default.conf
# there are answers online saying to copy it to other places, but only this location worked for me
COPY nginx.conf /etc/nginx/conf.d/
EXPOSE 80
web app dockerfile
FROM python:3.8.16-slim-buster
RUN apt-get update
RUN apt-get install gcc -y && apt-get install python3-dev -y && apt-get install libpq-dev -y
ENV PYTHONPATH=${PYTHONPATH}:${PWD}
RUN pip install poetry
WORKDIR /app
COPY pyproject.toml /app/
COPY . /app/
RUN poetry config virtualenvs.create false
RUN poetry install --no-dev
EXPOSE 5000
CMD ["uwsgi", "--ini", "wsgi.ini"]
wsgi.ini file
[uwsgi]
; "module = app" works when your project entrypoint is app.py;
; if it is wsgi.py instead, this becomes "module = wsgi:app"
module = app
socket = 0.0.0.0:5000
; important: uwsgi by default expects the app instance to be named "application";
; either handle that in your main file or set the callable explicitly
callable = app
processes = 1
threads = 1
master = true
vacuum = true
die-on-term = true
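For reference, a minimal app.py sketch that matches this ini file (assuming Flask; the route is a placeholder). "module = app" loads app.py, and "callable = app" picks the Flask instance named app inside it:
# app.py -- loaded by "module = app" in wsgi.ini
from flask import Flask

app = Flask(__name__)  # "callable = app" points uwsgi at this instance

@app.route("/")
def index():
    return "Hello from Flask behind uwsgi and nginx"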
Edit: the 404 issue was solved, but nginx was still not routing to wsgi.
The solution: changed the location the nginx.conf file is copied to in the nginx dockerfile:
COPY nginx.conf /etc/nginx/nginx.config
Edit 2: the nginx-to-wsgi routing issue is also resolved.
The solution: updated the files as mentioned above.
So this worked for me. There are countless configurations available online, and almost all are the same, yet a slight difference causes the issue. I have updated my question so the files show the content that worked. Hope it helps someone.

ASP.NET Core on docker returns ERR_EMPTY_RESPONSE when browsing at localhost

When I try to run my ASP.NET Core application on docker through http://localhost:5004 I'm getting this response from my browser: ERR_CONNECTION_REFUSED
Here's my docker file
# https://hub.docker.com/_/microsoft-dotnet
FROM mcr.microsoft.com/dotnet/sdk:5.0.404 AS build
WORKDIR /code
COPY . .
# copy everything else and build app
RUN dotnet publish -c release -o /app
# final stage/image
FROM mcr.microsoft.com/dotnet/aspnet:5.0.13
WORKDIR /app
COPY --from=build /app ./
ENTRYPOINT ["dotnet", "ProductCatalogApi.dll", "--server.urls", "http://+:5004"]
my docker-compose.yml file (do not mind the password format):
version: "5.0.4"
networks:
frontend:
backend:
services:
catalog:
build:
context: .\src\Services\ProductCatalogApi
dockerfile: Dockerfile
image: shoes/catalog
environment:
- DatabaseServer=mssqlserver
- DatabaseName=CatalogDb
- DatabaseUser=sa
- DatabasePassword=(passwordhere)
- ASPNETCORE_URLS=http://+:5004
- ASPNETCORE_ENVIRONMENT=Production
container_name: catalogapi
ports:
- "5004:80"
networks:
- backend
- frontend
depends_on:
- mssqlserver
mssqlserver:
image: "mcr.microsoft.com/mssql/server:2019-latest"
ports:
- "1445:1433"
container_name: mssqlcontainer
environment:
- ACCEPT_EULA=Y
- SA_PASSWORD=(passwordhere)
- MSSQL_PID=Developer
networks:
- backend
CreateHostBuilder in Program.cs:
private static IHostBuilder CreateHostBuilder(string[] args) =>
Host.CreateDefaultBuilder(args)
.ConfigureWebHostDefaults(webBuilder =>
{
webBuilder.UseStartup<Startup>();
// webBuilder.UseKestrel(options => { options.Listen(IPAddress.Any, 5000); });
webBuilder.UseKestrel().UseUrls(Environment.GetEnvironmentVariable("ASPNETCORE_URLS"));
});
}
}
I can't figure out what's missing or wrong in my configuration that prevents me from accessing the app on my local machine through localhost:5004 (or whatever port, e.g. 5000).
Note: I run these commands in the following order:
docker-compose build
docker-compose up mssqlserver
docker-compose up catalog
When you set ASPNETCORE_URLS=http://+:5004, your app will listen on port 5004. So that's the port you should map to a port on the host. You've mapped port 80.
Change your docker-compose file to
ports:
- "5004:5004"
Now you should be able to access it. Remember that Swagger by default isn't available in the Production environment, so you won't be able to use the Swagger pages.
You've tried to configure what port the app listens on in a lot of different ways. A good idea might be to remove it all and only configure it in the launchSettings.json file. Then it'll listen on the port specified in launchSettings.json when you run it locally during development and it'll listen on port 80 when run in the container.
The reason it'll listen on port 80 when run in a container is that Microsoft set the ASPNETCORE_URLS environment variable to http://+:80 in the aspnet images.
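For example, a minimal launchSettings.json sketch along those lines (the profile name is an assumption based on the project name):
{
  "profiles": {
    "ProductCatalogApi": {
      "commandName": "Project",
      "applicationUrl": "http://localhost:5004",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    }
  }
}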

Wordpress Docker behind Nginx Reverse Proxy

I have used this page and its threads to solve problems for years, but now I have to ask a question myself.
I have tried to install the WordPress Docker image on my Vserver machine. It pretty much works, but only over HTTP.
To install the WordPress Docker image I used the tutorial from the following link.
Additionally, I added --restart always to the docker run -e ... command.
Then I installed nginx 1.12.xxx to have a reverse proxy, but SSL didn't work. After that, I tried to install a newer version, 1.15.xx, from the nginx repository, with no better results.
I installed a certificate with Let's Encrypt and Certbot.
After that WordPress was running and wp-admin.php was accessible.
But I can't get SSL/HTTPS working. I have already tried many configurations, and even my workmates can't find a solution.
I hope you can get one :)
I tried to configure wp-config.php to enable HTTPS with settings like "$_SERVER['HTTPS'] = 'on';" and others, which had no effect other than breaking things.
I also tried to enable "X-Forwarded-Proto $scheme;" and "FastCGI", which didn't work either. I tried many variations of them.
I tried some SSL plugins for WordPress, but none of them worked.
https://www.bilder-upload.eu/upload/a0eb85-1554884646.png
https://www.bilder-upload.eu/upload/028dc9-1554883515.png
I hope it's a small fault and you can help me easily.
First Install Docker on Ubuntu
Either you go with a docker provider like Bluemix, or you get a virtual machine from SoftLayer or any other provider. In my case I chose a virtual server, so I had to install Docker on Ubuntu LTS, which is really easy. Basically you add a new repository entry to your apt sources and install the latest stable Docker packages. There is also a script available on get.docker.com, but I don't feel comfortable executing a shell script straight from the net with root access. But it's up to you.
wget -qO- https://get.docker.com/ | sh
Docker on Linux does not include docker-compose, unlike, for example, the Docker installation on Mac. Installing docker-compose is straightforward; it can be downloaded from GitHub here: https://github.com/docker/compose/releases
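For example, you can download a pinned release and make it executable (the version number here is just an example; pick a current release from that page):
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose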
Docker-compose
Docker-compose takes care of a Docker setup containing more than one Docker container, including networking and basic monitoring. The following file builds and starts all the containers: nginx, MySQL and WordPress. It also exports the volumes on the host file system, for easy backup and for persistence across container rebuilds, and restarts the containers if they go down.
version: '3'
services:
  db:
    image: mysql:latest
    volumes:
      - ./db:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: easytoguess
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: eveneasier
  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    restart: always
    volumes:
      - ./wordpress:/var/www/html/wp-content
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: eveneasier
      WORDPRESS_DB_NAME: wordpress
  nginx:
    depends_on:
      - wordpress
    restart: always
    build:
      context: .
      dockerfile: Dockerfile-nginx
    ports:
      - "80:80"
MySQL is the first container we bring up, with environment variables for the database such as username, password and database name. The volume mount ./db:/var/lib/mysql takes care of saving the database files outside the docker container, so you can delete the container, start a new one, and still have the same database up and running. Point this wherever you want; in this case the "db" folder under the same directory. Also make sure you come up with decent passwords.
The second container is WordPress. The same applies to its host folder ./wordpress:/var/www/html/wp-content. Furthermore, make sure you have the same user, password and database name configured as in the MySQL container configuration.
Last one is nginx as the internet-facing container. You expose port 80 here. While you just specify an image for the other two, in this one you configure a Dockerfile and a build context to customize nginx for the network setup. If you only want to host static files you can add them via volume mounts, but in our case we need to configure nginx itself, so we need a customized Dockerfile as described below.
Dockerfile for nginx setup
FROM nginx:latest
COPY default.conf /etc/nginx/conf.d/default.conf
VOLUME /var/log/nginx/log/
EXPOSE 80
This Dockerfile inherits everything from the latest nginx image and copies the default.conf file into it. See the next chapter for how to set up the config file.
Nginx config file
server {
    listen 80;
    listen [::]:80;

    server_name www.23-5.eu ansi.23-5.eu;
    access_log /var/log/nginx/log/unsecure.access.log main;

    location / {
        proxy_read_timeout 90;
        proxy_connect_timeout 90;
        proxy_redirect off;
        proxy_pass http://wordpress;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
    }
}
The two listen directives configure the ports we want to listen on: one for IPv4 and one for IPv6. The important part is the proxy configuration in the location / block. proxy_pass redirects all calls to "/" (so without a path in the URL) to the server wordpress; since we used docker-compose, Docker makes that name resolvable via its internal DNS server. The proxy_set_header directives rewrite the HTTP headers so everything is mapped to the external URL; otherwise we would end up with auto-generated links pointing to http://wordpress.
Start the System
If everything is configured, and the docker-compose.yml, default.conf, Dockerfile-nginx and the folders db and wordpress are all in the same folder, we can start everything from that folder with:
docker-compose up --build -d
The parameter "-d" starts the setup in the background (daemon mode). For the very first run I would recommend running without "-d" to see all the debug messages.

docker running nginx as proxy to another webserver (on pi)

I am kind of in over my head with my current small project (although it should not be that hard).
I am trying to run multiple web pages using Docker on my Pi (for testing purposes), all reachable via the Pi's IP.
I currently run a minimal lighttpd (based on the resin/rpi-raspbian image):
docker run -d -v <testconfig>:/etc/lighttpd -p <pi-ip>:8080:80 <image name>
(this server is reachable using the browser on the Pi and on other computers in the network)
For nginx I run another container with a simple config (starting with http://nginx.org/en/docs/beginners_guide.html), containing a webpage and images to test the container config.
This container is reachable at <pi-ip>:80.
Then I tried to add a proxy to the locations (I played around, so now there are 3 locations for the same redirect):
location /prox1/ {
    proxy_pass http://<pi-ip>:8080;
}
location /prox2/ {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_pass http://<pi-ip>:8080;
}
location /prox3/ {
    fastcgi_pass <pi-ip>:8080;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_param QUERY_STRING $query_string;
}
Version 1&2 give a 404 (I tried adding a rewrite, but then I ´nginx redirected on itself due to the /prox1/ being cut).
Version 3 yields a timeout.
Now I am not sure if I still have to dig on the nginx side, or I have to add a connection on the docker side between the containers.
PS: the Pi is running ArchForArm (using Xfce as desktop) because I couldn't find docker-compose in the raspberian repository.
-- EDIT ---:
I currently start everything manually (so no compose file).
The lighttpd is started with:
docker run -d --name mylighttpd -v <testconfig>:/etc/lighttpd -p <pi-ip>:8080:80 <image name>
If I understood it correctly, it is now listening on the local network (in the range of <pi-ip>) on port 8080, which maps to the test web server's port 80. (I have added --name so it is easier to stop it.)
The nginx is started like:
docker run --name mynginx --rm -p <pi-ip>:80:80 -v <config>:/data <image name>
The 8080 was added to the EXPOSE in the Dockerfile.
I now think I misunderstood how two containers on the same machine connect, and that I should add a virtual network; I am currently trying to find some docs on that.
PS: I am not using the already existing nginx-zeroconf from the repo because it tells me it can't read the installed Docker version (and the only example for using it with compose also needs another container which seems unavailable for my architecture).
-- edit2 --:
For the simple proxy_pass the problem could be the URL. I added a deeper folder "prox1" in the "www" folder, containing an index file, and that one is shown when I ask for the page.
It seems like <pi-ip>:80/prox1/ is redirected to <pi-ip>:8080/prox1/, but if I try to rewrite it (inside "location /prox1/") it seems to first delete the prox1, and then decide it is now part of the original location <pi-ip>:80/.
PS: I am aware that it might be a better design to place the system inside another network than "bridge" and only expose the proxy, but I am trying to learn this stuff in small steps.
-- edit3 --:
Trying compose now, but it seems I have encountered another part I don't understand (which is why I wanted to get it working without compose first).
I am trying to follow http://docs.master.dockerproject.org/compose/compose-file/#ipv4-address-ipv6-address
networks:
  backbone:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.16.238.0/16
          gateway: 172.16.238.1

services:
  nginx:
    image: <nginx-image>
    ports:
      - 80:80
    volumes:
      - <config>:/data
    depends_on:
      - lighttpd
    networks:
      backbone:
        ipv4_address: 172.16.238.2
  lighttpd:
    image: <lighttpd-image>
    ports:
      - 8080:80
    volumes:
      - <testconfig>:/etc/lighttpd
    networks:
      backbone:
        ipv4_address: 172.16.238.3
Now I have to find out why I get "user specified IP address is supported only when connecting to networks with user configured subnets"; I assume the main networks block creates a network called "backbone".
-- edit4 --:
It seems the IP blocks have to be written differently from all the docs I have seen; the correct form is:
...
networks:
  backbone:
    ipv4_address: 172.16.0.2/16
...
Now I have to figure out how to drop the prefix part of the URL, and I am good to go.
The core problem seems to have been the missing nginx parameter proxy_redirect, which I found while rambling through the docs. The current nginx.conf is:
(/data/www contains an index.html with a relative link to some images in /data/images)
worker_processes auto;

events {
    worker_connections 1024;
}

http {
    server {
        listen 80;

        location / {
            root /data/www;
        }

        location /images/ {
            root /data;
        }

        location /prox0/ {
            proxy_pass http://lighttpd:80;
            proxy_redirect default;
            proxy_buffering off;
        }
    }
}
Starting manually on the local IP seems to work, but docker-compose is easier:
(if compose is not used, replace lighttpd:80 with the IP and port used when starting the server)
networks:
  backbone:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.16.238.0/16
          gateway: 172.16.238.1

services:
  nginx:
    image: <nginx-image>
    ports:
      - 80:80
    volumes:
      - <config>:/data
    depends_on:
      - lighttpd
    networks:
      backbone:
        ipv4_address: 172.16.0.2
  lighttpd:
    image: <lighttpd-image>
    ports:
      - 8080:80
    volumes:
      - <testconfig>:/etc/lighttpd
    networks:
      backbone:
        ipv4_address: 172.16.0.3

how to reach another container from a dockerised nginx

I have nginx in a docker container, and a nodejs webapp in another docker container.
The nodejs server is reachable from the host server on port 8080.
The nginx docker container is listening on port 80 (I will do the certificate later; first this base must be working).
And now I want a subdomain to be forwarded to this port-8080 nodejs app, let's say app1.example.com.
From outside I can reach the app by the server IP (or hostname) and port 8080, but not on app1.example.com. It does work on app1.example.com:8080 (I have opened up port 8080 on the host server).
I get a "bad gateway" nginx message when approaching app1.example.com. So I get into the first nginx container, but how do I get back to the host server to proxy-pass to port 8080 of the host (and not port 8080 of the nginx container)? I'm looking for the reverse of the EXPOSE syntax.
The main problem is, of course, that if I use the IP and port 127.0.0.1:8080, it will try it inside the nginx container...
So how do I let the nginx container route back to the host's 127.0.0.1:8080?
I have tried 0.0.0.0 and defining an upstream; I have actually been googling a lot and have tried many configurations, but have not yet found a working one.
Edit
Just found out this docker command might help:
sudo docker network inspect bridge
This shows the IP address used inside the containers (in my case 172.17.0.2), but I'm not sure this address stays the same every time docker restarts (e.g. after a server reboot).
Edit
Following alkaline's answer I now have this (but it is still not working):
my docker-compose.yml file:
version: "2"
services:
nginx:
container_name: nginx
image: nginx_img
build: ../docker-nginx-1/
ports:
- "80:80"
networks:
- backbone
nodejs:
container_name: nodejs
image: merites/docker-simple-node-server
build: ../docker-simple-node-server/
networks:
- backbone
expose:
- 8080
networks:
backbone:
driver: bridge
and my nginx (skipped the include in the conf.d folder for simplicity):
worker_processes 1;

events { worker_connections 1024; }

http {
    sendfile on;

    upstream upsrv {
        server nodejs:8080;
    }

    server {
        listen 80;
        server_name app1.example.com;

        location / {
            proxy_pass http://upsrv;
        }
    }
}
edit 31-08-2016
This might be the problem: the network is not named backbone, but after the folder the service was started from:
sudo docker network ls
outputs:
NETWORK ID          NAME                       DRIVER    SCOPE
1167c2b0ec31        bridge                     bridge    local
d06ffaf26fe2        dockerservices1_backbone   bridge    local
5e4ec13d790a        host                       host      local
7d1f8c32f259        none                       null      local
edit 01-09-2016
It might be caused by the way I have set up my nginx docker container?
This is the Dockerfile I used:
############################################################
# Dockerfile to build Nginx Installed Containers
# Based on Ubuntu
############################################################
# Set the base image to Ubuntu
FROM ubuntu
# File Author / Maintainer
MAINTAINER Maintainer Name
# Install Nginx
# Add application repository URL to the default sources
# RUN echo "deb http://archive.ubuntu.com/ubuntu/ raring main universe" >> /etc/apt/sources.list
# Update the repository
RUN apt-get update
# Install necessary tools
RUN apt-get install -y nano wget dialog net-tools
# Download and Install Nginx
RUN apt-get install -y nginx
# Remove the default Nginx configuration file
RUN rm -v /etc/nginx/nginx.conf
# Copy a configuration file from the current directory
ADD nginx.conf /etc/nginx/
# Append "daemon off;" to the beginning of the configuration
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
# Expose ports
EXPOSE 80
# Set the default command to execute
# when creating a new container
CMD service nginx start
My final solution, 1st Sept. 2016
I used this compose file now:
version: "2"
services:
nginx:
image: nginx
container_name: nginx
volumes:
- ./nginx-configs:/etc/nginx/conf.d
ports:
- "80:80"
networks:
- backbone
nodejs:
container_name: nodejs
image: merites/docker-simple-node-server
build: ../docker-simple-node-server/
networks:
- backbone
expose:
- 8080
networks:
backbone:
driver: bridge
In the project folder, from which you run docker-compose up -d, I added a folder named nginx-configs. This folder 'overrides' all the files in the nginx container under /etc/nginx/conf.d.
Therefore, before adding this volume mount, I copied the default.conf out of the nginx container using:
docker exec -t -i container_name /bin/bash
and then cat /etc/nginx/conf.d/default.conf
and added the same default.conf to the nginx-configs folder in the project.
Besides the default I added app1.conf with this content:
upstream upsrv1 {
    server nodejs:8080;
}

server {
    listen 80;
    server_name app1.example.com;

    location / {
        proxy_pass http://upsrv1;
    }
}
This way I can easily add a second app, a third, and so on.
So the basics are working now.
Here's a best practice. Only expose port 80 outside of the host. The nodejs app can be in a private network only accessible through nginx.
version: "2"
services:
nginx:
...
ports:
- "80:80"
networks:
- backbone
nodejs:
...
networks:
- backbone
expose:
- 8080
networks:
backbone:
driver: bridge
In your nginx.conf file, the upstream servers can be listed as nodejs:8080. The docker daemon will resolve it to the correct internal ip.
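For completeness, a minimal sketch of the kind of app the nodejs service could run (an assumption for illustration; the actual merites/docker-simple-node-server image ships its own code):
// index.js -- listens on the port exposed to the compose network
const http = require('http');

http.createServer((req, res) => {
  res.end('hello from the nodejs container\n');
}).listen(8080);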
