Nginx not routing browser requests to uWSGI (Python server running)

I am running my Flask project with uWSGI behind nginx, but nginx is not routing requests to uWSGI when I hit localhost:80/.
My nginx.conf looks like this:
server {
    listen 80;
    # localhost when running locally; I was running on WSL, so I used the machine's IP
    server_name <your machine ip/domain>;
    location / {
        include uwsgi_params;
        # You might see suggestions to use a .sock file or to prefix http:// or unix:,
        # but none of those worked for me. Plain and simple: use your Python server's
        # service name as defined in docker-compose.
        uwsgi_pass web_app:5000;
    }
}
My docker-compose.yml looks like this:
version: '3.7'
services:
  web_app:
    build: .
    container_name: kpi-dashboard
    ports:
      - 5000:5000
    depends_on:
      - db
  nginx:
    build: ./nginx
    container_name: nginx
    restart: always
    ports:
      - "80:80"
    depends_on:
      - web_app
  db:
    image: postgres:13-alpine
    container_name: postgresql
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=postgres
    ports:
      - 5432:5432
volumes:
  postgres_data:
The nginx Dockerfile:
FROM nginx
# It is important to remove the default conf; otherwise nginx will not pick up
# your custom conf no matter where you copy it.
RUN rm /etc/nginx/conf.d/default.conf
# There are answers online suggesting other destinations, but copying into conf.d
# is the only thing that worked for me.
COPY nginx.conf /etc/nginx/conf.d/
EXPOSE 80
The web app Dockerfile:
FROM python:3.8.16-slim-buster
RUN apt-get update && apt-get install -y gcc python3-dev libpq-dev
ENV PYTHONPATH=${PYTHONPATH}:${PWD}
RUN pip install poetry
WORKDIR /app
COPY pyproject.toml /app/
COPY . /app/
RUN poetry config virtualenvs.create false
RUN poetry install --no-dev
EXPOSE 5000
CMD ["uwsgi", "--ini", "wsgi.ini"]
The wsgi.ini file:
[uwsgi]
# If your project entrypoint is app.py, "module = app" is enough;
# if your entrypoint is wsgi.py, this becomes "module = wsgi:app".
module = app
socket = 0.0.0.0:5000
# Important: uWSGI looks for a callable named "application" by default.
# Either handle that in your main file or set "callable" explicitly.
callable = app
processes = 1
threads = 1
master = true
vacuum = true
die-on-term = true
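With module = app and callable = app, uWSGI imports app.py and serves the object named app. For reference, a minimal sketch of such an entrypoint (assuming a plain Flask app; your actual routes will differ):

# app.py -- minimal sketch of the entrypoint implied by "module = app" / "callable = app"
from flask import Flask

app = Flask(__name__)  # this is the object uWSGI looks up via "callable = app"

@app.route("/")
def index():
    return "Hello from Flask behind uWSGI and nginx"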
Edit: the 404 issue is solved, but nginx is still not routing to uWSGI.
The solution: I changed where the nginx Dockerfile copies the nginx.conf file:
COPY nginx.conf /etc/nginx/nginx.config
Edit 2: the nginx-to-uWSGI routing issue is also resolved.
The solution: I updated the files above with the configuration that worked.

This worked for me. There are countless configurations available online and almost all look the same, yet a slight difference can cause the issue. I have updated my question so the files show the content that worked. I hope it helps someone.

Related

ASP.NET Core on docker returns ERR_EMPTY_RESPONSE when browsing at localhost

When I try to run my ASP.NET Core application on Docker through http://localhost:5004, I get this response from my browser: ERR_CONNECTION_REFUSED.
Here's my Dockerfile:
# https://hub.docker.com/_/microsoft-dotnet
FROM mcr.microsoft.com/dotnet/sdk:5.0.404 AS build
WORKDIR /code
COPY . .
# copy everything else and build app
RUN dotnet publish -c release -o /app
# final stage/image
FROM mcr.microsoft.com/dotnet/aspnet:5.0.13
WORKDIR /app
COPY --from=build /app ./
ENTRYPOINT ["dotnet", "ProductCatalogApi.dll", "--server.urls", "http://+:5004"]
my docker-compose.yml file (do not mind the password format):
version: "5.0.4"
networks:
frontend:
backend:
services:
catalog:
build:
context: .\src\Services\ProductCatalogApi
dockerfile: Dockerfile
image: shoes/catalog
environment:
- DatabaseServer=mssqlserver
- DatabaseName=CatalogDb
- DatabaseUser=sa
- DatabasePassword=(passwordhere)
- ASPNETCORE_URLS=http://+:5004
- ASPNETCORE_ENVIRONMENT=Production
container_name: catalogapi
ports:
- "5004:80"
networks:
- backend
- frontend
depends_on:
- mssqlserver
mssqlserver:
image: "mcr.microsoft.com/mssql/server:2019-latest"
ports:
- "1445:1433"
container_name: mssqlcontainer
environment:
- ACCEPT_EULA=Y
- SA_PASSWORD=(passwordhere)
- MSSQL_PID=Developer
networks:
- backend
CreateHostBuilder in Program.cs:
private static IHostBuilder CreateHostBuilder(string[] args) =>
Host.CreateDefaultBuilder(args)
.ConfigureWebHostDefaults(webBuilder =>
{
webBuilder.UseStartup<Startup>();
// webBuilder.UseKestrel(options => { options.Listen(IPAddress.Any, 5000); });
webBuilder.UseKestrel().UseUrls(Environment.GetEnvironmentVariable("ASPNETCORE_URLS"));
});
}
}
I can't figure out what's missing or wrong in my configuration that prevents me from accessing the app on my local machine through localhost:5004 (or whatever port, e.g. 5000).
Note: I run these commands in the following order:
docker-compose build
docker-compose up mssqlserver
docker-compose up catalog
When you set ASPNETCORE_URLS=http://+:5004, your app will listen on port 5004. So that's the port you should map to a port on the host. You've mapped port 80.
Change your docker-compose file to:
ports:
  - "5004:5004"
Now you should be able to access it. Remember that Swagger by default isn't available in the Production environment, so you won't be able to use the Swagger pages.
You've tried to configure what port the app listens on in a lot of different ways. A good idea might be to remove it all and only configure it in the launchSettings.json file. Then it'll listen on the port specified in launchSettings.json when you run it locally during development and it'll listen on port 80 when run in the container.
The reason it'll listen on port 80 when run in a container is that Microsoft set the ASPNETCORE_URLS environment variable to http://+:80 in the aspnet images.
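A minimal sketch of what that launchSettings.json could look like (the profile name and URL are assumptions; adjust them to your project):

{
  "profiles": {
    "ProductCatalogApi": {
      "commandName": "Project",
      "applicationUrl": "http://localhost:5004",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    }
  }
}

launchSettings.json is only used when you run the app locally (dotnet run or from the IDE); it is not part of the published output, which is why the container falls back to the ASPNETCORE_URLS default of port 80.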

How to connect docker nginx with express and react app on windows

I found many tutorials about each of these, but none combining the three of them. I want to learn how to do that because I need to start deploying React websites I made, and I want to deploy them on my domain and host them on my computer for testing purposes.
In what order should I learn these to achieve that? What tutorials do you recommend? I found some tutorials about them but got really confused by the cascade of things to learn. I got Windows PowerShell working, did some Node tutorials and some Express ones, and managed to run nginx on Docker, but couldn't finish.
I am feeling OK with my CSS, JS, and React; I made a little game, got some things working, and did some practice, but now I am kind of stuck. I really appreciate any help or suggestions you can provide to keep me going on my learning path.
Below are the video tutorials I watched:
CSS
https://www.youtube.com/watch?v=1Rs2ND1ryYc
React
https://www.youtube.com/watch?v=DLX62G4lc44
You can use a docker-compose.yml file to define and run multi-container Docker applications, and then build and start all your services with a single command. You can run both Linux and Windows programs and executables in Docker; Docker creates thin virtual environments for your apps.
Here is an example of what you want. This is the folder structure:
|--client
|   |--Dockerfile
|   |--components
|   |--index.js
|--server
|   |--Dockerfile
|   |--index.js
|--nginx
|   |--Dockerfile
|   |--default.conf
|--docker-compose.yml
Dockerfile for the react client:
FROM node:alpine as builder
WORKDIR '/app'
COPY ./package.json ./
RUN npm install
COPY . .
RUN npm run build
Dockerfile for nginx
FROM nginx
COPY ./default.conf /etc/nginx/conf.d/default.conf
default.conf for nginx
upstream client {
    server client:3000;
}

upstream api {
    server api:5000;
}

server {
    listen 80;

    location / {
        proxy_pass http://client;
    }

    location /sockjs-node {
        proxy_pass http://client;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }

    location /api {
        rewrite /api/(.*) /$1 break;
        proxy_pass http://api;
    }
}
Dockerfile for node/express server
FROM node:alpine
WORKDIR '/app'
COPY ./package.json ./
RUN npm install
COPY . .
CMD [ "npm", "run", "start" ]
The docker-compose.yml file. You can swap mongo for any database you want to use for your API. You can build and run everything with docker-compose up --build from the main project directory (where the docker-compose file is):
version: '3'
services:
  nginx:
    restart: always
    build:
      dockerfile: Dockerfile
      context: ./nginx
    ports:
      - '80:80'
  mongo:
    container_name: mongo
    image: mongo
    ports:
      - '27017:27017'
  api:
    restart: always
    build:
      dockerfile: Dockerfile
      context: ./server
    volumes:
      - /app/node_modules
      - ./server:/app
    links:
      - mongo
    ports:
      - '5000:5000'
    depends_on:
      - mongo
  client:
    build:
      dockerfile: Dockerfile
      context: ./client
    volumes:
      - /app/node_modules
      - ./client:/app
    links:
      - api

Permission denied when executing Symfony Demo app through Docker

In my first attempt at running a more complex application through Docker, I selected the Symfony Demo app and assembled a docker build structure to accommodate it.
The first image is httpd: it runs as root (dropping to www-data afterwards) and talks through the 'server' custom network.
The second image is php (fpm): it runs as root (dropping to www-data afterwards) and also talks through the 'server' custom network.
The third image is composer: it runs as UID and GID 1000. Its entrypoint command is composer create-project symfony/symfony-demo symfony-demo
All containers share the same bind mount, where the symfony-demo app is located.
Then I go to localhost:8080 in the browser just to end up with a Symfony error:
The stream or file "/usr/local/apache2/htdocs/symfony-demo/var/log/dev.log" could not be opened: failed to open stream: Permission denied
The thing is... this file mentioned doesn't even exist at /var/log/. That folder is empty.
All files in the bind mount have permissions 1000:1000 (my user UID/GID) and are configured like this: -rw-r--r--.
I've tried running httpd and php as: UID 33 (www-data) and GID 33; UID 0 (root) and GID 33 (and vice-versa); and also as 1000:1000 or 1000:33, but all these combinations (when they successfully get httpd/php to start up) result in the same error.
docker-compose.yml:
version: "3"
services:
httpd:
build: "./httpd/"
container_name: "webserver"
depends_on:
- php
ports:
- "8080:80"
networks:
- server
volumes:
- ../app:/usr/local/apache2/htdocs/
php:
build: "./php/"
depends_on:
- composer
container_name: "php"
networks:
- server
volumes:
- ../app:/usr/local/apache2/htdocs/
composer:
build: "./composer/"
container_name: "composer"
user: "1000:1000"
volumes:
- ../app:/usr/local/apache2/htdocs/
networks:
server:
driver: bridge
composer Dockerfile:
FROM composer:1.8
WORKDIR /usr/local/apache2/htdocs/
CMD ["composer", "create-project", "symfony/symfony-demo", "symfony-demo"]
httpd Dockerfile:
FROM httpd:2.4
COPY ./config/httpd.conf /usr/local/apache2/conf/httpd.conf
COPY ./config/httpd-vhosts.conf /usr/local/apache2/conf/extra/httpd-vhosts.conf
COPY ./config/php-fpm.conf /usr/local/apache2/conf/extra/php-fpm.conf
WORKDIR /usr/local/apache2/htdocs
php Dockerfile:
FROM php:7.3-fpm
RUN cp "$PHP_INI_DIR/php.ini-development" "$PHP_INI_DIR/php.ini"
COPY ./config/timezone.ini $PHP_INI_DIR/conf.d/
COPY ./config/www.conf /usr/local/etc/php-fpm.d/www.conf
RUN apt-get update && \
apt-get install -y libicu-dev
RUN docker-php-ext-install intl
WORKDIR /usr/local/apache2/htdocs
Just give write permission:
chmod -R 777 /usr/local/apache2/htdocs/symfony-demo/var/log/dev.log
Here is the Symfony documentation on file permissions: https://symfony.com/doc/current/setup/file_permissions.html
On second thoughts: my previous solution (as is) doesn't work in RHEL/Fedora/CentOS, because www-data does not exist there by default, causing Docker to fail to start.
My new solution - distro agnostic
For simplicity, I've decided to have composer's entrypoint script set -rw-rw---- permissions on /app. That way, I can run composer as user 1000 and the same group PHP runs as (a new user and group were created just for that). Now PHP can write to the SQLite3 database files inside the project, and composer writes as user 1000, whose files I can edit.
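A minimal sketch of what such an entrypoint could look like (the script name and exact chmod invocation are my assumptions; the point is just granting the group write bit):

#!/bin/sh
# docker-entrypoint.sh (hypothetical) for the composer container
set -e
composer create-project symfony/symfony-demo symfony-demo
# -rw-rw---- on files; directories also get the execute bit via X
chmod -R u=rwX,g=rwX,o= symfony-demo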
It's basically what #habibun said, but I only need to give group write permission, not full write permission.
Be aware that SELinux will deny composer write access to your bind mount. You must configure SELinux to allow this operation.
This is my repository where this project is stored, if you're looking for a reference: https://github.com/o-alquimista/symfony-demo-docker/
User namespace solution - works fine for Debian/Ubuntu hosts
Composer should write to /app as user 33 (www-data), and so should php and httpd after they drop privileges. I was able to keep the present permission settings (only the owner can write) by making use of user namespaces. The user www-data is now mapped to the range starting at 967, which results in container user 33 being equal to me (user 1000).
Now all containers can write where they need to, and I can edit the project files as an unprivileged user.
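For reference, user namespace remapping is enabled on the Docker daemon roughly like this (the exact subordinate range is an assumption based on the 967 offset mentioned above):

# /etc/docker/daemon.json
{
  "userns-remap": "www-data"
}

# /etc/subuid and /etc/subgid on the host
www-data:967:65536

With that mapping, container UID 33 (www-data) becomes host UID 967 + 33 = 1000. Restart the Docker daemon after changing daemon.json.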

Docker - how do I restart nginx to apply a custom config?

I am trying to configure a LEMP dev environment with Docker and am having trouble with nginx because I can't seem to restart nginx once it has its new configuration.
docker-compose.yml:
version: '3'
services:
  nginx:
    image: nginx
    ports:
      - '8080:80'
    volumes:
      - ./nginx/log:/var/log/nginx
      - ./nginx/config/default:/etc/nginx/sites-available/default
      - ../wordpress:/var/www/wordpress
  php:
    image: php:fpm
    ports:
      - 9000:9000
  mysql:
    image: mysql
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: secret
    volumes:
      - ./mysql/data:/var/lib/mysql
I have a custom nginx config that replaces /etc/nginx/sites-available/default, and in a normal Ubuntu environment I would run service nginx restart to pull in the new config.
However, if I try to do that in this Docker environment, the nginx container exits with code 1:
docker-compose exec nginx sh
service nginx restart
-exits with code 1-
How can I use nginx with a custom /etc/nginx/sites-available/default file?
Basically, you can reload the nginx configuration by invoking this command:
docker exec <nginx-container-name-or-id> nginx -s reload
To reload nginx with docker-compose specifically (rather than restart the whole container, causing downtime):
docker-compose exec nginx nginx -s reload
Docker containers should run a single application in the foreground. When the process it launches as pid 1 inside the container exits, so does the container (similar to how killing pid 1 on a Linux server will shut down that machine). This process isn't managed by the OS service command.
The normal way to reload a configuration in a container is to restart the container. Since you're using docker-compose, that would be docker-compose restart nginx. Note that if this config was part of your image, you would need to rebuild and redeploy a new container, but since you're using a volume, that isn't necessary.
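Concretely, for the volume-mounted config above, either of the first two commands applies the new config; the rebuild variant would only matter if the config were baked into a custom image (which would require a build: section on the nginx service):

# config mounted as a volume: reload or restart is enough
docker-compose exec nginx nginx -s reload
docker-compose restart nginx

# config baked into a custom image (hypothetical)
docker-compose build nginx
docker-compose up -d nginx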

how to reach another container from a dockerised nginx

I have nginx in a docker container, and a nodejs webapp in another docker container.
The nodejs server is reachable from the host server on port 8080.
The nginx docker container is listening to port 80 (will do the certificate later, first this base must be working).
Now I want a subdomain to be forwarded to this nodejs app on port 8080, let's say app1.example.com.
From outside, I can reach the app by the server IP (or hostname) and port 8080, but not on app1.example.com. It does work on app1.example.com:8080 (I have opened up port 8080 on the host server).
I get an nginx bad gateway message when visiting app1.example.com. So the request does reach the nginx container, but how do I get back to the host server so I can proxy_pass to port 8080 of the host (and not port 8080 of the nginx container)? I am looking for the reverse of the EXPOSE syntax.
The main problem is, of course, that if I use 127.0.0.1:8080, nginx will try that address inside its own container.
So how do I let the nginx container route back to the host's 127.0.0.1:8080?
I have tried 0.0.0.0 and defining an upstream; I have been googling a lot and have tried many configurations, but have not yet found a working one.
Edit
I just found out that this Docker command might help:
sudo docker network inspect bridge
This shows the IP address used inside the containers (in my case 172.17.0.2), but I am not sure this address stays the same every time Docker restarts (e.g. after a server reboot).
Edit
Following alkaline's answer, I now have this (but it is still not working):
my docker-compose.yml file:
version: "2"
services:
nginx:
container_name: nginx
image: nginx_img
build: ../docker-nginx-1/
ports:
- "80:80"
networks:
- backbone
nodejs:
container_name: nodejs
image: merites/docker-simple-node-server
build: ../docker-simple-node-server/
networks:
- backbone
expose:
- 8080
networks:
backbone:
driver: bridge
and my nginx (skipped the include in the conf.d folder for simplicity):
worker_processes 1;

events { worker_connections 1024; }

http {
    sendfile on;

    upstream upsrv {
        server nodejs:8080;
    }

    server {
        listen 80;
        server_name app1.example.com;

        location / {
            proxy_pass http://upsrv;
        }
    }
}
Edit 31-08-2016
This might be the problem: the network name is not backbone, but is prefixed with the folder I started the service from:
sudo docker network ls
outputs:
NETWORK ID          NAME                       DRIVER    SCOPE
1167c2b0ec31        bridge                     bridge    local
d06ffaf26fe2        dockerservices1_backbone   bridge    local
5e4ec13d790a        host                       host      local
7d1f8c32f259        none                       null      local
Edit 01-09-2016
It might be caused by the way I set up my nginx Docker container?
This is the Dockerfile I used:
############################################################
# Dockerfile to build Nginx Installed Containers
# Based on Ubuntu
############################################################
# Set the base image to Ubuntu
FROM ubuntu
# File Author / Maintainer
MAINTAINER Maintainer Name
# Install Nginx
# Add application repository URL to the default sources
# RUN echo "deb http://archive.ubuntu.com/ubuntu/ raring main universe" >> /etc/apt/sources.list
# Update the repository
RUN apt-get update
# Install necessary tools
RUN apt-get install -y nano wget dialog net-tools
# Download and Install Nginx
RUN apt-get install -y nginx
# Remove the default Nginx configuration file
RUN rm -v /etc/nginx/nginx.conf
# Copy a configuration file from the current directory
ADD nginx.conf /etc/nginx/
# Append "daemon off;" to the beginning of the configuration
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
# Expose ports
EXPOSE 80
# Set the default command to execute
# when creating a new container
CMD service nginx start
My final solution (1 September 2016)
This is the compose file I use now:
version: "2"
services:
nginx:
image: nginx
container_name: nginx
volumes:
- ./nginx-configs:/etc/nginx/conf.d
ports:
- "80:80"
networks:
- backbone
nodejs:
container_name: nodejs
image: merites/docker-simple-node-server
build: ../docker-simple-node-server/
networks:
- backbone
expose:
- 8080
networks:
backbone:
driver: bridge
In the project folder, from which you run docker-compose up -d, I added a folder named nginx-configs. This folder 'overrides' all the files in the nginx container under /etc/nginx/conf.d.
Therefore, before adding this volume mount, I copied the default.conf out of the nginx container using:
docker exec -t -i container_name /bin/bash
and then cat /etc/nginx/conf.d/default.conf, and added the same default.conf to the nginx-configs folder in the project.
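An equivalent way to grab that file, assuming the container is named nginx as in the compose file above, is docker cp:

docker cp nginx:/etc/nginx/conf.d/default.conf ./nginx-configs/default.conf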
Besides the default I added app1.conf with this content:
upstream upsrv1 {
server nodejs:8080;
}
server {
listen 80;
server_name app1.example.com;
location / {
proxy_pass http://upsrv1;
}
}
This way, I can easily add a second app, a third, and so on.
So the basics are working now.
Here's a best practice: only expose port 80 outside of the host. The nodejs app can be on a private network that is only accessible through nginx.
version: "2"
services:
nginx:
...
ports:
- "80:80"
networks:
- backbone
nodejs:
...
networks:
- backbone
expose:
- 8080
networks:
backbone:
driver: bridge
In your nginx.conf file, the upstream servers can be listed as nodejs:8080. The Docker daemon will resolve it to the correct internal IP.
