I'm trying to scale my Django app, which runs under the Daphne server inside a Docker container with Supervisor, because Daphne has no workers of its own. I read on the internet that it should be done this way, but I didn't find any explanation of the concept, and the documentation is very obscure.
I managed to run it all inside the container, and the logs look fine. At first I ran my app without supervisord, using multiple containers, and it worked fine. That is, I hosted multiple instances of the same app in multiple containers for redundancy. Then I read that I could run multiple processes of my app inside one container using Supervisor. So I got the app running with supervisord and Daphne inside the container, and the logs say the app is running, but I can't access it from my browser the way I could when I had only one Daphne process per container without supervisord.
UPDATE:
I can even curl my application from inside the container with curl localhost:8000, but I can't curl it by the container's IP address, neither inside nor outside the container. That means it's not visible outside the container despite the container's port being exposed in the docker-compose file.
I'm getting
502 Bad Gateway
nginx/1.18.0
My supervisord config file looks like this:
[supervisord]
nodaemon=true
[supervisorctl]
[fcgi-program:asgi]
User=root
# TCP socket used by Nginx backend upstream
socket=tcp://localhost:8000
# Directory where your site's project files are located
directory=/app
# Each process needs to have a separate socket file, so we use process_num
# Make sure to update "mysite.asgi" to match your project name
command= /usr/local/bin/daphne -u /run/daphne/daphne%(process_num)d.sock --endpoint fd:fileno=0 --access-log - --proxy-headers WBT.asgi:application
# Number of processes to startup, roughly the number of CPUs you have
numprocs=4
# Give each process a unique name so they can be told apart
process_name=asgi%(process_num)d
# Automatically start and recover processes
autostart=true
autorestart=true
# Choose where you want your log to go
stdout_logfile=/home/appuser/supervisor_log.log
redirect_stderr=true
I can't see why Nginx throws a 502 error.
This configuration worked until I introduced supervisor.
My Nginx is also inside its own docker container.
upstream django_daphne {
    hash $remote_addr consistent;
    server django_daphne_1:8000;
    server django_daphne_2:8000;
    server django_daphne_3:8000;
}

server {
    server_name xxx.yyy.zzz.khmm;
    listen 80;

    client_max_body_size 64M;

    location = /favicon.ico { access_log off; log_not_found off; }

    location / {
        proxy_pass http://django_daphne;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # Websocket support
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    location /api/ {
        proxy_pass http://api_app:8888;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
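One quick way to narrow down a 502 like this is to check, from inside the Nginx container, whether the Daphne containers are reachable by name at all. A sketch, assuming the Nginx container is called my_nginx (a placeholder) and that curl is available in the image (otherwise wget works too); the upstream names come from the config above:

docker exec -it my_nginx curl -v http://django_daphne_1:8000/

If the name doesn't resolve, the containers are probably not on the same Docker network; if it resolves but the connection is refused, the app inside the container is only listening on localhost.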
Okay! I found out what the problem is.
Instead of
socket=tcp://localhost:8000
it has to be
socket=tcp://0.0.0.0:8000
so that it can be accessed outside of the container.
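For reference, nothing extra is needed on the Docker side beyond having the containers on the same network; a docker-compose fragment along these lines (the service name and build context are assumptions) lets Nginx reach the Daphne containers by name on port 8000:

services:
  django_daphne:
    build: .
    expose:
      - "8000"   # informational; containers on the same network can reach the port by service name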
I want to set up a reverse proxy to access Kibana using a username and a password, so I followed this tutorial.
When I use the URL http://elastic.local to access Kibana, the request times out and nothing happens.
But when I use 127.0.0.1:80 it reaches Kibana without prompting for any credentials.
I want to access Kibana via something like http://elastic.local but can't make it work. I've already googled a lot of solutions and tried many Nginx configuration files, but none of them seems to work.
This is my configuration file, named 'default' and located in /etc/nginx/sites-available; the symbolic link is already created under /etc/nginx/sites-enabled:
server {
    listen 80;
    server_name elastic.local;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
PS: I already have the ELK stack in my Ubuntu VM, and it's working fine; I can access Kibana using
http://127.0.0.1:5601
I managed to fix the problem. What I did:
Add a mapping between elastic.local and 127.0.0.1 in /etc/hosts like this:
127.0.0.1 elastic.local
then reload nginx using:
sudo systemctl reload nginx.service
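To confirm both the host mapping and the basic-auth challenge from the command line, something like this works; kibanauser is just a placeholder for whatever user you put in htpasswd.users:

# should return 401 without credentials and 200 (or 302) with them
curl -I http://elastic.local/
curl -I -u kibanauser http://elastic.local/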
I have been having problems trying to deploy my web app to Kubernetes.
I wanted to mimic my old deployment, with nginx working as a reverse proxy in front of my backend and frontend services.
I have 3 pieces in my system, nginx, front and back. I built 3 deploys, 3 services and exposed only my nginx service using nodePort: 30050.
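For context, the NodePort exposure described above would look roughly like this; only nodePort: 30050 is taken from the question, while the metadata name and label are assumptions:

apiVersion: v1
kind: Service
metadata:
  name: my-nginx
spec:
  type: NodePort
  selector:
    app: my-nginx
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30050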
Without further delay, this is my nginx.conf:
upstream my-server {
    server myserver:3000;
}

upstream my-front {
    server myfront:4200;
}

server {
    listen 80;
    server_name my-server.com;

    location /api/v1 {
        proxy_pass http://my-server;
    }

    location / {
        proxy_pass http://my-front;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
I installed curl and nslookup inside one of the pods and tried manual requests against the cluster-internal endpoints... tears came to my eyes, everything is working... I am almost a developer worthy of the cloud.
Everything is working smoothly... everything but nginx's DNS resolution.
If I do kubectl exec -it my-nginx-pod -- /bin/bash and curl one of the other two services, e.g. curl myfront:4200, it works properly.
If I try to nslookup one of them, it works as well.
After this I tried to replace, in nginx.conf, the service names with the pods' IPs. After restarting the nginx service everything was working.
Why doesn't nginx resolve the upstream names properly?
I am going nuts over this.
Nginx resolves upstream hostnames once, at startup, and then caches the resulting IPs. To force Nginx to re-resolve DNS at request time, you can introduce a variable:
location /api/v1 {
set $url "http://my-server";
proxy_pass $url;
}
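Note that when proxy_pass takes its address from a variable, nginx needs a resolver directive to perform the lookup at request time, and the variable should point at the Service hostname directly rather than at the upstream block (addresses in an upstream block are still resolved only at startup). A sketch, assuming the cluster DNS Service is reachable at 10.96.0.10 (check the actual ClusterIP with kubectl get svc -n kube-system):

location /api/v1 {
    resolver 10.96.0.10 valid=10s;    # cluster DNS; the IP is an assumption
    set $url "http://myserver:3000";  # the Service hostname, not the upstream block
    proxy_pass $url;
}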
More details can be found in this related answer.
Since what you describe is most likely Nginx caching the resolved addresses, that would also explain why restarting (or reloading) Nginx fixes the problem, at least for a while, until the DNS entry changes again.
I don't think it is specific to Kubernetes. I had the same problem a while ago when Nginx cached the DNS entries of AWS ELBs, which change IPs frequently.
I have a simple reverse proxy nginx.conf:
events {
    worker_connections 1024;
}

http {
    gzip on;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $host:$server_port;

    server {
        listen 80;
        server_name app.local;
        location / {
            proxy_pass http://localhost:3000;
        }
    }
}
localhost:3000 is a docker swarm (1.13) service node app. Everything works great initially, when I request app.local. However whenever I update a service (containers are redeployed):
docker service update --force app
Nginx temporarily thinks something is wrong and doesn't respond to requests to app.local for 30 seconds or so. This is all running on a CentOS 7 server.
I've configured my docker service to redeploy via rolling updates, so from the outside, port 3000 never appears to go down. I can continually request app.local:3000, bypassing nginx, without any perceived downtime.
Nginx is NOT running in a docker container. I've gotta be missing some sort of configuration option.
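For reference, the rolling-update behaviour described above is controlled by flags on docker service update; something along these lines (the values are assumptions) staggers task replacement:

docker service update \
  --update-parallelism 1 \
  --update-delay 10s \
  app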
I'm trying to deploy a basic Phoenix app to a DigitalOcean server running Ubuntu 14.04. I'm using exrm to generate the release. The release works when I test it on my local machine and on the server. I'm following the Phoenix guides on deployment. The thing that doesn't seem to work is the last part, the nginx server setup. For some reason I can't get it to load anything but the default page. When I run the nginx -t command, it says everything is fine.
I've tried editing the /etc/nginx/sites-available files. Doesn't seem to do anything. I've tried restarting the nginx server with
sudo service nginx reload
sudo service nginx restart
But that doesn't seem to work either.
And this is the content of my /etc/nginx/sites-available/my_app.conf
upstream my_app {
    server 127.0.0.1:4000;
}

server {
    listen 80;
    server_name www.example.com;

    location / {
        try_files $uri @proxy;
    }

    location @proxy {
        include proxy_params;
        proxy_redirect off;
        proxy_pass http://my_app;
        # The following two headers need to be set in order
        # to keep the websocket connection open. Otherwise you'll see
        # HTTP 400's being returned from websocket connections.
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}
Update: I tried connecting directly via server_ip:port, and it worked. The URL still doesn't display anything.
Solved: For some reason deleting this solves the problem:
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
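A likely reason deleting those lines helps: $connection_upgrade is not a built-in nginx variable; the Phoenix deployment examples define it with a map block in the http context, so if it isn't defined anywhere, those two lines can't work as intended. If you want to keep WebSocket support, the standard companion snippet (placed inside the http block, e.g. in nginx.conf) is:

map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

With that map in place, the two proxy_set_header lines can stay.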
I'm currently trying to build my own webserver/service and wanted to set things up like this:
Wordpress for the main "blog"
Gitlab for my git repositories
Owncloud for my data storage
I've been using Docker to get a nice little GitLab running, which works perfectly fine, mapped to port :81 on my webserver with my domain.
What annoys me a bit is that Docker images are always bound to a specific port number and are thus not really easy to remember, so I'd love to do something like this:
git.mydomain.com for gitlab
mydomain.com (no subdomain) for my blog
owncloud.mydomain.com for owncloud
As far as I understand, I need a reverse proxy for this, and I decided to use nginx for it. So I set things up like this:
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 80;
        server_name mydomain.com;
        location / {
            proxy_pass http://localhost:84;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }
    }

    server {
        listen 80;
        server_name git.mydomain.com;
        location / {
            proxy_pass http://localhost:81;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
This way, I have git.mydomain.com up and running flawlessly, but my WordPress just shows me a blank page. My DNS is set up like this:
Host   Type    MX   Destination
*      A            IP
#      A            IP
www    CNAME        #
Am I just too stupid, or what's going on here?
I know your question is more specifically about your Nginx proxy configuration, but I thought it would be useful to give you this link which details how to set up an Nginx docker container that automagically deploys configurations for reverse-proxying those docker containers. In other words, you run the reverse proxy and then your other containers, and the Nginx container will route traffic to the others based on hostname.
Basically, you pull the proxy container and run it with a few parameters set in the docker run command, and then you bring up the other containers which you want proxied. Once you've got Docker installed and have pulled the nginx-proxy image, these are the specific commands I use to start the proxy:
docker run -d --name="nginx-proxy" --restart="always" -p 80:80 \
-v /var/run/docker.sock:/tmp/docker.sock jwilder/nginx-proxy
And now the proxy is running. You can verify by pointing a browser at your address, which should return an Nginx 502 or 503 error. You'll get the errors because nothing is yet listening. To start up other containers, it's super easy, like this:
docker run -d --name="example.com" --restart="always" \
-e "VIRTUAL_HOST=example.com" w3b1x/mywebcontainer
That -e "VIRTUAL_HOST=example.com" is all it takes to get your Nginx proxy routing traffic to the container you're starting.
I've been using this particular method since I started with Docker and it's really handy for exactly this kind of situation. The article I linked gives you step-by-step instructions and all the information you'll need. If you need more information (specifically about implementing SSL in this setup), you can check out the git repository for this software.
Your nginx config looks sane; however, you are hitting localhost:xx, which is wrong. It should be either gatewayip:xx or, better, target_private_ip:80.
An easy way to deal with this is to start your containers with --link and to "inject" the IP via a shell script: keep the "original" nginx config with a placeholder instead of the IP, then sed -i with the value from the environment.
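A minimal sketch of that placeholder approach, assuming an environment variable GITLAB_IP set when the container starts and a __GITLAB_IP__ placeholder in the config template (both names are made up for illustration):

#!/bin/sh
# entrypoint.sh: substitute the linked container's IP into the nginx config, then start nginx
sed -i "s/__GITLAB_IP__/${GITLAB_IP}/g" /etc/nginx/conf.d/default.conf
exec nginx -g 'daemon off;'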