I would like to use a reverse proxy with this docker-compose.yml
services:
  nginx-proxy:
    image: nginxproxy/nginx-proxy
    ports:
      - 80:80
      - 443:443
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
    depends_on:
      - hello-world
  hello-world:
    image: nginx
    ports:
      - 9001:80
    environment:
      VIRTUAL_HOST: hello.world
    volumes:
      - ./web:/usr/share/nginx/html
What I expect to happen is that curl -H "Host: hello.world" localhost:80 would return the index.html from the web folder. But it actually returns the "Welcome to nginx!" page, so it seems it does not resolve the VIRTUAL_HOST at all.
What works as expected is the following call: curl -H "Host: hello.world" localhost:9001, which returns the index.html. But why can't it resolve the hostname? Essentially, this is a very minimal example of my setup.
It is running on an arm64 machine with Docker version 20.10.6, build 370c28948e.
What fundamental mistake am I making here? I tried to implement it as described in the README of the nginx-proxy project.
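One thing worth checking (a sketch, assuming the stack is already up) is the nginx configuration that nginx-proxy generates from the Docker socket; if no server block for hello.world shows up there, the proxy never detected the hello-world container at all:
# Dump the generated config and look for "server_name hello.world"
docker-compose exec nginx-proxy cat /etc/nginx/conf.d/default.conf
# Tail the proxy's logs to see how the request with Host: hello.world is handled
docker-compose logs nginx-proxy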
EDIT:
When I start with docker-compose up -d and then run docker-compose ps I get:
Name Command State Ports
-------------------------------------------------------------------------------------------------------------------
minimal-proxy-example_hello-world_1 /docker-entrypoint.sh ngin ... Up 0.0.0.0:9001->80/tcp,:::9001->80/tcp
minimal-proxy-example_nginx-proxy_1 /app/docker-entrypoint.sh ... Up 0.0.0.0:80->80/tcp,:::80->80/tcp
I am running Docker on Arch Linux on a Raspberry Pi 4 using an AArch64 kernel. I tried the same on a normal AMD64 PC with the same result. However, I also tried the very same setup with Docker Desktop on Windows 10, using a Git Bash to be able to use curl, and there it all worked as expected! Why does this work with Docker Desktop on Windows 10 but not on Arch Linux? As a consequence I also posted this question in the Arch Linux forum.
Update
I think the problem relates to this bug: https://github.com/nginx-proxy/nginx-proxy/issues/1548
Related
I'm on a Debian VPS at the OVH cloud provider, running Docker.
Trying to run an apt update on the instance, I noticed that the 40 GB disk was full, which is quite surprising for an instance hosting 2 WordPress blogs.
I tried to run:
sudo du -h /var/lib/docker/containers
One of the containers weighs 27 GB!
27G /var/lib/docker/containers/1618df0(...)d6cc61e
However when I run:
docker container ls --size
The same container only weighs about 500 MB:
1618df0(...) 782c(...) "docker-entrypoint.s…" 10 months ago Up 10 months 80/tcp blog_wordpress_1 2B (virtual 545MB)
The Docker Compose is pretty simple:
wordpress:
  build:
    # call the Dockerfile in ./wordpress
    context: ./wordpress
  restart: always
  environment:
    # Connect WordPress to the database
    WORDPRESS_DB_HOST: db:xxxx
    WORDPRESS_DB_USER: xxxx
    WORDPRESS_DB_PASSWORD: xxxx
    WORDPRESS_DB_NAME: xxxx
  volumes:
    # save the content of WordPress and enable local modifications
    - ./wordpress/data:/var/www/html
  networks:
    - traefik
    - backend
  depends_on:
    - db
    - redis
The Dockerfile:
FROM wordpress
# printf statement mocks answering the prompts from the pecl install
RUN printf "\n \n" | pecl install redis && docker-php-ext-enable redis
RUN /etc/init.d/apache2 restart
Do you know what to investigate to understand this problem?
Thanks
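One way to see what inside that directory is actually taking the space (a sketch; 1618df0* stands for the truncated container ID above):
# List the largest entries under the container's directory, biggest first;
# the *-json.log files there are the container's captured stdout/stderr logs.
sudo du -ah /var/lib/docker/containers/1618df0*/ | sort -rh | head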
Ok, this was actually the logs... The logs are not counted by:
docker container ls --size
So I just truncated the logs, brutally:
sudo sh -c "truncate -s 0 /var/lib/docker/containers/*/*-json.log"
This solves the problem for a while.
For the long term, I added these lines to the WordPress service in the Docker Compose file, then deleted and recreated the containers:
logging:
  options:
    max-size: "10m"
    max-file: "3"
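An alternative is to set the same limits as a default for every container in the Docker daemon configuration instead of per service (a sketch; only containers created after the daemon restart pick up the new defaults):
# /etc/docker/daemon.json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
Then restart the daemon, e.g. with sudo systemctl restart docker.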
I have a Docker-supported ASP.NET Core app.
The docker-compose file looks like this:
version: '3'
services:
  test:
    image: test
    build:
      context: ./Test
      dockerfile: Dockerfile
    networks:
      test_nw:
        aliases:
          - test_alias
  oracledb:
    image: sath89/oracle-12c
    ports:
      - "1521:1521"
    networks:
      test_nw:
        aliases:
          - oracledb_alias
networks:
  test_nw:
But after starting the app, I looked inside the container of the ASP.NET Core app (docker exec -it ... bash) and checked the /etc/hosts file, but the alias of the DB, oracledb_alias, does not appear in it. So the app does not find the DB when using oracledb_alias as the host name in the connection string.
What did I do wrong? How do I solve this problem?
You did nothing wrong. Earlier Docker versions used /etc/hosts for resolving hostnames and links; now Docker uses an internal DNS server for this.
So you don't get to see that information in /etc/hosts. The only thing you can do is run a command and test whether you can resolve and reach the name or not:
$ dig oracledb_alias
$ ping oracledb_alias
$ telnet oracledb_alias 1521
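Note that dig, ping and telnet are often not installed in slim application images. getent, which ships with glibc-based images, exercises the same embedded DNS resolver (a sketch, using the test service from the compose file above):
# Resolve the alias through Docker's internal DNS from inside the app container
docker-compose exec test getent hosts oracledb_alias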
See the below link for more details
https://docs.docker.com/engine/userguide/networking/configure-dns/
I have the following situation:
My application consists of a single web service that calls an external API (say, some SaaS service, Elasticsearch or so). For non-unit-testing purposes we want to control the external service and later also inject faults. The application and the "mocked" API are dockerized, and now I want to use docker-compose to spin all the containers up.
Because the application has several addresses hardcoded (e.g. the hostnames of external services), I cannot change them and need to work around that.
The service container makes a call to http://external-service.com/getsomestuff.
My idea was to use some feature provided by Docker to reroute all outgoing traffic for http://external-service.com/getsomestuff to the mock container without changing the URL.
My docker-compose.yaml looks like:
version: '2'
services:
  service:
    build: ./service
    container_name: my-service1
    ports:
      - "5000:5000"
    command: /bin/sh -c "python3 app.py"
  api:
    build: ./api-mock
    container_name: my-api-mock
    ports:
      - "5001:5000"
    command: /bin/sh -c "python3 app.py"
Finally, I have a driver that just does the following:
curl -XGET localhost:5000/
curl -XPUT localhost:5001/configure?delay=10
curl -XGET localhost:5000/
where the second curl just sets the delay in the mock to 10 seconds.
There are several options I have considered:
- Using iptables-fu (would require modifying Dockerfiles to install it)
- Using Docker networks (this is really unclear to me)
Is there any simple option to achieve what I want?
Edit:
For clarity, here is the relevant part of the service code:
from flask import Flask
import requests

app = Flask(__name__)

@app.route('/')
def do_stuff():
    r = requests.get('http://external-service.com/getsomestuff')
    return process_api_response(r.text)
Docker runs an internal DNS server for user-defined networks. Any unknown host lookups are forwarded to your normal DNS servers.
Version 2+ Compose files automatically create a network for Compose to use, so there are a number of ways to control the hostnames it resolves.
The simplest way is to name your service after the hostname:
version: "2"
services:
  external-service.com:
    image: busybox
    command: sleep 100
  ping:
    image: busybox
    command: ping external-service.com
    depends_on:
      - external-service.com
If you want to keep the service names, you can use links:
version: "2"
services:
  api:
    image: busybox
    command: sleep 100
  ping:
    image: busybox
    links:
      - api:external-service.com
    command: ping external-service.com
    depends_on:
      - api
Or network aliases
version: "2"
services:
  api:
    image: busybox
    command: sleep 100
    networks:
      pingnet:
        aliases:
          - external-service.com
  ping:
    image: busybox
    command: ping external-service.com
    depends_on:
      - api
    networks:
      - pingnet
networks:
  pingnet:
I'm not entirely clear what problem you're trying to solve, but if you're trying to make external-service.com inside the container direct traffic to your "mock" service, you should be able to do that using the extra_hosts directive in your docker-compose.yml file. For example, if I have this:
version: "2"
services:
  example:
    image: myimage
    extra_hosts:
      - google.com:172.23.254.1
That will result in /etc/hosts in the container containing:
172.23.254.1 google.com
And attempts to access http://google.com will hit my web server at 172.23.254.1.
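Since extra_hosts needs a literal IP, one way to point it at the mock container is to pin that container to a fixed address on a user-defined network (a sketch in the version 2 format used above; the network name, subnet and address are made up):
version: '2'
services:
  service:
    build: ./service
    extra_hosts:
      - "external-service.com:172.25.0.100"
    networks:
      - staticnet
  api:
    build: ./api-mock
    networks:
      staticnet:
        # fixed address so extra_hosts above always points at the mock
        ipv4_address: 172.25.0.100
networks:
  staticnet:
    ipam:
      config:
        - subnet: 172.25.0.0/24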
I was able to solve this with links; is there a way to do it with networks in docker-compose?
version: '3'
services:
  MOCK:
    image: api-mock:latest
    container_name: api-mock-container
    ports:
      - "8081:80"
  api:
    image: my-service1:latest
    links:
      - MOCK:external-service.com
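The same effect can also be had with a user-defined network and an alias instead of links (a sketch in the version 3 format above; the network name is an assumption):
version: '3'
services:
  MOCK:
    image: api-mock:latest
    container_name: api-mock-container
    ports:
      - "8081:80"
    networks:
      mocknet:
        aliases:
          # the mock answers to the hardcoded hostname on this network
          - external-service.com
  api:
    image: my-service1:latest
    networks:
      - mocknet
networks:
  mocknet: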
I want to make a platform for web development on my PC (macOS) using Docker. After setting up nginx in a Docker container, I wanted to build nginx, and I got this error:
Cannot locate specified Dockerfile:nginx.docker
I searched on the Internet, but I cannot solve my problem.
Container information (docker ps -a):
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2f268b825ba3 nginx:latest "nginx -g 'daemon off" 39 minutes ago Up 39 minutes 443/tcp, 0.0.0.0:8081->80/tcp dockertutorial_nginx_1
This is my docker-compose.yml file:
nginx:
  container_name: dockertutorial_nginx_1
  build: .
  dockerfile: nginx.docker
  ports:
    - 80:8081
  links:
    - php
  volumes:
    - .:/Users/user/docker-tutorial
php:
  image: php:7.0-fpm
  expose:
    - 9000
  volumes:
    - .:/Users/user/docker-tutorial
How can I solve this problem?
I solved it by fixing the location of the Dockerfile. Docker could not find the Dockerfile, which is why I got this error. Btw, thanks to @Harald Nordgren.
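In other words, with build: . the build context is the directory containing docker-compose.yml, so nginx.docker has to sit inside it (a sketch of a matching layout; the directory name is taken from the volume path above):
docker-tutorial/
├── docker-compose.yml
└── nginx.docker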
I've created a small docker-compose.yml which used to work like a charm to deploy small WordPress instances. It looks like this:
wordpress:
  image: wordpress:latest
  links:
    - mysql
  ports:
    - "1234:80"
  environment:
    WORDPRESS_DB_USER: wordpress
    WORDPRESS_DB_NAME: wordpress
    WORDPRESS_DB_PASSWORD: "password"
    WORDPRESS_DB_HOST: mariadb
    MYSQL_PORT_3306_TCP: 3306
  volumes:
    - /srv/wordpress/:/var/www/html/
mysql:
  image: mariadb:latest
  mem_limit: 256m
  container_name: mariadb
  environment:
    MYSQL_ROOT_PASSWORD: "password"
    MYSQL_DATABASE: wordpress
    MYSQL_USER: wordpress
    MYSQL_PASSWORD: "password"
  volumes:
    - /srv/mariadb:/var/lib/mysql
But when I start it now (maybe since the Docker update to version 1.9.1, build a34a1d5), it fails:
wordpress_1 | Warning: mysqli::mysqli(): (HY000/2002): Connection refused in - on line 10
wordpress_1 |
wordpress_1 | MySQL Connection Error: (2002) Connection refused
When I cat /etc/hosts in the wordpress_1 container there are entries for MySQL:
172.17.0.10 mysql 12a564fdbc56 mariadb
and I am able to ping the MariaDB server.
When I docker-compose up, WordPress gets installed and after several restarts the MariaDB container prints:
Version: '10.0.22-MariaDB-1~jessie' socket: '/var/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution
Which should indicate that it is running, shouldn't it?
How do I get WordPress to be able to connect to the MariaDB container?
To fix this issue, the first thing to do is:
Add the following to the wordpress & database services (in the docker-compose file):
restart: unless-stopped
This will make sure your database is started and initialized before the WordPress container tries to connect to it. Then restart the Docker engine:
sudo restart docker
or (for ubuntu 15+)
sudo service docker restart
Here is the full configuration that worked for me to set up WordPress with MariaDB:
version: '2'
services:
  wordpress:
    image: wordpress:latest
    links:
      - database:mariadb
    environment:
      - WORDPRESS_DB_USER=wordpress
      - WORDPRESS_DB_NAME=mydbname
      - WORDPRESS_TABLE_PREFIX=ab_
      - WORDPRESS_DB_PASSWORD=password
      - WORDPRESS_DB_HOST=mariadb
      - MYSQL_PORT_3306_TCP=3306
    restart: unless-stopped
    ports:
      - "test.dev:80:80"
    working_dir: /var/www/html
    volumes:
      - ./wordpress/:/var/www/html/
  database:
    image: mariadb:latest
    environment:
      - MYSQL_ROOT_PASSWORD=password
      - MYSQL_DATABASE=mydbname
      - MYSQL_USER=wordpress
      - MYSQL_PASSWORD=password
    restart: unless-stopped
    ports:
      - "3306:3306"
The reason for this behaviour was probably related to a recent kernel and Docker update. I noticed several other connection issues in other docker-compose setups. Therefore I restarted the server (not just the Docker service) and haven't had any issues like this ever since.
I had almost the same problem, but just restarting the WordPress container saved me:
$ docker restart wordpress
I hope this helps many people.
I too had trouble here. I was using docker-compose to set up multiple WordPress websites on a single (micro) virtual private server, including phpMyAdmin and jwilder/nginx-proxy as a controller.
$ docker logs XXXX will help indicate areas of concern. In my case, the MariaDB databases would keep restarting all the time.
It turns out that all that stuff just wouldn't fit on a micro 512 MB, single-CPU server. I never received error messages telling me directly that size was the issue, but after adding things up, I realized that when all the databases were starting up, I was running out of memory. An upgrade to a 1 GB, 1-CPU server worked just fine.
I was using your docker-compose.yml and had the same problem. Just restarting didn't fix it. After nearly an hour of researching the logs, I found the problem: the wordpress service started connecting to the mysql service before it had fully started. Simply adding depends_on won't help; see Docker Compose wait for container X before starting Y.
The workaround could be to start the DB server first and, when it has fully started, run docker-compose up for the rest. Or just use an external database service.
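One way to express that wait in Compose itself is a healthcheck on the database plus a depends_on condition, which needs Compose file format 2.1 or newer (a sketch; the service names and root password are the ones from the question):
version: '2.1'
services:
  wordpress:
    image: wordpress:latest
    depends_on:
      mysql:
        condition: service_healthy
  mysql:
    image: mariadb:latest
    healthcheck:
      # succeeds once the server accepts connections
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost", "-ppassword"]
      interval: 10s
      timeout: 5s
      retries: 10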
This simply means you are trying to connect to the wrong host. Use the name of your service as the database host; in your case that would be mysql. You can also fix this by specifying the host with a variable like MYSQL_ROOT_HOST: localhost.
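For the compose file in the question that would be (a sketch; mysql is the service name defined there):
  environment:
    WORDPRESS_DB_HOST: mysql   # the linked database service's name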
In my case I'm using MySQL (not MariaDB), but I had the same problem.
After upgrading the MySQL version, it works fine.
You can see my open source docker-compose configuration: https://github.com/rimiti/wordpress-dockerized-environment