Grasshopper is a PHP web application that connects to a BTicino home automation gateway.
The two recommended ways to use it are either to run the RPi image provided with all components installed, or to install it on a Linux machine with a LASP (Apache, SQLite, PHP) or LESP (nginx, SQLite, PHP) setup.
I am trying to set Grasshopper up with docker-compose by creating two services: the DB and the Apache web server. For the DB I've tried the nouchka/sqlite3 image and the keinos/sqlite3 one. Both unfortunately come without documentation, and I can't find the mandatory environment variables (root user, password and so on) anywhere.
What I have now only loads the site without a DB connection:
version: "3"
services:
  database:
    image: keinos/sqlite3 #nouchka/sqlite3
    #stdin_open: true
    #tty: true
    volumes:
      - ./db/:/root/db/
    restart: always
  webapp:
    build: .
    #context: .
    #dockerfile: Dockerfile-nginx
    ports:
      - "8080:80"
    depends_on:
      - database
    restart: always
The Dockerfile:
FROM php:7.2-apache
COPY ./grasshopper_v5_application/ /var/www/html/
Grasshopper documentation: https://sourceforge.net/projects/grasshopperwebapp/files/Grasshopper%20V5%20Installation%20and%20Configuration%20Guide.pdf/download
Grasshopper files: https://sourceforge.net/projects/grasshopperwebapp/files/
Related
I have two separate folders, one for backend and one for frontend services:
backend/docker-compose.yml
frontend/docker-compose.yml
The backend has a headless WordPress installation on nginx, whose purpose is to serve the frontend as an API service. The frontend runs on Next.js. Here are the two docker-compose.yml files:
backend/docker-compose.yml
version: '3.9'
services:
  nginx:
    image: nginx:latest
    container_name: my-app-nginx
    ports:
      - '80:80'
      - '443:443'
      - '8080:8080'
    ...
    networks:
      - internal-network
  mysql:
    ...
    networks:
      - internal-network
  wordpress:
    ...
    networks:
      - internal-network
networks:
  internal-network:
    external: true
frontend/docker-compose.yml
version: '3.9'
services:
  nextjs:
    build:
      ...
    container_name: my-app-nextjs
    restart: always
    ports:
      - 3000:3000
    networks:
      - internal-network
networks:
  internal-network:
    driver: bridge
    name: internal-network
In the frontend I use the Fetch API in Next.js as follows:
fetch('http://my-app-nginx/wp-json/v1/enpoint', ...)
I also tried with ports 80 and 8080, without success.
The sequence of commands I run is:
docker network create internal-network
in the backend/ folder, docker-compose up -d (all backend containers run fine; I can fetch data from the WordPress API with Postman)
in the frontend/ folder, docker-compose up -d fails with the error Error: getaddrinfo EAI_AGAIN my-app-nginx
I am not a very experienced Docker user, so I might be missing something here, but I understand there may be a networking issue between the containers. I have read many answers on this topic but couldn't figure it out.
Any recommendations?
Just to add a proper answer:
Generally you should NOT be executing multiple docker-compose up -d commands.
If you want to combine two separate docker-compose configs and run them as one (slightly preferable), you can use the extends keyword as described in the docs (a sketch follows below).
However, I would suggest that you treat it as a single docker-compose project which can itself have multiple nested git repositories:
Example SO answer - Git repository setup for a Docker application consisting of multiple repositories
You can keep your code in a mono-repo or multiple repos, up to you
A real working example that backs up this approach:
headless-wordpress-nextjs-starter-kit and its docker-compose.yml
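A minimal sketch of the extends approach mentioned above, assuming a combined docker-compose.yml at the repository root (the service names come from the two files above; support for extends varies by Compose version, so treat this as an outline rather than a drop-in config):

# docker-compose.yml (combined project, sketch)
version: '3.9'
services:
  nginx:
    extends:
      file: backend/docker-compose.yml
      service: nginx
  mysql:
    extends:
      file: backend/docker-compose.yml
      service: mysql
  wordpress:
    extends:
      file: backend/docker-compose.yml
      service: wordpress
  nextjs:
    extends:
      file: frontend/docker-compose.yml
      service: nextjs

Any networks the extended services reference (such as internal-network) still need to be declared in this combined file.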
I have found this thread:
Communication between multiple docker-compose projects
Looking at the most upvoted answers, I wonder if it is related to the network prefix?
It seems like the internal-network would be prefixed with frontend_. On the other hand, you can also try to reference the network by name in backend/docker-compose.yml:
networks:
  internal-network:
    external:
      name: internal-network
The issue is that external networks need the network name specified (because Docker Compose prefixes resources by default). Your backend docker-compose networks section should look like this:
networks:
  internal-network:
    name: internal-network
    external: true
You are creating the network in your frontend docker-compose, so you should omit the docker network create ... command (you just need to bring the frontend up first). Or instead treat the network as external in both projects and keep the command, in which case use the named external network, as shown above, in your frontend docker-compose as well.
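If you go with the second option (external in both projects, keeping the docker network create command), the frontend networks section would then look like this sketch:

networks:
  internal-network:
    name: internal-network
    external: true

With the network marked external on both sides, neither project prefixes or recreates it, and the Next.js container can reach nginx at http://my-app-nginx over the shared network.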
I have a question about how exactly docker-compose handles environment variables.
services:
  wp:
    image: wordpress:latest
    container_name: "wp"
    restart: unless-stopped
    links:
      - wpdb
    environment:
      - TZ=Europe/Berlin
      - WORDPRESS_DB_HOST=wpdb:3306
      - WORDPRESS_DB_USER=wordpress
      - WORDPRESS_DB_PASSWORD=password
      - WORDPRESS_DB_NAME=wp
    volumes:
      - ./data:/var/www/html
    labels:
      - "traefik.enable=true"
      - "traefik.backend=wp"
      - "traefik.frontend.rule=Host:MASKED"
      - "traefik.port=80"
      - "traefik.docker.network=web"
    networks:
      - internal
      - web
  wpdb:
    image: mariadb:latest
    restart: unless-stopped
    container_name: "wpdb"
    environment:
      - MYSQL_ROOT_PASSWORD=1234
      - MYSQL_USER=wordpress
      - MYSQL_PASSWORD=password
      - MYSQL_DATABASE=wp
    networks:
      - internal
    labels:
      - "traefik.enable=false"
    volumes:
      - ./sql:/var/lib/mysql
volumes:
  data:
  sql:
networks:
  web:
    external: true
  internal:
The compose file works great. The containers are created and work perfectly.
But when I change the defaults for WORDPRESS_DB_PASSWORD=password and MYSQL_PASSWORD=password, the WordPress container throws an "access denied for user" error. I also tried killing the containers and volumes.
Hopefully someone has a hint for me.
You should be doing a docker-compose down -v, which deletes the named volumes declared in the volumes section. The only downside is that you lose all the data created by the service the first time around.
Here is how I could reproduce it -
I used your compose file as a reference and, the first time, used the default passwords you mentioned. The services come up fine; I install WordPress and press Ctrl+C to bring the services down. At this point all the MySQL data has been written into the sql named volume.
Ctrl+C (or docker-compose down) only removes the containers and networks defined in the file, not the volumes. Read more about it here.
Now when you change the password and bring the service back up, it still uses the old volume, which contains your old password.
So use a docker-compose down -v to remove the volumes too and give it a try.
Here are the steps I used to reproduce it:
Ctrl+C to stop all the services, then update the password in docker-compose.yml and run docker-compose up again to get the access denied error.
Do a docker-compose down -v to clean up the volumes too, and then do a docker-compose up.
Note that docker-compose down -v removes all the data created by the prior run, so use it cautiously.
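As a side note on how docker-compose handles environment variables: you can keep WORDPRESS_DB_PASSWORD and MYSQL_PASSWORD from drifting apart by sourcing both from one value via variable substitution. A minimal sketch, assuming an .env file next to the docker-compose.yml (the DB_PASSWORD name is illustrative, not part of the original config):

# .env
DB_PASSWORD=changeme

# docker-compose.yml (only the relevant lines)
services:
  wp:
    environment:
      - WORDPRESS_DB_PASSWORD=${DB_PASSWORD}
  wpdb:
    environment:
      - MYSQL_PASSWORD=${DB_PASSWORD}

Changing the value in .env still requires a docker-compose down -v for the reasons described above, because MariaDB only reads MYSQL_PASSWORD when it initializes an empty data directory.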
I'm working on a Docker image for a dev environment for a Symfony 4 application. I'm building it on Alpine, PHP-FPM and nginx.
I have configured the application, but performance was not great (~700 ms) even for a simple hello-world application, so I thought I could make it faster somehow.
First of all, I went for the volume consistency semantics and configured the volumes to use the cached mode. Then I moved vendor to a separate volume, as it caused most of the performance issues.
Second, I wanted to use docker-sync, as the benchmarks looked amazing. I configured it and everything ran smoothly. But now I've realized that the container is not reacting to changes in the code.
First I thought it had something to do with the Symfony 4 cache, so I connected to the php container and ran php bin/console cache:clear. The cache was cleared, but nothing changed. I double-checked the files on both the web and php containers, and the files are changed there. I'm wondering if there is something more I need to configure, or why Symfony is not reacting to changes.
UPDATE
Symfony/the container does not react to changes even after a complete image rebuild and removal of the consistency settings and docker-sync. So basically it's plain Docker with a hello-world Symfony 4 application, and it does not react to changes. Changes are not even synced into the container.
Configuration:
# docker-compose-dev.yml
version: '3'
volumes:
  symfony-sync:
    external: true
services:
  php:
    build: build/php
    expose:
      - 9000
    volumes:
      - symfony-sync:/var/www/html/symfony
      - ./vendor:/var/www/html/vendor
  web:
    build: build/nginx
    restart: always
    expose:
      - 80
      - 443
    ports:
      - 8080:80
      - 8081:443
    depends_on:
      - php
    volumes:
      - symfony-sync:/var/www/html/symfony
      - ./vendor:/var/www/html/vendor
networks:
  default:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.4.0.0/16
# docker-sync.yml
version: "2"
options:
  verbose: true
syncs:
  symfony-sync:
    src: './symfony'
    sync_excludes:
      - '.git'
      - 'composer.lock'
The Makefile I use for running the app:
start:
	docker-sync stop
	docker-sync clean
	cd symfony
	docker volume create --name=symfony-sync
	cd ..
	docker-compose -f docker-compose-dev.yml down
	docker-compose -f docker-compose-dev.yml up -d
	docker-sync start

stop:
	docker-compose stop
	docker-sync stop
I recommend using dinghy instead of Docker for Mac: https://github.com/codekitchen/dinghy
Also have a look at this repo for an example: https://github.com/jorge07/symfony-4-es-cqrs-boilerplate
If this doesn't work, the problem will be in your host or Dockerfile. Be sure you don't enable OPcache for development.
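On the OPcache point, a minimal sketch of a dev-only override, assuming a docker/php/dev.ini file mounted into PHP's conf.d directory (the file name and path are illustrative; the symfony-sync volume from the question is replaced here by a plain bind mount):

# docker-compose-dev.yml (php service only, sketch)
services:
  php:
    build: build/php
    volumes:
      - ./symfony:/var/www/html/symfony:cached
      - ./docker/php/dev.ini:/usr/local/etc/php/conf.d/dev.ini:ro
# docker/php/dev.ini would contain either
#   opcache.enable=0
# or, to keep OPcache but pick up file changes,
#   opcache.validate_timestamps=1
#   opcache.revalidate_freq=0

This way, edits on the host are reflected inside the container on the next request.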
I'm here because I'm developing an app with Symfony 3, and I have some questions about deploying it.
Currently I use docker-compose:
version: '2'
services:
  nginx:
    build: ./docker/nginx/
    ports:
      - 8081:80
    volumes:
      - .:/home/docker:ro
      - ./docker/nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
      - ./docker/nginx/nginx.conf:/etc/nginx/nginx.conf:ro
    networks:
      - default
  php:
    build: ./docker/php/
    volumes:
      - .:/home/docker:rw
      - ./docker/php/php.ini:/usr/local/etc/php/conf.d/custom.ini:ro
    working_dir: /home/docker
    networks:
      - default
    dns_search:
      - php
  db:
    image: mariadb:latest
    ports:
      - 3307:3306
    environment:
      - MYSQL_ROOT_PASSWORD=collectionManager
      - MYSQL_USER=collectionManager
      - MYSQL_PASSWORD=collectionManager
      - MYSQL_DATABASE=collectionManager
    volumes:
      - mariadb_data:/var/lib/mysql
    networks:
      - default
    dns_search:
      - db
  search:
    build: ./docker/search/
    ports:
      - 9200:9200
      - 9300:9300
    volumes:
      - elasticsearch_data:/usr/share/elasticsearch/data
    networks:
      - default
    dns_search:
      - search
volumes:
  mariadb_data:
    driver: local
  elasticsearch_data:
    driver: local
networks:
  default:
nginx is clear, the engine is PHP-FPM with some extensions and Composer, db is MariaDB, and search is Elasticsearch with some plugins.
Before Docker, I used Magallanes or Deployer when I wanted to deploy the web app.
With Docker I can use the docker-compose file and recreate the images and containers on the server; I can also save my containers as images or tar archives and load them on the server. That's fine for nginx and PHP-FPM, but what about Elasticsearch and the DB? I need to keep their data across future code updates. When I deploy the code I also need to run a Doctrine migration and maybe some other commands, which Deployer handles perfectly along with some other interesting things. So how do I deploy the code with Docker? Can we use both: Deployer for the code and Docker for the services?
Thanks a lot for your help.
First of all, please try using user-defined networks; they have additional features compared to legacy linking, such as an embedded DNS server. This means you can reach other containers on the same network by their names from your applications. Containers on one user-defined network are also isolated from containers on another user-defined network.
To create a user defined network:
docker network create --driver bridge <networkname>
Example compose service using a user-defined network:
search:
  restart: unless-stopped
  build: ./docker/search/
  ports:
    - "9200:9200"
    - "9300:9300"
  networks:
    - <networkname>
Second: I noticed you didn't use data volumes for your DB and Elasticsearch.
You need to mount volumes at certain points to keep your persistent data, as sketched below.
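For reference, a minimal sketch of named data volumes for the db and search services (the volume names are illustrative):

services:
  db:
    image: mariadb:latest
    volumes:
      - mariadb_data:/var/lib/mysql
  search:
    build: ./docker/search/
    volumes:
      - elasticsearch_data:/usr/share/elasticsearch/data
volumes:
  mariadb_data:
    driver: local
  elasticsearch_data:
    driver: local

Docker manages these volumes outside the container lifecycle, so recreating the containers during a deployment does not wipe the database or the Elasticsearch index.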
Third: when you export your containers, the export won't contain the mounted volumes. You need to back up the volume data and migrate it manually.
To back up volume data:
docker run --rm --volumes-from db -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata
The above command creates a container, mounts the volumes from the db container, and mounts the current directory into the container as /backup. It uses the ubuntu image and the tar command to create a backup of /dbdata inside the container (consider changing this to your DB data directory) into /backup, which is mounted from your Docker host. After the operation completes, the transient container is removed (thanks to the --rm switch).
To restore:
Copy the tar archive to the remote location and create your container with an empty mounted volume, then extract the tar archive into that volume using the following command:
docker run --rm --volumes-from dbstore2 -v $(pwd):/backup ubuntu bash -c "cd /dbdata && tar xvf /backup/backup.tar --strip 1"
I've created a small docker-compose.yml which used to work like a charm for deploying small WordPress instances. It looks like this:
wordpress:
  image: wordpress:latest
  links:
    - mysql
  ports:
    - "1234:80"
  environment:
    WORDPRESS_DB_USER: wordpress
    WORDPRESS_DB_NAME: wordpress
    WORDPRESS_DB_PASSWORD: "password"
    WORDPRESS_DB_HOST: mariadb
    MYSQL_PORT_3306_TCP: 3306
  volumes:
    - /srv/wordpress/:/var/www/html/
mysql:
  image: mariadb:latest
  mem_limit: 256m
  container_name: mariadb
  environment:
    MYSQL_ROOT_PASSWORD: "password"
    MYSQL_DATABASE: wordpress
    MYSQL_USER: wordpress
    MYSQL_PASSWORD: "password"
  volumes:
    - /srv/mariadb:/var/lib/mysql
But when I start it now (maybe since the Docker update to version 1.9.1, build a34a1d5), it fails:
wordpress_1 | Warning: mysqli::mysqli(): (HY000/2002): Connection refused in - on line 10
wordpress_1 |
wordpress_1 | MySQL Connection Error: (2002) Connection refused
When I cat /etc/hosts of wordpress_1, there are entries for MySQL:
172.17.0.10 mysql 12a564fdbc56 mariadb
and I am able to ping the MariaDB server.
When I run docker-compose up, WordPress gets installed, and after several restarts the MariaDB container prints:
Version: '10.0.22-MariaDB-1~jessie' socket: '/var/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution
which should indicate that it is running, shouldn't it?
How do I get the WordPress to be able to connect to the MariaDB container?
To fix this issue, the first thing to do is:
Add the following to the wordpress and database containers (in the docker-compose file):
restart: unless-stopped
This will make sure your database is started and initialized before the wordpress container tries to connect to it. Then restart the Docker engine:
sudo restart docker
or (for Ubuntu 15+)
sudo service docker restart
Here is the full configuration that worked for me to set up WordPress with MariaDB:
version: '2'
services:
  wordpress:
    image: wordpress:latest
    links:
      - database:mariadb
    environment:
      - WORDPRESS_DB_USER=wordpress
      - WORDPRESS_DB_NAME=mydbname
      - WORDPRESS_TABLE_PREFIX=ab_
      - WORDPRESS_DB_PASSWORD=password
      - WORDPRESS_DB_HOST=mariadb
      - MYSQL_PORT_3306_TCP=3306
    restart: unless-stopped
    ports:
      - "test.dev:80:80"
    working_dir: /var/www/html
    volumes:
      - ./wordpress/:/var/www/html/
  database:
    image: mariadb:latest
    environment:
      - MYSQL_ROOT_PASSWORD=password
      - MYSQL_DATABASE=mydbname
      - MYSQL_USER=wordpress
      - MYSQL_PASSWORD=password
    restart: unless-stopped
    ports:
      - "3306:3306"
The reason for this behaviour was probably related to a recent kernel and Docker update. I noticed several other connection issues in other docker-compose setups. Therefore I restarted the server (not just the Docker service) and haven't had any issues like this since.
I had almost the same problem, but just restarting the WordPress container saved me:
$ docker restart wordpress
I hope this helps many people.
I too had troubles here. I was using docker-compose to set up multiple WordPress websites on a single (micro) Virtual Private Server, including phpMyAdmin and jwilder/nginx-proxy as a controller.
$ docker logs XXXX will help indicate areas of concern. In my case, the MariaDB databases would keep restarting all the time.
It turns out that all that stuff just wouldn't fit on a micro 512 MB, single-CPU server. I never received error messages that told me directly that size was the issue, but after adding things up, I realized that when all the databases were starting up I was running out of memory. An upgrade to a 1 GB, single-CPU service worked just fine.
I was using your docker-compose.yml and had the same problem. Just restarting didn't fix it. After nearly an hour of researching the logs, I found the problem: the wordpress service started connecting to the mysql service before it had fully started. Simply adding depends_on won't help. See Docker Compose wait for container X before starting Y.
The workaround could be to start the DB server first and, once it has fully started, run docker-compose up. Or just use an external service.
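One way to actually wait for the database is a healthcheck on the DB service combined with the condition form of depends_on. A minimal sketch, assuming a Compose version that supports depends_on conditions (the healthcheck command may need adjusting for your MariaDB image):

services:
  mysql:
    image: mariadb:latest
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost", "-uroot", "-ppassword"]
      interval: 10s
      timeout: 5s
      retries: 5
  wordpress:
    image: wordpress:latest
    depends_on:
      mysql:
        condition: service_healthy

With this, Compose only starts the wordpress container once the healthcheck reports the database as ready, instead of merely after the mysql container has been created.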
This simply means you are trying to connect to the wrong host. To connect from another container, just use the name of your service as the database host; in your case it would be mysql. You can also fix this by specifying the host with a default variable, like MYSQL_ROOT_HOST: localhost.
In my case I'm using MySQL (not MariaDB), but I had the same problem.
After upgrading the MySQL version, it works fine.
You can see my open-source docker-compose configuration: https://github.com/rimiti/wordpress-dockerized-environment