I'm looking to see if it is possible to somehow FTP into an already existing Docker container? For example, I'm using the dockerfile/ghost in combination with jwilder/nginx-proxy, and once I deploy/build a container, I'd like for the user to be able to FTP into the container running Ghost so they can upload additional files such as themes, stylesheets, etc. What would be the best method in accomplishing this? Thanks in advance!
You have a few choices:
run ftp in the Ghost container and expose a port
use a host directory to store the user content and give them FTP to the host (not the best choice)
map the same host directory into both the Ghost container and the FTP server container
Personally I think the last one is the best, and also the most work, though the advantages will be worth it in the long run. I'm assuming the uploaded content should survive container termination, which is why I recommend a mapped host directory; if that's not the case, you can use linked volumes between the FTP container and the Ghost container.
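For example, a rough sketch of that last option with a shared host directory (the host path, FTP image name, ports, and in-container paths below are placeholders rather than anything taken from the actual image docs):

# one host directory mounted into both containers
docker run -d --name ghost \
  -v /srv/ghost-content:/ghost-override \
  -p 80:2368 dockerfile/ghost

docker run -d --name ftp \
  -v /srv/ghost-content:/home/ftp/ghost \
  -p 21:21 some/ftp-image

Whatever the user uploads over FTP then shows up in the Ghost container as well, and survives either container being removed.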
Probably the most "dockery" way of doing this would be to run a container for your ftp server (something such as this https://github.com/gizur/docker-ftpserver) and a separate one for ghost, then mount a common volume.
The best way to synchronize these is with a version 2 docker-compose.yml
https://docs.docker.com/compose/compose-file/#volume-configuration-reference
The docker-compose.yml file might look something like this:
version: '2'
services:
  ghost:
    build:
      context: ./ghost
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - "ftpsharevolume:/mnt/ftp"
    depends_on:
      - ftp
  ftp:
    image: someftpdockerimage
    volumes:
      - "ftpsharevolume:/srv/ftp"
    ports:
      - "21:21"
      - "20:20"
volumes:
  ftpsharevolume:
    driver: local
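With that file saved as docker-compose.yml (next to a ./ghost build context), bringing the stack up is just:

docker-compose up -d --build

Files uploaded over FTP land in the shared ftpsharevolume volume, which the Ghost container sees under /mnt/ftp.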
I have 2 folders separated, one for backend and one for frontend services:
backend/docker-compose.yml
frontend/docker-compose.yml
The backend has a headless WordPress installation on nginx, meant to serve the frontend as an API service. The frontend runs on Next.js. Here are the two docker-compose.yml files:
backend/docker-compose.yml
version: '3.9'
services:
  nginx:
    image: nginx:latest
    container_name: my-app-nginx
    ports:
      - '80:80'
      - '443:443'
      - '8080:8080'
    ...
    networks:
      - internal-network
  mysql:
    ...
    networks:
      - internal-network
  wordpress:
    ...
    networks:
      - internal-network
networks:
  internal-network:
    external: true
frontend/docker-compose.yml
version: '3.9'
services:
  nextjs:
    build:
      ...
    container_name: my-app-nextjs
    restart: always
    ports:
      - 3000:3000
    networks:
      - internal-network
networks:
  internal-network:
    driver: bridge
    name: internal-network
In the frontend I use the Fetch API in Next.js as follows:
fetch('http://my-app-nginx/wp-json/v1/endpoint', ...)
I also tried ports 80 and 8080, without success.
The sequence of commands I run are:
docker network create internal-network
in the backend/ folder, docker-compose up -d (all backend containers run fine, and I can fetch data from the WordPress API with Postman)
in the frontend/ folder, docker-compose up -d fails with the error Error: getaddrinfo EAI_AGAIN my-app-nginx
I am not a very experienced Docker user, so I might be missing something here, but I understand there might be a networking issue between the containers. I have read many answers on this topic but couldn't figure it out.
Any recommendations?
Just to add a proper answer:
Generally you should NOT be executing multiple docker-compose up -d commands
If you want to combine two separate docker-compose configs and run them as one (slightly more preferable), you can use the extends keyword as described in the docs (a sketch follows at the end of this answer)
However, I would suggest that you treat it as a single docker-compose project which can itself have multiple nested git repositories:
Example SO answer - Git repository setup for a Docker application consisting of multiple repositories
You can keep your code in a mono-repo or multiple repos, up to you
A real working example, using the same applications as yours, that validates this approach:
headless-wordpress-nextjs-starter-kit and its docker-compose.yml
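For the extends option, a minimal sketch (assuming the frontend pulls in the backend's nginx service definition, with the path relative to frontend/):

services:
  nginx:
    extends:
      file: ../backend/docker-compose.yml
      service: nginx

This only copies that service's configuration into the frontend project; you still run a single docker-compose up from the frontend folder.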
I have found this thread here
Communication between multiple docker-compose projects
Looking at the most upvoted answers, I wonder if it is related to the network prefix?
It seems like internal-network would be prefixed with frontend_. On the other hand, you can also try to reference the network by name in backend/docker-compose.yml:
networks:
  internal-network:
    external:
      name: internal-network
The issue is that external networks need the network name specified (because Docker Compose prefixes resources by default). Your backend docker-compose network section should look like this:
networks:
  internal-network:
    name: internal-network
    external: true
You are creating the network in your frontend docker-compose file, so you should omit the docker network create ... command (you just need to bring the frontend up first). Or instead treat the network as external in both files and keep the command; in that case, use the named external network as shown above in your frontend docker-compose as well.
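If you go that second route and keep docker network create internal-network, a sketch of the frontend network section treated as external (mirroring the backend one above):

networks:
  internal-network:
    name: internal-network
    external: true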
Grasshopper is a php web application that connects to a Bticino home automation gateway.
The two recommended ways to use it are either the provided RPi image with all components installed, or installing it on a Linux machine with a LASP (Apache, SQLite, PHP) or LESP (nginx, SQLite, PHP) setup.
I am trying to set Grasshopper up with docker-compose by creating two services, the db and the Apache webserver. For the db I have tried the nouchka/sqlite3 image and the keinos/sqlite3 one. Both unfortunately come without documentation, and I can't find the mandatory environment variables (root user, password and so on) anywhere.
What I have now only loads the site, without a DB connection:
version: "3"
services:
database:
image: keinos/sqlite3 #nouchka/sqlite3
#stdin_open: true
#tty: true
volumes:
- ./db/:/root/db/
restart: always
webapp:
build: .
#context: .
#dockerfile: Dockerfile-nginx
ports:
- "8080:80"
depends_on:
- database
restart: always
The Dockerfile:
FROM php:7.2-apache
COPY ./grasshopper_v5_application/ /var/www/html/
Grasshopper documentation: https://sourceforge.net/projects/grasshopperwebapp/files/Grasshopper%20V5%20Installation%20and%20Configuration%20Guide.pdf/download
Grasshopper files : https://sourceforge.net/projects/grasshopperwebapp/files/
I have an ASP.NET Core app that is packaged with Docker.
Here is my docker-compose file; it has Kibana and Elasticsearch images in it.
version: "3.1"
services:
tooseeweb:
image: ${DOCKER_REGISTRY-}tooseewebcontainer
build:
context: .
dockerfile: Dockerfile
ports:
- 5000:80
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:6.2.4
container_name: elasticsearch
ports:
- "9200:9200"
volumes:
- elasticsearch-data:/usr/share/elasticsearch/data
networks:
- docker-network
kibana:
image: docker.elastic.co/kibana/kibana:6.2.4
container_name: kibana
ports:
- "5601:5601"
depends_on:
- elasticsearch
networks:
- docker-network
networks:
docker-network:
driver: bridge
volumes:
elasticsearch-data:
I am trying to deploy this to Azure Container Registry following this article:
Article link
It's all okay and I can see my API; it's on port 80. But I don't see Kibana and Elasticsearch.
On my local machine I run docker-compose up and can reach them on 5601 and 9200, but on Azure Container Registry these ports are not working. How can I deploy everything together? Or do I need to deploy the containers separately?
Firstly, Azure Container Registry stores the Docker images for you, so you need to push the images to it, not the running containers. And you do not need to separate them, but you do need to create all the images with names of the form your_acr_name.azurecr.io/image_name:tag and then push them to the ACR.
As I can see in your question, you only create the tooseeweb image with the name ${DOCKER_REGISTRY-}tooseewebcontainer; when you push this image to the ACR, it only stores that one for you and does not contain the other two images.
If you want to store the other two images in ACR, you need to follow the two steps below.
Tag your image, for example:
docker tag docker.elastic.co/elasticsearch/elasticsearch:6.2.4 your_acr_name.azurecr.io/elasticsearch:6.2.4
Push the image to ACR:
docker push your_acr_name.azurecr.io/elasticsearch:6.2.4
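As a minimal sketch (assuming a registry named your_acr_name and the Azure CLI installed), the same two steps for the Kibana image, preceded by a registry login, would look like this:

# log in to the registry
az acr login --name your_acr_name

# tag the upstream Kibana image with the ACR name
docker tag docker.elastic.co/kibana/kibana:6.2.4 your_acr_name.azurecr.io/kibana:6.2.4

# push it to ACR
docker push your_acr_name.azurecr.io/kibana:6.2.4

The compose file you deploy from would then reference the your_acr_name.azurecr.io/... names in its image fields instead of the upstream ones.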
I'm currently working on a project using the WordPress API.
This is my docker-compose.yml file:
version: '3.1'
services:
  wordpress:
    image: wordpress
    volumes:
      - ./volumes/wordpress/:/var/www/html
    ports:
      - 8080:80
    environment:
      WORDPRESS_DB_PASSWORD: root
    depends_on:
      - mysql
  mysql:
    image: mysql:5.7
    volumes:
      - ./volumes/mysql/:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: root
  web:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - ./src/:/src
    ports:
      - 8081:8081
    depends_on:
      - wordpress
In order to use the WordPress API, I need to configure the WordPress container manually by going to http://localhost:8080/wp-admin and changing some settings.
The thing is, I need to make these settings changes automatic, because every time I remove the volume folder to reset the WordPress content, it also removes the settings.
Any idea on how I can achieve this?
I guess that all settings configured via the wp-admin section are stored in the database.
If that's the case then you can do this:
Setup a first WordPress instance by running your docker-compose and completing the setup steps.
Stop the compose stack. At this point the mysql volume folder contains a database structure with a configured WordPress in it.
Store the contents of the folder somewhere.
Now, if you want to create another WordPress instance, you can edit the docker-compose.yml file to adjust the volume binding and make sure the initial content of the mysql volume contains the data you saved in step 3 above.
When you start the new docker-compose stack it'll start from a populated database and you should have a preconfigured WordPress instance.
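A rough sketch of steps 3 and 4, reusing the ./volumes/mysql path from the question (the ./seed-mysql folder name is just an example):

# after completing the wp-admin setup, stop the stack
docker-compose down

# keep a copy of the configured database files as a seed
cp -r ./volumes/mysql ./seed-mysql

# later, to reset WordPress to the preconfigured state
rm -rf ./volumes/mysql
cp -r ./seed-mysql ./volumes/mysql
docker-compose up -d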
You need to locate the file/folder that contains the settings you are changing.
Start the container, make the changes, and back up the settings file to your host machine using:
docker cp <container-name>:<path-to-settings> .
You can then create a custom image that replaces the default settings with the backed-up settings you copied to the host.
FROM wordpress
COPY <settings-from-host> <settings-path-in-container>
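As a usage sketch (my-wordpress is just an example tag), build the custom image and point the compose file at it instead of the stock wordpress image:

docker build -t my-wordpress .

and then set image: my-wordpress for the wordpress service in the docker-compose.yml above.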
Docker version (latest for Mac)
Version 17.03.1-ce-mac5 (16048)
I'm trying to externalise the paths so each developer can change a single file to map components to the right paths in their local environment, for example where nginx serves a static website from.
#localhost.env
INDEX_PATH=/Users/felipe/website/public
This is my compose.yml
nginx:
  image: nginx
  ports:
    - "8081:8081"
  volumes:
    - ${INDEX_PATH}:/etc/nginx/html:ro
  env_file:
    - ./localhost.env
In short, I define the INDEX_PATH variable to point to my local path and I want nginx to serve the website from there. Another developer should then set
#localhost.env
INDEX_PATH=/Users/somebodyElse/whatever/public
The problem
For some reason that I don't understand, the variable does not get resolved properly, at least when it is used as the volume's path.
Testing
docker-compose config
nginx:
  environment:
    INDEX_PATH: /Users/felipe/website/public
  image: nginx
  ports:
    - 8081:8081
  volumes:
    - .:/etc/nginx/html:ro   # <-- HERE I WAS EXPECTING THE PATH
As you can see, it just gets resolved to . (a dot) instead of the path /Users/felipe/website/public.
Any idea what I'm doing wrong? I believe this feature is supported but can't work out how to do it.
Thank you!
The env_file definition passes environment variables from the file into the container, but it is not picked up when docker-compose parses the yml file itself. What you can use instead is a .env file, which is loaded before the docker-compose.yml file is parsed; you can even use it to override the docker-compose.yml filename itself.
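A minimal sketch, reusing INDEX_PATH from the question: put the value in a .env file next to the compose file, and Compose substitutes it when parsing the volume definition (the env_file entry is then only needed if the container itself also needs the variable):

# .env (same directory as the compose file)
INDEX_PATH=/Users/felipe/website/public

# compose.yml, unchanged volume line, now resolved at parse time
nginx:
  image: nginx
  volumes:
    - ${INDEX_PATH}:/etc/nginx/html:ro

After adding the .env file, docker-compose config should show the real path instead of a dot.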