How do I copy the files from one directory to another using a Dockerfile? - nginx

I am using an EC2 instance with Docker. I am creating a Dockerfile that uses an Nginx image. I have two directories on my EC2 instance: one is called Docker (this is where the Dockerfile is located) and the other main. I want to copy the contents of main into the directory /usr/share/nginx/html using the Dockerfile. I have tried it like this, but keep getting an error that the file/directory does not exist:
ADD /home/ec2-user/main/ /usr/share/nginx/html
In the main directory I just have one file, called one.html.
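For context, ADD and COPY resolve their source paths relative to the build context passed to docker build, not against absolute paths on the host, which is the usual cause of this error. A minimal sketch of one way to fix it, assuming the layout described above (the image tag my-nginx is made up):
# Build from /home/ec2-user so that main/ is inside the build context
cd /home/ec2-user
docker build -f Docker/Dockerfile -t my-nginx .
The Dockerfile can then copy relative to that context:
COPY main/ /usr/share/nginx/html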

Related

How to save a file from a Docker container to the local Downloads folder

I am running an R and a Python script in a Docker container. Both scripts save their output file to the working folder, but when they run inside a container, there is no local folder.
What changes do I need to make to ensure that the file goes to the Downloads folder of the person running the container?
Do I need to update my R and Python scripts so that the files are saved to the host's Downloads folder? If so, what would that look like, given that from inside the container there is no local host path?
In R, I updated my file-save location to:
write.csv(data,
paste0('C:/Users/',Sys.getenv("USERNAME"),'/Downloads/file_made_by_r_script.csv'))
but when running the container, the resulting file does not appear in my Downloads folder.
I tried mounting a volume as per "Write files outside of a docker container via python", but I cannot do that while the container is hosted in Azure.
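The usual pattern when the container runs on the user's own machine is to bind-mount a host directory into the container and have the scripts write there. A minimal sketch, assuming the scripts are changed to write to a fixed path such as /output (the image name my-scripts is made up):
# Mount the host Downloads folder at /output inside the container
# (on Windows the host side would be e.g. C:\Users\<name>\Downloads)
docker run --rm -v "$HOME/Downloads:/output" my-scripts
The R script then writes to the mounted path instead of a host path:
write.csv(data, '/output/file_made_by_r_script.csv')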

Docker-compose re-run existing container

Yesterday I created a docker container with
docker-compose up -d
(and a docker-compose.yaml file). It created a WordPress site, a database, phpMyAdmin, etc.
I made some changes to the WordPress installation and content. I then shut it down with:
docker-compose down --volumes
This morning I wanted to run this container again, so I ran docker-compose up -d again, but when I visited the URL it showed the WordPress configuration wizard instead of the existing installation from yesterday. In hindsight, that makes sense; I am not sure why I expected it not to create a new container. I then deleted the install* file from wp-admin, but it didn't help.
Are the changes from yesterday's WordPress installation lost? Have I overwritten everything?
Generally, how can I restart an existing container with docker/docker-compose?
By using docker-compose down --volumes you are deleting:
"Stops containers and removes containers, networks, volumes, and images created by up"
(see the docker-compose down documentation).
You may use docker-compose start/stop instead to stop or start your existing containers.
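For illustration, a minimal sketch of that approach:
# Stop the containers without removing them or their data
docker-compose stop
# Later, start the same containers again with their state intact
docker-compose start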
The command
docker-compose down
will stop all your containers, delete all your containers and remove any networks defined in your docker compose file.
It does not remove your volumes, by the way (unless you additionally pass the -v flag to the command).
So your command
docker-compose down --volumes
will also remove any volumes.
If you want to persist your WordPress installation for development purposes but still be able to remove and create containers during development, you can mount volumes on your host machine, e.g. for your database data or also for your WordPress source code (if needed).
See also here: https://docs.docker.com/compose/wordpress/
Take a look at the compose file provided there, and specifically at the volume directives.
In the example the database files are mounted on your host machine so that they don't vanish if you remove the database container.
If you are already using volumes in your docker compose file, then you can simply remove the --volumes flag from the docker-compose down command.
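As a minimal sketch of that volume pattern (service name, image, and volume name are illustrative), a named volume keeps the database files around even after docker-compose down, as long as the --volumes flag is not passed:
version: '3'
services:
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
volumes:
  db_data: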
You can recreate a single service from the compose file with the following command.
For example, if you have wordpress, mysql, and nginx services in your compose file:
docker-compose -f docker-compose.yml up -d --force-recreate --build wordpress
This command recreates your wordpress container.

How do I change file permissions on ElasticBeanstalk before docker image gets built?

I am deploying a Docker image (WordPress) on Elastic Beanstalk using a single-container deployment.
My deployment zip file includes:
a public folder containing a complete WordPress build
Dockerfile
.ebextensions/permissions.config
The standard WordPress image declares a volume (VOLUME /var/www/html), and in my Dockerfile I do
COPY ./public /var/www/html
Now the problem is that I cannot upload media using the WordPress admin dashboard:
Unable to create directory wp-content/uploads/2019/02. Is its parent directory writable by the server?
I've tried to change the permissions on the uploads folder using the EB config in .ebextensions/permissions.config:
container_commands:
  91writable_dirs:
    command: |
      chmod -R 777 /var/app/current/public/wp-content/uploads
    cwd: "/var/app/current"
I can see from the logs that the Docker image gets built before chmod runs. I've seen in other SO posts that some people run the script on /var/app/ondeck/, but that fails because the directory doesn't exist.
Despite all the above, my question is really: how do I upload media to WordPress with my current setup?
EDIT: When I attach a shell to the docker container and change the file permissions of wp-content/uploads in the VOLUME /var/www/html I am able to upload media. So how can this be made permanent on the VOLUME?
Whenever the WordPress Docker image is built and run, the ENTRYPOINT of the wordpress image is executed first. Hence, your command to change the directory permissions is not taking effect.
This ENTRYPOINT is a bash script located at /usr/local/bin/docker-entrypoint.sh.
If you want your command to be executed, you could add your command to this script and it will be called every time your container starts.
You could do it the following way:
1. Start your container and copy out the contents of the existing docker-entrypoint.sh.
2. Create a new docker-entrypoint.sh outside the container and edit that script to add your chmod command at the appropriate location.
3. In your Dockerfile, add a line to copy this new entrypoint to the location /usr/local/bin/docker-entrypoint.sh (sketched below).
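A minimal sketch of that last step, assuming the edited script sits next to the Dockerfile as docker-entrypoint.sh:
FROM wordpress:latest
# Replace the stock entrypoint script with the edited copy that adds the chmod
COPY docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
RUN chmod +x /usr/local/bin/docker-entrypoint.sh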

Meteor: specify working directory in container

I'm trying to install a Meteor application inside a container (Singularity), but when I start the application it wants to write to a read-only part of the image. Is it possible to specify a working directory different from the application directory? Or to start the application from a writable directory and point it at the application's install directory?
.../promise_server.js:165 throw error;
Error: EROFS, mkdir '/usr/local/mindcontrol/.meteor/local'
Did you install the application to /usr/local yourself? Maybe you can install it to another directory inside the container, e.g. /mindcontrol.
Then you can mount a directory for which you have write permissions (your home directory, for example):
singularity exec --bind <some_dir>:/mindcontrol <container.img> <command>

Docker and bower links

I'm using Docker to run a simple static web project, using the official nginx image. As a Bower dependency I have a UI lib of my own that is shared between two of my projects. To make development easier I created a volume to my local machine to serve local files through the /html folder inside the nginx container. It works fine this way.
But if I try to use bower link to create a link between a local copy of my UI lib and the Bower dependency, the nginx web server is not able to find the folder, since the link points to my local machine.
I'm running the Docker VM on a Mac.
Has anyone experienced something similar and has an idea of how to solve it?
Thanks,
I just ran into this issue and found a way to solve it nicely.
The problem is that when you mount the whole /html folder as a volume, the symlinks created by bower link are copied into your container, but not the actual folders they point at. When nginx tries to serve a file, it follows the symlink, but now INSIDE the container, where the path is invalid.
To fix this, create another volume that maps the symlink directly. This way, docker-compose will resolve the symlink BEFORE mounting into the container, therefore copying the actual folder contents. The nice thing about this is that in your local file system you still have the folder and the symlink working, so you can work as usual :)
Practical example:
My folder structure
/app
|--/bower_components
|  |--/packageA
|  |--/packageB -> symlink to /foo/bar/packageB
My compose file:
version: '2'
services:
  nginx:
    volumes:
      - .:/foo
      - ./bower_components/packageB:/foo/bower_components/packageB
    ...
Let me know if it worked, cheers!
