How do you run functions and variables from a shell script inside Docker? - wordpress

I am trying to work out the best way to call this function from my bash script so that it is executed in the Docker container.
#!/bin/bash
AWS_BUCKET="https://s3-bucket-location.com"
AWS_BUCKET_DEV_FILE="devfile.zip"
AWS_BUCKET_PIC_FILE="WORDPRESSpics.zip"
wordpress_production_S3_download () {
cd /tmp/
wget $AWS_BUCKET/$AWS_BUCKET_PIC_FILE
mv $AWS_BUCKET_PIC_FILE /var/www/html/wp-content/uploads/2017
cd /var/www/html/wp-content/uploads/2017
unzip $AWS_BUCKET_PIC_FILE
rm $AWS_BUCKET_PIC_FILE
chown -R www-data:www-data 12/
}
My thought for running this function in the Docker container was something like this
docker exec -ti dockerwordpressmaster_mysql_1 bash -c "wordpress_production_S3_download"
but I get
bash: wordpress_production_S3_download: command not found
I have also tried to substitute the function into a variable, i.e.
test=$(wordpress_production_S3_download)
docker exec -ti dockerwordpressmaster_mysql_1 bash -c "$test"
but the function "variable" test executes only on the local machine and not in the Docker container as I would like.
Any help with the issue is greatly appreciated

You need to copy the script file containing the function into the dockerwordpressmaster_mysql_1 container and then execute it.
docker cp wordpress-script.sh dockerwordpressmaster_mysql_1:wordpress-script.sh
docker exec -ti dockerwordpressmaster_mysql_1 bash -c "source wordpress-script.sh && wordpress_production_S3_download"
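If copying a file into the container is not convenient, another option is to ship the definitions themselves over docker exec. This is a sketch, assuming it runs from the same local script that defines the function and variables, and that wget and unzip are available in the container:
# replay the variable and function definitions inside the container, then call the function
docker exec -i dockerwordpressmaster_mysql_1 bash -c "
  $(declare -p AWS_BUCKET AWS_BUCKET_PIC_FILE)
  $(declare -f wordpress_production_S3_download)
  wordpress_production_S3_download
"
declare -p and declare -f print the variable and function definitions as shell code, so the container's bash can re-create them before calling the function.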

Related

podman mounted volume issue

Bottom line: output from container is not appearing in mounted local directory
I have read the documentation for bind mounts and on another project had success with this.
My Dockerfile:
FROM ubuntu:18.04
RUN apt-get update && apt-get install -yq build-essential autoconf libnetcdf-dev libxml2-dev libproj-dev valgrind wget unzip git nano
# pulls ADMB from GitHub and unzips it in folder ADMBcode
RUN mkdir /ADMBcode
RUN wget https://github.com/admb-project/admb/releases/download/admb-12.2/admb-12.2-linux.zip
RUN mv admb-12.2-linux.zip /ADMBcode
RUN unzip ADMBcode/admb-12.2-linux.zip -d /ADMBcode
# pulls hydra repo from github into folder HYDRA
RUN mkdir /HYDRA
RUN git clone https://github.com/NOAA-EDAB/hydra_sim.git /HYDRA
# compiles and runs model
WORKDIR /HYDRA
RUN /ADMBcode/admb-12.2/admb hydra_sim.tpl
RUN ./hydra_sim
# create dir for output and move output
#RUN mkdir -p /HYDRA/output/diagnostics
#RUN mkdir /HYDRA/output/indices
# moves output to folder to be mounted
RUN mv *.out /HYDRA/output/diagnostics
RUN mv *.txt /HYDRA/output/indices
I build the image
podman build -t hydra .
and run the container using the following :
podman run --rm --name hydra --mount "type=bind,src=/path_on_local_machine/test,dst=/HYDRA/output" hydra
I have the test folder on my local machine, but the output is not appearing in it.
I have entered the container
podman run -it hydra
and checked that the output is there
I have done this before for another model and everything behaved as expected. Not sure why this one is not.
Any ideas what I am doing wrong?
Thanks
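One thing worth noting here (not verified against this exact setup): a bind mount makes the host directory shadow whatever the image put at that path during the build, so files moved into /HYDRA/output by RUN steps are hidden once the mount is in place. A common workaround is to run the model and collect its output at container run time instead, roughly:
# sketch: produce and collect the output after the bind mount exists
CMD ./hydra_sim && \
    mkdir -p /HYDRA/output/diagnostics /HYDRA/output/indices && \
    mv *.out /HYDRA/output/diagnostics/ && \
    mv *.txt /HYDRA/output/indices/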

unable to run R script in crontab in docker container

I'm trying to run an R script inside a Docker container. Here is the example.
My working directory is like below.
myRDocker
  -dockerfile
  -scripts
    -save_iris.R
In the directory myRDocker, there is a dockerfile and a directory scripts, which contains an R script save_iris.R.
My R script save_iris.R is like below:
write.csv(iris, '/data/iris.csv')
My dockerfile is like below:
# Install R version 3.6
FROM r-base:3.6.0
#install crontab
RUN apt-get update && apt-get -y install cron
RUN mkdir /data
COPY /scripts /scripts
I went to my directory myRDocker and built the docker image
docker build -t baser .
I ran the docker container with bash.
docker run -it baser bash
After I got into the container, I did:
crontab -e
then added this line and saved:
* * * * * Rscript /scripts/save_iris.R
It should save the file to the folder /data every minute. However, I never found any file in the /data folder inside the container.
My questions are:
What did I do wrong in the above procedure? I feel like I might be missing something, but I could not figure out what.
What should I do if I want the scheduled cron task to run whenever the container starts (something like putting the cron task in a file and running it when the container starts)?
Why are you not starting the cron job at container run time instead of running it after the container starts? Also, I do not think the crontab process will run in your case, as your container is not doing anything.
Try this, which will start cron at container run time and will also tail the logs of the cron job. Keep in mind that your main process in this case is tail -f /var/log/cron.log, not the cron process.
FROM r-base:3.6.0
RUN apt-get update && apt-get -y install cron
RUN touch /var/log/cron.log
COPY hello.r /hello.r
RUN (crontab -l ; echo "* * * * * Rscript /hello.r >> /var/log/cron.log") | crontab
# Run the command on container startup
CMD cron && tail -f /var/log/cron.log
This way you get the Rscript console logs on the container's stdout.
hello.r
print("Hello World from R")
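To try this out end to end, something like the following should work (the tag r-cron is just an illustrative name):
docker build -t r-cron .
docker run --rm r-cron
# after a minute or so the cron output, including "Hello World from R", appears on stdout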

Cannot copy intermediate docker container files to host

I have a Dockerfile; it runs dotnet publish and the DLLs are copied into an intermediate docker container. I would like to copy the DLLs that are generated in the container to my local system (host) as well.
I believe we can use the "cp" command to do that, but I am not able to find a way to get the intermediate container ID to use with the "cp" command.
syntax: docker cp CONTAINER:Container_Path Host_Path.
Please suggest any better solution for this scenario.
Dockerfile:
FROM microsoft/aspnetcore-build:1.1.4 as builder
COPY . /Code
RUN dotnet restore /Code/MyProj.csproj
RUN dotnet publish -c Release /Code/MyProj.csproj
RUN cp CONTAINER: /Code/bin/Release/netcoreapp1.1/publish /binaries
Thanks.
This answer is outside of the Dockerfile.
First, your Dockerfile would have to have a volume.
VOLUME /my/path/in/container
To get files into and out of a volume, try using tar -cvf and tar -xvf to put and get files between a container and a host.
To put files from the host's newfiles.tar (in pwd) into a container at the /my/path/in/container mount:
docker run --rm \
-v my-volume-data:/my/path/in/container -v $(pwd):/newfiles ubuntu bash -c \
"cd /my/path/in/container && tar -xf /newfiles/newfiles.tar"
To get files out of a container at the /my/path/in/container mount into origfiles.tar on the host:
docker run --rm \
-v my-volume-data:/my/path/in/container -v $(pwd):/newfiles ubuntu bash -c \
"cd /my/path/in/container && tar -cf /newfiles/origfiles.tar"
Adding --user 1000:1000 to these commands is optional if your container has a user with a uid of 1000.
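Another route that avoids volumes entirely is to build the image, create a container from it without starting it, and use docker cp on that container. A sketch, assuming the failing RUN cp line is dropped from the Dockerfile and using myproj-build as a stand-in image name:
docker build -t myproj-build .
# create a stopped container purely to copy files out of it
id=$(docker create myproj-build)
docker cp "$id":/Code/bin/Release/netcoreapp1.1/publish ./binaries
docker rm "$id"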

how can I set the working directory in old version of docker in the run command?

I am a bit new to docker and I have been trying to deploy a meteor container with my meteor application. I have been using the dockerfile and instructions from https://registry.hub.docker.com/u/golden/meteor-dev/
However, I can't run docker run -p 3000:3000 -t -i -v /path/to/meteor/app:/opt/application -w /opt/application meteor-dev because my docker (version 0.5.3) does not recognize the flag (-w) to set the working directory.
Is there some workaround to set the working directory with docker 0.5.3? The working directory is already set in the Dockerfile, but I guess I need to set it again when I run the container.
Well, my workaround was to create a bash script that goes to the working directory and calls the commands one by one. I created the bash script where my source is located ("/path/to/meteor/app") and called docker run -p 3000:3000 -t -i -v /path/to/meteor/app:/opt/application meteor-dev bash /opt/application/start.sh, with bash as the command and my script as the argument.
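For illustration, such a start.sh might look roughly like this (the exact meteor command depends on the meteor-dev image, so treat it as an assumption):
#!/bin/bash
# stand-in for the missing -w flag: switch to the app directory first
cd /opt/application
# then start the application
meteor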

Run a service automatically in a docker container

I'm setting up a simple image: one that holds Riak (a NoSQL database). The image starts the Riak service with riak start as a CMD. Now, if I run it as a daemon with docker run -d quintenk/riak-dev, it does start the Riak process (I can see that in the logs). However, it closes automatically after a few seconds. If I run it using docker run -i -t quintenk/riak-dev /bin/bash the riak process is not started (UPDATE: see answers for an explanation for this). In fact, no services are running at all. I can start it manually using the terminal, but I would like Riak to start automatically. I figure this behavior would occur for other services as well, Riak is just an example.
So, running/restarting the container should automatically start Riak. What is the correct approach of setting this up?
For reference, here is the Dockerfile with which the image can be created (UPDATE: altered using the chosen answer):
FROM ubuntu:12.04
RUN apt-get update
RUN apt-get install -y openssh-server curl
RUN curl http://apt.basho.com/gpg/basho.apt.key | apt-key add -
RUN bash -c "echo deb http://apt.basho.com precise main > /etc/apt/sources.list.d/basho.list"
RUN apt-get update
RUN apt-get -y install riak
RUN perl -p -i -e 's/(?<=\{http,\s\[\s\{")127\.0\.0\.1/0.0.0.0/g' /etc/riak/app.config
EXPOSE 8098
CMD /bin/riak start && tail -F /var/log/riak/erlang.log.1
EDIT: -f changed to -F in the CMD in accordance with sesm's remark
MY OWN ANSWER
After working with Docker for some time I picked up the habit of using supervisord to run my processes. If you would like example code for that, check out https://github.com/Krijger/docker-cookbooks. I use my supervisor image as a base for all my other images. I blogged about using supervisor here.
To keep docker containers running, you need to keep a process active in the foreground.
So you could probably replace that last line in your Dockerfile with
CMD /bin/riak console
Or even
CMD /bin/riak start && tail -F /var/log/riak/erlang.log.1
Note that you can't have multiple lines of CMD statements, only the last one gets run.
Using tail to keep the container alive is a hack. Also, note that with the -f option the container will terminate when log rotation happens (this can be avoided by using -F instead).
A better solution is to use supervisor. Take a look at this tutorial about running Riak in a Docker container.
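As a rough sketch of the supervisor approach (the riak command and file locations here are assumptions, not taken from that tutorial), a supervisord.conf copied into the image could look like this:
[supervisord]
nodaemon=true

[program:riak]
command=/bin/riak console
autorestart=true
with the Dockerfile installing supervisor, copying the config, and making it the foreground process:
RUN apt-get -y install supervisor
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
CMD ["/usr/bin/supervisord"]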
The explanation for:
If I run it using docker run -i -t quintenk/riak-dev /bin/bash the riak process is not started
is as follows. Using CMD in the Dockerfile is actually the same functionality as starting the container using docker run {image} {command}. As Gigablah remarked, only the last CMD is used, so the one written in the Dockerfile is overwritten in this case.
By using CMD /bin/riak start && tail -f /var/log/riak/erlang.log.1 in the Dockerfile, you can start the container as a background process using docker run -d {image}, which works like a charm.
"If I run it using docker run -i -t quintenk/riak-dev /bin/bash the riak process is not started"
It sounds like you only want to be able to monitor the log when you attach to the container. My use case is a little different in that I want commands started automatically, but I want to be able to attach to the container and be in a bash shell. I was able to solve both of our problems as follows:
In the image/container, add the commands you want automatically started to the end of the /etc/bash.bashrc file.
In your case just add the line /bin/riak start && tail -F /var/log/riak/erlang.log.1, or put /bin/riak start and tail -F /var/log/riak/erlang.log.1 on separate lines depending on the functionality desired.
Now commit your changes to your container, and run it again with: docker run -i -t quintenk/riak-dev /bin/bash. You'll find the commands you put in the bashrc are already running as you attach.
Because I want a clean way to have the process exit later, I make the last command a call to the shell's read, which causes that process to block until I later attach to it and hit enter.
arthur@macro:~/docker$ sudo docker run -d -t -i -v /raid:/raid -p 4040:4040 subsonic /bin/bash -c 'service subsonic start && read -p "waiting"'
WARNING: Docker detected local DNS server on resolv.conf. Using default external servers: [8.8.8.8 8.8.4.4]
f27229a260c9
arthur@macro:~/docker$ sudo docker ps
[sudo] password for arthur:
ID IMAGE COMMAND CREATED STATUS PORTS
35f253bdf45a subsonic:latest /bin/bash -c service 2 days ago Up 2 days 4040->4040
arthur@macro:~/docker$ sudo docker attach 35f253bdf45a
arthur@macro:~/docker$ sudo docker ps
ID IMAGE COMMAND CREATED STATUS PORTS
As you can see, the container exits after you attach to it and unblock the read.
You can of course use a more sophisticated script than read -p if you need to do other clean up, such as stopping services and saving logs etc.
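A slightly more robust variant of the same idea blocks without needing a TTY and also shuts the service down cleanly on docker stop; here is a sketch using the subsonic service from the example above:
#!/bin/bash
# start the service, then keep PID 1 alive
service subsonic start
# on docker stop (SIGTERM), stop the service and exit
trap 'service subsonic stop; exit 0' TERM
# sleep in short intervals so the trap fires promptly
while true; do sleep 1; done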
I use a simple trick whenever I start building a new docker container. To keep it alive, I use a ping in the entrypoint script.
So in the Dockerfile, when using Debian for instance, I make sure I can ping.
This is, by the way, always nice for checking what is accessible from within the container.
...
RUN DEBIAN_FRONTEND=noninteractive apt-get update \
&& apt-get install -y iputils-ping
...
ENTRYPOINT ["entrypoint.sh"]
And in the entrypoint.sh file
#!/bin/bash
...
ping 10.10.0.1 >/dev/null 2>/dev/null
I use this instead of CMD bash, as I always wind up using a startup file.
