When I build an app with Docker, the command
RUN rm -f something.txt
in the Dockerfile
works fine, but using the same command with Heroku (heroku container:push web -a ...)
doesn't actually delete the file. No error is produced either.
Any solution?
Thanks
PS: I also tried RUN shred -u something.txt; same issue.
There are a ton of little-upvoted questions about how to address local folders from inside a docker container, but I can't find one that quite matches mine, so here goes another one:
How can I run a docker container, and mount a local folder so that it's accessible by R/RStudio, inside the container?
That sounds kind of like "mounting local home directory in Rstudio docker?", and using an approach similar to that one, I can start a container and mount a volume:
docker run -d -p 8787:8787 -v $HOME/my_folder:/LOOKATMEEE -e ROOT=TRUE rocker/tidyverse:3.4
and if I run a bash shell in the container, I can see the folder:
docker exec -it 38b2d6ca427f bash
> ls
bin dev home lib LOOKATMEEE mnt proc run srv tmp var boot etc init lib64 media opt root sbin sys usr
# ^ there it is!
But if I connect to RStudio Server at localhost:8787, I don't see it in the files pane, nor does it show up when I run list.files() in the R console:
I'm sure I'm missing something basic, but if someone can tell me what that is... thank you!
In this circumstance, R and RStudio have a default working directory of /home/rstudio, two levels down from /, where I was telling docker to mount the folder.
After the docker run command in the question, you can run list.files('/') to see the folder.
If you want your folder to show up in the default working directory for R, as I do, then modify docker run like this:
docker run -d -p 8787:8787 -v $HOME/my_folder:/home/rstudio/LOOKATMEEE -e ROOT=TRUE rocker/tidyverse:3.4
and there it shall be:
Thank you to user alistaire.
This answer is for future generations :)
The concept is a "match" of the resource from the host with the container:
<path on host>:<path in container>
The command structure should be like this:
docker run -d -e PASSWORD=<password> -p 8787:8787 -v <path on host>:/home/rstudio/<folder> rocker/rstudio
Check the explanation here
I have created a docker image using the Dockerfile mentioned below. There is a jar file in the image which needs a few parameters to run. I am passing the parameters using the docker run command, but it throws an error. Find the details below.
Dockerfile content
FROM ubuntu:14.04
ENV http_proxy http://http.proxy.nxp.com:7000
ENV https_proxy http://http.proxy.nxp.com:7000
RUN apt-get update
<set of lines for installing java is here>
ENV JAVA_HOME /usr/lib/jvm/java-8-oracle
copy apache-jmeter-3.1 /apache-jmeter-3.1
RUN mkdir /jarloc
copy Test.jar /jarloc
RUN java -version
ENTRYPOINT [ java -jar /jarloc/Test.jar ]
RUN ls -l /jarloc
I created an image called jmaster:1.0 and ran the following command to spin up the container.
docker run jmaster:1.0 http://win_loc/soasta_parent/soasta/MyPOC/Login_Data.csv http://win_loc/soasta_parent/soasta/MyPOC/Dpc_data.csv 30 300 30
This gives me the following error:
http://win_loc/soasta_parent/soasta/MyPOC/Login_Data.csv: 1: [: missing ]
I am able to run this from inside the container (docker run -it jmaster:1.0 /bin/bash), and it gives me the correct output. But when I try to pass the parameters in the docker run command, I get this error. Am I passing them in the wrong way, or is there another way to do so?
When I go inside the container using 'docker run -it imagename /bin/bash' and execute the following, I get the correct results from the jar.
/jarloc#java -jar Test.jar http://win_loc/soasta_parent/soasta/MyPOC/Login_Data.csv http://win_loc/soasta_parent/soasta/MyPOC/Dpc_data.csv 30 300 30
Try with
ENTRYPOINT ["java","-jar","/jarloc/Test.jar"]
That should pick up the parameters from docker run.
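For context: the original line ENTRYPOINT [ java -jar /jarloc/Test.jar ] is not valid JSON (the words are unquoted), so Docker falls back to the shell form and effectively runs /bin/sh -c '[ java -jar /jarloc/Test.jar ]'; the [ is then interpreted as the shell's test builtin, which is where the "[: missing ]" error comes from. With the exec form, the docker run arguments are appended to the entrypoint. A sketch of the effect (ARG1 and ARG2 stand in for the CSV URLs from the question):

# exec form: arguments after the image name are appended to the entrypoint
docker run jmaster:1.0 ARG1 ARG2 30 300 30
# runs, inside the container:
#   java -jar /jarloc/Test.jar ARG1 ARG2 30 300 30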
I am a bit new to docker and I have been trying to deploy a meteor container with my meteor application. I have been using the dockerfile and instructions from https://registry.hub.docker.com/u/golden/meteor-dev/
However, I can't run docker run -p 3000:3000 -t -i -v /path/to/meteor/app:/opt/application -w /opt/application meteor-dev because my docker (version 0.5.3) does not recognize the flag (-w) to set the working directory.
Is there some workaround to set the working directory with docker 0.5.3? The working directory is already set in the Dockerfile, but I guess I need to set it again when I run the container.
Well, my workaround was to create a bash script that goes to the working directory and calls the commands one by one. I created the bash script where my source is located ("/path/to/meteor/app") and call docker run -p 3000:3000 -t -i -v /path/to/meteor/app:/opt/application meteor-dev bash /opt/application/start.sh, with bash as the command and my script as its argument.
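For reference, a minimal sketch of what such a start.sh could look like (the meteor invocation is an assumption; substitute whatever commands your app needs):

#!/bin/bash
# start.sh: stand-in for the missing -w flag in docker 0.5.3
cd /opt/application   # the working directory that -w would have set
meteor                # start the app; add any other setup commands here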
I am starting to use Docker, and now I would like to use it for running Alfresco instances.
I tried to install Alfresco using the single file installer obtained from:
http://wiki.alfresco.com/wiki/Community_file_list_4.2.e (572Mb)
After adding the installer file and running the newly created image, I execute:
root#3e8b72d208e4:/root# chmod u+x alfresco.sh
root#3e8b72d208e4:/root# mv alfresco.sh alfresco.bin
root#3e8b72d208e4:/root# ./alfresco.bin
root#3e8b72d208e4:/root#
After 1 second, the ./alfresco.bin process ends with no output. It is supposed to prompt for some installer options.
I'm running Docker on Ubuntu 13.10 64 bits with 8Gb in RAM. What would be the right procedure to install Alfresco on a Docker container using the installer?
The problem is that the BitRock installer requires tmpfs, which in turn requires extra privileges for the container. Run your container with
docker run -i -t -privileged <image> [<command>]
and execute
mount none /tmp -t tmpfs
within the container.
After that, the installer will run just fine.
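If you prefer a single step, something along these lines should also work (untested sketch; <image> is a placeholder, and the installer is assumed to sit in /root as in the question):

docker run -i -t -privileged <image> /bin/bash -c "mount none /tmp -t tmpfs && /root/alfresco.bin"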
Unfortunately, things get messy if what you want is to build an image from a Dockerfile. docker build does not provide the -privileged switch or a RUNP instruction. You might want to have a look at https://github.com/dotcloud/docker/issues/1916 for further discussion.
For getting the docker-compose.yml of the Community version, here's one command that helped me generate the files needed:
docker run -it --rm -v "$PWD:/app" -w "/app" -e XDG_CONFIG_HOME=/app/.yo_config -e npm_config_cache=/app/.cache node:alpine sh -c "npm i -g yo generator-alfresco-docker-installer && yo alfresco-docker-installer"
I'm setting up a simple image: one that holds Riak (a NoSQL database). The image starts the Riak service with riak start as a CMD. Now, if I run it as a daemon with docker run -d quintenk/riak-dev, it does start the Riak process (I can see that in the logs). However, it closes automatically after a few seconds. If I run it using docker run -i -t quintenk/riak-dev /bin/bash, the riak process is not started (UPDATE: see answers for an explanation of this). In fact, no services are running at all. I can start it manually using the terminal, but I would like Riak to start automatically. I figure this behavior would occur for other services as well; Riak is just an example.
So, running/restarting the container should automatically start Riak. What is the correct approach to setting this up?
For reference, here is the Dockerfile with which the image can be created (UPDATE: altered using the chosen answer):
FROM ubuntu:12.04
RUN apt-get update
RUN apt-get install -y openssh-server curl
RUN curl http://apt.basho.com/gpg/basho.apt.key | apt-key add -
RUN bash -c "echo deb http://apt.basho.com precise main > /etc/apt/sources.list.d/basho.list"
RUN apt-get update
RUN apt-get -y install riak
RUN perl -p -i -e 's/(?<=\{http,\s\[\s\{")127\.0\.0\.1/0.0.0.0/g' /etc/riak/app.config
EXPOSE 8098
CMD /bin/riak start && tail -F /var/log/riak/erlang.log.1
EDIT: -f changed to -F in CMD, in accordance with sesm's remark
MY OWN ANSWER
After working with Docker for some time, I picked up the habit of using supervisord to run my processes. If you would like example code for that, check out https://github.com/Krijger/docker-cookbooks. I use my supervisor image as a base for all my other images. I blogged on using supervisor here.
To keep docker containers running, you need to keep a process active in the foreground.
So you could probably replace that last line in your Dockerfile with
CMD /bin/riak console
Or even
CMD /bin/riak start && tail -F /var/log/riak/erlang.log.1
Note that you can't have multiple lines of CMD statements, only the last one gets run.
Using tail to keep the container alive is a hack. Also note that with the -f option, the container will terminate when log rotation happens (this can be avoided by using -F instead).
A better solution is to use supervisor. Take a look at this tutorial about running Riak in a Docker container.
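As a rough sketch of what the supervisor approach looks like (file names and paths here are illustrative, not taken from the tutorial):

# /etc/supervisor/conf.d/riak.conf
[program:riak]
command=/bin/riak console
autostart=true
autorestart=true

# and in the Dockerfile:
RUN apt-get -y install supervisor
ADD riak.conf /etc/supervisor/conf.d/riak.conf
CMD ["/usr/bin/supervisord", "-n"]

Note that supervisor expects the process it manages to stay in the foreground, hence riak console rather than riak start.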
The explanation for:
If I run it using docker run -i -t quintenk/riak-dev /bin/bash the riak process is not started
is as follows. Using CMD in the Dockerfile is actually the same functionality as starting the container using docker run {image} {command}. As Gigablah remarked, only the last CMD is used, so the one written in the Dockerfile is overwritten in this case.
By using CMD /bin/riak start && tail -f /var/log/riak/erlang.log.1 in the Dockerfile, you can start the container as a background process using docker run -d {image}, which works like a charm.
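To illustrate with the image from this question:

docker run -d quintenk/riak-dev
# no command given, so the Dockerfile's CMD runs:
#   /bin/riak start && tail -f /var/log/riak/erlang.log.1

docker run -i -t quintenk/riak-dev /bin/bash
# /bin/bash overrides the CMD, so riak never starts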
"If I run it using docker run -i -t quintenk/riak-dev /bin/bash the riak process is not started"
It sounds like you only want to be able to monitor the log when you attach to the container. My use case is a little different in that I want commands started automatically, but I want to be able to attach to the container and be in a bash shell. I was able to solve both of our problems as follows:
In the image/container, add the commands you want automatically started to the end of the /etc/bash.bashrc file.
In your case just add the line /bin/riak start && tail -F /var/log/riak/erlang.log.1, or put /bin/riak start and tail -F /var/log/riak/erlang.log.1 on separate lines depending on the functionality desired.
Now commit your changes to your container, and run it again with: docker run -i -t quintenk/riak-dev /bin/bash. You'll find the commands you put in the bashrc are already running as you attach.
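Spelled out as commands, the steps above look roughly like this (the container id is a placeholder):

# inside the running container:
echo '/bin/riak start && tail -F /var/log/riak/erlang.log.1' >> /etc/bash.bashrc

# from the host, persist the change:
docker commit <container_id> quintenk/riak-dev

# run it again; the bashrc commands start as the shell comes up:
docker run -i -t quintenk/riak-dev /bin/bash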
Because I want a clean way to have the process exit later, I make the last command a call to the shell's read, which causes that process to block until I later attach to it and hit enter.
arthur#macro:~/docker$ sudo docker run -d -t -i -v /raid:/raid -p 4040:4040 subsonic /bin/bash -c 'service subsonic start && read -p "waiting"'
WARNING: Docker detected local DNS server on resolv.conf. Using default external servers: [8.8.8.8 8.8.4.4]
f27229a260c9
arthur#macro:~/docker$ sudo docker ps
[sudo] password for arthur:
ID IMAGE COMMAND CREATED STATUS PORTS
35f253bdf45a subsonic:latest /bin/bash -c service 2 days ago Up 2 days 4040->4040
arthur#macro:~/docker$ sudo docker attach 35f253bdf45a
arthur#macro:~/docker$ sudo docker ps
ID IMAGE COMMAND CREATED STATUS PORTS
As you can see, the container exits after you attach to it and unblock the read.
You can of course use a more sophisticated script than read -p if you need to do other cleanup, such as stopping services and saving logs, etc.
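For instance, a sketch of such a wrapper (a hypothetical start.sh, following the subsonic example above):

#!/bin/bash
service subsonic start

# block here; attach to the container and hit enter to trigger shutdown
read -p "waiting"

# clean up before the container exits
service subsonic stop
# copy logs somewhere safe, etc.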
I use a simple trick whenever I start building a new docker container. To keep it alive, I use a ping in the entrypoint script.
So in the Dockerfile, when using Debian, for instance, I make sure ping is available.
This is, btw, always nice to have for checking what is accessible from within the container.
...
RUN DEBIAN_FRONTEND=noninteractive apt-get update \
&& apt-get install -y iputils-ping
...
ENTRYPOINT ["entrypoint.sh"]
And in the entrypoint.sh file
#!/bin/bash
...
ping 10.10.0.1 >/dev/null 2>/dev/null
I use this instead of CMD bash, as I always wind up using a startup file.
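Put together, a minimal version of that setup might look like this (the COPY of the script and its location at /entrypoint.sh are assumptions; the original elides those lines):

FROM debian:stable
RUN DEBIAN_FRONTEND=noninteractive apt-get update \
    && apt-get install -y iputils-ping
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]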