I am using Docker to run an R script which saves its output to the container's /home folder.
Is there some way I can:
A) Access a shell inside that container while the script is running? Trying to attach from another terminal just brings me to the same output as the 'taken-over' original terminal. I would like to access the file system while it runs. -SOLVED
B) Grab the output saved into /home on the container and put it on my local machine? Does the data 'disappear' once the container dies, or is there some way to access it after the script finishes and the container closes?
Thanks!
a) You can use docker exec -it yourservername bash, assuming bash is available in the image.
b) You can set up a volume that maps a directory on your machine to a directory in the container. Everything that gets written or changed in that directory will be available locally. Check out the volume documentation.
You can also restart the container and copy the output out (e.g. with docker cp) if you haven't removed the container; a short sketch of these options follows.
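A minimal sketch of these options, assuming a container named rscript-job, an image named myimage, and an output file /home/results.csv (all placeholder names):
# a) open a second shell in the running container
docker exec -it rscript-job bash
# b) mount a host directory over /home so the output lands on your machine
docker run --rm -v "$PWD/output:/home" myimage
# or copy the file out after the container has exited (works until the container is removed)
docker cp rscript-job:/home/results.csv ./results.csv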
Related
I'm trying to give special permissions to a file located inside a container but I'm getting a "No such file or directory" error message.
The Dockerfile basically runs an R script that generates an output.pptx file located inside an output folder created inside the container.
I want to send that output to an S3 bucket, but for some reason it isn't finding the file inside the container.
# Make the output directory
RUN mkdir output
# Process main file
CMD ["RScript", "Script.R"]
# install AWS CLI
RUN curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
RUN unzip awscliv2.zip
RUN ./aws/install -i /usr/local/bin/aws -b /usr/local/bin
# run AWS CLI to push output file to s3 folder
RUN chmod 775 ./output/*.pptx
RUN aws s3 cp ./output/*.pptx s3://bucket
Could this be related to the path I'm using for the file?
(Edited to fix a word-swap brain-fart in the first version.)
I get the idea that there is a misunderstanding of how the image should be used. That is, a Dockerfile creates an image, and the CMD is not actually run when building the image.
Up front:
an image is really just a tarball of filesystem layers, one per step of the build process (which can be squashed); an image has no "running" component, and no processes are active in an image; and
a container is an image in a running state. It might be running the CMD you specify, or it might be something else (e.g., docker run -it --rm myimage /bin/bash to run a bash shell with the image as the filesystem/environment). When the running command finishes and exits, the container is stopped.
Typically, you create an image "once" (security updates and such notwithstanding), and then run it as needed (either manually or via some trigger, e.g., cron or CI triggers). That might look like
docker run --rm myimage # using the default `CMD`
docker run --rm myimage R Script.R # specifying the command manually
with a couple of assumptions:
the image has R installed and within the PATH ... though you could specify the full /path/to/bin/R instead; and
the default working directory (the Dockerfile's WORKDIR directive) contains Script.R ... or you can specify the full internal path to Script.R
With this workflow, I suggest a few changes:
from the Dockerfile, remove the lines after # run AWS CLI;
add to Script.R steps to copy the file to your S3 instance, either using the awscli you installed in the image, or by using the aws.s3 R package (which might preclude the need to install the awscli);
I don't use AWS S3, but I suspect you need credentials to access the bucket; there are many ways of dealing with images and "secrets" like S3 credentials. The most naïve approaches involve hard-coding the credentials into the container, which is a security risk; others involve "docker secrets" or environment variables. For instance,
docker run -it --rm -e "S3TOKEN=asldkfjlaskdf"
though even that might be intercepted by neighboring users on the docker host.
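A minimal sketch of that run-time flow, assuming the AWS CLI is installed in the image, Script.R writes output/output.pptx, and the image name and credential values are placeholders (the AWS CLI reads AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from the environment):
# Build once, then upload at run time; credentials come from the environment
# instead of being baked into the image.
docker build -t my-r-image .
docker run --rm \
  -e AWS_ACCESS_KEY_ID="..." \
  -e AWS_SECRET_ACCESS_KEY="..." \
  my-r-image \
  sh -c 'Rscript Script.R && aws s3 cp output/output.pptx s3://bucket/'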
I have been working on an ASP.NET application using Docker, and when I launch it through Visual Studio it works great! However, if I try to run anything from the command line (or PowerShell, or VS's CLI/PowerShell) it will run, but the container it generates refuses all connections.
I am on Windows 10 with Docker Desktop installed, trying to run an ubuntu:18.04 image (I've tried Alpine and ubuntu:16.04 as well).
Steps to reproduce:
-Create a default ASP.NET application in Visual Studio
-Add Docker Support
-Run with 'Docker' selected
-Open browser, navigate to localhost:[YourPort]
-Success! Works as intended.
Then, either using the same image or a downloaded one (I tried dockersamples/static-site to confirm it wasn't a problem with the specific project):
-Open CMD
-Run docker run -p [HostPort]:[ContainerPort] [SameImageVSUses:tag] on a different port
-See that docker ps shows both containers running next to each other
-Open browser (Firefox), get error
The connection was reset
Update
I changed the ASP.NET app's Program class to use 0.0.0.0 instead of localhost. I believe this was necessary, but now I see
Secure Connection Failed
PR_END_OF_FILE_ERROR
If I curl localhost:[MyPort], I get (52) empty reply from server
/Update
Well, maybe Visual Studio does more that I'm not aware of.
A little bit of digging shows that yes, it throws in a ton of extra arguments! Using the exact command Visual Studio uses, copied and pasted, gives me... the exact same error.
To clarify, the containers still run from the command line; I can get a shell in them or docker inspect them (in fact, the VS-started and CMD-started containers' docker inspect output is identical other than the network addresses they're bound to). I get no error messages at all from the process of building and starting the container, so if some part of it is failing it is doing so silently.
I'm relatively new to Docker but I can't seem to find a fix for this, or even a reason behind it. What is Visual Studio doing that I'm not? I've tried everything I'm aware of, I even had to wipe my machine (unrelated) and the exact same thing happened when I got everything reinstalled. My gut tells me it's something on my machine, but then the VS-launched one should fail too, right?
I can't find anything that tells me to flip a magic switch if I'm running CLI stuff, and nothing I do to the Dockerfile or command arguments seems to work. I've never used VirtualBox or Docker Toolbox, so this shouldn't be a wonky configuration screwed up by an old program, because it works fine when launched from Visual Studio! Agh!
I hope that this is indeed a magic switch I haven't flipped, otherwise there is something very basic that I don't understand about what I'm working with.
If you are trying to run a recent VS template, you just need to follow these instructions:
Go to the API project directory:
cd ./src/YourApiDirectory
Build Command:
docker build -f ./Dockerfile --force-rm -t yourapiimage:dev ..
Run Command:
docker run -it --rm -e "ASPNETCORE_ENVIRONMENT=Development" -p 58817:80 --name yourapiname yourapiimage:dev
Please note that the "-it" flag in the last command will run your image in "interactive" mode. Also note that I am using only an HTTP connection via port 58817.
Thank you for the suggestions; it ended up being something rather frustrating. I think it was a combination of two problems:
This stuff could be causing problems for others, but I was mistaken; it did not work for me:
First and foremost, no amount of docker configuration tells your website to listen for anything inside the container. I believe the website wasn't listening for anything when I initially tried most fixes.
The real problem was that the launchSettings.json in the .csproj Properties folder apparently overrides arguments from the command line!
Remember how I said '...run it alongside the first...'? That means I was never running the website on the correct set of ports. Apparently, -p 8001:443 -e ASPNETCORE_HTTPS_PORT=443 is not enough to make the site listen on 443. You must also set the sslPort in the launchSettings.json. Such is life, I suppose.
This is what finally worked
I ran docker-compose up in the solution directory. That's it. I didn't see a docker-compose.yml when I was looking in VS so I didn't think about it, but that's only because VS doesn't show solution-level items. I guess the thing that VS was doing that I wasn't was running docker-compose instead of individual commands.
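In other words, something like the following from the command line, with the path being a placeholder for wherever the .sln and docker-compose.yml live:
cd C:\Users\me\source\repos\YourSolution
docker-compose up --build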
When you launch directly with the Docker profile in Visual Studio (which is done via the docker-compose file), Visual Studio merges different override files behind the scenes and performs several extra tasks, one of which is attaching the remote debugger to the container.
To help you, I've created a sample ASP.NET Core API via Visual Studio 2019, selecting .NET Core 3.0.
The following is the docker-compose command that VS2019 generated on my machine when I launched my API via VS2019.
docker-compose -f "C:\Users\myuser\source\repos\testwebcore\docker-compose.yml" -f "C:\Users\myuser\source\repos\testwebcore\docker-compose.override.yml" -f "C:\Users\myuser\source\repos\testwebcore\obj\Docker\docker-compose.vs.debug.g.yml" -p dockercompose14364360289538262671 --no-ansi up -d --build --force-recreate --remove-orphans
I can get it to work directly in PowerShell by running the following command; here I am using the same settings that VS2019 puts in the default override file. You have to run this command from the parent folder, outside the project folder.
docker-compose -f "C:\Users\myuser\source\repos\testwebcore\docker-compose.yml" -f "C:\Users\myuser\source\repos\testwebcore\docker-compose.override.yml" up
If you want to build and run directly with the Dockerfile instead of docker-compose:
You can build with the following command, which, as before, should be run from the folder outside the project folder.
docker build -f testwebcore/Dockerfile -t testcore .
After building the image, you can run it with the command below, but before that you need to create a certificate and pass a couple of environment variables to the run command. The details are described on the following page, especially the section on Windows Subsystem for Linux. I am running Linux containers on my Windows 10 laptop.
So you have to run the following command to generate the certificate:
dotnet dev-certs https -ep %USERPROFILE%\.aspnet\https\aspnetapp.pfx -p testpassword
The complete run command, with the environment variables and the certificate generated above, is as follows.
docker run --rm -it -p 8000:80 -p 8001:443 -e ASPNETCORE_URLS="https://+;http://+" -e ASPNETCORE_HTTPS_PORT=8001 -e ASPNETCORE_Kestrel__Certificates__Default__Password="testpassword" -e ASPNETCORE_Kestrel__Certificates__Default__Path=/https/aspnetapp.pfx -v c:\users\myuser\.aspnet\https:/https/ testcore:latest
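As a quick sanity check against the ports mapped above, the -k flag lets curl accept the self-signed development certificate:
curl http://localhost:8000
curl -k https://localhost:8001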
Because I need to direct shiny-server logs to stdout so that docker logs (and monitoring utilities relying on it) can see them, I'm trying to do some kind of:
tail -f <logs_directory>/*
This works as needed as long as no new files are added to the directory; the problem is that shiny-server dynamically creates files in this directory, and those need to be picked up automatically as well.
I found that other users have solved this via the xtail package; the problem is that I'm using CentOS and xtail is not available for it.
The question is: is there any "clean" way of doing this via the standard tail command without needing xtail? Or is there an equivalent package to xtail for CentOS?
You will probably find it easier to use the docker run -v option to mount a host directory into the container and collect logs there. Then you can use any tool you want that collects log files out of a directory (logstash is popular but far from the only option) to collect those log files.
This also avoids the problem of having to both run your program and a log collector inside your container; you can just run the service as the main container process, and not have to do gymnastics with tail and supervisord and whatever else to try to keep everything running.
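For example, a minimal sketch assuming the rocker/shiny image and its default log directory /var/log/shiny-server:
# Mount a host directory over the log directory so every log file,
# including ones created later, is written straight to the host.
docker run -d -p 3838:3838 \
  -v /srv/shiny-logs:/var/log/shiny-server \
  rocker/shiny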
I am running Shiny-Server (to run web applications built in R) in a Docker container. I have an application where the user can upload some files. It's working, but on the server OS I needed to give read and write permissions to the user "shiny". The problem is that every time I need to do something with the container (like restart, or simply stop and start) I lose the change made to the folder's permissions, which revert to the default.
I tried to use docker commit and docker run again on the container, using the new image, but it did not work. So now I am looking into whether I can use docker run and docker exec together, doing something like this: docker run <docker commands to run shiny-server> exec -it bash <bash commands to change folder permissions>.
Is it possible? Does anyone have a good solution for this case?
Thanks.
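For what it's worth, a minimal sketch of the run-then-exec combination being asked about, with the image name, container name, and upload path as placeholders:
# Start shiny-server in the background, then fix permissions in a second step;
# the two commands are run separately rather than combined into one.
docker run -d --name shiny-app -p 3838:3838 my-shiny-image
docker exec shiny-app chown -R shiny:shiny /srv/shiny-server/app/uploads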
I am trying to reproduce the results of an R script on my local Windows machine (the results it gave on the Kaggle server). For this, someone suggested using Docker images to run the R script locally.
I have installed Docker and finished setting it up by following the instructions given here: https://docs.docker.com/windows/step_one/
After installing, I am struggling with how to create the Kaggle R image and run an R script locally using local resources/data. Can someone please help me with this?
You can pull the already-built rstats image from Docker Hub:
docker run kaggle/rstats
To use your local data you should create a volume:
docker run -v /your/local/data/path:/path/in/docker/container kaggle/rstats
The volume binds your local storage to the container's storage. You can also create an additional volume for output data.
The last line in the rstats Dockerfile is
CMD ["R"]
This means that the R console will be started when the container starts. Just paste your script into the terminal (the script should read data from the mounted volume and write results to the mounted output volume). After the script finishes, you can stop the container. Your output data will be saved on your local machine.
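Alternatively, a minimal sketch that skips the interactive console and runs the script directly, assuming the image provides Rscript on the PATH (the local paths are placeholders):
# Mount the input data, an output directory, and the script itself,
# then run the script non-interactively instead of pasting it into the console.
docker run --rm \
  -v /c/Users/me/kaggle/data:/input \
  -v /c/Users/me/kaggle/output:/output \
  -v /c/Users/me/kaggle/script.R:/script.R \
  kaggle/rstats Rscript /script.R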
P.S. The image is giant (6 GB). I have never seen such a large Docker image before.