Docker directory structure in local system

I am using http://blog.thoward37.me/articles/where-are-docker-images-stored/ to understand Docker's directory structure. That link suggests that every Docker image entry under /var/lib/docker/graph/ has three fields: json, layersize, and /layer. But on my local system I don't find the /layer directory for any image. Why is that? The Docker version I am using is 1.3.2.

No issues, I found the answer. The link I specified uses Docker 0.8, and I am using Docker 1.3.2. In newer Docker versions, the layers of the rootfs are found in /var/lib/docker/aufs/mnt/.

Related

Error when running a Shiny app in the background with system()?

I found two ways to run a Shiny application in the background:
The first one
path_aux = "R -e \"shiny::runApp('inst/app.R', launch.browser = TRUE)\""
system(path_aux, wait = FALSE)
The issue
This alternative seems to run a different version of my Shiny app. I have a variable called fileName, and before running my app I run this line:
fileName <- "OpenTree"
But when I run
path_aux = "R -e \"shiny::runApp('inst/app.R', launch.browser = TRUE)\""
system(path_aux, wait = FALSE)
my variable fileName has another value, always the same, and I don't know where it comes from.
Second Alternative
I tested another alternative:
rstudioapi::jobRunScript(path = "inst/shiny-run.R")
shiny-run.R
shiny::runApp(appDir = "inst/app.R",port = 3522)
This alternative works fine.
I would like to use the first one because I may want multiple windows running my app (which is a package). I just want to know where this fileName value comes from; I don't know if system() takes a snapshot or something like that.
Thank you
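A likely explanation (an assumption, not confirmed in the thread): system() starts a brand-new R process, which does not inherit your session's global environment, so the fileName the app sees must come from the app code itself or from a workspace it loads. One way to pass the value explicitly is through an environment variable, sketched here (FILE_NAME is a hypothetical name, and inst/app.R would need to read it back):

```r
# Sketch: environment variables, unlike global variables, ARE inherited
# by the child R process that system() launches.
Sys.setenv(FILE_NAME = "OpenTree")   # visible to child processes
path_aux <- "R -e \"shiny::runApp('inst/app.R', launch.browser = TRUE)\""
system(path_aux, wait = FALSE)

# inside inst/app.R, read the value back:
# fileName <- Sys.getenv("FILE_NAME", unset = "OpenTree")
```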
Since I recommended it in a comment, I'll formalize it a little here: Docker. In hindsight, this answer turned into much more of a tutorial/howto than intended. I hope it isn't daunting; the steps are actually quite straightforward. Once you have docker available, you might only need to run something similar to:
$ docker run --rm -d -p 3838:3838 \
-v c:/Users/r2/R/win-library/4.0/shiny/examples/:/srv/shiny-server \
-v c:/Users/r2/shinylogs/:/var/log/shiny-server/ \
-v c:/Users/r2/R/otherlibrary/:/mylibrary/ \
rocker/shiny-verse:4.0.3
Very little of the below is R code, it is all in a shell/terminal. I am testing it on win10 using git-bash, but this should work on macos or linux with very little modification.
Table-of-Contents
Install docker if not installed.
Pull the shiny-server image.
Start the shiny-server container (with app-dir mount).
Browse to the running app.
Update your apps; changes take effect immediately.
Stop the server (when done for the day).
And then two sections on Logs and some ways to deal with Other Packages.
Content
First, make sure docker is installed and available:
$ docker -v
Docker version 20.10.2, build 2291f61
If not found, then Get Docker is the best resource for installing it.
Pull one of the available images. I'm demonstrating using rocker/shiny-verse which includes shiny and all of the tidyverse dependencies (image size is 1.95GB), but there is also rocker/shiny that is slightly smaller (1.56GB). (The docs are a bit more detailed at the second link.)
I strongly recommend pulling a specific version instead of :latest (or no version), since you likely need to closely mimic the dev-environment on your laptop. This running container will not use your local R, it has its own. So it is possible that you only have R-3.6 installed on your host computer and use R-4.0.3 within the container. Use the same (major.minor) version you're testing/developing on.
$ docker pull rocker/shiny-verse:4.0.3
4.0.3: Pulling from rocker/shiny-verse
a4a2a29f9ba4: Already exists
127c9761dcba: Already exists
d13bf203e905: Already exists
4039240d2e0b: Already exists
fffc4b622efe: Pull complete
c265253654a5: Pull complete
e0161c6ad391: Pull complete
8e7558fa9ec5: Pull complete
Digest: sha256:ce760db38a4712a581aa6653cf3a6562ddea9b48d781aad4f9582b8056401317
Status: Downloaded newer image for rocker/shiny-verse:4.0.3
docker.io/rocker/shiny-verse:4.0.3
Determine the (host) directory to "mount". I'm going to use the examples installed with R's shiny package, but you can use anything. There are two options:
Single App
Mount: the app directory itself, e.g., c:/Users/r2/R/win-library/4.0/shiny/examples/01_hello/
Use: http://localhost:3838/ will run that one app.
Directory of Multiple Apps
Mount: a directory with other app-directories within it, e.g., c:/Users/r2/R/win-library/4.0/shiny/examples/
Use: http://localhost:3838/ will show the directory of apps, and http://localhost:3838/01_hello/ will run the first app.
In your case, you might choose something like /path/to/your/package/inst/ and opt for the single-app option from above.
Per the docs at rocker/shiny, we'll mount this on /srv/shiny-server/.
$ docker run --rm -d -p 3838:3838 \
-v c:/Users/r2/R/win-library/4.0/shiny/examples/:/srv/shiny-server \
rocker/shiny-verse:4.0.3
ba4654d541ef39fb4882364446ece0b516e518baf946c21d1857565f05acd2c5
(FYI: the -p 3838:3838 is needed to "expose" the server outside of the running shiny-server "container". The first number is the port visible to your computer; the second number must remain 3838, since that is the port the server actually uses internally. If you don't include this, you won't be able to reach the server.)
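For example, to expose the server on a different host port (4000 here, an arbitrary choice), only the first number changes; this is a sketch of the same run command as above:

```shell
# Host port 4000 forwards to the server's fixed internal port 3838:
docker run --rm -d -p 4000:3838 \
  -v c:/Users/r2/R/win-library/4.0/shiny/examples/:/srv/shiny-server \
  rocker/shiny-verse:4.0.3
# then browse to http://localhost:4000
```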
Point your browser to http://localhost:3838 (or whatever port you assigned above) and you should see the app (if single-app) or the directory of apps.
When you are done, kill (stop) the container. Its name here (exciting_gagarin) is auto-generated by docker; use docker ps to find yours:
$ docker stop exciting_gagarin
Logs
There will come a time when you want/need to look at logs (e.g., warnings/errors). There are two types of logs:
server logs, accessible by running
docker logs -f exciting_gagarin
(Ctrl-C to stop the logs, server keeps running.)
app logs (per session). For this, you can jump into the running container and look at the logs, but these logs will be deleted when the container is stopped (because files inside docker containers are by default ephemeral) ... and this can be inconvenient. Instead, you can mount the app-logs to a local directory by including this volume-mount directive to the docker run command:
-v c:/Users/r2/shinylogs/:/var/log/shiny-server/
When an app is running and a warning/error occurs, you should see files like:
$ ls -l c:/Users/r2/shinylogs/
total 5
-rw-r--r-- 1 r2 197121 112 Jan 8 08:38 shiny-server-shiny-20210108-133524-43293.log
-rw-r--r-- 1 r2 197121 237 Jan 8 08:53 shiny-server-shiny-20210108-135551-35903.log
-rw-r--r-- 1 r2 197121 271 Jan 8 08:54 shiny-server-shiny-20210108-135610-44445.log
-rw-r--r-- 1 r2 197121 271 Jan 8 08:54 shiny-server-shiny-20210108-135612-40369.log
-rw-r--r-- 1 r2 197121 145 Jan 8 08:57 shiny-server-shiny-20210108-140001-43857.log
Other Packages
It is possible that you have packages that are not included in rocker/shiny or rocker/shiny-verse. Fear not! Follow these instructions:
Create a local (not in-container) directory where you will install these missing packages. I use a separate folder because the host OS may differ from the in-container OS (debian), in which case packages may not be in the right format. I'll use c:/Users/r2/R/otherlibrary. I will not use this path in the host-OS "R" instance.
Add a volume-mount to your command that makes this available inside the container:
-v c:/Users/r2/R/otherlibrary/:/mylibrary/
If you have a container running, you'll need to docker stop it and then rerun it with this addition.
In your app, add
.libPaths( c("/mylibrary", .libPaths()) )
Two things about this: (1) it is required for R to look in the alternate directory; and (2) it does no harm if run on the host, where "/mylibrary" does not exist. You can optionally wrap it in a conditional of if (dir.exists("/mylibrary")) ... if you like.
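As a concrete sketch of that optional guard:

```r
# Prepend the container-only library path, but only when it exists,
# so the same app code also runs unchanged on the host:
if (dir.exists("/mylibrary")) {
  .libPaths(c("/mylibrary", .libPaths()))
}
```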
Install the additional CRAN packages.
I see many apps that are built to install packages as needed, such as
if (!require("ggrepel")) install.packages("ggrepel")
This isn't wrong (and is the correct use of require versus library), but it isn't my preferred way of doing things.
An alternative is to install these needed packages manually, which we can do by entering R within the container and installing to the appropriate library path.
$ docker exec -it exciting_gagarin R
R version 4.0.3 (2020-10-10) -- "Bunny-Wunnies Freak Out"
Copyright (C) 2020 The R Foundation for Statistical Computing
Platform: x86_64-pc-linux-gnu (64-bit)
R is free software and comes with ABSOLUTELY NO WARRANTY.
You are welcome to redistribute it under certain conditions.
Type 'license()' or 'licence()' for distribution details.
R is a collaborative project with many contributors.
Type 'contributors()' for more information and
'citation()' on how to cite R or R packages in publications.
Type 'demo()' for some demos, 'help()' for on-line help, or
'help.start()' for an HTML browser interface to help.
Type 'q()' to quit R.
> .libPaths( c("/mylibrary", .libPaths()) )
> .libPaths()
[1] "/mylibrary" "/usr/local/lib/R/site-library"
[3] "/usr/local/lib/R/library"
> install.packages("ggrepel")
Installing package into ‘/mylibrary’
(as ‘lib’ is unspecified)
trying URL 'https://packagemanager.rstudio.com/all/__linux__/focal/latest/src/contrib/ggrepel_0.9.0.tar.gz'
Content type 'binary/octet-stream' length 980180 bytes (957 KB)
==================================================
downloaded 957 KB
* installing *binary* package ‘ggrepel’ ...
* DONE (ggrepel)
The downloaded source packages are in
‘/tmp/RtmpgPG8YS/downloaded_packages’
> q("no")
Non-CRAN packages. I realize that you're working on a local package. Both of the referenced images contain the devtools package, so with an extra volume-mount, you can use devtools::install, devtools::load_all, or if you've built the source package already, you can install.packages("/mylibrary/mypackage.tar.gz").
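If you have already built a source tarball of your package and dropped it into the mounted folder, one possible way to install it from the host is sketched below; the container name and tarball name are assumptions:

```shell
# Install a local source package into the shared /mylibrary path,
# from outside the container (mypackage_0.1.0.tar.gz is hypothetical):
docker exec -it exciting_gagarin R -e \
  '.libPaths(c("/mylibrary", .libPaths())); install.packages("/mylibrary/mypackage_0.1.0.tar.gz", repos = NULL, type = "source")'
```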
More detail on why it might be necessary to have a different library of packages: compiled code. For instance, when I install ggrepel (one example) on windows, among the files are
ggrepel/libs/i386/ggrepel.dll
ggrepel/libs/x64/ggrepel.dll
which are compiled libraries specific to the Windows OS. When I install the same package on linux, I do not have those libraries, instead I see
ggrepel/libs/ggrepel.so
Why do I say linux? Because these rocker/shiny* images are running linux under the hood ... even if your host OS is windows or macos.
The file formats are incompatible, so renaming doesn't work. Many packages might work out-of-the-box (have not tested), but if you try to mount your host-OS R library path into the container and see errors, think of this discussion.
Notes on docker
(In case you are not familiar.)
In general, everything in a running docker container is ephemeral, meaning logs and saved files are gone when the container is stopped. The way around this is to use volume-mounts as we have done above. In this case, if the app saves a file to its own directory, then you will see it in the host OS under /path/to/mypackage/inst/....
Depending on your OS configuration, this is visible only on the local host, so either http://localhost:3838 or http://127.0.0.1:3838 (which are not always identical, e.g. windows sometimes). If you need it exposed to other computers on your network ... then I recommend you do a little research on this. Running servers in docker and exposing it to the rest of the network has risks and consequences.

Docker : Ubuntu/Shiny R : error when I try to run my own custom environment

I'm quite new to docker, and I'd like to create a docker environment with exactly the same configuration as my production server. My docker image will be used as a local development environment for one specific R Shiny Server application.
Here are my settings:
I’m working locally on Windows 7
Server is Ubuntu 18.04.1 LTS
Server R version : 3.5.1
I managed to use rocker/rstudio, but it doesn't allow me to pick the R version; furthermore, it's based on a Debian distribution.
So, quite innocently, I tried to build my own Dockerfile based on already existing Dockerfiles, to perform installation from Ubuntu -> R -> RStudio + Shiny server.
My Dockerfile builds successfully, but I get an error when I try to run it with the following command line:
docker run -p 8787:8787 -e PASSWORD=Mypswd -v /c/Users/njeanray/Documents/Myproject:/home/rstudio/myproject rstudio:R3.5.1
Please find my Dockerfile here:
https://wetransfer.com/downloads/972d94d2ec730ecb8afbc2b315c8fbb020200429094458/3c31aa
It's quite weird, because I've taken the code from the rocker/rstudio Dockerfile, and running rocker/rstudio works…
How can I manage to run my environment from Ubuntu 18.04, with R 3.5.1 and RStudio ?
Can you tell me what I'm doing wrong ?
Many thanks in advance.
I created a docker image from the Dockerfile you shared. It is hosted at https://hub.docker.com/r/aktechthoughts/r-studio-docker and it is working fine.
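In case the linked file expires, here is a minimal Dockerfile sketch of the usual approach for this setup (Ubuntu 18.04 with the R 3.5 series from CRAN's bionic-cran35 repository). This is an untested sketch under assumptions, not the asker's exact file, and the repository may serve a later 3.5.x patch release:

```dockerfile
FROM ubuntu:18.04
ENV DEBIAN_FRONTEND=noninteractive
# Add CRAN's Ubuntu repository for the R 3.5 series on bionic, then install R
RUN apt-get update \
 && apt-get install -y --no-install-recommends \
      gnupg software-properties-common ca-certificates \
 && apt-key adv --keyserver keyserver.ubuntu.com \
      --recv-keys E298A3A825C0D65DFD57CBB651716619E084DAB9 \
 && add-apt-repository "deb https://cloud.r-project.org/bin/linux/ubuntu bionic-cran35/" \
 && apt-get update \
 && apt-get install -y r-base r-base-dev
```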

How to install Julia kernel for Jupyterhub

I'm trying to make the Julia language available through JupyterHub on an Ubuntu server.
I already have JupyterHub installed and configured. It is working fine with Python 3.5, and the authentication method is regular Unix users with PAM.
I installed Julia in /usr/local/julia-1.0.2/, so it is available globally for all users.
Then, as the root user, I set JULIA_DEPOT_PATH="/usr/share/juliapackages/".
Then, again as the root user, I ran julia and executed:
using Pkg
Pkg.add("IJulia")
It installs IJulia in the specified path.
From this point on, I couldn't find any further useful instructions on installing a Julia kernel for JupyterHub, so I don't know how to proceed.
Does anybody have a good step-by-step document with the solution?
I followed the instructions proposed here, but they don't seem to work for me.
As you are using JupyterHub, the best way would be to use a docker spawner with the data-science docker image, which has Julia already installed and configured:
https://github.com/jupyter/docker-stacks/blob/master/datascience-notebook/Dockerfile
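If you'd rather stay with the PAM setup instead of a docker spawner, a possible follow-up is to take the kernelspec that IJulia wrote for root and install it system-wide so every user can see it. This is a sketch; the paths are assumptions based on the question:

```shell
# Run as root. IJulia writes its kernelspec under
# ~/.local/share/jupyter/kernels/julia-1.0 for the installing user;
# `jupyter kernelspec install` (as root) copies it to a system-wide
# location visible to all JupyterHub users.
export JULIA_DEPOT_PATH=/usr/share/juliapackages/
/usr/local/julia-1.0.2/bin/julia -e 'using Pkg; Pkg.build("IJulia")'
jupyter kernelspec install /root/.local/share/jupyter/kernels/julia-1.0
jupyter kernelspec list   # julia-1.0 should now appear for everyone
```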

Meteor - Test application using local package over the published one

I'm using Meteor 0.9.3, and I want to try to make some changes to a Meteor smart package. I'm using the package in my app already, let's call it: author:smartpackage.
First, I removed my reference to the published package:
meteor remove author:smartpackage
I've forked the repository on GitHub, and made a local clone in:
/somedir/meteor-smartpackage/
I've created a directory in my meteor app:
/meteor/myApp/packages
and created a symlink:
ln -s /somedir/meteor-smartpackage /meteor/myApp/packages/meteor-smartpackage
How do I now add this local package into my app? I've tried a variety of
meteor add xxxx
options, but I can't find the right command. Am I even close?
The steps you described look good to me, so maybe it is the symlink that is messing things up.
The proper way of maintaining private packages is to have a packages/ directory somewhere in your filesystem, say ~/meteor/packages. Then you have to set a special environment variable called PACKAGE_DIRS, which the meteor command-line tool looks up to find local packages that reside outside the official package repositories.
So let's set this environment variable in your .bashrc and re-source it:
echo "export PACKAGE_DIRS=$HOME/meteor/packages" >> ~/.bashrc;
. ~/.bashrc
Then assuming your forked package resides in ~/meteor/packages, meteor add author:package should work normally.
Update to saimeunt's answer, for Meteor 1.2+
I found that loading the local package requires leaving out the author prefix when running meteor add.
Loads Local Package
meteor add cocos2d-meteor
Loads Remote Package
meteor add jakelin:cocos2d-meteor

Build and run a development environment with Docker

We are trying to create a Docker container which will host and run our webapp (mainly written in PHP with Symfony2).
For the moment, the container embeds all the application code, cloned when building the image (through a Dockerfile). The app runs correctly on OSX, through Vagrant (Precise64 base image).
We are now struggling to share the container-embedded code with the host (Vagrant -> OSX) for development purposes (editing a file on the host OSX should affect the container code).
It seems that there is no way to share this folder from the container to the host.
Sharing a folder from host to container (the -v option of the run command) overwrites the original container folder.
A soft link does not work either, since the hosts (Vagrant and OSX) cannot read the original location.
I'm sure the solution involves Docker volumes (http://docs.docker.io/en/latest/use/working_with_volumes/), but we have not yet figured out how to make it work.
Do you have feedback / experience on this?
You can share a file from OSX to the container along the following path:
OSX dir (host) -shared folder-> /vagrant (vagrant) -volume-> container dir (container)
but then the file is saved on your host, not in the container.
If you want to save the file in the container and share it with your OSX host: all of a container's files live in an aufs dir at /var/lib/docker/aufs/mnt/{container id}, and you can share that folder with OSX using the features supported by Vagrant or other tools:
container dir (container) -aufs-> /var/lib/docker/aufs/mnt/{id} (vagrant) -share-> OSX dir (host)
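Concretely, the first path (host -> container) could look like the following inside the Vagrant VM; the image name and paths here are assumptions:

```shell
# Mount the code that OSX shares into /vagrant straight into the
# container, so edits made on OSX show up in the container immediately:
docker run -d -p 8080:80 \
  -v /vagrant/webapp:/var/www/webapp \
  my-symfony-image
```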
