Docker and Analytics Install

I have a Dockerfile called quasar.dockerfile. I built it and everything loaded successfully.
#quasar.dockerfile
FROM java:8
WORKDIR /app
ADD docker/quasar-config.json quasar-config.json
RUN apt-get update && \
apt-get install -y wget && \
wget https://github.com/quasar-analytics/quasar/releases/download/v2.3.3-SNAPSHOT-2121-web/web_2.11-2.3.3-SNAPSHOT-one-jar.jar
EXPOSE 8080
CMD java -jar web_2.11-2.2.3-SNAPSHOT-one-jar.jar -c /app/quasar-config.json
I then tried running the container and I get this error saying that it is unable to access the jarfile.
[test]$ docker build -f docker/quasar.dockerfile -t quasar_fdw_test/quasar .
Sending build context to Docker daemon 1.851 MB
Successfully built a7d4bc6c906f
[test]$ docker run -d --name quasar_fdw_test-quasar --link quasar_fdw_test-mongodb:mongodb quasar_fdw_test/quasar
6af2f58bf446560507bdf4a2db8ba138de9ed94a408492144e7fdf6c1fe05118
[test]$ docker ps -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6af2f58bf446 quasar_fdw_test/quasar "/bin/sh -c 'java -ja" 5 seconds ago Exited (1) 4 seconds ago quasar_fdw_test-quasar
[test]$ docker logs 6af2f58bf446
Error: Unable to access jarfile web_2.11-2.2.3-SNAPSHOT-one-jar.jar
Why does the process keep getting killed? It seems to be related to being unable to access the jarfile, but the build needed that file and completed successfully. Is this a linking issue?

Try using the full path in the Dockerfile:
CMD java -jar /web_2.11-2.2.3-SNAPSHOT-one-jar.jar -c /app/quasar-config.json
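Note that the jar is downloaded while WORKDIR is /app, so it most likely ends up under /app, and the filename in the CMD (2.2.3) does not match the version that wget actually downloads (2.3.3). As a quick sanity check before changing the CMD, you can list what really ended up in the image (a hedged sketch; the image tag is the one built above):
# list the image contents to see the exact jar path and name
docker run --rm quasar_fdw_test/quasar ls -l /app /
# then point the CMD at whatever path/name the listing shows, e.g. (assumption):
# CMD java -jar /app/web_2.11-2.3.3-SNAPSHOT-one-jar.jar -c /app/quasar-config.json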

Related

How to call Dockerized R Plumber API from an external source?

I have created an R Plumber API and deployed it in a Docker image.
Dockerfile:
ARG R_VERSION=latest
FROM rocker/r-ver:${R_VERSION}
# install the linux libraries needed for plumber
RUN apt-get update -qq && apt-get install -y \
libssl-dev \
libcurl4-gnutls-dev
COPY / /
# Making home & test folders
RUN Rscript required-packages/required-packages.R
# Giving permission to tests to run
RUN chmod +x tests/run_tests.sh
# Run Tests
RUN tests/run_tests.sh
# open port 7575 to traffic
EXPOSE 7575
# when the container starts, start the main.R script
ENTRYPOINT ["Rscript", "./main.R"]
main.R
library(plumber)
r <- plumb("./plumber.R")
r$run(port = 7575, host = "0.0.0.0")
I run the container with the following command.
docker run --rm -p 7575:7575 'container-name'
On the machine, http://localhost:7575/echo works perfectly fine. However, I cannot call the API from an external computer with http://ip_address:7575/echo.
What could be the problem? As far as I know, port 7575 is open.
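As a hedged troubleshooting sketch (not something stated in the question): first confirm that Docker really published the port on all interfaces and that the host firewall allows external traffic to it; the ufw line is only an example and depends on the distribution.
# the mapping should show 0.0.0.0:7575->7575/tcp
docker port <container-id>
# test from the host itself, then from the external machine
curl http://localhost:7575/echo
curl http://ip_address:7575/echo
# if the host runs ufw, the port may need to be opened explicitly (assumption)
sudo ufw allow 7575/tcp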

How to force a Shiny app run in Docker to use https

I am trying to run a Shiny application on an open port on my server. I usually run Docker images with the command docker run -p 4000:3838 tag_name, assuming that the Docker container exposes the port and the Shiny app is listening on it.
This all works completely fine for any Shiny application that uses http. But I need https.
So the Dockerfile I use consists of:
FROM rocker/r-ver:4.0.1
# System libs:
RUN apt-get update
RUN apt-get install -y libcurl4-openssl-dev
RUN apt-get install -y libssl-dev
RUN apt-get install -y zlib1g-dev
RUN apt-get install -y libxml2-dev
# R packages installed
RUN R -e "install.packages('remotes')"
RUN R -e "remotes::install_version('searchConsoleR')"
RUN R -e "remotes::install_version('googleAuthR')"
# [...] more R libraries are installed
# Copy application files to a dir
RUN mkdir /root/app
COPY . /root/app
# Expose and set run command from dir above
EXPOSE 3838
CMD ["R", "-e", "shiny::runApp('/root/app', port = 3838, host = '0.0.0.0')"]
Then I execute docker build -t tag_name . and docker run -p 4000:3838 tag_name.
The page is available at http://server.host:4000
However, since I am using Google Login, I need to use https. But when I visit the server's https://server.host:4000 I get an error saying the page does not exist.
Can someone please help?
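shiny::runApp serves plain HTTP only, so https://server.host:4000 has nothing speaking TLS behind it. The usual pattern (an assumption, not something stated here) is to keep the container on HTTP and terminate TLS in a reverse proxy in front of it. A minimal sketch using Caddy on the host, with example.com standing in for your real domain:
# keep the Shiny container on plain HTTP, reachable only from the host
docker run -d -p 127.0.0.1:4000:3838 tag_name
# terminate TLS in front of it; Caddy fetches a certificate automatically for a real domain
caddy reverse-proxy --from example.com --to localhost:4000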

How to find error logs when my dockerized shiny app does not work

I'm trying to put my Shiny app in a Docker container. My app works totally fine on my local computer, but after dockerizing it I always get an error message on localhost like The application failed to start. The application exited during initialization.
I have no idea why that happens. I'm new to Docker. How can I find the error logs when I run the Docker image? I need the logs to know what went wrong.
Here is my Dockerfile:
# Install R version 3.6
FROM r-base:3.6.0
# Install Ubuntu packages
RUN apt-get update && apt-get install -y \
sudo \
gdebi-core \
pandoc \
pandoc-citeproc \
libcurl4-gnutls-dev \
libcairo2-dev/unstable \
libxt-dev \
libssl-dev
# Download and install ShinyServer (latest version)
RUN wget --no-verbose https://s3.amazonaws.com/rstudio-shiny-server-os-build/ubuntu-12.04/x86_64/VERSION -O "version.txt" && \
VERSION=$(cat version.txt) && \
wget --no-verbose "https://s3.amazonaws.com/rstudio-shiny-server-os-build/ubuntu-12.04/x86_64/shiny-server-$VERSION-amd64.deb" -O ss-latest.deb && \
gdebi -n ss-latest.deb && \
rm -f version.txt ss-latest.deb
# Install R packages that are required
# TODO: add further package if you need!
RUN R -e "install.packages(c( 'tidyverse', 'ggplot2','shiny','shinydashboard', 'DT', 'plotly', 'RColorBrewer'), repos='http://cran.rstudio.com/')"
# Copy configuration files into the Docker image
COPY shiny-server.conf /etc/shiny-server/shiny-server.conf
COPY /app /srv/shiny-server/
# Make the ShinyApp available at port 80
EXPOSE 80
# Copy further configuration files into the Docker image
COPY shiny-server.sh /usr/bin/shiny-server.sh
CMD ["/usr/bin/shiny-server.sh"]
I built the image and ran it like below:
docker build -t myshinyapp .
docker run -p 80:80 myshinyapp
Usually the logs for any (live or dead) container can be found by just using:
docker logs full-container-name
or
docker logs CONTAINERID
(replacing the actual ID of your container)
As noted above, this usually works even for stopped (but not yet removed) containers, which you can list with:
docker container ls -a
or just
docker ps -a
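If the container did start and then crashed, its exited entry will show up in that listing and the logs can still be read; a small sketch (the container ID is a placeholder):
# list only exited containers, then look at the end of the log of the one that failed
docker ps -a --filter "status=exited"
docker logs --tail 50 <container-id>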
However, sometimes you won't even have a log, because the container was never created at all (which, from experience, I think is closer to your case).
That can happen simply because the Docker engine is unable to allocate the resources that your service definition requires.
The application failed to start. The application exited during initialization
is usually a sign that the Docker engine could not get the required resources.
The most common cause is as simple as host ports: if another service (dockerized or not) is already using the port you want for your service (in your case, port 80), Docker will simply be unable to start your container.
So, in short, the easiest fix for that situation (and the first thing to try whenever you face this kind of issue) is to bind some other host port (say 8080) to the port 80 that your service listens on internally (inside your container):
docker run -p 8080:80 myshinyapp
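To check whether something on the host is already holding port 80 before remapping, a quick sketch (lsof may need to be installed first):
# any container already publishing port 80
docker ps --filter "publish=80"
# any process at all (dockerized or not) listening on port 80
sudo lsof -i :80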
The same principle applies to unallocatable volumes (e.g. trying to bind-mount, as read-only, a path that doesn't actually exist on the host).
As an aside comment/trick:
Since you're not setting a name for your container, you will need to use the container ID instead when looking for its logs.
But instead of typing (or copy-pasting) the full container ID (usually something like 1283c66babea, or even longer) you can just type its first few digits, and it will still work as expected:
docker logs 1283c6 or docker logs 1283 or even docker logs 128
(of course... as long as you don't have any other 128***** container)

css file integrity check fails after docker build

I have the following issue.
Failed to find a valid digest in the 'integrity' attribute for resource 'http://127.0.0.1:8080/uistatic/css/bootstrap4.0.0.min.css' with computed SHA-256 integrity 'xLbtJkVRnsLBKLrbKi53IAUvhEH/qUxPC87KAjEQBNo='. The resource has been blocked.
And this is happening when I put my site into Docker by building this Dockerfile:
FROM python:3.6
COPY skfront /app
WORKDIR /app
RUN mkdir -p /static/resources
RUN mkdir logs
RUN mkdir certs
RUN pip3 install -r requirements.txt
RUN python3 manage.py collectstatic --settings blog_site.settings
EXPOSE 8080
CMD python3 website.py -l 0.0.0.0 -p 8080
This CSS is a static file that doesn't change.
Any ideas why this is happening?
I found out that building the Dockerfile on Windows can mess up the final image.
When I built my site under Linux there were no integrity errors.
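Assuming the root cause is that a Windows checkout rewrites the file with CRLF line endings (which changes its bytes and therefore its SHA-256), two hedged options are to stop git from converting line endings for that file, or to recompute the integrity value from the file the build actually serves:
# keep git from touching line endings of minified css (appended to .gitattributes)
echo "*.min.css -text" >> .gitattributes
# recompute the SRI value from the file as it exists after the build
# (prefix the result with sha256- in the integrity attribute)
openssl dgst -sha256 -binary bootstrap4.0.0.min.css | openssl base64 -A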

Cannot copy intermediate docker container files to host

I have a Dockerfile; it runs dotnet publish, and the DLLs are copied into an intermediate Docker container. I would like to copy the DLLs that are generated in the container to my local system (host) as well.
I believe we can use the "cp" command to do that, but I am not able to find a way to get the intermediate container ID to use with the "cp" command.
Syntax: docker cp CONTAINER:Container_Path Host_Path.
Please suggest any other, better solution for this scenario.
Dockerfile:
FROM microsoft/aspnetcore-build:1.1.4 as builder
COPY . /Code
RUN dotnet restore /Code/MyProj.csproj
RUN dotnet publish -c Release /Code/MyProj.csproj
RUN cp CONTAINER: /Code/bin/Release/netcoreapp1.1/publish /binaries
Thanks.
This answer works outside of the Dockerfile.
First, your Dockerfile would have to declare a volume:
VOLUME /my/path/in/container
To get files into and out of a volume, try using tar -cvf and tar -xvf to move files between a container and the host.
To put files from the host's newfiles.tar (in the current directory) into a container volume mounted at /my/path/in/container:
docker run --rm \
-v my-volume-data:/my/path/in/container -v $(pwd):/newfiles ubuntu bash -c \
"cd /my/path/in/container && tar -xf /newfiles/newfiles.tar"
To get files out of a container volume mounted at /my/path/in/container into origfiles.tar on the host:
docker run --rm \
-v my-volume-data:/my/path/in/container -v $(pwd):/newfiles ubuntu bash -c \
"cd /my/path/in/container && tar -cf /newfiles/origfiles.tar"
Adding --user 1000:1000 to these docker run commands is optional; it is only needed if your container uses a user with uid 1000.
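An alternative sketch that avoids volumes entirely (assuming the image from this Dockerfile is tagged myapp-builder): create a stopped container from the built image and use docker cp to pull the publish output onto the host.
# build, create (but don't start) a container, copy the DLLs out, then clean up
docker build -t myapp-builder .
id=$(docker create myapp-builder)
docker cp "$id":/Code/bin/Release/netcoreapp1.1/publish ./binaries
docker rm "$id"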
