I would like to run an R Shiny application via ShinyProxy, locally first.
My application is functional; after dockerizing it, it still works when launched with a docker command.
I created a ‘shinyproxy’ image and launched the container, but I get the following error message when I click on the application name:
# Error
**Status code:** 500
**Message:** Container did not respond in time
I suspect a problem related to the different ports … concepts that I don't fully master.
The Dockerfile of the Shiny app looks like this:
with the following .Rprofile:
local({
  options(shiny.port = 3838, shiny.host = "0.0.0.0")
})
The ShinyProxy application.yml looks like this:
What I tried is the following command:
docker run -d -v /var/run/docker.sock:/var/run/docker.sock --net sp-net -p 8080:8080 shinyproxy_container
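From what I have read, the application.yml needs to declare the app's internal port and the shared Docker network, roughly like the sketch below (the app id and image name here are just placeholders, not my actual file):
proxy:
  port: 8080
  docker:
    internal-networking: true
  specs:
    - id: my-shiny-app
      container-image: my-shiny-app-image
      port: 3838
      container-network: sp-net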
All help is welcome
Related
I'm trying to run a Docker image, which is essentially the default ASP.NET 6 API template:
docker run my-image-name --restart=aways -d -p 80:5111
I've tried a few different ways of running this, but it appears to always do the same thing. First of all, whether I launch it in detached or interactive mode, I get the following output:
{"EventId":14,"LogLevel":"Information","Category":"Microsoft.Hosting.Lifetime","Message":"Now listening on: http://[::]:80","State":{"Message":"Now listening on: http://[::]:80","address":"http://[::]:80","{OriginalFormat}":"Now listening on: {address}"}}
{"EventId":0,"LogLevel":"Information","Category":"Microsoft.Hosting.Lifetime","Message":"Application started. Press Ctrl\u002BC to shut down.","State":{"Message":"Application started. Press Ctrl\u002BC to shut down.","{OriginalFormat}":"Application started. Press Ctrl\u002BC to shut down."}}
{"EventId":0,"LogLevel":"Information","Category":"Microsoft.Hosting.Lifetime","Message":"Hosting environment: Production","State":{"Message":"Hosting environment: Production","envName":"Production","{OriginalFormat}":"Hosting environment: {envName}"}}
{"EventId":0,"LogLevel":"Information","Category":"Microsoft.Hosting.Lifetime","Message":"Content root path: /app","State":{"Message":"Content root path: /app","contentRoot":"/app","{OriginalFormat}":"Content root path: {contentRoot}"}}
It then seems to stall: I can't Ctrl-C, nor can I attach to the process (it does show up under docker ps). I also can't connect to the API:
http://localhost:5111/swagger
Just returns ERR_CONNECTION_REFUSED
My guess here is that there's something wrong with the Dockerfile itself; however, it builds fine. The question I have is: how can I debug this in order to determine the error?
I can't add this as a comment because I don't have the reputation, so it will have to be an answer instead.
I am new to Docker myself, so I am not sure why it is stalling, but when you specify the port with -p 80:5111 you are mapping port 5111 in the container to port 80 on the Docker host.
So you should connect with http://localhost:80/swagger instead.
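In other words, with docker run -p <host_port>:<container_port> the first number is the port on your machine and the second is the port inside the container, e.g.:
# requests to port 80 on the host are forwarded to port 5111 inside the container
docker run -d -p 80:5111 my-image-name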
I have configured subuid and subgid after installing Podman on RHEL 7.
I have created a simple Dockerfile to print hello world and was trying to build the image.
My Dockerfile
FROM alpine
CMD ["echo", "Hello World"]
To test it, I am running the command below:
podman build -t imagename .
I see the error below:
STEP 1: FROM alpine
Error: error creating build container: The following failures happened while trying to pull image specified by "alpine" based on search registries in /etc/containers/registries.conf:
* "localhost/alpine": Error initializing source docker://localhost/alpine:latest: error pinging docker registry localhost: Get https://localhost/v2/: dial tcp [::1]:443: connect: connection refused
* "registry.access.redhat.com/alpine": Error initializing source docker://registry.access.redhat.com/alpine:latest: error pinging docker registry registry.access.redhat.com: Get https://registry.access.redhat.com/v2/: read tcp 10.70.85.174:17758->23.54.147.129:443: read: connection reset by peer
* "registry.redhat.io/alpine": Error initializing source docker://registry.redhat.io/alpine:latest: error pinging docker registry registry.redhat.io: Get https://registry.redhat.io/v2/: read tcp 10.70.85.174:36028->104.79.150.216:443: read: connection reset by peer
* "docker.io/library/alpine": Error initializing source docker://alpine:latest: error pinging docker registry registry-1.docker.io: Get https://registry-1.docker.io/v2/: read tcp 10.70.85.174:53352->18.213.137.78:443: read: connection reset by peer
Am I missing any configuration?
Thanks
Do you still have the Docker daemon running and/or Docker installed?
First, stop the Docker daemon:
sudo systemctl stop docker
OR
sudo service docker stop
Then uninstall Docker.
I'm on Ubuntu here, but whatever you need you can Google :D
sudo apt-get remove docker docker-engine docker.io containerd runc
Try again.
If that still fails, try a fresh reinstall of Podman:
sudo apt-get install --reinstall podman
Sources
https://www.cyberciti.biz/faq/debian-ubuntu-linux-reinstall-a-package-using-apt-get-command/
https://askubuntu.com/questions/935569/how-to-completely-uninstall-docker
https://intellipaat.com/community/43965/how-to-stop-docker
https://podman.io/getting-started/installation
I suggest that you first search for your image in the registries:
podman search alpine
You should get a list of available images. Choose the one you want (name, version, tag, etc.) and put that in the Dockerfile.
To be sure it is accessible, do the pull manually:
podman pull alpine:<tag>
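If the pull works, putting the fully qualified name in the Dockerfile also saves Podman the unqualified-search lookup in /etc/containers/registries.conf; for example, the hello-world Dockerfile above could become:
# fully qualified reference: no search through registries.conf needed
FROM docker.io/library/alpine:latest
CMD ["echo", "Hello World"]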
I have a Shiny app running on an EC2 instance that I'd like to deploy in Docker. Running the Shiny app on localhost (i.e. on the EC2 instance) works fine, although it takes ~5 minutes to load.
However, I've been told to run the following command to get the website running:
sudo docker run --rm -p $PORT:1234 --mount type=bind,source=/mnt/compbio/jupyter_s3contents/web/dashboard,target=/srv/shiny-server/ --mount type=bind,source=/mnt/compbio/jupyter_s3contents/,target=/home/rstudio/compbio/ --mount type=bind,source=/var/log/shiny-server/,target=/var/log/shiny-server/ shiny
But all I get is the following output, and it doesn't get past this point.
[2020-11-10T15:25:36.498] [INFO] shiny-server - Shiny Server v1.5.13.944 (Node.js v12.15.0)
[2020-11-10T15:25:36.503] [INFO] shiny-server - Using config file "/etc/shiny-server/shiny-server.conf"
[2020-11-10T15:25:36.533] [WARN] shiny-server - Running as root unnecessarily is a security risk! You could be running more securely as non-root.
[2020-11-10T15:25:36.536] [INFO] shiny-server - Starting listener on http://[::]:1234
I am new to Docker; any idea what I am doing wrong here?
What am I doing?
I am trying to deploy an R model on Google App Engine Flex with a Docker container. My final objective is to serve the model as an API. I am getting errors when deploying the app using plumber and a Docker container.
The R code with plumber runs fine on my local computer using RStudio, but now I am using AI Platform Jupyter notebooks with R. I tested the Docker image locally using docker run image-name, and I get the message below once it runs:
Starting server to listen on port 8080
When I run the R + plumber code in my local RStudio, I get the messages below:
Starting server to listen on port 8080
Running the swagger UI at http://127.0.0.1:8080/__swagger__/
After this I run gcloud app deploy (this again builds the Docker image, etc.); the build runs for more than 15 minutes and fails with the error message shown at the end.
Details of the code, etc.:
app.yaml
service: iris-custom
runtime: custom
env: flex
manual_scaling:
  instances: 1
resources:
  cpu: 1
  memory_gb: 0.5
  disk_size_gb: 20
# added below to increase app_start_timeout_sec
readiness_check:
  path: "/readiness_check"
  check_interval_sec: 5
  timeout_sec: 4
  failure_threshold: 2
  success_threshold: 2
  app_start_timeout_sec: 900
Dockerfile
FROM gcr.io/gcer-public/plumber-appengine
# install the linux libraries needed for plumber
RUN export DEBIAN_FRONTEND=noninteractive; apt-get -y update \
&& apt-get install -y
# install plumber commented as plumber is preinstalled
#RUN R -e "install.packages(c('plumber'), repos='http://cran.rstudio.com/')"
# copy everything from the current directory into the container
WORKDIR /payload/
COPY [".", "./"]
# open port 8080 to traffic
EXPOSE 8080
# when the container starts, start the main.R script
ENTRYPOINT ["Rscript", "main.R"]
main.R
library(plumber)
r <- plumb("rest_controller.R")
r$run(port=8080, host="0.0.0.0")
rest_controller.R
#* @get /predict_petal_length
get_predict_length <- function(){
  dataset <- iris
  # create the model
  model <- lm(Petal.Length ~ Petal.Width, data = dataset)
  petal_width = "0.4"
  # convert the input to a number
  petal_width <- as.numeric(petal_width)
  # create the prediction data frame
  prediction_data <- data.frame(Petal.Width = petal_width)
  # create the prediction
  predict(model, prediction_data)
}
Error message:
ERROR: (gcloud.app.deploy) Error Response: [4] Your deployment has
failed to become healthy in the allotted time and therefore was rolled
back. If you believe this was an error, try adjusting the
'app_start_timeout_sec' setting in the 'readiness_check' section.
I tried slightly modified code; the deployment succeeds, but the App Engine app still does not work.
issue with code link
From the Google Cloud documentation, it seems that in order for your application to pass the readiness check, it needs to return HTTP status code 200 (see https://cloud.google.com/appengine/docs/flexible/custom-runtimes/configuring-your-app-with-app-yaml#updated_health_checks).
But your application returns HTTP status code 404 on the path you have defined for the readiness check, since that path doesn't exist:
readiness_check:
  path: "/readiness_check"
So I would suggest either adding this path as an endpoint to your rest_controller.R file, like
#* @get /readiness_check
readiness_check <- function(){
  return("app ready")
}
or modifying your app.yaml so that it checks the get_predict_length endpoint instead:
readiness_check:
  path: "/get_predict_length"
So, I am at this point right now: the webpage can be accessed without any errors and without using any specific port, for example www.my-example.com.
But this only works while I am running the command "uwsgi --socket 0.0.0.0:4567 --protocol=http -w wsgi" on my server.
How can I automate this app deployment through nginx?
You can use something like Supervisor to automatically start uWSGI, restart it if it fails, and log stderr/stdout:
[program:app]
# emulates a virtualenv
directory = /srv/app/
environment = PATH="/srv/app/virtualenv/bin"
command = /srv/app/virtualenv/bin/uwsgi --ini /srv/app/config/uwsgi.ini
autostart = true
autorestart = true
user = app-user
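Once that config is saved (on Debian/Ubuntu typically under /etc/supervisor/conf.d/, the exact path depends on your setup), reload Supervisor and check that the process is up:
sudo supervisorctl reread     # read the new program definition
sudo supervisorctl update     # start programs that were added or changed
sudo supervisorctl status app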