I am trying to run a WordPress app inside a Docker container on an Ubuntu VPS using nginx-proxy.
First I run the nginx-proxy server using the following command:
docker run -d \
-p 80:80 \
-p 443:443 \
--name proxy_server \
--net nginx-proxy-network \
-v /etc/certificates:/etc/nginx/certs \
-v /var/run/docker.sock:/tmp/docker.sock:ro \
jwilder/nginx-proxy
Then I run the MySQL database server using the following command:
docker run -d \
--name mysql_db \
--net nginx-proxy-network \
-e MYSQL_DATABASE=db1 \
-e MYSQL_USER=db1 \
-e MYSQL_PASSWORD=db1 \
-e MYSQL_ROOT_PASSWORD=db12 \
-v mysql_server_data:/var/lib/mysql \
mysql:latest
I am able to verify that the MySQL server is running by connecting to it with the following commands:
root:~# docker exec -it mysql_db /bin/bash
root@dd7643384f76:/# mysql -h localhost -u root -p
mysql> show databases;
Now that the nginx-proxy and mysql_db containers are running, I want to proxy the WordPress container on usa.mydomain.com. To do that, I run the following command:
docker run -d \
--name wordpress \
--expose 80 \
--net nginx-proxy-network \
-e DEFAULT_HOST=usa.mydomain.com \
-e WORDPRESS_DB_HOST=mysql_db:3306 \
-e WORDPRESS_DB_NAME=db1 \
-e WORDPRESS_DB_USER=db1 \
-e WORDPRESS_DB_PASSWORD=db1 \
-v wordpress:/var/www/html \
wordpress:latest
I can see all 3 containers running by executing docker ps -a.
However, when I browse http://usa.mydomain.com I get HTTP error 503:
503 Service Temporarily Unavailable nginx/1.17.5
I validated that usa.mydomain.com is pointing to the server's IP address by running the following from the command line on my machine:
ipconfig /flushdns
ping usa.mydomain.com
Even when I browse to my server's IP address directly, I get the same 503 error.
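For completeness, here is how the proxy's generated vhost config and the app container's logs can be inspected (a diagnostic sketch using the container names above):
# nginx-proxy writes every vhost it derives from container env vars here
docker exec proxy_server cat /etc/nginx/conf.d/default.conf
# check whether the WordPress container started cleanly
docker logs wordpress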
What could be causing this issue?
With Docker, I was able to run the WordPress example for docker-compose on nearly every platform, without prior Docker knowledge.
I am looking for a way to achieve the same with Podman: in my case, a fast, cross-platform way to set up a working WordPress installation for development.
As Podman is far younger, a valid answer in 2022 would also be: it is not possible because... / it is only possible given constraint X.
Still, I would like to create an entry point for other people who run into the same issue in the future.
I posted my own efforts below. Before I spend more hours debugging lots of small (but still solvable) issues, I wanted to find out whether someone else has faced the same problem and already has a solution. If you have, please clearly document its constraints.
My particular issue, as a reference
I am on Ubuntu 20.04 and podman -v gives 3.4.2.
docker/podman compose
When I use docker-compose up with the Podman back-end on Docker's WordPress .yml file, I run into the "duplicate mount destination" issue.
podman-compose is part of Podman 4.1.0, which is not available on Ubuntu as I write this.
Red Hat example
The example of Red Hat gives "Error establishing a database connection ... contact with the database server at mysql could not be established".
A solution for the above does not work for me; "share" there is likely a typo, and I tried replacing it with "unshare".
CentOS example
I found an example which uses pods instead of a docker-compose.yml file, but it is written for CentOS.
I modified the CentOS example; see the script below. I get the containers up and running; however, WordPress is unable to connect to the database.
#!/bin/bash
# Set environment variables:
DB_NAME='wordpress_db'
DB_PASS='mysupersecurepass'
DB_USER='justbeauniqueuser'
POD_NAME='wordpress_with_mariadb'
CONTAINER_NAME_DB='wordpress_db'
CONTAINER_NAME_WP='wordpress'
mkdir -p html
mkdir -p database
# Remove previous attempts
sudo podman pod rm -f $POD_NAME
# Pull before run, bc: invalid reference format error
sudo podman pull mariadb:latest
sudo podman pull wordpress
# Create a pod instead of --link, so both containers are able to reach each other.
sudo podman pod create -n $POD_NAME -p 80:80
sudo podman run --detach --pod $POD_NAME \
-e MYSQL_ROOT_PASSWORD=$DB_PASS \
-e MYSQL_PASSWORD=$DB_PASS \
-e MYSQL_DATABASE=$DB_NAME \
-e MYSQL_USER=$DB_USER \
--name $CONTAINER_NAME_DB -v "$PWD/database":/var/lib/mysql \
docker.io/mariadb:latest
sudo podman run --detach --pod $POD_NAME \
-e WORDPRESS_DB_HOST=127.0.0.1:3306 \
-e WORDPRESS_DB_NAME=$DB_NAME \
-e WORDPRESS_DB_USER=$DB_USER \
-e WORDPRESS_DB_PASSWORD=$DB_PASS \
--name $CONTAINER_NAME_WP -v "$PWD/html":/var/www/html \
docker.io/wordpress
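For anyone debugging the same script, the containers' logs are the first place to look (a sketch using the names set above):
# database side: watch for authentication or initialisation errors
sudo podman logs wordpress_db
# wordpress side: watch for "error establishing a database connection"
sudo podman logs wordpress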
Also, I was a bit unsure where to post this question. If Server Fault or another Stack Exchange site is a better fit, I will happily post there.
Actually, your code works with just small changes.
I removed the sudos and changed the pod's external port to 8090 instead of 80, so now everything runs as a non-root user.
#!/bin/bash
# https://stackoverflow.com/questions/74054932/how-to-install-and-setup-wordpress-using-podman
# Set environment variables:
DB_NAME='wordpress_db'
DB_PASS='mysupersecurepass'
DB_USER='justbeauniqueuser'
POD_NAME='wordpress_with_mariadb'
CONTAINER_NAME_DB='wordpress_db'
CONTAINER_NAME_WP='wordpress'
mkdir -p html
mkdir -p database
# Remove previous attempts
podman pod rm -f $POD_NAME
# Pull before run, bc: invalid reference format error
podman pull docker.io/mariadb:latest
podman pull docker.io/wordpress
# Create a pod instead of --link.
# So both containers are able to reach each other.
podman pod create -n $POD_NAME -p 8090:80
podman run --detach --pod $POD_NAME \
-e MYSQL_ROOT_PASSWORD=$DB_PASS \
-e MYSQL_PASSWORD=$DB_PASS \
-e MYSQL_DATABASE=$DB_NAME \
-e MYSQL_USER=$DB_USER \
--name $CONTAINER_NAME_DB -v "$PWD/database":/var/lib/mysql \
docker.io/mariadb:latest
podman run --detach --pod $POD_NAME \
-e WORDPRESS_DB_HOST=127.0.0.1:3306 \
-e WORDPRESS_DB_NAME=$DB_NAME \
-e WORDPRESS_DB_USER=$DB_USER \
-e WORDPRESS_DB_PASSWORD=$DB_PASS \
--name $CONTAINER_NAME_WP -v "$PWD/html":/var/www/html \
docker.io/wordpress
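Once the script has run, the result can be verified like this (a usage sketch, assuming the defaults above):
# both containers should show up inside the pod
podman ps --pod
# the WordPress installer should answer on the published port
curl -I http://localhost:8090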
This is what worked for me:
#!/bin/bash
# https://stackoverflow.com/questions/74054932/how-to-install-and-setup-wordpress-using-podman
# Set environment variables:
POD_NAME='wordpress_mariadb'
DB_ROOT_PW='sup3rS3cr3t'
DB_NAME='wp'
DB_PASS='s0m3wh4tS3cr3t'
DB_USER='wordpress'
podman pod create --name $POD_NAME -p 8080:80
podman run \
-d --restart=always --pod=$POD_NAME \
-e MYSQL_ROOT_PASSWORD="$DB_ROOT_PW" \
-e MYSQL_DATABASE="$DB_NAME" \
-e MYSQL_USER="$DB_USER" \
-e MYSQL_PASSWORD="$DB_PASS" \
-v $HOME/public_html/wordpress/mysql:/var/lib/mysql:Z \
--name=wordpress-db docker.io/mariadb:latest
podman run \
-d --restart=always --pod=$POD_NAME \
-e WORDPRESS_DB_NAME="$DB_NAME" \
-e WORDPRESS_DB_USER="$DB_USER" \
-e WORDPRESS_DB_PASSWORD="$DB_PASS" \
-e WORDPRESS_DB_HOST="127.0.0.1" \
-v $HOME/public_html/wordpress/html:/var/www/html:Z \
--name wordpress docker.io/library/wordpress:latest
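Note that the :Z suffix on the volume mounts relabels the host directories for SELinux, which matters on Fedora/RHEL hosts. To start over, the whole pod can be removed in one go (a sketch using the pod name above):
# stops and removes the pod together with both containers
podman pod rm -f wordpress_mariadb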
I want to connect an individual app within ShinyProxy to a Docker network.
I have a few apps on ShinyProxy; only one needs to connect to the database.
It is a PostgreSQL DB running on the same machine, in a Docker container set up to receive connections through the network my-docker-network.
In application.yml, should I use
container-network: my-docker-network
or
container-network-connections: ["my-docker-network"]
?
Even though I don't need internal networks in ShinyProxy, do I still need to set internal-networking: true under docker:?
At the moment the container isn't starting. Since it runs fine by itself using docker run --net my-docker-network --env-file /mypath/.Renviron my_app_image, it seems to be a connection issue. The container also works if I run it with --network="host".
I've tried putting the .Renviron in various places and don't think that is the issue.
Full Dockerfile (other apps deleted and pseudonymised):
FROM rocker/r-ver:3.6.3
RUN apt-get update --allow-releaseinfo-change && apt-get install -y \
lbzip2 \
libfftw3-dev \
libgdal-dev \
libgeos-dev \
libgsl0-dev \
libgl1-mesa-dev \
libglu1-mesa-dev \
libhdf4-alt-dev \
libhdf5-dev \
libjq-dev \
liblwgeom-dev \
libpq-dev \
libproj-dev \
libprotobuf-dev \
libnetcdf-dev \
libsqlite3-dev \
libssl-dev \
libudunits2-dev \
netcdf-bin \
postgis \
protobuf-compiler \
sqlite3 \
tk-dev \
unixodbc-dev \
libssh2-1-dev \
r-cran-v8 \
libv8-dev \
net-tools \
libsqlite3-dev \
libxml2-dev
#for whatever reason it wasn't working
#RUN export ADD=shiny && bash /etc/cont-init.d/add
#install packages
RUN R -e "install.packages(c('somepackages'))"
#copy app script and variables into docker
RUN mkdir /home/app
COPY .Renviron /home/app/
COPY global.R /home/app/
COPY ui.R /home/app/
COPY server.R /home/app/
COPY Rprofile.site /usr/lib/R/etc/
#add run script
CMD ["R", "-e", "shiny::runApp('/home/app')"]
Useful parts of the application.yml:
At the moment I always get "500 / container doesn't respond" on the ShinyProxy side, even though the container runs fine standalone.
proxy:
  title: apps - page
  # logo-url: https://link/to/your/logo.png
  landing-page: /
  favicon-path: favicon.ico
  heartbeat-rate: 10000
  heartbeat-timeout: 60000
  container-wait-time: 40000
  port: 8080
  authentication: simple
  admin-groups: admins
  container-log-path: /etc/shinyproxy/logs
  # Example: 'simple' authentication configuration
  users:
    - name: admin
      password: password
      groups: admins
    - name: user
      password: password
      groups: users
  # Docker configuration
  docker:
    cert-path: /home/none
    url: http://localhost:2375
    port-range-start: 20000
    # internal-networking: true
  specs:
    - id: 06_rshiny_dashboard_r_ver
      display-name: app r_ver container r_app_r_ver
      description: using simple rver set up docker and the r_app_r_ver image
      container-cmd: ["R", "-e", "shiny::runApp('/home/app')"]
      #container-cmd: ["R", "-e", "shiny::runApp('/home/app', shiny.port = 3838, shiny.host = '0.0.0.0')"]
      container-image: asela_r_app_r_ver:latest
      #container-network: my-docker-network
      container-network-connections: [ "my-docker-network" ]
      container-env-file: /home/app/.Renviron
      access-groups: [admins]
logging:
  file:
    name: /etc/shinyproxy/shinyproxy.log
The various commented-out lines show the current setup; I have tried with and without them.
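One way to rule the image in or out is to run it exactly the way ShinyProxy would, i.e. on the target network and listening on 0.0.0.0:3838 (a sketch using the image and network names above):
docker run --rm --net my-docker-network --env-file /mypath/.Renviron \
  asela_r_app_r_ver:latest \
  R -e "shiny::runApp('/home/app', port = 3838, host = '0.0.0.0')"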
Fixed it by using a Shiny Server version of the Docker image. Not sure why, but this sorted out the connection issue.
Dockerfile:
FROM rocker/r-ver:3.6.3
RUN apt-get update --allow-releaseinfo-change && apt-get install -y \
lbzip2 \
libfftw3-dev \
libgdal-dev \
libgeos-dev \
libgsl0-dev \
libgl1-mesa-dev \
libglu1-mesa-dev \
libhdf4-alt-dev \
libhdf5-dev \
libjq-dev \
liblwgeom-dev \
libpq-dev \
libproj-dev \
libprotobuf-dev \
libnetcdf-dev \
libsqlite3-dev \
libssl-dev \
libudunits2-dev \
netcdf-bin \
postgis \
protobuf-compiler \
sqlite3 \
tk-dev \
unixodbc-dev \
libssh2-1-dev \
r-cran-v8 \
libv8-dev \
net-tools \
libsqlite3-dev \
libxml2-dev \
wget \
gdebi
##No version control
#then install shiny
RUN wget --no-verbose https://download3.rstudio.org/ubuntu-14.04/x86_64/VERSION -O "version.txt" && \
VERSION=$(cat version.txt) && \
wget --no-verbose "https://download3.rstudio.org/ubuntu-14.04/x86_64/shiny-server-$VERSION-amd64.deb" -O ss-latest.deb && \
gdebi -n ss-latest.deb && \
rm -f version.txt ss-latest.deb
#install packages
RUN R -e "install.packages(c('xtable', 'stringr', 'glue', 'data.table', 'pool', 'RPostgres', 'palettetown', 'deckgl', 'sf', 'shinyWidgets', 'shiny', 'stats', 'graphics', 'grDevices', 'datasets', 'utils', 'methods', 'base'))"
##No version control over
##with version control and renv.lock file
##With version control over
#copy shiny server config over
COPY shiny-server.conf /etc/shiny-server/shiny-server.conf
#avoid some errors
#already in there
#RUN echo 'sanitize_errors off;disable_protocols xdr-streaming xhr-streaming iframe-eventsource iframe-htmlfile;' >> /etc/shiny-server/shiny-server.conf
# copy the app to the image
COPY .Renviron /srv/shiny-server/
COPY global.R /srv/shiny-server/
COPY server.R /srv/shiny-server/
COPY ui.R /srv/shiny-server/
# select port
EXPOSE 3838
# Copy further configuration files into the Docker image
COPY shiny-server.sh /usr/bin/shiny-server.sh
RUN ["chmod", "+x", "/usr/bin/shiny-server.sh"]
# run app
CMD ["/usr/bin/shiny-server.sh"]
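shiny-server.sh is not shown above; a minimal sketch of such a wrapper (an assumption following the common rocker convention, not necessarily the exact file used here):
#!/bin/sh
# keep Shiny Server as the container's foreground process so the
# container is not considered exited
mkdir -p /var/log/shiny-server
exec shiny-server >> /var/log/shiny-server/shiny-server.log 2>&1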
application.yml:
proxy:
  title: apps - page
  # logo-url: https://link/to/your/logo.png
  landing-page: /
  favicon-path: favicon.ico
  heartbeat-rate: 10000
  heartbeat-timeout: 60000
  container-wait-time: 40000
  port: 8080
  authentication: simple
  admin-groups: admins
  container-log-path: /etc/shinyproxy/logs
  # Example: 'simple' authentication configuration
  users:
    - name: admin
      password: password
      groups: admins
    - name: user
      password: password
      groups: users
  # Docker configuration
  docker:
    cert-path: /home/none
    url: http://localhost:2375
    port-range-start: 20000
    # internal-networking: true
  specs:
    - id: 10_asela_rshiny_shinyserv
      display-name: ASELA Dash internal shiny server version
      description: container has own shinyserver within it functions on docker network only not on host container-network version
      container-cmd: ["/usr/bin/shiny-server.sh"]
      access-groups: [admins]
      container-image: asela_r_app_shinyserv_ver:latest
      container-network: asela-docker-net
logging:
  file:
    name: /etc/shinyproxy/shinyproxy.log
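One assumption worth making explicit: the external network must already exist before ShinyProxy attaches app containers to it, e.g.:
docker network create asela-docker-net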
I'm attempting to set up Mautic (https://github.com/mautic/docker-mautic) on Dokku. I have everything working well except for the mounted volume. Mautic stores config files in the volume, so every time the container restarts it needs to be reconfigured if the volume is not set up. The instructions on the above page are:
$ docker volume create mautic_data
$ docker run --name mautic -d \
--restart=always \
-e MAUTIC_DB_HOST=127.0.0.1 \
-e MAUTIC_DB_USER=root \
-e MAUTIC_DB_PASSWORD=mypassword \
-e MAUTIC_DB_NAME=mautic \
-e MAUTIC_RUN_CRON_JOBS=true \
-e MAUTIC_TRUSTED_PROXIES=0.0.0.0/0 \
-p 8080:80 \
-v mautic_data:/var/www/html \
mautic/mautic:latest
I have created a persistent volume in Dokku with:
dokku storage:mount mautic /var/lib/dokku/data/storage/mautic:/mautic_data
This is confirmed:
root@apps:/var/lib# dokku storage:report mautic
=====> mautic storage information
Storage build mounts:
Storage deploy mounts: -v /var/lib/dokku/data/storage/mautic:/mautic_data
Storage run mounts: -v /var/lib/dokku/data/storage/mautic:/mautic_data
However, the config file is not saved. Can anyone point out where I am going wrong?
It looks like the directory the config files are stored in is /var/www/html, not /mautic_data. In the docker command referenced, mautic_data in -v mautic_data:/var/www/html is the name of the host volume created by docker volume create mautic_data, not a directory inside the container.
Try using:
dokku storage:mount mautic /var/lib/dokku/data/storage/mautic:/var/www/html
This will bind /var/lib/dokku/data/storage/mautic on the host to /var/www/html inside the container.
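If the old mount is still attached, it should be removed first; a sketch using the paths from the question:
dokku storage:unmount mautic /var/lib/dokku/data/storage/mautic:/mautic_data
dokku storage:mount mautic /var/lib/dokku/data/storage/mautic:/var/www/html
# restart so the new mount takes effect
dokku ps:restart mautic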
My server is running GitLab in Podman.
I want GitLab to be reachable on a subdomain.
Test command:
podman start gitlab-ce --VIRTUAL-HOST=test.example.com -p 80
How do I set up virtual hosting in Podman?
According to the GitLab documentation, the container can be started with:
sudo podman run --detach \
--hostname gitlab.example.com \
--env GITLAB_OMNIBUS_CONFIG="external_url 'http://test.example.com/';" \
--publish 443:443 --publish 80:80 \
--name gitlab \
--restart always \
gitlab/gitlab-ce:latest
sudo is needed to bind ports 80 and 443.
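If running rootless is a requirement, one workaround (an assumption, not from the GitLab docs) is to lower the unprivileged-port threshold so rootless Podman may bind the low ports itself:
# allow unprivileged processes to bind ports >= 80
sudo sysctl net.ipv4.ip_unprivileged_port_start=80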
I've been trying to get Spark working on Kubernetes on my local machine.
However, I'm having an issue trying to understand how the networking of services works.
I'm running Kubernetes in containers on my laptop:
Etcd 2.0.5.1
Kubelet 1.1.2
Proxy 1.1.2
SkyDNS 2015-03-11-001
Kube2sky 1.11
Then I'm launching Spark, which is located in the examples of the Kubernetes GitHub repo.
kubectl create -f kubernetes/examples/spark/spark-master-controller.yaml
kubectl create -f kubernetes/examples/spark/spark-master-service.yaml
kubectl create -f kubernetes/examples/spark/spark-webui.yaml
kubectl create -f kubernetes/examples/spark/spark-worker-controller.yaml
kubectl create -f kubernetes/examples/spark/zeppelin-controller.yaml
kubectl create -f kubernetes/examples/spark/zeppelin-service.yaml
My local network: 10.7.64.0/24
My docker network: 172.17.0.1/16
What works:
The Spark master launches and I can connect to the web UI.
The Spark worker's DNS query for spark-master succeeds (it returns the correct service IP of the master).
What does not work:
The Spark worker cannot connect to the service IP. There is no route to this host, neither in that container nor on the local machine (laptop). I also see nothing happening in iptables. It tries to connect to somewhere in the 10.0.0.0/8 network, to which I don't have any route. Can someone shed some light on this?
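As a quick check (assuming standard kube-proxy behaviour, which programs the service IPs into iptables), the relevant chains can be listed with:
# if kube-proxy is healthy, KUBE-* chains exist and are populated
sudo iptables-save | grep KUBE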
Details:
How I start the containers:
sudo docker run \
--net=host \
-d kubernetes/etcd:2.0.5.1 \
/usr/local/bin/etcd \
--addr=$(hostname -i):4001 \
--bind-addr=0.0.0.0:4001 \
--data-dir=/var/etcd/data
sudo docker run \
--volume=/:/rootfs:ro \
--volume=/sys:/sys:ro \
--volume=/dev:/dev \
--volume=/var/lib/docker/:/var/lib/docker:ro \
--volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
--volume=/var/run:/var/run:rw \
--net=host \
--pid=host \
--privileged=true \
-d \
gcr.io/google_containers/hyperkube:v1.2.0 \
/hyperkube kubelet --containerized --hostname-override="127.0.0.1" --address="0.0.0.0" --api-servers=http://localhost:8080 --config=/etc/kubernetes/manifests --cluster-dns=10.7.64.184 --cluster-domain=kubernetes.local
sudo docker run -d --net=host --privileged gcr.io/google-containers/hyperkube:v1.2.0 /hyperkube proxy --master=http://127.0.0.1:8080 --v=2 --cluster-dns=10.7.64.184 --cluster-domain=kubernetes.local --cloud-provider=""
sudo docker run -d --net=host --restart=always \
gcr.io/google_containers/kube2sky:1.11 \
-v=10 -logtostderr=true -domain=kubernetes.local \
-etcd-server="http://127.0.0.1:4001"
sudo docker run -d --net=host --restart=always \
-e ETCD_MACHINES="http://127.0.0.1:4001" \
-e SKYDNS_DOMAIN="kubernetes.local" \
-e SKYDNS_ADDR="10.7.64.184:53" \
-e SKYDNS_NAMESERVERS="8.8.8.8:53,8.8.4.4:53" \
gcr.io/google_containers/skydns:2015-03-11-001
Thanks!
I found what the issue was: the proxy was not running because --cluster-dns and --cluster-domain are not parameters of the proxy. Now the iptables rules are created and the Spark workers are able to connect to the service IP of the spark-master.
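For reference, the corrected proxy invocation is simply the command from the question minus those two flags:
sudo docker run -d --net=host --privileged gcr.io/google-containers/hyperkube:v1.2.0 \
  /hyperkube proxy --master=http://127.0.0.1:8080 --v=2 --cloud-provider=""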