How to set up a virtual host with Podman - networking

My server runs GitLab in Podman.
I want GitLab to be reachable on a subdomain.
Test command:
podman start gitlab-ce --VIRTUAL-HOST=test.example.com -p 80
How do I configure a virtual host in Podman?

According to the GitLab documentation, the container can be started with:
sudo podman run --detach \
--hostname gitlab.example.com \
--env GITLAB_OMNIBUS_CONFIG="external_url 'http://test.example.com/';" \
--publish 443:443 --publish 80:80 \
--name gitlab \
--restart always \
gitlab/gitlab-ce:latest
sudo is needed to bind ports 80 and 443.
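If you prefer to avoid sudo, a minimal rootless sketch (an assumption on my part: a Linux host where you can change sysctls, and a separate reverse proxy or DNS entry handling the test.example.com virtual host) would either lower the unprivileged-port threshold or publish high ports instead:
# Option 1: allow rootless Podman to bind ports >= 80 on this host
sudo sysctl net.ipv4.ip_unprivileged_port_start=80
# Option 2: publish high ports and let a reverse proxy terminate 80/443 for test.example.com
podman run --detach \
--hostname gitlab.example.com \
--env GITLAB_OMNIBUS_CONFIG="external_url 'http://test.example.com/';" \
--publish 8443:443 --publish 8080:80 \
--name gitlab \
--restart always \
docker.io/gitlab/gitlab-ce:latest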

Related

How to install and setup WordPress using Podman

With Docker I was able to run the WordPress example for docker-compose on nearly every platform, without prior Docker knowledge.
I am looking for a way to achieve the same with Podman.
In my case, the goal is a fast, cross-platform way to set up a working WordPress installation for development.
As Podman is far younger, a valid answer in 2022 would also be: it is not possible, because... / it is only possible under constraint X.
Still, I would like to create an entry point for other people who run into the same issue in the future.
I posted my own efforts below. Before I spend more hours debugging lots of small (but still solvable) issues, I wanted to find out whether someone else has faced the same problem and already has a solution. If you have, please clearly document its constraints.
My particular issue, as a reference
I am on Ubuntu 20.04 and podman -v gives 3.4.2.
docker/podman compose
When I use docker-compose up with the Podman back-end on Docker's WordPress .yml file, I run into the "duplicate mount destination" issue.
podman-compose is part of Podman 4.1.0, which is not available on Ubuntu as I write this.
Red Hat example
The example from Red Hat gives "Error establishing a database connection ... contact with the database server at mysql could not be established".
A suggested solution for the above does not work for me; share is likely a typo, so I tried replacing it with unshare.
CentOS example
I found an example which uses pods instead of a docker-compose.yml file, but it is written for CentOS.
I modified the CentOS example, see the script below. I get the containers up and running; however, WordPress is unable to connect to the database.
#!/bin/bash
# Set environment variables:
DB_NAME='wordpress_db'
DB_PASS='mysupersecurepass'
DB_USER='justbeauniqueuser'
POD_NAME='wordpress_with_mariadb'
CONTAINER_NAME_DB='wordpress_db'
CONTAINER_NAME_WP='wordpress'
mkdir -p html
mkdir -p database
# Remove previous attempts
sudo podman pod rm -f $POD_NAME
# Pull before run, because of: invalid reference format error
sudo podman pull mariadb:latest
sudo podman pull wordpress
# Create a pod instead of using --link, so both containers can reach each other.
sudo podman pod create -n $POD_NAME -p 80:80
sudo podman run --detach --pod $POD_NAME \
-e MYSQL_ROOT_PASSWORD=$DB_PASS \
-e MYSQL_PASSWORD=$DB_PASS \
-e MYSQL_DATABASE=$DB_NAME \
-e MYSQL_USER=$DB_USER \
--name $CONTAINER_NAME_DB -v "$PWD/database":/var/lib/mysql \
docker.io/mariadb:latest
sudo podman run --detach --pod $POD_NAME \
-e WORDPRESS_DB_HOST=127.0.0.1:3306 \
-e WORDPRESS_DB_NAME=$DB_NAME \
-e WORDPRESS_DB_USER=$DB_USER \
-e WORDPRESS_DB_PASSWORD=$DB_PASS \
--name $CONTAINER_NAME_WP -v "$PWD/html":/var/www/html \
docker.io/wordpress
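Before changing anything, a quick way to narrow down the database problem is to check what each container reports (a hedged debugging sketch, reusing the container names and credentials from the script above):
# Did MariaDB finish initialising, and what error does WordPress log?
sudo podman logs wordpress_db
sudo podman logs wordpress
# Try the same credentials the script passed to WordPress
sudo podman exec -it wordpress_db mysql -u justbeauniqueuser -p wordpress_db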
Also, I was a bit unsure where to post this question. If Server Fault or another Stack Exchange site is a better fit, I will happily post there.
Actually, your code works with just small changes.
I removed the sudo's and changed the pod's external port to 8090 instead of 80, so now everything runs as a non-root user.
#!/bin/bash
# https://stackoverflow.com/questions/74054932/how-to-install-and-setup-wordpress-using-podman
# Set environment variables:
DB_NAME='wordpress_db'
DB_PASS='mysupersecurepass'
DB_USER='justbeauniqueuser'
POD_NAME='wordpress_with_mariadb'
CONTAINER_NAME_DB='wordpress_db'
CONTAINER_NAME_WP='wordpress'
mkdir -p html
mkdir -p database
# Remove previous attempts
podman pod rm -f $POD_NAME
# Pull before run, bc: invalid reference format error
podman pull docker.io/mariadb:latest
podman pull docker.io/wordpress
# Create a pod instead of using --link,
# so both containers can reach each other.
podman pod create -n $POD_NAME -p 8090:80
podman run --detach --pod $POD_NAME \
-e MYSQL_ROOT_PASSWORD=$DB_PASS \
-e MYSQL_PASSWORD=$DB_PASS \
-e MYSQL_DATABASE=$DB_NAME \
-e MYSQL_USER=$DB_USER \
--name $CONTAINER_NAME_DB -v "$PWD/database":/var/lib/mysql \
docker.io/mariadb:latest
podman run --detach --pod $POD_NAME \
-e WORDPRESS_DB_HOST=127.0.0.1:3306 \
-e WORDPRESS_DB_NAME=$DB_NAME \
-e WORDPRESS_DB_USER=$DB_USER \
-e WORDPRESS_DB_PASSWORD=$DB_PASS \
--name $CONTAINER_NAME_WP -v "$PWD/html":/var/www/html \
docker.io/wordpress
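With the pod's port published as 8090:80, a quick smoke test (assuming the script above completed without errors) would be:
curl -I http://localhost:8090   # should answer with the WordPress installer (a 200 or a redirect to wp-admin/install.php)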
This is what worked for me:
#!/bin/bash
# https://stackoverflow.com/questions/74054932/how-to-install-and-setup-wordpress-using-podman
# Set environment variables:
POD_NAME='wordpress_mariadb'
DB_ROOT_PW='sup3rS3cr3t'
DB_NAME='wp'
DB_PASS='s0m3wh4tS3cr3t'
DB_USER='wordpress'
podman pod create --name $POD_NAME -p 8080:80
podman run \
-d --restart=always --pod=$POD_NAME \
-e MYSQL_ROOT_PASSWORD="$DB_ROOT_PW" \
-e MYSQL_DATABASE="$DB_NAME" \
-e MYSQL_USER="$DB_USER" \
-e MYSQL_PASSWORD="$DB_PASS" \
-v $HOME/public_html/wordpress/mysql:/var/lib/mysql:Z \
--name=wordpress-db docker.io/mariadb:latest
podman run \
-d --restart=always --pod=$POD_NAME \
-e WORDPRESS_DB_NAME="$DB_NAME" \
-e WORDPRESS_DB_USER="$DB_USER" \
-e WORDPRESS_DB_PASSWORD="$DB_PASS" \
-e WORDPRESS_DB_HOST="127.0.0.1" \
-v $HOME/public_html/wordpress/html:/var/www/html:Z \
--name wordpress docker.io/library/wordpress:latest
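One design note on this version: the :Z suffix on the volume mounts asks Podman to relabel the mounted directories with a private SELinux label so the containers can write to them, which matters on Fedora/RHEL-style hosts. A minimal check (assuming SELinux is enabled) is:
ls -Z $HOME/public_html/wordpress/html | head   # files should carry a container file context (e.g. container_file_t) after the first run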

How can I run the WordPress Docker image using nginx-proxy?

I am trying to run a WordPress app inside a Docker container on an Ubuntu VPS using nginx-proxy.
First I run the nginx-proxy server using the following command:
docker run -d \
-p 80:80 \
-p 443:443 \
--name proxy_server \
--net nginx-proxy-network \
-v /etc/certificates:/etc/nginx/certs \
-v /var/run/docker.sock:/tmp/docker.sock:ro \
jwilder/nginx-proxy
Then I run the MySQL database server using the following command:
docker run -d \
--name mysql_db \
--net nginx-proxy-network \
-e MYSQL_DATABASE=db1 \
-e MYSQL_USER=db1 \
-e MYSQL_PASSWORD=db1 \
-e MYSQL_ROOT_PASSWORD=db12 \
-v mysql_server_data:/var/lib/mysql \
mysql:latest
I am able to verify that the MySQL server is running by connecting to it with the following commands:
root:~# docker exec -it mysql_db /bin/bash
root#dd7643384f76:/# mysql -h localhost -u root -p
mysql> show databases;
Now that the nginx-proxy and mysql_db containers are running, I want to proxy the WordPress image on usa.mydomain.com. To do that, I run the following command:
docker run -d \
--name wordpress \
--expose 80 \
--net nginx-proxy-network \
-e DEFAULT_HOST=usa.mydomain.com \
-e WORDPRESS_DB_HOST=mysql_db:3306 \
-e WORDPRESS_DB_NAME=db1 \
-e WORDPRESS_DB_USER=db1 \
-e WORDPRESS_DB_PASSWORD=db1 \
-v wordpress:/var/www/html \
wordpress:latest
I can see all 3 containers running by executing docker ps -a.
However, when I browse http://usa.mydomain.com I get an HTTP 503 error:
503 Service Temporarily Unavailable nginx/1.17.5
I validated that usa.mydomain.com points to the server's IP address by running the following from the command line on my machine:
ipconfig /flushdns
ping usa.mydomain.com
Even when I browse to my server's IP address directly, I get the same 503 error.
What could be causing this issue?
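A hedged way to narrow this down (assuming the jwilder/nginx-proxy layout, where the generated config lives at /etc/nginx/conf.d/default.conf) is to compare what the proxy generated with what the WordPress container logged, using the container names from the commands above:
docker exec proxy_server cat /etc/nginx/conf.d/default.conf   # is there a server block for usa.mydomain.com?
docker logs wordpress                                         # did WordPress start, and could it reach mysql_db:3306?
docker logs proxy_server                                      # does the proxy register the wordpress container at all?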

Docker networking reverse proxy without docker-compose

The challenge
As described, I want to accomplish the same goal with Docker itself as I do with the help of docker-compose.
I want to get a deeper understanding of Docker and be able to work with it on platforms where docker-compose is not an option.
What I do currently (with docker-compose)
1)
I use this docker-compose file:
---
version: '3'
services:
  app:
    build: .
  proxy:
    build: docker/proxy
    ports:
      - "80:80"
The "app" service starts a container which runs node on port 3002 (is exposed in the dockerfile)
The "proxy" service starts a container which runs an nginx with - among others - the following conf:
server {
    listen 80;
    server_name app;

    location / {
        proxy_pass http://app:3002;
    }
}
2)
Then I add this to the /etc/hosts of my host pc:
127.0.0.1 app
3)
Now I run docker-compose up and visit http://app, which hits the node app.
Nice and simple, right?
Now I want to do the same only with docker.
What I've tried
1) Using the same nginx configuration.
2) Starting the containers with a bash script.
To accomplish this I:
created a network,
added the network to both containers,
set the "app" container's hostname, network alias, and dns-search to "app" (because I hoped one of these options would help).
Here is the script:
docker network create --driver bridge dockertest_nw
docker build -t dockertest_app .
docker create \
--name dockertest_app_con \
--network dockertest_nw \
--hostname app \
--network-alias=app \
--dns-search=app \
dockertest_app
docker build -t dockertest_proxy ./docker/proxy/
docker create \
--name dockertest_proxy_con \
--network dockertest_nw \
--hostname proxy \
--network-alias=proxy \
--dns-search=proxy \
-p 80:80 \
dockertest_proxy
docker start dockertest_proxy_con
docker start dockertest_app_con
Unfortunately, this doesn't work.
I also know there is a DNS service in Docker which docker-compose somehow uses; should I also use it in some way?
Could anyone give me some suggestions?
Update:
For reference, here are the logs I got from the nginx container, which I would say show that nginx cannot resolve "app":
172.18.0.1 - - [13/Apr/2017:14:49:06 +0000] "GET / HTTP/1.1" 502 576 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.133 Safari/537.36" "-"
2017/04/13 14:49:06 [error] 5#5: *13 connect() failed (111: Connection refused) while connecting to upstream, client: 172.18.0.1, server: app, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:3002/", host: "app"
You're tripping yourself up with all those options. All you really need is --network-alias to set the short form names app and proxy in your containers, which will be available in addition to the container names dockertest_app and dockertest_proxy.
docker network create --driver bridge dockertest_nw
docker build -t dockertest_app .
docker create \
--name dockertest_app \
--network dockertest_nw \
--network-alias=app \
dockertest_app
docker build -t dockertest_proxy ./docker/proxy/
docker create \
--name dockertest_proxy \
--network dockertest_nw \
--network-alias=proxy \
-p 80:80 \
dockertest_proxy
docker start dockertest_proxy
docker start dockertest_app
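To confirm the aliases work, a hedged check (assuming the proxy image is Debian-based, so getent is available) would be:
docker exec dockertest_proxy getent hosts app   # should print the app container's IP on the dockertest_nw network
curl -H 'Host: app' http://localhost/           # should reach the node app through nginx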

Cannot log in with wp-cli-generated user: WordPress behind a reverse proxy

Hello fellows, I have made a custom WordPress image, located here: https://github.com/ellakcy/wordpressWithPlugins
In the entrypoint script I use wp-cli to generate a custom user and preinstall plugins, but I cannot log in to the control panel with the user generated by wp-cli.
Do you have any idea how to fix this?
The entrypoint script is the following: https://github.com/ellakcy/wordpressWithPlugins/blob/master/docker-entrypoint.sh
I run the containers with these commands (for development purposes):
docker run --name wpdb -e MYSQL_ROOT_PASSWORD=1234 -d mariadb
docker run --name mywordpress --link wpdb:mysql -p 8080:80 -ti wp
And I am using Apache as a reverse proxy in order to access the WordPress instance running in the mywordpress container:
<VirtualHost *:80>
    ProxyPass / http://172.17.0.3/
    ProxyPassReverse http://172.17.0.3/ /
</VirtualHost>
(In place of 172.17.0.3, use the IP of the container running WordPress.)
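Rather than hard-coding 172.17.0.3, the container's current IP can be looked up (a small sketch, assuming the container is named mywordpress as above):
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' mywordpress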
Edit 1
I managed to log in by setting up a network:
docker network create --subnet="172.19.0.0/16" wordpress_default
And assigning custom IPs to the containers. (I also set some environment variables.)
Run MySQL/MariaDB:
docker run --name wpdb --net wordpress_default --ip 172.19.0.2 -e MYSQL_ROOT_PASSWORD=1234 -d mariadb
Run the WordPress container with some extra environment variables:
docker run --name mywordpress --net wordpress_default --ip 172.19.0.3 --link wpdb:mysql -e WORDPRESS_ADMIN_PASSWORD=1234 -e WORDPRESS_ADMIN_EMAIL=pc_magas#openmailbox.org -e WORDPRESS_URL=172.19.0.3 -p 8080:80 -ti wp
Then I visit the WordPress site via the IP given in the second command. But I still have problems with the local Apache running as a reverse proxy.
In the end, just manually setting the machine's IP as the URL works like a charm:
docker run --name wpdb --net wordpress_default --ip 172.19.0.2 -e MYSQL_ROOT_PASSWORD=1234 -d mariadb
Run the WordPress container with some extra environment variables:
docker run --name mywordpress --net wordpress_default --ip 172.19.0.3 --link wpdb:mysql -e WORDPRESS_ADMIN_PASSWORD=1234 -e WORDPRESS_ADMIN_EMAIL=pc_magas#openmailbox.org -e WORDPRESS_URL=172.19.0.3 -p 8080:80 -ti wp
All I had to do was set the following vhost in my Apache:
<VirtualHost *:80>
    RequestHeader set X-Forwarded-Proto "http"
    ProxyPass / http://172.19.0.3/
    ProxyPassReverse http://172.19.0.3/ /
</VirtualHost>
(Production may need some additional changes.)

Kubernetes services networking

I've been trying to get Spark working on Kubernetes on my local machine.
However, I'm having an issue trying to understand how the networking of services works.
I'm running Kubernetes in containers on my laptop:
Etcd 2.0.5.1
Kubelet 1.1.2
Proxy 1.1.2
SkyDNS 2015-03-11-001
Kube2sky 1.11
Then I launch Spark, which is located in the examples of the Kubernetes GitHub repo:
kubectl create -f kubernetes/examples/spark/spark-master-controller.yaml
kubectl create -f kubernetes/examples/spark/spark-master-service.yaml
kubectl create -f kubernetes/examples/spark/spark-webui.yaml
kubectl create -f kubernetes/examples/spark/spark-worker-controller.yaml
kubectl create -f kubernetes/examples/spark/zeppelin-controller.yaml
kubectl create -f kubernetes/examples/spark/zeppelin-service.yaml
My local network: 10.7.64.0/24
My docker network: 172.17.0.1/16
What works:
The Spark master launches and I can connect to the web UI.
The Spark worker does a DNS query for spark-master and it succeeds (it returns the correct service IP of the master).
What does not work:
The Spark worker cannot connect to the service IP. There is no route to this host in that container, nor on the local machine (laptop). I also see nothing happening in iptables. It tries to connect to somewhere in the 10.0.0.0/8 network, which I don't have any routing to. Can someone shed some light on this?
Details:
How I start the containers:
sudo docker run \
--net=host \
-d kubernetes/etcd:2.0.5.1 \
/usr/local/bin/etcd \
--addr=$(hostname -i):4001 \
--bind-addr=0.0.0.0:4001 \
--data-dir=/var/etcd/data
sudo docker run \
--volume=/:/rootfs:ro \
--volume=/sys:/sys:ro \
--volume=/dev:/dev \
--volume=/var/lib/docker/:/var/lib/docker:ro \
--volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
--volume=/var/run:/var/run:rw \
--net=host \
--pid=host \
--privileged=true \
-d \
gcr.io/google_containers/hyperkube:v1.2.0 \
/hyperkube kubelet --containerized --hostname-override="127.0.0.1" --address="0.0.0.0" --api-servers=http://localhost:8080 --config=/etc/kubernetes/manifests --cluster-dns=10.7.64.184 --cluster-domain=kubernetes.local
sudo docker run -d --net=host --privileged gcr.io/google-containers/hyperkube:v1.2.0 /hyperkube proxy --master=http://127.0.0.1:8080 --v=2 --cluster-dns=10.7.64.184 --cluster-domain=kubernetes.local --cloud-provider=""
sudo docker run -d --net=host --restart=always \
gcr.io/google_containers/kube2sky:1.11 \
-v=10 -logtostderr=true -domain=kubernetes.local \
-etcd-server="http://127.0.0.1:4001"
sudo docker run -d --net=host --restart=always \
-e ETCD_MACHINES="http://127.0.0.1:4001" \
-e SKYDNS_DOMAIN="kubernetes.local" \
-e SKYDNS_ADDR="10.7.64.184:53" \
-e SKYDNS_NAMESERVERS="8.8.8.8:53,8.8.4.4:53" \
gcr.io/google_containers/skydns:2015-03-11-001
Thanks!
I found what the issue was: the proxy was not running because --cluster-dns and --cluster-domain are not parameters of the proxy. Now the iptables rules are created and the Spark workers are able to connect to the service IP of the spark-master.
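A minimal way to verify the fix (assuming kubectl is configured against this local cluster and the service name spark-master from the example) would be:
sudo iptables-save | grep -i kube | head   # kube-proxy's service rules should now be present
kubectl get svc spark-master               # shows the ClusterIP the workers connect to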
