I've been trying to get Spark working on Kubernetes on my local machine.
However, I'm having an issue understanding how the networking of services works.
I'm running Kubernetes in containers on my laptop:
Etcd 2.0.5.1
Kubelet 1.1.2
Proxy 1.1.2
SkyDns 2015-03-11-001
Sky2kube 1.11
Then I'm launching Spark, which is located in the examples of the Kubernetes GitHub repo.
kubectl create -f kubernetes/examples/spark/spark-master-controller.yaml
kubectl create -f kubernetes/examples/spark/spark-master-service.yaml
kubectl create -f kubernetes/examples/spark/spark-webui.yaml
kubectl create -f kubernetes/examples/spark/spark-worker-controller.yaml
kubectl create -f kubernetes/examples/spark/zeppelin-controller.yaml
kubectl create -f kubernetes/examples/spark/zeppelin-service.yaml
My local network: 10.7.64.0/24
My docker network: 172.17.0.1/16
What works:
Spark master launches and I can connect to the webUI.
Spark worker does a DNS query for spark-master and it succeeds (it returns the correct service IP of the master).
What does not work:
Spark worker cannot connect to the service IP. There is no route to
this host in that container, nor on the local machine (laptop). I also
see nothing happening in iptables. It tries to connect to somewhere
in the 10.0.0.0/8 network, which I don't have any route to. Can
someone shed some light on this?
Details:
How I start the containers:
sudo docker run \
--net=host \
-d kubernetes/etcd:2.0.5.1 \
/usr/local/bin/etcd \
--addr=$(hostname -i):4001 \
--bind-addr=0.0.0.0:4001 \
--data-dir=/var/etcd/data
sudo docker run \
--volume=/:/rootfs:ro \
--volume=/sys:/sys:ro \
--volume=/dev:/dev \
--volume=/var/lib/docker/:/var/lib/docker:ro \
--volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
--volume=/var/run:/var/run:rw \
--net=host \
--pid=host \
--privileged=true \
-d \
gcr.io/google_containers/hyperkube:v1.2.0 \
/hyperkube kubelet --containerized \
--hostname-override="127.0.0.1" \
--address="0.0.0.0" \
--api-servers=http://localhost:8080 \
--config=/etc/kubernetes/manifests \
--cluster-dns=10.7.64.184 \
--cluster-domain=kubernetes.local
sudo docker run -d --net=host --privileged \
gcr.io/google-containers/hyperkube:v1.2.0 \
/hyperkube proxy --master=http://127.0.0.1:8080 --v=2 \
--cluster-dns=10.7.64.184 \
--cluster-domain=kubernetes.local \
--cloud-provider=""
sudo docker run -d --net=host --restart=always \
gcr.io/google_containers/kube2sky:1.11 \
-v=10 -logtostderr=true -domain=kubernetes.local \
-etcd-server="http://127.0.0.1:4001"
sudo docker run -d --net=host --restart=always \
-e ETCD_MACHINES="http://127.0.0.1:4001" \
-e SKYDNS_DOMAIN="kubernetes.local" \
-e SKYDNS_ADDR="10.7.64.184:53" \
-e SKYDNS_NAMESERVERS="8.8.8.8:53,8.8.4.4:53" \
gcr.io/google_containers/skydns:2015-03-11-001
Thanks !
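For anyone debugging a similar symptom, the service rules that kube-proxy programs can be inspected directly (a hypothetical check; KUBE-* are the chain names kube-proxy's iptables mode uses):

```shell
# An empty result means kube-proxy is not writing any service NAT rules,
# which would explain having no route to a service IP.
sudo iptables -t nat -L -n | grep KUBE
# Verify the proxy container is actually running and inspect its logs.
sudo docker ps | grep proxy
```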
I found what the issue was: the proxy was not running because --cluster-dns and --cluster-domain are not parameters of the proxy. Now the iptables rules are created and the Spark workers are able to connect to the service IP of the spark-master.
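Based on that fix, the working proxy invocation would simply drop the two unsupported flags (a sketch; image and remaining flags kept from the question):

```shell
# kube-proxy accepts --master and --v, but not --cluster-dns or
# --cluster-domain; those two flags belong to the kubelet.
sudo docker run -d --net=host --privileged \
  gcr.io/google-containers/hyperkube:v1.2.0 \
  /hyperkube proxy --master=http://127.0.0.1:8080 --v=2
```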
Related
With Docker I was able to run the WordPress example for docker-compose on nearly every platform, without prior Docker knowledge.
I am looking for a way to achieve the same with Podman.
In my case, to have a fast cross-platform way to set up a working WordPress installation for development.
As Podman is far younger, a valid answer in 2022 would also be: it is not possible, because... / it is only possible given constraint X.
Still, I would like to create an entry point for other people who run into the same issue in the future.
I posted my own efforts below. Before I spend more hours debugging lots of small (but still solvable) issues, I wanted to find out if someone else has faced the same problem and already has a solution. If you have, please clearly document its constraints.
My particular issue, as a reference
I am on Ubuntu 20.04 and podman -v gives 3.4.2.
docker/podman compose
When I use docker-compose up with the Podman back-end on Docker's WordPress .yml file, I run into the "duplicate mount destination" issue.
podman-compose is part of Podman 4.1.0, which is not available on Ubuntu as I write this.
Red Hat example
The Red Hat example gives "Error establishing a database connection ... contact with the database server at mysql could not be established".
A suggested solution for the above does not work for me; share is likely a typo, so I tried replacing it with unshare.
CentOS example
I found an example which uses pods instead of a docker-compose.yml file, but it is written for CentOS.
I modified the CentOS example; see the script below. I get the containers up and running. However, WordPress is unable to connect to the database.
#!/bin/bash
# Set environment variables:
DB_NAME='wordpress_db'
DB_PASS='mysupersecurepass'
DB_USER='justbeauniqueuser'
POD_NAME='wordpress_with_mariadb'
CONTAINER_NAME_DB='wordpress_db'
CONTAINER_NAME_WP='wordpress'
mkdir -p html
mkdir -p database
# Remove previous attempts
sudo podman pod rm -f $POD_NAME
# Pull before run, bc: invalid reference format error
sudo podman pull mariadb:latest
sudo podman pull wordpress
# Create a pod instead of --link, so both containers are able to reach each other.
sudo podman pod create -n $POD_NAME -p 80:80
sudo podman run --detach --pod $POD_NAME \
-e MYSQL_ROOT_PASSWORD=$DB_PASS \
-e MYSQL_PASSWORD=$DB_PASS \
-e MYSQL_DATABASE=$DB_NAME \
-e MYSQL_USER=$DB_USER \
--name $CONTAINER_NAME_DB -v "$PWD/database":/var/lib/mysql \
docker.io/mariadb:latest
sudo podman run --detach --pod $POD_NAME \
-e WORDPRESS_DB_HOST=127.0.0.1:3306 \
-e WORDPRESS_DB_NAME=$DB_NAME \
-e WORDPRESS_DB_USER=$DB_USER \
-e WORDPRESS_DB_PASSWORD=$DB_PASS \
--name $CONTAINER_NAME_WP -v "$PWD/html":/var/www/html \
docker.io/wordpress
Also, I was a bit unsure where to post this question. If Server Fault or another Stack Exchange site is a better fit, I will happily post there.
Actually, your code works with just small changes.
I removed the sudos and changed the pod's external port to 8090 instead of 80. So now everything is running as a non-root user.
#!/bin/bash
# https://stackoverflow.com/questions/74054932/how-to-install-and-setup-wordpress-using-podman
# Set environment variables:
DB_NAME='wordpress_db'
DB_PASS='mysupersecurepass'
DB_USER='justbeauniqueuser'
POD_NAME='wordpress_with_mariadb'
CONTAINER_NAME_DB='wordpress_db'
CONTAINER_NAME_WP='wordpress'
mkdir -p html
mkdir -p database
# Remove previous attempts
podman pod rm -f $POD_NAME
# Pull before run, bc: invalid reference format error
podman pull docker.io/mariadb:latest
podman pull docker.io/wordpress
# Create a pod instead of --link.
# So both containers are able to reach each others.
podman pod create -n $POD_NAME -p 8090:80
podman run --detach --pod $POD_NAME \
-e MYSQL_ROOT_PASSWORD=$DB_PASS \
-e MYSQL_PASSWORD=$DB_PASS \
-e MYSQL_DATABASE=$DB_NAME \
-e MYSQL_USER=$DB_USER \
--name $CONTAINER_NAME_DB -v "$PWD/database":/var/lib/mysql \
docker.io/mariadb:latest
podman run --detach --pod $POD_NAME \
-e WORDPRESS_DB_HOST=127.0.0.1:3306 \
-e WORDPRESS_DB_NAME=$DB_NAME \
-e WORDPRESS_DB_USER=$DB_USER \
-e WORDPRESS_DB_PASSWORD=$DB_PASS \
--name $CONTAINER_NAME_WP -v "$PWD/html":/var/www/html \
docker.io/wordpress
This is what worked for me:
#!/bin/bash
# https://stackoverflow.com/questions/74054932/how-to-install-and-setup-wordpress-using-podman
# Set environment variables:
POD_NAME='wordpress_mariadb'
DB_ROOT_PW='sup3rS3cr3t'
DB_NAME='wp'
DB_PASS='s0m3wh4tS3cr3t'
DB_USER='wordpress'
podman pod create --name $POD_NAME -p 8080:80
podman run \
-d --restart=always --pod=$POD_NAME \
-e MYSQL_ROOT_PASSWORD="$DB_ROOT_PW" \
-e MYSQL_DATABASE="$DB_NAME" \
-e MYSQL_USER="$DB_USER" \
-e MYSQL_PASSWORD="$DB_PASS" \
-v $HOME/public_html/wordpress/mysql:/var/lib/mysql:Z \
--name=wordpress-db docker.io/mariadb:latest
podman run \
-d --restart=always --pod=$POD_NAME \
-e WORDPRESS_DB_NAME="$DB_NAME" \
-e WORDPRESS_DB_USER="$DB_USER" \
-e WORDPRESS_DB_PASSWORD="$DB_PASS" \
-e WORDPRESS_DB_HOST="127.0.0.1" \
-v $HOME/public_html/wordpress/html:/var/www/html:Z \
--name wordpress docker.io/library/wordpress:latest
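Once either script has run, a quick sanity check (standard Podman subcommands; use whichever port the pod published, 8090 or 8080 above):

```shell
# Both containers should show as running inside the pod.
podman pod ps
podman ps --pod
# WordPress should answer on the published port with its install page.
curl -I http://localhost:8080
```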
I'm attempting to set up Mautic (https://github.com/mautic/docker-mautic) on Dokku. I have everything working well except for the mounted volume. Mautic stores config files in the volume, so every time the container restarts it needs to be reconfigured if the volume is not set up. The instructions on the above page are:
$ docker volume create mautic_data
$ docker run --name mautic -d \
--restart=always \
-e MAUTIC_DB_HOST=127.0.0.1 \
-e MAUTIC_DB_USER=root \
-e MAUTIC_DB_PASSWORD=mypassword \
-e MAUTIC_DB_NAME=mautic \
-e MAUTIC_RUN_CRON_JOBS=true \
-e MAUTIC_TRUSTED_PROXIES=0.0.0.0/0 \
-p 8080:80 \
-v mautic_data:/var/www/html \
mautic/mautic:latest
I have created a persistent volume in dokku with
dokku storage:mount mautic /var/lib/dokku/data/storage/mautic:/mautic_data
this is confirmed:
root@apps:/var/lib# dokku storage:report mautic
=====> mautic storage information
Storage build mounts:
Storage deploy mounts: -v /var/lib/dokku/data/storage/mautic:/mautic_data
Storage run mounts: -v /var/lib/dokku/data/storage/mautic:/mautic_data
However, the config file is not saved. Can anyone point out where I am going wrong?
It looks like the directory that the config files are stored in is /var/www/html instead of /mautic_data. In the docker command referenced, mautic_data in -v mautic_data:/var/www/html is the name of the volume on the host created by docker volume create mautic_data, not the directory inside the container.
Try using:
dokku storage:mount mautic /var/lib/dokku/data/storage/mautic:/var/www/html
This will bind /var/lib/dokku/data/storage/mautic in the host computer to /var/www/html inside the container.
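To switch mounts cleanly, the original mount can be removed first and the app restarted (a sketch using the standard Dokku storage and ps subcommands):

```shell
# Remove the mount that points at the wrong container path.
dokku storage:unmount mautic /var/lib/dokku/data/storage/mautic:/mautic_data
# Mount the host directory at the path Mautic actually writes to.
dokku storage:mount mautic /var/lib/dokku/data/storage/mautic:/var/www/html
# Storage changes only take effect on the next deploy or restart.
dokku ps:restart mautic
```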
I am trying to run a WordPress app inside a Docker container on an Ubuntu VPS using nginx-proxy.
First I run the nginx-proxy server using the following command
docker run -d \
-p 80:80 \
-p 443:443 \
--name proxy_server \
--net nginx-proxy-network \
-v /etc/certificates:/etc/nginx/certs \
-v /var/run/docker.sock:/tmp/docker.sock:ro \
jwilder/nginx-proxy
Then I run the MySQL database server using the following command
docker run -d \
--name mysql_db \
--net nginx-proxy-network \
-e MYSQL_DATABASE=db1 -e \
MYSQL_USER=db1 -e \
MYSQL_PASSWORD=db1 -e \
MYSQL_ROOT_PASSWORD=db12 \
-v mysql_server_data:/var/lib/mysql \
mysql:latest
I am able to verify that the MySQL server is running by connecting to it using the following commands
root:~# docker exec -it mysql_db /bin/bash
root@dd7643384f76:/# mysql -h localhost -u root -p
mysql> show databases;
Now that the nginx-proxy and mysql_db containers are running, I want to proxy the WordPress image on usa.mydomain.com. To do that, I run the following command
docker run -d \
--name wordpress \
--expose 80 \
--net nginx-proxy-network \
-e DEFAULT_HOST=usa.mydomain.com \
-e WORDPRESS_DB_HOST=mysql_db:3306 \
-e WORDPRESS_DB_NAME=db1 \
-e WORDPRESS_DB_USER=db1 \
-e WORDPRESS_DB_PASSWORD=db1 \
-v wordpress:/var/www/html \
wordpress:latest
I can see all 3 containers running by executing docker ps -a
However, when I browse http://usa.mydomain.com I get HTTP error 503:
503 Service Temporarily Unavailable nginx/1.17.5
I validated that usa.mydomain.com points to the server's IP address by running the following from the command line on my machine.
ipconfig /flushdns
ping usa.mydomain.com
Even when I try to browse my server's IP address I get the same 503 error.
What could be causing this issue?
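One thing worth checking (a diagnostic suggestion, not a confirmed fix): jwilder/nginx-proxy routes requests based on a VIRTUAL_HOST environment variable set on the backend container, and the command above sets DEFAULT_HOST instead. The generated config and the proxy logs show whether an upstream for the domain was registered:

```shell
# If no upstream block mentions usa.mydomain.com, nginx-proxy never
# registered the wordpress container and will answer 503.
docker exec proxy_server grep -A 5 usa.mydomain.com /etc/nginx/conf.d/default.conf
docker logs proxy_server
```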
My server runs GitLab in Podman.
I want GitLab to be reachable on a subdomain.
Test command
podman start gitlab-ce --VIRTUAL-HOST=test.example.com -p 80
How do I configure a virtual host in Podman?
According to the GitLab documentation, the container can be started with:
sudo podman run --detach \
--hostname gitlab.example.com \
--env GITLAB_OMNIBUS_CONFIG="external_url 'http://test.example.com/';" \
--publish 443:443 --publish 80:80 \
--name gitlab \
--restart always \
gitlab/gitlab-ce:latest
sudo is needed to bind port 80 and 443.
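To confirm the container came up and to follow its (fairly long) first-boot reconfiguration, the standard Podman subcommands apply:

```shell
sudo podman ps
sudo podman logs -f gitlab
```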
I've started TensorFlow Serving using Docker:
docker run -d -p 8500:8500 -p 8501:8501 -p 8503:8503 \
-v /tfserving/models:/models \
-v /tfserving/config:/tfconfig \
tensorflow/serving:1.14.0 \
--model_config_file=/tfconfig/models.config \
--monitoring_config_file=/tfconfig/monitor.config \
--enable_batching=true \
--batching_parameters_file=/tfconfig/batching.config \
--model_config_file_poll_wait_seconds=30 \
--file_system_poll_wait_seconds=30 \
--rest_api_port=8503 \
--allow_version_labels_for_unavailable_models=true
I can get the log with docker logs containerid, but it only shows the basic log:
2019-11-22 07:03:48.912930: I tensorflow_serving/servables/tensorflow/saved_model_warmup.cc:103]
2019-11-22 07:03:48.938568: I tensorflow_serving/model_servers/server.cc:324] Running gRPC ModelServer at 0.0.0.0:8500 ...
[warn] getaddrinfo: address family for nodename not supported
2019-11-22 07:03:48.947993: I tensorflow_serving/model_servers/server.cc:344] Exporting HTTP/REST API at:localhost:8503 ...
[evhttp_server.cc : 239] RAW: Entering the event loop ...
When I send a gRPC request from the client, no more logs are output.
And I've tried using -v=1 when starting, but it didn't work.
What I want to do is get the gRPC request log from the server for troubleshooting. How can I do that? Thanks~
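One avenue worth trying (an assumption, not a confirmed fix): TensorFlow reads its verbose-logging level from the TF_CPP_MIN_VLOG_LEVEL environment variable rather than from a server flag, so it would have to be passed into the container with -e:

```shell
# Same invocation as above, with the VLOG level raised via the environment.
docker run -d -p 8500:8500 -p 8501:8501 -p 8503:8503 \
  -e TF_CPP_MIN_VLOG_LEVEL=1 \
  -v /tfserving/models:/models \
  -v /tfserving/config:/tfconfig \
  tensorflow/serving:1.14.0 \
  --model_config_file=/tfconfig/models.config \
  --rest_api_port=8503
```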