Kibana for ARM architecture

I can see that Elasticsearch is available as a Docker image for ARM:
https://hub.docker.com/r/arm64v8/elasticsearch/
But I do not see Kibana for ARM. Is there an official or unofficial Kibana version available that is compatible with Elasticsearch 7.10.1?

You can run an unofficial Kibana version using commands like this:
docker network create somenetwork
docker run -d --name elasticsearch --net somenetwork -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" arm64v8/elasticsearch:7.10.1
docker run -d --name kibana --net somenetwork -p 5601:5601 gagara/kibana-oss-arm64
This version does not include the "Machine Learning" feature:
# curl -u user:password -XPUT -H "Content-Type: application/json" "http://localhost:9200/_ml/inference/telco_churn" -d @telco_churn_model.json
{"error":{"root_cause":[{"type":"security_exception","reason":"current license is non-compliant for [ml]"}]}}
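To confirm that both containers came up before loading anything, a quick check against the ports mapped above should suffice (a sketch; add -u user:password if security is enabled):
curl http://localhost:9200
curl -I http://localhost:5601/api/status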

How to install and set up WordPress using Podman

With Docker I was able to run the WordPress example for docker-compose on nearly every platform, without prior Docker knowledge.
I am looking for a way to achieve the same with Podman.
In my case, the goal is a fast, cross-platform way to set up a working WordPress installation for development.
As Podman is far younger, a valid answer in 2022 would also be: it is not possible because... / it is only possible given constraint X.
Still, I would like to create an entry point for other people who run into the same issue in the future.
I posted my own efforts below. Before I spend more hours debugging lots of small (but still solvable) issues, I wanted to find out whether someone else has faced the same problem and already has a solution. If you have, please clearly document its constraints.
My particular issue, as a reference
I am on Ubuntu 20.04 and podman -v gives 3.4.2.
docker/podman compose
When I use docker-compose up with the Podman back-end on Docker's WordPress .yml file, I run into the "duplicate mount destination" issue.
podman-compose is part of Podman 4.1.0, which is not available on Ubuntu as I write this.
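(For what it's worth, podman-compose itself is a standalone Python tool and can be installed from PyPI independently of the packaged Podman version; I have not verified this route on my setup:)
pip3 install --user podman-compose
podman-compose up -d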
Red Hat example
The Red Hat example gives "Error establishing a database connection ... contact with the database server at mysql could not be established".
A solution suggested for the above does not work for me; "share" is likely a typo there, so I tried replacing it with "unshare".
CentOS example
I found an example which uses pods instead of a docker-compose.yml file, but it is written for CentOS.
I modified the CentOS example, see the script below. I get the containers up and running; however, WordPress is unable to connect to the database.
#!/bin/bash
# Set environment variables:
DB_NAME='wordpress_db'
DB_PASS='mysupersecurepass'
DB_USER='justbeauniqueuser'
POD_NAME='wordpress_with_mariadb'
CONTAINER_NAME_DB='wordpress_db'
CONTAINER_NAME_WP='wordpress'
mkdir -p html
mkdir -p database
# Remove previous attempts
sudo podman pod rm -f $POD_NAME
# Pull before run, because of: invalid reference format error
sudo podman pull mariadb:latest
sudo podman pull wordpress
# Create a pod instead of --link, so both containers are able to reach each other.
sudo podman pod create -n $POD_NAME -p 80:80
sudo podman run --detach --pod $POD_NAME \
-e MYSQL_ROOT_PASSWORD=$DB_PASS \
-e MYSQL_PASSWORD=$DB_PASS \
-e MYSQL_DATABASE=$DB_NAME \
-e MYSQL_USER=$DB_USER \
--name $CONTAINER_NAME_DB -v "$PWD/database":/var/lib/mysql \
docker.io/mariadb:latest
sudo podman run --detach --pod $POD_NAME \
-e WORDPRESS_DB_HOST=127.0.0.1:3306 \
-e WORDPRESS_DB_NAME=$DB_NAME \
-e WORDPRESS_DB_USER=$DB_USER \
-e WORDPRESS_DB_PASSWORD=$DB_PASS \
--name $CONTAINER_NAME_WP -v "$PWD/html":/var/www/html \
docker.io/wordpress
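(A diagnostic sketch, beyond what the script above does: the container logs usually reveal why the connection fails, e.g. wrong credentials versus the database still initializing.)
sudo podman logs $CONTAINER_NAME_DB
sudo podman logs $CONTAINER_NAME_WP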
Also, I was a bit unsure where to post this question. If Server Fault or another Stack Exchange site is a better fit, I will happily post there.
Actually, your code works with just a few small changes.
I removed the sudos and changed the pod's external port to 8090 instead of 80, so now everything runs as a non-root user.
#!/bin/bash
# https://stackoverflow.com/questions/74054932/how-to-install-and-setup-wordpress-using-podman
# Set environment variables:
DB_NAME='wordpress_db'
DB_PASS='mysupersecurepass'
DB_USER='justbeauniqueuser'
POD_NAME='wordpress_with_mariadb'
CONTAINER_NAME_DB='wordpress_db'
CONTAINER_NAME_WP='wordpress'
mkdir -p html
mkdir -p database
# Remove previous attempts
podman pod rm -f $POD_NAME
# Pull before run, because of: invalid reference format error
podman pull docker.io/mariadb:latest
podman pull docker.io/wordpress
# Create a pod instead of --link,
# so both containers are able to reach each other.
podman pod create -n $POD_NAME -p 8090:80
podman run --detach --pod $POD_NAME \
-e MYSQL_ROOT_PASSWORD=$DB_PASS \
-e MYSQL_PASSWORD=$DB_PASS \
-e MYSQL_DATABASE=$DB_NAME \
-e MYSQL_USER=$DB_USER \
--name $CONTAINER_NAME_DB -v "$PWD/database":/var/lib/mysql \
docker.io/mariadb:latest
podman run --detach --pod $POD_NAME \
-e WORDPRESS_DB_HOST=127.0.0.1:3306 \
-e WORDPRESS_DB_NAME=$DB_NAME \
-e WORDPRESS_DB_USER=$DB_USER \
-e WORDPRESS_DB_PASSWORD=$DB_PASS \
--name $CONTAINER_NAME_WP -v "$PWD/html":/var/www/html \
docker.io/wordpress
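Once the script has finished, the site should answer on the mapped port; a quick check:
curl -I http://localhost:8090
podman pod ps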
This is what worked for me:
#!/bin/bash
# https://stackoverflow.com/questions/74054932/how-to-install-and-setup-wordpress-using-podman
# Set environment variables:
POD_NAME='wordpress_mariadb'
DB_ROOT_PW='sup3rS3cr3t'
DB_NAME='wp'
DB_PASS='s0m3wh4tS3cr3t'
DB_USER='wordpress'
podman pod create --name $POD_NAME -p 8080:80
podman run \
-d --restart=always --pod=$POD_NAME \
-e MYSQL_ROOT_PASSWORD="$DB_ROOT_PW" \
-e MYSQL_DATABASE="$DB_NAME" \
-e MYSQL_USER="$DB_USER" \
-e MYSQL_PASSWORD="$DB_PASS" \
-v $HOME/public_html/wordpress/mysql:/var/lib/mysql:Z \
--name=wordpress-db docker.io/mariadb:latest
podman run \
-d --restart=always --pod=$POD_NAME \
-e WORDPRESS_DB_NAME="$DB_NAME" \
-e WORDPRESS_DB_USER="$DB_USER" \
-e WORDPRESS_DB_PASSWORD="$DB_PASS" \
-e WORDPRESS_DB_HOST="127.0.0.1" \
-v $HOME/public_html/wordpress/html:/var/www/html:Z \
--name wordpress docker.io/library/wordpress:latest
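To tear the stack down again (using the pod name from this script; the data bind-mounted under $HOME/public_html/wordpress is kept):
podman pod stop wordpress_mariadb
podman pod rm -f wordpress_mariadb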

How can I run wordpress docker-image using nginx-proxy?

I am trying to run a WordPress app inside a Docker container on an Ubuntu VPS using nginx-proxy.
First, I run the nginx-proxy server using the following command:
docker run -d \
-p 80:80 \
-p 443:443 \
--name proxy_server \
--net nginx-proxy-network \
-v /etc/certificates:/etc/nginx/certs \
-v /var/run/docker.sock:/tmp/docker.sock:ro \
jwilder/nginx-proxy
Then I run the MySQL database server using the following command:
docker run -d \
--name mysql_db \
--net nginx-proxy-network \
-e MYSQL_DATABASE=db1 \
-e MYSQL_USER=db1 \
-e MYSQL_PASSWORD=db1 \
-e MYSQL_ROOT_PASSWORD=db12 \
-v mysql_server_data:/var/lib/mysql \
mysql:latest
I am able to verify that the MySQL server is running by connecting to it using the following commands:
root:~# docker exec -it mysql_db /bin/bash
root@dd7643384f76:/# mysql -h localhost -u root -p
mysql> show databases;
Now that the nginx-proxy and mysql_db containers are running, I want to proxy the WordPress container on usa.mydomain.com. To do that, I run the following command:
docker run -d \
--name wordpress \
--expose 80 \
--net nginx-proxy-network \
-e DEFAULT_HOST=usa.mydomain.com \
-e WORDPRESS_DB_HOST=mysql_db:3306 \
-e WORDPRESS_DB_NAME=db1 \
-e WORDPRESS_DB_USER=db1 \
-e WORDPRESS_DB_PASSWORD=db1 \
-v wordpress:/var/www/html \
wordpress:latest
I can see all 3 containers running by executing docker ps -a.
However, when I browse http://usa.mydomain.com I get HTTP error 503:
503 Service Temporarily Unavailable nginx/1.17.5
I validated that usa.mydomain.com is pointing to the server's IP address by running the following from the command line on my machine:
ipconfig /flushdns
ping usa.mydomain.com
Even when I browse to the server's IP address directly, I get the same 503 error.
What could be causing this issue?
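One thing worth checking (my own suggestion, not covered above): jwilder/nginx-proxy only generates a backend entry for containers that set a VIRTUAL_HOST environment variable; DEFAULT_HOST is meant for the proxy container itself and does not register a backend, so the wordpress container above is likely never picked up. Whether it was registered shows in the generated config:
docker exec proxy_server cat /etc/nginx/conf.d/default.conf
docker logs wordpress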

How do I use JFrog CLI with CircleCI 2.0?

I'm trying to use the JFrog CLI with CircleCI 2.0 to publish my Docker image to my JFrog Artifactory instance. After some research I found this tutorial: https://circleci.com/docs/1.0/Artifactory/ but it's based on the CircleCI 1.0 specification.
My config.yml file currently is:
version: 2
jobs:
  build:
    docker:
      - image: docker:17.05.0-ce-git
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Install dependencies
          command: |
            apk add --no-cache \
              py-pip=9.0.0-r1
            pip install \
              docker-compose==1.12.0 \
              awscli==1.11.76
      - run:
          name: Setup JFrog
          command: |
            wget http://dl.bintray.com/jfrog/jfrog-cli-go/1.7.1/jfrog-cli-linux-amd64/jfrog
            chmod +x jfrog
            ./jfrog rt config --url $ARTIFACTORY_URL --user $ARTIFACTORY_USER --apikey $ARTIFACTORY_PASSWORD
            docker login -e $ARTIFACTORY_EMAIL -u $ARTIFACTORY_USER -p $ARTIFACTORY_PASSWORD $ARTIFACTORY_DOCKER_REPOSITORY
But I'm getting the following error:
#!/bin/sh -eo pipefail
wget http://dl.bintray.com/jfrog/jfrog-cli-go/1.7.1/jfrog-cli-linux-amd64/jfrog
chmod +x jfrog
./jfrog rt config --url $ARTIFACTORY_URL --user $ARTIFACTORY_USER --apikey $ARTIFACTORY_PASSWORD
docker login -e $ARTIFACTORY_EMAIL -u $ARTIFACTORY_USER -p $ARTIFACTORY_PASSWORD $ARTIFACTORY_DOCKER_REPOSITORY
Connecting to dl.bintray.com (35.162.24.14:80)
Connecting to akamai.bintray.com (23.46.57.209:80)
jfrog 100% |*******************************| 9543k 0:00:00 ETA
/bin/sh: ./jfrog: not found
Exited with code 127
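(The usual cause of this symptom on Alpine-based images such as docker:17.05.0-ce-git is that the downloaded binary is dynamically linked against glibc, which Alpine does not ship; the shell then reports the missing ELF loader as ./jfrog: not found. That is an inference on my part, not something the build output states.)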
Does anyone know what is the correct way to use JFrog CLI with CircleCI 2.0?
I've fixed this by installing the JFrog CLI through npm:
version: 2
jobs:
  build:
    docker:
      - image: docker:17.05.0-ce-git
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Install dependencies
          command: |
            apk add --no-cache \
              py-pip=9.0.0-r1 \
              openssl \
              nodejs
            pip install \
              docker-compose==1.12.0 \
              awscli==1.11.76
      - run:
          name: Setup JFrog
          command: |
            npm install -g jfrog-cli-go
            jfrog rt config --url $ARTIFACTORY_URL --user $ARTIFACTORY_USER --apikey $ARTIFACTORY_PASSWORD
            docker login -u $ARTIFACTORY_USER -p $ARTIFACTORY_PASSWORD $ARTIFACTORY_DOCKER_REPOSITORY
Now it's working.
As an alternative to installing with Node.js (which is perfectly possible too, especially if you're running a Node.js build in CircleCI), you can use a cURL command to install it for you.
curl -fL https://getcli.jfrog.io | sh
This script downloads the latest released version of the JFrog CLI for your operating system and architecture (32- vs 64-bit).
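Whichever way the CLI is installed, once jfrog rt config and docker login have succeeded, publishing is a plain docker push against the Artifactory Docker registry (the image name below is a placeholder):
docker build -t $ARTIFACTORY_DOCKER_REPOSITORY/myapp:1.0 .
docker push $ARTIFACTORY_DOCKER_REPOSITORY/myapp:1.0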

Development environment for WordPress with Docker incl. FTP access

I would like to know if it is possible to provide a local development environment for WordPress where the files can be uploaded via FTP.
I already got the official WordPress Docker image running:
docker run --name wordpress-db -e MYSQL_ROOT_PASSWORD=admin -p 3306:3306 -d mysql
docker run --name wordpress --link wordpress-db:mysql -p 8080:80 -d wordpress
Next, I'm trying to use an FTP Docker image. After a short search I chose metabrainz/docker-anon-ftp, but I am not tied to this image if there are better ones. I have extended my commands as follows:
docker run --name wordpress --link wordpress-db:mysql -p 8080:80 -v /wordpress-volume:/var/www/html/wp-content -d wordpress
docker run --name wordpress-ftp -p 20-21:20-21 -p 65500-65515:65500-65515 -v /wordpress-volume:/var/www/html/wp-content -d metabrainz/docker-anon-ftp
Now I have the problem that the FTP server is reachable but does not show me any content.
What did I do wrong? Can someone give me the commands with which I can upload files (from my IDE) into WordPress via FTP?
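(A hedged guess: the anon-FTP image most likely serves a different directory than /var/www/html/wp-content, so the shared volume has to be mounted at whatever path the FTP daemon actually exports inside its container. The image's declared volumes can be inspected first:)
docker inspect --format '{{json .Config.Volumes}}' metabrainz/docker-anon-ftp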

Kubernetes services networking

I've been trying to get Spark working on Kubernetes on my local machine.
However, I'm having an issue trying to understand how the networking of services works.
I'm running Kubernetes in containers on my laptop:
Etcd 2.0.5.1
Kubelet 1.1.2
Proxy 1.1.2
SkyDNS 2015-03-11-001
Kube2sky 1.11
Then I'm launching Spark, which is located in the examples of the Kubernetes GitHub repo.
kubectl create -f kubernetes/examples/spark/spark-master-controller.yaml
kubectl create -f kubernetes/examples/spark/spark-master-service.yaml
kubectl create -f kubernetes/examples/spark/spark-webui.yaml
kubectl create -f kubernetes/examples/spark/spark-worker-controller.yaml
kubectl create -f kubernetes/examples/spark/zeppelin-controller.yaml
kubectl create -f kubernetes/examples/spark/zeppelin-service.yaml
My local network: 10.7.64.0/24
My docker network: 172.17.0.1/16
What works:
Spark master launches and I can connect to the web UI.
Spark worker does a DNS query for spark-master and is successful (it returns the correct service IP of the master).
What does not work:
Spark worker cannot connect to the service IP. There is no route to this host in that container, nor on the local machine (laptop). Also, I see nothing happening in iptables. It tries to connect to somewhere in the 10.0.0.0/8 network, which I don't have any routing to. Can someone shed light on this?
Details:
How I start the containers:
sudo docker run \
--net=host \
-d kubernetes/etcd:2.0.5.1 \
/usr/local/bin/etcd \
--addr=$(hostname -i):4001 \
--bind-addr=0.0.0.0:4001 \
--data-dir=/var/etcd/data
sudo docker run \
--volume=/:/rootfs:ro \
--volume=/sys:/sys:ro \
--volume=/dev:/dev \
--volume=/var/lib/docker/:/var/lib/docker:ro \
--volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
--volume=/var/run:/var/run:rw \
--net=host \
--pid=host \
--privileged=true \
-d \
gcr.io/google_containers/hyperkube:v1.2.0 \
/hyperkube kubelet --containerized --hostname-override="127.0.0.1" --address="0.0.0.0" --api-servers=http://localhost:8080 --config=/etc/kubernetes/manifests --cluster-dns=10.7.64.184 --cluster-domain=kubernetes.local
sudo docker run -d --net=host --privileged gcr.io/google-containers/hyperkube:v1.2.0 /hyperkube proxy --master=http://127.0.0.1:8080 --v=2 --cluster-dns=10.7.64.184 --cluster-domain=kubernetes.local --cloud-provider=""
sudo docker run -d --net=host --restart=always \
gcr.io/google_containers/kube2sky:1.11 \
-v=10 -logtostderr=true -domain=kubernetes.local \
-etcd-server="http://127.0.0.1:4001"
sudo docker run -d --net=host --restart=always \
-e ETCD_MACHINES="http://127.0.0.1:4001" \
-e SKYDNS_DOMAIN="kubernetes.local" \
-e SKYDNS_ADDR="10.7.64.184:53" \
-e SKYDNS_NAMESERVERS="8.8.8.8:53,8.8.4.4:53" \
gcr.io/google_containers/skydns:2015-03-11-001
Thanks!
I found what the issue was: the proxy was not running because --cluster-dns and --cluster-domain are not parameters of the proxy. With those flags removed, the iptables rules are created and the Spark workers are able to connect to the service IP of the spark-master.
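For reference, the corrected proxy invocation is the original command with just those two flags dropped:
sudo docker run -d --net=host --privileged gcr.io/google-containers/hyperkube:v1.2.0 /hyperkube proxy --master=http://127.0.0.1:8080 --v=2 --cloud-provider=""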
