To stop the default DAGs from loading, I edited the airflow.cfg files of the containers airflow-scheduler_1, airflow-webserver_1 and airflow-worker_1. After editing each of them, I ran a db reset. Unfortunately the default DAGs are still there.
Do you know how to do that?
docker-compose ps
airflow-init_1 /bin/bash -c function ver( ... Exit 0
airflow-scheduler_1 /usr/bin/dumb-init -- /ent ... Up (unhealthy) 8080/tcp
airflow-triggerer_1 /usr/bin/dumb-init -- /ent ... Up (unhealthy) 8080/tcp
airflow-webserver_1 /usr/bin/dumb-init -- /ent ... Up (healthy) 0.0.0.0:8080->8080/tcp,:::8080->8080/tcp
airflow-worker_1 /usr/bin/dumb-init -- /ent ... Up (unhealthy) 8080/tcp
flower_1 /usr/bin/dumb-init -- /ent ... Up (healthy) 0.0.0.0:5555->5555/tcp,:::5555->5555/tcp, 8080/tcp
postgres_1 docker-entrypoint.sh postgres Up (healthy) 5432/tcp
redis_1 docker-entrypoint.sh redis ... Up (healthy) 6379/tcp
In the docker-compose.yaml file, there's a line
AIRFLOW__CORE__LOAD_EXAMPLES: 'true'
where you should change 'true' to 'false'. After that, the default example DAGs will not be loaded.
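The environment variable set in the Compose file takes precedence over airflow.cfg, which is why editing the config inside the containers had no effect. As a sketch, assuming the stock Airflow docker-compose.yaml where this setting lives in the shared x-airflow-common environment block:

# docker-compose.yaml (relevant fragment only)
x-airflow-common:
  &airflow-common
  environment:
    &airflow-common-env
    AIRFLOW__CORE__LOAD_EXAMPLES: 'false'   # was 'true'

Then recreate the stack so the already-registered examples disappear from the metadata database (note that down --volumes wipes the DB):

$ docker-compose down --volumes
$ docker-compose up airflow-init
$ docker-compose up -d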
$ dokku postgres:expose wiki-fashion-hasura
docker: Error response from daemon: Conflict. The container name "/dokku.postgres.wiki-fashion-hasura.ambassador" is already in use by container "05ac13c5682af1b1334ffda6d9142c2e577c81f0776c9a0449516d5ca6d55c8d". You have to remove (or rename) that container to be able to reuse that name.
See 'docker run --help'.
I checked docker ps and there is no container 05ac13c5682af1b1334ffda6d9142c2e577c81f0776c9a0449516d5ca6d55c8d
Then when trying to expose again:
$ dokku postgres:expose wiki-fashion-hasura
! Service wiki-fashion-hasura already exposed on port(s) 729
$ dokku postgres:info wiki-fashion-hasura
=====> wiki-fashion-hasura postgres service information
Config dir: /var/lib/dokku/services/postgres/wiki-fashion-hasura/data
Data dir: /var/lib/dokku/services/postgres/wiki-fashion-hasura/data
Dsn: postgres://postgres:03baa499ae71ae371a9276536df5fa56@dokku-postgres-wiki-fashion-hasura:5432/wiki_fashion_hasura
Exposed ports: 5432->729
Id: 89aa118cd1a41fc28170f6de3ed236171d3f3e2d8c019c62f74b2381282284f9
Internal ip: 172.17.0.8
Links: wiki-fashion-hasura
Service root: /var/lib/dokku/services/postgres/wiki-fashion-hasura
Status: running
Version: postgres:12
But
telnet <HOST> 729
telnet: connect to address <HOST>: Connection refused
It isn't exposed (other ports on this same IP do respond).
How can I debug this further?
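Not from the original post, but one hedged debugging path: the container ID from the error may still exist in the Exited state, which plain docker ps hides, so check with -a and clear the stale ambassador container before exposing again (assuming your dokku-postgres plugin provides postgres:unexpose):

$ docker ps -a | grep ambassador
$ docker rm -f dokku.postgres.wiki-fashion-hasura.ambassador
$ dokku postgres:unexpose wiki-fashion-hasura
$ dokku postgres:expose wiki-fashion-hasura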
Can someone educate me on how to run Symfony migrations at the shell in a Docker container environment?
I log in to my app container
$ docker exec -it 79dcd1240fdf /bin/bash
and run
bin/console doctrine:schema:validate
but only receive
SQLSTATE[HY000] [2002] No such file or directory
I'm able to log in to my database container and access the database from there, so I'm confused, since the DB clearly exists:
$ docker exec -it bd96edcb1164 /bin/bash
I have no name!@bd96edcb1164:/$ mysql -h localhost -P 4307 -u andy -ppassword symfony_red
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 8
Server version: 10.3.23-MariaDB Source distribution
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
MariaDB [symfony_red]> show databases
-> ;
+--------------------+
| Database |
+--------------------+
| information_schema |
| symfony_red |
| test |
+--------------------+
3 rows in set (0.001 sec)
MariaDB [symfony_red]>
Here's my docker config:
# docker-compose.yml
version: '2'
services:
  myapp:
    image: 'docker.io/bitnami/symfony:1-debian-10'
    ports:
      - '8000:8000'
    volumes:
      - '.:/app'
    depends_on:
      - mariadb
  mariadb:
    image: 'docker.io/bitnami/mariadb:10.3-debian-10'
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
      - MARIADB_USER=andy
      - MARIADB_DATABASE=symfony_red
      - MARIADB_PASSWORD=password
    ports:
      - "4307:3306"
and my .env config:
# .env.local
DATABASE_URL=mysql://andy:password@localhost:4307/symfony_red?serverVersion=5.7
I already tried switching between localhost and 127.0.0.1 – that just toggles another error:
SQLSTATE[HY000] [2002] Connection refused
help?
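A likely fix, offered here only as a sketch: inside the myapp container, localhost/127.0.0.1 refers to the app container itself, so Doctrine has to target the Compose service name mariadb on the container-side port 3306 (the 4307 mapping only applies from the host; the serverVersion value is kept from the original and may need to match your actual MariaDB version):

# .env.local (sketch: host and port adjusted for the Compose network)
DATABASE_URL=mysql://andy:password@mariadb:3306/symfony_red?serverVersion=5.7

With that in place, the schema commands should run from inside the app container, e.g.:

$ docker exec -it 79dcd1240fdf bin/console doctrine:schema:validate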
I have WordPress running as an app container in a Google Cloud Kubernetes cluster.
I've broken my site a bit with bad modifications to the theme's functions.php file. So now I would like to remove my bad code to get the site working again. However, I cannot find where WordPress is located.
Since all I need is to remove a couple of lines of PHP code, I thought it might be easier to do it right from the SSH command line without playing with SFTP and keys (sorry, I'm a newbie in WordPress/sites in general).
This is how it looks in Google Cloud Console:
[screenshot: WordPress install]
[screenshot: Google Cloud Console: my cluster]
I'm connecting to the cluster through SSH by pressing the "Connect" button.
And... tada! I see NO "/var/www/html" in the "var" folder! The ".../www/html" folder does not exist and is not visible even as root.
Can someone help me find the WordPress install, please? :)
Here is the output of the $ kubectl describe pod market-engine-wordpress-0 -n kalm-system command:
Name: market-engine-wordpress-0
Namespace: kalm-system
Priority: 0
Node: gke-cluster-1-default-pool-6c5a3d37-sx7g/10.164.0.2
Start Time: Thu, 25 Jun 2020 17:35:54 +0300
Labels: app.kubernetes.io/component=wordpress-webserver
app.kubernetes.io/name=market-engine
controller-revision-hash=market-engine-wordpress-b47df865b
statefulset.kubernetes.io/pod-name=market-engine-wordpress-0
Annotations: <none>
Status: Running
IP: 10.36.0.17
IPs:
IP: 10.36.0.17
Controlled By: StatefulSet/market-engine-wordpress
Containers:
wordpress:
Container ID: docker://32ee6d8662ff29ce32a5c56384ba9548bdb54ebd7556de98cd9c401a742344d6
Image: gcr.io/cloud-marketplace/google/wordpress:5.3.2-20200515-193202
Image ID: docker-pullable://gcr.io/cloud-marketplace/google/wordpress@sha256:cb4515c3f331e0c6bcca5ec7b12d2f3f039fc5cdae32f0869abf19238d580575
Port: 80/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Mon, 29 Jun 2020 15:37:38 +0300
Finished: Mon, 29 Jun 2020 15:40:08 +0300
Ready: False
Restart Count: 774
Environment:
POD_NAME: market-engine-wordpress-0 (v1:metadata.name)
POD_NAMESPACE: kalm-system (v1:metadata.namespace)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-4f6xq (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
market-engine-wordpress-pvc:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: market-engine-wordpress-pvc-market-engine-wordpress-0
ReadOnly: false
apache-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: market-engine-wordpress-config
Optional: false
config-map:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: market-engine-wordpress-config
Optional: false
default-token-4f6xq:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-4f6xq
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Unhealthy 8m33s (x9023 over 2d15h) kubelet, gke-cluster-1-default-pool-6c5a3d37-sx7g Readiness probe failed: HTTP probe failed with statuscode: 500
Warning BackOff 3m30s (x9287 over 2d15h) kubelet, gke-cluster-1-default-pool-6c5a3d37-sx7g Back-off restarting failed container
As you described, your application is crashing because of a change you made in the code. This makes your website fail, and your pod is configured to check whether the website is running fine; if not, the container is restarted. The settings that make this happen are the LivenessProbe and the ReadinessProbe.
The problem is that this restart loop prevents you from fixing the code.
The good news is that your data is saved under /var/www/html and this directory is on external storage (a PersistentVolumeClaim).
So, the easiest solution is to create a new pod and attach this storage to it. The catch is that this storage cannot be mounted by more than one pod at the same time.
Creating this new pod requires you to temporarily remove your wordpress pod. I know it may be scary, but we will recreate it afterwards.
I reproduced your scenario and tested these steps, so let's start. (All steps are mandatory.)
Before we start, let's save your market-engine-wordpress manifest:
$ kubectl get statefulsets market-engine-wordpress -o yaml > market-engine-wordpress.yaml
Delete your wordpress statefulset:
$ kubectl delete statefulsets market-engine-wordpress
This command deletes the controller that creates your wordpress pod; the PVC holding your data is not removed.
Now, let's create a new pod using the following manifest:
apiVersion: v1
kind: Pod
metadata:
  name: fenix
  namespace: kalm-system
spec:
  volumes:
  - name: market-engine-wordpress-pvc
    persistentVolumeClaim:
      claimName: market-engine-wordpress-pvc-market-engine-wordpress-0
  containers:
  - name: ubuntu
    image: ubuntu
    command: ['sh', '-c', "sleep 36000"]
    volumeMounts:
    - mountPath: /var/www/html
      name: market-engine-wordpress-pvc
      subPath: wp
To create this pod, save this content in a file as fenix.yaml and run the following command:
$ kubectl apply -f fenix.yaml
Check if the pod is ready:
$ kubectl get pods fenix
NAME READY STATUS RESTARTS AGE
fenix 1/1 Running 0 5m
From this point, you can connect to this pod and fix your functions.php file:
$ kubectl exec -ti fenix -- bash
root@fenix:/# cd /var/www/html/wp-includes/
root@fenix:/var/www/html/wp-includes#
When you are done fixing your code, we can delete this pod and re-create your wordpress pod.
$ kubectl delete pod fenix
pod "fenix" deleted
$ kubectl apply -f market-engine-wordpress.yaml
statefulset.apps/market-engine-wordpress created
Check if the pod is ready:
$ kubectl get pod market-engine-wordpress-0
NAME READY STATUS RESTARTS AGE
market-engine-wordpress-0 2/2 Running 0 97s
If you need to exec into the wordpress container: your application uses a multi-container pod, and connecting to the right container requires you to indicate which container you want to connect to.
To check how many containers there are and what they are named, you can run kubectl get pod mypod -o yaml or kubectl describe pod mypod.
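For example (a sketch only, assuming the same pod and namespace as above), the container names can also be listed directly with jsonpath:

$ kubectl get pod market-engine-wordpress-0 -n kalm-system -o jsonpath='{.spec.containers[*].name}'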
To finally exec into it, use the following command:
$ kubectl exec -ti market-engine-wordpress-0 -c wordpress -- bash
root@market-engine-wordpress-0:/var/www/html#
I have a Dockerized Zabbix server (3.4) connecting to a CentOS 7 host w/ Mariadb.
This one works fine:
# zabbix_get -s <ipOfRemoteHost> -p 10050 -k mysql.version
mysql Ver 15.1 Distrib 5.5.56-MariaDB, for Linux (x86_64) using readline 5.1
This one does not:
# zabbix_get -s <ipOfRemoteHost> -p 10050 -k mysql.ping
mysqladmin: connect to server at 'localhost' failed
error: 'Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (13)'
Check that mysqld is running and that the socket: '/var/lib/mysql/mysql.sock' exists!
From /etc/zabbix/zabbix_agentd.d/userparameter_mysql.conf:
UserParameter=mysql.version,mysql -V
UserParameter=mysql.ping,HOME=/etc/zabbix mysqladmin ping | grep -c alive
It does read the .my.cnf in the HOME dir; when I change host=localhost to host=127.0.0.1 I get:
# zabbix_get -s <ipOfRemoteHost> -p 10050 -k mysql.ping
mysqladmin: connect to server at '127.0.0.1' failed
error: 'Can't connect to MySQL server on '127.0.0.1' (13)'
Check that mysqld is running on 127.0.0.1 and that the port is 3306.
You can check this by doing 'telnet 127.0.0.1 3306'
Also tried adding the username and password directly in the command, but same result:
UserParameter=mysql.ping,mysqladmin -uroot --password="mypassword" ping | grep -c alive
Running that command on the host works fine:
mysqladmin -uroot --password="mypassword" ping | grep -c alive
1
The agent itself seems to run fine:
$ sudo -u zabbix zabbix_agentd -t mysql.ping
mysql.ping [t|1]
Socket is available:
# ls -l /var/lib/mysql/mysql.sock
srwxrwxrwx. 1 mysql mysql 0 Nov 5 18:01 /var/lib/mysql/mysql.sock
Process details:
# ps -ef | grep mysqld
mysql 3218 1 0 18:01 ? 00:00:00 /bin/sh /usr/bin/mysqld_safe --basedir=/usr
mysql 3488 3218 99 18:01 ? 06:08:26 /usr/libexec/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib64/mysql/plugin --log-error=/var/log/mariadb/mariadb.log --pid-file=/var/run/mariadb/mariadb.pid --socket=/var/lib/mysql/mysql.sock
systemd+ 6547 6530 0 Oct31 ? 00:06:15 mysqld --character-set-server=utf8 --collation-server=utf8_bin
Any suggestions?
Note: I don't think it is relevant, but just in case: on the host I also have a Dockerized MySQL running on port 3307.
localhost/127.0.0.1 in the container is not the same as localhost/127.0.0.1 on the host, because of network namespace isolation. Use the proper host IP in .my.cnf, or run the container in the host's network namespace (docker run --net host ...).
If you want to use a socket for the MySQL connection from the container, then you will need to use Docker volumes. You may have problems with socket permissions, socket ownership, etc., so I recommend the IP-based approach.
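As a sketch of those two options (the image name is a placeholder, not the actual setup from the question):

# Option 1: share the host's network namespace, so 127.0.0.1 inside the container is the host's loopback
docker run --net host <your-zabbix-image>

# Option 2: bind-mount the MariaDB socket into the container (watch permissions/ownership on the socket)
docker run -v /var/lib/mysql/mysql.sock:/var/lib/mysql/mysql.sock <your-zabbix-image>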
The cause is SELinux. SELinux is preventing the Zabbix agent from accessing the MySQL socket file, and possibly other resources.
Run tail -f /var/log/audit/audit.log while you try zabbix_get and you'll see the denials in real time.
Then you'll need to devise an SELinux policy that enables the access as needed.
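A hedged sketch of that workflow on CentOS 7 (the module name is a placeholder; audit2allow comes from the policycoreutils-python package):

# collect the recent AVC denials and build a local policy module from them
ausearch -m avc -ts recent | audit2allow -M zabbix_mysql_local
semodule -i zabbix_mysql_local.pp

# setenforce 0 can be used briefly to confirm SELinux really is the cause, then setenforce 1 to re-enable it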
I have two services in my docker-compose.yml: docker-gen and nginx. docker-gen is linked to nginx. In order for docker-gen to work, I must pass the actual name or hash of the nginx container so that docker-gen can restart nginx on change.
When I link docker-gen to nginx, a set of environment variables appears in the docker-gen container; the most interesting to me is NGINX_NAME – the name of the nginx container.
So it should be straightforward to put $NGINX_NAME in the command field of the service and get it to work. But $NGINX_NAME doesn't expand when I start the services. Looking through the docker-gen logs I see the lines:
2015/04/24 12:54:27 Sending container '$NGINX_NAME' signal '1'
2015/04/24 12:54:27 Error sending signal to container: No such container: $NGINX_NAME
My docker-compose.yml is as follows:
nginx:
  image: nginx:latest
  ports:
    - '80:80'
  volumes:
    - /tmp/nginx:/etc/nginx/conf.d
dockergen:
  image: jwilder/docker-gen:latest
  links:
    - nginx
  volumes_from:
    - nginx
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock
    - ./extra:/etc/docker-gen/templates
    - /etc/nginx/certs
  tty: true
  command: >
    -watch
    -only-exposed
    -notify-sighup "$NGINX_NAME"
    /etc/docker-gen/templates/nginx.tmpl
    /etc/nginx/conf.d/default.conf
Is there a way to put environment variable placeholder in command so it could expand to actual value when the container is up?
I've added an entrypoint setting to the dockergen service and changed the command a bit:
dockergen:
  image: jwilder/docker-gen:latest
  links:
    - nginx
  volumes_from:
    - nginx
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock
    - ./extra:/etc/docker-gen/templates
    - /etc/nginx/certs
  tty: true
  entrypoint: ["/bin/sh", "-c"]
  command: >
    "
    docker-gen
    -watch
    -only-exposed
    -notify-sighup $(echo $NGINX_NAME | tail -c +2)
    /etc/docker-gen/templates/nginx.tmpl
    /etc/nginx/conf.d/default.conf
    "
Container names injected by Docker linking start with '/', but when I send SIGHUP to containers with leading slash, the signal doesn't arrive:
$ docker kill -s SIGHUP /myproject_dockergen_1/nginx
If I strip it, though, nginx restarts as it should. So the $(echo $NGINX_NAME | tail -c +2) part is there to remove the first character from $NGINX_NAME.
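A small aside, not from the original answer: in POSIX sh the same leading slash can be stripped with parameter expansion instead of spawning echo and tail (depending on your Compose version, the dollar sign may need to be written as $$ so Compose itself does not try to interpolate the variable):

# equivalent to $(echo $NGINX_NAME | tail -c +2) when the value starts with '/'
docker-gen -watch -only-exposed -notify-sighup "${NGINX_NAME#/}" /etc/docker-gen/templates/nginx.tmpl /etc/nginx/conf.d/default.conf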