I am running Test Kitchen with Salt using the salt_solo provisioner, and I cannot pass variables into the formula if I declare them in the platform.
For example, if this were my .kitchen.yml:
---
driver:
  name: vagrant
platforms:
  - name: ubuntu-14.04
    grains:
      org:
        bat: batz
suites:
  - name: binary
    provisioner:
      name: salt_solo
      state_top:
        base:
          '*':
            - binary
      formula: binary
      grains:
        org:
          foo: bar
Then my formula is not able to access {{grains['org']['bat']}}, but it is able to access {{grains['org']['foo']}}.
The solution is to nest the platform-level variables under a provisioner: key. Applying this fix to the example .kitchen.yml above solves the issue:
---
driver:
  name: vagrant
platforms:
  - name: ubuntu-14.04
    provisioner:
      grains:
        org:
          bat: batz
suites:
  - name: binary
    provisioner:
      name: salt_solo
      state_top:
        base:
          '*':
            - binary
      formula: binary
      grains:
        org:
          foo: bar
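With the provisioner: key in place, both the platform-level and suite-level grains are merged and visible to the formula. A minimal state just to confirm it (an illustrative sketch only; the state ID and file name are made up):

# binary/show_grains.sls -- illustrative sketch, not part of the original formula
show-grains:
  cmd.run:
    - name: echo "platform grain={{ grains['org']['bat'] }} suite grain={{ grains['org']['foo'] }}"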
I want to use Airflow in Kubernetes on my local machine.
From the Airflow Helm chart docs, I should use a PVC to use my local DAG files, so I set up my PV and PVC like so:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: dags-pv
spec:
  volumeMode: Filesystem
  storageClassName: local-path
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: /mnt/c/Users/me/dags
    type: Directory
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dags-pvc
spec:
  storageClassName: local-path
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 3Gi
Then I create an override-values.yaml file:
config:
  core:
    dags_folder: "/usr/somepath/airflow/dags"
dags:
  persistence:
    enabled: true
    existingClaim: "dags-pvc"
  gitSync:
    enabled: false
Note that I want to change the default DAG folder path, and that's where I am having difficulties (it works if I keep the default path). I don't know how to create a mount point and attach the PVC to it.
I tried adding the following to my override file:
worker:
  extraVolumeMounts:
    - name: w-dags
      mountPath: "/usr/somepath/airflow/dags"
  extraVolumes:
    - name: w-dags
      persistentVolumeClaim:
        claimName: "dags-pvc"
scheduler:
  extraVolumeMounts:
    - name: s-dags
      mountPath: "/usr/somepath/airflow/dags"
  extraVolumes:
    - name: s-dags
      persistentVolumeClaim:
        claimName: "dags-pvc"
But that doesn't work; my scheduler is stuck on Init:0/1 with: "Unable to attach or mount volumes: unmounted volumes=[dags], unattached volumes=[logs dags s-dags config kube-api-access-9mc4c]: timed out waiting for the condition". So I can tell I broke a condition - dags should be mounted (i.e. my extraVolumes section is wrong) - but I am not sure where to go from here.
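For what it's worth, the event message suggests the rendered scheduler pod ends up with both the chart's own dags volume (created because dags.persistence.existingClaim is set) and the manually added s-dags volume, so the same claim is referenced twice. Roughly (a reconstruction from the error text and the values above, not the actual rendered manifest):

volumes:
  - name: dags        # added by the chart from dags.persistence.existingClaim
    persistentVolumeClaim:
      claimName: dags-pvc
  - name: s-dags      # added via scheduler.extraVolumes in the override file
    persistentVolumeClaim:
      claimName: dags-pvc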
I want to create a VM with 3 SR-IOV ports (Heat template pasted below). I would like each port to appear with a specific name in the VM; is that possible? Is there a guarantee that the ports will appear in the VM in the order specified in the Heat template?
For example:
resources:
  vm1_server_0:
    type: OS::Nova::Server
    properties:
      name: {get_param: [vm1_names, 0]}
      image: {get_param: vm1_image_name}
      flavor: {get_param: vm1_flavor_name}
      availability_zone: {get_param: availability_zone_0}
      networks:
        - port: {get_resource: vm1_0_direct_port_0}
        - port: {get_resource: vm1_0_direct_port_1}
        - port: {get_resource: vm1_0_direct_port_2}
Can I rename vm1_0_direct_port_0 to "eth0", vm1_0_direct_port_1 to "10Geth0", and vm1_0_direct_port_2 to "10Geth1" in the Heat template itself?
If that is not possible, I need to be sure of the order in which they appear in lspci | grep "Virtual Function" (if those are SR-IOV ports), i.e. vm1_0_direct_port_0 appearing as 0000:04:01.00, the next vm1_0_direct_port_1 as 0000:04:01.01, and vm1_0_direct_port_2 as 0000:04:01.02, so that I can rename them using udev rules in the VM.
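For reference, the ports referenced with get_resource above would typically be defined as OS::Neutron::Port resources with binding:vnic_type set to direct for SR-IOV. A minimal sketch of one of them, assuming a provider network parameter named sriov_net that is not part of the original template:

  vm1_0_direct_port_0:
    type: OS::Neutron::Port
    properties:
      network: {get_param: sriov_net}    # assumed parameter name
      binding:vnic_type: direct          # marks the port as an SR-IOV (VF) port

As far as I know, a name property on a Neutron port is only Neutron-side metadata; it does not by itself control the interface name seen inside the guest.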
I used the following code to create a cluster
from dask_kubernetes import KubeCluster
cluster = KubeCluster.from_yaml('worker.yaml')
cluster.adapt(minimum=1, maximum=10)
with the following YAML (worker.yaml):
kind: Pod
metadata:
  labels:
    foo: bar
spec:
  restartPolicy: Never
  containers:
    - image: daskdev/dask:latest
      imagePullPolicy: IfNotPresent
      args: [dask-worker, --nthreads, '4', --no-bokeh, --memory-limit, 3GB, --death-timeout, '300']
      name: dask
      resources:
        limits:
          cpu: "4"
          memory: 3G
        requests:
          cpu: "2"
          memory: 2G
This worked as expected. Now I added a volume mount, as shown below:
kind: Pod
metadata:
  labels:
    foo: bar
spec:
  restartPolicy: Never
  containers:
    - image: daskdev/dask:latest
      imagePullPolicy: IfNotPresent
      args: [dask-worker, --nthreads, '4', --no-bokeh, --memory-limit, 3GB, --death-timeout, '300']
      name: dask
      resources:
        limits:
          cpu: "4"
          memory: 3G
        requests:
          cpu: "2"
          memory: 2G
      volumeMounts:
        - name: somedata
          mountPath: /opt/some/data
  volumes:
    - name: somedata
      azureFile:
        secretName: azure-secret
        shareName: somedata
        readOnly: true
I don't see the volume getting mounted. But when I simply run
kubectl create -f worker.yaml
I can see the volume getting mounted.
Does KubeCluster support volume mounts? And if so, how do you configure them?
I am unable to reproduce your issue when testing with a HostPath volume.
from dask_kubernetes import KubeCluster
cluster = KubeCluster.from_yaml('worker.yaml')
cluster.adapt(minimum=1, maximum=10)
# worker.yaml
kind: Pod
metadata:
  labels:
    foo: bar
spec:
  restartPolicy: Never
  containers:
    - image: daskdev/dask:latest
      imagePullPolicy: IfNotPresent
      args: [dask-worker, --nthreads, '4', --no-bokeh, --memory-limit, 3GB, --death-timeout, '300']
      name: dask
      resources:
        limits:
          cpu: "4"
          memory: 3G
        requests:
          cpu: "2"
          memory: 2G
      volumeMounts:
        - name: somedata
          mountPath: /opt/some/data
  volumes:
    - name: somedata
      hostPath:
        path: /tmp/data
        type: Directory
If I run kubectl describe po <podname> for the worker that is created, I can see the volume was created successfully.
Volumes:
  somedata:
    Type:          HostPath (bare host directory volume)
    Path:          /tmp/data
    HostPathType:  Directory
And it is mounted where I would expect.
Mounts:
  /opt/some/data from somedata (rw)
Also, if I open a shell in the container with kubectl exec -it <podname> bash and run ls /opt/some/data, I can see files that I create in the host path.
Therefore volumes do work with KubeCluster, so if you are experiencing issues with azureFile storage, there is most likely some configuration issue with your Kubernetes cluster.
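One thing worth double-checking in the azureFile case (an assumption about your setup, since the secret itself is not shown here): the azure-secret referenced by secretName has to exist in the namespace the worker pods are created in, and it must carry the two keys the azureFile volume plugin expects, roughly:

apiVersion: v1
kind: Secret
metadata:
  name: azure-secret
type: Opaque
data:
  azurestorageaccountname: <base64-encoded storage account name>
  azurestorageaccountkey: <base64-encoded storage account key>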
I'm using Amazon ECS to launch Docker containers that I have. Everything works fine locally, but when I run the containers on ECS I get the following error:
"NOTICE: PHP message: Unable to open PDO connection [wrapped: SQLSTATE[HY000] [2002] php_network_getaddresses: getaddrinfo failed: Name or service not known]"
I'm linking the containers in the docker-compose file, and I'm able to ping the mysql container from the nginx container, so I know they're linked.
docker-compose
version: '2'
services:
  nginx:
    image: "xxxx.dkr.ecr.us-east-2.amazonaws.com/nginx:latest"
    ports:
      - "8086:80"
    links:
      - fpm
      - mysql
  fpm:
    image: "xxxx.dkr.ecr.us-east-2.amazonaws.com/php-fpm:latest"
    links:
      - redis
  mysql:
    image: "xxxx.dkr.ecr.us-east-2.amazonaws.com/mysql:latest"
    environment:
      MYSQL_DATABASE: strix
      MYSQL_USER: strix
      MYSQL_PASSWORD: rRCd29b3fG76ypM3
      MYSQL_ROOT_PASSWORD: root
  redis:
    image: redis:latest
My symfony database.yml has the following:
dev:
  propel:
    param:
      classname: DebugPDO
      debug: { realmemoryusage: true, details: { time: { enabled: true }, slow: { enabled: true, threshold: 0.1 }, mem: { enabled: true }, mempeak: { enabled: true }, memdelta: { enabled: true } } }

task:
  propel:
    param:
      profiler: false

test:
  propel:
    param:
      classname: DebugPDO

all:
  propel:
    class: sfPropelDatabase
    param:
      classname: PropelPDO
      dsn: 'mysql:host=mysql;dbname=strix'
      username: strix
      password: xxxx
      encoding: utf8
      persistent: true
      pooling: true
I'm not sure if there's some network config that I have wrong on ECS or if I'm pointing to the wrong hostname. Any help would be appreciated. I am not familiar with symfony and am trying to raise an old application from the dead.
It turns out that when I was running Docker on my local machine, all the containers could talk to each other even though I hadn't explicitly linked them. In this case the fpm container needed to connect to the mysql container (and was doing so locally), but I didn't know this. When it was up on ECS, because the containers were not explicitly linked, it threw the connection error.
I fixed it by simply adding mysql to the fpm links:
fpm:
  image: "xxxx.dkr.ecr.us-east-2.amazonaws.com/php-fpm:latest"
  links:
    - redis
    - mysql
I am trying to install a Salt minion from the master using salt-ssh.
This is my .sls file:
salt-minion:
  pkgrepo:
    - managed
    - ppa: saltstack/salt
    - require_in:
      - pkg: salt-minion
  pkg.installed:
    - version: 2015.5.3+ds-1trusty1
  service:
    - running
    - watch:
      - file: /etc/salt/minion
      - pkg: salt-minion

/etc/salt/minion:
  file.managed:
    - source: salt://minion/minion.conf
    - user: root
    - group: root
    - mode: 644
And this is my roster file:
minion3:
  host: 192.168.33.103
  user: vagrant
  passwd: vagrant
  sudo: True
My problem is that when I run
sudo salt-ssh -i '*' state.sls
I get this error
ID: salt-minion
Function: service.running
Result: False
Comment: One or more requisite failed: install_minion./etc/salt/minion
Started:
Duration:
Changes:
Strangely, it works fine when I run it a second time.
Any pointers to what I am doing wrong would be very helpful.
When installing Salt on a machine via SSH, you might want to look at Salt's saltify module.
It will connect to a machine using SSH, run a bootstrap method, and register the new minion with the master. By default it runs the standard Salt bootstrap script, but you can provide your own.
I have a similar setup running in my Salt/Consul example here. It was originally targeted at DigitalOcean, but it also works with Vagrant (see cheatsheet.adoc for more information). A vagrant up followed by a salt-cloud -m mapfile-vagrant.yml will provision all minions using SSH.
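For illustration, a saltify setup is just a cloud provider plus a profile on the master. A minimal sketch reusing the host and credentials from the roster in the question (file names and the profile/provider names are arbitrary; older releases use provider: instead of driver: in the provider file):

# /etc/salt/cloud.providers.d/saltify.conf
my-saltify:
  driver: saltify

# /etc/salt/cloud.profiles.d/saltify.conf
vagrant-minion:
  provider: my-saltify
  ssh_host: 192.168.33.103
  ssh_username: vagrant
  password: vagrant

Then salt-cloud -p vagrant-minion minion3 bootstraps the host over SSH and registers it with the master, much like the map-file variant mentioned above.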
Solved it. The state file should look like this:
salt-minion:
  pkgrepo:
    - managed
    - ppa: saltstack/salt
    - require_in:
      - pkg: salt-minion
  pkg.installed:
    - version: 2015.5.3+ds-1trusty1

/etc/salt/minion:
  file.managed:
    - template: jinja
    - source: salt://minion/files/minion.conf.j2
    - user: root
    - group: root
    - mode: 644

salt-minion_watch:
  service:
    - name: salt-minion
    - running
    - enable: True
    - restart: True
    - watch:
      - file: /etc/salt/minion
      - pkg: salt-minion
This is working for me, though I am not clear on the reason.
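One possible explanation, purely an assumption on my part: on the first run /etc/salt may not exist until the salt-minion package is installed, so the file.managed state fails and takes the service's watch requisite down with it; on the second run the package is already there, so everything passes. Ordering the config file after the package (or letting it create the directory) should make the first run behave like the second. A sketch of that change against the working state above:

/etc/salt/minion:
  file.managed:
    - template: jinja
    - source: salt://minion/files/minion.conf.j2
    - user: root
    - group: root
    - mode: 644
    - makedirs: True        # assumption: ensure /etc/salt exists on the first run
    - require:
      - pkg: salt-minion    # install the package before managing its config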