SaltStack lazy variable evaluation when rendering template

I need to get the default gateway IP address of the docker0 network interface in my SaltStack state file. The .sls might look like this:
include:
  - docker

postgresql:
  docker.running:
    - container: postgresql
    - port_bindings:
        "5432/tcp":
          HostIp: {{ grains['ip_interfaces']['docker0'][0] }}
          HostPort: "5432"
This works when Docker was provisioned before I run state.highstate. However, at the time SaltStack renders this template, Docker has not actually been provisioned yet, so the docker0 network interface is not available. As a result, a KeyError is raised.
I know that in most cases the docker0 default gateway will be 172.17.42.1, and I could set it to this value directly. However, what if I run into another situation like this? Is there any way to render the template lazily? I imagine something like
lazy_render: True
so that the template is rendered only just before it is executed. Is anything like this available in SaltStack? Or do you have another solution for this issue?

I think you should either wait for the docker state to execute or use some other conditional, e.g.:
include:
  - docker

postgresql:
  docker.running:
    - container: postgresql
    - port_bindings:
        "5432/tcp":
          HostIp: {{ grains['ip_interfaces']['docker0'][0] }}
          HostPort: "5432"
    - require:
      - sls: docker
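As a side note, the render-time KeyError can also be avoided by falling back to a default inside the Jinja expression itself. A minimal sketch, assuming the usual 172.17.42.1 gateway mentioned in the question is an acceptable fallback:

          {# fall back to the usual docker0 gateway when the interface does not exist yet #}
          HostIp: {{ grains.get('ip_interfaces', {}).get('docker0', ['172.17.42.1'])[0] }}

This still renders at highstate compile time, but it no longer blows up when docker0 is missing.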

Related

Add new port in running docker compose

I am trying to add an SSL certificate to a WordPress container, but the default Compose configuration only maps port 80.
How can I add a new port to the running container? I tried modifying the docker-compose.yml file and restarting the container, but that doesn't solve the problem.
Thank you.
You should re-create the container when it needs to listen on a new port, like this:
docker-compose up -d --force-recreate {CONTAINER}
Expose ports.
Either specify both ports (HOST:CONTAINER), or just the container port (an ephemeral host port is chosen).
Note: When mapping ports in the HOST:CONTAINER format, you may experience erroneous results when using a container port lower than 60, because YAML parses numbers in the format xx:yy as a base-60 value. For this reason, we recommend always explicitly specifying your port mappings as strings.
ports:
  - "3000"
  - "3000-3005"
  - "8000:8000"
  - "9090-9091:8080-8081"
  - "49100:22"
  - "127.0.0.1:8001:8001"
  - "127.0.0.1:5000-5010:5000-5010"
  - "6060:6060/udp"
https://docs.docker.com/compose/compose-file/#ports
After you add the new port to the docker-compose file, here is what I did that works:
Stop the container:
docker-compose stop <service name>
Run the docker-compose up command (NOTE: docker-compose start did not work):
docker-compose up -d
According to the documentation, the docker-compose up command:
Builds, (re)creates, starts, and attaches to containers for a service
... Unless they are already running
That started up the stopped service, WITH the exposed ports I had configured.
Have you tried it like in this example?
https://docs.docker.com/compose/compose-file/#ports
It should work like this:
my-services:
  ports:
    - "80:80"
    - "443:443"
You just add the new port to the ports section of the docker-compose.yml and then you must run
docker-compose up -d
because that rereads the .yml file and recreates the container. If you just restart, Compose will not read the new config from the .yml; it will simply restart the same container.
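Applied to the WordPress case from the question, a minimal sketch (the service name and image are assumptions; keep whatever your compose file already declares):

wordpress:
  image: wordpress
  ports:
    - "80:80"
    # the new HTTPS port for the SSL certificate
    - "443:443"

followed by docker-compose up -d so the container is recreated with both mappings.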

Docker - port prevents listening

I am trying to set up Xdebug integration in my Docker-based setup.
I am using Docker for Mac 1.12.0-rc2-beta17 with the "native" Docker machine.
I have a container with Xdebug installed, exposing port 9000 and mapping it to host port 9000:
$ docker ps
CONTAINER ID   IMAGE   COMMAND                  CREATED         STATUS         PORTS                                                                                    NAMES
6950c2a2b05d   app     "/usr/bin/supervisord"   9 minutes ago   Up 9 minutes   0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 0.0.0.0:9000->9000/tcp, 0.0.0.0:2222->22/tcp   app_1
When I try to use PhpStorm to listen on port 9000 for debug connections, I get the error "Cannot listen: port 9000 is busy".
I should mention that I'm a newbie at networking.
It depends on how you want to connect via Xdebug.
xdebug.remote_connect_back=1 means that PHP will wait for an HTTP request with the GET parameter XDEBUG_SESSION_START=<IDE_key>. PHP on the server will then try to connect back on port 9000, where your PhpStorm is listening. A classic "don't call us, we'll call you" situation.
Now, to put your Docker situation simply: your container has claimed port 9000, so PhpStorm isn't able to use port 9000 because it's already taken by your Docker container.
So skip the mapping of port 9000 to Docker; that will fix this situation.
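For reference, a minimal sketch of the matching Xdebug 2 settings in php.ini for this connect-back flow (only the settings discussed above; anything else in your config stays as-is):

; enable remote debugging and connect back to the client that sent the request
xdebug.remote_enable=1
xdebug.remote_connect_back=1
; port on which the IDE (PhpStorm) listens -- the 9000 discussed above
xdebug.remote_port=9000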
You must expose port 9000 with the --expose option. This is the reference.
If you are using Docker Compose, a sample docker-compose.yml file is here:
version: '2'
services:
  your_app:
    ports:
      - "80:80"
    expose:
      - "9000"
    image: "your-image:tag"
First, check your container logs to debug:
docker logs 6950c2a2b05d
or
docker logs app_1
Add the -f flag for tail-like behavior:
docker logs -f app_1
Two things I discovered:
There is no need to expose port 9000 on a container with Xdebug (that seems rather counter-intuitive to me, as I do not exactly understand how my IDE connects to Xdebug then).
I was able to use Xdebug with the workaround described in https://forums.docker.com/t/ip-address-for-xdebug/10460/4.

Use host networking and additional networks in docker compose

I'm trying to set up a dev environment for my project.
I have a container (ms1) which should be put in its own network ("services" in my case), and a container (apigateway) which should access that network while exposing an HTTP port to the host's network.
Ideally my docker-compose file would look like this:
version: '2'
services:
  ms1:
    expose:
      - "13010"
    networks:
      services:
        aliases:
          - ms1
  apigateway:
    networks:
      services:
        aliases:
          - api
    network_mode: "host"
networks:
  services:
However, docker-compose doesn't allow using network_mode and networks at the same time.
Do I have any other alternatives?
At the moment I'm using this:
apigateway:
  networks:
    services:
      aliases:
        - api
  ports:
    - "127.0.0.1:10000:13010"
and the apigateway container listens on 0.0.0.0:13010. It works, but it is slow, and it freezes if the host's internet connection goes down.
Also, I'm planning to run this with Vagrant on top of Docker in the future; does that allow solving this in a clean way?
expose in docker-compose does not publish the port on the host. Since you probably don't need service linking anymore (you should rely on Docker networks instead, as you already do), the option has limited value in general, and it seems to provide no value at all in your scenario.
I suspect you came to use it by mistake and, after realizing that it didn't seem to have any effect by itself, stumbled upon the fact that using the host network driver would "make it work". This had nothing to do with the expose property, mind you. It's just that the host network driver lets contained processes bind to the host network interface directly. Thanks to this, you could reach the API gateway process from the outside. You could remove the expose property and it would still work.
If this is the only reason why you picked the host network driver, then you've fallen victim to the X-Y problem:
(tl;dr)
You should never need to use the host network driver in normal situations; the default bridge network driver works just fine. What you're looking for is the ports property, not expose. This sets up the appropriate port forwarding behind the scenes.
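A minimal sketch of that suggestion applied to the gateway from the question (reusing the 10000 -> 13010 mapping from the question's own workaround):

apigateway:
  networks:
    services:
      aliases:
        - api
  ports:
    # publish host port 10000 and forward it to container port 13010 over the default bridge
    - "10000:13010"

No host network driver and no expose; the ports entry alone makes the gateway reachable from outside.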
In Docker 1.13 you should be able to create a service to bridge the two networks. I'm using something similar to fix another problem, and I think it could also help here:
docker service create \
  --name proxy \
  --network proxy \
  --publish mode=host,target=80,published=80 \
  --publish mode=host,target=443,published=443 \
  --constraint 'node.hostname == myproxynode' \
  --replicas 1 \
  letsnginx
I would try this:
1/ Find the host network:
docker network ls
2/ Use this docker-compose file:
services:
  ms1:
    ports:
      - "13010"
    networks:
      - service
  apigateway:
    networks:
      - front
      - service
networks:
  front:
  service:
    external:
      name: "<ID of the network>"

Docker container cannot resolve request to service in another container

I'm running gitlab-ce and gitlab-ci-multi-runner in separate Docker containers, but on the same server.
GitLab CE works fine; I can access it via browser and clone projects using both HTTP and SSH.
However, my runner cannot connect to GitLab using the domain/server IP. It can connect to it only via the local Docker network (for example, using the IP address 172.17.0.X or, if linked, by using the service alias).
Pinging the domain/server IP returns a response.
I tried to link it as gitlab:example.domain.com, but it didn't work, as somehow the runner resolved the server IP address instead of the local network address:
Checking for builds... failed: couldn't execute POST against http://example.domain.com/ci/api/v1/builds/register.json: Post http://example.domain.com/ci/api/v1/builds/register.json: dial tcp server.ip:80: i/o timeout
Edit:
docker-compose.yml
gitlab:
  image: gitlab/gitlab-ce:8.2.2-ce.0
  hostname: domain.name
  privileged: true
  volumes:
    - ./gitlab-config:/etc/gitlab
    - ./gitlab-data:/var/opt/gitlab
    - ./gitlab-logs:/var/log/gitlab
  restart: always
  ports:
    - server.ip:22:22
    - server.ip:80:80
    - server.ip:443:443
runner:
  image: gitlab/gitlab-runner:alpine
  restart: always
  volumes:
    - ./runner-config:/etc/gitlab-runner
    - /var/run/docker.sock:/var/run/docker.sock
I have no clue what the issue is here.
I'd appreciate your help.
Thanks in advance! :)
Seems like it was a firewall problem. Unblocking the docker0 interface allowed traffic from the containers. :)
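For anyone hitting the same thing, a minimal sketch of the kind of rule that unblocks docker0 traffic, assuming a plain iptables firewall (adapt to whatever firewall frontend the host actually uses):

# accept traffic arriving on the docker0 bridge interface
iptables -I INPUT -i docker0 -j ACCEPT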

Can nginx.conf access environment variables?

I'm trying to run a Docker container with nginx on a Kubernetes cluster. I'm using environment-variable service discovery for all my other containers, so I would like to keep things consistent and not bring something like SkyDNS into the mix just for this. Is it possible to access environment variables in nginx, so that I can tell it to proxy_pass to a Kubernetes service?
How about the shell script below, which is run by a Docker container?
https://github.com/GoogleCloudPlatform/kubernetes/blob/295bd3768d016a545d4a60cbb81a4983c2a26968/cluster/addons/fluentd-elasticsearch/kibana-image/run_kibana_nginx.sh
You mean use the value of an env var set this way in a config file for nginx? One thing I have done in the past is to have a run.sh config script, run by the Docker container, which uses the env variable to perform substitution in a template file for the nginx config -- is that what you mean?
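A minimal sketch of that run.sh approach, using envsubst from gettext (the template path and the UPSTREAM_HOST variable are assumptions):

#!/bin/sh
# substitute only the listed variables into the nginx config template,
# then start nginx in the foreground so the container keeps running
envsubst '${UPSTREAM_HOST}' < /etc/nginx/nginx.conf.template > /etc/nginx/nginx.conf
exec nginx -g 'daemon off;'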
There were tons of issues with doing the hacky HEREDOC, including it giving only one-time service discovery (not much better than hard-coding). So my solution ended up being to use confd to template nginx and restart nginx when the environment variables change. Here's the link to confd: https://github.com/kelseyhightower/confd
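A minimal sketch of what such a confd template resource could look like, assuming confd's env backend and a hypothetical UPSTREAM_HOST variable (the env backend exposes it as the key /upstream/host):

# /etc/confd/conf.d/nginx.toml
[template]
src = "nginx.conf.tmpl"
dest = "/etc/nginx/nginx.conf"
keys = ["/upstream/host"]
reload_cmd = "nginx -s reload"

The nginx.conf.tmpl template would then contain a line like proxy_pass http://{{getv "/upstream/host"}};, and running confd -backend env (with -onetime for a single render) writes the final config.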
Keeping an included config file in a ConfigMap mounted as a volume should work too.
You might need to change the config file's structure for that, though.
In the spec you can define an environment variable, e.g.:
spec:
  containers:
    - name: kibana-logging
      image: gcr.io/google_containers/kibana:1.3
      livenessProbe:
        httpGet:
          path: /
          port: 5601
        initialDelaySeconds: 30
        timeoutSeconds: 5
      env:
        - name: "ELASTICSEARCH_URL"
          value: "http://elasticsearch-logging:9200"
      ports:
        - containerPort: 5601
          name: kibana-port
          protocol: TCP
This will cause the environment variable ELASTICSEARCH_URL to be set to http://elasticsearch-logging:9200. Will this work for you?
Cheers,
Satnam
