I'm trying to run a Docker container with nginx on a Kubernetes cluster. I'm using environment-variable service discovery for all my other containers, so I would like to keep it consistent and not have to bring something like SkyDNS into the mix just because of this. Is it possible to access environment variables in nginx so that I can tell it to proxy_pass to a Kubernetes service?
How about the shell script linked below, which is run by a Docker container?
https://github.com/GoogleCloudPlatform/kubernetes/blob/295bd3768d016a545d4a60cbb81a4983c2a26968/cluster/addons/fluentd-elasticsearch/kibana-image/run_kibana_nginx.sh
You mean use the value of an env var set in this way in a config file for nginx? One thing I have done in the past is to have a run.sh script that is run by the Docker container, which uses the env variable to effect substitution in a template file for an nginx config -- is that what you mean?
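A minimal sketch of that pattern, assuming the image ships a template at /etc/nginx/nginx.conf.tmpl with a __BACKEND_URL__ placeholder (both names are illustrative):

#!/bin/sh
# run.sh -- substitute the env var into the nginx config template, then start nginx
sed "s|__BACKEND_URL__|${BACKEND_URL}|g" \
    /etc/nginx/nginx.conf.tmpl > /etc/nginx/nginx.conf
exec nginx -g 'daemon off;'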
There were tons of issues with doing the hacky HEREDOC approach, including the fact that it only gives one-time service discovery (not much better than hard-coding). So my solution ended up being to use confd to template nginx and restart nginx when the environment variables change. Here's the link to confd: https://github.com/kelseyhightower/confd
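A rough sketch of what that can look like, assuming nginx should proxy to a backend whose address arrives in the APP_HOST and APP_PORT environment variables (the variable names, paths, and ports are illustrative):

# /etc/confd/conf.d/app.toml -- template resource definition
[template]
src = "app.conf.tmpl"
dest = "/etc/nginx/conf.d/app.conf"
keys = ["/app/host", "/app/port"]
reload_cmd = "nginx -s reload"

# /etc/confd/templates/app.conf.tmpl -- with the env backend,
# APP_HOST and APP_PORT are exposed as /app/host and /app/port
server {
    listen 80;
    location / {
        proxy_pass http://{{getv "/app/host"}}:{{getv "/app/port"}};
    }
}

Then run confd -backend env -interval 10 (or -onetime for a single pass) alongside nginx to regenerate the config and reload when the values change.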
Keeping an included config file in a ConfigMap mounted as a volume should work too. You might need to change the structure of the config files for that, though.
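A minimal sketch of that approach, assuming a backend Service named my-backend-service on port 8080 (both names are illustrative):

# ConfigMap holding an nginx server block
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-proxy-conf
data:
  default.conf: |
    server {
      listen 80;
      location / {
        proxy_pass http://my-backend-service:8080;
      }
    }
---
# nginx pod that mounts the ConfigMap over /etc/nginx/conf.d
apiVersion: v1
kind: Pod
metadata:
  name: nginx-proxy
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: nginx-conf
          mountPath: /etc/nginx/conf.d
  volumes:
    - name: nginx-conf
      configMap:
        name: nginx-proxy-conf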
In the spec you can define an environment variable e.g.
spec:
  containers:
    - name: kibana-logging
      image: gcr.io/google_containers/kibana:1.3
      livenessProbe:
        httpGet:
          path: /
          port: 5601
        initialDelaySeconds: 30
        timeoutSeconds: 5
      env:
        - name: "ELASTICSEARCH_URL"
          value: "http://elasticsearch-logging:9200"
      ports:
        - containerPort: 5601
          name: kibana-port
          protocol: TCP
This will cause the environment variable ELASTICSEARCH_URL to be set to http://elasticsearch-logging:9200. Will this work for you?
Cheers,
Satnam
I am having an issue with date and time in a Kubernetes cluster. The cluster is set up in the data center using kubeadm. My host server's time is synced using NTP, and I re-synced it after configuring the cluster. Still, all the pods created within my cluster have the wrong time. The cause seems to be Docker defaulting to the UTC timezone. As a temporary solution, I volume-mount /etc/localtime from the host machine into the pods we create, but that is not feasible for the applications I install using Helm from a Helm repo. Is there any way to fix this issue? I don't want every pod to need a volume mount just to get the correct time. Is there any way for Docker to pick up the timezone from the host machine?
FYI, the k8s cluster is set up on CentOS 7; the nodes are VMs running on ESXi. Thank you.
It's not broken; it's working as designed.
The clock in a container is the same as on the host machine, because it is controlled by that machine's kernel.
The timezone, however, is set at the OS layer, so it may be different inside the container.
The way around it is to mount a specific timezone file into the container with a hostPath volume:
apiVersion: v1
kind: Pod
metadata:
  name: busybox-sleep
spec:
  containers:
    - name: busybox
      image: busybox
      args:
        - sleep
        - "1000000"
      volumeMounts:
        - name: tz-config
          mountPath: /etc/localtime
  volumes:
    - name: tz-config
      hostPath:
        path: /usr/share/zoneinfo/Europe/Prague
        type: File
Because you are using Helm, you should check the documentation for the image you are using and look for a timezone variable that you could change, so you can put it in your values.yaml or use the --set option when deploying.
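For example, if the chart's image honours a TZ environment variable (a common convention, but not universal; the values key and chart name below are placeholders, so check the chart's own values.yaml):

# values.yaml -- hypothetical key; the exact name depends on the chart
env:
  TZ: "Europe/Prague"

# or equivalently at install time:
helm install stable/some-chart --set env.TZ=Europe/Prague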
I recommend reading Kubernetes Container Timezone Management.
I'm a beginner with Helm and Kubernetes in general. Recently I have started trialling deployments to an AKS cluster which will include multiple ClusterIP services hidden behind a load-balancing NGINX node. For today I'm using Helm 2.2 and have successfully installed the NGINX node. My understanding is that for each of my individual service charts in Helm I now use annotations to enable nginx routing. As I see it, I should be able to modify the values.yaml file at the top of the chart (and nowhere else) to perform these actions.
service:
  type: ClusterIP
  port: 80

ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
  hosts:
    - chart-example.local
When I execute the above (the rest of the file is excluded for brevity) I receive the error "converting YAML to JSON: yaml: line 38: did not find expected key".
Line 38 corresponds to the end of the ingress section (the blank line straight afterwards). I'm assuming that my YAML is badly formed, but I cannot for the life of me find any examples of this file being used in this fashion. Am I doing it right? If so, what mistake have I made in the YAML? If not, what should I be doing to route to one of my many services via the ingress file? Are there any actual examples of the values.yaml file being set up in this fashion? Every time I search, I find the Ingress.yaml file being modified as a Kubernetes object rather than as a templated Helm chart.
It turns out that with the values.yaml I didn't give people a fair chance: the offending YAML line came after the code fragment I provided, and it was subtle. The code necessary to ensure the correct ingress definition is this:
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
  hosts:
    - host: chart-example.local
      paths:
        - /test
  tls: {}
My mistake was in the tls line that came afterwards. I had neglected to realise that the indenting of the tls section determined whether it was included in the ingress section, and it was missing one space:
_tls: {}
instead of
__tls: {}
My example now renders the template correctly (the underscores are included only to show the number of spaces and should of course be removed).
I am trying to add an SSL certificate to a WordPress container, but the default Compose configuration only publishes port 80.
How can I add a new port to the running container? I tried modifying the docker-compose.yml file and restarting the container, but that doesn't solve the problem.
Thank you.
You should re-create the container whenever you need it to listen on a new port, like this:
docker-compose up -d --force-recreate {CONTAINER}
Expose ports.
Either specify both ports (HOST:CONTAINER), or just the container port (an ephemeral host port is chosen).
Note: When mapping ports in the HOST:CONTAINER format, you may experience erroneous results when using a container port lower than 60, because YAML parses numbers in the format xx:yy as a base-60 value. For this reason, we recommend always explicitly specifying your port mappings as strings.
ports:
- "3000"
- "3000-3005"
- "8000:8000"
- "9090-9091:8080-8081"
- "49100:22"
- "127.0.0.1:8001:8001"
- "127.0.0.1:5000-5010:5000-5010"
- "6060:6060/udp"
https://docs.docker.com/compose/compose-file/#ports
After you add the new port to the docker-compose file, here is what worked for me:
Stop the container
docker-compose stop <service name>
Run the docker-compose up command (NOTE: docker-compose start did not work)
docker-compose up -d
According to the documentation, the docker-compose up command:
Builds, (re)creates, starts, and attaches to containers for a service
... Unless they are already running
That started up the stopped service, WITH the exposed ports I had configured.
Have you tried doing it like in this example?
https://docs.docker.com/compose/compose-file/#ports
Should work like this:
my-services:
  ports:
    - "80:80"
    - "443:443"
You just add the new port in the ports section of the docker-compose.yml and then you must run
docker-compose up -d
because that reads the .yml file again and recreates the container. If you just restart, it will not read the new config from the .yml; it will simply restart the same container.
Docker version (latest for Mac)
Version 17.03.1-ce-mac5 (16048)
I'm trying to externalise the paths so each developer can change a single file to map components to the right path in their local environment, for example where nginx serves a static website.
#localhost.env
INDEX_PATH=/Users/felipe/website/public
This is my compose.yml
nginx:
  image: nginx
  ports:
    - "8081:8081"
  volumes:
    - ${INDEX_PATH}:/etc/nginx/html:ro
  env_file:
    - ./localhost.env
In short, I define the INDEX_PATH variable to point to my local path, and I want nginx to serve the website from there. Another developer would then set
#localhost.env
INDEX_PATH=/Users/somebodyElse/whatever/public
The problem
For some reason that I don't understand, the variable does not get resolved properly, at least when it is used as the volume's path.
Testing
docker-compose config
nginx:
  environment:
    INDEX_PATH: /Users/felipe/website/public
  image: nginx
  ports:
    - 8081:8081
  volumes:
    - .:/etc/nginx/html:ro   # here I was expecting the path
As you can see, it just gets resolved to . (a dot) instead of the path /Users/felipe/website/public.
Any idea what I'm doing wrong? I believe this feature is supported but can't work out how to do it.
Thank you!
The env_file definition passes environment variables from the file into the container, but those variables are not picked up when docker-compose parses the yml file itself. What you can use instead is a .env file, which is loaded before the docker-compose.yml file is parsed; you can even use this to override the docker-compose.yml filename itself.
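A minimal sketch, assuming the file is named exactly .env and sits next to docker-compose.yml:

# .env -- read by docker-compose itself before the yml file is parsed
INDEX_PATH=/Users/felipe/website/public

With that in place, ${INDEX_PATH} in the volumes entry is resolved at parse time, and docker-compose config should print the full path instead of a dot.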
I have a webserver that requires a websocket connection in production. I deploy it using docker-compose with nginx as a proxy.
So my compose file looks like this:
version: '2'
services:
  app:
    restart: always
  nginx:
    restart: always
    ports:
      - "80:80"
Now if I scale the "app" service to multiple instances, docker-compose will perform round-robin on each call to the internal DNS name "app".
Is there a way to tell the docker-compose load balancer to apply sticky sessions?
Another solution - is there a way to solve it using nginx?
A possible solution that I don't like: multiple definitions of app.
version: '2'
services:
  app1:
    restart: always
  app2:
    restart: always
  nginx:
    restart: always
    ports:
      - "80:80"
(And then in the nginx config file I can define sticky sessions between app1 and app2.)
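For reference, a minimal sketch of the nginx upstream such a workaround would use, assuming app1 and app2 listen on port 8000 (the port is an assumption):

# nginx config fragment -- ip_hash pins each client IP to one backend
upstream app_servers {
    ip_hash;              # sticky sessions keyed on the client IP
    server app1:8000;     # compose service names resolve via the internal DNS
    server app2:8000;
}
server {
    listen 80;
    location / {
        proxy_pass http://app_servers;
    }
}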
Best result I got from searching:
https://github.com/docker/dockercloud-haproxy
But this requires me to add another service (maybe replacing nginx?), and its docs are pretty poor about sticky sessions.
I wish Docker would just allow configuring this with a simple line in the compose file.
Thanks!
Take a look at jwilder/nginx-proxy. This image provides an nginx reverse proxy that listens for containers that define the VIRTUAL_HOST variable and automatically updates its configuration on container creation and removal. tpcwang's fork allows you to use the IP_HASH directive on a container level to enable sticky sessions.
Consider the following Compose file:
nginx:
  image: tpcwang/nginx-proxy
  ports:
    - "80:80"
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock:ro
app:
  image: tutum/hello-world
  environment:
    - VIRTUAL_HOST=<your_ip_or_domain_name>
    - USE_IP_HASH=1
Let's get it up and running and then scale app to three instances:
docker-compose up -d
docker-compose scale app=3
If you check the nginx configuration file you'll see something like this:
docker-compose exec nginx cat /etc/nginx/conf.d/default.conf
...
upstream 172.16.102.132 {
    ip_hash;
    # desktop_app_3
    server 172.17.0.7:80;
    # desktop_app_2
    server 172.17.0.6:80;
    # desktop_app_1
    server 172.17.0.4:80;
}
server {
    server_name 172.16.102.132;
    listen 80;
    access_log /var/log/nginx/access.log vhost;
    location / {
        proxy_pass http://172.16.102.132;
    }
}
The nginx container has automatically detected the three instances and has updated its configuration to route requests to all of them using sticky sessions.
If we try to access the app, we can see that it always reports the same hostname on each refresh. If we remove the USE_IP_HASH environment variable, we'll see that the hostname actually changes; that is, the nginx proxy is using round robin to balance our requests.
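A quick way to check this from the host, assuming the proxy is published on port 80 as above (tutum/hello-world includes the container hostname in its response):

# With USE_IP_HASH=1 the reported hostname should stay constant;
# remove the variable and it should rotate across the three containers.
for i in 1 2 3 4 5; do
    curl -s http://localhost/ | grep -i hostname
done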