I have a "Deployment" in Kubernetes which works fine in GKE, but fails in MiniKube.
I have a Pod with 2 containers:-
(1) Nginx as reverse proxy ( reads secrets and configMap volumes at /etc/tls & /etc/nginx respectively )
(2) A JVM based service listening on localhost
The problem in the minikube deployment is that the Nginx container fails to read the TLS certs which appear not to be there - i.e. the volume mount of the secrets to the Pod appears to have failed.
nginx: [emerg] BIO_new_file("/etc/tls/server.crt") failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/etc/tls/server.crt','r') error:2006D080:BIO routines:BIO_new_file:no such file)
But if I run minikube logs I see a large number of seemingly "successful" TLS volume mounts...
MountVolume.SetUp succeeded for volume "kubernetes.io/secret/61701667-eca7-11e6-ae16-080027187aca-scriptwriter-tls" (spec.Name: "scriptwriter-tls")
And the secrets themselves are in the cluster okay ...
$ kubectl get secrets scriptwriter-tls
NAME               TYPE      DATA      AGE
scriptwriter-tls   Opaque    3         1h
So it would appear that, as far as Minikube is concerned, all is well from a secrets point of view. But on the other hand, the nginx container can't see them.
I can't log on to the container either, since it keeps terminating.
For completeness the relevant sections from the Deployment yaml ...
Firstly the nginx config...
- name: nginx
  image: nginx:1.7.9
  imagePullPolicy: Always
  ports:
  - containerPort: 443
  lifecycle:
    preStop:
      exec:
        command: ["/usr/sbin/nginx", "-s", "quit"]
  volumeMounts:
  - name: "nginx-scriptwriter-dev-proxf-conf"
    mountPath: "/etc/nginx/conf.d"
  - name: "scriptwriter-tls"
    mountPath: "/etc/tls"
And secondly the volumes themselves, at the pod level ...
volumes:
- name: "scriptwriter-tls"
  secret:
    secretName: "scriptwriter-tls"
- name: "nginx-scriptwriter-dev-proxf-conf"
  configMap:
    name: "nginx-scriptwriter-dev-proxf-conf"
    items:
    - key: "nginx-scriptwriter.conf"
      path: "nginx-scriptwriter.conf"
Any pointers would be greatly appreciated.
I am a first-class numpty! :-) Sometimes the error is just the error! The problem was that the secrets were created from local $HOME/.ssh/* certs ... and if you generate them from different computers with different certs, then guess what?! All fixed now :-)
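For anyone hitting something similar: a quick way to confirm that two clusters really hold the same secret material is to compare checksums of the decoded data. A minimal sketch (the key name server.crt comes from the error above; the context names gke and minikube are illustrative):

# Compare the server.crt stored in each cluster context.
kubectl --context=gke get secret scriptwriter-tls \
  -o jsonpath='{.data.server\.crt}' | base64 -d | sha256sum
kubectl --context=minikube get secret scriptwriter-tls \
  -o jsonpath='{.data.server\.crt}' | base64 -d | sha256sum

If the two checksums differ, the clusters were fed different certs, which is exactly the situation described above.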
I am trying to set up a mailcow installation behind a Traefik proxy. Apparently Traefik is not able to recognize the nginx-mailcow container in its network, and hence does not create a certificate for the HTTPS connection. So when I bring up the mailcow services using docker-compose up, I can access them, but only over an insecure connection (http), and the browser warns that the connection is not secure.
When I check the acme.json file from Traefik, I cannot find any certificate for the mailcow domain, i.e. mail.tld.com.
I have the following setup:
Logs of affected containers:
Traefik Container Logs:
time="2020-04-18T13:40:35+02:00" level=error msg="accept tcp [::]:80: use of closed network connection" entryPointName=http
time="2020-04-18T13:40:35+02:00" level=error msg="accept tcp [::]:443: use of closed network connection" entryPointName=https
time="2020-04-18T13:40:35+02:00" level=error msg="close tcp [::]:80: use of closed network connection" entryPointName=http
time="2020-04-18T13:40:35+02:00" level=error msg="close tcp [::]:443: use of closed network connection" entryPointName=https
time="2020-04-18T13:40:35+02:00" level=error msg="Cannot connect to docker server context canceled" providerName=docker
time="2020-04-18T13:40:37+02:00" level=info msg="Configuration loaded from file: /traefik.yml"
time="2020-04-19T00:27:31+02:00" level=error msg="service \"nginx-mailcow\" error: unable to find the IP address for the container \"/mailcowdockerized_nginx-mailcow_1\": the server is ignored" container=nginx-mailcow-mailcowdockerized-5f3a25b43c42fd85df675d2d9682b6053501844c2cfe15b7802cf918df138025 providerName=docker
time="2020-04-19T00:33:32+02:00" level=error msg="service \"nginx-mailcow\" error: unable to find the IP address for the container \"/mailcowdockerized_nginx-mailcow_1\": the server is ignored" providerName=docker container=nginx-mailcow-mailcowdockerized-f4d41ee79e382b413e04b039b5fc91e1c6217c78740245c8666373fe2d6a9b23
2020/04/19 00:39:44 reverseproxy.go:445: httputil: ReverseProxy read error during body copy: unexpected EOF
time="2020-04-19T00:50:32+02:00" level=error msg="service \"nginx-mailcow\" error: unable to find the IP address for the container \"/mailcowdockerized_nginx-mailcow_1\": the server is ignored" providerName=docker container=nginx-mailcow-mailcowdockerized-915f80e492c2c22917d0af81add1dde15577173c82cc928b0b6101c8a260adc5
time="2020-04-19T00:58:43+02:00" level=error msg="service \"nginx-mailcow\" error: unable to find the IP address for the container \"/mailcowdockerized_nginx-mailcow_1\": the server is ignored" container=nginx-mailcow-mailcowdockerized-852985c4efc48559ca3568b1829e31b46eb9f968fc328a8566e3dc6ab6f1af21 providerName=docker
time="2020-04-19T02:02:39+02:00" level=error msg="Error while Peeking first byte: read tcp 172.21.0.2:80->208.91.109.90:55153: read: connection reset by peer"
time="2020-04-19T08:11:32+02:00" level=error msg="service \"nginx-mailcow\" error: unable to find the IP address for the container \"/mailcowdockerized_nginx-mailcow_1\": the server is ignored" providerName=docker container=nginx-mailcow-mailcowdockerized-840ef4db0ccc9fa84038dc7a52133779926dba4c51554516c17404ede80a2c01
The contents of Traefik docker-compose.yml:
version: '3'
services:
traefik:
image: traefik:v2.1
container_name: traefik
restart: unless-stopped
security_opt:
- no-new-privileges:true
networks:
- proxy
ports:
- 80:80
- 443:443
volumes:
- /etc/localtime:/etc/localtime:ro
- /var/run/docker.sock:/var/run/docker.sock:ro
- ./data/traefik.yml:/traefik.yml:ro
- ./data/acme.json:/acme.json
labels:
- "traefik.enable=true"
- "traefik.http.routers.traefik.entrypoints=http"
- "traefik.http.routers.traefik.rule=Host(`traefik.tld.com`)"
- "traefik.http.middlewares.traefik-auth.basicauth.users=admin:pass"
- "traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https"
- "traefik.http.routers.traefik.middlewares=traefik-https-redirect"
- "traefik.http.routers.traefik-secure.entrypoints=https"
- "traefik.http.routers.traefik-secure.rule=Host(`traefik.tld.com`)"
- "traefik.http.routers.traefik-secure.middlewares=traefik-auth"
- "traefik.http.routers.traefik-secure.tls=true"
- "traefik.http.routers.traefik-secure.tls.certresolver=http"
- "traefik.http.routers.traefik-secure.service=api#internal"
networks:
proxy:
external: true
Contents of traefik.yml (I used .yml instead of .toml)
api:
  dashboard: true

entryPoints:
  http:
    address: ":80"
  https:
    address: ":443"

providers:
  docker:
    endpoint: "unix:///var/run/docker.sock"
    exposedByDefault: false

certificatesResolvers:
  http:
    acme:
      email: myemail@tld.com
      storage: acme.json
      httpChallenge:
        entryPoint: http
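One Traefik pitfall worth ruling out while you're here: the acme.json storage file must exist and have 0600 permissions, otherwise Traefik refuses to write certificates into it. A quick sketch:

# Create the ACME storage file with the permissions Traefik requires.
touch ./data/acme.json
chmod 600 ./data/acme.json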
Just to point out: with this Traefik setup, certificates are generated automatically for other services such as GitLab. For those I just labelled the service correctly and attached the Traefik network to it, and Traefik recognized the service and generated the certificate in acme.json, but sadly not for nginx-mailcow.
The contents of my docker-compose.override.yml for mailcow:
version: '2.1'

services:
  nginx-mailcow:
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.nginx-mailcow.entrypoints=http"
      - "traefik.http.routers.nginx-mailcow.rule=HostRegexp(`{host:(autodiscover|autoconfig|webmail|mail|email).+}`)"
      - "traefik.http.middlewares.nginx-mailcow-https-redirect.redirectscheme.scheme=https"
      - "traefik.http.routers.nginx-mailcow.middlewares=nginx-mailcow-https-redirect"
      - "traefik.http.routers.nginx-mailcow-secure.entrypoints=https"
      - "traefik.http.routers.nginx-mailcow-secure.rule=Host(`mail.tld.com`)"
      - "traefik.http.routers.nginx-mailcow-secure.tls=true"
      - "traefik.http.routers.nginx-mailcow-secure.service=nginx-mailcow"
      - "traefik.http.services.nginx-mailcow.loadbalancer.server.port=80"
      - "traefik.docker.network=proxy"
    networks:
      proxy:

  certdumper:
    image: humenius/traefik-certs-dumper
    container_name: traefik_certdumper
    network_mode: none
    command: --restart-containers mailcowdockerized_postfix-mailcow_1,mailcowdockerized_dovecot-mailcow_1
    volumes:
      - /opt/containers/traefik/data:/traefik:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./data/assets/ssl:/output:rw
    environment:
      - DOMAIN=tld.com

networks:
  proxy:
    external: true
The contents of my nginx-mailcow service in docker-compose.yml
version: '2.1'
services:
  ...
  nginx-mailcow:
    depends_on:
      - sogo-mailcow
      - php-fpm-mailcow
      - redis-mailcow
    image: nginx:mainline-alpine
    dns:
      - ${IPV4_NETWORK:-172.22.1}.254
    command: /bin/sh -c "envsubst < /etc/nginx/conf.d/templates/listen_plain.template > /etc/nginx/conf.d/listen_plain.active &&
      envsubst < /etc/nginx/conf.d/templates/listen_ssl.template > /etc/nginx/conf.d/listen_ssl.active &&
      envsubst < /etc/nginx/conf.d/templates/server_name.template > /etc/nginx/conf.d/server_name.active &&
      envsubst < /etc/nginx/conf.d/templates/sogo.template > /etc/nginx/conf.d/sogo.active &&
      envsubst < /etc/nginx/conf.d/templates/sogo_eas.template > /etc/nginx/conf.d/sogo_eas.active &&
      . /etc/nginx/conf.d/templates/sogo.auth_request.template.sh > /etc/nginx/conf.d/sogo_proxy_auth.active &&
      . /etc/nginx/conf.d/templates/sites.template.sh > /etc/nginx/conf.d/sites.active &&
      nginx -qt &&
      until ping phpfpm -c1 > /dev/null; do sleep 1; done &&
      until ping sogo -c1 > /dev/null; do sleep 1; done &&
      until ping redis -c1 > /dev/null; do sleep 1; done &&
      until ping rspamd -c1 > /dev/null; do sleep 1; done &&
      exec nginx -g 'daemon off;'"
    environment:
      - HTTPS_PORT=${HTTPS_PORT:-443}
      - HTTP_PORT=${HTTP_PORT:-80}
      - MAILCOW_HOSTNAME=${MAILCOW_HOSTNAME}
      - IPV4_NETWORK=${IPV4_NETWORK:-172.22.1}
      - TZ=${TZ}
      - ALLOW_ADMIN_EMAIL_LOGIN=${ALLOW_ADMIN_EMAIL_LOGIN:-n}
    volumes:
      - ./data/web:/web:ro
      - ./data/conf/rspamd/dynmaps:/dynmaps:ro
      - ./data/assets/ssl/:/etc/ssl/mail/:ro
      - ./data/conf/nginx/:/etc/nginx/conf.d/:rw
      - ./data/conf/rspamd/meta_exporter:/meta_exporter:ro
      - sogo-web-vol-1:/usr/lib/GNUstep/SOGo/
    ports:
      - "${HTTPS_BIND:-0.0.0.0}:${HTTPS_PORT:-443}:${HTTPS_PORT:-443}"
      - "${HTTP_BIND:-0.0.0.0}:${HTTP_PORT:-80}:${HTTP_PORT:-80}"
    restart: always
    networks:
      mailcow-network:
        aliases:
          - nginx
  ...
I have also tried commenting out the ports in the nginx-mailcow service, but the problem persists. My current mailcow.conf changes:
HTTP_BIND=127.0.0.1
HTTP_PORT=8080
HTTPS_BIND=127.0.0.1
HTTPS_PORT=8443
SKIP_LETS_ENCRYPT=y
SKIP_CLAMD=y
Reproduction of said bug:
I set up the Traefik proxy first (see contents above) and confirmed it was up and running (I also tested it with other services, and it generates certificates fine for those). Then I cloned the mailcow repository and ran ./generate_config.sh to generate the mailcow.conf file, providing my domain name, i.e. mail.tld.com, as input.
Then I commented out the ports in docker-compose.yml, because I do not want nginx-mailcow to use ports 80 and 443; these are already used by Traefik.
Then I created a docker-compose.override.yml (see contents above) to add additional config to the nginx-mailcow service (Traefik labels, Traefik network). The override file also contains the certdumper service, which copies the HTTPS certificate from acme.json for the mailcow services.
Then I changed the following two variables in mailcow.conf:
SKIP_LETS_ENCRYPT=y
SKIP_CLAMD=y
Finally, I brought mailcow up with docker-compose up -d. In the browser, https://mail.tld.com warns that the connection is insecure, and acme.json contains no certificate for mail.tld.com.
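For anyone debugging the same thing: with a Traefik v2 file store you can list which domains actually received certificates. A small sketch, assuming the usual v2 acme.json layout (top-level keys are resolver names):

# Print the main domain of every stored certificate.
jq '.[] | .Certificates[]? | .domain.main' data/acme.json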
System information:
+-------------------------------------------------+---------------------------------+
| Question                                        | Answer                          |
+-------------------------------------------------+---------------------------------+
| My operating system                             | linux x86_64 Ubuntu 18.04.1 LTS |
| Is Apparmor, SELinux or similar active?         | No                              |
| Virtualization technology                       | KVM                             |
| Server/VM specifications (Memory, CPU Cores)    | 16GB, 6 cores                   |
| Docker Version (docker version)                 | 19.03.8                         |
| Docker-Compose Version (docker-compose version) | 1.25.4, build 8d51620a          |
| Reverse proxy (custom solution)                 | Traefik                         |
+-------------------------------------------------+---------------------------------+
If you need more information, I would be happy to provide it. Any help will be much appreciated. Thank you.
Finally I was able to solve the problem, after investing many hours in reading the Traefik documentation. I had made a tiny mistake in the proxy labels assigned to the nginx-mailcow service. The solution is below.
I had forgotten to specify the certificate resolver, and I had to expose the port, which I have now added as follows:
services:
  nginx-mailcow:
    expose:
      - "8080"
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.nginx-mailcow.entrypoints=http"
      - "traefik.http.routers.nginx-mailcow.rule=HostRegexp(`{host:(autodiscover|autoconfig|webmail|mail|email).+}`)"
      - "traefik.http.middlewares.nginx-mailcow-https-redirect.redirectscheme.scheme=https"
      - "traefik.http.routers.nginx-mailcow.middlewares=nginx-mailcow-https-redirect"
      - "traefik.http.routers.nginx-mailcow-secure.entrypoints=https"
      - "traefik.http.routers.nginx-mailcow-secure.rule=Host(`mail.example.com`)"
      - "traefik.http.routers.nginx-mailcow-secure.tls=true"
      - "traefik.http.routers.nginx-mailcow-secure.tls.certresolver=http"
      - "traefik.http.routers.nginx-mailcow-secure.service=nginx-mailcow"
      - "traefik.http.services.nginx-mailcow.loadbalancer.server.port=8080"
      - "traefik.docker.network=proxy"
    networks:
      proxy:

  certdumper:
    image: humenius/traefik-certs-dumper
    container_name: traefik_certdumper
    network_mode: none
    command: --restart-containers mailcowdockerized_postfix-mailcow_1,mailcowdockerized_dovecot-mailcow_1
    volumes:
      - <path_to_acme.json_file_dir>:/traefik:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./data/assets/ssl:/output:rw
    environment:
      - DOMAIN=example.com
For people who are setting up for the first time, some additional changes are needed beforehand.
Firstly, after you run generate_config.sh, you need to make the following changes in mailcow.conf:
HTTP_PORT=8080
HTTP_BIND=127.0.0.1
HTTPS_PORT=8443
HTTPS_BIND=127.0.0.1
SKIP_LETS_ENCRYPT=y
SKIP_CLAMD=y
We make these changes because mailcow's nginx cannot run on the same ports as Traefik.
Since nginx-mailcow will now be running on 8080 and 8443, we need to expose one of these ports so Traefik can talk to the nginx-mailcow service (I exposed port 8080 in the override compose file above).
You also need to adapt the loadbalancer port from 80 to 8080, as configured above.
You also need to tell the router which certificate resolver to use, so add that label as well (done above in the override config).
Finally, make sure your acme.json file is accessible by the certdumper service: replace <path_to_acme.json_file_dir> with the actual path of the directory containing acme.json.
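As a quick sanity check afterwards, something like the following should show the new certificate being obtained and dumped (file names here are assumptions; traefik-certs-dumper typically writes cert.pem and key.pem per domain, so verify against your version):

# Watch Traefik obtain the certificate, then confirm the dumper wrote it out.
docker logs traefik 2>&1 | grep -i acme
ls -l data/assets/ssl/
openssl x509 -in data/assets/ssl/cert.pem -noout -subject -dates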
I hope this helps.
What I want is to have 2 applications running in a pod, each in its own container. Application A is a simple Spring Boot application which makes HTTP requests to the other application, which is deployed on Kubernetes. The purpose of Application B (the proxy) is to intercept those HTTP requests and add an Authorization token to their headers. Application B is mitmdump with a Python script. The issue I am having is that, once deployed on Kubernetes, the proxy does not seem to intercept any traffic at all (I tried to reproduce the issue on my local machine and found no trouble, so I guess the issue lies somewhere in the networking inside the pod). Can someone have a look and guide me on how to solve it?
Here are the deployment and service files.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: proxy-deployment
  namespace: myown
  labels:
    app: application-a
spec:
  replicas: 1
  selector:
    matchLabels:
      app: application-a
  template:
    metadata:
      labels:
        app: application-a
    spec:
      containers:
      - name: application-a
        image: registry.gitlab.com/application-a
        resources:
          requests:
            memory: "230Mi"
            cpu: "100m"
          limits:
            memory: "460Mi"
            cpu: "200m"
        imagePullPolicy: Always
        ports:
        - containerPort: 8090
        env:
        - name: "HTTP_PROXY"
          value: "http://localhost:1030"
      - name:
        image: registry.gitlab.com/application-b-proxy
        resources:
          requests:
            memory: "230Mi"
            cpu: "100m"
          limits:
            memory: "460Mi"
            cpu: "200m"
        imagePullPolicy: Always
        ports:
        - containerPort: 1080
---
kind: Service
apiVersion: v1
metadata:
  name: proxy-svc
  namespace: myown
spec:
  ports:
    - nodePort: 31000
      port: 8090
      protocol: TCP
      targetPort: 8090
  selector:
    app: application-a
  sessionAffinity: None
  type: NodePort
And here's how I build the docker image of mitmproxy/mitmdump:
FROM mitmproxy/mitmproxy:latest
ADD get_token.py .
WORKDIR ~/mit_docker
COPY get_token.py .
EXPOSE 1080:1080
ENTRYPOINT ["mitmdump","--listen-port", "1030", "-s","get_token.py"]
EDIT
I created two dummy docker images in order to recreate this scenario locally.
APPLICATION A - a Spring Boot application with a job that makes an HTTP GET request every minute to a specified but irrelevant address; the address should be accessible. The expected response is 302 FOUND. Every time an HTTP request is made, a message appears in the application's logs.
APPLICATION B - a proxy application which is supposed to proxy the docker container with Application A. Every request is logged.
Make sure your docker proxy config is set to listen on http://localhost:8080 - you can check how to do so here.
Open a terminal and run this command:
docker run -p 8080:8080 -ti registry.gitlab.com/dyrekcja117/proxyexample:application-b-proxy
Open another terminal and run this command:
docker run --network="host" registry.gitlab.com/dyrekcja117/proxyexample:application-a
Go into the shell of the container with Application A in a 3rd terminal:
docker exec -ti <name of docker container> sh
and try to curl whatever address you want.
The issue I am struggling with is that when I curl from inside the container with Application A, the request is intercepted by my proxy and can be seen in its logs. But whenever Application A itself makes the same request, it is not intercepted. The same thing happens on Kubernetes.
Let's first wrap up the facts we discovered over our troubleshooting discussion in the comments:
Your need is that APP-A receives an HTTP request and a token is added in flight by PROXY before the request is sent on to your data storage.
Every container in a Pod shares the network namespace, including the IP address and network ports. Containers inside a Pod can communicate with one another using localhost (source here).
You were able to log in to container application-a and send a curl request to container application-b-proxy on port 1030, proving the above statement.
The problem is that your proxy is not intercepting the request as expected.
You mention that you were able to make it work on localhost, but on localhost the proxy has more power than inside a container.
Since I have access neither to your app-a code nor to the mitmproxy token.py, I will give you a general example of how to redirect traffic from container-a to container-b.
In order to make it work, I'll use NGINX Proxy Pass: it simply proxies the request to container-b.
Reproduction:
I'll use an nginx server as container-a.
I'll build it with this Dockerfile:
FROM nginx:1.17.3
RUN rm /etc/nginx/conf.d/default.conf
COPY frontend.conf /etc/nginx/conf.d
I'll add this configuration file frontend.conf:
server {
    listen 80;

    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
It tells nginx that the traffic should be sent to container-b, which is listening on port 8080 inside the same pod.
I'll build this image as nginxproxy in my local repo:
$ docker build -t nginxproxy .
$ docker images
REPOSITORY   TAG      IMAGE ID       CREATED         SIZE
nginxproxy   latest   7c203a72c650   4 minutes ago   126MB
Now the full.yaml deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: proxy-deployment
  labels:
    app: application-a
spec:
  replicas: 1
  selector:
    matchLabels:
      app: application-a
  template:
    metadata:
      labels:
        app: application-a
    spec:
      containers:
      - name: container-a
        image: nginxproxy:latest
        ports:
        - containerPort: 80
        imagePullPolicy: Never
      - name: container-b
        image: echo8080:latest
        ports:
        - containerPort: 8080
        imagePullPolicy: Never
---
apiVersion: v1
kind: Service
metadata:
  name: proxy-svc
spec:
  ports:
    - nodePort: 31000
      port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: application-a
  sessionAffinity: None
  type: NodePort
NOTE: I set imagePullPolicy to Never because I'm using my local docker image cache.
I'll list the changes I made to help you map this to your current environment:
container-a does the work of your application-a; I'm serving nginx on port 80 where you are using port 8090.
container-b receives the request, like your application-b-proxy. The image I'm using is based on mendhak/http-https-echo; normally it listens on port 80, so I made a custom image that listens on port 8080 instead and named it echo8080.
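For reference, a sketch of how such an image could be built. This assumes a recent mendhak/http-https-echo tag, which reads its listen port from the HTTP_PORT environment variable (older tags may require editing the app config instead):

FROM mendhak/http-https-echo
# Assumption: the base image honors HTTP_PORT (true for recent tags).
ENV HTTP_PORT=8080
EXPOSE 8080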
First I created an nginx pod and exposed it alone, to show you it's running (since there is no content behind it, it returns a 502 Bad Gateway, but you can see the output is from nginx):
$ kubectl apply -f nginx.yaml
pod/nginx created
service/nginx-svc created
$ kubectl get pods
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          64s
$ kubectl get svc
NAME        TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
nginx-svc   NodePort   10.103.178.109   <none>        80:31491/TCP   66s
$ curl http://192.168.39.51:31491
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx/1.17.3</center>
</body>
</html>
Then I deleted the nginx pod and created an echo-app pod, exposing it to show you the response it gives when curled directly from outside:
$ kubectl apply -f echo.yaml
pod/echo created
service/echo-svc created
$ kubectl get pods
NAME   READY   STATUS    RESTARTS   AGE
echo   1/1     Running   0          118s
$ kubectl get svc
NAME       TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
echo-svc   NodePort   10.102.168.235   <none>        8080:32116/TCP   2m
$ curl http://192.168.39.51:32116
{
"path": "/",
"headers": {
"host": "192.168.39.51:32116",
"user-agent": "curl/7.52.1",
},
"method": "GET",
"hostname": "192.168.39.51",
"ip": "::ffff:172.17.0.1",
"protocol": "http",
"os": {
"hostname": "echo"
},
Now I'll apply the full.yaml:
$ kubectl apply -f full.yaml
deployment.apps/proxy-deployment created
service/proxy-svc created
$ kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
proxy-deployment-9fc4ff64b-qbljn   2/2     Running   0          1s
$ k get service
NAME        TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
proxy-svc   NodePort   10.103.238.103   <none>        80:31000/TCP   31s
Now the proof of concept: from outside the cluster, I'll send a curl to my node IP 192.168.39.51 on port 31000, which sends the request to port 80 on the pod (handled by nginx):
$ curl http://192.168.39.51:31000
{
"path": "/",
"headers": {
"host": "127.0.0.1:8080",
"user-agent": "curl/7.52.1",
},
"method": "GET",
"hostname": "127.0.0.1",
"ip": "::ffff:127.0.0.1",
"protocol": "http",
"os": {
"hostname": "proxy-deployment-9fc4ff64b-qbljn"
},
As you can see, the response has all the parameters of the pod, indicating that it was sent from 127.0.0.1 rather than a public IP, showing that nginx is proxying the request to container-b.
Considerations:
This example was created to show you how the communication works inside Kubernetes.
You will have to check how your application-a handles requests and edit it to send its traffic through your proxy.
Here are a few links with tutorials and explanations that could help you port your application to a Kubernetes environment:
Virtual Hosts on nginx
Implementing a Reverse proxy Server in Kubernetes Using the Sidecar Pattern
Validating OAuth 2.0 Access Tokens with NGINX and NGINX Plus
Use nginx to Add Authentication to Any Application
Connecting a Front End to a Back End Using a Service
Transparent Proxy and Filtering on K8s
I hope this example helps.
My goal is:
create a pod with Nextcloud
create a service to access this pod
from another machine with nginx, route a CNAME to the service
I tried to deploy a pod with Nextcloud and a service to access it, but actually I can't access it. I get the error message ERR_SSL_PROTOCOL_ERROR.
I just followed a tutorial at the beginning, but I didn't want to use nginx the way it was explained, because I have it on another machine.
The pods (nextcloud + db) and services look OK, but I get no response when I try to access Nextcloud.
(nc = nextcloud)
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nc
  name: nc
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nc
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nc
    spec:
      containers:
      - env:
        - name: DEBUG
          value: "false"
        - name: NEXTCLOUD_URL
          value: http://test.fr
        - name: NEXTCLOUD_ADMIN_USER
          value: admin
        - name: NEXTCLOUD_ADMIN_PASSWORD
          valueFrom:
            secretKeyRef:
              name: nextcloud
              key: NEXTCLOUD_ADMIN_PASSWORD
        - name: NEXTCLOUD_UPLOAD_MAX_FILESIZE
          value: 4G
        - name: NEXTCLOUD_MAX_FILE_UPLOADS
          value: "20"
        - name: MYSQL_DATABASE
          value: nextcloud
        - name: MYSQL_HOST
          value: mariadb
        - name: MYSQL_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mariadb
              key: MYSQL_ROOT_PASSWORD
        - name: MYSQL_USER
          value: nextcloud
        name: nc
        image: nextcloud
        ports:
        - containerPort: 80
          protocol: TCP
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/www/html
          name: vnextcloud
          subPath: html
        - mountPath: /var/www/html/custom_apps
          name: vnextcloud
          subPath: apps
        - mountPath: /var/www/html/config
          name: vnextcloud
          subPath: config
        - mountPath: /var/www/html/data
          name: vimages
          subPath: imgnc
        - mountPath: /var/www/html/themes
          name: vnextcloud
          subPath: themes
      restartPolicy: Always
      volumes:
      - name: vnextcloud
        persistentVolumeClaim:
          claimName: nfs-pvcnextcloud
      - name: vimages
        persistentVolumeClaim:
          claimName: nfs-pvcimages
For creating the service I use this command line:
kubectl expose deployment nc --type=NodePort --name=svc-nc --port 80
And to access my nextcloud I tried the address #IP_MASTER:32500
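For reference, a declarative equivalent of that expose command would look roughly like this (a sketch; nodePort 32500 matches the port shown below and is otherwise arbitrary):

apiVersion: v1
kind: Service
metadata:
  name: svc-nc
spec:
  type: NodePort
  selector:
    app: nc
  ports:
  - port: 80
    targetPort: 80
    nodePort: 32500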
My questions are:
How do I check whether a pod is working well, to know if the problem comes from the service or the pod?
What should I do to get access to my Nextcloud? I didn't do the tutorial part "Create self-signed certificates" because I don't know how to manage it. Should it be on my other Linux machine or in my Kubernetes cluster?
1. Please consider using the stable nextcloud helm chart.
2. This tutorial is a little outdated and can also be found here. Since the Kubernetes 1.16 release you should change apiVersion in all your deployments to apiVersion: apps/v1; please take a look at Deprecations and Removals. In addition, you would otherwise get the error ValidationError(Deployment.spec): missing required field "selector", so please add selectors in your deployment under Deployment.spec, like:
selector:
  matchLabels:
    app: db
3. Finally, Create self-signed certificates. The repo uses OMGWTFSSL - Self Signed SSL Certificate Generator. Once you provide the necessary information, like the server name, the path to your local hostPath, and names for your SSL certificates, they will be created automatically after one pod run under the specified hostPath:
volumes:
- name: certs
  hostPath:
    path: "/home/<someFolderLocation>/certs-pv"
This information should be re-used in the Nginx reverse Proxy section for nginx.conf.
4. In your nc-svc.yaml you can change the service type to NodePort.
5. How to verify that your service is working properly:
kubectl get pods,svc,ep -o wide
Pods:
pod/nc-6d8694659d-5przx 1/1 Running 0 15m 10.244.0.6
Svc:
service/svc-nc NodePort 10.102.90.88 <none> 80:32500/TCP
Endpoints:
endpoints/svc-nc 10.244.0.6:80
You can test your service from inside the cluster by running a separate pod (e.g. ubuntu):
curl your_svc_name
You can verify that service discovery is working properly:
cat /etc/resolv.conf
nslookup your_svc_name (your_svc_name.default.svc.cluster.local)
From outside the cluster, using the NodePort:
curl NODE_IP:NODE_PORT (if this fails, please verify your firewall rules)
Once you have provided a hostname for your Nextcloud service, you should use:
curl -vH 'Host:specified_hostname' http://external_ip/ (using http or https according to your configuration)
In addition, you can exec directly into your db pod:
kubectl exec -it db_pod -- /bin/bash
and run:
mysqladmin status -uroot -p$MYSQL_ROOT_PASSWORD
mysqlshow -uroot -p$MYSQL_ROOT_PASSWORD --status nextcloud
6. What should I do to have access to my nextcloud? I didn't do the tuto part "Create self-signed certificates" because I don't know how to manage.
As described under point 3.
7. This part is not clear to me: from another machine with nginx route a CNAME to the service.
Please refer to:
An ExternalName Service is a special case of Service that does not have selectors and uses DNS names instead.
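A minimal sketch of such a Service (hypothetical names; it simply maps a cluster-internal DNS name to an external one):

apiVersion: v1
kind: Service
metadata:
  name: nextcloud-external
spec:
  type: ExternalName
  externalName: nextcloud.example.com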
Additional resources:
Expose your Kubernetes service from your own custom domains
What’s the difference between a CNAME and a Web Redirect?
Hope this helps.
I installed Minikube v1.3.1 on my RedHat EC2 instance for some tests.
Since the ports that the nginx-ingress-controller uses by default are already in use, I am trying to change them in the deployment, but without result. Could somebody please advise how to do it?
How do I know that the ports are already in use?
When I list the system pods using the command kubectl -n kube-system get deployment | grep nginx, I get:
nginx-ingress-controller 0/1 1 0 9d
meaning that my container is not up. When I describe it using the command kubectl -n kube-system describe pod nginx-ingress-controller-xxxxx I get:
Type     Reason                  Age                      From               Message
----     ------                  ----                     ----               -------
Warning  FailedCreatePodSandBox  42m (x163507 over 2d1h)  kubelet, minikube  (combined from similar events): Failed create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod "nginx-ingress-controller-xxxx": Error response from daemon: driver failed programming external connectivity on endpoint k8s_POD_nginx-ingress-controller-xxxx_kube-system_...: Error starting userland proxy: listen tcp 0.0.0.0:443: bind: address already in use
Then I check the processes using those ports and kill them. That frees the ports up, and the ingress-controller pod gets deployed correctly.
What did I try to change the nginx-ingress-controller port?
kubectl -n kube-system get deployment | grep nginx
> NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
> nginx-ingress-controller   0/1     1            0           9d
kubectl -n kube-system edit deployment nginx-ingress-controller
The relevant part of my deployment looks like this:
name: nginx-ingress-controller
ports:
- containerPort: 80
  hostPort: 80
  protocol: TCP
- containerPort: 443
  hostPort: 443
  protocol: TCP
- containerPort: 81
  hostPort: 81
  protocol: TCP
- containerPort: 444
  hostPort: 444
  protocol: TCP
- containerPort: 18080
  hostPort: 18080
  protocol: TCP
Then I remove the subsections with ports 443 and 80, but when I roll out the changes, they get added back.
Now my services are no longer reachable through the ingress.
Please note that minikube ships with an addon-manager, whose role is to keep an eye on specific addon template files (default location: /etc/kubernetes/addons/) and do one of two specific actions based on the value of the addonmanager.kubernetes.io/mode label on the managed resource:
addonmanager.kubernetes.io/mode=Reconcile
Will be periodically reconciled. Direct manipulation of these addons through the apiserver is discouraged, because addon-manager will bring them back to the original state.
addonmanager.kubernetes.io/mode=KeepOnly
Will be checked for existence only. Users can edit these addons as they want.
So to keep your customized version of the default Ingress service listening ports, first change the Ingress deployment template configuration to KeepOnly on the minikube VM.
Basically, minikube bootstraps the Nginx Ingress Controller as a separate addon, so by design you have to enable it in order to propagate the Ingress Controller's resources into the minikube cluster.
Once you enable a specific minikube addon, addon-manager creates template files for each component, placing them in the /etc/kubernetes/addons/ folder on the host machine, and then spins up each manifest file, creating the corresponding K8s resources; furthermore, addon-manager continuously inspects the actual state of all addon resources, synchronizing the target K8s resources (service, deployment, etc.) with the template data.
Therefore, you can modify the Ingress addon template data throughout the ingress-*.yaml files under the /etc/kubernetes/addons/ directory, propagating the desired values into the target K8s objects; it may take some time until the K8s engine reflects the changes and re-spawns the related ReplicaSet-based resources.
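Roughly, the workflow looks like this (a sketch; the exact template file name can differ between minikube versions, so list the directory first):

# Log in to the minikube VM and edit the Ingress addon template in place.
minikube ssh
ls /etc/kubernetes/addons/                      # find the ingress-*.yaml templates
sudo vi /etc/kubernetes/addons/ingress-dp.yaml  # name may vary by version
# Change the label addonmanager.kubernetes.io/mode from Reconcile to KeepOnly,
# adjust the hostPort values, save, and wait for addon-manager to re-sync.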
Well, I think you have to modify the Ingress which refers to the service you're trying to expose on a custom port.
This can be done with a custom annotation. Here is an example for your port 444:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myservice
  namespace: mynamespace
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.org/listen-ports-ssl: "444"
spec:
  tls:
  - hosts:
    - host.org
    secretName: my-host-tls-cert
  rules:
  - host: host.org
    http:
      paths:
      - path: /
        backend:
          serviceName: my-service
          servicePort: 444
Going over my recent experiments, I used my notes to recreate a relatively simple Kubernetes setup with a back-end and a front-end service. In my scenario both of these services need to be exposed, and for now I'm doing that using NodePort.
This all worked quite nicely a week or so ago, but I think I managed to mess things up, and this has me going nuts. The result is that I cannot seem to get access to my back-end pods via the service. I've followed along with the Debug Service document (https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service) and things go haywire pretty quickly.
So this is my current yaml file:
apiVersion: v1
kind: Service
metadata:
  name: test
spec:
  type: NodePort
  ports:
  - name: default
    protocol: TCP
    port: 80
    targetPort: 8080
  selector:
    app: test
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
spec:
  selector:
    matchLabels:
      app: test
  replicas: 1
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: test
        image: jan/test:v1.0.0
        ports:
        - containerPort: 8080
          protocol: TCP
The application starts fine - it reports in the log that it is ready for requests. (It is a Java/Grizzly application.) Now here is a list of what I tried:
- checked kubectl get services: it is there (for this example it is 172.17.0.4)
- exec into the pod (alpine):
  - ifconfig - 172.17.0.4, 127.0.0.1
  - nslookup test 10.96.0.10 - works (note: without the nameserver this returns "can't resolve '(null)': Name does not resolve")
  - ping 127.0.0.1 - works
  - wget http://127.0.0.1:8080 - responds fine
  - ping 172.17.0.4 - works
  - wget http://172.17.0.4:8080 - fails immediately, connection refused
  - wget -qO- test - fails after a while, operation times out
- exec into another (busybox) pod:
  - ifconfig - 172.17.0.8, 127.0.0.1
  - nslookup test - works
  - ping to pod 172.17.0.4 - works
  - wget http://172.17.0.8:8080 - fails immediately, connection refused
  - wget -qO- test - fails immediately, connection refused
Most importantly, I expect wget -qO- {service} to start reporting its pod, which currently it does not. Again, I went through the scenario of the Debug Service document and it completes without issues.
So what (else) could be wrong for that wget -qO- to fail?
So, let's see... You are in a busybox pod.
ifconfig - 172.17.0.8, 127.0.0.1
wget http://172.17.0.8:8080 - fails immediately, connection refused
What are you doing here? This is like doing localhost:8080. Of course you are getting connection refused: there is nothing serving on port 8080 of the busybox.
wget -qO- test - fails immediately, connection refused
Same here. Now you are making the request on port 80 of the busybox, which again has nothing serving.
There is absolutely no way this configuration has ever worked. All you are doing is making requests to yourself from within the busybox.
You need to make the request to a service that points to your app, or directly to the pod that contains your app.
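Concretely, from the busybox pod the requests that would actually exercise the app look like this (using the pod IP 172.17.0.4 and service name test from the question):

# Hit the app pod directly, then go through the service.
wget -qO- http://172.17.0.4:8080
wget -qO- http://test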
It turned out I had removed an important property that was fed into the application, so the problem was not at the level of K8s at all. Essentially, I had rendered my deployed application 'invisible'.