Kubernetes pod wget -qO- fails for some service, but not all - networking

Going over my recent experiments, I used my notes to recreate a relatively simple Kubernetes setup with a back-end and a front-end service. In my scenario both of these services need to be exposed, and for now I'm doing that using NodePort.
This all worked quite nicely a week or so ago, but I think I managed to mess things up, and it is driving me nuts. The result is that I cannot seem to get access to my back-end pods via the service. I've followed along with the Debug Service document (https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service) and things go haywire pretty quickly.
So this is my current yaml file:
apiVersion: v1
kind: Service
metadata:
  name: test
spec:
  type: NodePort
  ports:
  - name: default
    protocol: TCP
    port: 80
    targetPort: 8080
  selector:
    app: test
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
spec:
  selector:
    matchLabels:
      app: test
  replicas: 1
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: test
        image: jan/test:v1.0.0
        ports:
        - containerPort: 8080
          protocol: TCP
The application starts fine - it reports in its log that it is ready for requests (it is a Java/Grizzly application). Here is a list of what I tried:
- check kubectl get services: it is there (for this example it is 172.17.0.4)
- exec into the pod (alpine):
  - ifconfig - 172.17.0.4, 127.0.0.1
  - nslookup test 10.96.0.10 - works (note: without the nameserver argument this returns: can't resolve '(null)' : Name does not resolve)
  - ping 127.0.0.1 - works
  - wget http://127.0.0.1:8080 - responds fine
  - ping 172.17.0.4 - works
  - wget http://172.17.0.4:8080 - fails immediately, connection refused
  - wget -qO- test - fails after a while, operation times out
- exec into another (busybox) pod:
  - ifconfig - 172.17.0.8, 127.0.0.1
  - nslookup test - works
  - ping to pod 172.17.0.4 - works
  - wget http://172.17.0.8:8080 - fails immediately, connection refused
  - wget -qO- test - fails immediately, connection refused
Most importantly, I expect wget -qO- {service} to reach a pod behind the service and return its response, which currently it does not. Again, I went through the scenario of the Debug Service document and it completes without issues.
So what (else) could be wrong for that wget -qO- to fail?

So, let's see... You are in a busybox pod.
ifconfig - 172.17.0.8, 127.0.0.1
wget http://172.17.0.8:8080 - fails immediately, connection refused
What are you doing here? This is equivalent to requesting localhost:8080, since 172.17.0.8 is the busybox pod's own IP. Of course you are getting connection refused: nothing is serving on port 8080 of busybox.
wget -qO- test - fails immediately, connection refused
Same here. Now you are making the request on port 80 of busybox, which again has nothing serving.
There is absolutely no way this configuration has ever worked. All you are doing is making requests to yourself from within busybox.
You need to make the request against a service that points to your app, or directly against the pod that contains your app.
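To make that concrete, a sketch reusing the addresses from your own output (the app pod being 172.17.0.4 and the service named test; the service's ClusterIP is a placeholder since it isn't shown):

# from inside the busybox pod
wget -qO- http://172.17.0.4:8080       # directly to the app pod
wget -qO- http://<service-cluster-ip>  # or to the Service's ClusterIP on port 80

Neither target is busybox's own address (172.17.0.8), which is what your failing commands were effectively hitting.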

It turned out I had removed an important property that was fed into the application. So the problem was not at the level of K8s at all; essentially I was rendering my deployed application 'invisible'.

Related

A proxy inside a kubernetes pod doesn't intercept any HTTP traffic

What I want is to have 2 applications running in a pod, each in its own container. Application A is a simple Spring Boot application which makes HTTP requests to the other application, which is deployed on Kubernetes. The purpose of Application B (the proxy) is to intercept that HTTP request and add an Authorization token to its header; Application B is mitmdump with a Python script. The issue I am having is that when I deploy it on Kubernetes, the proxy seems to not intercept any traffic at all (I tried to reproduce this issue on my local machine and found no trouble, so I guess the issue lies somewhere within the networking inside a pod). Can someone have a look into it and guide me how to solve it?
Here's the deployment and service file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: proxy-deployment
  namespace: myown
  labels:
    app: application-a
spec:
  replicas: 1
  selector:
    matchLabels:
      app: application-a
  template:
    metadata:
      labels:
        app: application-a
    spec:
      containers:
      - name: application-a
        image: registry.gitlab.com/application-a
        resources:
          requests:
            memory: "230Mi"
            cpu: "100m"
          limits:
            memory: "460Mi"
            cpu: "200m"
        imagePullPolicy: Always
        ports:
        - containerPort: 8090
        env:
        - name: "HTTP_PROXY"
          value: "http://localhost:1030"
      - name: application-b-proxy
        image: registry.gitlab.com/application-b-proxy
        resources:
          requests:
            memory: "230Mi"
            cpu: "100m"
          limits:
            memory: "460Mi"
            cpu: "200m"
        imagePullPolicy: Always
        ports:
        - containerPort: 1080
---
kind: Service
apiVersion: v1
metadata:
  name: proxy-svc
  namespace: myown
spec:
  ports:
  - nodePort: 31000
    port: 8090
    protocol: TCP
    targetPort: 8090
  selector:
    app: application-a
  sessionAffinity: None
  type: NodePort
And here's how I build the Docker image of mitmproxy/mitmdump:
FROM mitmproxy/mitmproxy:latest
ADD get_token.py .
WORKDIR ~/mit_docker
COPY get_token.py .
EXPOSE 1080:1080
ENTRYPOINT ["mitmdump","--listen-port", "1030", "-s","get_token.py"]
EDIT
I created two dummy docker images in order to have this scenario recreated locally.
APPLICATION A - a Spring Boot application with a job that makes an HTTP GET request every minute to a specified (but irrelevant) address; the address should be accessible. The response should be 302 FOUND. Every time an HTTP request is made, a message appears in the application's logs.
APPLICATION B - a proxy application which is supposed to proxy the Docker container with Application A. Every request is logged.
Make sure your docker proxy config is set to listen to http://localhost:8080 - you can check how to do so here
Open a terminal and run this command:
docker run -p 8080:8080 -ti registry.gitlab.com/dyrekcja117/proxyexample:application-b-proxy
Open another terminal and run this command:
docker run --network="host" registry.gitlab.com/dyrekcja117/proxyexample:application-a
Go into the shell of the Application A container in a 3rd terminal:
docker exec -ti <name of docker container> sh
and try to curl whatever address you want.
And the issue I am struggling with is this: when I curl from inside the container with Application A, the request is intercepted by my proxy and can be seen in its logs, but whenever Application A itself makes the same request, it is not intercepted. The same thing happens on Kubernetes.
Let's first wrap up the facts we discovered over our troubleshooting discussion in the comments:
- Your need is for APP-A to receive an HTTP request and have a token added in flight by PROXY before the request is sent on to your data storage.
- Every container in a Pod shares the network namespace, including the IP address and network ports. Containers inside a Pod can communicate with one another using localhost (source here).
- You were able to log in to container application-a and send a curl request to container application-b-proxy on port 1030, proving the statement above.
- The problem is that your proxy is not intercepting the request as expected.
- You mention that you were able to make it work on localhost, but on localhost the proxy has more power than inside a container.
Since I have access neither to your app-a code nor to the mitmproxy token.py, I will give you a general example of how to redirect traffic from container-a to container-b.
In order to make it work, I'll use NGINX Proxy Pass: it simply proxies the request to container-b.
Reproduction:
I'll use a nginx server as container-a.
I'll build it with this Dockerfile:
FROM nginx:1.17.3
RUN rm /etc/nginx/conf.d/default.conf
COPY frontend.conf /etc/nginx/conf.d
I'll add this configuration file frontend.conf:
server {
    listen 80;
    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
It tells nginx that the traffic should be sent to container-b, which is listening on port 8080 inside the same pod.
I'll build this image as nginxproxy in my local repo:
$ docker build -t nginxproxy .
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
nginxproxy latest 7c203a72c650 4 minutes ago 126MB
Now the full.yaml deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: proxy-deployment
  labels:
    app: application-a
spec:
  replicas: 1
  selector:
    matchLabels:
      app: application-a
  template:
    metadata:
      labels:
        app: application-a
    spec:
      containers:
      - name: container-a
        image: nginxproxy:latest
        ports:
        - containerPort: 80
        imagePullPolicy: Never
      - name: container-b
        image: echo8080:latest
        ports:
        - containerPort: 8080
        imagePullPolicy: Never
---
apiVersion: v1
kind: Service
metadata:
  name: proxy-svc
spec:
  ports:
  - nodePort: 31000
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: application-a
  sessionAffinity: None
  type: NodePort
NOTE: I set imagePullPolicy as Never because I'm using my local docker image cache.
I'll list the changes I made to help you map it to your current environment:
- container-a is doing the work of your application-a; I'm serving nginx on port 80 where you are using port 8090.
- container-b is receiving the request, like your application-b-proxy. The image I'm using is based on mendhak/http-https-echo; normally it listens on port 80, so I made a custom image that listens on port 8080 instead and named it echo8080 (a plausible Dockerfile for it is sketched below).
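For reference, one plausible way to build such an image (an assumption on my part - the answer doesn't show the echo8080 Dockerfile; recent tags of mendhak/http-https-echo honor an HTTP_PORT environment variable):

FROM mendhak/http-https-echo
# assumption: the base image reads HTTP_PORT to pick its listen port
ENV HTTP_PORT=8080
EXPOSE 8080

Build it into the local cache with docker build -t echo8080 .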
First I created an nginx pod and exposed it alone to show you it's running (since there is no content behind it, it returns 502 Bad Gateway, but you can see the output is from nginx):
$ kubectl apply -f nginx.yaml
pod/nginx created
service/nginx-svc created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 64s
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-svc NodePort 10.103.178.109 <none> 80:31491/TCP 66s
$ curl http://192.168.39.51:31491
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx/1.17.3</center>
</body>
</html>
I deleted the nginx pod, then created an echo pod and exposed it to show you the response it gives when curled directly from outside:
$ kubectl apply -f echo.yaml
pod/echo created
service/echo-svc created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
echo 1/1 Running 0 118s
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
echo-svc NodePort 10.102.168.235 <none> 8080:32116/TCP 2m
$ curl http://192.168.39.51:32116
{
  "path": "/",
  "headers": {
    "host": "192.168.39.51:32116",
    "user-agent": "curl/7.52.1"
  },
  "method": "GET",
  "hostname": "192.168.39.51",
  "ip": "::ffff:172.17.0.1",
  "protocol": "http",
  "os": {
    "hostname": "echo"
  },
Now I'll apply the full.yaml:
$ kubectl apply -f full.yaml
deployment.apps/proxy-deployment created
service/proxy-svc created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
proxy-deployment-9fc4ff64b-qbljn 2/2 Running 0 1s
$ k get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
proxy-svc NodePort 10.103.238.103 <none> 80:31000/TCP 31s
Now the proof of concept: from outside the cluster, I'll send a curl to my node IP 192.168.39.51 on port 31000, which sends the request to port 80 on the pod (handled by nginx):
$ curl http://192.168.39.51:31000
{
  "path": "/",
  "headers": {
    "host": "127.0.0.1:8080",
    "user-agent": "curl/7.52.1"
  },
  "method": "GET",
  "hostname": "127.0.0.1",
  "ip": "::ffff:127.0.0.1",
  "protocol": "http",
  "os": {
    "hostname": "proxy-deployment-9fc4ff64b-qbljn"
  },
As you can see, the response carries the pod's own parameters: the request arrived from 127.0.0.1 instead of a public IP, showing that nginx is proxying the request to container-b.
Considerations:
This example was created to show you how the communication works inside Kubernetes.
You will have to check how your application-a is handling the requests and edit it to send the traffic to your proxy.
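For a Spring Boot app specifically, one likely culprit (an assumption, since the application-a code isn't shown) is that the JVM ignores the HTTP_PROXY environment variable set in your deployment; plain curl honors it, which would explain why curl is intercepted while the app is not. The JVM wants explicit system properties instead, along these lines:

java -Dhttp.proxyHost=localhost -Dhttp.proxyPort=1030 \
     -Dhttps.proxyHost=localhost -Dhttps.proxyPort=1030 \
     -jar application-a.jar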
Here are a few links with tutorials and explanations that could help you port your application to the Kubernetes environment:
Virtual Hosts on nginx
Implementing a Reverse proxy Server in Kubernetes Using the Sidecar Pattern
Validating OAuth 2.0 Access Tokens with NGINX and NGINX Plus
Use nginx to Add Authentication to Any Application
Connecting a Front End to a Back End Using a Service
Transparent Proxy and Filtering on K8s
I hope this example helps you.

Expose an external service to be accessible from within the cluster

I am trying to set up connections to my databases, which reside outside of the GKE cluster, from within the cluster.
I have read various tutorials including
https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-mapping-external-services
and multiple SO questions, though the problem persists.
Here is an example configuration with which I am trying to setup kafka connectivity:
---
kind: Endpoints
apiVersion: v1
metadata:
  name: kafka
subsets:
- addresses:
  - ip: 10.132.0.5
  ports:
  - port: 9092
---
kind: Service
apiVersion: v1
metadata:
  name: kafka
spec:
  type: ClusterIP
  ports:
  - port: 9092
    targetPort: 9092
I am able to get some sort of response by connecting directly via nc 10.132.0.5 9092 from the node VM itself, but if I create a pod, say with kubectl run -it --rm --restart=Never alpine --image=alpine sh, then I am unable to connect from within the pod using nc kafka 9092. All the libraries in my code fail by timing out, so it seems to be some kind of routing issue.
Kafka is given as an example; I am having the same issues connecting to other databases as well.
Solved it: the issue was in my understanding of how GCP operates.
To solve the issue I had to add a firewall rule allowing all incoming traffic from the internal GKE network. In my case that was the 10.52.0.0/24 address range.
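For anyone in the same spot, the rule looks roughly like this (a sketch - the rule name and network are placeholders; the source range is my cluster's internal range and the port is Kafka's):

gcloud compute firewall-rules create allow-gke-internal \
  --network default \
  --source-ranges 10.52.0.0/24 \
  --allow tcp:9092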
Hope it helps someone.

Change Kubernetes nginx-ingress-controller ports

I installed Minikube v1.3.1 on my RedHat EC2 instance for some tests.
Since the ports that the nginx-ingress-controller uses by default are already in use, I am trying to change them in the deployment, but without success. Could somebody please advise how to do it?
How do I know that the ports are already in use?
When I listed the system pods using the command kubectl -n kube-system get deployment | grep nginx, I get:
nginx-ingress-controller 0/1 1 0 9d
meaning that my container is not up. When I describe it using the command kubectl -n kube-system describe pod nginx-ingress-controller-xxxxx I get:
Type     Reason                  Age                      From               Message
----     ------                  ----                     ----               -------
Warning  FailedCreatePodSandBox  42m (x163507 over 2d1h)  kubelet, minikube  (combined from similar events): Failed create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod "nginx-ingress-controller-xxxx": Error response from daemon: driver failed programming external connectivity on endpoint k8s_POD_nginx-ingress-controller-xxxx_kube-system_...: Error starting userland proxy: listen tcp 0.0.0.0:443: bind: address already in use
Then I check the processes using those ports and kill them. That frees the ports up and the ingress-controller pod gets deployed correctly.
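For completeness, this is the kind of check I mean (ss, netstat, and lsof are interchangeable here; the PID shown is what gets killed):

sudo ss -tlnp | grep -E ':(80|443) '
sudo lsof -i :443
sudo kill <PID>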
What did I try to change the nginx-ingress-controller port?
kubectl -n kube-system get deployment | grep nginx
> NAME READY UP-TO-DATE AVAILABLE AGE
> nginx-ingress-controller 0/1 1 0 9d
kubectl -n kube-system edit deployment nginx-ingress-controller
The relevant part of my deployment looks like this:
name: nginx-ingress-controller
ports:
- containerPort: 80
  hostPort: 80
  protocol: TCP
- containerPort: 443
  hostPort: 443
  protocol: TCP
- containerPort: 81
  hostPort: 81
  protocol: TCP
- containerPort: 444
  hostPort: 444
  protocol: TCP
- containerPort: 18080
  hostPort: 18080
  protocol: TCP
Then I remove the subsections with ports 443 and 80, but when I roll out the changes, they get added back.
Now my services are not reachable anymore through ingress.
Please note that minikube ships with an addon-manager, whose role is to keep an eye on specific addon template files (default location: /etc/kubernetes/addons/) and take one of two actions based on the value of the addonmanager.kubernetes.io/mode label on the managed resource:
- addonmanager.kubernetes.io/mode=Reconcile: will be periodically reconciled. Direct manipulation of these addons through the apiserver is discouraged, because addon-manager will bring them back to the original state.
- addonmanager.kubernetes.io/mode=KeepOnly: will be checked for existence only. Users can edit these addons as they want.
So to keep your customized version of the default Ingress service listening ports, first change the Ingress deployment template configuration to KeepOnly on the minikube VM.
Basically, minikube bootstraps the Nginx Ingress Controller as a separate addon, so by design you have to enable it in order to propagate the particular Ingress Controller's resources within the minikube cluster.
Once you enable a specific minikube addon, addon-manager creates template files for each component, placing them into the /etc/kubernetes/addons/ folder on the host machine, and then spins up each manifest file, creating the corresponding K8s resources; furthermore, addon-manager continuously inspects the actual state of all addon resources, synchronizing the target K8s resources (service, deployment, etc.) with the template data.
Therefore, you can consider modifying the Ingress addon template data through the ingress-*.yaml files under the /etc/kubernetes/addons/ directory, propagating the desired values into the target K8s objects; it may take some time until the K8s engine reflects the changes and re-spawns the related ReplicaSet-based resources.
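In practice that means something along these lines on the minikube VM (a sketch - the exact file names under /etc/kubernetes/addons/ vary between minikube versions):

minikube ssh
sudo vi /etc/kubernetes/addons/ingress-dp.yaml
# first change the label addonmanager.kubernetes.io/mode from Reconcile
# to KeepOnly, then adjust the hostPort/containerPort values as needed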
Well, I think you have to modify the Ingress that refers to the service you're trying to expose on a custom port.
This can be done with a custom annotation. Here is an example for your port 444:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myservice
  namespace: mynamespace
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.org/listen-ports-ssl: "444"
spec:
  tls:
  - hosts:
    - host.org
    secretName: my-host-tls-cert
  rules:
  - host: host.org
    http:
      paths:
      - path: /
        backend:
          serviceName: my-service
          servicePort: 444

kubernetes nginx ingress fails to redirect HTTP to HTTPS

I have a web app hosted in the Google Cloud platform that sits behind a load balancer, which itself sits behind an ingress. The ingress is set up with an SSL certificate and accepts HTTPS connections as expected, with one problem: I cannot get it to redirect non-HTTPS connections to HTTPS. For example, if I connect to it with the URL http://foo.com or foo.com, it just goes to foo.com, instead of https://foo.com as I would expect. Connecting to https://foo.com explicitly produces the desired HTTPS connection.
I have tried every annotation and config imaginable, but it stubbornly refuses, even though this shouldn't be necessary in the first place, since the docs imply the redirect is automatic when TLS is specified. Am I fundamentally misunderstanding how ingress resources work?
Update: Is it necessary to manually install nginx ingress on GCP? Now that I think about it, I've been taking its availability on the platform for granted, but after coming across information on how to install nginx ingress on the Google Container Engine, I realized the answer may be a lot simpler than I thought. Will investigate further.
Kubernetes version: 1.8.5-gke.0
Ingress YAML file:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: https-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    ingress.kubernetes.io/ssl-redirect: "true"
    ingress.kubernetes.io/secure-backends: "true"
    ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  tls:
  - hosts:
    - foo.com
    secretName: tls-secret
  rules:
  - host: foo.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: foo-prod
          servicePort: 80
kubectl describe ing https-ingress output
Name: https-ingress
Namespace: default
Address:
Default backend: default-http-backend:80 (10.56.0.3:8080)
TLS:
tls-secret terminates foo.com
Rules:
Host Path Backends
---- ---- --------
foo.com
/* foo-prod:80 (<none>)
Annotations:
force-ssl-redirect: true
secure-backends: true
ssl-redirect: true
Events: <none>
The problem was indeed the fact that the Nginx Ingress is not standard on the Google Cloud Platform, and needs to be installed manually - doh!
However, I found installing it to be much more difficult than anticipated (especially because my needs pertained specifically to GCP), so I'm going to outline every step I took from start to finish, in hopes of helping anyone else who uses this specific cloud, has this specific need, and finds that generic guides don't quite fit the bill.
Get Cluster Credentials
This is a GCP specific step that tripped me up for a while - you're dealing with it if you get weird errors like
kubectl unable to connect to server: x509: certificate signed by unknown authority
when trying to run kubectl commands. Run this to set up your console:
gcloud container clusters get-credentials YOUR-K8s-CLUSTER-NAME --zone YOUR-K8S-CLUSTER-ZONE
Install Helm
Helm by itself is not hard to install, and the directions can be found in GCP's own docs; what they neglect to mention, however, is that on new versions of K8s, RBAC configuration is required to allow Tiller to install things. Run the following after helm init:
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
Install Nginx Ingress through Helm
Here's another step that tripped me up - rbac.create=true is necessary for the aforementioned RBAC factor.
helm install --name nginx-ingress-release stable/nginx-ingress --set rbac.create=true
Create your Ingress resource
This step is the simplest, and there are plenty of sample nginx ingress configs to tweak - see #JahongirRahmonov's example above. What you MUST keep in mind is that this step takes anywhere from half an hour to over an hour to take effect - if you change the config and check again immediately, it won't be set up yet, but don't take that as an indication that you messed something up! Wait a while and check again.
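While you wait, a couple of harmless commands to watch progress (a sketch - the exact controller service name depends on your Helm release name):

kubectl get svc -w                   # watch for the controller's external IP
kubectl describe ing https-ingress   # Events show when the ingress is picked up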
It's hard to believe this is what it takes just to redirect HTTP to HTTPS with Kubernetes right now, but I hope this guide helps anyone else stuck on such a seemingly simple and yet critical need.
GCP has a default ingress controller which, at the time of this writing, cannot force HTTPS.
You need to explicitly manage an NGINX Ingress Controller.
See this article on how to do that on GCP.
Then add this annotation to your ingress:
kubernetes.io/ingress.allow-http: "false"
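For reference, a minimal sketch of where that annotation sits, reusing the question's ingress:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: https-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    kubernetes.io/ingress.allow-http: "false"
# spec unchanged from the question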
Hope it helps.

Send command over a mapped port via kubernetes service

I have an application that can receive commands from a specific port like so:
echo <command> | nc <hostname> <port>
In this case it is opening port 22082, I believe in its Docker container.
When I place this application into a kubernetes pod, I need to expose it by creating a kubernetes service. Here is my service:
apiVersion: v1
kind: Service
metadata:
  name: commander
spec:
  selector:
    app: commander
  ports:
  - protocol: TCP
    port: 22282
    targetPort: 22082
  #type: NodePort
  externalIPs:
  - 10.10.30.19
NOTE: I commented out type: NodePort because I haven't been able to expose the port using that method; whenever I run sudo netstat -nlp | grep 22282 I get nothing.
Using an external IP I'm able to find the port and connect to it using netcat, but whenever I issue a command over the port, it just hangs.
Normally I should be able to issue a 'help' command and get information on the app, but with Kubernetes I can't get that same output.
Now, if I use hostNetwork: true in my app yaml (not the service), I can connect to the port and get my 'help' info.
What could be keeping my command from reaching the app when not using the hostNetwork configuration?
Thanks
UPDATE: Noticed this message from sudo iptables --list:
Chain KUBE-SERVICES (1 references)
target  prot opt source    destination
REJECT  tcp  --  anywhere  172.21.155.23  /* default/commander: has no endpoints */ tcp dpt:22282 reject-with icmp-port-unreachable
UPDATE #2: I solved the above error by setting spec.template.metadata.labels.app to commander. However, I am still unable to send any command to the app.
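For reference, that fix amounts to making the pod template labels match the Service selector (a sketch of the relevant Deployment fragment, assuming the rest of the spec is unchanged):

spec:
  template:
    metadata:
      labels:
        app: commander   # must match the Service's spec.selector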
Thanks to #sfgroups I discovered that I needed to set an actual nodePort like so:
apiVersion: v1
kind: Service
metadata:
  name: commander
spec:
  selector:
    app: commander
  ports:
  - protocol: TCP
    port: 22282
    nodePort: 32282
    targetPort: 22082
  type: NodePort
Pretty odd behavior - it makes me wonder what the point of the port field even is!
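For what it's worth, the three fields serve different audiences (a commented sketch of the same ports block):

ports:
- protocol: TCP
  port: 22282        # the Service's own ClusterIP port, dialed from inside the cluster
  targetPort: 22082  # the container port traffic is forwarded to
  nodePort: 32282    # the port opened on every node (30000-32767 by default)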
