Send command over a mapped port via kubernetes service - networking

I have an application that can receive commands from a specific port like so:
echo <command> | nc <hostname> <port>
In this case it is opening port 22082, I believe inside its Docker container.
When I place this application into a kubernetes pod, I need to expose it by creating a kubernetes service. Here is my service:
apiVersion: v1
kind: Service
metadata:
  name: commander
spec:
  selector:
    app: commander
  ports:
  - protocol: TCP
    port: 22282
    targetPort: 22082
  #type: NodePort
  externalIPs:
  - 10.10.30.19
NOTE: I commented out NodePort because I haven't been able to expose the port using that method. Whenever I use sudo netstat -nlp | grep 22282 I get nothing.
Using an external IP I'm able to find the port and connect to it using netcat, but whenever I issue a command over the port, it just hangs.
Normally I should be able to issue a 'help' command and get information on the app. With kubernetes I can't get that same output.
Now, if I use hostNetwork: true in my app yaml (not the service), I can connect to the port and get my 'help' info.
What could be keeping my command from reaching the app while not using hostNetwork configuration?
Thanks
UPDATE: Noticed this message from sudo iptables --list:
Chain KUBE-SERVICES (1 references)
target prot opt source destination
REJECT tcp -- anywhere 172.21.155.23 /* default/commander: has no endpoints */ tcp dpt:22282 reject-with icmp-port-unreachable
UPDATE #2: I solved the above error by setting spec.template.metadata.labels.app to commander. I still, however, am experiencing an inability to send any command to the app.
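For anyone hitting the same "has no endpoints" message: the fix amounts to making the Deployment's pod template labels match the Service selector. A minimal sketch of the relevant fragment (surrounding fields omitted):

```yaml
# Deployment fragment - the pod template label must match the
# Service's spec.selector (app: commander) for endpoints to appear
spec:
  template:
    metadata:
      labels:
        app: commander
```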

Thanks to @sfgroups I discovered that I needed to set an actual nodePort, like so:
apiVersion: v1
kind: Service
metadata:
  name: commander
spec:
  selector:
    app: commander
  ports:
  - protocol: TCP
    port: 22282
    nodePort: 32282
    targetPort: 22082
  type: NodePort
Pretty odd behavior, makes me wonder what the point of the port field even is!
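To clarify the three fields (standard Kubernetes behavior, illustrated with the values from the manifest above): each one names a different hop in the chain, so port is not pointless — it is what in-cluster clients of the ClusterIP use.

```yaml
ports:
- protocol: TCP
  port: 22282        # Service port: <ClusterIP>:22282 from inside the cluster
  nodePort: 32282    # node port: <NodeIP>:32282 from outside the cluster
  targetPort: 22082  # container port the traffic is ultimately forwarded to
```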

Related

Expose an external service to be accessible from within the cluster

I am trying to set up connections from within the cluster to my databases, which reside outside of my GKE cluster.
I have read various tutorials, including
https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-mapping-external-services
and multiple SO questions, though the problem persists.
Here is an example configuration with which I am trying to set up kafka connectivity:
---
kind: Endpoints
apiVersion: v1
metadata:
  name: kafka
subsets:
- addresses:
  - ip: 10.132.0.5
  ports:
  - port: 9092
---
kind: Service
apiVersion: v1
metadata:
  name: kafka
spec:
  type: ClusterIP
  ports:
  - port: 9092
    targetPort: 9092
I am able to get some sort of response by connecting directly via nc 10.132.0.5 9092 from the node VM itself, but if I create a pod, say by kubectl run -it --rm --restart=Never alpine --image=alpine sh then I am unable to connect from within the pod using nc kafka 9092. All libraries in my code fail by timing out so it seems to be some kind of routing issue.
Kafka is given as an example, I am having the same issues connecting to other databases as well.
Solved it: the issue was with my understanding of how GCP operates.
To solve it, I had to add a firewall rule allowing all incoming traffic from the internal GKE network. In my case that was the 10.52.0.0/24 address range.
Hope it helps someone.

Change Kubernetes nginx-ingress-controller ports

I installed Minikube v1.3.1 on my RedHat EC2 instance for some tests.
Since the ports that the nginx-ingress-controller uses by default are already in use, I am trying to change them in the deployment, but without result. Could somebody please advise how to do it?
How do I know that the ports are already in use?
When I listed the system pods using the command kubectl -n kube-system get deployment | grep nginx, I get:
nginx-ingress-controller 0/1 1 0 9d
meaning that my container is not up. When I describe it using the command kubectl -n kube-system describe pod nginx-ingress-controller-xxxxx I get:
Type     Reason                  Age                      From               Message
----     ------                  ----                     ----               -------
Warning  FailedCreatePodSandBox  42m (x163507 over 2d1h)  kubelet, minikube  (combined from similar events): Failed create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod "nginx-ingress-controller-xxxx": Error response from daemon: driver failed programming external connectivity on endpoint k8s_POD_nginx-ingress-controller-xxxx_kube-system_...: Error starting userland proxy: listen tcp 0.0.0.0:443: bind: address already in use
Then I check which processes are using those ports and kill them. That frees the ports up, and the ingress-controller pod gets deployed correctly.
What did I try to change the nginx-ingress-controller port?
kubectl -n kube-system get deployment | grep nginx
> NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
> nginx-ingress-controller   0/1     1            0           9d
kubectl -n kube-system edit deployment nginx-ingress-controller
The relevant part of my deployment looks like this:
name: nginx-ingress-controller
ports:
- containerPort: 80
  hostPort: 80
  protocol: TCP
- containerPort: 443
  hostPort: 443
  protocol: TCP
- containerPort: 81
  hostPort: 81
  protocol: TCP
- containerPort: 444
  hostPort: 444
  protocol: TCP
- containerPort: 18080
  hostPort: 18080
  protocol: TCP
Then I remove the subsections with ports 443 and 80, but when I roll out the changes, they get added again.
Now my services are not reachable anymore through ingress.
Please note that minikube ships with addon-manager, whose role is to keep an eye on specific addon template files (default location: /etc/kubernetes/addons/) and take one of two specific actions based on the value of the managed resource's label:
addonmanager.kubernetes.io/mode
addonmanager.kubernetes.io/mode=Reconcile
Will be periodically reconciled. Direct manipulation of these addons through the apiserver is discouraged, because addon-manager will bring them back to the original state.
addonmanager.kubernetes.io/mode=KeepOnly
Will be checked for existence only. Users can edit these addons as they want.
So to keep your customized version of the default Ingress service listening ports, first change the Ingress deployment template configuration to KeepOnly on the minikube VM.
Basically, minikube bootstraps the Nginx Ingress Controller as a separate addon, so by design you may have to enable it in order to propagate the Ingress Controller's resources within the minikube cluster.
Once you enable a specific minikube addon, addon-manager creates template files for each component, placing them into the /etc/kubernetes/addons/ folder on the host machine, and then spins up each manifest file, creating the corresponding K8s resources; furthermore, addon-manager continuously inspects the actual state of all addon resources, synchronizing the target K8s resources (service, deployment, etc.) with the template data.
Therefore, you can consider modifying the Ingress addon template data through the ingress-*.yaml files under the /etc/kubernetes/addons/ directory, propagating the desired values into the target K8s objects; it may take some time until the K8s engine reflects the changes and re-spawns the related ReplicaSet-based resources.
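If I understand the addon-manager behavior described above correctly, switching the template to KeepOnly is just a label edit on the manifests under /etc/kubernetes/addons/ — a minimal sketch (the exact file name is an assumption):

```yaml
# e.g. /etc/kubernetes/addons/ingress-dp.yaml (file name is an assumption)
metadata:
  name: nginx-ingress-controller
  labels:
    addonmanager.kubernetes.io/mode: KeepOnly   # was: Reconcile
```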
Well, I think you have to modify the Ingress that refers to the service you're trying to expose on a custom port.
This can be done with a custom annotation. Here is an example for your port 444:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myservice
  namespace: mynamespace
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.org/listen-ports-ssl: "444"
spec:
  tls:
  - hosts:
    - host.org
    secretName: my-host-tls-cert
  rules:
  - host: host.org
    http:
      paths:
      - path: /
        backend:
          serviceName: my-service
          servicePort: 444

How to route TCP traffic from outside to a Service inside a Kubernetes cluster?

I have a cluster on Azure (AKS). I have an orientdb service:
apiVersion: v1
kind: Service
metadata:
  name: orientdb
  labels:
    app: orientdb
    role: backend
spec:
  selector:
    app: orientdb
  ports:
  - protocol: TCP
    port: 2424
    name: binary
  - protocol: TCP
    port: 2480
    name: http
which I want to expose to the outside, such that an app from the internet can send TCP traffic directly to this service.
(In order to connect to orientdb you need to connect over TCP to port 2424)
I am not good at networking, so this is my understanding, which may well be wrong.
I tried the following:
Setting up an Ingress: did not work, because Ingress handles HTTP but is not well suited for TCP.
Setting the externalIP field in the service config of the NodePort definition: did not work either.
So my problem is the following:
I cannot send TCP traffic to the service; HTTP traffic works fine.
I would really appreciate it if someone could show me how to expose my service such that I can send TCP traffic directly to my orientdb service.
Thanks in advance.
You can either use a service of type LoadBalancer (I assume AKS supports that), or you can just use the node port.
kubectl expose deployment hello-world --type=LoadBalancer --name=my-service
kubectl get services my-service
The output is similar to this:
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)    AGE
my-service   LoadBalancer   10.3.245.137   104.198.205.71   8080/TCP   54s
Reference here
kubectl expose usage:
Usage
$ kubectl expose (-f FILENAME | TYPE NAME) [--port=port] [--protocol=TCP|UDP|SCTP] [--target-port=number-or-name] [--name=name] [--external-ip=external-ip-of-service] [--type=type]
You can use the --port=2424 --target-port=2424 options to get the correct ports in the kubectl expose command above.
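Spelled out as a manifest instead of kubectl expose, a LoadBalancer Service for the orientdb pods from the question might look like this (the service name orientdb-public is an assumption):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: orientdb-public   # assumed name
spec:
  type: LoadBalancer      # AKS provisions an Azure load balancer with a public IP
  selector:
    app: orientdb         # matches the pod labels from the question
  ports:
  - name: binary
    protocol: TCP
    port: 2424
    targetPort: 2424
  - name: http
    protocol: TCP
    port: 2480
    targetPort: 2480
```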

Assign External IP to a Kubernetes Service

EDIT: The whole point of my setup is to achieve (if possible) the following:
I have multiple k8s nodes
When I contact an IP address (from my company's network), it should be routed to one of my container/pod/service/whatever.
I should be able to easily set up that IP (like in my service .yml definition)
I'm running a small Kubernetes cluster (built with kubeadm) in order to evaluate whether I can move my Docker (old) Swarm setup to k8s. The feature I absolutely need is the ability to assign IPs to containers, like I do with MacVlan.
In my current Docker setup, I'm using MacVlan to assign IP addresses from my company's network to some containers, so I can reach them directly (without a reverse proxy) as if they were physical servers. I'm trying to achieve something similar with k8s.
I found out that:
I have to use Service
I can't use the LoadBalancer type, as it's only for compatible cloud providers (like GCE or AWS).
I should use ExternalIPs
Ingress resources are some kind of reverse proxy?
My yaml file is :
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
      nodeSelector:
        kubernetes.io/hostname: k8s-slave-3
---
kind: Service
apiVersion: v1
metadata:
  name: nginx-service
spec:
  type: ClusterIP
  selector:
    app: nginx
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  externalIPs:
  - A.B.C.D
I was hoping that my service would get the IP A.B.C.D (which is on my company's network). My deployment is working, as I can reach my nginx container from inside the k8s cluster using its ClusterIP.
What am I missing? Or at least, where can I find information on my network traffic in order to see if packets are coming in?
EDIT:
$ kubectl get svc
NAME            CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes      10.96.0.1      <none>        443/TCP   6d
nginx-service   10.102.64.83   A.B.C.D       80/TCP    23h
Thanks.
First of all, run this command:
kubectl get -n namespace services
The above command will return output like this:
NAME       TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
backend    NodePort   10.100.44.154   <none>        9400:3003/TCP    13h
frontend   NodePort   10.107.53.39    <none>        3000:30017/TCP   13h
It is clear from the above output that external IPs are not yet assigned to the services. To assign an external IP to the backend service, run the following command:
kubectl patch svc backend -p '{"spec":{"externalIPs":["192.168.0.194"]}}'
and to assign an external IP to the frontend service, run this command:
kubectl patch svc frontend -p '{"spec":{"externalIPs":["192.168.0.194"]}}'
Now list the services again to check whether the external IPs have been assigned:
kubectl get -n namespace services
We get output like this:
NAME       TYPE       CLUSTER-IP      EXTERNAL-IP     PORT(S)          AGE
backend    NodePort   10.100.44.154   192.168.0.194   9400:3003/TCP    13h
frontend   NodePort   10.107.53.39    192.168.0.194   3000:30017/TCP   13h
Cheers!!! The Kubernetes external IPs are now assigned.
If this is just for testing, then try
kubectl port-forward service/nginx-service 80:80
Then you can
curl http://localhost:80
A solution that could work (and not only for testing, though it has its shortcomings) is to set your Pod to map the host network with the hostNetwork spec field set to true.
It means that you won't need a service to expose your Pod, as it will always be accessible on your host via a single port (the containerPort you specified in the manifest). No need to keep a DNS mapping record in that case.
This also means that you can only run a single instance of this Pod on a given node (talking about shortcomings...). As such, it makes it a good candidate for a DaemonSet object.
If your Pod still needs to access/resolve internal Kubernetes hostnames, you need to set the dnsPolicy spec field to ClusterFirstWithHostNet. This setting will enable your pod to access the K8S DNS service.
Example:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx
spec:
  selector:               # required by apps/v1
    matchLabels:
      app: nginx-reverse-proxy
  template:
    metadata:
      labels:
        app: nginx-reverse-proxy
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      tolerations:        # allow a Pod instance to run on Master - optional
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - image: nginx
        name: nginx
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
EDIT: I was put on this track thanks to the ingress-nginx documentation
You can just patch an external IP:
CMD: $ kubectl patch svc svc_name -p '{"spec":{"externalIPs":["your_external_ip"]}}'
Eg:- $ kubectl patch svc kubernetes -p '{"spec":{"externalIPs":["10.2.8.19"]}}'
You can try the kube-keepalived-vip configuration to route the traffic: https://github.com/kubernetes/contrib/tree/master/keepalived-vip
You can try adding type: NodePort in the yaml file for the service; then you'll have a port to access it via the web browser or from outside. In my case, it helped.
I don't know if that helps in your particular case, but what I did (and I'm on a bare-metal cluster) was to use type LoadBalancer and set the loadBalancerIP as well as the externalIPs to my server IP, as you did.
After that the correct external IP showed up for the load balancer.
Always use the namespace flag either before or after the service name, because namespace-based scoping applies to deployments and services, and it points the command at the service in that specific namespace:
kubectl patch svc service-name -n namespace -p '{"spec":{"externalIPs":["IP"]}}'
Just include the additional option:
kubectl expose deployment hello-world --type=LoadBalancer --name=my-service --external-ip=1.1.1.1

Kubernetes and ERR_CONNECTION_RESET

I've got a pod with 2 containers, both running nginx. One is running on port 80, the other on port 88. I have no trouble accessing the one on port 80, but can't seem to access the one on port 88. When I try, I get:
This site can’t be reached
The connection was reset.
ERR_CONNECTION_RESET
So here's the details.
1) The container is defined in the deployment YAML as:
- name: rss-reader
  image: nickchase/nginx-php-rss:v3
  ports:
  - containerPort: 88
2) I created the service with:
kubectl expose deployment rss-site --port=88 --target-port=88 --type=NodePort --name=backend
3) This created a service of:
root@kubeclient:/home/ubuntu# kubectl describe service backend
Name:              backend
Namespace:         default
Labels:            app=web
Selector:          app=web
Type:              NodePort
IP:                11.1.250.209
Port:              <unset>  88/TCP
NodePort:          <unset>  31754/TCP
Endpoints:         10.200.41.2:88,10.200.9.2:88
Session Affinity:  None
No events.
And when I tried to access it, I used the URL
http://[nodeip]:31754/index.php
Now, when I instantiate the container manually with Docker, this works.
So anybody have a clue what I'm missing here?
Thanks in advance...
My presumption is that you're using the wrong access IP. Are you trying to access the minion's IP address and port 31754?
