I have a problem with multi-port services. I am trying to expose two ports; the first one works, but the other does not. I am testing this with telnet (among other tools), and I always get "connection refused" for the second port.
This is the ports section of the service's YAML:
spec:
  clusterIP: 10.97.153.249
  externalTrafficPolicy: Cluster
  ports:
  - name: port-1
    nodePort: 32714
    port: 8080
    protocol: TCP
    targetPort: 8080
  - name: port-2
    nodePort: 32715
    port: 17176
    protocol: TCP
    targetPort: 17176
I would first confirm that kubectl get svc shows both NodePorts. If it does, then it is highly likely that the target port is not listening inside the pods. Could you check in the pods whether the ports are listening correctly? I would also advise checking access via the ClusterIP as well.
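A minimal set of commands for those checks (pod and service names are placeholders; ss may need replacing with netstat -tlnp depending on the image):
$ kubectl get svc <service-name>          # both nodePorts should be listed
$ kubectl exec <pod-name> -- ss -tlnp     # is anything listening on 17176 inside the pod?
$ kubectl run tmp --rm -it --image=busybox --restart=Never -- telnet 10.97.153.249 17176   # test the second port via the ClusterIP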
Related
In kubernetes, I have the following service:
apiVersion: v1
kind: Service
metadata:
  name: test-service
  namespace: default
spec:
  ports:
  - name: tcp
    protocol: TCP
    port: 5555
    targetPort: 5555
  - name: udp
    protocol: UDP
    port: 5556
    targetPort: 5556
  selector:
    tt: test
Which exposes two ports, 5555 for TCP and 5556 for UDP. How can I expose these ports externally using the same ingress? I tried using nginx to do something like the following, but it doesn't work. It complains that mixed ports are not supported.
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  5555: "default/test-service:5555"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-services
  namespace: ingress-nginx
data:
  5556: "default/test-service:5556"
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: LoadBalancer
  ports:
  - name: tcp
    port: 5555
    targetPort: 5555
    protocol: TCP
  - name: udp
    port: 5556
    targetPort: 5556
    protocol: UDP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
Is there a way to do this?
Most cloud providers do not support UDP load balancing or mixed protocols, and might have cloud-specific methods to work around this issue.
The DigitalOcean CPI does not support mixed protocols in the Service
definition, it accepts TCP only for load balancer Services. It is
possible to ask for HTTP(S) and HTTP2 ports with Service annotations.
Summary: The DO CPI is the current bottleneck with its TCP-only limitation. As long as it is there, the implementation of this feature will have no effect on the DO LBs.
See more: do-mixed-protocols.
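As an illustration of the annotation route mentioned above, a sketch using the DigitalOcean CCM Service annotations (the values here are examples; check the DO documentation for your setup):
metadata:
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-protocol: "http2"
    service.beta.kubernetes.io/do-loadbalancer-tls-ports: "443"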
A simple solution that may solve your problem is to set up a reverse proxy on a standalone server, using Nginx to route UDP and TCP traffic directly to your Kubernetes service.
Follow these two steps:
1. Create a NodePort service application
2. Create a small server instance and run Nginx with LB config on it
Use the NodePort type of Service, which exposes your application on the cluster nodes and makes it accessible through a node IP on a static port. This type supports multi-protocol services. Read more about services here.
apiVersion: v1
kind: Service
metadata:
  name: test-service
  namespace: default
spec:
  type: NodePort
  ports:
  - name: tcp
    protocol: TCP
    port: 5555
    targetPort: 5555
    nodePort: 30010
  - name: udp
    protocol: UDP
    port: 5556
    targetPort: 5556
    nodePort: 30011
  selector:
    tt: test
For example, this service exposes the test pods' port 5555 through nodeIP:30010 with the TCP protocol, and port 5556 through nodeIP:30011 with UDP. Please adjust the ports to your needs; this is just an example.
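Once applied, the two NodePorts can be smoke-tested from outside the cluster, for example with netcat (the node IP is a placeholder):
$ nc -vz <node-ip> 30010                  # TCP check
$ echo ping | nc -u -w1 <node-ip> 30011   # UDP check: send one datagram, wait 1s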
Then create a small server instance and run Nginx with LB config.
For this step, you can get a small server from any cloud provider.
Once you have the server, SSH into it and run the following to install Nginx (shown for a RHEL-based distribution; use apt on Debian/Ubuntu):
$ sudo yum install nginx
In the next step, you will need your node IP addresses, which you can get by running:
$ kubectl get nodes -o wide
Note: If you have a private cluster without external access to your nodes, you will have to set up a point of entry for this (for example, a NAT gateway).
Then you have to add the following to your nginx.conf (run command $ sudo vi /etc/nginx/nginx.conf):
worker_processes 1;

events {
    worker_connections 1024;
}

stream {
    upstream tcp_backend {
        server <node ip 1>:30010;
        server <node ip 2>:30010;
        server <node ip 3>:30010;
        ...
    }

    upstream udp_backend {
        server <node ip 1>:30011;
        server <node ip 2>:30011;
        server <node ip 3>:30011;
        ...
    }

    server {
        listen 5555;
        proxy_pass tcp_backend;
        proxy_timeout 1s;
    }

    server {
        listen 5556 udp;
        proxy_pass udp_backend;
        proxy_timeout 1s;
    }
}
Now you can start your Nginx server using command:
$ sudo /etc/init.d/nginx start
If you have already started your Nginx server before applying changes to your config file, you have to restart it - execute the commands below:
$ sudo netstat -tulpn # Get the PID of nginx.conf program
$ sudo kill -2 <PID of nginx.conf>
$ sudo service nginx restart
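On systemd-based distributions, the equivalent is simpler; it is also worth validating the config first:
$ sudo nginx -t                  # validate nginx.conf before (re)loading it
$ sudo systemctl restart nginx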
And now you have a UDP/TCP load balancer which you can access through <server IP>:5555 (TCP) and <server IP>:5556 (UDP).
See more: tcp-udp-loadbalancer.
You can enable the MixedProtocolLBService feature gate. For instructions on how to enable feature gates, see below.
How do you enable Feature Gates in K8s?
Restart (delete and re-create) the Ingress controller after enabling it for the settings to take effect.
MixedProtocolLBService has been an alpha feature since Kubernetes 1.20. Whether it becomes stable or deprecated remains to be seen.
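For a cluster where you control the control plane, a feature gate is passed as a flag to the relevant components; a sketch (flag placement depends on how your control plane is deployed):
kube-apiserver --feature-gates=MixedProtocolLBService=true
With the gate enabled, a single LoadBalancer Service may mix protocols:
spec:
  type: LoadBalancer
  ports:
  - name: tcp
    protocol: TCP
    port: 5555
    targetPort: 5555
  - name: udp
    protocol: UDP
    port: 5556
    targetPort: 5556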
I currently configure my Nginx pod's readinessProbe to monitor Redis port 6379, and my redis pod sits behind redis-service (ClusterIP).
So my idea is to monitor the Redis port through redis-service using its DNS name instead of the IP address.
When I use readinessProbe.host: redis-service.default.svc.cluster.local, the Nginx pod never becomes ready. When I describe the Nginx pod with $ kubectl describe pods nginx, I find the following error in the Events section:
Readiness probe failed: dial tcp: lookup redis-service.default.svc.cluster.local: no such host
It only works if I use the ClusterIP instead of the DNS name.
Please help me figure out how to use the DNS name instead of the ClusterIP.
My Pod file:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
    livenessProbe:
      httpGet:
        path: /
        port: 80
    readinessProbe:
      tcpSocket:
        host: redis-service.default.svc.cluster.local
        port: 6379
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
Thanks.
I figured out how to do it. The tcpSocket probe is performed by the kubelet from the node itself, which does not use the cluster DNS resolver, so the service name cannot be resolved there.
Instead of using tcpSocket, just use exec.
Exec a tcpCheck script inside the container to check the service:port availability; since it runs inside the container, cluster DNS resolution works.
Use an init container to share the script with the main container.
Thanks.
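A minimal sketch of that approach, using bash's /dev/tcp instead of a separate script (this assumes the image ships bash, as the stock nginx image does):
readinessProbe:
  exec:
    command:
    - bash
    - -c
    # runs inside the container, so the Service DNS name resolves via cluster DNS
    - '</dev/tcp/redis-service.default.svc.cluster.local/6379'
  timeoutSeconds: 2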
I am new to Kubernetes. I followed Kubernetes The Hard Way by Kelsey Hightower, and also this guide, to set up Kubernetes in Azure. Now all the services are up and running fine, but I am not able to expose the traffic using a load balancer. I tried to add a Service object of type LoadBalancer, but the external IP shows as <pending>. I need to add an ingress to expose the traffic.
nginx-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-service
  name: nginx-service
spec:
  type: LoadBalancer
  externalIPs:
  - <ip>
  ports:
  - name: "80"
    port: 80
    targetPort: 80
  - name: "443"
    port: 443
    targetPort: 443
  selector:
    app: nginx-service
Thank you,
By default, the setup proposed by Kubernetes The Hard Way doesn't include a solution for LoadBalancer Services. The fact that the external IP is pending forever is expected behavior. You need an out-of-the-box solution for that; a very commonly used one is MetalLB.
Note that MetalLB isn't going to allocate a cloud external IP for you; it will allocate an IP inside your VPC, and you have to create the necessary routing rules to route traffic to that IP.
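For reference, a minimal MetalLB layer 2 configuration sketch (MetalLB v0.13+ CRDs; the address range is a placeholder and must be unused addresses from your VPC subnet):
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
  - 10.240.0.100-10.240.0.110
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system
spec:
  ipAddressPools:
  - default-pool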
I've got a pod with 2 containers, both running nginx. One is running on port 80, the other on port 88. I have no trouble accessing the one on port 80, but can't seem to access the one on port 88. When I try, I get:
This site can’t be reached
The connection was reset.
ERR_CONNECTION_RESET
So here's the details.
1) The container is defined in the deployment YAML as:
- name: rss-reader
  image: nickchase/nginx-php-rss:v3
  ports:
  - containerPort: 88
2) I created the service with:
kubectl expose deployment rss-site --port=88 --target-port=88 --type=NodePort --name=backend
3) This created a service of:
root@kubeclient:/home/ubuntu# kubectl describe service backend
Name:                   backend
Namespace:              default
Labels:                 app=web
Selector:               app=web
Type:                   NodePort
IP:                     11.1.250.209
Port:                   <unset>  88/TCP
NodePort:               <unset>  31754/TCP
Endpoints:              10.200.41.2:88,10.200.9.2:88
Session Affinity:       None
No events.
And when I tried to access it, I used the URL
http://[nodeip]:31754/index.php
Now, when I instantiate the container manually with Docker, this works.
So anybody have a clue what I'm missing here?
Thanks in advance...
My presumption is that you're using the wrong IP to access it. Are you trying to access the node's (minion's) IP address on port 31754?
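A couple of commands to verify (the pod name is a placeholder; ss may need replacing with netstat depending on the image):
$ kubectl get nodes -o wide                           # take a node's EXTERNAL-IP or INTERNAL-IP
$ curl http://<node-ip>:31754/index.php               # NodePorts answer on node IPs, not on the service IP
$ kubectl exec <pod-name> -c rss-reader -- ss -tlnp   # confirm the container is actually listening on 88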
I have a Kubernetes cluster running 1.2.3 binaries along with flannel 0.5.5. I am using the GCE backend with IP forwarding enabled. For some reason, although I specify a specific node's external IP address, traffic is not forwarded to that node.
Additionally, I cannot create external load balancers; the controller-manager says it can't find the GCE instances corresponding to the nodes, which are in the Ready state. I've looked at the source where the load balancer creation happens; my guess is that it's either a permissions issue (I gave the cluster full permissions for GCE) or it's not finding the metadata.
Here is an example of the services in question:
kind: "Service"
apiVersion: "v1"
metadata:
name: "client"
spec:
ports:
- protocol: "TCP"
port: 80
targetPort: 80
name: "insecure"
- protocol: "TCP"
port: 443
targetPort: 443
name: "secure"
selector:
name: "client"
sessionAffinity: "ClientIP"
externalIPs:
- "<Node External IP>"
And when I was trying to create the load balancer, the Service had type: LoadBalancer.
Why would the forwarding to the node IP not work? I have an idea as to the load balancer issue, but does anyone have any insight?
So I eventually worked around this issue by creating an external load balancer. Only then did I have a valid external IP.
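For illustration, the working variant drops externalIPs and lets the cloud provider allocate the address via the LoadBalancer type (a sketch based on the Service above):
kind: "Service"
apiVersion: "v1"
metadata:
  name: "client"
spec:
  type: LoadBalancer    # the GCE cloud provider provisions the forwarding rule and external IP
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: 80
    name: "insecure"
  - protocol: "TCP"
    port: 443
    targetPort: 443
    name: "secure"
  selector:
    name: "client"
  sessionAffinity: "ClientIP"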