In Kubernetes, I have the following Service:
apiVersion: v1
kind: Service
metadata:
  name: test-service
  namespace: default
spec:
  ports:
    - name: tcp
      protocol: TCP
      port: 5555
      targetPort: 5555
    - name: udp
      protocol: UDP
      port: 5556
      targetPort: 5556
  selector:
    tt: test
This exposes two ports, 5555 for TCP and 5556 for UDP. How can I expose these ports externally using the same ingress? I tried using nginx to do something like the following, but it doesn't work: it complains that mixed protocols are not supported.
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  5555: "default/test-service:5555"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-services
  namespace: ingress-nginx
data:
  5556: "default/test-service:5556"
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: LoadBalancer
  ports:
    - name: tcp
      port: 5555
      targetPort: 5555
      protocol: TCP
    - name: udp
      port: 5556
      targetPort: 5556
      protocol: UDP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
Is there a way to do this?
Most cloud providers do not support UDP load-balancing or mixed protocols and might have cloud-specific methods to bypass this issue.
The DigitalOcean CPI does not support mixed protocols in the Service
definition, it accepts TCP only for load balancer Services. It is
possible to ask for HTTP(S) and HTTP2 ports with Service annotations.
Summary: The DO CPI is the current bottleneck with its TCP-only limitation. As long as it is there the implementation of this feature
will have no effect on the DO bills.
See more: do-mixed-protocols.
A simple solution that may solve your problem is to create a reverse
proxy system on a standalone server - using Nginx and route UDP and
TCP traffic directly to your Kubernetes service.
Follow these two steps:
1. Create a NodePort service application
2. Create a small server instance and run Nginx with LB config on it
Use a NodePort type of Service, which will expose your application on your cluster nodes and make it accessible through your node IP on a static port. This type supports multi-protocol services. Read more about services here.
apiVersion: v1
kind: Service
metadata:
  name: test-service
  namespace: default
spec:
  type: NodePort
  ports:
    - name: tcp
      protocol: TCP
      port: 5555
      targetPort: 5555
      nodePort: 30010
    - name: udp
      protocol: UDP
      port: 5556
      targetPort: 5556
      nodePort: 30011
  selector:
    tt: test
For example, this service exposes the test pods' port 5555 through nodeIP:30010 with the TCP protocol, and port 5556 through nodeIP:30011 with UDP. Please adjust the ports according to your needs; this is just an example.
Then create a small server instance and run Nginx with LB config.
For this step, you can get a small server from any cloud provider.
Once you have the server, ssh inside and run the following to install Nginx:
$ sudo yum install nginx
In the next step, you will need your node IP addresses, which you can get by running:
$ kubectl get nodes -o wide
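The output will look roughly like this (names and addresses below are illustrative, and trailing columns are omitted); use the INTERNAL-IP or EXTERNAL-IP column depending on how your load balancer server reaches the nodes:
NAME     STATUS   ROLES    AGE   VERSION   INTERNAL-IP    EXTERNAL-IP
node-1   Ready    <none>   10d   v1.18.2   10.240.0.10    203.0.113.11
node-2   Ready    <none>   10d   v1.18.2   10.240.0.11    203.0.113.12
node-3   Ready    <none>   10d   v1.18.2   10.240.0.12    203.0.113.13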
Note: If you have a private cluster without external access to your nodes, you will have to set up a point of entry for this (for example, a NAT gateway).
Then you have to add the following to your nginx.conf (run $ sudo vi /etc/nginx/nginx.conf):
worker_processes 1;

events {
    worker_connections 1024;
}

stream {
    upstream tcp_backend {
        server <node ip 1>:30010;
        server <node ip 2>:30010;
        server <node ip 3>:30010;
        ...
    }

    upstream udp_backend {
        server <node ip 1>:30011;
        server <node ip 2>:30011;
        server <node ip 3>:30011;
        ...
    }

    server {
        listen 5555;
        proxy_pass tcp_backend;
        proxy_timeout 1s;
    }

    server {
        listen 5556 udp;
        proxy_pass udp_backend;
        proxy_timeout 1s;
    }
}
Now you can start your Nginx server using the command:
$ sudo /etc/init.d/nginx start
If you had already started your Nginx server before applying changes to your config file, you have to restart it by executing the commands below:
$ sudo netstat -tulpn # Get the PID of nginx.conf program
$ sudo kill -2 <PID of nginx.conf>
$ sudo service nginx restart
And now you have a UDP/TCP load balancer which you can access through <server IP>:<port> (5555 for TCP and 5556 for UDP in this example).
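You can quickly check both listeners from a client machine, for example with netcat (flags vary between netcat variants, and UDP "connection" checks are only indicative):
$ nc -vz <server IP> 5555    # TCP check
$ nc -vzu <server IP> 5556   # UDP check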
See more: tcp-udp-loadbalancer.
You can enable the MixedProtocolLBService feature gate. For instructions on how to enable feature gates, see below.
How do you enable Feature Gates in K8s?
Restart (delete and re-create) the Ingress controller after enabling it for the settings to take effect.
MixedProtocolLBService was introduced as an alpha feature in Kubernetes 1.20. Whether it becomes stable or deprecated remains to be seen.
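As a rough sketch, once the gate is enabled (for example by starting the kube-apiserver with --feature-gates=MixedProtocolLBService=true) and assuming your cloud provider can actually provision a mixed-protocol load balancer, a single LoadBalancer Service for the original example could look like this (the name test-service-lb is illustrative):
apiVersion: v1
kind: Service
metadata:
  name: test-service-lb   # illustrative name
  namespace: default
spec:
  type: LoadBalancer
  selector:
    tt: test
  ports:
    - name: tcp
      protocol: TCP
      port: 5555
      targetPort: 5555
    - name: udp
      protocol: UDP
      port: 5556
      targetPort: 5556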
Related
I'm doing some tutorials using k3d (k3s in docker) and my yml looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:alpine
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
With the resulting node port being 31747:
:~$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 18m
nginx NodePort 10.43.254.138 <none> 80:31747/TCP 17m
:~$ kubectl get endpoints
NAME ENDPOINTS AGE
kubernetes 172.18.0.2:6443 22m
nginx 10.42.0.8:80 21m
However wget does not work:
:~$ wget localhost:31747
Connecting to localhost:31747 ([::1]:31747)
wget: can't connect to remote host: Connection refused
:~$
What have I missed? I've ensured that my labels all say app: nginx and that my containerPort, port, and targetPort are all 80.
The question is whether the NodePort range is mapped from the host to the Docker container acting as the node. The command docker ps will show you; for more details you can run docker inspect $container_id and look at the Ports attribute under NetworkSettings. I don't have k3d around, but here is an example from kind.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1d2225b83a73 kindest/node:v1.17.0 "/usr/local/bin/entr…" 18 hours ago Up 18 hours 127.0.0.1:32769->6443/tcp kind-control-plane
$ docker inspect kind-control-plane
[
{
# [...]
"NetworkSettings": {
# [...]
"Ports": {
"6443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32769"
}
]
},
# [...]
}
]
If it is not mapped, working with kubectl port-forward as suggested in the comment is probably the easiest approach. Alternatively, start looking into Ingress. Ingress is the preferred method to expose workloads outside of a cluster, and in the case of kind, they have support for Ingress. It seems k3d also has a way to map the ingress port to the host.
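For example, to test the service quickly without any host port mapping (local port 8080 is an arbitrary, illustrative choice):
$ kubectl port-forward service/nginx 8080:80
$ wget -qO- localhost:8080    # in a second terminal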
Turns out I didn't expose the ports when creating the cluster
https://k3d.io/usage/guides/exposing_services/
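For reference, a minimal sketch following that guide (the exact -p syntax depends on your k3d version, and the ports below are illustrative):
$ k3d cluster create mycluster --agents 2 -p "8082:30080@agent:0"   # map host port 8082 to NodePort 30080 on the first agent node
Then pin nodePort: 30080 in the Service spec and access it at localhost:8082.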
Maybe your pod is running on another worker node, not localhost. You should use the correct node IP.
I want to deploy a simple nginx app on my own kubernetes cluster.
I used the basic nginx deployment on the machine with the IP 192.168.188.10. It is part of a cluster of 3 Raspberry Pis.
NAME STATUS ROLES AGE VERSION
master-pi4 Ready master 2d20h v1.18.2
node1-pi4 Ready <none> 2d19h v1.18.2
node2-pi3 Ready <none> 2d19h v1.18.2
$ kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
$ kubectl create service nodeport nginx --tcp=80:80
service/nginx created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
my-nginx-8fb6d868-6957j 1/1 Running 0 10m
my-nginx-8fb6d868-8c59b 1/1 Running 0 10m
nginx-f89759699-n6f79 1/1 Running 0 4m20s
$ kubectl describe service nginx
Name: nginx
Namespace: default
Labels: app=nginx
Annotations: <none>
Selector: app=nginx
Type: NodePort
IP: 10.98.41.205
Port: 80-80 80/TCP
TargetPort: 80/TCP
NodePort: 80-80 31400/TCP
Endpoints: <none>
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
But I always get a timeout:
$ curl http://192.168.188.10:31400/
curl: (7) Failed to connect to 192.168.188.10 port 31400: Connection timed out
Why is the nginx web server not reachable? I tried to reach it from the same machine I deployed it to. How can I make it accessible from another machine on the network on port 31400?
As mentioned by @suren, you are creating a stand-alone service without any link to your deployment.
You can solve this using the command from suren's answer, or by creating a new deployment and service using the following YAML spec:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - name: http
              containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
Afterwards, run kubectl get svc to get the NodePort for accessing your service:
nginx-svc NodePort 10.100.136.135 <none> 80:31816/TCP 34s
To access it, use http://<YOUR_NODE_IP>:31816.
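For reference, the imperative alternative (presumably what suren's answer suggested; that answer is not quoted here, so this is an assumption) is to expose the deployment directly, which generates a Service whose selector matches the deployment's pod labels:
$ kubectl expose deployment nginx --type=NodePort --port=80 --target-port=80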
So, is 192.168.188.10 your host IP / your VM IP?
You have to check first whether any other service is using that port, or maybe you haven't added it to your security group if you are using a cloud platform.
Just to make sure, you can create a pod and access the service using its FQDN, like my-svc.my-namespace.svc.cluster-domain.example.
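For example, a quick check from a throwaway pod, assuming the default cluster.local cluster domain and the nginx service created above in the default namespace:
$ kubectl run tmp --rm -it --restart=Never --image=busybox -- wget -qO- http://nginx.default.svc.cluster.local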
I currently configure my Nginx Pod's readinessProbe to monitor Redis port 6379, and I have configured my redis pod behind the redis-service (ClusterIP).
So my idea is to monitor the Redis port through the redis-service using DNS instead of the IP address.
When I use readinessProbe.host: redis-service.default.svc.cluster.local, the Nginx pod does not run. When I describe the Nginx pod with $ kubectl describe pods nginx, I find the error below in the Events section:
Readiness probe failed: dial tcp: lookup redis-service.default.svc.cluster.local: no such host
It only works if I use ClusterIP instead of DNS.
Please help me figure out how to use DNS instead of ClusterIP.
My Pod file:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
    - image: nginx
      name: nginx
      livenessProbe:
        httpGet:
          path: /
          port: 80
      readinessProbe:
        tcpSocket:
          host: redis-service.default.svc.cluster.local
          port: 6379
      resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
Thanks.
I figured out how to do it.
Instead of using tcpSocket, just use exec.
Exec your tcpCheck script inside the container to check the service:port availability.
Use your init-container to share the script with the main container.
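A minimal sketch of what that can look like (the script path and name are illustrative; the script is assumed to be mounted into the container, e.g. from an init container or a ConfigMap):
readinessProbe:
  exec:
    # /scripts/tcpCheck.sh is a hypothetical helper shared via a volume;
    # it could simply run: nc -z "$1" "$2"
    command:
      - /bin/sh
      - /scripts/tcpCheck.sh
      - redis-service.default.svc.cluster.local
      - "6379"
  initialDelaySeconds: 5
  periodSeconds: 10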
Thanks.
I have a corporate network (10.22..) which hosts a Kubernetes cluster (10.225.0.1). How can I access a VM in the same network but outside the cluster from within a pod in the cluster?
For example, I have a VM at 10.22.0.1 listening on port 30000, which I need to access from a Pod in the Kubernetes cluster. I tried to create a Service like this:
apiVersion: v1
kind: Service
metadata:
  name: vm-ip
spec:
  selector:
    app: vm-ip
  ports:
    - name: vm
      protocol: TCP
      port: 30000
      targetPort: 30000
  externalIPs:
    - 10.22.0.1
But when I do "curl http://vm-ip:30000" from a Pod (kubectl exec -it), it returns a "connection refused" error. But it works with "google.com". What are the ways of accessing external IPs?
You can create an endpoint for that.
Let's go through an example:
In this example, I have an http server on my network with IP 10.128.15.209 and I want it to be accessible from my pods inside my Kubernetes cluster.
The first thing is to create an endpoint. This is going to let me create a service pointing to this endpoint, which will redirect the traffic to my external http server.
My endpoint manifest looks like this:
apiVersion: v1
kind: Endpoints
metadata:
  name: http-server
subsets:
  - addresses:
      - ip: 10.128.15.209
    ports:
      - port: 80
$ kubectl apply -f http-server-endpoint.yaml
endpoints/http-server configured
Let's create our service:
apiVersion: v1
kind: Service
metadata:
  name: http-server
spec:
  ports:
    - port: 80
      targetPort: 80
$ kubectl apply -f http-server-service.yaml
service/http-server created
Checking that our service exists and saving its clusterIP for later usage:
user@minikube-server:~$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
http-server ClusterIP 10.96.228.220 <none> 80/TCP 30m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 10d
Now it's time to verify if we can access our service from a pod:
$ kubectl run ubuntu -it --rm=true --restart=Never --image=ubuntu bash
This command will create and open a bash session inside an ubuntu pod.
In my case I'll install curl to be able to check if I can access my http server. You may need to install mysql instead:
root@ubuntu:/# apt update; apt install -y curl
Checking connectivity to my service using its clusterIP:
root@ubuntu:/# curl 10.96.228.220:80
Hello World!
And finally using the service name (DNS):
root@ubuntu:/# curl http-server
Hello World!
So, in your specific case you have to create this:
apiVersion: v1
kind: Endpoints
metadata:
  name: vm-server
subsets:
  - addresses:
      - ip: 10.22.0.1
    ports:
      - port: 30000
---
apiVersion: v1
kind: Service
metadata:
  name: vm-server
spec:
  ports:
    - port: 30000
      targetPort: 30000
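Once both objects are applied, any pod in the same namespace should be able to reach the VM through the Service, e.g. with curl http://vm-server:30000 (assuming curl is available in the pod's image).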
I have a custom Kubernetes cluster on a server with a public IP and DNS pointing to it (also wildcard).
Gitlab was configured with the cluster following this guide: https://gitlab.touch4it.com/help/user/project/clusters/index#add-existing-kubernetes-cluster
However, after installing Ingress, the ingress endpoint is never detected:
I tried patching the object in k8s, like so:
externalIPs:                   # (was empty)
  - 1.2.3.4
externalTrafficPolicy: Local   # (was Cluster)
I suspect that the problem is the empty ingress status (scroll to the end) of the object returned when calling:
# kubectl get service ingress-nginx-ingress-controller -n gitlab-managed-apps -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2019-11-20T08:57:18Z"
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.22.1
    component: controller
    heritage: Tiller
    release: ingress
  name: ingress-nginx-ingress-controller
  namespace: gitlab-managed-apps
  resourceVersion: "3940"
  selfLink: /api/v1/namespaces/gitlab-managed-apps/services/ingress-nginx-ingress-controller
  uid: c175afcc-0b73-11ea-91ec-5254008dd01b
spec:
  clusterIP: 10.107.35.248
  externalIPs:
  - 1.2.3.4 # (public IP)
  externalTrafficPolicy: Local
  healthCheckNodePort: 30737
  ports:
  - name: http
    nodePort: 31972
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    nodePort: 31746
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app: nginx-ingress
    component: controller
    release: ingress
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer: {}
But GitLab still can't find the ingress endpoint. I tried restarting the cluster and GitLab.
The network inspection in GitLab always shows this response:
...
name ingress
status installed
status_reason null
version 1.22.1
external_ip null
external_hostname null
update_available false
can_uninstall false
...
Any ideas how to have a working Ingress Endpoint?
GitLab: 12.4.3 (4d477238500) k8s: 1.16.3-00
I had the exact same issue as you, and I finally figured out how to solve it.
The first thing to understand is that on bare metal you can't make it work without using MetalLB, because MetalLB calls the required Kubernetes APIs, making the cluster accept the IP address you give to a Service of LoadBalancer type.
So first step is to deploy MetalLB to your cluster.
Then you need to have another machine, running a service like NGiNX or HAproxy or whatever can do some load balancing.
Last but not least, you have to give the load balancer machine's IP address to MetalLB so that it can assign it to the Service.
Usually MetalLB requires a range of IP addresses, but you can also give one IP address like I did:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: staging-public-ips
      protocol: layer2
      addresses:
      - 1.2.3.4/32
This way, MetalLB will assign the IP address to the Service with type LoadBalancer, and GitLab will finally find the IP address.
WARNING: MetalLB will assign each IP address only once. If you need many Services with type LoadBalancer, you will need many machines running NGiNX/HAproxy and so on, and you will have to add their IP addresses to the MetalLB address pool.
For your information, I've posted all the technical details to my Gitlab issue here.