Send messages to RabbitMQ from outside a Kubernetes cluster using nginx-ingress

I am trying to set up a RabbitMQ task queue in a Kubernetes cluster and need to be able to populate the queue from outside of the cluster. I am trying to accomplish this using the nginx ingress controller, but I run into errors when declaring a queue or sending messages to an existing queue from outside of the cluster. Using the amqp-tools CLI on Ubuntu from outside the cluster I get an error:
$ export BROKER_URL=amqp://<host-name>:80/rabbitmq
$ /usr/bin/amqp-declare-queue --url=$BROKER_URL -q foo -d
logging in to AMQP server: invalid AMQP data
If I do the same thing from a VM inside the cluster, I can create the queue and send messages to it just fine. I am also able to connect to the RabbitMQ management UI from outside of the cluster, but if I try to declare a queue from the UI I get the error "Management API returned status code 405" displayed at the bottom of the screen.
I have read that the default RabbitMQ virtual host '/' can cause problems behind nginx because of how the path is parsed, but I am not very experienced with this sort of thing and don't know how to fix that.
Any help with this would be greatly appreciated.
I am deploying the nginx ingress controller with the recommended manifest:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/mandatory.yaml
RabbitMQ is deployed with this manifest:
---
apiVersion: v1
kind: Namespace
metadata:
  name: rabbitmq
  labels:
    app: rabbitmq
---
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-service
  namespace: rabbitmq
  labels:
    component: rabbitmq
spec:
  type: ClusterIP
  ports:
  - name: amqp
    port: 5672
    targetPort: 5672
    protocol: TCP
  - name: http
    port: 80
    targetPort: 15672
    protocol: TCP
  selector:
    app: taskQueue
    component: rabbitmq
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: rabbit-ingress
  namespace: rabbitmq
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: <host-name>
    http:
      paths:
      - path: /rabbitmq/?(.*)
        backend:
          serviceName: rabbitmq-service
          servicePort: 5672
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: rabbit-manage-ingress
  namespace: rabbitmq
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: <host-name>
    http:
      paths:
      - path: /rabbitmq-manage/?(.*)
        backend:
          serviceName: rabbitmq-service
          servicePort: 80
---
apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    app: taskQueue
    component: rabbitmq
  name: rabbitmq-controller
  namespace: rabbitmq
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: taskQueue
        component: rabbitmq
    spec:
      containers:
      - image: rabbitmq:3-management
        name: rabbitmq
        ports:
        - containerPort: 5672
        - containerPort: 15672
        resources:
          limits:
            cpu: 100m

As far as I know, RabbitMQ does not provide an HTTP API for this kind of interaction (at least not by default); AMQP is its own TCP protocol. NGINX Ingress cannot use an Ingress resource to expose anything other than HTTP services. Take a look at the documentation to learn how to expose a TCP- or UDP-based service.
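For reference, here is a minimal sketch of that TCP-exposure approach. It assumes the controller runs in the ingress-nginx namespace and was started with --tcp-services-configmap pointing at this ConfigMap (both the namespace and the ConfigMap name are assumptions to adapt to your deployment):

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services        # must match the controller's --tcp-services-configmap flag
  namespace: ingress-nginx  # assumed controller namespace
data:
  # <external port>: "<namespace>/<service>:<service port>"
  "5672": "rabbitmq/rabbitmq-service:5672"

You would also need to open port 5672 on the Service that fronts the controller, and then point BROKER_URL at amqp://<host-name>:5672 directly, without an HTTP path prefix like /rabbitmq.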

Related

Problem with NGINX, Kubernetes and CloudFlare

I am experiencing exactly this issue: Nginx-ingress-controller fails to start after AKS upgrade to v1.22, except that none of the proposed solutions works in my case.
I am running a Kubernetes cluster on Oracle Cloud, and I accidentally upgraded the cluster; now I cannot connect to the services through the nginx controller anymore. After reading the official nginx documentation I am aware of the new version of nginx, so I checked the documentation and re-installed the nginx controller following Oracle Cloud's official documentation.
I am able to follow it step by step as I run:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/cloud/deploy.yaml
An ingress-nginx namespace is then created along with a LoadBalancer. Then, as in the guide, I created a simple hello application (though not running on port 80):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: docker-hello-world
  labels:
    app: docker-hello-world
spec:
  selector:
    matchLabels:
      app: docker-hello-world
  replicas: 1
  template:
    metadata:
      labels:
        app: docker-hello-world
    spec:
      containers:
      - name: docker-hello-world
        image: scottsbaldwin/docker-hello-world:latest
        ports:
        - containerPort: 8088
---
apiVersion: v1
kind: Service
metadata:
  name: docker-hello-world-svc
spec:
  selector:
    app: docker-hello-world
  ports:
  - port: 8088
    targetPort: 8088
  type: ClusterIP
and then the Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ing
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - secretName: tls-secret
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: docker-hello-world-svc
            port:
              number: 8088
But when running the curl commands I only get curl: (56) Recv failure: Connection reset by peer.
So I then tried to connect to some Python microservices that are already running, simply by editing the Ingress, but whatever I do I get the same error message. And when I set the host as follows:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ing
  namespace: ingress-nginx
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: SUBDOMAIN.DOMAIN.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: ANY_MICROSERVICE_RUNNING_IN_CLUSTER
            port:
              number: EXPOSED_PORT_BY_MICROSERVICE
Then, after setting the subdomain on Cloudflare, I only get a 520 error.
Can you help me find what it is that I am not seeing?
This may be related to your Ingress resource.
In Kubernetes versions v1.19 and above, Ingress resources should use ingressClassName instead of the older annotation syntax. Additional information on what should be done when upgrading can be found on the official Kubernetes documentation.
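As a quick sanity check, you can list the IngressClass resources that the new controller registered; the name shown there (typically nginx) is the value ingressClassName must reference:
$ kubectl get ingressclass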
However, with the changes it requires at face value, from the information you've provided so far, your Ingress resource should look like this:
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ing
spec:
  ingressClassName: nginx
  rules:
  - host: SUBDOMAIN.DOMAIN.com
    http:
      paths:
      - backend:
          service:
            name: docker-hello-world-svc
            port:
              number: 8088
        path: /
        pathType: Prefix
  tls:
  - hosts:
    - SUBDOMAIN.DOMAIN.com
    secretName: tls-secret
Additionally, please provide the Nginx ingress controller deployment logs if you still have issues, as the Cloudflare error does not detail what could be wrong beyond giving a starting point.
As someone who uses Cloudflare and Nginx, I can say there are multiple reasons why you might receive a 520 error, so it would be better if we could reduce the scope of what the main issue could be. Let me know if you have any questions.
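A hedged example of pulling those logs, assuming the controller was installed into the ingress-nginx namespace under the default deployment name from the manifest referenced above:
$ kubectl logs -n ingress-nginx deployment/ingress-nginx-controller --tail=100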

Loadbalancing/Redirecting from Kubernetes NodePort Services

I have a bare metal cluster with a few NodePort deployments of my services (HTTP and HTTPS). I would like to access them from a single URL like myservices.local with (sub)paths.
The config could be something like the following (pseudo code):
/app1
http://10.100.22.55:30322
http://10.100.22.56:30322
# browser access: myservices.local/app1
/app2
https://10.100.22.55:31432
https://10.100.22.56:31432
# browser access: myservices.local/app2
/...
I tried a few things with HAProxy and nginx, but nothing really worked; for someone inexperienced with webservers and load balancers, the syntax and config style are rather confusing, in my opinion.
What is the easiest solution for a case like this?
The easiest and most common way is to use an NGINX Ingress. The NGINX Ingress is built around the Kubernetes Ingress resource, using a ConfigMap to store the NGINX configuration.
In the documentation we can read:
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.
internet
|
[ Ingress ]
--|-----|--
[ Services ]
An Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name based virtual hosting. An Ingress controller is responsible for fulfilling the Ingress, usually with a load balancer, though it may also configure your edge router or additional frontends to help handle the traffic.
An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically uses a service of type Service.Type=NodePort or Service.Type=LoadBalancer.
This is exactly what you want to achieve.
The first thing you need to do is to install the NGINX Ingress Controller in your cluster. You can follow the official Installation Guide.
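For example, the same manifest referenced earlier on this page installs it for a cloud provider (bare-metal setups have their own variant under the baremetal provider directory of the same repository):
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/cloud/deploy.yaml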
An Ingress always points to a Service, so you need a Deployment, a Service, and an NGINX Ingress.
Here is an example of an application similar to your example.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: app1
  name: app1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: app1
    spec:
      containers:
      - name: app1
        image: nginx
        imagePullPolicy: Always
        ports:
        - containerPort: 5000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: app2
  name: app2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app2
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: app2
    spec:
      containers:
      - name: app2
        image: nginx
        imagePullPolicy: Always
        ports:
        - containerPort: 5001
---
apiVersion: v1
kind: Service
metadata:
  name: app1
  labels:
    app: app1
spec:
  type: ClusterIP
  ports:
  - port: 5000
    protocol: TCP
    targetPort: 5000
  selector:
    app: app1
---
apiVersion: v1
kind: Service
metadata:
  name: app2
  labels:
    app: app2
spec:
  type: ClusterIP
  ports:
  - port: 5001
    protocol: TCP
    targetPort: 5001
  selector:
    app: app2
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress # ingress resource
metadata:
  name: myservices
  labels:
    app: myservices
spec:
  rules:
  - host: myservices.local # only match connections to myservices.local
    http:
      paths:
      - path: /app1
        backend:
          serviceName: app1
          servicePort: 5000
      - path: /app2
        backend:
          serviceName: app2
          servicePort: 5001
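Assuming myservices.local resolves to a node running the ingress controller (on bare metal, e.g. via an /etc/hosts entry, which is an assumption here), the apps would then be reachable as:
$ curl http://myservices.local/app1
$ curl http://myservices.local/app2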

I need help understanding kubernetes architecture best practices

I have 2 nodes on GCP in a Kubernetes cluster, and a load balancer in GCP as well. This is a regular cluster (not GKE). I am trying to expose my front-end service to the world, and I am trying nginx-ingress with type: NodePort as a solution. Where should my load balancer be pointing? Is this a good architecture approach?
world --> GCP-LB --> nginx-ingress-resource(GCP k8s cluster) --> services(pods)
To access my site I would have to point the LB to the worker node IP where the nginx pod is running. Is that bad practice? I am new to this subject and trying to understand.
Thank you
deployservice:
apiVersion: v1
kind: Service
metadata:
  name: mycha-service
  labels:
    run: mycha-app
spec:
  ports:
  - port: 80
    targetPort: 3000
    protocol: TCP
  selector:
    app: mycha-app
nginxservice:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  labels:
    app: nginx-ingress
spec:
  type: NodePort
  ports:
  - nodePort: 31000
    port: 80
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    targetPort: 443
    protocol: TCP
    name: https
  selector:
    name: nginx-ingress
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  labels:
    run: nginx-ingress
spec:
  type: NodePort
  ports:
  - nodePort: 31000
    port: 80
    targetPort: 3000
    protocol: TCP
  selector:
    app: nginx-ingress
nginx-resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mycha-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: mycha-service
          servicePort: 80
This configuration is not working.
When you use an ingress in front of your workload pods, the service type for the workload pods should always be ClusterIP, because you are not exposing the pods directly outside the cluster.
But you do need to expose the ingress controller outside the cluster, either with a NodePort type service or with a LoadBalancer type service; for production, a LoadBalancer type service is recommended.
This is the recommended pattern.
Client -> LoadBalancer -> Ingress Controller -> Kubernetes Pods
An ingress controller avoids the use of kube-proxy and the load balancing that kube-proxy provides; you can configure layer 7 load balancing in the ingress itself.
The best practice for exposing an application is:
World > LoadBalancer/NodePort (for connecting to the cluster) > Ingress (mostly to redirect traffic) > Service
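As a minimal sketch of that pattern, a LoadBalancer Service in front of the controller could look like this (the selector assumes the controller pods carry the label app: nginx-ingress; adjust it to the labels your controller deployment actually uses):

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-lb
spec:
  type: LoadBalancer
  selector:
    app: nginx-ingress   # assumed controller pod label
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443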
If you are using Google Cloud Platform, I would use GKE, as it is optimized for containers and configures many things automatically for you.
Regarding your issue, I also couldn't obtain an IP address for the LB (it stays in <Pending> state); however, you can expose your application using a NodePort and the VMs' IPs. I will try a few other configs to obtain an ExternalIP and will edit this answer.
Below is one example of how to expose your app using kubeadm on GCE.
On GCE, your VM already has an ExternalIP. This way you can just use a Service with NodePort and an Ingress to redirect traffic to the proper services.
Deploy the NGINX Ingress using Helm 3, as Tiller is not required anymore ($ helm install nginx stable/nginx-ingress).
By default it will deploy a service of LoadBalancer type, but it won't get an ExternalIP and will be stuck in <Pending> state. You have to change the type to NodePort and apply the changes:
$ kubectl edit svc nginx-nginx-ingress-controller
By default this opens the vi editor; if you want another one, you need to specify it:
$ KUBE_EDITOR="nano" kubectl edit svc nginx-nginx-ingress-controller
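Alternatively, a non-interactive way to make the same change (assuming the same service name) is kubectl patch:
$ kubectl patch svc nginx-nginx-ingress-controller -p '{"spec": {"type": "NodePort"}}'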
Now you can deploy service, deployment and ingress.
apiVersion: v1
kind: Service
metadata:
  name: fs
spec:
  selector:
    key: app
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fd
spec:
  replicas: 1
  selector:
    matchLabels:
      key: app
  template:
    metadata:
      labels:
        key: app
    spec:
      containers:
      - name: hello1
        image: gcr.io/google-samples/hello-app:1.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mycha-deploy
  labels:
    app: mycha-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mycha-app
  template:
    metadata:
      labels:
        app: mycha-app
    spec:
      containers:
      - name: mycha-container
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: mycha-service
  labels:
    app: mycha-app
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  selector:
    app: mycha-app
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: my.pod.svc
    http:
      paths:
      - path: /mycha
        backend:
          serviceName: mycha-service
          servicePort: 80
      - path: /hello
        backend:
          serviceName: fs
          servicePort: 80
service/fs created
deployment.apps/fd created
deployment.apps/mycha-deploy created
service/mycha-service created
ingress.extensions/two-svc-ingress created
$ kubectl get svc nginx-nginx-ingress-controller
NAME                             TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
nginx-nginx-ingress-controller   NodePort   10.105.247.148   <none>        80:31143/TCP,443:32224/TCP   97m
Now you should use your VM's ExternalIP (worker VM) with the port from the NodePort service. My VM's ExternalIP: 35.228.133.12, service: 80:31143/TCP,443:32224/TCP
IMPORTANT
If you curl your VM on that port at this point, you will get a timeout:
$ curl 35.228.235.99:31143
curl: (7) Failed to connect to 35.228.235.99 port 31143: Connection timed out
As you are doing this manually, you also need to add a firewall rule to allow traffic from outside on this specific port or port range.
Information about the creation of firewall rules can be found here.
If you set proper values (open ports, set the IP range (0.0.0.0/0), etc.), you will be able to reach the service from your machine.
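For example, a sketch of such a rule with gcloud (the rule name is illustrative; the ports match the NodePort output above):
$ gcloud compute firewall-rules create allow-ingress-nodeports \
    --allow=tcp:31143,tcp:32224 --source-ranges=0.0.0.0/0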
Curl from my local machine:
$ curl -H "HOST: my.pod.svc" http://35.228.235.99:31143/mycha
<!DOCTYPE html>
...
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
$ curl -H "HOST: my.pod.svc" http://35.228.235.99:31143/hello
Hello, world!
Version: 1.0.0
Hostname: fd-c6d79cdf8-dq2d6

Unable to get a websocket app work through kubernetes ingress-nginx in a non-root context path

Here is a sample WebSocket app that I'm trying to get to work behind a Kubernetes ingress-nginx controller.
Kubernetes yaml:
echo "
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: ws-example
spec:
replicas: 1
template:
metadata:
labels:
app: wseg
spec:
containers:
- name: websocketexample
image: nicksardo/websocketexample
imagePullPolicy: Always
ports:
- name: http
containerPort: 8080
env:
- name: podname
valueFrom:
fieldRef:
fieldPath: metadata.name
---
apiVersion: v1
kind: Service
metadata:
name: ws-example-svc
labels:
app: wseg
spec:
type: NodePort
ports:
- port: 80
targetPort: 8080
protocol: TCP
selector:
app: wseg
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ws-example-svc
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: myhostname.com
http:
paths:
- backend:
serviceName: ws-example-svc
servicePort: 80
path: /somecontext
" | kubectl create -f -
I get this error when I try to connect using a WebSocket client web page such as http://www.websocket.org/echo.html:
WebSocket connection to 'ws://myhostname.com/somecontext/ws?encoding=text' failed: Error during WebSocket handshake: Unexpected response code: 400
The version of ingress-nginx is 0.14.0, which supports WebSockets.
Update: I'm able to access the WebSocket on the running pod directly when I port-forward from my localhost to the pod's port.
[rpalaniappan@sdgl15280a331:~/git/zalenium] $ kubectl get pods -l app=wseg
NAME                          READY   STATUS    RESTARTS   AGE
ws-example-5dddb98cfb-vmdt5   1/1     Running   0          5h
[rpalaniappan@sdgl15280a331:~/git/zalenium] $ kubectl port-forward ws-example-5dddb98cfb-vmdt5 8080:8080
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080
Handling connection for 8080
[rpalaniappan@sdgl15280a331:~/git/zalenium] $ wscat -c ws://localhost:8080/ws
connected (press CTRL+C to quit)
< Connected to ws-example-5dddb98cfb-vmdt5
> hi
< hi
< ws-example-5dddb98cfb-vmdt5 reports time: 2018-12-28 01:19:00.788098266 +0000 UTC
So basically this:
nginx.ingress.kubernetes.io/rewrite-target: /
is stripping the /ws from the request (combined with path: /ws) that gets sent to the backend every time your browser tries to issue a WebSocket connection request, while the backend expects /ws when it receives a connection request.
If you specify both path: /mypath and /mypath/*, it works (it works for me):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ws-example-svc
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: myhostname.com
    http:
      paths:
      - backend:
          serviceName: ws-example-svc
          servicePort: 80
        path: /mypath
      - backend:
          serviceName: ws-example-svc
          servicePort: 80
        path: /mypath/*
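With that in place, a quick test through the ingress would look like this (assuming myhostname.com resolves to the controller and the backend serves its socket at /ws, as in the port-forward test above):
$ wscat -c ws://myhostname.com/mypath/ws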
https://kubernetes.github.io/ingress-nginx/user-guide/miscellaneous/#websockets
If the NGINX ingress controller is exposed with a service
type=LoadBalancer make sure the protocol between the loadbalancer and
NGINX is TCP.
Sample AWS L4 Service https://github.com/kubernetes/ingress-nginx/blob/master/deploy/provider/aws/service-l4.yaml#L11
# Enable PROXY protocol
service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
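In context, that annotation sits on the controller's Service; here is a trimmed sketch based on the linked AWS example (metadata and ports shortened for illustration):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  annotations:
    # Enable PROXY protocol so client IPs survive the TCP load balancer
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: https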

Kubernetes nginx ingress is not resolving services

Cloud: Google Cloud Platform.
I have the following configuration
kind: Deployment
apiVersion: apps/v1
metadata:
  name: api
spec:
  replicas: 2
  selector:
    matchLabels:
      run: api
  template:
    metadata:
      labels:
        run: api
    spec:
      containers:
      - name: api
        image: gcr.io/*************/api
        ports:
        - containerPort: 8080
        livenessProbe:
          httpGet:
            path: /_ah/health
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 5
---
kind: Service
apiVersion: v1
metadata:
  name: api
spec:
  selector:
    run: api
  type: NodePort
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
---
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: main-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - http:
      paths:
      - path: /api/*
        backend:
          serviceName: api
          servicePort: 8080
All set. GKE says that all deployments are okay, the pod counts are met, and the main ingress with the nginx-ingress controllers is set up as well. But I'm not able to reach any of the services. Not even an application-specific 404. Nothing, as if it's not resolved at all.
Another related question concerns the entrypoints. The first one goes through main-ingress, which created its own LoadBalancer with its own IP address. The second one is the address from the nginx-ingress-controller. The second one at least returns a 404 from the default backend, but it also does not point to the expected api service.
