How to whitelist an nginx ingress custom port

I have an nginx ingress in Kubernetes with both a whitelist (handled by an nginx.ingress.kubernetes.io/whitelist-source-range annotation) and a custom port mapping (which exposes an SFTP server's port 22 via a --tcp-services-configmap ConfigMap). The whitelist works great for ports 80 and 443, but it does not work for 22. How do I whitelist my custom port?
The configuration looks roughly like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  ...
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.33.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
            - name: sftp
              containerPort: 22
  ...
kind: Ingress
metadata:
  name: {{ .controllerName }}
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/whitelist-source-range: {{ .ipAllowList }}
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
data:
  22: "default/sftp:22"
UPDATE
Thanks to #jordanm I discovered that I can restrict IP addresses for all ports via loadBalancerSourceRanges on the LoadBalancer Service rather than in nginx:
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  loadBalancerIP: {{ .loadBalancerIp }}
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https
    - name: sftp
      port: 22
      targetPort: sftp
  loadBalancerSourceRanges:
    {{ .ipAllowList }}
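Note that loadBalancerSourceRanges is enforced by the cloud provider's load balancer/firewall, which is why it also covers the TCP port 22 traffic that the nginx annotation cannot. For illustration only — assuming {{ .ipAllowList }} renders to a YAML list of CIDR blocks (these addresses are made up) — the rendered field would look something like:
  loadBalancerSourceRanges:
    - 203.0.113.0/24
    - 198.51.100.17/32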

First, take a look at this issue: ip-whitelist-support.
IPs are not whitelisted for TCP services; an alternative would be to put a separate firewall in front of the TCP services and whitelist the IPs at the firewall level.
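For example, on GCP that could be a firewall rule in front of the nodes — a sketch only; the rule name, target tag, and CIDR are illustrative:
gcloud compute firewall-rules create allow-sftp-from-office \
    --network=default \
    --allow=tcp:22 \
    --source-ranges=203.0.113.0/24 \
    --target-tags=ingress-nodes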
In the controller's nginx template, the whitelist is only rendered per HTTP location: for a specific location {{ $path }} the rules are wrapped in {{ if isLocationAllowed $location }}, so they never apply to the TCP services.
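For reference, the rendered nginx config ends up with allow/deny directives inside the HTTP location block only; illustratively (this is not an exact controller output):
location / {
    # generated from nginx.ingress.kubernetes.io/whitelist-source-range
    allow 203.0.113.0/24;
    deny all;
    ...
}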
Check the official Ingress documentation: ingress-kubernetes.
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.
An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically uses a service of type Service.Type=NodePort or Service.Type=LoadBalancer.
You must have an Ingress controller to satisfy an Ingress. Only creating an Ingress resource has no effect.
In other words, the Ingress resource instructs the ingress controller how to handle HTTP/HTTPS requests; the nginx ingress controller is the software that provides the layer-7 functionality/load balancing. If you are interested in the nginx ingress TCP support:
Ingress does not support TCP or UDP services. For this reason this Ingress controller uses the flags --tcp-services-configmap and --udp-services-configmap
See: exposing-tcp-udp-services
If you need more granular control while working with your TCP service, you should consider the L4 load-balancing/firewall settings provided by your cloud provider.

Related

I want to change my WordPress pod domain name

I hope you are all doing great today!
Here is my situation:
I have 2 WordPress websites (identical):
--the 1st one is an App Service WordPress site in Azure with a domain name, e.g. https://wordpress.azurewebsites.net
--the 2nd one is in an AKS cluster as a pod, with a load balancer that exposes it to the internet with a public IP
What I want to do:
I want to take the domain name from the App Service and give it to the AKS pod.
What I did:
I changed the domain name from the dashboard and changed the load balancer's public IP address, and it didn't work; now I can't access the dashboard from the load balancer IP address either.
I'm new to Kubernetes; I hope someone can guide me in the right direction on how to do it.
Seems like you are missing an ingress controller. You could, for example, install ingress-nginx and expose the ingress with this Service config:
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  loadBalancerIP: 53.1.1.1
  ports:
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
      appProtocol: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
You can now create a service for your app:
apiVersion: v1
kind: Service
metadata:
  name: app-service   # underscores are not valid in Kubernetes resource names
  namespace: app
spec:
  type: ClusterIP
  ports:
    - name: service
      port: 80
  selector:
    app: your-app
Then you can expose your app with an Ingress resource:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  namespace: app
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
    - hosts:
        - wordpress.azurewebsites.net
  rules:
    - host: wordpress.azurewebsites.net
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-service
                port:
                  number: 80
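Finally, the domain has to resolve to the ingress controller's external IP. A quick check — a sketch, using the service name from the config above:
$ kubectl get svc -n ingress-nginx ingress-nginx-controller
# note the EXTERNAL-IP column, then point the domain's DNS A record at it
$ curl -k https://wordpress.azurewebsites.net/
One caveat: a *.azurewebsites.net hostname is managed by Azure App Service, so you generally cannot repoint it at an AKS load balancer via public DNS; you would normally use a custom domain you control instead.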

Configuring Static IP address with Ingress Nginx Sticky Session on Azure Kubernetes

I am trying to configure an additional layer of sticky sessions in my current Kubernetes architecture. Instead of routing every request through the main LoadBalancer service, I want to route the requests through an upper layer of nginx sticky sessions. I am following the guide at https://kubernetes.github.io/ingress-nginx/examples/affinity/cookie/
I am using Azure Cloud for my cluster deployment. Previously, using a Service of LoadBalancer type would automatically generate an external IP address for users to connect to my cluster. Now I need to configure a static IP address for my users to connect to, with the nginx ingress in place. How can I do so? I followed the guide here - https://github.com/kubernetes/ingress-nginx/tree/master/docs/examples/static-ip - but the external address of the Ingress is still empty!
What did I do wrong?
# nginx-sticky-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-lb
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  ports:
    - port: 80
      name: http
      targetPort: 80
    - port: 443
      name: https
      targetPort: 443
  selector:
    # Selects nginx-ingress-controller pods
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
# nginx-sticky-controller.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
    spec:
      terminationGracePeriodSeconds: 60
      containers:
        - image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.31.0
          name: nginx-ingress-controller
          ports:
            - containerPort: 80
              hostPort: 80
            - containerPort: 443
              hostPort: 443
          resources:
            limits:
              cpu: 0.5
              memory: "0.5Gi"
            requests:
              cpu: 0.5
              memory: "0.5Gi"
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          args:
            - /nginx-ingress-controller
            - --publish-service=$(POD_NAMESPACE)/nginx-ingress-lb
# nginx-sticky-server.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-nginx
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "nginx-sticky-server"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/affinity-mode: persistent
    nginx.ingress.kubernetes.io/session-cookie-hash: sha1
spec:
  rules:
    - http:
        paths:
          - backend:
              # This assumes http-svc exists and routes to healthy endpoints.
              serviceName: my-own-service-master
              servicePort: http
OK, I got it working. I think the difference lies in the cloud provider you are using; for Azure Cloud you should follow their documentation and their way of implementing an ingress controller in the Kubernetes cluster.
Link over here for deploying the ingress controller. Their way of creating the public IP address within the Kubernetes cluster and linking it up with the ingress controller works; I can confirm it as of the time of writing.
Once I was done with the steps in the link above, I could apply the ingress .yaml file as usual, i.e. kubectl apply -f nginx-sticky-server.yaml, to set up the nginx sticky session. If the service name and service port stated in your ingress .yaml file are correct, the nginx ingress controller should redirect your user requests to the correct service.
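For reference, the Azure approach boils down to creating a static public IP and handing it to the controller at install time. A sketch only — the resource group, IP name, and chart values follow the AKS docs and will differ for your cluster (this assumes the ingress-nginx Helm repo is already added):
# Create a static public IP in the cluster's node resource group
az network public-ip create \
    --resource-group MC_myResourceGroup_myAKSCluster_eastus \
    --name myIngressPublicIP \
    --sku Standard \
    --allocation-method static \
    --query publicIp.ipAddress -o tsv

# Install ingress-nginx pinned to that IP
helm install nginx-ingress ingress-nginx/ingress-nginx \
    --set controller.service.loadBalancerIP="<the IP printed above>"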

Kubernetes NGINX-INGRESS Do I need an NGINX Service running?

I am attempting to create an NGINX ingress (locally at first, then to be deployed to AWS behind a load balancer). However, I am new to Kubernetes, and while I understand the Ingress model for NGINX, the configurations are confusing me as to whether I should be deploying an NGINX-INGRESS Service, an Ingress, or both.
I am working with multiple Flask apps that I would like to have routed by path (/users, /content, etc.). My services are named users-service on port 8000 (their container port is 8000 as well).
In this example an Ingress is defined. However, when I apply an Ingress (in the same namespace as my Flask apps) there is no response from http://localhost.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-name
  namespace: my-namespace
spec:
  rules:
    - http:
        paths:
          - path: /users
            backend:
              serviceName: users-service
              servicePort: 8000
          - path: /content
            backend:
              serviceName: content-service
              servicePort: 8000
Furthermore, looking at the nginx-ingress "Deployment" docs, under Docker for Mac (which I assume I can use, as I am using Docker on macOS) they define a Service like so:
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
---
This seems to function for me (when I open localhost I get the nginx "not found" page), but it is a service in a different namespace than my apps, and there is no association between ports 80/443 and my service ports.
For reference here is one of my deployment/service definitions:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: users-service
  labels:
    app: users-service
  namespace: example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: users-service
  template:
    metadata:
      labels:
        app: users-service
    spec:
      containers:
        - name: users-service
          image: users-service:latest
          imagePullPolicy: Never
          ports:
            - containerPort: 8000
---
kind: Service
apiVersion: v1
metadata:
  name: users-service
spec:
  selector:
    app: users-service
  ports:
    - protocol: TCP
      port: 8000
Update
I followed a video for setting up an NGINX controller + Ingress. Here are the results; entering "localhost/users" does not work.
describe-ingress:
(base) MacBook-Pro-2018-i9:microservices jordanbaucke$ kubectl describe ingress users-ingress
Name: users-ingress
Namespace: default
Address:
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
*
/users users-service:8000 (10.1.0.75:8000)
Annotations:
Events: <none>
users-service:
(base) MacBook-Pro-2018-i9:microservices jordanbaucke$ kubectl describe svc users-service
Name: users-service
Namespace: default
Labels: <none>
Annotations:
Selector: app=users-service
Type: ClusterIP
IP: 10.100.213.229
Port: <unset> 8000/TCP
TargetPort: 8000/TCP
Endpoints: 10.1.0.75:8000
Session Affinity: None
Events: <none>
nginx-ingress
(base) MacBook-Pro-2018-i9:microservices jordanbaucke$ kubectl describe svc nginx-ingress
Name: nginx-ingress
Namespace: default
Labels: <none>
Annotations:
Selector: name=nginx-ingress
Type: NodePort
IP: 10.106.167.181
LoadBalancer Ingress: localhost
Port: http 80/TCP
TargetPort: 80/TCP
NodePort: http 32710/TCP
Endpoints: 10.1.0.74:80
Port: https 443/TCP
TargetPort: 443/TCP
NodePort: https 32240/TCP
Endpoints: 10.1.0.74:443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Now when I try to enter the combination NodeIP:NodePort/users, it does not connect.
From inside my nginx-ingress pod, calling:
curl 10.1.0.75:8000 or curl 10.100.213.229:8000 returns results.
For nginx (or any other) ingress to work properly:
The nginx ingress controller needs to be deployed on the cluster.
A LoadBalancer or NodePort type Service needs to be created to expose the nginx ingress controller via ports 80 and 443, in the same namespace where the nginx ingress controller is deployed. LoadBalancer works in supported public clouds (AWS etc.); NodePort works when running locally.
A ClusterIP type Service needs to be created for the workload pods, in the namespace where the workload pods are deployed.
The workload pods will be exposed via the nginx ingress, and you need to create the Ingress resource in the same namespace as the ClusterIP Service of your workload pods.
You will use either the LoadBalancer (if the nginx ingress controller was exposed via LoadBalancer) or NodeIP:NodePort (if it was exposed via NodePort) to access your workload pods.
So in this case, since Docker Desktop is being used, a LoadBalancer type service (ingress-nginx) to expose the nginx ingress controller will not work; it needs to be of NodePort type. Once done, the workload pods can be accessed via NodeIP:NodePort/users and NodeIP:NodePort/content. NodeIP:NodePort should give the nginx homepage as well.
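To verify — a sketch; the NodePort 32710 is taken from the describe output above, and the node IP will differ on your machine:
# find a node IP and the NodePort assigned to the controller
kubectl get nodes -o wide
kubectl get svc nginx-ingress
# then hit the ingress through the node
curl http://<node-ip>:32710/users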

I need help understanding kubernetes architecture best practices

I have 2 nodes on GCP in a Kubernetes cluster, and I also have a load balancer in GCP. This is a regular cluster (not GKE). I am trying to expose my front-end service to the world, and I am trying nginx-ingress with a NodePort type service as a solution. Where should my load balancer be pointing to? Is this a good architecture approach?
world --> GCP-LB --> nginx-ingress-resource(GCP k8s cluster) --> services(pods)
To access my site I would have to point the LB to the worker node IP where the nginx pod is running. Is this bad practice? I am new to this subject and trying to understand.
Thank you
deployservice:
apiVersion: v1
kind: Service
metadata:
  name: mycha-service
  labels:
    run: mycha-app
spec:
  ports:
    - port: 80
      targetPort: 3000
      protocol: TCP
  selector:
    app: mycha-app
nginxservice:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  labels:
    app: nginx-ingress
spec:
  type: NodePort
  ports:
    - nodePort: 31000
      port: 80
      targetPort: 80
      protocol: TCP
      name: http
    - port: 443
      targetPort: 443
      protocol: TCP
      name: https
  selector:
    name: nginx-ingress
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  labels:
    run: nginx-ingress
spec:
  type: NodePort
  ports:
    - nodePort: 31000
      port: 80
      targetPort: 3000
      protocol: TCP
  selector:
    app: nginx-ingress
nginx-resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mycha-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: mycha-service
              servicePort: 80
This configuration is not working.
When you use an ingress in front of your workload pods, the service for the workload pods will always be of type ClusterIP, because you are not exposing the pods directly outside the cluster.
But you do need to expose the ingress controller outside the cluster, either with a NodePort type service or with a LoadBalancer type service; for production it's recommended to use a LoadBalancer type service.
This is the recommended pattern:
Client -> LoadBalancer -> Ingress Controller -> Kubernetes Pods
The ingress controller avoids the use of kube-proxy and the load balancing kube-proxy provides; you can configure layer-7 load balancing in the ingress itself.
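As a side note, the two nginx-ingress Services in the question contradict each other (selector name: nginx-ingress vs app: nginx-ingress, and targetPort 80 vs 3000). A single consistent NodePort Service might look like this — a sketch; the selector must match whatever labels your controller pods actually carry:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
spec:
  type: NodePort
  selector:
    app: nginx-ingress   # must match the controller pod labels
  ports:
    - name: http
      nodePort: 31000
      port: 80
      targetPort: 80
      protocol: TCP
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP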
The best practice for exposing an application is:
World > LoadBalancer/NodePort (for connecting to the cluster) > Ingress (mostly to redirect traffic) > Service
If you are using Google Cloud Platform, I would use GKE, as it is optimized for containers and configures many things automatically for you.
Regarding your issue, I also couldn't obtain an IP address for the LB (it stays in <Pending> state); however, you can expose your application using NodePort and the VM's IP. I will try a few other configs to obtain an ExternalIP and will edit this answer.
Below is one example of how to expose your app using kubeadm on GCE.
On GCE your VM already has an ExternalIP. This way you can just use a Service with NodePort, plus an Ingress to redirect traffic to the proper services.
Deploy the Nginx Ingress using Helm 3, as Tiller is not required anymore ($ helm install nginx stable/nginx-ingress).
By default it will deploy the service with LoadBalancer type, but it won't get an ExternalIP and will be stuck in <Pending> state. You have to change it to NodePort and apply the changes.
$ kubectl edit svc nginx-nginx-ingress-controller
By default it will open the vi editor; if you want another one you need to specify it:
$ KUBE_EDITOR="nano" kubectl edit svc nginx-nginx-ingress-controller
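Alternatively — a non-interactive sketch that makes the same change:
$ kubectl patch svc nginx-nginx-ingress-controller -p '{"spec": {"type": "NodePort"}}'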
Now you can deploy service, deployment and ingress.
apiVersion: v1
kind: Service
metadata:
  name: fs
spec:
  selector:
    key: app
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fd
spec:
  replicas: 1
  selector:
    matchLabels:
      key: app
  template:
    metadata:
      labels:
        key: app
    spec:
      containers:
        - name: hello1
          image: gcr.io/google-samples/hello-app:1.0
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mycha-deploy
  labels:
    app: mycha-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mycha-app
  template:
    metadata:
      labels:
        app: mycha-app
    spec:
      containers:
        - name: mycha-container
          image: nginx
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: mycha-service
  labels:
    app: mycha-app
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  selector:
    app: mycha-app
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: my.pod.svc
      http:
        paths:
          - path: /mycha
            backend:
              serviceName: mycha-service
              servicePort: 80
          - path: /hello
            backend:
              serviceName: fs
              servicePort: 80
service/fs created
deployment.apps/fd created
deployment.apps/mycha-deploy created
service/mycha-service created
ingress.extensions/two-svc-ingress created
$ kubectl get svc nginx-nginx-ingress-controller
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-nginx-ingress-controller NodePort 10.105.247.148 <none> 80:31143/TCP,443:32224/TCP 97m
Now you should use your VM's ExternalIP (the worker VM) with the port from the NodePort service. My VM ExternalIP: 35.228.133.12, service: 80:31143/TCP,443:32224/TCP.
IMPORTANT
If you were to curl your VM on that port now, it would time out:
$ curl 35.228.235.99:31143
curl: (7) Failed to connect to 35.228.235.99 port 31143: Connection timed out
As you are doing this manually, you also need to add a firewall rule to allow traffic from outside on this specific port or range.
Information about creation of Firewall Rules can be found here.
If you set the proper values (open ports, set the IP range (0.0.0.0/0), etc.) you will be able to reach the service from your machine.
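For example — a sketch only, assuming the default network; the rule name is illustrative and tcp:30000-32767 is the standard NodePort range:
$ gcloud compute firewall-rules create allow-nodeports \
    --network=default \
    --allow=tcp:30000-32767 \
    --source-ranges=0.0.0.0/0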
Curl from my local machine:
$ curl -H "HOST: my.pod.svc" http://35.228.235.99:31143/mycha
<!DOCTYPE html>
...
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
$ curl -H "HOST: my.pod.svc" http://35.228.235.99:31143/hello
Hello, world!
Version: 1.0.0
Hostname: fd-c6d79cdf8-dq2d6

websocket connection failed after establishing https in google ingress controller

I have deployed an application in Kubernetes which is served by the Google Ingress Controller (Service as ELB). The application is working fine, but the moment I apply the https-related configuration, https comes up but the websocket fails.
Below are the service file and configmap.
for http:
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
  annotations:
    # Enable PROXY protocol
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
    # Increase the ELB idle timeout to avoid issues with WebSockets or Server-Sent Events.
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '3600'
spec:
  type: LoadBalancer
  selector:
    app: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
data:
  use-proxy-protocol: "true"
for https:
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-1:2xxxxxxxxxxxxxxxxxxx56:certificate/3fxxxxxxxxxxxxxxxxxxxxxxxxxx80"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
    # Increase the ELB idle timeout to avoid issues with WebSockets or Server-Sent Events.
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '3600'
spec:
  type: LoadBalancer
  selector:
    app: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: http   # TLS terminates at the ELB, so 443 forwards to the plain-http backend port
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
data:
  use-proxy-protocol: "false"
Am I missing any annotations or data in the configmap? Please help me out.
I think the problem is the annotation:
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
The backend-protocol in ELBs must be TCP for websocket connections.
Also, I see you're using the Nginx Ingress Controller; you may want to set these variables in the config to avoid connections being closed:
proxy-read-timeout: "3600"
proxy-send-timeout: "3600"
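Putting both suggestions together — a sketch, reusing the Service annotations and ConfigMap from the question:
# On the Service: switch the backend protocol so the ELB passes WebSocket traffic through
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  use-proxy-protocol: "false"
  proxy-read-timeout: "3600"
  proxy-send-timeout: "3600"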
