Accessing a service from another pod in Kubernetes - nginx

I have two services running in my k8s.
I am trying to access my wallet service from my user service, but my curl command just returns a 504 Gateway Timeout.
Here is my ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dev-ingress
  namespace: dev
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    # nginx.ingress.kubernetes.io/use-regex: "true"
    # nginx.ingress.kubernetes.io/rewrite-target: /api/v1$uri
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /api/v1/wallet
        pathType: Prefix
        backend:
          service:
            name: wallet-service
            port:
              number: 80
      - path: /api/v1/user
        pathType: Prefix
        backend:
          service:
            name: accounts-service
            port:
              number: 80
This is the way I passed the env into my account service:
http://wallet-service:3007
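For reference, a minimal sketch of how such a value might be passed in the account service's Deployment (the container and variable names here are assumptions, not from the question):
# fragment of the account service Deployment's container spec
containers:
- name: accounts                      # hypothetical container name
  image: <accounts-image>
  env:
  - name: WALLET_SERVICE_URL          # hypothetical variable name
    value: "http://wallet-service:3007"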
and I log the URL when hitting my endpoint
curl http://EXTERNAL_IP/api/v1/user/health/wallet
Every other unrelated endpoint works.
Any help is appreciated.
I am running Azure Kubernetes Service.

"I have two services running in my k8s."
Do you have an actual Kubernetes Service object for each of them?
apiVersion: ...
kind: Service
Check with kubectl get svc -A; you should see your services listed there.
Check that your pods are actually exposed by the services:
kubectl get endpoints -A
Are you running on a cloud provider (GCP, Azure, AWS, etc.)? If so, check your security configuration as well (an NSG for Azure, security groups for AWS, etc.).
Check in-cluster communication:
# Log into one of the pods
kubectl exec -it -n <namespace> <pod-name> -- sh
# Try to connect to the other service with the FQDN
curl -sv <service-name>.<namespace>.svc.cluster.local
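In this case, assuming the wallet service really listens on port 3007 (and given that both workloads live in the dev namespace per the Ingress), that would look something like:
curl -sv http://wallet-service.dev.svc.cluster.local:3007/api/v1/wallet/health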
Update:
You commented that you are running on Azure; check that the desired ports are open in the NSG.
Assuming you are running AKS, find the MC_xxxx resource group (AKS places the cluster's infrastructure resources, including the NSG, in that auto-generated group), then edit the NSG and open the desired ports.
You are trying to connect to http://wallet-service:3007/api/v1/wallet/health. Where did the 3007 port come from? Your Ingress sends traffic to wallet-service on port 80, so the Service's port mapping is worth double-checking.
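If the wallet container listens on 3007, the Service has to map a port to it explicitly. A minimal sketch, with assumed labels and ports (only the name, the namespace, and the 80/3007 pair come from the question):
apiVersion: v1
kind: Service
metadata:
  name: wallet-service
  namespace: dev
spec:
  selector:
    app: wallet              # assumed pod label
  ports:
  - name: http
    port: 80                 # the port the Ingress backend targets
    targetPort: 3007         # assumed container port
With this mapping, the in-cluster URL would be http://wallet-service (port 80), not http://wallet-service:3007; alternatively, keep port 3007 on the Service and point the Ingress backend at 3007 instead.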

Related

Using self-signed certificates in nginx Ingress

I'm migrating services into a Kubernetes cluster on minikube. These services require a self-signed certificate on load. Accessing a service via NodePort works perfectly and prompts for the certificate in the browser, but accessing it via the ingress host (the domain is mapped locally in /etc/hosts) presents a "Kubernetes Ingress Controller Fake Certificate" by Acme and skips my self-signed cert without any message.
TLS should be terminated inside the app, not in the Ingress, and the tls-acme: "false" flag does not work; I still get the fake cert.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    # decryption of tls occurs in the backend service
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/tls-acme: "false"
spec:
  rules:
  - host: admin.domain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: admin-service
            port:
              number: 443
When signing in, it should show the certificate prompt before loading (screenshot omitted).
minikube version: v1.15.1
kubectl version: 1.19
using ingress-nginx 3.18.0
The problem turned out to be a bug in Minikube, combined with having to enable SSL passthrough in the nginx controller (in addition to the annotation) with the flag --enable-ssl-passthrough=true.
I was doing all my cluster testing on a Minikube cluster v1.15.1 with Kubernetes v1.19.4, where SSL passthrough failed. After following the guidance in the ingress-nginx GitHub issue, I discovered that the issue didn't reproduce in kind, so I deployed my app on a new AWS cluster (k8s 1.18) and everything worked great.
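One way to add that flag, sketched under the assumption that the controller Deployment is named ingress-nginx-controller in the ingress-nginx namespace (adjust both names to your install):
kubectl -n ingress-nginx patch deployment ingress-nginx-controller \
  --type=json \
  -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--enable-ssl-passthrough"}]'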

Gitlab kubernetes integration

I have a custom Kubernetes cluster on a server with a public IP and DNS pointing to it (also a wildcard record).
Gitlab was configured with the cluster following this guide: https://gitlab.touch4it.com/help/user/project/clusters/index#add-existing-kubernetes-cluster
However, after installing Ingress, the ingress endpoint is never detected.
I tried patching the Service object in k8s, like so:
externalIPs:                   # was empty
- 1.2.3.4
externalTrafficPolicy: Local   # was Cluster
I suspect the problem is the empty load balancer status on the ingress controller's Service (scroll to the end of the output), obtained by calling:
# kubectl get service ingress-nginx-ingress-controller -n gitlab-managed-apps -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2019-11-20T08:57:18Z"
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.22.1
    component: controller
    heritage: Tiller
    release: ingress
  name: ingress-nginx-ingress-controller
  namespace: gitlab-managed-apps
  resourceVersion: "3940"
  selfLink: /api/v1/namespaces/gitlab-managed-apps/services/ingress-nginx-ingress-controller
  uid: c175afcc-0b73-11ea-91ec-5254008dd01b
spec:
  clusterIP: 10.107.35.248
  externalIPs:
  - 1.2.3.4 # (public IP)
  externalTrafficPolicy: Local
  healthCheckNodePort: 30737
  ports:
  - name: http
    nodePort: 31972
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    nodePort: 31746
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app: nginx-ingress
    component: controller
    release: ingress
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer: {}
But GitLab still can't find the ingress endpoint. I tried restarting the cluster and GitLab.
The network inspection in GitLab always shows this response:
...
name               ingress
status             installed
status_reason      null
version            1.22.1
external_ip        null
external_hostname  null
update_available   false
can_uninstall      false
...
Any ideas how to get a working ingress endpoint?
GitLab: 12.4.3 (4d477238500), k8s: 1.16.3-00
I had the exact same issue as you, and I finally figured out how to solve it.
The first thing to understand is that on bare metal you can't make this work without MetalLB, because MetalLB calls the required Kubernetes APIs that make the cluster accept the IP address you assign to a Service of type LoadBalancer.
So first step is to deploy MetalLB to your cluster.
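At the time, a manifest-based MetalLB install looked roughly like this (the version tag below is an assumption; check the MetalLB docs for the current procedure):
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/metallb.yaml
# on first install only: the secret used by the speakers' memberlist
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"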
Then you need another machine running a service like NGiNX or HAProxy, or whatever else can do some load balancing.
Last but not least, you have to give that load balancer machine's IP address to MetalLB so that it can assign it to the Service.
Usually MetalLB requires a range of IP addresses, but you can also give a single IP address, like I did:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: staging-public-ips
      protocol: layer2
      addresses:
      - 1.2.3.4/32
This way, MetalLB will assign the IP address to the Service of type LoadBalancer, and GitLab will finally find the IP address.
WARNING: MetalLB assigns each IP address only once. If you need several Services of type LoadBalancer, you will need as many machines running NGiNX/HAProxy, and you must add each machine's IP address to the MetalLB address pool.
For your information, I've posted all the technical details to my Gitlab issue here.

Using Kubernetes ingress controller as reverse proxy to other services in the cluster

I have a simple Kubernetes cluster on kops and AWS, serving a web app: a single HTML page and a few APIs, all running as services. I want to expose all endpoints (HTML and APIs) publicly for the web page to work.
I have exposed the HTML service as a LoadBalancer, and I am also using the nginx-ingress controller. I want to use the same LoadBalancer to expose the other APIs as well (using a different LoadBalancer for each service seems like a bad and expensive approach). This is something I was able to do in the on-premise version of the same application with an Nginx reverse proxy, by giving each API a different path in the nginx conf file.
However, I am not able to do the same in the cluster. I tried an Ingress, but somehow I don't get the desired result: if I add a path, e.g. path: "/mobiles-service", and the corresponding service for it, HTTP requests do not get routed to that service. Only the HTML service works, on the root path. Any help would be appreciated.
First you need to create an ingress controller for your kops cluster running on AWS:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/ingress-nginx/v1.6.0.yaml
Then check whether the ingress-nginx service was created by running:
kubectl get svc ingress-nginx -n kube-ingress
Then create your pods and a ClusterIP-type service for each of your apps, like the sample below:
kind: Service
apiVersion: v1
metadata:
  name: app1-service
spec:
  selector:
    app: app1
  ports:
  - port: <app-port>
Then create an ingress rule file like the sample below:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /app1
        backend:
          serviceName: app1-service
          servicePort: <app1-port>
      - path: /app2
        backend:
          serviceName: app2-service
          servicePort: <app2-port>
Once you deploy this ingress rule YAML, Kubernetes creates an Ingress resource in your cluster. The ingress controller running in your cluster is responsible for creating an HTTP(S) load balancer to route all external HTTP traffic (on port 80) to the backend app Services you exposed on the specified paths.
You can see the newly created ingress rule by running:
kubectl get ingress
And you will see output like below:
NAME              HOSTS   ADDRESS                                                                    PORTS   AGE
example-ingress   *       a886e57982736434e9a1890264d461398-830017012.us-east-2.elb.amazonaws.com   80      1m
At the relevant paths, like http://external-dns-name/app1 and http://external-dns-name/app2, you will reach your apps; at the root path /, you will get <default backend - 404>.
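A quick way to verify the routing once the ELB hostname resolves (the hostname below is the placeholder from the prose above):
curl -i http://<external-dns-name>/app1   # routed to app1-service
curl -i http://<external-dns-name>/       # returns: default backend - 404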

kubernetes expose nginx to static ip in gcp with ingress service configuration error

I have a couple of questions regarding Kubernetes ingress services [/controllers].
For example, I have an nginx frontend image that I am trying to run with kubectl:
kubectl run <deployment> --image <repo> --port <internal-nginx-port>
Then I tried to expose this to the outside world with a service:
kubectl expose deployment <deployment> --target-port <port>
Then I tried to create an ingress with the following nginx-ing.yaml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: urtutorsv2ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "coreos"
spec:
  backend:
    serviceName: <service>
    servicePort: <port>
My ingress.global-static-ip-name is correctly created and available in the Google Cloud console.
[I am assuming the service port here is the port I want on my "coreos" IP, so I set it to 80 initially, which didn't work; I then tried setting it to the port specified in the first step, but it still didn't work.]
So, the issue is that I am not able to access the frontend at either URL: http://COREOS_IP or http://COREOS_IP:<port>.
That is why I tried
kubectl expose deployment <deployment> --target-port <port> --type NodePort
to see if it worked with a NodePort, and I was able to access the frontend.
So, I am thinking there might be a configuration mistake here because of which I am not getting results with the ingress.
Can anyone here help debug / fix the issue ?
Yeah, the service is there. I checked the status with kubectl get services and kubectl describe service k8urtutorsv2, and it showed the service. I tried editing it and saved the nodePort value. The thing is, it works with the NodePort but not on 80 or 443.
You cannot directly expose a service on port 80 or 443.
The available range of exposed node ports is predefined in the kube-apiserver configuration by the service-node-port-range option, with the default value 30000-32767.
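On GKE you cannot change that apiserver flag, but a Service of type LoadBalancer still lets clients reach port 80/443, because the cloud load balancer listens on those ports and forwards to the allocated node port. A minimal sketch, with placeholder names:
apiVersion: v1
kind: Service
metadata:
  name: frontend-lb              # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: <deployment-label>      # must match your deployment's pod labels
  ports:
  - port: 80                     # port exposed by the cloud load balancer
    targetPort: <internal-nginx-port>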

Kubernetes whitelist-source-range blocks instead of whitelist IP

Running Kubernetes on GKE
Installed the Nginx controller from the latest stable release using helm.
Everything works well, except that adding the whitelist-source-range annotation results in me being completely locked out of my service.
Ingress config
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: staging-ingress
  namespace: staging
  annotations:
    kubernetes.io/ingress.class: nginx
    ingress.kubernetes.io/whitelist-source-range: "x.x.x.x, y.y.y.y"
spec:
  rules:
  - host: staging.com
    http:
      paths:
      - path: /
        backend:
          serviceName: staging-service
          servicePort: 80
I connected to the controller pod and checked the nginx config and found this:
# Deny for staging.com/
geo $the_real_ip $deny_5b3266e9d666401cb7ac676a73d8d5ae {
    default 1;
    x.x.x.x 0;
    y.y.y.y 0;
}
It looks like it is locking me out instead of whitelisting those IPs, and it is also locking out all other addresses... I get a 403 when coming from the staging.com host.
Yes. However, I figured it out by myself: your service has to have externalTrafficPolicy: Local enabled. That means the actual client IP is used instead of the internal cluster IP.
To accomplish this run
kubectl patch svc nginx-ingress-controller -p '{"spec":{"externalTrafficPolicy":"Local"}}'
Your nginx controller Service has to be set to externalTrafficPolicy: Local. That means the actual client IP will be used instead of the cluster's internal IP.
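In the Service manifest this is a single field under spec; a sketch (the rest of the Service stays unchanged):
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # preserve the client source IP for whitelisting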
You need to get the real service name from the kubectl get svc command. The service will look something like this:
NAME                                     TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)                      AGE
nobby-leopard-nginx-ingress-controller   LoadBalancer   10.0.139.37   40.83.166.29   80:31223/TCP,443:30766/TCP   2d
nobby-leopard-nginx-ingress-controller is the service name you want to use.
To finish, run
kubectl patch svc nobby-leopard-nginx-ingress-controller -p '{"spec":{"externalTrafficPolicy":"Local"}}'
When setting up a new nginx controller, you can use the command below:
helm install stable/nginx-ingress \
  --namespace kube-system \
  --set controller.service.externalTrafficPolicy=Local
to get an nginx ingress controller that honors the whitelist right after installation.
