My team has been using the Ambassador ingress (a.k.a. Emissary-ingress) and now wants to migrate to the NGINX ingress. I know how to migrate the Ambassador ingress resources (Mapping YAML) to NGINX Ingress resources.
But how do we migrate the API gateway part, i.e. Emissary-ingress, to the NGINX ingress?
Update:
Should we use this project, https://github.com/nginxinc/nginx-kubernetes-gateway, to replace the Ambassador API gateway (Emissary-ingress)?
Can someone please advise on this? Thank you.
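For the plain routing pieces, a Mapping usually translates fairly directly to an Ingress rule. A rough sketch (the resource names, hostname, and ports here are made up for illustration, not taken from any real config):

```yaml
# Emissary-ingress Mapping (before)
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote-backend
spec:
  hostname: api.example.com
  prefix: /quote/
  service: quote:80
---
# Roughly equivalent NGINX Ingress (after)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: quote-backend
spec:
  ingressClassName: nginx
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /quote
        pathType: Prefix
        backend:
          service:
            name: quote
            port:
              number: 80
```

Gateway-level features that have no Ingress equivalent (rate limiting, auth filters, etc.) are the part that needs a case-by-case mapping to NGINX annotations or to the Gateway API resources.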
Related
We have a Kubernetes cluster running in Azure (AKS) with an NGINX controller as the ingress. I would like to track all incoming requests. Solutions like Prometheus with Grafana do not work for us, because the tracking needs to be highly customized.
I have already found that Traefik implements middlewares (https://doc.traefik.io/traefik/middlewares/overview/), which would be a great solution. Is there a similar mechanism I can use with NGINX?
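The closest analog in ingress-nginx that I'm aware of is its snippet annotations, which let you inject custom NGINX directives per Ingress. A sketch (the hostname, service name, and injected header are placeholders for whatever your tracking needs):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tracked-app
  annotations:
    # Inject extra NGINX config for this Ingress only, e.g. to attach
    # request metadata that a downstream tracking service can consume.
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header X-Request-Start "t=${msec}";
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app
            port:
              number: 80
```

Note that in recent ingress-nginx versions snippet annotations can be disabled by the controller's `allow-snippet-annotations` setting, so check your controller config before relying on this.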
I have deployed a Linkerd service mesh, and my Kubernetes cluster is configured with the NGINX ingress controller as a DaemonSet; all the ingresses are working fine, as is Linkerd. Recently I added a traffic split to run a blue/green setup, and I can reach these services through separate ingress resources. I created an apex-web service as described here; if I reach this service internally, it works perfectly. I then created another ingress resource, but I'm not able to test the blue/green functionality from outside the cluster. I'd like to mention that I have meshed (injected the Linkerd proxy into) all my NGINX pods, but NGINX returns a "503 Service Temporarily Unavailable" message.
I went through the documentation and created the ingress following this guide; I can confirm that the annotations below were added to the ingress resources.
annotations:
  kubernetes.io/ingress.class: "nginx"
  nginx.ingress.kubernetes.io/configuration-snippet: |
    proxy_set_header l5d-dst-override $service_name.$namespace.svc.cluster.local:$service_port;
    grpc_set_header l5d-dst-override $service_name.$namespace.svc.cluster.local:$service_port;
But still no luck from outside the cluster.
I'm testing with the emojivoto demo app; all the traffic split and apex-web service definitions are in this training repository.
I'm not quite sure what went wrong or how to fix access from outside the cluster. I'd really appreciate it if anyone could assist me with this Linkerd blue/green issue.
I raised this question in the Linkerd Slack channel and got it fixed with wonderful support from the community. It seems NGINX doesn't like a service that doesn't have any endpoints. My configuration was correct; I was asked to change the service referenced in the traffic split to a service that has endpoints, and that fixed the issue.
In a nutshell, my traffic split was configured with the web-svc and web-svc-2 services. I changed the traffic split's spec.service to the same web-svc and it worked.
Here is the traffic split configuration after the update.
apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: web-svc-ts
  namespace: emojivoto
spec:
  # The root service that clients use to connect to the destination application.
  service: web-svc
  # Services inside the namespace with their own selectors, endpoints and configuration.
  backends:
  - service: web-svc
    # Identical to resources, 1 = 1000m
    weight: 500m
  - service: web-svc-2
    weight: 500m
Kudos to the Linkerd team, who supported me in fixing this issue. It is working like a charm.
tl;dr: The NGINX ingress requires a Service resource to have an Endpoints resource in order to be considered a valid destination for traffic. The architecture in the repo creates three Service resources, one of which acts as an apex and has no Endpoints resource because it has no selectors, so the NGINX ingress won't send traffic to it, and the leaf services will not get traffic as a result.
The example in the repo follows the SMI spec by defining a single apex service and two leaf services. The web-apex service does not have any endpoints, so NGINX will not send traffic to it.
According to the SMI spec, services can be self-referential, which means that a service can be both an apex and a leaf service. So, to use the NGINX ingress with this example, you can modify the TrafficSplit definition to change the spec.service value from web-apex to web-svc:
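A selector-less apex Service looks roughly like this (a sketch; the port numbers are assumptions, not copied from the repo). Because there is no selector, Kubernetes creates no Endpoints object for it, which is exactly what trips up NGINX:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-apex
  namespace: emojivoto
spec:
  # No selector here: Kubernetes will not create an Endpoints object,
  # so the NGINX ingress sees no valid backends for this Service.
  ports:
  - port: 80
    targetPort: 8080
```

You can confirm the symptom with `kubectl get endpoints web-apex -n emojivoto`, which will show no addresses for such a Service.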
apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: web-svc-ts
  namespace: emojivoto
spec:
  # The root service that clients use to connect to the destination application.
  service: web-svc
  # Services inside the namespace with their own selectors, endpoints and configuration.
  backends:
  - service: web-svc
    # Identical to resources, 1 = 1000m
    weight: 500m
  - service: web-svc-2
    weight: 500m
I already have NGINX handling reverse proxying and load balancing for my bare-metal servers and VMs, and I wonder if I can use the same instance for my Kubernetes cluster to expose services in load-balancer mode. If so, could I use it for both L4 and L7?
You can't use it as a Service of type LoadBalancer, because there is no cloud-provider API to manage an external NGINX instance. There are a couple of things you can do instead:
Create a Kubernetes Service exposed on a NodePort. Your architecture will look like this:
External NGINX -> Kubernetes NodePort Service -> Pods
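A sketch of such a NodePort Service (the names and ports here are illustrative, not from any real cluster):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort
  selector:
    app: my-app           # must match the pod labels
  ports:
  - port: 80              # in-cluster port
    targetPort: 8080      # container port
    nodePort: 30080       # external NGINX upstreams point at <node-ip>:30080
```

The external NGINX then lists each node's IP and the nodePort (30080 here) as upstream servers.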
Create a Kubernetes Ingress managed by an ingress controller; the most popular happens to be NGINX. Your architecture will then look something like this:
External NGINX -> Kubernetes Service (has to be NodePort) -> Ingress (NGINX) -> Backend Service -> Pods
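In the ingress-controller approach, the external NGINX only needs to reach the controller's NodePort; host- and path-based routing happens in cluster via standard Ingress resources. A sketch (hostname and backend are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  ingressClassName: nginx    # handled by the in-cluster NGINX ingress controller
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app
            port:
              number: 80
```

This gives you L7 routing inside the cluster, while the external NGINX can stay a simple L4/L7 pass-through.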
So we've got a big site with one NGINX config that handles everything, including SSL.
At the moment, the config is set up to route all traffic for subdomain.ourdomain.com to our exposed Kubernetes service.
When I visit subdomain.ourdomain.com, it returns a 502 Bad Gateway. I've triple-checked that the service inside my Kubernetes pod is running properly. I'm pretty certain there is something wrong with the Kubernetes config I'm using.
So what I've done:
Created a Kubernetes service
Exposed it using type LoadBalancer
Added the correct routing for our subdomain to our NGINX config
This is what the kubectl get services returns:
users <cluster_ip> <external_ip> 80/TCP 12m
This is what kubectl get endpoints returns:
kubernetes <kub_ip>:443 48d
redis-master 10.0.1.5:6379 48d
redis-slave 10.0.0.5:6379 48d
users 10.0.2.7:80 3m
All I want to do is route all traffic through our NGINX configuration to our Kubernetes service.
We tried routing all traffic to our Kubernetes cluster IP, but this didn't work.
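For reference, the Service the steps above describe would look roughly like this (a sketch; the labels and ports are assumptions, and a common cause of a 502 here is a targetPort that doesn't match the port the container actually listens on):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: users
spec:
  type: LoadBalancer
  selector:
    app: users        # must match the pod labels, or the endpoint list stays empty
  ports:
  - port: 80          # port exposed on <external_ip>; external NGINX proxies here
    targetPort: 80    # must be the port the container actually listens on
```

Since `kubectl get endpoints` already shows 10.0.2.7:80 for users, the selector is matching; the external NGINX should proxy to `<external_ip>:80`, not the cluster IP, which is only routable inside the cluster.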
So I built my Kubernetes cluster on AWS using kops.
I then deployed SocketCluster on my cluster using Baasil, which deploys 7 YAML files.
My problem is that the scc-ingress isn't getting any IP or endpoint, as I have not deployed an ingress controller.
According to the ingress controller docs, I am recommended to deploy an NGINX ingress controller.
I need easy, explained steps to deploy the NGINX ingress controller for my specific cluster.
To view the current status of my cluster in a nice GUI, see the screenshots below:
Deployments
Ingress
Pods
Replica Sets
Services
The answer is here https://github.com/kubernetes/kops/tree/master/addons/ingress-nginx
kubectl apply -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/ingress-nginx/v1.4.0.yaml
But obviously the scc-ingress file needed to be changed to have a host such as foo.bar.com.
We also needed to generate a self-signed SSL certificate using OpenSSL, as per this link: https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx/examples/tls
Finally, we had to add a CNAME record on Route 53 from foo.bar.com to the DNS name of the ELB that was created.
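Putting those changes together, the modified scc-ingress might look something like this (a sketch only; the backend service name, port, and secret name are assumptions, not taken from the Baasil files, and the API version matches the older Ingress schema of this era):

```yaml
apiVersion: extensions/v1beta1   # pre-networking.k8s.io Ingress schema
kind: Ingress
metadata:
  name: scc-ingress
spec:
  tls:
  - hosts:
    - foo.bar.com
    secretName: tls-secret       # holds the self-signed cert made with OpenSSL
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        backend:
          serviceName: scc       # hypothetical backend service name
          servicePort: 8000      # hypothetical backend port
```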