Kubernetes on GCE: Ingress Timeout Configuration - nginx

I'm running Kubernetes on Google Compute Engine (GCE). I have an Ingress set up. Everything works perfectly except when I upload large files, the L7 HTTPS Load Balancer terminates the connection after 30 seconds. I know that I can bump this up manually in the "Backend Service", but I'm wondering if there is a way to do this from the Ingress spec. I worry that my manual tweak will get changed back to 30s later on.
The nginx ingress controller has a number of annotations that can be used to configure nginx. Does the GCE L7 Load Balancer have something similar?

This can now be configured within GKE by using a custom resource, BackendConfig:
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: my-bconfig
spec:
  timeoutSec: 60
Then configure your Service to use this configuration with an annotation:
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    beta.cloud.google.com/backend-config: '{"ports": {"80":"my-bconfig"}}'
spec:
  ports:
  - port: 80
  # ... other fields
See Configuring a backend service through Ingress
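As a rough usage sketch (the file names and verification command are illustrative, and assume the BackendConfig CRD is present, as it is on recent GKE versions), you would apply both manifests and then check the result:
kubectl apply -f backend-config.yaml
kubectl apply -f service.yaml
# Confirm the BackendConfig exists and inspect its spec
kubectl get backendconfig my-bconfig -o yaml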

For anyone else looking for the solution to this problem: the timeout and other settings (e.g. enabling CDN) can only be configured manually at the moment.
Follow this kubernetes/ingress-gce issue for the latest updates on a long-term solution.
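If you do make the change manually, something along these lines works with the gcloud CLI (the backend service name below is hypothetical; list the backend services first to find the one your Ingress created):
gcloud compute backend-services list
gcloud compute backend-services update k8s-be-XXXXX--example --global --timeout=300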

Related

How to automatically assign node IP to NGINX ingress controller?

I have a single-node k8s cluster on DigitalOcean. I used to run the NGINX Ingress Controller with a ClusterIP service, but at some point it stopped working, so I switched it to a LoadBalancer service instead. That costs me $12, which is too much for my pet projects.
So as a temporary measure I am using a NodePort service with the machine's external IP hardcoded. Here's how the Helm release configuration looks now:
controller:
  enableExternalDNS: true
  service:
    externalIPs:
    - 46.XXX.XXX.XXX
    type: NodePort
rbac:
  create: true
But I don't like having it hardcoded like that. Is there a way to tell the NGINX Ingress Controller to use the node's external IP without hardcoding it?
Or maybe there's some other way to expose the service without using a LoadBalancer?
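One pattern people often reach for here (offered only as a sketch, not a definitive answer) is to run the controller on the host network so it binds to the node's IP directly, avoiding both the LoadBalancer and the hardcoded externalIPs list. The value names below assume a recent ingress-nginx Helm chart and may differ in older charts:
controller:
  kind: DaemonSet
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet
  service:
    enabled: false
With hostNetwork enabled the controller listens on ports 80/443 of whichever node it runs on, so whatever external IP the node has is the IP that serves traffic.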

Difference between nginx ingress controller kind:service vs kind: Ingress vs kind: configMap in Kubernetes

I am trying to understand the difference between kind: Service, kind: Ingress, and kind: ConfigMap for the Nginx ingress controller in Kubernetes, but I am a little unclear.
Is kind: Service the same as kind: Ingress?
kind represents the type of Kubernetes object to be created from the YAML file.
Kubernetes objects are persistent entities in the Kubernetes system. Kubernetes uses these entities to represent the state of your cluster. Specifically, they can describe:
What containerized applications are running (and on which nodes)
The resources available to those applications
The policies around how those applications behave, such as restart policies, upgrades, and fault-tolerance
ConfigMap Object: A ConfigMap is an API object used to store non-confidential data in key-value pairs. Pods can consume ConfigMaps as environment variables, command-line arguments, or as configuration files in a volume.
Ingress Object: An API object that manages external access to the services in a cluster, typically HTTP. Ingress may provide load balancing, SSL termination and name-based virtual hosting.
Service Object: In Kubernetes, a Service is an abstraction which defines a logical set of Pods and a policy by which to access them (sometimes this pattern is called a micro-service).
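To make the distinction concrete, here is a minimal, purely illustrative sketch of the three kinds side by side (the names, the ConfigMap key, and the hostname are made up; the ingress-nginx documentation lists which ConfigMap keys the controller actually reads):
apiVersion: v1
kind: ConfigMap              # key-value configuration, e.g. tuning the nginx controller
metadata:
  name: nginx-configuration
data:
  proxy-body-size: "10m"
---
apiVersion: v1
kind: Service                # a stable virtual IP and port in front of a set of Pods
metadata:
  name: my-app
spec:
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress                # HTTP routing rules from outside the cluster to Services
metadata:
  name: my-app
spec:
  rules:
  - host: my-app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app
            port:
              number: 80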

Linkerd traffic split with Nginx Ingress Controller

I have deployed a Linkerd service mesh, and my Kubernetes cluster is configured with the Nginx ingress controller as a DaemonSet. All the ingresses are working fine, and so is Linkerd. Recently, I added traffic split functionality to run my blue/green setup, and I can reach these services through separate ingress resources. I have created an apex-web service as described here. If I reach this service internally, it works perfectly. I have created another ingress resource, but I'm not able to test the blue/green functionality from outside of my cluster. I'd like to mention that I have meshed (injected the Linkerd proxy into) all my Nginx pods, but Nginx returns a "503 Service Temporarily Unavailable" message.
I went through the documentation and created the ingress following this; I can confirm that the annotations below were added to the ingress resources:
annotations:
  kubernetes.io/ingress.class: "nginx"
  nginx.ingress.kubernetes.io/configuration-snippet: |
    proxy_set_header l5d-dst-override $service_name.$namespace.svc.cluster.local:$service_port;
    grpc_set_header l5d-dst-override $service_name.$namespace.svc.cluster.local:$service_port;
But still no luck from outside of the cluster.
I'm testing with the given emojivoto app, and all the traffic split and apex-web services are in this training repository.
I'm not quite sure what went wrong or how to fix this from outside the cluster. I'd really appreciate it if anyone could assist me in fixing this Linkerd blue/green issue.
I raised this question in the Linkerd Slack channel and got it fixed with wonderful support from the community. It seems Nginx doesn't like a service that doesn't have an endpoint. My configuration was correct; I was asked to point the traffic split at a service with an endpoint, and that fixed the issue.
In a nutshell, my traffic split was configured with the web-svc and web-svc-2 services. I changed the traffic split's spec.service to web-svc and it worked.
Here is the traffic split configuration after the update:
apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: web-svc-ts
  namespace: emojivoto
spec:
  # The root service that clients use to connect to the destination application.
  service: web-svc
  # Services inside the namespace with their own selectors, endpoints and configuration.
  backends:
  - service: web-svc
    # Identical to resources, 1 = 1000m
    weight: 500m
  - service: web-svc-2
    weight: 500m
Kudos to the Linkerd team, who supported me in fixing this issue. It is working like a charm.
tl;dr: The nginx ingress requires a Service resource to have an Endpoint resource in order to be considered a valid destination for traffic. The architecture in the repo creates three Service resources, one of which acts as an apex and has no Endpoint resources because it has no selectors, so the nginx ingress won't send traffic to it, and the leaf services will not get traffic as a result.
The example in the repo follows the SMI Spec by defining a single apex service and two leaf services. The web-apex service does not have any endpoints, so nginx will not send traffic to it.
According to the SMI Spec services can be self-referential, which means that a service can be both an apex and a leaf service, so to use the nginx ingress with this example, you can modify the TrafficSplit definition to change the spec.service value from web-apex to web-svc:
apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: web-svc-ts
  namespace: emojivoto
spec:
  # The root service that clients use to connect to the destination application.
  service: web-svc
  # Services inside the namespace with their own selectors, endpoints and configuration.
  backends:
  - service: web-svc
    # Identical to resources, 1 = 1000m
    weight: 500m
  - service: web-svc-2
    weight: 500m
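A quick way to see why the original configuration failed is to list the Endpoints in the namespace (an illustrative command; the resource names come from the emojivoto example):
kubectl -n emojivoto get endpoints
A selector-less apex service such as web-apex shows no addresses, which is exactly the condition that makes the nginx ingress return 503 for it.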

K8s "Nginx ingress controller" is creating fewer connections than required

I am using the "nginx-ingress-controller" so I can use the 'active connections' metric in my HPA, but the Nginx ingress is creating only a few connections to handle a large number of users.
I am new to Nginx ingress, so I don't know if this is expected behavior. I was expecting 'active connections' to be close to the number of concurrent users. Now, because of the low connection count, my application is not scaling.
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: ggs-hpa
spec:
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: External
    external:
      metricName: custom.googleapis.com|nginx-ingress-controller|nginx_connnections
      targetAverageValue: 6
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ggs
I am using JMeter to create these users, and my deployment is on a GKE cluster. I am using the default settings of the 'nginx-ingress-controller', with no customizations to the Nginx config.
Can someone please help me understand this connection behavior? Thank you.
Sorry, I forgot to post the solution to the issue. I was not using a 'nodeSelector', so the node running the nginx pod was also hosting other pods that needed a high number of connections, and a node only has a limited number of connections and ports (even after increasing them by tweaking Linux settings).
Solution:
I created labels for the nodes and used them in my nodeSelector to control where the applications are deployed (a sketch follows below). The labels I created:
ingress
monitoring
geocoding (my application)
testing
Now everything is working fine.
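For reference, a minimal sketch of the pattern, assuming a node has been labeled with something like kubectl label node <node-name> role=ingress (the label key and value here are illustrative):
# Excerpt from the Deployment/DaemonSet pod template
spec:
  template:
    spec:
      nodeSelector:
        role: ingress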

Unable to get real remote IP in AKS with advanced networking

We have two AKS clusters for different environments. Both use an Nginx server as a custom ingress. By that I mean that it acts like an ingress, but it is just a normal Nginx deployment behind a service. There are several good reasons for that setup, the main one being that Ingress did not exist in AKS when we started.
The services are defined like this:
apiVersion: v1
kind: Service
metadata:
  name: <our name>
  namespace: <our namespace>
spec:
  ports:
  - port: 443
    targetPort: 443
  selector:
    app: <our app>
  loadBalancerIP: <our ip>
  type: LoadBalancer
  externalTrafficPolicy: Local
We have configured Nginx with the real_ip module like this:
real_ip_header X-Original-Forwarded-For;
set_real_ip_from 10.0.0.0/8; # or whatever ip is correct
One environment uses the old basic networking, networkPlugin=kubenet. There, Nginx logs the real client IP addresses and can use them for access controls. The other uses advanced networking, networkPlugin=azure. There, Nginx logs the IP address of one of the nodes, which is useless. Both the X-Original-Forwarded-For and the standard X-Forwarded-For headers are empty, and of course the source IP is that of the node, not the client.
Is there a way around this? If at all possible, we would like to avoid defining a "real" ingress, as our own Nginx server contains custom configuration that would be hard to duplicate in such a setup; plus, it is not clear that a standard ingress would help either.
Microsoft should have fixed this by now for real ingresses. However, apparently the fix doesn't cover our case, where Nginx runs as a pod behind a service with advanced networking. We were told to use the workaround posted by denniszielke in https://github.com/Azure/AKS/issues/607, where the iptables rules on all nodes are updated regularly. Quite dirty in my view, but it works.
We still have the service defined as above with "externalTrafficPolicy: Local", and we have installed the ConfigMap and DaemonSet from the link. I changed the script to reduce logging a bit and moved both to another namespace.
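While debugging this kind of issue, a log_format that prints the candidate sources side by side can show which header (if any) actually carries the client address. This is a generic nginx sketch, not part of the AKS workaround itself:
log_format realip '$remote_addr realip=$realip_remote_addr '
                  'xff="$http_x_forwarded_for" '
                  'xoff="$http_x_original_forwarded_for"';
access_log /var/log/nginx/access.log realip;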
