I am really new to Kubernetes and NGINX. I am able to use it as a reverse proxy by setting up an Ingress resource; however, I am not sure how I should use it to forward requests from Kubernetes to a particular host.
My case is as follows:
I have a container running in a Kubernetes pod which accesses an external API URL (for example www.xxx.com) with some parameters. However, because I have blocked outgoing requests for all the pods, it cannot access that API URL.
To solve this I want to set up an NGINX proxy which will forward my request to the actual API URL.
Being new to this, and not finding proper steps documented anywhere to achieve this, I am really stuck. How can I do this?
What you could do is define a Service object that points to your external API endpoint. This is done by creating an Endpoints object and a Service object, both with the same name.
https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors
Once you have your Service, you can create an Ingress rule that forwards traffic to that Service. Make sure that the Ingress controller can reach your API endpoint.
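A minimal sketch of what that could look like, assuming the external API is reachable over plain HTTP on port 80 (the names, IP and host below are placeholders, not values from your setup; an HTTPS backend would need extra configuration):

```yaml
# Service with no selector, so Kubernetes will not manage Endpoints for it
apiVersion: v1
kind: Service
metadata:
  name: external-api            # placeholder name
spec:
  ports:
    - port: 80
---
# Endpoints object with the SAME name, pointing at the external API's IP address
# (Endpoints take IPs, not DNS names, so you need the resolved address of the API)
apiVersion: v1
kind: Endpoints
metadata:
  name: external-api            # must match the Service name
subsets:
  - addresses:
      - ip: 203.0.113.10        # placeholder IP of the external API
    ports:
      - port: 80
---
# Ingress rule forwarding traffic to that Service (requires an Ingress controller)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: external-api
spec:
  rules:
    - host: api.internal.example    # placeholder host the in-cluster client would use
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: external-api
                port:
                  number: 80
```

With something like this in place, the pod calls the placeholder host via the Ingress controller, and the traffic is proxied out to the external API, provided the controller itself is allowed to make outgoing requests.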
Related
I have a service providing an API that I want to be accessible only over HTTPS. I don't want HTTP to redirect to HTTPS, because that would expose credentials and the caller wouldn't notice; better to get an error response.
How do I configure my ingress.yaml? Note that I want to keep the default 308 redirect from HTTP to HTTPS for other services in the same cluster.
Thanks.
In the documentation you can read the following about HTTPS enforcement through redirect:
By default the controller redirects (308) to HTTPS if TLS is enabled for that ingress. If you want to disable this behavior globally, you can use ssl-redirect: "false" in the NGINX ConfigMap.
To configure this feature for specific ingress resources, you can use the nginx.ingress.kubernetes.io/ssl-redirect: "false" annotation in the particular resource.
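For example, a sketch of an Ingress carrying that per-resource annotation (all names here are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-api                     # placeholder
  annotations:
    # disable the default 308 HTTP -> HTTPS redirect for this Ingress only
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  tls:
    - hosts:
        - api.example.com          # placeholder
      secretName: api-tls          # placeholder TLS secret
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-api       # placeholder backend Service
                port:
                  number: 8080
```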
You can also create two separate configurations: one with both HTTP and HTTPS, and another one with HTTP only.
Using the kubernetes.io/ingress.class annotation, you can choose which Ingress controller handles each of them.
This mechanism also provides users the ability to run multiple NGINX ingress controllers (e.g. one which serves public traffic, one which serves "internal" traffic).
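For example, the HTTPS Ingress from the sketch above could additionally carry kubernetes.io/ingress.class: "nginx-public" (a placeholder class name), while a second, HTTP-only Ingress is picked up by a different controller instance (again, all names are placeholders that must match how each controller is deployed):

```yaml
# A second, HTTP-only Ingress, handled by a different controller instance
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: other-service-http
  annotations:
    kubernetes.io/ingress.class: "nginx-internal"   # placeholder class name
spec:
  rules:
    - host: other.example.com        # placeholder
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: other-service  # placeholder backend Service
                port:
                  number: 8080
```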
See also this and this similar questions.
I have configured NGINX as a reverse proxy with web sockets enabled for a backend web application with multiple replicas. The request from NGINX does a proxy_pass to a Kubernetes service which in turn load balances the request to the endpoints mapped to the service. I need to ensure that the request from a particular client is proxied to the same Kubernetes back end pod for the life cycle of that access, basically maintaining session persistence.
I tried setting sessionAffinity: ClientIP in the Kubernetes Service; however, this routes based on the client IP the Service sees, which is the IP of the NGINX proxy pod. Is there a way to make the Kubernetes Service do the affinity based on the actual client IP from which the request originated, and not the NGINX pod's internal IP?
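For reference, this is roughly what I tried on the Service (a sketch; the names and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend-app               # placeholder
spec:
  selector:
    app: backend-app              # placeholder
  sessionAffinity: ClientIP       # affinity keys on the source IP the Service sees,
                                  # which here is the NGINX proxy pod, not the real client
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
  ports:
    - port: 80
      targetPort: 8080
```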
This is not an option with NGINX, or rather it's not an option with anything running in userspace like this, without a lot of very fancy network manipulation. You'll need to find another option, usually app-specific proxy rules in the outermost HTTP proxy layer.
I am learning the world of k8s and there is a lot of talk about Ingress and Ingress controllers. Conceptually it sounds identical to a web server, which I will define as a service that proxies HTTP requests to web application servers. It can also serve up certificates and do basic load balancing...
Whereas ingress: Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource. Ingress may provide load balancing, SSL termination and name-based virtual hosting.
https://kubernetes.io/docs/concepts/services-networking/ingress/
They sound the same! So what exactly is the difference here? I can't be the only one confused by this, right?
In general, a web server is responsible for accepting and fulfilling requests from clients.
A web server‘s fundamental job is to accept and fulfill requests from clients for static content from a website (HTML pages, files, images, video, and so on). The client is almost always a browser or mobile application and the request takes the form of a Hypertext Transfer Protocol (HTTP) message, as does the web server’s response.
Nowadays there are many web servers, like Apache or NGINX.
Kubernetes Ingress is an API object. From the IBM blog What is Kubernetes Ingress and why is it useful?:
Kubernetes Ingress is an API object that provides routing rules to manage external users' access to the services in a Kubernetes cluster, typically via HTTPS/HTTP. With Ingress, you can easily set up rules for routing traffic without creating a bunch of Load Balancers or exposing each service on the node. This makes it the best option to use in production environments.
Also, in the Kubernetes Ingress docs you can find that a Kubernetes Ingress needs an Ingress controller:
You must have an Ingress controller to satisfy an Ingress. Only creating an Ingress resource has no effect.
There are many ingress controllers like Nginx, Ambassador, Apache, etc.
To sum up:
To use an Ingress you need some web server acting as the Ingress controller.
A Kubernetes Ingress is a Kubernetes object which helps the user configure that web server (like NGINX) in a Kubernetes cluster.
As you pointed out from the documentation, it allows you to configure HTTP/HTTPS routing, traffic load balancing, SSL/TLS termination, etc.
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.
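As an illustration of that relationship, here is a sketch of an Ingress using name-based virtual hosting: the Ingress object only declares the rules, and the Ingress controller (a web server such as NGINX) is what actually serves the traffic (hosts and service names are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: virtual-hosts-example        # placeholder
spec:
  rules:
    - host: app.example.com          # requests for this host go to one Service
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-frontend   # placeholder
                port:
                  number: 80
    - host: api.example.com          # requests for this host go to another Service
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api-backend    # placeholder
                port:
                  number: 8080
```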
Background
I have an in-cluster Kubernetes pod/application that works fine when accessed via an nginx-ingress Ingress controller (it requires a specific Host HTTP header), but it cannot be accessed by other in-cluster pods/applications (e.g. for testing), because those pods use different host names (e.g. service-name.namespace.svc.cluster.local) rather than the FQDN of the K8s master (on the LAN).
Plan So Far
I think the only way to (easily) resolve this is to set up an in-cluster forward-proxy NGINX instance. Ideally, the service is either a sidecar for the pod that needs to have its headers rewritten, or a general in-cluster proxy that multiple services can access.
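Roughly what I have in mind for the general-proxy variant (a sketch only; the expected Host header, the service name and the port are placeholders):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: host-rewrite-proxy           # placeholder
data:
  nginx.conf: |
    events {}
    http {
      server {
        listen 8080;
        location / {
          # Present the Host header that the ingress-fronted application expects,
          # then forward the request to the in-cluster Service directly
          proxy_set_header Host app.example.com;                            # placeholder expected Host
          proxy_pass http://service-name.namespace.svc.cluster.local:80;    # placeholder Service DNS name
        }
      }
    }
```

The ConfigMap would then be mounted over /etc/nginx/nginx.conf in a plain nginx container, either as a sidecar or as a standalone Deployment plus Service that other pods point at.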
Question
How would I set up an in-cluster NGINX forward proxy service?
Should it be a sidecar, or a general service any pod can access?
Work So Far
The linked "similar" questions don't appear to be helpful for my use case (i.e. they don't show how to configure an in-cluster proxy), or they are focused on proxying to IPs external to the cluster (whereas I need to proxy HTTP requests, and rewrite their headers, to in-cluster resources).
I wanted to visit my dashboard on a local Kubernetes installation (using Docker for Mac). I was 'blocked': I have to provide a token or my kubeconfig, which is normal since the RBAC updates.
Now I don't want to run kubectl proxy or enable port forwarding every time I want to visit my dashboard, so I installed an NGINX proxy with an Ingress (TLS) which redirects me to https://kubernetes-dashboard.kube-system.svc.cluster.local:443.
This works fine, but now I'm a bit confused, because I can see the dashboard without facing the RBAC issue.
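For context, the Ingress I created looks roughly like this (a sketch from memory; the host and secret names are placeholders, and the backend-protocol annotation tells the NGINX controller to talk HTTPS to the dashboard backend):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  annotations:
    # the dashboard itself serves HTTPS on 443
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  tls:
    - hosts:
        - dashboard.local            # placeholder host
      secretName: dashboard-tls      # placeholder TLS secret
  rules:
    - host: dashboard.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kubernetes-dashboard
                port:
                  number: 443
```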
I read this here:
To make Dashboard use authorization header you simply need to pass Authorization: Bearer <token> in every request to Dashboard. This can be achieved i.e. by configuring reverse proxy in front of Dashboard. Proxy will be responsible for authentication with identity provider and will pass generated token in request header to Dashboard. Note that Kubernetes API server needs to be configured properly to accept these tokens.
But it's still not very clear to me. Can someone explain why I can see the dashboard when I create a proxy in front of it?
A proxy is usually needed to transfer data between different segments of the network without connecting them directly.
Each segment of the network "talks" to the proxy host without any knowledge of the existence of the other network segment.
The proxy server is responsible for all negotiations and operations concerning request and response packets. So, to enable authentication, authorization, SSL termination and many other things, you need to configure your proxy server according to your needs.
If you can see the Kubernetes dashboard via the proxy in front of it, it just means that you did not configure any security on that proxy.
For example, to learn how to configure NGINX Ingress to protect a service in your cluster with basic authentication, consider reading this article.
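A minimal sketch of what that basic-auth protection looks like with the NGINX Ingress annotations (the secret and service names are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: protected-service            # placeholder
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth           # Secret holding an htpasswd file
    nginx.ingress.kubernetes.io/auth-realm: "Authentication Required"
spec:
  rules:
    - host: dashboard.local          # placeholder
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: protected-service   # placeholder backend Service
                port:
                  number: 80
```

The referenced Secret is created from an htpasswd file (for example, generate a file named auth with htpasswd and run kubectl create secret generic basic-auth --from-file=auth), so only users listed there can get past the proxy.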
For a more complex security setup, read the article about securing Kubernetes services with Ingress, TLS and Let's Encrypt.