I have two services deployed to a single cluster in K8s. One is IS4, the other is a client application.
According to leastprivilege, the internal service must also use the FQDN.
The issue I'm having when developing locally (via skaffold & Docker) is that the internal service resolves the FQDN to 127.0.0.1 (the cluster itself). Is there any way to ensure that it resolves correctly and routes to the right service?
Another issue is that internally the services communicate over HTTP, while publicly they expose HTTPS. With a URL rewrite I'm able to resolve the DNS part, but I'm unable to change the HTTPS calls to HTTP because NGINX isn't involved; the call goes directly to the service. If there were some inter-service ruleset I could hook into (similar to ingress), I believe I could use it to terminate TLS and things would work.
Edit for clarification:
To clarify: this happens when I'm not deploying to AKS. When deployed to AKS, this isn't an issue.
HTTPS is exposed via the NGINX ingress, which terminates TLS.
Related
I have a frontend app in Fargate (ECS) in a private subnet, exposed to the internet through an Application Load Balancer. My frontend makes API calls to my backend apps, also in Fargate, in the same VPC.
Users' calls to my frontend are made via HTTPS, but my frontend communicates with my backend via HTTP (AWS Service Discovery - AWS Cloud Map). As a result, the user's browser shows the error "blocked: mixed content", since half of the communication is made via HTTPS and the other half uses HTTP.
(Architecture diagram omitted.)
As far as I know and have been able to find, it is not possible to use an SSL/TLS certificate with Service Discovery.
I've done a lot of research and couldn't find anything really useful. I also tried to create an internal load balancer for each backend service, but the communication times out; it only works when I have a VPN connected.
What am I missing here? Do I need an internal load balancer in front of each backend service to attach a certificate between frontend and backend? What is the best approach to solve this?
Users' calls to my frontend are made via HTTPS, but my frontend communicates with my backend via HTTP (AWS Service Discovery - AWS Cloud Map). As a result, the user's browser shows the error "blocked: mixed content", since half of the communication is made via HTTPS and the other half uses HTTP.
The user's browser wouldn't know anything about this if the communication were happening between the front-end server and the back-end server. Apparently you have front-end client JavaScript code running in the user's web browser that is trying to access the backend server directly.
If you want to access the backend server directly from the user's web browser, then service discovery won't work, because service discovery is only for traffic inside the VPC. And of course, by trying to use service discovery in this way you are also creating a security issue, which the browser is correctly blocking. You will need to add another load balancer, or another listener on your current load balancer, that exposes the backend API to the Internet.
Alternatively you could use a reverse proxy like Nginx on your front-end server to send backend API requests to the backend service, and then have your client-side JavaScript code send all requests to the front-end server.
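A minimal sketch of that reverse-proxy approach, assuming nginx runs alongside the front-end; the domain, ports, and the Cloud Map discovery name backend.local are placeholders, not your actual setup:

```
# Sketch only: one public HTTPS endpoint, so the browser never talks HTTP.
server {
    listen 443 ssl;
    server_name app.example.com;                      # placeholder
    ssl_certificate     /etc/nginx/tls/fullchain.pem; # placeholder paths
    ssl_certificate_key /etc/nginx/tls/privkey.pem;

    # Serve the front-end app itself.
    location / {
        proxy_pass http://127.0.0.1:3000;             # placeholder port
    }

    # The browser calls /api/...; nginx relays those calls over plain HTTP
    # inside the VPC via the service-discovery DNS name, so no mixed content.
    location /api/ {
        proxy_pass http://backend.local:8080/;        # placeholder name
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```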
I recently moved over from coder/code-server to Microsoft's implementation of a code server. Included in this is the ability to set up tunnels to the remote host; however, I can't seem to get it to work. I'm using nginx as a reverse proxy to forward to the code-server service, which works just fine for the most part. Since the documentation doesn't include a list of networking requirements for hosting a code server, I'd like to know: are there any ports that should be open and forwarded, or any additional configuration?
I've configured nginx to forward all requests to the code-server instance, but this isn't enough to get tunnels working. I can connect to it through the domain I'm listening for.
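For reference, my forwarding config is essentially this sketch (domain and port are placeholders), including the WebSocket upgrade headers code-server needs:

```
server {
    listen 80;
    server_name code.example.com;               # placeholder

    location / {
        proxy_pass http://127.0.0.1:8080;       # placeholder: code-server port
        proxy_http_version 1.1;
        # WebSocket upgrade headers, required for the editor session
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```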
I'm currently working on a project where we are using Google Cloud. Within Google Cloud we are using Cloud Run to provide our services. One of these services is rather complex and has many different configuration options. To validate how these configurations affect the quality of the results, and also to evaluate the quality of changes to the service, I would like to proceed as follows:
- In addition to the existing service, I deploy another instance of the service which contains the changes.
- I mirror all incoming requests and let both services process them; only the responses from the initial service are returned, but the responses from both services are stored.
This allows me to create a detailed evaluation of the differences between the two services without having to provide the user with potentially worse responses.
For the implementation I have set up an NGINX instance which mirrors the requests; it is likewise deployed as a Cloud Run service. It accepts all requests and takes care of the authentication. The original service and the mirrored version are configured so that they can only be accessed internally, and should therefore be reached via a VPC network.
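The mirroring part of my NGINX config is essentially the following sketch (the two *.run.app URLs and the resolver are placeholders):

```
# Sketch: every request is duplicated to the candidate service; only the
# primary service's response is returned to the caller.
server {
    listen 8080;

    # Needed because the mirror proxy_pass below contains a variable.
    resolver 169.254.169.254;          # assumption: GCP internal resolver

    location / {
        mirror /mirror;                # fire-and-forget copy of each request
        proxy_pass https://primary-service-abc123.a.run.app;
        proxy_ssl_server_name on;      # send SNI so run.app routes correctly
    }

    location = /mirror {
        internal;                      # not reachable from outside
        proxy_pass https://candidate-service-abc123.a.run.app$request_uri;
        proxy_ssl_server_name on;
    }
}
```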
I have tried all possible combinations for the configuration of these parts but I always get 403 or 502 errors.
I have tried configuring NGINX with both the HTTP and HTTPS routes to the service, and I have tried all the VPC connector settings. When I set the service's ingress to ALL, it works perfectly if I configure NGINX with HTTPS and port 443. As soon as I set the ingress to Internal, I get errors: 403 with HTTPS and 502 with HTTP.
Does anyone have experience in this regard and can give me tips on how to solve this problem? Would be very grateful for any help.
If your Cloud Run services are internally accessible (ingress control set to internal only), you need to perform your requests from within your VPC.
Therefore, as you correctly did, you plugged a serverless VPC connector into your NGINX service.
The setup is correct. Now, why does it work when you route ALL the egress traffic to your VPC connector, and not when you route only the private traffic?
In fact, Cloud Run is a public resource with a public URL, even if you set the ingress to internal. That parameter says "the traffic must come from the VPC"; it does not say "I'm plugged into the VPC with a private IP".
So, to go through your VPC and reach a public resource (your Cloud Run services), you need to route ALL the traffic through your VPC connector, even the public traffic.
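In gcloud terms, that is the egress setting on the NGINX mirror service, e.g. as follows (service, image, region, and connector names are placeholders):

```
gcloud run deploy nginx-mirror \
  --image=gcr.io/PROJECT_ID/nginx-mirror \
  --region=europe-west1 \
  --vpc-connector=my-connector \
  --vpc-egress=all-traffic
# all-traffic routes ALL egress through the VPC,
# unlike the default private-ranges-only
```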
As far as I know, Instance Manager and Controller have the same function, namely managing NGINX Plus and its instances, but that doesn't quite make sense.
So my questions are:
What are the differences between Instance Management and Controller?
What is Ingress Controller?
Nginx Instance Manager: NGINX Instance Manager empowers you to:
- Automate configuration and monitoring using APIs.
- Ensure your fleet of NGINX web servers and proxies have fixes for active CVEs.
- Seamlessly integrate with third-party monitoring solutions such as Prometheus and Grafana for insights.
For example, if you have multiple servers running Nginx or NGINX Plus, the service provides a dashboard where all events can be monitored, including spikes on specific events, or noticing that one VM among many has not been updated; it also lets you monitor your inventory. To achieve this, the nginx-agent needs to be installed alongside the Nginx server on each host.
Nginx Controller: NGINX Controller is cloud‑agnostic and includes a set of enterprise‑grade services that give you a clear line of sight to apps in development, test, or production. With per‑app analytics, you gain new insights into app performance and reliability so you can pinpoint performance issues before they impact production.
Example: to enable an Ingress, you first need an Ingress Controller to be installed and enabled.
Nginx Ingress: Each LoadBalancer service requires its own load balancer with its own public IP address, whereas an Ingress only requires one, even when providing access to dozens of services. When a client sends an HTTP request to the Ingress, the host and path in the request determine which service the request is forwarded to.
For example, the Google Kubernetes Engine (GKE) Ingress Controller.
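As a rough illustration of that host/path routing, a minimal Ingress manifest (names and hosts are placeholders) might look like this:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: nginx          # assumes the NGINX Ingress Controller
  rules:
  - host: app.example.com          # the Host header selects this rule
    http:
      paths:
      - path: /api                 # the path selects the backend service
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 8080
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80
```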
I'm currently working on copying an AWS EKS cluster to Azure AKS.
In our EKS cluster we use an external Nginx with proxy protocol to identify the client's real IP and check whether it is whitelisted in our Nginx.
In AWS, to do so, we added the aws-load-balancer-proxy-protocol annotation to the Kubernetes service to support Nginx's proxy_protocol directive.
Now the day has come and we want to run our cluster also on Azure AKS and I'm trying to do the same mechanism.
I saw that the AKS load balancer hashes the IPs, so I removed the proxy_protocol directive from my Nginx conf. I tried several things, and I understand that the Azure load balancer is not used as a proxy, but I did read here:
AKS Load Balancer Standard
I tried whitelisting IPs at the level of the Kubernetes service using the loadBalancerSourceRanges API instead of at the Nginx level.
But I think that the load balancer sends the IP to the cluster already hashed (is that the right term?), and the cluster seems to ignore the IPs under loadBalancerSourceRanges and passes them through.
I'm stuck now trying to understand where I lack the knowledge. I've tried to handle it from both ends (the load balancer and the Kubernetes service), and neither seems to cooperate with me.
Given my failures, what is the "right" way of passing the client real IP address to my AKS cluster?
From the docs: https://learn.microsoft.com/en-us/azure/aks/ingress-basic#create-an-ingress-controller
If you would like to enable client source IP preservation for requests to containers in your cluster, add --set controller.service.externalTrafficPolicy=Local to the Helm install command. The client source IP is stored in the request header under X-Forwarded-For. When using an ingress controller with client source IP preservation enabled, SSL pass-through will not work.
More information here as well: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip
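For completeness, the documented flag slots into the Helm install like this (repo URL is the official ingress-nginx chart; release and namespace names are assumptions):

```
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.service.externalTrafficPolicy=Local
```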
You can use the real_ip and geo modules to create the IP whitelist configuration. Alternatively, the loadBalancerSourceRanges should let you whitelist any client IP ranges by updating the associated NSG.
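A minimal sketch of that real_ip + geo whitelist, for nginx's http {} context; the trusted hop subnet, the allowed client range, and the upstream are all placeholders:

```
# Recover the client IP from X-Forwarded-For (set by the LB/ingress hop),
# trusting only the internal subnet, then gate requests on a whitelist.
set_real_ip_from 10.0.0.0/8;           # placeholder: LB/ingress subnet
real_ip_header   X-Forwarded-For;

geo $is_whitelisted {                  # geo matches against $remote_addr,
    default        0;                  # which real_ip has already rewritten
    203.0.113.0/24 1;                  # placeholder: allowed client range
}

upstream app {
    server 127.0.0.1:8080;             # placeholder backend
}

server {
    listen 80;

    location / {
        if ($is_whitelisted = 0) {
            return 403;                # reject non-whitelisted clients
        }
        proxy_pass http://app;
    }
}
```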