Mirror requests from one Cloud Run service to another Cloud Run service - nginx

I'm currently working on a project where we are using Google Cloud. Within Google Cloud we use Cloud Run to provide our services. One of these services is rather complex and has many different configuration options. To validate how these configurations affect the quality of the results, and also to evaluate the quality of changes to the service, I would like to proceed as follows:
In addition to the existing service, I deploy a second instance of the service which contains the changes.
I mirror all incoming requests and let both services process them; only the responses from the original service are returned to the caller, but the responses from both services are stored.
This allows me to create a detailed evaluation of the differences between the two services without having to provide the user with potentially worse responses.
For the implementation I have set up an NGINX instance which mirrors the requests. It is also deployed as a Cloud Run service, accepts all incoming requests, and takes care of authentication. The original service and the mirrored version are configured so that they can only be accessed internally and should therefore be reached via a VPC network.
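Roughly, the mirroring part of my nginx.conf looks like the sketch below (the *.run.app hostnames are placeholders for the real service URLs):

```
# Sketch only - hostnames are placeholders for the two Cloud Run URLs
server {
    listen 8080;

    location / {
        mirror /mirror;
        mirror_request_body on;

        proxy_pass https://original-service-abc123.a.run.app;
        proxy_set_header Host original-service-abc123.a.run.app;
        proxy_ssl_server_name on;
    }

    # Mirrored copy of every request; NGINX discards the mirror's response
    location = /mirror {
        internal;
        proxy_pass https://candidate-service-abc123.a.run.app$request_uri;
        proxy_set_header Host candidate-service-abc123.a.run.app;
        proxy_ssl_server_name on;
    }
}
```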
I have tried all possible combinations for the configuration of these parts but I always get 403 or 502 errors.
I have tried pointing NGINX at both the HTTP and the HTTPS route of the service, and I have tried all the VPC connector settings. When I set the service's ingress to "All", it works perfectly as long as NGINX is configured to use HTTPS and port 443. As soon as I set the ingress to "Internal", I get errors: 403 with HTTPS and 502 with HTTP.
Does anyone have experience in this regard and can give me tips on how to solve this problem? I would be very grateful for any help.

If your Cloud Run services are only internally accessible (ingress control set to "internal"), you need to perform your requests from within your VPC.
Therefore, as you correctly did, you attached a Serverless VPC Access connector to your NGINX service.
The setup is correct. So why does it only work when you route ALL the egress traffic, and not only the private traffic, through your VPC connector?
Because Cloud Run is a public resource with a public URL, even if you set the ingress to internal. That setting says "the traffic must come from the VPC"; it does not say "the service is plugged into the VPC with a private IP".
So, to go through your VPC and reach a public resource (your Cloud Run services), you need to route ALL the traffic through your VPC connector, including the traffic to public addresses.
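Concretely, that means setting the egress setting of the NGINX service to all-traffic, for example (service, connector and region names are placeholders):

```
gcloud run services update nginx-mirror \
  --vpc-connector=my-connector \
  --vpc-egress=all-traffic \
  --region=europe-west1
```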

Related

Understanding the NGINX Plus products: Controller, Ingress Controller, and Instance Manager

As far as I know, Instance Manager and the Controller have the same function, which is managing NGINX Plus and its instances, but that does not make much sense to me.
So my questions are:
What are the differences between Instance Manager and Controller?
What is an Ingress Controller?
Nginx Instance Manager: NGINX Instance Manager empowers you to:
Automate configuration and monitoring using APIs.
For example, if you have multiple servers running NGINX, NGINX Instance Manager provides a dashboard where all events can be monitored, including spikes on specific events; or, if you run many VMs, it keeps an inventory so you can spot a server that has not been updated. To achieve this, the nginx-agent needs to be installed alongside NGINX on each host.
Ensure your fleet of NGINX web servers and proxies has fixes for active CVEs.
Seamlessly integrate with third-party monitoring solutions such as Prometheus and Grafana for insights.
Nginx Controller: NGINX Controller is cloud‑agnostic and includes a set of enterprise‑grade services that give you a clear line of sight to apps in development, test, or production. With per‑app analytics, you gain new insights into app performance and reliability so you can pinpoint performance issues before they impact production.
Example: to enable an Ingress, you first need an Ingress Controller to be deployed and enabled.
Nginx Ingress: Each LoadBalancer service requires its own load balancer with its own public IP address, whereas an Ingress only requires one, even when providing access to dozens of services. When a client sends an HTTP request to the Ingress, the host and path in the request determine which service the request is forwarded to.
For example, the GKE (Google Kubernetes Engine) Ingress Controller.
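For illustration, here is a minimal Ingress that fans out to two Services by host and path (all names are placeholders):

```
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: example.com
    http:
      paths:
      - path: /api          # requests to example.com/api go to the API service
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80
      - path: /              # everything else goes to the web frontend
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80
```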

K8s Inter-service communication via FQDN

I have two services deployed to a single cluster in K8s. One is IS4, the other is a client application.
According to leastprivilege, the internal service must also use the FQDN.
The issue I'm having when developing locally (via skaffold & Docker) is that the internal service resolves the FQDN to 127.0.0.1 (the cluster). Is there any way to ensure that it resolves correctly and routes to the correct service?
Another issue is that internally the services communicate over HTTP, while publicly they expose HTTPS. With a URL rewrite I'm able to resolve the DNS part, but I'm unable to change the HTTPS calls to HTTP because NGINX isn't involved; the call goes directly to the service. If there is some inter-service ruleset I can hook into (similar to ingress), I believe I could use that to terminate TLS and things would work.
Edit for clarification:
I mean that I'm not deploying to AKS here; when deployed to AKS this isn't an issue.
HTTPS is exposed via the NGINX ingress, which terminates TLS.

How to make ingress connect to pod in my network

My k8s master node has a public network IP, and the worker node is deployed in a private network. The worker node can connect to the master, but the master cannot connect to the worker node.
I have tested that I can deploy a pod with kubectl; the pod runs on the worker node and the master can watch the pod's status. But when I deploy an Ingress and access it on the master node, the traffic cannot reach the worker node.
I use the flannel network plugin.
I have tried using an SSH tunnel, but it is hard to manage.
Does anyone have any suggestions? Thanks.
If you are deployed in a cloud environment, the most likely cause is incorrect firewall settings or route configurations. However, ingress configuration errors can also look like infrastructure problems at times.
The Ingress will redirect your requests to the different services that it is registered with. The endpoint health is also monitored, and requests will only be sent to active and healthy endpoints. My troubleshooting flow is as follows (there is a command sketch after the steps):
Hit an unregistered path on your url and check if you get the default backend response. If no, then your ingress controller may not be correctly set up (whether it be domain name, access rules, or just configuration). If yes, then your ingress controller should be correctly set up, and this is a problem with the Ingress definition or backend.
Try hitting your registered path on your url. If you get a 504 gateway timeout, then your endpoint is accepting the request, but not responding correctly. You can follow the target pod logs to figure out whether it is behaving properly.
If you get a 503 Service Unavailable, then your service might be down or deemed unhealthy by the ingress. In this case, you should definitely verify that your pods are running properly.
Check your nginx-ingress-controller logs to see how the requests are being redirected and what the internal responses are.
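As a rough sketch of those checks (host, paths, labels and namespaces are placeholders):

```
# 1. Unregistered path - expect the default backend response (usually "default backend - 404")
curl -i http://my-app.example.com/this-path-does-not-exist

# 2. Registered path - watch for 504 (backend not responding) or 503 (no healthy endpoints)
curl -i http://my-app.example.com/api/health

# 3. Backend pod logs and ingress controller logs
kubectl logs -l app=my-app
kubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx
```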
All your nodes and the master should be able to communicate with each other; without this you are going to have problems with cluster functionality.
The Ingress creates a load balancer pointing to your node machines.
Why can't your master connect to your nodes?
Take a look at:
https://kubernetes.io/docs/concepts/architecture/master-node-communication/

Google cloud HTTP load balancer always returns unhealthy instance for meteor app

I am trying to set up an HTTP load balancer for my Meteor app on Google Cloud. I have the application set up correctly, and I know this because I can visit the IP given by the network load balancer.
However, when I try to set up an HTTP load balancer, the health checks always say that the instances are unhealthy (even though I know they are not). I tried including a route in my application that returns a status 200 and pointing the health check at that route.
Here is exactly what I did, step by step:
Create new instance template/group for the app.
Upload image to google cloud.
Create replication controller and service for the app.
The network load balancer was created automatically. Additionally, there were two firewall rules allowing HTTP/HTTPS traffic on all IPs.
Then I try and create the HTTP load balancer. I create a backend service in the load balancer with all the VMs corresponding to the meteor app. Then I create a new global forwarding rule. No matter what, the instances are labelled "unhealthy" and the IP from the global forwarding rule returns a "Server Error".
In order to use HTTP load balancing on Google Cloud with Kubernetes, you have to take a slightly different approach than for network load balancing, due to the current lack of built-in support for HTTP balancing.
I suspect you created your service in step 3 with type: LoadBalancer. This won't work properly because of how the LoadBalancer type is implemented, which causes the service to be available only on the network forwarding rule's IP address, rather than on each host's IP address.
What will work, however, is using type: NodePort, which will cause the service to be reachable on the automatically-chosen node port on each host's external IP address. This plays more nicely with the HTTP load balancer. You can then pass this node port to the HTTP load balancer that you create. Once you open up a firewall on the node port, you should be good to go!
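As a rough sketch of such a service (names and ports are placeholders; Meteor typically listens on 3000):

```
apiVersion: v1
kind: Service
metadata:
  name: meteor-app
spec:
  type: NodePort
  selector:
    app: meteor-app
  ports:
  - port: 80          # port exposed inside the cluster
    targetPort: 3000  # port the Meteor container listens on
    nodePort: 30080   # optional; omit it and Kubernetes picks one in 30000-32767
```

You would then point the HTTP load balancer's backend service and health check at that node port and open it in the firewall (for example with gcloud compute firewall-rules create).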
If you want more concrete steps, a walkthrough of how to use HTTP load balancers with Container Engine was actually recently added to GKE's documentation. The same steps should work with normal Kubernetes.
As a final note, now that version 1.0 is out the door, the team is getting back to adding some missing features, including native support for L7 load balancing. We hope to make it much easier for you soon!

Azure: How to connect one cloud service to another in one virtual network

I want to deploy a backend WCF service in a WebRole in Cloud Service 1 with only an Internal Endpoint.
And deploy an ASP.NET MVC frontend in a WebRole in Cloud Service 2.
Is it possible to use an Azure Virtual Network to call the backend from the frontend via the Internal Endpoint?
UPDATED: I am just trying to build a simple SOA architecture like this:
Yes and No.
An internal endpoint essentially means that the role instance has been configured to accept traffic on a given port, but that port can NOT receive traffic from outside of the cloud service (hence it being "internal" to the cloud service). Internal endpoints are also not load balanced so you're going to need to "juggle" traffic management from the callers yourself.
Now here is where the issues arise: a virtual network allows you to securely traverse cloud service boundaries, letting a role instance in cloud service 1 call a role instance in cloud service 2. However, to do this, the calling role instance needs to know how to address the receiving instance. If they were in the same cloud service, you could crawl the cloud service topology via the RoleEnvironment class. But this class only works for the cloud service it exists in; it is not aware of a virtual network.
Now you could have the receiving role instance publish its FQDN to a shared area (say Azure table storage). However, a cloud service will only use its own internal DNS resolution (which only allows you to resolve short names in the same cloud service) unless you have configured the virtual network with a self-hosted DNS server.
So yes, you can do what you're trying to accomplish, but it does present some challenges. Given this, I'd have to ask whether the convenience of separate deployments is enough to justify the additional complexity of the solution. If so, then I'd also look and see if perhaps there's a better way to interconnect the two services than direct calls (such as a queue-based pattern).
#BrentDaCodeMonkey makes some very valid points in his answer, so read that first.
I, personally, would not want to give up automatic discovery and scale via load balancing. My suggestion would be that you expose the WCF endpoint via an Azure Service Bus Relay endpoint. This will give you a "fixed" endpoint with which to communicate (solving the discovery issue) and infinite scalability because multiple servers can register and listen on the same Service Bus relay address. Additionally it introduces some basic security into the mix via shared key authentication when your web application(s) connect to your WCF services.
If you co-locate the Service Bus instance with your Cloud Services the overhead of the relay in the middle is totally negligible and, IMHO, worth it for the benefits explained above.
