How to configure Target Endpoints containing localhost and port in Apigee?

I have created a very simple RESTful service and deployed it on my local Tomcat server. I would like to create and configure an API proxy and test it out using Apigee. While trying to create a new API proxy, it does not allow me to point to a URL endpoint containing localhost and port information.
http://localhost:8080/PageNameService/ /**** DOES NOT WORK ***/
http://weather.yahooapis.com /***** WORKS ************/
Does this mean that you cannot configure Target Endpoint URLs that contain localhost and ports in Apigee? Please guide.

Apigee Edge is hosted on cloud infrastructure and will accept a backend URL for which it can act as a facade, passing on the traffic it processes. It cannot connect to your Tomcat server at localhost:8080. You can give any globally accessible, complete URL (a domain name or even an IP address with the correct port is fine).
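For reference, the target is defined in the proxy's TargetEndpoint XML; a minimal sketch, assuming the backend is reachable at the placeholder hostname api.example.com:

  <TargetEndpoint name="default">
    <HTTPTargetConnection>
      <!-- must be publicly reachable from Apigee's cloud, so no localhost -->
      <URL>http://api.example.com:8080/PageNameService</URL>
    </HTTPTargetConnection>
  </TargetEndpoint>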

I solved this need by using ngrok. I am able to route calls from Edge to my local machine. Absolutely love ngrok! :-)
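For anyone else trying this, the flow is roughly as follows; the forwarding hostname below is only an example, since ngrok assigns its own each session unless you reserve one:

  ngrok http 8080
  # ngrok prints a public forwarding URL such as https://abc123.ngrok.io,
  # which then replaces http://localhost:8080 as the proxy's target endpoint URL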

Related

Mirror requests from Cloud Run service to other Cloud Run service

I'm currently working on a project where we are using Google Cloud. Within the Cloud we are using CloudRun to provide our services. One of these services is rather complex and has many different configuration options. To validate how these configurations affect the quality of the results and also to evaluate the quality of changes to the service, I would like to proceed as follows:
in addition to the existing service I deploy another instance of the service which contains the changes
I mirror all incoming requests and let both services process them, only the responses from the initial service are returned, but the responses from both services are stored
This allows me to create a detailed evaluation of the differences between the two services without having to provide the user with potentially worse responses.
For the implementation I have set up an NGINX instance which mirrors the requests. This is also deployed as a Cloud Run service. It now accepts all requests and takes care of the authentication. The original service and the mirrored version have been configured so that they can only be accessed internally and should therefore be reached via a VPC network.
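The mirroring itself relies on NGINX's mirror directive; a rough sketch of my configuration, with placeholder hostnames for the two Cloud Run services:

  server {
      listen 8080;

      location / {
          mirror /mirror;                                        # duplicate every incoming request
          proxy_pass https://original-service.example.run.app;   # only this response is returned
      }

      location = /mirror {
          internal;                                              # not reachable from outside
          proxy_pass https://changed-service.example.run.app$request_uri;
      }
  }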
I have tried all possible combinations for the configuration of these parts but I always get 403 or 502 errors.
I have tried pointing the NGINX service at both the HTTP and HTTPS routes of the service, and I have tried all the VPC connector settings. When I set the ingress of the service to ALL, it works perfectly if I configure the service with HTTPS and port 443 in NGINX. As soon as I set the ingress to Internal, I get errors: 403 with HTTPS and 502 with HTTP.
Does anyone have experience in this regard and can give me tips on how to solve this problem? Would be very grateful for any help.
If your Cloud Run services are internally accessible (ingress control set to internal only), you need to perform your requests from your VPC.
Therefore, as you correctly did, you attached a serverless VPC connector to your NGINX service.
The setup is correct. Now, why does it only work when you route ALL the egress traffic to your VPC connector, and not just the private traffic?
In fact, Cloud Run is a public resource with a public URL, even if you set the ingress to internal. That parameter says "the traffic must come from the VPC"; it does not say "I'm plugged into the VPC with a private IP".
So, to go through your VPC and reach a public resource (your Cloud Run services), you need to route ALL the traffic to your VPC, even the public traffic.
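Concretely, that means deploying (or updating) the NGINX service with the connector attached and all egress routed through it, roughly like this; the service, image, and connector names are placeholders:

  gcloud run deploy nginx-mirror \
    --image=gcr.io/my-project/nginx-mirror \
    --vpc-connector=my-connector \
    --vpc-egress=all-traffic

  # the two backends stay restricted to internal traffic
  gcloud run services update original-service --ingress=internal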

Building Proxy Site with Nginx and Rotating Proxy Service

I'm looking to build a similar application to https://www.proxysite.com/ but am not sure of the best architecture.
I'm looking to have a data flow like this:
User Web Browser -> myproxysite.com -> Nginx Proxy Server (somehow rotating IP for each client session) -> Targetsite.com
Then the user would need to maintain a full session on Targetsite.com as a logged in user.
In this example, targetsite.com is always the same site and is pre-determined. The challenge we are facing is that targetsite.com is blocking our users based on IP, many of whom are accessing it from the same office network.
So my questions are:
Does this seem correct?
Is there any way for me to configure NGINX with a rotating proxy service like Luminati? Or do I need to add an API software layer to handle the actual IP changes?
Any guidance on this one would be greatly appreciated!
While I can't help you with your application, I do want to suggest an alternative. You mentioned an office so it sounds like the users who will use the proxy are workers.
Luminati (now BrightData) has a proxy manager which you can host on any server. The proxy manager allows you to create ports (e.g. port 24000) and configure each one with whatever proxy you want (it doesn't have to be BrightData's proxy). It has a ton of different parameters that you can include for each proxy (including IP rotation), and each port can be configured with a unique setup.
Then you simply go to your user's PC, open the browser proxy settings, type the IP address of the server that the proxy manager is running on and the specific port you configured, and voila. You have central control over managing the proxies, and your user's browser is proxied.
A big benefit of this is that the logs in the proxy manager show all activity on each port you set up, so you can monitor traffic and success rates right there.
Proxy manager: https://prnt.sc/13uyjgj

K8s Inter-service communication via FQDN

I have two services deployed to a single cluster in K8s. One is IS4, the other is a client application.
According to leastprivilege, the internal service must also use the FQDN.
The issue I'm having when developing locally (via skaffold & Docker) is that the internal service resolves the FQDN to 127.0.0.1 (the cluster). Is there any way to ensure that it resolves correctly and routes to the correct service?
Another issue is that internally the services communicate over HTTP, while publicly they expose HTTPS. With a URL rewrite I'm able to resolve the DNS part, but I'm unable to change the HTTPS calls to HTTP because NGINX isn't involved; the call goes directly to the service. If there were some inter-service ruleset I could hook into (similar to ingress), I believe I could use it to terminate TLS and things would work.
Edit for clarification:
I mean, I'm not deploying to AKS. When deployed to AKS this isn't an issue.
HTTPS is exposed via NGINX ingress, which terminates TLS.
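The kind of rewrite I mean for the DNS part is roughly along these lines, a sketch with placeholder names, placed in the cluster's CoreDNS Corefile:

  # fragment of the coredns ConfigMap in kube-system, inside the .:53 block
  rewrite name identity.example.com is4.default.svc.cluster.local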

Netlify and ngrok linking

I have a front-end deployed on Netlify and a back-end deployed on localhost, which is exposed using ngrok.
Is it possible to link them so that when I click on the Netlify link, it sends requests to my localhost server exposed via ngrok?
Netlify can proxy to a dynamic backend; that is an intended use case. The problem we'll have is using "localhost" - Netlify needs a valid hostname to connect to. So, if your ngrok is exposed (not firewalled) at some public IP, you can put that into your redirects configuration:
/backend-stuff-in-this-path/* https://1.2.3.4/:splat 200!
This will send all requests on the path /backend-stuff-in-this-path/ANYTHING to the server at 1.2.3.4/ANYTHING.
This may not be incredibly useful, since one presumes your machine will change IP addresses from time to time; but if you were using localhost anyway, you weren't planning to put it in production quite yet. Note that redirects are deploy-specific, so you do need to redeploy to change the location if your IP changes.
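If you would rather not chase the changing IP, the ngrok forwarding hostname itself can go into the redirect instead; a sketch, keeping in mind that ngrok hands out a new hostname each session unless you reserve one:

  # _redirects file in the published site directory
  /backend-stuff-in-this-path/*  https://abc123.ngrok.io/:splat  200!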

Website hosted on EC2 not responding, what happened with my instance?

On Friday I hosted a WordPress site on my micro instance - I installed a LAMP stack and WordPress on it.
The instance state is Running, but when I try to access the website with the public domain given in the console, it says
web page Not available
I have set an Outbound rule to allow everyone and an Inbound rule for my IP address only.
That covers accessing the website from the outside world, but when I try to connect to my instance with the Java interface (MindTerm web SSH), it says
Network connection timeout error
I can't figure anything out; I've just started working with AWS.
I think you have confused the Outbound and Inbound rules - Outbound means traffic going out from the server, while Inbound means traffic from the internet to the server.
As you say, you added an Inbound rule for your IP address only, so you can access the website from your IP only, just as you requested.
Add an Inbound rule for port 80 for 0.0.0.0/0, and you should be able to access the site from other locations as well.
If you need to open it to HTTP and SSH, open both for 0.0.0.0/0.
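For example, from the AWS CLI (the security group ID below is a placeholder):

  # allow HTTP from anywhere
  aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
      --protocol tcp --port 80 --cidr 0.0.0.0/0

  # allow SSH from anywhere (better: restrict --cidr to your own IP /32)
  aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
      --protocol tcp --port 22 --cidr 0.0.0.0/0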
Please verify your settings and permissions based on this:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/TroubleshootingInstancesConnecting.html
You might also want to check your firewall, in case that is blocking the access.
