Will HttpClient api gateway calls work on kubernetes cluster? - asp.net

Hello, I have built an API gateway for an identity service. It uses HttpClient and currently calls the identity service via localhost. My concern is that when we deploy, the identity service will run in a cluster in Azure, so we will need to use a DNS name instead. Will the calls still work as they do now if, in production, we simply switch from localhost to the cluster's DNS name? Or are there other configurations that need to be done?
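In general, yes: HttpClient itself doesn't care whether the host is localhost or a DNS name, as long as that name resolves from wherever the gateway runs (in Kubernetes, a Service is reachable at `<service>.<namespace>.svc.cluster.local` from inside the cluster). The usual approach is to keep the base URL out of the code and switch it per environment. A minimal sketch, assuming a configuration key named IdentityService:BaseUrl and a made-up service/namespace name (both are placeholders, not from your setup):

```
# appsettings.Development.json
{
  "IdentityService": { "BaseUrl": "http://localhost:5000" }
}

# appsettings.Production.json
{
  "IdentityService": { "BaseUrl": "http://identity-service.identity.svc.cluster.local" }
}
```

The gateway reads IdentityService:BaseUrl at startup and uses it as the HttpClient base address; nothing else about the calls needs to change, provided the DNS name is resolvable from where the gateway pods run.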

Related

Access locally deployed Servicefabric micro service from another machine on same/different network

I have service fabric based microservice deployed locally on my test machine.
I would like to access the service endpoint from another machine.
Example: Once I deploy service fabric locally I use something like
http://lastname.com:47830/v1/api/endpoint.
How would I reach this endpoint from another machine, assuming authorization is figured out?
The host machine must allow inbound traffic on port 47830 (e.g. via a firewall rule). Then http://{ip-of-host}:47830/v1/api/endpoint works.

ECS Nginx network setup

I have 3 containers on ECS: web, api and nginx. Basically nginx is proxying traffic to web and api containers:
upstream web {
    server web-container:3000;
}

upstream api {
    server api-container:3001;
}
But every time I redeploy web or api, their IPs change, so I have to redeploy nginx afterwards to make it pick up the new IPs.
Is there a way to avoid this so I could just update let's say api service and nginx service would automatically proxy to correct IP address?
I assume these containers belong to 3 different task definitions and ultimately 3 different tasks (or better 3 different services).
If that is the setup, then you want to use service discovery. This only works with ECS services, and the idea is that you create 3 distinct services, each with 1+ tasks in it. You give each service a name (e.g. nginx, web, api), and each container in them will be able to resolve the other containers by FQDN (e.g. api.local). When a container in the nginx service tries to connect to api.local, service discovery resolves that name to the IP of one of the tasks in the ECS service api.
If you want to see an example of how this is set up, you can look at this demo app, and particularly at this CloudFormation template.
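One caveat on the nginx side: by default nginx resolves an upstream hostname once at startup and caches the IP, so even with service discovery it can keep proxying to a dead task. The usual fix is to point nginx at the VPC DNS resolver and use a variable in proxy_pass, which forces per-request resolution. A sketch, assuming Cloud Map names like api.local and the ports from the question (169.254.169.253 is the link-local address of the Amazon VPC DNS resolver):

```
# Re-resolve service-discovery names at runtime instead of caching at startup.
resolver 169.254.169.253 valid=10s;

server {
    listen 80;

    location /api/ {
        # Using a variable makes nginx resolve the name per request,
        # honoring the resolver's TTL, so new task IPs are picked up.
        set $api_upstream http://api.local:3001;
        proxy_pass $api_upstream;
    }

    location / {
        set $web_upstream http://web.local:3000;
        proxy_pass $web_upstream;
    }
}
```

With this approach the static upstream blocks are no longer needed, and nginx does not have to be redeployed when web or api tasks get new IPs.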

K8s Inter-service communication via FQDN

I have two services deployed to a single cluster in K8s. One is IS4, the other is a client application.
According to leastprivilege, the internal service must also use the FQDN.
The issue I'm having when developing locally (via skaffold & Docker) is that the internal service resolves the FQDN to 127.0.0.1 (the cluster). Is there any way to ensure that it resolves correctly and routes to the correct service?
Another issue is that the services communicate over HTTP internally but expose HTTPS publicly. With a URL rewrite I'm able to resolve the DNS part, but I'm unable to change the HTTPS calls to HTTP because NGINX isn't involved; the call goes directly to the service. If there were some inter-service ruleset I could hook into (similar to ingress), I believe I could use it to terminate TLS and things would work.
Edit for clarification:
I mean, I'm not deploying to AKS. When deployed to AKS this isn't an issue.
HTTPS is exposed via the NGINX ingress, which terminates TLS.
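One common workaround for the local-resolution problem is to rewrite the public FQDN to the in-cluster service name inside the cluster's DNS, so in-cluster calls never leave the cluster (and stay on plain HTTP, sidestepping the TLS issue as well). A sketch of a CoreDNS Corefile rule, where identity.example.com and is4.default.svc.cluster.local are placeholders for the actual public hostname and service name:

```
.:53 {
    errors
    health
    # Answer queries for the public hostname with the in-cluster service name.
    rewrite name identity.example.com is4.default.svc.cluster.local
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    forward . /etc/resolv.conf
    cache 30
}
```

On the IdentityServer side, setting IssuerUri explicitly to the public HTTPS address keeps issued tokens consistent even when discovery happens over the internal HTTP address.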

Allow requests to SF endpoints only from several ec2 instances

I have a public API running on an EC2 instance (behind an AWS ELB), built with Symfony3. I also have several background tasks which have to consume this API, but only on dedicated endpoints. I have to ensure that only the workers consume these endpoints.
I was wondering how can I implement such a structure with AWS. I'm looking at API gateway, VPCs, but I'm kind of lost.
Do you have an idea?
If both the API server and the API consumers are running on EC2 instances, then you can easily configure the security group assigned to your API server to restrict access to only those API consumer instances. Just create a rule in the security group that opens the inbound port for your API, and use the security group(s) assigned to your API consumer instances as the source.
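As a sketch in CloudFormation terms (the logical names and the port are placeholders): an ingress rule on the API server's security group whose source is the workers' security group rather than a CIDR range:

```
ApiFromWorkersIngress:
  Type: AWS::EC2::SecurityGroupIngress
  Properties:
    GroupId: !Ref ApiServerSecurityGroup   # SG attached to the API instances
    IpProtocol: tcp
    FromPort: 443
    ToPort: 443
    # Referencing a security group as the source means "any instance that
    # has this SG attached", regardless of its current IP address.
    SourceSecurityGroupId: !Ref WorkerSecurityGroup
```

Note that if the workers reach the API through the ELB, the rule belongs on the security group the workers actually connect to (the ELB's, for a public path), so it may be simpler to have them call the instances directly for these dedicated endpoints.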

Azure: How to connect one cloud service with other in one virtual network

I want to deploy a backend WCF service in a WebRole in Cloud Service 1 with only an Internal Endpoint.
And deploy an ASP.NET MVC frontend in a WebRole in Cloud Service 2.
Is it possible to use an Azure Virtual Network to call the backend from the frontend via the Internal Endpoint?
UPDATED: I am just trying to build a simple SOA architecture like this:
Yes and No.
An internal endpoint essentially means that the role instance has been configured to accept traffic on a given port, but that port can NOT receive traffic from outside of the cloud service (hence it being "internal" to the cloud service). Internal endpoints are also not load balanced so you're going to need to "juggle" traffic management from the callers yourself.
Now here is where the issues arise: a virtual network allows you to securely traverse cloud service boundaries, letting a role instance in cloud service 1 call a role instance in cloud service 2. However, to do this, the calling role instance needs to know how to address the receiving instance. If they were in the same cloud service, you could crawl the cloud service topology via the RoleEnvironment class. But this class only works within the cloud service it exists in; it is not aware of a virtual network.
Now, you could have the receiving role instance publish its FQDN to a shared location (say, Azure table storage). However, a cloud service will only use its own internal DNS resolution (which only resolves short names within the same cloud service) unless you have configured the virtual network with a self-hosted DNS server.
So yes, you can do what you're trying to accomplish, but it does present some challenges. Given this, I'd have to ask whether the convenience of separate deployment is enough to justify the additional complexity of the solution. If it is, I'd also look at whether there's a better way to interconnect the two services than direct calls (such as a queue-based pattern).
@BrentDaCodeMonkey makes some very valid points in his answer, so read that first.
I, personally, would not want to give up automatic discovery and scale via load balancing. My suggestion would be that you expose the WCF endpoint via an Azure Service Bus Relay endpoint. This will give you a "fixed" endpoint with which to communicate (solving the discovery issue) and infinite scalability because multiple servers can register and listen on the same Service Bus relay address. Additionally it introduces some basic security into the mix via shared key authentication when your web application(s) connect to your WCF services.
If you co-locate the Service Bus instance with your Cloud Services the overhead of the relay in the middle is totally negligible and, IMHO, worth it for the benefits explained above.
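For reference, moving a WCF endpoint onto a Service Bus relay is mostly a binding change on the service side. A sketch of the relevant web.config section, where the namespace, path, contract, and SAS key names are all placeholders:

```
<system.serviceModel>
  <services>
    <service name="Backend.IdentityService">
      <!-- netTcpRelayBinding opens an outbound connection to the relay;
           callers connect to the fixed sb:// address, not to the instance. -->
      <endpoint contract="Backend.IIdentityService"
                binding="netTcpRelayBinding"
                behaviorConfiguration="sbAuth"
                address="sb://mynamespace.servicebus.windows.net/backend" />
    </service>
  </services>
  <behaviors>
    <endpointBehaviors>
      <behavior name="sbAuth">
        <transportClientEndpointBehavior>
          <tokenProvider>
            <sharedAccessSignature keyName="RootManageSharedAccessKey" key="..." />
          </tokenProvider>
        </transportClientEndpointBehavior>
      </behavior>
    </endpointBehaviors>
  </behaviors>
</system.serviceModel>
```

Because every backend instance registers a listener on the same sb:// address, callers need no discovery logic at all; the relay fans requests out across whichever listeners are currently connected.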
