Allow requests to Symfony endpoints only from several EC2 instances - symfony

I have a public API built with Symfony3 running on an EC2 instance (behind an AWS ELB). However, I have several background tasks which have to consume this API, but only on dedicated endpoints. I have to ensure that only these workers can consume those endpoints.
I was wondering how I can implement such a setup on AWS. I'm looking at API Gateway and VPCs, but I'm kind of lost.
Do you have an idea?

If both the API server and the API consumers are running on EC2 instances, then you can easily configure the security group assigned to your API server to restrict access to only those API consumer instances. Just create a rule in the security group that opens the inbound port for your API, and use the security group(s) assigned to your API consumer instances as the source.
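As a rough sketch of that rule in code (using boto3; the group IDs and port below are placeholders for your own API-server and worker security groups):

    import boto3

    ec2 = boto3.client("ec2")

    # Hypothetical IDs: the first group is attached to the API server,
    # the second one to the worker instances.
    ec2.authorize_security_group_ingress(
        GroupId="sg-0aaaa1111bbbb2222",           # API server security group
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 443,                      # port your Symfony API listens on
            "ToPort": 443,
            "UserIdGroupPairs": [{
                "GroupId": "sg-0cccc3333dddd4444",    # worker security group as source
                "Description": "Allow only the worker instances",
            }],
        }],
    )

The same rule can of course be created in the console; the key point is that the source is a security group rather than a CIDR block, so any instance launched with the worker group is automatically allowed and everything else is denied.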

Related

How to access AWS SNS from an application served by internal classic load balancer?

I have an application running in an AWS EKS cluster. The application was previously served by a public-facing load balancer, so it could easily reach the AWS SNS service, but for security reasons we were asked to serve it via an internal load balancer instead. After moving to the internal load balancer the application still works, but it can no longer access the SNS service.
How can we configure the application from the internal network to access the AWS SNS service?
You might be after these documents
https://aws.plainenglish.io/publishing-private-amazon-sns-messages-348d38ffc351
https://docs.aws.amazon.com/sns/latest/dg/sns-vpc-tutorial.html
In short, because you are inside a private network, you need to create VPC endpoints in order to reach AWS services that live outside your VPC.
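For reference, a minimal sketch of creating such an interface endpoint for SNS with boto3 (region, VPC, subnet and security-group IDs are placeholders; use the ones your EKS nodes run in):

    import boto3

    ec2 = boto3.client("ec2", region_name="eu-west-1")

    ec2.create_vpc_endpoint(
        VpcEndpointType="Interface",
        VpcId="vpc-0123456789abcdef0",
        ServiceName="com.amazonaws.eu-west-1.sns",
        SubnetIds=["subnet-0123456789abcdef0"],
        SecurityGroupIds=["sg-0123456789abcdef0"],
        PrivateDnsEnabled=True,  # the standard SNS hostname then resolves to a private IP
    )

With private DNS enabled the application code does not change: the regular SNS client keeps calling the standard endpoint name, which now resolves to an address inside the VPC.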

Mirror requests from one Cloud Run service to another Cloud Run service

I'm currently working on a project where we are using Google Cloud. Within the cloud we are using Cloud Run to provide our services. One of these services is rather complex and has many different configuration options. To validate how these configurations affect the quality of the results, and also to evaluate the quality of changes to the service, I would like to proceed as follows:
- in addition to the existing service, I deploy another instance of the service which contains the changes
- I mirror all incoming requests and let both services process them; only the responses from the initial service are returned, but the responses from both services are stored
This allows me to create a detailed evaluation of the differences between the two services without having to provide the user with potentially worse responses.
For the implementation I have set up an NGINX instance which mirrors the requests. This is also deployed as a Cloud Run service. It accepts all requests and takes care of the authentication. The original service and the mirrored version have been configured so that they can only be accessed internally and should therefore be reached via a VPC network.
I have tried all possible combinations for the configuration of these parts but I always get 403 or 502 errors.
I have tried pointing NGINX at both the HTTP and HTTPS routes of the service, and I have tried all of the VPC connector settings. When I set the service's ingress to All, it works perfectly if I configure NGINX with HTTPS and port 443. As soon as I set the ingress to Internal, I get errors: 403 with HTTPS and 502 with HTTP.
Does anyone have experience in this regard and can give me tips on how to solve this problem? Would be very grateful for any help.
If your Cloud Run services are only internally accessible (ingress control set to internal), you need to perform your requests from within your VPC.
Therefore, as you correctly did, you attached a serverless VPC connector to your NGINX service.
The setup is correct. So why does it only work when you route ALL egress traffic through your VPC connector, and not just the private traffic?
Because Cloud Run is a public resource with a public URL, even when you set the ingress to internal. That parameter says "the traffic must come from the VPC"; it does not say "the service is plugged into the VPC with a private IP".
So, to go through your VPC and reach a public resource (your Cloud Run services), you need to route ALL the traffic through the VPC connector, even the public traffic.
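As a side note on the mirroring itself: if the NGINX layer keeps getting in the way, the same behavior can be sketched at the application level. A rough Python illustration (PRIMARY_URL and SHADOW_URL are hypothetical environment variables pointing at the two Cloud Run services; storing the responses is reduced to a log call):

    import os
    import threading

    import requests
    from flask import Flask, Response, request

    app = Flask(__name__)
    PRIMARY_URL = os.environ["PRIMARY_URL"]   # existing service
    SHADOW_URL = os.environ["SHADOW_URL"]     # instance containing the changes

    def mirror(path, data, headers):
        # Copy of the request for the shadow service; its response is only
        # recorded, never returned to the caller.
        try:
            shadow = requests.post(f"{SHADOW_URL}/{path}", data=data,
                                   headers=headers, timeout=30)
            app.logger.info("shadow: %s %s", shadow.status_code, shadow.text[:200])
        except requests.RequestException as exc:
            app.logger.warning("shadow request failed: %s", exc)

    @app.route("/<path:path>", methods=["POST"])
    def proxy(path):
        data = request.get_data()
        headers = {"Content-Type": request.content_type or "application/octet-stream"}
        threading.Thread(target=mirror, args=(path, data, headers), daemon=True).start()
        primary = requests.post(f"{PRIMARY_URL}/{path}", data=data,
                                headers=headers, timeout=30)
        return Response(primary.content, status=primary.status_code,
                        content_type=primary.headers.get("Content-Type"))

This only illustrates the request flow; authentication towards the internal services (ID tokens) and proper storage of the two responses still have to be added.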

How to restrict Kubernetes Engine HTTP access to only Firebase apps

I currently have services running on the Google App Engine platform which use the X-Appengine-Inbound-Appid header to limit HTTP requests to our apps only.
I recently found out that some of my services require a static IP, and therefore I would like to move some of the services to Kubernetes Engine.
Is there a way for Kubernetes Engine to secure requests using a similar header approach? The requests should only be allowed from our own Firebase apps.
Ideally I would keep things as simple as possible for the clients using the services.
Possibly I could generate a specific API key for each user which can be blacklisted on abuse, but that already adds quite a bit of complexity.
You can use the NGINX ingress controller as the entry point for your cluster, and add whatever rules you need at the NGINX level.
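If you do end up falling back to the per-client API key idea from the question, the application-level check itself is small. A hedged sketch, with key storage reduced to in-memory sets for illustration (in practice the keys would live in a datastore so one can be blacklisted without redeploying):

    from flask import Flask, abort, request

    app = Flask(__name__)

    # Illustration only: hypothetical keys handed out to your Firebase apps.
    VALID_KEYS = {"key-for-app-a", "key-for-app-b"}
    REVOKED_KEYS = {"key-that-was-abused"}

    @app.before_request
    def check_api_key():
        key = request.headers.get("X-Api-Key", "")
        if key not in VALID_KEYS or key in REVOKED_KEYS:
            abort(403)

    @app.route("/hello")
    def hello():
        return "only callers with a valid key get here\n"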

How do openstack API services communicate with other services?

Every major service in OpenStack has an API service as the endpoint for clients to access, e.g. openstack-nova-api, openstack-glance-api, etc. But each major service also has internal services such as openstack-nova-scheduler and openstack-nova-conductor, and it is recommended to deploy these on nodes other than the node where the API service is running, to get some kind of isolation.
My question is: how does openstack-nova-api know where the real services (openstack-nova-scheduler/openstack-nova-conductor) are running, and how do they communicate with each other? When openstack-nova-api gets a new request, how does it dispatch it to the services that can process it and send back the results?
Internal communication between OpenStack modules is done through the AMQP message queue, typically managed by RabbitMQ.
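Each service registers itself as a consumer on the queue(s) it is responsible for, so the API service never needs to know which host the scheduler or conductor runs on; it just publishes a message and the broker delivers it. In practice this is wrapped by the oslo.messaging library, but the underlying pattern is roughly the following (a bare-bones pika sketch, not OpenStack's actual code; broker host and payload are made up):

    import json
    import pika

    # The broker that all nova services share (hypothetical host).
    connection = pika.BlockingConnection(
        pika.ConnectionParameters(host="rabbitmq.example.internal"))
    channel = connection.channel()
    channel.queue_declare(queue="scheduler")

    # nova-api side: hand the work to whichever scheduler consumes the queue.
    message = {"method": "select_destinations", "args": {"instance_uuid": "example-uuid"}}
    channel.basic_publish(exchange="", routing_key="scheduler", body=json.dumps(message))

    # nova-scheduler side (running on another node) consumes the same queue:
    def on_request(ch, method, properties, body):
        print("scheduling request received:", json.loads(body))
        ch.basic_ack(delivery_tag=method.delivery_tag)

    channel.basic_consume(queue="scheduler", on_message_callback=on_request)
    # channel.start_consuming()  # blocks; each real service runs this in its own process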

Azure: How to connect one cloud service to another in one virtual network

I want to deploy a backend WCF service in a WebRole in Cloud Service 1, with only an internal endpoint.
And deploy an ASP.NET MVC frontend in a WebRole in Cloud Service 2.
Is it possible to use an Azure Virtual Network to call the backend from the frontend via the internal endpoint?
UPDATED: I am just trying to build a simple SOA architecture like this:
Yes and No.
An internal endpoint essentially means that the role instance has been configured to accept traffic on a given port, but that port can NOT receive traffic from outside of the cloud service (hence it being "internal" to the cloud service). Internal endpoints are also not load balanced so you're going to need to "juggle" traffic management from the callers yourself.
Now here is where the issues arise: a virtual network allows you to securely traverse cloud service boundaries, letting a role instance in cloud service 1 call a role instance in cloud service 2. However, to do this, the calling role instance needs to know how to address the receiving instance. If they were in the same cloud service, then you could crawl the cloud service topology via the RoleEnvironment class. But this class only works within the cloud service it exists in; it's not aware of the virtual network.
Now you could have the receiving role instance publish its FQDN to a shared area (say Azure table storage). However, a cloud service will only use its own internal DNS resolution (which only allows you to resolve short names in the same cloud service) unless you have configured the virtual network with a self-hosted DNS server.
So yes, you can do what you're trying to accomplish, but it does present some challenges. Given this, I'd have to ask whether the convenience of separate deployments is enough to justify the additional complexity of the solution. If so, then I'd also look and see if perhaps there's a better way to interconnect the two services rather than direct calls (like a queue-based pattern).
#BrentDaCodeMonkey makes some very valid points in his answer, so read that first.
I, personally, would not want to give up automatic discovery and scale via load balancing. My suggestion would be that you expose the WCF endpoint via an Azure Service Bus Relay endpoint. This will give you a "fixed" endpoint with which to communicate (solving the discovery issue) and infinite scalability because multiple servers can register and listen on the same Service Bus relay address. Additionally it introduces some basic security into the mix via shared key authentication when your web application(s) connect to your WCF services.
If you co-locate the Service Bus instance with your Cloud Services the overhead of the relay in the middle is totally negligible and, IMHO, worth it for the benefits explained above.
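And if the queue-based pattern from the first answer appeals more than the relay, the Service Bus queue side of it is also straightforward. A sketch using the azure-servicebus Python SDK, purely to illustrate the pattern (the actual frontend/backend here are .NET; connection string and queue name are placeholders):

    from azure.servicebus import ServiceBusClient, ServiceBusMessage

    CONN_STR = "Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=<name>;SharedAccessKey=<key>"
    QUEUE = "frontend-to-backend"

    # Frontend side: drop a work item on the queue instead of calling the backend directly.
    with ServiceBusClient.from_connection_string(CONN_STR) as client:
        with client.get_queue_sender(queue_name=QUEUE) as sender:
            sender.send_messages(ServiceBusMessage('{"order_id": 42}'))

    # Backend side: any number of worker instances can compete for messages,
    # which also sidesteps the discovery problem entirely.
    with ServiceBusClient.from_connection_string(CONN_STR) as client:
        with client.get_queue_receiver(queue_name=QUEUE) as receiver:
            for msg in receiver.receive_messages(max_message_count=10, max_wait_time=5):
                print("processing:", str(msg))
                receiver.complete_message(msg)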
