How to send requests between servers in a private network in GCP?

The use case is the following:
a Compute Engine instance with a private IP only (no external IP)
The project has policies that forbid creating external IPs
The goal is to be able to send HTTP requests to the Private Compute Engine Instance from Cloud Build
What are the best practices in Networking to ensure that communication?
Thank you

For now, you can't plug Cloud Build into your VPC, so you can't reach private resources from there.
A new feature called Worker Pools is coming. The principle is to provision Compute Engine instances in your project, and thus in your VPC, to run Cloud Build pipelines. Because the pipeline runs on VMs inside your VPC, it can reach the private IPs of your VPC.
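Once worker pools are available, provisioning one peered with your VPC might look like the following sketch. The pool, region, project, and network names are hypothetical, and the exact gcloud syntax should be checked against the current release:

```shell
# Create a Cloud Build private worker pool peered with your VPC
# (hypothetical names; the VPC must already be peered with Google's
# service networking)
gcloud builds worker-pools create my-private-pool \
    --region=us-central1 \
    --peered-network=projects/my-project/global/networks/my-vpc

# Run a build on the pool so it executes inside your VPC
gcloud builds submit --config=cloudbuild.yaml \
    --region=us-central1 \
    --worker-pool=projects/my-project/locations/us-central1/workerPools/my-private-pool
```

Builds running on the pool can then send HTTP requests directly to the instance's private IP.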

Have you explored Private Google Access? It allows resources that do not have an external IP to access Google's APIs and services. Private Google Access is enabled on a subnet-by-subnet basis.
https://cloud.google.com/vpc/docs/private-access-options
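Enabling Private Google Access is a one-flag change per subnet. A sketch with hypothetical subnet and region names:

```shell
# Enable Private Google Access on an existing subnet (hypothetical names)
gcloud compute networks subnets update my-subnet \
    --region=us-central1 \
    --enable-private-ip-google-access

# Verify the setting took effect
gcloud compute networks subnets describe my-subnet \
    --region=us-central1 \
    --format="get(privateIpGoogleAccess)"
```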

Related

How to expose IP of a VM to only authenticated users in GCP Project

The use case is the following:
Private network for the GCP project
VPN on the local computer that seems to be blocking SSH connections
A VM that has a webapp to be accessed but we don't want to expose the IP to the public network
What are the best practices to keep it private and to access it, e.g. with OAuth authentication?
What are the steps to make and to follow?
Appreciate your help with this.
There are several methods in Google Cloud. The second method below is the recommended one based upon the requirements in your question.
1. If the users have defined public IP addresses, use Google Cloud VPC firewall rules to allow access only from approved IP addresses.
2. Do not assign a static public IP address to the instance. Instead, add an HTTP(S) Load Balancer and enable IAP (Identity-Aware Proxy). Add each user's identity to IAP for identity-based access control.
Additional methods suitable for developers:
My favorite is to use WireGuard (VPN) and use peer-based access control.
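The recommended IAP method boils down to granting each user an IAM role on the load balancer's backend service. A sketch with a hypothetical backend service and user (IAP must already be enabled with an OAuth consent screen configured):

```shell
# Grant a user access to an IAP-protected backend service behind the
# HTTP(S) Load Balancer (hypothetical service name and user)
gcloud iap web add-iam-policy-binding \
    --resource-type=backend-services \
    --service=my-backend-service \
    --member=user:alice@example.com \
    --role=roles/iap.httpsResourceAccessor
```

Users without this role are rejected at the proxy, so the VM itself never needs a public IP.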

How to set the external IP of a specific node in Google Kubernetes Engine?

Unfortunately, we have to interface with a third-party service which, instead of implementing authentication, relies on the request IP to determine whether a client is authorized.
This is problematic because nodes are started and destroyed by Kubernetes, and the external IP changes each time. Is there a way to make sure the external IP is chosen from a fixed set of IPs? That way we could communicate those IPs to the third party and they would be authorized to perform requests. I only found a way to fix the service IP, but that does not affect the individual nodes' IPs at all.
To be clear, we are using Google's Kubernetes Engine, so a custom solution for that environment would work too.
Yes, it's possible by using KubeIP.
You can create a pool of shareable IP addresses and use KubeIP to automatically attach an IP address from the pool to a Kubernetes node.
IP addresses can be created by:
opening the Google Cloud Console
going to VPC Network -> External IP addresses
clicking on "Reserve Static Address" and following the wizard (for the Network Service Tier, I think it needs to be "Premium" for this to work).
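The same reservation can be done from the CLI; the address name, region, and label value below are hypothetical:

```shell
# Reserve a regional static external IP in the Premium network tier,
# so KubeIP can later assign it to a node (hypothetical name/region)
gcloud compute addresses create kubeip-pool-ip1 \
    --region=us-central1 \
    --network-tier=PREMIUM

# KubeIP discovers pool addresses by label; tag the address for a cluster
gcloud compute addresses update kubeip-pool-ip1 \
    --region=us-central1 \
    --update-labels=kubeip=my-cluster
```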
The easiest way to have a single static IP for GKE nodes or the entire cluster is to use a NAT.
You can either use a custom NAT solution or use Google Cloud NAT with a private cluster.
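A minimal Cloud NAT setup for a private cluster might look like the following; the router, NAT, network, and reserved-address names are hypothetical. Attaching a reserved static address gives every node the same fixed egress IP, which is what the third party needs to allowlist:

```shell
# Cloud Router + Cloud NAT so private GKE nodes share one static egress IP
gcloud compute routers create nat-router \
    --network=my-vpc \
    --region=us-central1

gcloud compute routers nats create nat-config \
    --router=nat-router \
    --region=us-central1 \
    --nat-external-ip-pool=my-static-ip \
    --nat-all-subnet-ip-ranges
```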

Does traffic leave GCP when using the public APIs?

For example, let's say I have services A and B in GCP. Imagine that we are sending data from a VM in GCP to Cloud Storage.
A sends B 10 GB of traffic using the public API. Would this result in the data exiting the GCP network and then coming back in, or would the entire exchange stay local to the GCP network?
Google VPC provides private communication between the compute resources you create, and you can also enable private communication to Google-managed services like Cloud Storage, Spanner, big data and analytics, and machine learning.
For more details see here - https://cloud.google.com/compute/docs/private-google-access/private-google-access
The traffic is all internal and private, even though the API resolves to a public destination IP address. The network address translation happens in Google's infrastructure and is transparent to the user.

AWS Code Deploy to VPC instances in AutoScaling Group fails unless Elastic IP is assigned

I have an Auto Scaling group and AWS CodeDeploy set up for a VPC with one public subnet. The VPC instance can access all AWS services through an IAM role.
The base AMI is Ubuntu with the CodeDeploy agent installed on it.
Whenever a scaling event triggers, the Auto Scaling group launches an instance and the instance goes into "Waiting for Lifecycle Event".
AWS CodeDeploy triggers a deployment that enters the "In Progress" state; it remains in that state for more than an hour and then fails.
If, within that hour, I manually assign an Elastic IP, the deployment succeeds immediately.
Is a public/Elastic IP a requirement for CodeDeploy to succeed on VPC instances?
How can I get CodeDeploy to succeed without a public IP?
Have you set up a NAT instance so that the instances can access the internet without a public-facing IP address? The EIP doesn't matter if the instance otherwise has access to the internet. Your code is deployed by the CodeDeploy agent polling the endpoint, so if it can't reach the endpoint, it will never work.
The endpoint that the CodeDeploy agent talks to is not the public domain name like codedeploy.amazonaws.com. The agent talks to the command control endpoint, which is "https://codedeploy-commands.#{cfg.region}.amazonaws.com", according to https://github.com/aws/aws-codedeploy-agent/blob/29d4ff4797c544565ccae30fd490aeebc9662a78/vendor/gems/codedeploy-commands-1.0.0/lib/aws/plugins/deploy_control_endpoint.rb#L9. So you'll need to make sure the private instance can access this command control endpoint.
To connect your VPC to CodeDeploy, you define an interface VPC endpoint for CodeDeploy. An interface endpoint is an elastic network interface with a private IP address that serves as an entry point for traffic destined to a supported AWS service. The endpoint provides reliable, scalable connectivity to CodeDeploy without requiring an internet gateway, network address translation (NAT) instance, or VPN connection.
https://docs.aws.amazon.com/codedeploy/latest/userguide/vpc-endpoints.html
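Creating such an interface endpoint from the CLI might look like this. All IDs and the region are hypothetical; note that the agent's command-control channel uses the codedeploy-commands-secure service in addition to the codedeploy service:

```shell
# Interface VPC endpoint for the CodeDeploy command-control channel
# (hypothetical VPC/subnet/security-group IDs and region)
aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0abc123 \
    --vpc-endpoint-type Interface \
    --service-name com.amazonaws.us-east-1.codedeploy-commands-secure \
    --subnet-ids subnet-0abc123 \
    --security-group-ids sg-0abc123 \
    --private-dns-enabled
```

With private DNS enabled, the agent's calls to codedeploy-commands.us-east-1.amazonaws.com resolve to the endpoint's private IP, so no public IP, NAT, or internet gateway is needed.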

How to set up Cloud Foundry in my data center

I want to deploy a private Cloud Foundry in my data center. I do want to expose port 80 traffic for internet access.
I do not want to expose all the Cloud Foundry roles (Cloud Controller, DEA, Health Manager, etc.) on the public network.
Is there a best practice document on configuring Cloud Foundry?
Do I need to implement an external router that will do port 80 port forwarding to the Uhuru NGINX Router?
The network isolation is done at the cloud layer, i.e. vSphere, OpenStack, vCloud, or AWS. Assuming you deploy this using BOSH, you need to configure your networks so that everything is on a private network, except for the routers, which need to have an interface on the internet-facing side. But in front of the routers you should have your load balancers, so not even the routers need to be connected directly to the Internet.