GCP - No Cloud NAT but given public IP leaves VPC - networking

We have a VPC whose VMs have private IP addresses only. There is no Cloud NAT attached to this VPC, so we should not be able to reach public IPs.
Despite the above, we found that we were able to curl the following public IP address from an internal VM.
64.233.166.153
The subnet of the VM has Private Google Access enabled and there is a default route to the default internet gateway; no other route entry matches this IP. But there is no Cloud NAT.
My questions:
How is it possible to reach public IPs without NAT at all?
Are there other reachable public IPs? (without Cloud NAT)
What are these IPs used for?

Looks like the IP address belongs to a GCP resource/API.
As per the GCP documentation[1], when Private Google Access (PGA) is enabled on the subnet used by a VM's network interface, VM instances without external IPs can connect to the set of external IP addresses used by Google APIs and services.
This could be the reason why your VM was able to speak with that public IP.
[1] https://cloud.google.com/vpc/docs/configure-private-google-access
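For reference, Private Google Access is a per-subnet setting. A minimal sketch with gcloud, assuming a subnet called my-subnet in europe-west1 (both names are placeholders), would look like this:

# enable Private Google Access on the subnet (placeholder names)
gcloud compute networks subnets update my-subnet \
    --region=europe-west1 \
    --enable-private-ip-google-access

# confirm the flag is set
gcloud compute networks subnets describe my-subnet \
    --region=europe-west1 \
    --format="get(privateIpGoogleAccess)"

With PGA enabled and the default route to the internet gateway in place, traffic to Google-owned API addresses such as the 64.233.166.153 address above stays on Google's network even though the destination is a public IP.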

The answer provided by #dp nulletla is right.
#Robert - For the use case you mentioned in the comments - reaching the BigQuery API from GCE VMs with private IPs without leaving Google's backbone network - I believe VPC Private Service Connect (PSC) for Google APIs is the right solution approach for you.
By default, if you have an application that uses a Google service, such as Cloud Storage, your application connects to the default DNS name for that service, such as storage.googleapis.com. Even though the IP addresses for the default DNS names are publicly routable, traffic sent from Google Cloud resources remains within Google's network.
With Private Service Connect, you can create private endpoints using global internal IP addresses within your VPC network. You can assign DNS names to these internal IP addresses with meaningful names like storage-vialink1.p.googleapis.com and bigtable-adsteam.p.googleapis.com. These names and IP addresses are internal to your VPC network and any on-premises networks that are connected to it using Cloud VPN tunnels or VLAN attachments. You can control which traffic goes to which endpoint, and can demonstrate that traffic stays within Google Cloud.
Basically, when you create a PSC endpoint, you assign a private IP address to it. Whenever you reach the respective Google API, e.g. BigQuery, you always connect via the PSC endpoint IP. This way you can control egress traffic in your VPC firewall rules with a deny-all rule plus an allow rule for the PSC endpoint IP only.
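As a rough sketch (the names and the 10.10.0.5 address below are placeholders, not from your setup), creating such an endpoint with gcloud means reserving a global internal address and pointing a forwarding rule at the Google APIs bundle:

# reserve a global internal IP for the PSC endpoint
gcloud compute addresses create psc-googleapis-ip \
    --global \
    --purpose=PRIVATE_SERVICE_CONNECT \
    --addresses=10.10.0.5 \
    --network=my-vpc

# create the endpoint (forwarding rule) for the all-apis bundle
gcloud compute forwarding-rules create pscgoogleapis \
    --global \
    --network=my-vpc \
    --address=psc-googleapis-ip \
    --target-google-apis-bundle=all-apis

You would then point DNS names such as bigquery.googleapis.com (or a *.p.googleapis.com name) at 10.10.0.5 inside your VPC so clients resolve to the endpoint.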
Additionally, you can go one step further and restrict the traffic/data going to the BigQuery API from your GCE VMs/VPC at a more granular level with VPC Service Controls. By setting a VPC SC perimeter you can define and enforce more restrictive policies to avoid any sort of data exfiltration.
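For illustration only (the perimeter name, project number and policy ID below are made up), a perimeter restricting BigQuery could be created roughly like this:

# create a service perimeter around the project and restrict the BigQuery API
gcloud access-context-manager perimeters create bq_perimeter \
    --title="bq-perimeter" \
    --resources=projects/123456789012 \
    --restricted-services=bigquery.googleapis.com \
    --policy=POLICY_ID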
Thanks
BR
Omkar

Related

How can the public IP address of cloud VMs remain the same even if the cloud VM gets migrated to another data centre?

Let us assume I have hosted my application on some cloud and want to migrate it to AWS for reason XYZ.
In the previous data centre, I have a public IP address assigned to the cloud VMs (the application).
Now, if I'm migrating to AWS, I want the same IP address as the existing one.
So how can we achieve this as network engineers?
The CSP must ensure that the public IP address of cloud VMs remains the same even if the cloud VM network is being served from multiple CSP data centres.
There are two situations:
You want to change your cloud vendor. In this case, the public IP can't stay the same.
You stay with a single cloud provider (e.g. AWS) and only change the region. In this case you can use an Elastic IP (a static public IP within a region) or, if you need static IP addresses that work across regions, AWS Global Accelerator.
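As a small illustration (the region and instance ID are placeholders), keeping a fixed public IP within AWS works by allocating an Elastic IP and associating it with whatever instance currently hosts the application:

# allocate a static public IP in the target region
aws ec2 allocate-address --domain vpc --region eu-west-1
# associate the returned allocation with the migrated instance
aws ec2 associate-address \
    --instance-id i-0123456789abcdef0 \
    --allocation-id eipalloc-0123456789abcdef0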

Azure Network Security Group Vs Route Tables

Networking newbie here. From the documentation it feels like both NSGs and route tables (UDRs) are doing the same thing - defining ACLs at multiple levels (VNet, subnet, VM):
https://learn.microsoft.com/en-us/azure/virtual-network/network-security-group-how-it-works
https://learn.microsoft.com/en-us/azure/virtual-network/virtual-networks-udr-overview
So how are they different and when is each used?
Thanks.
Azure automatically creates a route table for each subnet within an Azure virtual network and adds system default routes to the table. The route table is like a networking map that directs traffic from one place to another via a next hop. It defines the "path" but does not filter traffic.
An Azure network security group is used to filter network traffic to and from Azure resources in an Azure virtual network. It contains security rules that allow or deny inbound network traffic to, or outbound network traffic from, several types of Azure resources. If there is no route from a subnet to a destination, you do not even need to configure security rules for that destination, because there is no path. So when you design NSG rules, a working network route should already exist.
For example, we usually access an Azure VM in a virtual network via SSH or RDP over the Internet, but exposing port 22 or 3389 to the whole Internet is less secure. We can restrict access to the Azure VM by specifying the source IP address in the NSG. This setting allows only a specific IP address or range of IP addresses to connect to the VM. Read more details here. In this scenario, we also need to ensure that there is a route to the Internet from the Azure virtual network and vice versa.
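A minimal sketch of such a rule with the Azure CLI, assuming a resource group, NSG name and source IP that are placeholders here:

# allow SSH only from 203.0.113.10; other sources fall through to the default DenyAllInbound rule
az network nsg rule create \
    --resource-group my-rg \
    --nsg-name my-vm-nsg \
    --name AllowSshFromOffice \
    --priority 100 \
    --direction Inbound \
    --access Allow \
    --protocol Tcp \
    --source-address-prefixes 203.0.113.10 \
    --destination-port-ranges 22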

GKE: IP Addresses

I have noticed something strange with my service deployed on GKE and I would like to understand...
When I launch kubectl get services I can see my service's EXTERNAL-IP. Let's say 35.189.192.88. That's the one I use to access my application.
But when my application tries to access another external API, the owner of the API sees a different IP address from me: 35.205.57.21
Can you explain why? And is it possible to make this second IP static?
My app has to access an external API, and the owner of this API filters access by IP address.
Thanks!
The IP address you see on the service as EXTERNAL-IP is a load balancer IP address reserved and assigned to your new service, and it is used only for incoming traffic.
When your pod tries to reach a service outside the cluster, two scenarios can happen:
If the destination API is inside the same VPC, no translation of IP addresses is needed, and on recent versions of Kubernetes you will reach the API using the pod IP address assigned by Kubernetes from the 10.0.0.0/8 range.
When the target is outside the VPC, it has to be reached through some kind of NAT; in that case the default gateway of your VPC is used and the traffic is NATed to the IP address of the node where the pod is running.
If you need a static IP address in order to whitelist it, you need to use Cloud NAT:
https://cloud.google.com/nat/docs/overview
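A rough sketch of that setup with gcloud, assuming placeholder names and europe-west1 as the region:

# reserve the static external IP you would hand to the API owner
gcloud compute addresses create gke-egress-ip --region=europe-west1

# create a Cloud Router and a NAT config that uses only that reserved IP
gcloud compute routers create gke-nat-router \
    --network=my-vpc --region=europe-west1
gcloud compute routers nats create gke-nat-config \
    --router=gke-nat-router --region=europe-west1 \
    --nat-external-ip-pool=gke-egress-ip \
    --nat-all-subnet-ip-ranges

Note that Cloud NAT only applies to nodes without external IPs (e.g. a private cluster); otherwise the node's own external IP is still used for egress.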

How to set the external IP of a specific node in Google Kubernetes Engine?

Unfortunately, we have to interface with a third-party service which instead of implementing authentication, relies on the request IP to determine if a client is authorized or not.
This is problematic because nodes are started and destroyed by Kubernetes, and each time the external IP changes. Is there a way to make sure the external IP is chosen from a fixed set of IPs? That way we could communicate those IPs to the third party and they would be authorized to perform requests. I only found a way to fix the service IP, but that does not change the individual nodes' IPs at all.
To be clear, we are using Google's Kubernetes Engine, so a custom solution for that environment would work too.
Yes, it's possible by using KubeIP.
You can create a pool of shareable IP addresses, and use KubeIP to automatically attach IP address from the pool to the Kubernetes node.
IP addresses can be created by:
opening Google Cloud Dashboard
going to VPC Network -> External IP addresses
clicking on "Reserve Static Address" and following the wizard (on the Network Service Tier, I think it needs to be "Premium" for this to work); the gcloud equivalent is sketched below.
The easiest way to have a single static IP for GKE nodes or the entire cluster is to use a NAT.
You can either use a custom NAT solution or use Google Cloud NAT with a private cluster.

How do I configure vpc to allow outbound traffic over customer gateway

I have configured a VPC to communicate with an on-prem private network as outlined here. I am able to ping servers in my on-prem network through the virtual private gateway. I have two private subnets, and the route table associated with each of those subnets is configured as below:
10.255.254.0/23 local
0.0.0.0/0 vgw-xxxxxxx
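For reference, the default route shown above corresponds to something like this with the AWS CLI (the route table ID is a placeholder; the vgw ID is the one from my setup):

aws ec2 create-route \
    --route-table-id rtb-0123456789abcdef0 \
    --destination-cidr-block 0.0.0.0/0 \
    --gateway-id vgw-xxxxxxx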
My expectation is that all of my traffic, Internet-bound or otherwise, is sent over the VGW to the CGW and is then subject to our on-premises firewall policies. In fact, the article linked above specifically says that is the case:
The instances in the VPN-only subnet can't reach the Internet directly; any Internet-bound traffic must first traverse the virtual private gateway to your network, where the traffic is then subject to your firewall and corporate security policies.
When I run traceroute to www.google.com from a server on one of the private subnets, the traffic just dies on the first hop.
I know that this can be achieved by adding a NAT to the public subnet, but I would prefer that all traffic flow through the on prem network instead.
What piece am I missing to make this work?
