Google Cloud VPN over NAT - networking

I'm trying to configure a VPN between Google Cloud and a few on-premises subnetworks hidden behind NAT. The problem is that I have only one external IP address and, as I've said, I'd like to create multiple VPN connections to a few (~10) subnetworks hidden behind that NAT. Is it possible?

Unfortunately, it is not possible.
Cloud VPN only supports one-to-one NAT, which means a single internal IP mapped to a single external IP.
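That said, since all ~10 on-premises subnets sit behind the same external IP, a workaround you may want to explore is a single tunnel to that one IP that carries routes for all the subnets, summarized where possible. Below is a minimal sketch (Python, standard library only) of the summarization step; all subnet values are made up:

    import ipaddress

    # Hypothetical on-premises subnets hiding behind the single NAT IP.
    onprem_subnets = [
        ipaddress.ip_network("10.10.0.0/24"),
        ipaddress.ip_network("10.10.1.0/24"),
        ipaddress.ip_network("10.10.2.0/24"),
        ipaddress.ip_network("10.10.3.0/24"),
    ]

    # collapse_addresses merges adjacent ranges into the fewest prefixes,
    # which is what you would configure as the tunnel's remote ranges.
    for net in ipaddress.collapse_addresses(onprem_subnets):
        print(net)  # -> 10.10.0.0/22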

Related

GCP - No Cloud NAT but given public IP leaves VPC

We have a VPC that has VMs with private IP addresses only. There is no Cloud NAT attached to this VPC, so we should not be able to reach public IPs.
Despite the above, we found that we were able to curl the following public IP address from an internal VM.
64.233.166.153
The subnet of the VM has Private Google Access enabled, and there is a default route to the default internet gateway; no other route entry matches this IP. But there is no Cloud NAT.
My questions:
How is it possible to reach public IPs without NAT at all?
Are there other reachable public IPs? (without Cloud NAT)
What are these IPs used for?
It looks like the IP address belongs to a GCP resource/API.
As per the GCP documentation[1], when Private Google Access (PGA) is enabled on the subnet used by a VM's network interface, VM instances without an external IP can connect to the set of external IP addresses used by Google APIs and services.
This is the likely reason why your VM was able to talk to the public IP.
[1] https://cloud.google.com/vpc/docs/configure-private-google-access
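A quick check you can run yourself is a reverse-DNS lookup on that address; a minimal sketch (Python, standard library only), assuming a PTR record is published for it:

    import socket

    # Reverse-DNS the public IP the VM could curl without Cloud NAT.
    host, _, _ = socket.gethostbyaddr("64.233.166.153")
    print(host)  # expected: a *.1e100.net name, Google's server domain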
The answer provided by @dp nulletla is right.
@Robert - For the use case you mentioned in the comments - reaching the BigQuery API from GCE with a private IP without leaving Google's backbone network - I believe VPC Private Service Connect (PSC) for Google APIs is the right solution approach for you.
By default, if you have an application that uses a Google service, such as Cloud Storage, your application connects to the default DNS name for that service, such as storage.googleapis.com. Even though the IP addresses for the default DNS names are publicly routable, traffic sent from Google Cloud resources remains within Google's network.
With Private Service Connect, you can create private endpoints using global internal IP addresses within your VPC network. You can assign DNS names to these internal IP addresses with meaningful names like storage-vialink1.p.googleapis.com and bigtable-adsteam.p.googleapis.com. These names and IP addresses are internal to your VPC network and any on-premises networks that are connected to it using Cloud VPN tunnels or VLAN attachments. You can control which traffic goes to which endpoint, and can demonstrate that traffic stays within Google Cloud.
Basically, when you create a PSC endpoint, you assign a private IP address to it. To reach the respective Google API, e.g. BigQuery, you always connect via the PSC endpoint IP. This way you can control egress traffic in your VPC firewall rules with a deny-all rule that allows only the PSC endpoint IP.
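For illustration, here is a minimal sketch (Python, google-cloud-bigquery) of pointing a client at a PSC endpoint; the project ID and the endpoint DNS name are made-up placeholders, not real resources:

    from google.cloud import bigquery

    client = bigquery.Client(
        project="my-project",  # placeholder project ID
        client_options={
            # Send API calls to the PSC endpoint's private DNS name
            # instead of the public bigquery.googleapis.com.
            "api_endpoint": "https://bigquery-myendpoint.p.googleapis.com",
        },
    )

    # Queries now egress only via the PSC endpoint's internal IP.
    for row in client.query("SELECT 1 AS ok").result():
        print(row.ok)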
Additionally, you can go one step further and restrict the traffic/data going to the BigQuery APIs from your GCE/VPC at a more granular level using VPC Service Controls. By setting a VPC SC perimeter you can define and enforce more restrictive policies to avoid any sort of data exfiltration.
Thanks
BR
Omkar

How to connect multiple clouds with overlapping VPCs?

We are creating a Console to administer, view logs and metrics for, and create resources on Kubernetes in a multicloud environment.
The Console (a web app) is deployed on GKE in GCP, but we can't figure out how to connect and reach Kubernetes API servers in multiple VPCs with overlapping IPs, without exposing them on public IPs.
I drew a little diagram to illustrate the problem.
Are there products or best practices for doing this securely?
Product vendors such as MongoDB Atlas or Confluent Cloud seem to have solved this issue: they can create infrastructure in multiple clouds and administer it.
It's not possible to connect two overlapping networks with VPN, even if they're in different clouds (GCP & AWS).
I'd suggest using NAT on both sides and connecting the networks with VPN.
Here's some documentation that may help you; a small sketch of the idea follows the reading list below. Unfortunately it's quite a bit of reading and setting up. It's not the easiest solution, but it has the benefit of being reliable, and it's a fairly old and tested approach.
General docs
Configure NAT to Enable Communication Between Overlapping Networks
Using NAT in Overlapping Networks
GCP side
Cloud NAT overview
Using Cloud NAT
AWS side
NAT instances
Comparison of NAT instances and NAT gateways
Your second option is to split the original networks into smaller chunks so they would not overlap, but that's not always possible (the networks may already be small and many IPs already used up...).
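To make the NAT option concrete, here is a minimal sketch (Python, standard library only) of the idea: detect the overlap, then map one side 1:1 into an unused range. All ranges are illustrative:

    import ipaddress

    gcp = ipaddress.ip_network("192.168.2.0/24")
    aws = ipaddress.ip_network("192.168.2.0/24")
    nat_range = ipaddress.ip_network("10.100.2.0/24")  # unused on both sides

    print(gcp.overlaps(aws))  # True -> a plain VPN between them cannot work

    def translate(aws_host: str) -> str:
        """Static 1:1 NAT: map an AWS host into the non-overlapping range."""
        offset = int(ipaddress.ip_address(aws_host)) - int(aws.network_address)
        return str(nat_range.network_address + offset)

    # GCP would reach the AWS host 192.168.2.10 as 10.100.2.10 via the NAT box.
    print(translate("192.168.2.10"))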
It depends on a couple of factors in the environments.
To access an overlapping network you need some form of gateway.
It can be a proxy (SOCKS/HTTP/other) or a router/gateway (with NAT).
If, from GCP, you can access 192.168.23.0/24 or any other subnet that can connect to the AWS 192.168.2.0/24 subnet, then you can use either one of the solutions.
I assume that AWS and GCP can provide the tunnel between the gateway/proxy networks.
If you don't need a security layer for the tunnel, you can use a VXLAN tunnel and secure the TCP/other application protocol instead.
Using Google Cloud VPN with an AWS Virtual Private Gateway, you can accomplish such a thing. Google gives a detailed description in this documentation.
It describes two VPN topologies:
A site-to-site Route-based IPsec VPN tunnel configuration.
A site-to-site IPsec VPN tunnel configuration using Google Cloud Router and dynamic routing with the BGP protocol.
Additionally, when CIDR ranges overlap, you would need to create new, non-overlapping VPC/CIDR ranges. Otherwise, you could never connect to instances whose IP addresses exist in both AWS and GCP.

Share an L2 network over an L3 network with VXLAN

I have a network problem that I can't solve. I have a server at Hetzner and a server at OVH, and I'm trying to use some OVH IPs on my Hetzner server and some Hetzner IPs on my OVH server, because I need flexibility in my network.
My VMs are on Proxmox. I created a VXLAN between the two servers and bridged the VXLAN into the vmbr0 interface of Proxmox on one side, and it works, but OVH and Hetzner informed me that I was sending packets with the wrong MAC address, so I don't know what to do.
I'm really not an expert in computer networking.
Thank you in advance to all those who can help me.
VXLAN offers a lot of flexibility; however, it might not be the best answer in this case. You may be better off using VPN tunnels between both cloud environments. That is, assuming you can have multiple VMs within the tenant on those providers, have them point their default gateways towards a VM within your control, and use that VM as a firewall/VPN concentrator. From there you can establish a site-to-site VPN between the cloud environments, and you can NAT the traffic from your provider's WAN IP address to the appropriate host, whether locally or at the remote environment.
If you must have L2 connectivity between your cloud environments, I can speak from experience only in a Juniper environment; in that case we would place a vMX VM behind a vSRX VM. The vMX would act as the EVPN/VXLAN VTEP, and your VMs would set it as their default gateway. The vSRX would establish IPsec site-to-site tunnels, through which data-center interconnect (DCI) traffic would flow. L2 traffic would flow through the vMX, where it would be encapsulated in a VXLAN tunnel, which would route through the SRX, which would then encapsulate it in an encrypted IPsec tunnel before sending it to the other data center. The details might be a little too complex for a Stack Exchange answer, though.
Hope this helps point you in the right direction!

Setting up VPN between GCP Projects to access SQL Engine subnetwork

Please bear with me as my background is development and not sysadmin. Networking is something I'm learning as I go and thus why I'm writing here :)
A couple of months ago I started the process of designing the network structure of our cloud. After a couple of exchanges here, I settled on having one project host a VPN tunnel to the on-premises resources, and some other projects host our products once they are moved from the on-premises servers.
All is good and I managed to set things up.
Now, one of the projects is dedicated to "storage": that means, for us, databases, buckets for static data to be accessed around, etc.
I created a first MySQL database (2nd gen) to start testing and noticed that the only option available to access the SQL databases from internal IPs was via the "parent project" subnetwork.
I realised that SQL Engine creates a subnetwork dedicated to just that. It's written in the documentation as well, silly me.
No problem, I tear it down, enable Private Service Connection, create an allocated IP range in the VPC management and set it to export routes.
Then I went back to the SQL Engine a created a new database. As expected the new one had the IP assigned to the allocated IP range set up previously.
Now, I expected every peered network to be able to see the SQL subnetwork as well, but apparently not. Again, RTFM, you silly goose. It was written there as well.
I activated a Bronze support subscription with GCP to get some guidance, but what I got was a repeated "create a VPN tunnel between the two projects", which left me a little disappointed, as the concept of peered VPCs is so good.
But anyway, let's do that then.
I created a tunnel pointing to a gateway on the project that will host the K8s clusters, and vice versa.
The dashboard tells me that the tunnels are established, but apparently there is a problem with the BGP settings, because they have been hanging on "Waiting for peer" on both sides since forever.
At this point I'm looking for anything related to BGP, but all I can find is how it works in theory, what it is used for, which ASNs are reserved, etc.
I really need someone to point out the obvious and tell me what I fucked up here, so:
This is the VPN tunnel on the project that hosts the databases:
And this is the VPN tunnel on the project where the products will be deployed, which needs to access the databases.
Any help is greatly appreciated!
Regarding the BGP status "Waiting for peer" in your VPN tunnels: I believe this is due to the configured Cloud Router BGP IP and BGP peer IP. When configuring, the Cloud Router BGP IP address of tunnel1 must be the BGP peer IP address for tunnel2, and the BGP peer IP address for tunnel1 must be the Cloud Router BGP IP address of tunnel2.
Referring to your scenario, the IP address for stage-tunnel-to-cerberus should be:
Router BGP IP address: 169.254.1.2
and,
BGP Peer IP address: 169.254.1.1
This should put your VPN tunnels' BGP session status into "BGP established".
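To sanity-check the mirrored addressing, here is a minimal sketch (Python, standard library only) using the values from this answer:

    import ipaddress

    link = ipaddress.ip_network("169.254.1.0/30")  # link-local /30 for the BGP session

    tunnel1 = {"router_bgp_ip": "169.254.1.1", "bgp_peer_ip": "169.254.1.2"}
    tunnel2 = {"router_bgp_ip": "169.254.1.2", "bgp_peer_ip": "169.254.1.1"}

    # Each side's peer IP must equal the other side's router IP...
    assert tunnel1["bgp_peer_ip"] == tunnel2["router_bgp_ip"]
    assert tunnel2["bgp_peer_ip"] == tunnel1["router_bgp_ip"]

    # ...and all four addresses must sit inside the same /30.
    for t in (tunnel1, tunnel2):
        for ip in t.values():
            assert ipaddress.ip_address(ip) in link

    print("Addressing is consistent; the session should establish.")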
You can't achieve what you want with VPN or with VPC Peering. In fact, there is a VPC rule that prevents peering transitivity, described in the restrictions section:
Only directly peered networks can communicate. Transitive peering is not supported. In other words, if VPC network N1 is peered with N2 and N3, but N2 and N3 are not directly connected, VPC network N2 cannot communicate with VPC network N3 over VPC Network Peering.
Now, take what you want to achieve. When you use a Cloud SQL private IP, you create a peering between your VPC and the VPC of Cloud SQL. And you have another peering (or VPN tunnel) for the SQL Engine.
SQL Engine -> Peering -> Project -> Peering -> Cloud SQL
Like this, you can't.
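Here is a minimal sketch (Python) of that non-transitivity rule, with made-up network names:

    # Only direct peerings carry traffic; there is no transitive hop.
    direct_peerings = {
        ("project-vpc", "cloudsql-vpc"),   # created by the Cloud SQL private IP setup
        ("project-vpc", "sqlengine-vpc"),  # your other peering / VPN tunnel
    }

    def can_talk(a: str, b: str) -> bool:
        """True only for directly peered networks."""
        return (a, b) in direct_peerings or (b, a) in direct_peerings

    print(can_talk("sqlengine-vpc", "project-vpc"))   # True
    print(can_talk("sqlengine-vpc", "cloudsql-vpc"))  # False: no transit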
But you can use a Shared VPC. Create a Shared VPC, add your two projects to it, and create a common subnet for the SQL Engine and the Cloud SQL peering. That should work.
But be careful: not all VPC features are available with Shared VPC. For example, the serverless VPC connector is not yet compatible with it.
Hope this helps!
The original setup in the OP's question should work, i.e.
Network 1 <--- (VPN) ---> Network 2 <--- (Peered) ---> Cloud SQL network
(the network and the peering are created by GCP)
Then a resource in Network 1 is able to access a MySQL instance created in the Cloud SQL network.

How to set the external IP of a specific node in Google Kubernetes Engine?

Unfortunately, we have to interface with a third-party service which, instead of implementing authentication, relies on the request IP to determine whether a client is authorized.
This is problematic because nodes are started and destroyed by Kubernetes, and each time the external IP changes. Is there a way to make sure the external IP is chosen from a fixed set of IPs? That way we could communicate those IPs to the third party and they would be authorized to perform requests. I only found a way to fix the service IP, but that does not change the individual nodes' IPs at all.
To be clear, we are using Google's Kubernetes Engine, so a custom solution for that environment would work too.
Yes, it's possible by using KubeIP.
You can create a pool of shareable IP addresses and use KubeIP to automatically attach an IP address from the pool to a Kubernetes node.
IP addresses can be created by (see the sketch after these steps):
opening the Google Cloud Dashboard
going to VPC Network -> External IP addresses
clicking on "Reserve Static Address" and following the wizard (for the Network Service Tier, I think it needs to be "Premium" for this to work).
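Once reserved, you can check the pool KubeIP will draw from; a minimal sketch (Python, google-cloud-compute), where the project and region are placeholders for your own values:

    from google.cloud import compute_v1

    client = compute_v1.AddressesClient()
    # List the reserved static external IPs in one region; a status of
    # RESERVED means the address is unattached and free for KubeIP to use.
    for addr in client.list(project="my-project", region="europe-west1"):
        print(addr.name, addr.address, addr.status)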
The easiest way to have a single static IP for GKE nodes or for the entire cluster is to use NAT.
You can either use a custom NAT solution or use Google Cloud NAT with a private cluster.
