Setting up VPN between GCP Projects to access SQL Engine subnetwork - networking

Please bear with me, as my background is development, not sysadmin. Networking is something I'm learning as I go, which is why I'm writing here :)
A couple of months ago I started designing the network structure of our cloud. After a couple of exchanges here, I settled on having one project that hosts a VPN tunnel to the on-premises resources, and several other projects that will host our products once they are moved off the on-premises servers.
All is good and I managed to set things up.
Now, one of the projects is dedicated to "storage": for us that means databases, buckets for static data to be accessed from elsewhere, etc.
I created a first MySQL database (2nd gen) to start testing and noticed that the only option for accessing the SQL databases from internal IPs was through the "parent project" subnetwork.
I realised that SQL Engine creates a subnetwork dedicated to just that. It's written in the documentation as well, silly me.
No problem: I tore it down, enabled the Private Service Connection, created an allocated IP range in the VPC management page and set it to export routes.
Then I went back to SQL Engine and created a new database. As expected, the new one had an IP assigned from the allocated IP range set up previously.
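For reference, the console steps above map roughly onto these gcloud commands (the network and range names here are placeholders, not my actual ones):

    # Allocate a range for private services access on the storage VPC
    gcloud compute addresses create sql-peering-range \
        --global --purpose=VPC_PEERING --prefix-length=16 \
        --network=storage-vpc

    # Create the private services connection used by Cloud SQL
    gcloud services vpc-peerings connect \
        --service=servicenetworking.googleapis.com \
        --ranges=sql-peering-range --network=storage-vpc

    # Export custom routes over the auto-created peering
    gcloud compute networks peerings update servicenetworking-googleapis-com \
        --network=storage-vpc --export-custom-routes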
Now, I expected every peered network to be able to see the SQL subnetwork as well, but apparently not. Again, RTFM, you silly goose. It was written there as well.
I activated a Bronze support subscription with GCP to get some guidance, but all I got was a repeated "create a VPN tunnel between the two projects", which left me a little disappointed, as the concept of VPC peering is so good.
But anyway, let's do that then.
I created a tunnel pointing to a gateway in the project that will host the K8s clusters, and vice versa.
The dashboard tells me that the tunnels are established, but apparently there is a problem with the BGP settings, because they have been hanging on "Waiting for peer" on both sides forever.
At this point I'm looking for anything related to BGP, but all I can find is how it works in theory, what it is used for, which ASN ranges are reserved, etc.
I really need someone to point out the obvious and tell me what I fucked up here, so:
This is the VPN tunnel in the project that hosts the databases:
And this is the VPN tunnel in the project where the products that need to access the databases will be deployed:
Any help is greatly appreciated!

Regarding the BGP status "Waiting for peer" on your VPN tunnels, I believe this is due to how the Cloud Router BGP IP and BGP peer IP are configured. The Cloud Router BGP IP address of tunnel1 has to be the BGP peer IP address for tunnel2, and the BGP peer IP address for tunnel1 has to be the Cloud Router BGP IP address of tunnel2.
Referring to your scenario, the IP addresses for stage-tunnel-to-cerberus should be:
Router BGP IP address: 169.254.1.2
and,
BGP Peer IP address: 169.254.1.1
This should put your VPN tunnels' BGP session status into "BGP established".
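As a sketch, the mirrored addressing looks like this with gcloud; the router names, region and ASNs below are placeholders, only the 169.254.1.1/169.254.1.2 pairing matters:

    # Project hosting the databases: router IP .1, peer IP .2
    gcloud compute routers add-interface cerberus-router \
        --interface-name=if-to-stage --ip-address=169.254.1.1 \
        --mask-length=30 --vpn-tunnel=cerberus-tunnel-to-stage \
        --region=europe-west1
    gcloud compute routers add-bgp-peer cerberus-router \
        --peer-name=stage-peer --interface=if-to-stage \
        --peer-ip-address=169.254.1.2 --peer-asn=65002 \
        --region=europe-west1

    # Project hosting the products: the same pair, swapped
    gcloud compute routers add-interface stage-router \
        --interface-name=if-to-cerberus --ip-address=169.254.1.2 \
        --mask-length=30 --vpn-tunnel=stage-tunnel-to-cerberus \
        --region=europe-west1
    gcloud compute routers add-bgp-peer stage-router \
        --peer-name=cerberus-peer --interface=if-to-cerberus \
        --peer-ip-address=169.254.1.1 --peer-asn=65001 \
        --region=europe-west1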

You can't achieve what you want with VPN or with VPC peering. In fact, there is a rule in VPC which prevents peering transitivity, described in the restrictions section of the documentation:
Only directly peered networks can communicate. Transitive peering is not supported. In other words, if VPC network N1 is peered with N2 and N3, but N2 and N3 are not directly connected, VPC network N2 cannot communicate with VPC network N3 over VPC Network Peering.
Now, look at what you want to achieve. When you use a Cloud SQL private IP, you create a peering between your VPC and the Cloud SQL VPC. And you have another peering (or VPN tunnel) for the SQL engine.
SQL Engine -> Peering -> Project -> Peering -> Cloud SQL
Set up like this, it can't work.
But you can use a shared VPC. Create a shared VPC, add your two projects to it, and create a common subnet for the SQL engine and the Cloud SQL peering. That should work, as sketched below.
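As a rough sketch of the wiring with gcloud (project IDs are placeholders; you need a Shared VPC Admin role at the organisation level):

    # Turn the host project into a shared VPC host
    gcloud compute shared-vpc enable host-project-id

    # Attach both service projects to it
    gcloud compute shared-vpc associated-projects add storage-project-id \
        --host-project=host-project-id
    gcloud compute shared-vpc associated-projects add products-project-id \
        --host-project=host-project-id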
But be careful: not all VPC features are available with a shared VPC. For example, serverless VPC connectors aren't compatible with it yet.
Hope this helps!

The original setup in the OP's question should work, i.e.
Network 1 <--- (VPN) ---> Network 2 <--- (Peered) ---> CloudSQL network
(the CloudSQL network and the peering are created by GCP)
Then a resource in Network 1 is able to access a MySQL instance created in the CloudSQL network.
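For that to work over the VPN, the Cloud SQL peering has to export its routes and the Cloud Router in Network 2 has to advertise the allocated range back towards Network 1; a hedged sketch, with the names and the 10.20.0.0/16 range as placeholders:

    # Exchange custom routes on the service networking peering
    gcloud compute networks peerings update servicenetworking-googleapis-com \
        --network=network-2 --export-custom-routes

    # Advertise the allocated Cloud SQL range over the VPN's BGP session
    gcloud compute routers update network-2-router --region=europe-west1 \
        --advertisement-mode=CUSTOM \
        --set-advertisement-groups=ALL_SUBNETS \
        --set-advertisement-ranges=10.20.0.0/16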

Related

Connect to a GCP Redis instance that is connected to a different VPC network

On GCP, peered VPC connections are not transitive and Memorystore exists in its own VPC network. This means that it's not possible to connect to a Redis instance from multiple VPC networks; only a single authorized network gets access.
This diagram illustrates how VPC-2 cannot connect to VPC-1's Redis instance:
[Redis]-[VPC-1]-[VPC-2]
The only proposed solution I've found so far for connecting from multiple VPC networks is to host a Redis proxy (nutcracker), but this feels like a lot of work and potential maintenance down the road.
Is there a managed service offered by GCP that can do the trick?
I've recently connected a private GKE cluster to Cloud Build following this documentation, which makes use of routers and tunnels. Is it possible to use a Cloud Router and VPN tunnels to proxy the connection?
Another solution, which lets you manage the peered VPCs within the same project:
As you know, VPC peerings are not transitive; in this case that means your VPC-2 does not know about the connection between VPC-1 and the Redis VPC.
You can use VPC-1 as a transit network, either by importing and exporting routes between VPC-1 and VPC-2 or, for a more managed solution, by using Cloud VPN on your VPC-1. If you have multiple VPCs that need to connect to Redis, I would suggest the Cloud VPN option.
Here is an example of how this architecture could work.
In that example, treat network-b as your VPC-1, network-a as your Redis VPC, and network-c as your VPC-2.
If you only have a few VPCs that need to connect to the Redis VPC, you could also consider exporting and importing custom routes between VPC-1 and every peered VPC that needs access to Redis; a sketch follows below.
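A minimal sketch of that custom-route exchange, assuming the peerings already exist and using placeholder names:

    # On the transit network (VPC-1) side of the peering
    gcloud compute networks peerings update vpc1-to-vpc2 \
        --network=vpc-1 --export-custom-routes --import-custom-routes

    # On the VPC-2 side of the same peering
    gcloud compute networks peerings update vpc2-to-vpc1 \
        --network=vpc-2 --export-custom-routes --import-custom-routes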
For Redis, please note that only IPs from the RFC 1918 ranges are allowed to connect, so any IPs that need to reach Redis must fall within these ranges (see the instance example after the list):
10.0.0.0 – 10.255.255.255 (10/8 prefix)
172.16.0.0 – 172.31.255.255 (172.16/12 prefix)
192.168.0.0 – 192.168.255.255 (192.168/16 prefix)
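If you control which range Redis lands in, you can pin it to an RFC 1918 block when creating the instance; a hedged example with placeholder project, network and range:

    gcloud redis instances create my-redis --size=1 --region=us-central1 \
        --network=projects/my-project/global/networks/vpc-1 \
        --reserved-ip-range=10.10.0.0/29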

How to connect multiple clouds with overlapping VPCs?

We are creating a console to administer Kubernetes, view logs and metrics, and create resources in a multicloud environment.
The console (a web app) is deployed on GKE in GCP, but we can't figure out how to connect to and reach the K8s API servers in multiple VPCs with overlapping IPs without exposing them on public IPs.
I drew a little diagram to illustrate the problem.
Are there any products or best practices for doing this securely?
Vendors such as Mongo Atlas or Confluent Cloud seem to have solved this issue; they can create infrastructure in multiple clouds and administer it.
It's not possible to connect two overlapping networks with VPN even if they're in different clouds (GCP & AWS).
I'd suggest using NAT on both sides and connecting the networks with VPN.
Here's some documentation that may help you. Unfortunately it's quite a bit of reading and setup. It's not the easiest solution, but it has the benefit of being reliable, and it's an old and well-tested approach.
General docs
Configure NAT to Enable Communication Between Overlapping Networks
Using NAT in Overlapping Networks
GCP side
Cloud NAT overview
Using Cloud NAT
AWS side
NAT instances
Comparison of NAT instances and NAT gateways
Your second option is to split the original networks into smaller chunks so they would not overlap, but that's not always possible (the networks may already be small, with many of the IPs used up).
It depends on a couple of factors in the environments.
To access an overlapping network you need some form of gateway.
It can be some kind of proxy (SOCKS/HTTP/other) or a router/gateway (with NAT).
If, from GCP, you can access 192.168.23.0/24 or any other subnet that can reach the AWS 192.168.2.0/24 subnet, then you can use either of these solutions.
I assume that AWS and GCP can provide the tunnel between the gateway/proxy networks.
If you don't need a security layer on the tunnel itself, you can use a VXLAN tunnel and secure the TCP/other application protocol instead.
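For the proxy flavour, even a quick SSH dynamic SOCKS proxy through a host that can reach both subnets works as a test (the user and host are placeholders):

    # SOCKS proxy on localhost:1080, tunnelled through a host in 192.168.23.0/24
    ssh -D 1080 -N user@gateway-host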
Using Google Cloud VPN with an AWS Virtual Private Gateway, you can accomplish this. A detailed description by Google is given in this documentation.
It describes two VPN topologies:
A site-to-site Route-based IPsec VPN tunnel configuration.
A site-to-site IPsec VPN tunnel configuration using Google Cloud Router and dynamic routing with the BGP protocol.
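As a compressed sketch of the GCP side using HA VPN and dynamic routing (all names, the ASN and the AWS tunnel endpoint IP are placeholders; the matching customer gateway and VPN connection still have to be set up on the AWS side):

    # HA VPN gateway and Cloud Router in the GCP VPC
    gcloud compute vpn-gateways create to-aws-gw --network=my-vpc --region=us-east1
    gcloud compute routers create to-aws-router --network=my-vpc --asn=65010 \
        --region=us-east1

    # Represent the AWS endpoint (IP comes from the AWS VPN connection)
    gcloud compute external-vpn-gateways create aws-gw \
        --interfaces=0=AWS_TUNNEL_OUTSIDE_IP

    # One tunnel; repeat for the second AWS tunnel for redundancy,
    # then add the BGP interface/peer as in the earlier example
    gcloud compute vpn-tunnels create to-aws-tunnel0 \
        --vpn-gateway=to-aws-gw --interface=0 \
        --peer-external-gateway=aws-gw --peer-external-gateway-interface=0 \
        --shared-secret=SHARED_SECRET --router=to-aws-router --region=us-east1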
Additionally, when the CIDR ranges overlap, you would need to create new VPC/CIDR ranges that are non-overlapping; otherwise you could never connect to instances that have IP addresses in both AWS and GCP.

Connecting LAN Subnet to GCP VM Subnet (VM Windows File Server)

So, a little background on what I'm trying to accomplish: I'm basically trying to set up a Windows file server using a GCP Windows VM instance. I have the VM set up, and I have created a VPN connection between our office network and the GCP VM network.
Now I'm trying to get the two subnets to communicate, and I have to admit I'm kinda lost.
My office subnet is 192.168.72.0/24 and my GCP subnet is 10.123.0.0, with my server at 10.123.0.2.
If I understand networking correctly, I need to set up a route from 192.168.72.0 to 10.123.0.2? Or do I just need to create a firewall rule?
I'm using a SonicWall Firewall to establish the VPN connection to the GCP network.
I think I've been working at this too long for one day. I'm stepping away for a bit.
Thanks in advance.
If you set up a site-to-site VPN, you should not need to add a route; you will if you set up a tunnel interface. To me, it sounds like you just need a site-to-site. I don't think the tunnel will come up without the correct subnets, but verify that the tunnel is up, and then set up a packet monitor to see what route the traffic takes when you try to ping from 192.168.72.0/24 to 10.123.0.0.
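On the GCP side, also check that a firewall rule allows traffic from the office range into the VM network; a minimal example (the network name and ports are placeholders for whatever the file server actually needs):

    gcloud compute firewall-rules create allow-office-lan \
        --network=gcp-vm-network --direction=INGRESS \
        --source-ranges=192.168.72.0/24 \
        --allow=icmp,tcp:445,tcp:3389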

Share L2 network over L3 network with vxlan

I have a network problem that I can't solve. I have a server at Hetzner and a server at OVH, and I'm trying to use some OVH IPs on my Hetzner server and some Hetzner IPs on my OVH server, because I need flexibility in my network.
My VMs are on Proxmox. I created a VXLAN between the two servers and bridged the VXLAN into the vmbr0 interface of Proxmox on one side, and it works, but OVH and Hetzner informed me that I was sending packets with the wrong MAC address, so I don't know what to do.
I'm really not an expert in computer networking.
Thank you in advance to all those who can help me.
VXLAN offers a lot of flexibility; however, it might not be the best answer in this case. You may be better off using VPN tunnels between both cloud environments. That is, assuming you can have multiple VMs within the tenant on those providers, have them point their default gateways towards a VM within your control, and use that VM as a firewall/VPN concentrator. From there you can establish an S2S VPN between the cloud environments and NAT the traffic from your provider's WAN IP address to the appropriate host, whether locally or in the remote environment.
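A rough sketch of the NAT piece on such a gateway VM, assuming Linux with iptables (interface names and addresses are placeholders):

    # Let the gateway VM forward traffic between its interfaces
    sysctl -w net.ipv4.ip_forward=1

    # Masquerade traffic leaving via the S2S tunnel interface
    iptables -t nat -A POSTROUTING -o vti0 -j MASQUERADE

    # Forward a provider-assigned WAN IP to an internal VM
    iptables -t nat -A PREROUTING -d 203.0.113.10 -j DNAT --to-destination 10.0.0.10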
If you must have L2 connectivity between your cloud environments, I can speak from experience only in a Juniper environment, and in that case we would place a vMX VM behind a vSRX VM. The vMX VM would act as the EVPN/VXLAN VTEP, and your VMs would set this as their default gateway. The vSRX would establish IPSEC S2S tunnels, through which data-center interconnect (DCI) traffic would flow. L2 traffic would flow through the vMX, where it would be encapsulated in a vxlan tunnel, which would route through the SRX, which would then encapsulate this in an encrypted IPSEC tunnel, before sending to the other data center. Details of this might be a little too complex for a stack exchange answer though.
Hope this helps point you in the right direction!

Connectivity between VM's in the same VNET

I have a couple of virtual machines in one Cloud Service. They are assigned to the same VNET and have received private IP addresses in the same subnet.
I noticed that I was unable to ping from one server to another, and when I started to look into it, there was no connectivity whatsoever between the servers. I disabled the Windows firewall on both servers, but that didn't do the trick.
Just now, on one of the VMs, I tried to ping the internal IP address assigned to the VM itself, and even that fails.
Can anyone shed some light into this? Is this expected behavior?
The reason I am looking into this right now is that we are adding a third VM to do some performance monitoring, and since the other two VMs are part of a Cloud Service, we cannot open endpoints to both of them on the same port and need to go directly to the internal IPs.
Thanks in advance
I had a similar issue not too long ago. I had three servers in the same VNET that were able to communicate with my HQ via site-to-site VPN but could not communicate with one another. After several hours of banging my head against the desk, I ended up just rebuilding the VNET, and connectivity between them was restored. The VNET routing feature had become corrupt and could no longer forward traffic internally.
To rebuild the VNET, you'll need to delete the VMs. Keep the disks, though, and you can rebuild the VMs quickly after the new VNET is back online.
