Azure Site-to-Site VPN with Traffic Selector Policy

We are trying to set up a site-to-site VPN on Azure using IKEv2 and a traffic selector policy. The intended policy is from an on-premises network to a subnet of the Azure VNet that contains the local network gateway.
This seems very similar to the question that #riaan-kruger posted, Azure Site-To-Site TrafficSelectorPolicy is not working, where it appears to be possible, but I am unable to get it to work.
#haymansfield, you indicated that you were able to get this to work. Do you have any suggestions?
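For anyone attempting the same thing, here is a minimal sketch (not a confirmed working configuration) of how a traffic selector policy can be attached to an IKEv2 site-to-site connection with the azure-mgmt-network Python SDK. The subscription ID, resource names, address ranges, and shared key are hypothetical, and the TrafficSelectorPolicy model and connection fields are assumptions based on the SDK, not taken from this thread.

```python
# Sketch: create an IKEv2 site-to-site connection with a custom traffic selector.
# All names/values below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import (
    TrafficSelectorPolicy,
    VirtualNetworkGatewayConnection,
)

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

vnet_gw = client.virtual_network_gateways.get("my-rg", "my-vnet-gateway")
local_gw = client.local_network_gateways.get("my-rg", "my-local-gateway")

connection = VirtualNetworkGatewayConnection(
    location="westeurope",
    virtual_network_gateway1=vnet_gw,
    local_network_gateway2=local_gw,
    connection_type="IPsec",
    connection_protocol="IKEv2",
    shared_key="<shared-key>",
    # Restrict the negotiated traffic selectors to one VNet subnet and one
    # on-premises range instead of the default "any-to-any" selectors.
    traffic_selector_policies=[
        TrafficSelectorPolicy(
            local_address_ranges=["10.1.1.0/24"],       # Azure VNet subnet (example)
            remote_address_ranges=["192.168.10.0/24"],  # on-premises network (example)
        )
    ],
)

client.virtual_network_gateway_connections.begin_create_or_update(
    "my-rg", "onprem-to-azure", connection
).result()
```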

Related

Accessing an HTTP URL hosted on a VNet-peered VM from another VM

If I peer two Bastion VMs via VNet and run a web application on one VM, will I be able to access its REST URL from the other VM? Is there a charge involved for this type of access?
Sorry, I couldn't bring myself to understand all the jargon about ingress, egress and gateways. I just want the simple answer to my question.

Azure Network Security Group Vs Route Tables

Networking newbie here. From the documentation it feels like NSGs and route tables (UDR) do the same thing: both are capable of defining ACLs at multiple levels (VNet, subnet, VM).
https://learn.microsoft.com/en-us/azure/virtual-network/network-security-group-how-it-works
https://learn.microsoft.com/en-us/azure/virtual-network/virtual-networks-udr-overview
So how are they different and when is each used?
Thanks.
Azure automatically creates a route table for each subnet within an Azure virtual network and adds system default routes to the table. The route table is like a networking map that directs traffic from one place to another via a next hop. It defines the "path" but does not filter traffic.
The Azure network security group (NSG) is used to filter network traffic to and from Azure resources in an Azure virtual network. It contains security rules that allow or deny inbound network traffic to, or outbound network traffic from, several types of Azure resources. If there is no route from a subnet to a destination, you do not even need to configure security rules, because there is no path; in other words, an NSG only comes into play once a working network route exists.
For example, we can usually access an Azure VM in a virtual network via SSH or RDP over the Internet, but exposing port 22 or 3389 publicly is not very secure. We can restrict access to the VM by specifying the source IP address in the NSG: this allows only traffic from a specific IP address or range of IP addresses to connect to the VM. Read more details here. In this scenario, we need to ensure that there is a route to the Internet from the Azure virtual network and vice versa.
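To make that last point concrete, here is a minimal sketch using the azure-mgmt-network Python SDK that allows SSH only from a single office IP; the subscription ID, resource group, NSG name, and office address are hypothetical.

```python
# Sketch: add an inbound NSG rule that permits SSH only from one office IP.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import SecurityRule

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

ssh_rule = SecurityRule(
    protocol="Tcp",
    source_address_prefix="203.0.113.10/32",  # office IP (example)
    source_port_range="*",
    destination_address_prefix="*",
    destination_port_range="22",
    access="Allow",
    direction="Inbound",
    priority=100,
)

client.security_rules.begin_create_or_update(
    "my-resource-group", "my-nsg", "allow-ssh-from-office", ssh_rule
).result()
```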

How to connect multiple clouds with overlapping VPCs?

We are creating a Console to administer Kubernetes, view logs and metrics, and create resources in a multicloud environment.
The Console (a web app) is deployed on GKE in GCP, but we can't figure out how to connect to and reach the K8s API servers in multiple VPCs with overlapping IPs without exposing them on public IPs.
I drew a little diagram to illustrate the problem.
Are there products or best practices for doing this securely?
Vendors such as Mongo Atlas or Confluent Cloud seem to have solved this issue; they can create infrastructure in multiple clouds and administer it.
It's not possible to connect two overlapping networks with VPN even if they're in different clouds (GCP & AWS).
I'd suggest using NAT translation on both sides and connecting the networks with VPN.
Here's some documentation that may help you; unfortunately it is quite a bit of reading and setting up (a small sketch of the idea follows after the links below). It is not the easiest solution, but it has the benefit of being reliable, and it is an old, well-tested approach.
General docs
Configure NAT to Enable Communication Between Overlapping Networks
Using NAT in Overlapping Networks
GCP side
Cloud NAT overview
Using Cloud NAT
AWS side
NAT instances
Comparison of NAT instances and NAT gateways
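To illustrate the NAT idea itself (independent of the cloud-specific products linked above), here is a minimal sketch using only the Python standard library; the overlapping range and the two translated ranges are hypothetical.

```python
# Sketch: both clouds use the same 192.168.2.0/24 range, so each side is
# presented to the other through a distinct, non-overlapping NAT range.
import ipaddress

GCP_REAL = ipaddress.ip_network("192.168.2.0/24")  # overlapping range in GCP
AWS_REAL = ipaddress.ip_network("192.168.2.0/24")  # same range in AWS
GCP_NAT = ipaddress.ip_network("10.100.1.0/24")    # how AWS sees GCP hosts
AWS_NAT = ipaddress.ip_network("10.100.2.0/24")    # how GCP sees AWS hosts

def translate(addr: str, real: ipaddress.IPv4Network, nat: ipaddress.IPv4Network) -> str:
    """1:1 map a host from its real subnet to its NAT-presented subnet."""
    offset = int(ipaddress.ip_address(addr)) - int(real.network_address)
    return str(ipaddress.ip_address(int(nat.network_address) + offset))

# A GCP host 192.168.2.15 is reached from AWS as 10.100.1.15, while an AWS host
# 192.168.2.15 is reached from GCP as 10.100.2.15 -- no ambiguity over the VPN.
print(translate("192.168.2.15", GCP_REAL, GCP_NAT))  # 10.100.1.15
print(translate("192.168.2.15", AWS_REAL, AWS_NAT))  # 10.100.2.15
```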
Your second option is to split the original networks into smaller chunks so they would not overlap, but that's not always possible (the networks may already be as small as they can be and many IPs already used up...).
It depends on a couple of factors in the environments.
To access an overlapping network you need some form of gateway.
It can be some kind of proxy (SOCKS/HTTP/other) or a router/gateway (with NAT).
If, from GCP, you can reach 192.168.23.0/24 or any other subnet that can connect to the AWS 192.168.2.0/24 subnet, then you can use either one of these solutions.
I assume that AWS and GCP can provide the tunnel between the gateway/proxy networks.
If you don't need a security layer for the tunnel itself, you can use a VXLAN tunnel and secure the TCP/other application protocol instead.
Using Google Cloud VPN with an AWS Virtual Private Gateway, you can accomplish this. A detailed description by Google is given in this documentation.
It describes two VPN topologies:
A site-to-site Route-based IPsec VPN tunnel configuration.
A site-to-site IPsec VPN tunnel configuration using Google Cloud Router and dynamic routing with the BGP protocol.
Additionally, when CIDR ranges overlap, you would need to create new VPCs/CIDR ranges that are non-overlapping. Otherwise, you could never connect to instances that have the same IP addresses in both AWS and GCP.

Setting up VPN between GCP Projects to access SQL Engine subnetwork

Please bear with me, as my background is development, not sysadmin. Networking is something I'm learning as I go, which is why I'm writing here :)
A couple of months ago I started the process of designing the network structure of our cloud. After a couple of exchanges here, I settled on having one project that will host a VPN tunnel to the on-premises resources and some other projects that will host our products once they are moved off the on-premises servers.
All is good and I managed to set things up.
Now, one of the projects is dedicated to "storage": for us that means databases, buckets for static data that needs to be accessed from anywhere, etc.
I created a first MySQL database (2nd gen) to start testing and noticed that the only option available for accessing the SQL databases from internal IPs was the "parent project" subnetwork.
I realised that SQL Engine creates a subnetwork dedicated to just that. It's written in the documentation as well, silly me.
No problem: I tore it down, enabled Private Service Connection, created an allocated IP range in the VPC management and set it to export routes.
Then I went back to SQL Engine and created a new database. As expected, the new one had an IP assigned from the allocated IP range set up previously.
Now, I expected every peered network to be able to see the SQL subnetwork as well, but apparently not. Again, RTFM, you silly goose. It was written there as well.
I activated a bronze support subscription with GCP to get some guidance, but what I got was a repeated "create a VPN tunnel between the two projects", which left me a little disappointed, as the concept of peered VPCs is so good.
But anyway, let's do that then.
I created a tunnel pointing to a gateway in the project that will have the K8s clusters, and vice-versa.
The dashboard tells me that the tunnels are established, but apparently there is a problem with the BGP settings, because they have been hanging on "Waiting for peer" on both sides since forever.
At this point I'm looking for anything related to BGP, but all I can find is how it works in theory, what it is used for, which ASN ranges are reserved, etc.
I really need someone to point out the obvious and tell me what I fucked up here, so:
This is the VPN tunnel on the project that hosts the databases:
And this is the VPN tunnel on the project where the products will be deployed, that need to access the databases.
Any help is greatly appreciated!
Regarding the BGP status "Waiting for peer" in your VPN tunnels, I believe this is due to the configured Cloud Router BGP IP and BGP peer IP. When configuring, the Cloud Router BGP IP address of tunnel 1 must be the BGP peer IP address of tunnel 2, and the BGP peer IP address of tunnel 1 must be the Cloud Router BGP IP address of tunnel 2.
Referring to your scenario, the IP addresses for stage-tunnel-to-cerberus should be:
Router BGP IP address: 169.254.1.2
BGP Peer IP address: 169.254.1.1
This should put your VPN tunnels' BGP session status into "BGP established".
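As a quick way to spot this kind of misconfiguration, here is a minimal sketch that checks the cross-pairing described above; the second tunnel name is hypothetical, and the link-local addresses are the ones quoted in this thread.

```python
# Sketch: verify that the two tunnels' BGP IPs are cross-paired correctly.
import ipaddress

tunnels = {
    "stage-tunnel-to-cerberus": {"router_bgp_ip": "169.254.1.2", "peer_bgp_ip": "169.254.1.1"},
    "cerberus-tunnel-to-stage": {"router_bgp_ip": "169.254.1.1", "peer_bgp_ip": "169.254.1.2"},
}

a, b = tunnels.values()
# Each side's Cloud Router BGP IP must be the other side's BGP peer IP.
assert a["router_bgp_ip"] == b["peer_bgp_ip"] and b["router_bgp_ip"] == a["peer_bgp_ip"]
# Both addresses must come from the 169.254.0.0/16 link-local range GCP expects.
for t in tunnels.values():
    assert ipaddress.ip_address(t["router_bgp_ip"]) in ipaddress.ip_network("169.254.0.0/16")
print("BGP IPs are cross-paired correctly")
```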
You can't achieve what you want with VPN or with VPC peering. In fact, there is a VPC peering rule that prevents transitivity, described in the restrictions section:
Only directly peered networks can communicate. Transitive peering is not supported. In other words, if VPC network N1 is peered with N2 and N3, but N2 and N3 are not directly connected, VPC network N2 cannot communicate with VPC network N3 over VPC Network Peering.
Now, take what you want to achieve. When you use a Cloud SQL private IP, you create a peering between your VPC and the VPC of the Cloud SQL. And you have another peering (or VPN tunnel) for the SQL engine.
SQL Engine -> Peering -> Project -> Peering -> Cloud SQL
Like this you can't.
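A tiny sketch of the non-transitivity rule quoted above, with hypothetical network names; peering only allows traffic between networks that share a direct edge and is never forwarded across an intermediate VPC.

```python
# Sketch: reachability under VPC peering, which is strictly non-transitive.
peerings = {("sql-engine-vpc", "project-vpc"), ("project-vpc", "cloudsql-vpc")}

def can_communicate(a: str, b: str) -> bool:
    """Only directly peered networks can communicate."""
    return (a, b) in peerings or (b, a) in peerings

print(can_communicate("sql-engine-vpc", "project-vpc"))   # True  (direct)
print(can_communicate("project-vpc", "cloudsql-vpc"))     # True  (direct)
print(can_communicate("sql-engine-vpc", "cloudsql-vpc"))  # False (transitive)
```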
But you can use a Shared VPC. Create a Shared VPC, add your two projects to it, and create a common subnet for SQL Engine and the Cloud SQL peering. That should work.
But be careful: not all VPC features are available with Shared VPC. For example, the serverless VPC connector is not yet compatible with it.
Hope this helps!
The original setup in the OP question should work, i.e.
Network 1 <--- (VPN) ---> Network 2 <--- (Peered) ---> CloudSQL network
(the network and the peering are created by GCP)
Then a resource in Network 1 is able to access a MySQL instance created in the Cloud SQL network.
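As a quick way to verify that setup from a VM in Network 1, here is a minimal TCP reachability check; the Cloud SQL private IP below is hypothetical.

```python
# Sketch: check that the Cloud SQL private IP answers on the MySQL port
# through the VPN + peering path. Replace the IP with your instance's address.
import socket

CLOUDSQL_PRIVATE_IP = "10.20.0.5"  # hypothetical private IP of the instance
MYSQL_PORT = 3306

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.settimeout(5)
    result = s.connect_ex((CLOUDSQL_PRIVATE_IP, MYSQL_PORT))

print("reachable" if result == 0 else f"unreachable (errno {result})")
```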

How to create firewall for kubernetes cluster in google container engine

This may be an extremely simple question, but I can't seem to figure out how to make my Kubernetes cluster accessible ONLY from my office IP.
In my firewall rules I see that the rules for the GKE nodes consist of two internal IPs and my office IP.
I also see a firewall rule for an external ip range that I don't see in my external IP addresses. That IP address also doesn't appear in my load balancer IPs...
Finally, I have a load-balancing firewall rule that allows the external IP ranges from the load balancing tab, which correspond to my Kubernetes ingress rules.
Long story short, how do I make my Kubernetes cluster accessible only from my office IP?
This isn't currently possible in Google Container Engine.
You don't see any firewall rules for your cluster control plane because it isn't running inside your cloud project. Therefore the endpoint for your cluster won't show up in your networking views and you cannot add firewall rules to restrict access to it.
This is a shortcoming that the team is aware of and we hope to be able to provide a solution for you in the future.
