Connectivity between VMs in the same VNET - networking

I have a couple of virtual machines in one Cloud Service. They are assigned to the same VNET and have received private IP addresses in the same subnet.
I noticed that I was unable to ping from one server to another, and when I started to look into it I found there is no connectivity whatsoever between the servers. I have disabled Windows Firewall on both servers, but that didn't do the trick.
Just now I tried, on one of the VMs, to ping the internal IP address assigned to that same VM, and even that fails.
Can anyone shed some light on this? Is this expected behavior?
The reason I am looking into this right now is that we are adding a third VM to do some performance monitoring, and since the other two VMs are part of the same Cloud Service we cannot open endpoints to both of them on the same port, so we need to go directly to the internal IPs.
Thanks in advance
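(Aside, not part of the original question: since ICMP can be blocked while TCP still flows, a plain TCP probe against a port you know is listening on the other VM is a quick way to tell an ICMP/firewall problem apart from a genuine routing failure. A minimal sketch; the address 10.0.0.5 and port 3389 below are placeholders, not values from this thread.)

```python
import socket

# Placeholders - replace with the other VM's internal IP and a port that you
# know has a listener (e.g. 3389 for RDP on a Windows VM).
TARGET_IP = "10.0.0.5"
TARGET_PORT = 3389

try:
    # A plain TCP connection attempt; this does not depend on ICMP being allowed.
    with socket.create_connection((TARGET_IP, TARGET_PORT), timeout=5):
        print(f"TCP connect to {TARGET_IP}:{TARGET_PORT} succeeded - routing works, "
              "so the failed pings are probably just ICMP being dropped.")
except OSError as exc:
    print(f"TCP connect to {TARGET_IP}:{TARGET_PORT} failed ({exc}) - "
          "it looks like there is genuinely no connectivity on this path.")
```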

I had a similar issue not too long ago. I had three servers in the same VNET that were able to communicate with my HQ over a site-to-site VPN but could not communicate with one another. After several hours of banging my head against the desk, I ended up just rebuilding the VNET, and connectivity between the servers was restored. The VNET's routing had become corrupted and could no longer send traffic internally.
To rebuild the VNET, you'll need to delete the VMs. Keep the disks, though, and you can rebuild the VMs quickly once the new VNET is back online.

Related

AWS EC2 - Unable to reach an HTTP server from a different machine on the same network

Followed this tutorial to set up two EC2 instances: 12. Creation of two EC2 instances and how to establish ping communication - YouTube
The only difference is that I used a Linux image.
I set up a simple Python HTTP server on one machine (on port 8000), but I cannot access it from my other machine; whenever I curl, the request just hangs. (It might eventually time out, but I wasn't patient enough to witness that.)
However, the workaround I figured out was to add a port rule via the security group. I don't like this option, since it means that the port (on the machine that hosts the web server) can be accessed from the internet.
I was looking for an experience similar to what people usually have at home with their routers: machines connected to the same home router can reach other machines on any port (provided the destination machine has some service listening on that port).
What is the solution to achieve something like this when working with ec2?
The instance is open to the internet because you are allowing access from '0.0.0.0/0' (anywhere) in the inbound rule of the security group.
If you want communication to be allowed only between the instances and not from the public internet, you can achieve that by assigning the same security group to both instances and modifying the inbound rule in the security group to allow all traffic (or just ICMP traffic) sourced from the security group itself.
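As a rough sketch of what that self-referencing rule looks like when applied programmatically (this uses boto3; the security group ID and region below are placeholders, and the group is assumed to be attached to both instances):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is a placeholder

SG_ID = "sg-0123456789abcdef0"  # placeholder: the security group attached to BOTH instances

# Allow all traffic whose source is the security group itself. Instances that
# carry this group can then reach each other on any port (8000 included)
# without anything being opened to the public internet.
ec2.authorize_security_group_ingress(
    GroupId=SG_ID,
    IpPermissions=[
        {
            "IpProtocol": "-1",  # -1 = all protocols and ports; narrow this if you prefer
            "UserIdGroupPairs": [{"GroupId": SG_ID}],
        }
    ],
)
```

The same rule can be created by hand in the console by choosing the security group itself as the source of the inbound rule.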
You can read more about it here:
AWS Reference

Connecting LAN Subnet to GCP VM Subnet (VM Windows File Server)

So, a little background on what I'm trying to accomplish. I'm basically trying to set up a Windows file server using a GCP Windows VM instance. I have the VM set up, and I have created a VPN connection between our office network and the GCP VM network.
Now I'm trying to communicate between the two different subnets and I have to admit I'm kinda lost.
My office subnet is 192.168.72.0/24 and my GCP range is 10.123.0.0, with my server at 10.123.0.2.
If I understand networking correctly, I need to set up a route from 192.168.72.0 to 10.123.0.2? Or do I just need to create a firewall rule?
I'm using a SonicWall Firewall to establish the VPN connection to the GCP network.
I think I've been working at this too long for one day. I'm stepping away for a bit.
Thanks in advance.
If you set up a site-to-site VPN, you should not need to add a route; you will if you set up a tunnel interface. But to me, it sounds like you just need a site-to-site. I don't think the tunnel will come up without the correct subnets, but verify that the tunnel is up, and then I would set up a packet monitor to see what route the traffic takes when you try to ping from 192.168.72.0/24 to 10.123.0.0.
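(Not part of the answer above, but a quick sanity check of the addressing can rule out the most common mistakes: overlapping ranges, or a route/VPN policy that only covers a single host instead of the whole subnet. A small sketch using Python's standard ipaddress module; the /24 mask on the GCP range is an assumption, since the question doesn't give one.)

```python
import ipaddress

office = ipaddress.ip_network("192.168.72.0/24")
gcp = ipaddress.ip_network("10.123.0.0/24")   # assumed mask - adjust to the real one
server = ipaddress.ip_address("10.123.0.2")

# The two sides of a site-to-site VPN must not overlap.
print("Ranges overlap:", office.overlaps(gcp))            # expect: False

# The file server must sit inside the GCP range the tunnel advertises.
print("Server is inside the GCP range:", server in gcp)   # expect: True

# The SonicWall's remote network (and any route) should cover the whole GCP
# subnet, not just the single server address.
print("Remote network to configure on the SonicWall:", gcp)
```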

Setting up VPN between GCP Projects to access SQL Engine subnetwork

Please bear with me as my background is development and not sysadmin. Networking is something I'm learning as I go and thus why I'm writing here :)
A couple of months ago I started the process of designing the network structure of our cloud. After a couple of exchanges here, I settled on having one project that will host a VPN tunnel to the on-premises resources and some other projects that will host our products once they are moved off the on-premises servers.
All is good and I managed to set things up.
Now, one of the projects is dedicated to "storage": that means, for us, databases, buckets for static data to be accessed from elsewhere, etc.
I created a first MySQL database (2nd gen) to start testing and noticed that the only option available for accessing the SQL databases from internal IPs was the "parent project" subnetwork.
I realised that SQL Engine creates a subnetwork dedicated to just that. It's written in the documentation as well, silly me.
No problem: I tore it down, enabled Private Service Connection, created an allocated IP range in the VPC management, and set it to export routes.
Then I went back to the SQL Engine and created a new database. As expected, the new one had an IP assigned from the allocated IP range set up previously.
Now, I expected every peered network to be able to see the SQL subnetwork as well, but apparently not. Again, RTFM, you silly goose. It was written there as well.
I activated a bronze support subscription with GCP to get some guidance, but what I got was a repeated "create a VPN tunnel between the two projects", which left me a little disappointed, as the concept of peered VPCs is so good.
But anyway, let's do that then.
I created a tunnel pointing to a gateway on the project that will have K8s clusters and vice-versa.
The dashboard tells me that the tunnels are established, but apparently there is a problem with the BGP settings, because they have been hanging on "Waiting for peer" on both sides since forever.
At this point I'm looking for anything related to BGP, but all I can find is how it works in theory, what it is used for, which ASN ranges are reserved, etc.
I really need someone to point out the obvious and tell me what I fucked up here, so:
This is the VPN tunnel on the project that hosts the databases:
And this is the VPN tunnel on the project where the products that need to access the databases will be deployed.
Any help is greatly appreciated!
Regarding the BGP status "Waiting for peer" in your VPN tunnel, I believe this is due to the configured Cloud Router BGP IP and BGP peer IP. When configuring, the Cloud Router BGP IP address of tunnel1 is going to be the BGP Peer IP address for tunnel2, and the BGP Peer IP address for tunnel1 is going to be the Router BGP IP address of tunnel2.
Referring to your scenario, the IP address for stage-tunnel-to-cerberus should be:
Router BGP IP address: 169.254.1.2
and,
BGP Peer IP address: 169.254.1.1
This should put your VPN tunnel's BGP session status in "BGP established".
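(To make that mirrored relationship concrete, here is a tiny illustrative check in plain Python; the tunnel roles and the 169.254.1.x addresses come from the answer above, while the dict layout is just for illustration.)

```python
# Each dict mimics the BGP settings of one Cloud VPN tunnel.
tunnel_to_cerberus = {          # e.g. stage-tunnel-to-cerberus
    "router_bgp_ip": "169.254.1.2",
    "bgp_peer_ip": "169.254.1.1",
}
tunnel_from_cerberus = {        # the tunnel on the other project, pointing back
    "router_bgp_ip": "169.254.1.1",
    "bgp_peer_ip": "169.254.1.2",
}

# For the session to establish, the addresses must mirror each other:
# my Cloud Router BGP IP is your BGP peer IP, and vice versa.
assert tunnel_to_cerberus["router_bgp_ip"] == tunnel_from_cerberus["bgp_peer_ip"]
assert tunnel_from_cerberus["router_bgp_ip"] == tunnel_to_cerberus["bgp_peer_ip"]
print("BGP IPs are mirrored correctly - the sessions should reach 'BGP established'.")
```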
You can't achieve what you want by VPN or by VPC peering. In fact, there is a rule for VPC peering that prevents transitivity, described in the restrictions section:
Only directly peered networks can communicate. Transitive peering is not supported. In other words, if VPC network N1 is peered with N2 and N3, but N2 and N3 are not directly connected, VPC network N2 cannot communicate with VPC network N3 over VPC Network Peering.
Now, take what you want to achieve. When you use a Cloud SQL private IP, you create a peering between your VPC and the VPC of the Cloud SQL. And you have another peering (or VPN tunnel) for the SQL engine.
SQL Engine -> Peering -> Project -> Peering -> Cloud SQL
Set up like this, it can't work.
But you can use a Shared VPC. Create a Shared VPC, add your two projects to it, and create a common subnet for the SQL Engine and the Cloud SQL peering. That should work.
But be careful: not all VPC features are available with Shared VPC. For example, the serverless VPC connector isn't yet compatible with it.
Hope this helps!
The original setup in the OP question should work, i.e.
Network 1 <--- (VPN) ---> Network 2 <--- (Peered) ---> CloudSQL network
(the network and the peering are created by GCP)
Then a resource in Network 1 is able to access a MySQL instance created in the CloudSQL network.

Physical Server Network Bridge caused serious issues across network (DNS, AD, Internet Connectivity)

OK, so yesterday I noticed that one of my spare servers within a small enterprise network had 4 network cards, which in turn had 4 IP addresses. We had planned to use this server for development purposes, and we decided that we didn't want it to have 4 IP addresses, so we bridged the 4 network connections. Shortly afterwards we started noticing servers becoming unresponsive, most noticeably one of the virtual hosts. Then we realised that the affected servers were no longer authenticated on the network; it seemed like the domain controller was having DNS issues in the form of NETLOGON error 5783. The Active Directory domain controller was unresponsive, and therefore we could not add any new clients to the network.
One of the most puzzling issues was the constant packet loss across the network. Internet connectivity was completely erratic, up-down-up-down. The domain controller and all of the affected servers would constantly lose connection during remote sessions, making it almost impossible to troubleshoot the issue without physically plugging into the server itself. It was as if the switches were experiencing a broadcast storm that was crippling the network, but this wasn't reflected in the light displays on the switches themselves.
All of these issues were resolved when I deleted the network bridge. Could anyone offer any sort of logical explanation? I cannot link the issues myself.
Thank you all in advance.
You have detonated a network bomb. Bridging interfaces creates a situation where the interfaces are forwarding packets, which means that all of the various broadcast protocols present on your network are being sent out each interface, picked up by each interface, and forwarded again. This results in exponential growth that will eventually, depending on the speed of the interfaces and the ability of your network to handle it, bring the entire network to its knees.
What you likely wanted to do was to bond those interfaces so that they appear to be one interface.

Connecting multiple local networks with the same IP range with Azure VNet

We are building a SaaS application where multiple tenants will be using a single deployed application. Some tenants want to access this application over VPN only (for security reasons). To achieve this we need to set up site-to-site connectivity with each tenant's network. But we are facing the following problem.
Two tenants may be using the same IP range. How can we connect an Azure VNet with these different local networks that have the same IP range? [I am not sure, but I guess connecting these local networks to two different VNets first and then connecting those two VNets to the main VNet would work, but this will complicate the system.]
Thanks In Advance
You can't solve your problem easily. This is a basic issue with IP address range allocation and isn't restricted to Azure. You could roll out some form of on-premises solution that would meet your requirements by laying down a range you control that your customer then needs to connect their internal network to. Even doing that, you'd probably still have problems where your Azure VNet private IP address range overlaps a customer's internal range, which is highly likely to happen.
You need to work with each customer to make sure that all participating networks in the VPN have different address spaces.
This can easily be worked out at the on-premises level; it should be easy to have different address spaces even if everybody is using, say, the 192.168.x.x range.
NATing on the on-premises networks should be straightforward.
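(As an illustration of the 1:1 NAT idea: all the addresses below are made up, but they show how two tenants that both use the same internal range end up looking distinct from the Azure VNet's point of view.)

```python
import ipaddress

# Hypothetical: both tenants use 192.168.1.0/24 internally.
tenant_real = ipaddress.ip_network("192.168.1.0/24")

# Distinct, non-overlapping ranges presented to the Azure VNet after 1:1 NAT
# on each tenant's VPN device (ranges are made up for illustration).
tenant_nat = {
    "tenant-a": ipaddress.ip_network("10.50.1.0/24"),
    "tenant-b": ipaddress.ip_network("10.50.2.0/24"),
}

def translate(tenant: str, real_ip: str) -> ipaddress.IPv4Address:
    """Map a tenant's real address to its NATed equivalent (same host part)."""
    offset = int(ipaddress.ip_address(real_ip)) - int(tenant_real.network_address)
    return tenant_nat[tenant].network_address + offset

# The same internal address on each side ends up unique as seen from Azure.
print(translate("tenant-a", "192.168.1.10"))  # 10.50.1.10
print(translate("tenant-b", "192.168.1.10"))  # 10.50.2.10
```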
