VPN Environment on Non-VLAN Networking in OpenStack - vpn

I have read the VPN ability of OpenStack here:
Cloudpipe – Per Project Vpns
One simple question: Is it possible to implement a VPN environment on a non-"VLAN Networking mode" (i.e. "Flat DHCP mode")?
So when I connect through the OpenVPN client, I'll be 'placed' on my project/tenant network subnet and get a fixed/private IP, e.g. 10.5.5.x/24.
I'm using OpenStack Grizzly with Quantum (Flat DHCP mode).

I haven't used this myself, but being familiar with OpenStack networking, I can assure you that as long as your CloudPipe instance has a floating IP associated (be it VLAN or flat mode), you can do this. I hope you had figured this out yourself, as my answer comes rather late. Stack Overflow only seems to be filling up with OpenStack people recently.

Related

Connecting LAN Subnet to GCP VM Subnet (VM Windows File Server)

So, a little background on what I'm trying to accomplish. I'm basically trying to set up a Windows File Server using a GCP Windows VM instance. I have the VM set up, and I have created a VPN connection between our office network and the GCP VM network.
Now I'm trying to communicate between the two different subnets and I have to admit I'm kinda lost.
My office subnet is 192.168.72.0/24 and my GCP subnet is 10.123.0.0, with my server at 10.123.0.2.
If I understand networking correctly, do I need to set up a route between 192.168.72.0 and 10.123.0.2? Or do I just need to create a firewall rule?
I'm using a SonicWall Firewall to establish the VPN connection to the GCP network.
I think I've been working at this too long for one day. I'm stepping away for a bit.
Thanks in advance.
If you set up a site-to-site VPN, you should not need to add a route; you will if you set up a tunnel interface. But to me, it sounds like you just need a site-to-site. I don't think the tunnel will come up without the correct subnets, but verify that the tunnel is up, and then I would set up a packet monitor to see what route the traffic takes when you try to ping from 192.168.72.0/24 to 10.123.0.0.
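Before touching routes or firewall rules, it can help to sanity-check which side of the tunnel a destination actually belongs to. A minimal sketch with Python's stdlib `ipaddress` module, assuming a /16 mask on the GCP side (the question only gives 10.123.0.0, so the mask is a guess):

```python
import ipaddress

def in_remote_subnet(dest_ip: str, remote_cidr: str) -> bool:
    """Return True if dest_ip falls inside the remote VPN subnet,
    i.e. if the site-to-site policy should carry it over the tunnel."""
    return ipaddress.ip_address(dest_ip) in ipaddress.ip_network(remote_cidr)

office = "192.168.72.0/24"
gcp = "10.123.0.0/16"  # assumed mask, not stated in the question

print(in_remote_subnet("10.123.0.2", gcp))     # True: server behind the tunnel
print(in_remote_subnet("192.168.72.10", gcp))  # False: stays on the office LAN
```

If the destination is inside the remote CIDR that the site-to-site policy advertises, the firewall should forward it over the tunnel without any manually added route.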

Setting up VPN between GCP Projects to access SQL Engine subnetwork

Please bear with me as my background is development and not sysadmin. Networking is something I'm learning as I go and thus why I'm writing here :)
A couple of months ago I started the process of designing the network structure of our cloud. After a couple of exchanges here, I settled on having one project that will host a VPN tunnel to the on-premises resources and some other projects that will host our products once they are moved off the on-premises servers.
All is good and I managed to set things up.
Now, one of the projects is dedicated to "storage": for us that means databases, buckets for static data to be accessed from everywhere, etc.
I created a first MySQL database (2nd gen) to start testing and noticed that the only option available to access the SQL databases from internal IPs was via the "parent project" subnetwork.
I realised that SQL Engine creates a subnetwork dedicated to just that. It's written in the documentation as well, silly me.
No problem: I tore it down, enabled Private Service Connection, created an allocated IP range in the VPC management and set it to export routes.
Then I went back to the SQL Engine and created a new database. As expected, the new one had its IP assigned from the allocated IP range set up previously.
Now, I expected every peered network to be able to see the SQL subnetwork as well, but apparently not. Again, RTFM, you silly goose. It was written there as well.
I activated a bronze support subscription with GCP to have some guidance but what I got was a repeated "create a vpn tunnel between the two projects" which left me a little disappointed as the concept of Peered VPC is so good.
But anyway, let's do that then.
I created a tunnel pointing to a gateway on the project that will have K8s clusters and vice-versa.
The dashboard tells me that the tunnels are established, but apparently there is a problem with the BGP settings, because they have been hanging on "Waiting for peer" on both sides since forever.
At this point I'm looking for anything related to BGP, but all I can find is how it works in theory, what it is used for, which ASNs are reserved, etc.
I really need someone to point out the obvious and tell me what I messed up here, so:
This is the VPN tunnel on the projects that hosts the databases:
And this is the VPN tunnel on the project where the products will be deployed, that need to access the databases.
Any help is greatly appreciated!
Regarding the BGP status "Waiting for peer" in your VPN tunnel, I believe this is due to the configured Cloud Router BGP IP and BGP peer IP. When configuring, the Cloud Router BGP IP address of tunnel1 is going to be the BGP Peer IP address for tunnel2, and the BGP Peer IP address for tunnel1 is going to be the Router BGP IP address of tunnel2.
Referring to your scenario, the IP address for stage-tunnel-to-cerberus should be:
Router BGP IP address: 169.254.1.2
and,
BGP Peer IP address: 169.254.1.1
This should put your VPN tunnels' BGP session status in "BGP established".
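The mirroring rule above can be expressed as a small check. A sketch, assuming hypothetical field names (these are illustrative, not real gcloud output keys):

```python
# Each end of a Cloud VPN BGP session must mirror the other:
# tunnel A's Cloud Router BGP IP is tunnel B's BGP peer IP, and vice versa.
def bgp_mirrored(tunnel_a: dict, tunnel_b: dict) -> bool:
    return (tunnel_a["router_bgp_ip"] == tunnel_b["bgp_peer_ip"]
            and tunnel_a["bgp_peer_ip"] == tunnel_b["router_bgp_ip"])

tunnel_to_cerberus = {"router_bgp_ip": "169.254.1.2", "bgp_peer_ip": "169.254.1.1"}
tunnel_to_stage    = {"router_bgp_ip": "169.254.1.1", "bgp_peer_ip": "169.254.1.2"}

print(bgp_mirrored(tunnel_to_cerberus, tunnel_to_stage))  # True
```

If both sides list the same router IP (or the same peer IP), the sessions will sit in "Waiting for peer" indefinitely, which matches the symptom in the question.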
You can't achieve what you want with a VPN or with VPC Peering. In fact, there is a rule in VPC that prevents peering transitivity, described in the restrictions section:
Only directly peered networks can communicate. Transitive peering is not supported. In other words, if VPC network N1 is peered with N2 and N3, but N2 and N3 are not directly connected, VPC network N2 cannot communicate with VPC network N3 over VPC Network Peering.
Now, look at what you want to achieve. When you use a Cloud SQL private IP, you create a peering between your VPC and the Cloud SQL VPC. And you have another peering (or VPN tunnel) for the SQL Engine.
SQL Engine -> Peering -> Project -> Peering -> Cloud SQL
Laid out like this, it can't work.
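The non-transitivity rule quoted above can be modeled in a few lines. A toy sketch with made-up network names:

```python
# VPC peering reachability: peerings are bidirectional but NOT transitive,
# so only directly peered networks can communicate.
peerings = {("project-vpc", "cloudsql-vpc"), ("project-vpc", "k8s-vpc")}

def can_talk(a: str, b: str) -> bool:
    # Direct edge only -- deliberately no path search, mirroring the restriction.
    return (a, b) in peerings or (b, a) in peerings

print(can_talk("k8s-vpc", "project-vpc"))   # True: direct peering
print(can_talk("k8s-vpc", "cloudsql-vpc"))  # False: would require transitivity
```

The second call is exactly the SQL Engine -> Project -> Cloud SQL chain from the diagram: both hops exist, but there is no direct edge, so traffic doesn't flow.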
But you can use a Shared VPC. Create a Shared VPC, add your two projects to it, and create a common subnet for the SQL Engine and the Cloud SQL peering. That should work.
But be careful: not all VPC features are available with Shared VPC. For example, serverless VPC connectors aren't compatible with it yet.
Hope this helps!
The original setup in the OP's question should work, i.e.
Network 1 <--- (VPN) ---> Network 2 <--- (Peered) ---> CloudSQL network
(the network and the peering is created by GCP)
Then a resource in Network 1 is able to access a MySQL instance created in the Cloud SQL network.

Connectivity between VMs in the same VNET

I have a couple of virtual machines in one Cloud Service. They are assigned to the same VNET and have received private IP addresses in the same subnet.
I noticed that I was unable to ping from one server to another, and when I started to look into it, there was no connectivity whatsoever between the servers. I have disabled Windows Firewall on both servers, but that didn't do the trick.
Just now, on one of the VMs, I tried to ping the internal IP address assigned to that same VM, but even that fails.
Can anyone shed some light into this? Is this expected behavior?
The reason I am looking into this right now is that we are adding a third VM to do some performance monitoring, and since the other two VMs are part of a Cloud Service, we cannot open endpoints to both of them using the same port and need to go directly to the internal IPs.
Thanks in advance
I had a similar issue not too long ago. I had three servers in the same VNET that were able to communicate with my HQ via site-to-site VPN but could not communicate with one another. After several hours of banging my head against the desk, I ended up just re-building the VNET, and connectivity between the servers was restored. The VNET router feature had become corrupt and could no longer send traffic internally.
To rebuild the VNET, you'll need to delete the VMs. Keep the disks, though, and you can re-build the VMs quickly after the new VNET is back online.

Tunneling a network connection into a VMWare guest without network

I'm trying to establish a TCP connection between a client machine and a guest VM running inside an ESXi server. The trick is that the guest VM has no network configured (intentionally). However the ESX server is on the network, so in theory it might be possible to bridge the gap with software.
Concretely, I'd like to eventually create a direct TCP connection from Python code running on the client machine (I want to create an RPyC connection). However, anything that results in ssh-like port tunneling would be enough of a breakthrough.
I'm theorizing that some combination of VMware Tools, pysphere and obscure network adapters could make this possible. But so far, my searches don't yield any results, and my only ideas are either ugly (something like tunneling over file operations) and/or very error-prone (basically, if I have to build a TCP stack, I know I'll be writing lots of bugs).
It's for a testing environment setup, not production; but I prefer stability over speed. I currently don't see much need for high throughput.
To summarize the setup:
Client machine (Windows/Linux, whatever works) with vmware tools installed
ESXi server (network accessible from client machine)
VMWare guest which has no NICs at all, but is accessible using VMware Tools (must be Windows in my case, but a Linux solution is welcome for the sake of completeness)
Any ideas and further reading suggestions would be awesome.
Thank you Internet, you are the best!
The meaning of 'no NICs at all on the guest' is not clear. If I can assume it means that no physical NIC is assigned to the guest, the solution is easy: a VMware soft NIC can be provisioned for the guest VM, and that will serve as the entry point to the guest's network stack.
But if a soft NIC is also unavailable, I really wonder what could serve as the entry point to the guest's network stack, be it Linux or Windows. To my understanding, if that's what you meant, you might need to make guest OS modifications to open a different door into the guest network stack and to post/drain packets from it. But by the time you have a proper implementation of this backdoor, it will have become just another implementation of the soft NIC that VMware supports by default. So why not use that?
It's a bit late, but a virtual serial port may be your friend. You can expose the outer end of the serial port via the network or locally, depending on your options. Then you can run some PPP setup or your own custom script on both ends to communicate. You could also run a tool that turns the serial link into a single socket on the guest end, if you want to avoid having a PPP interface but still need to tunnel a TCP connection for some application.
This should keep you safe when analyzing malicious code, as long as it's not Skynet :-) You should still do it with the permission of the sysadmin, as you may be violating your company's rules by working around some security measures.
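The "custom script on both ends" idea needs some framing, because a raw serial link is just a byte stream with no message boundaries. A minimal sketch of length-prefixed framing that such a bridge could use (the real bridge would read and write these frames on both ends of the virtual serial port, e.g. with pyserial; the function names are mine, not from any library):

```python
import struct

def encode_frame(payload: bytes) -> bytes:
    # 4-byte big-endian length prefix, then the payload itself.
    return struct.pack(">I", len(payload)) + payload

def decode_frames(buf: bytes) -> list:
    # Extract every complete frame from the bytes read so far;
    # a trailing partial frame is left for the next serial read.
    frames, off = [], 0
    while off + 4 <= len(buf):
        (n,) = struct.unpack_from(">I", buf, off)
        if off + 4 + n > len(buf):
            break  # incomplete frame, wait for more data
        frames.append(buf[off + 4 : off + 4 + n])
        off += 4 + n
    return frames

wire = encode_frame(b"GET / HTTP/1.0\r\n") + encode_frame(b"\r\n")
print(decode_frames(wire))  # both payloads, reassembled in order
```

With framing in place, one end accepts a local TCP connection and writes frames to the serial device, while the other end unpacks them and replays the bytes into a socket inside the guest; PPP over the same link achieves the equivalent at the IP layer without custom code.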
If the VM 'intentionally' has no network configured, you can't connect to it over a network.
Your question embodies a contradiction in terms.

Configuring openstack for a in-house test cloud

We're currently looking to migrate an old and buggy eucalyptus cloud to openstack. We have ~15 machines that are all on the same office-internal network. The instances get their network configuration from an external (not eucalyptus) DHCP server. We run both linux and windows images. The cloud is used exclusively for platform testing from Jenkins.
Looking into OpenStack, it seems that of the three supported networking modes, none really fits our environment. What we are looking for is something like an "unmanaged mode" where OpenStack launches an instance that is hooked up to the eth0 interface on the instance's compute node and receives its network configuration from the external DHCP server on boot. I.e. the VMs, guest hosts and clients (Jenkins) are all on the same network, managed by an external DHCP server.
Is a scenario like this possible to set up in OpenStack?
It's not commonly used, but the networking setup that will fit your needs best is FlatNetworking (not FlatDHCPNetworking). There isn't stellar documentation on configuring that setup for your environment, and some pieces (like the nova-metadata service) may be a bit tricky to manage, but it should let you run an OpenStack cloud with an external DHCP provider.
I wrote up the wiki page http://wiki.openstack.org/UnderstandingFlatNetworking some time ago to explain the setup of the various networks and how they operate with regard to NICs on the hosting systems. FlatNetworking is effectively the same as FlatDHCPNetworking, except that OpenStack doesn't try to run the DHCP service for you.
Note that with this mode, all the VM instances will be on the same network with your OpenStack infrastructure - there's no separation of networks at all.
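For reference, a minimal nova.conf sketch for this mode. This assumes Grizzly-era nova-network flat networking and flag names from that release; verify the exact option names against your release's documentation before using them:

```ini
# nova.conf fragment (sketch, nova-network FlatManager)
network_manager = nova.network.manager.FlatManager  # flat, no OpenStack-run DHCP
flat_network_bridge = br100   # bridge the VM taps onto...
flat_interface = eth0         # ...the NIC attached to the office LAN
# FlatManager does not run dnsmasq, so an external DHCP server on the same
# segment can answer the guests' DHCP broadcasts (you may also need to
# disable nova's static IP injection so the two don't conflict).
```

The comments mark the behavioral assumptions; the wiki page linked above covers how the bridge and host NICs fit together.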
