Configuring OpenStack for an in-house test cloud

We're currently looking to migrate an old and buggy Eucalyptus cloud to OpenStack. We have ~15 machines that are all on the same office-internal network. The instances get their network configuration from an external (non-Eucalyptus) DHCP server. We run both Linux and Windows images. The cloud is used exclusively for platform testing from Jenkins.
Looking into OpenStack, it seems that none of the three supported networking modes really fits our environment. What we are looking for is something like an "unmanaged mode" where OpenStack launches an instance that is hooked up to the eth0 interface of its compute node and receives its network configuration from the external DHCP server on boot. That is, the VMs, the guest hosts, and the clients (Jenkins) are all on the same network, managed by an external DHCP server.
Is a scenario like this possible to set up in OpenStack?

It's not commonly used, but the networking setup that best fits your needs is FlatNetworking (not FlatDHCPNetworking). The documentation on configuring it for an environment like yours isn't stellar, and some pieces (like the nova-metadata service) may be a bit tricky to manage, but it should let you run an OpenStack cloud with an external DHCP provider.
I wrote up the wiki page http://wiki.openstack.org/UnderstandingFlatNetworking some time ago to explain how the various networks are set up and how they relate to the NICs on the hosting systems. FlatNetworking is effectively the same as FlatDHCPNetworking, except that OpenStack doesn't try to run the DHCP service for you.
Note that in this mode, all the VM instances will be on the same network as your OpenStack infrastructure - there's no separation of networks at all.
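To make that concrete, here is a minimal nova.conf sketch for flat networking with an external DHCP server (nova-network era; the bridge and interface names are assumptions to adapt to your environment):

```
# /etc/nova/nova.conf (excerpt) - hypothetical FlatManager setup;
# br100 and eth0 are placeholders for your bridge and office-network NIC.
network_manager = nova.network.manager.FlatManager
# Bridge that instance vifs are plugged into on each compute node
flat_network_bridge = br100
# Physical interface on the office network, enslaved to the bridge
flat_interface = eth0
# FlatManager does not start dnsmasq, so DHCP requests from the instances
# go out over br100/eth0 and are answered by your external DHCP server.
```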

Related

AWS EC2 - Unable to reach an HTTP server from a different machine on the same network

I followed this tutorial to set up two EC2 instances: "12. Creation of two EC2 instances and how to establish ping communication" on YouTube.
The only difference is that I used a Linux image.
I set up a simple Python HTTP server on one machine (on port 8000), but I cannot access it from my other machine; whenever I curl, the request just hangs. (It might eventually time out, but I wasn't patient enough to witness that.)
The workaround, I figured, is to add a port rule via the security group. I don't like this option, since it means that port (on the machine hosting the web server) can be reached from the internet.
I was looking for an experience similar to what people usually have at home with their routers: machines connected to the same home router can reach other machines on any port (provided the destination machine has some service listening on that port).
What is the solution to achieve something like this when working with EC2?
The instance is open to the internet because you are allowing access from '0.0.0.0/0' (anywhere) in the inbound rule of the security group.
If you want communication to be allowed only between the instances and not from the public internet, you can achieve that by assigning the same security group to both instances and modifying its inbound rule to allow all traffic (or just ICMP traffic) sourced from the security group itself.
You can read more about it here:
AWS Reference
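As a rough illustration, here is a hedged boto3 sketch of that self-referencing rule (the security group ID is a placeholder):

```python
import boto3

ec2 = boto3.client("ec2")
SG_ID = "sg-0123456789abcdef0"  # placeholder: the group attached to both instances

# Allow all traffic whose *source* is this same security group. Instances in
# the group can then reach each other on any port, while the rule matches
# nothing arriving from the public internet.
ec2.authorize_security_group_ingress(
    GroupId=SG_ID,
    IpPermissions=[{
        "IpProtocol": "-1",  # -1 means all protocols and ports
        "UserIdGroupPairs": [{"GroupId": SG_ID}],
    }],
)
```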

Network settings in OpenStack with a single OpenVPN connection

I'm trying to set up an OpenStack environment with two Kubernetes clusters, one production and one testing. My idea was to separate them with two networks in OpenStack and then put a VPN in front to limit the exposure through floating IPs (for this I would have a proxy that routes requests to the correct internal addresses).
However, issues arise when trying to tunnel requests to both networks while connected to the VPN. Whether I run the VPN in its own network or in one of the two, I don't seem to be able to make requests across network boundaries.
Is there a better way to configure the networking in OpenStack or OpenVPN, so that I can keep the clusters separated and still have access to all resources through one installation of OpenVPN?
Is it better to run everything in the same OpenStack network and separate the clusters with subnets? Can I still have the production and test clusters expose different IP addresses externally? Are they still separated enough to limit the risk of them accessing each other?
Sidenote: I use Terraform to deploy the infrastructure and Ansible to install resources, in case someone has suggestions along the lines of already-prepared scripts.
Thanks,
The solution I went for was to give each environment its own network and CIDR and then attach both networks to the VPN instance so it has access to them. From there I just tunnel everything.
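For anyone repeating this, the key piece is pushing routes for both cluster networks to VPN clients; a minimal OpenVPN server.conf fragment might look like this (the CIDRs are hypothetical placeholders for the two networks):

```
# server.conf (excerpt) - assumed subnets for the two clusters
push "route 10.10.0.0 255.255.255.0"   # production network
push "route 10.20.0.0 255.255.255.0"   # testing network
# The VPN instance needs a port on each network and IP forwarding enabled
# so it can forward client traffic into both subnets.
```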

OpenStack compute node communicating with/pinging the VMs running on it

In Ceilometer, when pollsters collect meters from VMs, they use the hypervisor on the compute node. Now I want to write a new Ceilometer plugin that does not use the hypervisor to collect meters; instead, I want to collect meters from a service installed on the VMs (meaning Ceilometer gets its data from that service), so the compute node must be able to communicate with the VMs by their (private) IP addresses. Is there any solution for this?
Thanks all.
In general the internal network used by your Nova instances is kept intentionally separate from the compute hosts themselves as a security precaution (to ensure that someone logged into a Nova server isn't able to compromise your host).
For what you are proposing, it would be better to adopt a push model rather than a pull model: have a service running inside your instances that publishes data to some collector accessible at a routable IP address.
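A minimal sketch of such an in-guest push agent, assuming a collector reachable at a routable address (the URL and meter name are hypothetical):

```python
import json
import time
import urllib.request

COLLECTOR_URL = "http://collector.example.com:8777/meters"  # assumed endpoint

def read_meter():
    # Placeholder metric: 1-minute load average from inside the guest
    with open("/proc/loadavg") as f:
        return {"name": "guest.load1", "value": float(f.read().split()[0])}

while True:
    req = urllib.request.Request(
        COLLECTOR_URL,
        data=json.dumps(read_meter()).encode(),
        headers={"Content-Type": "application/json"},
    )
    # The guest initiates the connection, so no inbound path from the
    # compute host into the instance network is needed.
    urllib.request.urlopen(req)
    time.sleep(60)
```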

Tunneling a network connection into a VMware guest without a network

I'm trying to establish a TCP connection between a client machine and a guest VM running inside an ESXi server. The trick is that the guest VM intentionally has no network configured. However, the ESXi server is on the network, so in theory it might be possible to bridge the gap in software.
Concretely, I'd like to eventually create a direct TCP connection from Python code running on the client machine (I want to create an RPyC connection), but anything that results in SSH-like port tunneling would be breakthrough enough.
I'm theorizing that some combination of VMware Tools, pysphere, and obscure network adapters could work. But so far my searches haven't yielded any results, and my only ideas are either ugly (something like tunneling over file operations) or very error-prone (basically, if I have to build a TCP stack, I know I'll be writing lots of bugs).
It's for a testing environment setup, not production; but I prefer stability over speed. I currently don't see much need for high throughput.
To summarize the setup:
Client machine (Windows/Linux, whatever works) with VMware Tools installed
ESXi server (network-accessible from the client machine)
VMware guest which has no NICs at all but is accessible using VMware Tools (must be Windows in my case, but a Linux solution is welcome for the sake of completeness)
Any ideas and further reading suggestions would be awesome.
Thank you Internet, you are the best!
It's not clear what 'no NICs at all on the guest' means. If I can assume it means that no physical NIC is assigned to the guest, the solution is easy: a VMware soft NIC can be provisioned for the guest VM, and that will serve as the entry point to the guest's network stack.
But if a soft NIC is also unavailable, I really wonder what could serve as the entry point to the guest's network stack, be it Linux or Windows. To my understanding, if that's what you meant, you would need to modify the guest OS to use a different door into the guest's network stack and to post/drain packets through it. But once you do a proper implementation of that backdoor, it will be just another implementation of the soft NIC that VMware supports by default. So why not use that?
It's a bit late, but a virtual serial port may be your friend. You can expose the outer end of the serial port over the network or locally, depending on your options. Then you can run some PPP setup or your own custom script on both ends to communicate. You could also run a tool that turns the serial link into a single socket on the guest end if you want to avoid having a PPP interface but still need to tunnel a TCP connection for some application; a rough sketch follows below.
This should keep you safe when analyzing malicious code, as long as it's not Skynet :-) You should still do it with the permission of the sysadmin, as you may be violating your company's rules by working around some security measures.
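As an illustration of the "single socket from the serial link" idea, here is a hedged pyserial sketch for the guest end; the device path, baud rate, and local service port are all assumptions:

```python
# Hypothetical relay of one TCP stream over the VM's virtual serial port.
# Run inside the guest; ESXi exposes the other end of the serial port over
# the network. Requires pyserial (pip install pyserial).
import serial
import socket
import threading

ser = serial.Serial("/dev/ttyS0", 115200, timeout=0.1)  # assumed guest device
app = socket.create_connection(("127.0.0.1", 8000))     # assumed local service

def serial_to_socket():
    # Copy bytes arriving on the serial link to the local TCP service.
    while True:
        data = ser.read(4096)
        if data:
            app.sendall(data)

threading.Thread(target=serial_to_socket, daemon=True).start()

# Copy bytes from the local TCP service back out over the serial link.
while True:
    data = app.recv(4096)
    if not data:
        break
    ser.write(data)
```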
If the VM 'intentionally' has no network configured, you can't connect to it over a network.
Your question embodies a contradiction in terms.

VPN environment on non-VLAN networking in OpenStack

I have read about the VPN capability of OpenStack here:
Cloudpipe – Per Project Vpns
One simple question: Is it possible to implement a VPN environment on a non-"VLAN Networking mode" (i.e. "Flat DHCP mode")?
So when I connect through the OpenVPN client, I'll be 'placed' on my project/tenant network subnet and get a fixed/private IP, i.e. 10.5.5.x/24.
I'm using OpenStack Grizzly with Quantum (Flat DHCP mode).
I haven't used this myself, but being familiar with OpenStack networking, I can assure you that as long as your Cloudpipe instance has a floating IP associated (be it VLAN or flat mode), you can do this. I hope you had already figured this out yourself, as my answer comes late; Stack Overflow only recently seems to be filling up with more OpenStack people.
