Change DHCP lease time on VirtualBox - networking

My company uses VirtualBox + Hostmanager as our Vagrant provider. We have many different projects, each of which gets its own setup, so we ended up with a meta-project of sorts that contains just the Vagrant stuff:
projectA/
projectB/
projectC/
Inside each project we have a layout like:
projectA/
  puppet/
  src/          # project's source, not part of meta-project, auto-created
  Puppetfile
  Vagrantfile
src/ gets created by Vagrant provisioning and is checked out from each project's own VCS repository. So, if you're working on projectB, you just go to that folder and run vagrant up (which starts or creates the required machine, checks out the source if needed, and sets up a www.projectB.dev hosts entry on your machine), work on it, and then vagrant halt or vagrant destroy. This all works mostly well.
The problem is that VirtualBox's DHCP server (which provides the dynamic IP address for each box) gives out a really short lease (a day or two). As not every project is worked on daily, those machines aren't up to renew their lease, and I end up with an /etc/hosts like:
172.28.128.4 projectA # used
172.28.128.3 projectB
172.28.128.5 projectC
172.28.128.4 projectD # reused :(
This is not ideal for my situation, as it becomes unmanageable after a while, especially for non-networking-savvy frontend guys.
So, is there a way around this, either by changing the lease time on the VirtualBox DHCP server or on the client (the base box is always a custom CentOS 6 build)? I could go the static-IP-per-project route, but as that opens another can of worms, I'm treating it as a last resort.
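One client-side workaround I'm considering is simply asking for a longer lease from the box itself; a minimal sketch, assuming the stock ISC dhclient that CentOS 6 ships (the file name and eth1 are placeholders for the private-network interface):
# /etc/dhcp/dhclient-eth1.conf on the guest
# request a week-long lease; the server may still cap what it actually grants
send dhcp-lease-time 604800;
If VirtualBox's DHCP server ignores the request, I'm still stuck, which is why I'm asking.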


puppet client reporting to wrong host in Foreman

This is my first post!
I have hundreds of nodes managed by Puppet/Foreman. Everything is fine.
I did something I already did without problem in the past:
Change the hostname of a server.
This time I changed 2 hostnames:
Initially I had 'gate02' and 'gate03'.
I moved gate02 to 'gate02old' (with dummy IP, and switched the server OFF)
then I moved gate03 to gate02 ...
Now (the new) gate02 reports are updating the host called gate02old in foreman.
I did clean the certs on the puppetserver. I removed the ssl dir on the (new) gate02 and ran puppet agent. I did not find any reference to 'gate' in /var/lib/puppet. I changed the certname in puppet.conf, the hostname, and sysconfig/network-scripts/ifcfg-xxxx.
The puppet agent runs smoothly and sends its report to the puppetserver. But it updates the wrong host!
Would anyone have a clue on how to fix this?
Thanks!
Foreman 2.0.3
Puppet 6
I do not accept that the sequence of events described led to the behavior described. If reports for the former gate03, now named gate02, are being logged on the server for name gate02old, then that is because that machine is presenting a cert to the server that identifies it as gate02old (and the server is accepting that cert). The sequence of events presented does not explain how that might be, but my first guess would be that it is actually (new) gate02old that is running and requesting catalogs from the server, not (new) gate02.
Fix it by
Ensuring that the machine you want running is in fact the one that is running, and that its hostname is in fact what you intend for it to be.
Shutting down the agent on (new) gate02. If it is running in daemon mode then shut down the daemon and disable it. If it is being scheduled by an external scheduler then stop and disable it there. Consider also using puppet agent --disable.
Deactivating the node on the server and cleaning out its data, including certs:
puppet node deactivate gate02
puppet node deactivate gate02old
puppet node deactivate gate03
You may want to wait a bit at this point, then ...
puppet node clean gate02
puppet node clean gate02old
puppet node clean gate03
Cleaning out the nodes' certs. For safety, I would do this on both nodes. Removing /etc/puppetlabs/puppet/ssl (the default agent ssldir, on the nodes, not the server!) should work for that, or you could remove the puppet-agent package altogether, remove any files left behind, and then reinstall.
Updating the puppet configuration on the new gate02 as appropriate.
Re-enabling the agent on gate02, and starting it or running it in --test mode.
Signing the new CSR (unless you have autosigning enabled), which should have been issued for gate02 or whatever certname is explicitly specified in that node's puppet configuration. A condensed command-line sketch of the agent-side steps follows.
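As a sketch (assuming Puppet 6 defaults; the ssldir is the default agent path mentioned above, and the certname comes from the question):
# on (new) gate02: stop the agent and wipe its certs
puppet agent --disable "hostname cleanup"
rm -rf /etc/puppetlabs/puppet/ssl
# after deactivating/cleaning the old nodes on the server (see above), re-enable and request a new cert
puppet agent --enable
puppet agent --test
# on the puppetserver: sign the new CSR (skip if autosigning is enabled)
puppetserver ca sign --certname gate02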
Thanks for the answer, though it was not the right one.
I did get to the right point by changing the hostname of the old gate02old again, to another existing-for-testing one, starting the server, and getting it back into Foreman. Once that was done, removing (again!) the certs on the new gate02 put it right, and its reports now update the right entry in Foreman.
I still believe there is something (a DB?) that was not updated right, so Foreman was sure that the host called 'gate02' was the one shown in the GUI as 'gate02old'.
I am very sorry if you don't believe me.
Not to mention rather disappointed.
Cheers.

Problem communicating over a local area network (LAN) with ROS on WSL2

I am a developer of ROS projects. Recently I have been trying ROS (Melodic) on WSL2 (Windows Subsystem for Linux), and everything works just great. But I ran into trouble when I wanted another PC on the same local area network (LAN) to communicate with it. Before setting environment variables like "ROS_MASTER_URI, ROS_IP", I already knew that since WSL2 runs on Hyper-V, the IP shown inside WSL2 is not the one on the real LAN. I have to run a command like the one below so that everyone on the LAN can reach a specific host:PORT on WSL2.
netsh interface portproxy delete v4tov4 listenport=$port listenaddress=$addr
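For reference, the matching rule that actually creates the forwarding looks something like this ($port, $addr and $wsl_ip are placeholders):
netsh interface portproxy add v4tov4 listenport=$port listenaddress=$addr connectport=$port connectaddress=$wsl_ip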
But here comes a new question:
The nodes, which use TCPROS to communicate with each other, get a random port every time I run the launch file.
How can I handle this kind of problem?
Or is there any information on the internet that I could have a look at?
Thank you.
The root problem is described in WSL issue #4150. To quote from that thread,
WSL 2 seems to NAT it's virtual network, instead of making it bridged
to the host NIC.
Option 1 - Port forwarding script on login
Note: From #kraego's comment (and the edited question, which I'm just seeing based on the comment), this is probably not a good option for ROS, since the port numbers are randomly assigned. This makes port forwarding something that would have to be dynamically done.
There are a number of workarounds described in that issue, for which you've already figured out the first part (the port forwarding). The primary technique seems to be to create a PowerShell script to detect the IP address and create the port forwarding rules that runs upon Windows login. This particular comment near the top of the thread seems to be the canonical go-to answer, although many people have posted their tweaks or alternatives throughout the very long thread.
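As a minimal sketch of the idea (this is not the exact script from that comment; 11311 is the default ROS master port, so this only covers the fixed-port case):
# PowerShell, run at logon: forward a fixed port to the current WSL2 address
$wslIp = (wsl hostname -I).Trim().Split(" ")[0]
netsh interface portproxy delete v4tov4 listenport=11311 listenaddress=0.0.0.0
netsh interface portproxy add v4tov4 listenport=11311 listenaddress=0.0.0.0 connectport=11311 connectaddress=$wslIp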
One downside - I believe the script that is mentioned there needs to be run at logon since the WSL subsystem seems to only want to run when a user is logged in. I've found that attempting to run a WSL service or instance through Windows OpenSSH results in that instance/service shutting down soon after the SSH session is closed, unless the user is already logged into Windows with a WSL instance opened.
Option 2 - WSL1
I would also propose that, assuming it fits your workflow and ROS works on it (it may not, given the device access you may need), you can simply use WSL1 instead of WSL2 to avoid this. You can try this out by:
Backing up your existing distro (from PowerShell or cmd, use wsl --export <DistroName> <FileName>)
Importing the backup into a new WSL1 instance with wsl --import <NewDistroName> <InstallLocation> <FileNameOfBackup> --version 1
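For example (the distro name and paths are placeholders):
wsl --export Ubuntu-18.04 C:\backup\ubuntu.tar
wsl --import Ubuntu-18.04-wsl1 C:\WSL\Ubuntu-18.04-wsl1 C:\backup\ubuntu.tar --version 1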
It's possible to simply change versions in place, but I tend to like to have a backup anyway before doing it, and as long as you are backing up, you may as well leave the original in place.

how to get instances back after reboot in openstack

After successfully installing devstack and launching instances, once I reboot the machine I have to start all over again and I lose all the instances that were launched before. I tried rejoin-stack but it did not work. How can I get the instances back after a reboot?
You might set resume_guests_state_on_host_boot = True in nova.conf. The file should be located at /etc/nova/nova.conf
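As a sketch, the setting normally lives in the [DEFAULT] section of that file, and nova-compute needs a restart to pick it up:
[DEFAULT]
resume_guests_state_on_host_boot = True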
I've found some old discussion http://www.gossamer-threads.com/lists/openstack/dev/8772
AFAIK, at present OpenStack (Icehouse) is still not completely aware of the environment inside it, so it can't restore it completely after a reboot. The instances will be there (as virsh domains), but even if you start them manually or using nova flags, I'm not sure whether the other facilities will handle this correctly (e.g. whether neutron will correctly configure all L3 rules according to DB records, etc.). Honestly, I'm pretty sure they won't...
The answer depends on what you need to achieve:
If you need a template environment (e.g. a similar set of instances and networks each time after reboot), you may just script everything. In other words, write a bash script that creates everything you need and run it each time after stack.sh (see the sketch after this list). Make sure you're starting with a clean environment, since the OpenStack DB state remains between ./unstack and ./stack.sh or ./rejoin-stack.sh (you might try to just clean the DB, or delete it; stack.sh will build it back).
If you need a persistent environment (e.g. you don't want to lose VMs and the whole infrastructure state after a reboot), I'm not aware of a way to do this with OpenStack. E.g. the neutron agents (which configure iptables, dhcp, etc.) do not save state and are driven by events from the Neutron service. They will not recover after a reboot, so the network will be dead. I'll be very glad if someone shares a method to do such recovery.
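A minimal sketch of the "script everything" option above (the image, flavor and names are placeholders; the commands are the Icehouse-era clients):
#!/bin/bash
# recreate-env.sh - run from the devstack dir after ./stack.sh to rebuild a known environment
source openrc admin admin
neutron net-create demo-net
nova boot --image cirros-0.3.2-x86_64-uec --flavor m1.tiny demo-vm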
In general I think OpenStack is not focusing on this and will not focus on it during the nearest release cycles. The common approach is to have a multi-node environment where each node is replaceable.
See http://docs.openstack.org/high-availability-guide/content/ch-intro.html for reference
Devstack is an ephemeral environment. It is not supposed to survive a reboot; this is not supported behavior.
That being said, you might find success in re-initializing the environment by running
./unstack.sh
followed by
./stack.sh
again.
Again, devstack is an ephemeral environment. Its primary purpose is to run gate testing for OpenStack's CI infrastructure.
Or try ./rejoin-stack.sh to re-join the previous screen sessions.

launching adhoc docker instances: Is it recommended to launch a docker instance per request?

Is it recommended to launch a docker instance per request?
I have either lighttpd or Nginx running on my web server as a reverse proxy. I support a number of subdomains with very low usage. When a request for a subdomain arrives, I want to start the corresponding docker instance. Preferably I'd like to launch them dynamically, so that if more than one user arrives I would launch one per user... and/or a shared instance (determined by configuration).
Originally I said this should work well for low traffic sites, but upon further thought, no, this is a bad idea.
Each time you launch a Docker container, it adds a read-write layer to the image. Even if there is very little data written, the layer exists, and each request will generate one. When a single user visits a website, rendering the page will generate tens to thousands of requests, for CSS, for JavaScript, for each image, for fonts, for AJAX, and each of these would create one of those read-write layers.
Right now there is no automatic cleanup of the read-write layers -- they persist even after the Docker container has exited. By default, nothing is lost.
So, even for a single low traffic site, you would find your disk use growing steadily over time. You could add your own automated cleanup.
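A sketch of such a cleanup, or of avoiding the leftover layers up front (the image name is a placeholder):
# remove containers that have already exited, along with their read-write layers
docker rm $(docker ps -aq --filter status=exited)
# or discard the layer automatically when the container exits
docker run --rm mysite-image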
Then there is the second problem: anything uploaded to the website would not be available to any other requests unless it was written to some out-of-container shared storage. That's pretty easy to do with S3 or a separate and persistent database service, but it does start showing the weakness in the "one new Docker container per request" approach. If you're going to have some persistent services, why not make the Docker containers more persistent and run them longer?

VMware Virtual Machine Ignores DHCP Lease

I have a VMware Player (Workstation 9) virtual machine running Ubuntu 12.04 on an Ubuntu 12.10 (13.10 kernel) host, using a bridged connection set to replicate the physical network connection. Everything usually works properly in a variety of locations. But at one location that I often frequent, the IP address of the virtual machine changes roughly every 10 minutes, rendering the VM entirely useless, as it is a PostgreSQL server and thus needs a dedicated local IP. Not only that, but when I copied a database dump into a shared folder, the file ended up getting corrupted.
I can verify that the network caused this problem, as the actual file on the VM was not corrupted. I managed to temporarily solve the problem by going into the local modem and adding a DHCP reservation for the VM's MAC address. Everything was working and files were not getting corrupted. However, it only lasted temporarily; another random address was eventually assigned, breaking several running processes on my machine. Between the machines and the router/gateway there is a redundant Apple router on the network that is likely causing the issue, but I cannot just throw it away or deactivate it, as it is not my network.
Furthermore, DHCP leases work just fine for every other machine on the network, so I believe the root issue is with VMware.
I have no clue what could possibly cause something like this to occur, as IP address assignment is one of those things that normally "just works". I am thinking about just switching to VitualBox, as I have used it in the past and never had a problem (except with properly running Windows 8. However,I have never actually seen any article suggesting VirtualVox over WMWare, as the latter supposedly performs better and has more intuitive shared folder support. Obviously though, any benefit from a shared folder is negated if it just shares corrupt garbage.
So you manually set a MAC address on your VM? In the past, I've seen VMs change MACs quite often, but generally only after a reboot or cold start; it shouldn't happen on the fly... You could install Wireshark and grab a few packet captures to see if anything in there points you in the direction of the root cause.
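A sketch of capturing just the DHCP traffic from the command line for that purpose (the interface name eth0 is a placeholder; open the resulting file in Wireshark):
sudo tcpdump -i eth0 -n -w dhcp.pcap port 67 or port 68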
