Machine's uptime in OpenStack

I would like to know (and retrieve via REST API) the uptime of individual VMs running in OpenStack.
I was quite surprised that the OpenStack web UI has a column called "Uptime", but it actually shows the time since the VM was created. If I stop the VM, the UI shows Status=Shutoff and Power State=Shutdown, but the Uptime keeps incrementing...
Is there a "real" uptime (I mean for a machine that is UP)?
Can I retrieve it somehow via the OpenStack's REST API?
I saw the comment at How can I get VM instance running time in openstack via python API? but the page with the extension mentioned there does not exist, and it looks to me like that extension will not be available in every OpenStack environment. I would like a standard way to retrieve the uptime.
Thanks.
(Version Havana)

I haven't seen any documentation saying this is the reason, but the nova-scheduler doesn't differentiate between a running and a powered-off instance, so your cloud can't be over-allocated or leave an instance in a position where it would be unable to be powered on. I would like to see a metric of actual system runtime as well, but at the moment the only way to gather that would be through ceilometer or via Rackspace's StackTach.
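As a rough workaround (not a true uptime), you can at least read the launch timestamp Nova exposes. The sketch below assumes the python-novaclient CLI is installed and your OS_* credentials are exported; the instance UUID is a placeholder:
# "Time since launch", not real uptime: the value keeps growing while the VM is SHUTOFF.
VM_ID=00000000-0000-0000-0000-000000000000   # placeholder UUID
nova show "$VM_ID" | grep -E 'launched_at|status|power_state'
Subtracting the periods the VM spent powered off would still require ceilometer samples or StackTach events, as noted above.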

Related

Migrate from legacy network in GCE

Long story short - I need to use networking between projects to have separate billing for them.
I'd like to reach all the VMs in different projects from a single point that I will use for provisioning systems (let's call it coordinator node).
It looks like VPC Network Peering is a perfect solution to this. But unfortunately, one of the existing networks is "legacy". Here's what the Google docs state about legacy networks:
About legacy networks
Note: Legacy networks are not recommended. Many newer GCP features are not supported in legacy networks.
OK, naturally the question arises: how do you migrate out of legacy network? Documentation does not address this topic. Is it not possible?
I have a bunch of VMs, and I'd be able to shut them down one by one:
shutdown
change something
restart
Unfortunately, it does not seem possible to change the network even when the VM is down?
EDIT:
It has been suggested to recreate the VMs while keeping the same disks. I would still need a way to bridge the legacy network with the new VPC network to make the migration smooth. Any thoughts on how to do that using the GCE toolset?
One possible solution - for each VM in the legacy network:
Get VM parameters (API get method)
Delete VM without deleting PD (persistent disk)
Create VM in the new VPC network using parameters from step 1 (and existing persistent disk)
This way, stop-change-start is not so different from delete-recreate-with-changes. It's possible to write a script to fully automate this (migrating a whole network); I wouldn't be surprised if someone has already done that.
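A sketch of those three steps with gcloud (instance, zone, disk, and network names are placeholders; check the flags against your gcloud version):
gcloud compute instances describe my-vm --zone us-central1-a > my-vm.yaml      # 1. save parameters
gcloud compute instances delete my-vm --zone us-central1-a --keep-disks=all    # 2. delete VM, keep the PD
gcloud compute instances create my-vm --zone us-central1-a \
  --machine-type n1-standard-1 \
  --disk name=my-vm,boot=yes \
  --network my-vpc --subnet my-subnet                                          # 3. recreate in the new VPC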
UPDATE
The https://github.com/googleinterns/vm-network-migration tool automates the above process, and it also supports migrating a whole Instance Group, Load Balancer, etc. Check it out.

How does jupyterhub work?

I have to build infrastructure so that multiple users can work on the same Jupyter (IPython notebook) service via different sessions, so that the users can't interrupt each other.
I thought JupyterHub (https://github.com/jupyter/jupyterhub) is there to control everything, yet the session still seems to be bound to a single one, since if I log out of it in one window, an instance in another window also logs out.
Is there a way to control multiple sessions in Jupyter?
Jupyter doesn't support multiple users editing the same notebook at the same time without data loss, and I don't believe it is meant to. I believe Jupyter is meant to provide a relatively easy-to-configure-and-install instance of Python with the same installed modules and environment, to minimize problems caused by environmental differences between developer workstations.
Also, it's meant to make the barrier for entry to programming python and working in data science much lower than it otherwise would be. That is, it's much easier to talk an analyst into visiting a website than learning a new programming language.
More to the point of your question, though: the way Jupyter handles 'sessions' is that (unless configured otherwise) every Jupyter user corresponds to a user on the server that is running Jupyter, and every time you log in to Jupyter you are effectively creating a new login to that server's operating system. It immediately follows that if you log out of Jupyter from one window, you're logging out not just of that browser's session, but also of the login to the Jupyter server's operating system, which kills all other open browser windows.
Your question is a bit unclear; JupyterHub is meant to support multiple users across many machines. Of course, if you use the same browser on the same machine, you get logged out too, as the browser is carrying the connection information that gets revoked.
JupyterHub is a web-based multi-user application that provides session and authentication services.
JupyterHub is typically hosted on a Unix/Linux server, and clients access it using the server's IP address and port number. Once a client connects, they must enter the user ID and password of a system user on the server (PAM authentication), which redirects them to the home directory of that user.
You can build an infrastructure with JupyterHub, which is meant for multiple users. JupyterHub just provides the multi-user interface and PAM authentication; you have to configure security, file access permissions, and everything else at the OS level, for example with shell scripts.
Normally, you start JupyterHub or a Jupyter notebook from the command line. In the same way, you can write a shell script to set up the multi-user environment.
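As a rough sketch of that approach (usernames and port are just examples): each JupyterHub login maps to a PAM/system user, so setting up users and starting the hub can be as simple as:
# create the system accounts that JupyterHub/PAM will authenticate against
sudo useradd -m alice && sudo passwd alice
sudo useradd -m bob && sudo passwd bob
# start the hub; each user who logs in gets a notebook server in their own home directory
sudo jupyterhub --ip 0.0.0.0 --port 8000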

How to start multiple virtual machines simultaneously in CloudStack

Is there a way to start multiple virtual machines (instances) simultaneously in CloudStack?
Apparently this can't be done using the HTTP user interface. Also, the HTTP API request accepts only one id to target a virtual machine.
All I can think of to solve this is to fire multiple individual start requests, one per instance, and then poll each of the jobs for results. Is there a better way?
CloudStack is an API-driven system; if there is no API call where you can specify multiple VMs to be started (and I don't think there is), then it is not possible.
If you do need to start multiple machines (nearly) simultaneously, the only option I see is to fire multiple API calls, as you already mentioned.
See this answer on another question for a list of tools that make interfacing with CloudStack easier.
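For reference, a sketch of that fire-then-poll pattern against the raw API (the endpoint and UUIDs are placeholders, and real requests must also be signed with your API key/secret pair or sent through an authenticated session):
CS_API="http://cloudstack.example.com:8080/client/api"   # placeholder endpoint
for id in uuid-of-vm-1 uuid-of-vm-2 uuid-of-vm-3; do
  curl -s "$CS_API?command=startVirtualMachine&id=$id&response=json"   # returns a jobid
done
# poll each returned jobid until jobstatus is no longer 0 (pending)
curl -s "$CS_API?command=queryAsyncJobResult&jobid=<jobid-from-above>&response=json"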
To start VMs on CloudStack in quick succession (though serially), I used CloudMonkey and wrote a bash script to start a group of known VM UUIDs. See here for my experience:
https://sites.google.com/site/cloudfyp/tutorial/cloudmonkey/commands-on-cloudmonkey
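A minimal sketch of such a script (the UUIDs are placeholders; it assumes cloudmonkey is already configured with your API keys):
#!/bin/bash
VMS="uuid-of-vm-1 uuid-of-vm-2 uuid-of-vm-3"
for id in $VMS; do
  cloudmonkey start virtualmachine id="$id" &   # each start is an async job
done
wait
cloudmonkey list virtualmachines filter=name,state   # verify the resulting states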

how to get instances back after reboot in openstack

After a successful installation of devstack and launching instances, once I reboot the machine I have to start all over again and I lose all the instances that were launched. I tried rejoin-stack but it did not work. How can I get the instances back after a reboot?
You might set resume_guests_state_on_host_boot = True in nova.conf. The file should be located at /etc/nova/nova.conf
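For reference, the option goes in the [DEFAULT] section, so the relevant fragment of /etc/nova/nova.conf would look like:
[DEFAULT]
resume_guests_state_on_host_boot = True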
I've found some old discussion http://www.gossamer-threads.com/lists/openstack/dev/8772
AFAIK, at the present time OpenStack (Icehouse) is still not completely aware of the environments running inside it, so it can't restore them completely after a reboot. The instances will be there (virsh domains), but even if you start them manually or using nova flags, I'm not sure whether the other facilities will handle this correctly (e.g. whether neutron will correctly configure all L3 rules according to the DB records, etc.). Honestly, I'm pretty sure they won't...
The answer depends on what you need to achieve:
If you need a template environment (e.g. a similar set of instances and networks each time after reboot), you can just script everything. In other words, make a bash script that creates everything you need (see the sketch at the end of this answer) and run it each time after stack.sh. Make sure you're starting with a clean environment, since OpenStack DB state remains across ./unstack.sh - ./stack.sh or ./rejoin-stack.sh (you might try to just clean the DB, or delete it; stack.sh will build it back).
If you need a persistent environment (e.g. you don't want to lose the VMs and the whole infrastructure state after a reboot), I'm not aware of a way to do this with OpenStack. For example, the neutron agents (which configure iptables, DHCP, etc.) do not save state and are driven by events from the Neutron service. They will not recover after a reboot, so the network will be dead. I'll be very glad if someone shares a method for such a recovery.
In general, I think OpenStack is not focusing on this and will not focus on it during the nearest release cycles. The common approach is to have a multi-node environment where each node is replaceable.
See http://docs.openstack.org/high-availability-guide/content/ch-intro.html for reference
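For the first (template) approach, a re-creation script could be as simple as the following sketch (image, flavor, network names and CIDR are placeholders; Icehouse-era nova/neutron CLI syntax assumed):
#!/bin/bash
source openrc admin admin                                   # devstack credentials
NET_ID=$(neutron net-create demo-net | awk '/ id / {print $4}')
neutron subnet-create demo-net 10.0.10.0/24 --name demo-subnet
nova boot --flavor m1.small --image cirros-0.3.1-x86_64-uec \
  --nic net-id="$NET_ID" vm1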
devstack is an ephemeral environment. It is not supposed to survive a reboot; this is not supported behavior.
That being said, you might find success in re-initializing the environment by running
./unstack.sh
followed by
./stack.sh
again.
Again, devstack is an ephemeral environment. Its primary purpose is to run gate testing for OpenStack's CI infrastructure.
Or try ./rejoin-stack.sh to re-join the previous screens.

Azure Virtual Network Point-to-Site (ex. Azure Connect) autoconnect

While Azure Connect is being retired and Azure Virtual Network provides a similar feature with better speed, I've noticed a few drawbacks.
Azure Connect was capable of maintaining the connection automatically, without the user even having to log in. Azure Virtual Network, however, requires the user to interactively connect/reconnect to the VPN. This makes it quite unusable in a production environment. Are there any ways to overcome this obstacle?
To solve this problem you can use rasdial.
The first time I used rasdial I ran into this problem:
"This function is not supported on this system." Don't get fooled by this message, because it just means you didn't give the correct syntax.
rasdial "Your VPN name" /phonebook:"%userprofile%\AppData\Roaming\Microsoft\Network\Connections\Cm\Your-VPN\Your-VPN.pbk"
%userprofile% is the user profile you used to install the Azure VPN with.
Your-VPN is the name of the Azure VPN connection.
A simple method is to make a batch script:
SET VPN_NAME=azureVPN
:loop
rasdial %VPN_NAME% /PHONEBOOK:C:\Users\bas\AppData\Roaming\Microsoft\Network\Connections\Cm\%VPN_NAME%\%VPN_NAME%.pbk
timeout 10
goto loop
The result will be:
Connecting to test...
Verifying username and password...
Registering your computer on the network...
Successfully connected to test.
Command completed successfully.
After 10 seconds:
You are already connected to test.
Command completed successfully.
To have this script start when the computer starts, use the Task Scheduler.
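For example, a sketch of such a task (the task name and script path are placeholders):
schtasks /Create /TN "AzureVpnAutoConnect" /TR "C:\scripts\azure-vpn.bat" /SC ONSTART /RU SYSTEM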
This works; you just need to go to the folder and get the long name of the phone book from that folder. Also, AzureVPN (the name) should be replaced with the same thing without .pbk.
