Impact of restarting the nova-compute service - OpenStack

I am looking for some guidance related to the following question:
What will be the impact on running VMs if the nova-compute service is restarted?
OpenStack version: Newton
I understand that new connections will probably be affected, as the nova-compute API will be unavailable for a few seconds. But will there be any risk to the running VMs?
I found a few articles like this one, but the answers are pretty vague.

As you suggest, restarting the nova-compute service on a hypervisor has no impact on running VMs. They will continue running and keep their connections alive, so it can be considered a "safe" operation.
However, actions such as rebuild, delete, reboot, etc. will fail for as long as the service is down. Also note that restarting the hypervisor itself (QEMU/KVM, for instance) would have an impact on the running VMs.
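If you want to verify this for yourself, here is a minimal sketch using the openstacksdk Python library; the cloud name "mycloud" and the hypervisor name "compute-01" are placeholders, and listing servers by host requires admin credentials. It simply records the instance states before and after you restart nova-compute out of band:
# check_vms.py - rough sketch, not an official procedure
import time
import openstack

conn = openstack.connect(cloud="mycloud")  # reads credentials from clouds.yaml

def servers_on(host):
    # map each instance hosted on the given hypervisor to its current status
    return {s.name: s.status
            for s in conn.compute.servers(all_projects=True, host=host)}

before = servers_on("compute-01")
print("before restart:", before)
# ... restart nova-compute on compute-01, wait a little, then compare ...
time.sleep(30)
after = servers_on("compute-01")
print("after restart: ", after)
assert before == after, "instance states changed across the restart"
If the assertion holds, the instances stayed ACTIVE across the restart; only the control-plane actions listed above are unavailable while the service is down.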

Related

Viewing error logs for a Flask application on DigitalOcean

I am running a Flask application on DigitalOcean via a Gunicorn and NGINX setup.
I have SSH access to my DigitalOcean droplet and am able to log in via the terminal.
Gunicorn, NGINX and Flask are already running and this is a production server.
Now, I'd like to SSH into my droplet and run a terminal command in order to see a printout of any errors that occur in my Flask application. I guess these would be Gunicorn errors.
Is such a thing possible? Or would I have to print things out to an error log? If so, I'll probably have questions about how to do that too! :D
Thank you in advance!!
Have a look at this DigitalOcean tutorial for Flask, Gunicorn and NGINX. It includes a section on obtaining logs at the end of step #5 that should be helpful.
A common approach with cloud-based deployments is to centralize logs by aggregating them automatically from multiple resources (e.g. Droplets), to save ssh'ing (or scp'ing) into individual machines to query logs.
With a single Droplet, it's relatively straightforward to ssh into the Droplet and query the logs, but as the number of resources grows this can become burdensome. Some cloud providers offer logging-as-a-service; with DigitalOcean you may want to look at third-party solutions. I'm not an expert, but the ELK stack, Splunk and Datadog are often used.
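If you do decide to write application errors to a dedicated file rather than relying only on Gunicorn's error log, a minimal sketch looks like this (the log path /var/log/myflaskapp/error.log and the app itself are placeholders for illustration):
# app.py - rough sketch of attaching a rotating file handler to Flask's logger
import logging
from logging.handlers import RotatingFileHandler
from flask import Flask

app = Flask(__name__)

handler = RotatingFileHandler("/var/log/myflaskapp/error.log",
                              maxBytes=1000000, backupCount=3)
handler.setLevel(logging.ERROR)
handler.setFormatter(logging.Formatter(
    "%(asctime)s %(levelname)s in %(module)s: %(message)s"))
app.logger.addHandler(handler)

@app.route("/")
def index():
    return "ok"
You can then SSH in and run tail -f /var/log/myflaskapp/error.log; unhandled exceptions will also appear in Gunicorn's own error log if you start Gunicorn with --error-logfile.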

Amazon ECS Application level monitoring and instance autohealing without ELB

I'm deploying an app to Amazon ECS and need some advice on application-level monitoring (periodic HTTP 200 and/or body match). Usually I place it behind an ELB and I am sure that the ELB will take action if it sees too many HTTP errors.
However, this time it's a very low-budget project and the cost of an ELB should be avoided (also consider that this is going to run on only one instance, as the user base is very limited).
What strategies could I adopt to guarantee that the application is alive (kill the instance and restart it in case of too many app errors)? Regarding the instance, I know about AWS auto-healing, but that works at the infrastructure level.
Obviously, one of the problems is that without an ELB I must bind the DNS to an EIP, so reassigning it is crucial.
The solution should not involve any other EC2 instance; external services are acceptable, but keeping it all inside AWS would be great.
Thanks a lot
Monitoring ECS is important for keeping your site healthy. If you are worried about capacity-related failures on AWS, I suggest looking at the auto-scaling features of AWS: you can scale ECS up when needed and release capacity when it is no longer required.
Nagios is another open-source monitoring tool that you can leverage; it is easy to install and configure. For the application-level check itself, one AWS-only option is sketched below.
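A sketch of that AWS-only approach, using boto3 (the EIP 203.0.113.10, the /health path, the "OK" body match and the SNS topic ARN are placeholders, not values from your setup): create a Route 53 health check against the EIP, which can also match the response body, then alarm on its HealthCheckStatus metric.
# health_check.py - rough sketch, not a definitive setup
import boto3

route53 = boto3.client("route53")
# Route 53 health-check metrics are published in us-east-1
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

hc = route53.create_health_check(
    CallerReference="myapp-http-check-1",
    HealthCheckConfig={
        "IPAddress": "203.0.113.10",     # your EIP
        "Port": 80,
        "Type": "HTTP_STR_MATCH",        # HTTP check that also matches the body
        "ResourcePath": "/health",
        "SearchString": "OK",
        "RequestInterval": 30,
        "FailureThreshold": 3,
    },
)
hc_id = hc["HealthCheck"]["Id"]

cloudwatch.put_metric_alarm(
    AlarmName="myapp-http-unhealthy",
    Namespace="AWS/Route53",
    MetricName="HealthCheckStatus",      # 1 = healthy, 0 = unhealthy
    Dimensions=[{"Name": "HealthCheckId", "Value": hc_id}],
    Statistic="Minimum",
    Period=60,
    EvaluationPeriods=3,
    Threshold=1,
    ComparisonOperator="LessThanThreshold",
    # placeholder SNS topic; subscribe yourself or a small Lambda to it
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:myapp-alerts"],
)
The alarm action here is only a notification: you would still need something subscribed to the topic (for example a small Lambda) to reboot or replace the instance and re-associate the EIP, because the built-in EC2 reboot/recover alarm actions only work with per-instance EC2 metrics.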

Vagrant performance on HTTP

I am seeing very slow Vagrant performance, regardless of provider or the number of cores I allocate. It seems to be an issue with either OS X, my Vagrant settings, or the VMware Fusion / VirtualBox provider.
When I run a PHPUnit test suite inside the VM it takes seconds; from outside the VM it takes minutes.
How can I optimize HTTP performance? I'm already using NFS.

Can OpenStack be configured to start certain instances when the hypervisor reboots?

Say we have a power outage and a hypervisor gets reset. OpenStack will start up the nova services, etc.
But it will not start back up any VMs which were running when the hypervisor went down. Can it be configured to do this?
Perhaps one workaround would be to make a startup script on the hypervisor, or a cron task somewhere, which starts specified VMs assigned to that hypervisor if they are not running, but that's less than ideal.
Currently running Havana with KVM, but will be upgrading to Icehouse soon.
There's this section in /etc/nova/nova.conf:
# Whether to start guests that were running before the host
# rebooted (boolean value)
#resume_guests_state_on_host_boot=false
If you uncomment that last line, change it to =true, and then restart the nova services on each compute node, that should do what you want.
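For example, on a RHEL/CentOS compute node the restart is typically service openstack-nova-compute restart, while on Ubuntu it is service nova-compute restart (service names vary by distribution and release).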

CloudStack installation: Error creating instance

I am a newbie to CloudStack. I installed it using the Quick Install Guide on CentOS 6.4 and KVM. As my network is a LAN at my faculty, with a DHCP server and a gateway for connecting to the internet, I initially defined the pod and guest address pools within the range of LAN addresses. But I wasn't able to create instances, getting this error:
2013-06-17 13:44:03,445 DEBUG [cloud.storage.StorageManagerImpl] (Job-Executor-1:job-9) Insufficient un-allocated capacity on: 200 for volume allocation: [Vol[3|vm=3|ROOT]] since its allocated percentage: 2.2468850974329963E7 has crossed the allocated pool.storage.allocated.capacity.disablethreshold: 0.85, skipping this pool
I guessed this was because of IP conflicts between the pool and the DHCP range. So I re-installed it (removed and re-installed cloud-agent and cloud-client), and this time defined a different subnet mask for the address pools (255.0.0.0) from the DHCP subnet (255.255.0.0). But now I'm getting this error during "Creating system VMs", after completing the configuration in the UI:
WARN [cloud.consoleproxy.ConsoleProxyManagerImpl] (consoleproxy-1:) Exception while trying to start console proxy
com.cloud.exception.AgentUnavailableException: Resource [Host:1] is unreachable: Host 1: Unable to start instance due to Unable to start VM[ConsoleProxy|v-2-VM] due to error in finalizeStart, not retrying
    at com.cloud.vm.VirtualMachineManagerImpl.advanceStart(VirtualMachineManagerImpl.java:847)
    at com.cloud.vm.VirtualMachineManagerImpl.start(VirtualMachineManagerImpl.java:472)
    at com.cloud.vm.VirtualMachineManagerImpl.start(VirtualMachineManagerImpl.java:465)
    at com.cloud.consoleproxy.ConsoleProxyManagerImpl.startProxy(ConsoleProxyManagerImpl.java:627)
    at com.cloud.consoleproxy.ConsoleProxyManagerImpl.allocCapacity(ConsoleProxyManagerImpl.java:1164)
    at com.cloud.consoleproxy.ConsoleProxyManagerImpl.expandPool(ConsoleProxyManagerImpl.java:1981)
    at com.cloud.consoleproxy.ConsoleProxyManagerImpl.expandPool(ConsoleProxyManagerImpl.java:173)
    at com.cloud.vm.SystemVmLoadScanner.loadScan(SystemVmLoadScanner.java:113)
    at com.cloud.vm.SystemVmLoadScanner.access$100(SystemVmLoadScanner.java:34)
    at com.cloud.vm.SystemVmLoadScanner$1.reallyRun(SystemVmLoadScanner.java:83)
    at com.cloud.vm.SystemVmLoadScanner$1.run(SystemVmLoadScanner.java:73)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:722)
Caused by: com.cloud.utils.exception.ExecutionException: Unable to start VM[ConsoleProxy|v-2-VM] due to error in finalizeStart, not retrying
    at com.cloud.vm.VirtualMachineManagerImpl.advanceStart(VirtualMachineManagerImpl.java:814)
    ... 19 more
Is the problem related to an IP conflict? If so, how do I solve it?
Thanks
You can create a nested hypervisor lab and deploy CloudStack. The easiest way is to use CloudStack with the open-source XenServer hypervisor.
The posts below show the same process step by step.
Part 2: http://www.cloudometry.in/2015/03/apache-cloudstack-implementation-step_29.html
Part 3: http://www.cloudometry.in/2015/03/apache-cloudstack-implementation-step_96.html
First, the last exception:
Some background is required to understand why CloudStack is trying to start a VM on your hypervisor. The VM in question is a system VM. System VMs are used by cloud platforms to distribute services across a cloud; CloudStack currently has three kinds: the secondary storage VM, the virtual router and the console proxy VM. The console proxy VM lets you view the virtual framebuffer of a VM; normally you would need access to the hypervisor itself for that, so instead a console proxy is run on the hypervisor. Note that system VMs and user VMs are started using the same code, so the exception suggests there is a general problem with creating VMs.
The specific details of the exception suggest that CloudStack cannot reach the agent on the KVM box. First, check that you can SSH from the CloudStack management server to the KVM hypervisor. Secondly, look at the outgoing connections from the KVM box: is there an established TCP connection from the agent on the KVM box to the CloudStack management server (by default on port 8250)? A quick connectivity sketch follows below.
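As a quick sanity check from the KVM host, a minimal sketch (the hostname "mgmt-server" and the default agent port 8250 are assumptions about your setup):
# check_agent_port.py - rough sketch; replace "mgmt-server" with your management server
import socket

try:
    # The CloudStack agent normally connects to the management server on TCP 8250
    with socket.create_connection(("mgmt-server", 8250), timeout=5):
        print("TCP 8250 is reachable from this KVM host")
except OSError as exc:
    print("cannot reach the management server on 8250:", exc)
Also make sure the agent service is actually running on the KVM host and check its agent.log (under /var/log/cloud or /var/log/cloudstack, depending on the version) for connection errors.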
