Saving instances on OpenStack

I have installed OpenStack using the DevStack environment, and I am finding it difficult to save my work after a host reboot. If I instead install OpenStack component by component, will that help me preserve my work after a host reboot, and are there any extra benefits of installing OpenStack component-wise?

Installing OpenStack component by component would certainly enhance your end-to-end understanding of how the services interact with each other. DevStack is an all-in-one style of installation. For better understanding, I'd recommend installing each component manually following the OpenStack documentation.

The VM data is lost after a reboot because you launched the VM with an ephemeral disk, which is gone after the VM reboots.
Try to create the instance with a volume-backed root disk instead; then you will have a permanent disk.
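For example, with the openstack CLI you could boot from a Cinder volume so the root disk outlives the instance; a minimal sketch, assuming a CirrOS image and the usual demo flavor and network names:

# Create a 10 GB bootable volume from an image (names are examples)
openstack volume create --image cirros --size 10 myvol
# Boot the instance from that volume; the root disk now persists in Cinder
openstack server create --volume myvol --flavor m1.small --network private myserver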

Related

How to keep persistent volumes in sync between clusters?

I'm trying to get an installation of Wordpress running in Kubernetes, as well as have an option of running the same configuration locally in minikube. I want to use the standard Docker image of Wordpress: https://hub.docker.com/_/wordpress/.
I'm having trouble with making sure that the plugins and templates are in sync though. The Docker container exposes a Volume at /var/www/html. Wordpress installation, as well as my plugins will live there.
Assuming I do the development on Minikube, along with the installation of plugins etc., how do I handle moving the Persistent Volumes between my local cluster and the target cluster? Should I just reinstall Wordpress every time the Pod is scaled?
You can follow the Writing Portable Configuration guide (https://kubernetes.io/docs/concepts/storage/persistent-volumes/#writing-portable-configuration) for persistent volumes if you are planning to migrate to a different cluster.
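As a rough illustration of that guide's advice: put a PersistentVolumeClaim (not a PersistentVolume) in the config you carry between clusters, and let each cluster's default StorageClass satisfy it. The name and size below are placeholders:

# Apply the same claim on minikube and on the target cluster;
# each cluster binds it to whatever storage it provides by default.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wordpress-html
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
EOF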
In a real production scenario you would want to use a standard tool to back up and migrate persistent volumes between clusters. Velero is one such tool that enables you to achieve that.
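A sketch of what that migration could look like with Velero (the backup and namespace names are made up; Velero must already be installed in both clusters and pointed at object storage reachable from both):

# On the source cluster: back up the namespace, including PV data
velero backup create wordpress-backup --include-namespaces wordpress
# On the target cluster, configured with the same backup storage location:
velero restore create --from-backup wordpress-backup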

OpenNebula VM not persisting network config

I've created a VM with a VNET attached on OpenNebula. After a while I changed the params of the VNET, but those changes do not persist on the VM after my (physical) host is restarted.
I’ve changed the /var/lib/one/vms/{$VM_ID}/context.sh file but still no luck persisting the changes.
Do you know what it could be?
I'm using OpenNebula with KVM on a Debian8 host.
After a while I figured out how to do this myself.
It seems that when the VM is started, the file /var/lib/one/datastores/0/$VM_ID/disk.1 is attached as /dev/sr0.
During the boot process, /usr/sbin/one-contextd mounts this unit and uses the variables inside it; they usually look like this:
DISK_ID='1'
ETH0_IP='192.168.168.217'
ETH0_MAC='02:00:c0:a8:a8:d9'
ETH0_DNS='192.168.168.217'
ETH0_GATEWAY='192.168.168.254'
This info is used to export environment variables (the exported variables can be found in /tmp/one_env), which are used by the script /etc/one-context.d/00-network to set the network configuration.
OpenNebula doesn't provide a simple way of replacing these configs after the VM is created, but you can do the following (sketched as commands after the list):
Edit /var/lib/one/datastores/0/$VM_ID/disk.1 and make the required changes
Restart the opennebula service
Restart the VM
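As shell commands, that would be roughly the following (the datastore path, service name, and editor are assumptions; adjust for your install):

# 1. Edit the context image attached to the VM and make the changes
vi /var/lib/one/datastores/0/$VM_ID/disk.1
# 2. Restart the OpenNebula service
systemctl restart opennebula
# 3. Restart the VM so the new context is picked up
onevm reboot $VM_ID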
Hope this is useful to someone :)
Yes, the issue is that this functionality is not supported in current versions of OpenNebula. This will be supported in the upcoming 5.0 version.
You can power off the VM and change most of the parameters (not network parameters, as they are linked to a VNET) in the conf tab of the VM.
For a network-specific change only, you can simply log in to the VM and mv the file /etc/one-context.d/00-network to some other place, and your changes to the VM's network configuration won't be overwritten by the network context script.

Live migration on OpenStack

I'm working on a project on OpenStack. I have installed OpenStack by creating two virtual machines, one for the controller node and the other for the compute node.
Actually, I want to test an example of live migration on OpenStack, and I have found a video which describes the approach. As the video shows, I need to have 2 compute nodes, and I want to know if I can just create a second compute node now, or whether this second compute node should have been created during the installation of OpenStack.
This is the link of the video that I have watched: https://www.youtube.com/watch?v=_4vJUYFGbEM
Thank you
It doesn't matter when you add the compute nodes (during the install or later on). Please also remember that live migration piggybacks on the hypervisor, so depending on which hypervisor one uses, this may or may not be possible.
Please look at http://docs.openstack.org/admin-guide/compute-configuring-migrations.html#section-configuring-compute-migrations to ensure that the migration capability exists.
It simply boils down to a few things (a command sketch follows the list):
The storage is not moved during a live migration, so if you have a VM with instance (local) storage, you will need a shared file system like NFS or similar. If the instance is backed by a Cinder volume, you will be able to do the migration without shared storage.
The Nova-Compute application needs to be installed on the destination host.
The hypervisor version should be the same.
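If those conditions are met, the migration itself is a single command; a sketch using the legacy nova client (the instance and host names are examples):

# Pick a target host
nova hypervisor-list
# Live-migrate the instance; add --block-migrate if there is no shared storage
nova live-migration myvm compute2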
I hope this clarifies.
Either works. OpenStack allows you to dynamically add and remove compute nodes from a cloud environment.
Please refer to http://docs.openstack.org/admin-guide/compute-configuring-migrations.html for extra details.
Live migration of light instances can be done over the network without shared storage, but for heavy instances shared storage or a shared volume is preferred. Since, as you mentioned, you have two compute nodes, their Nova storage should be shared storage.
Long answer short, from my perspective:
You can add/remove a compute node at any time in an OpenStack installation.
To add a compute node, follow the installation guide for setting up a new compute node right from the environment setup.
Also, don't forget to install the networking part on your new compute node (a rough outline follows).
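A rough outline of adding a compute node on an Ubuntu-based install (package names and steps vary by release; follow the installation guide for your version):

# On the new node: install the compute service and a networking agent
apt-get install nova-compute neutron-linuxbridge-agent
# Configure /etc/nova/nova.conf and the agent per the install guide, then:
service nova-compute restart
# On the controller: verify the new node has registered
openstack compute service list --service nova-compute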

use the Julia language without an internet connection (mirror?)

Problem:
I would like to make julia available for our developers on our corporate network, which has no internet access at all (no proxy), due to sensitive data.
As far as I understand, Julia is designed to use GitHub.
For instance julia> Pkg.init() tries to access:
git://github.com/JuliaLang/METADATA.jl
Example:
I solved this problem for R by creating a local CRAN repository (rsync) and setting up a local webserver.
I also solved this problem for python the same way by creating a local PyPi repository (bandersnatch) + webserver.
Question:
Is there a way to create a local repository for metadata and packages for julia?
Thank you in advance.
Roman
Yes, one of the benefits from using the Julia package manager is that you should be able to fork METADATA and host it anywhere you'd like (and keep a branch where you can actually check new packages before allowing your clients to update). You might be one of the first people to actually set up such a system, so expect that you will need to submit some issues (or better yet; pull requests) in order to get everything working smoothly.
See the extra arguments to Pkg.init() where you specify the METADATA repo URL.
If you want a simpler solution to manage, I would also think about having a two-tier setup where you install packages on one system (connected to the internet) and then copy the resulting ~/.julia directory to the restricted system. If the packages you use have binary dependencies, you might run into problems if you don't have similar systems on both sides, or if some of the dependencies are installed globally, but Pkg.build("Pkgname") might be helpful.
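For the first suggestion, pointing the old (v0.x) package manager at an internal METADATA fork might look like this; the URL is a placeholder for wherever you host the fork:

# Initialize ~/.julia against an internal METADATA mirror instead of GitHub
julia -e 'Pkg.init("https://git.internal.example/JuliaLang/METADATA.jl.git")'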
This is how I solved it (for now), using the second suggestion by ivarne. I use a two-tier setup: two networks, one connected to the internet (office network), one air-gapped (development network).
System information: openSuSE-13.1 (both networks), julia-0.3.5 (both networks)
Tier one (office network)
installed julia on an NFS share, /sharename/local/julia.
soft linked /sharename/local/bin/julia to /sharename/local/julia/bin/julia
appended /sharename/local/bin/ to $PATH using a script in /etc/profile.d/scriptname.sh
created /etc/gitconfig on all office network machines with [url "https://"] insteadOf = git:// (to solve proxy server problems with GitHub; see the one-liner below)
now every user on the office network can simply run julia
Pkg.add("PackageName") is then used to install various packages.
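The gitconfig rewrite mentioned above can be set with a single command on each machine:

# Rewrite git:// URLs to https:// so they can pass through the proxy
git config --system url."https://".insteadOf git://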
The two networks are connected periodically (with certain security measures ssh, firewall, routing) for automated data exchange for a short period of time.
Tier two (development network)
installed julia on NFS share equal to tier one.
When the networks are connected, I use a shell script with rsync -avz --delete to synchronize the .julia directory from tier one to tier two for every user (sketched below).
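The synchronization script is roughly the following; the user list and host name are placeholders for my actual setup:

#!/bin/bash
# Mirror each user's package directory from the office share to the dev share
for user in alice bob; do
    rsync -avz --delete "/sharename/home/$user/.julia/" \
        "devhost:/sharename/home/$user/.julia/"
done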
Conclusion (so far):
It seems to work reasonably well.
As ivarne suggested, there are problems if a package is installed AND something more than just file copying is done (compilation?) on tier one; the package won't run on tier two. But this can be resolved with Pkg.build("Pkgname").
PackageCompiler.jl seems like the best tool for using modern Julia (v1.8) on secure systems. The following approach requires a build server with the same architecture as the deployment server, something your institution probably already uses for developing containers, etc.
Build a sysimage with PackageCompiler's create_sysimage() (a sketch follows the script below)
Upload the build (sysimage and depot) along with the Julia binaries to the secure system
Alias a script to julia, similar to the following example:
#!/bin/bash
set -Eeu -o pipefail
# Use only the bundled depot and project, not any user defaults
unset JULIA_LOAD_PATH
export JULIA_PROJECT=/Path/To/Project
export JULIA_DEPOT_PATH=/Path/To/Depot
# Keep Pkg from attempting any network access
export JULIA_PKG_OFFLINE=true
# Launch Julia with the prebuilt sysimage, forwarding all arguments
/Path/To/julia -J/Path/To/sysimage.so "$@"
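For step 1, building the sysimage on the connected build server might look like this; the packages baked into the image are placeholders:

# Run from the project environment on the build server
julia --project=/Path/To/Project -e '
    using PackageCompiler
    create_sysimage(["DataFrames", "CSV"]; sysimage_path="sysimage.so")'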
I've been able to run a research pipeline on my institution's secure system, for which there is a public version of the approach.

Not able to create instances on CloudStack

I have created a Zone, Pod and Cluster on CloudStack. I have also added a host to the Cluster, and added Primary Storage and Secondary Storage. But under System VMs, nothing is listed. Also, the logs show the message "No running ssvm is found, so command will be sent to LocalHostEndPoint".
From this I deduced that the template is not being added, and consequently Instances can't be created, as Instances use templates to provide the OS for VMs.
Can anybody please help point out and sort out what the problem may be here?
You need to manually install the "system VM" templates. These are the images for worker VMs that CloudStack deploys to run system services. SSVM is an example of a SystemVM. It is responsible for copying templates to secondary storage.
See Prepare the System VM Template in the installation guide.
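On a KVM setup this typically means running the bundled seeding script against your mounted secondary storage; the paths and template URL below are examples, so check the guide for the URL matching your CloudStack release:

# Run on the management server, with secondary storage mounted at /mnt/secondary
/usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt \
    -m /mnt/secondary \
    -u http://download.cloudstack.org/systemvm/4.11/systemvmtemplate-4.11.3-kvm.qcow2.bz2 \
    -h kvm -F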
