OpenNebula VM not persisting network config - networking

I've created a VM with a VNET attached in OpenNebula. After a while I changed the parameters of the VNET, but those changes do not persist on the VM after my (physical) host is restarted.
I've changed the /var/lib/one/vms/{$VM_ID}/context.sh file, but still no luck persisting the changes.
Do you know what it could be?
I'm using OpenNebula with KVM on a Debian 8 host.

After a while I figured out how to do this myself.
It seems that when the VM is started, the file /var/lib/one/datastores/0/$VM_ID/disk.1 is attached as /dev/sr0.
During the boot process, /usr/sbin/one-contextd mounts this image and uses the variables inside it; they usually look like this:
DISK_ID='1'
ETH0_IP='192.168.168.217'
ETH0_MAC='02:00:c0:a8:a8:d9'
ETH0_DNS='192.168.168.217'
ETH0_GATEWAY='192.168.168.254'
This information is used to export environment variables (the exported variables can be found in /tmp/one_env), which the script /etc/one-context.d/00-network uses to set the network configuration.
OpenNebula doesn't provide a simple way of replacing this configuration after the VM is created, but you can do the following:
Edit /var/lib/one/datastores/0/$VM_ID/disk.1 and make the required changes
Restart the OpenNebula service
Restart the VM
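A minimal sketch of the edit itself, run against a throwaway copy of context.sh with the variables shown above. On the real host, disk.1 is an ISO image, so you would first extract context.sh from it (e.g. with isoinfo), apply the change, and repack it with genisoimage before restarting the VM; those tool names are assumptions about your setup, not something OpenNebula prescribes.

```shell
# Work on a throwaway copy of context.sh containing the variables shown above
workdir=$(mktemp -d)
cat > "$workdir/context.sh" <<'EOF'
DISK_ID='1'
ETH0_IP='192.168.168.217'
ETH0_MAC='02:00:c0:a8:a8:d9'
ETH0_DNS='192.168.168.217'
ETH0_GATEWAY='192.168.168.254'
EOF
# Point the VM at a new address; on the next boot, one-contextd exports
# whatever this file contains
sed -i "s/^ETH0_IP=.*/ETH0_IP='192.168.168.50'/" "$workdir/context.sh"
grep '^ETH0_IP=' "$workdir/context.sh"
# prints ETH0_IP='192.168.168.50'
```

The other ETH0_* variables are left untouched, so only the address changes on the next boot.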
Hope this is useful to someone :)

Yes, the issue is that this functionality is not supported in current versions of OpenNebula. This will be supported in the upcoming 5.0 version.

You can power off the VM and change most of the parameters (but not network parameters, as they are linked to a VNET) in the Conf tab of the VM.
For a network-specific change only, you can simply log in to the VM and move the file /etc/one-context.d/00-network somewhere else; your changes to the VM's network configuration then won't be overwritten by the network context script.

How to manage multiple Symfony projects on a development computer

I've seen some posts, including How to manage multiple backend stacks for development?, but nothing related to using LXC for a stable, safe and separate development environment that matches the production environment, regardless of the desktop and/or Linux distribution.
Before the symfony CLI was released, there was a feature that allowed specifying a socket via ip:port, which made it possible to use different names in /etc/hosts on the 127.0.0.0/8 loopback network. I could always use "bin/console server:start -p:myproject:8000", and I knew that by browsing to http://myproject:8000 (specified in /etc/hosts) I could access my project and keep the sessions, etc.
The symfony CLI, as far as I've tried, doesn't allow this. Reading the docs, there's a built-in proxy in the symfony CLI, but although I've set a couple of projects to use it in the container, clicking on the list doesn't open the project (with the .wip suffix) and issues an error about proxy redirections. If I browse to the container's IP and port, it works perfectly, but the port is something that can change with every reboot of the container.
If there's nothing that can be set on the proxy side to solve this scenario, I'd ask for the socket feature that existed previously to be brought back, so I can manage this situation as I used to.
Thanks in advance.
I think I've finally found a good solution. I've created an issue to improve the part that didn't seem to work, so I'll try to explain for whoever might be interested.
I've set up the proxy server built into the symfony CLI, but instead of running it with the defaults, I had to specify --host=proxyhost (resolvable from the host) and set proxy exceptions for .com, .org, .net, .tv, etc. Together with attaching a name to every project (running symfony proxy:domain:attach myproject from inside the project directory), I can go to http://myproject.wip just as I would to http://proxyhost:portX, no matter which port portX is.
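The workaround above boils down to two commands; this is a sketch, not runnable outside a host with the symfony CLI installed, and proxyhost is an example name that you must make resolvable (e.g. via an /etc/hosts entry) yourself:

```shell
# Start the local proxy under a name resolvable from the host
# (--host is the option mentioned above; proxy exceptions for real
# TLDs are configured separately in the proxy settings)
symfony proxy:start --host=proxyhost

# From inside each project directory, attach a stable domain;
# the project then answers at http://myproject.wip through the proxy,
# regardless of which port it was assigned
symfony proxy:domain:attach myproject
```

The point of the design is that the proxy, not the project server, owns the stable name, so per-project ports may change on every container reboot without breaking bookmarks or sessions.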

Is any extra set up needed to run vscode-r in remote SSH?

I've been using the session watcher feature in vscode-R, and it works great locally. I was wondering what sort of special configuration is needed to get it working in a remote environment?
If I just use VSCode's instructions to connect to a remote host using the Remote Extension, I can get a remote terminal and start radian, but I don't see anything in the task bar indicating the R session it's attached to (unlike the local version, which works just like in the documentation). None of the features (e.g. showing plots, documentation, variable completion, etc.) work.
Do I need any extra set up in the remote machine? Let me know if more information is needed. Thanks!

Saving instances on openstack

I have installed OpenStack via a DevStack environment, and I am finding it difficult to save my work after a host reboot. However, if I install OpenStack component by component, will it help me in any way to save my work after a host reboot, and are there any extra benefits of installing OpenStack component by component?
Installing OpenStack component by component would certainly enhance your end-to-end understanding of how the services interact with each other. DevStack is an all-in-one-place sort of installation. For better understanding, I'd recommend installing each component manually following the OpenStack documentation.
The reason the VM data is lost after a reboot is that you launched the VM with an ephemeral disk, which is gone after the VM reboots.
Try creating the instance with a root disk instead; then you will have a permanent disk.
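With the openstack CLI, a volume-backed instance can be sketched as below; this needs a running cloud to execute, and the image, flavor and network names are examples from a typical DevStack setup, not fixed values:

```shell
# Create a bootable Cinder volume from an image; the root disk then lives
# in the volume service rather than on the hypervisor's ephemeral storage
openstack volume create --image cirros-0.6.2-x86_64-disk --size 10 myroot

# Boot the instance from that volume; its root disk now survives
# instance reboots and host restarts
openstack server create --volume myroot --flavor m1.small \
    --network private myserver
```

By contrast, an instance booted directly from the image gets an ephemeral root disk, which is exactly the data-loss case described above.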

How to configure WebSphere administration console for two simultaneous logins?

I have two WebSphere 8.0 environments set up: test and production. When I connect to the WebSphere Integrated Solutions Console of the test or production environment, I do this via sub.domain.com:port/ibm/console... The difference between accessing the production and test environment consoles is the port number in the URL.
My problem is that if I, for example, log into the production environment while already logged into the test environment, that works, but when I want to switch back to the test console and continue working there, I'm prompted to log in again.
I think the problem lies with the cookies and the session. So is there a way to tweak this? I didn't find anything useful in the documentation or on the web... Any reading recommendations? Even just a hint in the right direction :)
Yes, the cookies get confused, since the only difference between the two consoles is the port, and browser cookies are scoped by hostname, not by port.
I use either of the following tricks (depending on the environment I'm working with):
Use a different browser for each environment (e.g. Firefox for prod and IE for dev)
Access one environment via hostname, the other via IP
Create a few virtual host names (aliases) in your local etc/hosts file and access each environment via a different hostname.
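For the third trick, the hosts file entries could look like this (192.0.2.10 stands in for your WebSphere server's real IP; the alias names are made up):

```
# /etc/hosts (or C:\Windows\System32\drivers\etc\hosts on Windows)
192.0.2.10   was-test
192.0.2.10   was-prod
```

Since cookies are keyed on the hostname, was-test:PORT1 and was-prod:PORT2 each keep their own session cookies even though both names resolve to the same machine, so the two console logins no longer evict each other.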

Not able to create instances on CloudStack

I have created a Zone, Pod and Cluster on CloudStack. I have also added a host to the Cluster, added Primary Storage and Secondary Storage. But under System VMs, nothing is listed. Also, the message "No running ssvm is found, so command will be sent to LocalHostEndPoint" appears in the logs.
From this I deduced that the template is not being added, and consequently Instances can't be created, as Instances use templates to install the OS in VMs.
Can anybody please help to point out and sort out the problem that may be the cause here?
You need to manually install the "system VM" templates. These are the images for the worker VMs that CloudStack deploys to run system services. The SSVM is one example of a system VM; it is responsible for copying templates to secondary storage.
See Prepare the System VM Template in the installation guide.
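On KVM, that guide step is typically performed with the cloud-install-sys-tmplt script; a sketch only, since it must run on the management server, the /mnt/secondary mount point is an example, and the template URL must match your CloudStack version:

```shell
# /mnt/secondary is an example NFS mount of your secondary storage;
# replace the URL with the system VM template for your release
/usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt \
    -m /mnt/secondary \
    -u http://download.cloudstack.org/systemvm/4.11/systemvmtemplate-4.11.3-kvm.qcow2.bz2 \
    -h kvm -F
```

Once the template is seeded and the management server notices it, the SSVM and console proxy VM should appear under System VMs, and the "No running ssvm is found" message should stop.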