I am trying to create a Eucalyptus EBS-backed image from an Ubuntu cloud image. I have tried all sorts of different methods I have found on Google and nothing works. I have tried creating a new partition, labeling it cloudimg-rootfs, and verifying that both fstab and menu.lst look for that label. I have also tried simply dd'ing the image onto both the disk itself and a primary partition, plus several other methods. Nothing works. Does anyone actually know how to get a boot-from-EBS (bfEBS) image going on Eucalyptus?
Thanks!
Please refer to the following KB article:
Creating an Ubuntu EBS-backed EMI from an Existing Instance Store-Backed Instance
This explains how to create an EBS-backed Ubuntu image from an existing Ubuntu instance store-backed instance on Eucalyptus.
Please let us know if you have any questions.
Cheers,
It seems that snapshots and images are very similar (e.g. https://serverfault.com/questions/527449/why-does-openstack-distinguish-images-from-snapshots).
However, I've been unable to share snapshots globally (i.e. across all projects). Note that I'm a user of the OpenStack installation, not an administrator of it.
Assuming that images don't suffer the same limitation as snapshots, is there a procedure for converting a snapshot into an image? If not, maybe I should ask a separate question, but my cloud admin tells me it needs to be an image.
For download:
glance image-download <IMAGE_ID> --file <FILE>
Initially I tried this:
openstack image save --file NixOS.qcow2 5426bbcf-06b3-42f3-b117-37395e7dde83
However, the reported size of NixOS.qcow2 was always 0 bytes. Not good. The issue was apparently related to the fact that 0 bytes is also what OpenStack and Horizon reported for the size of the snapshot. So something weird was going on, but functionally I could still use the snapshot to create instances without issue.
I then created a volume from the snapshot in Horizon (with the instance shut off; I couldn't create a volume while it was shelved), then used this command to create an image from the newly created volume (NixSnapVol):
openstack image create --volume NixSnapVol NixSnapVol-img
Interestingly, the reported size went from 41 GB to 45 GB, so maybe that was part of the issue. Anyway, it seems to work now, and as a bonus the image is now raw instead of qcow2, so I don't need to do the conversion (our system benefits greatly from using raw since we have a Ceph backend).
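For reference, the end-to-end path from snapshot to downloadable image looked roughly like this. This is a sketch: the snapshot name (NixOS-snap) and the 45 GB size are placeholders from my setup, so adjust them to yours.
# create a volume from the snapshot (instance shut off, not shelved)
openstack volume create --snapshot NixOS-snap --size 45 NixSnapVol
# upload the volume as a new image (comes out as raw on a Ceph backend)
openstack image create --volume NixSnapVol NixSnapVol-img
# confirm the size is no longer 0 bytes, then download if needed
openstack image show NixSnapVol-img
openstack image save --file NixSnapVol.raw NixSnapVol-img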
I am trying to migrate the credentials from one Jenkins to another, but usernames/passwords are encrypted in ${JENKINS_HOME}/credentials.xml.
I found this answer, but the problem is that it doesn't explain where someone would find the encryption key in order to successfully migrate credentials.
Any help is greatly appreciated!
EDIT: More information: my ${JENKINS_HOME} is on a separate volume which I detach and re-attach onto the new VM, and it still doesn't work for me.
I found this analysis (link is dead as of June 2020, archived here) very helpful. In a nutshell:
Jenkins uses the master.key to encrypt the key hudson.util.Secret.
This key is then used to encrypt the passwords in credentials.xml.
When I need to bootstrap new Jenkins instances with some default passwords, I use a template directory tree that contains
secrets/hudson.util.Secret and
secrets/master.key
This works fine.
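As a concrete illustration, the bootstrap step amounts to copying those two files (plus the credentials.xml that was encrypted with them) into the new instance's home before first start. This is a sketch: the template/ directory is my own layout, and $JENKINS_HOME must point at the new instance's home.
# run this before the first start (or with Jenkins stopped)
mkdir -p "$JENKINS_HOME/secrets"
cp template/secrets/master.key "$JENKINS_HOME/secrets/"
cp template/secrets/hudson.util.Secret "$JENKINS_HOME/secrets/"
# any credentials.xml encrypted with these keys will now decrypt here
cp template/credentials.xml "$JENKINS_HOME/"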
Regarding the Jenkins migration, I recently ran into this situation, and after a few tests my workaround worked for me.
Here is what I did:
I moved the files and folders below from the source Jenkins to the target:
$JENKINS_HOME/secret.key
$JENKINS_HOME/secrets
$JENKINS_HOME/users
$JENKINS_HOME/credentials.xml
Please note: these files must not be moved:
$JENKINS_HOME/identity.key.enc
$JENKINS_HOME/secrets/org.jenkinsci.main.modules.instance_identity.InstanceIdentity.KEY
otherwise you will see the following error after starting Jenkins:
java.lang.AssertionError: InstanceIdentity is missing its singleton
Jenkins will automatically regenerate those two files. Once it has started, you should be good.
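Put together, the migration can be scripted roughly as below. This is a sketch under a few assumptions: the old home is mounted at /mnt/old-jenkins, the new home is /var/lib/jenkins, and the target Jenkins runs as the jenkins user via systemd; adjust paths and service handling to your setup.
sudo systemctl stop jenkins
# copy only what is needed for credentials to decrypt on the new instance
sudo cp /mnt/old-jenkins/secret.key /var/lib/jenkins/
sudo cp -r /mnt/old-jenkins/secrets /var/lib/jenkins/
sudo cp -r /mnt/old-jenkins/users /var/lib/jenkins/
sudo cp /mnt/old-jenkins/credentials.xml /var/lib/jenkins/
# drop the instance identity so Jenkins regenerates it on first start
sudo rm -f /var/lib/jenkins/identity.key.enc
sudo rm -f /var/lib/jenkins/secrets/org.jenkinsci.main.modules.instance_identity.InstanceIdentity.KEY
sudo chown -R jenkins:jenkins /var/lib/jenkins
sudo systemctl start jenkins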
I'm working on a project on OpenStack. I have installed OpenStack by creating two virtual machines, one for the controller node and the other for the compute node.
Actually, I want to test an example of live migration on OpenStack, and I have found a video which describes the approach. As the video shows, I need to have two compute nodes, and I want to know whether I can simply create a second compute node now, or whether that second compute node has to be created during the initial OpenStack installation.
This is the link of the video that I have watched: https://www.youtube.com/watch?v=_4vJUYFGbEM
Thank you
It doesn't matter when you add the compute nodes (during the install or later on). Please also remember that live migration piggybacks on the hypervisor, so depending on which hypervisor you use, this may or may not be possible.
Please look at http://docs.openstack.org/admin-guide/compute-configuring-migrations.html#section-configuring-compute-migrations to ensure that the migration capability exists.
It simply boils down to a few things:
The storage is not moved during a live migration, so if you have a VM with local instance storage you will need a shared file system such as NFS. If the instance is backed by a Cinder volume, you will be able to do the migration without shared storage.
The nova-compute service needs to be installed on the destination node.
The hypervisor version should be the same on both nodes.
I hope this clarifies things.
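Once the second compute node is registered, the migration itself is triggered from the controller, roughly as below. This is a sketch: compute-02 and myserver are placeholder names, and the exact flag spelling varies between openstackclient versions (older clients use --live <host> instead of --live-migration/--host).
# confirm both compute nodes are up
openstack compute service list --service nova-compute
# live-migrate the instance, letting the scheduler pick the target host
openstack server migrate --live-migration myserver
# or target a specific host (usually requires admin rights)
openstack server migrate --live-migration --host compute-02 myserver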
Either works. OpenStack allows you to dynamically add and remove compute nodes from a cloud environment.
Please refer to http://docs.openstack.org/admin-guide/compute-configuring-migrations.html for extra details.
Live migration of light instances can be done over the network without shared storage, but for heavy instances shared storage or a shared volume is preferred. Since you have two compute nodes, their Nova instance storage should be shared.
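A common way to share the Nova instance storage is to mount the same NFS export at Nova's instances path on both compute nodes. A sketch, assuming an NFS server already exports nfs-server:/srv/nova and that Nova uses the default /var/lib/nova/instances path:
# on each compute node
sudo apt-get install -y nfs-common
echo "nfs-server:/srv/nova /var/lib/nova/instances nfs defaults 0 0" | sudo tee -a /etc/fstab
sudo mount /var/lib/nova/instances
sudo chown nova:nova /var/lib/nova/instances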
Long answer made short, from my perspective:
You can add or remove a compute node at any time in an OpenStack installation.
To add one, follow the installation guide's steps for a new compute node, starting from the environment setup.
Also, don't forget to install the networking component on your new compute node.
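On an Ubuntu-based deployment, that boils down to roughly the following on the new node. This is a sketch following the installation guide; the package names assume the Linux bridge agent from the guide's networking option, and the configuration files are copied and adapted from an existing compute node.
# on the new compute node
sudo apt-get install -y nova-compute neutron-linuxbridge-agent
# copy nova.conf and the linuxbridge agent config from an existing compute node,
# adjust the local management IP, then restart the services
sudo systemctl restart nova-compute neutron-linuxbridge-agent
# verify from the controller that the new node has registered
openstack compute service list --service nova-compute
openstack network agent list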
I have been able to successfully install and start the Cloudera CDH5 server and manager, along with all the core components (HDFS, Hue, Hive, etc.). However, I recently deleted the temporary HDFS directory (/dfs/*) and then formatted the NameNode due to certain issues. Now I'm running into all sorts of new issues that I'm not able to solve.
Some of them are shown below:
The problem with Hue:
The problem with HDFS:
Any help would be highly appreciated.
Thanks in advance.
Edit: I have tried creating all those missing directories in both HDFS and the local filesystem, and have tried various owners for them, without success.
What helped me, and was the easiest solution, was deleting all of those services in the Cloudera web manager and adding them back.
I have a WordPress website up and running with many plugins installed and a huge database. I need to use chef-solo to create an environment in which I can install the same website with all its plugins and also import its database.
In other words, I need to use Chef to install the exact same website on a different server.
Now here are my questions:
1. I know we can use Chef to install WordPress, but can we set it up in such a way that we don't need to configure WordPress afterwards and everything is already set once it's running?
2. What to do with the plugins? Can we install them using Chef, or should that be done manually?
3. How about importing the database; can that be done with chef-solo as well?
4. The whole website is on git; can I somehow import the whole thing?
5. Are there any other issues I may face if I want to do that?
There is a WordPress cookbook openly available for Chef.
When you say configure, I take it you mean setting up data in the database. Assuming that you've separated the database instance from the server instance and you're attempting to scale up the number of servers, you should be able to skip the data setup. You should configure the new server instance (node) to point to the same database via Chef.
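For what it's worth, a minimal chef-solo run against the community wordpress cookbook looks roughly like this. It's a sketch: the cookbook path is a placeholder, and the node attributes you would add for the database connection depend on the cookbook version you vendor.
# solo.rb tells chef-solo where the vendored cookbooks live
cat > solo.rb <<'EOF'
cookbook_path ["/var/chef/cookbooks"]
EOF
# node.json selects the run list (and would also carry DB connection attributes)
cat > node.json <<'EOF'
{
  "run_list": ["recipe[wordpress]"]
}
EOF
sudo chef-solo -c solo.rb -j node.json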
I stumbled onto this question while looking for the answer to it myself. From what I can tell, the start may be here.
It's kind of hand-wavy, but this should enable you to do some WordPress tasks via the command line with Chef, rather than the point-and-click that WordPress prefers.
As per #1, you should not need to import the database. If the database goes down, you'll want to focus on that as a separate but connected recipe, since then you'll want to be taking snapshots and uploading them somewhere like S3 via a cron job. I believe there are plugins that can enable this.
You'll have to be a little clearer about what you mean by "import". If it's in a code base, you may be able to shortcut your cookbook path by pulling the git repo down onto the host. You may want to look at git archive.
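A sketch of that idea (the repository URL and target directory are placeholders, and git archive --remote only works if the remote allows the upload-archive service):
# export the current state of the repo straight into the web root, with no .git directory
git archive --remote=git@example.com:site/wordpress.git --format=tar HEAD | sudo tar -x -C /var/www/html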
Other issues that I'm looking at are images. We're migrating from a hosted solution to AWS, and it appears that instead of storing the images in the database, WordPress pulls them into a local directory. This means that if we scale to more than one host, we'll have issues with images. Something to think about; there's a wealth of plugins that can probably solve this.
Hope this is helpful,
Ben