Launch an instance from a non-glance image in OpenStack

I have VM images stored locally on my system. Is there any way I could launch instances from them without uploading them to Glance or using them as Cinder volumes?

First, check the type of the existing VM image with qemu-img info {image_path}, then create a Cinder volume whose size matches the image's virtual (RAW) size, which qemu-img info reports.
Second, if the image is already in RAW format, run:
dd if={image_path} of={volume_path-volume_id}
If the image is in any other format, run:
qemu-img convert -O raw {image_path} {volume_path-volume_id}
Third, set the bootable flag on this volume (via the command line or Horizon).
Fourth, boot the instance from this volume.
{image_path} is the path to the VM image on the file system, e.g.:
/tmp/images/my-vm-image.iso
{volume_path-volume_id} is the path to the Cinder volume on the file system, e.g.:
/dev/mapper/data-volume--blabla--cinder--volume--id for LVM, or /mnt/nfs/volume-blabla-cinder-volume-id for NFS.
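Put together, the workflow might look like this; a sketch assuming the unified openstack client, with the image path, volume name, size, and flavor as placeholders:
qemu-img info /tmp/images/my-vm-image.qcow2          # note the virtual size
openstack volume create --size 10 my-boot-volume     # at least the virtual size, in GB
qemu-img convert -O raw /tmp/images/my-vm-image.qcow2 /dev/mapper/data-volume--blabla--cinder--volume--id
openstack volume set --bootable my-boot-volume
openstack server create --flavor m1.small --volume my-boot-volume my-instance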

Related

Error reading data into Spark using sparklyr::spark_read_csv

I'm running Spark in 'standalone' mode on a local machine in Docker containers. I have a master and two workers, each is running in its own Docker container. In each of the containers the path /opt/spark-data is mapped to the same local directory on the host.
I'm connecting to the Spark master from R using sparklyr, and I can do a few things, for example, loading data into Spark using sparklyr::copy_to.
However, I cannot get sparklyr::spark_read_csv to work. The data I'm trying to load is in the local directory that is mapped into the containers. When attaching to the running containers, I can see that the file I'm trying to load exists in each of the 3 containers, at the container-local path /opt/spark-data.
Here is an example of the code I'm using:
xx_csv <- spark_read_csv(
  sc,
  name = "xx1_csv",
  path = "file:///opt/spark-data/data-csv"
)
data-csv is a directory containing a single CSV file. I've also tried specifying the full path, including the file name.
When I'm calling the above code, I'm getting an exception:
Error: org.apache.spark.sql.AnalysisException: Path does not exist: file:/opt/spark-data/data-csv;
I've also tried with different numbers of / in the path argument, but to no avail.
The documentation for spark_read_csv says that
path: The path to the file. Needs to be accessible from the cluster. Supports the "hdfs://", "s3a://" and "file://" protocols.
My naive expectation is that if, when attaching to the container, I can see the file in the container file system, it means that it is "accessible from the cluster", so I don't understand why I'm getting the error. All the directories and files in the path are owned by root and have read permissions for all.
What am I missing?
Try it without the "file://" prefix (e.g. path = "/opt/spark-data/data-csv"), and with \\ as the path separator if you are a Windows user.
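Before changing the path, it may also help to re-confirm that the directory is visible inside every container; a quick check, assuming hypothetical container names spark-master, spark-worker-1 and spark-worker-2:
for c in spark-master spark-worker-1 spark-worker-2; do
  docker exec "$c" ls -l /opt/spark-data/data-csv
done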

CREATE_FAILED: Flavor's local disks conflict. You need to create a bootable volume to use this flavor for your instance

While trying to create a stack, I get the following error:
[kypo-proxy-jump-stack]: CREATE_FAILED Resource CREATE failed:
Forbidden: resources.kypo-proxy-jump: Flavor's local disks conflict.
You need to create a bootable volume to use this flavor for your instance. (HTTP 403)
I already tried booting the instance from an image and attaching a non-bootable volume, as described in the link below: https://docs.openstack.org/ocata/user-guide/cli-nova-launch-instance-from-volume.html
but it didn't work.
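For reference, the boot-from-volume flow the error asks for looks roughly like this with the unified openstack client (the image ID, size, flavor, and names are placeholders); in a Heat template, the same arrangement is usually expressed with block_device_mapping_v2 on the server resource:
openstack volume create --image <image-id> --size 20 --bootable kypo-boot-volume
openstack server create --flavor <flavor> --volume kypo-boot-volume kypo-proxy-jump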

Cloudstack: Failed to attach volume Data to VM Nas12U41

Failed to attach volume Data to VM Nas12U41; org.libvirt.LibvirtException: internal error: unable to execute QEMU command '__com.redhat_drive_add': could not open disk image /mnt/3c164f13-17f2-3edf-b836-74299f20a559/65bcbd35-4fc5-4714-af04-4712a6a7f0e7: qcow2: Image is corrupt; cannot be opened read/write
You may want to check that the storage is healthy and that the file exists and is intact. Depending on the filesystem and primary storage you're using, you may run fsck or an equivalent to recover the corrupted file. You may also want to check file permissions. Can you share your CloudStack version, the output of "virsh version" and of "qemu-img info" on the qcow2 file, and the KVM host distro?
To discuss further, please join the CloudStack users mailing list and ask there: http://cloudstack.apache.org/mailing-lists.html
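Concretely, the image checks might look like this (the path comes from the error message above; qemu-img check -r all can attempt a repair, but copy the file somewhere safe first):
qemu-img info /mnt/3c164f13-17f2-3edf-b836-74299f20a559/65bcbd35-4fc5-4714-af04-4712a6a7f0e7
qemu-img check /mnt/3c164f13-17f2-3edf-b836-74299f20a559/65bcbd35-4fc5-4714-af04-4712a6a7f0e7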

Systrace - error truncating /sys/kernel/debug/tracing/set_ftrace_filter: No such device (19) unable to start

I am currently working on a project that aims to find out what the system is doing behind a series of user interactions on the Android UI. For example, if a user clicks the send button in Facebook Messenger and the measured response time for that action is 1.2 seconds, my goal is to figure out what the 1.2 seconds consist of. My friend suggested that I take a look at 'Systrace'.
However, when I tried systrace on my HTC One M8, I encountered some problems:
First, error opening /sys/kernel/debug/tracing/options/overwrite - no such file or directory. I solved this by building kernel support for ftrace, following http://opensourceforu.com/2010/11/kernel-tracing-with-ftrace-part-1/, and running mount -t debugfs none /sys/kernel/debug; after that I could find the tracing directory. I also set ro.debuggable=1 in the default.prop file within the ramdisk and flashed the boot.img to my phone.
Now I encounter another problem: when I run python systrace.py --time=10 -o mynewtrace.html sched gfx view wm, the following error pops up: error truncating /sys/kernel/debug/tracing/set_ftrace_filter: No such device (19). I don't know whether the way I built kernel support for systrace is incorrect or something is missing.
Could anyone help me out with this problem, please?
I think I have worked out the solution. My environment is Ubuntu 16.04 + HTC One M8. The steps are as follows:
Open a terminal and run: adb shell
Then run (1) su and (2) mount -t debugfs none /sys/kernel/debug. Now you should be able to see many directories under /sys/kernel/debug/. (You may cd into /sys/kernel/debug to confirm this.)
Open a new terminal and run dd if=/dev/block/platform/msm_sdcc.1/by-name/boot of=/sdcard/boot.img to dump the boot.img kernel image from your device.
Use AndroidImageKitchen to unpack the boot.img and find default.prop within the Ramdisk folder. Change ro.debuggable=0 to ro.debuggable=1, then repack the boot.img and flash it to your device (see the sketch after these steps).
Once the device boots, enter adb root in a terminal; a message like restarting adbd as root may pop up. Disconnect the USB cable and connect it again.
cd to the systrace folder, e.g. ~/androidSDK/platform-tools/systrace, and run:
python systrace.py --time=10 -o mynewtrace.html sched gfx view wm
Now you should be able to generate your own systrace files.
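For the repack-and-flash step above, a minimal sketch (it assumes the bootloader is unlocked and the device can enter fastboot mode):
adb pull /sdcard/boot.img .
# unpack with AndroidImageKitchen, set ro.debuggable=1 in default.prop, repack
adb reboot bootloader
fastboot flash boot boot.img
fastboot reboot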

openstack glance unknown command

Every image-related command I use ends up with an error:
glance image-create --name='Ubuntu 12.04 x86_64 Server' --disk-format=qcow2 --container-format=bare --public < precise-server-cloudimg-amd64-disk1.img
Usage: glance [options] [args]
Commands:
help <command> Output help for one of the commands below
add Adds a new image to Glance
update Updates an image's metadata in Glance
delete Deletes an image from Glance
index Return brief information about images in Glance
details Return detailed information about images in
Glance
show Show detailed information about an image in
Glance
clear Removes all images and metadata from Glance
Member Commands:
image-members List members an image is shared with
member-images List images shared with a member
member-add Grants a member access to an image
member-delete Revokes a member's access to an image
members-replace Replaces all membership for an image
glance: error: no such option: --name
I cut the above command down and got this:
glance image-create
Usage: glance [options] [args]
Commands:
help <command> Output help for one of the commands below
add Adds a new image to Glance
update Updates an image's metadata in Glance
delete Deletes an image from Glance
index Return brief information about images in Glance
details Return detailed information about images in
Glance
show Show detailed information about an image in
Glance
clear Removes all images and metadata from Glance
Member Commands:
image-members List members an image is shared with
member-images List images shared with a member
member-add Grants a member access to an image
member-delete Revokes a member's access to an image
members-replace Replaces all membership for an image
Unknown command: image-create
I can't figure it out.
NOTE: I'm using a VM running Ubuntu Precise and python-glanceclient is no longer available for it.
For your first command, you are using the '=' character, which is not required. Note the glance image-create command help:
usage: glance image-create [--id <IMAGE_ID>] [--name <NAME>] [--store <STORE>]
[--disk-format <DISK_FORMAT>]
[--container-format <CONTAINER_FORMAT>]
[--owner <TENANT_ID>] [--size <SIZE>]
[--min-disk <DISK_GB>] [--min-ram <DISK_RAM>]
[--location <IMAGE_URL>] [--file <FILE>]
[--checksum <CHECKSUM>] [--copy-from <IMAGE_URL>]
[--is-public {True,False}]
[--is-protected {True,False}]
[--property <key=value>] [--human-readable]
[--progress]
The second command should be valid; the Unknown command: image-create output suggests the installed client is simply too old to know it. You may want to reinstall the glance client; try installing it with pip:
sudo pip install python-glanceclient
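Once a newer python-glanceclient is installed, the original upload should be accepted. Note that, per the help above, the flag for a public image is --is-public True rather than --public:
glance image-create --name 'Ubuntu 12.04 x86_64 Server' --disk-format qcow2 --container-format bare --is-public True < precise-server-cloudimg-amd64-disk1.img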
