CREATE_FAILED : Flavor's local disks conflict. You need to create a bootable volume to use this flavor for your instance - openstack

As I'm trying to create a stack, I get the following error:
[kypo-proxy-jump-stack]: CREATE_FAILED Resource CREATE failed:
Forbidden: resources.kypo-proxy-jump: Flavor's local disks conflict.
You need to create a bootable volume to use this flavor for your instance. (HTTP 403)
I already tried booting the instance from an image and attaching a non-bootable volume, as described in the link below: https://docs.openstack.org/ocata/user-guide/cli-nova-launch-instance-from-volume.html
but it didn't work.
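For context, this error typically means the chosen flavor defines a 0-sized local root disk, so Nova refuses to provision local storage and the instance must boot from a Cinder volume instead. In a Heat template this can be sketched with block_device_mapping_v2 (a minimal sketch, not your full template; the image name and volume size below are placeholder assumptions):

```yaml
resources:
  kypo-proxy-jump:
    type: OS::Nova::Server
    properties:
      flavor: <flavor-with-zero-local-disk>
      # Instead of an 'image' property, boot from a volume created from the image:
      block_device_mapping_v2:
        - image: <glance-image-name-or-id>
          volume_size: 20            # GB; pick a size >= the image's minimum disk
          boot_index: 0
          delete_on_termination: true
```

With boot_index 0 and no image property on the server itself, Nova creates a bootable volume from the image and boots from it, which is what the error message is asking for.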

Related

Error reading data into Spark using sparklyr::spark_read_csv

I'm running Spark in 'standalone' mode on a local machine in Docker containers. I have a master and two workers, each is running in its own Docker container. In each of the containers the path /opt/spark-data is mapped to the same local directory on the host.
I'm connecting to the Spark master from R using sparklyr, and I can do a few things, for example, loading data into Spark using sparklyr::copy_to.
However, I cannot get sparklyr::spark_read_csv to work. The data I'm trying to load is in the local directory that is mapped in the containers. When attaching to the running containers I can see that the file I'm trying to load does exist in each of the 3 containers, in the local (to the container) path /opt/spark-data.
This is an example of the code I'm using:
xx_csv <- spark_read_csv(
  sc,
  name = "xx1_csv",
  path = "file:///opt/spark-data/data-csv"
)
data-csv is a directory containing a single CSV file. I've also tried specifying the full path, including the file name.
When I'm calling the above code, I'm getting an exception:
Error: org.apache.spark.sql.AnalysisException: Path does not exist: file:/opt/spark-data/data-csv;
I've also tried with different numbers of / in the path argument, but to no avail.
The documentation for spark_read_csv says that
path: The path to the file. Needs to be accessible from the
cluster. Supports the ‘"hdfs://"’, ‘"s3a://"’ and ‘"file://"’
protocols.
My naive expectation is that if I can see the file in the container file system when attaching to the container, then it is "accessible from the cluster", so I don't understand why I'm getting the error. All the directories and files in the path are owned by root and have read permissions for all.
What am I missing?
Try without "file://", and with \\ as the path separator if you are a Windows user.
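Two things are worth checking beyond the URI scheme. First, spark_read_csv resolves the path on the driver as well as on the workers, so the file must also exist on the machine where the Spark driver runs, not only inside the worker containers. Second, the file:// URI for an absolute path should carry three slashes. The latter can be illustrated with Python's pathlib (used here only to show the canonical URI form, independently of sparklyr):

```python
from pathlib import PurePosixPath

# Canonical file:// URI for an absolute POSIX path: scheme + empty host + path,
# which yields three consecutive slashes.
uri = PurePosixPath("/opt/spark-data/data-csv").as_uri()
print(uri)  # file:///opt/spark-data/data-csv
```

If the URI is well formed and the path exists in every container, the remaining suspect is the machine hosting the R session itself.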

Cloudstack: Failed to attach volume Data to VM Nas12U41

Failed to attach volume Data to VM Nas12U41; org.libvirt.LibvirtException: internal error: unable to execute QEMU command '__com.redhat_drive_add': could not open disk image /mnt/3c164f13-17f2-3edf-b836-74299f20a559/65bcbd35-4fc5-4714-af04-4712a6a7f0e7: qcow2: Image is corrupt; cannot be opened read/write
You may want to check whether the storage is healthy and whether the file exists and is intact. Depending on which filesystem and primary storage you're using, you may run fsck or an equivalent tool to recover the corrupted file. You may also want to check file permissions. Can you share your CloudStack version, the output of "virsh version" and of "qemu-img info" on the qcow2 file, and the KVM host distro?
To discuss further, please join the CloudStack users mailing list and ask there: http://cloudstack.apache.org/mailing-lists.html
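The checks mentioned above can be run on the KVM host, for example (using the image path from the error message; back up the file before attempting any repair):

```shell
# Show format, virtual size and backing file of the qcow2 image
qemu-img info /mnt/3c164f13-17f2-3edf-b836-74299f20a559/65bcbd35-4fc5-4714-af04-4712a6a7f0e7

# Check the image for corruption; '-r all' additionally attempts repairs
qemu-img check -r all /mnt/3c164f13-17f2-3edf-b836-74299f20a559/65bcbd35-4fc5-4714-af04-4712a6a7f0e7
```

Note that qemu-img check repairs only qcow2 metadata; if the underlying storage is failing, fix that first or the corruption will recur.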

Grakn Error; trying to load schema for "phone calls" example

I am trying to run the example Grakn migration "phone_calls" (using Python and JSON files).
Before getting there, I need to load the schema, but I am having trouble getting it loaded, following the steps shown here: https://dev.grakn.ai/docs/examples/phone-calls-schema
System:
- Mac OS 10.15
- grakn-core 1.8.3
- python 3.7.3
The Grakn server is started. I checked and the 48555 TCP port is open, so I don't think there is any firewall issue. The schema file is in the same folder (phone_calls) as the JSON data files for the next step. I am using a virtual environment. The error is below:
(project1_env) (base) tiffanytoor1@MacBook-Pro-2 onco % grakn server start
Storage is already running
Grakn Core Server is already running
(project1_env) (base) tiffanytoor1@MacBook-Pro-2 onco % grakn console --keyspace phone_calls --file phone_calls/schema.gql
Unable to create connection to Grakn instance at localhost:48555
Cause: io.grpc.StatusRuntimeException
UNKNOWN: Could not reach any contact point, make sure you've provided valid addresses (showing first 1, use getErrors() for more: Node(endPoint=/127.0.0.1:9042, hostId=null, hashCode=5f59fd46): com.datastax.oss.driver.api.core.connection.ConnectionInitException: [JanusGraph Session|control|connecting...] init query OPTIONS: error writing ). Please check server logs for the stack trace.
I would appreciate any help! Thanks!
Never mind -- I found the solution, in case anyone else runs into a similar problem. The server configuration file needs to be edited: point the data directory to your project data files (here: the phone_calls data files) and change the server IP address to your own.

Stack update feature is not working with volumes when we add an extra node (scaling scenario)

I have a stack template with 3 nodes, each node having a volume attached. I then updated the template with an additional node and an additional volume for that node, which also changes the user_data of an existing node in the template. When I perform a stack update, it gives me the error: Invalid volume: Volume 01e40c6e-4467-42fe-ba9d-ce7012db8978 status must be available or downloading to reserve, but the current status is in-use.
The error shows for the node whose user_data changed, and yes, the volume is currently in use. How, then, can one update a stack with attached volumes using OpenStack's stack update feature?
Below is the file I create on that node via user_data, so adding a node updates this user_data as well:
cat << 'EOF' > mydata.json
{
  "hosts": {
    "nodes": {
      "node-1": {
        "my_lan": {
          "hostname": "node-1",
          "ip": "~node-1-my_lan-ip~",
          "interface": "eth0"
        }
      },
      "node-2": {
        "my_lan": {
          "hostname": "node-2",
          "ip": "~node-2-my_lan-ip~",
          "interface": "eth0"
        }
      }
    }
  }
}
EOF
We have a similar use case where we create two separate stacks: one for the VM and one for the Cinder volumes.
Create the Cinder volume stack first and then associate the Cinder volume ID with the VM stack.
This approach gives you fine-grained control over maintaining the Cinder volumes and the VM.
Obviously, any change in VM user_data is going to recreate the VM instead of performing a rebuild action, and that is what you are running into.
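The two-stack approach described above can be sketched as follows (a rough sketch with illustrative names; the image, flavor, and sizes are placeholders):

```yaml
# volume-stack.yaml -- create the volume and expose its ID
resources:
  data_volume:
    type: OS::Cinder::Volume
    properties:
      size: 10                       # GB
outputs:
  volume_id:
    value: { get_resource: data_volume }

# vm-stack.yaml -- receive the volume ID as a parameter and attach it
parameters:
  volume_id:
    type: string
resources:
  server:
    type: OS::Nova::Server
    properties:
      image: <glance-image>
      flavor: <flavor>
  attachment:
    type: OS::Cinder::VolumeAttachment
    properties:
      instance_uuid: { get_resource: server }
      volume_id: { get_param: volume_id }
```

This way, a user_data change that replaces the server does not force Heat to recreate the volume; only the attachment resource in the VM stack is replaced.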

Openstack Error: Unable to retrieve instances, volume list, snapshot list etc

I have installed devstack on Ubuntu 14.04 and rebooted the virtual machine, but after the reboot OpenStack is not working and I get the following errors:
Overview -> Error: Unable to retrieve usage information
Instances -> Error: Unable to retrieve instances.
Volumes -> Error: Unable to retrieve snapshot list.
Images -> Error: Unable to retrieve images.
From solutions I found on Stack Overflow, there is no longer a ./rejoin-stack.sh script to retain all the virtual machines,
so I am trying to execute the following inside the devstack directory:
cd devstack
screen -c stack-screenrc
By running the above command, I found that virtual machine network interfaces are created automatically, and this disconnects the physical wired connection.
I have attached a screenshot of the network interfaces created.
Can anyone help me, please?
Thanks
Any inputs?
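Since rejoin-stack.sh was removed from devstack, the usual way to bring the services back after a host reboot is to re-run the stack scripts (note this does not retain previously created instances; the virtual interfaces you see, such as the br-ex bridge, are created by Neutron/Open vSwitch as part of this):

```shell
cd devstack
./unstack.sh   # stop and clean up the stale devstack services
./stack.sh     # redeploy and restart the devstack services
```

If the wired connection drops when the bridge comes up, check the OVS/bridge settings in your local.conf (e.g. which physical interface, if any, is enslaved to br-ex).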
