Stack update feature is not working with volumes when we add an extra node (scaling scenario) - openstack

I have a stack template with 3 nodes, each node having a volume attached. I then updated my template with an additional node and an additional volume for that node. This also results in a user_data update for an existing node in the template. When I perform a stack update, it gives me this error: Invalid volume: Volume 01e40c6e-4467-42fe-ba9d-ce7012db8978 status must be available or downloading to reserve, but the current status is in-use.
The error appears for the node whose user_data changed, and yes, the volume is currently in use. So how can one update a stack with attached volumes using the OpenStack stack update feature?
Below is the file I create from that node's user_data, so adding a node updates this user_data as well:
cat << 'EOF' > mydata.json
{
  "hosts": {
    "nodes": {
      "node-1": {
        "my_lan": {
          "hostname": "node-1",
          "ip": "~node-1-my_lan-ip~",
          "interface": "eth0"
        }
      },
      "node-2": {
        "my_lan": {
          "hostname": "node-2",
          "ip": "~node-2-my_lan-ip~",
          "interface": "eth0"
        }
      }
    }
  }
}
EOF

We have a similar use case, where we create two separate stacks: one for the VMs and one for the Cinder volumes.
Create the Cinder volume stack first, then pass the Cinder volume IDs into the VM stack.
This approach gives you fine-grained control over maintaining the Cinder volumes and the VMs independently.
Obviously, any change to a VM's user_data is going to recreate the VM instead of performing a rebuild action, and that is what you are running into.
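For illustration, a minimal sketch of that workflow with the openstack CLI (the stack names, template file names, and the node3_volume_id output/parameter are hypothetical placeholders, not from the original templates):

# Create the volume stack first and capture the volume ID from its outputs.
openstack stack create --template volumes.yaml vol-stack --wait
VOLUME_ID=$(openstack stack output show vol-stack node3_volume_id -f value -c output_value)

# Then update the VM stack, passing the pre-existing volume ID in as a
# parameter, so the volumes are never owned (and never recreated) by the VM stack.
openstack stack update --existing --parameter node3_volume_id="$VOLUME_ID" vm-stack --wait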

Related

CREATE_FAILED : Flavor's local disks conflict. You need to create a bootable volume to use this flavor for your instance

As I'm trying to create a stack, I get the following error :
[kypo-proxy-jump-stack]: CREATE_FAILED Resource CREATE failed:
Forbidden: resources.kypo-proxy-jump: Flavor's local disks conflict.
You need to create a bootable volume to use this flavor for your instance. (HTTP 403)
I already tried booting the instance from an image and attaching a non-bootable volume, as described in the link below: https://docs.openstack.org/ocata/user-guide/cli-nova-launch-instance-from-volume.html
but it didn't work.
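This 403 usually means the flavor has no local root disk, so the instance must boot from a volume rather than just have one attached. A hedged sketch with the openstack CLI (the image, flavor, and network names are placeholders):

# Create a bootable volume from the image, then boot the server from that volume.
openstack volume create --image <image-name-or-id> --size 20 --bootable boot-vol
openstack server create --flavor <flavor-name> --volume boot-vol \
  --network <network-name> kypo-proxy-jump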

Cloudstack: Failed to attach volume Data to VM Nas12U41

Failed to attach volume Data to VM Nas12U41; org.libvirt.LibvirtException: internal error: unable to execute QEMU command '__com.redhat_drive_add': could not open disk image /mnt/3c164f13-17f2-3edf-b836-74299f20a559/65bcbd35-4fc5-4714-af04-4712a6a7f0e7: qcow2: Image is corrupt; cannot be opened read/write
You may want to check that the storage is healthy and that the file exists. Depending on which filesystem and primary storage you're using, you may run fsck or an equivalent tool to recover the corrupted file. You may also want to check file permissions. Can you share your CloudStack version, the output of "virsh version" and of "qemu-img info" on the qcow2 file, and the KVM host distro?
To discuss further, please join the CloudStack users mailing list and ask there: http://cloudstack.apache.org/mailing-lists.html
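As a starting point, a diagnostic sketch for the image file named in the error (back the file up before attempting any repair):

# Inspect the image metadata the hypervisor failed to read.
qemu-img info /mnt/3c164f13-17f2-3edf-b836-74299f20a559/65bcbd35-4fc5-4714-af04-4712a6a7f0e7

# Run qemu-img's built-in consistency check; add "-r leaks" or "-r all" to
# attempt a repair, but only on a copy or after taking a backup.
qemu-img check /mnt/3c164f13-17f2-3edf-b836-74299f20a559/65bcbd35-4fc5-4714-af04-4712a6a7f0e7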

Grakn Error; trying to load schema for "phone calls" example

I am trying to run the example Grakn migration "phone_calls" (using Python and JSON files).
Before getting there, I need to load the schema, but I am having trouble getting it loaded, following this guide: https://dev.grakn.ai/docs/examples/phone-calls-schema
System:
-Mac OS 10.15
-grakn-core 1.8.3
-python 3.7.3
The Grakn server is started. I checked and TCP port 48555 is open, so I don't think there is a firewall issue. The schema file is in the same folder (phone_calls) as the JSON data files for the next step. I am using a virtual environment. The error is below:
(project1_env) (base) tiffanytoor1@MacBook-Pro-2 onco % grakn server start
Storage is already running
Grakn Core Server is already running
(project1_env) (base) tiffanytoor1@MacBook-Pro-2 onco % grakn console --keyspace phone_calls --file phone_calls/schema.gql
Unable to create connection to Grakn instance at localhost:48555
Cause: io.grpc.StatusRuntimeException
UNKNOWN: Could not reach any contact point, make sure you've provided valid addresses (showing first 1, use getErrors() for more: Node(endPoint=/127.0.0.1:9042, hostId=null, hashCode=5f59fd46): com.datastax.oss.driver.api.core.connection.ConnectionInitException: [JanusGraph Session|control|connecting...] init query OPTIONS: error writing ). Please check server logs for the stack trace.
I would appreciate any help! Thanks!
Never mind -- I found the solution, in case anyone else runs into a similar problem. The server configuration file needs to be edited: point the data directory to your project data files (here: the phone_calls data files) and change the server IP address to your own.
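For anyone debugging something similar: the stack trace shows the console reached the server but the storage layer at 127.0.0.1:9042 refused the connection, so a quick first check is to confirm what is actually listening (a sketch for macOS/Linux):

# 48555 is Grakn's gRPC port; 9042 is the storage (Cassandra/JanusGraph) port
# named in the error message.
lsof -iTCP:48555 -sTCP:LISTEN
lsof -iTCP:9042 -sTCP:LISTEN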

MarkLogic 8: Scheduled task does not start and no logs

I scheduled a data extraction with an XQuery query on ML 8.0.6 using the "scheduler tasks".
My XQuery query (this query works if I copy/paste it into the ML web console, and I get a file available on AWS S3):
xdmp:save("s3://XX.csv",let $nl := "
"
return
document {
for $book in collection("books")/books
return (root($book)/bookId||","||
$optin/updatedDate||$nl
)
})
My scheduled task:
Task enabled: yes
Task path: /home/bob/extraction.xqy
Task root: /
Task type: hourly
Task period: 1
Task start time: 8 past the hour
Task database: mydatabase
Task modules: file system
Task user: admin
Task host: XX
Task priority: higher
Unfortunately, my script is not executed: no file is generated on AWS S3 (the storage used) and I do not have any logs.
Any idea how to:
1/ debug a job in the task scheduler?
2/ see the job running at the expected time?
Thanks,
Romain.
First, I would take a look at ErrorLog.txt, because it will probably show you where to look for the problem:
xdmp:filesystem-file(concat(xdmp:data-directory(),"/","Logs","/","ErrorLog.txt"))
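If you have shell access to the host, tailing the same log from the filesystem is quicker (the path assumes a default Linux install; adjust to your data directory):

# Follow the MarkLogic error log directly; scheduled-task failures show up here.
tail -f /var/opt/MarkLogic/Logs/ErrorLog.txt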
Where is the script located logically: has it been uploaded to the content database, the modules database, or the ./MarkLogic/Modules directory?
If this is a cluster, have you specified which host it runs on? If so, and you are using the filesystem for modules, ensure the script exists in the ./MarkLogic/Modules directory of that host. If not, ensure the script exists in the ./MarkLogic/Modules directory of every host in the cluster.
As for seeing the job running, you can check http://servername:8002/dashboard/ and look at the Query Execution tab to see the running processes, or you can get a snapshot of the process by looking at the Status page of the Task Server (Configure > Groups > [group name] > Task Server: Status, then click the "show more" button).

Openstack Error: Unable to retrieve instances, volume list, snapshot list etc

I have installed devstack on Ubuntu 14.04 and rebooted the virtual machine, but after the reboot OpenStack is not working and I get the following errors:
Overview -> Error: Unable to retrieve usage information
Instances -> Error: Unable to retrieve instances.
Volumes -> Error: Unable to retrieve snapshot list.
Images -> Error: Unable to retrieve images.
As I found in a solution on Stack Overflow, there is no ./rejoin-stack anymore to retain all the virtual machines.
So I am trying to execute the command inside the devstack directory, i.e.:
cd devstack
screen -c stack-screenrc
By running the above command, I found that the virtual machine network interfaces are created automatically, and because of this the physical wired connection gets disconnected.
I have attached a screenshot of the network interfaces created.
Can anyone help me please,
Thanks
Any inputs??
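A hedged sketch of the usual recovery, given that devstack does not survive a host reboot and rejoin-stack is gone: tear the environment down and re-run the installer (this restores the services, but does not preserve the instances that were running):

cd devstack
./unstack.sh
./stack.sh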
