Failure trying to start devstack - openstack

I have a devstack environment (Installed on Ubuntu 14.04.1 LTS) with which I have been working for a while now.
I started getting this error when I run the stack.sh script:
ERROR: openstack Authentication failure: An unexpected error prevented the server from fulfilling your request: No module named backends.sql (Disable debug mode to suppress these details.) (HTTP 500)
Things I have tried:
Restarted the devstack VM
Reinstalled mysql-server
Cloned a new copy of devstack to run a fresh copy of the code
Note that I can log in to MySQL using the credentials in my local.conf, and the same configuration was working for the past couple of months.
If any other information is needed, please let me know.
More detailed log:
2014-12-18 22:28:13.530 | ++ ENABLED_SERVICES=g-api,g-reg,key,n-api,n-crt,n-obj,n-cpu,n-net,n-cond,n-sch,n-novnc,n-xvnc,n-cauth,c-sch,c-api,c-vol,h-eng,h-api,h-api-cfn,h-api-cw,horizon,rabbit,tempest,mysql,ceilometer-acompute,ceilometer-acentral,ceilometer-anotification,ceilometer-collector,ceilometer-alarm-evaluator,ceilometer-alarm-notifier,ceilometer-api
2014-12-18 22:28:13.530 | ++ is_service_enabled gantt
2014-12-18 22:28:13.533 | ++ return 1
2014-12-18 22:28:13.533 | + for i in '$TOP_DIR/extras.d/*.sh'
2014-12-18 22:28:13.534 | + [[ -r /home/hellraiser/newicehouse/extras.d/70-marconi.sh ]]
2014-12-18 22:28:13.534 | + source /home/hellraiser/newicehouse/extras.d/70-marconi.sh stack post-config
2014-12-18 22:28:13.534 | ++ is_service_enabled marconi-server
2014-12-18 22:28:13.536 | ++ return 1
2014-12-18 22:28:13.536 | + for i in '$TOP_DIR/extras.d/*.sh'
2014-12-18 22:28:13.536 | + [[ -r /home/hellraiser/newicehouse/extras.d/70-sahara.sh ]]
2014-12-18 22:28:13.536 | + source /home/hellraiser/newicehouse/extras.d/70-sahara.sh stack post-config
2014-12-18 22:28:13.536 | ++ is_service_enabled sahara
2014-12-18 22:28:13.539 | ++ return 1
2014-12-18 22:28:13.540 | + for i in '$TOP_DIR/extras.d/*.sh'
2014-12-18 22:28:13.540 | + [[ -r /home/hellraiser/newicehouse/extras.d/70-trove.sh ]]
2014-12-18 22:28:13.540 | + source /home/hellraiser/newicehouse/extras.d/70-trove.sh stack post-config
2014-12-18 22:28:13.540 | ++ is_service_enabled trove
2014-12-18 22:28:13.543 | ++ return 1
2014-12-18 22:28:13.543 | + for i in '$TOP_DIR/extras.d/*.sh'
2014-12-18 22:28:13.543 | + [[ -r /home/hellraiser/newicehouse/extras.d/80-opendaylight.sh ]]
2014-12-18 22:28:13.543 | + source /home/hellraiser/newicehouse/extras.d/80-opendaylight.sh stack post-config
2014-12-18 22:28:13.543 | ++ is_service_enabled odl-server odl-compute
2014-12-18 22:28:13.546 | ++ return 1
2014-12-18 22:28:13.546 | ++ is_service_enabled odl-server
2014-12-18 22:28:13.549 | ++ return 1
2014-12-18 22:28:13.549 | ++ is_service_enabled odl-compute
2014-12-18 22:28:13.552 | ++ return 1
2014-12-18 22:28:13.552 | + for i in '$TOP_DIR/extras.d/*.sh'
2014-12-18 22:28:13.553 | + [[ -r /home/hellraiser/newicehouse/extras.d/80-tempest.sh ]]
2014-12-18 22:28:13.553 | + source /home/hellraiser/newicehouse/extras.d/80-tempest.sh stack post-config
2014-12-18 22:28:13.553 | ++ is_service_enabled tempest
2014-12-18 22:28:13.555 | ++ return 0
2014-12-18 22:28:13.555 | ++ [[ stack == \s\o\u\r\c\e ]]
2014-12-18 22:28:13.555 | ++ [[ stack == \s\t\a\c\k ]]
2014-12-18 22:28:13.555 | ++ [[ post-config == \i\n\s\t\a\l\l ]]
2014-12-18 22:28:13.555 | ++ [[ stack == \s\t\a\c\k ]]
2014-12-18 22:28:13.555 | ++ [[ post-config == \p\o\s\t\-\c\o\n\f\i\g ]]
2014-12-18 22:28:13.555 | ++ create_tempest_accounts
2014-12-18 22:28:13.555 | ++ is_service_enabled tempest
2014-12-18 22:28:13.558 | ++ return 0
2014-12-18 22:28:13.558 | ++ openstack project create alt_demo
2014-12-18 22:28:14.181 | ERROR: openstack Authentication failure: An unexpected error prevented the server from fulfilling your request: No module named backends.sql (Disable debug mode to suppress these details.) (HTTP 500)
2014-12-18 22:28:14.212 | + exit_trap
2014-12-18 22:28:14.213 | + local r=1
2014-12-18 22:28:14.213 | ++ jobs -p
2014-12-18 22:28:14.214 | + jobs=

Related

volume backup create what is errno 22?

I am trying to create a volume backup, both from the web UI and the CLI, and keep getting errno 22. I'm unable to find information about the error or how to fix it. Does anyone know where I should start looking?
(openstack) volume backup create --force --name inventory01_vol_backups 398ee974-9b83-4918-9935-f52882b3e6b7
(openstack) volume backup show inventory01_vol_backups
+-----------------------+------------------------------------------------------------------+
| Field | Value |
+-----------------------+------------------------------------------------------------------+
| availability_zone | None |
| container | None |
| created_at | 2021-08-03T23:45:49.000000 |
| data_timestamp | 2021-08-03T23:45:49.000000 |
| description | None |
| fail_reason | [errno 22] RADOS invalid argument (error calling conf_read_file) |
| has_dependent_backups | False |
| id | 924c6e62-789e-4e51-9748-927695fc744c |
| is_incremental | False |
| name | inventory01_vol_backups |
| object_count | 0 |
| size | 30 |
| snapshot_id | None |
| status | error |
| updated_at | 2021-08-03T23:45:50.000000 |
| volume_id | 398ee974-9b83-4918-9935-f52882b3e6b7 |
+-----------------------+------------------------------------------------------------------+
The issue was caused by a bug in Cinder version 16.2.1.dev13. Updating Cinder to the latest version solved it.

openstack ocata glance creating 0 sized image

When I create a new image with Glance, whether from the CLI or the GUI, the command returns code 0 and the image is created, but its size is zero.
The behavior differs slightly: from the GUI my browser crashes but the image is still created; from the CLI I get return code 0.
Command:
openstack image create --file cirros-0.4.0-x86_64-disk.img --disk-format qcow2 --container-format bare --public --debug cirros-deb
+------------------+------------------------------------------------------+
| Field | Value |
+------------------+------------------------------------------------------+
| checksum | d41d8cd98f00b204e9800998ecf8427e |
| container_format | bare |
| created_at | 2018-01-20T23:24:47Z |
| disk_format | qcow2 |
| file | /v2/images/c695bc30-731d-4a4f-ab0f-12eb972d8188/file |
| id | c695bc30-731d-4a4f-ab0f-12eb972d8188 |
| min_disk | 0 |
| min_ram | 0 |
| name | cirros-deb |
| owner | a3460a3b0e8f4d0bbdd25bf790fe504c |
| protected | False |
| schema | /v2/schemas/image |
| size | 0 |
| status | active |
| tags | |
| updated_at | 2018-01-20T23:24:47Z |
| virtual_size | None |
| visibility | public |
+------------------+------------------------------------------------------+
clean_up CreateImage:
END return value: 0
I tried different CirrOS images and an Ubuntu cloud image; the behavior is always the same.
Under /var/lib/glance/images the file is created with size 0:
-rw-r-----. 1 glance glance 0 Jan 21 00:24 c695bc30-731d-4a4f-ab0f-12eb972d8188
grep c695bc30-731d-4a4f-ab0f-12eb972d8188 glance/api.log
2018-01-21 00:24:47.915 1894 INFO eventlet.wsgi.server [req-7246cd30-47c4-41a5-b358-c8e5cc0f4e56 8bd3e4905ffb4f698e2476d9080a7d90 a3460a3b0e8f4d0bbdd25bf790fe504c - default default] 172.19.254.50 - - [21/Jan/2018 00:24:47] "PUT /v2/images/c695bc30-731d-4a4f-ab0f-12eb972d8188/file HTTP/1.1" 204 213 0.111323
2018-01-21 00:24:47.931 1894 INFO eventlet.wsgi.server [req-28e0cda2-c9f7-4543-b19a-d59eccffa47e 8bd3e4905ffb4f698e2476d9080a7d90 a3460a3b0e8f4d0bbdd25bf790fe504c - default default] 172.19.254.50 - - [21/Jan/2018 00:24:47] "GET /v2/images/c695bc30-731d-4a4f-ab0f-12eb972d8188 HTTP/1.1" 200 780 0.015399
Any idea what can be wrong?
The fix is to patch the Python Glance client. Find its location:
find / -name http.py
vi /usr/lib/python2.7/site-packages/glanceclient/common/http.py
Replace the chunked-body line as follows:
- data = self._chunk_body(data)
+ pass
References:
https://bugs.launchpad.net/python-glanceclient/+bug/1666511
https://ask.openstack.org/en/question/101944/why-does-openstack-image-create-of-cirros-result-in-size-0/?answer=102303#post-id-102303
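As a side check: the checksum Glance reported for the broken image is exactly the MD5 digest of zero bytes, which confirms that no data ever reached the image store:

```python
import hashlib

# MD5 of an empty byte string. This matches the "checksum" field in the
# `openstack image create` output above, i.e. the uploaded body was empty.
empty_md5 = hashlib.md5(b"").hexdigest()
print(empty_md5)  # d41d8cd98f00b204e9800998ecf8427e
```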

NoValidHost: No valid host was found. There are not enough hosts available

When I create an instance in the dashboard, I get the error:
No valid host was found. There are not enough hosts available.
In the /var/log/nova/nova-conductor.log file, there is this log:
2017-08-05 00:22:29.046 3834 WARNING nova.scheduler.utils [req-89c159c7-b40a-43eb-8f0d-9306eb73e83a 2a5fa182fb1b459980db09cd1572850e 0d5998f2f7ec4c4892a32e06bafb19df - - -] Failed to compute_task_build_instances: No valid host was found. There are not enough hosts available.
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 199, in inner
return func(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py", line 104, in select_destinations
dests = self.driver.select_destinations(ctxt, spec_obj)
File "/usr/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py", line 74, in select_destinations
raise exception.NoValidHost(reason=reason)
NoValidHost: No valid host was found. There are not enough hosts available.
2017-08-05 00:22:29.048 3834 WARNING nova.scheduler.utils [req-89c159c7-b40a-43eb-8f0d-9306eb73e83a 2a5fa182fb1b459980db09cd1572850e 0d5998f2f7ec4c4892a32e06bafb19df - - -] [instance: 2011e343-c8fc-4ed0-8148-b0d2b5ba37c3] Setting instance to ERROR state.
2017-08-05 00:22:30.785 3834 WARNING oslo_config.cfg [req-89c159c7-b40a-43eb-8f0d-9306eb73e83a 2a5fa182fb1b459980db09cd1572850e 0d5998f2f7ec4c4892a32e06bafb19df - - -] Option "auth_plugin" from group "neutron" is deprecated. Use option "auth_type" from group "neutron".
I searched SO and found a related post: Openstack-Devstack: Can't create instance, There are not enough hosts available
I checked the free_ram_mb in MySQL:
MariaDB [nova]> select * from compute_nodes \G;
*************************** 1. row ***************************
created_at: 2017-08-04 12:44:26
updated_at: 2017-08-04 13:51:35
deleted_at: NULL
id: 4
service_id: NULL
vcpus: 8
memory_mb: 7808
local_gb: 19
vcpus_used: 0
memory_mb_used: 512
local_gb_used: 0
hypervisor_type: QEMU
hypervisor_version: 1005003
cpu_info: {"vendor": "Intel", "model": "Broadwell", "arch": "x86_64", "features": ["smap", "avx", "clflush", "sep", "rtm", "vme", "invpcid", "msr", "fsgsbase", "xsave", "pge", "erms", "hle", "cmov", "tsc", "smep", "pcid", "pat", "lm", "abm", "adx", "3dnowprefetch", "nx", "fxsr", "syscall", "sse4.1", "pae", "sse4.2", "pclmuldq", "fma", "tsc-deadline", "mmx", "osxsave", "cx8", "mce", "de", "rdtscp", "ht", "pse", "lahf_lm", "rdseed", "popcnt", "mca", "pdpe1gb", "apic", "sse", "f16c", "ds", "invtsc", "pni", "aes", "avx2", "sse2", "ss", "hypervisor", "bmi1", "bmi2", "ssse3", "fpu", "cx16", "pse36", "mtrr", "movbe", "rdrand", "x2apic"], "topology": {"cores": 2, "cells": 1, "threads": 1, "sockets": 4}}
disk_available_least: 18
free_ram_mb: 7296
free_disk_gb: 19
current_workload: 0
running_vms: 0
hypervisor_hostname: ha-node1
deleted: 0
host_ip: 192.168.8.101
supported_instances: [["i686", "qemu", "hvm"], ["x86_64", "qemu", "hvm"]]
pci_stats: {"nova_object.version": "1.1", "nova_object.changes": ["objects"], "nova_object.name": "PciDevicePoolList", "nova_object.data": {"objects": []}, "nova_object.namespace": "nova"}
metrics: []
extra_resources: NULL
stats: {}
numa_topology: NULL
host: ha-node1
ram_allocation_ratio: 3
cpu_allocation_ratio: 16
uuid: 9113940b-7ec9-462d-af06-6988dbb6b6cf
disk_allocation_ratio: 1
*************************** 2. row ***************************
created_at: 2017-08-04 12:44:34
updated_at: 2017-08-04 13:50:47
deleted_at: NULL
id: 6
service_id: NULL
vcpus: 8
memory_mb: 7808
local_gb: 19
vcpus_used: 0
memory_mb_used: 512
local_gb_used: 0
hypervisor_type: QEMU
hypervisor_version: 1005003
cpu_info: {"vendor": "Intel", "model": "Broadwell", "arch": "x86_64", "features": ["smap", "avx", "clflush", "sep", "rtm", "vme", "invpcid", "msr", "fsgsbase", "xsave", "pge", "erms", "hle", "cmov", "tsc", "smep", "pcid", "pat", "lm", "abm", "adx", "3dnowprefetch", "nx", "fxsr", "syscall", "sse4.1", "pae", "sse4.2", "pclmuldq", "fma", "tsc-deadline", "mmx", "osxsave", "cx8", "mce", "de", "rdtscp", "ht", "pse", "lahf_lm", "rdseed", "popcnt", "mca", "pdpe1gb", "apic", "sse", "f16c", "ds", "invtsc", "pni", "aes", "avx2", "sse2", "ss", "hypervisor", "bmi1", "bmi2", "ssse3", "fpu", "cx16", "pse36", "mtrr", "movbe", "rdrand", "x2apic"], "topology": {"cores": 2, "cells": 1, "threads": 1, "sockets": 4}}
disk_available_least: 18
free_ram_mb: 7296
free_disk_gb: 19
current_workload: 0
running_vms: 0
hypervisor_hostname: ha-node2
deleted: 0
host_ip: 192.168.8.102
supported_instances: [["i686", "qemu", "hvm"], ["x86_64", "qemu", "hvm"]]
pci_stats: {"nova_object.version": "1.1", "nova_object.changes": ["objects"], "nova_object.name": "PciDevicePoolList", "nova_object.data": {"objects": []}, "nova_object.namespace": "nova"}
metrics: []
extra_resources: NULL
stats: {}
numa_topology: NULL
host: ha-node2
ram_allocation_ratio: 3
cpu_allocation_ratio: 16
uuid: 32b574df-52ac-43dc-87f8-353350449076
disk_allocation_ratio: 1
2 rows in set (0.00 sec)
As you can see, free_ram_mb is 7296, yet creating a 512 MB VM fails.
EDIT-1
All of the nova services are up:
[root@ha-node1 ~]# nova service-list
+----+------------------+----------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+----+------------------+----------+----------+---------+-------+----------------------------+-----------------+
| 2 | nova-consoleauth | ha-node3 | internal | enabled | up | 2017-08-05T14:20:25.000000 | - |
| 5 | nova-conductor | ha-node3 | internal | enabled | up | 2017-08-05T14:20:29.000000 | - |
| 7 | nova-cert | ha-node3 | internal | enabled | up | 2017-08-05T14:20:23.000000 | - |
| 15 | nova-scheduler | ha-node3 | internal | enabled | up | 2017-08-05T14:20:20.000000 | - |
| 22 | nova-cert | ha-node1 | internal | enabled | up | 2017-08-05T14:20:26.000000 | - |
| 29 | nova-conductor | ha-node1 | internal | enabled | up | 2017-08-05T14:20:22.000000 | - |
| 32 | nova-consoleauth | ha-node1 | internal | enabled | up | 2017-08-05T14:20:29.000000 | - |
| 33 | nova-consoleauth | ha-node2 | internal | enabled | up | 2017-08-05T14:20:29.000000 | - |
| 36 | nova-scheduler | ha-node1 | internal | enabled | up | 2017-08-05T14:20:30.000000 | - |
| 40 | nova-conductor | ha-node2 | internal | enabled | up | 2017-08-05T14:20:26.000000 | - |
| 44 | nova-cert | ha-node2 | internal | enabled | up | 2017-08-05T14:20:27.000000 | - |
| 46 | nova-scheduler | ha-node2 | internal | enabled | up | 2017-08-05T14:20:28.000000 | - |
| 49 | nova-compute | ha-node2 | nova | enabled | up | 2017-08-05T14:19:35.000000 | - |
| 53 | nova-compute | ha-node1 | nova | enabled | up | 2017-08-05T14:20:05.000000 | - |
+----+------------------+----------+----------+---------+-------+----------------------------+-----------------+
The nova list:
[root@ha-node1 ~]# nova list
+--------------------------------------+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+------+--------+------------+-------------+----------+
| 20193e58-2c5b-44c6-a98f-a44e2001934f | vm1 | ERROR | - | NOSTATE | |
And nova show for the instance:
[root@ha-node1 ~]# nova show 20193e58-2c5b-44c6-a98f-a44e2001934f
+--------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Property | Value |
+--------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| OS-DCF:diskConfig | AUTO |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | - |
| OS-EXT-SRV-ATTR:hostname | vm1 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | - |
| OS-EXT-SRV-ATTR:instance_name | instance-00000003 |
| OS-EXT-SRV-ATTR:kernel_id | |
| OS-EXT-SRV-ATTR:launch_index | 0 |
| OS-EXT-SRV-ATTR:ramdisk_id | |
| OS-EXT-SRV-ATTR:reservation_id | r-jct8kkcq |
| OS-EXT-SRV-ATTR:root_device_name | /dev/vda |
| OS-EXT-SRV-ATTR:user_data | - |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | - |
| OS-EXT-STS:vm_state | error |
| OS-SRV-USG:launched_at | - |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
| config_drive | |
| created | 2017-08-05T14:17:54Z |
| description | vm1 |
| fault | {"message": "No valid host was found. There are not enough hosts available.", "code": 500, "details": " File \"/usr/lib/python2.7/site-packages/nova/conductor/manager.py\", line 496, in build_instances |
| | context, request_spec, filter_properties) |
| | File \"/usr/lib/python2.7/site-packages/nova/conductor/manager.py\", line 567, in _schedule_instances |
| | hosts = self.scheduler_client.select_destinations(context, spec_obj) |
| | File \"/usr/lib/python2.7/site-packages/nova/scheduler/utils.py\", line 370, in wrapped |
| | return func(*args, **kwargs) |
| | File \"/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py\", line 51, in select_destinations |
| | return self.queryclient.select_destinations(context, spec_obj) |
| | File \"/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py\", line 37, in __run_method |
| | return getattr(self.instance, __name)(*args, **kwargs) |
| | File \"/usr/lib/python2.7/site-packages/nova/scheduler/client/query.py\", line 32, in select_destinations |
| | return self.scheduler_rpcapi.select_destinations(context, spec_obj) |
| | File \"/usr/lib/python2.7/site-packages/nova/scheduler/rpcapi.py\", line 126, in select_destinations |
| | return cctxt.call(ctxt, 'select_destinations', **msg_args) |
| | File \"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py\", line 169, in call |
| | retry=self.retry) |
| | File \"/usr/lib/python2.7/site-packages/oslo_messaging/transport.py\", line 97, in _send |
| | timeout=timeout, retry=retry) |
| | File \"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py\", line 464, in send |
| | retry=retry) |
| | File \"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py\", line 455, in _send |
| | raise result |
| | ", "created": "2017-08-05T14:18:14Z"} |
| flavor | m1.tiny (1) |
| hostId | |
| host_status | |
| id | 20193e58-2c5b-44c6-a98f-a44e2001934f |
| image | cirros-0.3.4-x86_64 (202778cd-6b32-4486-9444-c167089d9082) |
| key_name | - |
| locked | False |
| metadata | {} |
| name | vm1 |
| os-extended-volumes:volumes_attached | [] |
| status | ERROR |
| tags | [] |
| tenant_id | 0d5998f2f7ec4c4892a32e06bafb19df |
| updated | 2017-08-05T14:18:16Z |
| user_id | 2a5fa182fb1b459980db09cd1572850e |
+--------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
EDIT-2
The nova-compute.log in /var/log/nova/ has useful information:
......
2017-08-05 22:17:42.669 103174 INFO nova.compute.resource_tracker [req-60a062ce-4b3d-4cb7-863e-2f9bba0bc6ec - - - - -] Compute_service record updated for ha-node1:ha-node1
2017-08-05 22:18:03.357 103174 ERROR nova.compute.manager [req-7dded91f-7497-4d20-ba89-69f867a2a8fb 2a5fa182fb1b459980db09cd1572850e 0d5998f2f7ec4c4892a32e06bafb19df - - -] [instance: 20193e58-2c5b-44c6-a98f-a44e2001934f] Instance failed to spawn
2017-08-05 22:18:03.357 103174 ERROR nova.compute.manager [instance: 20193e58-2c5b-44c6-a98f-a44e2001934f] Traceback (most recent call last):
2017-08-05 22:18:03.357 103174 ERROR nova.compute.manager [instance: 20193e58-2c5b-44c6-a98f-a44e2001934f] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2078, in _build_resources
2017-08-05 22:18:03.357 103174 ERROR nova.compute.manager [instance: 20193e58-2c5b-44c6-a98f-a44e2001934f] yield resources
2017-08-05 22:18:03.357 103174 ERROR nova.compute.manager [instance: 20193e58-2c5b-44c6-a98f-a44e2001934f] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1920, in _build_and_run_instance
2017-08-05 22:18:03.357 103174 ERROR nova.compute.manager [instance: 20193e58-2c5b-44c6-a98f-a44e2001934f] block_device_info=block_device_info)
2017-08-05 22:18:03.357 103174 ERROR nova.compute.manager [instance: 20193e58-2c5b-44c6-a98f-a44e2001934f] File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2584, in spawn
2017-08-05 22:18:03.357 103174 ERROR nova.compute.manager [instance: 20193e58-2c5b-44c6-a98f-a44e2001934f] admin_pass=admin_password)
2017-08-05 22:18:03.357 103174 ERROR nova.compute.manager [instance: 20193e58-2c5b-44c6-a98f-a44e2001934f] File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2959, in _create_image
2017-08-05 22:18:03.357 103174 ERROR nova.compute.manager [instance: 20193e58-2c5b-44c6-a98f-a44e2001934f] fileutils.ensure_tree(libvirt_utils.get_instance_path(instance))
2017-08-05 22:18:03.357 103174 ERROR nova.compute.manager [instance: 20193e58-2c5b-44c6-a98f-a44e2001934f] File "/usr/lib/python2.7/site-packages/oslo_utils/fileutils.py", line 40, in ensure_tree
2017-08-05 22:18:03.357 103174 ERROR nova.compute.manager [instance: 20193e58-2c5b-44c6-a98f-a44e2001934f] os.makedirs(path, mode)
2017-08-05 22:18:03.357 103174 ERROR nova.compute.manager [instance: 20193e58-2c5b-44c6-a98f-a44e2001934f] File "/usr/lib64/python2.7/os.py", line 157, in makedirs
2017-08-05 22:18:03.357 103174 ERROR nova.compute.manager [instance: 20193e58-2c5b-44c6-a98f-a44e2001934f] mkdir(name, mode)
2017-08-05 22:18:03.357 103174 ERROR nova.compute.manager [instance: 20193e58-2c5b-44c6-a98f-a44e2001934f] OSError: [Errno 13] Permission denied: '/var/lib/nova/instances/20193e58-2c5b-44c6-a98f-a44e2001934f'
2017-08-05 22:18:03.357 103174 ERROR nova.compute.manager [instance: 20193e58-2c5b-44c6-a98f-a44e2001934f]
2017-08-05 22:18:11.563 103174 INFO nova.compute.manager [req-7dded91f-7497-4d20-ba89-69f867a2a8fb 2a5fa182fb1b459980db09cd1572850e 0d5998f2f7ec4c4892a32e06bafb19df - - -] [instance: 20193e58-2c5b-44c6-a98f-a44e2001934f] Terminating instance
....
Debugging
Enable debug mode to get detailed logs.
Set debug = True in these files:
/etc/nova/nova.conf
/etc/cinder/cinder.conf
/etc/glance/glance-registry.conf
Restart the reconfigured services.
Try to create the instance again and check the logs.
Take a look at the nova-scheduler.log file and try to find a line like this:
.. INFO nova.filters [req-..] Filter DiskFilter returned 0 hosts
Above this line there should be DEBUG logs with detailed filter information, for example:
.. DEBUG nova.filters [req-..] Filter RetryFilter returned 1 host(s) get_filtered_objects /usr/lib/python2.7/site-packages/nova/filters.py:104
.. DEBUG nova.filters [req-..] Filter AvailabilityZoneFilter returned 1 host(s) get_filtered_objects /usr/lib/python2.7/site-packages/nova/filters.py:104
.. DEBUG nova.filters [req-..] Filter RamFilter returned 1 host(s) get_filtered_objects /usr/lib/python2.7/site-packages/nova/filters.py:104
.. DEBUG nova.filters [req-..] (...) ram: 37107MB disk: 11264MB io_ops: 0 instances: 4 does not have 17408 MB usable disk, it only has 11264.0 MB usable disk. host_passes /usr/lib/python2.7/site-packages/nova/scheduler/filters/disk_filter.py:70
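A sketch of the kind of scan that narrows this down. The sample log below is fabricated from the line formats above purely for illustration; in practice you would grep your real /var/log/nova/nova-scheduler.log:

```shell
# Build a tiny sample log in the format shown above (illustration only)
cat > /tmp/nova-scheduler.sample.log <<'EOF'
.. DEBUG nova.filters [req-1] Filter RetryFilter returned 1 host(s)
.. DEBUG nova.filters [req-1] Filter RamFilter returned 1 host(s)
.. INFO nova.filters [req-1] Filter DiskFilter returned 0 hosts
EOF

# Find the filter that eliminated every host
grep 'returned 0 hosts' /tmp/nova-scheduler.sample.log
```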
Overcommitting
OpenStack allows you to overcommit CPU and RAM on compute nodes, increasing the number of instances your cloud can run at the cost of per-instance performance. The Compute service uses the following ratios by default:
CPU allocation ratio: 16:1
RAM allocation ratio: 1.5:1
Please read the documentation for more information.
You can change the allocation ratios in nova.conf:
cpu_allocation_ratio
ram_allocation_ratio
disk_allocation_ratio
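For intuition, the scheduler's RAM check can be sketched like this (a simplification, not nova's actual code; the real RamFilter also subtracts reserved host memory):

```python
# Simplified sketch of the scheduler's RAM admission check
def ram_filter_passes(total_mb, free_mb, requested_mb, ram_allocation_ratio):
    limit_mb = total_mb * ram_allocation_ratio  # overcommitted ceiling
    used_mb = total_mb - free_mb                # memory already claimed
    return used_mb + requested_mb <= limit_mb

# Values from the compute_nodes rows above: plenty of headroom,
# so RAM was not what eliminated these hosts.
print(ram_filter_passes(7808, 7296, 512, 3.0))  # True
```

With ram_allocation_ratio = 3 the ceiling per node is 23424 MB against 512 MB used, so a 512 MB flavor passes easily; the real failure in this question came from the compute node itself (the Errno 13 in EDIT-2), not from RAM.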
First, check the output of "nova service-list" or "openstack compute service list". It should show at least one nova-compute service with state "up" and status "enabled".
If that is fine, the compute nodes are communicating properly with the scheduler. If not, check the nova-scheduler logs.
The nova-scheduler applies a series of filters (memory filter, CPU filter, aggregate filter, and so on) to narrow down hosts based on the flavor you selected. For example, if you select a flavor with 16 GB RAM, the memory filter keeps only compute hosts with that much memory available. After all the filtering is done, the scheduler tries to launch the instance on one of the remaining hosts; if that fails, it tries another. The default number of tries is 3. All of this appears in the scheduler logs, which will give you a clear idea of what went wrong.
Also check the 'nova show <instance-id>' output. If a compute host is present in "OS-EXT-SRV-ATTR:hypervisor_hostname", then the scheduler successfully allocated a compute host and something went wrong on that host itself. In that case, check the nova-compute logs on that hypervisor.
Finally I found that I had mounted /var/lib/nova/ on the NFS directory /mnt/sdb/var/lib/nova/, but /mnt/sdb/var/lib/nova/ was owned by root:root, so I changed it to nova:nova (matching /var/lib/nova/).
Command:
chown -R nova:nova nova
Edit /etc/nova/nova.conf on every compute node and adjust as your application requires:
cpu_allocation_ratio = 2.0
(double the physical cores can be used across all instances)
ram_allocation_ratio = 2.0
(double the total memory can be used across all instances)
Restart nova and nova-scheduler on every compute node:
systemctl restart openstack-nova-*
systemctl restart openstack-nova-scheduler.service

How to change working directory in scons

I have a project built with make, but I want to shift to scons.
However, I could not link object files with scons, so I want to know how to change the working directory in scons.
What I exactly want is
make -C $(OBJECTDIRECTORY) -f $(SOURCEDIRECTORY)./Makefile InternalDependency
This is one line from my Makefile, and works well.
However, when scons builds my project, it does
x86_64-pc-linux-ld -o build/kernel32/kernel32.elf -melf_i386 -T scripts/elf_i386.x -nostdlib -e main -Ttext 0x10200 build/kernel32/asmUtils.o build/kernel32/cpu.o build/kernel32/main.o build/kernel32/memory.o build/kernel32/pageManager.o build/kernel32/utils.o
and got an error,
x86_64-pc-linux-ld: cannot find main.o
Even when I run the same command manually in a shell, I get the same error.
However, if I move to build/kernel32 and run manually
x86_64-pc-linux-ld -o kernel32.elf -melf_i386 -T ../../elf_i386.x -nostdlib -e main -Ttext 0x10200 main.o cpu.o memory.o pageManager.o utils.o asmUtils.o
it works.
My assumption is that ld cannot find the object files from a higher directory.
So, is there any way to do something like Make's "-C" option?
Or any other workaround in scons?
Here are my SConscript and SConstruct files.
In project root directory,
#SConstruct
build_dir = 'build'
# Build
SConscript(['src/SConscript'], variant_dir = build_dir, duplicate = 0)
# Clean
Clean('.', build_dir)
In src directory
#SConscript for src
SConscript(['bootloader/SConscript',
            'kernel32/SConscript'])
In kernel32 directory
#SConscript for kernel32
import os, sys
# Build entry
env_entry = Environment(tools=['default', 'nasm'])
target_entry = 'entry.bin'
object_entry = 'entry.s'
output_entry = env_entry.Object(target_entry, object_entry)
# Compile CPP
env_gpp_options = {
    'CXX' : 'x86_64-pc-linux-g++',
    'CXXFLAGS' : '-std=c++11 -g -m32 -ffreestanding -fno-exceptions -fno-rtti',
    'LINK' : 'x86_64-pc-linux-ld',
    'LINKFLAGS' : '-melf_i386 -T scripts/elf_i386.x -nostdlib -e main -Ttext 0x10200',
}
env_gpp = Environment(**env_gpp_options)
env_gpp.Append(ENV = {'PATH' : os.environ['PATH']})
object_cpp_list = Glob('*.cpp')
for object_cpp in object_cpp_list:
    env_gpp.Object(object_cpp)
# Compile ASM
env_nasm = Environment(tools=['default', 'nasm'])
env_nasm.Append(ASFLAGS='-f elf32')
object_nasm_list = Glob('*.asm')
for object_nasm in object_nasm_list:
    env_nasm.Object(object_nasm)
# Find all object file
object_target_list = Glob('*.o')
object_target_list.append('entry.bin')
# Linking
env_link_target = 'kernel32.elf'
env_gpp.Program(env_link_target, object_target_list)
Please let me know. Thank you.
The log for "--tree=prune" is
scons: Reading SConscript files ...
scons: done reading SConscript files.
scons: Building targets ...
x86_64-pc-linux-ld -o build/kernel32/kernel32.elf -melf_i386 -T scripts/elf_i386.x -nostdlib -e main -Ttext 0x10200 build/kernel32/asmUtils.o build/kernel32/cpu.o build/kernel32/main.o build/kernel32/memory.o build/kernel32/pageManager.o build/kernel32/utils.o build/kernel32/entry.bin
+-.
+-SConstruct
+-build
| +-src/SConscript
| +-build/bootloader
| | +-src/bootloader/BootLoader.asm
| | +-src/bootloader/SConscript
| | +-build/bootloader/bootloader.bin
| | +-src/bootloader/BootLoader.asm
| | +-/usr/bin/nasm
| +-build/kernel32
| +-src/kernel32/SConscript
| +-src/kernel32/asmUtils.asm
| +-build/kernel32/asmUtils.o
| | +-src/kernel32/asmUtils.asm
| | +-/usr/bin/nasm
| +-src/kernel32/cpu.cpp
| +-build/kernel32/cpu.o
| | +-src/kernel32/cpu.cpp
| | +-src/kernel32/cpu.hpp
| | +-src/kernel32/types.hpp
| | +-/home/xaliver/BuildTools/cross/bin/x86_64-pc-linux-g++
| +-build/kernel32/entry.bin
| | +-src/kernel32/entry.s
| | +-/usr/bin/nasm
| +-src/kernel32/entry.s
| +-build/kernel32/kernel32.elf
| | +-[build/kernel32/asmUtils.o]
| | +-[build/kernel32/cpu.o]
| | +-build/kernel32/main.o
| | | +-src/kernel32/main.cpp
| | | +-src/kernel32/cpu.hpp
| | | +-src/kernel32/memory.hpp
| | | +-src/kernel32/types.hpp
| | | +-src/kernel32/utils.hpp
| | | +-src/kernel32/pageManager.hpp
| | | +-src/kernel32/page.hpp
| | | +-/home/xaliver/BuildTools/cross/bin/x86_64-pc-linux-g++
| | +-build/kernel32/memory.o
| | | +-src/kernel32/memory.cpp
| | | +-src/kernel32/memory.hpp
| | | +-src/kernel32/pageManager.hpp
| | | +-src/kernel32/page.hpp
| | | +-src/kernel32/types.hpp
| | | +-/home/xaliver/BuildTools/cross/bin/x86_64-pc-linux-g++
| | +-build/kernel32/pageManager.o
| | | +-src/kernel32/pageManager.cpp
| | | +-src/kernel32/pageManager.hpp
| | | +-src/kernel32/page.hpp
| | | +-src/kernel32/types.hpp
| | | +-/home/xaliver/BuildTools/cross/bin/x86_64-pc-linux-g++
| | +-build/kernel32/utils.o
| | | +-src/kernel32/utils.cpp
| | | +-src/kernel32/utils.hpp
| | | +-src/kernel32/types.hpp
| | | +-/home/xaliver/BuildTools/cross/bin/x86_64-pc-linux-g++
| | +-[build/kernel32/entry.bin]
| | +-/home/xaliver/BuildTools/cross/bin/x86_64-pc-linux-ld
| +-src/kernel32/main.cpp
| +-[build/kernel32/main.o]
| +-src/kernel32/memory.cpp
| +-[build/kernel32/memory.o]
| +-src/kernel32/pageManager.cpp
| +-[build/kernel32/pageManager.o]
| +-src/kernel32/utils.cpp
| +-[build/kernel32/utils.o]
+-src
+-src/SConscript
+-src/bootloader
| +-src/bootloader/BootLoader.asm
| +-src/bootloader/SConscript
+-src/kernel32
+-src/kernel32/SConscript
+-src/kernel32/asmUtils.asm
+-src/kernel32/cpu.cpp
+-src/kernel32/cpu.hpp
+-src/kernel32/entry.s
+-src/kernel32/main.cpp
+-src/kernel32/memory.cpp
+-src/kernel32/memory.hpp
+-src/kernel32/page.hpp
+-src/kernel32/pageManager.cpp
+-src/kernel32/pageManager.hpp
+-src/kernel32/types.hpp
+-src/kernel32/utils.cpp
+-src/kernel32/utils.hpp
scons: building terminated because of errors.
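No accepted answer is recorded here, but a common SCons pattern sidesteps the working-directory problem entirely: instead of calling Glob('*.o'), which matches files relative to the source directory rather than the variant dir, collect the nodes returned by each Object() call and hand those to Program(). SCons then resolves every path inside build/kernel32 itself. A sketch of the kernel32 SConscript under that approach (builder options copied from the question; this is untested build configuration, not a drop-in fix):

```python
# SConscript for kernel32 -- collect builder return values instead of
# globbing for '*.o', so all object paths resolve in the variant dir.
import os

env_gpp = Environment(
    CXX='x86_64-pc-linux-g++',
    CXXFLAGS='-std=c++11 -g -m32 -ffreestanding -fno-exceptions -fno-rtti',
    LINK='x86_64-pc-linux-ld',
    LINKFLAGS='-melf_i386 -T scripts/elf_i386.x -nostdlib -e main -Ttext 0x10200',
)
env_gpp.Append(ENV={'PATH': os.environ['PATH']})

env_nasm = Environment(tools=['default', 'nasm'])
env_nasm.Append(ASFLAGS='-f elf32')

objects = []
for src in Glob('*.cpp'):
    objects += env_gpp.Object(src)      # Object() returns the .o node(s)
for src in Glob('*.asm'):
    objects += env_nasm.Object(src)
objects += env_nasm.Object('entry.bin', 'entry.s')

env_gpp.Program('kernel32.elf', objects)
```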

xmlstarlet sel doesn't show subsequent lines correctly

This example shows an extract of an output file much bigger than this:
xmlstarlet fo jira-output.xml | egrep 'hours|username|worklog|work_date' | egrep -v 'external|time' | head -20
Gives me approximately this:
#<worklogs date_from="2014-06-01 00:00:00" date_to="2014-06-30 23:59:59" number_of_worklogs="222" format="xml" diffOnly="false" errorsOnly="false" validOnly="false" addBillingInfo="false" addIssueSummary="false" addIssueDescription="false" duration_ms="106" headerOnly="false" userName="" addIssueDetails="false" addParentIssue="false" addUserDetails="false" addWorklogDetails="false" billingKey="" issueKey="" projectKey="">
# <worklog>
# <worklog_id>15650</worklog_id>
# <hours>0.11666667</hours>
# <work_date>2014-06-07</work_date>
# <username>cadalso</username>
# </worklog>
# <worklog>
# <worklog_id>15653</worklog_id>
# <hours>0.2</hours>
# <work_date>2014-06-07</work_date>
# <username>cadalso</username>
# </worklog>
# <worklog>
# <worklog_id>15941</worklog_id>
# <hours>4.0</hours>
# <work_date>2014-06-17</work_date>
# <username>mrjcleaver</username>
# </worklog>
# <worklog>
#</worklogs>
This executes nicely, totalling:
xmlstarlet sel -T -t -v "sum(worklogs/worklog/hours)" --nl jira-output.xml
This total is different, but only because the XML file has many more rows in it:
4.31666667
But the following:
xmlstarlet sel -T -t -m /worklogs/worklog/worklog_id -v "concat('|',/worklogs/worklog/staff_id,' | ', /worklogs/worklog/worklog_id,' | ',/worklogs/worklog/work_date,' | ',/worklogs/worklog/hours,' |')" --nl jira-output.xml
Shows:
#| cadalso | 15650 | 2014-06-07 | 0.11666667 |
#| cadalso | 15650 | 2014-06-07 | 0.11666667 |
#| cadalso | 15650 | 2014-06-07 | 0.11666667 |
#... one for each row, but with the wrong values
Whereas what I want would be:
#| cadalso | 15650 | 2014-06-07 | 0.11666667 |
#| cadalso | 15653 | 2014-06-07 | 0.2 |
#| mrjcleaver | 15941 | 2014-06-17 | 4.0 |
What am I doing wrong?
Thanks, M.
Big thanks to npostavs; the answer was:
xmlstarlet sel -T -t -m /worklogs/worklog -v "concat('|',staff_id,' | ', worklog_id,' | ',work_date,' | ',hours,' |')" --nl jira-output.xml
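The fix works because -m makes each <worklog> the context node, so the value paths inside -v are resolved relative to it; the original template used absolute paths, which an XPath engine resolves to the first matching node every time. The same idea in Python's stdlib ElementTree, over a cut-down copy of the excerpt above (note the excerpt shows <username> elements while the commands reference staff_id, presumably present in the full file):

```python
import xml.etree.ElementTree as ET

xml = """<worklogs>
  <worklog><worklog_id>15650</worklog_id><hours>0.11666667</hours>
    <work_date>2014-06-07</work_date><username>cadalso</username></worklog>
  <worklog><worklog_id>15653</worklog_id><hours>0.2</hours>
    <work_date>2014-06-07</work_date><username>cadalso</username></worklog>
</worklogs>"""

root = ET.fromstring(xml)
# Match each <worklog>, then read fields *relative* to that node --
# mirroring the corrected `xmlstarlet -m /worklogs/worklog` template.
rows = [
    "| {} | {} | {} | {} |".format(
        wl.findtext("username"), wl.findtext("worklog_id"),
        wl.findtext("work_date"), wl.findtext("hours"))
    for wl in root.findall("worklog")
]
print("\n".join(rows))
```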
