NoValidHost: No valid host was found. There are not enough hosts available - openstack

When I create an instance in the dashboard, I get this error:
No valid host was found. There are not enough hosts available.
The /var/log/nova/nova-conductor.log file contains the following:
2017-08-05 00:22:29.046 3834 WARNING nova.scheduler.utils [req-89c159c7-b40a-43eb-8f0d-9306eb73e83a 2a5fa182fb1b459980db09cd1572850e 0d5998f2f7ec4c4892a32e06bafb19df - - -] Failed to compute_task_build_instances: No valid host was found. There are not enough hosts available.
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 199, in inner
return func(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py", line 104, in select_destinations
dests = self.driver.select_destinations(ctxt, spec_obj)
File "/usr/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py", line 74, in select_destinations
raise exception.NoValidHost(reason=reason)
NoValidHost: No valid host was found. There are not enough hosts available.
2017-08-05 00:22:29.048 3834 WARNING nova.scheduler.utils [req-89c159c7-b40a-43eb-8f0d-9306eb73e83a 2a5fa182fb1b459980db09cd1572850e 0d5998f2f7ec4c4892a32e06bafb19df - - -] [instance: 2011e343-c8fc-4ed0-8148-b0d2b5ba37c3] Setting instance to ERROR state.
2017-08-05 00:22:30.785 3834 WARNING oslo_config.cfg [req-89c159c7-b40a-43eb-8f0d-9306eb73e83a 2a5fa182fb1b459980db09cd1572850e 0d5998f2f7ec4c4892a32e06bafb19df - - -] Option "auth_plugin" from group "neutron" is deprecated. Use option "auth_type" from group "neutron".
I searched Stack Overflow and found a related post: Openstack-Devstack: Can't create instance, There are not enough hosts available
I checked the free_ram_mb in MySQL:
MariaDB [nova]> select * from compute_nodes \G;
*************************** 1. row ***************************
created_at: 2017-08-04 12:44:26
updated_at: 2017-08-04 13:51:35
deleted_at: NULL
id: 4
service_id: NULL
vcpus: 8
memory_mb: 7808
local_gb: 19
vcpus_used: 0
memory_mb_used: 512
local_gb_used: 0
hypervisor_type: QEMU
hypervisor_version: 1005003
cpu_info: {"vendor": "Intel", "model": "Broadwell", "arch": "x86_64", "features": ["smap", "avx", "clflush", "sep", "rtm", "vme", "invpcid", "msr", "fsgsbase", "xsave", "pge", "erms", "hle", "cmov", "tsc", "smep", "pcid", "pat", "lm", "abm", "adx", "3dnowprefetch", "nx", "fxsr", "syscall", "sse4.1", "pae", "sse4.2", "pclmuldq", "fma", "tsc-deadline", "mmx", "osxsave", "cx8", "mce", "de", "rdtscp", "ht", "pse", "lahf_lm", "rdseed", "popcnt", "mca", "pdpe1gb", "apic", "sse", "f16c", "ds", "invtsc", "pni", "aes", "avx2", "sse2", "ss", "hypervisor", "bmi1", "bmi2", "ssse3", "fpu", "cx16", "pse36", "mtrr", "movbe", "rdrand", "x2apic"], "topology": {"cores": 2, "cells": 1, "threads": 1, "sockets": 4}}
disk_available_least: 18
free_ram_mb: 7296
free_disk_gb: 19
current_workload: 0
running_vms: 0
hypervisor_hostname: ha-node1
deleted: 0
host_ip: 192.168.8.101
supported_instances: [["i686", "qemu", "hvm"], ["x86_64", "qemu", "hvm"]]
pci_stats: {"nova_object.version": "1.1", "nova_object.changes": ["objects"], "nova_object.name": "PciDevicePoolList", "nova_object.data": {"objects": []}, "nova_object.namespace": "nova"}
metrics: []
extra_resources: NULL
stats: {}
numa_topology: NULL
host: ha-node1
ram_allocation_ratio: 3
cpu_allocation_ratio: 16
uuid: 9113940b-7ec9-462d-af06-6988dbb6b6cf
disk_allocation_ratio: 1
*************************** 2. row ***************************
created_at: 2017-08-04 12:44:34
updated_at: 2017-08-04 13:50:47
deleted_at: NULL
id: 6
service_id: NULL
vcpus: 8
memory_mb: 7808
local_gb: 19
vcpus_used: 0
memory_mb_used: 512
local_gb_used: 0
hypervisor_type: QEMU
hypervisor_version: 1005003
cpu_info: {"vendor": "Intel", "model": "Broadwell", "arch": "x86_64", "features": ["smap", "avx", "clflush", "sep", "rtm", "vme", "invpcid", "msr", "fsgsbase", "xsave", "pge", "erms", "hle", "cmov", "tsc", "smep", "pcid", "pat", "lm", "abm", "adx", "3dnowprefetch", "nx", "fxsr", "syscall", "sse4.1", "pae", "sse4.2", "pclmuldq", "fma", "tsc-deadline", "mmx", "osxsave", "cx8", "mce", "de", "rdtscp", "ht", "pse", "lahf_lm", "rdseed", "popcnt", "mca", "pdpe1gb", "apic", "sse", "f16c", "ds", "invtsc", "pni", "aes", "avx2", "sse2", "ss", "hypervisor", "bmi1", "bmi2", "ssse3", "fpu", "cx16", "pse36", "mtrr", "movbe", "rdrand", "x2apic"], "topology": {"cores": 2, "cells": 1, "threads": 1, "sockets": 4}}
disk_available_least: 18
free_ram_mb: 7296
free_disk_gb: 19
current_workload: 0
running_vms: 0
hypervisor_hostname: ha-node2
deleted: 0
host_ip: 192.168.8.102
supported_instances: [["i686", "qemu", "hvm"], ["x86_64", "qemu", "hvm"]]
pci_stats: {"nova_object.version": "1.1", "nova_object.changes": ["objects"], "nova_object.name": "PciDevicePoolList", "nova_object.data": {"objects": []}, "nova_object.namespace": "nova"}
metrics: []
extra_resources: NULL
stats: {}
numa_topology: NULL
host: ha-node2
ram_allocation_ratio: 3
cpu_allocation_ratio: 16
uuid: 32b574df-52ac-43dc-87f8-353350449076
disk_allocation_ratio: 1
2 rows in set (0.00 sec)
As you can see, free_ram_mb is 7296, and I just want to create a 512 MB VM, but it failed.
EDIT-1
The nova services are all up:
[root@ha-node1 ~]# nova service-list
+----+------------------+----------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+----+------------------+----------+----------+---------+-------+----------------------------+-----------------+
| 2 | nova-consoleauth | ha-node3 | internal | enabled | up | 2017-08-05T14:20:25.000000 | - |
| 5 | nova-conductor | ha-node3 | internal | enabled | up | 2017-08-05T14:20:29.000000 | - |
| 7 | nova-cert | ha-node3 | internal | enabled | up | 2017-08-05T14:20:23.000000 | - |
| 15 | nova-scheduler | ha-node3 | internal | enabled | up | 2017-08-05T14:20:20.000000 | - |
| 22 | nova-cert | ha-node1 | internal | enabled | up | 2017-08-05T14:20:26.000000 | - |
| 29 | nova-conductor | ha-node1 | internal | enabled | up | 2017-08-05T14:20:22.000000 | - |
| 32 | nova-consoleauth | ha-node1 | internal | enabled | up | 2017-08-05T14:20:29.000000 | - |
| 33 | nova-consoleauth | ha-node2 | internal | enabled | up | 2017-08-05T14:20:29.000000 | - |
| 36 | nova-scheduler | ha-node1 | internal | enabled | up | 2017-08-05T14:20:30.000000 | - |
| 40 | nova-conductor | ha-node2 | internal | enabled | up | 2017-08-05T14:20:26.000000 | - |
| 44 | nova-cert | ha-node2 | internal | enabled | up | 2017-08-05T14:20:27.000000 | - |
| 46 | nova-scheduler | ha-node2 | internal | enabled | up | 2017-08-05T14:20:28.000000 | - |
| 49 | nova-compute | ha-node2 | nova | enabled | up | 2017-08-05T14:19:35.000000 | - |
| 53 | nova-compute | ha-node1 | nova | enabled | up | 2017-08-05T14:20:05.000000 | - |
+----+------------------+----------+----------+---------+-------+----------------------------+-----------------+
The nova list output:
[root@ha-node1 ~]# nova list
+--------------------------------------+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+------+--------+------------+-------------+----------+
| 20193e58-2c5b-44c6-a98f-a44e2001934f | vm1 | ERROR | - | NOSTATE | |
And the nova show output for the instance:
[root@ha-node1 ~]# nova show 20193e58-2c5b-44c6-a98f-a44e2001934f
+--------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Property | Value |
+--------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| OS-DCF:diskConfig | AUTO |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | - |
| OS-EXT-SRV-ATTR:hostname | vm1 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | - |
| OS-EXT-SRV-ATTR:instance_name | instance-00000003 |
| OS-EXT-SRV-ATTR:kernel_id | |
| OS-EXT-SRV-ATTR:launch_index | 0 |
| OS-EXT-SRV-ATTR:ramdisk_id | |
| OS-EXT-SRV-ATTR:reservation_id | r-jct8kkcq |
| OS-EXT-SRV-ATTR:root_device_name | /dev/vda |
| OS-EXT-SRV-ATTR:user_data | - |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | - |
| OS-EXT-STS:vm_state | error |
| OS-SRV-USG:launched_at | - |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
| config_drive | |
| created | 2017-08-05T14:17:54Z |
| description | vm1 |
| fault | {"message": "No valid host was found. There are not enough hosts available.", "code": 500, "details": " File \"/usr/lib/python2.7/site-packages/nova/conductor/manager.py\", line 496, in build_instances |
| | context, request_spec, filter_properties) |
| | File \"/usr/lib/python2.7/site-packages/nova/conductor/manager.py\", line 567, in _schedule_instances |
| | hosts = self.scheduler_client.select_destinations(context, spec_obj) |
| | File \"/usr/lib/python2.7/site-packages/nova/scheduler/utils.py\", line 370, in wrapped |
| | return func(*args, **kwargs) |
| | File \"/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py\", line 51, in select_destinations |
| | return self.queryclient.select_destinations(context, spec_obj) |
| | File \"/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py\", line 37, in __run_method |
| | return getattr(self.instance, __name)(*args, **kwargs) |
| | File \"/usr/lib/python2.7/site-packages/nova/scheduler/client/query.py\", line 32, in select_destinations |
| | return self.scheduler_rpcapi.select_destinations(context, spec_obj) |
| | File \"/usr/lib/python2.7/site-packages/nova/scheduler/rpcapi.py\", line 126, in select_destinations |
| | return cctxt.call(ctxt, 'select_destinations', **msg_args) |
| | File \"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py\", line 169, in call |
| | retry=self.retry) |
| | File \"/usr/lib/python2.7/site-packages/oslo_messaging/transport.py\", line 97, in _send |
| | timeout=timeout, retry=retry) |
| | File \"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py\", line 464, in send |
| | retry=retry) |
| | File \"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py\", line 455, in _send |
| | raise result |
| | ", "created": "2017-08-05T14:18:14Z"} |
| flavor | m1.tiny (1) |
| hostId | |
| host_status | |
| id | 20193e58-2c5b-44c6-a98f-a44e2001934f |
| image | cirros-0.3.4-x86_64 (202778cd-6b32-4486-9444-c167089d9082) |
| key_name | - |
| locked | False |
| metadata | {} |
| name | vm1 |
| os-extended-volumes:volumes_attached | [] |
| status | ERROR |
| tags | [] |
| tenant_id | 0d5998f2f7ec4c4892a32e06bafb19df |
| updated | 2017-08-05T14:18:16Z |
| user_id | 2a5fa182fb1b459980db09cd1572850e |
+--------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
EDIT-2
The nova-compute.log in /var/log/nova/ contains the useful information:
......
2017-08-05 22:17:42.669 103174 INFO nova.compute.resource_tracker [req-60a062ce-4b3d-4cb7-863e-2f9bba0bc6ec - - - - -] Compute_service record updated for ha-node1:ha-node1
2017-08-05 22:18:03.357 103174 ERROR nova.compute.manager [req-7dded91f-7497-4d20-ba89-69f867a2a8fb 2a5fa182fb1b459980db09cd1572850e 0d5998f2f7ec4c4892a32e06bafb19df - - -] [instance: 20193e58-2c5b-44c6-a98f-a44e2001934f] Instance failed to spawn
2017-08-05 22:18:03.357 103174 ERROR nova.compute.manager [instance: 20193e58-2c5b-44c6-a98f-a44e2001934f] Traceback (most recent call last):
2017-08-05 22:18:03.357 103174 ERROR nova.compute.manager [instance: 20193e58-2c5b-44c6-a98f-a44e2001934f] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2078, in _build_resources
2017-08-05 22:18:03.357 103174 ERROR nova.compute.manager [instance: 20193e58-2c5b-44c6-a98f-a44e2001934f] yield resources
2017-08-05 22:18:03.357 103174 ERROR nova.compute.manager [instance: 20193e58-2c5b-44c6-a98f-a44e2001934f] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1920, in _build_and_run_instance
2017-08-05 22:18:03.357 103174 ERROR nova.compute.manager [instance: 20193e58-2c5b-44c6-a98f-a44e2001934f] block_device_info=block_device_info)
2017-08-05 22:18:03.357 103174 ERROR nova.compute.manager [instance: 20193e58-2c5b-44c6-a98f-a44e2001934f] File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2584, in spawn
2017-08-05 22:18:03.357 103174 ERROR nova.compute.manager [instance: 20193e58-2c5b-44c6-a98f-a44e2001934f] admin_pass=admin_password)
2017-08-05 22:18:03.357 103174 ERROR nova.compute.manager [instance: 20193e58-2c5b-44c6-a98f-a44e2001934f] File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2959, in _create_image
2017-08-05 22:18:03.357 103174 ERROR nova.compute.manager [instance: 20193e58-2c5b-44c6-a98f-a44e2001934f] fileutils.ensure_tree(libvirt_utils.get_instance_path(instance))
2017-08-05 22:18:03.357 103174 ERROR nova.compute.manager [instance: 20193e58-2c5b-44c6-a98f-a44e2001934f] File "/usr/lib/python2.7/site-packages/oslo_utils/fileutils.py", line 40, in ensure_tree
2017-08-05 22:18:03.357 103174 ERROR nova.compute.manager [instance: 20193e58-2c5b-44c6-a98f-a44e2001934f] os.makedirs(path, mode)
2017-08-05 22:18:03.357 103174 ERROR nova.compute.manager [instance: 20193e58-2c5b-44c6-a98f-a44e2001934f] File "/usr/lib64/python2.7/os.py", line 157, in makedirs
2017-08-05 22:18:03.357 103174 ERROR nova.compute.manager [instance: 20193e58-2c5b-44c6-a98f-a44e2001934f] mkdir(name, mode)
2017-08-05 22:18:03.357 103174 ERROR nova.compute.manager [instance: 20193e58-2c5b-44c6-a98f-a44e2001934f] OSError: [Errno 13] Permission denied: '/var/lib/nova/instances/20193e58-2c5b-44c6-a98f-a44e2001934f'
2017-08-05 22:18:03.357 103174 ERROR nova.compute.manager [instance: 20193e58-2c5b-44c6-a98f-a44e2001934f]
2017-08-05 22:18:11.563 103174 INFO nova.compute.manager [req-7dded91f-7497-4d20-ba89-69f867a2a8fb 2a5fa182fb1b459980db09cd1572850e 0d5998f2f7ec4c4892a32e06bafb19df - - -] [instance: 20193e58-2c5b-44c6-a98f-a44e2001934f] Terminating instance
....

Debugging
Enable debug mode to get detailed logs.
Set debug = True in these files (see the example after this list):
/etc/nova/nova.conf
/etc/cinder/cinder.conf
/etc/glance/glance-registry.conf
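For example, a minimal sketch of the change in /etc/nova/nova.conf (the same flag goes in the other two files):
[DEFAULT]
debug = True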
Restart the reconfigured services.
Try to create the instance again and check the logs.
Take a look at the nova-scheduler.log file and try to find a line like this:
.. INFO nova.filters [req-..] Filter DiskFilter returned 0 hosts
Above this line there should be DEBUG logs with detailed filter information, for example:
.. DEBUG nova.filters [req-..] Filter RetryFilter returned 1 host(s) get_filtered_objects /usr/lib/python2.7/site-packages/nova/filters.py:104
.. DEBUG nova.filters [req-..] Filter AvailabilityZoneFilter returned 1 host(s) get_filtered_objects /usr/lib/python2.7/site-packages/nova/filters.py:104
.. DEBUG nova.filters [req-..] Filter RamFilter returned 1 host(s) get_filtered_objects /usr/lib/python2.7/site-packages/nova/filters.py:104
.. DEBUG nova.filters [req-..] (...) ram: 37107MB disk: 11264MB io_ops: 0 instances: 4 does not have 17408 MB usable disk, it only has 11264.0 MB usable disk. host_passes /usr/lib/python2.7/site-packages/nova/scheduler/filters/disk_filter.py:70
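If DiskFilter is the one returning 0 hosts, as in the example above, comparing the flavor's disk requirement with the hypervisor's free disk usually explains it. A quick check (the flavor name is only an example; substitute your own):
openstack flavor show m1.small
openstack hypervisor stats show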
Overcommitting
OpenStack allows you to overcommit CPU and RAM on compute nodes. This allows you to increase the number of instances running on your cloud at the cost of reducing the performance of the instances. The Compute service uses the following ratios by default:
CPU allocation ratio: 16:1
RAM allocation ratio: 1.5:1
Please read the documentation for more information.
You can change the allocation ratios in nova.conf (see the example after this list):
cpu_allocation_ratio
ram_allocation_ratio
disk_allocation_ratio
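For example, on the compute nodes /etc/nova/nova.conf might contain (the values shown are just the defaults named above, not a recommendation):
[DEFAULT]
cpu_allocation_ratio = 16.0
ram_allocation_ratio = 1.5
disk_allocation_ratio = 1.0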

First you need to check the output of "nova service-list" or "openstack compute service list". It should show at least one nova-compute service with state "up" and status "enabled".
If the above is fine, then the compute nodes are communicating properly with the scheduler. If not, then you need to check the nova-scheduler logs.
The nova-scheduler applies a series of filters (memory filter, CPU filter, aggregate filter, and so on) to filter hosts based on the flavor you selected. For example, if you select a flavor with 16 GB RAM, the memory filter will keep only the compute hosts that have that much memory available. After all the filtering is done, the scheduler will try to launch the instance on a filtered host; if it fails, it will try another host. The default number of tries is 3. All of this can be seen in the scheduler logs, which will give you a clear idea of what went wrong.
Also check the 'nova show <instance-id>' output. If you can see a compute host in "OS-EXT-SRV-ATTR:hypervisor_hostname", then the scheduler was able to allocate a compute host and something went wrong on that host. In that case, you need to check the nova-compute logs on that hypervisor.
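A quick way to find the filter that eliminated all hosts is to grep the scheduler log (the path assumes the default packaged layout):
grep "returned 0 hosts" /var/log/nova/nova-scheduler.log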

Finally I found that I had mounted /var/lib/nova/ onto the NFS directory /mnt/sdb/var/lib/nova/, but /mnt/sdb/var/lib/nova/ was owned by root:root, so I changed it to nova:nova (the same as /var/lib/nova/).
Command:
chown -R nova:nova nova
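Afterwards, a quick sanity check on the instances directory from the traceback should show nova as both owner and group:
ls -ld /var/lib/nova/instances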

Edit /etc/nova/nova.conf on all the compute nodes and adjust it to your application's requirements:
cpu_allocation_ratio = 2.0
(double the physical cores can be used across all instances)
ram_allocation_ratio = 2.0
(double the total memory can be used across all instances)
Restart nova and nova-scheduler on all the compute nodes:
systemctl restart openstack-nova-*
systemctl restart openstack-nova-scheduler.service
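After the restart, confirm that all services came back up:
openstack compute service list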

Related

How to prevent trimming of sbt dependencyTree output

Is it possible to configure sbt such that output from sbt dependencyTree is not trimmed?
Fragment of trimmed output:
[info] | | | | | | +-com.fasterxml.jackson.dataformat:jackson-dataformat-cbor:2.12...
[info] | | | | | | +-com.fasterxml.jackson.dataformat:jackson-dataformat-cbor:2.14.0
[info] | | | | | | | +-com.fasterxml.jackson.core:jackson-core:2.14.0 (evicted by: ..
[info] | | | | | | | +-com.fasterxml.jackson.core:jackson-core:2.14.1
[info] | | | | | | | +-com.fasterxml.jackson.core:jackson-databind:2.14.0 (evicted ..
[info] | | | | | | | +-com.fasterxml.jackson.core:jackson-databind:2.14.1
[info] | | | | | | | +-com.fasterxml.jackson.core:jackson-annotations:2.14.1
[info] | | | | | | | +-com.fasterxml.jackson.core:jackson-core:2.14.1
Trimming is problematic for me, as the output of this command is picked up by snyk cli, which gets confused and reports false positives.
I am using sbt 1.8.0.
The dependencyTree tasks observe the asciiGraphWidth setting starting with sbt 1.6.0.
See the sbt 1.6.0 release notes.
Thus:
add asciiGraphWidth in build.sbt:
asciiGraphWidth := 180
Alternatively, this can be passed in on the command line:
sbt 'set asciiGraphWidth := 180' dependencyTree

volume backup create what is errno 22?

I am trying to create a volume backup, both using the web UI and the command line, and keep getting errno 22. I'm unable to find information about the error or how to fix it. Does anyone know where I should start looking?
(openstack) volume backup create --force --name inventory01_vol_backups 398ee974-9b83-4918-9935-f52882b3e6b7
(openstack) volume backup show inventory01_vol_backups
+-----------------------+------------------------------------------------------------------+
| Field | Value |
+-----------------------+------------------------------------------------------------------+
| availability_zone | None |
| container | None |
| created_at | 2021-08-03T23:45:49.000000 |
| data_timestamp | 2021-08-03T23:45:49.000000 |
| description | None |
| fail_reason | [errno 22] RADOS invalid argument (error calling conf_read_file) |
| has_dependent_backups | False |
| id | 924c6e62-789e-4e51-9748-927695fc744c |
| is_incremental | False |
| name | inventory01_vol_backups |
| object_count | 0 |
| size | 30 |
| snapshot_id | None |
| status | error |
| updated_at | 2021-08-03T23:45:50.000000 |
| volume_id | 398ee974-9b83-4918-9935-f52882b3e6b7 |
+-----------------------+------------------------------------------------------------------+
The issue was caused by a bug in Cinder version 16.2.1.dev13. Updating Cinder to the latest version solved the issue.
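On an RDO-based deployment, for example, the update could look like this (the package and service names are assumptions; adjust them to your distribution and install method):
yum update openstack-cinder
systemctl restart openstack-cinder-backup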

Microstack - Cannot access (ping/ssh) launched VMs

I am trying to access some launched VMs without success. I followed this tutorial to create a private network. It is listed below:
+--------------------------------------+----------+--------------------------------------+
| ID | Name | Subnets |
+--------------------------------------+----------+--------------------------------------+
| 326a319c-e75d-48f1-ac36-aed342c45874 | private | f16b8b8c-482e-4cf5-a5d6-74e284b7e0f1 |
The security groups are listed below:
microstack.openstack security group list
+--------------------------------------+---------+------------------------+----------------------------------+------+
| ID | Name | Description | Project | Tags |
+--------------------------------------+---------+------------------------+----------------------------------+------+
| 04c5c579-91bf-4497-bd01-47c7fa69df81 | default | Default security group | 9c12393bf2e54547bef78aac510ba6c6 | [] |
| 3c69498c-c210-48c8-ba43-fbf60a0c224e | default | Default security group | 37f73779b3cd42dc96044ea0fd6d1e98 | [] |
| 5a20b02a-aac4-4c62-9ea2-24dfd8c59f67 | default | Default security group | | [] |
+--------------------------------------+---------+------------------------+----------------------------------+------+
I am using the following security group:
microstack.openstack security group show 3c69498c-c210-48c8-ba43-fbf60a0c224e
+-----------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+-----------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| created_at | 2020-08-14T17:54:45Z |
| description | Default security group |
| id | 3c69498c-c210-48c8-ba43-fbf60a0c224e |
| location | Munch({'cloud': '', 'region_name': '', 'zone': None, 'project': Munch({'id': '37f73779b3cd42dc96044ea0fd6d1e98', 'name': 'admin', 'domain_id': None, 'domain_name': 'default'})}) |
| name | default |
| project_id | 37f73779b3cd42dc96044ea0fd6d1e98 |
| revision_number | 3 |
| rules | created_at='2020-08-14T17:54:45Z', direction='egress', ethertype='IPv6', id='1e5c2fed-7c7a-4dd4-9e11-c87d0de012ee', updated_at='2020-08-14T17:54:45Z' |
| | created_at='2020-08-14T17:54:45Z', direction='ingress', ethertype='IPv4', id='36394ec6-0f35-4b26-9788-61bf76a08088', remote_group_id='3c69498c-c210-48c8-ba43-fbf60a0c224e', updated_at='2020-08-14T17:54:45Z' |
| | created_at='2020-08-14T17:54:45Z', direction='ingress', ethertype='IPv6', id='48986d96-ec57-4f49-aee8-6e1c68e273b1', remote_group_id='3c69498c-c210-48c8-ba43-fbf60a0c224e', updated_at='2020-08-14T17:54:45Z' |
| | created_at='2020-08-14T17:56:16Z', direction='ingress', ethertype='IPv4', id='58816267-8df8-4a89-a9c5-31986a441365', port_range_max='22', port_range_min='22', protocol='tcp', remote_ip_prefix='0.0.0.0/0', updated_at='2020-08-14T17:56:16Z' |
| | created_at='2020-08-14T17:54:45Z', direction='egress', ethertype='IPv4', id='c75e9aa8-84f3-4d05-9d33-0da7892f7a07', updated_at='2020-08-14T17:54:45Z' |
| | created_at='2020-08-14T17:56:14Z', direction='ingress', ethertype='IPv4', id='d029b66c-219e-488d-93af-1f87a9d8b006', protocol='icmp', remote_ip_prefix='0.0.0.0/0', updated_at='2020-08-14T17:56:14Z' |
| tags | [] |
| updated_at | 2020-08-14T17:56:16Z |
The command I used to launch the VM:
microstack.openstack server create --flavor m1.medium --image ubuntu_1804 --nic net-id=326a319c-e75d-48f1-ac36-aed342c45874 --key-name microstack --security-group 3c69498c-c210-48c8-ba43-fbf60a0c224e server_micro
Below, we can see the VM was launched:
microstack.openstack server list
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+--------------+--------+-----------------------------------+-------------+-----------+
| 9e88311d-0907-4534-ba5d-ee80d2de06ee | server_micro | ACTIVE | private=10.0.0.127 | ubuntu_1804 | m1.medium |
microstack.openstack server show 9e88311d-0907-4534-ba5d-ee80d2de06ee
+-------------------------------------+----------------------------------------------------------+
| Field | Value |
+-------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | jabuti |
| OS-EXT-SRV-ATTR:hypervisor_hostname | jabuti |
| OS-EXT-SRV-ATTR:instance_name | instance-0000000a |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2020-08-31T13:54:52.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | private=10.0.0.127 |
| config_drive | |
| created | 2020-08-31T13:54:45Z |
| flavor | m1.medium (3) |
| hostId | 61fe40d2c4303db62eef04a071c6d7ee01f0465ec467f911ac05e2c0 |
| id | 9e88311d-0907-4534-ba5d-ee80d2de06ee |
| image | ubuntu_1804 (a1d60e2d-72d7-47d8-8aea-e97e8ba2a09b) |
| key_name | microstack |
| name | server_micro |
| progress | 0 |
| project_id | 37f73779b3cd42dc96044ea0fd6d1e98 |
| properties | |
| security_groups | name='default' |
| status | ACTIVE |
| updated | 2020-08-31T13:54:53Z |
| user_id | ff66b68443994bfeb2101851e7ea026d |
| volumes_attached | |
+-------------------------------------+----------------------------------------------------------+
But I cannot access the launched instance:
ping 10.0.0.127
PING 10.0.0.127 (10.0.0.127) 56(84) bytes of data.
From 10.75.211.9: icmp_seq=2 Redirect Host(New nexthop: 10.75.211.13)
From 10.75.211.9: icmp_seq=3 Redirect Host(New nexthop: 10.75.211.13)
From 10.75.211.9: icmp_seq=4 Redirect Host(New nexthop: 10.75.211.13)
From 10.75.211.9: icmp_seq=5 Redirect Host(New nexthop: 10.75.211.13)
^C
--- 10.0.0.127 ping statistics ---
5 packets transmitted, 0 received, 100% packet loss, time 4004ms
What am I missing? What should I do to ping/ssh the launched instance?
Once we create a VM with a private network, we need to associate a floating IP to it. Below, I list the steps needed to solve the problem.
Create a floating IP for your external network:
microstack.openstack floating ip create external
Create a router to communicate both networks (internal and external):
microstack.openstack router create router1
Add the external network to the router:
microstack.openstack router set router1 --external-gateway external
Add your private subnetwork to the router:
microstack.openstack router add subnet router1 f16b8b8c-482e-4cf5-a5d6-74e284b7e0f1
Associate the floating IP to your VM (suppose the created IP is 10.20.20.92):
microstack.openstack server add floating ip server_micro 10.20.20.92
Now you should be able to ping the VM and access it through ssh.
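For example, assuming the default ubuntu login user of the ubuntu_1804 image and the private key of the microstack keypair saved at ~/.ssh/microstack (the key path is an assumption):
ping 10.20.20.92
ssh -i ~/.ssh/microstack ubuntu@10.20.20.92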

IPOJO Component is not exposing service properly

I have an iPOJO component like:
@Component(immediate = true)
@Provides
public class MyComponent implements MyService
{
    @Override
    public boolean islock()
    {
        return true;
    }
}
When I deploy it in Karaf with the command below:
bundle:install -s mvn:com.my.osgi/mycomponent/0.0.1
and then run service:list at the Karaf console, it shows output like:
[org.apache.felix.ipojo.Factory]
--------------------------------
component.class = com.my.osgi.mycomponent
component.description = factory name="com.my.osgi.mycomponent"
bundle="77" state="valid" implementation-class="com.my.osgi.mycomponent"
requiredhandlers list="[org.apache.felix.ipojo:properties,
org.apache.felix.ipojo:callback, org.apache.felix.ipojo:provides,
org.apache.felix.ipojo:architecture]"
missinghandlers list="[]"
provides specification="com.my.osgi.mycomponent"
inherited interfaces="[com.my.osgi.mycomponent]"
superclasses="[]"
component.providedServiceSpecifications = [com.my.osgi.mycomponent]
factory.name = com.my.osgi.mycomponent
factory.state = 1
service.bundleid = 77
service.id = 153
service.pid = com.my.osgi.mycomponent
service.scope = singleton
But I am expecting something like this:
[com.my.osgi.mycomponent]
-----------------------------------------------------
instance.name = mycomponent.3c2c91a5-4c28-46c3-a08e-1470192ef353
service.bundleid = 76
service.factoryPid = com.my.osgi.mycomponent
service.id = 397
service.pid =
com.my.osgi.mycomponent.3c2c91a5-4c28-46c3-a08e-1470192ef353
service.scope = bundle
What am I doing wrong? I am building with Maven.
EDIT:
One observation: this bundle works in a bigger application, and when it gets deployed there the logs are
2018-07-17T12:03:50,011 | DEBUG | [iPOJO] pool-1-thread-1 | my-component | 38 - com.my.osgi.my-component - 1.0.3 | ServiceEvent REGISTERED - [org.apache.felix.ipojo.architecture.Architecture] - com.my.osgi.my-component
2018-07-17T12:03:50,021 | DEBUG | [iPOJO] pool-1-thread-1 | my-component | 38 - com.my.osgi.my-component - 1.0.3 | ServiceEvent REGISTERED - [com.my.osgi.my-component] - com.my.osgi.my-component
2018-07-17T12:03:50,021 | DEBUG | [iPOJO] pool-1-thread-1 | my-component | 38 - com.my.osgi.my-component - 1.0.3 | ServiceEvent MODIFIED - [com.my.osgi.my-component] - com.my.osgi.my-component
2018-07-17T12:03:50,021 | DEBUG | [iPOJO] pool-1-thread-1 | my-component | 38 - com.my.osgi.my-component - 1.0.3 | ServiceEvent MODIFIED - [com.my.osgi.my-component] - com.my.osgi.my-component
2018-07-17T12:03:50,021 | DEBUG | [iPOJO] pool-1-thread-1 | my-component | 38 - com.my.osgi.my-component - 1.0.3 | ServiceEvent MODIFIED - [com.my.osgi.my-component] - com.my.osgi.my-component
2018-07-17T12:03:50,021 | DEBUG | [iPOJO] pool-1-thread-1 | my-component | 38 - com.my.osgi.my-component - 1.0.3 | ServiceEvent REGISTERED - [org.apache.felix.ipojo.Factory] - com.my.osgi.my-component
but in my case there are only these two lines:
2018-07-17T11:45:43,624 | DEBUG | [iPOJO] pool-1-thread-1 | my-component | 11 - com.my.osgi.my-component - 2.0.4.SNAPSHOT | ServiceEvent REGISTERED - [org.apache.felix.ipojo.extender.TypeDeclaration] - com.my.osgi.my-component
2018-07-17T11:45:43,654 | DEBUG | [iPOJO] pool-1-thread-1 | my-component | 11 - com.my.osgi.my-component - 2.0.4.SNAPSHOT | ServiceEvent REGISTERED - [org.apache.felix.ipojo.Factory] - com.my.osgi.my-component
So what is this Architecture doing, and what got modified?
Just add the @Instantiate annotation on your component class. Alternatively, you can use:
the iPOJO Factory service (http://felix.apache.org/documentation/subprojects/apache-felix-ipojo/apache-felix-ipojo-userguide/ipojo-advanced-topics/ipojo-factory-service.html)
the Config Admin (http://felix.apache.org/documentation/subprojects/apache-felix-ipojo/apache-felix-ipojo-userguide/ipojo-advanced-topics/combining-ipojo-and-configuration-admin.html)
a class annotated with @Configuration

openstack ocata glance creating 0 sized image

When I create a new image using Glance, it does not matter whether I use the CLI or the GUI: I get return code 0 and the image is created, but its size is zero.
The behavior is slightly different in that from the GUI my browser crashes but the image is still created; from the CLI I get return code 0.
Command:
openstack image create --file cirros-0.4.0-x86_64-disk.img --disk-format qcow2 --container-format bare --public --debug cirros-deb
+------------------+------------------------------------------------------+
| Field | Value |
+------------------+------------------------------------------------------+
| checksum | d41d8cd98f00b204e9800998ecf8427e |
| container_format | bare |
| created_at | 2018-01-20T23:24:47Z |
| disk_format | qcow2 |
| file | /v2/images/c695bc30-731d-4a4f-ab0f-12eb972d8188/file |
| id | c695bc30-731d-4a4f-ab0f-12eb972d8188 |
| min_disk | 0 |
| min_ram | 0 |
| name | cirros-deb |
| owner | a3460a3b0e8f4d0bbdd25bf790fe504c |
| protected | False |
| schema | /v2/schemas/image |
| size | 0 |
| status | active |
| tags | |
| updated_at | 2018-01-20T23:24:47Z |
| virtual_size | None |
| visibility | public |
+------------------+------------------------------------------------------+
clean_up CreateImage:
END return value: 0
I tried with a different CirrOS image and with an Ubuntu cloud image; the behavior is always the same.
Under /var/lib/glance/images a file is created with size 0:
-rw-r-----. 1 glance glance 0 Jan 21 00:24 c695bc30-731d-4a4f-ab0f-12eb972d8188
grep c695bc30-731d-4a4f-ab0f-12eb972d8188 glance/api.log
2018-01-21 00:24:47.915 1894 INFO eventlet.wsgi.server [req-7246cd30-47c4-41a5-b358-c8e5cc0f4e56 8bd3e4905ffb4f698e2476d9080a7d90 a3460a3b0e8f4d0bbdd25bf790fe504c - default default] 172.19.254.50 - - [21/Jan/2018 00:24:47] "PUT /v2/images/c695bc30-731d-4a4f-ab0f-12eb972d8188/file HTTP/1.1" 204 213 0.111323
2018-01-21 00:24:47.931 1894 INFO eventlet.wsgi.server [req-28e0cda2-c9f7-4543-b19a-d59eccffa47e 8bd3e4905ffb4f698e2476d9080a7d90 a3460a3b0e8f4d0bbdd25bf790fe504c - default default] 172.19.254.50 - - [21/Jan/2018 00:24:47] "GET /v2/images/c695bc30-731d-4a4f-ab0f-12eb972d8188 HTTP/1.1" 200 780 0.015399
Any idea what could be wrong?
Find the location of the Python glance client:
find / -name http.py
vi /usr/lib/python2.7/site-packages/glanceclient/common/http.py
Then apply the following change (i.e. drop the chunked body handling):
-    data = self._chunk_body(data)
+    pass
References:
https://bugs.launchpad.net/python-glanceclient/+bug/1666511
https://ask.openstack.org/en/question/101944/why-does-openstack-image-create-of-cirros-result-in-size-0/?answer=102303#post-id-102303
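After patching the client and re-uploading, the image size should be non-zero; a quick check using the same columns as the output above:
openstack image show cirros-deb -c size -c checksum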
