Is there any way to change an OpenStack flavor's name after creation, while there are VMs using that flavor? For example, consider this flavor:
+----------------------------+--------------------+
| Field | Value |
+----------------------------+--------------------+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | 0 |
| access_project_ids | None |
| description | None |
| disk | 0 |
| id | g1-1-1-0 |
| name | g1-1-1-0 |
| os-flavor-access:is_public | True |
| properties | |
| ram | 1024 |
| rxtx_factor | 1.0 |
| swap | |
| vcpus | 1 |
+----------------------------+--------------------+
How can I change its name field to g1-small?
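If an in-place rename turns out not to be possible, is the expected approach to create a replacement flavor with the same specs under the new name and retire the old one? For example (a sketch only, reusing the values from the table above; existing VMs should keep referencing the old flavor):

openstack flavor create g1-small --id g1-small --vcpus 1 --ram 1024 --disk 0 --public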
With pywinauto version 0.6.5, I am trying to automate a Qt button click. After connecting to the app window, print_ctrl_ids() displays seemingly irrelevant controls and does not include the button I need.
|
| TitleBar - '' (L598, T165, R1340, B184)
| [u'', u'1', u'0', 'TitleBar']
| |
| | Menu - '系统' (L580, T165, R598, B183)
| | [u'Menu', u'\u7cfb\u7edfMenu', u'\u7cfb\u7edf', u'\u7cfb\u7edf1', u'\u7cfb\u7edf0']
| | child_window(title="系统", auto_id="MenuBar", control_type="MenuBar")
| | |
| | | MenuItem - '系统' (L580, T165, R598, B183)
| | | [u'\u7cfb\u7edfMenuItem', u'MenuItem', u'\u7cfb\u7edf2']
| | | child_window(title="系统", control_type="MenuItem")
| |
| | Button - '关闭' (L1321, T164, R1341, B184)
| | [u'\u5173\u95edButton', 'Button', u'\u5173\u95ed']
| | child_window(title="关闭", control_type="Button")
|
| Custom - '' (L580, T184, R1340, B853)
| ['Custom', u'Custom0', u'2', u'Custom1']
| |
| | Custom - '' (L580, T184, R1340, B853)
| | ['Custom', u'Custom0', u'2', u'Custom1']
| | |
| | | Custom - '' (L580, T184, R1340, B514)
| | | [u'4', 'Custom3']
| | | |
| | | | Custom - '' (L580, T461, R1340, B514)
| | | | [u'5', 'Custom4']
| | | | |
| | | | | Custom - '' (L652, T461, R1268, B514)
| | | | | [u'6', 'Custom5']
| | |
| | | Custom - '' (L652, T550, R1098, B569)
| | | [u'7', 'Custom6']
| | |
| | | Custom - '' (L652, T589, R1098, B647)
| | | [u'8', 'Custom7']
| | |
| | | Custom - '' (L1158, T550, R1268, B660)
| | | [u'9', 'Custom8']
| | |
| | | Custom - '' (L1158, T662, R1268, B710)
| | | [u'10', 'Custom9']
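For reference, this is roughly how I connect and dump the tree (a sketch; the window title is a placeholder, and backend="uia" is an assumption based on the control_type entries shown above):

from pywinauto import Application

# Placeholder title; the real Qt application's window title goes here.
app = Application(backend="uia").connect(title_re=".*MyQtApp.*")
main = app.window(title_re=".*MyQtApp.*")
# Same call that produced the dump above; print_ctrl_ids is an alias
# of print_control_identifiers.
main.print_ctrl_ids(depth=None)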
I'm running OpenStack and am trying to get my Gnocchi meters to come through more frequently so that I can run a scaling demo without 5-minute lags. In Gnocchi I have changed the archive policy to a custom policy with granularity set to 30 seconds (I've also tried the following with the existing 'medium' policy, with the same result):
+---------------------+--------------------------------------------------------+
| Field | Value |
+---------------------+--------------------------------------------------------+
| aggregation_methods | std, count, min, max, sum, mean |
| back_window | 0 |
| definition | - points: 120, granularity: 0:00:30, timespan: 1:00:00 |
| name | test |
+---------------------+--------------------------------------------------------+
The cpu_util meter is picking it up correctly:
+------------------------------------+-------------------------------------------------------------------+
| Field | Value |
+------------------------------------+-------------------------------------------------------------------+
| archive_policy/aggregation_methods | std, count, min, max, sum, mean |
| archive_policy/back_window | 0 |
| archive_policy/definition | - points: 120, granularity: 0:00:30, timespan: 1:00:00 |
| archive_policy/name | test |
| created_by_project_id | e499d0c2e0fb4a05ac39c3f8c260052b |
| created_by_user_id | 21759a51f3834b9bbae49c3ed17a13e4 |
| creator | 21759a51f3834b9bbae49c3ed17a13e4:e499d0c2e0fb4a05ac39c3f8c260052b |
| id | e5a02f3a-9fbe-4e44-bb91-e1cfe6b86143 |
| name | cpu_util |
| resource/created_by_project_id | e499d0c2e0fb4a05ac39c3f8c260052b |
| resource/created_by_user_id | 21759a51f3834b9bbae49c3ed17a13e4 |
| resource/creator | 21759a51f3834b9bbae49c3ed17a13e4:e499d0c2e0fb4a05ac39c3f8c260052b |
| resource/ended_at | None |
| resource/id | 243b9715-95ba-4532-9728-3e61776e1c29 |
| resource/original_resource_id | 243b9715-95ba-4532-9728-3e61776e1c29 |
| resource/project_id | 43a7db62d5d54c4590e363868fff49e2 |
| resource/revision_end | None |
| resource/revision_start | 2018-08-08T14:05:09.770765+00:00 |
| resource/started_at | 2018-08-08T13:20:45.948842+00:00 |
| resource/type | instance |
| resource/user_id | 4e5015006b304e7ca57edc5419b42be3 |
| unit | % |
+------------------------------------+-------------------------------------------------------------------+
But the measurements are still only coming out every 5 minutes:
gnocchi measures show e5a02f3a-9fbe-4e44-bb91-e1cfe6b86143
+---------------------------+-------------+--------------+
| timestamp | granularity | value |
+---------------------------+-------------+--------------+
| 2018-08-08T13:30:00+00:00 | 30.0 | 0.0400002375 |
| 2018-08-08T13:35:00+00:00 | 30.0 | 0.0366666763 |
| 2018-08-08T13:40:00+00:00 | 30.0 | 0.0366667101 |
| 2018-08-08T13:45:00+00:00 | 30.0 | 0.0399999545 |
| 2018-08-08T13:50:00+00:00 | 30.0 | 0.0366664861 |
| 2018-08-08T13:55:00+00:00 | 30.0 | 0.0400000543 |
| 2018-08-08T14:00:00+00:00 | 30.0 | 0.0366665877 |
+---------------------------+-------------+--------------+
Any ideas what I am missing?
I had the same issue. In Gnocchi-backed Ceilometer there is a new configuration file, polling.yaml, and the resource polling interval is set there.
https://review.opendev.org/#/c/405682/
https://docs.openstack.org/ceilometer/pike/admin/telemetry-best-practices.html
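For illustration, the relevant stanza in /etc/ceilometer/polling.yaml looks roughly like this (a sketch; the source name and the meter list are placeholders, the interval value is what controls the polling frequency, and the polling agents need a restart to pick up the change):

---
sources:
    - name: cpu_source
      interval: 30
      meters:
        - cpu_util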
After a successful multi-host installation of OpenStack RDO, the MySQL database does not show any users in nova.
Running SHOW TABLES; in the nova database does not list a user or users table.
MariaDB [nova]> SHOW TABLES;
+--------------------------------------------+
| Tables_in_nova |
+--------------------------------------------+
| agent_builds |
| aggregate_hosts |
| aggregate_metadata |
| aggregates |
| allocations |
| block_device_mapping |
| bw_usage_cache |
| cells |
| certificates |
| compute_nodes |
| console_pools |
| consoles |
| dns_domains |
| fixed_ips |
| floating_ips |
| instance_actions |
| instance_actions_events |
| instance_extra |
| instance_faults |
| instance_group_member |
| instance_group_policy |
| instance_groups |
| instance_id_mappings |
| instance_info_caches |
| instance_metadata |
| instance_system_metadata |
| instance_type_extra_specs |
| instance_type_projects |
| instance_types |
| instances |
| inventories |
| key_pairs |
| migrate_version |
| migrations |
| networks |
| pci_devices |
| project_user_quotas |
| provider_fw_rules |
| quota_classes |
| quota_usages |
| quotas |
| reservations |
| resource_provider_aggregates |
| resource_providers |
| s3_images |
| security_group_default_rules |
| security_group_instance_association |
| security_group_rules |
| security_groups |
| services |
| shadow_agent_builds |
| shadow_aggregate_hosts |
| shadow_aggregate_metadata |
| shadow_aggregates |
| shadow_block_device_mapping |
| shadow_bw_usage_cache |
| shadow_cells |
| shadow_certificates |
| shadow_compute_nodes |
| shadow_console_pools |
| shadow_consoles |
| shadow_dns_domains |
| shadow_fixed_ips |
| shadow_floating_ips |
| shadow_instance_actions |
| shadow_instance_actions_events |
| shadow_instance_extra |
| shadow_instance_faults |
| shadow_instance_group_member |
| shadow_instance_group_policy |
| shadow_instance_groups |
| shadow_instance_id_mappings |
| shadow_instance_info_caches |
| shadow_instance_metadata |
| shadow_instance_system_metadata |
| shadow_instance_type_extra_specs |
| shadow_instance_type_projects |
| shadow_instance_types |
| shadow_instances |
| shadow_key_pairs |
| shadow_migrate_version |
| shadow_migrations |
| shadow_networks |
| shadow_pci_devices |
| shadow_project_user_quotas |
| shadow_provider_fw_rules |
| shadow_quota_classes |
| shadow_quota_usages |
| shadow_quotas |
| shadow_reservations |
| shadow_s3_images |
| shadow_security_group_default_rules |
| shadow_security_group_instance_association |
| shadow_security_group_rules |
| shadow_security_groups |
| shadow_services |
| shadow_snapshot_id_mappings |
| shadow_snapshots |
| shadow_task_log |
| shadow_virtual_interfaces |
| shadow_volume_id_mappings |
| shadow_volume_usage_cache |
| snapshot_id_mappings |
| snapshots |
| tags |
| task_log |
| virtual_interfaces |
| volume_id_mappings |
| volume_usage_cache |
+--------------------------------------------+
109 rows in set (0.00 sec)
The user table is in the keystone database, not in the nova database.
MariaDB [keystone]> show tables;
+------------------------+
| Tables_in_keystone |
+------------------------+
..........................
| trust |
| trust_role |
| user |
..........................
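If the goal is just to inspect users, they are owned by Keystone, so they can be listed through the identity CLI instead of querying nova's database (assuming admin credentials are sourced):

openstack user list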
Is there any way to find the possible fields for the nova show --fields command? Currently I can fill the --fields parameter with anything and it never complains; it just fails silently.
The docs say to use the show command, but that reports a machine's characteristics.
What nova show returns may differ from version to version; here is an example:
$ nova show b3cdc6c0-85a7-4904-ae85-71918f734048
+-------------------------------------+-------------------------------------+
| Property | Value |
+-------------------------------------+-------------------------------------+
| OS-EXT-STS:task_state | scheduling |
| image | cirros-0.3.2-x86_64-uec |
| OS-EXT-STS:vm_state | building |
| OS-EXT-SRV-ATTR:instance_name | instance-00000002 |
| flavor | m1.small |
| id | b3cdc6c0-85a7-4904-ae85-71918f734048|
| security_groups | [{u'name': u'default'}] |
| user_id | 376744b5910b4b4da7d8e6cb483b06a8 |
| OS-DCF:diskConfig | MANUAL |
| accessIPv4 | |
| accessIPv6 | |
| progress | 0 |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-AZ:availability_zone | nova |
| config_drive | |
| status | BUILD |
| updated | 2013-07-16T16:25:34Z |
| hostId | |
| OS-EXT-SRV-ATTR:host | None |
| key_name | KeyPair01 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| name | myCirrosServer |
| adminPass | tVs5pL8HcPGw |
| tenant_id | 66265572db174a7aa66eba661f58eb9e |
| created | 2013-07-16T16:25:34Z |
| metadata | {u'KEY': u'VALUE'} |
+-------------------------------------+-------------------------------------+
The Property column lists all the fields available to pass to --fields.
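For example, picking a few names from the Property column above (this assumes the --fields flag is accepted by your client version, as described in the question; the UUID is the one from the sample output):

nova show --fields status,flavor,key_name b3cdc6c0-85a7-4904-ae85-71918f734048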