Drupal error - Symfony\Component\HttpKernel\Exception\NotFoundHttpException: No route found for "GET http://localhost:8083/"

I installed Drupal 10 and MariaDB with Docker. I made it through the installation wizard, but when I go to the homepage after the installation completes I see the following (I had to enable debugging to get the full message):
The website encountered an unexpected error. Please try again later.
Symfony\Component\HttpKernel\Exception\NotFoundHttpException: No route found for "GET http://localhost:8083/" in Symfony\Component\HttpKernel\EventListener\RouterListener->onKernelRequest() (line 128 of /opt/drupal/vendor/symfony/http-kernel/EventListener/RouterListener.php).
Drupal\Core\Routing\AccessAwareRouter->matchRequest(Object) (Line: 106)
Symfony\Component\HttpKernel\EventListener\RouterListener->onKernelRequest(Object, 'kernel.request', Object)
call_user_func(Array, Object, 'kernel.request', Object) (Line: 111)
Drupal\Component\EventDispatcher\ContainerAwareEventDispatcher->dispatch(Object, 'kernel.request') (Line: 139)
Symfony\Component\HttpKernel\HttpKernel->handleRaw(Object, 1) (Line: 74)
Symfony\Component\HttpKernel\HttpKernel->handle(Object, 1, 1) (Line: 58)
Drupal\Core\StackMiddleware\Session->handle(Object, 1, 1) (Line: 48)
Drupal\Core\StackMiddleware\KernelPreHandle->handle(Object, 1, 1) (Line: 48)
Drupal\Core\StackMiddleware\ReverseProxyMiddleware->handle(Object, 1, 1) (Line: 51)
Drupal\Core\StackMiddleware\NegotiationMiddleware->handle(Object, 1, 1) (Line: 51)
Drupal\Core\StackMiddleware\StackedHttpKernel->handle(Object, 1, 1) (Line: 681)
Drupal\Core\DrupalKernel->handle(Object) (Line: 19)
Here's my docker-compose file:
version: '3'
services:
  drupal:
    image: drupal:10.0.2
    ports:
      - 8083:80
    volumes:
      - ./drupal_modules:/var/www/html/modules
      - ./drupal_profiles:/var/www/html/profiles
      - ./drupal_themes:/var/www/html/themes
      - ./drupal_sites:/var/www/html/sites
    # restart: always
  mariadb:
    image: mariadb:10.10
    volumes:
      - mariadb:/var/lib/mysql
    environment:
      TZ: "Europe/Rome"
      MYSQL_ALLOW_EMPTY_PASSWORD: "no"
      MYSQL_ROOT_PASSWORD: "rootpwd"
      MYSQL_USER: 'drupal'
      MYSQL_PASSWORD: 'drupal'
      MYSQL_DATABASE: 'drupal'
volumes:
  mariadb:
  # drupal_modules:
  # drupal_profiles:
  # drupal_themes:
  # drupal_sites:
I can successfully view the database and its tables in the mariadb container:
> docker exec -it 348cd7dc8853 sh
# mysql -udrupal -p
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 4
Server version: 10.10.2-MariaDB-1:10.10.2+maria~ubu2204 mariadb.org binary distribution
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> show databases
-> ;
+--------------------+
| Database |
+--------------------+
| drupal |
| information_schema |
+--------------------+
2 rows in set (0.000 sec)
MariaDB [(none)]> use drupal
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
MariaDB [drupal]> show tables;
+------------------+
| Tables_in_drupal |
+------------------+
| batch |
| cache_config |
| cache_container |
| cache_data |
| cachetags |
| config |
| key_value |
| queue |
| sequences |
| sessions |
| user__roles |
| users |
| users_data |
| users_field_data |
+------------------+
14 rows in set (0.001 sec)

Related

How to fix mariadb replication: Could not execute Delete_rows_v1 event on table db.tableName; Index for table './db/tableName.MYI'

We have 2 MariaDB servers in replication (master-slave). Those 2 servers were turned off unexpectedly. When the MariaDB servers came back online, the MyISAM tables were checked on db1 and db2:
| 85 | db | ip:55336 | db | Query | 4398 | Checking table | tableName | 0.000 |
I changed the slave to read the master binary log from a new file and a new position (I think there was no replication lag), but when I start the slave on db2 I get a replication error:
Could not execute Delete_rows_v1 event on table db.tableName; Index for table './db/tableName.MYI' is corrupt; try to repair it, Error_code: 126; handler error HA_ERR_WRONG_IN_RECORD; the event's master log binlog-file-01, end_log_pos 8980
Can you help me figure out how to fix it?
We deleted all rows in db1 for this table. Should I remove all rows for this table on db2 and then skip all replication steps connected with that? There were a lot of these rows.
Additionally:
On DB2:
Exec_Master_Log_Pos: 810
When I read event logs for this file on db1:
MariaDB [(none)]> SHOW BINLOG EVENTS IN 'binlog-file-01' from 810 limit 5;
+---------------------------+------+----------------+-----------+-------------+------------------------------------------------+
| Log_name | Pos | Event_type | Server_id | End_log_pos | Info |
+---------------------------+------+----------------+-----------+-------------+------------------------------------------------+
| binlog-file-01 | 810 | Gtid | 1 | 852 | BEGIN GTID 0-1-5630806796 |
| binlog-file-01 | 852 | Annotate_rows | 1 | 908 | DELETE FROM tableName |
| binlog-file-01 | 908 | Table_map | 1 | 993 | table_id: 107 (db.tableName) |
| binlog-file-01 | 993 | Delete_rows_v1 | 1 | 8980 | table_id: 107 |
| binlog-file-01 | 8980 | Delete_rows_v1 | 1 | 17006 | table_id: 107 |
+---------------------------+------+----------------+-----------+-------------+------------------------------------------------+
5 rows in set (0.02 sec)
Can I just skip this on replication?
./db/tableName.MYI indicates a corrupt index. If CHECK TABLE db.tableName didn't resolve this, you can try:
drop and recreate the indexes on this table on the replica
truncate table tableName (the binary log appears to be a DELETE FROM tableName without a WHERE clause)
If the replication still fails on this binary log entry you can skip it with:
SET GLOBAL sql_slave_skip_counter=1
I recommend changing your MyISAM tables to Aria or InnoDB so they are crash-safe in the case of a power failure. Also set sync_binlog=1 for durability.
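Putting those suggestions together, a minimal sketch of the sequence on the replica (db2) might look like this, assuming file/position-based replication as described in the question; pick one of the options rather than running all of them:
STOP SLAVE;
-- option 1: rebuild the corrupt MyISAM index on the replica
REPAIR TABLE db.tableName;
-- option 2: the failing event is a DELETE without a WHERE clause, so emptying the table is equivalent
-- TRUNCATE TABLE db.tableName;
-- option 3: if the same event still fails, skip that single event
-- SET GLOBAL sql_slave_skip_counter = 1;
START SLAVE;
SHOW SLAVE STATUS\G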

volume backup create what is errno 22?

I'm trying to create a volume backup, both using the web UI and the CLI, and I keep getting errno 22. I'm unable to find information about the error or how to fix it. Does anyone know where I should start looking?
(openstack) volume backup create --force --name inventory01_vol_backups 398ee974-9b83-4918-9935-f52882b3e6b7
(openstack) volume backup show inventory01_vol_backups
+-----------------------+------------------------------------------------------------------+
| Field | Value |
+-----------------------+------------------------------------------------------------------+
| availability_zone | None |
| container | None |
| created_at | 2021-08-03T23:45:49.000000 |
| data_timestamp | 2021-08-03T23:45:49.000000 |
| description | None |
| fail_reason | [errno 22] RADOS invalid argument (error calling conf_read_file) |
| has_dependent_backups | False |
| id | 924c6e62-789e-4e51-9748-927695fc744c |
| is_incremental | False |
| name | inventory01_vol_backups |
| object_count | 0 |
| size | 30 |
| snapshot_id | None |
| status | error |
| updated_at | 2021-08-03T23:45:50.000000 |
| volume_id | 398ee974-9b83-4918-9935-f52882b3e6b7 |
+-----------------------+------------------------------------------------------------------+
The issue was caused by a bug in Cinder version 16.2.1.dev13. Updating Cinder to the latest version solved the issue.

ERROR MY-011542 Repl Plugin group_replication reported: 'Table repl_test does not have any PRIMARY KEY

I installed an InnoDB Cluster recently and am trying to create a table without any primary key or equivalent, to test the clustered index concept where "InnoDB internally generates a hidden clustered index named GEN_CLUST_INDEX on a synthetic column containing row ID values if the table has no PRIMARY KEY or suitable UNIQUE index".
I created table as below:
create table repl_test (Name varchar(10));
Checked for the creation of GEN_CLUST_INDEX:
select * from mysql.innodb_index_stats where database_name='test' and table_name = 'repl_test';
+---------------+------------+-----------------+---------------------+--------------+------------+-------------+-----------------------------------+
| database_name | table_name | index_name | last_update | stat_name | stat_value | sample_size | stat_description |
+---------------+------------+-----------------+---------------------+--------------+------------+-------------+-----------------------------------+
| test | repl_test | GEN_CLUST_INDEX | 2019-02-22 06:29:26 | n_diff_pfx01 | 0 | 1 | DB_ROW_ID |
| test | repl_test | GEN_CLUST_INDEX | 2019-02-22 06:29:26 | n_leaf_pages | 1 | NULL | Number of leaf pages in the index |
| test | repl_test | GEN_CLUST_INDEX | 2019-02-22 06:29:26 | size | 1 | NULL | Number of pages in the index |
+---------------+------------+-----------------+---------------------+--------------+------------+-------------+-----------------------------------+
3 rows in set (0.00 sec)
But, when I try to insert row, I get the below error:
insert into repl_test values ('John');
ERROR 3098 (HY000): The table does not comply with the requirements by an external plugin.
2019-02-22T14:32:53.177700Z 594022 [ERROR] [MY-011542] [Repl] Plugin group_replication reported: 'Table repl_test does not have any PRIMARY KEY. This is not compatible with Group Replication.'
Below is my conf file:
[client]
port = 3306
socket = /tmp/mysql.sock
[mysqld_safe]
socket = /tmp/mysql.sock
[mysqld]
socket = /tmp/mysql.sock
port = 3306
basedir = /mysql/product/8.0/TEST
datadir = /mysql/data/TEST/innodb_data
log-error = /mysql/admin/TEST/innodb_logs/mysql.log
log_bin = /mysql/binlog/TEST/innodb_logs/mysql-bin
server-id=1
max_connections = 500
open_files_limit = 65535
expire_logs_days = 15
innodb_flush_log_at_trx_commit=1
sync_binlog=1
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
binlog_checksum = NONE
enforce_gtid_consistency = ON
gtid_mode = ON
relay-log=<<hostname>>-relay-bin
My MySQL version: mysql Ver 8.0.11 for linux-glibc2.12 on x86_64 (MySQL Community Server - GPL)
Auto-generation of the PK works fine for non-clustered setups. However, InnoDB Cluster (Group Replication) and Galera need a user-defined PRIMARY KEY (or a NOT NULL UNIQUE key that can be promoted).
If you like, file a documentation bug with bugs.mysql.com complaining that this restriction is not clear. Be sure to point to the page that needs fixing.
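As an illustration (this is not part of the original answer), the simplest fix for this particular test table is to give it an explicit primary key, after which the failing insert succeeds; a NOT NULL UNIQUE key would also satisfy Group Replication:
ALTER TABLE test.repl_test
  ADD COLUMN id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY FIRST;
INSERT INTO test.repl_test (Name) VALUES ('John');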

openstack ocata glance creating 0 sized image

When I create a new image using Glance, no matter whether I use the CLI or the GUI, I get return code 0 and the image is created, but its size is zero.
The behavior is slightly different between the two: from the GUI my browser crashes but the image is still created; from the CLI I get return code 0.
Command:
openstack image create --file cirros-0.4.0-x86_64-disk.img --disk-format qcow2 --container-format bare --public --debug cirros-deb
+------------------+------------------------------------------------------+
| Field | Value |
+------------------+------------------------------------------------------+
| checksum | d41d8cd98f00b204e9800998ecf8427e |
| container_format | bare |
| created_at | 2018-01-20T23:24:47Z |
| disk_format | qcow2 |
| file | /v2/images/c695bc30-731d-4a4f-ab0f-12eb972d8188/file |
| id | c695bc30-731d-4a4f-ab0f-12eb972d8188 |
| min_disk | 0 |
| min_ram | 0 |
| name | cirros-deb |
| owner | a3460a3b0e8f4d0bbdd25bf790fe504c |
| protected | False |
| schema | /v2/schemas/image |
| size | 0 |
| status | active |
| tags | |
| updated_at | 2018-01-20T23:24:47Z |
| virtual_size | None |
| visibility | public |
+------------------+------------------------------------------------------+
clean_up CreateImage:
END return value: 0
I tried with a different cirros image and with an Ubuntu cloud image; the behavior is always the same.
Under /var/lib/glance/images a file is created with size 0:
-rw-r-----. 1 glance glance 0 Jan 21 00:24 c695bc30-731d-4a4f-ab0f-12eb972d8188
grep c695bc30-731d-4a4f-ab0f-12eb972d8188 glance/api.log
2018-01-21 00:24:47.915 1894 INFO eventlet.wsgi.server [req-7246cd30-47c4-41a5-b358-c8e5cc0f4e56 8bd3e4905ffb4f698e2476d9080a7d90 a3460a3b0e8f4d0bbdd25bf790fe504c - default default] 172.19.254.50 - - [21/Jan/2018 00:24:47] "PUT /v2/images/c695bc30-731d-4a4f-ab0f-12eb972d8188/file HTTP/1.1" 204 213 0.111323
2018-01-21 00:24:47.931 1894 INFO eventlet.wsgi.server [req-28e0cda2-c9f7-4543-b19a-d59eccffa47e 8bd3e4905ffb4f698e2476d9080a7d90 a3460a3b0e8f4d0bbdd25bf790fe504c - default default] 172.19.254.50 - - [21/Jan/2018 00:24:47] "GET /v2/images/c695bc30-731d-4a4f-ab0f-12eb972d8188 HTTP/1.1" 200 780 0.015399
Any idea what can be wrong?
Find the location of the python glance client:
find / -name http.py
vi /usr/lib/python2.7/site-packages/glanceclient/common/http.py
- data = self._chunk_body(data)
+ pass
References:
https://bugs.launchpad.net/python-glanceclient/+bug/1666511
https://ask.openstack.org/en/question/101944/why-does-openstack-image-create-of-cirros-result-in-size-0/?answer=102303#post-id-102303
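After re-uploading with the patched client, a quick sanity check is to confirm that the size is non-zero and the checksum no longer equals d41d8cd98f00b204e9800998ecf8427e (the MD5 of an empty file):
openstack image show cirros-deb -f value -c size -c checksum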

NoValidHost: No valid host was found. There are not enough hosts available

When I create the instance in the dashboard, I get error:
No valid host was found. There are not enough hosts available.
In the /var/log/nova/nova-conductor.log file, there is the log:
2017-08-05 00:22:29.046 3834 WARNING nova.scheduler.utils [req-89c159c7-b40a-43eb-8f0d-9306eb73e83a 2a5fa182fb1b459980db09cd1572850e 0d5998f2f7ec4c4892a32e06bafb19df - - -] Failed to compute_task_build_instances: No valid host was found. There are not enough hosts available.
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 199, in inner
return func(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py", line 104, in select_destinations
dests = self.driver.select_destinations(ctxt, spec_obj)
File "/usr/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py", line 74, in select_destinations
raise exception.NoValidHost(reason=reason)
NoValidHost: No valid host was found. There are not enough hosts available.
2017-08-05 00:22:29.048 3834 WARNING nova.scheduler.utils [req-89c159c7-b40a-43eb-8f0d-9306eb73e83a 2a5fa182fb1b459980db09cd1572850e 0d5998f2f7ec4c4892a32e06bafb19df - - -] [instance: 2011e343-c8fc-4ed0-8148-b0d2b5ba37c3] Setting instance to ERROR state.
2017-08-05 00:22:30.785 3834 WARNING oslo_config.cfg [req-89c159c7-b40a-43eb-8f0d-9306eb73e83a 2a5fa182fb1b459980db09cd1572850e 0d5998f2f7ec4c4892a32e06bafb19df - - -] Option "auth_plugin" from group "neutron" is deprecated. Use option "auth_type" from group "neutron".
I searched SO and found a related post: Openstack-Devstack: Can't create instance, There are not enough hosts available
I checked the free_ram_mb in mysql:
MariaDB [nova]> select * from compute_nodes \G;
*************************** 1. row ***************************
created_at: 2017-08-04 12:44:26
updated_at: 2017-08-04 13:51:35
deleted_at: NULL
id: 4
service_id: NULL
vcpus: 8
memory_mb: 7808
local_gb: 19
vcpus_used: 0
memory_mb_used: 512
local_gb_used: 0
hypervisor_type: QEMU
hypervisor_version: 1005003
cpu_info: {"vendor": "Intel", "model": "Broadwell", "arch": "x86_64", "features": ["smap", "avx", "clflush", "sep", "rtm", "vme", "invpcid", "msr", "fsgsbase", "xsave", "pge", "erms", "hle", "cmov", "tsc", "smep", "pcid", "pat", "lm", "abm", "adx", "3dnowprefetch", "nx", "fxsr", "syscall", "sse4.1", "pae", "sse4.2", "pclmuldq", "fma", "tsc-deadline", "mmx", "osxsave", "cx8", "mce", "de", "rdtscp", "ht", "pse", "lahf_lm", "rdseed", "popcnt", "mca", "pdpe1gb", "apic", "sse", "f16c", "ds", "invtsc", "pni", "aes", "avx2", "sse2", "ss", "hypervisor", "bmi1", "bmi2", "ssse3", "fpu", "cx16", "pse36", "mtrr", "movbe", "rdrand", "x2apic"], "topology": {"cores": 2, "cells": 1, "threads": 1, "sockets": 4}}
disk_available_least: 18
free_ram_mb: 7296
free_disk_gb: 19
current_workload: 0
running_vms: 0
hypervisor_hostname: ha-node1
deleted: 0
host_ip: 192.168.8.101
supported_instances: [["i686", "qemu", "hvm"], ["x86_64", "qemu", "hvm"]]
pci_stats: {"nova_object.version": "1.1", "nova_object.changes": ["objects"], "nova_object.name": "PciDevicePoolList", "nova_object.data": {"objects": []}, "nova_object.namespace": "nova"}
metrics: []
extra_resources: NULL
stats: {}
numa_topology: NULL
host: ha-node1
ram_allocation_ratio: 3
cpu_allocation_ratio: 16
uuid: 9113940b-7ec9-462d-af06-6988dbb6b6cf
disk_allocation_ratio: 1
*************************** 2. row ***************************
created_at: 2017-08-04 12:44:34
updated_at: 2017-08-04 13:50:47
deleted_at: NULL
id: 6
service_id: NULL
vcpus: 8
memory_mb: 7808
local_gb: 19
vcpus_used: 0
memory_mb_used: 512
local_gb_used: 0
hypervisor_type: QEMU
hypervisor_version: 1005003
cpu_info: {"vendor": "Intel", "model": "Broadwell", "arch": "x86_64", "features": ["smap", "avx", "clflush", "sep", "rtm", "vme", "invpcid", "msr", "fsgsbase", "xsave", "pge", "erms", "hle", "cmov", "tsc", "smep", "pcid", "pat", "lm", "abm", "adx", "3dnowprefetch", "nx", "fxsr", "syscall", "sse4.1", "pae", "sse4.2", "pclmuldq", "fma", "tsc-deadline", "mmx", "osxsave", "cx8", "mce", "de", "rdtscp", "ht", "pse", "lahf_lm", "rdseed", "popcnt", "mca", "pdpe1gb", "apic", "sse", "f16c", "ds", "invtsc", "pni", "aes", "avx2", "sse2", "ss", "hypervisor", "bmi1", "bmi2", "ssse3", "fpu", "cx16", "pse36", "mtrr", "movbe", "rdrand", "x2apic"], "topology": {"cores": 2, "cells": 1, "threads": 1, "sockets": 4}}
disk_available_least: 18
free_ram_mb: 7296
free_disk_gb: 19
current_workload: 0
running_vms: 0
hypervisor_hostname: ha-node2
deleted: 0
host_ip: 192.168.8.102
supported_instances: [["i686", "qemu", "hvm"], ["x86_64", "qemu", "hvm"]]
pci_stats: {"nova_object.version": "1.1", "nova_object.changes": ["objects"], "nova_object.name": "PciDevicePoolList", "nova_object.data": {"objects": []}, "nova_object.namespace": "nova"}
metrics: []
extra_resources: NULL
stats: {}
numa_topology: NULL
host: ha-node2
ram_allocation_ratio: 3
cpu_allocation_ratio: 16
uuid: 32b574df-52ac-43dc-87f8-353350449076
disk_allocation_ratio: 1
2 rows in set (0.00 sec)
You can see free_ram_mb is 7296; I just want to create a 512 MB VM, but it failed.
EDIT-1
The nova services are all up:
[root@ha-node1 ~]# nova service-list
+----+------------------+----------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+----+------------------+----------+----------+---------+-------+----------------------------+-----------------+
| 2 | nova-consoleauth | ha-node3 | internal | enabled | up | 2017-08-05T14:20:25.000000 | - |
| 5 | nova-conductor | ha-node3 | internal | enabled | up | 2017-08-05T14:20:29.000000 | - |
| 7 | nova-cert | ha-node3 | internal | enabled | up | 2017-08-05T14:20:23.000000 | - |
| 15 | nova-scheduler | ha-node3 | internal | enabled | up | 2017-08-05T14:20:20.000000 | - |
| 22 | nova-cert | ha-node1 | internal | enabled | up | 2017-08-05T14:20:26.000000 | - |
| 29 | nova-conductor | ha-node1 | internal | enabled | up | 2017-08-05T14:20:22.000000 | - |
| 32 | nova-consoleauth | ha-node1 | internal | enabled | up | 2017-08-05T14:20:29.000000 | - |
| 33 | nova-consoleauth | ha-node2 | internal | enabled | up | 2017-08-05T14:20:29.000000 | - |
| 36 | nova-scheduler | ha-node1 | internal | enabled | up | 2017-08-05T14:20:30.000000 | - |
| 40 | nova-conductor | ha-node2 | internal | enabled | up | 2017-08-05T14:20:26.000000 | - |
| 44 | nova-cert | ha-node2 | internal | enabled | up | 2017-08-05T14:20:27.000000 | - |
| 46 | nova-scheduler | ha-node2 | internal | enabled | up | 2017-08-05T14:20:28.000000 | - |
| 49 | nova-compute | ha-node2 | nova | enabled | up | 2017-08-05T14:19:35.000000 | - |
| 53 | nova-compute | ha-node1 | nova | enabled | up | 2017-08-05T14:20:05.000000 | - |
+----+------------------+----------+----------+---------+-------+----------------------------+-----------------+
The nova list:
[root@ha-node1 ~]# nova list
+--------------------------------------+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+------+--------+------------+-------------+----------+
| 20193e58-2c5b-44c6-a98f-a44e2001934f | vm1 | ERROR | - | NOSTATE | |
And the nova show output for the instance:
[root@ha-node1 ~]# nova show 20193e58-2c5b-44c6-a98f-a44e2001934f
+--------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Property | Value |
+--------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| OS-DCF:diskConfig | AUTO |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | - |
| OS-EXT-SRV-ATTR:hostname | vm1 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | - |
| OS-EXT-SRV-ATTR:instance_name | instance-00000003 |
| OS-EXT-SRV-ATTR:kernel_id | |
| OS-EXT-SRV-ATTR:launch_index | 0 |
| OS-EXT-SRV-ATTR:ramdisk_id | |
| OS-EXT-SRV-ATTR:reservation_id | r-jct8kkcq |
| OS-EXT-SRV-ATTR:root_device_name | /dev/vda |
| OS-EXT-SRV-ATTR:user_data | - |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | - |
| OS-EXT-STS:vm_state | error |
| OS-SRV-USG:launched_at | - |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
| config_drive | |
| created | 2017-08-05T14:17:54Z |
| description | vm1 |
| fault | {"message": "No valid host was found. There are not enough hosts available.", "code": 500, "details": " File \"/usr/lib/python2.7/site-packages/nova/conductor/manager.py\", line 496, in build_instances |
| | context, request_spec, filter_properties) |
| | File \"/usr/lib/python2.7/site-packages/nova/conductor/manager.py\", line 567, in _schedule_instances |
| | hosts = self.scheduler_client.select_destinations(context, spec_obj) |
| | File \"/usr/lib/python2.7/site-packages/nova/scheduler/utils.py\", line 370, in wrapped |
| | return func(*args, **kwargs) |
| | File \"/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py\", line 51, in select_destinations |
| | return self.queryclient.select_destinations(context, spec_obj) |
| | File \"/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py\", line 37, in __run_method |
| | return getattr(self.instance, __name)(*args, **kwargs) |
| | File \"/usr/lib/python2.7/site-packages/nova/scheduler/client/query.py\", line 32, in select_destinations |
| | return self.scheduler_rpcapi.select_destinations(context, spec_obj) |
| | File \"/usr/lib/python2.7/site-packages/nova/scheduler/rpcapi.py\", line 126, in select_destinations |
| | return cctxt.call(ctxt, 'select_destinations', **msg_args) |
| | File \"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py\", line 169, in call |
| | retry=self.retry) |
| | File \"/usr/lib/python2.7/site-packages/oslo_messaging/transport.py\", line 97, in _send |
| | timeout=timeout, retry=retry) |
| | File \"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py\", line 464, in send |
| | retry=retry) |
| | File \"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py\", line 455, in _send |
| | raise result |
| | ", "created": "2017-08-05T14:18:14Z"} |
| flavor | m1.tiny (1) |
| hostId | |
| host_status | |
| id | 20193e58-2c5b-44c6-a98f-a44e2001934f |
| image | cirros-0.3.4-x86_64 (202778cd-6b32-4486-9444-c167089d9082) |
| key_name | - |
| locked | False |
| metadata | {} |
| name | vm1 |
| os-extended-volumes:volumes_attached | [] |
| status | ERROR |
| tags | [] |
| tenant_id | 0d5998f2f7ec4c4892a32e06bafb19df |
| updated | 2017-08-05T14:18:16Z |
| user_id | 2a5fa182fb1b459980db09cd1572850e |
+--------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
EDIT-2
Here is the useful information from nova-compute.log in /var/log/nova/:
......
2017-08-05 22:17:42.669 103174 INFO nova.compute.resource_tracker [req-60a062ce-4b3d-4cb7-863e-2f9bba0bc6ec - - - - -] Compute_service record updated for ha-node1:ha-node1
2017-08-05 22:18:03.357 103174 ERROR nova.compute.manager [req-7dded91f-7497-4d20-ba89-69f867a2a8fb 2a5fa182fb1b459980db09cd1572850e 0d5998f2f7ec4c4892a32e06bafb19df - - -] [instance: 20193e58-2c5b-44c6-a98f-a44e2001934f] Instance failed to spawn
2017-08-05 22:18:03.357 103174 ERROR nova.compute.manager [instance: 20193e58-2c5b-44c6-a98f-a44e2001934f] Traceback (most recent call last):
2017-08-05 22:18:03.357 103174 ERROR nova.compute.manager [instance: 20193e58-2c5b-44c6-a98f-a44e2001934f] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2078, in _build_resources
2017-08-05 22:18:03.357 103174 ERROR nova.compute.manager [instance: 20193e58-2c5b-44c6-a98f-a44e2001934f] yield resources
2017-08-05 22:18:03.357 103174 ERROR nova.compute.manager [instance: 20193e58-2c5b-44c6-a98f-a44e2001934f] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1920, in _build_and_run_instance
2017-08-05 22:18:03.357 103174 ERROR nova.compute.manager [instance: 20193e58-2c5b-44c6-a98f-a44e2001934f] block_device_info=block_device_info)
2017-08-05 22:18:03.357 103174 ERROR nova.compute.manager [instance: 20193e58-2c5b-44c6-a98f-a44e2001934f] File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2584, in spawn
2017-08-05 22:18:03.357 103174 ERROR nova.compute.manager [instance: 20193e58-2c5b-44c6-a98f-a44e2001934f] admin_pass=admin_password)
2017-08-05 22:18:03.357 103174 ERROR nova.compute.manager [instance: 20193e58-2c5b-44c6-a98f-a44e2001934f] File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2959, in _create_image
2017-08-05 22:18:03.357 103174 ERROR nova.compute.manager [instance: 20193e58-2c5b-44c6-a98f-a44e2001934f] fileutils.ensure_tree(libvirt_utils.get_instance_path(instance))
2017-08-05 22:18:03.357 103174 ERROR nova.compute.manager [instance: 20193e58-2c5b-44c6-a98f-a44e2001934f] File "/usr/lib/python2.7/site-packages/oslo_utils/fileutils.py", line 40, in ensure_tree
2017-08-05 22:18:03.357 103174 ERROR nova.compute.manager [instance: 20193e58-2c5b-44c6-a98f-a44e2001934f] os.makedirs(path, mode)
2017-08-05 22:18:03.357 103174 ERROR nova.compute.manager [instance: 20193e58-2c5b-44c6-a98f-a44e2001934f] File "/usr/lib64/python2.7/os.py", line 157, in makedirs
2017-08-05 22:18:03.357 103174 ERROR nova.compute.manager [instance: 20193e58-2c5b-44c6-a98f-a44e2001934f] mkdir(name, mode)
2017-08-05 22:18:03.357 103174 ERROR nova.compute.manager [instance: 20193e58-2c5b-44c6-a98f-a44e2001934f] OSError: [Errno 13] Permission denied: '/var/lib/nova/instances/20193e58-2c5b-44c6-a98f-a44e2001934f'
2017-08-05 22:18:03.357 103174 ERROR nova.compute.manager [instance: 20193e58-2c5b-44c6-a98f-a44e2001934f]
2017-08-05 22:18:11.563 103174 INFO nova.compute.manager [req-7dded91f-7497-4d20-ba89-69f867a2a8fb 2a5fa182fb1b459980db09cd1572850e 0d5998f2f7ec4c4892a32e06bafb19df - - -] [instance: 20193e58-2c5b-44c6-a98f-a44e2001934f] Terminating instance
....
Debugging
Enable debug mode to get detailed logs.
Set debug = True in these files:
/etc/nova/nova.conf
/etc/cinder/cinder.conf
/etc/glance/glance-registry.conf
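In each of those files this is the standard oslo.config flag, e.g.:
[DEFAULT]
debug = True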
Restart the reconfigured services.
Try to create the instance again and check the logs.
Take a look at the nova-scheduler.log file and try to find a line like this:
.. INFO nova.filters [req-..] Filter DiskFilter returned 0 hosts
Above this line should be DEBUG logs with Filters detailed information, for example:
.. DEBUG nova.filters [req-..] Filter RetryFilter returned 1 host(s) get_filtered_objects /usr/lib/python2.7/site-packages/nova/filters.py:104
.. DEBUG nova.filters [req-..] Filter AvailabilityZoneFilter returned 1 host(s) get_filtered_objects /usr/lib/python2.7/site-packages/nova/filters.py:104
.. DEBUG nova.filters [req-..] Filter RamFilter returned 1 host(s) get_filtered_objects /usr/lib/python2.7/site-packages/nova/filters.py:104
.. DEBUG nova.filters [req-..] (...) ram: 37107MB disk: 11264MB io_ops: 0 instances: 4 does not have 17408 MB usable disk, it only has 11264.0 MB usable disk. host_passes /usr/lib/python2.7/site-packages/nova/scheduler/filters/disk_filter.py:70
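A quick way to locate the failing filter without reading the whole file is a grep along these lines (log path as in a typical packaged install; adjust if yours differs):
grep "returned 0 hosts" /var/log/nova/nova-scheduler.log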
Overcommitting
OpenStack allows you to overcommit CPU and RAM on compute nodes. This allows you to increase the number of instances running on your cloud at the cost of reducing the performance of the instances. The Compute service uses the following ratios by default:
CPU allocation ratio: 16:1
RAM allocation ratio: 1.5:1
Please read the documentation for more information.
You can change the allocation ratios in nova.conf:
cpu_allocation_ratio
ram_allocation_ratio
disk_allocation_ratio
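For illustration, the defaults quoted above correspond to entries like the following in the [DEFAULT] section of nova.conf (raise the values to overcommit further):
[DEFAULT]
cpu_allocation_ratio = 16.0
ram_allocation_ratio = 1.5
disk_allocation_ratio = 1.0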
First you need to check the output of "nova service-list" or "openstack compute service list". It should show at least one 'nova-compute' service with state as "Up" and status as 'enabled'.
If the above is fine, then the compute nodes are communicating properly with the scheduler. If not, then you need to check the nova-scheduler logs.
The nova-scheduler has a series of filters, such as the memory filter, CPU filter, and aggregate filter, which it applies to hosts based on the flavor you selected. For example, if you select a flavor with 16 GB RAM, the scheduler's memory filter will keep only the compute hosts that have that much memory available. After all the filtering is done, the scheduler tries to launch the instance on one of the filtered hosts; if that fails, it tries another host. The default number of tries is 3. All of this can be seen in the scheduler logs, which will give you a clear idea of what went wrong.
You also need to check the 'nova show <instance-id>' output. If you can see a compute host in "OS-EXT-SRV-ATTR:hypervisor_hostname", then the scheduler was able to allocate a compute host and something went wrong on that host. In that case, you need to check the nova-compute logs of that hypervisor.
Finally I found that I had mounted /var/lib/nova/ on the NFS directory /mnt/sdb/var/lib/nova/, but /mnt/sdb/var/lib/nova/ was owned by root:root, so I changed the ownership to nova:nova (the same as /var/lib/nova/).
Command:
chown -R nova:nova nova
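To verify the fix, the instances directory on the NFS mount should now be owned by nova:nova:
ls -ld /var/lib/nova/instances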
Edit /etc/nova/nova.conf on all the compute nodes and modify it as per your application requirements:
cpu_allocation_ratio = 2.0
(double the number of physical cores can be used across all instances)
ram_allocation_ratio = 2.0
(double the total memory can be used across all instances)
Restart nova and the nova-scheduler on all the compute nodes:
systemctl restart openstack-nova-*
systemctl restart openstack-nova-scheduler.service
