OpenStack PCI passthrough interface name error in libvirt - openstack

I want to start an instance on OpenStack Pike with SR-IOV enabled NICs. However, I am getting a libvirt error about a node device name. The error looks odd because the reported name matches neither the interface name on the host machine nor anything in the configuration files.
2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager [req-caa92f1d-5ac1-402d-a8bc-b08ab350a21f - - - - -] Error updating resources for node jupiter.: libvirtError: Node device not found: no node device with matching name 'net_enp129s2_b2_87_6e_13_a1_5e'
2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager Traceback (most recent call last):
2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 6696, in update_available_resource_for_node
2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager rt.update_available_resource(context, nodename)
2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/nova/compute/resource_tracker.py", line 641, in update_available_resource
2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager resources = self.driver.get_available_resource(nodename)
2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5857, in get_available_resource
2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager
2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5621, in _get_pci_passthrough_devices
2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager for name in dev_names:
2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5582, in _get_pcidev_info
2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager device['label'] = 'label_%(vendor_id)s_%(product_id)s' % device
2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5553, in _get_device_capabilities
2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager pcinet_info = self._get_pcinet_info(address)
2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5496, in _get_pcinet_info
2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager virtdev = self._host.device_lookup_by_name(devname)
2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/host.py", line 845, in device_lookup_by_name
2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager return self.get_connection().nodeDeviceLookupByName(name)
2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 186, in doit
2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager result = proxy_call(self._autowrap, f, *args, **kwargs)
2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 144, in proxy_call
2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager rv = execute(f, *args, **kwargs)
2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 125, in execute
2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager six.reraise(c, e, tb)
2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 83, in tworker
2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager rv = meth(*args, **kwargs)
2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/libvirt.py", line 4177, in nodeDeviceLookupByName
2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager if ret is None:raise libvirtError('virNodeDeviceLookupByName() failed', conn=self)
2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager libvirtError: Node device not found: no node device with matching name 'net_enp129s2_b2_87_6e_13_a1_5e'
2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager
The correct interface name is enp129s0f0, yet the node device name reported is 'net_enp129s2_b2_87_6e_13_a1_5e', which I believe is the reason VM creation fails on OpenStack. Could someone help me understand how the node device name is passed from OpenStack to libvirt, or how I can resolve this issue?
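For context on where that name comes from: as far as I can tell, libvirt exposes each network interface as a node device named from the interface name plus its MAC address, and Nova builds the same string to look the device up. A minimal sketch of that naming convention (my reading of the libvirt/Nova behavior, so treat it as an assumption; the MAC below is taken from the error message):

```python
def libvirt_net_nodedev_name(ifname, mac):
    """Build the libvirt node device name for a network interface.

    libvirt names net-capability node devices net_<ifname>_<mac>,
    with the colons of the MAC address replaced by underscores.
    """
    return "net_{}_{}".format(ifname, mac.replace(":", "_"))

# The name in the error corresponds to an interface called enp129s2
# (a VF netdev), not the PF enp129s0f0:
print(libvirt_net_nodedev_name("enp129s2", "b2:87:6e:13:a1:5e"))
# net_enp129s2_b2_87_6e_13_a1_5e
```

If this mapping holds, the lookup fails not because your configured PF name is wrong, but because an interface named enp129s2 (a VF's netdev) existed when Nova enumerated the PCI devices and was gone or renamed by the time libvirt was queried, for example after the VF was attached to a guest or rebound to another driver. Comparing `virsh nodedev-list --cap net` with `ip link show` on the host should reveal the mismatch.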

Related

ssh paramiko can't read xlsx files

I use paramiko to SSH:
import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(Hostname, username=Username, password=Password)
ftp = ssh.open_sftp()
files = ftp.listdir()
dir_oi = "directory_of_interest"
foi = ftp.listdir(dir_oi)
and can find and read a CSV successfully with:
remote_file = dir_oi + "/" + foi[-1]
with ftp.open(remote_file) as f:
    df = pd.read_csv(f, sep='\t', header=None)
The minute I change remote_file to an .xlsx, with
with ftp.open(uw_remote_file) as f:
    df = pd.read_excel(f)
I get the error SSHException: Server connection dropped: (or sometimes Socket Closed).
Of note, I can run this line without any error: existing_xlsx = ftp.open(uw_remote_file)
Any suggestions how to overcome this?
logfile as requested:
DEB [20220519-09:22:45.998] thr=1 paramiko.transport.sftp: [chan 0] listdir(b'blah')
DEB [20220519-09:22:48.009] thr=1 paramiko.transport.sftp: [chan 0] open(b'blah/halb.csv', 'r')
DEB [20220519-09:22:48.241] thr=1 paramiko.transport.sftp: [chan 0] open(b'blah/halb.csv', 'r') -> 35323935333939313533313032363062
DEB [20220519-09:22:49.084] thr=1 paramiko.transport.sftp: [chan 0] close(35323935333939313533313032363062)
DEB [20220519-09:23:24.790] thr=1 paramiko.transport.sftp: [chan 0] listdir(b'blah2')
DEB [20220519-09:24:01.590] thr=1 paramiko.transport.sftp: [chan 0] open(b'blah2/halb2.xlsx', 'r')
DEB [20220519-09:24:01.975] thr=1 paramiko.transport.sftp: [chan 0] open(b'blah2/halb2.xlsx', 'r') -> 37343338363564356234303033663337
DEB [20220519-09:24:23.510] thr=1 paramiko.transport.sftp: [chan 0] open(b'blah2/halb2.xlsx', 'r')
DEB [20220519-09:24:23.727] thr=1 paramiko.transport.sftp: [chan 0] open(b'blah2/halb2.xlsx', 'r') -> 64646361316532373233663463613036
DEB [20220519-09:24:24.108] thr=2 paramiko.transport: EOF in transport thread
DEB [20220519-09:24:24.108] thr=1 paramiko.transport.sftp: [chan 0] close(64646361316532373233663463613036)
traceback:
Traceback (most recent call last):
File "C:\Users\alexander.huhn.adm\Anaconda3\lib\site-packages\paramiko\sftp_client.py", line 852, in _read_response
t, data = self._read_packet()
File "C:\Users\alexander.huhn.adm\Anaconda3\lib\site-packages\paramiko\sftp.py", line 201, in _read_packet
x = self._read_all(4)
File "C:\Users\alexander.huhn.adm\Anaconda3\lib\site-packages\paramiko\sftp.py", line 188, in _read_all
raise EOFError()
EOFError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\alexander.huhn.adm\AppData\Local\Temp\24\ipykernel_33560\4051829457.py", line 4, in <cell line: 2>
df = pd.read_excel(f)
File "C:\Users\alexander.huhn.adm\Anaconda3\lib\site-packages\pandas\util\_decorators.py", line 311, in wrapper
return func(*args, **kwargs)
File "C:\Users\alexander.huhn.adm\Anaconda3\lib\site-packages\pandas\io\excel\_base.py", line 457, in read_excel
io = ExcelFile(io, storage_options=storage_options, engine=engine)
File "C:\Users\alexander.huhn.adm\Anaconda3\lib\site-packages\pandas\io\excel\_base.py", line 1376, in __init__
ext = inspect_excel_format(
File "C:\Users\alexander.huhn.adm\Anaconda3\lib\site-packages\pandas\io\excel\_base.py", line 1255, in inspect_excel_format
buf = stream.read(PEEK_SIZE)
File "C:\Users\alexander.huhn.adm\Anaconda3\lib\site-packages\paramiko\file.py", line 219, in read
new_data = self._read(read_size)
File "C:\Users\alexander.huhn.adm\Anaconda3\lib\site-packages\paramiko\sftp_file.py", line 185, in _read
t, msg = self.sftp._request(
File "C:\Users\alexander.huhn.adm\Anaconda3\lib\site-packages\paramiko\sftp_client.py", line 822, in _request
return self._read_response(num)
File "C:\Users\alexander.huhn.adm\Anaconda3\lib\site-packages\paramiko\sftp_client.py", line 854, in _read_response
raise SSHException("Server connection dropped: {}".format(e))
paramiko.ssh_exception.SSHException: Server connection dropped:
I can download the file using
filepath = uw_remote_file
localpath = "test.xlsx"
ftp.get(filepath, localpath)
so I will go down that route and delete the local copy after use.
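An in-memory variant of the same workaround skips the temporary file: buffer the remote file completely, then hand the buffer to pandas. The rationale (an assumption on my part, not a documented paramiko fix) is that pandas' Excel reader seeks around in the file, and the log suggests the server drops the connection during those reads; a purely sequential copy avoids that access pattern.

```python
import io

def fetch_to_memory(fileobj, chunk_size=32768):
    """Read a remote file-like object completely into a BytesIO buffer.

    Only sequential reads are issued, so the SFTP server never sees
    the seek-heavy pattern pandas' Excel reader would generate.
    """
    buf = io.BytesIO()
    while True:
        chunk = fileobj.read(chunk_size)
        if not chunk:
            break
        buf.write(chunk)
    buf.seek(0)
    return buf

# Usage with the ftp session from the question:
# with ftp.open(uw_remote_file) as f:
#     df = pd.read_excel(fetch_to_memory(f))
```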
From the log it looks like it was able to open both files. pandas is running into an EOF error, which suggests the Excel file may be completely empty.
Can you confirm whether it is empty?

Error: Failed to perform requested operation on instance "dane-srv" while powering up instance (volume error)

I have an instance that does not want to power on. I keep getting the errors below:
Error: Failed to perform requested operation on instance "dane-srv", the instance has an error status: Please try again later [Error: Volume device not found at [u'/dev/disk/by-path/ip-192.168.21.30:3260-iscsi-iqn.2010-10.org.openstack:volume-c58a6541-01cf-41ce-b58a-91e026b86089-lun-1'].].
When I run the command below I get an empty volume list. However, on Horizon the volumes are visible, though some show an error or available status.
No instances with volumes attached are able to power up, and I cannot attach or detach volumes either.
root@controller:~# cinder list
+----+--------+------+------+-------------+----------+-------------+
| ID | Status | Name | Size | Volume Type | Bootable | Attached to |
+----+--------+------+------+-------------+----------+-------------+
+----+--------+------+------+-------------+----------+-------------+
Errors in /var/log/cinder/cinder-volume.log
2021-06-28 14:03:16.048 32116 ERROR cinder.volume.targets.tgt [req-274640b3-6431-4944-95d1-a2518bc4f64d - - - - -] Failed recovery attempt to create iscsi backing lun for Volume ID:iqn.2010-10.org.openstack:volume-c58a6541-01cf-41ce-b58a-91e026b86089: Unexpected error while running command.
Command: sudo cinder-rootwrap /etc/cinder/rootwrap.conf tgtadm --lld iscsi --op new --mode logicalunit --tid 17 --lun 1 -b /dev/cinder-volumes/volume-c58a6541-01cf-41ce-b58a-91e026b86089
Exit code: 22
Stdout: u''
Stderr: u'tgtadm: invalid request\n'
2021-06-28 14:03:16.813 32116 ERROR cinder.volume.manager [req-274640b3-6431-4944-95d1-a2518bc4f64d - - - - -] Failed to re-export volume, setting to ERROR.
2021-06-28 14:03:16.813 32116 ERROR cinder.volume.manager Traceback (most recent call last):
2021-06-28 14:03:16.813 32116 ERROR cinder.volume.manager File "/usr/lib/python2.7/dist-packages/cinder/volume/manager.py", line 461, in init_host
2021-06-28 14:03:16.813 32116 ERROR cinder.volume.manager self.driver.ensure_export(ctxt, volume)
2021-06-28 14:03:16.813 32116 ERROR cinder.volume.manager File "/usr/lib/python2.7/dist-packages/cinder/volume/drivers/lvm.py", line 821, in ensure_export
2021-06-28 14:03:16.813 32116 ERROR cinder.volume.manager self.target_driver.ensure_export(context, volume, volume_path)
2021-06-28 14:03:16.813 32116 ERROR cinder.volume.manager File "/usr/lib/python2.7/dist-packages/cinder/volume/targets/iscsi.py", line 261, in ensure_export
2021-06-28 14:03:16.813 32116 ERROR cinder.volume.manager old_name=None, **portals_config)
2021-06-28 14:03:16.813 32116 ERROR cinder.volume.manager File "/usr/lib/python2.7/dist-packages/cinder/utils.py", line 796, in _wrapper
2021-06-28 14:03:16.813 32116 ERROR cinder.volume.manager return r.call(f, *args, **kwargs)
2021-06-28 14:03:16.813 32116 ERROR cinder.volume.manager File "/usr/lib/python2.7/dist-packages/retrying.py", line 206, in call
2021-06-28 14:03:16.813 32116 ERROR cinder.volume.manager return attempt.get(self._wrap_exception)
2021-06-28 14:03:16.813 32116 ERROR cinder.volume.manager File "/usr/lib/python2.7/dist-packages/retrying.py", line 247, in get
2021-06-28 14:03:16.813 32116 ERROR cinder.volume.manager six.reraise(self.value[0], self.value[1], self.value[2])
2021-06-28 14:03:16.813 32116 ERROR cinder.volume.manager File "/usr/lib/python2.7/dist-packages/retrying.py", line 200, in call
2021-06-28 14:03:16.813 32116 ERROR cinder.volume.manager attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
2021-06-28 14:03:16.813 32116 ERROR cinder.volume.manager File "/usr/lib/python2.7/dist-packages/cinder/volume/targets/tgt.py", line 243, in create_iscsi_target
2021-06-28 14:03:16.813 32116 ERROR cinder.volume.manager raise exception.ISCSITargetCreateFailed(volume_id=vol_id)
2021-06-28 14:03:16.813 32116 ERROR cinder.volume.manager ISCSITargetCreateFailed: Failed to create iscsi target for volume volume-c58a6541-01cf-41ce-b58a-91e026b86089.
2021-06-28 14:03:16.813 32116 ERROR cinder.volume.manager

openstack-victoria (packstack) instance creation error

I am using openstack-victoria (packstack) on CentOS 8 for a school assignment. I am trying to create an instance from a qcow2 image I made; the image file is around 1.9 GB (a CentOS 7 template). I made this image from a volume that was 30 GB.
After I create the instance it runs for around 4-5 minutes, then I get this error message in the OpenStack dashboard:
Error: Failed to perform requested operation on instance "test", the instance has an error status: Please try again later [Error: Build of instance 2b5bf737-dbd3-4173-afd0-3cb95c7d236b aborted: Volume e87171f3-99bc-4ef1-bdcc-eed55461cd18 did not finish being created even after we waited 188 seconds or 61 attempts. And its status is downloading.].
The /var/log/cinder/volume.log file contains:
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server cinder.exception.ImageCopyFailure: Failed to copy image to volume: qemu-img: error while writing at byte 20401094656: No space left on device
This is strange because I have plenty of storage left.
Controller/compute node (packstack.openstack.local):
Filesystem Size Used Avail Use% Mounted on
devtmpfs 12G 0 12G 0% /dev
tmpfs 12G 4.0K 12G 1% /dev/shm
tmpfs 12G 9.4M 12G 1% /run
tmpfs 12G 0 12G 0% /sys/fs/cgroup
/dev/mapper/cl-root 162G 27G 135G 17% /
/dev/mapper/cl-home 30G 246M 30G 1% /home
/dev/loop0 1.9G 6.1M 1.7G 1% /srv/node/swiftloopback
/dev/sda1 1014M 254M 761M 25% /boot
//192.168.178.103/RAIDZ1_0 11T 8.7T 1.6T 85% /mnt
tmpfs 7.1G 0 7.1G 0% /run/user/0
NOTE: I am also running one compute node. I don't know if it matters, but this is the df output of that VM:
Compute node (compute.openstack.local):
Filesystem Size Used Avail Use% Mounted on
devtmpfs 1.8G 0 1.8G 0% /dev
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 1.9G 9.1M 1.9G 1% /run
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/mapper/cl_compute-root 17G 2.9G 15G 17% /
/dev/sda1 1014M 254M 761M 25% /boot
tmpfs 372M 0 372M 0% /run/user/0
I am trying to create the instance on the controller/compute node. Could it be that the qcow2 image I made is not right? I can create an instance from the CirrOS image without any problems, but I think it has to do with the size of the volume I am trying to make.
I tried changing "image_conversion_dir" in the cinder.conf file, but this did not help. I also watched what happened in iotop: there is a file written to loop1 with a size of around 19 GB, and I also see some cinder threads running:
qemu-img convert -O raw -t none -f qcow2 /var/lib/cinder/conversion/tmpsak_p8tcpackstac~.local#lvm /dev/mapper/cinder--volumes-volume--1beee724--bedc--4ea8--8c45--ef0406339597
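That ~19 GB write is consistent with the image's virtual size rather than its 1.9 GB file size: qemu-img convert -O raw expands the qcow2 to a full raw copy, so the target device needs the whole virtual size (30 GB here). And since the write fails on the /dev/mapper target, not a filesystem, the exhausted resource is likely the cinder-volumes LVM group; if memory serves, a default packstack install backs it with a roughly 20 GB loopback file (CONFIG_CINDER_VOLUMES_SIZE), which would run out at about this byte offset. To confirm what the image will expand to, the virtual size can be read straight from the qcow2 header; a minimal sketch following the QCOW2 header layout (qemu-img info on the image reports the same value; the file path in the comment is hypothetical):

```python
import struct

def qcow2_virtual_size(header):
    """Return the virtual disk size encoded in a qcow2 header.

    Per the QCOW2 layout: magic "QFI\\xfb" (4 bytes), version (u32),
    backing file offset (u64) and size (u32), cluster_bits (u32),
    then the virtual size as a big-endian u64 at byte offset 24.
    """
    if header[:4] != b"QFI\xfb":
        raise ValueError("not a qcow2 image")
    (size,) = struct.unpack(">Q", header[24:32])
    return size

# Usage against a real image (only the first 32 bytes are needed):
# with open("centos7-template.qcow2", "rb") as img:  # hypothetical path
#     print(qcow2_virtual_size(img.read(32)) / 1024**3, "GiB")
```

Checking vgs / lvs on the controller should show whether cinder-volumes actually has 30 GB free; growing the loopback backing file (or pointing cinder at a real volume group) is the usual fix.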
This is the full log (/var/log/cinder/volume.log):
2021-03-26 15:11:35.881 3029 ERROR cinder.volume.volume_utils [req-87414510-3b58-4b2f-beed-5da4cd50cf1c afcf8e04d2cc4a33ba5193c8ee593d7b c107b21eb64941168b780871fc5315e8 - - -] Failed to copy image 90772271-6a81-475a-82dd-69e04796079f to volume: e87171f3-99bc-4ef1-bdcc-eed55461cd18: oslo_concurrency.processutils.ProcessExecutionError: Unexpected error while running command.
Command: sudo cinder-rootwrap /etc/cinder/rootwrap.conf qemu-img convert -O raw -t none -f qcow2 /mnt/openstackvm/tmpbpz37frrpackstack.openstack.local#lvm /dev/mapper/cinder--volumes-volume--e87171f3--99bc--4ef1--bdcc--eed55461cd18
Exit code: 1
Stdout: ''
Stderr: 'qemu-img: error while writing at byte 20401094656: No space left on device\n'
2021-03-26 15:11:35.881 3029 ERROR cinder.volume.volume_utils Traceback (most recent call last):
2021-03-26 15:11:35.881 3029 ERROR cinder.volume.volume_utils File "/usr/lib/python3.6/site-packages/cinder/volume/volume_utils.py", line 1144, in copy_image_to_volume
2021-03-26 15:11:35.881 3029 ERROR cinder.volume.volume_utils context, volume, image_service, image_id)
2021-03-26 15:11:35.881 3029 ERROR cinder.volume.volume_utils File "/usr/lib/python3.6/site-packages/cinder/volume/drivers/lvm.py", line 523, in copy_image_to_volume
2021-03-26 15:11:35.881 3029 ERROR cinder.volume.volume_utils size=volume['size'])
2021-03-26 15:11:35.881 3029 ERROR cinder.volume.volume_utils File "/usr/lib/python3.6/site-packages/cinder/image/image_utils.py", line 623, in fetch_to_raw
2021-03-26 15:11:35.881 3029 ERROR cinder.volume.volume_utils size=size, run_as_root=run_as_root)
2021-03-26 15:11:35.881 3029 ERROR cinder.volume.volume_utils File "/usr/lib/python3.6/site-packages/cinder/image/image_utils.py", line 729, in fetch_to_volume_format
2021-03-26 15:11:35.881 3029 ERROR cinder.volume.volume_utils run_as_root=run_as_root)
2021-03-26 15:11:35.881 3029 ERROR cinder.volume.volume_utils File "/usr/lib/python3.6/site-packages/cinder/image/image_utils.py", line 407, in convert_image
2021-03-26 15:11:35.881 3029 ERROR cinder.volume.volume_utils src_passphrase_file=src_passphrase_file)
2021-03-26 15:11:35.881 3029 ERROR cinder.volume.volume_utils File "/usr/lib/python3.6/site-packages/cinder/image/image_utils.py", line 349, in _convert_image
2021-03-26 15:11:35.881 3029 ERROR cinder.volume.volume_utils utils.execute(*cmd, run_as_root=run_as_root)
2021-03-26 15:11:35.881 3029 ERROR cinder.volume.volume_utils File "/usr/lib/python3.6/site-packages/cinder/utils.py", line 126, in execute
2021-03-26 15:11:35.881 3029 ERROR cinder.volume.volume_utils return processutils.execute(*cmd, **kwargs)
2021-03-26 15:11:35.881 3029 ERROR cinder.volume.volume_utils File "/usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py", line 441, in execute
2021-03-26 15:11:35.881 3029 ERROR cinder.volume.volume_utils cmd=sanitized_cmd)
2021-03-26 15:11:35.881 3029 ERROR cinder.volume.volume_utils oslo_concurrency.processutils.ProcessExecutionError: Unexpected error while running command.
2021-03-26 15:11:35.881 3029 ERROR cinder.volume.volume_utils Command: sudo cinder-rootwrap /etc/cinder/rootwrap.conf qemu-img convert -O raw -t none -f qcow2 /mnt/openstackvm/tmpbpz37frrpackstack.openstack.local#lvm /dev/mapper/cinder--volumes-volume--e87171f3--99bc--4ef1--bdcc--eed55461cd18
2021-03-26 15:11:35.881 3029 ERROR cinder.volume.volume_utils Exit code: 1
2021-03-26 15:11:35.881 3029 ERROR cinder.volume.volume_utils Stdout: ''
2021-03-26 15:11:35.881 3029 ERROR cinder.volume.volume_utils Stderr: 'qemu-img: error while writing at byte 20401094656: No space left on device\n'
2021-03-26 15:11:35.881 3029 ERROR cinder.volume.volume_utils
2021-03-26 15:11:35.899 3029 WARNING cinder.volume.manager [req-87414510-3b58-4b2f-beed-5da4cd50cf1c afcf8e04d2cc4a33ba5193c8ee593d7b c107b21eb64941168b780871fc5315e8 - - -] Task 'cinder.volume.flows.manager.create_volume.CreateVolumeFromSpecTask;volume:create' (ca0d7bf4-66a3-45aa-a5f4-b52335b741a3) transitioned into state 'FAILURE' from state 'RUNNING'
2021-03-26 15:11:35.899 3029 ERROR cinder.volume.manager Traceback (most recent call last):
2021-03-26 15:11:35.899 3029 ERROR cinder.volume.manager File "/usr/lib/python3.6/site-packages/cinder/volume/volume_utils.py", line 1144, in copy_image_to_volume
2021-03-26 15:11:35.899 3029 ERROR cinder.volume.manager context, volume, image_service, image_id)
2021-03-26 15:11:35.899 3029 ERROR cinder.volume.manager File "/usr/lib/python3.6/site-packages/cinder/volume/drivers/lvm.py", line 523, in copy_image_to_volume
2021-03-26 15:11:35.899 3029 ERROR cinder.volume.manager size=volume['size'])
2021-03-26 15:11:35.899 3029 ERROR cinder.volume.manager File "/usr/lib/python3.6/site-packages/cinder/image/image_utils.py", line 623, in fetch_to_raw
2021-03-26 15:11:35.899 3029 ERROR cinder.volume.manager size=size, run_as_root=run_as_root)
2021-03-26 15:11:35.899 3029 ERROR cinder.volume.manager File "/usr/lib/python3.6/site-packages/cinder/image/image_utils.py", line 729, in fetch_to_volume_format
2021-03-26 15:11:35.899 3029 ERROR cinder.volume.manager run_as_root=run_as_root)
2021-03-26 15:11:35.899 3029 ERROR cinder.volume.manager File "/usr/lib/python3.6/site-packages/cinder/image/image_utils.py", line 407, in convert_image
2021-03-26 15:11:35.899 3029 ERROR cinder.volume.manager src_passphrase_file=src_passphrase_file)
2021-03-26 15:11:35.899 3029 ERROR cinder.volume.manager File "/usr/lib/python3.6/site-packages/cinder/image/image_utils.py", line 349, in _convert_image
2021-03-26 15:11:35.899 3029 ERROR cinder.volume.manager utils.execute(*cmd, run_as_root=run_as_root)
2021-03-26 15:11:35.899 3029 ERROR cinder.volume.manager File "/usr/lib/python3.6/site-packages/cinder/utils.py", line 126, in execute
2021-03-26 15:11:35.899 3029 ERROR cinder.volume.manager return processutils.execute(*cmd, **kwargs)
2021-03-26 15:11:35.899 3029 ERROR cinder.volume.manager File "/usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py", line 441, in execute
2021-03-26 15:11:35.899 3029 ERROR cinder.volume.manager cmd=sanitized_cmd)
2021-03-26 15:11:35.899 3029 ERROR cinder.volume.manager oslo_concurrency.processutils.ProcessExecutionError: Unexpected error while running command.
2021-03-26 15:11:35.899 3029 ERROR cinder.volume.manager Command: sudo cinder-rootwrap /etc/cinder/rootwrap.conf qemu-img convert -O raw -t none -f qcow2 /mnt/openstackvm/tmpbpz37frrpackstack.openstack.local#lvm /dev/mapper/cinder--volumes-volume--e87171f3--99bc--4ef1--bdcc--eed55461cd18
2021-03-26 15:11:35.899 3029 ERROR cinder.volume.manager Exit code: 1
2021-03-26 15:11:35.899 3029 ERROR cinder.volume.manager Stdout: ''
2021-03-26 15:11:35.899 3029 ERROR cinder.volume.manager Stderr: 'qemu-img: error while writing at byte 20401094656: No space left on device\n'
2021-03-26 15:11:35.899 3029 ERROR cinder.volume.manager
2021-03-26 15:11:35.899 3029 ERROR cinder.volume.manager During handling of the above exception, another exception occurred:
2021-03-26 15:11:35.899 3029 ERROR cinder.volume.manager
2021-03-26 15:11:35.899 3029 ERROR cinder.volume.manager Traceback (most recent call last):
2021-03-26 15:11:35.899 3029 ERROR cinder.volume.manager File "/usr/lib/python3.6/site-packages/taskflow/engines/action_engine/executor.py", line 53, in _execute_task
2021-03-26 15:11:35.899 3029 ERROR cinder.volume.manager result = task.execute(**arguments)
2021-03-26 15:11:35.899 3029 ERROR cinder.volume.manager File "/usr/lib/python3.6/site-packages/cinder/volume/flows/manager/create_volume.py", line 1134, in execute
2021-03-26 15:11:35.899 3029 ERROR cinder.volume.manager **volume_spec)
2021-03-26 15:11:35.899 3029 ERROR cinder.volume.manager File "/usr/lib/python3.6/site-packages/cinder/utils.py", line 683, in _wrapper
2021-03-26 15:11:35.899 3029 ERROR cinder.volume.manager return r.call(f, *args, **kwargs)
2021-03-26 15:11:35.899 3029 ERROR cinder.volume.manager File "/usr/lib/python3.6/site-packages/tenacity/__init__.py", line 409, in call
2021-03-26 15:11:35.899 3029 ERROR cinder.volume.manager do = self.iter(retry_state=retry_state)
2021-03-26 15:11:35.899 3029 ERROR cinder.volume.manager File "/usr/lib/python3.6/site-packages/tenacity/__init__.py", line 356, in iter
2021-03-26 15:11:35.899 3029 ERROR cinder.volume.manager return fut.result()
2021-03-26 15:11:35.899 3029 ERROR cinder.volume.manager File "/usr/lib64/python3.6/concurrent/futures/_base.py", line 425, in result
2021-03-26 15:11:35.899 3029 ERROR cinder.volume.manager return self.__get_result()
2021-03-26 15:11:35.899 3029 ERROR cinder.volume.manager File "/usr/lib64/python3.6/concurrent/futures/_base.py", line 384, in __get_result
2021-03-26 15:11:35.899 3029 ERROR cinder.volume.manager raise self._exception
2021-03-26 15:11:35.899 3029 ERROR cinder.volume.manager File "/usr/lib/python3.6/site-packages/tenacity/__init__.py", line 412, in call
2021-03-26 15:11:35.899 3029 ERROR cinder.volume.manager result = fn(*args, **kwargs)
2021-03-26 15:11:35.899 3029 ERROR cinder.volume.manager File "/usr/lib/python3.6/site-packages/cinder/volume/flows/manager/create_volume.py", line 1034, in _create_from_image
2021-03-26 15:11:35.899 3029 ERROR cinder.volume.manager image_service)
2021-03-26 15:11:35.899 3029 ERROR cinder.volume.manager File "/usr/lib/python3.6/site-packages/cinder/volume/flows/manager/create_volume.py", line 925, in _create_from_image_cache_or_download
2021-03-26 15:11:35.899 3029 ERROR cinder.volume.manager image_service
2021-03-26 15:11:35.899 3029 ERROR cinder.volume.manager File "/usr/lib/python3.6/site-packages/cinder/volume/flows/manager/create_volume.py", line 738, in _create_from_image_download
2021-03-26 15:11:35.899 3029 ERROR cinder.volume.manager image_service)
2021-03-26 15:11:35.899 3029 ERROR cinder.volume.manager File "/usr/lib/python3.6/site-packages/cinder/volume/volume_utils.py", line 1149, in copy_image_to_volume
2021-03-26 15:11:35.899 3029 ERROR cinder.volume.manager raise exception.ImageCopyFailure(reason=ex.stderr)
2021-03-26 15:11:35.899 3029 ERROR cinder.volume.manager cinder.exception.ImageCopyFailure: Failed to copy image to volume: qemu-img: error while writing at byte 20401094656: No space left on device
2021-03-26 15:11:35.899 3029 ERROR cinder.volume.manager
2021-03-26 15:11:35.899 3029 ERROR cinder.volume.manager
2021-03-26 15:11:35.906 3029 WARNING cinder.volume.manager [req-87414510-3b58-4b2f-beed-5da4cd50cf1c afcf8e04d2cc4a33ba5193c8ee593d7b c107b21eb64941168b780871fc5315e8 - - -] Task 'cinder.volume.flows.manager.create_volume.CreateVolumeFromSpecTask;volume:create' (ca0d7bf4-66a3-45aa-a5f4-b52335b741a3) transitioned into state 'REVERTED' from state 'REVERTING'
2021-03-26 15:11:35.935 3029 WARNING cinder.volume.manager [req-87414510-3b58-4b2f-beed-5da4cd50cf1c afcf8e04d2cc4a33ba5193c8ee593d7b c107b21eb64941168b780871fc5315e8 - - -] Task 'cinder.volume.flows.manager.create_volume.NotifyVolumeActionTask;volume:create, create.start' (d9c32ff4-91dd-4f98-bc02-4731fb5912bc) transitioned into state 'REVERTED' from state 'REVERTING'
2021-03-26 15:11:35.937 3029 WARNING cinder.volume.manager [req-87414510-3b58-4b2f-beed-5da4cd50cf1c afcf8e04d2cc4a33ba5193c8ee593d7b c107b21eb64941168b780871fc5315e8 - - -] Task 'cinder.volume.flows.manager.create_volume.ExtractVolumeSpecTask;volume:create' (2ef60d35-ad24-472e-b465-a53e040c2f4d) transitioned into state 'REVERTED' from state 'REVERTING'
2021-03-26 15:11:35.948 3029 ERROR cinder.volume.flows.manager.create_volume [req-87414510-3b58-4b2f-beed-5da4cd50cf1c afcf8e04d2cc4a33ba5193c8ee593d7b c107b21eb64941168b780871fc5315e8 - - -] Volume e87171f3-99bc-4ef1-bdcc-eed55461cd18: create failed
2021-03-26 15:11:35.950 3029 WARNING cinder.volume.manager [req-87414510-3b58-4b2f-beed-5da4cd50cf1c afcf8e04d2cc4a33ba5193c8ee593d7b c107b21eb64941168b780871fc5315e8 - - -] Task 'cinder.volume.flows.manager.create_volume.OnFailureRescheduleTask;volume:create' (2cb69002-2c95-42f7-8a7d-79418a320b59) transitioned into state 'REVERTED' from state 'REVERTING'
2021-03-26 15:11:35.954 3029 WARNING cinder.volume.manager [req-87414510-3b58-4b2f-beed-5da4cd50cf1c afcf8e04d2cc4a33ba5193c8ee593d7b c107b21eb64941168b780871fc5315e8 - - -] Task 'cinder.volume.flows.manager.create_volume.ExtractVolumeRefTask;volume:create' (29e0fce7-6629-4a2e-9f58-d7d4de96d4c4) transitioned into state 'REVERTED' from state 'REVERTING'
2021-03-26 15:11:35.959 3029 WARNING cinder.volume.manager [req-87414510-3b58-4b2f-beed-5da4cd50cf1c afcf8e04d2cc4a33ba5193c8ee593d7b c107b21eb64941168b780871fc5315e8 - - -] Flow 'volume_create_manager' (23afb573-ff8f-43df-b65f-1ce3355be8d0) transitioned into state 'REVERTED' from state 'RUNNING'
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server [req-87414510-3b58-4b2f-beed-5da4cd50cf1c afcf8e04d2cc4a33ba5193c8ee593d7b c107b21eb64941168b780871fc5315e8 - - -] Exception during message handling: cinder.exception.ImageCopyFailure: Failed to copy image to volume: qemu-img: error while writing at byte 20401094656: No space left on device
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/cinder/volume/volume_utils.py", line 1144, in copy_image_to_volume
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server context, volume, image_service, image_id)
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/cinder/volume/drivers/lvm.py", line 523, in copy_image_to_volume
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server size=volume['size'])
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/cinder/image/image_utils.py", line 623, in fetch_to_raw
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server size=size, run_as_root=run_as_root)
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/cinder/image/image_utils.py", line 729, in fetch_to_volume_format
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server run_as_root=run_as_root)
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/cinder/image/image_utils.py", line 407, in convert_image
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server src_passphrase_file=src_passphrase_file)
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/cinder/image/image_utils.py", line 349, in _convert_image
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server utils.execute(*cmd, run_as_root=run_as_root)
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/cinder/utils.py", line 126, in execute
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server return processutils.execute(*cmd, **kwargs)
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py", line 441, in execute
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server cmd=sanitized_cmd)
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server oslo_concurrency.processutils.ProcessExecutionError: Unexpected error while running command.
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server Command: sudo cinder-rootwrap /etc/cinder/rootwrap.conf qemu-img convert -O raw -t none -f qcow2 /mnt/openstackvm/tmpbpz37frrpackstack.openstack.local#lvm /dev/mapper/cinder--volumes-volume--e87171f3--99bc--4ef1--bdcc--eed55461cd18
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server Exit code: 1
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server Stdout: ''
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server Stderr: 'qemu-img: error while writing at byte 20401094656: No space left on device\n'
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server During handling of the above exception, another exception occurred:
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server res = self.dispatcher.dispatch(message)
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, ctxt, args)
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server result = func(ctxt, **new_args)
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server File "</usr/lib/python3.6/site-packages/decorator.py:decorator-gen-751>", line 2, in create_volume
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/cinder/objects/cleanable.py", line 212, in wrapper
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server result = f(*args, **kwargs)
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/cinder/volume/manager.py", line 748, in create_volume
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server _run_flow()
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/cinder/volume/manager.py", line 740, in _run_flow
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server flow_engine.run()
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/taskflow/engines/action_engine/engine.py", line 247, in run
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server for _state in self.run_iter(timeout=timeout):
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/taskflow/engines/action_engine/engine.py", line 340, in run_iter
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server failure.Failure.reraise_if_any(er_failures)
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/taskflow/types/failure.py", line 339, in reraise_if_any
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server failures[0].reraise()
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/taskflow/types/failure.py", line 346, in reraise
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server six.reraise(*self._exc_info)
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/six.py", line 703, in reraise
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server raise value
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/taskflow/engines/action_engine/executor.py", line 53, in _execute_task
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server result = task.execute(**arguments)
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/cinder/volume/flows/manager/create_volume.py", line 1134, in execute
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server **volume_spec)
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/cinder/utils.py", line 683, in _wrapper
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server return r.call(f, *args, **kwargs)
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/tenacity/__init__.py", line 409, in call
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server do = self.iter(retry_state=retry_state)
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/tenacity/__init__.py", line 356, in iter
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server return fut.result()
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server File "/usr/lib64/python3.6/concurrent/futures/_base.py", line 425, in result
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server return self.__get_result()
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server File "/usr/lib64/python3.6/concurrent/futures/_base.py", line 384, in __get_result
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server raise self._exception
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/tenacity/__init__.py", line 412, in call
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server result = fn(*args, **kwargs)
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/cinder/volume/flows/manager/create_volume.py", line 1034, in _create_from_image
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server image_service)
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/cinder/volume/flows/manager/create_volume.py", line 925, in _create_from_image_cache_or_download
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server image_service
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/cinder/volume/flows/manager/create_volume.py", line 738, in _create_from_image_download
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server image_service)
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/cinder/volume/volume_utils.py", line 1149, in copy_image_to_volume
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server raise exception.ImageCopyFailure(reason=ex.stderr)
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server cinder.exception.ImageCopyFailure: Failed to copy image to volume: qemu-img: error while writing at byte 20401094656: No space left on device
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server
2021-03-26 15:11:35.972 3029 ERROR oslo_messaging.rpc.server
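The failing byte offset in the Stderr line tells you how much data qemu-img had already written when the device filled up, which pins down roughly how much space the conversion needed. A quick sketch (the paths and volume-group name below are typical Packstack/LVM defaults, not taken from this log):

```shell
# Byte offset where qemu-img failed (copied from the Stderr line above):
fail_byte=20401094656

# Convert to GiB to see how much had been written before the device filled:
echo "device filled after ~$(( fail_byte / 1073741824 )) GiB written"

# On the cinder-volume host, compare that against available space
# (adjust to your [DEFAULT]/image_conversion_dir and volume group):
#   df -h /var/lib/cinder/conversion
#   vgs cinder-volumes
```

Since the target here is a logical volume in the `cinder--volumes` VG, freeing or extending space in that volume group (or deleting stale volumes) is the usual remedy.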

Cinder volume creation failing on openstack

The following is the output of heat-engine.log. It says:
ResourceInError: resources.sdc_volume_data: Went to status error due to "Unknown"
2018-07-04 15:15:47.684 33906 INFO heat.engine.resource [req-c6dfeeec-ec2e-404b-a1c9-aadf516ab4e6 - admin - - -] CREATE: CinderVolume "sdc_volume_data" [1bdc9d36-a5f7-4e85-a1b2-8e3727819170] Stack "ssrr" [0dbada33-0c90-4230-b0e4-bae2dc04a455]
2018-07-04 15:15:47.684 33906 ERROR heat.engine.resource Traceback (most recent call last):
2018-07-04 15:15:47.684 33906 ERROR heat.engine.resource File "/usr/lib/python2.7/site-packages/heat/engine/resource.py", line 776, in _action_recorder
2018-07-04 15:15:47.684 33906 ERROR heat.engine.resource yield
2018-07-04 15:15:47.684 33906 ERROR heat.engine.resource File "/usr/lib/python2.7/site-packages/heat/engine/resource.py", line 878, in _do_action
2018-07-04 15:15:47.684 33906 ERROR heat.engine.resource yield self.action_handler_task(action, args=handler_args)
2018-07-04 15:15:47.684 33906 ERROR heat.engine.resource File "/usr/lib/python2.7/site-packages/heat/engine/scheduler.py", line 352, in wrapper
2018-07-04 15:15:47.684 33906 ERROR heat.engine.resource step = next(subtask)
2018-07-04 15:15:47.684 33906 ERROR heat.engine.resource File "/usr/lib/python2.7/site-packages/heat/engine/resource.py", line 829, in action_handler_task
2018-07-04 15:15:47.684 33906 ERROR heat.engine.resource done = check(handler_data)
2018-07-04 15:15:47.684 33906 ERROR heat.engine.resource File "/usr/lib/python2.7/site-packages/heat/engine/resources/openstack/cinder/volume.py", line 301, in check_create_complete
2018-07-04 15:15:47.684 33906 ERROR heat.engine.resource complete = super(CinderVolume, self).check_create_complete(vol_id)
2018-07-04 15:15:47.684 33906 ERROR heat.engine.resource File "/usr/lib/python2.7/site-packages/heat/engine/resources/volume_base.py", line 56, in check_create_complete
2018-07-04 15:15:47.684 33906 ERROR heat.engine.resource resource_status=vol.status)
2018-07-04 15:15:47.684 33906 ERROR heat.engine.resource ResourceInError: Went to status error due to "Unknown"
2018-07-04 15:15:47.684 33906 ERROR heat.engine.resource
What could be the reason for this error?
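Heat only surfaces the volume's final status ("Went to status error due to Unknown"); the actual failure reason lives in Cinder. A few diagnostic commands worth running (standard OpenStack CLI; the log path is a typical default, not something given in this question):

```shell
# List volumes stuck in error state, then inspect the one Heat created:
openstack volume list --status error
openstack volume show 1bdc9d36-a5f7-4e85-a1b2-8e3727819170

# The root cause is usually in the cinder-volume (or cinder-scheduler) log
# on the storage node, e.g. a scheduler "no valid host" or backend failure:
grep -i error /var/log/cinder/volume.log | tail -n 20
grep -i error /var/log/cinder/scheduler.log | tail -n 20
```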

OpenDaylight integration with OpenStack: Missing table in database

I have the following OpenStack infrastructure:
2 compute nodes
1 controller node
This works fine: I can create networks, routers, VMs, etc. Now I want to add an OpenDaylight controller to the cloud infrastructure using the NetVirt service. I followed the official OpenDaylight guide: http://docs.opendaylight.org/projects/netvirt/en/latest/openstack-guide/openstack-with-netvirt.html#installing-opendaylight-on-an-existing-openstack
Every step of the installation completed without any problem. I can see the Open vSwitch instances on each node managed by my SDN controller. At the end, when I restart Neutron to test whether everything is working, I see the following behavior:
[root@controller01 ~(keystone_admin)]# neutron router-create router1
Created a new router:
+-------------------------+--------------------------------------+
| Field | Value |
+-------------------------+--------------------------------------+
| admin_state_up | True |
| availability_zone_hints | |
| availability_zones | |
| created_at | 2018-04-18T08:50:30Z |
| description | |
| distributed | False |
| external_gateway_info | |
| flavor_id | |
| ha | False |
| id | 611aa06a-dca6-4637-98c9-0b9882762bd2 |
| name | router1 |
| project_id | 8e20ff8abaf14250aab8aa4db37f5b3c |
| revision_number | 3 |
| routes | |
| status | ACTIVE |
| tenant_id | 8e20ff8abaf14250aab8aa4db37f5b3c |
| updated_at | 2018-04-18T08:50:30Z |
+-------------------------+--------------------------------------+
[root@controller01 ~(keystone_admin)]# neutron net-create private
Created a new network:
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | True |
| availability_zone_hints | |
| availability_zones | |
| created_at | 2018-04-18T08:50:48Z |
| description | |
| id | d0333a22-dd9c-4522-8597-e605d0d4a5f5 |
| ipv4_address_scope | |
| ipv6_address_scope | |
| mtu | 1450 |
| name | private |
| project_id | 8e20ff8abaf14250aab8aa4db37f5b3c |
| provider:network_type | vxlan |
| provider:physical_network | |
| provider:segmentation_id | 78 |
| revision_number | 2 |
| router:external | False |
| shared | False |
| status | ACTIVE |
| subnets | |
| tags | |
| tenant_id | 8e20ff8abaf14250aab8aa4db37f5b3c |
| updated_at | 2018-04-18T08:50:48Z |
+---------------------------+--------------------------------------+
[root@controller01 ~]# curl -u admin:admin http://10.10.10.68:8080/controller/nb/v2/neutron/networks
{
"networks" : [ {
"id" : "4ad0b7ef-67b3-47ef-8595-4de5e24570a2",
"tenant_id" : "8e20ff8abaf14250aab8aa4db37f5b3c",
"project_id" : "8e20ff8abaf14250aab8aa4db37f5b3c",
"revision_number" : 2,
"name" : "private",
"admin_state_up" : true,
"status" : "ACTIVE",
"shared" : false,
"router:external" : false,
"provider:network_type" : "vxlan",
"provider:segmentation_id" : "73",
"segments" : [ ]
} ]
}
[root@controller01 ~(keystone_admin)]# neutron subnet-create private --name=private_subnet 10.10.5.0/24
Request Failed: internal server error while processing your request.
Neutron server returns request_ids: ['req-fb242532-76df-4283-98d0-0928ce958013']
Looking at neutron log file:
2018-04-18 10:51:15.447 30467 ERROR oslo.service.loopingcall ProgrammingError: (pymysql.err.ProgrammingError) (1146, u"Table 'neutron.opendaylight_periodic_task' doesn't exist") [SQL: u'SELECT opendaylight_periodic_task.state AS opendaylight_periodic_task_state, opendaylight_periodic_task.processing_operation AS opendaylight_periodic_task_processing_operation, opendaylight_periodic_task.task AS opendaylight_periodic_task_task, opendaylight_periodic_task.lock_updated AS opendaylight_periodic_task_lock_updated \nFROM opendaylight_periodic_task \nWHERE opendaylight_periodic_task.task = %(task_1)s AND opendaylight_periodic_task.lock_updated <= %(lock_updated_1)s'] [parameters: {u'task_1': 'hostconfig', u'lock_updated_1': datetime.datetime(2018, 4, 18, 10, 50, 45)}]
2018-04-18 10:51:15.447 30467 ERROR oslo.service.loopingcall
2018-04-18 10:51:16.379 30461 ERROR neutron.db.metering.metering_rpc [req-488bf35c-8e90-49cf-9517-28e736dea563 - - - - -] Unable to find agent controller01.
2018-04-18 10:51:38.478 30457 WARNING oslo_config.cfg [req-211fe59c-e7fd-4d03-8f88-20f186ff7f75 aaa93c4dbc4a4ec090f71e9061571a44 8e20ff8abaf14250aab8aa4db37f5b3c - - -] Option "rabbit_host" from group "oslo_messaging_rabbit" is deprecated for removal (Replaced by [DEFAULT]/transport_url). Its value may be silently ignored in the future.
2018-04-18 10:51:38.479 30457 WARNING oslo_config.cfg [req-211fe59c-e7fd-4d03-8f88-20f186ff7f75 aaa93c4dbc4a4ec090f71e9061571a44 8e20ff8abaf14250aab8aa4db37f5b3c - - -] Option "rabbit_port" from group "oslo_messaging_rabbit" is deprecated for removal (Replaced by [DEFAULT]/transport_url). Its value may be silently ignored in the future.
2018-04-18 10:51:38.480 30457 WARNING oslo_config.cfg [req-211fe59c-e7fd-4d03-8f88-20f186ff7f75 aaa93c4dbc4a4ec090f71e9061571a44 8e20ff8abaf14250aab8aa4db37f5b3c - - -] Option "rabbit_userid" from group "oslo_messaging_rabbit" is deprecated for removal (Replaced by [DEFAULT]/transport_url). Its value may be silently ignored in the future.
2018-04-18 10:51:38.480 30457 WARNING oslo_config.cfg [req-211fe59c-e7fd-4d03-8f88-20f186ff7f75 aaa93c4dbc4a4ec090f71e9061571a44 8e20ff8abaf14250aab8aa4db37f5b3c - - -] Option "rabbit_password" from group "oslo_messaging_rabbit" is deprecated for removal (Replaced by [DEFAULT]/transport_url). Its value may be silently ignored in the future.
2018-04-18 10:51:38.481 30457 WARNING oslo_config.cfg [req-211fe59c-e7fd-4d03-8f88-20f186ff7f75 aaa93c4dbc4a4ec090f71e9061571a44 8e20ff8abaf14250aab8aa4db37f5b3c - - -] Option "rabbit_use_ssl" from group "oslo_messaging_rabbit" is deprecated. Use option "ssl" from group "oslo_messaging_rabbit".
2018-04-18 10:51:38.507 30457 INFO neutron.quota [req-211fe59c-e7fd-4d03-8f88-20f186ff7f75 aaa93c4dbc4a4ec090f71e9061571a44 8e20ff8abaf14250aab8aa4db37f5b3c - - -] Loaded quota_driver: <neutron.db.quota.driver.DbQuotaDriver object at 0x6379210>.
2018-04-18 10:51:38.602 30457 INFO neutron.wsgi [req-211fe59c-e7fd-4d03-8f88-20f186ff7f75 aaa93c4dbc4a4ec090f71e9061571a44 8e20ff8abaf14250aab8aa4db37f5b3c - - -] 10.10.10.68 - - [18/Apr/2018 10:51:38] "POST /v2.0/routers.json HTTP/1.1" 201 697 0.272735
2018-04-18 10:51:45.465 30467 INFO networking_odl.journal.periodic_task [req-b476cffc-5bce-4893-b3c0-1a2f4f77f963 - - - - -] Starting hostconfig periodic task.
2018-04-18 10:51:45.476 30467 ERROR oslo.service.loopingcall [req-b476cffc-5bce-4893-b3c0-1a2f4f77f963 - - - - -] Fixed interval looping call 'networking_odl.journal.periodic_task.PeriodicTask.execute_ops' failed: ProgrammingError: (pymysql.err.ProgrammingError) (1146, u"Table 'neutron.opendaylight_periodic_task' doesn't exist") [SQL: u'SELECT opendaylight_periodic_task.state AS opendaylight_periodic_task_state, opendaylight_periodic_task.processing_operation AS opendaylight_periodic_task_processing_operation, opendaylight_periodic_task.task AS opendaylight_periodic_task_task, opendaylight_periodic_task.lock_updated AS opendaylight_periodic_task_lock_updated \nFROM opendaylight_periodic_task \nWHERE opendaylight_periodic_task.task = %(task_1)s AND opendaylight_periodic_task.lock_updated <= %(lock_updated_1)s'] [parameters: {u'task_1': 'hostconfig', u'lock_updated_1': datetime.datetime(2018, 4, 18, 10, 51, 15)}]
2018-04-18 10:51:45.476 30467 ERROR oslo.service.loopingcall Traceback (most recent call last):
2018-04-18 10:51:45.476 30467 ERROR oslo.service.loopingcall File "/usr/lib/python2.7/site-packages/oslo_service/loopingcall.py", line 66, in func
2018-04-18 10:51:45.476 30467 ERROR oslo.service.loopingcall return f(*args, **kwargs)
2018-04-18 10:51:45.476 30467 ERROR oslo.service.loopingcall File "/usr/lib/python2.7/site-packages/networking_odl/journal/periodic_task.py", line 96, in execute_ops
2018-04-18 10:51:45.476 30467 ERROR oslo.service.loopingcall if not forced and self.task_already_executed_recently(context):
2018-04-18 10:51:45.476 30467 ERROR oslo.service.loopingcall File "/usr/lib/python2.7/site-packages/networking_odl/journal/periodic_task.py", line 73, in task_already_executed_recently
2018-04-18 10:51:45.476 30467 ERROR oslo.service.loopingcall context.session, self.task, self.interval)
2018-04-18 10:51:45.476 30467 ERROR oslo.service.loopingcall File "/usr/lib/python2.7/site-packages/networking_odl/db/db.py", line 168, in was_periodic_task_executed_recently
2018-04-18 10:51:45.476 30467 ERROR oslo.service.loopingcall (now - delta >= (models.OpenDaylightPeriodicTask.lock_updated))
2018-04-18 10:51:45.476 30467 ERROR oslo.service.loopingcall File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line 2664, in one_or_none
2018-04-18 10:51:45.476 30467 ERROR oslo.service.loopingcall ret = list(self)
2018-04-18 10:51:45.476 30467 ERROR oslo.service.loopingcall File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line 2736, in __iter__
2018-04-18 10:51:45.476 30467 ERROR oslo.service.loopingcall return self._execute_and_instances(context)
2018-04-18 10:51:45.476 30467 ERROR oslo.service.loopingcall File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line 2751, in _execute_and_instances
2018-04-18 10:51:45.476 30467 ERROR oslo.service.loopingcall result = conn.execute(querycontext.statement, self._params)
2018-04-18 10:51:45.476 30467 ERROR oslo.service.loopingcall File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 914, in execute
2018-04-18 10:51:45.476 30467 ERROR oslo.service.loopingcall return meth(self, multiparams, params)
2018-04-18 10:51:45.476 30467 ERROR oslo.service.loopingcall File "/usr/lib64/python2.7/site-packages/sqlalchemy/sql/elements.py", line 323, in _execute_on_connection
2018-04-18 10:51:45.476 30467 ERROR oslo.service.loopingcall return connection._execute_clauseelement(self, multiparams, params)
2018-04-18 10:51:45.476 30467 ERROR oslo.service.loopingcall File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1010, in _execute_clauseelement
2018-04-18 10:51:45.476 30467 ERROR oslo.service.loopingcall compiled_sql, distilled_params
2018-04-18 10:51:45.476 30467 ERROR oslo.service.loopingcall File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1146, in _execute_context
2018-04-18 10:51:45.476 30467 ERROR oslo.service.loopingcall context)
2018-04-18 10:51:45.476 30467 ERROR oslo.service.loopingcall File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1337, in _handle_dbapi_exception
2018-04-18 10:51:45.476 30467 ERROR oslo.service.loopingcall util.raise_from_cause(newraise, exc_info)
2018-04-18 10:51:45.476 30467 ERROR oslo.service.loopingcall File "/usr/lib64/python2.7/site-packages/sqlalchemy/util/compat.py", line 200, in raise_from_cause
2018-04-18 10:51:45.476 30467 ERROR oslo.service.loopingcall reraise(type(exception), exception, tb=exc_tb)
2018-04-18 10:51:45.476 30467 ERROR oslo.service.loopingcall File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1139, in _execute_context
2018-04-18 10:51:45.476 30467 ERROR oslo.service.loopingcall context)
2018-04-18 10:51:45.476 30467 ERROR oslo.service.loopingcall File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", line 450, in do_execute
2018-04-18 10:51:45.476 30467 ERROR oslo.service.loopingcall cursor.execute(statement, parameters)
2018-04-18 10:51:45.476 30467 ERROR oslo.service.loopingcall File "/usr/lib/python2.7/site-packages/pymysql/cursors.py", line 166, in execute
2018-04-18 10:51:45.476 30467 ERROR oslo.service.loopingcall result = self._query(query)
2018-04-18 10:51:45.476 30467 ERROR oslo.service.loopingcall File "/usr/lib/python2.7/site-packages/pymysql/cursors.py", line 322, in _query
2018-04-18 10:51:45.476 30467 ERROR oslo.service.loopingcall conn.query(q)
2018-04-18 10:51:45.476 30467 ERROR oslo.service.loopingcall File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 841, in query
2018-04-18 10:51:45.476 30467 ERROR oslo.service.loopingcall self._affected_rows = self._read_query_result(unbuffered=unbuffered)
2018-04-18 10:51:45.476 30467 ERROR oslo.service.loopingcall File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 1029, in _read_query_result
2018-04-18 10:51:45.476 30467 ERROR oslo.service.loopingcall result.read()
2018-04-18 10:51:45.476 30467 ERROR oslo.service.loopingcall File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 1312, in read
2018-04-18 10:51:45.476 30467 ERROR oslo.service.loopingcall first_packet = self.connection._read_packet()
2018-04-18 10:51:45.476 30467 ERROR oslo.service.loopingcall File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 991, in _read_packet
2018-04-18 10:51:45.476 30467 ERROR oslo.service.loopingcall packet.check_error()
2018-04-18 10:51:45.476 30467 ERROR oslo.service.loopingcall File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 393, in check_error
2018-04-18 10:51:45.476 30467 ERROR oslo.service.loopingcall err.raise_mysql_exception(self._data)
2018-04-18 10:51:45.476 30467 ERROR oslo.service.loopingcall File "/usr/lib/python2.7/site-packages/pymysql/err.py", line 107, in raise_mysql_exception
2018-04-18 10:51:45.476 30467 ERROR oslo.service.loopingcall raise errorclass(errno, errval)
2018-04-18 10:51:45.476 30467 ERROR oslo.service.loopingcall ProgrammingError: (pymysql.err.ProgrammingError) (1146, u"Table 'neutron.opendaylight_periodic_task' doesn't exist") [SQL: u'SELECT opendaylight_periodic_task.state AS opendaylight_periodic_task_state, opendaylight_periodic_task.processing_operation AS opendaylight_periodic_task_processing_operation, opendaylight_periodic_task.task AS opendaylight_periodic_task_task, opendaylight_periodic_task.lock_updated AS opendaylight_periodic_task_lock_updated \nFROM opendaylight_periodic_task \nWHERE opendaylight_periodic_task.task = %(task_1)s AND opendaylight_periodic_task.lock_updated <= %(lock_updated_1)s'] [parameters: {u'task_1': 'hostconfig', u'lock_updated_1': datetime.datetime(2018, 4, 18, 10, 51, 15)}]
2018-04-18 10:51:45.476 30467 ERROR oslo.service.loopingcall
2018-04-18 10:51:56.377 30462 ERROR neutron.db.metering.metering_rpc [req-488bf35c-8e90-49cf-9517-28e736dea563 - - - - -] Unable to find agent controller01.
2018-04-18 10:52:15.497 30467 INFO networking_odl.journal.periodic_task [req-b476cffc-5bce-4893-b3c0-1a2f4f77f963 - - - - -] Starting hostconfig periodic task.
2018-04-18 10:52:15.506 30467 ERROR oslo.service.loopingcall [req-b476cffc-5bce-4893-b3c0-1a2f4f77f963 - - - - -] Fixed interval looping call 'networking_odl.journal.periodic_task.PeriodicTask.execute_ops' failed: ProgrammingError: (pymysql.err.ProgrammingError) (1146, u"Table 'neutron.opendaylight_periodic_task' doesn't exist") [SQL: u'SELECT opendaylight_periodic_task.state AS opendaylight_periodic_task_state, opendaylight_periodic_task.processing_operation AS opendaylight_periodic_task_processing_operation, opendaylight_periodic_task.task AS opendaylight_periodic_task_task, opendaylight_periodic_task.lock_updated AS opendaylight_periodic_task_lock_updated \nFROM opendaylight_periodic_task \nWHERE opendaylight_periodic_task.task = %(task_1)s AND opendaylight_periodic_task.lock_updated <= %(lock_updated_1)s'] [parameters: {u'task_1': 'hostconfig', u'lock_updated_1': datetime.datetime(2018, 4, 18, 10, 51, 45)}]
2018-04-18 10:52:15.506 30467 ERROR oslo.service.loopingcall Traceback (most recent call last):
2018-04-18 10:52:15.506 30467 ERROR oslo.service.loopingcall File "/usr/lib/python2.7/site-packages/oslo_service/loopingcall.py", line 66, in func
2018-04-18 10:52:15.506 30467 ERROR oslo.service.loopingcall return f(*args, **kwargs)
2018-04-18 10:52:15.506 30467 ERROR oslo.service.loopingcall File "/usr/lib/python2.7/site-packages/networking_odl/journal/periodic_task.py", line 96, in execute_ops
2018-04-18 10:52:15.506 30467 ERROR oslo.service.loopingcall if not forced and self.task_already_executed_recently(context):
2018-04-18 10:52:15.506 30467 ERROR oslo.service.loopingcall File "/usr/lib/python2.7/site-packages/networking_odl/journal/periodic_task.py", line 73, in task_already_executed_recently
2018-04-18 10:52:15.506 30467 ERROR oslo.service.loopingcall context.session, self.task, self.interval)
2018-04-18 10:52:15.506 30467 ERROR oslo.service.loopingcall File "/usr/lib/python2.7/site-packages/networking_odl/db/db.py", line 168, in was_periodic_task_executed_recently
2018-04-18 10:52:15.506 30467 ERROR oslo.service.loopingcall (now - delta >= (models.OpenDaylightPeriodicTask.lock_updated))
2018-04-18 10:52:15.506 30467 ERROR oslo.service.loopingcall File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line 2664, in one_or_none
2018-04-18 10:52:15.506 30467 ERROR oslo.service.loopingcall ret = list(self)
2018-04-18 10:52:15.506 30467 ERROR oslo.service.loopingcall File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line 2736, in __iter__
2018-04-18 10:52:15.506 30467 ERROR oslo.service.loopingcall return self._execute_and_instances(context)
2018-04-18 10:52:15.506 30467 ERROR oslo.service.loopingcall File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line 2751, in _execute_and_instances
2018-04-18 10:52:15.506 30467 ERROR oslo.service.loopingcall result = conn.execute(querycontext.statement, self._params)
2018-04-18 10:52:15.506 30467 ERROR oslo.service.loopingcall File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 914, in execute
2018-04-18 10:52:15.506 30467 ERROR oslo.service.loopingcall return meth(self, multiparams, params)
2018-04-18 10:52:15.506 30467 ERROR oslo.service.loopingcall File "/usr/lib64/python2.7/site-packages/sqlalchemy/sql/elements.py", line 323, in _execute_on_connection
2018-04-18 10:52:15.506 30467 ERROR oslo.service.loopingcall return connection._execute_clauseelement(self, multiparams, params)
2018-04-18 10:52:15.506 30467 ERROR oslo.service.loopingcall File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1010, in _execute_clauseelement
2018-04-18 10:52:15.506 30467 ERROR oslo.service.loopingcall compiled_sql, distilled_params
2018-04-18 10:52:15.506 30467 ERROR oslo.service.loopingcall File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1146, in _execute_context
2018-04-18 10:52:15.506 30467 ERROR oslo.service.loopingcall context)
2018-04-18 10:52:15.506 30467 ERROR oslo.service.loopingcall File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1337, in _handle_dbapi_exception
2018-04-18 10:52:15.506 30467 ERROR oslo.service.loopingcall util.raise_from_cause(newraise, exc_info)
2018-04-18 10:52:15.506 30467 ERROR oslo.service.loopingcall File "/usr/lib64/python2.7/site-packages/sqlalchemy/util/compat.py", line 200, in raise_from_cause
2018-04-18 10:52:15.506 30467 ERROR oslo.service.loopingcall reraise(type(exception), exception, tb=exc_tb)
2018-04-18 10:52:15.506 30467 ERROR oslo.service.loopingcall File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1139, in _execute_context
2018-04-18 10:52:15.506 30467 ERROR oslo.service.loopingcall context)
2018-04-18 10:52:15.506 30467 ERROR oslo.service.loopingcall File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", line 450, in do_execute
2018-04-18 10:52:15.506 30467 ERROR oslo.service.loopingcall cursor.execute(statement, parameters)
2018-04-18 10:52:15.506 30467 ERROR oslo.service.loopingcall File "/usr/lib/python2.7/site-packages/pymysql/cursors.py", line 166, in execute
2018-04-18 10:52:15.506 30467 ERROR oslo.service.loopingcall result = self._query(query)
2018-04-18 10:52:15.506 30467 ERROR oslo.service.loopingcall File "/usr/lib/python2.7/site-packages/pymysql/cursors.py", line 322, in _query
2018-04-18 10:52:15.506 30467 ERROR oslo.service.loopingcall conn.query(q)
2018-04-18 10:52:15.506 30467 ERROR oslo.service.loopingcall File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 841, in query
2018-04-18 10:52:15.506 30467 ERROR oslo.service.loopingcall self._affected_rows = self._read_query_result(unbuffered=unbuffered)
2018-04-18 10:52:15.506 30467 ERROR oslo.service.loopingcall File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 1029, in _read_query_result
2018-04-18 10:52:15.506 30467 ERROR oslo.service.loopingcall result.read()
2018-04-18 10:52:15.506 30467 ERROR oslo.service.loopingcall File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 1312, in read
2018-04-18 10:52:15.506 30467 ERROR oslo.service.loopingcall first_packet = self.connection._read_packet()
2018-04-18 10:52:15.506 30467 ERROR oslo.service.loopingcall File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 991, in _read_packet
2018-04-18 10:52:15.506 30467 ERROR oslo.service.loopingcall packet.check_error()
2018-04-18 10:52:15.506 30467 ERROR oslo.service.loopingcall File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 393, in check_error
2018-04-18 10:52:15.506 30467 ERROR oslo.service.loopingcall err.raise_mysql_exception(self._data)
2018-04-18 10:52:15.506 30467 ERROR oslo.service.loopingcall File "/usr/lib/python2.7/site-packages/pymysql/err.py", line 107, in raise_mysql_exception
2018-04-18 10:52:15.506 30467 ERROR oslo.service.loopingcall raise errorclass(errno, errval)
2018-04-18 10:52:15.506 30467 ERROR oslo.service.loopingcall ProgrammingError: (pymysql.err.ProgrammingError) (1146, u"Table 'neutron.opendaylight_periodic_task' doesn't exist") [SQL: u'SELECT opendaylight_periodic_task.state AS opendaylight_periodic_task_state, opendaylight_periodic_task.processing_operation AS opendaylight_periodic_task_processing_operation, opendaylight_periodic_task.task AS opendaylight_periodic_task_task, opendaylight_periodic_task.lock_updated AS opendaylight_periodic_task_lock_updated \nFROM opendaylight_periodic_task \nWHERE opendaylight_periodic_task.task = %(task_1)s AND opendaylight_periodic_task.lock_updated <= %(lock_updated_1)s'] [parameters: {u'task_1': 'hostconfig', u'lock_updated_1': datetime.datetime(2018, 4, 18, 10, 51, 45)}]
2018-04-18 10:52:15.506 30467 ERROR oslo.service.loopingcall
[root@controller01 ~(keystone_admin)]#
So it looks like creating objects works, but they are not stored in the database, so it is impossible to reference them. Likewise, I can create several routers/networks with the same name, and I can't list my routers or networks.
The missing table, opendaylight_periodic_task, is supposed to be created by the Python module networking-odl, which is installed.
Thanks in advance for your help.
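Installing the networking-odl package does not create its tables by itself; the plugin ships its own alembic migration branch that has to be run against the Neutron database. A sketch of the usual fix (the service name varies by distro, and on a Packstack/RDO controller it is typically neutron-server):

```shell
# Run the networking-odl migration branch against the Neutron DB:
neutron-db-manage --subproject networking-odl upgrade head

# Then restart the Neutron API service so the periodic task can use the table:
systemctl restart neutron-server
```

After this, `Table 'neutron.opendaylight_periodic_task' doesn't exist` should stop appearing, and the journal entries that sync objects to OpenDaylight can be processed, which would also explain why objects were created in Neutron but never properly journaled.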
