kolla-ansible openstack: cinder.exception.NoValidBackend: No valid backend was found - openstack

kolla-ansible (stable/ussuri) (OpenStack Ussuri)
kolla-ansible deployed OpenStack (four nodes, Ubuntu 18.04).
Cinder is configured with an existing Ceph cluster (deployed with cephadm, ceph version 15.2.8, Octopus (stable)).
cinder-volume | server1@ceph | nova | enabled | down |
How do I fix this? Thanks.
Logs:
(venv36) root@dev ~/kolla-ansible (stable/ussuri) $ cinder service-list
+------------------+------------------------+------+---------+-------+----------------------------+---------+-----------------+---------------+
| Binary | Host | Zone | Status | State | Updated_at | Cluster | Disabled Reason | Backend State |
+------------------+------------------------+------+---------+-------+----------------------------+---------+-----------------+---------------+
| cinder-backup | server1 | nova | enabled | up | 2021-02-22T07:22:53.000000 | - | - | |
| cinder-backup | server2 | nova | enabled | up | 2021-02-22T07:22:55.000000 | - | - | |
| cinder-backup | server3 | nova | enabled | up | 2021-02-22T07:22:50.000000 | - | - | |
| cinder-backup | server4 | nova | enabled | up | 2021-02-22T07:22:50.000000 | - | - | |
| cinder-scheduler | server1 | nova | enabled | up | 2021-02-22T07:22:49.000000 | - | - | |
| cinder-scheduler | server2 | nova | enabled | up | 2021-02-22T07:22:56.000000 | - | - | |
| cinder-scheduler | server3 | nova | enabled | up | 2021-02-22T07:22:53.000000 | - | - | |
| cinder-volume | server1@ceph | nova | enabled | down | 2021-02-22T01:21:03.000000 | - | - | - |
| cinder-volume | server1@rbd-1#rbd-1 | nova | enabled | down | 2021-02-21T07:41:14.000000 | - | - | - |
| cinder-volume | storage1:volumes@rbd-1 | nova | enabled | down | 2021-02-21T07:07:29.000000 | - | - | - |
+------------------+------------------------+------+---------+-------+----------------------------+---------+-----------------+---------------+
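The two extra down rows (server1@rbd-1#rbd-1 and storage1:volumes@rbd-1) appear to be leftovers from earlier backend_host values; changing the host string creates a new service record and the old one stays behind as down. They can be cleared (a sketch; in a kolla deployment cinder-manage lives inside the cinder_api container):
docker exec -it cinder_api cinder-manage service remove cinder-volume server1@rbd-1#rbd-1
docker exec -it cinder_api cinder-manage service remove cinder-volume storage1:volumes@rbd-1
That still leaves the real problem: server1@ceph itself is down, which points at the driver failing to initialize (see the config and the checks further below).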
tree /etc/kolla/config/
/etc/kolla/config/
├── cinder
│   ├── ceph.conf
│   ├── cinder-backup
│   │   ├── ceph.client.cinder-backup.keyring
│   │   └── ceph.client.cinder.keyring
│   ├── cinder-backup.conf
│   ├── cinder-volume
│   │   └── ceph.client.cinder.keyring
│   ├── cinder-volume.conf
│   └── cinder-volume.conf.bak
├── glance
│   ├── ceph.client.glance.keyring
│   ├── ceph.conf
│   └── glance-api.conf
└── nova
    ├── ceph.client.cinder.keyring
    ├── ceph.client.nova.keyring
    ├── ceph.conf
    └── nova-compute.conf
The cinder_volume and cinder_scheduler logs are as follows:
2021-02-22 14:08:20.766 7 ERROR cinder.scheduler.flows.create_volume [req-1e4cc16b-cace-4019-8ce5-9f543e758e77 2ff1ec3c53da405e90a71c993cf969eb c7675e760bbd498f9bc143cd165c4099 - default default] Failed to run task cinder.scheduler.flows.create_volume.ScheduleCreateVolumeTask;volume:create: No valid backend was found. No weighed backends available: cinder.exception.NoValidBackend: No valid backend was found. No weighed backends available
(venv36) root@dev ~/kolla-ansible (stable/ussuri) $ cat /etc/kolla/config/cinder/cinder-volume.conf
[DEFAULT]
enabled_backends=ceph
[ceph]
rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_user=cinder
rbd_pool=volumes
volume_backend_name=ceph
#backend_host=storage1:volumes
backend_host=server1
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_secret_uuid=67ca7759-ea2b-4bd0-9464-d771382b13c7
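One way to narrow down why the backend reports down (a sketch, assuming kolla's usual layout, where ceph.conf and ceph.client.cinder.keyring are copied into /etc/ceph/ inside the cinder_volume container and ceph-common is present in the image) is to test the Ceph connection with the same user and pool the [ceph] section uses:
docker exec -it cinder_volume ceph -s --id cinder           # is the cluster reachable with the cinder key?
docker exec -it cinder_volume rbd -p volumes ls --id cinder # do the cinder caps cover the volumes pool?
If either command hangs or fails with an authentication error, the RBD driver cannot initialize and cinder-volume stays down; in a kolla deployment the details typically land in /var/log/kolla/cinder/cinder-volume.log on the host.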

Related

Deploy Drupal 9 site from Ubuntu local machine to shared hosting

I am a newbie with Drupal. My Drupal 9.4.3 site dev.xyz.in, created with Composer, is ready for deployment. I have a Linux-based shared hosting plan.
My local web development environment/IDE is:
OS: Ubuntu 18.04 LTS
PHP 7.4.3
MariaDB 15.1
Local server: nginx
My website directory structure: /var/www/dev.xyz.in
dev.xyz.in
| - config
| | - sync
| | | - .htaccess
| - drush
| | - Commands
| | - sites
| | - drush.yml
| | - README.md
| - scripts
| | - composer
| - vendor
| | - composer
| | - drush
| | - bin
| | - twig
| | - symfony
| | - …. more
| - web
| | - core
| | - modules
| | | - contrib
| | | - ds
| | - profiles
| | - sites
| | | - default
| | | - default.services.yml
| | | - default.settings.php
| | | - settings.local.php
| | | - settings.php
| | - themes
| | | - contrib
| | | - custom
| | - update.php
| | - .htaccess
| | - …. more
| - .github
| - composer.json
| - composer.lock
| - load.environment.php
| - phpunit.xml.dist
| - README.md
| - .editorconfig
| - .env.example
| - .gitattributes
| - .gitignore
I have changed some settings locally on settings.php as given below:
if (file_exists($app_root . '/' . $site_path . '/settings.local.php')) {
  include $app_root . '/' . $site_path . '/settings.local.php';
}
and
$settings['trusted_host_patterns'] = [
'^localhost',
];
I have changed a setting locally in settings.local.php, from
$settings['rebuild_access'] = TRUE;
to
$settings['rebuild_access'] = FALSE;
How can I deploy my website to Linux-based shared hosting using FTP (FileZilla), and what changes are necessary before deploying the local website? Please help!
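A rough outline of the usual workflow for a Composer-built site (a sketch; the archive and dump file names are illustrative assumptions, not from the question):
composer install --no-dev --optimize-autoloader                     # production dependencies only
./vendor/bin/drush sql:dump --result-file=../dev_xyz_in.sql         # export the local database
tar czf site.tar.gz composer.json composer.lock config vendor web   # bundle for upload
Upload the archive and the SQL dump with FileZilla, import the dump through the host's database tool (usually phpMyAdmin), then edit settings.php on the server so the database credentials match the hosting account and trusted_host_patterns contains the production domain instead of ^localhost.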

Nginx root directory with nested symlinks

Our website (Laravel) project directory is like this:
/home/user/project/{app,route,public_html,storage,...}
New releases are placed in:
/home/user/releases/v1
For some reason we have to link the public_html directory for every release, so:
/home/user/releases/v1/public_html > /home/user/project/public_html
Nginx root directory is:
/home/user/www/site/public_html
Which:
/home/user/www/site > /home/user/releases/v1
Since nginx will follow symlinks, the final root directory would be:
/home/user/project/public_html
Is there a way to fix this situation?
Since the root of your virtual host is /home/user/www/site/public_html, and /home/user/www/site/public_html is a symlink to /home/user/project/public_html, and /home/user/project/public_html is a symlink to your latest release, nginx location blocks will actually search inside /home/user/releases/v1/public_html.
Here is an example. Let's say we have a folder /my_releases. Every time I publish/deploy a new release, I create a new subfolder inside /my_releases named after the version of the release (/my_releases/v1, then /my_releases/v2, and so on). All assets of the release will be inside the corresponding subfolder, so I will have:
/my_releases
|
+---- /v1
| |
| +--- /css
| | |
| | +--- /home.a120cd8.css
| |
| +--- /img
| | |
| | +--- /logo.7f40c3a.svg
| |
| +--- /js
| | |
| | +--- /main.ba4de98.js
| |
| +--- /api
| | |
| | +--- /index.php
| | |
| | +--- /routes
| | |
| | +--- /login.php
| +--- /index.html
|
+---- /v2
| |
| +--- /css
| | |
| | +--- /home.7845c7.css
| |
| +--- /img
| | |
| | +--- /logo.23038ad.svg
| |
| +--- /js
| | |
| | +--- /main.acb33f1.js
| |
| +--- /api
| | |
| | +--- /index.php
| | |
| | +--- /routes
| | |
| | +--- /login.php
| +--- /index.html
........ next releases until the end of the world
My nginx is configured in such a way that my virtual host has:
server {
server_name my.personal.web.site;
root /var/www/public_html;
.....
}
Before starting nginx, I ran the following two commands (note the ln argument order: target first, link name second):
ln -s -f -n /my_releases/v1 /my_releases/current
ln -s -f -n /my_releases/current /var/www/public_html
Then I started nginx - service nginx start. It will now serve v1 of my web site/application.
Now, any time I deploy a new release, I run the following command (replace v2 with the relevant revision):
ln -s -f -n /my_releases/v2 /my_releases/current
Don't forget to set the proper filesystem permissions and ownership.
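To verify the whole chain resolves where you expect (a quick sketch using the paths from the example above):
readlink -f /var/www/public_html           # should print /my_releases/v1 while current points at v1
namei -l /var/www/public_html/index.html   # lists every symlink hop with owner and permissions
namei also surfaces permission problems along the chain, which matters because nginx needs execute permission on every directory it traverses.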

CentOS 7.8 install of OpenStack Mitaka: image uploaded to the glance image service on the controller node has problems

While installing the image service (glance) on the controller node on CentOS 7.8 (OpenStack Mitaka), I followed the official Mitaka documentation. Step 3 says: Upload the image to the Image service using the QCOW2 disk format, bare container format, and public visibility so all projects can access it.
I execute the following command
openstack image create "cirros" --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --public
The size of the image in the output is zero. How should I investigate this problem?
[root@controller ~]# openstack image create "cirros" --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --public
+------------------+------------------------------------------------------+
| Field | Value |
+------------------+------------------------------------------------------+
| checksum | d41d8cd98f00b204e9800998ecf8427e |
| container_format | bare |
| created_at | 2020-05-24T14:45:54Z |
| disk_format | qcow2 |
| file | /v2/images/c89f6866-0c48-4ee5-84f1-bf7fa0998edf/file |
| id | c89f6866-0c48-4ee5-84f1-bf7fa0998edf |
| min_disk | 0 |
| min_ram | 0 |
| name | cirros |
| owner | a9629b19eb9348adbf02a5432dd79411 |
| protected | False |
| schema | /v2/schemas/image |
| size | 0 |
| status | active |
| tags | |
| updated_at | 2020-05-24T14:45:54Z |
| virtual_size | None |
| visibility | public |
+------------------+------------------------------------------------------+
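A telling detail, plus a sketch of how to check (assuming the image file sits in the current directory): the reported checksum d41d8cd98f00b204e9800998ecf8427e is the MD5 of zero bytes, so glance received an empty upload. Verify the source file before re-uploading:
ls -l cirros-0.3.4-x86_64-disk.img           # is the file present and non-empty?
md5sum cirros-0.3.4-x86_64-disk.img          # d41d8cd98f00b204e9800998ecf8427e means 0 bytes
qemu-img info cirros-0.3.4-x86_64-disk.img   # confirm it is a valid qcow2 image (needs qemu-img)
An empty or truncated download (for example, a failed wget that left a zero-byte file) produces exactly this result; delete the broken image with openstack image delete cirros, re-download the file, and create the image again.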

docker - multiple projects on one Dockerfile and docker-compose.yml

I'm starting with Docker and in my opinion it is great! Now I'm looking for a solution for this organization:
Now I have this structure:
Applications
| +--app1
| | +--node_modules
| | +--package.json
| | +--...
| +--app2
| | +--node_modules
| | +--package.json
| | +--...
| ....
| docker-compose.app1.yml
| docker-compose.app2.yml
| ....
| Dockerfile //my personalized image for all projects
But I want to reach this:
Applications
| +--app1
| | +--node_modules //empty in host
| | +--package.json
| | +--docker-compose.app1.yml //override compose
| | +--...
| +--app2
| | +--node_modules //empty in host
| | +--package.json
| | +--...
| ....
| +--node_modules //global node_modules folder (linked to projects)
| docker-compose.yml //principal compose
| Dockerfile //my personalized image for all projects
I'm also thinking about creating one global "server" and linking all projects as vhosts, but then how would I get access to each project?
You are looking for docker-compose extends. That permits you to override previous configurations.
web:
  extends:
    file: common-services.yml
    service: webapp
See the full documentation at: https://docs.docker.com/compose/extends/#extending-services
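A minimal end-to-end sketch of that layout (the file contents, image, and ports are illustrative assumptions, not from the question):
# Shared base definition used by every app
cat > common-services.yml <<'EOF'
version: "2.1"
services:
  webapp:
    image: node:8
    working_dir: /usr/src/app
    command: npm start
EOF
# Per-app compose file extending the base with app-specific settings
cat > app1/docker-compose.app1.yml <<'EOF'
version: "2.1"
services:
  web:
    extends:
      file: ../common-services.yml
      service: webapp
    volumes:
      - ./:/usr/src/app
    ports:
      - "3001:3000"
EOF
docker-compose -f app1/docker-compose.app1.yml up -d
Note that extends is supported up to Compose file format 2.1 (it was dropped in 3.x); layering files with multiple -f flags (docker-compose -f base.yml -f override.yml up) achieves a similar override effect.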

Cygnus install on localhost

By following this guide
https://github.com/telefonicaid/fiware-connectors/blob/master/flume/doc/quick_start_guide.md
I tried to use
/usr/cygnus/bin/cygnus-flume-ng agent --conf /usr/cygnus/conf/ -f /usr/cygnus/conf/agent_1.conf -n cygnusagent -Dflume.root.logger=INFO,console
But I got this error
time=2015-03-11T17:35:01.965CET | lvl=WARN | trans= | function=warn | comp=Cygnus | msg=org.mortbay.log.Slf4jLog[76] : failed SocketConnector@0.0.0.0:8081: java.net.BindException: Address already in use
time=2015-03-11T17:35:01.965CET | lvl=WARN | trans= | function=warn | comp=Cygnus | msg=org.mortbay.log.Slf4jLog[76] : failed Server@57c59fac: java.net.BindException: Address already in use
time=2015-03-11T17:35:01.965CET | lvl=FATAL | trans= | function=run | comp=Cygnus | msg=es.tid.fiware.fiwareconnectors.cygnus.http.JettyServer[63] : Fatal error running the Management Interface. Details=Address already in use
Besides this error, when I run service cygnus status it reports that Cygnus started correctly. But I also get:
time=2015-03-11T17:46:52.337CET | lvl=ERROR | trans= | function=run | comp=Cygnus | msg=org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable[253] : Unable to start EventDrivenSourceRunner: { source:org.apache.flume.source.http.HTTPSource{name:http-source,state:IDLE} } - Exception follows.
java.lang.IllegalStateException: Running HTTP Server found in source: http-source before I started one. Will not attempt to start.
at com.google.common.base.Preconditions.checkState(Preconditions.java:145)
at org.apache.flume.source.http.HTTPSource.start(HTTPSource.java:137)
at org.apache.flume.source.EventDrivenSourceRunner.start(EventDrivenSourceRunner.java:44)
at org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:251)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:165)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:267)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:701)
I changed the port to 8085, 8084, 8083, ... and I can see that it reads the conf but ignores this setting:
[root@alex alex]# /usr/cygnus/bin/cygnus-flume-ng agent --conf /usr/cygnus/conf -f /usr/cygnus/conf/cygnus_instance_1.conf -n cygnusagent -Dflume.root.logger=INFO,console [-p 8085]
+ exec /usr/lib/jvm/java-1.6.0-openjdk-1.6.0.34.x86_64//bin/java -Xmx20m -Dflume.root.logger=INFO,console -cp '/usr/cygnus/conf:/usr/cygnus/lib/*:/usr/cygnus/plugins.d/cygnus/lib/*:/usr/cygnus/plugins.d/cygnus/libext/*' -Djava.library.path= es.tid.fiware.fiwareconnectors.cygnus.nodes.CygnusApplication -f /usr/cygnus/conf/cygnus_instance_1.conf -n cygnusagent '[-p' '8085]'
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/cygnus/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/cygnus/plugins.d/cygnus/lib/cygnus-0.7.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
time=2015-03-11T19:47:50.882CET | lvl=INFO | trans= | function=start | comp=Cygnus | msg=org.apache.flume.node.PollingPropertiesFileConfigurationProvider[61] : Configuration provider starting
time=2015-03-11T19:47:50.895CET | lvl=INFO | trans= | function=run | comp=Cygnus | msg=org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable[133] : Reloading configuration file:/usr/cygnus/conf/cygnus_instance_1.conf
time=2015-03-11T19:47:50.906CET | lvl=WARN | trans= | function=<init> | comp=Cygnus | msg=org.apache.flume.conf.FlumeConfiguration[101] : Configuration property ignored: CONFIG_FILE = /usr/cygnus/conf/agent_1.conf
time=2015-03-11T19:47:50.907CET | lvl=WARN | trans= | function=<init> | comp=Cygnus | msg=org.apache.flume.conf.FlumeConfiguration[101] : Configuration property ignored: CONFIG_FOLDER = /usr/cygnus/conf
time=2015-03-11T19:47:50.907CET | lvl=WARN | trans= | function=<init> | comp=Cygnus | msg=org.apache.flume.conf.FlumeConfiguration[101] : Configuration property ignored: AGENT_NAME = cygnusagent
time=2015-03-11T19:47:50.907CET | lvl=WARN | trans= | function=<init> | comp=Cygnus | msg=org.apache.flume.conf.FlumeConfiguration[101] : Configuration property ignored: CYGNUS_USER = root
time=2015-03-11T19:47:50.907CET | lvl=WARN | trans= | function=<init> | comp=Cygnus | msg=org.apache.flume.conf.FlumeConfiguration[101] : Configuration property ignored: LOGFILE_NAME = cygnus.log
time=2015-03-11T19:47:50.907CET | lvl=WARN | trans= | function=<init> | comp=Cygnus | msg=org.apache.flume.conf.FlumeConfiguration[101] : Configuration property ignored: ADMIN_PORT = 8085
time=2015-03-11T19:47:50.907CET | lvl=INFO | trans= | function=validateConfiguration | comp=Cygnus | msg=org.apache.flume.conf.FlumeConfiguration[140] : Post-validation flume configuration contains configuration for agents: []
time=2015-03-11T19:47:50.908CET | lvl=WARN | trans= | function=getConfiguration | comp=Cygnus | msg=org.apache.flume.node.AbstractConfigurationProvider[138] : No configuration found for this host:cygnusagent
time=2015-03-11T19:47:50.913CET | lvl=INFO | trans= | function=startAllComponents | comp=Cygnus | msg=org.apache.flume.node.Application[138] : Starting new configuration:{ sourceRunners:{} sinkRunners:{} channels:{} }
time=2015-03-11T19:47:50.925CET | lvl=INFO | trans= | function=startManagementInterface | comp=Cygnus | msg=es.tid.fiware.fiwareconnectors.cygnus.nodes.CygnusApplication[85] : Starting a Jetty server listening on port 8081 (Management Interface)
time=2015-03-11T19:47:50.942CET | lvl=INFO | trans= | function=info | comp=Cygnus | msg=org.mortbay.log.Slf4jLog[67] : Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
time=2015-03-11T19:47:50.942CET | lvl=INFO | trans= | function=stopAllComponents | comp=Cygnus | msg=org.apache.flume.node.Application[101] : Shutting down configuration: { sourceRunners:{} sinkRunners:{} channels:{} }
time=2015-03-11T19:47:50.942CET | lvl=INFO | trans= | function=info | comp=Cygnus | msg=org.mortbay.log.Slf4jLog[67] : jetty-6.1.26
time=2015-03-11T19:47:50.942CET | lvl=INFO | trans= | function=startAllComponents | comp=Cygnus | msg=org.apache.flume.node.Application[138] : Starting new configuration:{ sourceRunners:{} sinkRunners:{} channels:{} }
time=2015-03-11T19:47:50.949CET | lvl=INFO | trans= | function=startManagementInterface | comp=Cygnus | msg=es.tid.fiware.fiwareconnectors.cygnus.nodes.CygnusApplication[85] : Starting a Jetty server listening on port 8081 (Management Interface)
time=2015-03-11T19:47:50.958CET | lvl=INFO | trans= | function=info | comp=Cygnus | msg=org.mortbay.log.Slf4jLog[67] : jetty-6.1.26
time=2015-03-11T19:47:50.978CET | lvl=WARN | trans= | function=warn | comp=Cygnus | msg=org.mortbay.log.Slf4jLog[76] : failed SocketConnector@0.0.0.0:8081: java.net.SocketException: Address already in use
time=2015-03-11T19:47:50.980CET | lvl=INFO | trans= | function=info | comp=Cygnus | msg=org.mortbay.log.Slf4jLog[67] : Started SocketConnector@0.0.0.0:8081
time=2015-03-11T19:47:50.982CET | lvl=WARN | trans= | function=warn | comp=Cygnus | msg=org.mortbay.log.Slf4jLog[76] : failed Server@6e811049: java.net.SocketException: Address already in use
time=2015-03-11T19:47:50.982CET | lvl=FATAL | trans= | function=run | comp=Cygnus | msg=es.tid.fiware.fiwareconnectors.cygnus.http.JettyServer[63] : Fatal error running the Management Interface. Details=Address already in use
Alejandro, this is a well-known bug in Cygnus 0.7.0. A new 0.7.1 version was uploaded at the beginning of this week to the FIWARE repo. Anyway, that supposedly FATAL error (it is an error, but not FATAL :)) does not affect the behaviour of Cygnus, since it only affects the Management Interface (which currently has only one method, returning the version you are running). Thus, Cygnus should be working properly on the port you have configured for the HTTPSource in your /usr/cygnus/conf/agent_1.conf file:
cygnusagent.sources.http-source.port = 5050
Before installing the new version, I recommend removing the previous one. That is, do not simply run yum install cygnus to update the existing installation, but actively yum remove cygnus and then yum install cygnus. The reason is another bug regarding the RPM deployment that was fixed within version 0.7.1 as well.
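The clean-reinstall sequence from the recommendation above, spelled out (run as root):
yum remove cygnus    # drop 0.7.0 entirely, including the buggy RPM layout
yum install cygnus   # pull the fixed 0.7.1 from the FIWARE repo
Afterwards, Cygnus should keep listening on the HTTPSource port (5050 in the snippet above) regardless of the Management Interface warning.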
