Hi, I am trying to deploy the overcloud using this link, but I am getting an error on TASK [Ensure system is NTP time synced].
How can I fix this?
fatal: [overcloud-controller-0]: FAILED! => {"changed": true, "cmd":
["chronyc", "waitsync", "20"], "delta": "0:03:10.196302", "end":
"2022-11-21 12:47:46.528790", "msg": "non-zero return code", "rc": 1,
"start": "2022-11-21 12:44:36.332488", "stderr": "", "stderr_lines":
[], "stdout": "try: 1, refid: 00000000, correction: 0.000000000, skew:
0.000 … try: 20, refid: 00000000, correction: 0.000000000, skew: 0.000", "stdout_lines": ["try: 1, refid: 00000000, correction: 0.000000000, skew: 0.000", … , "try: 20, refid: 00000000, correction: 0.000000000, skew: 0.000"]}
And my final output is
NO MORE HOSTS LEFT *************************************************************
PLAY RECAP *********************************************************************
overcloud-controller-0 : ok=179 changed=54 unreachable=0 failed=1 skipped=167 rescued=0 ignored=1
overcloud-novacompute-0 : ok=79 changed=18 unreachable=0 failed=0 skipped=127 rescued=0 ignored=0
undercloud : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
I am trying to deploy the overcloud with this command:
openstack overcloud deploy --templates --control-scale 1 --compute-scale 1
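The refid of 00000000 in the stdout above means chrony never selected a time source within the 20 tries, so the failed task is really a time-sync failure on the controller. A minimal set of checks to run on overcloud-controller-0 (a sketch; the config path assumed here is the usual /etc/chrony.conf, and the NTP server itself is whatever was passed to the deployment):

# List the configured sources and whether any of them is reachable
chronyc sources -v

# Show the current sync state (the refid stays at 00000000 until a source is selected)
chronyc tracking

# Confirm which servers actually ended up in the chrony configuration
grep -E '^(server|pool)' /etc/chrony.conf

If no source is reachable, the usual suspects are a wrong or missing NTP server value in the deployment parameters, or the controller not yet being able to reach that server over the network or DNS.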
Related
I'm trying to configure X-Pack for Elasticsearch/Kibana. I've activated the trial license for Elasticsearch, configured X-Pack for both Kibana and Elasticsearch, and generated ca.crt, node1-elk.crt and node1-elk.key, as well as kibana.key and kibana.crt. If I test against Elasticsearch with curl, using the kibana user and password and the ca.crt, it works like a charm. But if I try to access Kibana from the GUI, it says "Server is not ready yet", and the logs show "unable to verify the first certificate":
{"type":"log","#timestamp":"2021-11-16T04:41:09-05:00","tags":["error","savedobjects-service"],"pid":13250,"message":"Unable to retrieve version information from Elasticsearch nodes. unable to verify the first certificate"}
My configuration:
kibana.yml
server.name: "my-kibana"
server.host: "0.0.0.0"
elasticsearch.hosts: ["https://0.0.0.0:9200"]
server.ssl.enabled: true
server.ssl.certificate: /etc/kibana/certs/kibana.crt
server.ssl.key: /etc/kibana/certs/kibana.key
server.ssl.certificateAuthorities: ["/etc/kibana/certs/ca.crt"]
elasticsearch.username: "kibana_system"
elasticsearch.password: "kibana"
elasticsearch.yml
node.name: node1
network.host: 0.0.0.0
discovery.seed_hosts: [ "0.0.0.0" ]
cluster.initial_master_nodes: ["node1"]
xpack.security.enabled: true
xpack.security.http.ssl.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.http.ssl.key: /etc/elasticsearch/certs/node1.key
xpack.security.http.ssl.certificate: /etc/elasticsearch/certs/node1.crt
xpack.security.http.ssl.certificate_authorities: [ "/etc/elasticsearch/certs/ca.crt" ]
xpack.security.transport.ssl.key: /etc/elasticsearch/certs/node1.key
xpack.security.transport.ssl.certificate: /etc/elasticsearch/certs/node1.crt
xpack.security.transport.ssl.certificate_authorities: [ "/etc/elasticsearch/certs/ca.crt" ]
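Since both sides reference a ca.crt, a quick sanity check (just a sketch; the paths simply mirror the configs above) is to confirm that each leaf certificate really verifies against that CA:

openssl verify -CAfile /etc/kibana/certs/ca.crt /etc/kibana/certs/kibana.crt
openssl verify -CAfile /etc/elasticsearch/certs/ca.crt /etc/elasticsearch/certs/node1.crt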
curl testing:
[root@localhost kibana]# curl -XGET https://0.0.0.0:9200/_cat/nodes?v -u kibana_system:kibana --cacert /etc/elasticsearch/certs/ca.crt
ip heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
192.168.100.102 23 97 3 0.00 0.02 0.08 cdfhilmrstw * node1
I don't know what more to do here:
[root@localhost kibana]# curl -XGET https://0.0.0.0:9200/_license -u kibana_system:kibana --cacert /etc/elasticsearch/certs/ca.crt
{
"license" : {
"status" : "active",
"uid" : "872f0ad0-723e-43c8-b346-f43e2707d3de",
"type" : "trial",
"issue_date" : "2021-11-08T18:26:15.422Z",
"issue_date_in_millis" : 1636395975422,
"expiry_date" : "2021-12-08T18:26:15.422Z",
"expiry_date_in_millis" : 1638987975422,
"max_nodes" : 1000,
"issued_to" : "elasticsearch",
"issuer" : "elasticsearch",
"start_date_in_millis" : -1
}
}
Thank you for your help
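One extra thing worth inspecting (a sketch, assuming the stock openssl client is available on the Kibana host): "unable to verify the first certificate" usually means the chain the server presents cannot be built up to a trusted CA, so it helps to look at exactly what Elasticsearch sends on port 9200:

# Show the chain Elasticsearch presents; "Verify return code: 0 (ok)" means it
# validates against the given CA
openssl s_client -connect 192.168.100.102:9200 -CAfile /etc/elasticsearch/certs/ca.crt < /dev/null

Also note that Kibana verifies its outgoing connection to Elasticsearch with elasticsearch.ssl.certificateAuthorities, which is a separate setting from the server.ssl.certificateAuthorities shown in kibana.yml above.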
I am currently developing test cases in Robot Framework with Eclipse and the RED plugin to automate a test case on a Linux VM.
The code for one of the keywords goes like this:
**Check Auth Certificate**
[Documentation] To Check Whether the Authentication certificate is present or not
Log *** Needs to be implemented ***
Send Command pwd
Send Command cd /root/.ssh/
Send Command pwd
${fileExist} File Should Exist 'mqtt-server.crt'
Send Command is a custom keyword that executes commands via the Write & Read keywords and logs the result.
The problem is that the file (mqtt-server.crt) is present in this location: /root/.ssh/. From the console output, I am able to validate that control has reached the required folder.
However, when the SSHLibrary keyword File Should Exist is executed, it fails.
I want to validate if the mentioned file is present in the folder, and if it is present, it needs to be deleted.
The output in the console is
20210623 00:23:47.024 : INFO : *** Needs to be implemented ***
20210623 00:23:47.118 : INFO : pwd
20210623 00:23:48.120 : INFO : /root
[root#<host> ~]#
20210623 00:23:48.121 : INFO : Response After :: pwd - -> /root
[root#<host> ~]#
20210623 00:23:48.226 : INFO : cd /root/.ssh/
20210623 00:23:49.227 : INFO : [root#<host> .ssh]#
20210623 00:23:49.227 : INFO : ${resultCommand} = [root#<host> .ssh]#
20210623 00:23:49.228 : INFO : Response After :: cd /root/.ssh/ - -> [root#<host> .ssh]#
20210623 00:23:49.320 : INFO : pwd
20210623 00:23:50.321 : INFO : /root/.ssh
20210623 00:23:50.321 : INFO : ${resultCommand} = /root/.ssh
20210623 00:23:50.408 : INFO : ls
20210623 00:23:50.408 : INFO : ${responseCommand} = ls
20210623 00:23:52.411 : INFO : known_hosts mqtt-server.crt
20210623 00:23:52.411 : INFO : ${resultCommand} = known_hosts mqtt-server.crt
[root#<host> .ssh]#
20210623 00:23:52.413 : INFO : Response After :: ls - -> known_hosts mqtt-server.crt
[root#<host> .ssh]#
20210623 00:23:52.413 : INFO : ${listOfFiles} = None
20210623 00:23:52.415 : DEBUG : [chan 1] Max packet in: 32768 bytes
20210623 00:23:52.529 : INFO : [chan 1] Opened sftp connection (server version 3)
20210623 00:23:52.529 : DEBUG : [chan 1] normalize(b'.')
20210623 00:23:52.570 : DEBUG : [chan 1] stat(b'mqtt-server.crt')
20210623 00:23:52.603 : FAIL : File 'mqtt-server.crt' does not exist.
20210623 00:23:52.604 : DEBUG : Traceback (most recent call last):
File "D:\Program Files (x86)\Python\Python39\Lib\site-packages\SSHLibrary\library.py", line 1809, in file_should_exist
raise AssertionError("File '%s' does not exist." % path)
Ending test: Demo-Telemetry.TestCases.ConnectToJumpServer.First Jump
Can you please let me know what needs to be changed to make this work, or how this can be fixed?
Use an absolute path in the "File Should Exist" keyword. Or use the "Move Directory" keyword. I don't think that changing the directory in your custom keyword changes the working directory for the library keyword; SSHLibrary opens its own SFTP session, as the DEBUG lines above show.
${fileExist} File Should Exist /root/.ssh/mqtt-server.crt
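If the end goal is simply "delete the file when it is present", a one-line shell sketch sent over the same SSH session (assuming the custom Send Command passes it through unchanged) avoids the existence check entirely:

# rm -f makes this a no-op when mqtt-server.crt is absent
test -f /root/.ssh/mqtt-server.crt && rm -f /root/.ssh/mqtt-server.crt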
I'm encountering a problem on my production site in Symfony 4.2.8 (yes, I know...).
When I try to launch bin/console fos:elastica:populate or bin/console fos:elastica:reset, the console crashes (run with the -vvv option):
Exception trace:
() at /var/www/current/vendor/ruflin/elastica/lib/Elastica/Transport/Http.php:190
Elastica\Transport\Http->exec() at /var/www/current/vendor/ruflin/elastica/lib/Elastica/Request.php:194
Elastica\Request->send() at /var/www/current/vendor/ruflin/elastica/lib/Elastica/Client.php:689
Elastica\Client->request() at /var/www/current/vendor/friendsofsymfony/elastica-bundle/src/Elastica/Client.php:58
FOS\ElasticaBundle\Elastica\Client->request() at /var/www/current/vendor/ruflin/elastica/lib/Elastica/Client.php:721
Elastica\Client->requestEndpoint() at /var/www/current/vendor/ruflin/elastica/lib/Elastica/Index.php:586
Elastica\Index->requestEndpoint() at /var/www/current/vendor/ruflin/elastica/lib/Elastica/Index.php:225
Elastica\Index->delete() at /var/www/current/vendor/ruflin/elastica/lib/Elastica/Index.php:296
Elastica\Index->create() at /var/www/current/vendor/friendsofsymfony/elastica-bundle/src/Index/Resetter.php:110
FOS\ElasticaBundle\Index\Resetter->resetIndex() at /var/www/current/vendor/friendsofsymfony/elastica-bundle/src/Command/PopulateCommand.php:184
FOS\ElasticaBundle\Command\PopulateCommand->populateIndex() at /var/www/current/vendor/friendsofsymfony/elastica-bundle/src/Command/PopulateCommand.php:164
FOS\ElasticaBundle\Command\PopulateCommand->execute() at /var/www/current/vendor/symfony/console/Command/Command.php:255
Symfony\Component\Console\Command\Command->run() at /var/www/current/vendor/symfony/console/Application.php:926
Symfony\Component\Console\Application->doRunCommand() at /var/www/current/vendor/symfony/framework-bundle/Console/Application.php:89
Symfony\Bundle\FrameworkBundle\Console\Application->doRunCommand() at /var/www/current/vendor/symfony/console/Application.php:269
Symfony\Component\Console\Application->doRun() at /var/www/current/vendor/symfony/framework-bundle/Console/Application.php:75
Symfony\Bundle\FrameworkBundle\Console\Application->doRun() at /var/www/current/vendor/symfony/console/Application.php:145
Symfony\Component\Console\Application->run() at /var/www/current/bin/console:39
In Http.php line 190:
[Elastica\Exception\Connection\HttpException]
Couldn't connect to host, Elasticsearch down?
A curl on the ES server responds:
{
"name" : "prod",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "********",
"version" : {
"number" : "7.10.0",
"build_flavor" : "default",
"build_type" : "deb",
"build_hash" : "51e9d6f22758d0374a0f3f5c6e8f3a7997850f96",
"build_date" : "2020-11-09T21:30:33.964949Z",
"build_snapshot" : false,
"lucene_version" : "8.7.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
I have created a new branch to upgrade all the components, but I need a clue for a quick fix in production...
If somebody has an idea, thanks in advance.
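The curl shown above was run on the ES server itself; a minimal check from the machine that actually runs bin/console (a sketch: it assumes the connection is defined under the bundle's clients key in config/packages/fos_elastica.yaml, the usual location) is to compare the configured host/port with what answers locally:

# Which host/port is the bundle configured to use? (path and key assumed, adjust as needed)
grep -A 5 'clients' config/packages/fos_elastica.yaml

# Then hit that exact host/port from the same machine, e.g.:
curl -v http://127.0.0.1:9200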
When I try to use httpoison to query an elasticsearch server like
iex(1)> HTTPoison.get "http://localhost:9200"
I get
{:error, %HTTPoison.Error{id: nil, reason: :econnrefused}}.
If I do
curl -XGET "http://localhost:9200"
I get
{
"name" : "es01",
"cluster_name" : "docker-cluster",
"cluster_uuid" : "Wik-EpMkQ8ummJE6ctNAOg",
"version" : {
"number" : "7.0.1",
"build_flavor" : "default",
"build_type" : "docker",
"build_hash" : "e4efcb5",
"build_date" : "2019-04-29T12:56:03.145736Z",
"build_snapshot" : false,
"lucene_version" : "8.0.0",
"minimum_wire_compatibility_version" : "6.7.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
Does anyone know what this behavior is due to and how to fix it?
P.S.: Changing localhost to 127.0.0.1 does not solve the problem.
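(One quick thing to compare, since curl succeeds and HTTPoison does not: run curl verbosely and note which address it actually tries for localhost, then compare that with what the Erlang side resolves.)

curl -v http://localhost:9200 2>&1 | grep -i 'trying'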
Here's my setup:
elasticsearch Version: 7.0.1
{:httpoison, "~> 1.5"} #=> mix.lock shows version 1.5.1 was installed
curl results:
$ curl -XGET "http://localhost:9200"
{
"name" : "My-MacBook-Pro.local",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "vEFl3B5TTYaBxPhQFuXPyQ",
"version" : {
"number" : "7.0.1",
"build_flavor" : "default",
"build_type" : "tar",
"build_hash" : "e4efcb5",
"build_date" : "2019-04-29T12:56:03.145736Z",
"build_snapshot" : false,
"lucene_version" : "8.0.0",
"minimum_wire_compatibility_version" : "6.7.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
HTTPoison results:
$ iex -S mix
Erlang/OTP 20 [erts-9.3] [source] [64-bit] [smp:4:4] [ds:4:4:10] [async-threads:10] [hipe] [kernel-poll:false]
===> Compiling parse_trans
===> Compiling mimerl
===> Compiling metrics
===> Compiling unicode_util_compat
===> Compiling idna
==> ssl_verify_fun
Compiling 7 files (.erl)
Generated ssl_verify_fun app
===> Compiling certifi
===> Compiling hackney
==> httpoison
Compiling 3 files (.ex)
Generated httpoison app
==> hello
Compiling 15 files (.ex)
Generated hello app
Interactive Elixir (1.6.6) - press Ctrl+C to exit (type h() ENTER for help)
iex(1)> HTTPoison.get "http://localhost:9200"
{:ok,
%HTTPoison.Response{
body: "{\n \"name\" : \"My-MacBook-Pro.local\",\n \"cluster_name\" :
\"elasticsearch\",\n \"cluster_uuid\" : \"vEFl3B5TTYaBxPhQFuXPyQ\",\n
\"version\" : {\n \"number\" : \"7.0.1\",\n \"build_flavor\" :
\"default\",\n \"build_type\" : \"tar\",\n \"build_hash\" :
\"e4efcb5\",\n \"build_date\" : \"2019-04-29T12:56:03.145736Z\",\n
\"build_snapshot\" : false,\n \"lucene_version\" : \"8.0.0\",\n
\"minimum_wire_compatibility_version\" : \"6.7.0\",\n
\"minimum_index_compatibility_version\" : \"6.0.0-beta1\"\n },\n
\"tagline\" : \"You Know, for Search\"\n}\n",
headers: [
{"content-type", "application/json; charset=UTF-8"},
{"content-length", "522"}
],
request: %HTTPoison.Request{
body: "",
headers: [],
method: :get,
options: [],
params: %{},
url: "http://localhost:9200"
},
request_url: "http://localhost:9200",
status_code: 200
}}
iex(2)>
Next, I stopped the elasticsearch server, then I ran the HTTPoison request again:
ex(2)> HTTPoison.get "http://localhost:9200"
{:error, %HTTPoison.Error{id: nil, reason: :econnrefused}}
I got similar results for the curl request:
$ curl -XGET "http://localhost:9200"
curl: (7) Failed to connect to localhost port 9200: Connection refused
What happens if you issue two curl requests in a row? Do they both succeed? Try issuing the HTTPoison request first, then the curl request. Which one fails? Try the reverse order. Same results?
I'm almost positive you have the same problem that I did. Check to make sure you are not forcing ipv6 in your /etc/hosts file.
If you have something like this:
::1 localhost
...get rid of it and HTTPoison should work again.
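A quick way to check before editing anything (just a grep; it only shows how localhost is mapped):

# See whether localhost is mapped only to ::1 (IPv6) or also to 127.0.0.1
grep -n 'localhost' /etc/hosts

If localhost turns out to be mapped only to ::1, that is exactly the situation described above.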
I was trying to print a document for one of my games, but the page viewer couldn't see the printer, so I checked the Print Spooler service:
C:\WINDOWS\system32>sc qc spooler
[SC] QueryServiceConfig SUCCESS
SERVICE_NAME: spooler
TYPE : 110 WIN32_OWN_PROCESS (interactive)
START_TYPE : 2 AUTO_START
ERROR_CONTROL : 1 NORMAL
BINARY_PATH_NAME : C:\WINDOWS\System32\spoolsv.exe
LOAD_ORDER_GROUP : SpoolerGroup
TAG : 0
DISPLAY_NAME : Print Spooler
DEPENDENCIES : RPCSS
: http
SERVICE_START_NAME : LocalSystem
C:\WINDOWS\system32>sc query spooler
SERVICE_NAME: spooler
TYPE : 110 WIN32_OWN_PROCESS (interactive)
STATE : 1 STOPPED
WIN32_EXIT_CODE : 1068 (0x42c)
SERVICE_EXIT_CODE : 0 (0x0)
CHECKPOINT : 0x0
WAIT_HINT : 0x0
C:\WINDOWS\system32>
And when I tried to start it, this happened:
C:\WINDOWS\system32>net start spooler
System error 1068 has occurred.
The dependency service or group failed to start.
C:\WINDOWS\system32>
Ok so I checked the dependencies
C:\WINDOWS\system32>sc qc rpcss
[SC] QueryServiceConfig SUCCESS
SERVICE_NAME: rpcss
TYPE : 20 WIN32_SHARE_PROCESS
START_TYPE : 2 AUTO_START
ERROR_CONTROL : 1 NORMAL
BINARY_PATH_NAME : C:\WINDOWS\system32\svchost.exe -k rpcss
LOAD_ORDER_GROUP : COM Infrastructure
TAG : 0
DISPLAY_NAME : Remote Procedure Call (RPC)
DEPENDENCIES : RpcEptMapper
: DcomLaunch
SERVICE_START_NAME : NT AUTHORITY\NetworkService
C:\WINDOWS\system32>sc query rpcss
SERVICE_NAME: rpcss
TYPE : 20 WIN32_SHARE_PROCESS
STATE : 4 RUNNING
(NOT_STOPPABLE, NOT_PAUSABLE, IGNORES_SHUTDOWN)
WIN32_EXIT_CODE : 0 (0x0)
SERVICE_EXIT_CODE : 0 (0x0)
CHECKPOINT : 0x0
WAIT_HINT : 0x0
C:\WINDOWS\system32>
Ok RPCSS is good, next one
C:\WINDOWS\system32>sc qc http && sc query http
[SC] QueryServiceConfig SUCCESS
SERVICE_NAME: http
TYPE : 1 KERNEL_DRIVER
START_TYPE : 3 DEMAND_START
ERROR_CONTROL : 1 NORMAL
BINARY_PATH_NAME : system32\drivers\HTTP.sys
LOAD_ORDER_GROUP :
TAG : 0
DISPLAY_NAME : HTTP Service
DEPENDENCIES :
SERVICE_START_NAME :
SERVICE_NAME: http
TYPE : 1 KERNEL_DRIVER
STATE : 1 STOPPED
WIN32_EXIT_CODE : 1009 (0x3f1)
SERVICE_EXIT_CODE : 0 (0x0)
CHECKPOINT : 0x0
WAIT_HINT : 0x0
C:\WINDOWS\system32>
OK, seeing it stopped, I tried to start it again:
C:\WINDOWS\system32>net start http
System error 1009 has occurred.
The configuration registry database is corrupt.
C:\WINDOWS\system32>
So I ran SFC to try to fix this, BUT...
C:\WINDOWS\system32>sfc /scannow
Beginning system scan. This process will take some time.
Beginning verification phase of system scan.
Verification 100% complete.
Windows Resource Protection did not find any integrity violations.
C:\WINDOWS\system32>
A fat lot of help this is; it can't even fix something so inherently wrong...
So this is where I ask the community for help; I don't know what to do past this point. Help is very much appreciated.
In my case, I had a sub-key under
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\HTTP\Parameters\SslBindingInfo that was missing information: normally each sub-key, such as 0.0.0.0:40015, has values like "AppId", "DefaultFlags", etc.
I had one that had no values under it. I deleted that "empty" key and HTTP was able to start up.
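For anyone who wants to look at those sub-keys before deleting anything, they can be dumped from the same elevated prompt (reg query is built in; the path is the one mentioned above):

C:\WINDOWS\system32>reg query HKLM\SYSTEM\CurrentControlSet\Services\HTTP\Parameters\SslBindingInfo /s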