MariaDB [ERROR] WSREP: context: library has no ciphers - mariadb

I have a mariadb/galera problem.
I updated my openSUSE Tumbleweed server.
I think the relevant changes are:
OpenSSL 1.0.x -> OpenSSL 1.1.x
mariadb-10.1.25 -> mariadb-10.2.13
galera 3.23.20 -> galera 3.23.20 (not changed)
After the update, the mariadb server didn't start anymore with this error:
2018-04-06 13:12:13 139995648866240 [Note] WSREP: wsrep_load(): Galera 3.23(rac090bc) by Codership Oy <info@codership.com> loaded successfully.
2018-04-06 13:12:13 139995648866240 [Note] WSREP: CRC-32C: using hardware acceleration.
2018-04-06 13:12:13 139995648866240 [Note] WSREP: Found saved state: c5779b6f-035a-11e7-89db-52dab15b6e6b:34767967, safe_to_bootstrap: 0
2018-04-06 13:12:13 139995648866240 [Note] WSREP: Passing config to GCS: base_dir = /srv/mysql/; base_host = x.x.x.x; base_port = 4567; cert.log_conflicts = no; debug = no; evs.auto_evict = 0; evs.delay_margin = PT1S; evs.delayed_keep_period = PT30S; evs.inactive_check_period = PT0.5S; evs.inactive_timeout = PT15S; evs.join_retrans_period = PT1S; evs.max_install_timeouts = 3; evs.send_window = 4; evs.stats_report_period = PT1M; evs.suspect_timeout = PT5S; evs.user_send_window = 2; evs.view_forget_timeout = PT24H; gcache.dir = /srv/mysql/; gcache.keep_pages_size = 0; gcache.mem_size = 0; gcache.name = /srv/mysql//galera.cache; gcache.page_size = 128M; gcache.recover = no; gcache.size = 128M; gcomm.thread_prio = ; gcs.fc_debug = 0; gcs.fc_factor = 1.0; gcs.fc_limit = 16; gcs.fc_master_slave = no; gcs.max_packet_size = 64500; gcs.max_throttle = 0.25; gcs.recv_q_hard_limit = 9223372036854775807; gcs.recv_q_soft_limit = 0.25; gcs.sync_donor = no; gmcast.segment = 0; gmcast.version = 0; pc.announce_timeout = PT3S; pc.checksum = false; pc.ignore_quoru
2018-04-06 13:12:13 139995648866240 [Note] WSREP: MemPool(SlaveTrxHandle): hit ratio: 0, misses: 0, in use: 0, in pool: 0
2018-04-06 13:12:13 139995648866240 [Note] WSREP: Flushing memory map to disk...
2018-04-06 13:12:13 139995648866240 [ERROR] WSREP: context: library has no ciphers
2018-04-06 13:12:13 139995648866240 [ERROR] WSREP: wsrep::init() failed: 7, must shutdown
2018-04-06 13:12:13 139995648866240 [ERROR] Aborting
As far as I understand it, it's a problem with the OpenSSL library: it couldn't read the supported ciphers (or something like that).
But I haven't enabled SSL for wsrep. Could I somehow tell mariadb/galera not to even try to enable SSL?
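For the record, the Galera provider does have a switch for this. A hedged sketch, not a confirmed fix, of telling it explicitly not to initialise SSL in the server configuration (the option name is taken from the Galera parameter reference, so double-check it against your galera build):

[mysqld]
# assumption: no other provider options are in use; otherwise append this to the existing list
wsrep_provider_options = "socket.ssl=no"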

I downloaded the source for libgalera_smm.so from the Galera website.
I had to rewrite some lines in the file "SConstruct" to make it Python 3 compatible.
After compiling the source, wsrep works fine again.
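For reference, a rough outline of that rebuild (the library path is an assumption for openSUSE, and the SConstruct edits for Python 3 are not shown):

# build libgalera_smm.so with scons from inside the galera source tree
scons
# install it over the packaged library (target path assumed) and restart the server
sudo cp libgalera_smm.so /usr/lib64/galera/libgalera_smm.so
sudo systemctl restart mariadb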

Related

vault dial tcp 127.0.0.1:8200: connect: connection refused

I'm trying to run a Vault instance on AWS. When I run the command vault operator init -key-shares=5 -key-threshold=3 -format json from an Ansible role, I get this error:
fatal: [vault]: FAILED! => {"changed": true, "cmd": "vault operator init -key-shares=5 -key-threshold=3 -format json", "delta": "0:00:00.054870", "end": "2021-12-12 14:30:50.956504", "msg": "non-zero return code", "rc": 2, "start": "2021-12-12 14:30:50.901634", "stderr": "Error initializing: Put \"http://127.0.0.1:8200/v1/sys/init\": dial tcp 127.0.0.1:8200: connect: connection refused", "stderr_lines": ["Error initializing: Put \"http://127.0.0.1:8200/v1/sys/init\": dial tcp 127.0.0.1:8200: connect: connection refused"], "stdout": "", "stdout_lines": []}
When I'm on my Vault server and run service vault status, I get this result:
vault.service - a tool for managing secrets
Loaded: loaded (/etc/systemd/system/vault.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sun 2021-12-12 14:19:47 UTC; 6min ago
Docs: https://vaultproject.io/docs/
Process: 5152 ExecStart=/usr/local/bin/vault server -config=/etc/vault.hcl (code=exited, status=213/SECUREBITS)
Main PID: 5152 (code=exited, status=213/SECUREBITS)
Dec 12 14:19:47 ip-172-31-37-194 systemd[1]: Started a tool for managing secrets.
Dec 12 14:19:47 ip-172-31-37-194 systemd[5152]: vault.service: Failed to set process secure bits: Operation not perm
Dec 12 14:19:47 ip-172-31-37-194 systemd[5152]: vault.service: Failed at step SECUREBITS spawning /usr/local/bin/vau
Dec 12 14:19:47 ip-172-31-37-194 systemd[1]: vault.service: Main process exited, code=exited, status=213/SECUREBITS
Dec 12 14:19:47 ip-172-31-37-194 systemd[1]: vault.service: Failed with result 'exit-code'.
Here are my two config files:
vault.hcl:
disable_mlock = true
listener "tcp" {
  address = "http://{{ listener_address }}"
  tls_disable = 1
}
backend "file" {
  path = "/var/lib/vault"
}
My vault.service:
[Unit]
Description=a tool for managing secrets
Documentation=https://vaultproject.io/docs/
After=network.target
ConditionFileNotEmpty=/etc/vault.hcl
[Service]
User=vault
Group=vault
ExecStart=/usr/local/bin/vault server -config=/etc/vault.hcl
ExecReload=/usr/local/bin/kill --signal HUP $MAINPID
CapabilityBoundingSet=CAP_SYSLOG CAP_IPC_LOCK
Capabilities=CAP_IPC_LOCK+ep
SecureBits=keep-caps
NoNewPrivileges=yes
KillSignal=SIGINT
[Install]
WantedBy=multi-user.target
I haven't found anything yet that could unblock this situation; if someone has an idea, I'd appreciate it.
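A note on the status output above: exit status 213/SECUREBITS means systemd itself failed to apply the SecureBits= directive before Vault was ever executed, so the connection refused from the CLI is just a consequence of the server not running. As a diagnostic only (it relaxes the unit's hardening), one could check whether that directive is the culprit by clearing it in a drop-in, roughly like this:

# hypothetical drop-in override purely to confirm SecureBits= is what is failing;
# `systemctl edit vault` opens an editor where the two commented lines below go
sudo systemctl edit vault
#   [Service]
#   SecureBits=
sudo systemctl daemon-reload
sudo systemctl restart vault
systemctl status vault --no-pager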

Nginx Core Dump

In my Nginx, I keep getting:
2020/10/02 06:08:08 [alert] 35#35: *94913 open socket #39 left in connection 8
2020/10/02 06:08:08 [alert] 35#35: *94899 open socket #23 left in connection 9
2020/10/02 06:08:08 [alert] 35#35: *94886 open socket #32 left in connection 15
2020/10/02 06:08:08 [alert] 35#35: *94903 open socket #40 left in connection 24
2020/10/02 06:08:08 [alert] 35#35: *94911 open socket #17 left in connection 31
2020/10/02 06:08:08 [alert] 35#35: *94895 open socket #37 left in connection 36
2020/10/02 06:08:08 [alert] 35#35: *94909 open socket #43 left in connection 40
2020/10/02 06:08:08 [alert] 35#35: *94898 open socket #20 left in connection 47
2020/10/02 06:08:08 [alert] 35#35: *94887 open socket #33 left in connection 54
2020/10/02 06:08:08 [alert] 35#35: *94893 open socket #36 left in connection 55
2020/10/02 06:08:08 [alert] 35#35: *94890 open socket #35 left in connection 58
2020/10/02 06:08:08 [alert] 35#35: *94889 open socket #34 left in connection 63
2020/10/02 06:08:08 [alert] 35#35: *94896 open socket #18 left in connection 71
2020/10/02 06:08:08 [alert] 35#35: aborting
2020/10/02 06:08:08 [alert] 33#33: worker process 35 exited on signal 6 (core dumped)
So I enabled a core dump by adding this to my nginx.conf
worker_rlimit_core 500M;
working_directory /tmp;
debug_points abort;
When the core is dumped, I used gdb as follows:
gdb /usr/sbin/nginx /tmp/somecorefile.xxxx
backtrace full
And I am getting this output.
warning: Unexpected size of section `.reg-xstate/35' in core file.
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Core was generated by `nginx: worker pr'.
Program terminated with signal SIGABRT, Aborted.
warning: Unexpected size of section `.reg-xstate/35' in core file.
#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
50 ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) backtrace full
#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
set = {__val = {0, 93898407404718, 206158430256, 140725383938616, 140725383938416, 712653358923658240, 2, 6, 93898408092161, 2, 6, 8, 93898411815600,
140725383938672, 140725383938600, 140725383938672}}
pid = <optimized out>
tid = <optimized out>
ret = <optimized out>
#1 0x00007fc51604f535 in __GI_abort () at abort.c:79
save_stage = 1
act = {__sigaction_handler = {sa_handler = 0x7fc515dc62c0, sa_sigaction = 0x7fc515dc62c0}, sa_mask = {__val = {712653358923658240, 140484452045760,
0, 140484452041200, 140484451262672, 1, 0, 93898411799280, 93898407566008, 140484451264536, 12, 93898408473376, 0, 140484452049600,
140484452057768, 93898411799280}}, sa_flags = 368698904, sa_restorer = 0x55666d7af2f0}
sigs = {__val = {32, 0 <repeats 15 times>}}
#2 0x000055666d377613 in ?? ()
No symbol table info available.
#3 0x000055666d3a2d68 in ?? ()
No symbol table info available.
#4 0x000055666d3a3a22 in ?? ()
No symbol table info available.
#5 0x000055666d3a1dc3 in ngx_spawn_process ()
No symbol table info available.
#6 0x000055666d3a2fd0 in ?? ()
No symbol table info available.
#7 0x000055666d3a4654 in ngx_master_process_cycle ()
No symbol table info available.
#8 0x000055666d3782bb in main ()
No symbol table info available.
And this is from a separate core dump:
Type "apropos word" to search for commands related to "word"...
Reading symbols from /usr/sbin/nginx...(no debugging symbols found)...done.
[New LWP 26510]
warning: Unexpected size of section `.reg-xstate/26510' in core file.
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Core was generated by `nginx: worker pr'.
Program terminated with signal SIGABRT, Aborted.
warning: Unexpected size of section `.reg-xstate/26510' in core file.
#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
50 ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) backtrace full
#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
set = {__val = {0, 9223372036854775807, 0, 0, 0, 60, 5, 0, 1288, 0, 10, 94846235324710, 1, 0, 94846235939456, 3560692332834648576}}
pid = <optimized out>
tid = <optimized out>
ret = <optimized out>
#1 0x00007f0e3ff4d535 in __GI_abort () at abort.c:79
save_stage = 1
act = {__sigaction_handler = {sa_handler = 0x56431c2a3268, sa_sigaction = 0x56431c2a3268}, sa_mask = {__val = {3560692332834648576, 94846233719536,
0, 139699176612336, 139699175833808, 1, 0, 94846233719536, 94846217345720, 139699175834328, 12, 94846218253088, 0, 139699176624816,
139699176621944, 94846233719536}}, sa_flags = 1072285208, sa_restorer = 0x56431c11b2f0}
sigs = {__val = {32, 0 <repeats 15 times>}}
#2 0x000056431b14f613 in ?? ()
No symbol table info available.
#3 0x000056431b17ad68 in ?? ()
No symbol table info available.
#4 0x000056431b17ba22 in ?? ()
No symbol table info available.
#5 0x000056431b179dc3 in ngx_spawn_process ()
No symbol table info available.
#6 0x000056431b17afd0 in ?? ()
No symbol table info available.
#7 0x000056431b17c654 in ngx_master_process_cycle ()
No symbol table info available.
#8 0x000056431b1502bb in main ()
No symbol table info available.
My nginx version is as follows:
nginx version: nginx/1.14.2
built with OpenSSL 1.1.1d 10 Sep 2019
TLS SNI support enabled
configure arguments: --with-cc-opt='-g -O2 -fdebug-prefix-map=/build/nginx-Cjs4TR/nginx-1.14.2=. -fstack-protector-strong -Wformat -Werror=format-security -fPIC -Wdate-time -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-z,relro -Wl,-z,now -fPIC' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --modules-path=/usr/lib/nginx/modules --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-http_v2_module --with-http_dav_module --with-http_slice_module --with-threads --with-http_addition_module --with-http_geoip_module=dynamic --with-http_gunzip_module --with-http_gzip_static_module --with-http_image_filter_module=dynamic --with-http_sub_module --with-http_xslt_module=dynamic --with-stream=dynamic --with-stream_ssl_module --with-stream_ssl_preread_module --with-mail=dynamic --with-mail_ssl_module --add-dynamic-module=/build/nginx-Cjs4TR/nginx-1.14.2/debian/modules/http-auth-pam --add-dynamic-module=/build/nginx-Cjs4TR/nginx-1.14.2/debian/modules/http-dav-ext --add-dynamic-module=/build/nginx-Cjs4TR/nginx-1.14.2/debian/modules/http-echo --add-dynamic-module=/build/nginx-Cjs4TR/nginx-1.14.2/debian/modules/http-upstream-fair --add-dynamic-module=/build/nginx-Cjs4TR/nginx-1.14.2/debian/modules/http-subs-filter
Does anyone know what any of this means? And is this the correct place to report this problem?
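One observation on the traces above: nearly every frame shows ?? with no symbol table because the packaged worker binary is stripped, so the backtraces don't yet say where the abort comes from. A sketch of getting a symbolised trace, with the Debian/Ubuntu debug-symbol package name assumed (it may instead be nginx-dbgsym from the ddebs archive depending on the release):

# install debug symbols for the packaged nginx build (package name assumed)
apt-get install nginx-dbg
# re-open the same core with the same binary and capture a full, symbolised backtrace
gdb /usr/sbin/nginx /tmp/somecorefile.xxxx -ex "set pagination off" -ex "bt full" -ex quit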

JFrog Artifactory fails to connect to PostgreSQL database

I followed these guides on installing JFrog Artifactory OSS using RPM/Yum with an external PostgreSQL database:
https://www.jfrog.com/confluence/display/JFROG/Installing+Artifactory
https://www.jfrog.com/confluence/display/RTF6X/PostgreSQL
SELinux is disabled and jfrog-artifactory-oss is installed from the JFrog repository [https://jfrog.bintray.com/artifactory-rpms].
Check the service:
[root@jfrog ~]# systemctl status artifactory -l
● artifactory.service - Artifactory service
Loaded: loaded (/usr/lib/systemd/system/artifactory.service; enabled; vendor preset: disabled)
Active: active (running) since Sat 2020-08-08 01:56:50 +08; 11min ago
Process: 9714 ExecStop=/opt/jfrog/artifactory/app/bin/artifactoryManage.sh stop (code=exited, status=0/SUCCESS)
Process: 10268 ExecStart=/opt/jfrog/artifactory/app/bin/artifactoryManage.sh start (code=exited, status=0/SUCCESS)
Main PID: 12388 (java)
CGroup: /system.slice/artifactory.service
‣ 12388 /opt/jfrog/artifactory/app/third-party/java/bin/java -Djava.util.logging.config.file=/opt/jfrog/artifactory/app/artifactory/tomcat/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Djdk.tls.ephemeralDHKeySize=2048 -Djava.protocol.handler.pkgs=org.apache.catalina.webresources -Dorg.apache.catalina.security.SecurityListener.UMASK=0027 -server -Xss256k -XX:+UseG1GC -XX:OnOutOfMemoryError=kill -9 %p --add-opens java.base/java.util=ALL-UNNAMED --add-opens java.base/java.lang.reflect=ALL-UNNAMED --add-opens java.base/java.lang.invoke=ALL-UNNAMED --add-opens java.base/java.text=ALL-UNNAMED --add-opens java.base/java.nio=ALL-UNNAMED --add-opens java.desktop/java.awt.font=ALL-UNNAMED -Dfile.encoding=UTF8 -Djruby.compile.invokedynamic=false -Djruby.bytecode.version=1.8 -Dorg.apache.tomcat.util.buf.UDecoder.ALLOW_ENCODED_SLASH=true -Djava.security.egd=file:/dev/./urandom -Dartdist=rpm -Djf.product.home=/opt/jfrog/artifactory -Xms512m -Xmx3g -Djruby.bytecode.version=1.8 -Dartifactory.metadata.native.ui=true -Dignore.endorsed.dirs= -classpath /opt/jfrog/artifactory/app/artifactory/tomcat/bin/bootstrap.jar:/opt/jfrog/artifactory/app/artifactory/tomcat/bin/tomcat-juli.jar -Dcatalina.base=/opt/jfrog/artifactory/app/artifactory/tomcat -Dcatalina.home=/opt/jfrog/artifactory/app/artifactory/tomcat -Djava.io.tmpdir=/opt/jfrog/artifactory/var/work/artifactory/tomcat/temp org.apache.catalina.startup.Bootstrap start
Aug 08 01:56:50 jfrog artifactoryManage.sh[10268]: 2020-08-07T17:56:50.027Z [shell] [INFO ] [] [systemYamlHelper.sh:462 ] [main] - Resolved shared.logging.consoleLog.enabled (true) from /opt/jfrog/artifactory/var/etc/system.yaml
Aug 08 01:56:50 jfrog artifactoryManage.sh[10268]: JF_METADATA_ACCESSCLIENT_URL: http://localhost:8081/access
Aug 08 01:56:50 jfrog artifactoryManage.sh[10268]: metadata started. PID: 12988
Aug 08 01:56:50 jfrog su[13048]: (to artifactory) root on none
Aug 08 01:56:50 jfrog artifactoryManage.sh[10268]: Starting frontend...
Aug 08 01:56:50 jfrog artifactoryManage.sh[10268]: frontend not running. Proceed to start it up.
Aug 08 01:56:50 jfrog artifactoryManage.sh[10268]: 2020-08-07T17:56:50.317Z [shell] [INFO ] [] [systemYamlHelper.sh:462 ] [main] - Resolved shared.logging.consoleLog.enabled (true) from /opt/jfrog/artifactory/var/etc/system.yaml
Aug 08 01:56:50 jfrog artifactoryManage.sh[10268]: frontend started. PID: 13147
Aug 08 01:56:50 jfrog systemd[1]: Started Artifactory service.
Aug 08 01:56:51 jfrog artifactoryManage.sh[10268]: 2020-08-07T17:56:51.003Z [shell] [INFO ] [] [systemYamlHelper.sh:462 ] [main] - Resolved shared.logging.consoleLog.enabled (true) from /opt/jfrog/artifactory/var/etc/system.yaml
[root@jfrog ~]#
Test:
[root@jfrog ~]# curl -I http://localhost:8082/ui/
HTTP/1.1 503 Service Unavailable
Date: Fri, 07 Aug 2020 18:08:50 GMT
Content-Length: 19
Content-Type: text/plain; charset=utf-8
[root@jfrog ~]#
/opt/jfrog/artifactory/var/log/console.log shows the following errors:
[DEBUG] Resolved system configuration file path: /opt/jfrog/artifactory/var/etc/system.yaml
No ssl parameter found, falling back to sslmode=disable
2020-08-07T17:56:50.179Z [jfmd ] [INFO ] [1462831a45a25233] [database_bearer.go:84 ] [main ] - Connecting to (db config: {postgresql user='jfroguser' password='***' dbname=jfrogdb host=dbserver.example.com port= sslmode=disable}) [database]
2020-08-07T17:56:50.216Z [jfmd ] [ERROR] [1462831a45a25233] [database_bearer.go:68 ] [main ] - Could not initialize database (db config: {postgresql user='jfroguser' password='***' dbname=jfrogdb host=dbserver.example.com port= sslmode=disable}): error connecting to database
jfrog.com/metadata/services/common/db.(*databaseBearer).init
/src/jfrog.com/metadata/services/common/db/database_bearer.go:114
jfrog.com/metadata/services/common/db.NewDatabaseBearer
/src/jfrog.com/metadata/services/common/db/database_bearer.go:66
main.main
/src/jfrog.com/metadata/metadata.go:38
runtime.main
/src/runtime/proc.go:203
runtime.goexit
/src/runtime/asm_amd64.s:1373
goroutine 1 [running]:
runtime/debug.Stack(0x38, 0xc00015c040, 0xc00032c080)
/src/runtime/debug/stack.go:24 +0x9d
jfrog.com/jfrog-go-commons/pkg/log.(*standardLogger).Panicfc(0xc00043bda0, 0x166e420, 0xc000142750, 0x13eb133, 0x32, 0xc00032c080, 0x2, 0x2)
/src/jfrog.com/go-commons/pkg/log/standard_logger.go:42 +0x6a
jfrog.com/metadata/services/common/db.NewDatabaseBearer(0x166e420, 0xc000142750, 0x166f220, 0xc00007f770, 0x1673460, 0xc0000c97c0, 0x1666260, 0xc000011098, 0x16489c0, 0xc00043bd70, ...)
/src/jfrog.com/metadata/services/common/db/database_bearer.go:68 +0x2d4
main.main()
/src/jfrog.com/metadata/metadata.go:38 +0x5b7
[database]
panic: Could not initialize database (db config: {postgresql user='jfroguser' password='***' dbname=jfrogdb host=dbserver.example.com port= sslmode=disable}): error connecting to database
jfrog.com/metadata/services/common/db.(*databaseBearer).init
/src/jfrog.com/metadata/services/common/db/database_bearer.go:114
jfrog.com/metadata/services/common/db.NewDatabaseBearer
/src/jfrog.com/metadata/services/common/db/database_bearer.go:66
main.main
/src/jfrog.com/metadata/metadata.go:38
runtime.main
/src/runtime/proc.go:203
runtime.goexit
/src/runtime/asm_amd64.s:1373
goroutine 1 [running]:
runtime/debug.Stack(0x38, 0xc00015c040, 0xc00032c080)
/src/runtime/debug/stack.go:24 +0x9d
jfrog.com/jfrog-go-commons/pkg/log.(*standardLogger).Panicfc(0xc00043bda0, 0x166e420, 0xc000142750, 0x13eb133, 0x32, 0xc00032c080, 0x2, 0x2)
/src/jfrog.com/go-commons/pkg/log/standard_logger.go:42 +0x6a
jfrog.com/metadata/services/common/db.NewDatabaseBearer(0x166e420, 0xc000142750, 0x166f220, 0xc00007f770, 0x1673460, 0xc0000c97c0, 0x1666260, 0xc000011098, 0x16489c0, 0xc00043bd70, ...)
/src/jfrog.com/metadata/services/common/db/database_bearer.go:68 +0x2d4
main.main()
/src/jfrog.com/metadata/metadata.go:38 +0x5b7
goroutine 1 [running]:
github.com/rs/zerolog.(*Logger).Panic.func1(0xc000358500, 0x4bb)
/pkg/mod/github.com/rs/zerolog@v1.18.0/log.go:338 +0x4f
github.com/rs/zerolog.(*Event).msg(0xc0000be240, 0xc000358500, 0x4bb)
/pkg/mod/github.com/rs/zerolog@v1.18.0/event.go:146 +0x200
github.com/rs/zerolog.(*Event).Msgf(0xc0000be240, 0xc000961dc0, 0x35, 0xc00015c0c0, 0x3, 0x4)
/pkg/mod/github.com/rs/zerolog@v1.18.0/event.go:126 +0x83
jfrog.com/jfrog-go-commons/pkg/log.(*standardLogger).logMessage(0xc00043bda0, 0x166e420, 0xc000142750, 0xc0000be240, 0xc000961dc0, 0x35, 0xc00015c0c0, 0x3, 0x4)
/src/jfrog.com/go-commons/pkg/log/standard_logger.go:61 +0x197
jfrog.com/jfrog-go-commons/pkg/log.(*standardLogger).Panicfc(0xc00043bda0, 0x166e420, 0xc000142750, 0x13eb133, 0x32, 0xc00015c0c0, 0x3, 0x4)
/src/jfrog.com/go-commons/pkg/log/standard_logger.go:43 +0x1df
jfrog.com/metadata/services/common/db.NewDatabaseBearer(0x166e420, 0xc000142750, 0x166f220, 0xc00007f770, 0x1673460, 0xc0000c97c0, 0x1666260, 0xc000011098, 0x16489c0, 0xc00043bd70, ...)
/src/jfrog.com/metadata/services/common/db/database_bearer.go:68 +0x2d4
main.main()
/src/jfrog.com/metadata/metadata.go:38 +0x5b7
Any ideas what to check? The server is an up-to-date CentOS 7 server. Logging in to the external database also works:
[root@jfrog ~]# psql -h dbserver.example.com -p 5432 -U jfrog
Password for user jfrog:
psql (11.8)
SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off)
Type "help" for help.
jfrog=> SHOW server_version;
server_version
----------------
11.8
(1 row)
jfrog=> \q
[root@jfrog ~]#
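The metadata service's connection string in the log shows an empty port= and sslmode=disable, while the psql test reaches the server on 5432 over SSL, so the database settings Artifactory resolved look incomplete. It may be worth double-checking that the shared.database block in /opt/jfrog/artifactory/var/etc/system.yaml carries the full JDBC URL, roughly along these lines (structure as documented by JFrog; host, port, database and user are taken from the log and the psql test, the password is a placeholder):

shared:
  database:
    type: postgresql
    driver: org.postgresql.Driver
    url: jdbc:postgresql://dbserver.example.com:5432/jfrogdb
    username: jfroguser
    password: <password>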

NGINX: How to use a variable set in nginx.conf from an Nginx C module?

Example of nginx.conf
server {
    set $abc_variable "abcabc";
    ........
}
How can I access $abc_variable with the help of the module API defined here: https://www.nginx.com/resources/wiki/extending/api/
I'm using the following code:
ngx_str_t var = ngx_string("abc_variable");
ngx_uint_t key = ngx_hash_strlow(var.data, var.data, var.len);
ngx_http_variable_value_t *val = NULL;
val = ngx_http_get_variable(r, &var, key);
But I'm getting the following error:
2019/12/04 01:24:02 [notice] 12442#0: signal 17 (SIGCHLD) received from 12444
2019/12/04 01:24:02 [alert] 12442#0: worker process 12444 exited on signal 11 (core dumped)
2019/12/04 01:24:02 [notice] 12442#0: start worker process 12561
2019/12/04 01:24:02 [notice] 12442#0: signal 29 (SIGIO) received
2019/12/04 01:24:02 [debug] 12561#0: setting SA_RESTART for signal 1
You can use ngx_http_get_variable().
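For completeness, a hedged sketch of how that lookup is typically done from inside a module handler (this assumes ngx_http_request_t *r is in scope; the important part is checking not_found before touching the value, since dereferencing an unset variable value is a classic cause of the SIGSEGV shown above):

/* sketch only: runs inside a handler that receives ngx_http_request_t *r */
ngx_str_t                  name = ngx_string("abc_variable");
ngx_uint_t                 key;
ngx_http_variable_value_t *vv;

key = ngx_hash_key(name.data, name.len);   /* the name is already lowercase */
vv  = ngx_http_get_variable(r, &name, key);

if (vv == NULL || vv->not_found || vv->len == 0) {
    return NGX_ERROR;                      /* variable missing or empty */
}

ngx_log_error(NGX_LOG_INFO, r->connection->log, 0,
              "abc_variable: \"%*s\"", (size_t) vv->len, vv->data);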

AMQP server on localhost:5672 is unreachable: [Errno 111] ECONNREFUSED

I am trying to add an additional compute node, running on a different virtual machine, to a pre-installed OpenStack deployment. I disabled the firewall services and the machines can ping each other, but the compute node is still not able to register with the RabbitMQ service running on the controller node.
Here is my nova.conf file:
[DEFAULT]
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
state_path=/var/lib/nova
lock_path=/var/lock/nova
force_dhcp_release=True
iscsi_helper=tgtadm
libvirt_use_virtio_for_bridges=True
connection_type=libvirt
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
verbose=True
ec2_private_dns_show_ip=True
api_paste_config=/etc/nova/api-paste.ini
volumes_path=/var/lib/nova/volumes
enabled_apis=ec2,osapi_compute,metadata
rpc_backend = rabbit
auth_strategy = keystone
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
my_ip = #compute node ip
rabbit_host= #controller_node_ip
rabbit_port = 5672
rabbit_userid = stackrabbit
rabbit_password = devstack
rabbit_use_ssl = False
rabbit_virtual_host=/
[keystone_authtoken]
auth_uri = http://controller_node_ip:5000
auth_url = http://controller_node_ip:35357
memcached_servers = controller_node_ip:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = devstack
auth_host = controller_node_ip
auth_port = 35357
auth_protocol = http
[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://controller_node_ip:6080/vnc_auto.html
[glance]
api_servers = http://controller_node_ip:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
Here is my nova-compute.log:
2016-09-20 19:08:57.701 7201 INFO oslo.messaging._drivers.impl_rabbit [-] Reconnecting to AMQP server on localhost:5672
2016-09-20 19:08:57.701 7201 INFO oslo.messaging._drivers.impl_rabbit [-] Delaying reconnect for 1.0 seconds...
2016-09-20 19:08:58.708 7201 ERROR oslo.messaging._drivers.impl_rabbit [-] AMQP server on localhost:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 30 seconds...
Please suggest something so that I can resolve this issue. Thank you in advance.
I encountered this when expanding my nova-compute estate (although I'm not using Devstack).
On my newly created compute server, the following was seen in /var/log/nova/nova-compute.log:
2017-11-14 11:40:53.287 52408 ERROR oslo.messaging._drivers.impl_rabbit [req-adfd6dc7-fe8c-4de5-8401-58d325c3b4a8 - - - - -] [be6e0302-dfc8-4512-8b48-0d824fc6ea14] AMQP server on 127.0.0.1:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 1 seconds. Client port: None
The solution was quite simple. I checked /var/log/syslog (I run Ubuntu; /var/log/messages for those on Red Hat systems) and could see the following lines:
Nov 14 12:01:48 compute2 systemd[1]: Started OpenStack Compute.
Nov 14 12:01:49 compute2 nova-compute[3222]: Traceback (most recent call last):
Nov 14 12:01:49 compute2 nova-compute[3222]: File "/usr/bin/nova-compute", line 10, in <module>
Nov 14 12:01:49 compute2 nova-compute[3222]: sys.exit(main())
Nov 14 12:01:49 compute2 nova-compute[3222]: File "/usr/lib/python2.7/dist-packages/nova/cmd/compute.py", line 42, in main
Nov 14 12:01:49 compute2 nova-compute[3222]: config.parse_args(sys.argv)
Nov 14 12:01:49 compute2 nova-compute[3222]: File "/usr/lib/python2.7/dist-packages/nova/config.py", line 52, in parse_args
Nov 14 12:01:49 compute2 nova-compute[3222]: default_config_files=default_config_files)
Nov 14 12:01:49 compute2 nova-compute[3222]: File "/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 2355, in __call__
Nov 14 12:01:49 compute2 nova-compute[3222]: self._namespace._files_permission_denied)
Nov 14 12:01:49 compute2 nova-compute[3222]: oslo_config.cfg.ConfigFilesPermissionDeniedError: Failed to open some config files: /etc/nova/nova.conf
Nov 14 12:01:49 compute2 systemd[1]: nova-compute.service: Main process exited, code=exited, status=1/FAILURE
This shows that my /etc/nova/nova.conf file was unreadable. It turned out this was because I had used scp to copy nova.conf from my first compute node to the new machine, and the file was readable only by the root user. The fix was to run the following on the new compute node:
cd /etc/nova/
chown nova:nova nova.conf
service nova-compute restart
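Coming back to the original question's symptom: the log shows nova-compute still dialling localhost:5672 even though rabbit_host is set in [DEFAULT], which usually means oslo.messaging is not picking those legacy rabbit_* options up from that section. On newer releases the broker is configured via transport_url instead; a hedged sketch using the credentials from the nova.conf posted above (CONTROLLER_IP stands for the controller's address, which is redacted in the post):

[DEFAULT]
# transport_url supersedes the individual rabbit_* options on newer releases;
# user and password below are the ones shown in the nova.conf above
transport_url = rabbit://stackrabbit:devstack@CONTROLLER_IP:5672/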
