Error "ldap_sasl_bind_s failed" on n-way multi-master openldap - openldap

I am trying to connect OpenLDAP nodes in a cluster, but I receive the
following message (the password has been updated on all the OpenLDAP nodes).
Which password is failing, and how can I force it to be updated?
Feb 25 18:57:01 ldap03 slapd[9556]: slapd starting
Feb 25 18:57:01 ldap03 slapd[9556]: slap_client_connect: URI=ldap://ldap01 DN="cn=admin,dc=clients,dc=enterprise,dc=com" ldap_sasl_bind_s failed (-1)
Feb 25 18:57:01 ldap03 slapd[9556]: do_syncrepl: rid=001 rc -1 retrying (4 retries left)
Thanks in advance.

I met the same issue...
625cf83c slapd starting
625cf83c slap_client_connect: URI=ldaps://ldap.example.com:636 DN="cn=admin,dc=example,dc=com" ldap_sasl_bind_s failed (-1)
625cf83c do_syncrepl: rid=123 rc -1 retrying
But in my case the issue was on the transport layer: the OpenLDAP server had been built without SSL support. Reinstalling OpenLDAP with SSL support solved it.
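If you hit the same -1 result, it helps to separate a transport problem from a credential problem before reinstalling anything. A rough check sketch using the hostnames and DNs from the logs above (the slapd path is an assumption for a Debian/Ubuntu-style install):
# Does the slapd binary link against a TLS library at all?
ldd /usr/sbin/slapd | grep -Ei 'ssl|tls'
# Can the TLS handshake with the provider complete?
openssl s_client -connect ldap.example.com:636 </dev/null
# Does the syncrepl bind DN and password actually work against the provider?
ldapwhoami -x -H ldap://ldap01 -D "cn=admin,dc=clients,dc=enterprise,dc=com" -W
If ldapwhoami fails with invalid credentials, the credentials in the consumer's syncrepl configuration and the admin password on the provider are out of sync; if the TLS handshake fails, it is a transport/certificate problem like the one described above.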

Related

Mariadb Galera 10.5.13-16 Node Crash

I have a cluster with 2 Galera nodes and 1 arbitrator.
My node 1 crashed and I don't understand why.
Here is the log of node 1.
It seems to be a problem with the pthread library.
Also, every request is proxied by 2 HAProxy instances.
2023-01-03 12:08:55 0 [Warning] WSREP: Handshake failed: peer did not return a certificate
2023-01-03 12:08:55 0 [Warning] WSREP: Handshake failed: peer did not return a certificate
2023-01-03 12:08:56 0 [Warning] WSREP: Handshake failed: http request
terminate called after throwing an instance of 'boost::wrapexcept<std::system_error>'
what(): remote_endpoint: Transport endpoint is not connected
230103 12:08:56 [ERROR] mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
To report this bug, see https://mariadb.com/kb/en/reporting-bugs
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.
Server version: 10.5.13-MariaDB-1:10.5.13+maria~focal
key_buffer_size=134217728
read_buffer_size=2097152
max_used_connections=101
max_threads=102
thread_count=106
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 760333 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.
Thread pointer: 0x0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0x0 thread_stack 0x49000
mariadbd(my_print_stacktrace+0x32)[0x55b1b67f7e42]
Printing to addr2line failed
mariadbd(handle_fatal_signal+0x485)[0x55b1b62479a5]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x153c0)[0x7ff88ea983c0]
/lib/x86_64-linux-gnu/libc.so.6(gsignal+0xcb)[0x7ff88e59e18b]
/lib/x86_64-linux-gnu/libc.so.6(abort+0x12b)[0x7ff88e57d859]
/lib/x86_64-linux-gnu/libstdc++.so.6(+0x9e911)[0x7ff88e939911]
/lib/x86_64-linux-gnu/libstdc++.so.6(+0xaa38c)[0x7ff88e94538c]
/lib/x86_64-linux-gnu/libstdc++.so.6(+0xaa3f7)[0x7ff88e9453f7]
/lib/x86_64-linux-gnu/libstdc++.so.6(+0xaa6a9)[0x7ff88e9456a9]
/usr/lib/galera/libgalera_smm.so(+0x448ad)[0x7ff884b5e8ad]
/usr/lib/galera/libgalera_smm.so(+0x1fc315)[0x7ff884d16315]
/usr/lib/galera/libgalera_smm.so(+0x1ff7eb)[0x7ff884d197eb]
/usr/lib/galera/libgalera_smm.so(+0x1ffc28)[0x7ff884d19c28]
/usr/lib/galera/libgalera_smm.so(+0x2065b6)[0x7ff884d205b6]
/usr/lib/galera/libgalera_smm.so(+0x1f81f3)[0x7ff884d121f3]
/usr/lib/galera/libgalera_smm.so(+0x1e6f04)[0x7ff884d00f04]
/usr/lib/galera/libgalera_smm.so(+0x103438)[0x7ff884c1d438]
/usr/lib/galera/libgalera_smm.so(+0xe8eea)[0x7ff884c02eea]
/usr/lib/galera/libgalera_smm.so(+0xe9a8d)[0x7ff884c03a8d]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x9609)[0x7ff88ea8c609]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x43)[0x7ff88e67a293]
The manual page at https://mariadb.com/kb/en/how-to-produce-a-full-stack-trace-for-mysqld/ contains
information that should help you find out what is causing the crash.
Writing a core file...
Working directory at /var/lib/mysql
Resource Limits:
Fatal signal 11 while backtracing
PS: if you want more data, ask me :)
OK, it seems that 2 simultaneous OpenVAS scans crash the node.
I tried with versions 10.5.13 and 10.5.16 -> crash.
Solution: upgrade to at least 10.5.17.
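A minimal upgrade sketch, assuming Ubuntu 20.04 (focal) with the MariaDB apt repository already configured; exact package names and versions are assumptions, so check what your repository offers first:
apt list -a mariadb-server              # which 10.5.x builds are available
sudo apt-get update
sudo apt-get install --only-upgrade mariadb-server galera-4
mariadb -e "SELECT VERSION();"          # should now report 10.5.17 or later
Upgrade and restart the nodes one at a time so the cluster never loses quorum.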

ClickHouse default HTTP handlers not supported

I have been trying to run ClickHouse on an EC2 instance provisioned with Terraform. So far the EC2 instance runs well and I can reach the HTTP interface on localhost:8123. However, when I try to access localhost:8123/play I get the following message:
There is no handle /play
Use / or /ping for health checks.
Or /replicas_status for more sophisticated health checks.
Send queries from your program with POST method or GET /?query=...
Use clickhouse-client:
For interactive data analysis:
clickhouse-client
For batch query processing:
clickhouse-client --query='SELECT 1' > result
clickhouse-client < query > result
I don't understand why this is happening, as I was not getting that error when running locally.
When I check the status of the ClickHouse server I get the following output:
● clickhouse-server.service - ClickHouse Server
Loaded: loaded (/lib/systemd/system/clickhouse-server.service; enabled; vendor preset: enabled)
Mar 25 12:14:35 systemd[1]: Started ClickHouse Server.
Mar 25 12:14:35 clickhouse-server[11774]: Include not found: clickhouse_remote_servers
Mar 25 12:14:35 clickhouse-server[11774]: Include not found: clickhouse_compression
Mar 25 12:14:35 clickhouse-server[11774]: Logging warning to /var/log/clickhouse-server/clickhouse-server.log
Mar 25 12:14:35 clickhouse-server[11774]: Logging errors to /var/log/clickhouse-server/clickhouse-server.err.log
Mar 25 12:14:35 clickhouse-server[11774]: Include not found: networks
Mar 25 12:14:35 clickhouse-server[11774]: Include not found: networks
Mar 25 12:14:37 clickhouse-server[11774]: Include not found: clickhouse_remote_servers
Mar 25 12:14:37 clickhouse-server[11774]: Include not found: clickhouse_compression
I don't know if this will help, but maybe it is related to the problem (the log files are empty).
Another question I have, which has nothing to do with the problem above, is about understanding how ClickHouse works, because the many articles about ClickHouse are not very clear to me. The articles I have been reading often talk about "nodes". So far my understanding is that ClickHouse works with servers on which we define clusters; inside those clusters we put shards, and in each shard we put replicas, the so-called "nodes". As we will be running in production, I just want to make sure that when we talk about "nodes" we are talking about containers that act as compute units, or whether it is something else entirely.
So far I've tried opening all ingress and egress ports, but it did not fix the problem. I've checked the ClickHouse documentation, which mentions custom HTTP handlers, but nothing there covers this error.
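Two hedged guesses on the /play error: the /play UI only exists in newer ClickHouse releases, so an older package on the EC2 instance than the one you run locally would produce exactly this message, and a custom <http_handlers> section in the server config can also hide the default handlers. A quick check sketch:
# Compare the server version on EC2 with the one that works locally
curl 'http://localhost:8123/?query=SELECT%20version()'
# Look for a custom http_handlers section overriding the defaults
grep -R "http_handlers" /etc/clickhouse-server/
On the terminology question: in most ClickHouse writing a "node" is simply one clickhouse-server process, whether it runs in a container, a VM, or on bare metal. A cluster is a named set of shards, each shard holds one or more replicas, and each replica is one such server; system.clusters shows the mapping directly:
clickhouse-client --query "SELECT cluster, shard_num, replica_num, host_name FROM system.clusters"
Each row is one replica, i.e. one node of the named cluster.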

Installing Asterisk on Ubuntu 18.04

Assuming you are Database Root
Checking if SELinux is enabled...Its not (good)!
Reading /etc/asterisk/asterisk.conf...Done
Checking if Asterisk is running and we can talk to it as the 'asterisk' user...Error!
Error communicating with Asterisk. Ensure that Asterisk is properly installed and running as the asterisk user
Asterisk appears to be running as asterisk
Try starting Asterisk with the './start_asterisk start' command in this directory
I tried ./start_asterisk start and ./install -n.
Please help; what is the problem? I've been trying to solve it for 3 days.
● asterisk.service - Asterisk PBX
Loaded: loaded (/lib/systemd/system/asterisk.service; enabled; vendor preset: enabled)
Active: failed (Result: core-dump) since Sat 2020-07-25 01:12:16 UTC; 32min ago
Docs: man:asterisk(8)
Process: 84496 ExecStart=/usr/sbin/asterisk -g -f -p -U asterisk (code=dumped, signal=SEGV)
Main PID: 84496 (code=dumped, signal=SEGV)
Jul 25 01:12:16 webserver systemd[1]: asterisk.service: Scheduled restart job, restart counter is at 91.
Jul 25 01:12:16 webserver systemd[1]: Stopped Asterisk PBX.
Jul 25 01:12:16 webserver systemd[1]: asterisk.service: Start request repeated too quickly.
Jul 25 01:12:16 webserver systemd[1]: asterisk.service: Failed with result 'core-dump'.
Jul 25 01:12:16 webserver systemd[1]: Failed to start Asterisk PBX.
SELinux being enabled can be an issue here.
To check what is really going on, try starting Asterisk manually and watch the verbose log:
asterisk -vvvgc
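If the manual start segfaults as well, a backtrace from the core dump usually points at the offending module. A rough sketch, assuming systemd-coredump is installed (otherwise enable core files and run Asterisk under gdb directly):
sudo coredumpctl list asterisk
sudo coredumpctl gdb asterisk      # then inside gdb: bt full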

ERROR o.k.k.s.n.s.i.KaaNodeInitializationService - Failed to connect to Zookeeper within 5 minutes. Kaa Node Server will be stopped

I have the same error as in this topic: Error on zookeeper, but Zookeeper is running. Am I missing an entry in Kaa, or a setting somewhere that is not mentioned in the Kaa installation guide?
Thanks
My solution:
sudo /usr/share/zookeeper/bin/zkServer.sh restart
sudo service kaa-node restart
wait a minute => done!
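Before (or after) restarting, it can also help to confirm that ZooKeeper really answers on the port kaa-node is pointed at. A small check sketch; the config path and property naming are assumptions for a default packaged install:
echo ruok | nc localhost 2181                        # expect the reply: imok
grep -i zk /etc/kaa-node/conf/kaa-node.properties    # verify host:port matches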

Solr closes connection to Zookeeper

I have two servers: server one running Apache ZooKeeper and server two running Solr.
When starting ZooKeeper I can connect to it on server one (through bin/zkCli.sh), but not from server two with Solr.
ZooKeeper is started through supervisor, but I have also tried starting it through bin/zkServer.sh without improvement.
When looking in the Tomcat log (which Solr logs to) I get:
WARNING: Overseer cannot talk to ZK
Jun 04, 2013 3:26:52 PM org.apache.solr.cloud.Overseer$ClusterStateUpdater amILeader
WARNING:
org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /overseer_elect/leader
at org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1151)
at org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:253)
at org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:250)
at org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:65)
at org.apache.solr.common.cloud.SolrZkClient.getData(SolrZkClient.java:250)
at org.apache.solr.cloud.Overseer$ClusterStateUpdater.amILeader(Overseer.java:199)
at org.apache.solr.cloud.Overseer$ClusterStateUpdater.run(Overseer.java:122)
at java.lang.Thread.run(Thread.java:722)
...
Jun 04, 2013 3:31:04 PM org.apache.zookeeper.ClientCnxn$SendThread logStartConnect
INFO: Opening socket connection to server XXX.XXX.XXX.XXX/XXX.XXX.XXX.XXX:2181. Will not attempt to authenticate using SASL (unknown error)
Jun 04, 2013 3:31:04 PM org.apache.zookeeper.ClientCnxn$SendThread run
INFO: Client session timed out, have not heard from server in 46974ms for sessionid 0x13f0f5a570c0006, closing socket connection and attempting reconnect
Jun 04, 2013 3:31:05 PM org.apache.zookeeper.ClientCnxn$SendThread logStartConnect
INFO: Opening socket connection to server XXX.XXX.XXX.XXXXXX.XXX.XXX.XXX.75:2181. Will not attempt to authenticate using SASL (unknown error)
Jun 04, 2013 3:32:01 PM org.apache.zookeeper.ClientCnxn$SendThread run
INFO: Client session timed out, have not heard from server in 56627ms for sessionid 0x13f0f5a570c0006, closing socket connection and attempting reconnect
How do I set up ZooKeeper so that it can be accessed by Solr on server two?
Additional info: using netstat -l on server one, I get the following:
tcp6 0 0 [::]:2181 [::]:* LISTEN
I.e. it is only listening on tcp6, not tcp.
Check your firewall configuration on the ZooKeeper server and ensure ports 2181, 2888, and 3888 are all open. 2181 is the client communication port; 2888 and 3888 are used for ZooKeeper cluster communication (in case you decide to run ZooKeeper in an ensemble).
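A quick way to verify, where <server-one-ip> is a placeholder for the ZooKeeper host and the ufw commands are just one example of opening the ports (use whatever firewall you actually run):
# From server two: is the client port reachable at all?
nc -zv <server-one-ip> 2181
# On server one, open the ZooKeeper ports (ufw example):
sudo ufw allow 2181/tcp
sudo ufw allow 2888/tcp
sudo ufw allow 3888/tcp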
