MySQL Plugin 'InnoDB' is disabled - innodb

I get this error and the InnoDB plugin is DISABLED.
"
140315 17:09:06 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
140315 17:09:07 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
140315 17:09:07 [Note] Plugin 'InnoDB' is disabled.
140315 17:09:07 [Note] Event Scheduler: Loaded 0 events
140315 17:09:07 [Note] /usr/libexec/mysqld: ready for connections.
Version: '5.1.73' socket: '/var/lib/mysql/mysql.sock' port: 3306 Source distribution
"
Thanks for the help.

You need to enable it in your my.cnf file, then restart your server.
Alternatively, the InnoDB plugin can be loaded at runtime.
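A minimal sketch for MySQL 5.1 (option and library names depend on how your distribution packaged InnoDB; ha_innodb_plugin.so is the usual name for the 5.1 plugin build):
# my.cnf: make sure InnoDB is not skipped, or load the plugin explicitly
[mysqld]
# remove any skip-innodb / loose-skip-innodb lines if present
plugin-load=innodb=ha_innodb_plugin.so
Or, to load it at runtime from a MySQL client (assumes the .so is in plugin_dir):
INSTALL PLUGIN InnoDB SONAME 'ha_innodb_plugin.so';
SHOW PLUGINS;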

The following error occurred because of a problem in the shared object (.so) file you created:
140315 17:09:06 mysqld_safe mysqld from pid file
/var/run/mysqld/mysqld.pid ended
So what I would recommend is to check your plugin code; once it is fixed, the 'InnoDB' plugin will be enabled.


MariaDB Galera SST using mysqldump

Background:
I have a three-server MariaDB Galera cluster that is happily working with the mariabackup SST method, but I recently dumped a lot of legacy data out of its tables (storage was 370GB+, now down to 75GB).
In order to shrink the database size and avoid such large SST requirements, my understanding is that mysqldump/restore could be used. So my thought process was to drop one of the servers from the cluster and bring it back in with the mysqldump SST method to shrink the disk usage. After that, I was planning to force SST on the other nodes with the shrunken server as donor, using the mariabackup SST method again.
If my logic is flawed on the above please feel free to pull me up.
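For reference, switching a node to the mysqldump SST method normally comes down to settings like these on the joining node (a sketch; backupuser is the account name from the logs below, and the password is a placeholder):
[mysqld]
wsrep_sst_method=mysqldump
wsrep_sst_auth=backupuser:placeholder_password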
Problem:
After dropping one node and switching the SST method to mysqldump, the server came up and made contact with the other nodes, but SST would fail.
With Node 2 as the SST donor, I get the logs below on that node:
2023-02-16 21:02:40 9 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
ERROR 1045 (28000): Access denied for user 'backupuser'@'192.168.2.100' (using password: YES)
ERROR 1045 (28000): Access denied for user 'backupuser'@'192.168.2.100' (using password: YES)
ERROR 1045 (28000): Access denied for user 'backupuser'@'192.168.2.100' (using password: YES)
/usr//bin/wsrep_sst_mysqldump: line 128: [: -gt: unary operator expected
ERROR 1045 (28000): Access denied for user 'backupuser'@'192.168.2.100' (using password: YES)
2023-02-16 21:02:40 9 [ERROR] WSREP: Process completed with error: wsrep_sst_mysqldump --address '192.168.1.100:3306' --port '3306' --local-port '3306' --socket '/var/lib/mysql/mysql.sock' --gtid 'a98addba-189c-11ed-887e-4258cd7b8de4:687374620' --gtid-domain-id '0' --mysqld-args --basedir=/usr: 1 (Operation not permitted)
2023-02-16 21:02:40 9 [ERROR] WSREP: Try 1/3: 'wsrep_sst_mysqldump --address '192.168.1.100:3306' --port '3306' --local-port '3306' --socket '/var/lib/mysql/mysql.sock' --gtid 'a98addba-189c-11ed-887e-4258cd7b8de4:687374620' --gtid-domain-id '0' --mysqld-args --basedir=/usr' failed: 1 (Operation not permitted)
2023-02-16 21:02:41 0 [Note] WSREP: (a897af72, 'tcp://0.0.0.0:4567') turning message relay requesting off
With Node 3 as the SST donor, I get the logs below on that node:
2023-02-16 21:24:01 8 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
mysqldump: Couldn't execute 'show events': Access denied for user 'backupuser'@'localhost' to database 'main_db' (1044)
ERROR 1064 (42000) at line 24: You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near 'SST failed to complete' at line 1
2023-02-16 21:24:02 8 [ERROR] WSREP: Process completed with error: wsrep_sst_mysqldump --address '192.168.1.100:3306' --port '3306' --local-port '3306' --socket '/var/lib/mysql/mysql.sock' --gtid 'a98addba-189c-11ed-887e-4258cd7b8de4:687381861' --gtid-domain-id '0' --mysqld-args --basedir=/usr: 1 (Operation not permitted)
2023-02-16 21:24:02 8 [ERROR] WSREP: Try 1/3: 'wsrep_sst_mysqldump --address '192.168.1.100:3306' --port '3306' --local-port '3306' --socket '/var/lib/mysql/mysql.sock' --gtid 'a98addba-189c-11ed-887e-4258cd7b8de4:687381861' --gtid-domain-id '0' --mysqld-args --basedir=/usr' failed: 1 (Operation not permitted)
mysqldump: Couldn't execute 'show events': Access denied for user 'backupuser'@'localhost' to database 'main_db' (1044)
ERROR 1064 (42000) at line 24: You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near 'SST failed to complete' at line 1
2023-02-16 21:24:04 8 [ERROR] WSREP: Process completed with error: wsrep_sst_mysqldump --address '192.168.1.100:3306' --port '3306' --local-port '3306' --socket '/var/lib/mysql/mysql.sock' --gtid 'a98addba-189c-11ed-887e-4258cd7b8de4:687381861' --gtid-domain-id '0' --mysqld-args --basedir=/usr: 1 (Operation not permitted)
2023-02-16 21:24:04 8 [ERROR] WSREP: Try 2/3: 'wsrep_sst_mysqldump --address '192.168.1.100:3306' --port '3306' --local-port '3306' --socket '/var/lib/mysql/mysql.sock' --gtid 'a98addba-189c-11ed-887e-4258cd7b8de4:687381861' --gtid-domain-id '0' --mysqld-args --basedir=/usr' failed: 1 (Operation not permitted)
For identification in the above:
192.168.1.100 is Node 1
192.168.2.100 is Node 2
192.168.3.100 is Node 3 (though its IP isn't listed above)
Server version: 10.3.32-MariaDB-log MariaDB Server
I have confirmed that the backup user account and password in the "wsrep_sst_auth" parameter can be used from any node to connect to any other node via the console (including itself). I even went as far as to drop a new database into Node 1 and explicitly add that user with full permissions before trying to join it to the cluster again, with the same results.
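For what it's worth, the 1044 errors on 'show events' above point at missing privileges rather than a bad password; the mysqldump SST account generally needs root-like privileges on the donor. A minimal sketch of such grants (user name taken from the logs above; the password is a placeholder):
CREATE USER IF NOT EXISTS 'backupuser'@'%' IDENTIFIED BY 'placeholder_password';
GRANT ALL PRIVILEGES ON *.* TO 'backupuser'@'%';
FLUSH PRIVILEGES;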
Node 1 has been rolled back to the mariabackup SST method and has successfully rejoined the cluster, but I would like to reattempt the above to shrink the disk usage, if anyone can provide some guidance on why I might be seeing these errors and additional options to try.

Cannot get galera cluster to start - -bash: galera_new_cluster: command not found

I am following instructions to install a MariaDB Galera cluster on CentOS 7.6.
But I just cannot get the cluster to start.
I can get the MariaDB service started on both nodes.
Here is my server.cnf:
[galera]
# Mandatory settings
wsrep_cluster_name="galera_cluster"
wsrep_on=ON
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address=gcomm://172.18.35.XXX,172.18.35.XXX
binlog_format=row
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
I am stumped; there is nothing in the MariaDB logs. What else should I be looking at?
Never mind, I was able to get past that step, but the cluster will not start.
I do not get any errors when I run
root@db-mmr101:/var/lib/mysql$ /usr/bin/mysqld_safe --wsrep-new-cluster
190709 15:01:24 mysqld_safe Logging to '/var/lib/mysql/db-mmr101.err'.
190709 15:01:25 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
Or when I start the MariaDB service. Nothing in the error logs either:
190709 15:01:30 mysqld_safe mysqld from pid file /var/lib/mysql/db-mmr101.pid ended
190709 15:01:38 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
190709 15:01:38 [Note] /usr/libexec/mysqld (mysqld 5.5.60-MariaDB) starting as process 19920 ...
190709 15:01:38 InnoDB: The InnoDB memory heap is disabled
190709 15:01:38 InnoDB: Mutexes and rw_locks use GCC atomic builtins
190709 15:01:38 InnoDB: Compressed tables use zlib 1.2.7
190709 15:01:38 InnoDB: Using Linux native AIO
190709 15:01:38 InnoDB: Initializing buffer pool, size = 128.0M
190709 15:01:38 InnoDB: Completed initialization of buffer pool
190709 15:01:38 InnoDB: highest supported file format is Barracuda.
190709 15:01:38 InnoDB: Waiting for the background threads to start
190709 15:01:39 Percona XtraDB (http://www.percona.com) 5.5.59-MariaDB-38.11 started; log sequence number 1597945
190709 15:01:39 [Note] Plugin 'FEEDBACK' is disabled.
190709 15:01:39 [Note] Server socket created on IP: '0.0.0.0'.
190709 15:01:39 [Note] Event Scheduler: Loaded 0 events
190709 15:01:39 [Note] /usr/libexec/mysqld: ready for connections.
Version: '5.5.60-MariaDB' socket: '/var/lib/mysql/mysql.sock' port: 3306 MariaDB Server
On newer machines using systemd as the init system, additional steps might be necessary to start the first cluster node again.
First make sure that the node which will be the new primary node is allowed to bootstrap the cluster (this part is unrelated to systemd):
# cat /var/lib/mysql/grastate.dat
# GALERA saved state
version: 2.1
uuid: 6a1f102a-13a3-11e7-b710-b2876418a643
seqno: -1
safe_to_bootstrap: 0
Change the value of safe_to_bootstrap to 1:
# sed -i "/safe_to_bootstrap/s/0/1/" /var/lib/mysql/grastate.dat
Then run the command
# galera_new_cluster
You have to tell the first node that it is the first participant in the cluster; with MariaDB the command is:
galera_new_cluster
https://galeracluster.com/library/training/tutorials/starting-cluster.html
You may need to use the full path to the script.
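A sketch of what that looks like on a typical systemd install (paths and the service name may differ on your distribution; galera_new_cluster is essentially a wrapper that starts the service with --wsrep-new-cluster):
# typical full path on RHEL/CentOS packages
/usr/bin/galera_new_cluster
# roughly what the script does on systemd systems (assumes the service is named mariadb):
systemctl set-environment _WSREP_NEW_CLUSTER='--wsrep-new-cluster'
systemctl start mariadb
systemctl set-environment _WSREP_NEW_CLUSTER=''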
I realized that yum install was not installing MariaDB 10+ on CentOS 7.6, since there is no build for it in the default repo. I had to use rpm to download and install MariaDB 10.4.
yum installs the default MariaDB 5.5 that ships with CentOS 7.6; 5.5 is a really old version that does not have the galera_new_cluster command.
Here is a good guide to getting MariaDB installed on RHEL 7+ using rpm:
https://medium.com/@thomashysselinckx/installing-mariadb-with-rpm-on-centos7-bce648cce758
I spent a lot of time, trying to get it working with yum and eventually gave up and went the rpm route.
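An alternative to the rpm route that keeps yum in charge is to add the official MariaDB repository first (a sketch, assuming the mariadb.org repo layout for 10.4 on CentOS 7; check yum.mariadb.org for the current paths):
# /etc/yum.repos.d/MariaDB.repo
[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/10.4/centos7-amd64
gpgkey = https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck = 1
Then install as usual:
# yum install MariaDB-server MariaDB-client galera-4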

ERROR InnoDB: InnoDB: Unable to allocate memory of size 18446744073709544120

I have three nodes on which to set up a MariaDB cluster.
When I use /usr/sbin/mysqld --wsrep-new-cluster --user=root & to start the cluster on node1, I get the error below:
2017-08-05 17:41:36 140123776886528 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
2017-08-05 17:41:36 140123777161408 [Note] /usr/sbin/mysqld: ready for connections.
Version: '10.1.19-MariaDB' socket: '/var/lib/mysql/mysql.sock' port: 3306 MariaDB Server
2017-08-05 17:41:41 140123698513664 [ERROR] InnoDB: InnoDB: Unable to allocate memory of size 18446744073709544120.
2017-08-05 17:41:41 7f7117464700 InnoDB: Assertion failure in thread 140123698513664 in file ha_innodb.cc line 22407
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.6/en/forcing-innodb-recovery.html
InnoDB: about forcing recovery.
170805 17:41:41 [ERROR] mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
To report this bug, see https://mariadb.com/kb/en/reporting-bugs
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.
Server version: 10.1.19-MariaDB
key_buffer_size=0
read_buffer_size=131072
max_used_connections=3
max_threads=10002
thread_count=6
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 21969763 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.
Thread pointer: 0x0x7f6fe0b7e008
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0x7f7117463cc0 thread_stack 0x40000
/usr/sbin/mysqld(my_print_stacktrace+0x2e)[0x7f711ca3dcde]
/usr/sbin/mysqld(handle_fatal_signal+0x2d5)[0x7f711c561005]
/lib64/libpthread.so.0(+0xf100)[0x7f711bb7b100]
/lib64/libc.so.6(gsignal+0x37)[0x7f7119ed65f7]
/lib64/libc.so.6(abort+0x148)[0x7f7119ed7ce8]
/usr/sbin/mysqld(+0x73880f)[0x7f711c6e480f]
/usr/sbin/mysqld(+0x78fbb7)[0x7f711c73bbb7]
/usr/sbin/mysqld(+0x78fcfc)[0x7f711c73bcfc]
/usr/sbin/mysqld(+0x82da1c)[0x7f711c7d9a1c]
/usr/sbin/mysqld(+0x82dc9e)[0x7f711c7d9c9e]
/usr/sbin/mysqld(+0x80ab5f)[0x7f711c7b6b5f]
/usr/sbin/mysqld(+0x8002cc)[0x7f711c7ac2cc]
/usr/sbin/mysqld(+0x7360f2)[0x7f711c6e20f2]
/usr/sbin/mysqld(_ZN7handler18index_read_idx_mapEPhjPKhm16ha_rkey_function+0x85)[0x7f711c561965]
/usr/sbin/mysqld(_ZN7handler21ha_index_read_idx_mapEPhjPKhm16ha_rkey_function+0xa6)[0x7f711c565ca6]
/usr/sbin/mysqld(+0x479524)[0x7f711c425524]
/usr/sbin/mysqld(+0x4796a0)[0x7f711c4256a0]
/usr/sbin/mysqld(+0x47edd6)[0x7f711c42add6]
/usr/sbin/mysqld(_ZN4JOIN14optimize_innerEv+0x72f)[0x7f711c432c1f]
/usr/sbin/mysqld(_ZN4JOIN8optimizeEv+0x2f)[0x7f711c43551f]
/usr/sbin/mysqld(_Z12mysql_selectP3THDPPP4ItemP10TABLE_LISTjR4ListIS1_ES2_jP8st_orderSB_S2_SB_yP13select_resultP18st_select_lex_unitP13st_select_lex+0x8f)[0x7f711c43565f]
/usr/sbin/mysqld(_Z13handle_selectP3THDP3LEXP13select_resultm+0x245)[0x7f711c4361c5]
/usr/sbin/mysqld(+0x4291a1)[0x7f711c3d51a1]
/usr/sbin/mysqld(_Z21mysql_execute_commandP3THD+0x5f8f)[0x7f711c3e14cf]
/usr/sbin/mysqld(_Z11mysql_parseP3THDPcjP12Parser_state+0x352)[0x7f711c3e4e62]
/usr/sbin/mysqld(+0x439689)[0x7f711c3e5689]
/usr/sbin/mysqld(_Z16dispatch_command19enum_server_commandP3THDPcj+0x1fb0)[0x7f711c3e7d10]
/usr/sbin/mysqld(_Z10do_commandP3THD+0x169)[0x7f711c3e8bb9]
/usr/sbin/mysqld(_Z24do_handle_one_connectionP3THD+0x18a)[0x7f711c4af71a]
/usr/sbin/mysqld(handle_one_connection+0x40)[0x7f711c4af8f0]
/usr/sbin/mysqld(+0x96f37d)[0x7f711c91b37d]
/lib64/libpthread.so.0(+0x7dc5)[0x7f711bb73dc5]
/lib64/libc.so.6(clone+0x6d)[0x7f7119f9721d]
Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (0x7f6fd9420020): is an invalid pointer
Connection ID (thread ID): 16
Status: NOT_KILLED
Optimizer switch: index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_merge_sort_intersection=off,engine_condition_pushdown=off,index_condition_pushdown=on,derived_merge=on,derived_with_keys=on,firstmatch=on,loosescan=on,materialization=on,in_to_exists=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on,mrr=off,mrr_cost_based=off,mrr_sort_keys=off,outer_join_with_cache=on,semijoin_with_cache=on,join_cache_incremental=on,join_cache_hashed=on,join_cache_bka=on,optimize_join_buffer_size=off,table_elimination=on,extended_keys=on,exists_to_in=on,orderby_uses_equalities=off
The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
information that should help you find out what is causing the crash.
We think the query pointer is invalid, but we will try to print it anyway.
Query: SELECT agents.id AS agents_id, agents.agent_type AS agents_agent_type, agents.`binary` AS agents_binary, agents.topic AS agents_topic, agents.host AS agents_host, agents.availability_zone AS agents_availability_zone, agents.admin_state_up AS agents_admin_state_up, agents.created_at AS agents_created_at, agents.started_at AS agents_started_at, agents.heartbeat_timestamp AS agents_heartbeat_timestamp, agents.description AS agents_description, agents.configurations AS agents_configurations, agents.resource_versions AS agents_resource_versions, agents.`load` AS agents_load
FROM agents
WHERE agents.agent_type = 'Linux bridge agent' AND agents.host = 'ha-node2'
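As an aside, the memory estimate in the crash report can be roughly reproduced from the variables it prints (a sketch; sort_buffer_size is not printed, so the usual 2 MiB default is assumed):
# key_buffer_size + (read_buffer_size + sort_buffer_size) * max_threads
# 0 + (131072 + 2097152) * 10002 bytes ≈ 22.3 GB
# close to the 21969763 K printed above; max_threads=10002 is the dominant
# factor, so per-thread buffers multiply into tens of gigabytes.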

Unable to create MariaDB Galera Cluster

I have built an image based on mariadb:10.1 that basically adds a new cluster.conf, but I am facing the following error on the second node after the first node started successfully. Can somebody help me debug this?
Error log tail
2016-09-28 10:12:55 139799503415232 [ERROR] WSREP: failed to open gcomm backend connection: 110: failed to reach primary view: 110 (Connection timed out)
at gcomm/src/pc.cpp:connect():162
2016-09-28 10:12:55 139799503415232 [ERROR] WSREP: gcs/src/gcs_core.cpp:gcs_core_open():208: Failed to open backend connection: -110 (Connection timed out)
2016-09-28 10:12:55 139799503415232 [ERROR] WSREP: gcs/src/gcs.cpp:gcs_open():1380: Failed to open channel 'test_cluster' at 'gcomm://172.17.0.2,172.17.0.3,172.17.0.4': -110 (Connection timed out)
2016-09-28 10:12:55 139799503415232 [ERROR] WSREP: gcs connect failed: Connection timed out
2016-09-28 10:12:55 139799503415232 [ERROR] WSREP: wsrep::connect(gcomm://172.17.0.2,172.17.0.3,172.17.0.4) failed: 7
2016-09-28 10:12:55 139799503415232 [ERROR] Aborting
MySQL init process failed.
Debugging steps taken
NOTE: The container IP addresses were confirmed to be the same as those shown.
To ensure networking between the containers is working, I created another container, which could log in to the first container's mysql instance.
This is definitely not related to MYSQL_HOST.
To see if the container was running out of memory, I used docker stats and saw that the failed container was using only a meagre 142MB throughout its lifecycle until it failed, which is far less than the total memory it was allowed (~4GB).
I am using Docker for Mac, but I tried running the same setup in a CentOS VirtualBox VM with the same results, so it doesn't look like Docker for Mac is the problem.
Config
[mysqld]
user=mysql
binlog_format=ROW
bind-address=0.0.0.0
default_storage_engine=innodb
innodb_autoinc_lock_mode=2
innodb_flush_log_at_trx_commit=0
innodb_buffer_pool_size=122M
innodb_file_per_table=1
innodb_doublewrite=1
query_cache_size=0
query_cache_type=0
wsrep_on=ON
wsrep_provider=/usr/lib/libgalera_smm.so
wsrep_sst_method=rsync
Steps to start containers
# bootstrap node
docker run --rm -e MYSQL_ROOT_PASSWORD=123 \
activatedgeek/mariadb:devel \
--wsrep-cluster-name=test_cluster \
--wsrep-cluster-address=gcomm://172.17.0.2,172.17.0.3,172.17.0.4 \
--wsrep-new-cluster
# add node into cluster
docker run --rm -e MYSQL_ROOT_PASSWORD=123 \
activatedgeek/mariadb:devel \
--wsrep-cluster-name=test_cluster \
--wsrep-cluster-address=gcomm://172.17.0.2,172.17.0.3,172.17.0.4
# add node into cluster
docker run --rm -e MYSQL_ROOT_PASSWORD=123 \
activatedgeek/mariadb:devel \
--wsrep-cluster-name=test_cluster \
--wsrep-cluster-address=gcomm://172.17.0.2,172.17.0.3,172.17.0.4
This problem is caused by the hanging init process. The configuration and CLI arguments above are correct. The only thing to be done before the init process starts is to create an empty mysql directory in the data directory (/var/lib/mysql by default). The directory must be created on all nodes except the bootstrap node.
mkdir -p /var/lib/mysql/mysql
See the sample MariaDB Cluster for usage; it uses a custom MariaDB image and is a proof of concept for creating clusters.
I guess your containers should either expose the required ports:
-p 3306:3306 -p 4444:4444 -p 4567:4567 -p 4568:4568
or should be --link-ed together.
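Combining those port mappings with the run commands from the question, a joining node might be started roughly like this (a sketch; image name and addresses are taken from the question):
docker run --rm -e MYSQL_ROOT_PASSWORD=123 \
-p 3306:3306 -p 4444:4444 -p 4567:4567 -p 4568:4568 \
activatedgeek/mariadb:devel \
--wsrep-cluster-name=test_cluster \
--wsrep-cluster-address=gcomm://172.17.0.2,172.17.0.3,172.17.0.4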

OpenLDAP unexpectedly shut down

I installed OpenLDAP 2.4.35 from the source tarball with Berkeley DB 5.0.32.NC on CentOS 6.4 x86_64.
After running for a few days, the LDAP server shut down unexpectedly. Here is the last log:
ber_get_next
TLS trace: SSL3 alert read:warning:close notify
52b7b798 ber_get_next on fd 13 failed errno=0 (Success)
52b7b798 conn=1023 op=70 do_unbind
52b7b798 connection_close: conn=1023 sd=13
TLS trace: SSL3 alert write:warning:close notify
52b7cbba daemon: shutdown requested and initiated.
52b7cbba slapd shutdown: waiting for 0 operations/tasks to finish
52b7cbba slapd shutdown: initiated
52b7cbba ====> bdb_cache_release_all
52b7cbba slapd destroy: freeing system resources.
52b7cbba slapd stopped.
The configuration file (slapd.conf):
include /home/ucportal/local/openldap/etc/openldap/schema/core.schema
include /home/ucportal/local/openldap/etc/openldap/schema/corba.schema
include /home/ucportal/local/openldap/etc/openldap/schema/cosine.schema
include /home/ucportal/local/openldap/etc/openldap/schema/duaconf.schema
include /home/ucportal/local/openldap/etc/openldap/schema/dyngroup.schema
include /home/ucportal/local/openldap/etc/openldap/schema/inetorgperson.schema
include /home/ucportal/local/openldap/etc/openldap/schema/java.schema
include /home/ucportal/local/openldap/etc/openldap/schema/misc.schema
include /home/ucportal/local/openldap/etc/openldap/schema/nis.schema
include /home/ucportal/local/openldap/etc/openldap/schema/openldap.schema
include /home/ucportal/local/openldap/etc/openldap/schema/ppolicy.schema
include /home/ucportal/local/openldap/etc/openldap/schema/collective.schema
include /home/ucportal/local/openldap/etc/openldap/schema/uc.schema
pidfile /home/ucportal/local/openldap/var/run/slapd.pid
argsfile /home/ucportal/local/openldap/var/run/slapd.args
loglevel 1
logfile /home/ucportal/openldap/var/log/slapd.log
database bdb
suffix "dc=ucweb,dc=com"
rootdn "cn=admin,dc=ucweb,dc=com"
rootpw 123456
directory /home/ucportal/local/openldap/var/openldap-data
index objectClass eq
index entryUUID,entryCSN eq
TLSCACertificateFile /home/ucportal/openldap/etc/openldap/cacerts/ca.crt
TLSCertificateFile /home/ucportal/openldap/etc/openldap/ldap-server.crt
TLSCertificateKeyFile /home/ucportal/openldap/etc/openldap/ldap-key.pem
Attention: I installed and run OpenLDAP as a non-root user.
I used this command to start the LDAP daemon process: slapd -f ~/openldap/etc/openldap/slapd.conf -d 1 -h 'ldaps://0.0.0.0:6361'
Any suggestions?
This is a very common issue with OpenLDAP servers; first, I recommend migrating this question to Server Fault. It is good practice to always run your daemons with root privileges.
Based on my research so far, I'd like to share these links with you; I hope they help you fix your problem.
http://www.clearfoundation.com/component/option,com_kunena/Itemid,232/catid,10/func,view/id,19945/
http://www.openldap.org/lists/openldap-software/200502/msg00268.html
Configure OpenLDAP
https://serverfault.com/questions/138286/configuring-openldap-and-ssl
http://www.openldap.org/doc/admin24/slapdconf2.html
