Random OpenLDAP Timeout Issue

I am currently running OpenLDAP 2.4.31 on Ubuntu 12.04 in EC2. I am having an issue where I get random timeouts when running ldapsearch or ldapadd commands against the LDAP server.
There is essentially no load on the LDAP servers; I am using them for name resolution of EC2 internal hostnames and as an external node classifier for Puppet.
When the timeout happens I get the following error:
ldap_sasl_bind(SIMPLE): Can't contact LDAP server
If I rerun the command it works fine. This is causing some issues in my automation (and while I can add error checking for this, e.g. the retry loop below, it seems odd that it's happening in the first place).
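In the meantime, a simple retry loop keeps the automation moving when a bind fails transiently; a minimal sketch (the URI, base DN, attempt count, and sleep are placeholders, not values from my setup):

#!/bin/bash
# Retry an ldapsearch a few times, sleeping between attempts,
# to ride out a transient "Can't contact LDAP server" error.
for attempt in 1 2 3; do
    if ldapsearch -x -H ldap://localhost -b "dc=example,dc=local" "(objectClass=*)"; then
        exit 0
    fi
    echo "attempt $attempt failed, retrying..." >&2
    sleep 2
done
exit 1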
Here is a copy of my slapd.conf (with some environment-specific info commented out); hopefully someone has a suggestion on what I am missing in the config to prevent the timeout issue:
include /opt/openldap/openldap-2.4.31/etc/openldap/schema/core.schema
include /opt/openldap/openldap-2.4.31/etc/openldap/schema/collective.schema
include /opt/openldap/openldap-2.4.31/etc/openldap/schema/corba.schema
include /opt/openldap/openldap-2.4.31/etc/openldap/schema/cosine.schema
include /opt/openldap/openldap-2.4.31/etc/openldap/schema/duaconf.schema
include /opt/openldap/openldap-2.4.31/etc/openldap/schema/dyngroup.schema
include /opt/openldap/openldap-2.4.31/etc/openldap/schema/inetorgperson.schema
include /opt/openldap/openldap-2.4.31/etc/openldap/schema/java.schema
include /opt/openldap/openldap-2.4.31/etc/openldap/schema/misc.schema
include /opt/openldap/openldap-2.4.31/etc/openldap/schema/nis.schema
include /opt/openldap/openldap-2.4.31/etc/openldap/schema/openldap.schema
include /opt/openldap/openldap-2.4.31/etc/openldap/schema/ppolicy.schema
include /opt/openldap/openldap-2.4.31/etc/openldap/schema/puppet.schema
pidfile /opt/openldap/openldap-2.4.31/var/run/slapd.pid
argsfile /opt/openldap/openldap-2.4.31/var/run/slapd.args
loglevel 0
serverID 001
database bdb
suffix "dc=example,dc=local"
rootdn "cn=admin,dc=example,dc=local"
rootpw secret
directory /opt/openldap/openldap-2.4.31/var/openldap-data
idletimeout 120
timelimit 300
cachesize 2000
syncrepl rid=000
provider=ldap://10.10.10.10
type=refreshAndPersist
retry="5 5 10 +"
searchbase="dc=example,dc=local"
attrs="*,+"
bindmethod=simple
binddn="cn=admin,dc=example,dc=local"
credentials=secret
syncrepl rid=000
provider=ldap://10.10.10.20
type=refreshAndPersist
retry="5 5 10 +"
searchbase="dc=example,dc=local"
attrs="*,+"
bindmethod=simple
binddn="cn=admin,dc=example,dc=local"
credentials=secret
index entryCSN eq
index entryUUID eq
mirrormode TRUE
overlay syncprov
syncprov-checkpoint 100 10

Ignore this question. My self-healing automation was misconfigured and was restarting the slapd process every minute by accident.
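For anyone hitting similar symptoms: a quick way to spot an external restart loop is to watch the daemon's PID and uptime. A sketch using the pidfile path from the config above:

# If the PID keeps changing or the elapsed time never exceeds a
# minute, something is restarting slapd behind your back.
watch -n 10 'pid=$(cat /opt/openldap/openldap-2.4.31/var/run/slapd.pid); echo "pid=$pid"; ps -o etime= -p "$pid"'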

Related

Not able to generate a Let's Encrypt Certificate

I am using Laravel Forge to manage my servers and websites, so SSL certificates from Let's Encrypt are also generated via Forge. Somehow one of my domains throws an error (see below).
This domain is running on a server which holds several other domains. The nginx configuration is exactly the same.
The application is a Laravel app running on Laravel Octane.
Error:
2022-06-13 10:41:26 URL:https://forge-certificates.laravel.com/le/1441847/1663342/ecdsa?
env=production [4653] -> "letsencrypt_script1655109686" [1] Cloning
into 'letsencrypt1655109686'... Note: switching to
'91cccc0c234e4decf0a19595fa19a6f306788032'.
You are in 'detached HEAD' state. You can look around, make
experimental changes and commit them, and you can discard any commits
you make in this state without impacting any branches by switching
back to a branch.
If you want to create a new branch to retain commits you create, you
may do so (now or later) by using -c with the switch command. Example:
git switch -c
Or undo this operation with:
git switch -
Turn off this advice by setting config variable advice.detachedHead to
false
HEAD is now at 91cccc0 ensure newline before new section in
openssl.cnf ERROR: Challenge is invalid! (returned: invalid) (result:
["type"] "http-01" ["status"] "invalid"
["error","type"] "urn:ietf:params:acme:error:connection"
["error","detail"] "111.222.333.444: Fetching
http://my-domain.de/.well-known/acme-challenge/_bL98lTvqMOeJG-NCeLzl2Z3VWUm7EJBa1i6IEBDuLw:
Timeout during connect (likely firewall problem)"
["error","status"] 400
["error"] {"type":"urn:ietf:params:acme:error:connection","detail":"111.222.333.444:
Fetching
http://my-domain.de/.well-known/acme-challenge/_bL98lTvqMOeJG-NCeLzl2Z3VWUm7EJBa1i6IEBDuLw:
Timeout during connect (likely firewall problem)","status":400}
["url"] "https://acme-v02.api.letsencrypt.org/acme/chall-v3/119151352296/awZDUg"
["token"] "_bL98lTvqMOeJG-NCeLzl2Z3VWUm7EJBa1i6IEBDuLw"
["validationRecord",0,"url"] "http://www.my-domain.de/.well-known/acme-challenge/_bL98lTvqMOeJG-NCeLzl2Z3VWUm7EJBa1i6IEBDuLw"
["validationRecord",0,"hostname"] "www.my-domain.de"
["validationRecord",0,"port"] "80"
["validationRecord",0,"addressesResolved",0] "111.222.333.444"
["validationRecord",0,"addressesResolved",1] "2a01:4f8:141:333::84"
["validationRecord",0,"addressesResolved"] ["111.222.333.444","2a01:4f8:141:333::84"]
["validationRecord",0,"addressUsed"] "2a01:4f8:141:333::84"
["validationRecord",0] {"url":"http://www.my-domain.de/.well-known/acme-challenge/_bL98lTvqMOeJG-NCeLzl2Z3VWUm7EJBa1i6IEBDuLw","hostname":"www.my-domain.de","port":"80","addressesResolved":["111.222.333.444","2a01:4f8:141:333::84"],"addressUsed":"2a01:4f8:141:333::84"} ["validationRecord",1,"url"] "http://www.my-domain.de/.well-known/acme-challenge/_bL98lTvqMOeJG-NCeLzl2Z3VWUm7EJBa1i6IEBDuLw"
["validationRecord",1,"hostname"] "www.my-domain.de"
["validationRecord",1,"port"] "80"
["validationRecord",1,"addressesResolved",0] "111.222.333.444"
["validationRecord",1,"addressesResolved",1] "2a01:4f8:141:333::84"
["validationRecord",1,"addressesResolved"] ["111.222.333.444","2a01:4f8:141:333::84"]
["validationRecord",1,"addressUsed"] "111.222.333.444"
["validationRecord",1] {"url":"http://www.my-domain.de/.well-known/acme-challenge/_bL98lTvqMOeJG-NCeLzl2Z3VWUm7EJBa1i6IEBDuLw","hostname":"www.my-domain.de","port":"80","addressesResolved":["111.222.333.444","2a01:4f8:141:333::84"],"addressUsed":"111.222.333.444"}
["validationRecord",2,"url"] "http://my-domain.de/.well-known/acme-challenge/_bL98lTvqMOeJG-NCeLzl2Z3VWUm7EJBa1i6IEBDuLw"
["validationRecord",2,"hostname"] "my-domain.de"
["validationRecord",2,"port"] "80"
["validationRecord",2,"addressesResolved",0] "111.222.333.444"
["validationRecord",2,"addressesResolved",1] "2a01:4f8:141:333::84"
["validationRecord",2,"addressesResolved"] ["111.222.333.444","2a01:4f8:141:333::84"]
["validationRecord",2,"addressUsed"] "2a01:4f8:141:333::84"
["validationRecord",2] {"url":"http://my-domain.de/.well-known/acme-challenge/_bL98lTvqMOeJG-NCeLzl2Z3VWUm7EJBa1i6IEBDuLw","hostname":"my-domain.de","port":"80","addressesResolved":["111.222.333.444","2a01:4f8:141:333::84"],"addressUsed":"2a01:4f8:141:333::84"} ["validationRecord"] [{"url":"http://www.my-domain.de/.well-known/acme-challenge/_bL98lTvqMOeJG-NCeLzl2Z3VWUm7EJBa1i6IEBDuLw","hostname":"www.my-domain.de","port":"80","addressesResolved":["111.222.333.444","2a01:4f8:141:333::84"],"addressUsed":"2a01:4f8:141:333::84"},{"url":"http://www.my-domain.de/.well-known/acme-challenge/_bL98lTvqMOeJG-NCeLzl2Z3VWUm7EJBa1i6IEBDuLw","hostname":"www.my-domain.de","port":"80","addressesResolved":["111.222.333.444","2a01:4f8:141:333::84"],"addressUsed":"111.222.333.444"},{"url":"http://my-domain.de/.well-known/acme-challenge/_bL98lTvqMOeJG-NCeLzl2Z3VWUm7EJBa1i6IEBDuLw","hostname":"my-domain.de","port":"80","addressesResolved":["111.222.333.444","2a01:4f8:141:333::84"],"addressUsed":"2a01:4f8:141:333::84"}] ["validated"] "2022-06-13T08:41:47Z")
I've finally found the solution. Laravel Forge does not support IPv6 out of the box. So you either have to configure Forge to use IPv6 as well or remove all AAAA records pointing to the server managed by Laravel Forge.
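To confirm it really is the IPv6 path before touching DNS, you can fetch a file under the challenge directory over each address family separately; a sketch with a placeholder domain and path:

# Force IPv4, then IPv6; if only the -6 request times out, the
# AAAA record points somewhere the challenge can't be served from.
curl -4 -v http://my-domain.de/.well-known/acme-challenge/test
curl -6 -v http://my-domain.de/.well-known/acme-challenge/test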

Wsrep MariaDB Crash Thread pointer: 0x0

The cluster is constantly crashing.
Sometimes it runs stably for two days; sometimes it crashes five minutes after both servers come up.
server.cnf was tested with different parameters; the result did not change.
When the cluster setup is not installed, a single server runs without problems.
I have done a clean installation several times.
iptables is stopped and SELinux is disabled.
Yum repo:
[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/10.1/centos6-amd64
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1
CentOS 6.9 64-bit
yum install MariaDB-server MariaDB-client
Error log (/var/lib/mysql/maria1.err):
180305 16:17:03 [ERROR] mysqld got signal 11 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
To report this bug, see https://mariadb.com/kb/en/reporting-bugs
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.
Server version: 10.1.31-MariaDB
key_buffer_size=134217728
read_buffer_size=131072
max_used_connections=5
max_threads=153
thread_count=2
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 467163 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.
Thread pointer: 0x0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0x0 thread_stack 0x48400
/etc/my.cnf.d/server.cnf
#
# These groups are read by MariaDB server.
# Use it for options that only the server (but not clients) should see
#
# See the examples of server my.cnf files in /usr/share/mysql/
#
# this is read by the standalone daemon and embedded servers
[server]
# this is only for the mysqld standalone daemon
[mysqld]
bind-address=0.0.0.0
skip-name-resolve
#
# * Galera-related settings
#
[galera]
# Mandatory settings
wsrep_on=ON
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address="gcomm://10.1.1.11,10.1.1.12"
wsrep_cluster_name='galera_cluster'
wsrep_node_address='10.1.1.11'
wsrep_node_name='maria1'
wsrep_sst_method=rsync
wsrep_sst_auth=sst_user:!!PassWordSs!!
binlog_format=row
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
#
# Allow server to accept connections on all interfaces.
#
#bind-address=0.0.0.0
#
# Optional setting
#wsrep_slave_threads=1
#innodb_flush_log_at_trx_commit=0
# this is only for embedded server
[embedded]
# This group is only read by MariaDB servers, not by MySQL.
# If you use the same .cnf file for MySQL and MariaDB,
# you can put MariaDB-only options here
[mariadb]
# This group is only read by MariaDB-10.1 servers.
# If you use the same .cnf file for MariaDB of different versions,
# use this group for options that older servers don't understand
[mariadb-10.1]
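When the cluster wedges like this, it helps to capture the wsrep state on each node before restarting anything; a sketch using standard Galera status variables:

# Cluster size, local node state, and whether this node is usable.
mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';"
mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment';"
mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_ready';"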
Please report this crash to the MariaDB bug tracker.

MariaDB lacks a conf file?

I have recently installed MariaDB on my server; however, it doesn't have the appropriate database settings for tweaking/tuning! Here is my "my.cnf":
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
# Settings user and group are ignored when systemd is used.
# If you need to run mysqld under a different user or group,
# customize your systemd unit file for mariadb according to the
# instructions in http://fedoraproject.org/wiki/Systemd
[mysqld_safe]
log-error=/var/log/mariadb/mariadb.log
pid-file=/var/run/mariadb/mariadb.pid
#
# include all files from the config directory
#
!includedir /etc/my.cnf.d
Where are the other settings, such as thread and memory adjustments?
Add what you need, but generally you don't need any changes from the defaults until there is some kind of issue. If you do, a drop-in file like the sketch below is the usual place for them.
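Tuning options can go in their own file picked up by the !includedir line above; a sketch (the file name and values are illustrative, not recommendations):

# /etc/my.cnf.d/tuning.cnf
[mysqld]
# Buffer pool for InnoDB data and indexes; size to fit your workload.
innodb_buffer_pool_size=1G
# Connection limit and per-connection sort buffer.
max_connections=150
sort_buffer_size=2M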

OpenLDAP unexpectedly shut down

I installed OpenLDAP 2.4.35 from the source tarball with Berkeley DB 5.0.32.NC on CentOS 6.4 x86_64.
After running for a few days, the LDAP server shut down unexpectedly. Here is the last of the log:
ber_get_next
TLS trace: SSL3 alert read:warning:close notify
52b7b798 ber_get_next on fd 13 failed errno=0 (Success)
52b7b798 conn=1023 op=70 do_unbind
52b7b798 connection_close: conn=1023 sd=13
TLS trace: SSL3 alert write:warning:close notify
52b7cbba daemon: shutdown requested and initiated.
52b7cbba slapd shutdown: waiting for 0 operations/tasks to finish
52b7cbba slapd shutdown: initiated
52b7cbba ====> bdb_cache_release_all
52b7cbba slapd destroy: freeing system resources.
52b7cbba slapd stopped.
The configuration file (slapd.conf):
include /home/ucportal/local/openldap/etc/openldap/schema/core.schema
include /home/ucportal/local/openldap/etc/openldap/schema/corba.schema
include /home/ucportal/local/openldap/etc/openldap/schema/cosine.schema
include /home/ucportal/local/openldap/etc/openldap/schema/duaconf.schema
include /home/ucportal/local/openldap/etc/openldap/schema/dyngroup.schema
include /home/ucportal/local/openldap/etc/openldap/schema/inetorgperson.schema
include /home/ucportal/local/openldap/etc/openldap/schema/java.schema
include /home/ucportal/local/openldap/etc/openldap/schema/misc.schema
include /home/ucportal/local/openldap/etc/openldap/schema/nis.schema
include /home/ucportal/local/openldap/etc/openldap/schema/openldap.schema
include /home/ucportal/local/openldap/etc/openldap/schema/ppolicy.schema
include /home/ucportal/local/openldap/etc/openldap/schema/collective.schema
include /home/ucportal/local/openldap/etc/openldap/schema/uc.schema
pidfile /home/ucportal/local/openldap/var/run/slapd.pid
argsfile /home/ucportal/local/openldap/var/run/slapd.args
loglevel 1
logfile /home/ucportal/openldap/var/log/slapd.log
database bdb
suffix "dc=ucweb,dc=com"
rootdn "cn=admin,dc=ucweb,dc=com"
rootpw 123456
directory /home/ucportal/local/openldap/var/openldap-data
index objectClass eq
index entryUUID,entryCSN eq
TLSCACertificateFile /home/ucportal/openldap/etc/openldap/cacerts/ca.crt
TLSCertificateFile /home/ucportal/openldap/etc/openldap/ldap-server.crt
TLSCertificateKeyFile /home/ucportal/openldap/etc/openldap/ldap-key.pem
Note: I installed and ran OpenLDAP as a non-root user.
I used this command to start the LDAP daemon: slapd -f ~/openldap/etc/openldap/slapd.conf -d 1 -h 'ldaps://0.0.0.0:6361'
Any suggestions?
This is a very common issue with OpenLDAP servers. First, I'd recommend migrating this question to Server Fault. It is also good practice to always start your daemons with root privileges.
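For reference, slapd can be started by root and then drop privileges with its -u/-g options, so you don't have to run the whole daemon as root; a sketch reusing your command line (the "ldap" account name is an assumption, use whichever user owns your data directory):

# Start as root, then run as an unprivileged user/group.
slapd -u ldap -g ldap -f ~/openldap/etc/openldap/slapd.conf -h 'ldaps://0.0.0.0:6361'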
Based on my research so far, I'd like to share these links with you; I hope they help you fix the problem.
http://www.clearfoundation.com/component/option,com_kunena/Itemid,232/catid,10/func,view/id,19945/
http://www.openldap.org/lists/openldap-software/200502/msg00268.html
Configure OpenLDAP
https://serverfault.com/questions/138286/configuring-openldap-and-ssl
http://www.openldap.org/doc/admin24/slapdconf2.html
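Also, the "daemon: shutdown requested and initiated" line in your log means slapd received a termination signal rather than crashing on its own, so something on the box is signalling it. If you can't find the culprit, auditd can log kill syscalls; a sketch (the key name is arbitrary):

# Record kill syscalls (x86_64), tagged so they're easy to find later.
auditctl -a always,exit -F arch=b64 -S kill -k slapd_kill
# After the next shutdown, see which process sent the signal:
ausearch -k slapd_kill --interpret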

mount: nfs access denied by server

I am trying to mount an NFS share on my Linux machine.
My /etc/fstab looks like this:
192.168.0.5:/volume2/Asterisk_Recordings /var/spool/newnfs nfs rsize=32768,wsize=32768,intr,noatime 1 0
My /etc/mtab looks like this:
192.168.0.5:/volume2/Asterisk_Recordings /var/spool/newnfs nfs rw,addr=192.168.0.5 0 0
I have enabled NFS on my NAS device.
When I run mount -t nfs -v 192.168.0.5:/volume2/Asterisk_Recordings /var/spool/newnfs/, I get this:
mount.nfs: timeout set for Thu Aug 1 07:01:04 2013
mount.nfs: trying text-based options 'vers=4,addr=192.168.0.5,clientaddr=192.168.1.1'
mount.nfs: mount(2): Permission denied
mount.nfs: access denied by server while mounting 192.168.0.5:/volume2/Asterisk_Recordings
Any possible reasons?
Thanks in advance.
This error can also occur if the /etc/hosts file on the NFS server maps the hostname of the client to an incorrect IP address, or the IP address of the client to an incorrect hostname. It is quick and easy to check, so it is worth doing before looking for other problems. Note that if you do have to change any entries, the NFS server has to be stopped and restarted, as it reads the hosts file only when it is started.
Is there a config file on the NAS where you can put allowances for clients? E.g. on a Debian-based OS the config file is /etc/exports; you would put "/volume2/Asterisk_Recordings 192.168.1.1(rw,sync)" there and activate it with exportfs -a (your NAS may do this automatically if you update the config via its web interface). Check also https://stackoverflow.com/questions/22246477/mounting-nfs-results-in-access-denied-by-server.
Remember to add the IP addresses/hostnames of your NFS clients to /etc/hosts.allow on the NFS server:
nfs: clienthost1, clienthost2, clienthost3
You might also need to reload the NFS config and restart the NFS service on the server, then re-export:
systemctl restart nfs-config.service
systemctl status nfs.service
exportfs -arv
I have a Debian 10 system with a Debian 10 VM running inside it. I wanted the VM to access a physical partition of the host's hard drive. I mounted the physical drive on the host and exported it, but I was not able to mount it on the guest, continually getting an access denied error.
The solution, after many hours, was to add the no_all_squash option in the exports file. This is supposed to be the default, but I needed to add it explicitly. As soon as I did that, the problem went away and I could mount the file system. Unfortunately, I could not see the files on it.
/media/dev 192.168.100.0/24(rw,sync,no_subtree_check,no_root_squash,no_all_squash)
On the server I could see the files, but on the guest I could not.
I had to change the line to
/media/dev 192.168.100.0/255.255.255.0(rw,sync,no_subtree_check,no_root_squash,no_all_squash)
to see the actual files that were on the file system.
I saw this error, presumably due to an older NFS client; adding -o nfsvers=3 fixed the issue for me, e.g.
mount -t nfs -o nfsvers=3 x.x.x.x:/nfs_mount /mnt/nfs_mount
Or in /etc/fstab:
x.x.x.x:/nfs_mount /mnt/nfs_mount nfs proto=tcp,port=2049,nfsvers=3 0 0
Ref: https://www.thegeekdiary.com/mount-nfs-access-denied-by-server-while-mounting-how-to-resolve/
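In general, before changing anything server-side, it's worth confirming what the server actually exports to your client; showmount asks the server's mountd directly. A sketch using the addresses from the question:

# List exports as the server advertises them; if the share or your
# address is missing here, fix the export list on the server first.
showmount -e 192.168.0.5
# A verbose mount attempt with an explicit version helps isolate
# protocol-negotiation failures:
mount -t nfs -o vers=3 -v 192.168.0.5:/volume2/Asterisk_Recordings /var/spool/newnfs/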
