How to activate "ppolicy" module in OpenLDAP? - openldap

I am trying to activate the ppolicy module in OpenLDAP.
OS version: Debian 8.4
LDAP version: @(#) $OpenLDAP: slapd (Jan 16 2016 23:00:08) $
    root@chimera:/tmp/buildd/openldap-2.4.40+dfsg/debian/build/servers/slap
I get the following message:
# ldapmodify -x -a -H ldap://localhost -D cn=admin,dc=domain,dc=local -w adminpassword -f /etc/ldap/schema/ppolicy.ldif
adding new entry "cn=ppolicy,cn=schema,cn=config"
ldap_add: Insufficient access (50)
# ldapsearch -x -s one -H ldap://localhost -D cn=admin,dc=domain,dc=local -w adminpassword -b cn=schema,cn=config cn -LLL
No such object (32)
Logs :
[13-03-2018 13:08:06] slapd debug conn=1002 fd=14 ACCEPT from IP=[::1]:45318 (IP=[::]:389)
[13-03-2018 13:08:06] slapd debug conn=1002 op=0 BIND dn="cn=admin,dc=domain,dc=local" method=128
[13-03-2018 13:08:06] slapd debug conn=1002 op=0 BIND dn="cn=admin,dc=domain,dc=local" mech=SIMPLE ssf=0
[13-03-2018 13:08:06] slapd debug conn=1002 op=0 RESULT tag=97 err=0 text=
[13-03-2018 13:08:06] slapd debug conn=1002 op=1 SRCH base="cn=schema,cn=config" scope=1 deref=0 filter="(objectClass=*)"
[13-03-2018 13:08:06] slapd debug conn=1002 op=1 SRCH attr=cn
[13-03-2018 13:08:06] slapd debug conn=1002 op=1 SEARCH RESULT tag=101 err=32 nentries=0 text=
[13-03-2018 13:08:06] slapd debug conn=1002 op=2 UNBIND
[13-03-2018 13:08:06] slapd debug conn=1002 fd=14 closed
Any idea please?
Thank you.

You can use slapadd to add a schema (-n0 selects the configuration database). First, stop your LDAP server.
sudo -u openldap slapadd -n0 -l /etc/ldap/schema/ppolicy.ldif
Then create a file ppolicy-module.ldif with this content:
dn: cn=module,cn=config
objectClass: olcModuleList
cn: module
olcModuleLoad: ppolicy
And load it the same way:
sudo -u openldap slapadd -n0 -l ppolicy-module.ldif
Then you can check that the module is loaded correctly:
$ sudo slapcat -n 0 | grep olcModuleLoad | grep ppolicy
olcModuleLoad: {0}ppolicy
Then restart your ldap server.
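After restarting slapd, you can also verify that the schema entry itself was created. A quick check, assuming you run it as root on the server (SASL EXTERNAL over the ldapi socket sidesteps the cn=admin access problem from the question):

```shell
# List schema entries under cn=schema,cn=config and look for ppolicy
sudo ldapsearch -Y EXTERNAL -H ldapi:/// -b cn=schema,cn=config cn -LLL | grep -i ppolicy
```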

Related

Varnish 6.0.8 Secret file is not created

We're facing an issue when installing Varnish 6.0.8 on Ubuntu 18.04.6: it doesn't create the secret file inside the /etc/varnish directory.
We use the following script for installation:
curl -s https://packagecloud.io/install/repositories/varnishcache/varnish60lts/script.deb.sh | sudo bash
Can someone please help?
PS: we tried installing later versions (6.6 and 7.0.0) and got the same issue.
From a security point of view, remote CLI access is not enabled by default. You can see this by looking at /lib/systemd/system/varnish.service:
[Unit]
Description=Varnish Cache, a high-performance HTTP accelerator
After=network-online.target nss-lookup.target
[Service]
Type=forking
KillMode=process
# Maximum number of open files (for ulimit -n)
LimitNOFILE=131072
# Locked shared memory - should suffice to lock the shared memory log
# (varnishd -l argument)
# Default log size is 80MB vsl + 1M vsm + header -> 82MB
# unit is bytes
LimitMEMLOCK=85983232
# Enable this to avoid "fork failed" on reload.
TasksMax=infinity
# Maximum size of the corefile.
LimitCORE=infinity
ExecStart=/usr/sbin/varnishd \
-a :6081 \
-a localhost:8443,PROXY \
-p feature=+http2 \
-f /etc/varnish/default.vcl \
-s malloc,256m
ExecReload=/usr/sbin/varnishreload
[Install]
WantedBy=multi-user.target
There are no -T and -S parameters in the standard systemd configuration. However, you can enable this by modifying the systemd configuration yourself.
Just run sudo systemctl edit --full varnish to edit the runtime configuration and add a -T parameter to enable remote CLI access.
Be careful with this and make sure you restrict access to this endpoint via firewalling rules.
Additionally, add -S /etc/varnish/secret as a varnishd runtime parameter in the same systemd unit.
You can use the following command to add a random unique value to the secret file:
uuidgen | sudo tee /etc/varnish/secret
This is what your runtime parameters would look like:
ExecStart=/usr/sbin/varnishd \
-a :6081 \
-a localhost:8443,PROXY \
-p feature=+http2 \
-f /etc/varnish/default.vcl \
-s malloc,2g \
-S /etc/varnish/secret \
-T :6082
When you're done just run the following command to restart Varnish:
sudo systemctl restart varnish
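Once Varnish is back up, you can confirm that CLI access works with the secret. A sketch, assuming the -T :6082 listener and secret path configured above:

```shell
# Connect to the management CLI using the secret file;
# "status" should report the state of the child process.
sudo varnishadm -T localhost:6082 -S /etc/varnish/secret status
```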

Openresty: how to use a different nginx.conf?

In Ubuntu 20.04:
I have openresty installed and it is running well.
It uses the default /usr/local/openresty/nginx/conf/nginx.conf.
I want to instead use a new custom nginx.conf file in ~/conf/nginx.conf.
How can I ask openresty to run on this new conf file?
Just found the answer:
ubuntu@john:~$ openresty -help
nginx version: openresty/1.19.9.1
Usage: nginx [-?hvVtTq] [-s signal] [-p prefix]
[-e filename] [-c filename] [-g directives]
Options:
-?,-h : this help
-v : show version and exit
-V : show version and configure options then exit
-t : test configuration and exit
-T : test configuration, dump it and exit
-q : suppress non-error messages during configuration testing
-s signal : send signal to a master process: stop, quit, reopen, reload
-p prefix : set prefix path (default: /usr/local/openresty/nginx/)
-e filename : set error log file (default: logs/error.log)
-c filename : set configuration file (default: conf/nginx.conf)
-g directives : set global directives out of configuration file
Use:
openresty -c filename
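For example, to test and then start OpenResty with the custom file (note that nginx resolves a relative -c path against its prefix, so an absolute path is safest):

```shell
# Validate the custom configuration first, then start with it
openresty -t -c "$HOME/conf/nginx.conf"
openresty -c "$HOME/conf/nginx.conf"
```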

Distcc .distcc/zeroconf/hosts contained no hosts

I am getting an error from distcc. I am using the package from the repos. Here is my configuration
$ cat /etc/default/distcc | grep -v \#
STARTDISTCC="true"
ALLOWEDNETS="127.0.0.0/16 10.0.0.0/8"
LISTENER="0.0.0.0"
NICE="10"
JOBS="3"
ZEROCONF="true"
$ cat /etc/distcc/hosts | grep -v \#
+zeroconf
$ dpkg -l | grep distcc
ii distcc 3.1-6 amd64 simple distributed compiler client and server
ii distcc-pump 3.1-6 amd64 pump mode for distcc a distributed compiler client and server
$ cat ~/.distcc/zeroconf/hosts
10.16.114.52:3632/16
$ ifconfig
...
inet addr:10.16.114.52 Bcast:10.16.115.255 Mask:255.255.252.0
...
When I run a batch of compilations (1000 generated C files), like:
distcc gcc -o 41.o -c 41.c
I get the error,
distcc[26927] (dcc_parse_hosts) Warning: /home/amacdonald/.distcc/zeroconf/hosts contained no hosts; can't distribute work
distcc[26927] (dcc_zeroconf_add_hosts) CRITICAL! failed to parse host file.
distcc[26927] (dcc_build_somewhere) Warning: failed to distribute, running locally instead
distcc[26929] (dcc_parse_hosts) Warning: /home/amacdonald/.distcc/zeroconf/hosts contained no hosts; can't distribute work
distcc[26929] (dcc_zeroconf_add_hosts) CRITICAL! failed to parse host file.
You need a hosts file with the list of machines that run distcc. Use this path:
~/.distcc/hosts
For example:
10.0.0.1 10.0.0.2 10.0.0.42
Even with the DISTCC_HOSTS environment variable and hosts files in /etc and /usr/local/etc, distcc still couldn't find any hosts.
After enabling a log file:
DISTCCD_OPTS="${DISTCCD_OPTS} --log-file /var/log/distccd.log"
the log said:
(dcc_warn_masquerade_whitelist) CRITICAL! /usr/local/lib/distcc not found. You must set up masquerade (see distcc(1)) to list whitelisted compilers or pass --enable-tcp-insecure. To set up masquerade automatically run update-distcc-symlinks.
I had to run:
sudo update-distcc-symlinks
sudo ln -s /usr/lib/distcc /usr/local/lib/distcc
To get it working.
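You can then confirm which hosts distcc will actually use; `distcc --show-hosts` prints the resolved host list:

```shell
# Print the host list distcc would distribute work to
distcc --show-hosts
```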

What is the difference b/w sshd_config and sshd command?

I changed my sshd_config file to set strictmodes to no and then restarted the sshd service.
However, I am getting two different outputs as shown below.
[root@localhost httpd]# service sshd stop
Stopping sshd: [ OK ]
[root@localhost httpd]# service sshd start
Starting sshd: [ OK ]
[root@localhost httpd]# sudo cat /etc/ssh/sshd_config | grep StrictModes
#StrictModes no
[root@localhost httpd]# sshd -T | grep strictmodes
strictmodes yes
In most configuration files, a line starting with # is a comment, so the option on that line is not applied and your server falls back to its built-in default, which for StrictModes is yes. Change the line in your sshd_config to
StrictModes no
and restart the service. It will be applied.
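To confirm the change took effect, validate and re-run the effective-config dump after the restart:

```shell
sudo sshd -t                        # syntax check of sshd_config
sudo systemctl restart sshd         # or: service sshd restart
sudo sshd -T | grep -i strictmodes  # should now print: strictmodes no
```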

Unexpected local arg: /rsync

I would like to use rsync to synchronise my /rsync folder.
I created an rsync user on my 2 servers and configured the SSH key.
I installed rsync, created the /rsync folder, and ran chmod 777 on it.
But when I execute
rsync -avz -e ssh rsync#1.2.3.4:/rsync /rsync -p 8682
I have
Unexpected local arg: /rsync
If arg is a remote file/dir, prefix it with a colon (:).
rsync error: syntax or usage error (code 1) at main.c(1246) [Receiver=3.0.9]
("ssh rsync#1.2.3.4 -p 8682" works)
The trailing -p 8682 is parsed by rsync itself, not by ssh; pass the port inside the -e command instead:
rsync -avz -e 'ssh -p 8682' rsync@1.2.3.4:/rsync /rsync
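Alternatively, you can pin the port in ~/.ssh/config so a plain -e ssh (or no -e at all) works; the host alias `backupserver` here is hypothetical:

```
Host backupserver
    HostName 1.2.3.4
    Port 8682
    User rsync
```

After that, `rsync -avz backupserver:/rsync /rsync` connects on port 8682 automatically.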
