I'm pretty new to openLDAP and I am trying to set up a password policy. I have the following in my slapd.conf:
include /etc/openldap/schema/ppolicy.schema
moduleload ppolicy.la
objectClass: top
objectClass: device
objectClass: pwdPolicy
cn: default
pwdAttribute: userPassword
pwdMaxAge: 5184000
pwdExpireWarning: 432000
pwdInHistory: 6
pwdCheckQuality: 1
pwdMinLength: 8
pwdMaxFailure: 5
pwdLockout: TRUE
pwdLockoutDuration: 1920
pwdGraceAuthNLimit: 0
pwdFailureCountInterval: 0
pwdMustChange: TRUE
pwdAllowUserChange: TRUE
pwdSafeModify: FALSE
pwdCheckModule: check_password.so
pwdCheckQuality: 2
The problem, though, is that when I restart slapd I get the following error:
/etc/openldap/slapd.conf: line 86: unknown directive inside backend database definition.
Can anyone tell me what I have done wrong?
The basic problem here is that you are inventing syntax: LDIF entries don't belong in slapd.conf. The pwdPolicy entry is directory data, so it has to be added to the DIT with ldapadd (or a similar tool), not pasted into the server configuration. What slapd.conf itself needs, besides the include and moduleload lines you already have, is an overlay ppolicy directive inside the database section, optionally followed by ppolicy_default pointing at the DN of your default policy entry. Also note that pwdCheckQuality appears twice in your entry; keep only one value.
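To make the distinction concrete, here is a minimal sketch of how the two pieces are usually split (the suffix dc=example,dc=com and the ou=policies container are assumptions; adjust them to your tree):

```conf
# slapd.conf -- server configuration only
include /etc/openldap/schema/ppolicy.schema
moduleload ppolicy.la

database bdb
suffix "dc=example,dc=com"
# the overlay directives live inside the database section:
overlay ppolicy
ppolicy_default "cn=default,ou=policies,dc=example,dc=com"
```

```ldif
# policy.ldif -- the pwdPolicy entry is directory data, loaded separately
dn: cn=default,ou=policies,dc=example,dc=com
objectClass: top
objectClass: device
objectClass: pwdPolicy
cn: default
pwdAttribute: userPassword
pwdMaxAge: 5184000
pwdExpireWarning: 432000
pwdInHistory: 6
pwdMinLength: 8
pwdMaxFailure: 5
pwdLockout: TRUE
pwdLockoutDuration: 1920
pwdGraceAuthNLimit: 0
pwdFailureCountInterval: 0
pwdMustChange: TRUE
pwdAllowUserChange: TRUE
pwdSafeModify: FALSE
pwdCheckQuality: 2
```

Load the entry with something like ldapadd -x -D "cn=Manager,dc=example,dc=com" -W -f policy.ldif (the bind DN here is an assumption). The pwdCheckModule attribute is omitted above because it is only honored when the ppolicy overlay was built with module-checking support.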
I've been trying to read the following YAML file using symfony/yaml (v4.4.0) with CakePHP 3.
But, I get the following error.
Reference "aaa" does not exist in "path to yml" at line xx (near "*aaa:").
Symfony\Component\Yaml\Exception\ParseException
I would like to use 'aaa' as a key later.
It doesn't work with "*aaa:" but does work with "1:".
Basically, is it possible to use alias for keys in yaml file?
Here's the yaml file.
aaa: &aaa 1
bbb: &bbb 2
ccc: &ccc 3
*aaa: # <- this doesn't work and works with '1:'
- *bbb
- *ccc
For general spec-conforming YAML parsers
You need to write it with a space before :.
aaa: &aaa 1
bbb: &bbb 2
ccc: &ccc 3
*aaa :
- *bbb
- *ccc
YAML 1.2 allows : to be part of an anchor and therefore, the line will not be parsed as implicit key if : is written adjacent to the alias (since it becomes part of the alias then).
This has been discussed on the YAML core mailing list.
For Symfony
It seems Symfony parses *aaa: as the alias *aaa with : as the value indicator. While this is a spec violation, that shouldn't bother us, since according to the mailing list it is more of an oversight in the spec. However, Symfony then fails to resolve the alias, and there's not much you can do about that other than file an issue for it.
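For comparison, PyYAML (a third-party YAML 1.1 parser, assumed installed) happens to accept the original file as-is: its scanner limits anchor/alias names to alphanumerics, '-' and '_', so the ':' terminates the alias rather than becoming part of it. A quick sketch:

```python
import yaml  # PyYAML, assumed installed

# The exact document from the question, with *aaa: written adjacent.
doc = """\
aaa: &aaa 1
bbb: &bbb 2
ccc: &ccc 3
*aaa:
- *bbb
- *ccc
"""

data = yaml.safe_load(doc)
# The alias key resolves to the anchored value, so the key is the
# integer 1 (not the string 'aaa').
print(data)
```

Note that this is also a deviation from the spec's anchor grammar, just one that resolves the alias instead of choking on it.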
We have an instance that uses cloud-init for initial provisioning, and this works great.
We want to add swap to this instance and have correctly configured a suitable disk. However, we cannot figure out how to get cloud-init to initialise the swap disk the way it does all the other disks on the machine.
Our configuration of our disks, including swap, is as follows:
fs_setup:
- label: vidi
device: /dev/xvde
filesystem: ext4
- label: swap
device: /dev/xvdg
filesystem: swap
mounts:
- [ /dev/xvde, /var/lib/vidispine, ext4, defaults, 0, 0 ]
- [ /dev/xvdg, none, swap, sw, 0, 0 ]
This results in an /etc/fstab as follows:
LABEL=cloudimg-rootfs / ext4 defaults,discard 0 0
/dev/xvde /var/lib/vidispine ext4 defaults,comment=cloudconfig 0 0
/dev/xvdg none swap sw,comment=cloudconfig 0 0
The disk /dev/xvde is formatted correctly on startup. The disk /dev/xvdg is ignored.
What additional steps are required for cloud-init to "mkswap" and "swapon" the /dev/xvdg disk?
In response to "What additional steps are required for cloud-init to 'mkswap' and 'swapon' the /dev/xvdg disk?", the short answer is "nothing".
The longer answer is that you need to be running a version of cloud-init with the following bugfix applied:
https://github.com/canonical/cloud-init/pull/143
Which fixes the following error when running mkswap:
mkswap: invalid block count argument: ''
Specifically, Ubuntu Bionic images 20200131 and newer work properly.
Older versions of cloud-init require the following added to the runcmd scripts on boot to work around the bug above:
- mkswap /dev/xvdg
- swapon -a
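For older images, the pieces combine into a cloud-config along these lines (device names taken from the question; the runcmd lines are only needed to work around the mkswap bug described above):

```yaml
#cloud-config
fs_setup:
  - label: vidi
    device: /dev/xvde
    filesystem: ext4
  - label: swap
    device: /dev/xvdg
    filesystem: swap
mounts:
  - [ /dev/xvde, /var/lib/vidispine, ext4, defaults, 0, 0 ]
  - [ /dev/xvdg, none, swap, sw, 0, 0 ]
runcmd:
  # Workaround for cloud-init versions without the fs_setup swap fix:
  - mkswap /dev/xvdg
  - swapon -a
```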
Solution: swap is space on disk that is used when physical RAM is full, so it can save your system from crashing with an out-of-memory error. To set up a swap partition using cloud-init on Ubuntu, you need to mount a dedicated disk partition at boot time via /etc/fstab (the configuration table that controls how file systems are mounted and unmounted), then create the swap area with mkswap and enable it with swapon.
First, create and attach an additional disk to your machine. Here's an example using Terraform:
resource "aws_instance" "example" {
ami ="<some-ami>"
instance_type = "t3.micro"
tags = {
Name = "example"
}
// root
root_block_device {
volume_size = 50
volume_type = "gp2"
delete_on_termination = true
}
// swap partition
ebs_block_device {
device_name = "/dev/xvdb"
volume_size = 20
volume_type = "gp2"
delete_on_termination = true
}
}
Second, mount the additional disk and enable swap in the cloud-init template file:
mounts:
- [ /dev/nvme1n1, none, swap, sw, 0, 0 ]
bootcmd:
- mkswap /dev/nvme1n1
- swapon /dev/nvme1n1
To verify, run the following in a terminal:
swapon --show
#output:
NAME TYPE SIZE USED PRIO
/dev/nvme1n1 partition 20G 0B -2
I currently have an OpenStack environment with Ceph as the backend storage driver for Cinder.
I have looked into the Cinder documentation and code, and I could not find any values or options to set the default stripe_unit or stripe_count for RBD volumes.
The reason I want to do this is that I want to have my volumes striped.
Is it possible to set a default stripe_count and stripe_unit in ceph.conf?
I have tried adding the following to the [client] section in ceph.conf, but it did not work:
rbd stripe-count N
rbd stripe-unit N
Any advice?
I needed to set these values in the ceph.conf file.
I also needed to enable some features for my experiments, and to increase the object size as well, which is what order controls:
rbd default features = 12
rbd default format = 2
rbd default stripe_count = 16
rbd default stripe_unit = 4194304
rbd default order = 23
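For reference, the order value is the binary logarithm of the RBD object size in bytes, so order = 23 above means 8 MiB objects (the default order of 22 gives 4 MiB). A quick sanity check:

```python
# rbd "order" is log2 of the object size in bytes.
def object_size(order: int) -> int:
    """Object size in bytes for a given rbd order."""
    return 2 ** order

print(object_size(22))  # 4194304 bytes = 4 MiB, the default
print(object_size(23))  # 8388608 bytes = 8 MiB, as set by "rbd default order = 23"
```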
I have a sasslint.yml file with a list of rules.
One of them is
property-sort-order: 1
I have tried to exclude it with
property-sort-order: enabled:false
and with
scss-lint --exclude-linter PropertySortOrder
But all of this was unsuccessful.
Any ideas?
Many thanks
You configure scss-lint in a YAML configuration file. The default is .scss-lint.yml, and you can specify a different file via the command line with --config (I think -c works too). The documentation covers this here: https://github.com/brigade/scss-lint#configuration
You disable a linter with
linters:
LinterName:
enabled: false
Judging by https://github.com/brigade/scss-lint/issues/132,
linters:
PropertySortOrder:
enabled: false
will work correctly.
If you'd actually rather not turn it off completely, configuration options for scss-lint's property-sort-order are documented here https://github.com/brigade/scss-lint/blob/master/lib/scss_lint/linter/README.md#propertysortorder
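For instance, rather than disabling the linter outright, you can supply your own preferred order (the property list below is just an arbitrary illustration, not a recommendation):

```yaml
linters:
  PropertySortOrder:
    enabled: true
    # Properties are expected in this order; anything unlisted goes after.
    order:
      - display
      - position
      - width
      - height
```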
I am new to Apache Nutch 2.3 and Solr. I am trying to get my first crawl working. I installed Apache Nutch and Solr as mentioned in the official documentation and both are working fine. However, when I ran the following steps I got errors:
bin/nutch inject examples/dmoz/ - Works correctly
(InjectorJob: total number of urls rejected by filters: 2
InjectorJob: total number of urls injected after normalization and filtering:130)
Error - $ bin/nutch generate -topN 5
GeneratorJob: starting at 2015-06-25 17:51:50
GeneratorJob: Selecting best-scoring urls due for fetch.
GeneratorJob: starting
GeneratorJob: filtering: true
GeneratorJob: normalizing: true
GeneratorJob: topN: 5
java.util.NoSuchElementException
at java.util.TreeMap.key(TreeMap.java:1323)
at java.util.TreeMap.firstKey(TreeMap.java:290)
at org.apache.gora.memory.store.MemStore.execute(MemStore.java:125)
at org.apache.gora.query.impl.QueryBase.execute(QueryBase.java:73) ...
GeneratorJob: generated batch id: 1435279910-1190400607 containing 0 URLs
I get the same error if I run $ bin/nutch readdb -stats:
Error - java.util.NoSuchElementException ...
Statistics for WebTable:
jobs: {db_stats-job_local970586387_0001={jobName=db_stats, jobID=job_local970586387_0001, counters={Map-Reduce Framework={MAP_OUTPUT_MATERIALIZED_BYTES=6, REDUCE_INPUT_RECORDS=0, SPILLED_RECORDS=0, MAP_INPUT_RECORDS=0, SPLIT_RAW_BYTES=653, MAP_OUTPUT_BYTES=0, REDUCE_SHUFFLE_BYTES=0, REDUCE_INPUT_GROUPS=0, COMBINE_OUTPUT_RECORDS=0, REDUCE_OUTPUT_RECORDS=0, MAP_OUTPUT_RECORDS=0, COMBINE_INPUT_RECORDS=0, COMMITTED_HEAP_BYTES=514850816}, File Input Format Counters ={BYTES_READ=0}, File Output Format Counters ={BYTES_WRITTEN=98}, FileSystemCounters={FILE_BYTES_WRITTEN=1389120, FILE_BYTES_READ=1216494}}}}
TOTAL urls: 0
I am also not able to use the generate or crawl commands.
Can anyone tell me what I am doing wrong?
Thanks.
I too am new to Nutch. However, I think the problem is that you haven't configured a data store. I got the same error, and got a bit further. You need to follow this: https://wiki.apache.org/nutch/Nutch2Tutorial, or this: https://wiki.apache.org/nutch/Nutch2Cassandra. Then rebuild with ant runtime.
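As a rough sketch (following the Nutch2Tutorial, with HBase as the assumed backend): point Gora at a real store in conf/nutch-site.xml, enable the matching gora dependency in ivy/ivy.xml, and then rebuild with ant runtime.

```xml
<!-- conf/nutch-site.xml: tell Gora which data store to use (HBase here) -->
<property>
  <name>storage.data.store.class</name>
  <value>org.apache.gora.hbase.store.HBaseStore</value>
  <description>Default class for storing data</description>
</property>
```

With no store configured, Nutch falls back to the in-memory MemStore, which is consistent with the empty-WebTable NoSuchElementException you are seeing.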