I've got over 150 GB of log files in /var/lib/ldap/accesslog, and for some reason the old ones are never rotated out or purged. I'm not sure what I'm missing.
I'm using HDB.
I'm using an OLC setup, so I have no slapd.conf file.
My Slapd version is 2.4.31-1+nmu2ubuntu8.2
My accesslog overlay is configured to purge entries older than 7 days, checking once a day:
dn: olcOverlay={1}accesslog
objectClass: olcOverlayConfig
objectClass: olcAccessLogConfig
olcOverlay: {1}accesslog
olcAccessLogDB: cn=accesslog
olcAccessLogOps: writes
olcAccessLogPurge: 7+00:00 1+00:00
olcAccessLogSuccess: TRUE
...
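To view this entry on a live server, something like the following should work (assuming the usual root access over ldapi:/// with SASL EXTERNAL):

sudo ldapsearch -Y EXTERNAL -H ldapi:/// -b cn=config '(olcOverlay=accesslog)'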
Unfortunately I still have log files dated from November of 2014, which is probably when this server was upgraded to Ubuntu 14.04 LTS and the LDAP services were re-initialized from a backup LDIF file.
I've tried creating a DB_CONFIG file in the accesslog directory containing the following:
set_flags DB_LOG_AUTOREMOVE
to no avail.
So apparently the "set_flags DB_LOG_AUTOREMOVE" setting didn't kick in until I restarted the slapd service. As soon as I did, the old log files were purged.
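For anyone hitting the same thing, the full sequence that worked for me (assuming Ubuntu's service wrapper) was:

echo 'set_flags DB_LOG_AUTOREMOVE' | sudo tee /var/lib/ldap/accesslog/DB_CONFIG
sudo service slapd restart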
I have an Ubuntu server running OpenLDAP that our phones connect to.
A while back I set it up to use LDAPS with Let's Encrypt, which worked fine for most things until they recently made a change regarding the X3 certificate. I am unable to install a recent enough client version to run with --preferred-chain "ISRG Root X1", and I can't use the snap version because the box is on LXC and won't run it.
The company has now bought a DigiCert wildcard certificate and would like it on the LDAP server, but I can't get the config to load.
The original LDIF file I created to import is below, with the domain name changed.
dn: cn=config
add: olcTLSCACertificateFile
olcTLSCACertificateFile: /etc/letsencrypt/live/directory.mydomain.co.uk/fullchain.pem
-
add: olcTLSCertificateFile
olcTLSCertificateFile: /etc/letsencrypt/live/test-directory.mydomain.co.uk/cert.pem
-
add: olcTLSCertificateKeyFile
olcTLSCertificateKeyFile: /etc/letsencrypt/live/test-directory.mydomain.co.uk/privkey.pem
I have tried to change the values with modify commands, but it just won't have it, and I keep getting the error below.
SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
SASL SSF: 0
modifying entry "cn=config"
ldap_modify: Inappropriate matching (18)
additional info: modify/add: olcTLSCACertificateFile: no equality matching rule
Any advice here would be great, thanks.
I propose checking the contents of the LDIF for unintended special characters, e.g. with: sudo cat -tve *.ldif
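If the file turns out to be clean, another thing worth trying (not part of the check above) is replace instead of add: modify/add has to test for duplicate values, which fails on attributes like olcTLSCACertificateFile that define no equality matching rule. A sketch, reusing the paths from the question:

dn: cn=config
changetype: modify
replace: olcTLSCACertificateFile
olcTLSCACertificateFile: /etc/letsencrypt/live/directory.mydomain.co.uk/fullchain.pem
-
replace: olcTLSCertificateFile
olcTLSCertificateFile: /etc/letsencrypt/live/test-directory.mydomain.co.uk/cert.pem
-
replace: olcTLSCertificateKeyFile
olcTLSCertificateKeyFile: /etc/letsencrypt/live/test-directory.mydomain.co.uk/privkey.pem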
I have SLS files set up to copy things from a network folder to a local directory on a minion.
Looks a little like this:
cmd-test:
  cmd.run:
    - name: 'ROBOCOPY \\\CygwinSource C:\CygwinSource /E'
When it runs, I get the following output:
-------------------------------------------------------------------------------
ROBOCOPY :: Robust File Copy for Windows
-------------------------------------------------------------------------------
Started : Tuesday, December 6, 2016 10:50:35 AM
2016/12/06 10:50:35 ERROR 1808 (0x00000710) Getting File System Type of Source \\<Server>\<program>\<file>\
The account used is a computer account. Use your global user account or local user account to access this server.
Source : \\<Server>\<program>\<folder>\
Dest : C:\<path>\<folder>\
Files : *.*
Options : *.* /S /E /DCOPY:DA /COPY:DATS /PURGE /MIR /NP /R:1 /W:1
------------------------------------------------------------------------------
NOTE : NTFS Security may not be copied - Source may not be NTFS.
2016/12/06 10:50:35 ERROR 1808 (0x00000710) Accessing Source Directory \\<Server>\<program>\<file>\
The account used is a computer account. Use your global user account or local user account to access this server.
Waiting 1 seconds... Retrying...
When I run the same thing locally on the command line, 'ROBOCOPY \\\CygwinSource C:\CygwinSource /E', it works perfectly. I have no idea how to fix this 'use local user account' error that Robocopy gives when run through Salt.
I also tried adding /MIR and /SEC, which didn't work.
Running Windows 10, Minion 2016.3.3
Master: Red Hat, 2016.3.3
Salt seems to be connecting to the network resource with a computer account. A few possible solutions:
Try changing the Salt service on the client (if that's how Salt is executing the commands) to run as a domain user.
Try using the Salt file server (see the sketch after this list).
Implement the hacky workaround where a scheduled task is created, discussed in a GitHub issue that seems related to your problem: https://github.com/saltstack/salt/issues/16340
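As a rough sketch of the file-server route (this assumes the CygwinSource tree has been copied into the master's file_roots, e.g. /srv/salt/CygwinSource; the state ID is made up):

cygwin-source:
  file.recurse:
    - name: 'C:\CygwinSource'
    - source: salt://CygwinSource

This sidesteps the UNC share entirely, so the minion never has to authenticate against the network resource with its computer account.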
I upgraded MySQL 5.1 to MariaDB 5.5 on my CentOS 6.8 server with SELinux. The command service mysql start fails to start the MySQL service. If I use setenforce 0, then I can start the service and everything works, until a reboot.
getsebool reports (I tested changing allow_user_mysql_connect to 'on'):
allow_user_mysql_connect --> off
mysql_connect_any --> on
I searched and found a question that suggested setting the SELinux context on the /var/lib/mysql directory:
chcon -Rt mysqld_db_t mysql
chcon -Ru system_u mysql
The above did not solve the failure to start. What else do I need to configure to have the MySQL service run with SELinux enabled?
Thanks
After reviewing my backups, I found that the old my.cnf file had user=mysql set. Once I added user=mysql to the new my.cnf file, everything worked. The upgrade had created a new my.cnf file, as expected, which no longer had that setting.
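For clarity, the relevant fragment of my.cnf (the section header is the stock one; only the user line had gone missing):

[mysqld]
user=mysql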
Atom is able to open a project and show the whole project tree on the left side, which is a really nice feature.
Now I'm using SSH on the host OS to access a guest OS (say Red Hat Enterprise Linux, RHEL) in VirtualBox. Is there a way for Atom on the host OS to open a project located on the RHEL guest?
Well yes there is!
You just need to configure sshfs, optionally with autofs. Then you can access the files as if they are stored locally. I've used this with Atom and it works seamlessly.
Instructions for Ubuntu
Install sshfs
$ sudo apt-get install sshfs
Mount the remote directory on a local mountpoint
$ sshfs [user@]host:[dir] mountpoint
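For example, to mount a project directory from the RHEL guest (user name, host name, and paths here are made up):

$ mkdir -p ~/mnt/project
$ sshfs devuser@rhel-guest:/home/devuser/project ~/mnt/project

Atom can then open ~/mnt/project like any local folder.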
Combining it with autofs
The following link has instructions for a setup using autofs.
Note: This requires you to setup SSH for the root user.
http://www.mccambridge.org/blog/2007/05/totally-seamless-sshfs-under-linux-using-fuse-and-autofs/
In addition to that post, I've added some tricks for an even more seamless experience.
Enhance performance
I've noticed a significant performance boost by adding this SSH config to /root/.ssh/config:
Ciphers arcfour
Compression no
Note: This does make the connection less secure.
Make it appear as a disk
If you set the mount point to a directory in /media, the mount point will show up as a disk in your file browser. For example /media/sshfs.
I would recommend the Remote Sync plugin for this. I have a Python environment set up on a Linux box and I connect to it from my PC.
It allows me to upload changes automatically when I save a file, and also to define files to be monitored for changes.
Not 100% what you're looking for, but there's the Remote-Edit package: https://atom.io/packages/remote-edit
This will allow you to define the connection parameters for the server, and will then allow you to browse and edit the files found on the server.
Complement to Remco's sshfs answer above:
If you use different users in the client and server hosts, consider using the 'idmap' option of sshfs.
I use different users in my working host and in the development or testing VMs.
Example:
Using the option '-o idmap=user' will automatically translate the UID/GID of the remote user to the UID/GID of the connecting user on the local host.
Files owned by the remote user (devuser) on the remote host (devhost1) will appear as belonging to the connecting user (locuser) on the local host (clienthost):
locuser@clienthost:~$ sshfs devuser@devhost1:/var/www ~/dev/www -o idmap=user
locuser@clienthost:~$ ls -lR ~/dev/www
(...)
-rw-rw-r-- 1 locuser locuser 269 abr 1 11:37 index.html
-rw-rw-r-- 1 locuser locuser 249 abr 3 03:59 page1.html
-rw-rw-r-- 1 locuser locuser 1118 abr 2 15:07 page2.html
-rw-rw-r-- 1 locuser locuser 847 abr 3 03:20 page3.html
(...)
The mapping can also be made explicit (userx <-> usery); for more details, see man sshfs.
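As a rough sketch of an explicit mapping (this assumes an sshfs build that supports idmap=file; the map file pairs remote user names with local UIDs, and all names below are made up):

locuser@clienthost:~$ cat /home/locuser/uidmap
devuser:1000
locuser@clienthost:~$ sshfs devuser@devhost1:/var/www ~/dev/www -o idmap=file -o uidfile=/home/locuser/uidmap -o nomap=ignore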
I am writing this answer because none of the other answers worked for me.
Mounting as a directory and browsing with Atom (Remco Haszing's answer) was a brilliant one,
but in my case Atom wanted to index the whole remote project, which is a heavy one, and it became unresponsive.
Using the remote-sync package is good when you work locally and then want to upload the files to the server.
Actually, remote-edit is the package meant to do this job (editing files remotely over SSH).
The problem with it is that it has been abandoned.
These worked for me as replacements:
https://atom.io/packages/remote-edit-ni
https://atom.io/packages/remote-editor
I'm trying to keep SELinux enforcing but to allow NGINX to directly access shared OSX folders that are connected via Parallels Desktop.
Host system: Mac OSX 10.10
Parallels Desktop: 10
Running Virtual OS: CentOS 7 (minimal / command line)
I have the Parallels Tools installed, and in CentOS I see the shared folder: /media/psf/Shared-Folder
When I set the Nginx server root to that folder I get a 403 Forbidden. I know it is a configuration parameter that needs editing because if I change SELinux to Permissive, the files are served correctly in NGINX.
When checking how the files are mounted I see this:
root root system_u:object_r:removable_t:s0 /media/psf/Shared-Folder/
I can see the 'removable_t' context - however - my issue is that I cannot seem to find a way to allow the httpd service to serve files that are mounted as removable storage.
I have tried:
chcon -R -t public_content_t /media/psf/Shared_Folder/
chcon -R -t httpd_sys_content_t /media/psf/Development-Projects/
and in all cases I get a "chcon: failed to change context of: '...': Operation not supported" error.
Checking /usr/sbin/getsebool -a | grep http I do not see any option to allow httpd to access removable storage mounts.
Last item: I do not believe I can change the way Parallels mounts the shared folders.
Question: Is there a way to keep SELinux enforcing but to allow NGINX to directly access shared OSX folders that are connected via Parallels Desktop?
What you need to do is use semanage. To get it, you have to install the policycoreutils-python package.
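A minimal sketch of that approach, using the path from the question (whether the Parallels mount accepts SELinux labels at all is an open question, given the chcon failure, so treat this as the standard pattern rather than a verified fix):

yum install policycoreutils-python
semanage fcontext -a -t httpd_sys_content_t "/media/psf/Shared-Folder(/.*)?"
restorecon -Rv /media/psf/Shared-Folder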
The same type of question has already been asked Here. Cheers!