The answer may be obvious to you, but I am setting up my first logrotate instance.
The configuration in /etc/logrotate.d/nginx:
/mnt/nginx/logs/access.log {
size 1k
dateext
missingok
rotate 10
compress
delaycompress
notifempty
create 640 root root
sharedscripts
postrotate
[ -f /opt/nginx/logs/nginx.pid ] && kill -USR1 `cat /opt/nginx/logs/nginx.pid`
endscript
}
Then I tested it with
sudo /usr/sbin/logrotate -dvf /etc/logrotate.d/nginx
And got the following:
reading config file /etc/logrotate.d/nginx
reading config info for /mnt/nginx/logs/access.log
Handling 1 logs
rotating pattern: /mnt/nginx/logs/access.log forced from command line (10 rotations)
empty log files are not rotated, old logs are removed
considering log /mnt/nginx/logs/access.log
log needs rotating
rotating log /mnt/nginx/logs/access.log, log->rotateCount is 10
dateext suffix '-20140724'
glob pattern '-[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]'
glob finding logs to compress failed
glob finding old rotated logs failed
renaming /mnt/nginx/logs/access.log to /mnt/nginx/logs/access.log-20140724
creating new /mnt/nginx/logs/access.log mode = 0640 uid = 0 gid = 0
running postrotate script
running script with arg /mnt/nginx/logs/access.log : "
[ -f /opt/nginx/logs/nginx.pid ] && kill -USR1 `cat /opt/nginx/logs/nginx.pid`
"
Before test
total 84K
-rw-r--r-- 1 root root 51K Jul 22 15:05 access.log
-rw-r--r-- 1 root root 24K Jul 15 17:02 error.log
After test
total 84K
-rw-r--r-- 1 root root 51K Jul 22 15:05 access.log
-rw-r--r-- 1 root root 24K Jul 15 17:02 error.log
Exactly nothing changed, and it didn't report any errors.
Can you please help with this?
Thanks
I made a mistake.
The -d option should not be used here, because it only simulates the rotation.
But since it acts like one, why not just call it a "dry run"?
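For anyone else who hits this: -d turns on debug mode, and in debug mode logrotate makes no changes to the log files, so combining it with -f still rotates nothing. To actually force a rotation for a real test, drop the -d:
sudo /usr/sbin/logrotate -vf /etc/logrotate.d/nginx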
Related
I connect to a remote computer through VNC and use Konsole for my work. Konsole is version 2.3.3 using KDE 4.3.4. I have these two aliases:
alias ll 'ls -haltr; pwd'
alias cd 'cd \!*; ll'
which I observed to have the following behavior:
When the path exists, it will cd to it and also run the ll alias
When the path doesn't exist, it will simply say the path doesn't exist and won't run ll
Example:
Path exists
[10] % cd foo
total 14K
-rw-r----- 1 user group 913 Jun 3 2014 readme
-rw-r----- 1 user group 1.2K Dec 3 2020 report.txt
drwxr-x--- 2 user group 4.0K Jan 12 17:50 ./
drwx------ 77 user group 8.0K Jun 24 11:57 ../
/home/user/foo
[11] %
Path doesn't exist
[10] % cd nowhere
nowhere: No such file or directory.
[11] %
Now our department has transferred to another division and we've just started to connect remotely through Exceed TurboX. I still use Konsole, but the version is now 2.10.5 using KDE 4.10.5. I copied over the same two aliases, but I'm now observing a different behavior:
When the path exists, it will cd to it and also run the ll alias (basically the same as #1 above)
When the path doesn't exist, it will attempt to cd AND still run ll (different from #2 above)
So for #2, here's how it looks:
[10] % cd nowhere
nowhere: No such file or directory.
total 120K
-rw-r----- 1 user group 272 Jan 6 2021 .cshrc
-rw-r----- 1 user group 1.2K Jan 6 2021 .alias
drwxr-x--- 2 user group 4.0K Jan 12 17:50 ./
drwx------ 77 user group 8.0K Jun 24 11:57 ../
/home/user
[11] %
I would like to know how to get behavior #2 of the previous working environment to this current working environment.
I've added the information on the Konsole and KDE versions because if the behavior is due to the version and there's no workaround, then I'll just be sad for the rest of my life working in this new remote desktop env. ^_^
I'm currently exploring a "check first if the path exists before doing ll" approach, but to no avail. :'(
Edit:
The shell I'm using is tcsh
% printenv SHELL in both environments showed /bin/tcsh
And in addition:
Old environment
% echo $version
tcsh 6.17.00 (Astron) 2009-07-10 (x86_64-unknown-linux) options wide,nls,dl,al,kan,sm,rh,color,filec
New environment
% echo $version
3
The information in the question is still not sufficient to find out why tcsh behaves differently in the two working environments.
The alias defines a sequence of commands (cd followed by ll) without any condition.
alias cd 'cd \!*; ll'
On one system the execution of the alias seems to stop, for currently unknown reasons, if the cd command reports an error.
The correct way to prevent the execution of ll after a failed cd command would be the conditional execution of the ll command, e.g.
alias cd 'cd \!* && ll'
The decades-old advice is: do not use aliases, use functions.
ll() { ls -haltr "$@" && pwd; }
cd() { command cd "$@" && ll; }
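As a usage sketch of those corrected functions (sh-compatible shells only; tcsh has no functions, so there the && form of the alias above is the way to go). "command cd" keeps the cd function from calling itself:
cd /nonexistent    # prints only the cd error; ll is skipped because of &&
cd /tmp            # changes directory, then ll runs: listing plus pwd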
I'm trying to automate our backups of some mysql databases in MariaDB Server 10.2.15 on CentOS 7.5:
mariabackup --backup --target-dir=/srv/db_backup --databases="wordpress" --xbstream | \
openssl enc -aes-256-cbc -k mysecretpassword > \
$(date +"%Y%m%d%H").backup.xb.enc
What I expect is a file in /srv/db_backup called $(date +"%Y%m%d%H").backup.xb.enc
What I'm finding is a file called $(date +"%Y%m%d%H").backup.xb.enc in my home directory with file size 0, and the /srv/db_backup dir looks like:
[root@wordpressdb1 ~]# ls -la /srv/db_backup/
total 77868
-rw------- 1 root root 16384 Jul 31 14:30 aria_log.00000001
-rw------- 1 root root 52 Jul 31 14:30 aria_log_control
-rw------- 1 root root 298 Jul 31 14:30 backup-my.cnf
-rw------- 1 root root 938 Jul 31 14:30 ib_buffer_pool
-rw------- 1 root root 79691776 Jul 31 14:30 ibdata1
-rw------- 1 root root 2560 Jul 31 14:30 ib_logfile0
drwx------ 2 root root 19 Jul 31 14:30 wordpress
-rw------- 1 root root 103 Jul 31 14:30 xtrabackup_checkpoints
-rw------- 1 root root 458 Jul 31 14:30 xtrabackup_info
All further attempts to run the mariabackup command fail on:
mariabackup: Can't create/write to file '/srv/db_backup/ib_logfile0' \
(Errcode: 17 "File exists")
mariabackup: error: failed to open the target stream for 'ib_logfile0'.
What have I done wrong?
EDIT
The first error was a missing dash in openssl -aes-256-cbc.
Now I'm seeing this:
180731 15:18:37 Executing FLUSH NO_WRITE_TO_BINLOG TABLES...
Error: failed to execute query FLUSH NO_WRITE_TO_BINLOG TABLES: Access \
denied; you need (at least one of) the RELOAD privilege(s) for this operation
I've granted both SUPER and RELOAD to the root user, but I still get this error.
Partial answer:
"What I expect is a file in /srv/db_backup called $(date +"%Y%m%d%H").backup.xb.enc" -- Then you need to specify a directory other than the current directory:
mariabackup ... | openssl ... > \
/srv/db_backup/$(date +"%Y%m%d%H").backup.xb.enc
As for "unable to write", what do you get from
ls -ld /srv/db_backup
You need to use --stream=xbstream, not --xbstream.
You backed up into a directory, not into a stream.
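Putting both fixes together, the corrected invocation might look something like this (a sketch based on the command in the question; with --stream=xbstream the backup data goes to stdout rather than into --target-dir):
mariabackup --backup --target-dir=/srv/db_backup --databases="wordpress" --stream=xbstream | \
  openssl enc -aes-256-cbc -k mysecretpassword > \
  /srv/db_backup/$(date +"%Y%m%d%H").backup.xb.enc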
In a Buildroot environment I added a user to the wheel group. Now I can execute commands with root privileges using sudo.
It seems to work, but when I try to export a GPIO pin on my RPi I always get Permission denied:
rpi:~$ sudo echo 4 > /sys/class/gpio/export
sh: can't create /sys/class/gpio/export: Permission denied
Here the contents of that directory:
rpi:~$ ls -l /sys/class/gpio/
total 0
--w------- 1 root root 4096 Jan 1 00:00 export
lrwxrwxrwx 1 root root 0 Jan 1 00:00 gpiochip0 -> ../../devices/platform/soc/3f200000.gpio/gpio/gpiochip0
--w------- 1 root root 4096 Jan 1 00:00 unexport
Isn't getting root privileges with sudo enough to write to the export file? I suspect it has to do with the owner and group. In fact, if I type:
rpi:~$ sudo chmod a+w /sys/class/gpio/*
then I can successfully export the pin. But I don't know if this is the best way to do this.
When you run the command sudo echo 4 > /sys/class/gpio/export, the redirection is handled by your shell, not by sudo or echo: the shell opens /sys/class/gpio/export as your unprivileged user before anything else runs, and that open is what fails with Permission denied. Only echo 4 is executed with elevated privileges, which is pointless here because echo never touches the file.
There is a Unix.SE question here which explains this and the options.
In summary of that link you should be able to do something like:
sudo sh -c 'echo 4 > /sys/class/gpio/export'
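Another common pattern, if you'd rather not wrap the whole line in sh -c, is to pipe into a privileged tee so that the file is opened by a process running under sudo:
echo 4 | sudo tee /sys/class/gpio/export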
I am working on a Solaris 10 machine, and on it I am unable to untar my file. The output is given below. Can anyone please suggest what the issue may be? I can create a tar file but I am unable to untar it. :(
bash-3.2# ls -lrth ConfigCheck-120614-KL.out*
-rw-r--r-- 1 root root 144K Jun 12 17:15 ConfigCheck-120614-KL.out
-rwxrwxrwx 1 root root 146K Jun 16 16:49 ConfigCheck-120614-KL.out.tar
bash-3.2# tar xvf ConfigCheck-120614-KL.out.tar
tar: extract not authorized
bash-3.2# tar tvf ConfigCheck-120614-KL.out.tar
-rw-r--r-- 0/0 147377 Jun 12 17:15 2014 ConfigCheck-120614-KL.out
Solaris 11 tar will fail with that error message if you are running as uid 0 but do not have the Media Restore profile set up in the RBAC configuration.
Unless you're trying to restore from backup, you should normally be untarring files as a normal user, not root, to avoid accidentally overwriting critical system files.
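To check whether that is the problem, something along these lines should help; the profiles command lists the rights profiles of the invoking account, and the usermod line is Solaris 11-style syntax that you should verify against your release's man pages:
profiles                              # look for "Media Restore" in the output
usermod -P +'Media Restore' root      # one way to assign the profile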
I keep seeing a 403 error for my stylesheet, which is hosted on my Raspberry Pi (web server). I ran ls -al and this is the result:
pi@raspberrypi ~/www $ ls -al
total 16
drwxr-xr-x 2 pi root 4096 Mar 17 20:18 .
drwxr-xr-x 12 root root 4096 Mar 15 16:44 ..
-rw-r--r-- 1 pi root 644 Mar 17 20:18 index.html
-rw------- 1 pi root 329 Mar 17 20:19 stylesheet.css
The index.html data shows up when I point my browser at the IP, but there is no formatting, and whenever I try to access the CSS file by viewing the source it keeps telling me there's a 403 error :(
Can anyone help a brother out??
Cheers!
You need proper permissions for the www folder, and what those are depends on which web server you are running. For Apache on Debian the user is www-data. If your webroot is ~/www and you are user pi, try these commands:
# Change the owner to the Apache user, recursively
chown -R www-data:www-data /home/pi/www
# Make everything readable by all (a+r) while keeping directories traversable (X), recursively
chmod -R a+rX /home/pi/www
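That said, looking at the ls -al output in the question, the one file the web server can't read is stylesheet.css (mode 600, so only the owner pi can read it), so the narrowest fix would simply be:
chmod 644 /home/pi/www/stylesheet.css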