I have two logrotate files:
/etc/logrotate.d/nginx-size
/var/log/nginx/*.log
/var/log/www/nginx/50x.log
{
missingok
rotate 3
size 2G
dateext
compress
compresscmd /usr/bin/bzip2
compressoptions -6
compressext .bz2
uncompresscmd /usr/bin/bunzip2
notifempty
create 640 nginx nginx
sharedscripts
postrotate
[ -f /var/run/nginx.pid ] && kill -USR1 `cat /var/run/nginx.pid`
endscript
}
and
/etc/logrotate.d/nginx-daily
/var/log/nginx/*.log
/var/log/www/nginx/50x.log
{
missingok
rotate 3
dateext
compress
compresscmd /usr/bin/bzip2
compressoptions -6
compressext .bz2
uncompresscmd /usr/bin/bunzip2
notifempty
create 640 nginx nginx
sharedscripts
postrotate
[ -f /var/run/nginx.pid ] && kill -USR1 `cat /var/run/nginx.pid`
endscript
}
Output of the command logrotate -d -v /etc/logrotate.d/nginx-size:
reading config file /etc/logrotate.d/nginx-size
compress_prog is now /usr/bin/bzip2
compress_options is now -6
compress_ext is now .bz2
uncompress_prog is now /usr/bin/bunzip2
Handling 1 logs
rotating pattern: /var/log/nginx/*.log
/var/log/www/nginx/50x.log
2147483648 bytes (3 rotations)
empty log files are not rotated, old logs are removed
considering log /var/log/nginx/access.log
log does not need rotating
considering log /var/log/nginx/error.log
log does not need rotating
considering log /var/log/nginx/get.access.log
log does not need rotating
considering log /var/log/nginx/post.access.log
log needs rotating
considering log /var/log/www/nginx/50x.log
log does not need rotating
rotating log /var/log/nginx/post.access.log, log->rotateCount is 3
dateext suffix '-20141204'
glob pattern '-[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]'
glob finding old rotated logs failed
renaming /var/log/nginx/post.access.log to /var/log/nginx/post.access.log-20141204
creating new /var/log/nginx/post.access.log mode = 0640 uid = 497 gid = 497
running postrotate script
running script with arg /var/log/nginx/*.log
/var/log/www/nginx/50x.log
: "
[ -f /var/run/nginx.pid ] && kill -USR1 `cat /var/run/nginx.pid`
"
compressing log with: /usr/bin/bzip2
Same (normal) output for nginx-daily.
If I run the command
logrotate -f /etc/logrotate.d/nginx-size
manually as root, it does everything it should. BUT it doesn't run automatically!
crontab:
*/5 5-23 * * * root logrotate -f -v /etc/logrotate.d/nginx-size 2>&1 > /tmp/logrotate_size
00 04 * * * root logrotate -f -v /etc/logrotate.d/nginx-daily 2>&1 > /tmp/logrotate_daily
Also, the files /tmp/logrotate_daily and /tmp/logrotate_size are always empty.
Cron doesn't report any errors in /var/log/cron:
Dec 4 14:45:01 (root) CMD (logrotate -f -v /etc/logrotate.d/nginx-rz-size 2>&1 > /tmp/logrotate_size )
Dec 4 14:50:01 (root) CMD (logrotate -f -v /etc/logrotate.d/nginx-rz-size 2>&1 > /tmp/logrotate_size )
What's wrong here? CentOS 6.5 x86_64, logrotate 3.8.7 (built from source) plus logrotate 3.7.8 (via rpm).
Thanks in advance.
The redirections in those cron lines are incorrect, so error output never reaches those files. Redirection order matters: with cmd 2>&1 > file, stderr is duplicated onto the original stdout before stdout is redirected to the file, so errors never land in the file. You want > /tmp/logrotate_size 2>&1 instead.
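With that order, the two crontab entries from the question become:
*/5 5-23 * * * root logrotate -f -v /etc/logrotate.d/nginx-size > /tmp/logrotate_size 2>&1
00 04 * * * root logrotate -f -v /etc/logrotate.d/nginx-daily > /tmp/logrotate_daily 2>&1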
The underlying issue here is one of the things covered by the "Debugging crontab" section of the cron info page.
Namely "Making assumptions about the environment".
Making assumptions about the environment
Graphical programs (X11 apps), java programs, ssh and sudo are notoriously problematic to run as cron jobs. This is because they rely on things from interactive environments that may not be present in cron's environment.
To more closely model cron's environment interactively, run
env -i sh -c 'yourcommand'
This will clear all environment variables and run sh, which may be more meager in features than your current shell.
Common problems uncovered this way:
foo: Command not found or just foo: not found.
Most likely $PATH is set in your .bashrc or similar interactive init file. Try specifying all commands by full path (or put source ~/.bashrc at the start of the script you're trying to run).
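Applied to this case, a quick test is (a sketch; the logrotate path is an assumption, check it with command -v logrotate):
env -i sh -c '/usr/sbin/logrotate -f -v /etc/logrotate.d/nginx-size > /tmp/logrotate_size 2>&1'
If that fails while the interactive command works, the crontab entry most likely needs the full path to logrotate as well.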
I am currently working on a research tool that is supposed to be containerized using Docker, so that it can hopefully be run on as many different systems as possible. This works fine for the most part, but we have run into a permission problem because of the workflow: the tool takes an input file (which we mount into the container), evaluates it using R scripts, and is then supposed to generate a report on the input file exactly where the file was taken from on the host system.
The latter part is problematic because, at least in our university context, the internal container user lacks write permissions in the (non-root) user home folders from which we are currently taking our testing data. This would obviously also be bad in a production context, since we don't know how a potential user's system is set up, which is why we are trying to dynamically and temporarily match the container user's permissions to the host user.
I have found different solutions that involve passing the UID/GID to the docker daemon when building the container in some way or another:
docker build --build-arg USER_ID=$(id -u ${USER}) --build-arg GROUP_ID=$(id -g ${USER}) -t IMAGE .
I also changed the dockerfile accordingly using a tutorial that suggested replacing the internal www-data user:
[...Package installation steps that are supposed to be run as root...]
ARG USER_ID
ARG GROUP_ID
RUN if [ ${USER_ID:-0} -ne 0 ] && [ ${GROUP_ID:-0} -ne 0 ]; then \
userdel -f www-data &&\
if getent group www-data ; then groupdel www-data; fi &&\
groupadd -g ${GROUP_ID} www-data &&\
useradd -l -u ${USER_ID} -g www-data www-data &&\
install -d -m 0755 -o www-data -g www-data /work/ &&\
chown --changes --silent --no-dereference --recursive \
--from=33:33 ${USER_ID}:${GROUP_ID} \
/work \
;fi
USER www-data
WORKDIR /work
RUN mkdir files
COPY data/ /opt/MTB/data/
COPY helpers/ /opt/MTB/helpers/
COPY src/www/ /opt/MTB/www/
COPY tmp/ /opt/MTB/tmp/
COPY example_data/ /opt/MTB/example_data/
COPY src/ /opt/MTB/src/
EXPOSE 8080
ENTRYPOINT ["/opt/MTB/src/starter_s_c.sh"]
The entrypoint script starter_s_c.sh is a small bash script that feeds the trailing argument to the corresponding R script as an input file - the R script writes the report.
This works, but requires the container to be built again for every new user. What we are looking for is a solution that handles the dynamic permission setting at runtime, so that we only have to build the container once and can use it with many different user configurations.
I have found this but I am not entirely sure how to implement it as it would replace our entrypoint script and I'm not sure how to integrate this solution into our project.
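For reference, that approach boils down to roughly the following pattern (a hypothetical sketch, not our actual code; it assumes the container starts as root and that usermod/groupmod and gosu are available in the image):
#!/bin/sh
# remap-entrypoint.sh (hypothetical): remap www-data to the UID/GID passed at run time,
# e.g. docker run -e HOST_UID=$(id -u) -e HOST_GID=$(id -g) -v "$PWD":/work IMAGE input.file
: "${HOST_UID:=1000}"
: "${HOST_GID:=1000}"
groupmod -o -g "$HOST_GID" www-data
usermod -o -u "$HOST_UID" www-data
chown www-data:www-data /work
# drop privileges and hand over to the real entrypoint
exec gosu www-data /opt/MTB/src/starter_s_c.sh "$@"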
Here is our current entrypoint script which already needs the permissions to be set so localmaster.r can generate the report in the host directory:
#!/bin/sh
file="$1"
cd $(dirname $0)/..
if [ $# -eq 0 ]; then
echo '.libPaths(c("~/lib/R/library", .libPaths())); library(shiny); library(shinyjs); runApp("src")' | R --vanilla
else
echo "Rscript --vanilla /opt/MTB/src/localmaster.r "$file""
Rscript --vanilla /opt/MTB/src/localmaster.r "$file"
fi
(If no arguments are given, it starts a shiny app, just to avoid confusion)
Any help or tips would be much appreciated! Thank you.
Need some help to understand what's wrong.
In short: I've written a Bourne shell script which creates links to the contents of a source directory in a target directory.
It worked fine on the host system, but when targeted at directories on a mounted filesystem (both from a chroot and from the native system) it doesn't work and produces no output at all.
Details:
mounted fs: ext3, rw
host system: 3.2.0-48-generic #74-Ubuntu SMP GNU/Linux
To narrow the question, "/usr" was taken as an example.
permissions for "/usr" in the host system: drwxr-xr-x
permissions for "/usr" on mounted partition: drwxr-xr-x
I tried both bash and dash from the host system. Same result: it works on native filesystems but not on the mounted one.
The script (cord.sh; run as root in my case):
# !/bin/sh
SRCFOLDER=$2 # folder with package installation
DESTFOLDER=$3 # destination folder to install symlinks to ('/' - for base sys; '/usr' - userland)
TARGETS=$(ls $SRCFOLDER) # targets to handle
SRCFOLDER=${SRCFOLDER%/} # stripping slashes from the end, if they are present
DESTFOLDER=${DESTFOLDER%/} #
##
## LINKING
##
if [ "$1" = "-c" ];
then printf %s "$TARGETS" | while IFS= read -r line
do
current_target=$(file $SRCFOLDER/$line) # had an issue with different output in different systems
if [ "${current_target% }" = "$SRCFOLDER/$line: directory" ]; # stripping space helped
then
mkdir -v $DESTFOLDER/$line # if other package created it - it'll fail
/usr/local/bin/cord.sh -c $SRCFOLDER/$line $DESTFOLDER/$line # RECURSION
else
ln -sv $SRCFOLDER/$line $DESTFOLDER/$line # will fail, if exists
fi;
done
##
## REMOVING LINKS
##
elif [ "$1" = "-d" ];
then printf %s "$TARGETS" | while IFS= read -r line
do
current_target=$(file $SRCFOLDER/$line)
if [ "${current_target% }" = "$SRCFOLDER/$line: directory" ];
then
/usr/local/bin/cord.sh -d $SRCFOLDER/$line $DESTFOLDER/$line # RECURSION
else
rm -v $DESTFOLDER/$line
fi;
done
elif [ "$1" = "-h" ];
then
echo "Usage:"
echo "cord -c /path/to/pkgdir /path/to/linkdir - create simlinks for package contents"
echo "cord -d /path/to/pkgdir /path/to/linkdir - delete links for package"
echo "cord -h - displays this help note"
else
echo "Usage:"
echo "cord -c /path/to/pkgdir /path/to/linkdir - create simlinks for package contents"
echo "cord -d /path/to/pkgdir /path/to/linkdir - delete links for package"
echo "cord -h - displays this help note"
fi;
The most obvious thing to suspect was some issue with permissions, yet everything looks sane. Maybe I've missed something?
I don't know what your main problem might be (permissions or something else - you should include an example of how you run the script and how you prepare for it with the mounts and everything). But this script can be cleaned up.
First, if you want to test whether something is a directory, use
if [ -d "$something" ]
That'll get rid of the clumsy file usage.
Second, don't go through the redundant steps of converting your $TARGETS list to a series of lines and then reading them back with a while read loop. Just loop over the words directly:
for line in $TARGETS
Also, instead of using ls to populate an array of filenames, I'd use a glob. But instead of either of those, I'd use find so it can take care of recursion and eliminate the tree of processes you're creating by recursing with a call to the same script. And instead of writing a symlink-tree-maker script I'd use something like lndir which already exists for that purpose...
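A rough sketch of the find-based idea (it uses GNU find's -printf, so it's Linux-specific; the paths are placeholders):
SRC=/path/to/pkgdir
DEST=/path/to/linkdir
# recreate the directory tree under the destination...
find "$SRC" -mindepth 1 -type d -printf '%P\n' | while IFS= read -r d; do
    mkdir -p "$DEST/$d"
done
# ...then symlink every regular file into place
find "$SRC" -type f -printf '%P\n' | while IFS= read -r f; do
    ln -sv "$SRC/$f" "$DEST/$f"
done
# or, with lndir(1) from the X utilities:
# lndir /path/to/pkgdir /path/to/linkdir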
I wrote a Dancer app, with the log config:
logger: file
logger_format: <%T> %m
log_path: '/usr/local/myapp/log'
log_file: 'myapp.log'
log: debug
and start it with:
plackup -E deployment -D -s Starman --workers=10 --port 8080 -a bin/app.pl
rotate the log file with logrotate
/usr/local/myapp/log/myapp.log {
daily
rotate 10
create 0660 root root
compress
missingok
dateext
}
but the new logfile stays empty (zero bytes).
I tried adding a postrotate script to the logrotate conf to send a HUP, and handling the HUP signal in bin/app.pl with
Dancer::Logger::File::init;
but nothing helped.
Can anyone tell me how to rotate Dancer's logfile?
One way to make this happen: plackup supports the -R switch to watch a folder, so you could add -R <appdir>/run and then add this to the logrotate config:
postrotate
touch <appdir>/run/last.run
endscript
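Put together, based on the plackup command and logrotate config from the question (a sketch; I haven't verified how -R interacts with the -D daemonize flag):
plackup -E deployment -D -s Starman --workers=10 --port 8080 -R <appdir>/run -a bin/app.pl
/usr/local/myapp/log/myapp.log {
daily
rotate 10
create 0660 root root
compress
missingok
dateext
postrotate
touch <appdir>/run/last.run
endscript
}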
As far as I can tell, though, what you want can't be done in this configuration: Starman explicitly handles SIGHUP by restarting the child processes. From Starman/Server.pm:
# Override Net::Server's HUP handling - just restart all the workers and that's about it
sub sig_hup {
    my $self = shift;
    $self->hup_children;
}
Perhaps you need an alternative logging solution, such as Dancer::Logger::Syslog.
Or, if you must write to a file, maybe check for changes with Linux::Inotify2.
I have two questions:
Is there a difference between nginx -s reload and pkill -HUP -F nginx.pid?
What's the simplest way to watch the Nginx conf files, test them on changes (nginx -t), and reload Nginx if the test passes? Can that be done with runit or a process manager like Supervisor?
#!/bin/bash
# NGINX WATCH DAEMON
#
# Author: Devonte
#
# Place file in root of nginx folder: /etc/nginx
# This will test your nginx config on any change and
# if there are no problems it will reload your configuration
# USAGE: sh nginx-watch.sh
# Set NGINX directory
# tar command already has the leading /
dir='etc/nginx'
# Get initial checksum values
checksum_initial=$(tar --strip-components=2 -C / -cf - $dir | md5sum | awk '{print $1}')
checksum_now=$checksum_initial
# Start nginx
nginx
# Daemon that checks the md5 sum of the directory
# If the sums are different (a file was changed / added / deleted),
# the nginx configuration is tested and reloaded on success
while true
do
checksum_now=$(tar --strip-components=2 -C / -cf - $dir | md5sum | awk '{print $1}')
if [ "$checksum_initial" != "$checksum_now" ]; then
echo '[ NGINX ] A configuration file changed. Reloading...'
nginx -t && nginx -s reload;
fi
checksum_initial=$checksum_now
sleep 2
done
At least on Unix, the "reload" action and the HUP signal are treated identically, thanks to the declaration
ngx_signal_t signals[] = {
{ ngx_signal_value(NGX_RECONFIGURE_SIGNAL),
"SIG" ngx_value(NGX_RECONFIGURE_SIGNAL),
"reload",
ngx_signal_handler },
in src/os/unix/ngx_process.c. In ngx_signal_handler() the same common code
case ngx_signal_value(NGX_RECONFIGURE_SIGNAL):
ngx_reconfigure = 1;
action = ", reconfiguring";
break;
is executed, which triggers the same reconfiguration.
To trigger an action when a file is modified, you could either set up a cron job with a check interval of your choosing, or use inotifywait.
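A minimal inotifywait loop might look like this (a sketch; it assumes the inotify-tools package is installed):
while inotifywait -qq -r -e modify,create,delete,move /etc/nginx; do
    nginx -t && nginx -s reload
done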
To determine whether nginx -t failed, check its return code ($?) in a shell script:
nginx -t
if [ $? -eq 0 ]; then
    nginx -s reload
fi
Note: you may also use service nginx reload
(See return code check examples here)
I've spent two days trying to understand why I cannot get cron to work on my Ubuntu EC2 instance. I've read the documentation. Can anyone help? All I want is a working cron job.
I am using a simple wget command to test cron. I have verified that this works manually from the command line:
/usr/bin/wget -O /home/ubuntu/backups/testfile http://www.nytimes.com/
My crontab file looks like this:
02 * * * * /usr/bin/wget -O /home/ubuntu/backups/testfile http://www.nytimes.com/
I have single spaces between the fields and a blank line below the entry. I've also tried adding this command at the system level with sudo crontab -e. It still doesn't work.
The cron daemon is running:
ps aux | grep crond
ubuntu 2526 0.0 0.1 8096 928 pts/4 S+ 10:37 0:00 grep crond
The cron job appears to be set up:
$ crontab -l
02 * * * * /usr/bin/wget -O /home/ubuntu/backups/testfile http://www.nytimes.com/
Does anyone have any advice or possible solutions?
Thanks for your time.
Cron runs on an Amazon-based Linux server just like on any other Linux server.
Login to console with SSH.
Run crontab -e on the command line.
You are now inside a vi editor of the crontab of the current user (which is by default the console user, with root permissions)
To test cron, add the following line: * * * * * /usr/bin/uptime > /tmp/uptime
Now save the file and exit vi (press Esc and enter :wq).
After a minute or two, check that the uptime file was created in /tmp (cat /tmp/uptime).
Compare it with the current system uptime by typing the uptime command on the command line.
The scenario above worked on a server running Amazon Linux, but it should work on other Linux boxes as well. It modifies the crontab of the current user without touching the system's crontabs, and it doesn't require a user field in the crontab entry, since you are running things as your own user. Easier, and safer!
Your cron daemon may not be running - but note that in your ps aux | grep crond output, the only match is the grep command itself, so it proves nothing either way. Be aware of this whenever you run ps aux | grep blah. Also, on Ubuntu the daemon is named cron, not crond, so that pattern would never match it anyway.
Check the status of the cron service by running:
sudo service crond status
Additional information here: http://www.cyberciti.biz/faq/howto-linux-unix-start-restart-cron/.
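On the Ubuntu instance from the question, the equivalent checks would be (the daemon and service are named cron there, not crond):
ps aux | grep '[c]ron'    # the [c] keeps the grep command itself out of the results
sudo service cron status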
On some AWS Ubuntu EC2 machines, cron jobs cannot be edited or made to run by using crontab -e or even sudo crontab -e (for whatever reason). I was able to get cron jobs working by:
touch /home/ubuntu/crontest.log to create a log file
sudo vim /etc/crontab which edits the system-wide crontab
add your own cron job on the second-to-last line, run as the root user, for example * * * * * root date && echo 'It works!' >> /home/ubuntu/crontest.log 2>&1, which dumps stdout and stderr into the logfile you created in step 1
Verify it is working by waiting 1 minute and then cat /home/ubuntu/crontest.log to see the output of the cron job
Don't forget to specify the user to run it as. Try creating a new file inside your /etc/cron.d folder, named after what you want to do (e.g. getnytimes), with just this as its contents:
02 * * * * root /usr/bin/wget -O /home/ubuntu/backups/testfile http://www.nytimes.com/
In my case the cron job was running, but the script it invoked failed because I had used a relative path instead of an absolute path in an include line inside the script.
What did the trick for me was:
Make sure the cron service is active:
sudo service crond status
Restart the cron service:
sudo service crond restart
Then reschedule the cron job as usual:
crontab -e
Running
/usr/bin/wget -O /home/ubuntu/backups/testfile http://www.nytimes.com/
gives me the error
/home/ubuntu/backups/testfile: No such file or directory
(the backups directory doesn't exist). Is this your issue?
I guess cron isn't writing this error anywhere you can see it. Redirect stderr to stdout so the error ends up in a file, like this:
02 * * * * /usr/bin/wget -O /home/ubuntu/backups/testfile http://www.nytimes.com/ > /home/ubuntu/error.log 2>&1