I am currently working on a research tool that is supposed to be containerized with Docker so that it can hopefully run on as many different systems as possible. This works fine for the most part, but we have run into a permission problem caused by the workflow: the tool takes an input file (which we mount into the container), evaluates it using R scripts, and is then supposed to generate a report exactly where the input file was taken from on the host system.
The latter part is problematic because, at least in our university context, the internal container user lacks write permission in the (non-root) user home folders from which we are currently taking our testing data. This would obviously also be bad in a production context, since we don't know how a potential user's system is set up, which is why we are trying to dynamically and temporarily map the container user's permissions to those of the host user.
I have found different solutions that involve passing the UID/GID to the Docker daemon in some way or another when building the image:
docker build --build-arg USER_ID=$(id -u ${USER}) --build-arg GROUP_ID=$(id -g ${USER}) -t IMAGE .
I also changed the Dockerfile accordingly, following a tutorial that suggested replacing the internal www-data user:
[...Package installation steps that are supposed to be run as root...]
ARG USER_ID
ARG GROUP_ID
RUN if [ ${USER_ID:-0} -ne 0 ] && [ ${GROUP_ID:-0} -ne 0 ]; then \
        userdel -f www-data && \
        if getent group www-data; then groupdel www-data; fi && \
        groupadd -g ${GROUP_ID} www-data && \
        useradd -l -u ${USER_ID} -g www-data www-data && \
        install -d -m 0755 -o www-data -g www-data /work/ && \
        chown --changes --silent --no-dereference --recursive \
            --from=33:33 ${USER_ID}:${GROUP_ID} \
            /work \
    ; fi
USER www-data
WORKDIR /work
RUN mkdir files
COPY data/ /opt/MTB/data/
COPY helpers/ /opt/MTB/helpers/
COPY src/www/ /opt/MTB/www/
COPY tmp/ /opt/MTB/tmp/
COPY example_data/ /opt/MTB/example_data/
COPY src/ /opt/MTB/src/
EXPOSE 8080
ENTRYPOINT ["/opt/MTB/src/starter_s_c.sh"]
The entrypoint script starter_s_c.sh is a small shell script that passes the trailing argument to the corresponding R script as an input file; the R script then writes the report.
This works, but requires the image to be rebuilt for every new user. What we are looking for is a solution that handles the dynamic permission setting at runtime, so that we only have to build the image once and can use it with many different user configurations.
I have found this, but I am not entirely sure how to implement it, as it would replace our entrypoint script, and I'm not sure how to integrate the solution into our project.
Here is our current entrypoint script which already needs the permissions to be set so localmaster.r can generate the report in the host directory:
#!/bin/sh
file="$1"
cd "$(dirname "$0")/.."
if [ $# -eq 0 ]; then
    echo '.libPaths(c("~/lib/R/library", .libPaths())); library(shiny); library(shinyjs); runApp("src")' | R --vanilla
else
    echo "Rscript --vanilla /opt/MTB/src/localmaster.r $file"
    Rscript --vanilla /opt/MTB/src/localmaster.r "$file"
fi
(If no arguments are given, it starts a Shiny app instead; this is mentioned just to avoid confusion.)
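From what I understand, the runtime variant would keep a fixed www-data user in the image and remap its UID/GID in a small wrapper entrypoint before dropping privileges and exec-ing our existing starter script. Here is a rough sketch of what I imagine (assuming the image would then start as root, have gosu installed, and receive HOST_UID/HOST_GID via docker run -e; none of this exists in our current setup):
#!/bin/sh
# entrypoint_wrapper.sh (hypothetical) - remap www-data to the host user at runtime
HOST_UID="${HOST_UID:-0}"
HOST_GID="${HOST_GID:-0}"
if [ "$HOST_UID" -ne 0 ] && [ "$HOST_GID" -ne 0 ]; then
    groupmod -o -g "$HOST_GID" www-data
    usermod -o -u "$HOST_UID" www-data
    chown -R www-data:www-data /work
    exec gosu www-data /opt/MTB/src/starter_s_c.sh "$@"
else
    exec /opt/MTB/src/starter_s_c.sh "$@"
fi
The container would then be started along the lines of docker run -e HOST_UID=$(id -u) -e HOST_GID=$(id -g) -v "$PWD":/work/files IMAGE /work/files/input.csv (the mount point and file name are just placeholders). Is that roughly the right direction?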
Any help or tips would be much appreciated! Thank you.
I have two logrotate files:
/etc/logrotate.d/nginx-size
/var/log/nginx/*.log
/var/log/www/nginx/50x.log
{
missingok
rotate 3
size 2G
dateext
compress
compresscmd /usr/bin/bzip2
compressoptions -6
compressext .bz2
uncompresscmd /usr/bin/bunzip2
notifempty
create 640 nginx nginx
sharedscripts
postrotate
[ -f /var/run/nginx.pid ] && kill -USR1 `cat /var/run/nginx.pid`
endscript
}
and
/etc/logrotate.d/nginx-daily
/var/log/nginx/*.log
/var/log/www/nginx/50x.log
{
missingok
rotate 3
dateext
compress
compresscmd /usr/bin/bzip2
compressoptions -6
compressext .bz2
uncompresscmd /usr/bin/bunzip2
notifempty
create 640 nginx nginx
sharedscripts
postrotate
[ -f /var/run/nginx.pid ] && kill -USR1 `cat /var/run/nginx.pid`
endscript
}
Output of the command logrotate -d -v /etc/logrotate.d/nginx-size:
reading config file /etc/logrotate.d/nginx-size
compress_prog is now /usr/bin/bzip2
compress_options is now -6
compress_ext is now .bz2
uncompress_prog is now /usr/bin/bunzip2
Handling 1 logs
rotating pattern: /var/log/nginx/*.log
/var/log/www/nginx/50x.log
2147483648 bytes (3 rotations)
empty log files are not rotated, old logs are removed
considering log /var/log/nginx/access.log
log does not need rotating
considering log /var/log/nginx/error.log
log does not need rotating
considering log /var/log/nginx/get.access.log
log does not need rotating
considering log /var/log/nginx/post.access.log
log needs rotating
considering log /var/log/www/nginx/50x.log
log does not need rotating
rotating log /var/log/nginx/post.access.log, log->rotateCount is 3
dateext suffix '-20141204'
glob pattern '-[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]'
glob finding old rotated logs failed
renaming /var/log/nginx/post.access.log to /var/log/nginx/post.access.log-20141204
creating new /var/log/nginx/post.access.log mode = 0640 uid = 497 gid = 497
running postrotate script
running script with arg /var/log/nginx/*.log
/var/log/www/nginx/50x.log
: "
[ -f /var/run/nginx.pid ] && kill -USR1 `cat /var/run/nginx.pid`
"
compressing log with: /usr/bin/bzip2
Same (normal) output for nginx-daily.
If I run
logrotate -f /etc/logrotate.d/nginx-size
manually as root, it does everything. BUT it does not run automatically!
crontab:
*/5 5-23 * * * root logrotate -f -v /etc/logrotate.d/nginx-size 2>&1 > /tmp/logrotate_size
00 04 * * * root logrotate -f -v /etc/logrotate.d/nginx-daily 2>&1 > /tmp/logrotate_daily
Also, the files /tmp/logrotate_daily and /tmp/logrotate_size are always empty.
Cron doesn't give me any errors in /var/log/cron:
Dec 4 14:45:01 (root) CMD (logrotate -f -v /etc/logrotate.d/nginx-rz-size 2>&1 > /tmp/logrotate_size )
Dec 4 14:50:01 (root) CMD (logrotate -f -v /etc/logrotate.d/nginx-rz-size 2>&1 > /tmp/logrotate_size )
What is wrong here? CentOS 6.5 x86_64, logrotate 3.8.7 (built from source) plus logrotate 3.7.8 (via rpm).
Thanks in advance.
Your redirections in those cron lines are incorrect; they will not send error output to those files.
Redirection order matters. You want >/tmp/logrotate_size 2>&1 to get what you want.
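With the redirection order fixed and everything else kept as-is, the crontab entries would read:
*/5 5-23 * * * root logrotate -f -v /etc/logrotate.d/nginx-size >/tmp/logrotate_size 2>&1
00 04 * * * root logrotate -f -v /etc/logrotate.d/nginx-daily >/tmp/logrotate_daily 2>&1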
The underlying issue here is one of the things covered by the "Debugging crontab" section of the cron info page.
Namely "Making assumptions about the environment".
Making assumptions about the environment
Graphical programs (X11 apps), java programs, ssh and sudo are notoriously problematic to run as cron jobs. This is because they rely on things from interactive environments that may not be present in cron's environment.
To more closely model cron's environment interactively, run
env -i sh -c 'yourcommand'
This clears all environment variables and runs sh, which may be more meager in features than your current shell.
Common problems uncovered this way:
foo: Command not found or just foo: not found.
Most likely $PATH is set in your .bashrc or similar interactive init file. Try specifying all commands by full path (or put source ~/.bashrc at the start of the script you're trying to run).
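For example, if logrotate turns out not to be on cron's PATH, spell out its full path in the crontab entry (the /usr/sbin/logrotate location is an assumption; verify it with which logrotate):
*/5 5-23 * * * root /usr/sbin/logrotate -f -v /etc/logrotate.d/nginx-size >/tmp/logrotate_size 2>&1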
I have two questions:
Is there a difference between nginx -s reload and pkill -HUP -F nginx.pid?
What's the simplest way to watch the Nginx conf file and upon changes test the conf file (nginx -t), and if it passes reload Nginx. Can that be done with runit or a process manager like Supervisor?
#!/bin/bash
# NGINX WATCH DAEMON
#
# Author: Devonte
#
# Place file in root of nginx folder: /etc/nginx
# This will test your nginx config on any change and
# if there are no problems it will reload your configuration
# USAGE: sh nginx-watch.sh
# Set NGINX directory
# tar command already has the leading /
dir='etc/nginx'
# Get initial checksum values
checksum_initial=$(tar --strip-components=2 -C / -cf - $dir | md5sum | awk '{print $1}')
checksum_now=$checksum_initial
# Start nginx
nginx
# Daemon that checks the md5 sum of the directory
# If the sums are different (a file was changed / added / deleted),
# the nginx configuration is tested and reloaded on success
while true
do
checksum_now=$(tar --strip-components=2 -C / -cf - $dir | md5sum | awk '{print $1}')
if [ "$checksum_initial" != "$checksum_now" ]; then
echo '[ NGINX ] A configuration file changed. Reloading...'
nginx -t && nginx -s reload;
fi
checksum_initial=$checksum_now
sleep 2
done
At least on Unix, both the "reload" action and the HUP signal are treated the same, thanks to this declaration:
ngx_signal_t signals[] = {
{ ngx_signal_value(NGX_RECONFIGURE_SIGNAL),
"SIG" ngx_value(NGX_RECONFIGURE_SIGNAL),
"reload",
ngx_signal_handler },
in src/os/unix/ngx_process.c. In ngx_signal_handler() the same common code
case ngx_signal_value(NGX_RECONFIGURE_SIGNAL):
ngx_reconfigure = 1;
action = ", reconfiguring";
break;
is executed, which prepares for the same reconfiguration.
To trigger an action when a file is modified, you could either make a crontab and decide of a check-periodicity, or use inotifywait.
To determine whether nginx -t failed, check its return code ($?) in a shell script:
nginx -t
if [ $? -eq 0 ]; then
nginx -s reload
fi
Note: you may also use service nginx reload
(See return code check examples here)
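As a sketch of the inotifywait alternative (it assumes the inotify-tools package is installed, which nginx does not ship with), the polling loop above could be replaced by an event-driven one:
#!/bin/sh
# Reload nginx whenever something under /etc/nginx changes.
# Requires inotifywait from the inotify-tools package.
while inotifywait -r -e modify,create,delete,move /etc/nginx; do
    if nginx -t; then
        nginx -s reload
    else
        echo '[ NGINX ] Configuration test failed; not reloading.'
    fi
done
Such a loop could in turn be kept alive by runit or Supervisor rather than started by hand.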
How do I set up cron to clear the Drupal cache every 2 days?
You have to set this up on your server, not from within Drupal.
Cron example:
minute hour day month day-of-week command-line-to-execute
0 * * * "MON,WED,FRI" wget -O - -q -t 1 http://www.example.com/cron.php
http://drupal.org/cron
http://en.wikipedia.org/wiki/CRON_expression
To set up a crontab entry for this that runs every 2 days, once a day at 01:00
(using step values in conjunction with ranges in crontab):
0 1 1-31/2 * * wget -O - -q -t 1 http://yoursite.com/cron.php
(Note that 1-31/2 matches the odd-numbered days of the month, so the two-day spacing can slip at month boundaries; for example, the 31st and the 1st of the next month are consecutive days.)
See:
man 5 crontab
http://drupal.org/node/23714
Does any operating system provide a mechanism (system call — not command line program) to change the pathname referenced by a symbolic link (symlink) — other than by unlinking the old one and creating a new one?
The POSIX standard does not. Solaris 10 does not. MacOS X 10.5 (Leopard) does not. (I'm tolerably certain neither AIX nor HP-UX does either. Judging from this list of Linux system calls, Linux does not have such a system call either.)
Is there anything that does?
(I'm expecting that the answer is "No".)
Since proving a negative is hard, let's reorganize the question.
If you know that some (Unix-like) operating system not already listed has no system call for rewriting the value of a symlink (the string returned by readlink()) without removing the old symlink and creating a new one, please add it — or them — in an answer.
Yes, you can!
$ ln -sfn source_file_or_directory_name softlink_name
AFAIK, no, you can't. You have to remove it and recreate it. Actually, you can overwrite a symlink and thus update the pathname referenced by it:
$ ln -s .bashrc test
$ ls -al test
lrwxrwxrwx 1 pascal pascal 7 2009-09-23 17:12 test -> .bashrc
$ ln -s .profile test
ln: creating symbolic link `test': File exists
$ ln -s -f .profile test
$ ls -al test
lrwxrwxrwx 1 pascal pascal 8 2009-09-23 17:12 test -> .profile
EDIT: As the OP pointed out in a comment, using the --force option makes ln perform a call to unlink() before symlink(). Below is the output of strace on my Linux box, proving it:
$ strace -o /tmp/output.txt ln -s -f .bash_aliases test
$ grep -C3 ^unlink /tmp/output.txt
lstat64("test", {st_mode=S_IFLNK|0777, st_size=7, ...}) = 0
stat64(".bash_aliases", {st_mode=S_IFREG|0644, st_size=2043, ...}) = 0
symlink(".bash_aliases", "test") = -1 EEXIST (File exists)
unlink("test") = 0
symlink(".bash_aliases", "test") = 0
close(0) = 0
close(1) = 0
So I guess the final answer is "no".
EDIT: The following is copied from Arto Bendiken's answer over on unix.stackexchange.com, circa 2016.
This can indeed be done atomically with rename(2), by first creating the new symlink under a temporary name and then cleanly overwriting the old symlink in one go. As the man page states:
If newpath refers to a symbolic link the link will be overwritten.
In the shell, you would do this with mv -T as follows:
$ mkdir a b
$ ln -s a z
$ ln -s b z.new
$ mv -T z.new z
You can strace that last command to make sure it is indeed using rename(2) under the hood:
$ strace mv -T z.new z
lstat64("z.new", {st_mode=S_IFLNK|0777, st_size=1, ...}) = 0
lstat64("z", {st_mode=S_IFLNK|0777, st_size=1, ...}) = 0
rename("z.new", "z") = 0
Note that in the above, both mv -T and strace are Linux-specific.
On FreeBSD, use mv -h instead.
Editor's note: This is how Capistrano has done it for years now, ever since ~2.15. See this pull request.
It is not necessary to explicitly unlink the old symlink. You can do this:
ln -s newtarget temp
mv temp mylink
(or use the equivalent symlink and rename calls). This is better than explicitly unlinking because rename is atomic, so you can be assured that the link will always point to either the old or new target. However this will not reuse the original inode.
On some filesystems, the target of the symlink is stored in the inode itself (in place of the block list) if it is short enough; this is determined at the time it is created.
Regarding the assertion that the actual owner and group are immaterial, symlink(7) on Linux says that there is a case where it is significant:
The owner and group of an existing symbolic link can be changed using
lchown(2). The only time that the ownership of a symbolic link matters is
when the link is being removed or renamed in a directory that has the sticky
bit set (see stat(2)).
The last access and last modification timestamps of a symbolic link can be
changed using utimensat(2) or lutimes(3).
On Linux, the permissions of a symbolic link are not used in any operations;
the permissions are always 0777 (read, write, and execute for all user
categories), and can't be changed.
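For completeness, from the shell the lchown(2) call corresponds to chown -h (--no-dereference), which changes the link itself rather than its target; the names below are only an example:
$ chown -h alice:alice mylink    # changes ownership of the symlink, not of what it points to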
Just a warning in addition to the correct answers above:
using the -f / --force method risks losing the file if you mix up source and target:
mbucher@server2:~/test$ ls -la
total 11448
drwxr-xr-x 2 mbucher www-data 4096 May 25 15:27 .
drwxr-xr-x 18 mbucher www-data 4096 May 25 15:13 ..
-rw-r--r-- 1 mbucher www-data 4109466 May 25 15:26 data.tar.gz
-rw-r--r-- 1 mbucher www-data 7582480 May 25 15:27 otherdata.tar.gz
lrwxrwxrwx 1 mbucher www-data 11 May 25 15:26 thesymlink -> data.tar.gz
mbucher@server2:~/test$
mbucher@server2:~/test$ ln -s -f thesymlink otherdata.tar.gz
mbucher@server2:~/test$
mbucher@server2:~/test$ ls -la
total 4028
drwxr-xr-x 2 mbucher www-data 4096 May 25 15:28 .
drwxr-xr-x 18 mbucher www-data 4096 May 25 15:13 ..
-rw-r--r-- 1 mbucher www-data 4109466 May 25 15:26 data.tar.gz
lrwxrwxrwx 1 mbucher www-data 10 May 25 15:28 otherdata.tar.gz -> thesymlink
lrwxrwxrwx 1 mbucher www-data 11 May 25 15:26 thesymlink -> data.tar.gz
Of course this behavior is intended, but mistakes do happen. So deleting and recreating the symlink is a bit more work, but also a bit safer:
mbucher@server2:~/test$ rm thesymlink && ln -s thesymlink otherdata.tar.gz
ln: creating symbolic link `otherdata.tar.gz': File exists
which at least keeps my file.
Wouldn't unlinking it and creating the new one do the same thing in the end anyway?
Just in case it helps: there is a way to edit a symlink with midnight commander (mc).
The menu command is (in French on my mc interface):
Fichier / Éditer le lien symbolique
which may be translated to:
File / Edit symbolic link
The shortcut is C-x C-s
Maybe it internally uses the ln --force command, I don't know.
Now, I'm trying to find a way to edit a whole lot of symlinks at once (that's how I arrived here).
Technically, there's no built-in command to edit an existing symbolic link, but it can easily be achieved with a few short commands.
Here's a little bash/zsh function I wrote to update an existing symbolic link:
# -----------------------------------------
# Edit an existing symbolic link
#
# $1 = Name of symbolic link to edit
# $2 = Full destination path to update existing symlink with
# -----------------------------------------
function edit-symlink () {
    if [ -z "$1" ]; then
        echo "Name of symbolic link you would like to edit:"
        read LINK
    else
        LINK="$1"
    fi
    LINKTMP="$LINK-tmp"
    if [ -z "$2" ]; then
        echo "Full destination path to update existing symlink with:"
        read DEST
    else
        DEST="$2"
    fi
    # Create the new link under a temporary name, then move it into place.
    ln -s "$DEST" "$LINKTMP"
    rm "$LINK"
    mv "$LINKTMP" "$LINK"
    printf "Updated %s to point to new destination -> %s\n" "$LINK" "$DEST"
}
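Usage looks like this (the link name and destination are just example values):
$ edit-symlink current /srv/app/releases/2.0
Updated current to point to new destination -> /srv/app/releases/2.0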
You can modify an existing softlink in one of two ways on Linux.
One is to remove the existing softlink with rm and then create a new one with the ln -s command.
However, this can also be done in a single step: replace the existing softlink with the updated path using the command ln -vfns Source_path Destination_path.
First, list all files in the directory:
$ ls -lrt
drwxrwxr-x. 3 root root 110 Feb 27 18:58 test_script
$
Create a softlink test pointing to test_script with the ln -s command:
$ ln -s test_script test
$ ls -lrt
drwxrwxr-x. 3 root root 110 Feb 27 18:58 test_script
lrwxrwxrwx. 1 root root 11 Feb 27 18:58 test -> test_script
$
Update the softlink test to the new directory test_script/softlink with a single command:
$ ln -vfns test_script/softlink/ test
'test' -> 'test_script/softlink/'
$
List the new softlink location:
$ ls -lrt
lrwxrwxrwx. 1 root root 21 Feb 27 18:59 test -> test_script/softlink/
$
ln --help
-v, --verbose print name of each linked file
-f, --force remove existing destination files
-n, --no-dereference treat LINK_NAME as a normal file if it is a symbolic link to a directory
-s, --symbolic make symbolic links instead of hard links