How does zsh interpret non-absolute paths in shebangs? (WAS: Why does python3 -i permit non-absolute paths in shebang?) - zsh

I recently discovered the -i argument to Python, which drops into interactive mode after the script completes. Pretty neat!
$ cat test.py
#!python3 -i
x=5
print('The value of x is ' + str(x))
$ ./test.py
The value of x is 5
>>> print(str(x+1))
6
>>>
zsh: suspended ./test.py
However, when I copied this script to a version that should terminate on completion, running it fails:
$ cat test1.py
#!python3
x=5
print('The value of x is ' + str(x))
$ ./test1.py
/usr/local/Cellar/python3/3.6.3/Frameworks/Python.framework/Versions/3.6/Resources/Python.app/Contents/MacOS/Python: can't open file '
x=5
print('The value of x is ' + str(x))
': [Errno 2] No such file or directory
From some further reading, I discovered that I had originally made a mistake, and #!/usr/bin/env python3 is the correct shebang.
However, I'm curious why a non-absolute path to python3 succeeds only when I give the -i flag. I guess this must be something to do with how zsh interprets non-absolute shebangs, but I don't know enough to know how to investigate that.
System setup: MacOS 10.12.6, iTerm2 3.1.6, zsh 5.2. which python3 gives /usr/local/bin/python3, and that directory is on $PATH.
Interestingly, I don't get the same behaviour on sh:
$ sh
sh-3.2$ cat test.py
#!python3
x=5
print('The value of x is ' + str(x))
sh-3.2$ ./test.py
sh: ./test.py: python3: bad interpreter: No such file or directory
I got some comments suggesting that this is something to do with CWD or permissions. python3 is not in my CWD, and both files have execute permission:
$ ls -al | grep 'py' | awk '{print $1, $10}'
-rw------- .python_history
-rwxr-xr-x test.py
-rwxr-xr-x test1.py

Your kernel will not execute the script unless the interpreter is:
- specified as an absolute path, or
- specified as a path relative to the current working directory.
Then if the kernel refuses to execute the script, your shell might take over and try to execute it anyway, interpreting the shebang line according to its own rules (like finding the executable in the $PATH for example).
zsh does attempt to do this. sh does not.
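A quick way to watch this difference in isolation (a sketch, assuming a setup like the poster's: python3 on $PATH, hypothetical file name demo.py):
printf '#!python3 -i\nx=5\n' > demo.py
chmod +x demo.py
sh -c './demo.py'   # sh surfaces the kernel's ENOENT: "bad interpreter"
zsh -c './demo.py'  # zsh swallows the failure, finds python3 on $PATH itself,
                    # and python3 -i starts (Ctrl-D to leave the REPL)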
However, the way zsh interprets the shebang (and probably subsequent lines) is really strange. It looks like it always passes everything after the command name as a single argument. See what it does:
$ cat test.py
#!python3 -b -i
x=5
print('The value of x is ' + str(x))
$ strace -f -e execve zsh
execve("/bin/zsh", ["zsh"], 0x7ffd35c9e198 /* 78 vars */) = 0
host% ./test.py
strace: Process 5510 attached
[pid 5510] execve("./test.py", ["./test.py"], 0x558ec6e46710 /* 79 vars */) = -1 ENOENT (No such file or directory)
[pid 5510] execve("/usr/bin/python3", ["python3", "-b -i", "./test.py"], 0x558ec6e46710 /* 79 vars */) = 0
[pid 5510] execve("/usr/lib/python-exec/python3.4/python3", ["/usr/lib/python-exec/python3.4/p"..., "-b -i", "./test.py"], 0x7fffd30eb208 /* 79 vars */) = 0
Unknown option: -
usage: /usr/lib/python-exec/python3.4/python3 [option] ... [-c cmd | -m mod | file | -] [arg] ...
Try `python -h' for more information.
[pid 5510] +++ exited with 2 +++
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=5510, si_uid=1000, si_status=2, si_utime=0, si_stime=0} ---
host%
+++ exited with 2 +++
See how ["python3", "-b -i", "./test.py"] are passed as arguments. It seems highly counterintuitive to me to lump the two switches -b and -i together, but that's what zsh does. Python obviously doesn't understand this.
When there are no arguments, the exact behaviour depends on whether there is a space after the program name, but is strange in either case. Check it with strace yourself because you are not going to believe me.
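If you don't have strace handy, a cheap way to inspect the splitting is to make the "interpreter" a script that just prints its arguments (a sketch with hypothetical paths; showargs is an invented helper):
mkdir -p /tmp/bin
cat > /tmp/bin/showargs <<'EOF'
#!/bin/sh
printf 'arg: [%s]\n' "$@"
EOF
chmod +x /tmp/bin/showargs
printf '#!showargs -b -i\n' > /tmp/demo
chmod +x /tmp/demo
cd /tmp && PATH=/tmp/bin:$PATH zsh -c './demo'
If the analysis above holds, this prints arg: [-b -i] followed by arg: [./demo], confirming that the two switches arrive as one argument.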
It is my understanding that zsh's handling of the shebang line is simply buggy.

Related

redirect from output file so can be captured

A program foo writes to an output file and prints diagnostic information to stdout.
Thus:
foo -o ./out -i ./input > log
results in the valuable stuff in ./out and some mumbo jumbo in log.
I need to read ./out into another program (an R script I am writing).
The intermediate step of writing to file ./out and reading again is slow, and I need to do this for big ./out files, hundreds of times.
I would like to perform some kind of redirection so that foo writes to a file descriptor rather than a file ./out, which I can read into my script directly. But stdout is already used.
Here is what I tried:
in R:
library(data.table)
fread(cmd = "foo -o /dev/stdout -i ./input > /dev/null")
but I get an error:
File '/data1/tmp/RtmpkKIpBm/fileeb789130da8e3' has size 0. Returning a NULL data.table.
Use process substitution:
foo -o >(cat) -i ./input > log
Demo:
$ cat foo
#!/bin/bash
if [ "$1" != "-o" ]; then
    exit 2
fi
echo "mumbo jumbo stdout" >&1
echo "valuable info" > "$2"
echo "mumbo jumbo stderr" >&2 # threw in stderr for good measure
$ ./foo -o x
mumbo jumbo stdout
mumbo jumbo stderr
$ cat x
valuable info
$ rm x
$ ./foo -o /dev/stdout
mumbo jumbo stdout
valuable info
mumbo jumbo stderr
$ ./foo -o /dev/stdout &>/dev/null
$ ./foo -o >(cat) &>/dev/null
valuable info
Explanation:
Every process has its own stdout. With ./foo -o /dev/stdout &>/dev/null, you're telling foo to write its valuable info to its own stdout, which is /dev/null. But with ./foo -o >(cat) &>/dev/null, you're telling foo to write its valuable info to a pipe, and that pipe goes to cat, whose stdout is not /dev/null but rather inherited from the shell.
In the demo, the shell's stdout is the terminal, but if it was coming from R's fread(), both the shell's stdout and cat's stdout would go to where fread() can read them.
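So, as a hedged sketch (assuming fread()'s cmd= string is run by plain sh, which lacks process substitution, hence the bash -c wrapper):
bash -c 'foo -o >(cat) -i ./input > /dev/null'
which from R would be, e.g., fread(cmd = "bash -c 'foo -o >(cat) -i ./input > /dev/null'").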
Try
foo -o /dev/fd/3 -i ./input 3>&1 1>/dev/null
- Redirections are processed before commands are run.
- Redirections are processed left-to-right.
- 3>&1 causes anything written to file descriptor 3 to go to the current stdout.
- 1>/dev/null causes anything written to stdout to go to /dev/null.
- Writing to /dev/fd/3 then writes to the original stdout.
The code should work with any POSIX-compliant shell on any system that supports the /dev/fd directory.
The /dev/fd directory is not part of POSIX, but it is supported on many Unix-like systems, including Linux, FreeBSD, macOS, and Solaris.
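As a quick sanity check against the toy foo from the previous answer (stderr is not redirected here, so its mumbo jumbo still shows):
$ ./foo -o /dev/fd/3 3>&1 1>/dev/null
valuable info
mumbo jumbo stderr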

zsh: standard error not getting redirected

I have created an empty directory for testing:
$ mkdir test
$ cd test
$ grep da *
zsh: no matches found: *
Now I am trying to redirect the error zsh: no matches found: * to a file. (My purpose is only testing and understanding; kindly don't ask what and why I want to do this.)
$ grep da * 2> grep-errors.txt
zsh: no matches found: *
Now the error is still visible
$ ls -al
total 60
drwxr-xr-x 2 test users 4096 Mar 2 20:18 .
drwxr-xr-x 117 test users 53248 Mar 2 20:25 ..
The file grep-errors.txt is not created.
So what is happening here? Can someone explain why the errors are not getting redirected to the file?
The redirection only applies to grep, but it is zsh itself that produces that error when * doesn't expand to any matching files. This happens before grep ever runs, indeed before zsh even tries to process the redirection.
One workaround is to disable the NOMATCH option, so that * is passed literally to grep. grep then runs and produces its own error (grep: *: No such file or directory), which is written to the file:
% setopt NO_NOMATCH
% grep da * 2> grep-errors.txt
% cat grep-errors.txt
grep: *: No such file or directory
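An alternative sketch (untested here): run the command in a subshell, so that zsh's own message, emitted inside the subshell, falls under the redirection applied to it:
% ( grep da * ) 2> grep-errors.txt
% cat grep-errors.txt
zsh: no matches found: *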

Grabbing .jar application output stream to console after console was closed and new one opened on Oracle Solaris 11

On an Oracle Solaris 11 console, when I issue ps -ef | grep java I can see the PID of a running Java process. It was started in another console window, which has since been closed (while that window was open, the .jar application's output was visible in it). Is there some way to grab that application's output again without restarting the .jar file?
Application was started like this (as a root user):
java -jar SomeFile.jar &
Writing the output to a file is not an option in this case.
Yes, you can do that, but it involves some mad gdb skills. Here is how to do it on Linux, and I believe you can do the same on Solaris (since it has gdb and all the system calls I'm going to use below).
There are 3 file descriptors for standard streams:
stdin: 0
stdout: 1
stderr: 2
You are interested in stdout and stderr (both are console output), so you need file descriptors with numbers 1 and 2, just keep it in mind.
Now I'm going to show you how to do what you ask, using the "okular" application (standing in for your "java" application) and its stderr stream.
Run "okular" in terminal, like this:
$ okular &
and then close this terminal. This is just to simulate your situation.
Open another terminal
Look for "okular" process:
$ ps aux | grep okular
Output:
joe 27599 2.2 0.9 515644 73944 ? S 23:46 0:00 okular
So "okular" PID is 27599.
Look for open file descriptors of "okular" process:
$ ls -l /proc/27599/fd
Output:
lrwx------ 1 joe joe 64 Feb 18 23:46 0 -> /dev/pts/0 (deleted)
lrwx------ 1 joe joe 64 Feb 18 23:46 1 -> /dev/pts/0 (deleted)
lrwx------ 1 joe joe 64 Feb 18 23:46 2 -> /dev/pts/0 (deleted)
You can see that all three streams point to a pseudo-terminal that has been deleted.
Now let's attach to our process with gdb:
$ gdb -p 27599 /usr/bin/okular
Inside of gdb perform next operations:
(gdb) p close(2)
(gdb) p creat("/tmp/okular_2", 0600)
(gdb) detach
(gdb) quit
Here we invoked two system calls:
close(), to close the file underlying the stderr stream of our process
creat(), to create a new file for the stderr stream; since descriptor 2 was just freed by close(), creat() returns the lowest available descriptor, i.e. 2, so the new file becomes the process's stderr
p is a gdb command; here it prints the system calls' return values.
Now all new stderr output of our process will be appended to text file /tmp/okular_2. We can read it constantly this way:
$ tail -f /tmp/okular_2
Conclusion
OK, that's it: we revived the stderr stream. You can do the same for the stdout stream; the only difference is that you need to call "close(1)" instead of "close(2)" in gdb. Also, in your case, be sure to replace every "okular" with your "java" process.
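For reference, the stdout variant of the same gdb session would look like this (/tmp/okular_1 is a hypothetical file name, chosen by analogy):
(gdb) p close(1)
(gdb) p creat("/tmp/okular_1", 0600)
(gdb) detach
(gdb) quit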
Most of this answer was inspired by this article.
If you need to revive stdin stream, you can attach it to pipe (FIFO) file, see details here.
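A minimal sketch of that stdin trick, under the same assumptions as above (hypothetical FIFO path; the 2 passed to open() is O_RDWR, which unlike O_RDONLY does not block when opening a FIFO that has no writer yet):
$ mkfifo /tmp/okular_stdin
$ gdb -p 27599
(gdb) p close(0)
(gdb) p open("/tmp/okular_stdin", 2)
(gdb) detach
(gdb) quit
$ echo "some input" > /tmp/okular_stdin
Again, open() returns the lowest free descriptor, which is 0 after the close, so the FIFO becomes the process's stdin.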
Yes, it is possible to snoop any process output with Solaris native tools.
One way would be using dtrace which allows tracing processes even when they are already grabbed by a debugger or similar tool.
This dtrace script will display a given process's stdout:
#!/bin/ksh
pid=$1
dtrace -qn "syscall::write:entry /pid == $pid && arg0 == 1 /
{ printf(\"%s\",copyinstr(arg1)); }"
You should pass the process id of the java application to trace as its first argument, e.g. $(pgrep -f "java -jar SomeFile.jar").
Replace arg0 == 1 with arg0 == 2 if you want to trace stderr instead of stdout.
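Usage sketch, assuming the script is saved under a hypothetical name such as showout.ksh:
$ chmod +x showout.ksh
$ ./showout.ksh "$(pgrep -f 'java -jar SomeFile.jar')"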
Should you want to see non-displayable characters (in octal), you might use this slightly modified version:
#!/bin/ksh
pid=$1
dtrace -qn "syscall::write:entry /pid == $pid && arg0 == 1 /
{ printf(\"%s\",copyinstr(arg1)); }" | od -c
Another native way is to use the truss command. The following script will show all writes from your process to any file descriptors, and will include a full detailed trace for both stdout and stderr (3799 is your target process pid):
truss -w1,2 -t write -p 3799
dtrace:
http://docs.oracle.com/cd/E18752_01/html/819-5488/gcgkk.html
truss:
http://docs.oracle.com/cd/E36784_01/html/E36870/truss-1.html#scrolltoc

I'm stuck on logrotate mystery

I have two logrotate files:
/etc/logrotate.d/nginx-size
/var/log/nginx/*.log
/var/log/www/nginx/50x.log
{
    missingok
    rotate 3
    size 2G
    dateext
    compress
    compresscmd /usr/bin/bzip2
    compressoptions -6
    compressext .bz2
    uncompresscmd /usr/bin/bunzip2
    notifempty
    create 640 nginx nginx
    sharedscripts
    postrotate
        [ -f /var/run/nginx.pid ] && kill -USR1 `cat /var/run/nginx.pid`
    endscript
}
and
/etc/logrotate.d/nginx-daily
/var/log/nginx/*.log
/var/log/www/nginx/50x.log
{
    missingok
    rotate 3
    dateext
    compress
    compresscmd /usr/bin/bzip2
    compressoptions -6
    compressext .bz2
    uncompresscmd /usr/bin/bunzip2
    notifempty
    create 640 nginx nginx
    sharedscripts
    postrotate
        [ -f /var/run/nginx.pid ] && kill -USR1 `cat /var/run/nginx.pid`
    endscript
}
Output of the command logrotate -d -v /etc/logrotate.d/nginx-size:
reading config file /etc/logrotate.d/nginx-size
compress_prog is now /usr/bin/bzip2
compress_options is now -6
compress_ext is now .bz2
uncompress_prog is now /usr/bin/bunzip2
Handling 1 logs
rotating pattern: /var/log/nginx/*.log
/var/log/www/nginx/50x.log
2147483648 bytes (3 rotations)
empty log files are not rotated, old logs are removed
considering log /var/log/nginx/access.log
log does not need rotating
considering log /var/log/nginx/error.log
log does not need rotating
considering log /var/log/nginx/get.access.log
log does not need rotating
considering log /var/log/nginx/post.access.log
log needs rotating
considering log /var/log/www/nginx/50x.log
log does not need rotating
rotating log /var/log/nginx/post.access.log, log->rotateCount is 3
dateext suffix '-20141204'
glob pattern '-[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]'
glob finding old rotated logs failed
renaming /var/log/nginx/post.access.log to /var/log/nginx/post.access.log-20141204
creating new /var/log/nginx/post.access.log mode = 0640 uid = 497 gid = 497
running postrotate script
running script with arg /var/log/nginx/*.log
/var/log/www/nginx/50x.log
: "
[ -f /var/run/nginx.pid ] && kill -USR1 `cat /var/run/nginx.pid`
"
compressing log with: /usr/bin/bzip2
Same (normal) output for nginx-daily.
If I run the command as root
logrotate -f /etc/logrotate.d/nginx-size
manually, it does everything. BUT! It doesn't run automatically!
crontab:
*/5 5-23 * * * root logrotate -f -v /etc/logrotate.d/nginx-size 2>&1 > /tmp/logrotate_size
00 04 * * * root logrotate -f -v /etc/logrotate.d/nginx-daily 2>&1 > /tmp/logrotate_daily
Also, the files /tmp/logrotate_daily and /tmp/logrotate_size are always empty.
Cron doesn't give me any errors in /var/log/cron:
Dec 4 14:45:01 (root) CMD (logrotate -f -v /etc/logrotate.d/nginx-rz-size 2>&1 > /tmp/logrotate_size )
Dec 4 14:50:01 (root) CMD (logrotate -f -v /etc/logrotate.d/nginx-rz-size 2>&1 > /tmp/logrotate_size )
What's wrong with that thing? CentOS 6.5 x86_64, logrotate 3.8.7 (built from source) + logrotate 3.7.8 (via rpm).
Thanks in advance.
Your redirections are incorrect in those cron lines; they will not send the error output to those files.
Redirection order matters: use >/tmp/logrotate_size 2>&1 instead.
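Applied to the first cron line, that would be, for example:
*/5 5-23 * * * root logrotate -f -v /etc/logrotate.d/nginx-size > /tmp/logrotate_size 2>&1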
The underlying issue here is one of the things covered by the "Debugging crontab" section of the cron info page.
Namely "Making assumptions about the environment".
Making assumptions about the environment
Graphical programs (X11 apps), java programs, ssh and sudo are notoriously problematic to run as cron jobs. This is because they rely on things from interactive environments that may not be present in cron's environment.
To more closely model cron's environment interactively, run
env -i sh -c 'yourcommand'
This will clear all environment variables and run sh, which may be more meager in features than your current shell.
Common problems uncovered this way:
foo: Command not found or just foo: not found.
Most likely $PATH is set in your .bashrc or similar interactive init file. Try specifying all commands by full path (or put source ~/.bashrc at the start of the script you're trying to run).
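Applied to this crontab, that might look like the following (assuming logrotate lives at /usr/sbin/logrotate; check with command -v logrotate):
00 04 * * * root /usr/sbin/logrotate -f -v /etc/logrotate.d/nginx-daily > /tmp/logrotate_daily 2>&1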

Drush rsync code 23 error

I have a path issue. I can't seem to figure out why I am getting this code 23 error. Here is the complete error message; I am guessing that rsync can't write to my local /private/tmp directory.
Here is the output:
```
Do you really want to continue? (y/n): y
rsync: link_stat "/tmp/SGDU55.sql" failed: No such file or directory (2)
rsync error: some files could not be transferred (code 23) at /SourceCache/rsync/rsync-42/rsync/main.c(1400) [receiver=2.6.9]
Could not rsync from xxx@staging-5244.prod.xxx.com:/tmp/SGDU55.sql to [error]
/private/tmp/-to-drupal_db.sql.p0YIBu
```
Here is the abbreviated output of the simulated drush command.
```
$ drush sql-sync @aq6 @aqsolo --simulate
.....
Calling system(rsync -e 'ssh -i /Users/dave.ferrera/.vagrant.d/insecure_private_key' -akz --exclude=".git" --exclude=".gitignore" --exclude=".hg" --exclude=".hgignore" --exclude=".hgtags" --exclude=".bzr" --exclude=".bzrignore" --exclude=".bzrtags" --exclude=".svn" /private/tmp/-to-drupal_db.sql.iXOzSo vagrant@12.12.12.12:tmp/drupal_db.sql);
Calling system(ssh -i /Users/dave.ferrera/.vagrant.d/insecure_private_key vagrant@12.12.12.12 'mysql --database=drupal_db --host=localhost --user=root --password=password --silent < tmp/drupal_db.sql 2>&1');
$
```
Is there a way to change the /private/tmp path to something else?
I have added chmod 1777 to /private and /private/tmp.
Since I was using Acquia, the problem was solved as soon as I changed to the correct %dump-dir path.
So now I have:
'%dump-dir' => '/mnt/tmp/',
If your alias's root begins with 'root' => '/mnt/gfs....., then it should be the same.
