Redirect not working correctly, 2> /dev/null becomes 2 > /dev/null and stderr doesn't get redirected - unix

I am hoping someone can help me figure out what setting I might need to override. I am working on a Unix terminal server, using xterm with a Linux shell. Every time I use a command like grep "blah" 2> /dev/null at the shell prompt, the command is run as grep "blah" 2 > /dev/null and, needless to say, the redirection fails.
xterm version is X.Org 6.8.99.903(238)
I cannot update or install anything; this is a locked-down production server.
Thanks for any help and illumination on the topic; this is making my grep useless at high directory levels with recursion.

That's Bourne shell syntax, and it doesn't work in csh (the C shell).
The best you can do is
( command >stdout_file ) >&stderr_file
where stdout goes to one file and stderr to another. Redirecting just stderr, while leaving stdout untouched, is not possible in csh.
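For example, applied to the recursive grep from the question, a sketch (stdout is still redirected, but to /dev/tty, the current terminal, so the matches stay visible while the errors are discarded; this assumes your grep supports -r):
( grep -r "blah" . > /dev/tty ) >& /dev/null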

In a comment, you say "A minor note, this is csh". That's not a minor note, that's the cause of the problem. xterm is just a terminal emulator, not a shell; all it does is set up a window that provides textual input and output. csh (or bash, or ...) is the shell, the program that interprets the commands you type.
csh has different syntax for redirection, and doesn't let you redirect just stderr. command > file redirects stdout; command >& file redirects both stdout and stderr.
You say the system doesn't have bash, but it does have ksh. I suggest just using ksh; it will be a lot more familiar to you. Both bash and ksh are derived from the old Bourne shell.
All (?) Unix-like systems will have a Bourne-like shell installed as /bin/sh. Even if you're using csh (or tcsh?) as your interactive shell, you can still invoke sh, even in a one-liner. For example:
sh -c 'command 2>/dev/null'
will invoke sh, which in turn will invoke command and redirect just its stderr to /dev/null.
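Applied to the grep from the question, that might look like this (assuming your grep supports -r for recursion):
sh -c 'grep -r "blah" . 2>/dev/null'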
The purpose of an interactive shell is (mostly) to let you use other commands that are available on the system. sh, or any shell, can be used as just another command.

Related

Running a script according to shebang line

I've got a script on my computer named test.py. What I've been doing so far to run the program is type python test.py into the terminal.
Is there a command on Unix operating systems that doesn't require the user to specify the program he/she uses to run the script but that will instead run the script using whichever program the shebang line is pointing to?
For example, I'm looking for a command that would let me type some_command test.txt into the terminal, and if the first line of test.txt is #!/usr/bin/python, the script would be interpreted as a Python script, but if the first line is #!/path/to/javascript/interpreter, the script would be interpreted as JavaScript.
This is the default behavior when you execute a file (it's handled by the kernel's program loader, not the terminal); all you have to do is make the script executable with
chmod u+x test.txt
Then (assuming test.txt is in your current directory) every time you type
./test.txt
it will look at the shebang line and use the program named there to run test.txt.
If you really want to duplicate built-in functionality, try this.
#!/bin/sh
# Run a file according to its shebang line, falling back to direct execution.
x=$1
shift
# Print the first line with the leading "#!" stripped; grep . fails if there was no shebang.
# $p is left unquoted so an interpreter plus argument (e.g. "/usr/bin/env python") splits into words.
p=$(sed -n 's/^#!//p;q' "$x" | grep .) && exec $p "$x" "$@"
# No shebang line: try executing the file directly.
exec "$x" "$@"
echo "$0: $x: No can do" >&2
Maybe call it start to remind you of the similarly useful Windows command.
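A hypothetical session using the test.py from the question (assuming test.py starts with #!/usr/bin/python, and the script above is saved as start):
chmod u+x start
./start test.py          # reads the shebang and runs /usr/bin/python test.py
./start test.py arg1     # extra arguments are passed through to the script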

How do I use the nohup command without getting nohup.out?

I have a problem with the nohup command.
When I run my job, it produces a lot of output. The nohup.out file becomes too large and my process slows down. How can I run this command without generating nohup.out?
The nohup command only writes to nohup.out if the output would otherwise go to the terminal. If you have redirected the output of the command somewhere else - including /dev/null - that's where it goes instead.
nohup command >/dev/null 2>&1 # doesn't create nohup.out
Note that the >/dev/null 2>&1 sequence can be abbreviated to just >&/dev/null in most (but not all) shells.
If you're using nohup, that probably means you want to run the command in the background by putting another & on the end of the whole thing:
nohup command >/dev/null 2>&1 & # runs in background, still doesn't create nohup.out
On Linux, running a job with nohup automatically closes its input as well. On other systems, notably BSD and macOS, that is not the case, so when running in the background, you might want to close input manually. While closing input has no effect on the creation or not of nohup.out, it avoids another problem: if a background process tries to read anything from standard input, it will pause, waiting for you to bring it back to the foreground and type something. So the extra-safe version looks like this:
nohup command </dev/null >/dev/null 2>&1 & # completely detached from terminal
Note, however, that this does not prevent the command from accessing the terminal directly, nor does it remove it from your shell's process group. If you want to do the latter, and you are running bash, ksh, or zsh, you can do so by running disown with no argument as the next command. That will mean the background process is no longer associated with a shell "job" and will not have any signals forwarded to it from the shell. (A disowned process gets no signals forwarded to it automatically by its parent shell - but without nohup, it will still receive a HUP signal sent via other means, such as a manual kill command. A nohup'ed process ignores any and all HUP signals, no matter how they are sent.)
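Putting the pieces together in bash, ksh, or zsh (command stands in for whatever you actually want to run):
nohup command </dev/null >/dev/null 2>&1 &
disown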
Explanation:
In Unixy systems, every source of input or target of output has a number associated with it called a "file descriptor", or "fd" for short. Every running program ("process") has its own set of these, and when a new process starts up it has three of them already open: "standard input", which is fd 0, is open for the process to read from, while "standard output" (fd 1) and "standard error" (fd 2) are open for it to write to. If you just run a command in a terminal window, then by default, anything you type goes to its standard input, while both its standard output and standard error get sent to that window.
But you can ask the shell to change where any or all of those file descriptors point before launching the command; that's what the redirection (<, <<, >, >>) and pipe (|) operators do.
The pipe is the simplest of these... command1 | command2 arranges for the standard output of command1 to feed directly into the standard input of command2. This is a very handy arrangement that has led to a particular design pattern in UNIX tools (and explains the existence of standard error, which allows a program to send messages to the user even though its output is going into the next program in the pipeline). But you can only pipe standard output to standard input; you can't send any other file descriptors to a pipe without some juggling.
The redirection operators are friendlier in that they let you specify which file descriptor to redirect. So 0<infile reads standard input from the file named infile, while 2>>logfile appends standard error to the end of the file named logfile. If you don't specify a number, then input redirection defaults to fd 0 (< is the same as 0<), while output redirection defaults to fd 1 (> is the same as 1>).
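Putting those together on a single hypothetical command:
myprog <infile >outfile 2>>logfile   # read infile, write outfile, append errors to logfile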
Also, you can combine file descriptors together: 2>&1 means "send standard error wherever standard output is going". That means that you get a single stream of output that includes both standard out and standard error intermixed with no way to separate them anymore, but it also means that you can include standard error in a pipe.
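For example, this minimal sketch filters both streams through one pipe:
command 2>&1 | grep -i error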
So the sequence >/dev/null 2>&1 means "send standard output to /dev/null" (which is a special device that just throws away whatever you write to it) "and then send standard error to wherever standard output is going" (which we just made sure was /dev/null). Basically, "throw away whatever this command writes to either file descriptor".
When nohup detects that neither its standard error nor output is attached to a terminal, it doesn't bother to create nohup.out, but assumes that the output is already redirected where the user wants it to go.
The /dev/null device works for input, too; if you run a command with </dev/null, then any attempt by that command to read from standard input will instantly encounter end-of-file. Note that the merge syntax won't have the same effect here; it only works to point a file descriptor to another one that's open in the same direction (input or output). The shell will let you do >/dev/null <&1, but that winds up creating a process with an input file descriptor open on an output stream, so instead of just hitting end-of-file, any read attempt will fail with a "bad file descriptor" error.
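You can see the end-of-file behavior for yourself:
cat </dev/null   # reads end-of-file immediately and exits with no output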
nohup some_command > /dev/null 2>&1 &
That's all you need to do!
Have you tried redirecting all three I/O streams:
nohup ./yourprogram > foo.out 2> foo.err < /dev/null &
You might want to use the detach program. You use it like nohup but it doesn't produce an output log unless you tell it to. Here is the man page:
NAME
detach - run a command after detaching from the terminal
SYNOPSIS
detach [options] [--] command [args]
Forks a new process, detaches it from the terminal, and executes command with the specified arguments.
OPTIONS
detach recognizes a couple of options, which are discussed below. The
special option -- is used to signal that the rest of the arguments are
the command and args to be passed to it.
-e file
Connect file to the standard error of the command.
-f Run in the foreground (do not fork).
-i file
Connect file to the standard input of the command.
-o file
Connect file to the standard output of the command.
-p file
Write the pid of the detached process to file.
EXAMPLE
detach xterm
Start an xterm that will not be closed when the current shell exits.
AUTHOR
detach was written by Robbert Haarman. See http://inglorion.net/ for
contact information.
Note I have no affiliation with the author of the program. I'm only a satisfied user of the program.
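For instance, combining the options quoted above (the file names here are hypothetical):
detach -o out.log -e err.log -p pid.txt -- ./yourprogram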
The following command will let you run something in the background without creating nohup.out:
nohup command | tee &
This way you can still see the command's console output while running a script on a remote server: tee echoes the piped output back to the terminal, and nohup skips creating nohup.out because stdout is a pipe rather than a terminal.
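If you also want to keep a copy of that output in a file, tee accepts a file name (hypothetical here):
nohup command | tee output.log &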
sudo bash -c "nohup /opt/viptel/viptel_bin/log.sh $* &> /dev/null" &
Redirecting the output of sudo directly causes sudo to ask for the password again, so this somewhat awkward bash -c wrapper is needed for this variant.
If you have a bash shell on your Mac or Linux machine in front of you, try out the steps below to understand redirection practically:
Create a small two-command script called zz.sh:
#!/bin/bash
echo "Hello. This is a proper command"
junk_errorcommand
The echo command's output goes to the STDOUT stream (file descriptor 1).
The error command's output goes to the STDERR stream (file descriptor 2).
Currently, simply executing the script sends both STDOUT and STDERR to the screen.
./zz.sh
Now start with the standard redirection:
./zz.sh > zfile.txt
In the above, the "echo" output (STDOUT) goes into zfile.txt, whereas the "error" output (STDERR) is displayed on the screen.
The above is the same as:
./zz.sh 1> zfile.txt
Now you can try the opposite, and redirect the "error" output (STDERR) into the file; the STDOUT from the "echo" command goes to the screen:
./zz.sh 2> zfile.txt
Combining the above two, you get:
./zz.sh 1> zfile.txt 2>&1
Explanation:
FIRST, send STDOUT (1) to zfile.txt.
THEN, send STDERR (2) to wherever STDOUT (1) is pointing (using &1).
Therefore, both 1 and 2 go into the same file (zfile.txt).
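Note that the order matters. Reversing the two redirections points STDERR at the terminal (where STDOUT was going at that moment) before STDOUT is moved to the file, so the errors still appear on screen:
./zz.sh 2>&1 1> zfile.txt   # stdout goes to zfile.txt, stderr stays on the screen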
Finally, you can wrap the whole thing in nohup ... & to run it in the background:
nohup ./zz.sh 1> zfile.txt 2>&1 &
You can run the command below:
nohup <your command> > <outputfile> 2>&1 &
For example, I have a nohup command inside a script:
./Runjob.sh > sparkConcurrent.out 2>&1

SharpSsh - script runs twice in csh and ksh

I'm running a script from ASP.NET/C# using SharpSsh. When the script runs and I do a ps -ef | grep from Unix, I see the same script running twice: one under csh -c, and the other under ksh. The script has a ksh shebang, so I'm not sure why a copy of csh is also running. If I run the same script directly from Unix, only one copy runs, under ksh. There's no other shell invoked from within the script.
Most Unix/Linux systems now have a command or option that will show process trees as an indented list; look for the -t or -T options to ps, a separate ptree command, or similar.
USER PID PPID START TT TIME CMD
daemon 1 1 11-03-06 ? 0 init
myusr 221568 1 11-03-07 tty10 1.00s \_ -ksh
myusr 350976 221568 07:52:11 tty10 0 | \_ ps -efT
I bet you'll see that the csh is the user's login shell, invoked with your script as an argument (you may have to use different options to ps to see the full command line of the csh process), and as a subprocess you'll see ksh executing your script, with further subprocesses under ksh for any external commands the script calls.
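On Linux, for example, the procps version of ps can draw the tree directly (option availability varies by system, so treat this as a sketch):
ps -ef --forest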
I hope this helps.

strange behavior of fc -l command

I have two Unix machines, both running AIX 5.3.
My $HOME is mounted on machine1.
Using NFS, logging in to machine2 goes to the same $HOME.
I log in to machine2 first, then machine1.
Both sessions use telnet.
The two sessions share the same .sh_history file.
I found the behavior of fc -l very strange.
On machine2, I issue these commands in telnet:
fc -l
ksh fc -l
Both give the same output.
On machine1,
fc -l
ksh fc -l
give DIFFERENT results.
The result of ksh fc -l
is the same as that of /usr/bin/fc -l.
Also, when I run a script like this:
#!/usr/bin/ksh
fc -l
The result is the same as /usr/bin/fc -l.
Could anyone tell me what happened?
Alvin SIU
Ah, wisdom of the ancients... (Since this post is over a year old.)
Anyway, I just encountered this problem in Solaris 10. The issue seems to be this: when you define a function in /etc/profile, or in any file called by /etc/profile, your HISTFILE variable gets ignored by the Korn shell, and the shell instead uses .sh_history when accessing its history. I'm not sure why this is.
The result is that you see the commands of other root shells. You can test it with:
lsof -p $$
or
cat /proc/$$/fd/63
It's possible that the login shell is not ksh or that $HISTFILE is being reset. One thing you can do is echo $HISTFILE in the various situations and see if it's different. Another thing to check is to see what shell you're in using ps.
Bash (default $HOME/.bash_history), for example, will have a different $HISTFILE than ksh (default $HOME/.sh_history).
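A quick check to run in each session (a sketch; ps options vary slightly between systems):
echo $HISTFILE            # which history file this shell thinks it is using
ps -p $$ -o comm=         # which shell is actually running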
Another possible reason for the difference is that the builtin fc may be able to see in-memory history that hasn't been written to disk yet (which the external /usr/bin/fc wouldn't be able to see). If this is true, it may be version dependent. Bash, for example, doesn't write history to the file until the shell exits. Ksh (at least the version I'm using) writes it immediately.

How to log the output (regular or error) of a Unix program to a file

How do I log the output (regular or error) of a Unix program to a file?
Start it with ./program > file.log 2>&1
This will redirect both stdout and stderr to file.log.
Redirection as in the answer above is very standard, but sometimes you really want to capture everything in a session. For that you can use the script command.
$ script /path/to/output_file
[starts a subshell]
$ ./program
$ exit
$ cat /path/to/output_file
The advantage of script is that you don't need to worry about shell semantics or knowing which shell you're running. The disadvantage is that it really does capture everything that makes it to your terminal, including control codes, delete keys, and so on.
