The Unix tee command copies its standard input to stdout AND to a file.
What I need is something that works the other way around, merging several inputs into one output: I need to concatenate the stdout of two (or more) commands.
Not sure what the semantics of this app should be - let's suppose each argument is a complete command.
Example:
> eet "echo 1" "echo 2" > file.txt
should generate a file that has contents
1
2
I tried
> echo 1 && echo 2 > zz.txt
It doesn't work (the redirection applies only to the second echo, so only 2 ends up in zz.txt).
Side note: I know I could just append the outputs of each command to the file, but I want to do this in one go (actually, I want to pipe the merged outputs to another program).
Also, I could roll my own, but I'm lazy whenever I can afford it :-)
Oh yeah, and it would be nice if it worked in Windows (although I guess any bash/linux-flavored solution works, via UnxUtils/msys/etc)
Try
( echo 1; echo 2 ) > file.txt
That spawns a subshell and executes the commands there.
{ echo 1; echo 2; } > file.txt
is possible, too. That does not spawn a subshell (the semicolon after the last command is important)
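Either form can be piped straight into another program, which is what you ultimately want to do; a minimal sketch, where otherprogram stands in for whatever consumes the merged output:
( echo 1; echo 2 ) | otherprogram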
I guess what you want is to run both commands in parallel, and pipe both outputs merged to another command.
I would do:
( echo 1 & echo 2 ) | cat
Where "echo 1" and "echo 2" are the commands generating the outputs and "cat" is the command that will receive the merged output.
echo 1 > zz.txt && echo 2 >> zz.txt
That should work. All you're really doing is running two commands one after the other: the first redirects its output to a file, and then, if that was successful, the second appends its output to the end of the file written in the first place.
Related
So when I try to do
echo cat file1.txt
The output is cat file1.txt
However, when I do:
echo 'cat file1.txt'
The output is the actual contents of the file1.txt
Although I recognize the echo command is not required at all to achieve the goal of displaying the file contents, I was curious as to why the outputs differed in these given situations
As you are confused about the behaviour:
Everything inside backticks (``) is run as a separate command, and the backticks are replaced by that command's output.
You can put `` directly around a command to assign that command's output to a variable;
see below:
[cloudera@quickstart sub1]$ a=`echo "Hello"`
[cloudera@quickstart sub1]$ echo $a
Hello
[cloudera@quickstart sub1]$
In the above example you can see that the output of echo "Hello" is assigned to a variable named a.
Are you sure you typed
echo 'cat file.txt'
and not
echo `cat file.txt`
?
In the first case, I have no clue what’s going on. In the second, however, there is no mystery. In most shells, typing
foo `bar`
means, somewhat (over)simplified, "run bar, and use its output as the command parameters for running foo".
If you are running Bash, this is described in the section on command substitution in the manual.
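A small demonstration of the difference, assuming file1.txt contains the single line hello world:
$ echo 'cat file1.txt'
cat file1.txt
$ echo `cat file1.txt`
hello world
$ echo $(cat file1.txt)
hello world
The last form is the modern equivalent of the backtick syntax and nests more cleanly.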
Let's suppose you have a terminal (T1) open with PID 6350.
Type:
echo "ls\n" > /proc/6350/fd/0 (written in another terminal (T2)).
This writes ls and the newline in T1 but does not execute it. Why?
I also tried using
cat|bash with echo "ls\n" > /proc/catPID/fd/0
but it is still not executed.
Any idea?
Thanks,
Edit:
One possible trick:
mkfifo toto
bash < toto
echo "ls" > toto
First, if you want echo to interpret the \n as a newline you have to call it with -e.
Secondly, what you want (hijacking a terminal) is not (easily) doable; see unix.stackexchange. I would use screen on both sessions (one with the -x option).
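A rough sketch of the screen approach (the session name shared is just an example); anything typed in either terminal is then executed in the same shared shell:
screen -S shared     # in T1: start a named session
screen -x shared     # in T2: attach to that same session in multi-display mode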
Let's say I do this in a unix shell
$ some-script.sh | grep mytext
$ echo $?
this will give me the exit code of grep
but how can I get the exit code of some-script.sh
EDIT
Assume that the pipe operation is immutable, i.e. I cannot break it apart and run the two commands separately.
There are multiple solutions, it depends on what you want to do exactly.
The easiest and most understandable way would be to send the output to a file, then grep it after saving the exit code:
tmpfile=$(mktemp)
./some-script.sh > "$tmpfile"
retval=$?
grep mytext "$tmpfile"
rm "$tmpfile"
A trick from the comp.unix.shell FAQ (#13) explains how using the pipeline in the Bourne shell should help accomplish what you want:
You need to use a trick to pass the exit codes to the main
shell. You can do it using a pipe(2). Instead of running
"cmd1", you run "cmd1; echo $?" and make sure $? makes it way
to the shell.
exec 3>&1
eval `
  # now, inside the backquotes, fd4 goes to the pipe
  # whose other end is read and passed to eval;
  # fd1 is the normal standard output preserved by
  # the "exec 3>&1" on the line before
  exec 4>&1 >&3 3>&-
  {
    cmd1 4>&-; echo "ec1=$?;" >&4
  } | {
    cmd2 4>&-; echo "ec2=$?;" >&4
  } | cmd3
  echo "ec3=$?;" >&4
`
If you're using bash:
PIPESTATUS
An array variable (see Arrays) containing a list of exit status values from the processes in the most-recently-executed foreground pipeline (which may contain only a single command).
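A quick illustration at an interactive bash prompt, using false and true just to produce known exit codes:
$ false | true
$ echo "${PIPESTATUS[@]}"
1 0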
There is a utility named mispipe which is part of the moreutils package.
It does exactly that: mispipe some-script.sh 'grep mytext'
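Because mispipe returns the exit status of the first command rather than the last, the ordinary $? check then reflects some-script.sh; a minimal sketch:
mispipe some-script.sh 'grep mytext'
echo $?    # exit code of some-script.sh, not of grep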
First approach: temporarily save the exit status in a file. This requires creating a subshell using parentheses:
# subshell with exit code saving, piped into the next command
(your_script.sh.pl.others; echo $? >/tmp/myerr) | grep sh
exitcode=$(cat /tmp/myerr)   # restore saved exit code
echo $exitcode               # and print it
Another approach, presented by Randy above, with a simpler implementation:
some-script.sh | grep mytext
echo ${PIPESTATUS[0]}   # print exit code of the first command; arrays are indexed from 0
That's all. Both work under bash (I know, a bashism). Good luck :)
Neither approach writes the piped data itself to a physical file, only the exit code.
I'm trying to write a script that breaks up a VERY large file into smaller pieces that are then sent to a script that runs in the background. The motivation is that if the script is running in the background, I can run the pieces in parallel.
Here is my code. ./seq works just like the normal seq command (which Mac doesn't have), and $1 is the huge file to be split.
echo "Splitting and Running Script"
for i in $(./seq 0 14000000 500000)
do
awk ' { if (NR>='$i' && NR<'$(($i+500000))') { print $0 > "xPart'$i'" } }' $1
python FastQ2Seq.py xPart$i &
done
wait
echo "Concatenating"
for k in *.out.seq
do
cat $k >> original.seq
done
for j in *.out.qul
do
cat $j >> original.qul
done
echo "Cleaning"
rm xPart*
My problem is that only xPart0 is made and it only has 499995 lines in it before the program hangs. I put some debugging echos in the script and I know the awk statement is what stops the script. I just can't figure out what's going wrong.
Check out the split command --
split -- split a file into pieces
Output fixed-size pieces of INPUT to PREFIXaa, PREFIXab, ...; default
size is 1000 lines, and default PREFIX is `x'. With no INPUT, or when
INPUT is -, read standard input.
Should be much faster, reliable, and cleaner than running awk in a loop!
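A rough sketch of how split could replace the awk loop here (xPart is the prefix the original script already uses; -l sets the number of lines per piece):
split -l 500000 "$1" xPart
for f in xPart*
do
python FastQ2Seq.py "$f" &
done
wait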
echo "Splitting and Running Script"
# split into smaller files of 50000 lines each, if I understand your problem correctly
awk 'NR%50000==1{++c}{print $0 > "xPart"c".txt"}' file
# or use split -l 50000
for file in xPart*
do
python FastQ2Seq.py "$file" &
done
echo "Concatenating"
cat *.out.seq >> original.seq
cat *.out.qul >> original.qul
If your seq truly works like the standard seq, you're calling it wrong. The proper command line for seq is:
seq FIRST INCREMENT LAST
So you would need to change your seq command line to:
seq 0 500000 14000000
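With that change, the loop header from the question becomes (everything else stays the same):
for i in $(./seq 0 500000 14000000)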
I want my ksh script to have different behaviors depending on whether there is something incoming through stdin or not:
(1) cat file.txt | ./script.ksh (then do "cat <&0 >./tmp.dat" and process tmp.dat)
vs. (2) ./script.ksh (then process $1 which must be a readable regular file)
Checking whether stdin is a terminal with [ -t 0 ] is not helpful, because my script is called from another script.
Doing "cat <&0 >./tmp.dat" to check tmp.dat's size hangs up waiting for an EOF from stdin if stdin is "empty" (2nd case).
How to just check if stdin is "empty" or not?!
EDIT: You are running on HP-UX
Tested [ -t 0 ] on HP-UX and it appears to be working for me. I have used the following setup:
/tmp/x.ksh:
#!/bin/ksh
/tmp/y.ksh
/tmp/y.ksh:
#!/bin/ksh
test -t 0 && echo "terminal!"
Running /tmp/x.ksh prints: terminal!
Could you confirm the above on your platform, and/or provide an alternate test setup more closely reflecting your situation? Is your script ultimately spawned by cron?
EDIT 2
If desperate, and if Perl is available, define:
stdin_ready() {
  TIMEOUT=$1; shift
  perl -e '
    my $rin = "";
    vec($rin, fileno(STDIN), 1) = 1;
    select($rout = $rin, undef, undef, '$TIMEOUT') < 1 && exit 1;
  '
}
stdin_ready 1 || echo 'stdin not ready in 1 second, assuming terminal'
EDIT 3
Please note that the timeout may need to be significant if your input comes from sort, ssh etc. (all these programs can spawn and establish the pipe with your script seconds or minutes before producing any data over it.) Also, using a hefty timeout may dramatically penalize your script when there is nothing on the input to begin with (e.g. terminal.)
If potentially large timeouts are a problem, and if you can influence the way in which your script is called, then you may want to force the callers to explicitly instruct your program whether stdin should be used, via a custom option or in the standard GNU or tar manner (e.g. script [options [--]] FILE ..., where FILE can be a file name, a - to denote standard input, or a combination thereof, and your script would only read from standard input if - were passed in as a parameter.)
This strategy works for bash, and would likely work for ksh. Poll 'tty':
#!/bin/bash
set -a
if [ "$( tty )" == 'not a tty' ]
then
STDIN_DATA_PRESENT=1
else
STDIN_DATA_PRESENT=0
fi
if [ ${STDIN_DATA_PRESENT} -eq 1 ]
then
echo "Input was found."
else
echo "Input was not found."
fi
Why not solve this in a more traditional way, and use the command line argument to indicate that the data will be coming from stdin?
For an example, consider the difference between:
echo foo | cat -
and
echo foo > /tmp/test.txt
cat /tmp/test.txt
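A rough sketch of how that convention could look in the asker's script (process is a hypothetical placeholder for whatever the script does with its input):
#!/bin/ksh
# Usage: script.ksh FILE    or    some-command | script.ksh -
if [ "$1" = "-" ]; then
cat > ./tmp.dat    # read stdin only when explicitly asked to
process ./tmp.dat
else
process "$1"    # otherwise $1 must be a readable regular file
fi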