Redirect stdin stdout to multiple files - unix

I am using the tcsh shell.
I am trying to write the same output to two files concurrently:
in one file the stdout should go to the start of the file,
and in the second file the stdout should go to the end of the file.
I have tried doing this:
./something 2>&1 | tee log1.txt 1> log2.txt
Only log1.txt ends up with the stdout data.
Any ideas?
Thanks,
Koby

You should simply call:
./something | tee file1.txt file2.txt file3.txt
EDIT: Ugly fix to append/prepend
./something | tee -a file1.txt | cat - file2.txt > tmp && mv tmp file2.txt
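Applied to the filenames in the question (append to log1.txt, prepend to log2.txt), the same trick would look like the line below; the temp file is unavoidable because a file cannot be written at its start in place:
./something 2>&1 | tee -a log1.txt | cat - log2.txt > log2.tmp && mv log2.tmp log2.txt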

Related

Get size of file on remote server

I have a requirement where I need to make an SFTP connection to a remote server, get the size of a file on the remote server and, depending on the size, fetch the file onto the local server.
Is there any command in SFTP to get the size of the file?
If you'd like the size output to be human readable, try: ls -lah
You can get the file size of the remote files using the ls command by passing parameters:
To get the size of the file, pass ls -l
To get the size of the file (hidden files included), pass ls -al
To get it in human-readable format, pass ls -lh or ls -alh
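As a sketch, you can also run ls non-interactively through sftp's batch mode (host and path are placeholders, and key-based authentication is assumed since batch mode cannot prompt for a password):
echo "ls -l /remote/path/filename" | sftp -b - user@remotehost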
You can get the size using James's answer with awk:
ls -l | grep "filename" | awk '{print $5}'
If you are using it in a script and want to check using logic, you can store the file size in a variable like so.
varname=$(ls -l | grep "filename" | awk '{print $5}')
Then call sftp to do the transfer.
For a remote file, maybe do this:
filesize=$(ssh user@domain.ex << 'EOT'
ls -l | grep "filename" | awk '{print $5}'
EOT
)
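If plain ssh access is available, another minimal sketch of the size check plus conditional download (host, path and the 1 MB threshold are illustrative; assumes GNU stat on the remote host):
filesize=$(ssh user@remotehost stat -c %s /remote/path/filename)
if [ "$filesize" -lt 1048576 ]; then
    sftp user@remotehost:/remote/path/filename /local/dir/
fi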

Line count in UNIX by reading file name from another file

I have a file FILE1.TXT. It contains only one file name FILE2.TXT.
How will I find the record count / line count of FILE2.TXT using only FILE1.TXT? What I have already tried is:
cat FILE1.TXT | wc -l
But the above command did not work.
Actually, I need to display the output as below:
File name is FILE2.TXT and the count is 2.
What I have already tried is (using the below statement inside a script file):
echo "File name is "`cat FILE1.TXT`" and the count is " `wc -l < $(cat FILE1.TXT)`
But the above command did not work and gave this error:
syntax error at line 1: `(' unexpected
For a POSIX-compliant shell:
wc -l $(cat FILE1.txt)
or, with Bash:
wc -l $(<FILE1.txt)
These will both report the file name (and will also work if there are multiple file names in FILE1.txt). If you don't want the file name reported (but there's only one name in the file), you could use:
wc -l < $(cat FILE1.txt)
wc -l < $(<FILE1.txt)
file=$(cat FILE1.txt | grep -o "FILE2.txt")
cat "$file" | wc -l

Piping into rm command

I want to delete all files from a directory whose names contain "2".
This command works well:
ls | grep [*2*]
but when I try to pipe the output from that command to the rm command
ls | grep [*2*] | rm
I get the error "Try `rm --help' for more information."
Please help.
Why not use the wildcarding in the shell directly?
e.g.
$ rm *2*
I don't think you need the ls or the grep. Your above problem stems from the fact that you're piping output into the stdin of rm, whereas you want to supply command line arguments to rm. rm doesn't read from stdin.
To pipe output from another command to rm, you must use the xargs command with rm.
Try this
ls | grep [*2*] | xargs rm
The output will be passed as arguments to the rm command.
You need to feed every line to the rm command as an argument. For this you need xargs along with the pipe.
So modify the command to: ls -1 | grep [*2*] | xargs rm -rf
Just complementing the other answers: instead of running ls and then grep, you could use find.
find . -name "*2*" | xargs rm

xargs to copy one file into several

I have a directory that has one file with information (call it masterfile.inc) and several files that are empty (call them file1.inc through file20.inc).
I'm trying to formulate an xargs command that copies the contents of masterfile.inc into all of the empty files.
So far I have
ls -ltr | awk '{print $9}' | grep -v masterfile | xargs -I {} cat masterfile.inc > {}
Unfortunately, all this does is create a file called {} and print masterfile.inc into it N times.
Is there something I'm missing with the syntax here?
Thanks in advance
You can use this command to copy the file 20 times:
$ tee <masterfile.inc >/dev/null file{1..20}.inc
Note: file{1..20}.inc will expand to file1.inc, file2.inc, ... , file20.inc
If your destination filenames are random:
$ shopt -s extglob
$ tee <masterfile.inc >/dev/null $(ls !(masterfile.inc))
Note: $(ls !(masterfile.inc)) will expand to all file in current directory except masterfile.inc (please don't use spaces in filename)
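If you want to stay with xargs, the redirection has to happen inside a shell spawned by xargs; otherwise your interactive shell performs the > {} redirection before xargs ever runs, which is why the literal {} file appears. A sketch along those lines (the pipeline is the asker's, only the sh -c wrapper is added):
ls | grep -v masterfile.inc | xargs -I {} sh -c 'cat masterfile.inc > "$1"' _ {}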
While the tee trick is really brilliant you might be interested in a solution that is easier to adapt for other situations. Here using GNU Parallel:
ls -ltr | awk '{print $9}' | grep -v masterfile | parallel "cat masterfile.inc > {}"
It takes literally 10 seconds to install GNU Parallel:
wget pi.dk/3 -qO - | sh -x
Watch the intro videos to learn more: https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Suppress find & grep "cannot open" output

I was given this syntax by user phi
find . | awk '!/((\.jpeg)|(\.jpg)|(\.png))$/ {print $0;}' | xargs grep "B206"
I would like to suppress the grep: can't open ... and find: cannot open lines from the results. Sample output to be ignored:
grep: can't open ./cisc/.xdbhist
find: cannot open ./cisc/.ssh
Have you tried redirecting stderr to /dev/null?
2>/dev/null
So the above redirects stream no. 2 (which is stderr) to /dev/null. That's shell-dependent, but the above should work for most shells. Because find and grep are different processes, you may have to do it for both, or (perhaps) execute in a subshell, e.g.:
find ... 2>/dev/null | xargs grep ... 2>/dev/null
See the bash documentation on redirection for more detail. Unless you're using csh, this should work for most shells.
The grep -s option flag will suppress these messages for the grep command.
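Putting both together on the pipeline from the question (only the stderr redirect and the -s flag are added):
find . 2>/dev/null | awk '!/((\.jpeg)|(\.jpg)|(\.png))$/ {print $0;}' | xargs grep -s "B206"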
