Pipe Find command "stderr" to a command - unix

Hi I have a peculiar problem and I'm trying hard to find (pun intended) a solution for it.
$> find ./subdirectory -type f 2>>error.log
I get an error, something like, "find: ./subdirectory/noidea: Permission denied" from this command and this will be redirected to error.log.
Is there any way I can pipe the stderr to another command before the redirection to error.log?
I want to be able to do something like
$> find ./subdirectory -type f 2 | sed "s#\(.*\)#${PWD}\1#" >> error.log
where I want to pipe only the stderr to the sed command and get the whole path of the find command error.
I know piping doesn't work here and is probably not the right way to go about it.
My problem is that I need both stdout and stderr, and they both have to be processed through different things simultaneously.
EDIT:
Ok. A slight modification to my problem.
Now, I have a shell script, solve_problem.sh
In this shell script, I have the following code
ErrorFile="error.log"
for directories in `find ./subdirectory -type f 2>> $ErrorFile`
do
field1=`echo $directories | cut -d / -f2`
field2=`echo $directories | cut -d / -f3`
done
Same problem but inside a shell script. The "find: ./subdirectory/noidea: Permission denied" error should go into $ErrorFile and stdout should get assigned to the variable $directories.

Pipe stderr and stdout simultaneously - idea taken from this post:
(find /boot | sed s'/^/STDOUT:/' ) 3>&1 1>&2 2>&3 | sed 's/^/STDERR:/'
Sample output:
STDOUT:/boot/grub/usb_keyboard.mod
STDERR:find: `/boot/lost+found': Permission denied
Bash redirections like 3>&1 1>&2 2>&3 swap stderr and stdout.
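A minimal way to see the swap, using a hypothetical command that writes one line to each stream:

```shell
# "out" goes to stdout and "err" to stderr; after the swap, "err"
# travels down the pipe into sed while "out" ends up on stderr.
{ echo out; echo err >&2; } 3>&1 1>&2 2>&3 | sed 's/^/piped: /'
```

Only the line that was originally on stderr gets the piped: prefix.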
I would modify your sample script to look like this:
#!/bin/bash
ErrorFile="error.log"
(find ./subdirectory -type f 3>&1 1>&2 2>&3 | sed "s#^#${PWD}: #" >> $ErrorFile) 3>&1 1>&2 2>&3 | while read line; do
    field1=$(echo "$line" | cut -d / -f2)
    ...
done
Notice that I swapped stdout & stderr twice.
Small additional comment - look at -printf option in find manual page. It might be useful to you.

If you need to redirect stderr to stdout so that the following command in the pipe gets it as its input, then you can use 2>&1.
For more information, please have a look at the all about redirection how-to.
Edit: Even if you need to pass stdout further down the pipe, you can use sed to filter the error messages and write them to a file:
$ find . -type f 2>&1 | sed '/^find:/{
> w error.log
> d
> }'
In this example:
in the find command, stderr is redirected to stdout
in sed command errors from find that match a regular expression are written to a file (w error.log) and removed from output (d).
any command following sed in the pipeline will receive the remaining output from find.
Note: This will work as long as all the error messages from find start with find:. Otherwise, the regular expression in sed should be modified to properly match all cases.
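The w/d trick can be checked on its own with simulated find output (err.log here is a hypothetical file name):

```shell
# One line pretends to be a find error, the other a real path;
# the error line is written to err.log and deleted from the stream.
printf 'find: ./x: Permission denied\n./a/file\n' | sed '/^find:/{
w err.log
d
}'
```

Only ./a/file remains on stdout; the error line lands in err.log.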

You can try this (on bash), which appears to work:
find ./subdirectory -type f 2> >(sed "s#\(.*\)#${PWD}\1#" >> error.log)
This does the following:
2> redirects stderr to
>(...) a process substitution (running sed, which appends to error.log)
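Applied to the edited version of the question, a sketch (assumes bash; error.log and the field variables are taken from the question's script):

```shell
#!/bin/bash
ErrorFile="error.log"
# stderr is filtered through sed into $ErrorFile; stdout feeds the loop.
find ./subdirectory -type f 2> >(sed "s#^#${PWD}#" >> "$ErrorFile") |
while IFS= read -r path; do
    field1=$(echo "$path" | cut -d / -f2)
    field2=$(echo "$path" | cut -d / -f3)
    printf '%s %s\n' "$field1" "$field2"
done
```

Unlike the for loop in the question, the while loop here runs in a subshell, so field1 and field2 are not visible after the loop ends.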


bzgrep not printing the file name

find . -name '{fileNamePattern}*.bz2' | xargs -n 1 -P 3 bzgrep -H "{patternToSearch}"
I am using the command above to find out a .bz2 file from set of files that have a pattern that I am looking for. It does go through the files because I can see the pattern that I am trying to find being printed on the console but I don't see the file name.
If you look at the bzgrep script (for example this version for OS X) you will see that it pipes the output from bzip2 through grep. That process loses the original filenames. grep never sees them so it cannot print them out (despite your -H flag).
Something like this should do, not exactly what you want but something similar. (You could get the prefix you were expecting by piping the output from bzgrep into sed/awk but that's a bit less simple of a command to write out.)
find . -name '{fileNamePattern}*.bz2' -printf '### %p\n' -exec bzgrep "{patternToSearch}" {} \;
I printed the file name through the echo command and xargs.
find . -name "*bz2" | parallel -j 128 echo -n {}\" \" | xargs bzgrep {pattern}
Etan is very close with his answer: grep indeed does not show the filename when it is given only one file, so you can make grep believe it's looking into multiple files just by adding the null device, so the command becomes:
find . -name '{fileNamePattern}*.bz2' -printf '### %p\n' -exec bzgrep "{patternToSearch}" {} /dev/null \;
(It's a dirty trick but it's helping me already for more than 15 years :-) )
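The effect is easy to verify with plain grep and a made-up file:

```shell
printf 'needle\n' > hay.txt
grep needle hay.txt            # one file: prints just the matching line
grep needle hay.txt /dev/null  # two files: prints hay.txt:needle
```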

Editing a file in-place with SED seems to prevent any further append operations by processes that are already running

I have a log file that is written to by a server. I wrote a bash script to send me an email if there is an error in the server. I would now like to remove the lines containing the errors so I don't keep getting emails. I accomplish this by doing the following:
sed -i "/WARNING/d" logs/console.log
After running sed however, no more changes are written to the log. I'm guessing this is because running sed closes any open file descriptors or something. However, when I edit the file and manually remove the warning lines with vi I don't have this problem.
I have also tried redirecting the server output myself with both '>' and '>>' operators and after editing the file with sed the same thing happens (i.e. they are no longer updated).
When sed -i rewrites the log file it actually replaces it with a new file (a new inode), so the server process keeps appending to the old, now-deleted file and its output never shows up in the new one. I don't know if this approach can work out. sed definitely doesn't have a flag to tweak the way it rewrites files with -i, and I don't know if the server can be tweaked to reopen its log.
So your best option might be a different approach: save the timestamp of the last error and look for errors after that timestamp. Something like this:
ts=
file=console.log
while :; do
    if test "$ts"; then
        if sed -e "1,/$ts/d" $file | grep -q WARNING; then
            sed -e "1,/$ts/d" $file | sendmail ...
            ts=$(tail -n 1 $file | cut -f1 -d' ')
        fi
    else
        if grep -q WARNING $file; then
            sendmail ... < $file
            ts=$(tail -n 1 $file | cut -f1 -d' ')
        fi
    fi
    sleep 15
done
This script is just to give you an idea, it can be improved.

sed edit file in place

I am trying to find out if it is possible to edit a file in a single sed command without manually streaming the edited content into a new file and then renaming the new file to the original file name.
I tried the -i option but my Solaris system said that -i is an illegal option. Is there a different way?
The -i option streams the edited content into a new file and then renames it behind the scenes, anyway.
Example:
sed -i 's/STRING_TO_REPLACE/STRING_TO_REPLACE_IT/g' filename
while on macOS you need:
sed -i '' 's/STRING_TO_REPLACE/STRING_TO_REPLACE_IT/g' filename
On a system where sed does not have the ability to edit files in place, I think the better solution would be to use perl:
perl -pi -e 's/foo/bar/g' file.txt
Although this does create a temporary file, it replaces the original because an empty in place suffix/extension has been supplied.
Note that on OS X you might get errors like "invalid command code" or other strange messages when running this command. To fix this issue try
sed -i '' -e "s/STRING_TO_REPLACE/STRING_TO_REPLACE_IT/g" <file>
This is because on the OSX version of sed, the -i option expects an extension argument so your command is actually parsed as the extension argument and the file path is interpreted as the command code. Source: https://stackoverflow.com/a/19457213
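If a script has to run on both flavours, one way is to probe which sed is installed first; a sketch (assumes GNU sed accepts --version and BSD/macOS sed rejects it):

```shell
# GNU sed accepts --version; BSD/macOS sed errors out on it.
if sed --version >/dev/null 2>&1; then
    sed -i 's/foo/bar/g' file.txt      # GNU form
else
    sed -i '' 's/foo/bar/g' file.txt   # BSD/macOS form
fi
```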
The following works fine on my mac
sed -i.bak 's/foo/bar/g' sample
We are replacing foo with bar in the sample file. A backup of the original file will be saved as sample.bak.
For editing in place without a backup, use the following command
sed -i 's/foo/bar/g' sample
Note the spacing on macOS: there -i'' collapses to plain -i and the sed script itself would be consumed as the backup suffix, so you must write -i '' with a space, as shown earlier.
One thing to note, sed cannot write files on its own as the sole purpose of sed is to act as an editor on the "stream" (ie pipelines of stdin, stdout, stderr, and other >&n buffers, sockets and the like). With this in mind you can use another command tee to write the output back to the file. Another option is to create a patch from piping the content into diff.
Tee method
sed 's/foo/bar/' <file> | tee <file>
Beware: this relies on sed reading the whole file before tee truncates it, so it can silently lose data on files larger than the pipe buffer.
Patch method
sed 's/foo/bar/' <file> | diff -p <file> /dev/stdin | patch
UPDATE:
Also note that patch does not need to be told which file to change: it reads the file name from the first line of the output from diff:
$ echo foobar | tee fubar
$ sed 's/oo/u/' fubar | diff -p fubar /dev/stdin
*** fubar 2014-03-15 18:06:09.000000000 -0500
--- /dev/stdin 2014-03-15 18:06:41.000000000 -0500
***************
*** 1 ****
! foobar
--- 1 ----
! fubar
$ sed 's/oo/u/' fubar | diff -p fubar /dev/stdin | patch
patching file fubar
Versions of sed that support the -i option for editing a file in place write to a temporary file and then rename the file.
Alternatively, you can just use ed. For example, to change all occurrences of foo to bar in the file file.txt, you can do:
echo ',s/foo/bar/g; w' | tr \; '\012' | ed -s file.txt
Syntax is similar to sed, but certainly not exactly the same.
Even if you don't have a -i supporting sed, you can easily write a script to do the work for you. Instead of sed -i 's/foo/bar/g' file, you could do inline file sed 's/foo/bar/g'. Such a script is trivial to write. For example:
#!/bin/sh
IN=$1
shift
trap 'rm -f "$tmp"' 0
tmp=$( mktemp )
<"$IN" "$#" >"$tmp" && cat "$tmp" > "$IN" # preserve hard links
should be adequate for most uses.
You could use vi
vi -c '%s/foo/bar/g' my.txt -c 'wq'
sed supports in-place editing. From man sed:
-i[SUFFIX], --in-place[=SUFFIX]
edit files in place (makes backup if extension supplied)
Example:
Let's say you have a file hello.txt with the text:
hello world!
If you want to keep a backup of the old file, use:
sed -i.bak 's/hello/bonjour/' hello.txt
You will end up with two files: hello.txt with the content:
bonjour world!
and hello.txt.bak with the old content.
If you don't want to keep a copy, just don't pass the extension parameter.
If you are replacing the same number of characters, and after carefully reading “In-place” editing of files...
You can also use the redirection operator <> to open the file to read and write:
sed 's/foo/bar/g' file 1<> file
See it live:
$ cat file
hello
i am here # see "here"
$ sed 's/here/away/' file 1<> file # Run the `sed` command
$ cat file
hello
i am away # this line is changed now
From Bash Reference Manual → 3.6.10 Opening File Descriptors for Reading and Writing:
The redirection operator
[n]<>word
causes the file whose name is the expansion of word to be opened for
both reading and writing on file descriptor n, or on file descriptor 0
if n is not specified. If the file does not exist, it is created.
Like Moneypenny said in Skyfall: "Sometimes the old ways are best."
Kincade said something similar later on.
$ printf ',s/false/true/g\nw\n' | ed {YourFileHere}
Happy editing in place.
Added '\nw\n' to write the file. Apologies for the delay in answering the request.
You didn't specify what shell you are using, but with zsh you could use the =( ) construct to achieve this. Something along the lines of:
cp =(sed ... file; sync) file
=( ) is similar to >( ) but creates a temporary file which is automatically deleted when cp terminates.
cp file.txt file.tmp && sed 's/foo/bar/g' < file.tmp > file.txt && rm file.tmp
This preserves all hard links, since the output is directed back to overwrite the contents of the original file (the same inode), and it avoids any need for a special version of sed. Note that with mv instead of cp, the original inode would leave with file.tmp and hard links would no longer see the edits.
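Whether hard links survive depends on how the temporary copy is made: moving the original away takes its inode with it, while copying and then redirecting back truncates and rewrites the same inode. A quick check of the copy-based variant (hypothetical file names):

```shell
printf 'foo\n' > orig.txt
ln orig.txt link.txt                 # hard link to the same inode
cp orig.txt orig.tmp &&
sed 's/foo/bar/g' < orig.tmp > orig.txt &&
rm orig.tmp
cat link.txt                         # the link sees the edit: bar
```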
To resolve this issue on Mac I had to add some unix functions to core-utils following this.
brew install grep
==> Caveats
All commands have been installed with the prefix "g".
If you need to use these commands with their normal names, you
can add a "gnubin" directory to your PATH from your bashrc like:
PATH="/usr/local/opt/grep/libexec/gnubin:$PATH"
Call with gsed instead of sed (gsed is provided by the gnu-sed package, installed analogously with brew install gnu-sed). The mac default doesn't like how grep -rl displays file names with the ./ prepended.
~/my-dir/configs$ grep -rl Promise . | xargs sed -i 's/Promise/Bluebird/g'
sed: 1: "./test_config.js": invalid command code .
I also had to use xargs -I{} sed -i 's/Promise/Bluebird/g' {} for files with a space in the name.
Very good examples. I had the challenge of editing many files in place, and the -i option seems to be the only reasonable solution for use within the find command. Here is the script to add "version:" in front of the first line of each file:
find . -name pkg.json -print -exec sed -i '.bak' '1 s/^/version: /' {} \;
In case you want to replace strings containing '/', you can use '?' as the delimiter instead, i.e. replace '/usr/local/bin/python' with '/usr/bin/python3' for all *.py files:
find . -name \*.py -exec sed -i 's?/usr/local/bin/python?/usr/bin/python3?g' {} \;

single quotes not working in shell script

I have a .bash_profile script and I can't get the following to work
alias lsls='ls -l | sort -n +4'
when I type the alias lsls
it does the sort but then posts this error message
"-bash: +4: command not found"
How do I get the alias to work with '+4'?
It works when type ls -l | sort -n +4 in the command line
I'm in OS X 10.4
Thanks for any help
bash-4.0$ ls -l | sort -n +4
sort: open failed: +4: No such file or directory
You need ls -l | sort -n -k 5; GNU sort is different from BSD sort.
alias lsls='ls -l | sort -n -k 5'
Edit: updated to reflect change from 0 based indexing to 1 based indexing, thanks Matthew.
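The 1-based -k numbering is easy to confirm with a made-up two-column listing:

```shell
# Sort numerically on the second whitespace-separated field.
printf 'b 10\na 2\n' | sort -n -k 2
# a 2
# b 10
```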
alias lsls='ls -l | sort -n +4' should work fine with the sort in OS X 10.4 (which does support that syntax).
when I type the alias lsls it does the sort but then posts this error message "-bash: +4: command not found"
Is it possible that you inserted a stray newline when editing your .bash_profile? e.g. if you ended up with something like this:
alias lsls='ls -l | sort -n
+4'
...that might explain the error message.
As an aside, you can get the same effect without piping through sort at all, using:
ls -lrS
This link discusses a very similar alias containing a pipe.
The problem may not have been the pipe, but the interesting solution was to use a function.

Suppress find & grep "cannot open" output

I was given this syntax by user phi
find . | awk '!/((\.jpeg)|(\.jpg)|(\.png))$/ {print $0;}' | xargs grep "B206"
I would like to suppress the grep: can't open..... and find: cannot open lines from the results. Sample output to be ignored:
grep: can't open ./cisc/.xdbhist
find: cannot open ./cisc/.ssh
Have you tried redirecting stderr to /dev/null ?
2>/dev/null
So the above redirects stream no.2 (which is stderr) to /dev/null. That's shell dependent, but the above should work for most. Because find and grep are different processes, you may have to do it for both, or (perhaps) execute in a subshell. e.g.
find ... 2>/dev/null | xargs grep ... 2>/dev/null
Here's a reference to some documentation on bash redirection. Unless you're using csh, this should work for most.
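A tiny reproduction with a deliberately unreadable directory (assumes a non-root shell, otherwise the permission error never occurs; either way the directory entry itself still appears on stdout):

```shell
mkdir -p locked && chmod 000 locked
find . 2>/dev/null        # ./locked is still listed; the descent error is discarded
chmod 755 locked          # restore permissions so the directory can be removed
```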
The option flag -s will suppress these messages for the grep command.
