How to sed search and replace without changing ownership - unix

I found this command line search and replace example:
find . -type f -print0 | xargs -0 sed -i 's/find/replace/g'
It worked fine except it changed the date and file ownership on EVERY file it searched through, even those that did not contain the search text.
What's a better solution to this task?

Using the -c option (if your sed supports it; see the note on Red Hat's patched sed below) ought to cause sed to preserve ownership. As you are using the command, sed is in fact rewriting every file, even those that do not contain find and therefore come out unchanged.

The easiest way to fix that would be to only execute sed on the files that contain the text, by using grep first:
find . -type f | while read -r file; do
    grep -q find "$file" && sed -i 's/find/replace/g' "$file"
done
This does require reading each file twice (in the worst case), so it might be a little slower. Hopefully, though, your OS should keep the file in its disk cache, so you shouldn't see much of a slowdown, since this process is definitely I/O-bound, not CPU-bound.
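If your grep and xargs support NUL-separated file lists (the GNU versions do), a variant that avoids the explicit loop, and also copes with filenames containing spaces, is to let grep build the file list so sed -i only ever touches files that really contain the text; a sketch:
# -l lists matching files, -Z separates them with NULs; -r on xargs skips the run if nothing matched
grep -rlZ 'find' . | xargs -0 -r sed -i 's/find/replace/g'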

Some distros' versions of sed (namely, Red Hat and family) have added a -c option that accomplishes this, in which case see Isaac's answer.
But for those with an unpatched GNU sed, the easiest way I've found is to simply substitute with perl, which rewrites files in place much as sed -c does where that option is available. The following commands are basically equivalent:
sed -ci 's/find/replace/'
perl -pi -e 's/find/replace/'
Just don't get too excited and write perl -pie; as with sed -ie, the e is not interpreted as another option but taken as the argument to -i, i.e. used as the suffix for backing up the original file. See perldoc perlrun for more details.
Perl's regex parsing is a little different from sed's (better, imo) for more complicated things: generally speaking, you need fewer backslashes.
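For example (a minimal illustration): grouping, alternation and repetition counts all need backslashes in sed's basic regular expressions (and \| is a GNU extension at that), but not in perl:
sed 's/\(foo\|bar\)\{2,\}/X/' file
perl -pe 's/(foo|bar){2,}/X/' file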

Related

Grep to find a pattern and replace in same line

I have a project directory with folders containing .html files. I want to find those files which have the pattern -
'btn-primary.*{.*Save'
And replace the
'btn-primary' word with 'btn-primary Save'
only in those lines.
What I have done:
grep -rl -e 'btn-primary.*{Save' . |xargs sed -i 's/btn-primary/btn-primary Save/g'
What this did:
This found all files that have that pattern, which is okay. Then sed ran on all of those files and replaced 'btn-primary' with 'btn-primary Save' wherever it occurred, which is not what I want.
What I want: to replace on those lines where there is 'Save' somewhere after 'btn-primary'.
Any help will be very much appreciated.
Regards,
Rahul
Why are you using grep at all? Sed does pattern matching:
sed -e 's/btn-primary\(.*{.*Save\)/btn-primary Save\1/g'
or:
sed -e 's/\(btn-primary\)\(.*{.*Save\)/\1 Save\2/g'
If you are using grep to try to trim down the number of files that sed will operate on, you're fooling yourself if you believe that is more efficient. By doing that, you will read every file that doesn't match only once, but every file that does match will be read twice. If you only use sed, every file will be read only once.
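That said, if you want the substitution spelled out as "only on matching lines", a sed address does exactly that; a sketch equivalent in spirit to the commands above (file.html is just a placeholder):
sed -e '/btn-primary.*{.*Save/ s/btn-primary/btn-primary Save/g' file.html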

To replace a set of strings in a file with another string in a unix file

I have a parameter name like
PAR="DBS_OUT"
and I have a text file (Repla.txt) with the values below:
DB_TECH
DB_ADMIN
DB_TERA
DB_APS
These values in the file can differ, but the parameter value will remain the same.
Now I have a Unix shell script where I need to find all the values mentioned in the file (Repla.txt)
and replace them with the parameter (PAR). Since the values in Repla.txt are not fixed, I am not able to use a plain sed command, e.g.:
sed 's/old/new/g' input.txt > output.txt
Can anyone please help me.
Thanks
I'm not sure I completely understand what you are trying to do but if you are trying to use the values contained in Repla.txt as the strings that you want to replace in other files then the following bash line will do what you want:
PAR="DBS_OUT"; for FIND in `cat Repla.txt`; do $( find /path/to/files -name 'test?.txt' -exec sed -i "s/$FIND/$PAR/g" '{}' \;); done;
It will replace the strings contained in Repla.txt with the string DBS_OUT in all files that match test?.txt in the dir (and subdirs) /path/to/files. You will need to understand how find works.
Also note that I am not telling sed to make a backup, so you probably want to test this out on some test files before you execute it for real. Hopefully you also have your scripts in source control, so it's not a big deal if you mess things up.
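A variant worth considering (just a sketch, reusing the same names; repla.sed is a hypothetical generated file) is to turn Repla.txt into a sed script once, so each target file is read a single time no matter how many values the list contains. This assumes the values contain no blank lines, slashes or regex metacharacters:
PAR="DBS_OUT"
sed "s|.*|s/&/$PAR/g|" Repla.txt > repla.sed    # each value becomes a line like s/DB_TECH/DBS_OUT/g
find /path/to/files -name 'test?.txt' -exec sed -i -f repla.sed '{}' \;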
Assuming the values you want replaced are in capital letters only and all start with DB_, a simple pattern will do:
sed 's/DB_[A-Z]*/DBS_OUT/g' repla.txt > destination_file
or
sed 's/DB_[A-Z]*/DBS_OUT/g' repla.txt

Modify grep output with find and replace cmd

I use grep to sort a big log file into a smaller one, but there is still a long directory path in the output log file which is common every time. I have to do a find-and-replace on it every time.
Isn't there any way I can do grep -r "format" log.log | <execute the find-and-replace thing>?
Sed will do what you want. Basic syntax to replace all the matches of foo with bar in-place in $file is:
sed -i 's/foo/bar/g' $file
If you're just wanting to delete rather than replace, simply leave out the 'bar' (so s/foo//g).
See this tutorial for a lot more detail, such as regex support.
sed -n '/match/s/pattern/repl/p'
Will print all the lines that match the regex match, with all instances of pattern replaced by repl. Since your lines may contain paths, you will probably want to use a different delimiter. / is customary, but you can also do:
sed -n '\#match#s##repl#p'
In the second case, omitting pattern will cause match to be used for the pattern to be replaced.
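Putting that together with the grep from the question, a sketch (the directory prefix here is hypothetical) that strips the common path in one pipeline, using | as the delimiter so the slashes need no escaping:
grep -r "format" log.log | sed 's|/some/long/common/dir/path/||' > small.log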

sed -i option is not working on solaris

I am using sed to replace a line with NULL in a file. The command I used is
sed -i "s/.*shayam.*//g" FILE
This is working fine in Linux: shayam is replaced with a blank in the FILE. But when I used this in Solaris it showed an error:
sed: illegal option -- i
How to use -i functionality of sed in solaris. Kindly help.
The -i option is GNU-specific. The Solaris version does not support the option.
You will need to install the GNU version, or rename the new file over the old one:
sed 's/.*shayam.*//g' FILE > FILE.new && mv FILE.new FILE
I just answered a similar question ("sed -i + what the same option in SOLARIS"), but for those who find this thread instead (I saw it in the related thread section):
The main problem I see with most of the answers given is that they don't work if you want to modify multiple files. The answer I gave in the other thread:
It isn't exactly the same as sed -i, but I had a similar issue. You can do this using perl:
perl -pi -e 's/find/replace/g' file
Doing the copy/move only works for single files. If you want to replace some text across every file in a directory and its sub-directories, you need something which does it in place. You can do this with perl and find:
find . -exec perl -pi -e 's/find/replace/g' '{}' \;
sed doesn't have an -i option.
You are probably using some vendor-specific variant of sed. If you want to use the vendor-specific non-standardized extensions of your vendor-specific non-standardized variant of sed, you need to make sure that you install said vendor-specific non-standardized variant and need to make sure that you call it and don't call the standards-compliant version of sed that is part of your operating environment.
Note that as always when using non-standardized vendor-specific extensions, there is absolutely no guarantee that your code will be portable, which is exactly the problem you are seeing.
In this particular case, however, there is a much better solution: use the right tool for the job. sed is a stream editor (that's why it is called "sed"), i.e. it is for editing streams, not files. If you want to edit files, use a file editor, such as ed:
ed FILE <<-HERE
,s/.*shayam.*//g
w
q
HERE
See also:
Unable to use SED to edit files fast
How can I replace a specific line by line number in a text file?
Either cat the file into sed or redirect it in with <.
Then redirect (>) the result to a temp file and, if all goes well (&&), mv the temp file over the original file.
Example:
cat my_file | sed 's!A!B!' > my_temp_file && mv my_temp_file my_file
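If you need this across several files rather than one, the same temp-file idiom works in a loop with the stock Solaris sed; a sketch, assuming the targets match *.log:
for f in *.log; do
  sed 's/.*shayam.*//g' "$f" > "$f.new" && mv "$f.new" "$f"
done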

Shell script - search and replace text in multiple files using a list of strings

I have a file "changesDictionary.txt" containing (a variable number of) pairs of key-value strings.
e.g.
"textToSearchFor" = "theReplacementText"
(The format of the dictionary is unimportant, and can be changed as required.)
I need to iterate through the contents of a given directory, including sub-directories. For each file encountered with the extension ".txt", we search for each of the keys in changesDictionary.txt, replacing each found instance with the replacement string value.
i.e. a search and replace over multiple files, but using a list of search/replace terms rather than a single search/replace term.
How could I do this? (I have studied single search/replace examples, but do not understand how to do multiple searches within a file.)
The implementation (bash, perl, whatever) is not important as long as I can run it from the command line in Mac OS X. Thanks for any help.
I'd convert your changesDictionary.txt file to a sed script, with... sed:
$ sed -e 's/^"\(.*\)" = "\(.*\)"$/s\/\1\/\2\/g/' \
changesDictionary.txt > changesDictionary.sed
Note, any special characters for either regular expressions or sed expressions in your dictionary will be falsely interpreted by sed, so your dictionary can either contain only the most primitive search-and-replacements, or you'll need to maintain the sed file with valid expressions. Unfortunately, there's no easy way in sed to either shut off regular expressions and use only string matching, or to quote your searches and replacements as "literals".
With the resulting sed script, use find and xargs -- rather than find -exec -- to convert your files with the sed script as quickly as possible, by processing them more than one at a time.
$ find somedir -type f -print0 \
| xargs -0 sed -i -f changesDictionary.sed
Note, the -i option of sed edits files "in-place", so be sure to make backups for safety, or use -i~ to create tilde-backups.
Final note, using search and replaces can have unintended consequences. Will you have searches that are substrings of other searches? Here's an example.
$ cat changesDictionary.txt
"fix" = "broken"
"fixThat" = "Fixed"
$ sed -e 's/^"\(.*\)" = "\(.*\)"$/s\/\1\/\2\/g/' changesDictionary.txt \
| tee changesDictionary.sed
s/fix/broken/g
s/fixThat/Fixed/g
$ mkdir subdir
$ echo fixThat > subdir/target.txt
$ find subdir -type f -name '*.txt' -print0 \
| xargs -0 sed -i -f changesDictionary.sed
$ cat subdir/target.txt
brokenThat
Should "fixThat" have become "Fixed" or "brokenThat"? Order matters for sed script. Similarly, a search and replace can be search and replaced more than once -- changing "a" to "b", may be changed by another search-and-replace later from "b" to "c".
Perhaps you've already considered both of these, but I mention because I've tried what you were doing before and didn't think of it. I don't know of anything that simply does the right thing for doing multiple search and replacements at once. So, you need to program it to do the right thing yourself.
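If overlapping keys like the fix/fixThat pair are a concern, one rough workaround (a sketch, and only a heuristic: it sorts by whole-line length rather than strictly by key length) is to regenerate the sed script with the longest entries first:
awk '{ print length, $0 }' changesDictionary.sed | sort -rn | cut -d' ' -f2- > changesDictionary.ordered.sed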
Here are the basic steps I would do
Copy the changesDictionary.txt file
In it replace "a"="b" to the equivalent sed line: e.g. (use $1 for the file name)
sed -e 's/a/b/g' $1
(you could write a script to do this or just do it by hand, if you just need to do this once and it's not too big).
If the files are all in one directory, then you can do something like:
ls *.txt | xargs -n 1 scriptFromStep2.sh
If they are in subdirs, use a find to call that script on all of the files, something like
find . -name '*.txt' -exec scriptFromStep2.sh {} \;
These aren't exact, do some experiments to make sure you get it right -- it's just the approach I would use.
(but, if you can, just use perl, it would be a lot simpler)
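On that note, a perl sketch (using the file names from the question; \Q...\E treats each key as a literal string, which sidesteps the regex-metacharacter caveat from the other answer):
find . -name '*.txt' -exec perl -i -pe '
  BEGIN {
    open my $d, "<", "changesDictionary.txt" or die "changesDictionary.txt: $!";
    while (<$d>) { push @pairs, [$1, $2] if /^"(.*)" = "(.*)"$/ }
  }
  for my $p (@pairs) { s/\Q$p->[0]\E/$p->[1]/g }     # apply every pair to the current line
' {} +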
Use this tool, which is written in Perl - with quite a lot of bells and whistles - oldie, but goodie:
http://unixgods.org/~tilo/replace_string/
Features:
do multiple search-replace or query-search-replace operations
search-replace expressions can be given on the command line or read from a file
processes multiple input files
recursively descend into directory and do multiple search/replace operations on all files
user defined perl expressions are applied to each line of each input file
optionally run in paragraph mode (for multi-line search/replace)
interactive mode
batch mode
optionally backup files and backup numbering
preserve modes/owner when run as root
ignore symbolic links, empty files, write protected files, sockets, named pipes, and directory names
optionally replace lines only matching / not matching a given regular expression
This script has been used quite extensively over the years with large data sets.
#!/bin/bash
f="changesDictionary.txt"
find /path -type f -name "*.txt" | while read -r FILE
do
  awk 'BEGIN{ FS="=" }
  FNR==NR{ s[$1]=$2; next }
  {
    for(i in s){
      if( $0 ~ i ){ gsub(i, s[i]) }
    }
    print $0
  }' "$f" "$FILE" > temp
  mv temp "$FILE"
done
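Note that this script expects the dictionary as plain key=value pairs, one per line and without quotes or spaces around the =, rather than the quoted format shown in the question; e.g. (hypothetical line):
textToSearchFor=theReplacementText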
# Loop over the scripts and apply the path substitution to each one in place
for i in /script/arq*.sh
do
  echo "FILE ${i}"
  sed -i 's|/$file_path1|/file_path2|g' "${i}"
done
