Remove last line from all files of a specific extension - unix

I have several files with the same extension .txt in a directory. I want to remove the last line from all .txt files.
What I did is:
find . -type f -name '*.txt' -exec sed '$d' {} \;
This prints the desired output from sed to the terminal for every .txt file.
What I want is to modify the respective files.

Try this:
sed -i '$d' *.txt
"$" is used as a line number and means the last line in the file.
"d" is usd to delete the respective line(last line in this case).
"*.txt" is used to select all files whose extension is .txt in the present directory.

You should use -i with the sed statement.
To modify a file in place, we need to specify -i in the sed command.
Your command should be:
find . -type f -name '*.txt' -exec sed -i '$d' {} \;
But please note that it will update all the .txt files, and you won't be able to revert, so please take a backup of important files.
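With GNU sed you can keep such a backup automatically by giving -i a suffix (.bak here is just an example name):
find . -type f -name '*.txt' -exec sed -i.bak '$d' {} \;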

Considering the risk of losing data while modifying a large number of files with sed, this is what I used after creating a new sub-directory:
awk 'FNR>1 {print last > "./modified/"FILENAME} {last=$0}' *.txt
This writes a copy of each .txt file, minus its last line, into the sub-directory named modified. (FNR rather than NR is needed so the line counter restarts for each input file; with NR, the last line of one file would leak into the next file's output.)
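Concretely, the sub-directory has to exist before awk runs, since awk's output redirection will not create it:
mkdir -p modified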
Ref: what is the easiest way to remove 1st and last line from file with awk?

Related

Grep to find a pattern and replace in same line

I have a project directory with folders containing .html files. I want to find those files which have the pattern -
'btn-primary.*{.*Save'
And replace the
'btn-primary' word with 'btn-primary Save'
only in those lines.
What I have done:
grep -rl -e 'btn-primary.*{.*Save' . | xargs sed -i 's/btn-primary/btn-primary Save/g'
What this did:
This found all the files that contain the pattern; that's okay. Then sed ran on all of those files and replaced 'btn-primary' with 'btn-primary Save' everywhere it occurred - which is not what I want.
What I want: to replace only on those lines where 'Save' appears somewhere after 'btn-primary'.
Any help will be very much appreciated.
Regards,
Rahul
Why are you using grep at all? Sed does pattern matching:
sed -e 's/btn-primary\(.*{.*Save\)/btn-primary Save\1/g'
or:
sed -e 's/\(btn-primary\)\(.*{.*Save\)/\1 Save\2/g'
If you are using grep to try to trim down the number of files that sed will operate on, you're fooling yourself if you believe that is more efficient. By doing that, you will read every file that doesn't match only once, but every file that does match will be read twice. If you only use sed, every file will be read only once.
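Putting that together for the whole project tree, a sketch (plain -i assumes GNU sed):
find . -type f -name '*.html' -exec sed -i 's/btn-primary\(.*{.*Save\)/btn-primary Save\1/g' {} \;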

trouble listing directories that contain files with specific file extensions

How do I list only directories that contain certain files? I am running on a Solaris box. Example: I want to list the sub-directories of directory ABC that contain files ending in .out, .dat and .log.
Thanks
Something along these lines might work out for you:
find ABC/ \( -name "*.out" -o -name "*.dat" -o -name "*.log" \) -print | while read f
do
echo "${f%/*}"
done | sort -u
The sort -u bit could be just uniq instead, but either should work.
Should work in bash or ksh. Probably not so much in /bin/sh - you'd have to replace the variable expansion with something like echo "${f}" | sed -e 's;/[^/]*$;;' to strip off the last component of the path. dirname "${f}" would be good for that, but I don't recall if Solaris includes that utility...
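If dirname is available, a variant that covers all three extensions and avoids the parameter expansion entirely might look like this (untested on Solaris):
find ABC/ \( -name "*.out" -o -name "*.dat" -o -name "*.log" \) -exec dirname {} \; | sort -u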

How do I recursively insert two lines in all files of my directory where they are not present?

I have a directory customer, with many customer directories inside it.
Now I want to add two lines to the process_config file in each customer directory where they are not already present.
For example:
/home/sam/customer/a1/na/process_config.txt
/home/sam/customer/p1/emea/process_config.txt
and so on.
Is this possible by single command like find & sed?
With a simple for loop:
for file in /home/sam/customer/*/*/process_config.txt; do
printf "one line\nanother line\n" >> "$file"
done
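To add the lines only where they are not already present, as asked, you could guard the append with grep; a sketch, assuming the presence of the first line is a good enough test:
for file in /home/sam/customer/*/*/process_config.txt; do
grep -qF 'one line' "$file" || printf "one line\nanother line\n" >> "$file"
done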
find /home/sam/customer -name 'process_config.txt' -exec DoYourAddWithSedAwkEchoOrWhatever {} \;
find gives you the possibility to select each wanted file;
the -exec option runs the given command once for each selected file;
{} stands for the file name (the full path) in this case;
\; marks the end of the -exec command. Note that shell redirection such as >> cannot be used directly inside -exec (the invoking shell would process it before find ever runs), so wrap such commands in sh -c, as in the example below.
sed, awk or printf, as in the loop above, can then modify the file.
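For example, to append the two lines with find alone, the redirection must happen inside a shell started by -exec; a sketch:
find /home/sam/customer -name 'process_config.txt' -exec sh -c 'printf "one line\nanother line\n" >> "$1"' sh {} \;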

batch rename to change only single character

How do I rename all the files in one directory to new names using the command mv? The directory has 1000s of files, and the requirement is to change the last character of each file name to some specific character. Example: the files
abc.txt
asdf.txt
zxc.txt
...
ab_.txt
asd.txt
it should change to
ab_.txt
asd_.txt
zx_.txt
...
ab_.txt
as_.txt
You have to watch out for name collisions but this should work okay:
for i in *.txt ; do
j=$(echo "$i" | sed 's/..txt$/_.txt/')
echo mv \"$i\" \"$j\"
#mv "$i" "$j"
done
after you uncomment the mv (I left it commented so you could see what it does safely). The quotes are for handling files with spaces (evil, vile things in my opinion :-).
If all files end in ".txt", you can use mmv (Multiple Move) for that:
mmv "*[a-z].txt" "#1_.txt"
Plus: mmv will tell you when this generates a collision (in your example: abc.txt becomes ab_.txt which already exists) before any file is renamed.
Note that you must quote the file names, else the shell will expand the list before mmv sees it (but mmv will usually catch this mistake, too).
If your files all have a .txt suffix, I suggest the following script:
for i in *.txt
do
r=`basename "$i" .txt | sed 's/.$//'`
mv "$i" "${r}_.txt"
done
Is it a definite requirement that you use the mv command?
The perl rename utility was written for this sort of thing. It's standard for debian-based linux distributions, but according to this page it can be added really easily to any other.
If it's already there (or if you install it) you can do:
rename -v 's/.\.txt$/_\.txt/' *.txt
The page included above has some basic info on regex and things if it's needed.
Find should be more efficient than for file in *.txt, which expands all of your 1000 files into a long list of command line parameters. Example (updated to use bash replacement approach):
find . \( -type d ! -name . -prune \) -o \( -name "*.txt" -print \) | while read file
do
mv "$file" "${file%%?.txt}_.txt"
done
I'm not sure if this will work with thousands of files, but in bash:
for i in *.txt; do
j=`echo "$i" | sed 's/.\.txt$/_.txt/'`
mv "$i" "$j"
done
You can use bash's ${parameter%%word} operator thusly:
for FILE in *.txt; do
mv "$FILE" "${FILE%%?.txt}_.txt"
done
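For reference, the suffix pattern ?.txt matches a single character followed by .txt, and %% strips it; for example:
f=abc.txt
echo "${f%%?.txt}_.txt"   # prints ab_.txt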

Shell script - search and replace text in multiple files using a list of strings

I have a file "changesDictionary.txt" containing (a variable number of) pairs of key-value strings.
e.g.
"textToSearchFor" = "theReplacementText"
(The format of the dictionary is unimportant, and can be changed as required.)
I need to iterate through the contents of a given directory, including sub-directories. For each file encountered with the extension ".txt", we search for each of the keys in changesDictionary.txt, replacing each found instance with the replacement string value.
i.e. a search and replace over multiple files, but using a list of search/replace terms rather than a single search/replace term.
How could I do this? (I have studied single search/replace examples, but do not understand how to do multiple searches within a file.)
The implementation (bash, perl, whatever) is not important as long as I can run it from the command line in Mac OS X. Thanks for any help.
I'd convert your changesDictionary.txt file to a sed script, with... sed:
$ sed -e 's/^"\(.*\)" = "\(.*\)"$/s\/\1\/\2\/g/' \
changesDictionary.txt > changesDictionary.sed
Note, any special characters for either regular expressions or sed expressions in your dictionary will be misinterpreted by sed, so your dictionary can contain only the most primitive search-and-replacements, or you'll need to maintain the sed file with valid expressions. Unfortunately, there's no easy way in sed to either shut off regular expressions and use only string matching, or to quote your searches and replacements as literals.
With the resulting sed script, use find and xargs -- rather than find -exec -- to convert your files with the sed script as quickly as possible, by processing them more than one at a time.
$ find somedir -type f -name '*.txt' -print0 \
| xargs -0 sed -i -f changesDictionary.sed
Note, the -i option of sed edits files "in-place", so be sure to make backups for safety, or use -i~ to create tilde-backups.
Final note, using search and replaces can have unintended consequences. Will you have searches that are substrings of other searches? Here's an example.
$ cat changesDictionary.txt
"fix" = "broken"
"fixThat" = "Fixed"
$ sed -e 's/^"\(.*\)" = "\(.*\)"$/s\/\1\/\2\/g/' changesDictionary.txt \
| tee changesDictionary.sed
s/fix/broken/g
s/fixThat/Fixed/g
$ mkdir subdir
$ echo fixThat > subdir/target.txt
$ find subdir -type f -name '*.txt' -print0 \
| xargs -0 sed -i -f changesDictionary.sed
$ cat subdir/target.txt
brokenThat
Should "fixThat" have become "Fixed" or "brokenThat"? Order matters for sed script. Similarly, a search and replace can be search and replaced more than once -- changing "a" to "b", may be changed by another search-and-replace later from "b" to "c".
Perhaps you've already considered both of these, but I mention because I've tried what you were doing before and didn't think of it. I don't know of anything that simply does the right thing for doing multiple search and replacements at once. So, you need to program it to do the right thing yourself.
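One partial mitigation, assuming longer keys should win, is to put the longer keys first in the generated sed script; this handles the substring case (though not replacements feeding later rules):
$ cat changesDictionary.sed
s/fixThat/Fixed/g
s/fix/broken/g
$ echo fixThat | sed -f changesDictionary.sed
Fixed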
Here are the basic steps I would do
Copy the changesDictionary.txt file
In it replace "a"="b" to the equivalent sed line: e.g. (use $1 for the file name)
sed -e 's/a/b/g' $1
(you could write a script to do this or just do it by hand, if you just need to do this once and it's not too big).
If the files are all in one directory, then you can do something like this (-n1 makes xargs run the script once per file, since the script only looks at $1):
ls *.txt | xargs -n1 scriptFromStep2.sh
If they are in subdirs, use a find to call that script on all of the files, something like
find . -name '*.txt' -exec scriptFromStep2.sh {} \;
These aren't exact, do some experiments to make sure you get it right -- it's just the approach I would use.
(but, if you can, just use perl, it would be a lot simpler)
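For illustration, a hypothetical scriptFromStep2.sh built from the dictionary example above (the second pair is made up) could be as simple as:
#!/bin/sh
# scriptFromStep2.sh (hypothetical): one sed expression per dictionary pair,
# editing the file named in $1 in place (-i assumes GNU sed).
sed -i \
-e 's/textToSearchFor/theReplacementText/g' \
-e 's/someOtherKey/someOtherValue/g' \
"$1"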
Use this tool, which is written in Perl - with quite a lot of bells and whistles - oldie, but goodie:
http://unixgods.org/~tilo/replace_string/
Features:
do multiple search-replace or query-search-replace operations
search-replace expressions can be given on the command line or read from a file
processes multiple input files
recursively descend into directory and do multiple search/replace operations on all files
user defined perl expressions are applied to each line of each input file
optionally run in paragraph mode (for multi-line search/replace)
interactive mode
batch mode
optionally backup files and backup numbering
preserve modes/owner when run as root
ignore symbolic links, empty files, write protected files, sockets, named pipes, and directory names
optionally replace lines only matching / not matching a given regular expression
This script has been used quite extensively over the years with large data sets.
#!/bin/bash
f="changesDictionary.tx"
find /path -type f -name "*.txt" | while read FILE
do
awk 'BEGIN{ FS="=" }
FNR==NR{ s[$1]=$2; next }
{
for(i in s){
if( $0 ~ i ){ gsub(i,s[i]) }
}
print $0
}' $f $FILE > temp
mv temp $FILE
done
for i in /script/arq*.sh
do
echo -e "FILE ${i}"
# Note: inside single quotes $file_path1 stays literal; use double quotes if a
# shell variable is intended.
sed -i 's|/$file_path1|/file_path2|g' "${i}"
done
