Sort Matched Files by Last Modified and Timestamp - unix

I need to look for files that match a certain pattern of characters, then find the most recent file and display it. The code below isn't quite getting me there, but I think I'm close.
Code:
find /home/weather/data/blend/ -type f -name "*.ctl" -printf '%Ts\t%p\n' | sort -nr | cut -f2

Here's a working solution:
find . -mmin -720 -type f -name "*.ctl" -exec ls -t {} + | cut -c 3-
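For what it's worth, the original printf/sort pipeline was nearly there; if you only want the single most recent file, adding head -n 1 is enough (a sketch, assuming GNU find for -printf):
find /home/weather/data/blend/ -type f -name "*.ctl" -printf '%Ts\t%p\n' | sort -nr | head -n 1 | cut -f2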

Related

How to grep for files containing a specific word and pass the list of files as argument to second command?

grep rli "stringName" * | xargs <second_command> <list_of files>
Will the above code work for the functionality mentioned?
I am a beginner, so I'm not sure how to use it.
You are just missing the hyphen for the options to grep. The following should work:
grep -rli "stringName" * | xargs <second_command>
Since the above command cannot handle whitespace or weird characters in file names, a more robust solution would be to use find:
find . -type f -exec grep -qi "stringName" {} \; -print0 | xargs -0 <second_command>
Or use grep's -Z option together with xargs -0:
grep -rli "stringName" * -Z | xargs -0 <second_command>
Extending on jkshah's answer, which is already quite good.
find . -type f -exec grep -qi "regex" {} \; -exec "second_command" {} \;
This has the advantage of being more portable (-print0 and -0 are gnu extensions).
It executes the second command once for each matching file in turn. If you want to execute it once with a list of all matching files at the end instead, change the last \; to +.
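For example, with a hypothetical second command of wc -l (just a stand-in, not from the original question), the two variants look like this:
# runs wc -l once per matching file
find . -type f -exec grep -qi "stringName" {} \; -exec wc -l {} \;
# runs wc -l once, with all matching files passed as arguments
find . -type f -exec grep -qi "stringName" {} \; -exec wc -l {} +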

Pipe output to parameter

So I wanted to write a simple command that counts one less than the number of files in my current directory. I have this command that comes close but is off by one.
ls | wc -l
How can I pipe this to bc so I can subtract it by one?
Thanks!
To pipe to bc you could use something like this
echo " $(ls | wc -l) - 1 " | bc
EDIT: replace the part in the $( ) with steve's answer, or any other command you need.
That's really not what you want to do. Use find instead:
find . -maxdepth 1 -type f | wc -l
Also, you can exclude hidden files with:
find . -maxdepth 1 -type f ! -name ".*" | wc -l
For completeness, you can handle files containing newlines and spaces like this:
find . -maxdepth 1 -type f -print0 | tr -dc '\0' | wc -c
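If you'd rather not involve bc at all, the shell's own arithmetic expansion can do the subtraction; a minimal sketch combining it with the find-based count above:
echo $(( $(find . -maxdepth 1 -type f | wc -l) - 1 ))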

Adding data line by line in file in Unix

I am extracting file names from one command; it returns many file names, and I am putting them into one file.
Code:
echo `find ${FILE_SYSTEM}/${dir_name}/${sub_dir_name} -type f -size +${BADFILES_SIZE} -exec ls -1lutr {} \; | sort -rn | awk '{print $9}'` >> Somefile.txt
The problem here is that I am not getting the file names on separate lines.
It's giving two filenames on one line.
But I want to have each filename on its own line.
E.g.:
/informatica/ETD/PC9/scripts/kamil/temp/temp1.txt /informatica/ETD/PC9/scripts/kamil/temp/temp2.txt
I am getting filenames as shown above, and I want them as shown below.
/informatica/ETD/PC9/scripts/kamil/temp/temp1.txt
/informatica/ETD/PC9/scripts/kamil/temp/temp2.txt
Please give your suggestions.
The problem is that you're using echo and backticks. Don't! The echo flattens all its arguments (a list of two files, it seems) into a single line of output.
Wrong:
echo `find ${FILE_SYSTEM}/${dir_name}/${sub_dir_name} -type f -size +${BADFILES_SIZE} -exec ls -1lutr {} \; | sort -rn | awk '{print $9}'` >> Somefile.txt
Right:
find ${FILE_SYSTEM}/${dir_name}/${sub_dir_name} -type f \
-size +${BADFILES_SIZE} -exec ls -1lutr {} + |
sort -rn |
awk '{print $9}' >> Somefile.txt
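To see why wrapping a command in echo collapses the lines, here is a minimal illustration (not from the original question; backticks and $( ) behave the same way here):
printf 'one\ntwo\n'            # prints two lines
echo `printf 'one\ntwo\n'`     # word splitting collapses them to: one two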

UNIX find for finding file names not paired by a matching file with a different extension

Is there a simple way to recursively find all files in a directory hierarchy, that do not have a matching file with a different extension?
For example the directory has a bunch of files ending in .dat
I want to find the .dat files that do not have an accompanying .out file.
I have a while loop that checks each entry, but that is slow for long lists...
I am using GNU find.
Perhaps something like this?
find . -name "*.dat" -print | sort > column1.txt
find . -name "*.out" -print | sort > column2.txt
diff column1.txt column2.txt
I haven't tested it, but I think it's probably close to what you're asking for.
find . -name '*.dat' -printf "[ -f %p ] || echo %p\n" | sed 's/\.dat/.out/' | sh
I had to add a bunch of bells and whistles to the 1st solution, but that was a good start, thanks...
find . -print | grep -Fi '.dat' | grep -vFi '.dat.' | sort | sed -e 's/.dat//g' > column1.txt
find . -print | grep -Fi '.out' | grep -vFi '.out.' | sort | sed -e 's/.out//g' > column2.txt
sdiff -s column1.txt column2.txt | grep -F '<' | cut -f1 -d"<" > c12diff.txt
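Another option for comparing the two sorted lists is comm, which prints only the unmatched entries instead of diff output; a rough sketch (untested here; dat_base.txt and out_base.txt are just scratch file names, and the extensions are assumed to appear only at the end of the paths):
find . -name '*.dat' | sed 's/\.dat$//' | sort > dat_base.txt
find . -name '*.out' | sed 's/\.out$//' | sort > out_base.txt
comm -23 dat_base.txt out_base.txt    # .dat entries with no matching .out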

Unix Find Replace Special Characters in Multiple Files

I've got a set of files in a web root that all contain special characters that I'd like to remove (Â, €, â, etc.).
My command
find . -type f -name '*.*' -exec grep -il "Â" {} \;
finds & lists out the files just fine, but my command
find . -type f -name '*.*' -exec tr -d 'Â' '' \;
doesn't produce the results I'm looking for.
Any thoughts?
To replace all non-ASCII characters in all files inside the current directory you could use:
find . -type f | xargs perl -pi.bak -e 's,[^[:ascii:]],,g'
Afterwards you will have to find and remove all the '.bak' files:
find . -type f -a -name \*.bak | xargs rm
I would recommend looking into sed. It can be used to replace the contents of the file.
So you could use the command:
find . -type f -name '*.*' -exec sed -i "s/Â//" {} \;
I have tested this with a simple example and it seems to work. The -exec should handle files with whitespace in their name, but there may be other vulnerabilities I'm not aware of.
Use
tr -d 'Â'
What does the ' ' stand for? On my system, using your command produces this error:
tr: extra operand `'
Only one string may be given when deleting without squeezing repeats.
Try `tr --help' for more information.
sed 's/ø//' file.txt
That should do the trick for replacing a special char with an empty string.
find . -name "*.*" -exec sed 's/ø//' {} \
It would be helpful to know what "doesn't produce the results I'm looking for" means. However, in your command tr is never given the file contents to process: tr reads standard input only, so find has to redirect each file into it. You could change it to this:
find . -type f -name '*.*' -exec sh -c 'tr -d "Â" < "$1"' sh {} \;
Which is going to output everything to stdout. You probably want to modify the files instead. You can use Grundlefleck's answer, but one of the issues alluded to in that answer is handling large numbers of files. You can do this:
find . -type f -name '*.*' -print0 | xargs -0 -I{} sed -i "s/Â//" {}
which should handle files with spaces in their names as well as large numbers of files.
With the bash shell (note that this renames the files, stripping non-ASCII characters from the file names themselves):
for file in *.*
do
case "$file" in
*[^[:ascii:]]* )
mv "$file" "${file//[^[:ascii:]]/}"
;;
esac
done
I would use something like this.
for file in `find . -type f`
do
# Search for the chars and remove them. Save the result as file.new
sed -e 's/[Â€â]//g' $file > $file.new
# mv file.new to file. DON'T RUN THIS IF YOU DON'T WANT TO OVERWRITE THE ORIGINAL FILE
mv $file.new $file
done
The above script will fail, as levislevis85 mentioned, with spaces in filenames. This would not be the case if you use the following code:
find . -type f | while read file
do
# Search for the chars and remove them. Save the result as file.new
sed -e 's/[Â€â]//g' "$file" > "$file".new
# mv file.new to file. DON'T RUN THIS IF YOU DON'T WANT TO OVERWRITE THE ORIGINAL FILE
mv "$file".new "$file"
done
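If your find and sed are GNU and the shell is bash, the same idea can also be written with null-delimited names and in-place editing, which copes with spaces and even newlines in filenames; a rough sketch, with the character list taken from the question:
find . -type f -print0 | while IFS= read -r -d '' file
do
# Remove the unwanted characters in place (GNU sed -i)
sed -i 's/[Â€â]//g' "$file"
done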
