searching for part of a filename (UNIX)

On Unix I have files which have been renamed to their original name followed by an underscore and their inode number (i.e. the file dog would be renamed dog_inodeno). I am now trying to remove the inode number so I can search for the original file name elsewhere. Does anyone know how I can do this and what code is necessary?
Thanks

This should do the job:
find . -type f -name "*_[0-9]*" -exec \
  sh -c 'for i do
    b=$(basename "$i")
    # strip a trailing "_<inode>" suffix, if present
    r=$(basename "$i" "_$(ls -i "$i" | awk "{print \$1}")")
    if [ "$b" != "$r" ]; then
      echo mv "$i" "$(dirname "$i")/$r"
    fi
  done' sh {} +
Replace echo mv with mv for the script to actually rename the files.
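For a single directory, the same idea can also be sketched with a plain loop (a sketch only, assuming a stat that supports -c or -f %i; the filenames are illustrative):
# Minimal sketch: strip a trailing "_<inode>" from names in the current directory.
for f in *_[0-9]*; do
    ino=$(stat -c %i "$f" 2>/dev/null || stat -f %i "$f")   # inode (GNU or BSD stat)
    case $f in
        *"_$ino") echo mv "$f" "${f%_$ino}" ;;              # dry run; drop echo to rename
    esac
done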

The solution here will rename your files only if the inode number of a file is part of the file's name in the format mentioned, which is what the OP wants.
The solution was successfully tested at my end.
find ./ -name "*_[0-9][0-9][0-9][0-9][0-9][0-9]" -exec sh 'rename-files.sh' {} \;
Save the script below as rename-files.sh for the find command to work.
#!/bin/bash
#Script Name: rename-files.sh
#Store the result of find
find_result=$1
#Get the existing file name
fname_alone=`expr "${find_result}" : '.*/\(.*\)' '|' "${find_result}"`
fname_with_relative_path=`expr "${find_result}" : '.\(.*\)' '|' "${find_result}"`
fname_with_full_path="$(pwd)${fname_with_relative_path}"
#Get the inode number of the file
file_inode_no=`find ./ -name "${fname_alone}" -printf '%i'`
#Read the trailing part of the name (after the last underscore)
end_of_name=`echo "$fname_alone" | awk -F "_" '{print $NF}'`
#Check whether the end of the name is the file's inode number
if [ "$end_of_name" -eq "$file_inode_no" ]
then
    #Remove the inode number at the end of the file name
    new_name=`expr "$find_result" : '.\(.*\)_.*' '|' "$find_result"`
    #Append the path of the file
    renamed_to="$(pwd)${new_name}"
    #Rename your dog_inodeno to dog
    mv "$fname_with_full_path" "$renamed_to"
fi
Hope this helps.

Related

script to watch new files in a folder and when found, based on filename call different scripts

I am trying to design a file-watcher solution in which I need to watch a particular folder for different file names every day; once a file name is found, I need to call a script specific to that file name.
Example:
Watch Folder -
file1.txt
file2.txt
file3.txt
call script.sh abc file1
call script.sh abc file2
call script.sh abc file3
I tried to make use of inotifywait but have not been able to get it to work. Any help would be appreciated.
sftp_home=/app/public/ent_sftp
script=/app/public/bin
curr_date=$(TZ=":US/Eastern" date '+%Y%m%d')
inotifywait -m $sftp_home -e create -e moved_to |
while read path action file; do
    echo "The file '$file' appeared in directory '$path' via '$action'"
    if [ "$file" = "file1${curr_date}*.txt" ]; then
        echo "file1${curr_date}*.txt was found and process will be initiated"
        cd $script
        ./script.sh file1
    elif [ "$file" = "file2${curr_date}*.txt" ]; then
        echo "file2${curr_date}*.txt was found today and process will be initiated"
        cd $script
        ./script.sh file2
    fi
done
Thanks,
Kavin
If you want to do glob expansions in the match, you can do that with a case statement:
unset arg
case $file in
    file1${curr_date}*.txt)
        arg=file1
        ;;
    file2${curr_date}*.txt)
        arg=file2
        ;;
    *)
        echo No file found >&2
        ;;
esac
if test -n "$arg"; then
    echo "${arg}${curr_date}*.txt was found and process will be initiated"
    cd $script
    ./script.sh "$arg"
fi
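Dropped back into the inotifywait loop from the question, the whole watcher looks roughly like this (a sketch reusing the question's variables):
inotifywait -m "$sftp_home" -e create -e moved_to |
while read -r path action file; do
    echo "The file '$file' appeared in directory '$path' via '$action'"
    case $file in
        file1${curr_date}*.txt) arg=file1 ;;
        file2${curr_date}*.txt) arg=file2 ;;
        *) continue ;;                      # not a file we care about
    esac
    echo "${arg}${curr_date}*.txt was found and process will be initiated"
    ( cd "$script" && ./script.sh "$arg" )  # subshell so the cd does not stick
done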

How to tell script to look only into a specific folder

I'm trying to make a recycle bin for UNIX, so I have two scripts: one to delete a file and move it to the bin, and another to restore the file back to its original location.
My restore script only works if the person gives the path to the deleted file.
ex: sh restore ~/trashbin/filename
How do I hard-code this into my script so that I don't need to give the path to the deleted file? It should already know to look in the trash bin for the file. My restore script works only when someone passes in the path to the file.
#!/bin/bash
rlink=$(readlink -e "$1")
rname=$(basename "$rlink")

function restoreFile() {
    rlink=$(readlink -e "$1")
    rname=$(basename "$rlink")
    rorgpath=$(grep "$rname" ~/.restore.info | cut -d":" -f2)
    rdirect=$(dirname "$rorgpath")
    #echo $orgpath
    if [ ! -d "$rdirect" ]
    then
        mkdir -p "$rdirect"
        #echo $var
        mv "$rlink" "$rorgpath"
    else
        mv "$rlink" "$rorgpath"
    fi
}

if [ -z "$1" ]
then
    echo "Error no filename provided."
    exit 1
elif [ ! -f "$1" ]
then
    echo "Error file does not exist."
    exit 1
elif [ -f "$rorgpath" ]
then
    echo "File already exists in original path."
    read -p "Would you like to overwrite it? (y/n)" ovr
    if [[ $ovr = y || $ovr = Y || $ovr = yes ]]
    then
        echo "Restoring File and overwriting."
        restoreFile "$1"
        grep -v "$rname" ~/.restore.info > ~/.restorebackup.info
        mv ~/.restorebackup.info ~/.restore.info
    fi
else
    echo "Restoring file into original path."
    restoreFile "$1"
    grep -v "$rname" ~/.restore.info > ~/.restorebackup.info
    mv ~/.restorebackup.info ~/.restore.info
fi
When you "remove" the file from the file-system to your trash-bin, move it so that its path is remembered. Example: removing file /home/user/file.txt should mean moving this file to ~/.trash/home/user/file.txt. That way, you'll be able to restore files to the original location, and you'll have auto-complete work, since you can do: sh restore ~/.trash/<TAB><TAB>

KSH sort filenames

I'm searching through a number of directories for "searchstring", and then running a script on each $file:
for file in `find $dir -name ${searchstring}'*'`;
do
    echo $file >> $debug
    script.sh $file >> $output
done
My $debug file yields the following:
/root/0007_searchstring/out/filename_20120105_020000.log
/root/0006_searchstring/out/filename_20120105_010000.log
/root/0005_searchstring/out/filename_20120105_013000.log
(filenames end in _yyyymmdd_hhmmss.log)
...
Is there a way to get find to order by filename or by mktime? Should I pipe find to sort first? Make an array then sort it as per this question?
If you want to ignore the directory path and just use the file name, then you should be able to use:
for file in `find $dir -name ${searchstring}'*' | sort --field-separator=/ --key=4`;
'ls -t' if you need to regenerate the list based on timestamp.
'sort -n' if the list is fairly static?
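If the directory depth varies, another option is to decorate each line with its basename, sort, and strip the key again (a sketch; it assumes the paths contain no spaces):
find $dir -name "${searchstring}*" |
awk -F/ '{print $NF, $0}' |   # prepend the basename as a sort key
sort |                        # basenames are filename_yyyymmdd_hhmmss.log, so this also sorts by time
cut -d' ' -f2-                # drop the key again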
To sort by modification time, you can use stat with find:
$ find . -exec stat {} -c '%Y %n' \; | sort -n | cut -d ' ' -f 2
You can pipe the output of find through sort to sort by filename:
find $dir -name "${searchstring}*" | sort | while read file
do
    echo "$file" >> $debug
    script.sh "$file" >> $output
done

Concatenate multiple files but include filename as section headers

I would like to concatenate a number of text files into one large file in terminal. I know I can do this using the cat command. However, I would like the filename of each file to precede the "data dump" for that file. Anyone know how to do this?
what I currently have:
file1.txt = bluemoongoodbeer
file2.txt = awesomepossum
file3.txt = hownowbrowncow
cat file1.txt file2.txt file3.txt
desired output:
file1
bluemoongoodbeer
file2
awesomepossum
file3
hownowbrowncow
I was looking for the same thing, and found this suggestion:
tail -n +1 file1.txt file2.txt file3.txt
Output:
==> file1.txt <==
<contents of file1.txt>
==> file2.txt <==
<contents of file2.txt>
==> file3.txt <==
<contents of file3.txt>
If there is only a single file then the header will not be printed. If using GNU utils, you can use -v to always print a header.
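For example, with GNU tail:
tail -v -n +1 file1.txt    # -v (--verbose) prints the "==> file1.txt <==" header even for one file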
I used grep for something similar:
grep "" *.txt
It does not give you a 'header', but prefixes every line with the filename.
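With the sample files from the question, that looks like:
$ grep "" *.txt
file1.txt:bluemoongoodbeer
file2.txt:awesomepossum
file3.txt:hownowbrowncow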
This should do the trick as well:
$ find . -type f -print -exec cat {} \;
./file1.txt
Content of file1.txt
./file2.txt
Content of file2.txt
Here is the explanation for the command-line arguments:
find = linux `find` command finds filenames, see `man find` for more info
. = in current directory
-type f = only files, not directories
-print = show found file
-exec = additionally execute another linux command
cat = linux `cat` command, see `man cat`, displays file contents
{} = placeholder for the currently found filename
\; = tell `find` command that it ends now here
You can further combine searches through boolean operators like -and or -or. find -ls is nice, too.
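For example (the two patterns here are just illustrative):
# print the name, then the contents, for both .txt and .log files
find . -type f \( -name '*.txt' -or -name '*.log' \) -print -exec cat {} \;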
When there is more than one input file, the more command concatenates them and also includes each filename as a header.
To concatenate to a file:
more *.txt > out.txt
To concatenate to the terminal:
more *.txt | cat
Example output:
::::::::::::::
file1.txt
::::::::::::::
This is
my first file.
::::::::::::::
file2.txt
::::::::::::::
And this is my
second file.
This should do the trick:
for filename in file1.txt file2.txt file3.txt; do
    echo "$filename"
    cat "$filename"
done > output.txt
or to do this for all text files recursively:
find . -type f -name '*.txt' -print | while read filename; do
    echo "$filename"
    cat "$filename"
done > output.txt
find . -type f -print0 | xargs -0 -I % sh -c 'echo %; cat %'
This will print the full filename (including path), then the contents of the file. It is also very flexible, as you can use -name "expr" for the find command, and run as many commands as you like on the files.
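For instance, limited to a single pattern and with a blank line between files (the pattern is illustrative):
find . -type f -name '*.txt' -print0 | xargs -0 -I % sh -c 'echo %; cat %; echo'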
And the missing awk solution is:
$ awk '(FNR==1){print ">> " FILENAME " <<"}1' *
This is how I normally handle formatting like that:
for i in *; do echo "$i"; echo ; cat "$i"; echo ; done ;
I generally pipe the cat into a grep for specific information.
I like this option
for x in $(ls ./*.php); do echo $x; cat $x | grep -i 'menuItem'; done
Output looks like this:
./debug-things.php
./Facebook.Pixel.Code.php
./footer.trusted.seller.items.php
./GoogleAnalytics.php
./JivositeCode.php
./Live-Messenger.php
./mPopex.php
./NOTIFICATIONS-box.php
./reviewPopUp_Frame.php
$('#top-nav-scroller-pos-<?=$activeMenuItem;?>').addClass('active');
gotToMenuItem();
./Reviews-Frames-PopUps.php
./social.media.login.btns.php
./social-side-bar.php
./staticWalletsAlerst.php
./tmp-fix.php
./top-nav-scroller.php
$activeMenuItem = '0';
$activeMenuItem = '1';
$activeMenuItem = '2';
$activeMenuItem = '3';
./Waiting-Overlay.php
./Yandex.Metrika.php
You can use this simple command instead of a for loop:
ls -ltr | awk '{print $9}' | xargs head
If the files all have the same name or can be matched by find, you can do (e.g.):
find . -name create.sh | xargs tail -n +1
to find, show the path of and cat each file.
If you like colors, try this:
for i in *; do echo; echo $'\e[33;1m'$i$'\e[0m'; cat $i; done | less -R
or:
tail -n +1 * | grep -e $ -e '==.*'
or: (with package 'multitail' installed)
multitail *
Here is a really simple way. You said you want to cat, which implies you want to view the entire file. But you also need the filename printed.
Try this
head -n99999999 * or head -n99999999 file1.txt file2.txt file3.txt
Hope that helps
If you want to replace those ugly ==> <== with something else
tail -n +1 *.txt | sed -e 's/==>/\n###/g' -e 's/<==/###/g' >> "files.txt"
explanation:
tail -n +1 *.txt - output all files in folder with header
sed -e 's/==>/\n###/g' -e 's/<==/###/g' - replace ==> with new line + ### and <== with just ###
>> "files.txt" - output all to a file
find . -type f -exec cat {} \; -print
AIX 7.1 ksh
... glomming onto those who've already mentioned head works for some of us:
$ r head
head file*.txt
==> file1.txt <==
xxx
111
==> file2.txt <==
yyy
222
nyuk nyuk nyuk
==> file3.txt <==
zzz
$
My need is to read the first line; as noted, if you want more than 10 lines, you'll have to add options (head -9999, etc).
Sorry for posting a derivative comment; I don't have sufficient street cred to comment/add to someone's comment.
I made a combination of:
cat /sharedpath/{unique1,unique2,unique3}/filename > newfile
and
tail -n +1 file1 file2
into this:
tail -n +1 /sharedpath/{folder1,folder2,...,folder_n}/file.extension | cat > /sharedpath/newfile
The result is a newfile that contains the content from each subfolder (unique1,unique2..) in the {} brackets, separated by subfolder name.
note unique1=folder1
In my case the file.extension has the same name in all subfolders.
If you want the result in the same format as your desired output you can try:
for file in `ls file{1..3}.txt`; \
do echo $file | cut -d '.' -f 1; \
cat $file ; done;
Result:
file1
bluemoongoodbeer
file2
awesomepossum
file3
hownowbrowncow
You can put echo -e before and after the cut so you have the spacing between the lines as well:
$ for file in `ls file{1..3}.txt`; do echo $file | cut -d '.' -f 1; echo -e; cat $file; echo -e ; done;
Result:
file1
bluemoongoodbeer
file2
awesomepossum
file3
hownowbrowncow
This method will print the filename and then the file contents:
tail -f file1.txt file2.txt
Output:
==> file1.txt <==
contents of file1.txt ...
contents of file1.txt ...
==> file2.txt <==
contents of file2.txt ...
contents of file2.txt ...
For solving such tasks I usually use the following command:
$ cat file{1..3}.txt >> result.txt
It's a very convenient way to concatenate files if the number of files is quite large.
First I created each file: echo 'information' > file1.txt for each file[123].txt.
Then I printed each file to makes sure information was correct:
tail file?.txt
Then I did this: tail file?.txt >> Mainfile.txt. This created Mainfile.txt, storing the information from each file in one main file.
cat Mainfile.txt confirmed it was okay.
==> file1.txt <==
bluemoongoodbeer
==> file2.txt <==
awesomepossum
==> file3.txt <==
hownowbrowncow

How to find file extension using UNIX?

I need to find the file extension for a file to be processed using UNIX. The two file extensions which I will be handling are '.dat' and '.csv'.
Please let me know how this can be done.
find . -name "*.dat" -o -name "*.csv"
This finds, in the current directory and recursively down, all files that end in either of those two extensions.
So my stab at this.
filename=file.dat
extension=$(echo ${filename} | awk -F\. '{print $2}')
if [ "${extension}" == "dat" ]; then
    : # your code here
fi
Echo the variable ${filename} and pipe that output to awk. With awk, reset the field separator to a '.', then pick up field 2 (the print $2 part).
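Note that print $2 only picks up the extension when the name contains exactly one dot; printing the last field is slightly more robust:
extension=$(echo "${filename}" | awk -F\. '{print $NF}')   # "my.file.csv" -> "csv"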
Is this what you want?
find . -name "*.dat"
find . -name "*.csv"
find /path -type f \( -name "*.dat" -o -name "*.csv" \) | while read -r file
do
    echo "Do something with $file"
done
If you have the filename in a variable
filename=test.csv
then just use this to get the "csv" part:
echo ${filename##*.}
This works in bash; try it in ksh.
edit:
filename=test.csv
fileext=${filename##*.}
if [ "$fileext" = "csv" ]; then
    echo "file is csv, do something"
else
    if [ "$fileext" = "dat" ]; then
        echo "file is dat, do something"
    else
        echo "mhh what now?"
    fi
fi
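The same dispatch also reads naturally as a case statement (a sketch):
case $filename in
    *.csv) echo "file is csv, do something" ;;
    *.dat) echo "file is dat, do something" ;;
    *)     echo "mhh what now?" ;;
esac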
