I have a text file containing filenames of 1300 files:
mmjr0_si2166.wav
mesd0_si1002.wav
mjes0_sx214.wav
mjln0_si819.wav
mkcl0_si1721.wav
.
.
.
mjth0_sx216.wav
How can I edit the filenames in UNIX so that each line shows the file's full path instead of just its name? I mean something like this:
/Users/Desktop/TIMIT_wav/mmjr0_si2166.wav
/Users/Desktop/TIMIT_wav/mesd0_si1002.wav
/Users/Desktop/TIMIT_wav/mjes0_sx214.wav
/Users/Desktop/TIMIT_wav/mjln0_si819.wav
/Users/Desktop/TIMIT_wav/mkcl0_si1721.wav
.
.
.
/Users/Desktop/TIMIT_wav/mjth0_sx216.wav
sed -i -e 's;^;/Users/Desktop/TIMIT_wav/;' file_with_filenames.txt
This makes a substitution (s/from/to/) that replaces the beginning of each line (^) with the desired path (/Users/Desktop/TIMIT_wav/).
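As a sanity check, the same substitution can be previewed without -i first (a minimal sketch; the sample names below are taken from the question, and the prefix is the one shown above):

```shell
# Build a small sample list (names from the question above).
printf '%s\n' mmjr0_si2166.wav mesd0_si1002.wav > file_with_filenames.txt

# Preview the substitution without -i; add -i back only once the output looks right.
sed 's;^;/Users/Desktop/TIMIT_wav/;' file_with_filenames.txt
```

The `;` delimiter is used instead of `/` so that the slashes in the path need no escaping.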
Related
I would like to rename files according to a specific pattern. Say I have a unique pattern that starts with "XmacTmas". I would like to use this pattern to rename the files, adding additional characters like "_dbp1".
Now my file name is "xxo1" and I want "XmacTmas_dbp1".
How can I do this for thousands of files with a script?
Thanks
find . -name 'XmacTmas*' -exec echo mv {} {}_dbp1 \;
find the files of interest and execute command after replacing {} with the found filename.
Escape the ; (as \;). Without the backslash, the shell would interpret the ; itself instead of passing it to find as the terminator of the -exec command.
If only files in the current directory are needed, add -maxdepth 1 before -name (-maxdepth 0 would match only the starting point itself); find has numerous other options as well.
If the result is as needed, remove the echo
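The dry run above can be tried safely in a scratch directory first (a sketch; the sample filenames here are invented):

```shell
# Scratch directory with a couple of matching and one non-matching file.
mkdir -p rename_demo
cd rename_demo
touch XmacTmas_a XmacTmas_b other.txt

# Dry run: the echo prints each mv command instead of executing it.
find . -maxdepth 1 -name 'XmacTmas*' -exec echo mv {} {}_dbp1 \;
cd ..
```

Note that substituting {} inside a larger argument ({}_dbp1) is a GNU find behavior; strictly POSIX finds only guarantee replacement when {} stands alone.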
I want to list files whose names contain only blank spaces, nothing else.
e.g. a file created using the command touch " ".
This will create a file whose name is 3 blank spaces.
I am using the following command, but it's not listing the desired files.
ls -lart | grep '[^a-zA-Z0-9]'
Do not parse ls. Instead, you can use find with regular expressions:
find . -regextype posix-extended -regex '^.*/ *$'
Since man find says This is a match on the whole path, not a search, we can just say (credits to John Kugelman):
find . -regextype posix-extended -regex '.*/ *'
This will find files whose names consist entirely of spaces.
You can list it the same way you would any file: by name, or with a pattern that matches only it.
ls -lart "   "
The pattern requires shopt -s extglob in bash (other shells may have something similar), and it must be left unquoted so that the shell actually expands it. One possibility:
ls -lart +([[:space:]])
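The find approach can be seen in action like so (a sketch assuming GNU find for -regextype; the sample files are created in a scratch directory):

```shell
mkdir -p blank_demo
cd blank_demo
touch "   " normal.txt          # one name of three spaces, one ordinary name

# Whole-path regex match: keep only entries whose filename part is all spaces.
find . -regextype posix-extended -regex '.*/ *'
cd ..
```

On BSD/macOS find, the equivalent is the -E flag instead of -regextype posix-extended.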
I wanted to write a command that recursively fetches, within a folder, all filenames containing a particular text. Suppose my folder contains many files, two of them being largest_pallindrome_subsequence_1.cpp and largest_pallindrome_subsequence_2.cpp. Now I want to find files which have "sub" in them, so the search should return the 2 .cpp files mentioned above.
I also want to be able to restrict the search to a particular extension, say .txt or .cpp.
I tried using grep --include=\*{.cpp} -rnw . -e "sub" but this does not work for me.
You can do:
find ./ -name "*sub*"
or:
find ./ | grep "sub"
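Both constraints from the question (the substring and the extension) can be combined in a single glob (a sketch; the sample files echo the question):

```shell
mkdir -p sub_demo
cd sub_demo
touch largest_pallindrome_subsequence_1.cpp largest_pallindrome_subsequence_2.cpp notes.txt

# Match the substring and the extension in one -name pattern.
find . -name '*sub*.cpp'
cd ..
```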
When I use ls . (/bin/ls) it returns a list of files.
When "." contains directories and I redirect the output with ls . > tmp.txt,
the file contains many symbols like those below:
[1m[36m010202E[39;49m[0m
[1m[36m031403C[39;49m[0m
Directory names are 010202E and 031403C
This txt file can be read by less, but not by vi or other editors like TextWrangler.
How can I avoid this problem?
I know there is a way to delete those characters after making "tmp.txt".
It is likely that there is an alias that makes ls colorize its output; those symbols are ANSI escape sequences. Try ls --color=none instead.
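If the escape sequences have already landed in a file, they can be stripped afterwards (a sketch assuming GNU sed, whose \x1b escape denotes the ESC character; the sample line imitates the output shown in the question):

```shell
# Simulate a colored listing captured in a file (the escape sequences are fabricated).
printf '\033[1m\033[36m010202E\033[39;49m\033[0m\n' > tmp.txt

# Remove ANSI SGR sequences (ESC [ ... m) so editors can open the file cleanly.
sed 's/\x1b\[[0-9;]*m//g' tmp.txt > clean.txt
cat clean.txt
```

Alternatively, command ls . > tmp.txt bypasses any ls alias in the first place.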
I have hundreds of files where I need to change a portion of its text.
For example, I want to replace every instance of "http://" with "rtmp://" .
The files have the .txt extension and are spread across several folders and subfolders.
I am basically looking for a way/script that goes through every single folder/subfolder and every single file, and if it finds the occurrence of "http" inside that file, replaces it with "rtmp".
You can do this with a combination of find and sed:
find . -type f -name \*.txt -exec sed -i.bak 's|http://|rtmp://|g' {} +
This will create backups of each file. I suggest you check a few to make sure it did what you want, then you can delete them using
find . -name \*.bak -delete
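Here is the whole round trip in a scratch tree (a sketch; the paths and file contents are invented, and the -i.bak form assumes GNU sed):

```shell
# Build a small tree with a .txt file containing a URL.
mkdir -p replace_demo/sub
printf 'http://example.com/stream\n' > replace_demo/sub/a.txt

# In-place replace, keeping a .bak backup of each changed file.
find replace_demo -type f -name '*.txt' -exec sed -i.bak 's|http://|rtmp://|g' {} +

cat replace_demo/sub/a.txt        # now contains rtmp://example.com/stream
ls replace_demo/sub               # a.txt plus its a.txt.bak backup
```

The `|` delimiter avoids having to escape the slashes in the URLs.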
Here's a zsh function I use to do this:
change () {
    from=$1
    shift
    to=$1
    shift
    for file in $*
    do
        perl -i.bak -p -e "s{$from}{$to}g;" $file
        echo "Changing $from to $to in $file"
    done
}
It makes use of the nice Perl mechanism to create a backup file and modify the nominated file. You can use the above to iterate through files thus:
zsh$ change http:// rtmp:// **/*.html
or just put it in a trivial #!/bin/zsh script (I just use zsh for the powerful globbing)