I would like to rename a file according to a specific pattern that appears within the file. Say I have a unique pattern that starts with "XmacTmas". I would like to use this pattern to rename the file, with an additional suffix like "_dbp1".
Right now my file name is "xxo1" and I want "XmacTmas_dbp1".
How can I do this for thousands of files with some script?
Thanks
find . -name 'XmacTmas*' -exec echo mv {} {}_dbp1 \;
find the files of interest and execute the command after replacing {} with each found filename.
Escape the ;. Without the \, the shell would treat the ; as a command separator and find would never see its -exec terminator.
If only files in the current directory are needed (no recursion into subdirectories), add -maxdepth 1 before -name (or any other of find's numerous options).
If the output looks right, remove the echo to perform the actual rename.
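Note that the find above matches on file *names*, while the question says the pattern is inside the file's *contents*. If the pattern really lives in the contents, a sketch along these lines could work. The token shape XmacTmas[A-Za-z0-9]* is an assumption; adjust it to your real pattern, and note that files yielding the same token will collide, which mv -i will catch.

```shell
# Sketch: rename every regular file in a directory after the first
# "XmacTmas..." token found in its contents, plus a "_dbp1" suffix.
# Assumes GNU/BSD grep for -o; the token shape is a guess.
rename_by_pattern() {
    dir=$1
    for f in "$dir"/*; do
        [ -f "$f" ] || continue
        # first token starting with XmacTmas in the file's contents
        pat=$(grep -o 'XmacTmas[A-Za-z0-9]*' "$f" | head -n 1)
        if [ -n "$pat" ]; then
            mv -i "$f" "$dir/${pat}_dbp1"   # -i: ask before overwriting
        fi
    done
}
```

Call it as rename_by_pattern /path/to/files, and test it on a copy of a few files first.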
I have a list of 50 names that look like this:
O8-E7
O8-F2
O8-F6
O8-F8
O8-H2
O9-A5
O9-B8
O9-D8
O9-E2
O9-F5
O9-H12
S37-A5
S37-B11
S37-B12
S37-C12
S37-D12
S37-E8
S37-G2
I want to look inside a specific directory for all the subdirectories whose name contains one of these elements.
For example, the directory Sample_S37-G2-from-Specimen-001 would be a match.
Inside those subdirectories, there is a file called accepted_hits.bam (unfortunately named the same way in all of them). I want to find these files and copy them into a single folder, with the name of the sample subdirectory that they came from.
For example, I would copy the accepted_hits.bam file from the subdirectory Sample_S37-G2-from-Specimen-001 to the new_dir as S37-G2_accepted_hits.bam
I tried using find, but it's not working and I don't really understand why.
cat sample.list | while read FILENAME; do find /path/to/sampleDirectories -name "$FILENAME" -exec cp '{}' new_dir\; done
Any ideas? Thanks!
You are looking for dirs whose names are exactly the same as the lines in your input.
The first improvement would be using wildcards:
cat sample.list | while read FILENAME; do
    find /path/to/sampleDirectories -name "*${FILENAME}*" -exec cp '{}' new_dir \;
done
Your new problem is that you will now be matching the directories themselves, not files. You want to find the files named accepted_hits.bam inside those directories.
So your next try would be parsing the output of
find /path/to/sampleDirectories -name accepted_hits.bam | grep "${FILENAME}"
but you do not want to call find for each entry in sample.list.
You need to start with 1 find command and get the relevant subdirs from it.
A complication is that you want the matched substring from the original path in your destination file name. Look at the grep options -o and -f, they help!
find /path/to/sampleDirectories -name accepted_hits.bam | while read orgfile; do
    matched_part=$(echo "${orgfile}" | grep -of sample.list)
    if [ -n "${matched_part}" ]; then
        cp "${orgfile}" new_dir/"${matched_part}"_accepted_hits.bam
    fi
done
This will only work when your sample.list has no extra spaces. When you have spaces and cannot change the file, you need to copy/parse sample.list into another, cleaned-up file first.
When one of your 50 entries in sample.list is a substring of "accepted_hits.bam", you need to do some extra work.
Edit: if [ -n "${matched_part}" ] was missing the $.
Try using egrep with alternation:
build a text file with a single line of patterns: (pat1|pat2|pat3)
call find to list all of the regular files
use egrep to select the ones based on the patterns in the pattern file
awk 'BEGIN { printf("(") } FNR==1 {printf("%s", $0)} FNR>1 {printf("|%s", $0)} END{printf(")\n") } ' sample.list > t.sed
find /path/to/sampleDirectories -type f | egrep -f t.sed > filelist
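The filelist from the last step still needs the copy-and-rename pass. A sketch of that follow-up, assuming t.sed and filelist as built above and an already-created new_dir: egrep -o pulls the matched sample name back out of each path so it can prefix the destination name (head -n 1 guards against a path matching more than once).

```shell
# Copy each file from filelist into new_dir, prefixed with the sample
# name that the pattern file t.sed matches in its path.
copy_matched() {
    while read -r orgfile; do
        sample=$(printf '%s\n' "$orgfile" | egrep -o -f t.sed | head -n 1)
        if [ -n "$sample" ]; then
            cp "$orgfile" "new_dir/${sample}_accepted_hits.bam"
        fi
    done < filelist
}
```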
I wanted to write a command that recursively searches a folder for all filenames that contain a particular text. Suppose my folder contains a lot of files, two of them being largest_pallindrome_subsequence_1.cpp and largest_pallindrome_subsequence_2.cpp. Now I want to find files which have sub in the name, so the search should return the 2 .cpp files mentioned above.
The thing is that I also want to restrict the search to a particular extension, say .txt or .cpp.
I tried using grep --include=\*{.cpp} -rnw . -e "sub" but this does not work for me.
You can do:
find ./ -name "*sub*"
or:
find ./ | grep "sub"
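The extension part of the question folds straight into the same -name pattern; several extensions can be combined with -o (logical OR) inside escaped parentheses:

```shell
# Only .cpp files whose names contain "sub":
find ./ -name "*sub*.cpp"

# Either .cpp or .txt, grouped with \( ... \) and -o:
find ./ \( -name "*sub*.cpp" -o -name "*sub*.txt" \)
```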
Using cat command as follows we can display content of multiple files on screen
cat file1 file2 file3
But if a directory contains more than 20 files and I want the content of all of them displayed on the screen, how can I do that without naming every file on the cat command line as above?
You can use the * character to match all the files in your current directory.
cat * will display the content of all the files.
If you want to display only files with the .txt extension, you can use cat *.txt; or if you want all the files whose names start with "file", as in your example, you can use cat file*
If it's just one level of subdirectory, use cat * */*
Otherwise,
find . -type f -exec cat {} \;
which means run the find command, to search the current directory (.) for all ordinary files (-type f). For each file found, run the application (-exec) cat, with the current file name as a parameter (the {} is a placeholder for the filename). The escaped semicolon is required to terminate the -exec clause.
I also found it useful to print each filename before its content:
find ./ -type f -print0 | xargs -0 tail -n +1
When tail gets more than one file it prints a ==> filename <== header before each one, and the -print0/-0 pair keeps filenames with spaces intact. It will go through all subdirectories as well.
Have you tried this command?
grep . *
It's not suitable for large files but works for /sys or /proc, if this is what you meant to see.
You could use awk too. Let's say we need to print the content of all the text files in a directory some-directory:
awk '{print}' some-directory/*.txt
If you want to do more than just one command per file, a for loop gives you more flexibility. For example, to print each filename followed by its contents:
for file in parent_dir/*.file_extension; do echo "$file"; cat "$file"; echo; done
I know the file I'm looking for begins with a date, for example 20131111, and I know it ends in .log, but I don't know the full file name.
What is a unix command that would let me see all files beginning with (or containing) this date and ending with .log?
Like this, for example:
find /certain/path -type f -name "20131111*.log"
-type f - just files.
-name "20131111*.log" files whose name starts with 20131111 and ends with log.
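For the "containing" part of the question, widen the glob so the date can appear anywhere in the name; -iname additionally ignores case (e.g. .LOG vs .log):

```shell
# Date anywhere in the name (replace . with your search root):
find . -type f -name "*20131111*.log"

# Same, but case-insensitive, so .LOG files match too:
find . -type f -iname "*20131111*.log"
```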
I have hundreds of files where I need to change a portion of its text.
For example, I want to replace every instance of "http://" with "rtmp://" .
The files have the .txt extension and are spread across several folders and subfolders.
I am basically looking for a way/script that goes through every single folder/subfolder and every single file, and if it finds "http://" inside a file, replaces it with "rtmp://".
You can do this with a combination of find and sed:
find . -type f -name \*.txt -exec sed -i.bak 's|http://|rtmp://|g' {} +
This will create a .bak backup of each file. I suggest you check a few to make sure it did what you want, then you can delete the backups using
find . -name \*.bak -delete
Here's a zsh function I use to do this:
change () {
    from=$1
    shift
    to=$1
    shift
    for file in "$@"
    do
        perl -i.bak -p -e "s{$from}{$to}g;" "$file"
        echo "Changing $from to $to in $file"
    done
}
It makes use of the nice Perl mechanism to create a backup file and modify the nominated file. You can use the above to iterate through files thus:
zsh$ change http:// rtmp:// **/*.html
or just put it in a trivial #!/bin/zsh script (I just use zsh for the powerful globbing)