I've created this very simple batch file for the sake of testing a concept I'm hoping to utilize. I need to recursively delete all of one type of file except in folders with a specific name. Here's my code:
:recur
FOR /f %%a IN ('DIR /b') DO (
    IF EXIST %%a\NUL (
        IF ["%%a" NEQ "subtest2"] (
            ECHO %%a
            CD %%a
            CALL :recur
            CD ..
        )
    )
    COPY "*.cfm" "*_copy.cfm"
    REM DEL "*_copy*.cfm"
)
Right now I'm just testing using copy instead of delete. Basically, this should create a copy of all the .cfm files except in the folder "subtest2". Right now it's recursively making the copies everywhere, including subtest2. How do I stop this?
The structure of my base directory is:
TestFolder
---subtest1
------test.pdf
------test.txt
------test.cfm
---subtest2
------test.pdf
------test.txt
------test.cfm
---test.pdf
---test.txt
---test.cfm
---recur.bat
The square brackets are not part of the IF syntax; if present, they simply become part of the strings that are compared (the same is true of the enclosing quotes). As written, the left operand is ["%%a" and the right is "subtest2"], so the comparison can never match. Remove the square brackets and it will work (assuming there are no other problems).
Here is a simple method to accomplish your goal. I've prefixed the DEL command with ECHO for testing purposes:
for /r /d %%F in (*) do echo %%F\|findstr /liv "\\subtest2\\" >nul && echo del "%%F\*.cfm"
The FOR /R /D simply recurses all folders. The full path of each folder is piped into a FINDSTR command that looks for paths that do not contain a \subtest2 folder. The ECHO DEL command is only executed if the \subtest2\ folder is not found in the path.
Remove the last ECHO when you have confirmed the command gives the correct results.
Change %%F to %F if you want to run the command on the command line instead of in a batch file.
for f in `find . -path YOURDIR -prune -o -print`
do
    rm "$f"    # or whatever you want to do to each file
done
The find command in backticks finds all files but skips (-prune) the directory you want to ignore. Then in the body of the loop you nuke the files. You can do even better with
find . -path YOURDIR -prune -o -print0 | xargs -0 rm -f
with no need for the loop. DISCLAIMER: I haven't tested it, so you may want to start by adapting it with cp instead of rm.
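Applied to the layout in the question, a sketch of the same -prune idea that also copes with odd file names, using cp per the disclaimer (the _copy suffix is taken from the question; untested on your tree):
find . -type d -name subtest2 -prune -o -type f -name '*.cfm' -print0 |
while IFS= read -r -d '' f; do
    cp "$f" "${f%.cfm}_copy.cfm"    # swap cp for rm once the output looks right
done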
You can try this:
@echo off&setlocal
for /r /d %%i in (*) do (
    pushd "%%i"
    echo(%%i|findstr /ri "\\subtest2$" || COPY "*.cfm" "*_copy.cfm"
    popd
)
I want to index some files on my NAS after moving them into the correct folder.
My command is something like:
find *.$ext -exec mv "{}" $path \; -exec synoindex -a $(echo $path)$(basename "{}") \;
The first part works: all files with the $ext extension are moved to the destination $path.
But the second part, which is supposed to index these files in their new $path folder, does not work.
This is strange because:
- {} contains the right value => the complete old path of each file processed. To make sure of that, I added a third part which only does -exec echo {} \;
- Executed separately, $(echo $path)$(basename "{}") works after replacing {} with one real value taken as an example => it gives the complete new path => the syntax is correct
- Executed separately, synoindex -a $(echo $path)$(basename "{}") works after replacing {} with one real value taken as an example => the command is correct
Thanks for any ideas.
Regards,
Your command substitutions $(echo $path) and $(basename "{}") are executed by your shell before find is executed.
And you don't need to echo the $path variable. You could execute a small shell script instead:
find . -type f -name "*.$ext" -exec sh -c '
    targetpath=$1; shift    # get the value of $path
    for file; do
        mv -i "$file" "$targetpath"
        synoindex -a "$targetpath/${file##*/}"
    done
' sh "$path" {} +
This starts find in the current directory (.), searching for regular files (-type f) whose names end with the extension $ext (-name "*.$ext"), and executes a small shell script, passing the $path variable as the first argument to the script. The following arguments are the file paths found by find.
The parameter expansion ${file##*/} removes the longest prefix */ from the file and the result is the basename.
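As a quick illustration of that expansion (the path here is made up):
file=/home/sam/media/video.mp4
echo "${file##*/}"     # prints: video.mp4  (longest '*/' prefix removed)
basename "$file"       # same result, but forks an extra process per file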
If your $path variable already contains a trailing slash /, then omit the / after $targetpath.
I want to write a script that adds '0' at the end of the file names that don't already have it.
This is what I wrote:
#!/bin/bash
for file in $1
do
    echo $file
    ls $file | grep "\0$"
    if ["$?"="1"]
    then
    fi
done
I don't know how to target the files in a way that lets me rename them.
for file in *[!0]; do mv "$file" "${file}0"; done
For each name that does not end 0, rename it so it does. Note that this handles names with spaces etc in them.
I want to give the script a directory, and it will rename the files in it that do not end in 0. How can I use this in a way that lets me tell the script which directory to work with?
So, make the trivial necessary changes, working with a single directory (and not rejecting the command line if more than one directory is specified; just quietly ignoring the extras):
for file in "${1:?}"/*[!0]; do mv "$file" "${file}0"; done
The "${1:?}" notation ensures that $1 is set and is not empty, generating an error message if it isn't. You could alternatively write "${1:-.}" instead; that would work on the current directory instead of a remote directory. The glob then generates the list of file names in that directory that do not end with a 0 and renames them so that they do. If you have Bash, you can use shopt -s nullglob you won't run into problems if there are no files without the 0 suffix in the directory.
You can generalize to handle any number of arguments (all supposed to be directories, defaulting to the current directory if no directory is specified):
for dir in "${@:-.}"
do
for file in "$dir"/*[!0]; do mv "$file" "${file}0"; done
done
Or (forcing directories):
for dir in "${@:-.}"
do
(cd "$dir" && for file in *[!0]; do mv "$file" "${file}0"; done)
done
This has the merit of reporting which arguments are not directories, or are inaccessible directories.
There are endless variations of this sort that could be made; some of them might even be useful.
Now I want to do the same, but instead of files ending with '0', the script should rename files that do not end with '.0' so that they do.
This is slightly trickier because of the revised ending. Simply using *[!.][!0] is insufficient. For example, if the list of files includes 30, x.0, x0, z.9, and z1, then echo *[!.][!0] only lists z1, omitting 30, x0 and z.9, which also do not end with .0 and so should be renamed too.
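You can reproduce this quickly (file names taken from the example above):
mkdir demo && cd demo
touch 30 x.0 x0 z.9 z1
echo *[!.][!0]          # prints just: z1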
I'd probably use something like this instead:
for dir in "${@:-.}"
do
    (
        cd "$dir" &&
        for file in *
        do
            case "$file" in
            (*.0) : skip it;;
            (*) mv "$file" "${file}0";;
            esac
        done
    )
done
The other alternative lists more glob patterns:
for dir in "${@:-.}"
do
(cd "$dir" && for file in *[!.][!0] *.[!0] *[!.]0; do mv "$file" "${file}0"; done)
done
Note that this rapidly gets a lot trickier if you want to look for files not ending .00: there would be 7 glob expressions (but the case variant would work equally straightforwardly), and shopt -s nullglob becomes increasingly important (or you need [ -f "$file" ] && mv "$file" "${file}.0" instead of the simpler move command).
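To illustrate why nullglob matters, here is what happens in a directory with no matching files (a bash sketch):
cd "$(mktemp -d)"                          # an empty scratch directory
shopt -u nullglob
for f in *[!0]; do echo "got: $f"; done    # got: *[!0]  (the literal pattern)
shopt -s nullglob
for f in *[!0]; do echo "got: $f"; done    # loop body never runs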
How do I list only directories that contain certain files? I am running on a Solaris box. For example, I want to list the sub-directories of directory ABC that contain files that end with .out, .dat and .log.
Thanks
Something along these lines might work out for you:
find ABC/ \( -name "*.out" -o -name "*.dat" -o -name "*.log" \) -print | while read f
do
echo "${f%/*}"
done | sort -u
The sort -u bit could be just uniq instead, but either should work.
Should work on bash or ksh. Probably not so much on /bin/sh - you'd have to replace the variable expansion with something like echo "${f}" | sed -e 's;/[^/]*$;;' or something else that would strip off the last component of the path. dirname "${f}" would be good for that, but I don't recall if Solaris includes that utility...
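Putting that sed suggestion together, a loop-free variant that should be /bin/sh-safe (untested on Solaris):
find ABC/ \( -name "*.out" -o -name "*.dat" -o -name "*.log" \) -print |
sed -e 's;/[^/]*$;;' |
sort -u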
I have a directory customer with many customer sub-directories in it.
Now I want to add two lines to each process_config file within the customer directory where they are not already present.
For example:
/home/sam/customer/a1/na/process_config.txt
/home/sam/customer/p1/emea/process_config.txt
and so on.
Is this possible with a single command, like find and sed?
With a simple for loop :
for file in /home/sam/customer/*/*/process_config.txt; do
    printf "one line\nanother line\n" >> "$file"
done
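If the lines should only be appended where they are not already present, as the question suggests, a grep guard works (a sketch; "one line" is a placeholder):
for file in /home/sam/customer/*/*/process_config.txt; do
    # -q quiet, -x whole-line match, -F fixed string (not a regex)
    grep -qxF "one line" "$file" || printf "one line\nanother line\n" >> "$file"
done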
find /home/sam/customer -name 'process_config.txt' -exec DoYourAddWithSedAwkEchoOrWhatever {} \;
find gives you the possibility to select each wanted file.
The -exec option runs your command on that file.
{} is the file name (the full name) in this case.
Use \; as the end of the command for each iteration (other commands could be chained with the standard behaviour of ;, e.g. -exec echo 'line1' >> {} ; echo "line2" >> {} \;).
sed, awk or echo, as in the sample, can modify the file.
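One caveat: a bare >> or ; in an -exec command line is interpreted by your shell, not by find, so a working append wraps the command in sh -c (a sketch with placeholder lines):
find /home/sam/customer -name 'process_config.txt' \
    -exec sh -c 'printf "one line\nanother line\n" >> "$1"' sh {} \;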
How do I rename all the files in one directory using the mv command? The directory has thousands of files, and the requirement is to change the last character of each file name to some specific character. Example: the files
abc.txt
asdf.txt
zxc.txt
...
ab_.txt
asd.txt
should change to
ab_.txt
asd_.txt
zx_.txt
...
ab_.txt
as_.txt
You have to watch out for name collisions but this should work okay:
for i in *.txt ; do
    j=$(echo "$i" | sed 's/..txt$/_.txt/')
    echo mv \"$i\" \"$j\"
    #mv "$i" "$j"
done
after you uncomment the mv (I left it commented so you could see what it does safely). The quotes are for handling files with spaces (evil, vile things in my opinion :-).
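If you want the loop to detect those collisions instead of silently overwriting, a guarded variant might look like this (a sketch):
for i in *.txt ; do
    j=$(echo "$i" | sed 's/..txt$/_.txt/')
    [ "$i" = "$j" ] && continue             # already ends in _.txt
    if [ -e "$j" ]; then
        echo "skipping $i: $j already exists"
    else
        mv "$i" "$j"
    fi
done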
If all files end in ".txt", you can use mmv (Multiple Move) for that:
mmv "*[a-z].txt" "#1_.txt"
Plus: mmv will tell you when this generates a collision (in your example: abc.txt becomes ab_.txt which already exists) before any file is renamed.
Note that you must quote the file names, else the shell will expand the list before mmv sees it (but mmv will usually catch this mistake, too).
If your files all have a .txt suffix, I suggest the following script:
for i in *.txt
do
    r=$(basename "$i" .txt | sed 's/.$//')
    mv "$i" "${r}_.txt"
done
Is it a definite requirement that you use the mv command?
The perl rename utility was written for this sort of thing. It's standard for debian-based linux distributions, but according to this page it can be added really easily to any other.
If it's already there (or if you install it) you can do:
rename -v 's/.\.txt$/_\.txt/' *.txt
The page included above has some basic info on regex and things if it's needed.
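The perl rename also accepts -n (no-act), handy for a dry run before committing:
rename -n 's/.\.txt$/_.txt/' *.txt    # show the renames without doing them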
Find should be more efficient than for file in *.txt, which expands all of your 1000 files into a long list of command line parameters. Example (updated to use bash replacement approach):
find . \( -type d ! -name . -prune \) -o \( -name "*.txt" -print \) | while read file
do
    mv "$file" "${file%%?.txt}_.txt"
done
I'm not sure if this will work with thousands of files, but in bash:
for i in *.txt; do
    j=`echo "$i" | sed 's/.\.txt$/_.txt/'`
    mv "$i" "$j"
done
You can use bash's ${parameter%%word} operator thusly:
for FILE in *.txt; do
    mv "$FILE" "${FILE%%?.txt}_.txt"
done
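To see the expansion in isolation (the name is made up):
FILE=abc.txt
echo "${FILE%%?.txt}"         # prints: ab      (one character plus .txt removed)
echo "${FILE%%?.txt}_.txt"    # prints: ab_.txt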