Zsh filename expansion over multiple directories recursively - zsh

Problem: I have a directory $BASE, and in this directory (and/or any of the directories below it) there are zero or more directory entries matching the pattern *.x. I want to loop over all of these entries, using foreach rather than find $BASE -name '*.x'. I have zsh 5.3.
My current approach goes like this:
foreach f ($BASE/*.x(N) $BASE/**/*.x(N))
  # ... do something with $f
end
Is there a way to write the glob pattern more uniformly (without repeating *.x)?

$BASE/**/*.x(N) alone is sufficient. ** already matches zero or more directories, so the pattern covers $BASE/*.x as well.
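A quick sanity check of that claim, sketched in bash (whose globstar ** behaves the same way here as zsh's; the temporary tree is made up for illustration, and nullglob stands in for zsh's (N) qualifier):

```shell
#!/usr/bin/env bash
# Build a throwaway tree with *.x entries at several depths.
BASE=$(mktemp -d)
mkdir -p "$BASE/sub/deeper"
touch "$BASE/a.x" "$BASE/sub/b.x" "$BASE/sub/deeper/c.x"

shopt -s globstar nullglob   # globstar: ** recurses; nullglob mimics zsh's (N)
count=0
for f in "$BASE"/**/*.x; do
  count=$((count + 1))
done
echo "$count"   # 3 -- **/ matched zero directories for a.x, so no second pattern is needed
```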

What is the difference between find and grep?

What is the difference between these commands:
find . -type f -name '*txt*'
and
find . -type f | grep 'txt'
I tried running both and there is a difference, but I want to know why.
The major difference is that find searches for files and directories using filters, while grep searches for a pattern inside a file (or inside other input, such as a process listing).
find is a command for locating files and folders using filters such as size, access time, and modification time.
The find command lists all of the files within a directory and its sub-directories that match a set of filters. This command is most commonly used to find all of the files that have a certain name.
To find all of the files named theFile.txt in your current directory and all of its sub-directories, enter:
find . -name theFile.txt -print
To look in your current directory and its sub-directories for all of the files that end in the extension .txt , enter:
find . -name "*.txt" -print
GREP :(Globally search a Regular Expression and Print)
Searches files for a specified string or expression.
Grep searches for lines containing a specified pattern and, by default, writes them to the standard output.
grep myText theFile.txt
Result: grep will print each line containing the word myText.
In your first example, you are using the find utility to list the filenames of regular files where the filename includes the string txt.
In your second example, you are using the find utility to list the filenames of regular files, and feeding the resulting names via a pipe to the grep utility, which searches its input (a list of filenames, one per line) for the string txt. Each time that string is found, the corresponding line (a filename) is printed.
When a path contains txt in a directory name, the second command will find a match. If you do not want to match paths like txtfiles/allfiles.tgz and transactions/txtelevisions/bigscreen.jpg, use the first.
The difference between the two is that in the first case, find is looking for files whose name (just name) matches the pattern.
In the second case, find is looking for all files of type 'f' and outputting their relative paths as strings. That result gets piped to grep, which filters the input strings to those matching the pattern. The pattern 'txt' will filter the filepath results for the pattern. Importantly, the second case will include filepaths that match anywhere in the path, not just in the filename. The first case will not do that.
The first command lists files having txt in their name.
The second command lists files having txt anywhere in their path: grep here filters the list of filenames produced by find, it does not look inside the files' contents.
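The distinction the answers describe can be checked directly (throwaway tree, file names made up for illustration):

```shell
base=$(mktemp -d)
cd "$base" || exit 1
mkdir txtfiles
touch txtfiles/archive.tgz notes.txt readme.md

# Match on the *name* only: finds just notes.txt.
by_name=$(find . -type f -name '*txt*' | wc -l)

# Match anywhere in the *path*: also catches txtfiles/archive.tgz.
by_path=$(find . -type f | grep -c 'txt')

echo "name matches: $by_name, path matches: $by_path"
```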

recursive user defined function in makefile runs out of stack memory

I have a function that returns all the files in a directory.
# Returns all files in folder recursively that match pattern
#
# $(call rwildcard, folder,pattern)
rwildcard=$(foreach d,$(wildcard $1*),$(call rwildcard,$d/,$2) $(filter $(subst *,%,$2),$d))
Argument 1, i.e. folder, is the path of the folder in which to search for files recursively, and it is user-provided.
If this argument is "/", the build runs out of stack memory and crashes.
Is there a way to prevent this, besides filtering out "/" as an argument?
Note: I'm using Cygwin.
I suspect you aren't making any progress in the inner rwildcard. If it matches "." every time, are you stuck in a loop?
Can you use another tool to get a list of files?
r := $(shell find $(dir) -type f -name \*$(likethis)\*)
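The find-based replacement can be exercised from a plain shell first (the temporary tree and the *.c pattern are made up for illustration; in the makefile it corresponds to the $(shell find ...) line above). Unlike the recursive $(call), find walks the tree iteratively, so there is no make recursion depth to exhaust:

```shell
dir=$(mktemp -d)
mkdir -p "$dir/a/b"
touch "$dir/a/one.c" "$dir/a/b/two.c" "$dir/readme.md"

# find recurses on its own; only the two .c files match the name filter.
matches=$(find "$dir" -type f -name '*.c' | wc -l)
echo "$matches"   # 2
```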

In R: Is there a comparable function to Unix's "find"?

I want a version of this in R.
find . \( -type d -name "example_folder_*" \) -prune -print > directory.csv
The reason:
I am receiving a directory that contains a large number of files and subdirectories. I want to know where all folders whose names have the format "example_folder_*" are located.
R has a function called list.dirs() that returns a vector of all directories under (and including) its argument. I don't think there is an equivalent of -prune, but once you have the directory-tree vector, it is easy to filter with standard R tools.
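For reference, what -prune buys you in the original command can be seen in a shell (throwaway tree, names made up): matched directories are printed but never descended into.

```shell
base=$(mktemp -d)
mkdir -p "$base/example_folder_a/inner" "$base/other"

# -prune stops find from descending into matched directories,
# so example_folder_a is printed once and inner is never visited.
pruned=$(find "$base" \( -type d -name 'example_folder_*' \) -prune -print | wc -l)
echo "$pruned"   # 1
```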

Use of find in unix on strange file/directory names [duplicate]

I'm a beginner scripter, writing scripts in tcsh and csh (these are what is taught in my course).
I'm writing a script which uses find to collect directory paths.
This is the relevant part of the script:
set list = (`find $PATH -type d`)
It works fine unless a file or directory is named something like:
#fi##lename&& or −filename or :−,?!drectoryanem!-``
I couldn't handle these special characters, so I changed the find command to:
set list = ("`find $PATH -type d`")
but neither of these works. When I want to use the paths from the list in this next part of the script:
foreach i ($list:q)
foreach file (`find "$i" -maxdepth 1 -type f`)
....
end
end
it can't handle these special file names, so I get many errors like: find: −."!filename: No such file or directory
I worked it out.
It had to be written this way:
set subor = ("`find "'"$i"'" -type f -maxdepth 1`")
Now the special characters in the filenames are no longer a problem,
and in:
foreach j ($subor:q)
quoting it this way also handles the white-space characters in the file names.

UNIX C Shell Scripting. Copying files and adding extension

I'm trying to write a script that copies files from one directory to another and adds a .bak extension to them. I'm having a hard time figuring out how to add the extension.
foreach file ($argv[1]/*)
cp $file $argv[2]
end
Making a bunch of assumptions (mainly that the outline of your script is valid C shell syntax, and that spaces in file names are not an issue), then you probably need to use the basename command:
foreach file ($argv[1]/*)
cp $file $argv[2]/`basename $file`.bak
end
The basename command removes the pathname, so the files will be copied precisely to the directory named by $argv[2]. If you're looking to retain directory hierarchies too, you have to work a fair bit harder.
$1 and $2 are the arguments (directories) to the script (this alternative is Bourne/POSIX shell, not csh):
for f in "$1"/*
do
  fname=$(basename "$f")
  cp "$f" "$2/$fname.bak"
done
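The sh version can be exercised with a pair of throwaway directories (paths made up for illustration):

```shell
src=$(mktemp -d)
dst=$(mktemp -d)
touch "$src/a.txt" "$src/b.txt"

# Copy everything from src to dst, appending .bak to each basename.
for f in "$src"/*; do
  cp "$f" "$dst/$(basename "$f").bak"
done

ls "$dst"   # a.txt.bak  b.txt.bak
```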
