While I can build a pathname e.g.
(make-pathname :directory '(:RELATIVE "dir" "subdir" "subsubdir"))
how do I get back subsubdir from a pathname like this (assuming it is a directory)? I need to extract the last directory component from a pathname, just as this Unix command does:
$ basename /usr/local/share/
share
See the Common Lisp HyperSpec, the Filenames Dictionary.
(first (last (pathname-directory some-pathname)))
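A quick REPL check, using the pathname built in the question, confirms the result:
(first (last (pathname-directory
              (make-pathname :directory '(:relative "dir" "subdir" "subsubdir")))))
;; => "subsubdir"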
Let's say I have a directory with product inventories that are saved per day:
$ ls *.csv
2014_01_01.csv
2014_01_02.csv
...
Is there a glob pattern that will only grab the newest file, or do I need to chain it with other commands? Basically I'm just looking to do what would amount to a LIMIT 1 based on the filename sort.
Assuming your shell is bash, ksh93 or zsh, and your files have the same naming convention as the example in your question:
files=( *.csv )
printf "The newest file is %s\n" "${files[-1]}"
Since the date in the filenames is in a format that sorts naturally, storing all of them in an array and taking the last element gives you the newest one (and conversely the first element is the oldest).
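For example, to print the oldest one as well (note that bash and ksh93 index arrays from 0, while zsh indexes from 1 by default):
printf 'The oldest file is %s\n' "${files[0]}"   # use ${files[1]} in zsh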
What is the difference between these commands:
find . -type f -name '*txt*'
and
find . -type f | grep 'txt'
I tried running both, and the output differs, but I want to know why.
The major difference is that find searches for files and directories using filters, while grep searches for a pattern inside a file or in a stream of input (such as the output of another command).
find is a command for searching for file(s) and folder(s) using filters such as size, access time, and modification time.
The find command lists all of the files within a directory and its sub-directories that match a set of filters. This command is most commonly used to find all of the files that have a certain name.
To find all of the files named theFile.txt in your current directory and all of its sub-directories, enter:
find . -name theFile.txt -print
To look in your current directory and its sub-directories for all of the files that end in the extension .txt, enter:
find . -name "*.txt" -print
grep (globally search a regular expression and print):
Searches files for a specified string or expression.
Grep searches for lines containing a specified pattern and, by default, writes them to the standard output.
grep myText theFile.txt
Result: grep will print each line containing the word myText.
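For example, with a hypothetical theFile.txt containing two lines:
$ cat theFile.txt
nothing to see here
this line has myText in it
$ grep myText theFile.txt
this line has myText in it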
In your first example, you are using the find utility to list the filenames of regular files where the filename includes the string txt.
In your second example, you are using the find utility to list the filenames of regular files and feeding the resulting filenames via a pipe to the grep utility, which searches its input (a list of filenames, one per line) for the string txt. Each time the string is found, the corresponding line (a filename) is printed.
When you have a path with txt in a directory name, the second command will find a match. When you do not want to match paths like txtfiles/allfiles.tgz and transactions/txtelevisions/bigscreen.jpg, you will want to use the first.
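A minimal demonstration of that difference, using hypothetical file names:
$ mkdir -p txtfiles && touch txtfiles/allfiles.tgz notes.txt
$ find . -type f -name '*txt*'
./notes.txt
$ find . -type f | grep 'txt'
./notes.txt
./txtfiles/allfiles.tgz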
The difference between the two is that in the first case, find is looking for files whose name (just name) matches the pattern.
In the second case, find is looking for all files of type 'f' and outputting their relative paths as strings. That result gets piped to grep, which filters the input to lines matching the pattern 'txt'. Importantly, the second case will match filepaths that contain the pattern anywhere in the path, not just in the filename; the first case will not.
The first command will display files having txt in their name.
Whereas the second command will print every path containing txt anywhere in it, since grep here filters the list of paths produced by find rather than searching the files' contents.
I have a function that returns all the files in a directory.
# Returns all files in folder recursively that match pattern
#
# $(call rwildcard,folder,pattern)
rwildcard=$(foreach d,$(wildcard $1*),$(call rwildcard,$d/,$2) $(filter $(subst *,%,$2),$d))
Argument 1, i.e. folder, is the path of the folder in which to search for files recursively, and it is user-provided.
If this argument is "/", the call runs out of memory and crashes.
Is there a way to prevent this, besides filtering out "/" as an argument?
Note: I'm using Cygwin.
I suspect you aren't making any progress in the inner rwildcard. If it matches "." every time, are you stuck in a loop?
Can you use another tool to get a list of files?
r := $(shell find $(dir) -type f -name \*$(likethis)\*)
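A hypothetical usage sketch, assuming dir and likethis are set by the caller (the values below are made up):
dir      := src
likethis := .mk
r := $(shell find $(dir) -type f -name \*$(likethis)\*)
One caveat on Cygwin: make sure Cygwin's find comes before Windows' own find.exe in PATH, since the two are incompatible.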
I want to recursively remove all ID3v1/ID3v2 tags from my MP3 files with eyeD3.
I can't get it to work.
The slim documentation doesn't say much about the PATH argument and its usage.
usage: eyeD3 [-h] [--version] [--exclude PATTERN]
[--plugins] [--plugin NAME]
[PATH [PATH ...]]
How do I use the PATH argument correctly?
According to the online documentation
The PATH argument(s) along with optional usage of --exclude are used to tell eyeD3 what files or directories to process.
Directories are searched recursively and every file encountered is passed to the plugin until no more files are found.
Are you sure the PATH argument doesn't work this way?
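For example, to strip tags under a (hypothetical) Music/ directory recursively, pass the directory as PATH together with the classic plugin's option for removing both tag versions (check eyeD3 --help for the exact option in your version):
$ eyeD3 --remove-all Music/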
I want to expand a glob in zsh into only the filenames, rather than paths, of the matching files. I know that I can do something like this:
paths=(/some/path/blah*blah*blah)
typeset -a filenames
for i ({1..$#paths}); do
filenames[$i]=$(basename $paths[$i])
done
But I think there must be a better way.
There is a two-step process that uses parameter modifiers:
paths=(/some/path/blah*blah*blah)
filenames=(${paths[@]:t})
but you can also apply the :t modifier directly to the glob itself:
filenames=( /some/path/blah*blah*blah(:t) )
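The :t modifier can also be combined with ordinary glob qualifiers inside the same parentheses, e.g. to restrict the match to regular files and suppress the error when nothing matches:
filenames=( /some/path/blah*blah*blah(.N:t) )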