I'm trying to list some files, but I only want the file names, in order of file date. I've tried a few commands but they don't seem to work.
I know that using this code I can list only the file names:
ls -f *
And I know that using this command I can list the files sorted by date:
ls -ltr *
So I have tried using this command to list the file names only, sorted by file date, but it doesn't sort by date:
ls -ltr -f *
That last command simply lists the file names, but sorted by file name, not date.
Any ideas how I can do this with a simple ls command?
FYI, once I get this working, my ultimate goal is to list only the 10 most recently created file names, using something like this:
ls -ltr -f * | tail -10
You could try the following command:
ls -ltr | awk '{ print $9 }' | tail -n +2
It extracts the file names (the ninth column) from the ls -ltr output; tail -n +2 skips the leading "total" line.
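If your ultimate goal is the 10 most recently modified names, you can chain one more tail onto the same pipeline (a sketch that assumes file names contain no spaces, since awk splits columns on whitespace):
# skip the "total" line, then keep the last 10 names (newest last because of -tr)
ls -ltr | awk '{ print $9 }' | tail -n +2 | tail -n 10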
According to the manual for ls, the -f flag is used to:
-f do not sort, enable -aU, disable -ls --color
One way of extracting only files would be:
ls -p | grep -v /
The option -p appends a '/' to each directory name, so we can grep for lines not containing a '/'.
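For example, in a directory containing a mix of files and subdirectories (hypothetical names), only the plain file names survive the filter:
$ ls -p
docs/  notes.txt  src/  todo.md
$ ls -p | grep -v /
notes.txt
todo.md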
To extract the 10 most recently modified file names, run ls on the directory itself (no *, so that directories are not expanded into their contents):
ls -ptr | grep -v / | tail -10
I have been trying to count all the empty folders in a certain directory, sub-directories excluded. I used the code below, but I don't know how to tell apart empty folders from folders that contain files.
echo "$(ls -l | egrep -l $1/* | wc -l)"
$1 will be the user argument on the command line, for example: ./script.sh ~/Desktop/backups/March2021.
Edit: I'm not allowed to use the find command.
Edit 2: ls -l * | awk '/total 0/{print last}{last=$0}' | wc -l runs, but it counts all folders, whether a directory contains files and data or is empty.
What about this:
grep -v "." *
I mean the following: "." matches any character (I'm not sure the syntax is correct), so basically you are looking for every file that does not contain any character at all.
You should not parse ls (it breaks on directory or file names that contain newlines), so this solution is only for the assignment:
ls -d */ */* | cut -d/ -f1 | sort | uniq -u | wc -l
Explanation:
ls -d */ shows all directories. This is combined with ls -d */* which will also show contents in the directories.
The resulting output will show all directories.
Empty directories will be shown only once, so you want to look for unique lines.
With the cut you only see the name of the directory, not the files in the directory.
The sort could be skipped here, since ls gives sorted output. When you change the solution to find (the next assignment?), the sort might be needed.
uniq can look for lines that occur once. The flag -u removes all lines that have duplicates, so it will show the unique lines in the output.
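As a quick sanity check, here is the pipeline run on a hypothetical layout with one empty and one non-empty directory:
$ mkdir -p demo/empty demo/full && touch demo/full/file.txt
$ cd demo
$ ls -d */ */* | cut -d/ -f1 | sort | uniq -u | wc -l
1
Only "empty" appears exactly once in the cut output, so the count is 1.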
I want to create a shell function that takes 2 parameters, a directory name and a file name. It should search for the file name starting in the given directory and continue the search in all subdirectories of that directory. The output should be every parent directory where the file name has been found, sorted by the size of the matched file.
Help would be much appreciated, thanks.
Not sure which Unix you are asking about, but for Linux, and probably for common Unix systems:
find <directory> -name "<filename>" -ls | sort -k 7 -n -r | awk '{print $NF}' | xargs -n 1 dirname
sort => sort by file size (the 7th column of find -ls output is the file size)
awk => print the last field, i.e. the full path of the matched file
dirname => get the parent directory of the matched file
Example:
# Find parent directory of all types.h under /usr/include, sorted by file size in desc order
$ find /usr/include/ -name "types.h" -ls | sort -k 7 -n -r | awk '{print $NF}' | xargs -n 1 dirname
/usr/include/x86_64-linux-gnu/bits
/usr/include/x86_64-linux-gnu/sys
/usr/include/c++/7/parallel
/usr/include/rpc
/usr/include/linux/sched
/usr/include/linux/iio
/usr/include/linux
/usr/include/asm-generic
/usr/include/x86_64-linux-gnu/asm
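Since the question asks for a function taking the two parameters, the same pipeline can be wrapped up like this (a minimal sketch; the function name is illustrative, and like the original pipeline it assumes paths without spaces):
# $1 = directory to start in, $2 = file name to search for
find_file_sorted_by_size() {
    find "$1" -name "$2" -ls | sort -k 7 -n -r | awk '{print $NF}' | xargs -n 1 dirname
}
# usage: find_file_sorted_by_size /usr/include types.h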
My UNIX is quite rusty, but what I want is to search a location in UNIX for files containing the two separate words "generate" and "process" in their text, with both words on the SAME LINE.
I know there are script files that contain details of the script author and its function noted at the top of the script. For example, the start of one such script contains the following;
function: generate sales overtime process
I have tried things like the following (again my UNIX is rusty)
grep -rwl . -e "generate" | "process"
But this gives errors such as unrecognised commands.
What I want is a list of Progress files like:
salesovertime1.p
salestravel1.p
salesexpenses1.p
salesexpenses2.p
If you are searching for files then find is appropriate, and you can then filter with grep:
find . -exec grep -H generate {} \; 2> /dev/null | grep process
This recursively finds every file from the current directory, keeps the lines that contain the word "generate", and then filters again for the lines that also contain the word "process". File names are included in the output with option -H (GNU grep), and error messages are redirected to /dev/null.
Now if you want filenames only, you can use :
find . -exec grep -H generate {} \; 2> /dev/null | grep process | cut -f1 -d\:
If you want "generate" and "process" in the same file but on different lines, the following will do it:
grep process `find . -exec grep -H generate {} \; 2> /dev/null | cut -f1 -d\:` 2> /dev/null | cut -f1 -d\:
The inner find generates the list of files that contain "generate"; that list is then grepped for "process", the file names are extracted with cut, and errors are redirected again.
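If recursive grep is available (GNU grep), the same-line requirement can also be expressed directly with an alternation, listing matching file names only; --include restricts the search to Progress source files (a sketch, assuming GNU grep):
grep -rlE --include='*.p' 'generate.*process|process.*generate' .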
I have thousands of files named "DOCUMENT.PDF" and I want to rename them based on a numeric identifier in the path. Unfortunately, I don't seem to have access to the rename command.
Three examples:
/000/000/002/605/950/ÐÐ-02605950-00001/DOCUMENT.PDF
/000/000/002/591/945/ÐÐ-02591945-00002/DOCUMENT.PDF
/000/000/002/573/780/ÐÐ-02573780-00002/DOCUMENT.PDF
To be renamed as, without changing their parent directory:
2605950.pdf
2591945.pdf
2573780.pdf
Use a for loop, and then use the mv command:
for file in */DOCUMENT.PDF   # adjust the glob to match your directory depth
do
    # the second-to-last path component holds the identifier; keep the middle dash-separated part
    num=$(awk -F "/" '{print $(NF-1)}' <<< "$file" | cut -d "-" -f2)
    mv "$file" "$(dirname "$file")/$num.pdf"   # rename in place, keeping the parent directory
done
You could do this with globstar in Bash 4.0+:
cd _your_base_dir_
shopt -s globstar
for file in **/DOCUMENT.PDF; do # loop picks only DOCUMENT.PDF files
# here, we assume that the serial number is extracted from the 7th component in the directory path - change it according to your need
# and we don't strip out the leading zero in the serial number
new_name=$(dirname "$file")/$(cut -f7 -d/ <<< "$file" | cut -f2 -d-).pdf
echo "Renaming $file to $new_name"
# mv "$file" "$new_name" # uncomment after verifying
done
See this related post that talks about a similar problem: How to recursively traverse a directory tree and find only files?
I've got a file, "index.txt", that contains a few lines of book titles like:
book 1.pdf
book 1.opf
book2.epub
book3.opf
and so on; 1 title = 1 line.
I'd like to do this thing in bash:
rm -rf from $dir IF $file IS NOT in index.txt
How can I do this?
You can use the below command.
find <dir> -name "*" | grep -vFf index.txt | xargs rm -rf
find : lists all the files under the specified directory, recursively
grep -vFf : does an inverse, fixed-string grep with the patterns read from the file (so it lists the entries that are not found in index.txt)
xargs rm -rf : deletes each file that is not found in the list; the deletion list is the output of the previous grep command
Edit:
When file names contain whitespace, use the command below.
find <dir> -name "*" | grep -vFf index.txt | sed 's/^/"/;s/$/"/' | xargs rm -rf
sed adds quotes around each file name.
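Alternatively, if GNU xargs is available, you can skip the quoting step by treating each input line as a single argument (a sketch; it still assumes no newlines inside file names):
find <dir> -type f | grep -vFf index.txt | xargs -d '\n' rm -f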
You can remove the files listed in a txt file using:
#!/bin/sh
# read each line of list.txt and delete the matching file under $dir
while read -r line
do
    rm -- "$dir/$line"
done < list.txt
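A hypothetical invocation, calling the script remove_listed.sh and passing dir through the environment so the script can see it:
$ dir=~/books sh remove_listed.sh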