Fetch file from particular date - unix

I'm new to Unix, and I'm trying to fetch the files in a directory whose modification date is today's date.
I tried the command below, but it fetches other files instead:
cd /path/; ls -lrt abc833* | grep `date '+%d'`
I also want to try something like the following, but it doesn't work:
for file in /path/abc833*
if [ `$file | awk '{print $7}'` =`date '+%d'`];then
echo $file
fi
done
What's the mistake?

Why not use find?
find ./ -ctime -1
returns all files whose status changed within the last 24 hours (-ctime 1, without the minus, would match files changed between 24 and 48 hours ago). Your backticks around date were fine, by the way; with the more readable $(...) form the command reads:
cd /path/; ls -lrt abc833* | grep $(date '+%d')
The real problem is that %d only gives the day of the month; today that would be "28", and grep would then also match "20:28" in the time column, or the 28th of last month.
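Assuming GNU ls, you can avoid both false matches by printing and matching the full ISO date instead:
cd /path/; ls -l --time-style=+%F abc833* | grep "$(date '+%F')"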
EDIT:
Your second approach is full of syntax errors: the loop is missing its do, and [ needs spaces around its arguments. You are also trying to execute each file and pass its output to awk; you forgot an ls -l. And the same %d pitfall applies there. stat -c %Y <file> gives you the modification time of a file in seconds since the epoch, which may be easier to compare.
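For example, here is a minimal fixed version of your loop (a sketch, using GNU date -r, which reads a file's mtime directly, instead of going through stat -c %Y; comparing the full %F date also avoids the %d pitfall):
for file in /path/abc833*
do
    # date -r prints the file's modification time; %F formats it as YYYY-MM-DD
    if [ "$(date -r "$file" '+%F')" = "$(date '+%F')" ]; then
        echo "$file"
    fi
done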

cd /path/; ls -lrt abc833* | tr -s ' ' | grep " $(date '+%d') " | cut -d' ' -f9
(There is no total line to strip with sed 1d when ls is given file arguments rather than a directory, and the day has to be matched before cut throws away everything but the filename.)

You can do all the logic in awk (note that strftime requires gawk; the +0 forces a numeric comparison, because strftime("%d") zero-pads the day while the day column of ls does not):
ls -ltr | awk '{date=strftime("%d")+0; if($7==date){f="";for(i=9;i<=NF;i++){f=f" "$i} print f}}'
If your file name does not contain spaces it can be simplified:
ls -ltr | awk '{date=strftime("%d")+0; if($7==date){print $9}}'
And if instead of the file name you want the whole line from ls -ltr:
ls -ltr | awk '{date=strftime("%d")+0; if($7==date){print $0}}'
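If your awk is not gawk and lacks strftime, you can pass today's day in from the shell instead (a sketch; the +0 again forces a numeric comparison):
ls -ltr | awk -v d="$(date '+%d')" '$7+0 == d+0 {print $9}'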

Related

Get the latest modified file and count lines from modified file

I'm trying to write a simple script that finds the most recently modified file in a directory and then counts the lines of that file. Below is part of my script.
Note: the $stg variable is created for another directory
echo "LATEST LOG = $(ls -ltr $stg/abc/foo***.txt | awk '{print $5, $6, $7, $8, $9}' | tail -n1)"
echo "COUNT = $(wc -l $stg/abc/foo***.txt | tail -n1)"
The problem is that the "COUNT" part does not match the LATEST LOG; it seems to be counting a different log file.
Any suggestions? Thank you!
Suggestion: store the result of the latest log in a variable, and reuse it in the count. Like this:
#!/bin/bash
latestlogline=$(ls -ltr foo*.txt | awk '{print $5, $6, $7, $8, $9}' | tail -n 1)
latestlogfilename=$(echo "$latestlogline" | awk 'NF>1{print $NF}')
echo "LATEST LOG = $latestlogline"
echo "COUNT = $(wc -l "$latestlogfilename")"
Details:
latestlogline: your code exactly, to extract the complete line of information
latestlogfilename: just the filename. wc -l expects a filename, so extract it from your first command.
Then just echo the variable values.
As commented before, *** is exactly the same thing as *.
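For what it's worth, a simpler variant of the same idea is to let ls -tr do all the sorting and skip the column parsing entirely (a sketch, assuming the $stg/abc directory from the question):
latest=$(ls -tr "$stg"/abc/foo*.txt | tail -n 1)
echo "LATEST LOG = $latest"
echo "COUNT = $(wc -l < "$latest")"
Reading the file on stdin (wc -l < "$latest") also keeps the filename out of wc's output.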

Finding and sorting files by size in Unix

I want to create a shell function that takes two parameters, a directory name and a file name. It should search for the file name starting in the given directory, then continue the search through all subdirectories of that directory. The output should be every parent directory in which the file name was found, sorted by the size of the matching file.
Help would be much appreciated, thanks.
Not sure which Unix you're asking about, but for Linux, and probably most common Unix systems:
find <directory> -name "<filename>" -ls | sort -k 7 -n -r | awk '{print $NF}' | xargs -n 1 dirname
sort => sort by file size (the 7th column of find -ls output is the file size)
awk => print the file's full path (the last field)
dirname => get the parent directory of each matched file
Example:
# Find parent directory of all types.h under /usr/include, sorted by file size in desc order
$ find /usr/include/ -name "types.h" -ls | sort -k 7 -n -r | awk '{print $NF}' | xargs -n 1 dirname
/usr/include/x86_64-linux-gnu/bits
/usr/include/x86_64-linux-gnu/sys
/usr/include/c++/7/parallel
/usr/include/rpc
/usr/include/linux/sched
/usr/include/linux/iio
/usr/include/linux
/usr/include/asm-generic
/usr/include/x86_64-linux-gnu/asm
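Since the question asked for a function taking the two parameters, the pipeline can be wrapped up directly. A sketch (the function name findsorted is made up):
findsorted() {
    # $1 = directory to start in, $2 = file name to search for
    find "$1" -name "$2" -ls | sort -k 7 -n -r | awk '{print $NF}' | xargs -n 1 dirname
}
# usage: findsorted /usr/include types.h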

LS - sort by date, display only file name

I'm trying to list some files, but I only want the file names, in order of file date. I've tried a few commands but they don't seem to work.
I know that using this code I can list only the file names:
ls -f *
And I know that using this command I can list the files sorted by date:
ls -ltr *
So I have tried using this command to list the file names only, sorted by file date, but it doesn't sort by date:
ls -ltr -f *
That last command simply lists the file names, but sorted by file name, not date.
Any ideas how I can do this with a simple ls command?
FYI, once I get this working my ultimate goal is to only list the most recently created 10 file names, using something like this:
ls -ltr -f * | tail -10
You could try the following command:
ls -ltr | awk '{ print $9 }' | tail -n +2
It extracts the file names from the ls -ltr command.
According to the manual for ls, the -f flag is used to:
-f do not sort, enable -aU, disable -ls --color
One way of extracting only files would be,
ls -p | grep -v /
Since the -p option appends a '/' to each directory name, we can grep for lines not containing a '/'.
To extract the 10 most recently modified files you could do the following:
ls -ptr * | grep -v / | tail -10
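If you only need the names of the 10 most recently modified files, the same idea also works with -t (newest first) and head:
ls -tp | grep -v / | head -n 10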

Passing output from one command as argument to another [duplicate]

I have this for loop:
for i in `ls -1 access.log*`; do tail $i |awk {'print $4'} |cut -d: -f 1 |grep - $i > $i.output; done
ls will give access.log, access.log.1, access.log.2 etc.
tail will give me the last line of each file, which looks like: 192.168.1.23 - - [08/Oct/2010:14:05:04 +0300] etc. etc. etc
awk+cut will extract the date (08/Oct/2010 - but different in each access.log), which will allow me to grep for it and redirect the output to a separate file.
But I cannot seem to pass the output of awk+cut to grep.
The reason for all this is that those access logs include lines with more than one date (06/Oct, 07/Oct, 08/Oct) and I just need the lines with the most recent date.
How can I achieve this?
Thank you.
As a sidenote, tail displays the last 10 lines by default; use tail -n 1 if you want just the last one.
A possible solution would be to grep this way:
for i in access.log*; do grep "$(tail -n 1 "$i" | awk '{print $4}' | cut -d: -f 1 | sed 's/\[/\\[/')" "$i" > "$i.output"; done
Why don't you break it up into steps?
for file in access.log*
do
    # last line's 4th field, with the time of day stripped
    what=$(tail -n 1 "$file" | awk '{print $4}' | cut -d: -f 1)
    # -F searches for the literal string, so the leading '[' is not treated as regex syntax
    grep -F "$what" "$file" >> output
done
You shouldn't use ls that way. Also, ls -l gives you information you don't need. The -f option to grep will allow you to pipe the pattern to grep. Always quote variables that contain filenames.
for i in access.log*; do awk 'END {sub(":.*","",$4); print substr($4,2)}' "$i" | grep -f - "$i" > "$i.output"; done
I also eliminated tail and cut since AWK can do their jobs.
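To see what that END block extracts, you can run the substitution over a sample line like the one from the question:
echo '192.168.1.23 - - [08/Oct/2010:14:05:04 +0300]' | awk '{sub(":.*","",$4); print substr($4,2)}'
which prints 08/Oct/2010.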
Use xargs or command substitution (backticks).
man xargs
or see section 3.4.5, "Command substitution", of http://tldp.org/LDP/Bash-Beginners-Guide/html/sect_03_04.html
You can try:
grep "$(stuff to get piped over to be grep-ed)" file
I haven't tried this, but applied here it would look something like:
for i in access.log*; do grep "$(tail -n 1 "$i" | awk '{print $4}' | cut -d: -f 1 | sed 's/\[/\\[/')" "$i" > "$i.output"; done

zcat to grep with file name

ls -ltr|grep 'Mar 4'| awk '{print $9 }'|xargs zcat -fq |grep 12345
I'm currently using this command to list the records that contain my numeric string. How can I also get it to print the name of the file each match was found in?
Thanks.
Use zgrep.
By the way, what you're trying to do can also be done with find:
find -newermt 'Mar 4' -and ! -newermt 'Mar 5' -exec zgrep -l '12345' {} \;
If you use zgrep instead of zcat+grep (which does the same thing), it looks like this:
ls -ltr | grep 'Mar 4' | awk '{print $9}' | xargs zgrep 12345
Pass the -t option to xargs, causing it to print the command it is running (the zcat command, including the filename) before running it. The command is printed to stderr, so it will not interfere with your pipe.
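Applied to the pipeline from the question, that looks like this (the command xargs echoes to stderr shows exactly which files zcat is reading):
ls -ltr | grep 'Mar 4' | awk '{print $9}' | xargs -t zcat -fq | grep 12345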
