For a website I'm working on I want to be able to automatically update the "This page was last modified:" section in the footer as I'm doing my nightly git commit. Essentially I plan on writing a shell script to run at midnight each night which will do all of my general server maintenance. Most of these tasks I already know how to automate, but I have a file (footer.php) which is included in every page and displays the date the site was last updated. I want to be able to recursively look through my website and check the timestamp on every file, then if any of these were edited after the date in footer.php I want to update this date.
All I need is a UNIX command that will recursively iterate through my files and return ONLY the date of the last modification. I don't need file names or what changes were made, I just need to know a single day (and hopefully time) that the most recently updated file was changed.
I know using "ls -l" and "cut" I could iterate through every folder to do this, but I was hoping for a quicker-running and easier command, preferably a single-line shell command (possibly with a -R parameter).
The find prints each file's modification time as a Unix timestamp; sort them numerically and take the biggest. Converting that into whatever date format is wanted is left as an exercise for the reader:
find /path -type f -iname "*.php" -printf "%T@\n" | sort -n | tail -1
With GNU find you can also print a sortable, human-readable timestamp:
find /path -type f -iname "*.php" -printf "%T+\n" | sort | tail -1
Check the find man page to play with other -printf specifiers.
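To tie that back to the footer, here is a minimal sketch, assuming GNU find, date and sed, and assuming footer.php contains a literal "Last modified:" marker line (both of those are assumptions about your setup):
#!/bin/sh
# Newest modification time (seconds since the epoch) under the site root,
# excluding footer.php itself so the stamp doesn't feed back into the result.
latest=$(find /path/to/site -type f ! -name footer.php -printf "%T@\n" | sort -n | tail -1)
# Strip the fractional seconds so GNU date accepts it, then format it.
stamp=$(date -d "@${latest%.*}" '+%Y-%m-%d %H:%M')
# Rewrite the hypothetical marker line in footer.php in place.
sed -i "s/Last modified:.*/Last modified: $stamp/" /path/to/site/footer.php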
You might want to look at an inotify script that updates the footer every time any other file is modified, instead of scanning the whole file system for new updates each night.
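A rough sketch of that approach, assuming inotifywait from inotify-tools is installed (footer.php is excluded so its own update does not retrigger the watch; paths are placeholders):
#!/bin/sh
# Watch the site tree and stamp the footer whenever something changes.
inotifywait -m -r -e modify,create,move --exclude 'footer\.php' \
    --timefmt '%Y-%m-%d %H:%M' --format '%T' /path/to/site |
while IFS= read -r stamp; do
    sed -i "s/Last modified:.*/Last modified: $stamp/" /path/to/site/footer.php
done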
I try to open a group of files which were saved after a specific date using the following command
View /*/*log | grep 'Aug 30'
But I get this message:
Vim: Warning: Output is not to a terminal
and nothing happens.
Any suggestions?
You are effectively telling view to open all the log files. You then tell it to send its output not to the screen as normal, but to another command called grep.
You probably want to use find to generate a list of files and then tell view to open them. So, to find files changed within the last day (-mtime -1), you could use:
find /wherever/the/logs/are -name \*.log -mtime -1
Now, you want to edit those files, so pass the list to view:
view $(find /wherever/the/logs/are -name \*.log -mtime -1)
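If any of the log file names contain spaces, the $(...) substitution above will split them into separate words; a variant that avoids this by letting find start the editor itself (view still reads the terminal normally):
find /wherever/the/logs/are -name '*.log' -mtime -1 -exec view {} +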
I am currently working on a script to store/back up our old files, so that we have more space on our server. The script will run as a weekly cron job. It currently looks like this:
#!/bin/bash
currentDate=$(date '+%Y%m%d%T' | sed -e 's/://g')
find /Directory1/ -type f -mtime +90 | xargs tar cvf - | gzip > /Directory2/Backup$currentDate.tar.gz
find /Directory1/ -type f -mtime +90 -exec rm {} \;
The script first saves the current date + timestamp (without ":") in a variable. It then searches for files older than 90 days, tars them and finally gzips them into a file named "Backup$currentDate.tar.gz".
Then it's supposed to find the files again and remove them.
I do however have some issues here:
Directory1 consists of multiple directories. It does find the files and creates the gz file, but while some files are zipped properly (for instance /DirName1/DirName2/DirName3/File), others appear directly in the "root" dir. What could be the issue here?
Is there a way to tell the script to only create the gz file if files are found? Currently we get gz files even if nothing was found, leading to empty directories.
Can I somehow use the find output later on (store it in a variable?), so that the remove at the end really only targets the files found in the step before? If the third step took, let's say, an hour and the last step ran after it finished, it could potentially remove files that weren't older than 90 days before but are now, so they would never be backed up, but would still be deleted (highly unlikely, but not impossible).
If there's anything else you need to know, feel free to ask ^^
Best regards
I've "rephrased" your original code a bit. I don't have an AIX machine to test anything, so DO NOT cut and paste this. Using this code, you should be able to address your issues. To wit:
It makes a record of the files it intends to operate on ($BFILES).
This record can be used to check for empty tar files.
This record can be used to see why your find is producing "funny" output. It wouldn't surprise me to find that xargs hit a space character.
This record can be used to delete exactly the files archived.
As a child, I had a serious accident with xargs and have avoided it ever since. Maybe there is a safe version out there.
#!/bin/bash
# I don't have an AIX machine to test this, so exit immediately until
# someone can proof this code.
exit 1
currentDate=$(date '+%Y%m%d%T' | sed -e 's/://g')
BFILES=/tmp/Backup$currentDate.files
find /Directory1 -type f -mtime +90 -print > $BFILES
# Here is the time to proofread the file list, $BFILES
# The AIX tar page I read lists the '-L' option to take filenames from an
# input file.
#tar -c -v -L $BFILES -f - | gzip -9 > /Directory2/Backup$currentDate.tar.gz
# I've found xargs to be sketchy unless you are very careful about
# quoting. I would rather loop over the input file one well-quoted
# line at a time than use the faster, less safe xargs. But here it is.
#xargs rm < $BFILES
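As a sketch of the line-by-line loop mentioned in the comments (assuming none of the recorded paths contain newlines), the deletion step could read the same list back and do nothing when nothing was found:
# Delete exactly the files recorded in $BFILES, and only if any were found.
if [ -s "$BFILES" ]; then
    while IFS= read -r f; do
        rm -- "$f"
    done < "$BFILES"
fi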
I have the following line in a bash script:
find . -name "paramsFile.*" | xargs -n131072 cat > parameters.txt
I need to make sure the order the files are concatenated in does not change when I use this command. For example, if I run this command twice on the same set of paramsFile.*, parameters.txt should be the same both times. My question is, is this the case? And if it isn't, how can I make sure it is?
Thanks!
Edit: the same question goes for xargs: would that change how the files are fed to cat?
Edit2: as William Pursell pointed out, this question is actually about find. Does find always return files in the same order?
From the description in man cat:
The cat utility reads files sequentially, writing them to the standard output. The file operands are processed in command-line order. If file is a single dash ('-') or absent, cat reads from the standard input. If file is a UNIX domain socket, cat connects to it and then reads it until EOF. This complements the UNIX domain binding capability available in inetd(8).
So yes: as long as you pass the files to cat in the same order every time, you'll be OK.
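As for find, its output order follows the order of directory entries on disk, which is not guaranteed to be sorted or to match across machines; sorting the list before it reaches cat makes the result deterministic. A sketch (the second form assumes GNU find/sort/xargs and also copes with whitespace in names):
find . -name "paramsFile.*" -print | sort | xargs cat > parameters.txt
# or, robust against spaces and newlines in file names:
find . -name "paramsFile.*" -print0 | sort -z | xargs -0 cat > parameters.txt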
I have a query regarding the execution of a complex command in the makefile of the current system.
I am currently using the shell function in the makefile to execute the command. However, the command fails because it is a combination of many commands and its execution collects a huge amount of data. The makefile content is something like this:
variable=$(shell ls -lart | grep name | cut -d/ -f2- )
However, make fails with an execvp failure, since the file listing is huge and I need to parse all of it.
Please suggest ways to overcome this issue. Basically I would like to execute a complex command, assign its output to a makefile variable, and use that variable later in the program.
(This may take a few iterations.)
This looks like a limitation of the architecture, not a Make limitation. There are several ways to address it, but you must show us how you use variable, otherwise even if you succeed in constructing it, you might not be able to use it as you intend. Please show us the exact operations you intend to perform on variable.
For now I suggest you do a couple of experiments and tell us the results. First, try the assignment with a short list of files (e.g. three) to verify that the assignment does what you intend. Second, in the directory with many files, try:
variable=$(shell ls -lart | grep name)
to see whether the problem is in grep or cut.
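As a quick, rough check (run in the same directory the makefile uses), you can also measure how much data the pipeline produces and compare it with the kernel's argument-length limit:
ls -lart | grep name | cut -d/ -f2- | wc -c   # bytes the variable would hold
getconf ARG_MAX                               # upper bound on a single command line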
Rather than store the list of files in a variable, you can use shell functionality in the recipe to get the same result. It's a bit odd that you're flattening a recursive ls to get only the leaves and then running mkdir -p, which is really only useful if the parent directory doesn't exist, but if you know which depths you want (for example the current directory and all subdirectories one level down), you can do something like this:
directories:
	for path in ./*name* ./*/*name*; do \
		mkdir "/some/path/$$(basename "$$path")" || exit 1; \
	done
or even
	find . -name '*name*' -exec sh -c 'mkdir "/some/path/$$(basename "$$1")"' sh {} \;
How can I find files on a Unix server that were created or modified in the previous month?
For example, if the current month is Jul, then the list of files created/modified in Jun should be displayed.
One way is to execute this command:
ls -laR | grep monthName
where monthName could be Jan, Feb, Mar, and so on... (remember to change the working directory to the one you're interested in; also notice that this method is recursive, so all sub-directories will be inspected).
With this you also retrieve all the file permissions and so on...
I'm sure there are better ways (if they come to mind, I'll edit this post), but since I'm on a coffee break, this is the fastest I could find.
In order to find files modified in the previous month, you will need to use find with a set range, for example:
cd / (if you want to start from the root)
find . -type f -mtime +26 -mtime -56 -print
You should adjust your range to include the dates that you wish to include.
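If GNU find and GNU date are available (an assumption; the plain -mtime range above is more portable), the previous calendar month can be bracketed exactly with -newermt instead of counting days:
# First day of the previous month and first day of the current month
start=$(date -d "$(date +%Y-%m-15) -1 month" +%Y-%m-01)
end=$(date +%Y-%m-01)
find . -type f -newermt "$start" ! -newermt "$end" -print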
All the best to you!
monthToFind=`date -d "1 months ago" "+%Y-%m"`
find . -printf "%TY-%Tm %p\n" | egrep "^$monthToFind " | sed "s/^$monthToFind //g"
This will be slower than using a time range in find. But the time range is hard to determine, and quickly becomes invalid, possibly even while the command is executing.
Unfortunately this will miss files modified last month when they were also modified this month. I don't know of a way to determine these files.