I have a shell script that checks how many days old a file is. I ran stat -f "%m%t%Sm %N" "$file", but I want to store the result in a variable and then compare the current time with the file's creation time.
Assuming you're using bash, you can capture the output of commands with something like:
fdate=$(stat -f "%m%t%Sm %N" "$file")
and then do whatever you will with the results:
echo ${fdate}
That's assuming the command itself works in the first place. If it does, you can ignore the text below.
The GNU stat program uses -f to specify that you want to query the filesystem rather than a file, and the other options you have don't seem to make sense in the context of your question.
Using GNU stat, you can get the time since the last file update(1) as:
ageInSeconds=$(($(date -u +%s) - $(stat --printf "%Y" "$file")))
This subtracts the last modification time of the file from the current time (both expressed as seconds since the epoch) to give you the age in seconds.
To turn that into days, assuming you're not overly concerned about the possible error from leap seconds (an error of, at most, one part in about 15.7 million, or 0.000006%), you can just divide it by 86,400:
ageInDays=$((($(date -u +%s) - $(stat --printf "%Y" "$file")) / 86400))
(1) Note that, although stat purports to have a %W format specifier that gives the birth of the file, this doesn't always work (it returns zero). You could check that first if you're really interested in when the file was created rather than last updated but you may have to be prepared to accept the possibility the information is not available. I've used last modification time above since, frequently, it's used for things like detecting changes.
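Putting the pieces together, a minimal sketch of the whole check (assuming bash and GNU stat; the path and the 30-day threshold are just examples):

#!/bin/bash
file=/path/to/some/file                                            # example path
ageInDays=$(( ($(date -u +%s) - $(stat --printf "%Y" "$file")) / 86400 ))
if (( ageInDays > 30 )); then
    echo "$file was last modified more than 30 days ago"
fi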
I have a Unix ksh script that has been in daily use for years (kicked off at night by the crontab). Recently one function in the script has been behaving erratically in a way that never happened before. I tried various ways to find out why, but have had no success.
The function validates an input string, which is supposed to be a string of 10 numeric characters. The function checks if the string length is 10, and whether it contains any non-numeric characters:
#! /bin/ksh
# The function:
is_valid_id () {
    # Takes one argument, which is the ID being tested.
    if [[ $(print ${#1}) -ne 10 ]] || print "$1" | /usr/xpg4/bin/grep -q [^0-9] ; then
        return 1
    else
        return 0
    fi
}
cat $input_file | while read line ; do
    id=$(print $line | awk -F: '{print $5}')
    # Calling the function:
    is_valid_id $id
    stat=$?
    if [[ $stat -eq 1 ]] ; then
        print "The ID $id is invalid. Request rejected.\n" >> $ERRLOG
        continue
    else
        ...
    fi
done
The problem with the function is that, every night, out of scores or hundreds of requests, it flags the IDs in several requests as invalid. I visually inspected the input data and found that all the "invalid" IDs are actually strings of 10 numeric characters, as they should be. The error seems random, because it happens with only some of the requests. However, while the rejected requests persistently come back, it is consistently the same IDs that are picked out as invalid day after day.
I did the following:
The Unix machine had been running for almost a year and therefore might have needed a refresh. The system admin rebooted the machine at my request, but the problem persisted after the reboot.
I manually ran exactly the same two tests from the function at the command prompt, and the IDs that had been found invalid at night all tested as valid.
I know the same commands may behave differently when invoked manually or in a script. To see how the function behaves in a script, I ran the code excerpt above as a small script to reproduce the problem. And indeed, some (though not all) of the IDs found to be invalid at night are also found invalid by this small troubleshooting script.
I then modified that troubleshooting script to run the two tests one at a time, and found it is the /usr/xpg4/bin/grep -q [^0-9] test that erroneously finds some of the IDs as containing non-numeric character(s). Yet the IDs are all numeric characters, at least visually.
I checked whether there was any problem with the xpg4 grep command file (ls -l /usr/xpg4/bin/grep), to see if it had been put there recently. But its timestamp is from 2005 (this machine runs Solaris 10).
I know that the data comes from a central ERP system, into which data is entered from many locations using all kinds of terminal machines running all kinds of operating systems that support various character sets and encodings; the ERP system simply accepts them. Could characters from other encodings visually appear as numeric characters even though their encoded values are not what the /usr/xpg4/bin/grep command expects on our Unix machine? I tried the od (octal dump) command, but it did not help me much as I am not familiar with it. Maybe I need to know more about od to solve this problem.
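For reference, this is the sort of od invocation that exposes the actual bytes of an ID (a sketch; substitute one of the rejected IDs for the placeholder value):

id='0123456789'              # placeholder: paste one of the rejected IDs here
print "$id" | od -An -c      # each byte shown as a character
print "$id" | od -An -tx1    # each byte in hex; ASCII digits are 0x30-0x39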
My temporary work-around is omitting the /usr/xpg4/bin/grep -q [^0-9] test. But the problem has not been solved. What can I try next?
Your validity test function happens to be more complicated than it should be. E.g., why do you use a command substitution with print for ${#1} instead of using ${#1} directly? Next, forking grep to test for a non-digit is a slow and expensive operation. What about this equivalent function, 100% POSIX and blazingly fast:
is_valid_id () {
    # Takes one argument, which is the ID being tested.
    if test ${#1} -ne 10; then
        return 1              # ID length not exactly 10.
    fi
    case $1 in
    (*[!0-9]*) return 1;;     # ID contains a non-digit.
    (*)        return 0;;     # ID is exactly 10 digits.
    esac
}
Or even simpler, if you don't mind repeating yourself:
is_valid_id () {
    # Takes one argument, which is the ID being tested.
    case $1 in
    ([0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]) # 10 digits.
        return 0;;
    (*)
        return 1;;
    esac
}
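A quick sanity check of either version (the IDs below are made up):

is_valid_id 0123456789 && echo valid || echo invalid   # valid
is_valid_id 01234X6789 && echo valid || echo invalid   # invalid: contains a non-digit
is_valid_id 123456789  && echo valid || echo invalid   # invalid: only 9 characters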
This also avoids your unquoted use of a grep pattern, which is error-prone when the current directory contains matching one-character file names. Does this work better?
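That pitfall is easy to reproduce; here is a sketch you can run in a scratch directory (assuming bash or ksh, and any grep):

cd "$(mktemp -d)"
touch x                                                # a one-character, non-digit file name
echo abc | grep -q [^0-9] && echo match || echo no match
# The shell expands the unquoted [^0-9] to the file name "x", so grep searches
# for a literal "x" and reports "no match" even though "abc" is clearly non-numeric.
echo abc | grep -q '[^0-9]' && echo match || echo no match   # quoted: prints "match"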
I have a constantly updating huge log file (MainLog).
I want to create another file which is only the last n lines of the log file BUT also updating.
If I use:
tail -f MainLog > RecentLog
I get ALMOST what I want, except that RecentLog is written as MainLog data becomes available, and at any point it might contain only part of the last MainLog line.
How can I specify to tail that I only want it to write when a WHOLE line is available?
By default, tail outputs whole lines unless you use the -c switch to count characters. Something like
tail -n 20 -f MainLog > RecentLog
(substituting for "20" the number of existing lines you want initially copied into the second file) should work as you want.
But if it doesn't, it is possible that using grep to line-buffer your output will fix this condition. See this question.
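That suggestion would look something like this (a sketch, assuming GNU grep for the --line-buffered option; an empty pattern matches every line):

tail -n 20 -f MainLog | grep --line-buffered '' > RecentLog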
After many attempts, the only solution for multiple files that worked (fantastically well) for me is the fdlinecombine command. It's a small binary that reads multiple file descriptors and prints data to stdout linewise.
My use case is spawning multiple long-running ssh commands in the background and following their output, without having the lines garbled or interrupted in between.
I've seen a script that modifies the Unix $PATH, and in order to avoid duplicating items, it uses the following technique:
set path = ($path:q /some/new/path)
set path = ($path:q /another/directory)
set -f path = ($path:q)
I don't understand how this works...
The documentation for the "-f" flag says:
Disable file name generation
which doesn't make any sense to me. And what's this strange ":q"?
Thanks!
EDIT:
This Super User Question helped me understand that ":q" is a modifier.
And the tcsh man page explains it:
When the `:q' modifier is applied to
a substitution the variable will expand to multiple words
with each word separated by a blank and quoted to prevent
later command or filename substitution
Second Edit:
Actually, it seems that "-f" alone does the magic:
~$ set days = (Sunday Monday Tuesday Monday Sunday)
~$ echo $days
Sunday Monday Tuesday Monday Sunday
~$ set -f days = ($days)
~$ echo $days
Sunday Monday Tuesday
Still, I don't understand how this is a result of "Disable file name generation".
Disabling file name generation is usually needed when we encounter file names that contain *, ?, {}, etc. Care should be taken while handling these files so that we don't process a file name as a wildcard pattern. Create a file named stack* with vim stack*; later we shouldn't delete it with rm stack*, since all other files starting with stack would also get deleted. An alternative way to delete the file is to use quoting, as in rm "stack*". If required, file name generation can be enabled again with set +f in the shell.
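In a POSIX shell (sh, ksh, bash), the effect looks like this (a sketch; *.txt is just an example pattern):

set -f          # disable filename generation (globbing)
echo *.txt      # prints the literal string "*.txt"
set +f          # re-enable filename generation
echo *.txt      # expands to matching file names again (if any exist)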
Your confusion arises from the fact that you're reading the ksh manual, but you're using the tcsh shell. Tcsh syntax is very different from the vastly more common POSIX shell syntax.
The command set is built into the shell, so when you run set in tcsh you get an entirely different command from the set you would run in ksh.
From man tcsh:
set [-r] [-f|-l] name=(wordlist) ... (+)
...
If -f or -l are specified, set only unique words keeping their
order. -f prefers the first occurrence of a word, and -l the
last.
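Applied to the $PATH manipulation from the question, that behaviour means a re-assignment through set -f drops the later duplicates (a sketch; the directories are placeholders):

~$ set path = (/usr/bin /usr/local/bin /usr/bin)
~$ set -f path = ($path:q)
~$ echo $path
/usr/bin /usr/local/bin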
I am appending the standard output and error of a shell script's execution on a Unix box as shown below:
/home/mydir/shellScript.sh >> /home/mydir/shellScript.log 2>&1
Now I am wondering about a way to keep logs going back only about 30 days; otherwise the log file size will keep increasing.
I would appreciate it if anyone could provide recommendations.
This kind of thing is generally done with a tool such as logrotate.
For example, with Apache's logs, I've seen it used to:
Once per day, move the current file aside (to have one log file per day), gzipping the previous day's file
Delete archived files that were more than 1 week old
So, I suppose you might be able to use it to get what you're asking.
Is this a long-running script (e.g. a daemon)? Or does it do something and then exit quickly? You could dynamically build the log file's name based on today's date, so a new file gets generated whenever the date changes:
#!/bin/sh
now=`date +%F`
/home/mydir/shellScript.sh >> /home/mydir/shellScript-$now.log 2>&1
previous=`date --date='30 days ago' +%F`
rm -f /home/mydir/shellScript-$previous.log 2>&1
(added stale log removal).
Pascal MARTIN is correct - it is a simple matter to put a configuration file into /etc/logrotate.d, or add an entry onto the end of /etc/logrotate.conf, as logrotate is included stock in most UNIX systems. The configuration file is very easy to understand and takes roughly 5 minutes with the man page to grasp. I recommend it as the easiest and most maintainable solution.
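For example, a minimal entry dropped into /etc/logrotate.d (the file name there is arbitrary; this sketch uses the log path from the question and keeps about 30 days of compressed daily logs):

/home/mydir/shellScript.log {
    daily
    rotate 30
    compress
    missingok
    notifempty
}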
There's not a lot of context to your problem included.
I agree with both of the offered solutions.
I would also point you to my 2 rather long-winded ;-) discourses on naming and managing logfiles.
Bash piping output and input of a program
command line wisdom for 2 panel file manager user
I hope these help.
I like keeping my history files uncluttered. Since zsh has excellent history searching features, there is no need to save all the commands that I repeatedly use (e.g., finger, pwd, ls, etc.) multiple times. To strip the history file of all duplicate lines, I did sort .zhistory|uniq -du. Now, I'd like to write this back to the same file, so that if I simply put this in my .zshrc, every time I log in, my history is trimmed and clean. If I try sort .zhistory|uniq -du>.zhistory, the resulting file is empty! On the other hand, if I do sort .zhistory|uniq -du>tempfile, it writes to tempfile correctly. Any idea how I can write to the same file?
You might be able to use a variable:
file='.zhistory' && var=$(sort -u "$file") && echo "$var" > "$file"
The reason you can't write to the same file is that the redirection occurs first and truncates the file before the utility ever sees it.
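You can watch the truncation happen with a throwaway file (demo.txt is just a scratch name):

printf 'b\na\n' > demo.txt
sort demo.txt > demo.txt    # the shell truncates demo.txt before sort even runs
wc -c demo.txt              # reports 0 bytes: sort read an already-empty file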
You can prevent duplicate lines in the first place. Use setopt with one or more of the following settings (from man zshoptions):
HIST_EXPIRE_DUPS_FIRST
If the internal history needs to be trimmed to add the current
command line, setting this option will cause the oldest history
event that has a duplicate to be lost before losing a unique
event from the list. You should be sure to set the value of
HISTSIZE to a larger number than SAVEHIST in order to give you
some room for the duplicated events, otherwise this option will
behave just like HIST_IGNORE_ALL_DUPS once the history fills up
with unique events.
HIST_FIND_NO_DUPS
When searching for history entries in the line editor, do not
display duplicates of a line previously found, even if the
duplicates are not contiguous.
HIST_IGNORE_ALL_DUPS
If a new command line being added to the history list duplicates
an older one, the older command is removed from the list (even
if it is not the previous event).
HIST_IGNORE_DUPS (-h)
Do not enter command lines into the history list if they are
duplicates of the previous event.
HIST_SAVE_NO_DUPS
When writing out the history file, older commands that duplicate
newer ones are omitted.
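For example, to avoid storing duplicates in the first place, you might put something like this in ~/.zshrc (a sketch; pick whichever options match the behaviour you want):

setopt HIST_IGNORE_ALL_DUPS    # drop an older duplicate from the in-memory list
setopt HIST_SAVE_NO_DUPS       # omit older duplicates when writing the history file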
The program sponge can be useful for writing back to the same file you read.
(For the example's sake, pretend you don't know about sed -i.)
echo "say what again" > file
sed s/what/woot/ file > file
Too bad: file is now empty, and you have lost your content.
echo "say what again" > file
sed s/what/woot/ file | sponge file
does what you want
(Be careful not to write sponge > file or the file will be empty again.)
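Applied to the history-trimming pipeline from the question (assuming sponge from the moreutils package is installed):

sort .zhistory | uniq -du | sponge .zhistory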
The fact that I didn't have an answer to this question annoyed me sufficiently that I wrote one - call this inplace, make it executable, and put it on your path:
#! /bin/bash
BACKUP_EXT=
while getopts "b:" flag
do
    case "$flag" in
        b) BACKUP_EXT="$OPTARG" ;;   # -b EXT keeps a backup as filename.EXT
    esac
done
shift $((OPTIND - 1))
CMD="$1"
shift
for filename in "$@"
do
    # Run the command with the original file on stdin and a temp file on stdout,
    # then move the temp file over the original.
    TMP_FILE="$(mktemp -t)"
    bash -c "$CMD" <"$filename" >"$TMP_FILE"
    if [[ -n "$BACKUP_EXT" ]]
    then
        mv "$filename" "$filename.$BACKUP_EXT"
    fi
    mv "$TMP_FILE" "$filename"
done
You may now say:
inplace 'sort | uniq -du' .zhistory
Incidentally, there's a way to do that uniqification without having to sort - but that's an answer for another question!