I am appending the standard output and standard error of a shell script to a log file on a Unix box, as shown below:
/home/mydir/shellScript.sh >> /home/mydir/shellScript.log 2>&1
Now I am wondering how to keep logs going back only about 30 days; otherwise the log file will keep growing.
I would appreciate any recommendations.
This kind of thing is generally done with a tool such as logrotate.
For example, with Apache's logs, I've seen it used to:
Once per day, move the current file aside (to have one log file per day), gzipping the previous day's file
Delete archived files that were more than 1 week old
So, I suppose you might be able to use it to get what you're asking.
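For illustration, here is a minimal logrotate configuration along those lines, assuming the log path from the question and roughly 30 days of retention (it would go in its own file under /etc/logrotate.d):
# keep ~30 days of compressed daily logs for the script
/home/mydir/shellScript.log {
    daily
    rotate 30
    compress
    delaycompress
    missingok
    notifempty
}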
Is this a long-running script (e.g. a daemon)? Or does it do something and then exit quickly? You could dynamically build the log file's name based on today's date, so a new file gets generated any time the date changes:
#!/bin/sh
now=$(date +%F)
/home/mydir/shellScript.sh >> /home/mydir/shellScript-$now.log 2>&1
# remove the log from 30 days ago so only ~30 days are kept (GNU date)
previous=$(date --date='30 days ago' +%F)
rm -f /home/mydir/shellScript-$previous.log
(added stale log removal).
Pascal MARTIN is correct - it is a simple matter to put a configuration file into /etc/logrotate.d, or add an entry at the end of /etc/logrotate.conf, as logrotate comes stock with most UNIX systems. The configuration file is very easy to understand and takes roughly five minutes with the man page to pick up. I recommend it as the easiest and most maintainable solution.
There's not a lot of context to your problem included.
I agree with both of the offered solutions.
I would also point you to my 2 rather long-winded ;-) discourses on naming and managing logfiles.
Bash piping output and input of a program
command line wisdom for 2 panel file manager user
I hope these help.
Before anyone else checks, I am confident this is not a duplicate of the existing question on how to add a header in Unix to multiple files (that question is here: Adding header into multiple text files). This is more about optimisation of the solution I am currently using for the issue.
I have numerous directories in which I have over 20000 files and for each file I want to add the same header.
What I have been doing is:
sed -i '1ichr\tpos\tref\talt\treffrq\tinfo\trs\tpval\teffalt\tgene' *.txt
Now, this does work exactly as I want it to, but there have been a couple of issues.
First, this seems to be an extremely slow way of doing it, and it can take a pretty long time to get through all 20K+ files.
Second, and more frustratingly, my connection to the server has occasionally timed out during this long process, so the command never finishes and I end up with half the files having the header and half not. And if I simply started from the top again, a number of files would end up with the header twice, so I actually have to recreate them before I can add the header to everything in one pass.
So, what I am wondering is whether there is a better/quicker solution to this problem. The question I linked above seems like it would actually be slower (it appears to do more work per file, since it runs a loop), so it doesn't look like it would fix this.
Don't use -i. It confuses things when you get interrupted. Instead, use
mkdir -p ../output-dir
for file in *.txt; do
    sed '1ichr\tpos\tref\talt\treffrq\tinfo\trs\tpval\teffalt\tgene' "$file" > ../output-dir/"$file"
done
When you're done, you can rename the directories if you wish. This doesn't address the connection issue (ThoriumBR's suggestion of nohup is good for that), but when it happens you can recover state more easily.
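If a run does get cut off, a second pass only has to fill in whatever is missing from the output directory; a minimal sketch, reusing the same header line:
for file in *.txt; do
    # skip files already written by the earlier, interrupted run
    [ -e ../output-dir/"$file" ] && continue
    sed '1ichr\tpos\tref\talt\treffrq\tinfo\trs\tpval\teffalt\tgene' "$file" > ../output-dir/"$file"
done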
First, adding a header is slow. You have to move the entire file contents to add something at the start. Adding a trailer would be very fast.
Second, use nohup:
nohup - run a command immune to hangups, with output to a non-tty
Using nohup sed -i '1ichr\tpos\tref\talt\treffrq\tinfo\trs\tpval\teffalt\tgene' *.txt will keep the command running even if the server times you out; add a trailing & if you also want it in the background so you can keep working in the shell.
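A complete invocation might look something like this (the log file name here is just an illustration; without the redirection, nohup writes to nohup.out):
nohup sed -i '1ichr\tpos\tref\talt\treffrq\tinfo\trs\tpval\teffalt\tgene' *.txt > add-header.log 2>&1 &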
I use rsync to backup a few thousands of files and pipe the output to a file.
Given the number of files I'd like to see a list of only those transfers that had issues as well as a summary to show which completed.
So, using the -q flag nicely reports by exception, displaying only the errors.
Using --stats shows a helpful summary at the end.
The problem is that I cannot combine them because it appears that -q suppresses the stats output.
Any ideas welcome.
This did the trick for me:
rsync -azh --stats <source> <destination>
-a/--archive: archive mode; equals -rlptgoD (no -H,-A,-X)
-z/--compress: compress file data during the transfer
-h/--human-readable: output numbers in a human-readable format
--stats: give some file-transfer stats
Perhaps this will help someone else. In the end the only thing that worked was to swap the order of the output redirections, as suggested here.
So in my case it was simply a matter of redirecting as follows:
2>> /output.log >> /output.log
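Attached to an actual rsync invocation, that looks something like this (the flags, paths and log location are placeholders taken from the question):
rsync -aq --stats /source/ /destination/ 2>> /output.log >> /output.log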
I have a scenario in which I need to download files with the curl command, and I want my script to pause for some time before downloading the second one. I used the sleep command like this:
sleep 3m
but it is not working.
Any ideas?
Thanks in advance.
Make sure your text editor is putting only a \n for every new line, and not \r\n. The latter is typical if you are writing the script on Windows.
Use Notepad++ (Windows) and go to Edit | EOL Conversion | UNIX, then save it. If you are stuck with the file as it is, I have read here [talk.maemo.org/showthread.php?t=67836] that you can use tr -d "\r" < oldname.sh > newname.sh to remove the problem. If you want to see whether your script has the problem, use od -c yourscript.sh; a \r will occur before every \n.
Other problems I have seen it cause: cd /dir/dir fails with [cd: 1: can't cd to /dir/dir], or copying scriptfile.sh to newfilename produces a file called newfilenameX, where X is an invisible character (make sure you can delete it before trying this); if the file is on a network share, a Windows machine can see the character. Make sure the line you test with is not the last line of the script, or the test may succeed anyway.
Until I figured it out (I knew I had to ask Google for something that can manifest in various ways), I thought there was an issue with the Linux version I was using (sleep not working in a script???).
Are you sure you are using sleep the right way? Based on your description, you should be invoking it as:
sleep 180
Is this the way you are doing it?
You might also want to consider the wget command, as it has an explicit --wait flag, so you might avoid needing the loop in the first place.
while read -r urlname
do
    curl ..........${urlname}....
    sleep 180   # 180 seconds is 3 minutes
done < file_with_several_url_to_be_fetched
?
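If you do switch to wget as suggested above, the whole loop can collapse to a single command; a sketch reusing the same URL list file:
# fetch every URL in the list, pausing 180 seconds between downloads
wget --wait=180 -i file_with_several_url_to_be_fetched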
For my developer work I reside in the *nix shell environment pretty much all day, but I still can't seem to memorize the names and argument specifics of programs I don't use daily. I wonder how other 'casual amnesiacs' handle this. Do you maintain a big cheat sheet? Do you rehearse the emacs shortcuts when you take your weekly shower? Or is your desk covered in sticky notes?
Using bash_completion is one way of not having to remember the precise syntax of program arguments.
> svn [tab][tab]
--help checkout delete lock pdel propget revert
--version ci diff log pedit proplist rm
-h cleanup export ls pget propset status
add co help merge plist pset switch
annotate commit import mkdir praise remove unlock
blame copy info move propdel rename update
cat cp list mv propedit resolved
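If completion isn't already active for you, it usually just needs the bash_completion script sourced from ~/.bashrc; the exact path varies by distribution, a common one shown here:
# enable programmable completion if it is installed at this path
[ -f /etc/bash_completion ] && . /etc/bash_completion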
If I don't use a command regularly enough to remember what I want, I tend to just use --help or the man pages when I need to.
Or, if I'm lucky, I use CTRL+R and let bash's history search find when I last used it.
Eventually you just remember them, well the set that you use anyway. I used to maintain a README in my home directory when I was starting out but that disappeared many years ago.
One useful command is man -k, which you pass a word to; it returns a list of all commands whose man page summary contains that word.
'apropos' is also a very useful command. It will list all commands whose man page descriptions contain the keyword (it is essentially the same search as man -k).
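For example (the keyword is arbitrary and the output will vary by system):
man -k rename       # commands whose man page summary mentions "rename"
apropos rename      # the same search via apropos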
For a website I'm working on I want to be able to automatically update the "This page was last modified:" section in the footer as I'm doing my nightly git commit. Essentially I plan on writing a shell script to run at midnight each night which will do all of my general server maintenance. Most of these tasks I already know how to automate, but I have a file (footer.php) which is included in every page and displays the date the site was last updated. I want to be able to recursively look through my website and check the timestamp on every file, then if any of these were edited after the date in footer.php I want to update this date.
All I need is a UNIX command that will recursively iterate through my files and return ONLY the date of the last modification. I don't need file names or what changes were made, I just need to know a single day (and hopefully time) that the most recently updated file was changed.
I know using "ls -l" and "cut" I could iterate through every folder to do this, but I was hoping for a quicker-running and easier command. Preferably a single-line shell command (possibly with a -R parameter)
The find prints each file's modification time as a Unix timestamp, then sort orders them and tail takes the biggest.
Converting into whatever date format is wanted is left as an exercise for the reader:
find /path -type f -iname "*.php" -printf "%T@\n" | sort -n | tail -1
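For example, with GNU date the epoch value can be converted into a readable timestamp (the format string is just one choice):
latest=$(find /path -type f -iname "*.php" -printf "%T@\n" | sort -n | tail -1)
# drop the fractional seconds that %T@ prints, then format the result
date -d "@${latest%.*}" '+%Y-%m-%d %H:%M:%S'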
GNU find
find /path -type f -iname "*.php" -printf "%T+\n"
check the find man page to play with other -printf specifiers.
You might want to look at an inotify-based script that updates the footer every time any other file is modified, instead of looking through the whole file system for new updates.
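For example, with inotifywait from the inotify-tools package you could watch the site tree and rewrite the footer as changes happen; a rough sketch, where the web root, the footer text and the exclude pattern are all assumptions:
# watch the web root recursively; exclude footer.php itself so rewriting it
# does not trigger another event
inotifywait -m -r -e modify,create,move --exclude 'footer\.php' \
    --format '%T' --timefmt '%F %T' /var/www/site |
while read -r stamp; do
    sed -i "s/This page was last modified: .*/This page was last modified: $stamp/" /var/www/site/footer.php
done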