Is there any configuration to delete older carbon data automatically after a certain period of time?
I tried searching for it but could not find anything about it.
Thanks in advance for any suggestion and answer.
Graphite as such doesn't support deletion yet. I would advise decreasing the retention in your storage schemas so that data is only kept for as long as you need it, which really solves this 'problem' at the source.
Still, you can run a cron job at regular intervals to do so. The following would delete any .wsp file that hasn't been touched in over a day:
Using GNU find:
find /opt/graphite/storage/whisper -name '*.wsp' -mtime +1 -delete
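If you go the cron route, a crontab entry along these lines would handle the cleanup nightly (the whisper path is the common default; adjust the retention and schedule to taste):
# Illustrative crontab entries: every night at 00:30, delete whisper files not
# modified for more than 30 days, then remove any directories left empty.
30 0 * * * find /opt/graphite/storage/whisper -name '*.wsp' -mtime +30 -delete
35 0 * * * find /opt/graphite/storage/whisper -type d -empty -delete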
Is this for whisper files that are no longer being written to, or just for older data in an existing metric?
If the former, you can run something like
find <whisperpath> -iname "*.wsp" -mtime +<number of days lower limit>
That will list the files that fit the criteria.
If you want to delete them in the same command, append -delete:
find <whisperpath> -iname "*.wsp" -mtime +<number of days lower limit> -delete
Related
I am currently working on a script to store/back up our old files so that we have more space on our server. This script will be run as a cron job to back everything up every week. My script currently looks like this:
#!/bin/bash
currentDate=$(date '+%Y%m%d%T' | sed -e 's/://g')
find /Directory1/ -type f -mtime +90 | xargs tar cvf - | gzip > /Directory2/Backup$currentDate.tar.gz
find /Directory1/ -type f -mtime +90 -exec rm {} \;
The script first saves the current date + timestamp (without ":") in a variable. It then searches for files older than 90 days, tars them, and finally gzips them into an archive named "Backup$currentDate.tar.gz".
Then it's supposed to find the files again and remove them.
I do, however, have some issues here:
Directory1 consists of multiple directories. find does locate the files and the gz file is created, but while some files are archived with their full path (for instance /DirName1/DirName2/DirName3/File), others appear directly in the "root" of the archive. What could be the issue here?
Is there a way to tell the script to only create the gz file if files were actually found? Currently we get gz files even when nothing was found, leaving us with empty archives.
Can I somehow reuse the find output later on (store it in a variable?), so that the remove at the end really only targets the files found in the step before? If the third step took, say, an hour and the last step only runs after it has finished, it could remove files that weren't older than 90 days before but are now, so they would never be backed up but would still be deleted (highly unlikely, but not impossible).
If there's anything else you need to know, feel free to ask ^^
Best regards
I've "rephrased" your original code a bit. I don't have an AIX machine to test anything, so DO NOT cut and paste this. Using this code, you should be able to address your issues. To wit:
It makes a record of the files it intends to operate on ($BFILES).
This record can be used to check for empty tar files.
This record can be used to see why your find is producing "funny" output. It wouldn't surprise me to find that xargs hit a space character.
This record can be used to delete exactly the files archived.
As a child, I had a serious accident with xargs and have avoided it ever since. Maybe there is a safe version out there.
#!/bin/bash
# I don't have an AIX machine to test this, so exit immediately until
# someone can proof this code.
exit 1
currentDate=$(date '+%Y%m%d%T' | sed -e 's/://g')
BFILES=/tmp/Backup$currentDate.files
find /Directory1 -type f -mtime +90 -print > $BFILES
# Here is the time to proofread the file list, $BFILES
# The AIX page I read lists the '-L' option to take filenames from an
# input file.
#tar -c -v -L $BFILES -f - | gzip -9 > /Directory2/Backup$currentDate.tar.gz
# I've found xargs to be sketchy unless you are very careful about
# quoting. I would rather loop over the input file one well quoted
# line at a time rather than use the faster, less safe xargs. But
# here it is.
#xargs rm < $BFILES
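For what it's worth, here is a rough sketch of how those pieces could fit together on a GNU system, using tar's -T to read the file list (the AIX equivalent mentioned above is -L). Treat it as illustrative only, same caveat as the code above:
#!/bin/bash
# Illustrative only: assumes GNU find/tar; on AIX, swap tar's -T for -L.
currentDate=$(date '+%Y%m%d%T' | sed -e 's/://g')
BFILES=/tmp/Backup$currentDate.files
find /Directory1 -type f -mtime +90 -print > "$BFILES"
# Only create the archive if the list is non-empty (issue 2).
if [ -s "$BFILES" ]; then
    # tar reads its member list from the file, so full paths are preserved (issue 1)
    # and the archive and the deletion below use exactly the same list (issue 3).
    tar -c -v -T "$BFILES" -f - | gzip > /Directory2/Backup$currentDate.tar.gz
    # Delete exactly the files that were archived, one well-quoted name at a time.
    while IFS= read -r f; do
        rm -f -- "$f"
    done < "$BFILES"
fi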
Thought there might be a simple solution to this, but I can't seem to find it anywhere. It's a simple-enough problem. Say I have the following folder/file structure:
/home/
text1.txt
/mydir/
text2.txt
Then I input the command:
find . -name *.txt
This command returns "text1.txt" when called from within /home, and returns "text2.txt" when called from within /home/mydir, as it should.
However, when calling the following from /home...:
find /home/mydir -name *.txt
it returns nothing. My expectation is that it would return "text2.txt." Any thoughts? I have already checked to see if I have any wayward aliases assigned to find, and I have nothing.
It is also worth noting that I have two Unix machines. Using an absolute path with find works on one machine and not on the other. I can't go into much more detail than that, I'm afraid. Just looking for a direction to investigate this further.
Thanks to anyone who can help :-)
You should use
find . -name "*.txt"
otherwise bash will expand *.txt to text1.txt (the match in your current directory), resulting in the following command:
find . -name text1.txt
And it will no longer match text2.txt
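A quick way to see the expansion in action (an illustrative session run from /home, assuming the layout above):
# The shell expands the unquoted glob before find ever runs:
echo find /home/mydir -name *.txt
# prints: find /home/mydir -name text1.txt
# Quote the pattern and find does the matching itself:
find /home/mydir -name "*.txt"
# prints: /home/mydir/text2.txt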
Can someone tell me how to use the find command to find files with the extension zip, Zip, or ZIP?
find . -iname *.zip is not working for me on AIX.
You need to use quotes around the pattern-matching part. So
find . -iname '*.zip'
will do fine.
Assuming your find supports character classes as part of its wildcard processing (most do), try
find . -name '*.[Zz][Ii][Pp]'
-iname? I don't know that one.
Yes, sorry, you need a starting directory, in this case the '.' that you have flagged as missing.
I hope this helps.
How do I find files on a Unix server which were created/modified in the previous month?
For example, if the current month is Jul, then the files which were created/modified in Jun should be displayed.
One way is to execute this command:
ls -laR | grep monthName
where monthName could be Jan, Feb, Mar, and so on (remember to change the working directory to the directory you're interested in; also notice that this method is recursive, so all sub-directories will be inspected).
With this you also retrieve all the file permissions and so on.
I'm sure there are better ways (if they come to mind, I'll edit this post), but since I'm on a coffee break, this is the fastest one I could find.
In order to find files modified in the previous month, you will need to use find with a set range, for example:
cd / (if you want to start from the root)
find . -type f -mtime +26 -mtime -56 -print
You should adjust your range to include the dates that you wish to include.
All the best to you!
monthToFind=`date -d "1 months ago" "+%Y-%m"`
find . -printf "%TY-%Tm %p\n" | egrep "^$monthToFind " | sed "s/^$monthToFind //g"
This will be slower than using a time range in find. But the time range is hard to determine, and quickly becomes invalid, possibly even while the command is executing.
Unfortunately this will miss files modified last month when they were also modified this month. I don't know of a way to determine these files.
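If GNU find and date are available, another option is to compute last month's boundaries once and let find do the range test itself. A sketch (GNU tools assumed):
# First day of the current month and of the previous month (GNU date).
firstOfThisMonth=$(date +%Y-%m-01)
firstOfLastMonth=$(date -d "$firstOfThisMonth -1 month" +%Y-%m-%d)
# List files whose modification time falls within the previous month (GNU find -newermt).
find . -type f -newermt "$firstOfLastMonth" ! -newermt "$firstOfThisMonth" -print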
For a website I'm working on I want to be able to automatically update the "This page was last modified:" section in the footer as I'm doing my nightly git commit. Essentially I plan on writing a shell script to run at midnight each night which will do all of my general server maintenance. Most of these tasks I already know how to automate, but I have a file (footer.php) which is included in every page and displays the date the site was last updated. I want to be able to recursively look through my website and check the timestamp on every file, then if any of these were edited after the date in footer.php I want to update this date.
All I need is a UNIX command that will recursively iterate through my files and return ONLY the date of the last modification. I don't need file names or what changes were made, I just need to know a single day (and hopefully time) that the most recently updated file was changed.
I know using "ls -l" and "cut" I could iterate through every folder to do this, but I was hoping for a quicker-running and easier command. Preferably a single-line shell command (possibly with a -R parameter)
find prints each file's modification time in Unix epoch format; sort the values numerically and take the biggest.
Converting into whatever date format is wanted is left as an exercise for the reader:
find /path -type f -iname "*.php" -printf "%T@\n" | sort -n | tail -1
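As a rough example of the conversion step (assuming GNU find and date; the path and output format are placeholders):
# Grab the newest modification time in epoch seconds (may carry a fractional part) ...
newest=$(find /path -type f -iname "*.php" -printf "%T@\n" | sort -n | tail -1)
# ... then strip the fraction and format it as a readable date for the footer.
date -d "@${newest%.*}" "+%Y-%m-%d %H:%M"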
GNU find
find /path -type f -iname "*.php" -printf "%T+\n" | sort | tail -1
Check the find man page to play with other -printf specifiers.
You might want to look at an inotify script that updates the footer every time any other file is modified, instead of looking all through the file system for new updates.