phpMyAdmin Cron to Delete Temporary Files - unix

I have a folder on my hosting which I periodically upload something to - /public_html/uploads - and I'd like to set up a cronjob through phpMyAdmin to empty it out on a regular basis.
The current cron I have in pMA is
find /public_html/uploads -maxdepth 1 -ctime 1 -exec rm -f {} \;
http://img641.imageshack.us/img641/668/1274390599451.png
(Ignore the fact that it's running every minute for now, it's so I can test it :) )
I know very little about what this command is actually doing, but it looks like "not very much". Can anyone help me fix it? :) Thanks.

The command looks for entries in the top level of that directory whose status (ctime) was changed exactly one day ago (with -ctime 1, that means between 24 and 48 hours ago, not "within the last day") and deletes them.
If you just want to delete everything in the directory, you could simply use:
rm -f /public_html/uploads/*
as the cron job command.
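For example, a crontab entry that empties the folder once a day might look like the sketch below (the 03:00 schedule is purely illustrative; the path is the one from the question):
# minute hour day-of-month month day-of-week  command
0 3 * * * rm -f /public_html/uploads/* > /dev/null 2>&1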

Related

Mac OS: How to use RSYNC to copy files modified within the last 24 hours and keep folder structure?

It's a simple question that I can't seem to figure out. I'm on a Mac running Big Sur with all the latest updates, and I'm using Terminal to run these commands. If there's a better way, please let me know.
This is, in basic terms, what I'm trying to do--I want RSYNC to recursively go through a source directory (which in this case would ideally be an entire drive), find any files modified within the last 24 hours, and copy those to another drive, while preserving the folder structure. So if I have:
/Volumes/Drive1/Folder1/File1.file
/Volumes/Drive1/Folder1/File2.file
/Volumes/Drive1/Folder1/File3.file
And File1 has been modified in the last 24 hours, but the other two haven't, I want it to copy that file, so that on the second drive I wind up with:
/Volumes/Drive2/Folder1/File1.file
But without copying File2 and File3.
I've tried a lot of different solutions and strings, but I'm running into problems. The closest I've been able to get is this:
find /Volumes/Drive1/ -type f -mtime -1 -exec cp -a "{}" /Volumes/Drive2/ \;
The problem is that while this one does go through Drive1 and find all the files newer than a day like I want, when it copies them it just dumps them all into the root of Drive2.
This one also seems to come close:
rsync --progress --files-from=<(find /Volumes/Drive1/ -mtime -1 -type f -exec basename {} \;) /Volumes/Drive1/ /Volumes/Drive2/
This one also identifies all the files modified in the last 24 hours, but instead of copying them it gives an error, "link_stat (filename and path) failed: no such file or directory (2)."
I've spent several days trying to figure out what I'm doing wrong but I can't figure it out. Help please!
I think this'll work:
srcDir=/Volumes/Drive1
destDir=/Volumes/Drive2
(cd "$srcDir" && find . -type f -mtime -1 -print0) |
while IFS= read -r -d $'\0' filepath; do
    mkdir -p "$(dirname "$destDir/$filepath")"
    cp -a "$srcDir/$filepath" "$destDir/$filepath"
done
Explanation:
Using cd "$srcDir"; find . -whatever will generate relative paths (starting with "./") from the source directory to the found files; that means appending the results to $srcDir and $destDir will give the full source and destination paths for each file.
Putting it in parentheses makes it run in a subshell, so the cd won't affect other commands. Coupling cd and find with && means that if cd fails, find won't run (which would otherwise run in the wrong place, generate a list of the wrong files, and generally cause trouble).
Using -print0 and while IFS= read -r -d $'\0' is a standard weird-filename-safe way of iterating over found files (see BashFAQ #20). Note that if anything in the loop reads from standard input (e.g. cp -i asking for confirmation), it'll steal part of the file list; if this is a worry, use this variant (instead of the pipe) to send the file list over file descriptor #3 instead of standard input:
while IFS= read -r -d $'\0' filepath <&3; do
...
done 3< <(cd "$srcDir" && find . -type f -mtime -1 -print0)
Finally, mkdir -p is used to make sure the destination directory exists, and then cp to copy the file.
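If you'd rather stay with rsync, a roughly equivalent sketch (assuming an rsync that supports --files-from and --from0, which the 2.6.x build bundled with macOS does) would be:
srcDir=/Volumes/Drive1
destDir=/Volumes/Drive2
# feed the NUL-delimited, source-relative file list straight to rsync;
# rsync recreates the needed parent directories under $destDir
(cd "$srcDir" && find . -type f -mtime -1 -print0) |
    rsync -a --from0 --files-from=- "$srcDir"/ "$destDir"/
As above, the subshell keeps the cd contained, and -print0/--from0 keep unusual file names safe.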

SCP issue with multiple files - UNIX

I'm getting an error when copying multiple files. The command below copies only the first file and gives errors for the rest. Can someone please help me out?
Command:
scp $host:$(ssh -n $host "find /incoming -mmin -120 -name 2018*") /incoming/
Result:
user#host:~/scripts/OTA$ scp $host:$(ssh -n $host "find /incoming -mmin -120 -name 2018*") /incoming/
Password:
Password:
2018084session_event 100% |**********************************************************************************************************| 9765 KB 00:00
cp: cannot access /incoming/2018084session_event_log.195-10.45.40.9
cp: cannot access /incoming/2018084session_event_log.195-10.45.40.9_2_3
Your command uses Command Substitution to generate a list of files. Your assumption is that there is some magic in the "source" notation for scp that would cause multiple members of the list generated by your find command to be assumed to live on $host, when in fact your command might expand into something like:
scp remotehost:/incoming/someoldfile anotheroldfile /incoming
Only the first file is being copied from $host, because none of the rest include $host: at the beginning of the path. They're not found in your local /incoming directory, hence the error.
Oh, and in addition, you haven't escaped the asterisk in the find command, so 2018* may expand to multiple files that are in the login directory for the user in question. I can't tell from here; it depends on your OS and shell configuration.
I should point out that you are providing yet another example of the classic Parsing LS problem. Special characters WILL break your command. The "better" solution usually offered for this problem tends to be to use a for loop, but that's not really what you're looking for. Instead, I'd recommend making a tar of the files you're looking for. Something like this might do:
ssh "$host" "find /incoming -mmin -120 -name 2018\* -exec tar -cf - {} \+" |
tar -xvf - -C /incoming
What does this do?
ssh runs a remote find command with your criteria.
find feeds the list of file names (regardless of special characters) to a tar command as arguments.
The tar command sends its result to stdout (-f -).
That output is then piped into another tar running on your local machine, which extracts the stream.
If your tar doesn't support -C, you can either remove it and run a cd /incoming before the ssh, or you might be able to replace that pipe segment with a curly-braced command: { cd /incoming && tar -xvf -; }
The curly brace notation assumes a POSIX-like shell (bash, zsh, etc). The rest of this should probably work equally well in csh if that's what you're stuck with.
Limited warranty: Best Effort Only. Untested on animals or computers. Your mileage may vary. May contain nuts.
If this doesn't work for you, poke at it until it does.
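For completeness, here is the first fallback from above spelled out, for a local tar without -C (a sketch reusing the same $host and paths):
# change to the destination first; the pipeline only runs if the cd succeeds
cd /incoming &&
ssh "$host" "find /incoming -mmin -120 -name 2018\* -exec tar -cf - {} \+" |
tar -xvf -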

How to get latest timestamp file from FTP using UNIX Script

I am writing a shell script; I want to download the latest uploaded file from an FTP server, from a specific folder. Below is my code for that, but it is not working as expected.
File names are in specific format like
.../MONTHLY_FILE/
ABC_ECI_12082015.ZIP
ABC_ECI_18092015.ZIP
ABC_ECI_09102015.ZIP
Here my return filename should be "ABC_ECI_09102015.ZIP".
Please help me out in this and let me know what mistake I am making.
#!/bin/ksh
. ospenv
#SRC_DIR=/powerm/Myway/SrcFiles
SRC_DIR=$PMRootDir/SrcFiles
cd $SRC_DIR
ftp -n gate.usc.met.com << FINISH
user ftp_abc.com xyz
##Here xyz is password
cd /MONTHLY_FILE
#mget ls -t -r | tail -n 1`enter code here`
get $1
bye
FINISH
I don't understand exactly where you made a mistake, but this should do the trick:
ls -tr | tail -1
What's mget? Why do you have an "enter code here"? The line above should print out the last file; whether you save that into a variable or feed it directly into another action is up to you.
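Folding that back into the script from the question, a rough two-pass sketch (untested; it assumes the server honours ls -tr, that the file name is the last field of each listing line, and it reuses the host and credentials from the question):
#!/bin/ksh
. ospenv
SRC_DIR=$PMRootDir/SrcFiles
cd $SRC_DIR
# pass 1: list the remote folder, keep the last field of the last non-empty line
latest=$(ftp -n gate.usc.met.com <<FINISH | awk 'NF {name=$NF} END {print name}'
user ftp_abc.com xyz
cd /MONTHLY_FILE
ls -tr
bye
FINISH
)
# pass 2: fetch just that file
ftp -n gate.usc.met.com <<FINISH
user ftp_abc.com xyz
cd /MONTHLY_FILE
get $latest
bye
FINISH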

Getting filenames from a directory, put it into file, then zip it (UNIX)

Okay - so I've been trying all day today to see if this is possible.
In my case, I have a cache folder containing about 1 million cache files (and yes, it's impractical to open). So for housekeeping, I'd like to delete those that have not been accessed in 120 days and log whatever was deleted. I managed to clean up around 200K files with this line:
find thisfolder -name "pattern*" -type f -atime +120 -exec rm -f {} \; -fprint /home/myfolder/logs/deleted_cache.txt 2>&1
But then I ended up with a log file (deleted_cache.txt) of about 50MB. That doesn't do housekeeping any favors. So I was thinking of zipping it up, hoping to clear more space.
I've read about I/O redirection, piping, and zip, and after several attempts it seems impossible to do in one line. Is a bash script the only way to do it?
Please enlighten me.
Thank you.
You can use this command:
find thisfolder -name "pattern*" -type f -atime +120 -delete -printf '%f\n' | gzip > deleted_cache.gz
-delete deletes each matched file
-printf '%f\n' prints each deleted file's name to stdout
gzip compresses stdin into the log archive
Note: I have only tested it on Ubuntu.
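If your find lacks the GNU -printf extension (BSD/macOS find, for example), a hedged variant is to log the full paths with -print before deleting; note that -print is evaluated before -delete for each file, so a name still gets logged even if the delete then fails:
find thisfolder -name "pattern*" -type f -atime +120 -print -delete |
    gzip > /home/myfolder/logs/deleted_cache.txt.gz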

FTP - Only want to keep latest 10 files - delete LRU

I have created a shell script to back up my web files and database dump, put them into a tar archive, and FTP it offsite. I'd like to run it X times per week; however, I only want to keep the latest 10 backups on the FTP site.
How can I best do this? Should I be doing this work on the shell script side, or is there an FTP command to check the last-modified time and manage things that way?
Any advice would be appreciated.
Thanks,
One way to do something like this would be to use the day of the week in the filename:
backup-mon.tgz
backup-tue.tgz
etc.
Then, when you back up, you would delete or overwrite the backup file for the current day of the week.
(Of course, this way you only get the latest 7 files, but it's a pretty simple method)
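A sketch of that naming scheme (the web-files path is hypothetical, just to show the idea):
# date +%a gives e.g. "Mon"; tr lowercases it so the name matches backup-mon.tgz
day=$(date +%a | tr '[:upper:]' '[:lower:]')
tar czf "backup-$day.tgz" /path/to/webfiles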
Do you have shell access to the FTP server? If so, I'm sure you could write a script to do this, and schedule a cron job to do the periodic clean-up.
Here's something that ought to work:
num_files_to_keep=10
i=0
for file in `ls -t`; do   # newest first, so the first $num_files_to_keep files are kept
    if [ $i -ge $num_files_to_keep ]; then
        rm -- "$file"
    fi
    i=`expr $i + 1`
done
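A shorter sketch of the same idea, assuming a GNU userland (xargs -r) and file names without spaces or newlines (true for names like backup-mon.tgz):
# newest first; skip the 10 newest lines and remove whatever is left
ls -t | tail -n +11 | xargs -r rm -f --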
find . \! -newer `ls -t|head -n 10|tail -n 1`
Unfortunately, if executed when there are fewer than 10 files, it deletes the oldest file on every execution (because ! -newer tests for "older or equal" instead of "strictly older"). This can be remedied by checking first:
[ `ls|wc -l` -gt 10 ] && find . \! -newer `ls -t|head -n 10|tail -n 1`
If you're going down the shell path, try this:
find /your-path/ -mtime +7 -type f -exec rm -rf {} \;
This would delete everything older than a certain age (in this case, 7 days). Whether that fits depends on whether you need to keep multiple backups from a single day; e.g. yesterday I did ten revisions of the same website.
