Hey guys, I need either a SQL query or a script to delete all the images from my media library in WordPress, except the ones that have '320x180' as a resolution (or in their name).
The reason why is I have more than 100k files and I can't delete them manually, as that would take centuries. SQL/programming is not my strongest point either.
Thanks
You can easily delete files using the "find" command on Linux; I just needed to do the same thing. Try this in the bash console:
cd /var/www/wordpress/wp-content/uploads
find . -type f -regex '.*[0-9]+x[0-9]+\.\(jpg\|png\|jpeg\)$' -delete
The above will delete all resized image versions, leaving just the original uploads.
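If you want to check what will be removed first, run the same find without -delete and eyeball the list. And since the question was about keeping the 320x180 copies specifically, a variant like the following should also work. This is just a sketch using find's -name negation; WordPress normally names resized copies name-WIDTHxHEIGHT.ext, and I'm assuming none of your originals happen to have '320x180' in their name:
cd /var/www/wordpress/wp-content/uploads
# preview first: list every resized copy except the 320x180 ones
find . -type f -regex '.*[0-9]+x[0-9]+\.\(jpg\|png\|jpeg\)$' ! -name '*320x180*'
# once the list looks right, add -delete to actually remove them
find . -type f -regex '.*[0-9]+x[0-9]+\.\(jpg\|png\|jpeg\)$' ! -name '*320x180*' -delete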
I'm new to zsh and I'm trying to figure out how to get tab completion to work so that when I type part of the name of a file that's not in the current directory, zsh will complete it.
The idea is that I have some scripts in ~ and c:\MyStuff\bin and I'd love for zsh to try and complete those (executable) scripts when I'm in other directories.
Being able to complete files that are anywhere in my path would be nice, but if it's easier to complete files using a list of directories set in my .zshrc, well, that would work fine too.
If anyone has any pointers to resources about how to do this, or even advice like "this will / won't work in zsh", that would be great. zsh seems open-ended enough that it ought to be able to do this, but I've also searched long enough without finding anything that I wouldn't be surprised if there's nothing at the end of this rabbit hole :)
Thanks in advance!
So I decided to invest the time and actually try and read the (very, very thorough) zsh completion docs. There's a lot there, including a section which says that zsh actually does this out of the box:
8 Command Execution
"...the shell searches each element of $path for a directory containing an executable file by that name..."
Turns out this does work for me, it just takes a long time (more than a second or two) and so I thought it wasn't working.
Next up: looking at why it's taking so long - perhaps it's my very long $path variable :)
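In case it helps anyone else hitting the same thing, the relevant bits of my setup boil down to something like this (the directories are just examples of where your scripts might live):
# in ~/.zshrc -- directory names are placeholders
path+=(~/bin ~/MyStuff/bin)          # add the script directories to $path
autoload -Uz compinit && compinit    # enable the completion system
The shorter you keep $path, the faster that out-of-the-box command completion should be.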
I have several TB of photos, spread throughout subfolders. Each photo has an original, a watermarked resized version, and a thumbnail.
Named as such:
img1001.jpg
img1001_w.jpg
img1001_t.jpg
DSC9876.jpg
DSC9876_w.jpg
DSC9876_t.jpg
etc etc.
What I need to do, is move all of the originals to a different server. Presumably rsync is the best tool for this?
Is it possible to rsync a directory, while excluding any files that end in _t.jpg or _w.jpg? I'm not concerned about possible edge cases where the original file ends with either of those, as there are no such cases in my data.
Or am I better off just rsync'ing the whole lot, and then selectively deleting the _t & _w files from the destination?
Thanks
Yes, rsync is a good choice, not least because it works incrementally, so you can stop and restart it when needed.
By default rsync does not delete anything on the remote side, I believe.
Yes, you can sync whole directory structures.
It is also possible to exclude files or folders from syncing.
I think I use a command like
rsync -av [--exclude=<pattern> | --exclude-from=<file>] <source> <destination>
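For your case you shouldn't even need an excludes file; passing the two patterns directly ought to do it (the paths and hostname below are just placeholders):
# copy everything except the thumbnail and watermarked versions
rsync -av --exclude='*_t.jpg' --exclude='*_w.jpg' /path/to/photos/ user@otherserver:/path/to/photos/
Adding --dry-run (or -n) first will show what would be transferred without copying anything.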
I have two files where I edited one and left the other just for reference.
However, I messed up some code in the file I'm editing, and since it's a huge file I don't know where I made the error, or even whether there are more errors. Nothing was rewritten; I only deleted things.
I want to know if there is a program, plugin, script, something into which I can feed the two files and override only the parameters of the classes that were edited (the class names weren't altered).
I know I should have used GIT and all but I didn't. Lesson learned.
Appreciate any help. I'm using SublimeText.
If you're on a Unix-like OS, or you have Cygwin installed, you can use diff and patch to do this.
$ diff -u old.css new.css > changes.diff
$ patch < changes.diff
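If you want to see what would change before anything is modified, GNU patch has a dry-run mode (same placeholder file names as above), and you can also just open changes.diff in SublimeText to review the hunks:
$ patch --dry-run < changes.diff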
I need to create a branch from the state of the HEAD several days ago, for which I would like to add a tag to all files in a module. The trouble is that a few files were removed using cvs remove between that day and now. When I tried to do "cvs rtag" using the -D option, I don't see the tag on the deleted files, although those files existed in CVS at that point.
Is there a straightforward way to branch from a specific date with all the files that existed then?
Unfortunately cvs (r)tag does not allow mixing the -D and -r options. But cvs update does, so you can update your working copy to the date and branch you want and then cvs tag your working copy.
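In other words, something along these lines (the date and tag name are made up):
cvs update -D "2012-03-01"        # bring the working copy to its state on that date
cvs tag -b branch_from_that_date  # create the branch tag from the working copy
cvs update -A                     # drop the sticky date afterwards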
I am screwed. I misused wildcards like a moron in the rename command.
I ended up repeating names twice in a 3 GB folder, which I cannot afford to delete.
Now, the rename command is not working, and it says the file name is too long.
Please help me.
If programming can solve this, please let me know. I am a competent programmer in Java and PHP.
Under the hood, any rename command should be implemented with rename(). If you are in the directory where the file is and do:
mv hugefilenamethatiscreweduponandwanttobemuchshorted tersefile
it should work, as I don't think the path would get expanded out and overflow the limit. Otherwise, you can temporarily move the parent directory somewhere so it has a minimal path (like /p), then rename the file, and then move it back.
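As a rough sketch of that second approach (all paths and names here are made up):
# shorten the path temporarily so the full path stays under the limit
mv /some/very/deeply/nested/parentdir /p
cd /p
mv hugefilenamethatiscreweduponandwanttobemuchshorted tersefile
cd /
# put the directory back where it was
mv /p /some/very/deeply/nested/parentdir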