GIMP batch editing (Script-fu/Python-fu) - console

I have about 500 images that I would like to edit in batch: resize them all to 190x120, position them slightly higher (say 10 pixels), and export. I would also like them all to keep their original names.
Basically, I have a frame and I would like to load each image (on a layer under it), size it down (to the above dimensions), move it slightly up, and export each individual image with the frame so that it keeps its name.
What command could I use in the GIMP console (Script-Fu or Python-Fu)?
Or what other method could I use to achieve the result without editing each image individually?
Thanks in advance!

I altered one of my own Python-Fu scripts, and after some testing it seems to do what you want. The interface itself is pretty self-explanatory.
Just download the file and install it as a plug-in.
Gist link: https://gist.github.com/MCOfficer/bdf6c0c0935d22da38e72cc99fea6375 (archive.md)
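If you would rather run something directly from the Python-Fu console (Filters > Python-Fu > Console) instead of installing the plug-in, here is a minimal sketch of the same idea. It is not the linked plug-in: the folder paths, the file-type filter, and the assumption that the frame is its own single-layer file are all placeholders you will need to adapt.

import os

FRAME_FILE = "/path/to/frame.png"   # placeholder: frame image, canvas size = output size
SRC_DIR    = "/path/to/input"       # placeholder: folder holding the ~500 photos
OUT_DIR    = "/path/to/output"      # placeholder: destination folder

for name in os.listdir(SRC_DIR):
    if not name.lower().endswith((".png", ".jpg", ".jpeg")):
        continue
    # Start from a fresh copy of the frame so every export gets the same overlay.
    image = pdb.gimp_file_load(FRAME_FILE, FRAME_FILE)
    # Load the photo as a new layer and insert it below the frame layer.
    photo = pdb.gimp_file_load_layer(image, os.path.join(SRC_DIR, name))
    pdb.gimp_image_insert_layer(image, photo, None, 1)
    # Resize to 190x120 and nudge it 10 px higher.
    pdb.gimp_layer_scale(photo, 190, 120, False)
    pdb.gimp_layer_translate(photo, 0, -10)
    # Flatten and export under the original file name.
    flat = pdb.gimp_image_flatten(image)
    out_path = os.path.join(OUT_DIR, name)
    pdb.gimp_file_save(image, flat, out_path, out_path)
    pdb.gimp_image_delete(image)

The export format follows each file's original extension, so the names stay exactly as they were; change out_path if you want everything written as PNG regardless of input.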

Related

Adjust Rmarkdown's inline chunk output - increase no. of columns displayed?

This is a workflow-related question. I'm trying out working only (or mostly) in the R Markdown source window with the option set to "Chunk Output Inline", so with R open there is just one big window, the Environment, Console, and Files panes being minimized.
My question: is there some option to change the number of columns displayed? I want to increase the number of columns visible without scrolling (see screenshot below), and since there is enough space I think it should be possible to display more of them.
Many thanks!
Solution: There is an option. Just add cols.print = 12 (or whatever number you want) to global options.
Also, people might find this 'manual' on R Markdown useful: https://bookdown.org/yihui/rmarkdown/html-document.html#paged-printing

How to use JuliaImages to create a smaller image given a starting image?

I just got done exploring the docs for JuliaImages found here. What I want to do is as follows:
I have an image. It is a map of sorts. It takes up a lot of space, so I want to index into the image and create a new, smaller image that is essentially a zoomed-in version of the original. I know I could do this manually, but I want to create a reusable script that I can apply to N images. How can I do this using JuliaImages?
If by "zoomed in" you mean focusing on a small portion of the image and making it look bigger, you can do this with ordinary array-indexing tools. For example, img[251:500,147:328] would extract a portion of the image.
If what you're really looking for is a thumbnail, my favorite approach is to use restrict, though it is limited to 2-fold reductions. You can also use imfilter (best with the IIRGaussian filters from ImageFiltering.KernelFactors) and then call imresize, but there will be no beating the performance of restrict.

PDF image is 1.3 MB, but after removing text in Illustrator becomes ~35 MB (still vector in PDF)

I have created a figure for my scientific work using ggplot (an R package for plotting data). It's a scatterplot that contains ~25,000 data points in a normal x-y-style plot. Each data point has a border and a color fill. The output vector PDF is 1.3 MB in size. Now I would like to make some final adjustments to font size and text position and merge it with other panels into a bigger figure, which I normally do in Illustrator. So I add/embed the scatterplot into the rest of my figure, which loads all elements correctly. However, when I then simply save this file as .ai or .pdf, the output is more than ~30 MB. How is it possible that all elements are preserved in the original (small) PDF, but after Illustrator the file is inflated so much? It is critical for me to keep the file size small.
I tried many things, including different PDF export options in Illustrator and macOS Preview's PDF file compression, but nothing worked. I even tried merging all ~25,000 overlapping dots into one or at least a few shapes, but either Illustrator crashes in the process (Illustrator > Pathfinder > Unite/Merge) or the resulting PDF shows erratic behaviour, i.e. it turns black/white in Word (Illustrator > Flatten Transparency). What am I missing here?
Any help is appreciated!
When saving, make sure you're not enabling Illustrator editing capabilities. Leaving Illustrator editing capabilities enabled will essentially cause a copy of the Illustrator file (as an AI version) to be written into the PDF that's being saved. This often causes the PDF to increase dramatically in size, especially for files with many vector or path elements.
I had the same issue. What worked for me was this:
Export as EPS instead of PDF from ggplot. You may need to use device = cairo_ps as an option (I did).
In Adobe Illustrator, create a new document and select the Web option.
Combine all your figures into this new document by dragging and dropping them there. Use "Embed" to embed those figures into the new one.
Make all the changes you need.
Save as PDF with default options (I used the preset "Smallest File Size (PDF 1.6)").
This preserved the small file size for me. I think the only thing that matters here is the use of EPS instead of PDF when exporting from ggplot.

Changing resolution of bitmaps

I am making some graphs with R and I am copying them to Word. I was copying them as metafiles, but Word doesn't seem to be able to cope with them. The other option in R for copying graphs is a bitmap, but when I use this the quality of the graphs in Word is terrible.
I saw some answers on this website about changing the resolution, but only when saving the graphs to a file, which I would like to avoid. Is there a way of changing the resolution for copied graphs?
Thanks,
sbg
When the graphs are on screen, they are drawn at screen resolution (i.e. 72 dpi). For print you need at least 300 dpi, or you should switch to a vector format. Word can import graphs in Windows Metafile (.wmf) format, but your other option is to save the plot to a file using, e.g.,
png("my plot.png", res = 300)
plot(1:5)
dev.off()
This saves to disk, which you said you wanted to avoid, but you can always delete the file again later (even programmatically, with file.remove).
I'd also like to make the case that when you copy and paste, your work isn't as easily reproducible as when you use code. There is no trace of what you have done, and when your data changes, you need to go through the rigmarole of clicking again, rather than just executing your updated script.

Create CSS sprites based on colour?

I have a large set of thumbnails I wish to display on a page (over 200). I'd like to use CSS sprites to load them, to minimise the number of HTTP requests. I think putting all of them in one massive file is a bad idea, but splitting them into about 6 files of 40-50 thumbnails should work nicely.
All of the thumbnails are fairly low colour (can be reduced to 256 colours without quality drop), but in total all the thumbnails cover a lot more colours.
So, is there an easy way to group them based on their colour? Putting each group of files in a separate folder is fine, since I can stitch them together with ImageMagick or an online sprite tool later. But something that does all of that in one go (including the CSS) would be nice too.
Update: the reason for grouping by colour:
The idea is to save more bandwidth. If I have 10 mostly-blue thumbnails, 10 green, and 10 red, I can combine them into 3 images, reducing each to 256 colours. If I mix the thumbnails, then reducing to 256 colours makes the images noticeably poorer in quality.
Firstly, I would suggest not worrying too much, and just saving as a 24-bit PNG. It may seem that the image gets a lot bigger by doing this, but if the thumbnails are small you'll probably find that a large amount of bandwidth is currently being spent on HTTP headers alone; that overhead will go away, and you can spend it on making your images look better.
However, if you want to automate the process, you could try working out the average colour of each thumbnail (one way of doing something close to this is to resize it to 1x1 and look at the RGB colour of that single pixel). Once you have a colour per image, convert it to HSV and sort by hue. You can then bundle the thumbnails based on that sort order. I've not actually tried this, but it may produce acceptable results.
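For what it's worth, here is a rough Python sketch of that averaging-and-sorting idea using Pillow. It only illustrates the suggestion above, not tested code from the answer; the folder name, the PNG-only filter, and the choice of six groups are assumptions, and the actual stitching is left to ImageMagick or a sprite tool as mentioned in the question.

import colorsys
import os
from PIL import Image

THUMB_DIR = "thumbs"   # placeholder: folder holding the thumbnails
GROUPS = 6             # placeholder: number of sprite sheets to produce

def average_hue(path):
    # Resizing to 1x1 averages every pixel down to a single RGB value.
    r, g, b = Image.open(path).convert("RGB").resize((1, 1)).getpixel((0, 0))
    hue, _, _ = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return hue

# Sort all thumbnails by the hue of their average colour.
files = sorted(
    (f for f in os.listdir(THUMB_DIR) if f.lower().endswith(".png")),
    key=lambda f: average_hue(os.path.join(THUMB_DIR, f)),
)

# Split the hue-sorted list into roughly equal groups; each group can then be
# stitched into its own sprite sheet and quantised to 256 colours separately.
group_size = -(-len(files) // GROUPS)  # ceiling division
for i in range(GROUPS):
    print("sprite %d:" % i, files[i * group_size:(i + 1) * group_size])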
Adjusting the number of images that get bundled will also affect the output quality. If it sucks when you put 30 images per file, try 25 and see how much difference it makes. Actually, it might be smarter to think about the number of files:
1. Put them all into a single file.
2. Does it look bad because there aren't enough colours?
3. If so, add one extra file, split the thumbnails equally across all the files, and go to step 2.
Well, I did some testing by grabbing a sample of one "tint" by hand and comparing it to a montage created by just taking the first N images. There was only a difference of a few kilobytes, which was reduced to about 30 bytes after I found PNGcrush. Fantastic tool!
So in short, my crackpot idea has been disproven. :p
Now, this is nothing more than theoretical blabbering, but I understand that animated GIFs support a distinct color palette per frame. Theoretically, you could place each image on a separate frame of the animation (leaving most of that frame transparent) and set the pause time between frames to 1 ms. Set the animation to play through only once, and you could (potentially) have an effective CSS sprite with 256 colors per image.
Like I said, theoretical blabbering.
