I am trying to save 360 png files as a gif with ImageMagick in R (I am working with macOS)

Please let me know about any other system details or code I need to include, as I am not very familiar with writing out images to my computer. I am creating 360 png files as follows:
for (theta in 1:360) {
  ic = as.character(theta)
  if (theta < 10) ic = paste("00", ic, sep = "")
  if (theta >= 10 & theta < 100) ic = paste("0", ic, sep = "") # make filenames the same length
  fn = paste("c:iris360\\HW4_", ic, ".png", sep = "") # filename
  png(fn, width = 1000, height = 1000) # save as *.png
  p3(X1, X2, r = 100, theta = theta, mainL = paste("theta =", theta))
  # legend("topleft", pch = 16, cex = 1.5, col = allcl)
  dev.off()
}
system("magick c:iris360\\HW4*.png c:iris.gif")
where p3 is just a function that takes my matrices X1 and X2 and plots the points and their segments (let me know if I need to include it as well). However, I get this error:
magick: must specify image size `iris360HW4*.png' @ error/raw.c/ReadRAWImage/140.
I am unable to open the gif file; my Mac says it is damaged or uses a file format that Preview does not recognize.
Update 1: I replaced fn's declaration with
fn <- sprintf("c:iris360/HW4_%03i.png", theta)
as well as replacing ic with sprintf("%03i", theta) everywhere it appeared, but I still got the same "must specify image size" error.
When I run the system command directly in my terminal, I get the same error asking me to specify the image size.

Magick needs to know several things (e.g., image size, delay between frames, which images to use, destination file name) in order to convert a stack of PNG files into a GIF. See GIF Animations and Animation Meta-data:
magick -delay 100 -size 100x100 xc:SkyBlue \
-page +5+10 balloon.gif -page +35+30 medical.gif \
-page +62+50 present.gif -page +10+55 shading.gif \
-loop 0 animation.gif
So it looks like you need to change
system("magick c:iris360\\HW4*.png c:iris.gif")
to something more like
system("magick -delay 10 -size 100x100 —loop 0 c:iris360\\HW4*.png c:iris.gif")

Related

R Magick Package Image_Write Saves Tiff Images in Gray Instead of RGB

I have a very large number of images that need to be cropped slightly (by 1 pixel either height-wise or width-wise) in order to do further image-processing on them.
I'm trying to use the magick package in R to do so, but I am running into an issue where any images that are gray are saved by magick in grayscale instead of RGB. I see that people have asked similar questions here and here, and I have tried the solutions offered to no avail. For some reason, when calling image_write, setting the colorspace:auto-grayscale define to off and the type define to truecolor does not work, and the images are still saved in grayscale instead of RGB format.
Here is the code I'm trying to use:
crop <- function(a, b) {
  image <- image_read(a)
  cut <- image_crop(image, b)
  image_write(cut, path = a, format = "tiff",
              defines = c('colorspace:auto-grayscale' = 'false', 'type:truecolor' = 'on'))
}
runcrop <- mapply(crop, mydat[,1], mydat[,2])
Where the input (mydat) is a table with two columns: the path to the image and the pixel size I need the image cropped to. Any image taken in black and white (CH4) is saved as a grayscale output, while all the other channels are correctly saved as RGB images after cropping.
Here is a small excerpt from the table:
> mydat
imtocrop WxH
[1,] "./01 10245 XY01_Fused_CH1.tif" "2288x1218+0+0"
[2,] "./01 10245 XY01_Fused_CH2.tif" "2288x1218+0+0"
[3,] "./01 10245 XY01_Fused_CH3.tif" "2288x1218+0+0"
[4,] "./01 10245 XY01_Fused_CH4.tif" "2288x1218+0+0"
[5,] "./01 10245 XY01_Fused_Overlay.tif" "2288x1218+0+0"
And here is an example image to recapitulate the error: https://www.dropbox.com/s/8m93vflnf6rd6ao/01%2010245%20XY01_Fused_CH4.tif?dl=0
The cropping function I've written works perfectly, but I can't seem to get ImageMagick to save these tiffs as RGB instead of grayscale. I've tried both 'false' and 'off' for auto-grayscale, and both 'true' and 'on' for truecolor. I've also tried using only the truecolor option, without the colorspace option (as suggested on ImageMagick's documentation site here). Nothing I do seems to convince image_write to output RGB instead of grayscale. Please let me know if you have any ideas that might fix this error, thanks!
I have no knowledge of R, but am trying to move my knowledge of ImageMagick towards your knowledge of R, and maybe we can reach the target of sorting out your problem somewhere in the middle...
Let's make a simple 640x480 solid black image with ImageMagick in the Terminal, and pipe it into exiftool in TIFF format to check its depth and whether it is greyscale or colour:
magick -size 640x480 xc:black tif:- | exiftool - | grep -E '^Bits|^Photometric'
Bits Per Sample : 16
Photometric Interpretation : BlackIsZero <- means GREYSCALE
Ok, not surprisingly, our black 640x480 rectangle came out in greyscale with 16-bits per sample because my ImageMagick was compiled with Q16. Let's rectify the 16-bit-edness:
magick -size 640x480 xc:black -depth 8 tif:- | exiftool - | grep -E '^Bits|^Photometric'
Bits Per Sample : 8 <--- that's better, 8-bit
Photometric Interpretation : BlackIsZero <--- still greyscale
Ok, now let's force the black image to become RGB:
magick -size 640x480 xc:black -type truecolor -depth 8 tif:- | exiftool - | grep -E '^Bits|^Photometric'
Bits Per Sample : 8 8 8
Photometric Interpretation : RGB
My hope is that you can find the corresponding settings/switches/options in R Studio now that you know what they look like.
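For instance, a minimal sketch in R with the magick package, assuming its image_convert() type argument maps onto the command-line -type switch shown above (crop_rgb() and the write-in-place choice are illustrative, not the asker's exact code):
library(magick)

crop_rgb <- function(path, geometry) {
  img <- image_read(path)
  img <- image_crop(img, geometry)
  img <- image_convert(img, type = "truecolor")  # analogous to -type truecolor
  image_write(img, path = path, format = "tiff")
}

crop_rgb("./01 10245 XY01_Fused_CH4.tif", "2288x1218+0+0")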
Another option might be to "shell out" with system() and just use the command-line interface:
magick INPUT.TIF -crop WIDTHxHEIGHT+xoffset+yoffset -type truecolor RESULT.TIF
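From R that could be a one-liner via system(); the file name and geometry below are taken from the question's table, and the _rgb output name is just an illustration:
system('magick "./01 10245 XY01_Fused_CH4.tif" -crop 2288x1218+0+0 -type truecolor "./01 10245 XY01_Fused_CH4_rgb.tif"')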
Note that I am forcing TIFF format and piping to exiftool for simplicity of demonstration. If you are testing in your Terminal, you can write straight to a TIFF file with:
magick -size 640x480 xc:black image.tif
You can also set lossless compression with:
magick -size 640x480 xc:black -compress LZW image.tif

Open only part of an image (JPEG/TIFF etc.) in R

I am analysing very large images in R, in the order of tens of thousands of pixels square. Unfortunately, even with 64 GB RAM, these images sometimes fail to fit into memory, and when they do I can only open one at a time, precluding parallelisation.
My current strategy is to load them using the JPEG or TIFF packages. e.g.:
image <- readJPEG('image.jpg')
However, as I am only performing simple mathematical manipulations (summing, thresholding etc.) that could be performed piece-by-piece, is it possible to only open part of an image at a time by specifying the dimensions to load? If so, I could write a loop to open 1024 x 1024 sized tiles. The JPEG and TIFF packages do not offer an option to do this.
If you are working with very large images, libvips is probably your best bet. You can shell out to it from R using system().
Your question is not very specific, so let's make a 10,000x10,000 pixel black-to-white gradient TIFF with ImageMagick:
convert -size 10000x10000 gradient: -depth 8 a.tif
Now threshold that at 50% with vips and check memory required:
vips im_thresh a.tif b.tif 128 --vips-leak
memory: high-water mark 292.21 MB
Pretty frugal, no? By comparison, the equivalent ImageMagick command requires 1.6GB of RAM:
/usr/bin/time -l convert a.tif -threshold 50% b.tif
Sample Output
...
1603895296 maximum resident set size
...
How about adding 64 to every pixel using im_gadd which does:
usage: vips im_gadd a in1 b in2 c out
where:
a is of type "double"
in1 is of type "image"
b is of type "double"
in2 is of type "image"
c is of type "double"
out is of type "image"
calculate a*in1 + b*in2 + c = outfile
So we use:
vips im_gadd 1 a.tif 0 b.tif 64 c.tif --vips-leak
memory: high-water mark 584.41 MB
Need to do some statistics?
vips im_stats c.tif
band minimum maximum sum sum^2 mean deviation
all 64 319 1.915e+10 4.20922e+12 191.5 73.6206
1 64 319 1.915e+10 4.20922e+12 191.5 73.6206
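If you would rather drive these from R than from the Terminal, a minimal sketch with system(), assuming vips is on your PATH and reusing the file names from the examples above:
system("vips im_thresh a.tif b.tif 128")          # threshold the gradient at 128
system("vips im_gadd 1 a.tif 0 b.tif 64 c.tif")   # c = 1*a + 0*b + 64
stats <- system("vips im_stats c.tif", intern = TRUE)
cat(stats, sep = "\n")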
As it turns out, there is an R package - RBioFormats - that allows you to specify which part of an image to open (though it is not available on CRAN). It can be installed from GitHub as follows:
source("https://bioconductor.org/biocLite.R")
biocLite("aoles/RBioFormats") # You might need to first run `install.packages("devtools")`
library(RBioFormats)
The dimensions of the image can be read from the metadata without having to open the image:
metadata <- read.metadata('image.tiff')
xdim <- metadata@.Data[[1]]$sizeX
ydim <- metadata@.Data[[1]]$sizeY
Suppose that we want to load the top-left 512 x 512 pixels; we use the subset argument:
image <- read.image('image.tiff', subset = list(X = 1:512, Y = 1:512))
From this it is trivial to write a loop to iteratively process a whole large image. RBioFormats is an R interface to the Java Bio-Formats library and will open TIFFs, PNGs, and JPEGs, as well as many proprietary imaging formats.
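For example, a minimal tiling sketch, assuming the metadata accessors and the subset argument shown above; the 1024-pixel tile size and the commented-out processing step are placeholders:
library(RBioFormats)

tile <- 1024
md   <- read.metadata("image.tiff")
xdim <- md@.Data[[1]]$sizeX
ydim <- md@.Data[[1]]$sizeY

for (x0 in seq(1, xdim, by = tile)) {
  for (y0 in seq(1, ydim, by = tile)) {
    xs <- x0:min(x0 + tile - 1, xdim)
    ys <- y0:min(y0 + tile - 1, ydim)
    block <- read.image("image.tiff", subset = list(X = xs, Y = ys))
    # process the tile here, e.g. accumulate sum(block) or threshold it
  }
}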

creating a GIF in R error

I would like to learn how to create a simple GIF.
I found this code:
dir.create("examples")
setwd("examples")
# example 1: simple animated countdown from 10 to "GO!".
png(file="example%02d.png", width=200, height=200)
for (i in c(10:1, "G0!")){
plot.new()
text(.5, .5, i, cex = 6)
}
dev.off()
which creates 11 png images. I would like to create a GIF file from these 11 images, so I am using system("convert -delay 80 *.png example_1.gif"). But I get an error:
> system("convert -delay 80 *.png example_1.gif")
Invalid Parameter - 80
Warning message:
running command 'convert -delay 80 *.png example_1.gif' had status 4
I have also looked at Creating a Movie from a Series of Plots in R, but this does not work for me either.
P.S. I have already installed ImageMagick
Try running system("convert /?") to see the source of your problem!!!
Converts a FAT volume to NTFS.
CONVERT volume /FS:NTFS [/V] [/CvtArea:filename] [/NoSecurity] [/X]
volume Specifies the drive letter (followed by a colon),
mount point, or volume name.
/FS:NTFS Specifies that the volume will be converted to NTFS.
/V Specifies that Convert will be run in verbose mode.
/CvtArea:filename
Specifies a contiguous file in the root directory
that will be the place holder for NTFS system files.
/NoSecurity Specifies that the security settings on the converted
files and directories allow access by all users.
/X Forces the volume to dismount first if necessary.
All open handles to the volume will not be valid.
If you have version 7 of ImageMagick installed, then this line should solve the problem:
system("magick -delay 80 *.png example_1.gif")

Overlapping lines in Gnuplot when exporting

I'm trying to plot a discrete Brownian path in gnuplot, which involves a lot of overlapping lines. This is how it is displayed in the qt terminal (I have generated the image with a screenshot):
Notice how the overlapping lines get colored in a stronger color, which is beautiful.
If I export it in png, with
set term pngcairo size 1366,768 enhanced
I obtain this:
All the lines have the same intensity. Setting transparent doesn't help, either.
The same happens with this MWE:
set term pngcairo size 1366,768 background '#000000' enhanced
set output "image.png"
unset key
set border 0
unset xtics
unset ytics
set samples 1e6
set xrange [0:0.1]
p sin(1/x) w l lw 0.3
set output
I'm running gnuplot -d each time so my local config does not get loaded. How should I export the plot to obtain the same effect as in the GUI?
Here are some results of my investigation:
I couldn't achieve beautiful results with pngcairo either. Opacity isn't added when two curves overlap each other.
Exporting to SVG and converting to PNG looked a bit better, either with inkscape -z -e image.png -w 1600 -h 1200 image.svg or convert -density 3000 -resize 1600x1200 image.svg image.png. This step could be included in gnuplot as a system command.
It is possible to export the qt render to png directly from the qt window. First menu icon on the left → Export to image
This process could in theory be automated directly from Gnuplot, without user interaction. A patch has been submitted: https://sourceforge.net/p/gnuplot/patches/665/. As far as I can tell, it hasn't yet been integrated into Gnuplot 5.0.x.
Here is a related discussion on Gnuplot-dev.
If you feel adventurous, you could try to recompile Gnuplot with the applied patch. The submitter might be able to help you.
Very off-topic for this question, but as a workaround I have written a Julia script that replicates the look I am after. I will post it here in case anybody finds it useful.
using Images
function paint(Ny, Nx, iters=1e6; stepsize = 50)
    randstep() = rand([-1;1])
    x = Nx÷2
    y = Ny÷2
    M = zeros(Nx,Ny)
    for i in 1:iters
        rx = randstep()
        ry = randstep()
        for i in 1:stepsize
            x = mod1(x+rx, Nx)
            y = mod1(y+ry, Ny)
            M[x,y] += 1
        end
    end
    clamped = M/maximum(M)
    img = [Colors.RGB(0,mm,0) for mm in clamped]
end
img = convert(Image,paint(1366,768,1e4,stepsize=10))
save("coolbrownianwalk.png", img)
This produces images like this:

How can I losslessly crop a jpeg in R

I am new to R. I have a folder full of images (RGB) which are not all of the same dimensions. My requirement is to have them all in the same dimensions, which would involve resizing a bunch of them. I wrote the following code to get this done:
# EBImage
library(EBImage)
path = "G:/Images/"
file.names = dir(path, full.names = TRUE, pattern = ".jpeg")
reqd_dim = c(3099, 2329, 3)
sprintf("Number of Image Files is: %d", length(file.names))
for (i in 1:length(file.names)) {
  correction_flag = FALSE
  print("Loop Number:")
  flush.console()
  print(i)
  flush.console()
  img = readImage(file.names[i])
  # Checking if the dimensions are the same
  for (j in 1:length(reqd_dim)) {
    if (dim(img)[j] != reqd_dim[j]) {
      correction_flag = TRUE
      break
    }
  }
  if (correction_flag == TRUE) {
    print("Correcting dimensions of the image")
    flush.console()
    writeImage(img[1:3099, 1:2329, 1:3], file.names[i], quality = 100)
  }
}
My problem is that while the images are originally between 500-600 kb in size, the ones that are resized end up being between 1.8 to 2 Mb. In my particular case the images are in either of the two sizes - 3100x2329 or 3099x2329. So my resizing involves removing the extra column of pixels to make all images 3099x2329. I am ok with the file size of the files going down a bit as I expect some information to be lost; but in my case the file size is increasing more than three-fold.
Alternatively, I have thought of converting the images into matrices (which is supported by EBImage) and removing the extra row. But I have two issues here: one is that I don't know how to do it, and two is that even if I found a way to do it, I'm afraid I might lose some information if I ever needed to convert it back to an image.
I'm open to an improvement over this approach, or a totally different one as well. My only requirement is that I need to be able to resize my images in R without adding or losing any information (apart from the information in the pixels to be removed).
To perform lossless JPEG cropping you can use jpegtran, an external command line tool distributed as part of the IJG library. For example, the following command removes the last column of pixels from a 768x512 image:
jpegtran -crop 767x512+0+0 -optimize image.jpg >cropped.jpg
The -crop switch specifies the rectangular subarea WxH+X+Y, and -optimize is an option for reducing file size without quality loss by optimizing the Huffman table. For a complete list of switches see jpegtran -help.
Once jpegtran is installed on your system, it can be invoked from R by system(). The following example first takes a sample image and saves it as JPEG. The image is then cropped, and the pixel values are compared to the values from the original image.
library("EBImage")
# resave a sample image as JPG
f = system.file("images", "sample.png", package="EBImage")
writeImage(readImage(f), "image.jpg", quality=90)
# do the cropping
system("jpegtran -crop 767x512+0+0 -optimize image.jpg >cropped.jpg")
# compare file size
file.size("image.jpg", "cropped.jpg")
## [1] 65880 65005
original = readImage("image.jpg")
dim(original)
## [1] 768 512
cropped = readImage("cropped.jpg")
dim(cropped)
## [1] 767 512
# check whether original values are retained
identical(original[1:767,], cropped)
## TRUE
Back to your specific use case: your script could be further improved by examining image dimensions without actually loading the whole pixel array into R. For this you could, for example, use RBioFormats to read only the image metadata containing the image dimensions into R. But you can also use another command-line tool, identify, distributed as part of the ImageMagick suite, to retrieve the image dimensions, as illustrated below.
path = "G:/Images/"
file.names = dir(path, full.names = TRUE, pattern =".jpeg")
reqd_dim = c(3099,2329,3)
cat(sprintf("Number of Image Files is: %d\n", length(file.names)))
for (i in seq_along(file.names)) {
file = file.names[i]
cat(sprintf("Checking dimensions of image number %d: ", i))
flush.console()
cmd = paste('identify -format "c(%w, %h)"', file)
res = eval(parse(text=system(cmd, intern=TRUE)))
# Checking if the dimensions are the same
if ( all(res==reqd_dim) ) {
cat("OK\n")
flush.console()
}
else {
cat("Correcting\n")
flush.console()
system(sprintf("jpegtran -crop %dx%d+0+0 -optimize %s >%s",
reqd_dim[1], reqd_dim[2], file, file))
}
}
