I found code to convert the image and got 480*288 values, because that is the pixel size of the image. However, the matrix I want is much smaller, e.g. 19*12 or so. How can I do that in R? Thanks a lot!
library("EBImage")
img <- readImage("sample.jpg")
img <- channel(img, "grey")
write.csv(t(img), "sample.csv", row.names=FALSE)
In order to reduce the size of the resulting matrix you need to scale the image with resize before saving it to a .csv file; see my answer to Resizing image in R.
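For example, a minimal sketch using EBImage's resize (the 19x12 target size is taken from the question; the output file name is just illustrative):
library(EBImage)
img <- readImage("sample.jpg")
img <- channel(img, "grey")
img <- resize(img, w = 19, h = 12)   # downscale to the desired matrix size
write.csv(t(img), "sample_small.csv", row.names = FALSE)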
I generated an image of ordinary kriged predictions. I have a shapefile of a boundary line and I'd like to crop the ordinary kriged predictions in the shape of that shapefile.
This is the code I use to generate the image:
image(OK.pred,loc=grid,axes=F,useRaster=TRUE). I just want to clip an object out of the image -- when I plot them, they overlay perfectly.
It's almost identical to the issue here, https://gis.stackexchange.com/questions/167170/is-it-possible-to-clip-a-shapefile-to-an-image-in-r, but I'm relatively new to R and got totally lost with the netcdf file part.
I found a bunch of code on how to clip rasters, but I just can't figure out how to even save an image into a variable, let alone transform it into a raster in order to clip it. Any help would be much appreciated!
OK.pred<-krige.conv(gambling.geo,coords = gambling.geo$coords, data=gambling.geo$data, locations=grid,krige=krige.control(obj.model=gambling.vario.wls))
ordinarykrig = image(OK.pred,loc=grid,axes=F,useRaster=TRUE)
Macau <- readOGR("MAC_adm0.shp")
x <- crop(?...)
You need to convert the kriging output to a Spatial object before you can pass it to mask(). The following (taken from http://leg.ufpr.br/geoR/tutorials/kc2sp.R) should do it:
library(sp)      # SpatialPoints, points2grid, SpatialGridDataFrame
library(raster)  # raster() and mask()
OK.pred <- krige.conv(gambling.geo, coords = gambling.geo$coords, data = gambling.geo$data, locations = grid, krige = krige.control(obj.model = gambling.vario.wls))
# build a grid topology from the prediction locations and reorder geoR's
# output to match sp's grid cell ordering
GT.s <- points2grid(SpatialPoints(as.matrix(grid)))
reorder <- as.vector(matrix(1:nrow(grid), nc = slot(GT.s, "cells.dim")[2])[, slot(GT.s, "cells.dim")[2]:1])
SGDF.s <- SpatialGridDataFrame(grid = GT.s, data = as.data.frame(OK.pred[1:2])[reorder, ])
# rasterize and clip to the Macau boundary
r <- raster(SGDF.s)
x <- mask(r, Macau)
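To check the result, a quick plot of the clipped raster with the boundary overlaid should show the predictions confined to the shapefile:
plot(x)                  # clipped kriging predictions
plot(Macau, add = TRUE)  # overlay the boundary as a visual check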
Please note that I'm not interested in any kind of interpolation algorithm that expands the number of pixels and interpolates the new values.
I want to leave the world of pixel-based images and am looking for some scalable vector image solution.
Is there a way to turn a pixel image in a ggplot into a smooth, colour-meshed vector graphic?
The following pictures demonstrate what I'm aiming for: first a pixelated raster, and then a smoothed-out version of it.
The images are taken from the following Wikipedia article: HERE
Please note that the original images from the article are SVG files. You can zoom in as much as you want and you always get smooth colour transitions and no hard edges.
Some additional images and info: HERE2
Here is some example data for something that matches the first ("nearest") image:
library(ggplot2)
n = 5
pixelImg <- expand.grid(x=1:n,y=1:n)
pixelImg$value <- sample(1:n^2,n^2,replace = T)
ggplot(pixelImg, aes(x, y)) +
geom_raster(aes(fill = value)) +
scale_fill_gradientn(colours=c("#FFCd94", "#FF69B4", "#FF0000","#4C0000","#000000"))
If it's not possible within ggplot, is there a way to do it outside of ggplot?
Look into the ggsave() function. It supports .svg files for vector graphics.
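For example, a minimal sketch (note that writing .svg via ggsave() requires the svglite package; the plot object and file name are just illustrative):
library(ggplot2)
p <- ggplot(pixelImg, aes(x, y)) +
  geom_raster(aes(fill = value)) +
  scale_fill_gradientn(colours = c("#FFCd94", "#FF69B4", "#FF0000", "#4C0000", "#000000"))
ggsave("pixelImg.svg", plot = p, width = 5, height = 5)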
I have a folder of JPG images that I'm trying to classify for a Kaggle competition. I have seen some Python code on the forums that I think will accomplish this, but I was wondering whether it's possible to do in R. I'm trying to convert this folder of many JPG images into CSV files with numbers showing the grayscale of each pixel, similar to the hand-digit recognizer here: http://www.kaggle.com/c/digit-recognizer/
So basically jpg -> .csv in R, with numbers for the grayscale value of each pixel, to use for classification. I'd like to fit a random forest or a linear model to it.
There are some formulas for how to do this at this link. The raster package is one approach. This basically converts the RGB bands to one black-and-white band (it makes it smaller in size, which I'm guessing is what you want).
library(raster)
color.image <- brick("yourjpg.jpg")
# Luminosity method for converting to greyscale
# Find more here http://www.johndcook.com/blog/2009/08/24/algorithms-convert-color-grayscale/
color.values <- getValues(color.image)
bw.values <- color.values[,1]*0.21 + color.values[,2]*0.72 + color.values[,3]*0.07
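Since the goal is a .csv of grayscale values, a possible follow-up (a sketch; the output file name is an assumption) is to reshape the values back into the image's dimensions and write them out:
# getValues() returns cells row by row, so fill the matrix by row
bw.matrix <- matrix(bw.values, nrow = nrow(color.image), ncol = ncol(color.image), byrow = TRUE)
write.csv(bw.matrix, "yourjpg_grey.csv", row.names = FALSE)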
I think the EBImage package can also help with this problem (it is not on CRAN; install it from Bioconductor):
source("http://bioconductor.org/biocLite.R")
biocLite("EBImage")
library(EBImage)
color.image <- readImage("yourjpg.jpg")
bw.image <- channel(color.image,"gray")
writeImage(bw.image,file="bw.png")
First, I want to read a multilayer (10-band) ENVI image into R and then run some processing on it. Finally, I want to export the output as an ENVI image of the same size as the input image. The size of the input image is ncol = 200, nrow = 350, layers = 10. The data type of the input image is floating point. How do I write code in R to do this?
Thank you
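A minimal sketch with the raster package, assuming the input file is "input.envi" with its .hdr alongside, and leaving the actual processing as a placeholder:
library(raster)
img <- brick("input.envi")                  # 10 layers, 200 x 350, floating point
out <- calc(img, fun = function(x) x * 1)   # replace with your own processing
writeRaster(out, "output.envi", format = "ENVI", datatype = "FLT4S", overwrite = TRUE)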
When plotting images or heatmaps to PDFs, as in the example below, they are saved as vector objects in which every pixel of the image, or cell of the heatmap, is represented by a square. Even at modest resolutions this results in unnecessarily large files that also render poorly on some devices. Is there a way to make R save only the image area as a PNG or JPG embedded in the PDF, but keep text, axes, annotations etc. as vector graphics?
I'm asking since I often print R graphics, sometimes on large posters, and would like to combine the best of both worlds. Of course I could save the entire figure as a high-resolution PNG, but that would not be as elegant, or combine the two manually, e.g. in Inkscape, but that is quite tedious.
my.func <- function(x, y) x %*% t(y)
pdf(file="myPlot.pdf")
image(my.func(seq(-10,10,,500), seq(-5,15,,500)), col=heat.colors(100))
dev.off()
Thanks for your time, ideas and hopefully solutions!
Use ?rasterImage or, more conveniently in recent versions of R, image() with the option useRaster = TRUE.
That will dramatically reduce the size of the file.
my.func <- function(x, y) x %*% t(y)
pdf(file="image.pdf")
image(my.func(seq(-10,10,,500), seq(-5,15,,500)), col=heat.colors(100))
dev.off()
pdf(file="rasterImage.pdf")
image(my.func(seq(-10,10,,500), seq(-5,15,,500)), col=heat.colors(100), useRaster = TRUE)
dev.off()
file.info("image.pdf")$size
file.info("rasterImage.pdf")$size
image.pdf: 813229 bytes
rasterImage.pdf: 16511 bytes
See more details about the new features here:
http://developer.r-project.org/Raster/raster-RFC.html
http://journal.r-project.org/archive/2011-1/RJournal_2011-1_Murrell.pdf