R - Import then export multiband RGB aerial image .tif

I want to import an aerial image in .tif format, and then export in the same colour format. I imagine this is a simple task but I can't find a way to do it. The original files are coloured when viewed in Windows Photo Viewer and Google Earth, but the exports are black and white.
Ultimately, I want to crop/merge multiple images to create a single aerial image map from approx 6 tiles. Of the 6 tiles, I am only interested in about 30% of the area (the bits over the water, plus a ~20 m buffer), so the idea is to cut down on image size by keeping the parts I want as a single file, rather than importing all 6 tiles. I can open one .tif in Google Earth as a coloured aerial image, which is what I want for my custom-boundary map/image.
At the moment, I am struggling to import then export a single .tif in the same Google Earth-readable coloured image format. I'm importing the files into R using raster, and have tried exporting using writeRaster, but the images are black and white when viewed in GE. I believe this might mean the export is rendering only a single band of the (RGB) image? However, plotRGB() in R is able to plot it in colour.
You can download the file I'm working with from my Google Drive, or search for it on Elvis (satellite image data from inside the Australian Capital Territory, Australia, approx -35.467437, 148.824043). Thanks for any help or direction to some instructions.
This is where I'm at so far...
# import and plot the tif - plots nicely in colour
library(raster)
library(magrittr)  # for %>%
brick('ACT2017-RGB-10cm_6656073_55_0001_0001.tif') %>%
  plotRGB()
This is what I see from plotRGB(), and also when I open the original in Google Earth (this is the desired output colour).
# export
brick('ACT2017-RGB-10cm_6656073_55_0001_0001.tif') %>%
writeRaster('my_output.tif')
# then import the export
brick('my_output.tif') %>%
plotRGB
my_output.tif plots in colour in R, but black and white in Google Earth.
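One possible workaround to try (untested here; it assumes the export is losing its colour interpretation, and uses GDAL's PHOTOMETRIC=RGB creation option for GeoTIFFs) is to force an unsigned 8-bit export and declare the bands as RGB when writing:

```r
# Sketch: keep 8-bit values and declare the RGB colour interpretation
# via a GDAL creation option when writing the GeoTIFF
library(raster)
b <- brick('ACT2017-RGB-10cm_6656073_55_0001_0001.tif')
writeRaster(b, 'my_output.tif', datatype = 'INT1U',
            options = 'PHOTOMETRIC=RGB', overwrite = TRUE)
```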

Here is how you can do that with terra (the replacement for raster). For this example to work well, you need version 1.4-1 or higher; at the time of writing that is the development version, which you can install with install.packages('terra', repos='https://rspatial.r-universe.dev')
library(terra)
f <- 'ACT2017-RGB-10cm_6656073_55_0001_0001.tif'
r <- rast(f)
plot(r)
Since this is an "RGB" image, there is no need to call plotRGB explicitly.
I create two sub-images for this example
ex <- ext(665000, 665500, 6073000, 6073500)
x <- crop(r, ex)
ey <- ext(665500, 666000, 6073500, 6074000)
y <- crop(r, ey)
They still look good with
plot(x)
plot(y)
Now merge them
m <- merge(x, y)
After merging, the RGB channel assignment is lost and needs to be redeclared
RGB(m) <- 1:3
And you can write to disk
z <- writeRaster(m, "test.tif", overwrite=TRUE)
plot(z)
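To get from here to the cropped water-area map described in the question, a minimal sketch with terra could look like this (the polygon file 'water.shp' is a hypothetical outline of the areas of interest; it is not part of the original example):

```r
library(terra)
r <- rast('ACT2017-RGB-10cm_6656073_55_0001_0001.tif')
w <- buffer(vect('water.shp'), width = 20)  # 20 m buffer around the water polygons
out <- mask(crop(r, w), w)                  # keep only the buffered area
RGB(out) <- 1:3                             # redeclare the RGB channels
writeRaster(out, 'water_area.tif', overwrite = TRUE)
```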

Related

R grid arrange tiff microscopy RGB

I have RGB tiff files (from CellProfiler) which I want to import into R, label, and arrange as part of a high-throughput analysis. The closest I get is using:
library(tiff)
library(raster)
imageTiff <- tiff::readTIFF(imagePath[i])
rasterTiff <- raster::as.raster(imageTiff)
raster::plot(rasterTiff)
raster::plot plots the image nicely, but I can't capture the output and use it with gridExtra or add labels.
In addition, I tried rasterVis with levelplot and multiple other ways of importing the tiff and then converting it to a grob or ggplot.
However, I can't get anything to work and would like to ask whether R is even suited to this task at all?
Thank you very much for your help!
Okay, I think this is the most straightforward way, and possibly also the most obvious one.
I import JPEG or TIFF files with jpeg::readJPEG or tiff::readTIFF respectively. Both transform the images to a raster format which is compatible with grid::rasterGrob() and, subsequently, grid.arrange() etc.
library(jpeg)
library(tiff)
library(grid)
library(gridExtra)  # for grid.arrange
imageJPEG <- grid::rasterGrob(jpeg::readJPEG("test.jpeg"))
imageTIFF <- grid::rasterGrob(tiff::readTIFF("test.tiff"))
grid.arrange(imageJPEG, imageJPEG, imageJPEG)
grid.arrange(imageTIFF, imageTIFF, imageTIFF)
For my purpose that is perfect, since rasterGrob does not alter the raster matrix values. Labeling might be a bit tricky, but overall it is a grid/grob problem from here on.
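For the labels, one approach (a sketch, assuming a file 'test.tiff' and the gridExtra package; the label strings are placeholders) is to wrap each image grob with arrangeGrob() and a top title:

```r
library(grid)
library(gridExtra)
library(tiff)
img <- rasterGrob(readTIFF('test.tiff'))
# arrangeGrob() accepts a 'top' argument, which serves as a per-panel label
g1 <- arrangeGrob(img, top = 'Sample A')
g2 <- arrangeGrob(img, top = 'Sample B')
grid.arrange(g1, g2, ncol = 2)
```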

Import tiff + tfw + prj files in R

I want to import in R a map that I have downloaded from
http://www.naturalearthdata.com/downloads/10m-raster-data/10m-natural-earth-1/
When I download it I get 3 files with the following extension
.tif
.tfw
.prj
How should I read them? I can read the .tif file with
imported_raster=raster('NE1_HR_LC_SR_W.tif')
but then the colours and the projection are different from the original tif.
Thanks
I was looking for some information on another topic when I came across this one.
It's quite normal that the colours appear different from the original tif. There was probably a colour scheme or colour distribution applied to the original tif which isn't exported with the output tif. It's the user who should set a colour scheme or colour distribution (just like in ArcMap, for example).
I guess your exported tif has no projection at all when you load it in R? You need to use the information from the .tfw file to give each pixel (row, column) a coordinate.
Read in the .tfw file
Assume that your .tfw (ascii file) is something like this:
10.000000
0.000000000000000
0.000000000000000
-10.000000000000
137184.00000000000
180631.000000000
The last two rows are the X/Y coordinates of the centre of the upper-left pixel of your tif.
The first row tells you the spatial resolution in X, in this case 10; the fourth row is the resolution in Y, negative because row numbers increase going south.
So if you know the coordinate of the centre of the upper-left pixel, then the coordinate of pixel (row=i, column=j) is
(137184 + j*10, 180631 - i*10).
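As a worked check of that affine mapping (zero rotation terms assumed, as in the .tfw above):

```r
# The six .tfw values from the example above
xres <- 10      # line 1: pixel size in X
yres <- -10     # line 4: pixel size in Y (negative: rows run south)
x0   <- 137184  # line 5: X of the centre of the upper-left pixel
y0   <- 180631  # line 6: Y of the centre of the upper-left pixel

# Centre coordinate of the pixel at (row = i, column = j), zero-based
pixel_xy <- function(i, j) c(x0 + j * xres, y0 + i * yres)

pixel_xy(0, 0)  # 137184 180631 (upper-left pixel)
pixel_xy(2, 3)  # 137214 180611 (3 columns east, 2 rows south)
```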

convert jpg to greyscale csv using R

I have a folder of JPG images that I'm trying to classify for a kaggle competition. I have seen some code in Python on the forums that I think will accomplish this, but was wondering whether it is possible to do in R. I'm trying to convert this folder of many jpg images into csv files that have numbers showing the grayscale of each pixel, similar to the hand-digit recognizer here http://www.kaggle.com/c/digit-recognizer/
So basically jpg -> .csv in R, showing numbers for the grayscale of each pixel to use for classification. I'd like to fit a random forest or linear model on it.
There are some formulas for how to do this at this link. The raster package is one approach. This basically converts the RGB bands to one black-and-white band (it also makes the file smaller, which I am guessing is what you want).
library(raster)
color.image <- brick("yourjpg.jpg")
# Luminosity method for converting to greyscale
# Find more here http://www.johndcook.com/blog/2009/08/24/algorithms-convert-color-grayscale/
color.values <- getValues(color.image)
bw.values <- color.values[,1]*0.21 + color.values[,2]*0.72 + color.values[,3]*0.07
I think the EBImage package can also help with this problem (it is not on CRAN; install it from Bioconductor):
source("http://bioconductor.org/biocLite.R")
biocLite("EBImage")
library(EBImage)
color.image <- readImage("yourjpg.jpg")
bw.image <- channel(color.image,"gray")
writeImage(bw.image,file="bw.png")
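To finish the jpg -> csv step the question asks for, a minimal sketch building on the raster approach above could be the following (the folder name 'jpgs' is a placeholder for wherever the images live):

```r
library(raster)
# Write one greyscale CSV per JPG, using the luminosity weights from above
files <- list.files('jpgs', pattern = '\\.jpg$', full.names = TRUE)
for (f in files) {
  v <- getValues(brick(f))  # one row per pixel, one column per band
  grey <- v[, 1]*0.21 + v[, 2]*0.72 + v[, 3]*0.07
  write.csv(grey, sub('\\.jpg$', '.csv', f), row.names = FALSE)
}
```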

How can you crop raster layers in R in a batch and change projection

I was working with spatial data to get ready for analyses - I have a DEM at the desired extent of my study area, though I have ~39 other layers at the national scale (US). Is there a way to crop all of these 39 layers to the same extent as the DEM at once?
Also, I will be overlaying the output with other layers in a different projection. Is it possible to adjust the projection and pixel size of the output layers?
I am trying to use freeware as much as possible for my data manipulation...
I had the problem above, but have written a function in R to do all of this in a batch - see below. I had 39 climate data layers at the scale of the continental U.S. (from the PRISM Climate Data group; http://www.prism.oregonstate.edu/), and wanted to clip them to the extent of a DEM in southern California, reproject them, and export them for easy import and use with other layers in SAGA GIS. Below is the code, with an example of how it would be run: set the working directory to the folder that holds the layers you want to crop, and only those layers.
During the processing, all data are stored in memory, so with huge datasets, it might get hung up because of lack of memory... that would probably be something that would be good to improve.
Also, a response on the R Forum provided a shorter, more elegant way to do it too: http://permalink.gmane.org/gmane.comp.lang.r.geo/18320
I hope somebody finds it useful!
#########################################
#BatchCrop Function ###
#by Mike Treglia, mtreglia#gmail.com ###
###Tested in R Version 3.0.0 (64-bit), using 'raster' version 2.1-48 and 'rgdal' version 0.8-10
########################################
#This function crops .asc raster files in working directory to extent of another layer (referred to here as 'reference' layer), converts to desired projection, and saves as new .asc files in the working directory. It is important that the original raster files and the reference layer are all in the same projection, though different pixel sizes are OK. The function can easily be modified to use other raster formats as well
#Note, Requires package 'raster'
#Function Arguments:
#'Reference' refers to name of the layer with the desired extent; 'OutName' represents the intended prefix for output files; 'OutPrj' represents the desired output projection; and 'OutRes' represents the desired Output Resolution
BatchCrop <- function(Reference, OutName, OutPrj, OutRes){
  library(raster)  #Calls 'raster' library
  filenames <- list.files(pattern="*.asc", full.names=TRUE)  #Extract list of file names from working directory
  #Function 'f1' imports data listed in 'filenames' and assigns projection
  f1 <- function(x, z) {
    y <- raster(x)
    projection(y) <- CRS(z)
    return(y)
  }
  import <- lapply(filenames, f1, projection(Reference))
  cropped <- lapply(import, crop, Reference)  #Crop imported layers to reference layer
  #Function 'f2' changes projection of cropped layers
  f2 <- function(x, y) {
    x <- projectRaster(x, crs=OutPrj, res=OutRes)
    return(x)
  }
  output <- lapply(cropped, f2, OutPrj)
  #Use a 'for' loop to iterate writeRaster over all cropped layers
  for(i in 1:length(filenames)){
    writeRaster(output[[i]], paste(deparse(substitute(OutName)), i), format='ascii')
  }
}
#############################################
###Example Code using function 'BatchCrop'###
#############################################
#Data layers to be cropped downloaded from: http://www.prism.oregonstate.edu/products/matrix.phtml?vartype=tmax&view=maps [testing was done using 1981-2010 monthly and annual normals; can use any .asc layer within the bounds of the PRISM data, with projection as lat/long and GRS80]
#Set Working Directory where data to be cropped are stored
setwd("D:/GIS/PRISM/1981-2010/TMin")
#Import Reference Layer
reference<-raster("D:/GIS/California/Hab Suitability Files/10m_DEM/10m DEM asc/DEM_10m.asc")
#Set Projection for Reference Layer
projection(reference) <- CRS("+proj=longlat +ellps=GRS80")
#Run Function [desired projection is UTM, zone 11, WGS84; desired output resolution is 800m]
BatchCrop(Reference=reference,OutName=TMinCrop,OutPrj="+proj=utm +zone=11 +datum=WGS84",OutRes=800)
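The shorter approach mentioned above can be sketched roughly as follows (untested; it assumes all the .asc files fit in memory as one stack and already share the reference layer's CRS):

```r
library(raster)
# Stack all .asc layers, crop to the reference extent, reproject, and
# write each layer back out as a numbered ascii grid
files <- list.files(pattern = '\\.asc$')
s <- stack(files)
crs(s) <- crs(reference)
cropped <- crop(s, reference)
out <- projectRaster(cropped, crs = '+proj=utm +zone=11 +datum=WGS84', res = 800)
writeRaster(out, filename = 'TMinCrop', bylayer = TRUE, suffix = 'numbers', format = 'ascii')
```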

using ggplot's "annotation_raster" and reached R's "memory ceiling"

I am using R to create a floorplan of a house with several layers like below, starting from the bottom layer:
basemap: a scanned version of the floorplan which I put it at the bottom layer to aid the reading
bed: the house has several dozen beds, scattered in different rooms; the beds have different colours based on the characteristics of the residents
piechart: each bed has a piechart on top of it; again, the piecharts are created from another set of resident characteristics. Some beds have piecharts, some don't.
The bed and piechart layers were created from the shp file derived from the basemap (i.e. I used MapWindow to create a vector layer, imported the basemap as a raster layer, put it at the bottom, then drew the beds one by one. The bed shp file is then imported into R, the bed polygons' centroids are calculated, and those centroids position the piecharts.)
I used read.jpeg to import the basemap as an imagematrix object, then used the new annotation_raster function in ggplot2 0.9 to put the basemap at the bottom map layer. Since the bed layer was created from the basemap, it superimposes on the basemap layer perfectly in ggplot2.
I can create the map without problems if the basemap is small enough (3000 x 3000 pixels). Now I have a basemap of 8000+ x 3000+ pixels (object.size 241823624 bytes); I was not aware of the R memory issue when I was creating the shp file. The ggplot object can be compiled if I have annotation_raster disabled, but R keeps saying that it cannot allocate memory of xxx MB when I try to include the basemap in the ggplot object.
I think this has nothing to do with the compression of the jpg file, as the dimensions do not change even if I compress the jpg further. But I can't resize the jpg file, as my bed layer was created based on the original jpg's dimensions.
Can anyone help to shrink the size of the basemap's imagematrix, without changing the jpeg's dimension, or some other tricks to deal the R's memory limitation? Thanks.
I fixed it.
I first created a new basemap image file with width and height halved, then in the annotation_raster I did the following:
chart <- chart + annotation_raster(db$temp.basemap,
xmin=0,
xmax=basemap.xlength*2, # I stretched the image in R
ymin=0,
ymax=basemap.ylength*2) # I stretched the image in R
Now the map can be compiled within R's memory limit. The drawback I can think of is reduced image quality, but that is bearable, as it was 8000 x 3000 originally.
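An alternative to halving the image outside R would be to downsample the basemap in R itself, e.g. with raster::aggregate (a sketch only; 'basemap.jpg' is a placeholder name, and the /255 rescaling assumes 8-bit channels):

```r
library(raster)
b <- brick('basemap.jpg')
small <- aggregate(b, fact = 2)      # average 2x2 pixel blocks: half the size in each dimension
arr <- as.array(small) / 255         # back to a [0,1] array for annotation_raster
# annotation_raster(arr, xmin, xmax, ymin, ymax) then stretches it to the
# original extent, so the bed layer alignment is preserved
```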
