I have a SpatialPointsDataFrame which has one attribute (let's call it z for convenience) as well as lat/long coordinates.
I want to write this out to an XYZ file (i.e. an ASCII file with three columns).
Initially I tried
write.table(spdf, filename, row.names=FALSE)
but this wrote the z value first, followed by the coordinates, on each row. So it was ZXY format rather than XYZ. Not a big deal, perhaps, but annoying for other people who have to use the file.
At present I am using what feels like a really horrible bodge to do this (given below), but my question is: is there a good and straightforward way to write a SPDF out as XYZ, with the columns in the right order? It seems as though it ought to be easy!
Thanks for any advice.
Bodge:
# Pull the coordinates and the attribute out separately, then rebind in XYZ order
dfOutput <- data.frame(x = coordinates(spdf)[, 1], y = coordinates(spdf)[, 2])
dfOutput$z <- data.frame(spdf)[, 1]
write.table(dfOutput, filename, row.names = FALSE)
Why not just
library(sp)
spdf <- SpatialPointsDataFrame(coords = matrix(rnorm(30), ncol = 2),
                               data = data.frame(z = rnorm(15)))
write.csv(cbind(coordinates(spdf), spdf@data), file = "example.csv",
          row.names = FALSE)
You can write to a .shp file using writeOGR from the rgdal package. Alternatively, you could fortify (from ggplot2) your data and write that as a csv file.
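A minimal sketch of the writeOGR route, assuming spdf is the SpatialPointsDataFrame from the question (the layer name is made up):
library(rgdal)
# Writes points_xyz.shp (plus .dbf/.shx/.prj) into the working directory
writeOGR(spdf, dsn = ".", layer = "points_xyz", driver = "ESRI Shapefile")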
Following up on Noah's comment about a method like coordinates but for data values: the raster package has the getValues() method, which returns the cell values of a Raster* object (note that the code below reads the file as a RasterLayer, not a SpatialPointsDataFrame).
library(raster)
r <- raster('raster.sdat')  # reads the grid as a RasterLayer
write.table(
  cbind(coordinates(r), getValues(r)),  # cell centre coordinates plus values
  file = output_file,                   # output path of your choice
  col.names = c("X", "Y", "ZVALUE"),
  row.names = FALSE,
  quote = FALSE
)
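An even shorter sketch along the same lines (not from the original answer): rasterToPoints() returns an x/y/value matrix directly, dropping NA cells by default:
xyz <- rasterToPoints(r)
write.table(xyz, output_file, col.names = c("X", "Y", "ZVALUE"),
            row.names = FALSE, quote = FALSE)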
Related
I have a folder with many csv files. Each file has several columns as well as lat and long columns. Another folder has many rasters in tif format. The .csv files are named based on Julian date (e.g. 251.csv), and so are the rasters (e.g. 251.tif). I would like to add the raster value to the csv with the matching name and save it to a new csv in R. What I want to achieve is this:
library(raster)
library(spdplyr)

raster <- raster("c:/temp/TIFF/2001/273.tif")
points <- read.csv("c:/temp/csv/2001/273.csv")
coordinates(points) = ~long+lat
rasValue = extract(raster, points)
combinePointValue <- cbind(points, rasValue)
head(combinePointValue)

combinePointValue <- combinePointValue %>%
  rename(chloro = 10)
write.table(combinePointValue, file = "c:/temp/2001/chloro/273_chloro.csv",
            append = FALSE, sep = ",", row.names = FALSE, col.names = TRUE)
Considering the many csv and tif files, I would prefer to avoid typing this over and over. Is anyone able to help?
Many thanks in advance!
Ilaria
It is better to provide a minimal reproducible example, since your code cannot run without your specific data. However, if I understand correctly, you can try something like this. Since the csv and tif files have the same names, you can sort them and loop over the file index. You can reuse the original path of each csv file to save the new file, just by pasting on the suffix "_chloro":
library(raster)
library(spdplyr)

csv <- sort(list.files("c:/temp/csv/2001/", full.names = TRUE))
tif <- sort(list.files("c:/temp/TIFF/2001/", full.names = TRUE))

lapply(seq_along(csv), function(i) {
  raster <- raster(tif[i])
  points <- read.csv(csv[i])
  coordinates(points) = ~long+lat
  rasValue = extract(raster, points)
  combinePointValue <- cbind(points, rasValue)
  combinePointValue <- combinePointValue %>%
    rename(chloro = 10)
  write.table(combinePointValue,
              file = paste0(tools::file_path_sans_ext(csv[i]), "_chloro.csv"),
              append = FALSE, sep = ",", row.names = FALSE, col.names = TRUE)
})
Since the R spatial "ecosystem" has been undergoing dramatic changes over the past few years, and packages like sp and raster will be deprecated, you might consider a solution based on the terra package.
It would go something like:
# Not tested!
library(terra)

csv_path = "c:/temp/csv/2001/"
tif_path = "c:/temp/TIFF/2001/"
tif_list = list.files(tif_path, pattern = "\\.tif$", full.names = FALSE)

result_list = lapply(seq_along(tif_list), function(i) {
  tif_file = file.path(tif_path, tif_list[i])
  # Do not assume that the two lists of files are exactly equivalent.
  # Instead, create the CSV file name from the tif file name.
  csv_name = gsub("\\.tif$", ".csv", tif_list[i])
  csv_file = file.path(csv_path, csv_name)
  r = rast(tif_file)
  csv_df = read.csv(csv_file)
  # Assume the csv long/lat are in the same CRS as the tif files
  pts = vect(csv_df, geom = c("long", "lat"), crs = crs(r))
  result = extract(r, pts, xy = TRUE)
  new_csv = paste0(tools::file_path_sans_ext(csv_file), "_chloro.csv")
  write.csv(result, new_csv, row.names = FALSE)
  return(result)
})
I could load a shp file in R:
library(rgdal)
setwd("something")
shp = readOGR(dsn = ".", layer = "shp_name")
Now, I want to convert that to a normal dataframe. What should I do?
I found the answer. It just works like in the general case:
shp_df = as.data.frame(shp, xy = TRUE)
You don’t always have to complicate things...
It is a rather untypical scenario: I am using the R custom visual in Power BI to plot a raster, and the only way to pass data is by using a dataframe.
This is what I have done so far:
generate a raster in R
save it to file using saveRDS
encode the file as base64 and save it as a csv (a sketch of this encoding side follows below).
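A minimal sketch of that encoding side, assuming an uncompressed RDS and a single-row csv (file names are made up):
library(raster)
library(caTools)
r <- raster(matrix(runif(100), nrow = 10))  # toy raster
saveRDS(r, "raster.rds", compress = FALSE)
raw_bytes <- readBin("raster.rds", what = "raw", n = file.size("raster.rds"))
write.csv(data.frame(Index = 1, Value = base64encode(raw_bytes)),
          "raster.csv", row.names = FALSE)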
Now, using the code below, I manage to read the csv, load it into a dataframe and combine all the rows.
My question is: how do I decode it back to a raster object?
Here is a reproducible example:
# Input load. Please do not change #
`dataset` = read.csv('https://raw.githubusercontent.com/djouallah/keplergl/master/raster.csv', check.names = FALSE, encoding = "UTF-8", blank.lines.skip = FALSE);
# Original Script. Please update your script content here and once completed copy below section back to the original editing window #
library(caTools)
library(readr)
dataset$Value <- as.character(dataset$Value)
dataset <- dataset[order(dataset$Index),]
z <- paste(dataset$Value, collapse = "")  # combine the chunks into one string
Raster <- base64decode(z, "raw")          # back to raw bytes
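A hedged sketch of one way to finish the decoding from here, assuming the RDS was saved uncompressed (as in the workaround below): write the raw bytes back to a file and readRDS() it.
tmp <- tempfile(fileext = ".rds")
writeBin(Raster, tmp)       # Raster is the raw vector from base64decode
background <- readRDS(tmp)  # back to a raster object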
It turned out the solution is very easy: saveRDS has an option to save with ascii = TRUE.
saveRDS(background,'test.rds',ascii = TRUE,compress = FALSE)
Now I just read it in a human-readable format (which is easy to load into Power BI) and it works:
library(raster)  # for plotRGB
fil <- 'https://raw.githubusercontent.com/djouallah/keplergl/master/test.rds'
cony <- gzcon(url(fil))
XXX <- readRDS(cony, refhook = NULL)
plotRGB(XXX)
I'm using the following code:
lst <- split(data, cut(data$Pos, breaks = maxima, include.lowest = TRUE))
dir <- getwd()
lapply(seq_len(length(lst)),
       function(i) write.csv(lst[[i]], file = paste0(dir, "/", names(lst[i]), ".csv"),
                             row.names = FALSE))  # split data into .csv files based on maxima values
that another user provided me with, to split and save a dataset into separate .csv files. However, the files are saved with names like [0,9].csv, (9,19].csv, etc., which the analysis program I'm using cannot read. How would I change the filenames they are saved with? I assumed that it was the
names(lst[i])
portion; however, when I changed that (e.g. to names(vec[i]), with vec being a vector of numbers with the same length as the number of data files), no data files were created.
Any help is appreciated!
@desc provides the answer in the comment; you only need to change your code to:
lst <- split(data, cut(data$Pos, breaks = maxima, include.lowest = TRUE))
dir <- getwd()
lapply(seq_len(length(lst)),
       function(i) write.csv(lst[[i]],
                             # use the index i in the file name instead of names(lst[i]),
                             # so the unreadable "[0,9]" style labels disappear
                             file = paste0(dir, "/your_desired_label_here_", i, ".csv"),
                             row.names = FALSE))
I want to normalize data using RMA in R, but there is a problem: it does not read .txt files. What should I do to normalize data from a .txt file?
Basically, all normalization methods in Bioconductor are based on the AffyBatch class. Therefore, you have to read your text file (probably a matrix) and create an AffyBatch manually:
AB <- new("AffyBatch", exprs = exprs, cdfName = cdfname, phenoData = phenoData, ...)
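A minimal sketch of that, assuming a tab-delimited probe-level matrix and a known chip type (the file name and CDF name here are made up):
library(affy)
exprs <- as.matrix(read.table("probes.txt", header = TRUE, row.names = 1, sep = "\t"))
AB <- new("AffyBatch", exprs = exprs, cdfName = "HG-U133A")
eset <- rma(AB)  # RMA on the manually built AffyBatch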
RMA needs an ExpressionSet structure. After reading the file (read.table()) and cleaning the colnames and row.names, convert the data to a matrix and use:
a <- ExpressionSet(assayData = m)  # m is your numeric matrix
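A minimal sketch of that path, with a made-up file name:
library(Biobase)
m <- as.matrix(read.table("expression.txt", header = TRUE, row.names = 1, sep = "\t"))
a <- ExpressionSet(assayData = m)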
If that didn't work, import your *.txt data into the flexarray software, which can read it and do RMA. This may work.
I use the normalizeQuantiles() function from the limma R package:
library(limma)
mydata <- read.table("RDotPsoriasisLogNatTranformedmanuallyTABExport.tab",
                     sep = "\t", header = TRUE)      # read from file
b = as.matrix(cbind(mydata[, 2:5], mydata[, 6:11]))  # the numeric columns (two sample groups)
m = normalizeQuantiles(b, ties = TRUE)               # normalize
mydata_t <- t(as.matrix(m))                          # transpose if you need