I'm quite new to R and I have a problem I couldn't find a solution for so far.
I have a folder of 1000 raster files, and I have to get the median of all rasters for each cell.
The files contain NoData cells (which is, I think, why they have different extents).
Is there any solution to loop through the folder, combining all files and getting the median?
Error in rep(value, times = ncell(x)) : invalid 'times' argument
In addition: Warning message:
In setValues(x, rep(value, times = ncell(x))) : NAs introduced by coercion
Error in .local(x, i, j, ..., value) :
cannot replace values on this raster (it is too large
I tried with a RasterStack, but it doesn't work because of the different extents.
Thanks for your help.
I'll approach this by mosaic()'ing images that have different extents and origins but the same resolution.
Create a few RasterLayer objects and export them (to read back later):
library('raster')
library('rgdal')
e1 <- extent(0,10,0,10)
r1 <- raster(e1)
res(r1) <- 0.5
r1[] <- runif(400, min = 0, max = 1)
#plot(r1)
e2 <- extent(5,15,5,15)
r2 <- raster(e2)
res(r2) <- 0.5
r2[] <- rnorm(400, 5, 1)
#plot(r2)
e3 <- extent(18,40,18,40)
r3 <- raster(e3)
res(r3) <- 0.5
r3[] <- rnorm(1936, 12, 1)
#plot(r3)
# Write them out
wdata <- '../Stackoverflow/21876858' # your local folder
writeRaster(r1, file.path(wdata, 'r1.tif'), overwrite = TRUE)
writeRaster(r2, file.path(wdata, 'r2.tif'), overwrite = TRUE)
writeRaster(r3, file.path(wdata, 'r3.tif'), overwrite = TRUE)
Read and mosaic with a function
Since raster::mosaic does not accept a RasterStack/RasterBrick or a list of RasterLayers, the best approach is to use do.call, like this excellent example.
To do so, register a mosaic method for lists and adjust how its arguments are passed:
setMethod('mosaic', signature(x = 'list', y = 'missing'),
          function(x, y, fun, tolerance = 0.05, filename = "") {
            stopifnot(missing(y))
            args <- x
            if (!missing(fun)) args$fun <- fun
            if (!missing(tolerance)) args$tolerance <- tolerance
            if (!missing(filename)) args$filename <- filename
            do.call(mosaic, args)
          })
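A quick sanity check of the new list method, using r1, r2 and r3 from above (a sketch, not part of the original answer):
# mosaic a plain list of RasterLayers, taking the per-cell median in overlaps
m <- mosaic(list(r1, r2, r3), fun = median)
plot(m)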
Let's keep tolerance low here to evaluate any misbehavior of our function.
Finally, the mosaic function:
f.Mosaic <- function(path = wdata, func = median){
  # List TIF files in the folder
  files <- list.files(path, all.files = FALSE)
  ltif  <- grep(".tif$", files, ignore.case = TRUE, value = TRUE)
  # Start from an extent here (you can read it from your first tif or define it manually)
  uext <- extent(c(0, 100, 0, 100))
  # Read each raster and grow the union of all extents
  stkl <- list()
  for(i in 1:length(ltif)){
    x    <- raster(file.path(path, ltif[i]))
    uext <- union(uext, extent(x))
    stkl[[i]] <- x
  }
  # Empty rasterLayer covering the global extent
  rt <- raster(uext)
  res(rt) <- 0.5
  rt[] <- NA
  # Merge each rasterLayer onto the global extent
  stck <- list()
  for(i in 1:length(stkl)){
    stck[[i]] <- merge(stkl[[i]], rt, tolerance = 1e+6)
  }
  # Mosaic with the supplied function
  mosaic.r <- raster::mosaic(stck, fun = func) # using median
  mosaic.r
}
# Run the function with func = median
mosaiced <- f.Mosaic(wdata, func = median)
# Plot it
plot(mosaiced)
Possibly far from the best approach but hope it helps.
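For completeness, a shorter alternative (a sketch, not part of the original answer, assuming all rasters share the same resolution and origin): extend every layer to the union extent and take the per-cell median with calc():
library(raster)
files <- list.files(wdata, pattern = "\\.tif$", full.names = TRUE)
rl    <- lapply(files, raster)
uext  <- Reduce(union, lapply(rl, extent))         # union of all extents
rl    <- lapply(rl, extend, y = uext, value = NA)  # pad each layer with NA
med   <- calc(stack(rl), function(v) median(v, na.rm = TRUE))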
Related
How can I read all files in a folder, run a script on each, and create separate outputs named after the original files? I have a folder with .las files and I need to create corresponding .asc files from them. My script is below:
library(lidR)
# Path to data
LASfile <- ("path/1234.las")
# Sorting out points in point cloud data, keeping vegetation and ground point classes.
las <- readLAS(LASfile, filter="-keep_class 1 2") # Keep high vegetation and ground point classes
# Normalizing ground points to 0 elevation (idwinterpolation), instead of meters above sea level.
dtm <- grid_terrain(las, algorithm = knnidw(k = 8, p = 2))
las_normalized <- normalize_height(las, dtm)
# Create a filter to remove points above 95th percentile of height
lasfilternoise = function(las, sensitivity)
{
  p95 <- grid_metrics(las, ~quantile(Z, probs = 0.95), 10)
  las <- merge_spatial(las, p95, "p95")
  las <- filter_poi(las, Z < p95*sensitivity)
  las$p95 <- NULL
  return(las)
}
# Generating a pit-free canopy height model without null values (Khosravipour et al., 2014)
las_denoised <- lasfilternoise(las_normalized, sensitivity = 1.2)
chm <- grid_canopy(las_denoised, 0.32, pitfree(c(0,2,5,10,15), c(3,1.5), subcircle = 0.2))
# Applying a median filter with a 3x3 moving window to smooth the image and remove noise
ker <- matrix(1, 3, 3)
chms <- raster::focal(chm, w = ker, fun = median)
plot(chms)
library(raster)
# Writing output file
writeRaster(chms, filename="path/1234.asc", format="ascii", overwrite=TRUE) # change to whatever is relevant for each run
citation("lidR")
I tried using lapply but I don't know how to use it the right way.
It must be something like this to read all files in the folder: list.files("path", pattern = "\\.las$", full.names = TRUE),
and something like this to write the output files: lapply(r, writeRaster, filename = paste0(f, ".asc"), format = "ascii").
But I cannot get it right.
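Something like this minimal sketch is the pattern I'm after (process() is a placeholder for the CHM pipeline above, not a real function):
f <- list.files("path", pattern = "\\.las$", full.names = TRUE)
lapply(f, function(lasfile) {
  chms <- process(lasfile)  # placeholder for: readLAS, normalize, CHM, focal
  raster::writeRaster(chms,
                      filename = paste0(tools::file_path_sans_ext(lasfile), ".asc"),
                      format = "ascii", overwrite = TRUE)
})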
An example of my LAZ to LAS+Index conversion:
convertLAZ <- function(lazfile, outdir = "") {
  if(!dir.exists(outdir)) { dir.create(outdir, recursive = TRUE) }
  print(lazfile)
  las <- lidR::readLAS(files = lazfile, filter = "-keep_class 2 9")
  .file <- stringi::stri_replace_all_regex(lazfile, "^.*/", "")
  lidR::writeLAS(las, file = paste0(outdir, "/", stringi::stri_replace_all_fixed(.file, "laz", "las")), index = TRUE)
}
f <- list.files("data/laz", pattern = "\\.laz$", full.names = TRUE)
lapply(f, convertLAZ, outdir = "data/las22")
You can expand it to rasterization, normalization, etc., and to saving as .asc. But I would encourage you to have a look at https://r-lidar.github.io/lidRbook/engine.html. In short: process your LAZ/LAS files as a LAScatalog, then tile the resulting raster and save it to .asc.
And here is an example of how to use parallel processing (3+1 processes in the example below). Please note it can be memory hungry, so be careful with the number of workers and processing parameters like opt_chunk_buffer:
library(future)
options(parallelly.availableCores.methods = "mc.cores")
options(mc.cores = 3)
plan(multisession)
parallelly::availableWorkers()
library(lidR)
myPath <- "data/las"
ctg <- readLAScatalog(myPath)
crs(ctg) <- "EPSG:2180"
ctg@output_options$drivers$SpatRaster$param$overwrite <- TRUE
opt_output_files(ctg) <- "data/dtm2/barycz__{XLEFT}_{YBOTTOM}"
opt_chunk_size(ctg) <- 500
opt_chunk_buffer(ctg) <- 600
opt_filter(ctg) <- "-keep_class 2 9"
summary(ctg)
vr <- rasterize_terrain(ctg, 0.25, tin())
plot(vr)
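If the final goal is .asc output, the returned raster can be exported directly; a sketch, assuming a current lidR where rasterize_terrain() returns a terra SpatRaster (the output path is illustrative):
# write the DTM as an Esri ASCII grid; terra infers the AAIGrid driver from .asc
terra::writeRaster(vr, "data/dtm2/dtm.asc", overwrite = TRUE)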
Solved it now
.libPaths( c( "C:/Users/Public/R/win-library/4.2" , .libPaths() ) )
library(lidR)
library(raster)  # for writeRaster()
createASCI <- function(lasfile, outdir = "") {
  if(!dir.exists(outdir)) { dir.create(outdir, recursive = TRUE) }
  print(lasfile)
  las <- lidR::readLAS(files = lasfile, filter = "-keep_class 1 2 3 4 5")
  .file <- stringi::stri_replace_all_regex(lasfile, "^.*/", "")
  # Normalizing ground points to 0 elevation (IDW interpolation), instead of meters above sea level
  dtm <- grid_terrain(las, algorithm = knnidw(k = 8, p = 2))
  las_normalized <- normalize_height(las, dtm)
  # A filter to remove points above the 95th percentile of height
  lasfilternoise = function(las, sensitivity)
  {
    p95 <- grid_metrics(las, ~quantile(Z, probs = 0.95), 10)
    las <- merge_spatial(las, p95, "p95")
    las <- filter_poi(las, Z < p95*sensitivity)
    las$p95 <- NULL
    return(las)
  }
  # Generating a pit-free canopy height model without null values (Khosravipour et al., 2014)
  las_denoised <- lasfilternoise(las_normalized, sensitivity = 1.2)
  chm <- grid_canopy(las_denoised, 0.32, pitfree(c(0,2,5,10,15), c(3,1.5), subcircle = 0.2))
  # Applying a median filter with a 3x3 moving window to smooth the image and remove noise
  ker <- matrix(1, 3, 3)
  chms <- raster::focal(chm, w = ker, fun = median)
  writeRaster(chms, filename = paste0(outdir, "/", stringi::stri_replace_all_fixed(.file, "las", "asc")), overwrite = TRUE)
}
f <- list.files("C:/Lasdata", pattern = "\\.las$", full.names = TRUE)
lapply(f, createASCI, outdir = "C:/Lasdata/nytt")
I have a folder of CSV files with filtered/gated data: two columns (dihedral angle vs. bend angle), filtered on an individualized min and max for each file.
I need at least the mean, median, sd, skewness, and kurtosis for each column of each file, presented as a table (one line per file in the final product).
I am not familiar with which R packages may be suitable for this task, so I was trying to do something simple. I can get it to work for a single file, but I have over 200 files. They will likely be updated, so I'll have to run this multiple times.
module load ccs/container/R/4.1.0
R
library(moments)
files <- list.files("/mnt/gpfs2_4m/scratch/username/fs_scripts/foldedstart_*", pattern="*.csv", recursive=TRUE, full.names=TRUE)
cat("filename","\t","dihedral mean","\t","bend mean","\t","dihedral median","\t","bend median","\t","dh sd","\t","bd sd","\t","dh skew","\t","bd skew","\t","dh kurt","\t","bd kurt","\n")
for (currentFile in files) {
  df <- read.table(currentFile, header=TRUE)
  z1 <- mean(df$V1)
  z2 <- median(df$V1)
  z3 <- sd(df$V1)
  z4 <- skewness(df$V1)
  z5 <- kurtosis(df$V1)
  z7 <- mean(df$V2)
  z8 <- median(df$V2)
  z9 <- sd(df$V2)
  z10 <- skewness(df$V2)
  z11 <- kurtosis(df$V2)
  cat(currentFile,"\t",z1,"\t",z7,"\t",z2,"\t",z8,"\t",z3,"\t",z9,"\t",z4,"\t",z10,"\t",z5,"\t",z11,"\n")
  write.table(newdata, file=statsFileName)
}
The "first cat line" is the header and labels.
The "for cat line" likely goes "no where," but it is the format that I am trying to achieve.
The "write.table line" is something that I found, but I don't think it may be appropriate for this.
I truly appreciate any help on this. I am not that familiar with R and the examples that I have found do not appear to relate enough to what I trying to do for me to adapt them.
Edit: [plot of the data omitted] I'm looking for the medians (centers) of each major area of density; trying to give some context.
The following computes, for each file, all the statistics the question asks for and writes a table of results to a CSV file.
library(moments)
stats <- function(filename, na.rm = TRUE) {
  tryCatch({
    x    <- read.csv(filename)
    xbar <- colMeans(x, na.rm = na.rm)
    med  <- apply(x, 2, median, na.rm = na.rm)
    S    <- apply(x, 2, sd, na.rm = na.rm)
    skwn <- skewness(x, na.rm = na.rm)
    kurt <- kurtosis(x, na.rm = na.rm)
    #
    # return a data.frame, it will
    # make the code simpler further on
    out <- data.frame(
      filename = filename,
      dihedral.mean = xbar[1],
      bend.mean = xbar[2],
      dihedral.median = med[1],
      bend.median = med[2],
      dihedral.sd = S[1],
      bend.sd = S[2],
      dihedral.skewness = skwn[1],
      bend.skewness = skwn[2],
      dihedral.kurtosis = kurt[1],
      bend.kurtosis = kurt[2]
    )
    row.names(out) <- NULL
    out
  },
  error = function(e) e
  )
}
statsFileName <- "statsfile.txt"
#files <- list.files("/mnt/gpfs2_4m/scratch/username/fs_scripts/foldedstart_*", pattern="*.csv", recursive=TRUE, full.names=TRUE)
files <- list.files("~/Temp", "^t.*\\.csv$", full.names = TRUE)
newdata <- lapply(files, stats)
ok <- !sapply(newdata, inherits, "error")
cat("files read:", sum(ok), "\n")
if(any(!ok)) {
  cat("errors:", sum(!ok), "\n")
  err_list <- list(
    files = files[!ok],
    error = sapply(newdata[!ok], conditionMessage)
  )
}
newdata <- do.call(rbind, newdata[ok])
write.csv(newdata, file = statsFileName, row.names = FALSE)
This solution uses dplyr to summarise each file, combines the summaries into a single dataframe, then writes the results to a csv file.
library(moments)
library(dplyr)
csv_output_path <- "./results.csv"
data_dir <- "./data"
### Create dummy csv files for reproducibility ###
if(!dir.exists(data_dir)) dir.create(data_dir)
for(i in 1:200){
  write.csv(data.frame(V1 = runif(100), V2 = runif(100)),
            file = paste0(data_dir, "/file_", i, ".csv"),
            row.names = FALSE)
}
### Summarise files ###
files <- list.files(data_dir, pattern = ".csv$", recursive = TRUE, full.names = TRUE)
all_results <- vector("list", length(files)) # results placeholder
# Loop that calculates summary statistics
for (i in 1:length(files)) {
  currentFile <- files[i]
  df <- tryCatch(read.csv(file = currentFile, header = TRUE),
                 error = function(e) NULL)
  if(is.null(df))
    next
  result <- df %>%
    summarise_all(list(mean = mean, median = median,
                       sd = sd, skew = skewness, kur = kurtosis)) %>%
    mutate(file = currentFile) %>% # add filename to the result
    select(file, everything())    # reorder
  all_results[[i]] <- result
}
# Combine results into a single df
final_table <- bind_rows(all_results)
# write file
write.csv(final_table, csv_output_path, row.names = FALSE)
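As a side note (not part of the original answer): in dplyr 1.0+, summarise_all() is superseded by across(), so an equivalent summarise call would be:
result <- df %>%
  summarise(across(everything(),
                   list(mean = mean, median = median,
                        sd = sd, skew = skewness, kur = kurtosis)))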
I have been trying to write a loop that goes through two folders of Sentinel-2 satellite images (band 4 and band 5) and gets an NDVI for each date.
A stack is created for each band, with some cropping and resampling, before finally proceeding to the NDVI calculation. I struggle with integrating the NDVI calculation into the loop and with creating the file names.
I simply want my loop to generate x files for x dates and then give each NDVI image the date as a name ("YYYY/MM/DD.tif") extracted from the file name. But I can't think of a way to do so, after a lot of unsuccessful trial and error.
#list files
files4 <- list.files(path4, pattern = "jp2$", full.names = TRUE)
files5 <- list.files(path5, pattern = "jp2$", full.names = TRUE)
ms5 <- stack()
ms4 <- stack()
for (f in files4){
  # load a raster
  r4 <- raster(f)
  proj4string(r4)
  proj4string(emprise)
  emprise <- spTransform(emprise, proj4string(r4))
  r4b <- crop(r4, emprise)
  ms4 <- stack(ms4, r4b)
  # copy the date from the file name to name the final NDVI image (I have to get rid of everything but the date)
  x <- gsub("[A-z //.//(//)]", "", f)
  y <- substr(x, 4, 11)
}
for (f in files5){
  # load the raster
  r5 <- raster(f)
  proj4string(r5)
  proj4string(emprise)
  emprise <- spTransform(emprise, proj4string(r5))
  r5b <- crop(r5, emprise)
  ms5 <- stack(ms5, r5b)
}
#Resampling : setting the Band 5 to the same resolution as Band 4
b5_resamp <- resample(ms5, ms4)
Have you considered looping over dates rather than files? I can't give more specific advice without example data, but here is the general idea:
# List files
files4 <- list.files("./band4", pattern = ".tif", full.names = TRUE)
#> "band4/T31UDR_20170126T105321_B04.tif" "band4/T31UDR_20180126T105321_B04.tif"
files5 <- list.files("./band5", pattern = ".tif", full.names = TRUE)
#> "./band5/T31UDR_20170126T105321_B05.tif" "./band5/T31UDR_20180126T105321_B05.tif"
# Get dates
dates <- unique(gsub(pattern = ".*_(\\d{8}).*", replacement = "\\1", x = c(files4, files5)))
#> "20170126" "20180126"
# Define empty stacks
ms5 <- stack()
ms4 <- stack()
for(date in dates){
  ## Band 4
  f4 <- list.files("./band4", pattern = date, full.names = TRUE)
  # loading a raster
  r4 <- raster(f4)
  proj4string(r4)
  proj4string(emprise)
  emprise <- spTransform(emprise, proj4string(r4))
  r4b <- crop(r4, emprise)
  ms4 <- stack(ms4, r4b)
  ## Band 5
  f5 <- list.files("./band5", pattern = date, full.names = TRUE)
  # load the raster
  r5 <- raster(f5)
  proj4string(r5)
  proj4string(emprise)
  emprise <- spTransform(emprise, proj4string(r5))
  r5b <- crop(r5, emprise)
  ms5 <- stack(ms5, r5b)
  ## Resampling: setting Band 5 to the same resolution as Band 4
  b5_resamp <- resample(ms5, ms4)
  ## Write to file
  writeRaster(b5_resamp, filename = paste0(date, ".tif"))
}
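The NDVI step the question asks about could then go at the end of each iteration, right after the resampling. A sketch (it assumes the per-date layers r4b and r5b from the loop above and the usual normalized-difference formula; the "_NDVI" suffix is illustrative):
# per-date NDVI from the two cropped layers
r5_resamp <- resample(r5b, r4b)
ndvi <- (r5_resamp - r4b) / (r5_resamp + r4b)
writeRaster(ndvi, filename = paste0(date, "_NDVI.tif"), overwrite = TRUE)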
I want to calculate the share (%) of pixels classified as 1 from a list of files. For a single image the code works well; however, when I try to put it in a for loop, R returns named numeric(0) for all files.
How do I get what I want?
Single Image:
ras <- raster("path") # binary product
ras_df <- as.data.frame(ras) # creates data frame
ras_table <- table(ras_df$file) # creates table
share_suit_hab <- ras_table[names(ras_table)==1]/sum(ras_table[names(ras_table)]) # number of pixels with value 1 divided by sum of pixels with value 0 and 1 = share of suitable habitat (%)
print(share_suit_hab)
> ras
class : RasterLayer
dimensions : 1000, 1000, 1e+06 (nrow, ncol, ncell)
resolution : 2165.773, 2463.182 (x, y)
extent : -195054.2, 1970719, 2723279, 5186461 (xmin, xmax, ymin, ymax)
crs : +proj=longlat +datum=WGS84 +no_defs +ellps=WGS84 +towgs84=0,0,0
source : C:/Users/name/MASTERARBEIT/BASELINE/Eastern Arctic/Summer_EA_Output/ct/2006/cis_SGRDREA_20060703_pl_a.tif
names : cis_SGRDREA_20060703_pl_a
values : 0, 1 (min, max)
For Loop:
list_ct <- list.dirs("path")
i = 0
for(year in list_ct){
  ct_files_list <- list.files(year, recursive = FALSE, pattern = "\\.tif$", full.names = FALSE)
  ct_file_df <- as.data.frame(paste0("path", i, "/", ct_files_list))
  ct_file_df <- as.data.frame(matrix(unlist(ct_file_df), nrow = length(unlist(ct_file_df[1]))))
  ct_table <- table(ct_file_df[, 1])
  stored <- ct_table[names(ct_table)==1]/sum(ct_table[names(ct_table)])
  print(stored)
}
This is the final code, which is running perfectly!
library(raster)
library(dplyr)  # for the %>% pipe
list_ct <- list.dirs("path", recursive = FALSE)
stored <- list()
for (year in seq_along(list_ct)){
  ct_file_list <- list.files(list_ct[year], recursive = FALSE, pattern = ".tif$", full.names = FALSE)
  tmp <- list()
  for (i in seq_along(ct_file_list)){
    ct_file_df <- raster(paste0(list_ct[year], "/", ct_file_list[i])) %>% as.data.frame()
    # share of 1s among the non-NA cells
    tmp[[i]] <- sum(ct_file_df[, 1], na.rm = TRUE) / sum(!is.na(ct_file_df[, 1]))
    names(tmp)[i] <- paste0(list_ct[year], "/", ct_file_list[i])
    print(tmp[i])
  }
  stored[[year]] <- tmp
  names(stored)[year] <- paste0(list_ct[year])
}
Could you add a reproducible example (data incl.)?
You probably need to replace numeric(0) simply by 0. numeric(0) does not mean 0; it means a numeric vector of length zero (i.e., empty). I'm guessing you're assigning numeric(0)+1, which is still a numeric vector of length zero.
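A quick illustration of why numeric(0) propagates:
x <- numeric(0)
length(x)  # 0
x + 1      # numeric(0) again, not 1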
Edit:
You have a folder containing multiple folders, each of which includes one or more tif files. You want to loop through each of these folders, import the tif file(s), do a calculation, and save the result.
In the following, my path contains 5 folders named '2006', '2007', '2008', '2009' and '2010'. Each of these "year" folders contains an .xlsx file. Each .xlsx file contains one column (here, you just need to select the right one in your data frame). This column has the same name in all Excel files, "col1", and contains values between 0 and 1. Then this will work:
library(dplyr)
library(readxl)
#
list_ct <- list.dirs("mypath", recursive = FALSE)
stored <- list()
for (year in seq_along(list_ct)){
  ct_file_list <- list.files(list_ct[year], recursive = FALSE, pattern = ".xlsx$", full.names = FALSE)
  tmp <- list()
  for (i in seq_along(ct_file_list)){
    ct_file_df <- read_excel(paste0(list_ct[year], "/", ct_file_list[i])) %>% as.data.frame()
    # do calculations ..
    tmp[[i]] <- sum(ct_file_df$col1) / length(ct_file_df$col1)
    names(tmp)[i] <- paste0(list_ct[year], "/", ct_file_list[i])
    print(tmp[i])
  }
  stored[[year]] <- tmp
  names(stored)[year] <- paste0(list_ct[year])
}
Instead of using "read_excel", you just use raster() like you did with the single file. Hope you can use the answer.
Example data
library(raster)
s <- stack(system.file("external/rlogo.grd", package="raster"))
s <- s > 200
#plot(s)
If your actual data are all for the same area (and the rasters have the same extent and resolution), you want to create a RasterStack (using the filenames) and use freq, as below.
f <- freq(s)
f
#$red
# value count
#[1,] 0 3975
#[2,] 1 3802
#$green
# value count
#[1,] 0 3915
#[2,] 1 3862
#$blue
# value count
#[1,] 0 3406
#[2,] 1 4371
Followed by
sapply(f, function(x) x[2,2]/sum(x[,2]))
# red.count green.count blue.count
# 0.4888775 0.4965925 0.5620419
If you cannot make a RasterStack, you can make a list and use lapply and continue as above, or use sapply and do this:
ss <- as.list(s)
x <- sapply(ss, freq)
x[4,] / colSums(x[3:4, ])
#[1] 0.4888775 0.4965925 0.5620419
If you insist on a loop:
res <- rep(NA, length(ss))
for (i in 1:length(ss)) {
  # r <- raster(ss[i]) # if these were filenames
  r <- ss[[i]]         # here we extract from the list
  x <- freq(r)[,2]
  res[i] <- x[2] / sum(x)
}
res
# 0.4888775 0.4965925 0.5620419
Thank you!
This is working perfectly for all files of one year!
library(raster)
s_list <- list.files("C:/Users/OneDrive - wwfgermany/MASTERARBEIT/BASELINE/Eastern Arctic/Summer_EA_Output/area_calc/ct/2006/", full.names = T)
s <- raster::stack(s_list)
f <- freq(s, useNA = 'no')
f
ct_avg <- sapply(f, function(x) x[2,2]/sum(x[,2]))
ct_avg__mean <- mean(ct_avg)
ct_avg__mean
However, when I want to wrap it in another loop, to get one value per year as the final result, I end up with an error saying "subscript out of bounds". This is the code I am using:
setwd("C:/Users/MASTERARBEIT/BASELINE/Eastern Arctic/Summer_EA_Output/area_calc/ct/")
list_ct <- list.dirs("C:/Users/MASTERARBEIT/BASELINE/Eastern Arctic/Summer_EA_Output/area_calc/ct/")
i=0
for (year in list_ct) {
  s_list <- list.files(year, recursive = FALSE, pattern = "\\.tif$", full.names = FALSE)
  s <- raster::stack(s_list)
  f <- freq(s, useNA = 'no')
  f
  ct_avg <- sapply(f, function(x) x[2,2]/sum(x[,2]))
  ct_avg__mean <- mean(ct_avg)
  ct_avg__mean
}
I am using the following code to check the P-values of a linear trend, but it seems the loop is not working properly, as I get only a row instead of a 2-D map of P-values.
library(chron)
library(RColorBrewer)
library(lattice)
library(ncdf4)
#-------------------------------------------------------------------------------------------
options(warn=-1)
ncin <- nc_open("MOD04_10K_Winter.nc", readunlim=FALSE)
#print(ncin)
lon <- ncvar_get(ncin, varid="Longitude", start=NA, count=NA, verbose=FALSE,
                 signedbyte=TRUE, collapse_degen=TRUE, raw_datavals=FALSE)
lat <- ncvar_get(ncin, varid="Latitude", start=NA, count=NA, verbose=FALSE,
                 signedbyte=TRUE, collapse_degen=TRUE, raw_datavals=FALSE)
aod <- ncvar_get(ncin, varid="AOD", start=NA, count=NA, verbose=FALSE,
                 signedbyte=TRUE, collapse_degen=TRUE, raw_datavals=FALSE)
px <- matrix(nrow = 1:length(lon), ncol = 1:length(lat))
is.matrix(px)
for (lo in 1:length(lon)) {
  for (la in 1:length(lat)) {
    int1a = aod[lo, la, ]
    # if the mean of int1a is finite then proceed, else fill NA into all arrays
    mn = mean(int1a, trim = 0, na.rm = FALSE)
    if (is.finite(mn))
    {
      print("---------------- Reading finite data -------------")
      xs = 1:30
      fn1a = lm(int1a ~ xs)                    # fit the linear trend
      p_val = summary(fn1a)$coefficients[2, 4] # saving p-value
      if (p_val < 0.05) {print("statistically significant")} else {print("statistically insignificant")}
      print(p_val)
      print(lo)
      print(la)
      px[lo][la] = p_val # variables in [] only (?)
    }
  } # latitude dimension
}
If I use [lo, la] instead of [lo][la], I get the following error:
Error in `[<-`(`*tmp*`, lo, la, value = 0.0543481042240582) :
  subscript out of bounds
Sorry if the solution is very trivial, I have just started working in R.
You just have to make a small fix to the declaration of the matrix px. Right now you set the number of rows and columns as vectors: nrow = 1:length(lon) and ncol = 1:length(lat). R silently takes only the first element of each of these vectors and generates a 1-by-1 matrix. (Actually, it would generate a warning, but the warnings are suppressed!)
So the solution is
px <- matrix(nrow = length(lon), ncol = length(lat))
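With the corrected declaration, two-dimensional indexing works, so the assignment inside the loop should also read px[lo, la] = p_val rather than px[lo][la]:
px <- matrix(nrow = length(lon), ncol = length(lat))
px[2, 3] <- 0.05  # assigns row 2, column 3 as intended
dim(px)           # length(lon) rows, length(lat) columns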