Google Earth Engine: 'Cannot export array bands' error when exporting an image from Assets to Drive as GeoTIFF

I ran the ee.Algorithms.TemporalSegmentation.Ccdc algorithm and saved the output to my GEE Assets. The output asset is there, but when I try to export it from Assets to Drive I get the error:
'Error: Cannot export array bands.'
The output has 35 bands, and I have tried exporting each band individually, but I still get the same error. I have not yet found a solution.
Has anyone hit a similar error when exporting files from GEE Assets to Drive? Any help would be highly appreciated.
Here is my export code:
var temporalSeg_band1 = image_temporalSeg.select(['blue_coefs']);
Map.addLayer(image_temporalSeg);
var aoi = image;  // imported geometry used as the export region

// Export the image to Drive.
Export.image.toDrive({
  image: ee.Image(temporalSeg_band1),
  description: "Band_blue_coeffecient",
  folder: "GEE_data",
  fileNamePrefix: "Blue_coef",
  scale: 30,
  region: aoi,
  maxPixels: 10000000,
  shardSize: 100,
  fileDimensions: 5000,
  crs: "EPSG:3338",
  fileFormat: "GeoTIFF"
});

Related

Saving a workbook created with the xlsx R package to AWS S3 is not working

I created a workbook, named workbook_1, using the xlsx R package, like this:
workbook_1 = xlsx::createWorkbook()
I need to upload this file to a bucket path in AWS S3. I tried the following code, which previously worked for me with .csv files:
s3write_using(workbook_1, object = paste0("file_location_path", "workbook_1.xlsx"), FUN = write.xlsx, bucket = "bucket_S3")
But I get an error. I think the problem is that my workbook does not actually have an .xlsx extension. Does anyone know how I can convert my workbook_1 to an .xlsx file?
Thanks in advance for your help!
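A sketch of a possible fix (not the poster's code; it assumes the aws.s3 package is available): xlsx::write.xlsx() expects a data frame, so handing it a workbook object fails regardless of the extension. An existing workbook is written to disk with xlsx::saveWorkbook(), and the resulting file can then be uploaded as-is:

library(xlsx)
library(aws.s3)

# Save the in-memory workbook to a real .xlsx file first.
tmp <- file.path(tempdir(), "workbook_1.xlsx")
xlsx::saveWorkbook(workbook_1, tmp)

# Then upload the file itself; the object key here is hypothetical.
aws.s3::put_object(file = tmp,
                   object = "file_location_path/workbook_1.xlsx",
                   bucket = "bucket_S3")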

I am unable to read my data from the PDF files

I'm doing sentiment analysis on articles. I am transforming my PDF files into a corpus; these files are in a folder.
# reference: https://data.library.virginia.edu/reading-pdf-files-into-r-for-text-mining/
library(pdftools)  # RSC included (ref.: https://cran.r-project.org/web/packages/pdftools/pdftools.pdf)
files <- list.files(path = "~/Doctorade-Project/Doctorade/CLAY_Arquivos_PDF", pattern = "pdf$", full.names = TRUE)
files
Up to here pdftools reads the file list smoothly, without problems.
When I try to import all the files from the folder and transform them into a corpus, I get the following import errors.
# reference: https://data.library.virginia.edu/reading-pdf-files-into-r-for-text-mining/
opinions <- lapply(files, pdf_text)
length(opinions)
lapply(opinions, length)
The results follow:
[Output truncated]
PDF error: Could not parse ligature component "0173" of "_0173" in parseCharName
PDF error: Could not parse ligature component "0173" of "_0173" in parseCharName
PDF error: Invalid shared object hint table offset
PDF error: read ICCBased color space profile error
PDF error: not an ICC profile, invalid signature
Error in poppler_pdf_info(loadfile(pdf), opw, upw) : PDF parsing failure.
I need to import the text from the PDF files, that is, build the corpus of words, to perform the sentiment analysis.
I appreciate everyone's help, and thank you for your cooperation.
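Since the traceback ends in poppler_pdf_info(), a single corrupt PDF appears to abort the whole lapply(). A minimal sketch of one way around this (my own suggestion, not from the thread): wrap pdf_text() in tryCatch() so unreadable files are skipped instead of stopping the loop:

library(pdftools)

# Return NULL for files poppler cannot parse, instead of erroring out.
read_pdf_safely <- function(f) {
  tryCatch(pdf_text(f),
           error = function(e) {
             message("Skipping ", f, ": ", conditionMessage(e))
             NULL
           })
}

opinions <- lapply(files, read_pdf_safely)
opinions <- Filter(Negate(is.null), opinions)  # drop the failed files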

Worldclim: getData() in R (latitude/longitude coordinates): Error in utils::unzip(zipfile, exdir = dirname(zipfile)) : 'exdir' does not exist

Loading longitude and latitude coordinates from a .csv file using getData():
I am sorry for asking such a basic question, but I am new to R and I am having trouble loading my .csv file of latitude and longitude coordinates with the function getData(). The idea is to use the data with WorldClim.
The code I used was:
bioclim.data <- getData(name = "worldclim",
                        var = "bio",
                        res = 2.5,
                        path = "~/Documents/TerneyProposal/UpalPublishedPapers/Blue_Whale_Paper/Data_Blue_Whale_Project/Blue_Whale_GPS_CSV.csv")
However, I keep on getting this error message:
trying URL 'https://biogeo.ucdavis.edu/data/climate/worldclim/1_4/grid/cur/bio_2-5m_bil.zip'
Content type 'application/zip' length 129319755 bytes (123.3 MB)
==================================================
downloaded 123.3 MB
Could not download file -- perhaps it does not exist
Error in utils::unzip(zipfile, exdir = dirname(zipfile)) :
'exdir' does not exist
I don't understand this message, because my .csv file is not inside a zip file, and the file also opens fine when I use read.csv().
Could anyone please kindly advise? Many thanks in advance!
Welcome to the community! These kinds of errors can be misleading... I think the main issue is probably that your file is not in the folder R expects. Run:
getwd()
to see what the working folder is... that is where R is looking for the file. Either move the file into that folder, or change your session's working directory to the folder containing the file:
setwd("pathToYourFile")
If you're on Windows, remember to change those pesky \ to /.
Give it a try!
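A further note, sketched under my own reading of the error (not part of the answer above): getData()'s path argument is the directory where the WorldClim zip gets downloaded and unpacked, not the .csv of coordinates, which fits the 'exdir' does not exist message. Reading the coordinates and downloading the climate grids are two separate steps:

library(raster)

# Read the coordinates with read.csv(), as in the question.
coords <- read.csv("~/Documents/TerneyProposal/UpalPublishedPapers/Blue_Whale_Paper/Data_Blue_Whale_Project/Blue_Whale_GPS_CSV.csv")

# Point path = at an existing directory for the downloaded rasters
# (the location below is hypothetical).
dl_dir <- "~/Documents/TerneyProposal/worldclim"
dir.create(dl_dir, showWarnings = FALSE, recursive = TRUE)
bioclim.data <- getData(name = "worldclim", var = "bio",
                        res = 2.5, path = dl_dir)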

Reading multiple netcdf files

I am trying to read multiple .nc4 files in R. Below is the code I am using for this task:
library(ncdf4)

OSND_gpmr.df <- NULL
GPM_R.files <- list.files(path, pattern = "\\.nc4$", full.names = TRUE)
for (i in seq_along(GPM_R.files)) {
  nc_data <- nc_open(GPM_R.files[i])
  GPM_Prec <- ncvar_get(nc_data, 'IRprecipitation')
  x <- dim(GPM_Prec)
  ## note: start = c(42, 28) is the index in the image corresponding to the
  ## real coordinates of interest; R reads images as lat, long.
  OSND_gpmr.spec <- ncvar_get(nc_data, 'IRprecipitation', start = c(42, 28), count = c(1, 1))
  OSND_gpmr.df <- rbind(OSND_gpmr.df, data.frame(OSND_gpmr.spec))
  nc_close(nc_data)
}
but I consistently get this error:
Error in R_nc4_open: No such file or directory.
But the list of files is correctly recognised as chr [1:1440], as shown under Values in the Global Environment pane.
Can someone please help me with what I am doing wrong?
Your working directory might be different from the files' location. Your GPM_R.files list stores only the file names from the given location, without the file paths, while nc_open() expects file names with the complete path.
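A quick diagnostic to go with this answer (a sketch; path is the same directory variable as in the question): checking file.exists() on the first entry shows immediately whether the paths resolve from the current working directory:

library(ncdf4)

# full.names = TRUE returns paths relative to `path`; they still have
# to resolve from the current working directory.
GPM_R.files <- list.files(path, pattern = "\\.nc4$", full.names = TRUE)
stopifnot(length(GPM_R.files) > 0,
          file.exists(GPM_R.files[1]))  # fails fast if the paths are wrong
nc_data <- nc_open(GPM_R.files[1])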

Reading a gpx file into Shiny from a dropbox account

I have a Shiny app that accesses data from a Dropbox account. Following the instructions at https://github.com/karthik/rdrop2/blob/master/README.md, I have been able to read in csv data with no problem, i.e. using the drop_read_csv command from the rdrop2 package after doing the authentication step,
e.g.
my_data <- drop_read_csv("ProjectFolder/DataSI.csv")
My next problem, however, is that a lot of gpx track files are going to be uploaded to the Dropbox, and I want the app to be able to read them in. I have tried using:
gpx.files <- drop_search('gpx', path = "ProjectFolder/gpx_files")
trk.tmp <- vector("list", dim(gpx.files)[1])
for (i in 1:dim(gpx.files)[1]) {
  trk.tmp[[i]] <- readOGR(gpx.files$path[i], layer = "tracks")
}
But no luck. At the readOGR() step, I get:
Error in ogrInfo(dsn = dsn, layer = layer, encoding = encoding, use_iconv = use_iconv, :
Cannot open data source
Hopefully someone can help.
My problem was that I hadn't specified the Dropbox path properly. I took the drop_read_csv code as a model and made a drop_readOGR version:
drop_readOGR <- function(my.file, dest = tempdir()) {
  localfile <- paste0(dest, "/", basename(my.file))
  drop_get(my.file, local_file = localfile, overwrite = TRUE)  # download to a local temp file
  readOGR(localfile, layer = "tracks")                         # then read it from disk
}
So now I can use what I was doing before, except that the line in the loop now calls the new function:
gpx.files <- drop_search('gpx', path = "ProjectFolder/gpx_files")
trk.tmp <- vector("list", dim(gpx.files)[1])
for (i in 1:dim(gpx.files)[1]) {
  trk.tmp[[i]] <- drop_readOGR(gpx.files$path[i])
}
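The helper works because readOGR() can only open a local data source, so the file is first downloaded to tempdir() and then read from disk. One small aside (an assumption about package versions, not part of the answer): newer releases of rdrop2 deprecate drop_get() in favour of drop_download(), so the same helper might be written as:

drop_readOGR <- function(my.file, dest = tempdir()) {
  localfile <- file.path(dest, basename(my.file))
  drop_download(my.file, local_path = localfile, overwrite = TRUE)  # newer rdrop2 API
  readOGR(localfile, layer = "tracks")
}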
