Convert multiple ASCII files to raster files in R

Could you please explain in detail how I can read multiple ASCII files in RStudio and convert them to raster files? I have ASCII files from several years and need to combine them into a single file at the end, containing all the years.
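A minimal sketch of one way to do this, assuming the yearly grids are Arc/Info ASCII (.asc) files sitting in one directory; the directory path and output file name below are placeholders:

library(raster)

# list the yearly ASCII grids; the path is a placeholder
lf <- list.files("/path/to/asc/files", pattern="[.]asc$", full.names=TRUE)

# read each file as a RasterLayer and stack them, one layer per year
s <- stack(lapply(lf, raster))

# write the stack to a single multi-layer file
writeRaster(s, "all_years.grd", overwrite=TRUE)

The native .grd format keeps all layers in one file; other multi-layer formats would work too, provided the grids share the same extent and resolution.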

Related

Scilab unable to correctly read text and csv file

I wish to open and read the following text file in Scilab (version 6.0.2).
The original file is an .xlsx that I have converted to both .txt and .csv through Excel to facilitate opening & working with it in Scilab.
With both fscanfMat and csvRead, Scilab reads only the first column, as Nan. I understand why the first column comes out as Nan, but I do not see why the rest of the document isn't read. Columns 2 and 3 are the ones of particular interest to me.
For csvRead, I used:
M=csvRead(chemin+filename," ",",",[],[],[],[],7);
to skip the 7-row header.
Could it be something to do with the way in which the file has been formatted?
For anyone able to help, I will try to upload an example of the .txt file and also the original .xlsx file.
Files are available for download here: Excel and Text files
If you convert your .xlsx file into an .xls one with Excel, you can read it with the readxls function.
Otherwise, note that your separator is a tabulation character (ASCII code 9), not a space. Use the following command:
// pass ascii(9) (tab) as the column separator; the final argument skips the 7 header rows
M = csvRead("Probe1_350N_2S.txt", ascii(9), ",", [], [], [], [], 7);

How to read multiple .nc files and export it to different respective .csv files?

I have many .nc (netCDF) files, each representing rainfall at an hourly interval. I have successfully converted one .nc file to .csv in R, but I need to convert multiple .nc files to multiple corresponding .csv files at one time.
For the conversion of multiple files at once, I tried to stack all the files together using the 'stack' command and then convert them to .csv using 'write.csv' or 'write.table', but it showed an error and didn't work.
Code to convert one .nc file to .csv is as follows:
library(raster)
nc.brick <- brick(file.choose())                # pick a .nc file interactively, read as a multi-layer brick
nc.df <- as.data.frame(nc.brick[[1]], xy=TRUE)  # first layer to a data frame with x/y coordinates
write.csv(nc.df, file.choose())                 # pick an output path interactively and write the CSV
The output is a .csv file with three columns: latitude, longitude, and the rainfall value. I want to produce one such .csv file for each of my .nc files. So, is there any way to convert multiple .nc files to multiple respective .csv files in one go?
You can loop over the files in a directory. Rather than using file.choose(), which requires picking files manually, build a vector of the files in your directory:
rm(list = ls())
install.packages("raster")  # one-time install
install.packages("ncdf4")   # one-time install; raster needs it to read netCDF
library(raster)
ptf <- "/path/to/nc/files"
setwd(ptf) # change your working directory
lf <- list.files(pattern="[.]nc$") # list of files ending in .nc
for(i in lf){
  nc.brick <- brick(i)
  nc.df <- as.data.frame(nc.brick[[1]], xy=TRUE)
  write.csv(nc.df, sub("[.]nc$",".csv",i)) # write to the same file name, substituting .csv for .nc
}
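Note that nc.brick[[1]] keeps only the first layer of each brick. If each of your files holds several time steps and you want them all, a hedged variant (assuming the brick's layer names are meaningful time stamps) converts the whole brick, giving one column per layer:

nc.df <- as.data.frame(nc.brick, xy=TRUE)  # one column per layer instead of just the first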

how to write single CSV file from multiple (several thousand) netcdf files in R

I'm trying to get a single csv file from several thousand netcdf files, where each file represents a single point in time. The files contain time, latitude, longitude, and 4 different weather variables. My plan is to merge the files into one netcdf file and then write a csv file from that, but I'm not sure how to merge 29,000 files without writing out the name of every file.
Use the netCDF command-line operators: either cdo or nco.
To merge your files in time with cdo, just:
cdo mergetime input_files output_file
and with nco (according to https://linux.die.net/man/1/ncrcat):
ncrcat input_files output_file
You can specify input_files with the wildcard *. For example, if your thousands of files are named file_000001.nc, file_000002.nc, ..., file_vwxyz.nc, run:
cdo mergetime file_* files.nc
Another way is to loop over the file names and do all your operations in R or whatever tool you are using, as sketched below. Nevertheless, cdo or nco are the right tools for your problem.
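A minimal R sketch of that loop, assuming each file can be read as a raster brick whose first layer is the time step you want, and that you simply append every file's rows to one CSV; the file pattern and output name are placeholders:

library(raster)

lf <- list.files(pattern="[.]nc$")  # all netCDF files in the working directory
out <- "merged.csv"                 # placeholder output name

for(i in seq_along(lf)){
  nc.df <- as.data.frame(brick(lf[i])[[1]], xy=TRUE)
  # write the header only for the first file, then append rows
  write.table(nc.df, out, sep=",", row.names=FALSE,
              col.names=(i == 1), append=(i > 1))
}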

Exporting Chinese characters from Excel to R

I have a file in Excel with a column of simplified Chinese characters. When I read the corresponding CSV file into R, I only get ?'s.
I'm afraid the problem arises when exporting from Excel to CSV, because when I open the CSV file in a text editor I also see ?'s.
How can I get around this?
The most reliable way to preserve your Chinese/Unicode characters is to read the file from .xlsx directly:
library(readxl)
read_xlsx("yourfilepath.xlsx", col_types = "text")  # read every column as text
If your file is too big to read from .xlsx, the next best option is to open it in Excel and split it manually into multiple files.
(My experience on a laptop with 8 GB RAM is to split files into 250,000 rows x 106 columns.)
If you must read from .csv, all your Windows locale settings need to match the file's encoding, and even then the integrity of all your Unicode characters (e.g. emojis) is not guaranteed.
(If you also need a .csv for something else, you can use write.csv after reading the data from .xlsx into R.)
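A hedged sketch of that round trip: if you control both the write and the read, an explicit UTF-8 encoding usually keeps the Chinese characters intact (the file names below are placeholders):

library(readxl)

df <- read_xlsx("data.xlsx", col_types = "text")  # placeholder input file

# write and re-read the CSV with an explicit UTF-8 encoding
write.csv(df, "data.csv", row.names = FALSE, fileEncoding = "UTF-8")
df2 <- read.csv("data.csv", fileEncoding = "UTF-8", stringsAsFactors = FALSE)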

R and zipped files

I have ~1000 tar.gz files (about 2 GB per file, compressed), each containing a bunch of large .tsv (tab-separated) files, e.g. 1.tsv, 2.tsv, 3.tsv, 4.tsv, etc.
I want to work in R on a subset of the .tsv files (say 1.tsv, 2.tsv) without extracting the .tar.gz files, in order to save space/time.
I tried looking around but couldn't find a library or routine to stream a tar.gz file through memory and extract data from it on the fly. Other languages have efficient ways of doing this, so I would be surprised if it couldn't be done in R.
Does anyone know of a way to accomplish this in R? Any help is greatly appreciated. Note: untarring the whole archive is not an option; I want to pull the relevant fields into a data.frame without fully extracting the files.
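One hedged workaround in base R: untar() can extract only the named members rather than the whole archive, so you can pull just 1.tsv and 2.tsv into a temporary directory, read them, and delete the copies. This is not a true in-memory stream, and the archive name below is a placeholder:

tmp <- tempdir()
untar("archive_001.tar.gz", files = c("1.tsv", "2.tsv"), exdir = tmp)  # extract only two members

df1 <- read.delim(file.path(tmp, "1.tsv"))
df2 <- read.delim(file.path(tmp, "2.tsv"))

unlink(file.path(tmp, c("1.tsv", "2.tsv")))  # remove the temporary copies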
