DEM to Raster for multiple files - raster

I'm trying to design a program to help me convert 1,000+ USGS DEM files into raster files, using the arcpy.DEMToRaster_conversion method in ArcGIS. My idea is to use an OpenFileDialog that allows multiple selection of these files, then save the selected names in an array, use each name as the inDEM, and save each outRaster in TIF format.
file_path = tkFileDialog.askopenfilename(filetypes=(("DEM", "*.dem"),),multiple=1)
This is how I open multiple files in the dialog, but I'm not sure how to save the selected names so I can carry out the following steps. Can someone help me?

This code will find all DEMs in a folder, apply the conversion function, and save the output TIFFs to another folder:
#START USER INPUT
datadir = "Y:/input_rasters/"    # directory where the DEM files are located
outputdir = "Y:/output_rasters/" # existing directory where the output TIFs are to be saved
#END USER INPUT

import os
import arcpy  # needed for the conversion tool and environment settings

arcpy.env.overwriteOutput = True
arcpy.env.workspace = datadir
arcpy.env.compression = "LZW"

DEMList = arcpy.ListFiles("*.dem")
for f in DEMList:
    print "starting %s" % (f)
    rastername = os.path.join(datadir, f)
    outrastername = os.path.join(outputdir, f[:-4] + ".tif")
    arcpy.DEMToRaster_conversion(rastername, outrastername)

Related

Creating the user's specific directory in R

I want to read the CSV file "mydata.csv" as input and create the output in the same directory using R. I have hard-coded the paths for the CSV input (Domain_test.csv) and the output (MyData.csv) as shown below. But I will have to share the same R script and the corresponding CSV files with one of the users so that he/she can execute it and collect the results. I want the user to be able to select whatever path he wants and run the script without hard-coding the input/output paths.
How should this be done in R?
# reading the CSV from the current directory
data <- read.csv("C:/Users/Desktop/input_output_directory/Domain_test.csv")
# generating the output in the same directory
write.csv(dataframe, "C:/Users/Desktop/input_output_directory/MyData.csv", row.names = FALSE)
You can use
wd <- choose.dir(default = "", caption = "Select folder")
setwd(wd)
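Building on that, a minimal sketch (the file names Domain_test.csv and MyData.csv come from the question; note that choose.dir() is Windows-only, so a cross-platform script would need something like tcltk::tk_choose.dir() instead):
# Let the user pick the folder that holds the input CSV (Windows-only dialog)
wd <- choose.dir(default = "", caption = "Select folder")
setwd(wd)

# Read the input and write the output in the same user-chosen directory
data <- read.csv("Domain_test.csv")
write.csv(data, "MyData.csv", row.names = FALSE)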

How to upload an R data frame to Google Drive?

I am using the googledrive package from CRAN, but its drive_upload function only lets you upload a local file, not a data frame. Can anybody help with this?
Just save the data frame in question to a local file. The most basic options would be saving to a CSV or saving an RData file.
Example:
test <- data.frame(a = 1)
save(test, file = "test.Rds")
rm(test)
load("test.Rds")
exists("test")
Since it was clarified that a temporary file cannot be used, we can use a file connection instead.
test <- data.frame(a = 1)
tempFileCon <- file()
write.csv(test, file = tempFileCon)
And now we have the file connection in memory that we can pass to other functions. Caveat: refer to it by the literal object name, not a quoted string as you would with actual files.
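Continuing from the snippet above, a minimal illustration of handing the connection to another function (readLines() here is just an illustrative consumer; any function that accepts a connection works the same way):
# Pass the connection object itself, not a quoted file name
readLines(tempFileCon)
close(tempFileCon)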
Unfortunately I can find no way to push the data frame up directly. But to document the basics this question touches on for others: the following code writes a local .csv and then bounces it up through the tidyverse's googledrive package to express itself as a Google Sheet.
write_csv(iris, 'df_iris.csv')
drive_upload('df_iris.csv', type='spreadsheet')
You can achieve this using gs_add_row from the googlesheets package. This API accepts data frames directly as an input parameter and uploads the data to the specified Google Sheet; local files are not required.
From the help section of ?gs_add_row:
"If input is two-dimensional, internally we call gs_add_row once per input row."
This can be done in two ways. As mentioned by others, you can create a local file and upload it. It is also possible to create a new spreadsheet in your Drive directly. That spreadsheet is created in the main folder of your Drive; if you want it stored somewhere else, you can move it after creation.
# install the packages
install.packages(c("googledrive", "googlesheets4"))
# load the libraries
library(googledrive)
library(googlesheets4)
## With local storage
# Locally store the file
write.csv(x = iris, file = "iris.csv")
# Upload the file
drive_upload(media = "iris.csv", type='spreadsheet')
## Direct storage
# Create an empty spreadsheet. It is stored as an object with a sheet_id and drive_id
ss <- gs4_create(name = "my_spreadsheet", sheets = "Sheet 1")
# Put the data.frame in the spreadsheet and provide the sheet_id so it can be found
sheet_write(data = iris, ss = ss, sheet = "Sheet 1")
# Move your spreadsheet to the desired location
drive_mv(file = ss, path = "my_creations/awesome location/")

Reading multiple netcdf files

I am trying to read multiple nc4 files in R. Below is the code I am using to execute this task:
library(ncdf4)

OSND_gpmr.df <- NULL
GPM_R.files <- list.files(path, pattern = '*.nc4', full.names = TRUE)

for (i in seq_along(GPM_R.files)) {
  nc_data <- nc_open(GPM_R.files[i])
  GPM_Prec <- ncvar_get(nc_data, 'IRprecipitation')
  x <- dim(GPM_Prec)
  ### note: start = c(42,28) is the index in the image corresponding to the real coordinates of interest
  ## R reads images as lat, long.
  OSND_gpmr.spec <- ncvar_get(nc_data, 'IRprecipitation', start = c(42,28), count = c(1,1))
  OSND_gpmr.df <- rbind(OSND_gpmr.df, data.frame(OSND_gpmr.spec))
  nc_close(nc_data)
}
but I consistently get this error:
Error in R_nc4_open: No such file or directory.
But the list of files is correctly recognised as chr [1:1440], as shown under Values in the global environment.
Can someone please help me with what I am doing wrong?
Your working directory might be different from the files' location. Your GPM_R.files list stores only the file names from the given location, without the file paths, while nc_open() expects file names with the complete path.
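A minimal sketch of that fix (path is assumed to hold the directory containing the .nc4 files): ask list.files() for complete paths so nc_open() works regardless of the current working directory.
library(ncdf4)

# full.names = TRUE returns "path/file.nc4" instead of the bare "file.nc4"
GPM_R.files <- list.files(path, pattern = '\\.nc4$', full.names = TRUE)

for (f in GPM_R.files) {
  nc_data <- nc_open(f)  # now resolvable from any working directory
  # ... extract the variables you need here ...
  nc_close(nc_data)
}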

In R, opening an object saved to Excel through shell.exec

I would like to be able to open files quickly in Excel after saving them. I learned how from "R opening a specific worksheet in an excel workbook using shell.exec" on SO.
On my Windows system, I can do so with the following code, and could perhaps turn it into a function such as saveOpen <- function(...) {...}. However, I suspect there are better ways to accomplish this modest goal.
I would appreciate any suggestions to improve this multi-step effort.
# create tiny data frame
df <- data.frame(names = c("Alpha", "Baker"), cities = c("NYC", "Rome"))
# save the data frame to an Excel file in the working directory
save.xls(df, filename = "test file.xlsx")
# I have to re-enter the file name with a leading forward slash so that paste0() below creates a proper file path
name <- "/test file.xlsx"
# add the working directory path to the file name
file <- paste0(getwd(), name)
# with shell and .exec for Windows, open the Excel file
shell.exec(file = file)
Do you just want to create a helper function to make this easier? How about
save.xls.and.open <- function(dataframe, filename, ...) {
  save.xls(dataframe, filename = filename, ...)
  cmd <- file.path(getwd(), filename)
  shell.exec(cmd)
}
then you just run
save.xls.and.open(df, filename = "testfile.xlsx")
I guess it doesn't seem like all that many steps to me.
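As an aside (an assumption, not from the thread): save.xls is not a base R function, so in practice you would substitute a real writer such as the writexl package:
library(writexl)

save.xls.and.open <- function(dataframe, filename, ...) {
  write_xlsx(dataframe, path = filename, ...)  # writexl standing in for save.xls
  shell.exec(file.path(getwd(), filename))     # shell.exec is Windows-only
}

save.xls.and.open(df, filename = "testfile.xlsx")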

Proper phrasing for a loop to convert all .dta files to .csv in a directory

So I have a single instance of .dta-to-.csv conversion, and I need to repeat it for all files in a directory. There's great help on SO, but I'm still not quite there. Here's the single instance:
#Load Foreign Library
library(foreign)
## Set the working directory in which the .dta files can be found
setwd("~/Desktop")
## Single File Convert
write.csv(read.dta("example.dta"), file = "example.csv")
From here, I figure I use something like:
## Get list of all the files
file_list<-dir(pattern = ".dta$", recursive=F, ignore.case = T)
## Get the number of files
n <- length(file_list)
## Loop through each file
for(i in 1:n) file_list[[i]]
But I'm not sure of the proper syntax, expressions, etc. After reviewing the great solutions below, I'm just confused (not necessarily getting errors) and about to do it manually. Any quick tips for an elegant way to go through each file in a directory and convert it?
Answers reviewed include:
Convert Stata .dta file to CSV without Stata software
applying R script prepared for single file to multiple files in the directory
Reading multiple files from a directory, R
THANKS!!
Got the answer. Here's the final code:
## CONVERT ALL FILES IN A DIRECTORY
## Load Foreign Library
library(foreign)
## Set the working directory in which the .dta files can be found
setwd("~/Desktop")
## Convert all files in wd from DTA to CSV
### Note: alter the write/read functions for different file types. dta->csv used in this specific example
for (f in Sys.glob('*.dta'))
    write.csv(read.dta(f), file = gsub('dta$', 'csv', f))
If the files are in your current working directory, one way would be to use Sys.glob to get the names, then loop over this vector.
for (f in Sys.glob('*.dta'))
    write.csv(read.dta(f), file = gsub('dta$', 'csv', f))
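If you prefer case-insensitive matching like the dir() attempt in the question, list.files() can drive the same loop (a minimal sketch):
library(foreign)

# Case-insensitive match, mirroring the dir() call above
file_list <- list.files(pattern = "\\.dta$", ignore.case = TRUE)

for (f in file_list) {
  # swap the read/write pair here for other file types
  write.csv(read.dta(f), file = sub("\\.dta$", ".csv", f, ignore.case = TRUE))
}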
