Writing and saving to an Excel file with Shiny - r

My app allows a user to input a date range, a division, a PDF file, and an Excel file. The program pulls numbers from the PDF, calculates total points, and adds them to the ranks pulled from the existing Excel file. It is supposed to write and save the Excel file. Without Shiny, my program works fine. Within Shiny, it runs and the data is correct, but it does not add the data to the Excel file. I have added print statements at various stages to test this. I have also tried running it externally with the same results. It does not throw an error; it just does not add the data to the Excel file.
Server function
server <- function(input, output) {
  output$horse_div <- renderText({
    paste("You have chosen", input$division)
  })
  output$qualifying <- renderText({
    paste("Qualifying Period from", input$date[1], "to", input$date[2])
  })
  output$pdf <- renderText({
    req(input$horsereport)
    req(input$excelfile)
    ponyoutput <- horseRecord(input$horsereport$datapath, input$date[1],
                              input$date[2], input$division,
                              input$excelfile$datapath)
    paste("mylist", ponyoutput[1])
  })
}
Snippet of horseRecord function
# Set up sheet names and the Excel file
wsheetrank <- paste(div, "RANK")
wsheetpoints <- paste(div, "POINTS")

# Load workbook
wb <- loadWorkbook(file = excelfile)

# Add pony to ranked list
rank <- read.xlsx(excelfile, wsheetrank)
rank <- rank[, 2:3]
rank <- rank %>% mutate(Points = as.numeric(Points))
dat <- rank
dat <- dat %>% add_row(Pony = horse, Points = points) %>% arrange(desc(Points))

# Remove duplicates
dat <- dat[!duplicated(dat$Pony), ]
rownames(dat) <- seq(from = 1, to = nrow(dat), by = 1)

# Find rank
rank <- grep(horse, dat$Pony)

# Write to Excel file
writeData(wb, sheet = wsheetrank, x = dat, colNames = TRUE, rowNames = TRUE, borders = "all")
saveWorkbook(wb, excelfile, overwrite = TRUE)
This should add the totaled points pulled from the PDF file to the ranked list, re-sort it, and write it to the ranked worksheet. The full code and files can be found here: https://github.com/chealy21/horsePoints

The Excel file that you upload gets saved as a temporary file on the Shiny server. It is not the Excel file that is on your disk, even if you run your app locally.
If you add a browser() before writing the data in the workbook (line 138 of horsereportfunction.R), you will get a console prompt when the app reaches that line and you will be able to see it for yourself.
This is what is in the excelfile variable (on a Linux machine; Windows temp files are somewhere else, but they are still just temp files):
excelfile
# [1] "/tmp/RtmpFllCQq/5709c4b043babf5b85bd29b4/0.xlsx"
This temporary file on the shiny server does get updated:
readxl::read_excel("/tmp/RtmpFllCQq/5709c4b043babf5b85bd29b4/0.xlsx") %>% filter(Pony == "BIT OF LAUGHTER")
# # A tibble: 1 x 3
# ..1 Pony Points
# <chr> <chr> <dbl>
# 1 30 BIT OF LAUGHTER 1503.
This is in line with the documentation for fileInput:
Whenever a file upload completes, the corresponding input variable is set to a dataframe. This dataframe contains one row for each selected file, and the following columns:
name:
The filename provided by the web browser. This is not the path to read to get at the actual data that was uploaded (see datapath column).
size:
The size of the uploaded data, in bytes.
type:
The MIME type reported by the browser (for example, text/plain), or empty string if the browser didn't know.
datapath:
The path to a temp file that contains the data that was uploaded. This file may be deleted if the user performs another upload operation.
Maybe you could let the user download the updated report (from datapath) after updating it?
If you never intend to publish your app and will always run it locally, you could hardcode the Excel file's directory. Your app would then copy the updated workbook from datapath to that hardcoded location.
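For the download route, a minimal sketch of a downloadHandler could look like the following; the output id downloadReport is a hypothetical name, the rest reuses the app's own input names:
# Sketch only, assuming horseRecord() has already written its results back to
# the uploaded temp workbook at input$excelfile$datapath
output$downloadReport <- downloadHandler(          # "downloadReport" is a hypothetical id
  filename = function() {
    input$excelfile$name                           # hand the user back the original file name
  },
  content = function(file) {
    file.copy(input$excelfile$datapath, file)      # serve the updated temp workbook
  }
)

# Alternative for purely local use: copy the updated temp workbook to a
# hardcoded directory (the path below is a placeholder)
file.copy(input$excelfile$datapath,
          file.path("C:/my/records/folder", input$excelfile$name),
          overwrite = TRUE)
A downloadButton("downloadReport", "Download updated workbook") in the UI would pair with the handler above.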

Related

Accessing file on team drive using team_drive_get

I have used googledrive functions successfully to access xlsx spreadsheets on my own Google Drive, so
drive_download(file = "DIRECTOR_TM/Faculty/Faculty Productivity/Faculty productivity.xlsx",
               overwrite = TRUE)
works and saves a local copy of the file for me to run analyses on.
Mid-year we switched to using team drives, and the equivalent
drive_download(file = "Director/Faculty/Faculty Productivity/Faculty productivity.xlsx",
               overwrite = TRUE)
doesn't work - I get an error that says "Error: 'file' does not identify at least one Drive file."
So I have tried using the team_drive_get function - and am confused
Director <- team_drive_get("Director")
does work - I get a tibble with one observation. But the file I want is in a subdirectory of the "Director" team drive. So I tried
TeamDrive <- team_drive_get("Director/Faculty/Faculty Productivity/")
but the result is a tibble with 0 observations.
How do I get access to a file in a subdirectory on a team drive?
googledrive uses IDs to identify objects in a flattened file structure for your team, i.e., you don't need to know the subdirectory. If you know the name of your file, you just need to search the team drive and find the ID (your specific question, which is also why I found this, is addressed below).
# environment variables
FILENAME <- "your_file_name"
TEAM_DRIVE_NAME <- "your_team_name_here"

# get file(s)
gdrive_files_df <- drive_find(team_drive = TEAM_DRIVE_NAME)
drive_download(
  as_id(gdrive_files_df[gdrive_files_df$name == FILENAME, ]$id),
  overwrite = TRUE
)
Alternatively, this is what you can do if you do need to find the specific ID of a subdirectory (perhaps for an upload where there is no existing ID for the file).
# environment variables
FILEPATH <- "your_file_path"
TEAM_SUBDIRECTORY <- "your_subdirectory"

# grab the ID of your subdirectory and upload to that directory
drive_upload(
  FILEPATH,
  path = as_id(gdrive_files_df[gdrive_files_df$name == TEAM_SUBDIRECTORY, ]$id),
  name = FILENAME
)

R Shiny Save File by Unique Name

I have developed a Shiny app where users have the option to save machine learning models (to be able to use them later). These models get saved in the default shiny directory.
The issue is that, since the name of the model file being saved is not unique, the file can be overwritten when multiple users are using the app.
I want the files to be saved under a unique name so that users can load their specific files back later.
Below is the code I am using
# Save model to be used later
.jcache(m1$classifier)
observeEvent(input$save, {
  # delete previous model if it exists in folder
  fn <- "m1"
  if (file.exists(fn)) file.remove(fn)
  save(m1, file = "D:\\Dropbox\\Users\\Myname\\m1")
})

# Load model saved earlier
load(file = "m1")
There is a package called uuid that can help with this:
install.packages("uuid")
# This function will create a unique string for you that you can use as your filename
fn <- uuid::UUIDgenerate()
So I suggest generating a new filename each time you want to save a model and storing it in a variable that can be referred back to when you want to reload the model.
load(file=fn)
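For example, a minimal sketch of how this could look inside a Shiny server function; the models directory, the input$load button, and the reactiveVal that remembers the path are assumptions added for illustration, and m1 is the fitted model from the question:
library(shiny)
library(uuid)

server <- function(input, output, session) {
  model_dir <- "models"                    # hypothetical folder for saved models
  dir.create(model_dir, showWarnings = FALSE)
  saved_model_path <- reactiveVal(NULL)    # remembers this session's file name

  observeEvent(input$save, {
    # one unique file name per saved model, so concurrent users cannot clash
    fn <- file.path(model_dir, paste0(uuid::UUIDgenerate(), ".RData"))
    save(m1, file = fn)                    # m1 is the fitted model from the question
    saved_model_path(fn)
  })

  observeEvent(input$load, {
    req(saved_model_path())
    load(file = saved_model_path())        # reload that user's specific model
  })
}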

How to upload an R data frame to Google Drive?

I am using the googledrive package from CRAN, but the drive_upload function only lets you upload a local file, not a data frame. Can anybody help with this?
Just save the data frame in question to a local file. The most basic options would be saving it to a CSV or an RData file.
Example:
test <- data.frame(a = 1)
save(test, file = "test.Rds")
rm(test)
load("test.Rds")
exists("test")
Since it was clarified that using a temporary file is not possible, we could use a file connection instead.
test <- data.frame(a = 1)
tempFileCon <- file()
write.csv(test, file = tempFileCon)
And now we have the file connection in memory, which we can pass to other functions. Caveat: refer to it by the literal object name, not in quotes as you would with actual file paths.
Unfortunately I can find no way to push the data frame up directly, but to document the basics this question touches upon for others: the following code writes a local .csv and then bounces it up through tidyverse/googledrive to express itself as a Google Sheet.
write_csv(iris, 'df_iris.csv')
drive_upload('df_iris.csv', type='spreadsheet')
You can achieve this using gs_add_row from the googlesheets package. This API accepts data frames directly as an input parameter and uploads the data to the specified Google Sheet. Local files are not required.
From the help section of ?gs_add_row:
"If input is two-dimensional, internally we call gs_add_row once per input row."
This can be done in two ways. As mentioned by others, a local file can be created and then uploaded. It is also possible to create a new spreadsheet directly in your drive. This spreadsheet will be created in the main folder of your drive; if you want it stored somewhere else, you can move it after creation.
# install the packages
install.packages(c("googledrive", "googlesheets4"))

# load the libraries
library(googledrive)
library(googlesheets4)

## With local storage
# Locally store the file
write.csv(x = iris, file = "iris.csv")
# Upload the file
drive_upload(media = "iris.csv", type = "spreadsheet")

## Direct storage
# Create an empty spreadsheet. It is stored as an object with a sheet_id and drive_id
ss <- gs4_create(name = "my_spreadsheet", sheets = "Sheet 1")
# Put the data.frame in the spreadsheet and provide the sheet_id so it can be found
sheet_write(data = iris, ss = ss, sheet = "Sheet 1")
# Move your spreadsheet to the desired location
drive_mv(file = ss, path = "my_creations/awesome location/")

Use R package "googledrive" to load in R a file from my googledrive

I have a file in my Google Drive that is an xlsx. It is too big, so it is not automatically converted to a Google Sheet (that's why using the googlesheets package did not work). The file is so big that I can't even preview it by clicking on it in my Google Drive. The only way to see it is to download it as an .xlsx. While I could just load it as an xlsx file, I am trying instead to use the googledrive package.
So far what I have is:
library(googledrive)
drive_find(n_max = 50)
drive_download("filename_without_extension.xlsx",type = "xlsx")
but I got the following error:
'file' does not identify at least one Drive file.
Maybe it is me not specifying the path where the file lives in the Drive. For example: Work\Data\Project1\filename.xlsx
Could you give me an idea of how to load into R the file called filename.xlsx that is nested in the drive like that?
I read the documentation but couldn't figure out how to do it. Thanks in advance.
You should be able to do this by:
library(googledrive)
drive_download("~/Work/Data/Project1/filename.xlsx")
The type parameter is only for Google native spreadsheets, and does not apply to raw files.
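As a usage note, drive_download() saves the file under its Drive name in the working directory by default, so it can then be read locally; reading it back with readxl here is an assumption, any xlsx reader would do:
library(googledrive)
library(readxl)

drive_download("~/Work/Data/Project1/filename.xlsx", overwrite = TRUE)
dat <- read_xlsx("filename.xlsx")   # the downloaded copy lands in the working directory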
I want to share my way.
I do it this way because I keep updating the xlsx file. It is a query result that comes from an ERP.
So, when I tried to do it by Google Drive Id, it gave me errors, because each time the ERP updates the file its Id changes.
This is my context; yours can be absolutely different. This file changes just two or three times a month. Even though it is a "big" xlsx file (78-80K records with 19 factors), I use it for just seconds to calculate some values and then I can trash it. It does not make sense to store it (storing is more expensive than uploading).
library(googledrive)
library(googlesheets4)  # watch out: it is not the CRAN version yet, 0.1.1.9000
library(glue)           # used to build the download link below
library(readxl)         # used to read the downloaded xlsx

drive_folder_owner <- "carlos.sxxx#xxxxxx.com"          # this is my account in this gDrive folder
drive_auth(email = drive_folder_owner)                  # previously authorized account
googlesheets4::sheets_auth(email = drive_folder_owner)  # Yes, I know, should be the same, but they are different.

d1 <- drive_find(pattern = "my_file.xlsx", type = drive_mime_type("xlsx"))  # find the file created by the ERP; the type shortens the search
meta <- drive_get(id = d1$id)[["drive_resource"]]       # get the metadata for the file in googledrive
n_id <- glue("https://drive.google.com/open?id=", d1$id[[1]])               # build a path for reading
meta_name <- paste(getwd(), "/Files/", meta[[1]]$originalFilename, sep = "") # and a path to temporarily save it
drive_download(file = as_id(n_id), overwrite = TRUE, path = meta_name)      # now read and save locally
V_CMV <- data.frame(read_xlsx(meta_name))               # store to data frame
file.remove(meta_name)                                  # delete from R server
rm(d1, n_id)                                            # delete temporary variables

Use variables to create a CSV file name and path

Sorry, everyone. First time using R. The company switched to it recently and I am trying to customize a script I was given.
The purpose of the script is to:
Open a CSV file
Filter the results by a code
Save the results as a new CSV file named as the code
Because of this, I have to provide the code three times and the location path twice. I am trying to streamline this so I only need to enter the code and path once, by assigning them to variables and having the script use those variables everywhere else.
Here's what I have so far, but I'm getting an error "Error in paste(FINAL) : object 'FINAL' not found"
library(readr)   # read_csv / write_csv
library(dplyr)   # %>% and filter

CODE <- '1234'
LOC <- 'C:/Users/myname/Documents/Raw Files/'
FINAL <- paste0(LOC, CODE, '.csv')
RawFile <- read_csv(paste0(LOC, 'Raw File MERGED_Raw.csv'))
CODEofInterest <- RawFile %>% filter(ID == CODE)
write_csv(CODEofInterest, paste0(FINAL))
It was user error; I was not running the entire script, just the last line.
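For reference, once the whole script (including the FINAL assignment) has been run, FINAL resolves to the full output path:
FINAL <- paste0(LOC, CODE, '.csv')
FINAL
# [1] "C:/Users/myname/Documents/Raw Files/1234.csv"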
