I currently have a data_prep.R file that I use to pull in data using SQL queries and then wrangle the data into suitable data frames for use within my {golem} package. At the end of this script I have
usethis::use_data(df, overwrite = TRUE)
From my research it seems that this file should go into the /data-raw folder, as you are not supposed to execute code in the /R folder. When run, the script constructs my data frames and places them in the /data folder. However, the script does not seem to be run whenever I launch the application; the data frames remain unchanged until I manually run the data_prep.R script again.
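For context, a stripped-down sketch of that data_prep.R (the connection, query, and wrangling step are placeholders, not my real code):

# data-raw/data_prep.R -- stripped-down sketch; connection, query, and
# wrangling are placeholders.
library(DBI)
library(dplyr)

con <- DBI::dbConnect(odbc::odbc(), dsn = "my_dsn")

df <- DBI::dbGetQuery(con, "SELECT * FROM some_table") %>%
  mutate(pulled_at = Sys.time())

DBI::dbDisconnect(con)

usethis::use_data(df, overwrite = TRUE)   # writes data/df.rda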
My application relies on new data coming in, so I need this data_prep.R file to run whenever the app is launched.
Is there something that I am missing?
I worked this out by placing the data_prep.R script into the application base directory and sourcing the file within the app_server.R file.
source("./data_prep.R")
This runs the script on app start and pulls the data frames from the server, so the data stays up to date.
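Concretely, the relevant part of app_server.R looks roughly like this (the source() call is the key line; the output shown is only an illustration of using one of the data frames):

# R/app_server.R -- sketch; everything apart from the source() call is illustrative
app_server <- function(input, output, session) {
  source("./data_prep.R")                      # runs the SQL pulls on app start

  output$my_table <- shiny::renderTable(df)    # df was just rebuilt by data_prep.R
}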
Calling usethis::use_data(df, overwrite = TRUE) creates a dataset that you can load when your package is in use.
You should explicitly load the dataset in your app with data(df) in the piece of code where the data is needed.
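For example, right where the data frame is needed (the package name is a placeholder):

# Sketch: load the packaged dataset where it is needed
data("df", package = "mygolemapp")   # "mygolemapp" is a placeholder package name
head(df)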
Related
I have an Excel file that sits on a shared drive (MS OneDrive), and I would like to run an R script that updates some data in that file.
Is there a way in R to force-close any open instances of that file so that the data update runs OK?
I have tried the close() and file() functions but without success.
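For reference, this is roughly what I tried (the path is a placeholder):

# Roughly what I tried -- as far as I can tell, close() only closes R's own
# connection object, not the copy of the file that Excel/OneDrive has open.
con <- file("C:/Users/me/OneDrive/Shared/report.xlsx", open = "rb")
close(con)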
Any ideas?
Thank you
I am trying to use an R script as a data source for Power BI. I am a regular user of R but am new to Power BI. When all the datasets imported by the R script come from SQL databases, I can import the resulting data frames from the R script fine. However, I have a script that uses a .csv file that Power BI's R session can't find, which results in the error:
Error: 'times_of_day_grid.csv' does not exist in current working directory ('C:/Users/MyUserName/RScriptWrapper_ac2d4ec7-a4f6-4977-8713-10494f4b0c4f').
The .pbix file and the R script are both stored in the same folder as the csv.
I have tried manually setting the working directory by inserting the following into the script:
setwd("C:/Users/MyUserName/Documents/R/Projects/This Project Folder")
But this just results in the message:
"Connecting - Please wait while we establish a connection to R"
And later if I leave it running:
Unable to connect
We encountered an error while trying to connect.
Details: "ADO.NET: R execution timeout. The script execution was
terminated, since it was running for more than 1800000 miliseconds."
I have also tried specifying the full paths of the csv files in read_csv(), but this results in the same timeout warning.
Any ideas as to how I can edit my script (or the settings in Power BI) to get around this? (The script only takes a minute or so to run in RStudio.)
Don't forget that you can load your csv file using the built-in functionality in Power BI with Get Data > Text/CSV, and then go to Edit Queries and handle the R scripting from there. That way you won't have to worry about setting the working directory in the R script at all.
You can even load multiple files and work on each and every one of them using the approach described in Operations on multiple tables / datasets with Edit Queries and R in Power BI.
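For example, once the csv is loaded via Get Data > Text/CSV, an R step added in Edit Queries receives that table as a data frame called dataset, so the script never has to touch the file system (the column name below is just an illustration):

# "Run R script" step in Edit Queries -- Power BI passes the previous step's
# table in as `dataset`; the column name is illustrative.
library(dplyr)
times_of_day <- dataset %>%
  filter(!is.na(time_slot)) %>%
  arrange(time_slot)
# any data frame created here (e.g. times_of_day) can be chosen as the step's output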
Please let me know how this works out for you.
So I wrote a Shiny app in R, and part of the script calls a web service to collect data. I run the script on a daily basis to build the Shiny app. I would like to show the data collected from the web service over time, but the only way I know of is to write the data to a csv file after every call to the web service and then load that csv file back into R to show how the data has changed. This may seem like a strange question, but is there a way to accomplish this solely in R, so that the data is stored every time the script is run and the new values from the web service call are appended?
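To make that concrete, the pattern I'm describing amounts to something like this sketch (the file name and the web-service call are placeholders):

history_file <- "webservice_history.csv"

# stand-in for the real web-service call
new_rows <- data.frame(fetched_at = Sys.time(), value = runif(1))

first_run <- !file.exists(history_file)
write.table(new_rows, history_file, sep = ",", row.names = FALSE,
            col.names = first_run,    # write the header only on the first run
            append = !first_run)      # otherwise append below the existing rows

history <- read.csv(history_file)     # the full series, one row per daily run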
I have a Shiny app that runs continuously on a server, but the app uses SQL data tables and needs to check for updates once a day. Right now, with no batch file in place, I have to manually stop the app, run an R script that checks for these updates, and then re-run the app. The objects I want to update are currently stored in RStudio's global environment. I've been looking at modifying .RData files because I'm running out of options. Any ideas?
EDIT: I'm aware that I probably have to shut down the app for a few minutes to refresh the tables, but is there a way I can do something like this using a batch file?
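To flesh out the .RData idea, what I have in mind is roughly this (a sketch only; the connection, query, and file names are placeholders): the daily update script, run from the batch file, saves the refreshed tables to a file, and the app loads that file at startup, so the "refresh" is just a quick restart rather than a manual re-run in RStudio.

# update_tables.R -- run once a day from the batch file / scheduler;
# the connection, query, and file name are placeholders.
library(DBI)
con <- DBI::dbConnect(odbc::odbc(), dsn = "my_dsn")
sales <- DBI::dbGetQuery(con, "SELECT * FROM sales")
DBI::dbDisconnect(con)
save(sales, file = "app_data/tables.RData")

# At the top of the app (e.g. global.R) -- runs fresh on every app start:
load("app_data/tables.RData")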
I've been using http://r-pkgs.had.co.nz as a guide with much success, but I'm stuck on something that I haven't been able to resolve for many hours. Perhaps because I'm a bit thick...
Basically, I want to include a bunch of csv's as data.frames in an R package I'm working on. If I manually save the data objects as .rda or .RData files and place them in the <package_dir>/data folder, I can build the package and the data is accessible once the package is loaded. However, these csv's will receive updates every so often, and when this happens I'd like to just hit 'Build' in RStudio and have the data objects rebuilt from the csv's using some very simple processing scripts I have written.
So based on Hadley's guide, I've run devtools::use_data_raw() and put the csv's in the <package_dir>/data-raw folder. In that folder I've also placed R scripts that turn these data files into R objects and then save them in the correct location and format with devtools::use_data(<object_name>).
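For concreteness, each of those scripts is essentially the following (file and object names are placeholders):

# data-raw/prepare_my_data.R -- sketch; file and object names are placeholders
my_data <- read.csv("data-raw/my_data.csv", stringsAsFactors = FALSE)
# ...simple processing here...
devtools::use_data(my_data, overwrite = TRUE)   # writes data/my_data.rda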
My original interpretation was that when building the package, the scripts in <package_dir>/data-raw would get run to produce the .rda files in the <package_dir>/data folder. I'm guessing this is incorrect? If so, is there a way to automatically source those scripts when building the package? Would this be bad practice?