How to read external data from a different server with a golem shiny app? - r

I am trying to add a txt file that contains, say, 50 projects, with paths outside of the package. I am attempting to use this file to build a Shiny app using the golem framework.
My problem is that, as much as I have read on golem Shiny apps, I do not understand where to add these txt files so that I can then use them in my Shiny application. NOTE: I want to work with the golem framework, so the answer should be aligned with this requirement.
This is the txt file:
nameproj technology pathwork LinkPublic Access
Inside I have 50 projects with paths and links that will be used to retrieve the data for the app.
L3_baseline pooled /projects/gb/gb_screening/analyses_gb/L3_baseline/ kkwf800, kkwf900, etc..
Then I create paths to the data like this:
path_to_data1 = "data/data1.txt"
path_to_data2 = "data/data2.txt"
Then, I create helper functions that will be used in the app_server and app_ui modules, something like the below:
# Build the path to a project subfolder on the server
make_path <- function(pathwork, type, ex, subfolder = "") {
  path <- paste0(pathwork, "/proj", type, "/", ex, "/", subfolder, "/")
  return(path)
}

# Collect the existing File.tsv files for each screen
getfiles <- function(screennames, types, pathwork) {
  files <- data.frame()
  for (ind in seq_along(screennames)) {
    hitfile <- file.path(make_path(pathwork, types[ind], screennames[ind], "analysis"), "File.tsv")
    if (file.exists(hitfile)) {
      files <- rbind(files, data.frame(filename = hitfile,
                                       screen = paste0(screennames[ind], "-", types[ind])))
    }
  }
  return(files)
}
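For context, this is roughly how I read the projects txt file before handing its columns to the helpers above (assuming it is tab-separated; the path data/projects.txt is just a placeholder):
# Read the projects table (assumed tab-separated, hypothetical location)
projects <- read.table("data/projects.txt", header = TRUE,
                       sep = "\t", stringsAsFactors = FALSE)
# Columns such as projects$nameproj and projects$pathwork then feed make_path()/getfiles()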
Can someone direct me to:
how to actually add the txt files containing paths to external data and projects within the golem framework
a clear example of where these files are added within a golem app
NOTE: My datasets are all on private servers within my company, so all these paths point to those servers, and I have no issues accessing them.

I have solved the issue by simply adding a source file with only the paths above and running the app. It seems to be working.
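For a more golem-native setup, one option is to ship the txt file inside the package's inst/ folder and read it with the app_sys() helper that golem generates in R/app_config.R (a thin wrapper around system.file()). A minimal sketch, assuming the file is saved as inst/extdata/projects.txt (hypothetical name):
# Somewhere in app_server.R or a fct_/utils_ file of the golem package
projects <- utils::read.table(
  app_sys("extdata", "projects.txt"),
  header = TRUE, stringsAsFactors = FALSE
)
Because the file only stores paths to data on the internal servers, the data itself never needs to live inside the package.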

Related

How to transfer my files to R Projects, and then to GitHub?

I have 3 R scripts:
data1.r
data2.r
graph1.r
The two data scripts run some math and generate 2 separate data files, which I save in my working directory. I then load these two files in graph1.r and use them to plot the data.
How can I organise and create an R project which has:
these two data files - data1.r and data2.r
another file which calls these files (graph1.r)
Output of graph1.r
I would then like to share all of this on GitHub (I know how to do this part).
Edit -
Here is the data1 script
df1 <- data.frame(x = seq(1,100,1), y=rnorm(100))
save(df1, file = "data1.Rda")
Here is the data2 script
df2 <- data.frame(x = seq(1,100,1), y=rnorm(100))
save(df2, file = "data2.Rda")
Here is the graph1 script
load(file = "data1.Rda")
load(file = "data2.Rda")
library(ggplot2)
ggplot()+geom_point(data= df1, aes(x=x,y=y))+geom_point(data= df2, aes(x=x,y=y))
Question worded differently -
How would the above need to be executed inside a project?
I have looked at the following tutorials -
https://r4ds.had.co.nz/workflow-projects.html
https://martinctc.github.io/blog/rstudio-projects-and-working-directories-a-beginner's-guide/
https://swcarpentry.github.io/r-novice-gapminder/02-project-intro/
https://www.tidyverse.org/blog/2017/12/workflow-vs-script/
https://chrisvoncsefalvay.com/2018/08/09/structuring-r-projects/
https://support.rstudio.com/hc/en-us/articles/200526207-Using-Projects
I have broken my answer into three parts:
The question in your title
The reworded question in your text
What I, based on your comments, believe you are actually asking
How to transfer my files to R Projects, and then to GitHub?
From RStudio, just create a new project and move your files to this folder. You can then initialize this folder with git using git init.
How would [my included code] need to be executed inside a project?
You don't need to change anything in your example code. If you just place your files in a project folder they will run just fine.
An R project mainly takes care of the following for you:
Working directory (it's always set to the project folder)
File paths (all paths are relative to the project root folder)
Settings (you can set project specific settings)
Further, many external packages are meant to work with projects, making many tasks easier for you. A project is also a very good starting point for sharing your code with Git.
What would be a good workflow for working with multiple scripts in an R project?
One common way of organizing multiple scripts is to make a new script calling the other scripts in order. Typically, I number the scripts so it's easy to see the order to call them. For example, here I would create 00_main.R and include the code:
source("01_data.R")
source("02_data.R")
source("03_graph.R")
Note that I've renamed your scripts to make the order clear.
In your code, you do not need to save the data to pass it between the scripts. The above code would run just fine if you delete the save() and load() parts of your code. The objects created by the scripts would still be in your global environment, ready for the next script to use them.
If you do need to save your data, I would save it to a folder named data/. The output from your plot I would probably save to outputs/ or plots/.
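As a rough sketch of that layout (the folder names data/ and plots/ are just the suggestion above, not a requirement), the scripts could become:
# 01_data.R: create data/ if needed and save the first dataset there
dir.create("data", showWarnings = FALSE)
df1 <- data.frame(x = seq(1, 100, 1), y = rnorm(100))
save(df1, file = "data/data1.Rda")

# 03_graph.R: load from data/ and save the plot to plots/
library(ggplot2)
load("data/data1.Rda")
load("data/data2.Rda")
dir.create("plots", showWarnings = FALSE)
p <- ggplot() +
  geom_point(data = df1, aes(x = x, y = y)) +
  geom_point(data = df2, aes(x = x, y = y))
ggsave("plots/graph1.png", p)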
When you get used to working with R, the next step to organize your code is probably to create a package instead of using only a project. You can find all the information you need in this book.

Is there a way to work with a file without uploading it in Shiny?

I am trying to create a Shiny app that will depend on some files to work. I have always used the fileInput command to select the files. However, I would like to know if there is a way to work with those files (more than 1) without using that command. I only want to work with those files if they are in the same folder as the Shiny app.
I was thinking of using something like list.files, but I don't know if it is possible.
file_list <- list.files(path="PATH")
Does anyone know how to do it?
Thanks in advance
If you have only one file, you could do something like
path = "path/to/file/file.ext"
if (file.exists(path)) {
file = read_file(path)
}
Or if you have multiple files in a folder
files_path <- list.files("path/to/folder", full.names = TRUE)
if (length(files_path) != 0) {
  files <- lapply(files_path, read_file)
}
I'm doing something similar in a Shiny app where I have to take several files located in different folders, and the number of files varies depending on what was done previously.
If this doesn't help, please share more context so I can try to help you.
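To tie this to Shiny, here is a minimal sketch of an app that, instead of fileInput, reads every .csv found in a data/ folder next to app.R at startup (the folder name and file type are assumptions for the example):
library(shiny)

# Read all CSVs from the data/ folder next to app.R (assumed layout)
files_path <- list.files("data", pattern = "\\.csv$", full.names = TRUE)
datasets   <- lapply(files_path, read.csv)
names(datasets) <- basename(files_path)

ui <- fluidPage(
  selectInput("file", "File", choices = names(datasets)),
  tableOutput("preview")
)

server <- function(input, output, session) {
  output$preview <- renderTable({
    req(input$file)
    head(datasets[[input$file]])
  })
}

shinyApp(ui, server)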

Hide Keys in Shiny Application Deploy

I'm deploying an app to shinyapps.io using data I'm grabbing from S3 and I want to make sure my AWS keys are safe. Currently within the app.R code I'm setting environment variables and then querying S3 to get the data.
Is there a way to create a file that obscures the keys and deploy it to shinyapps.io along with my app.R file?
Sys.setenv("AWS_ACCESS_KEY_ID" = "XXXXXXXX",
"AWS_SECRET_ACCESS_KEY" = "XXXXXXXXX",
"AWS_DEFAULT_REGION" = "us-east-2")
inventory =aws.s3::s3read_using(read.csv, object = "s3://bucket/file.csv")
I'll also add that I'm on the free plan, so user authentication is not available; otherwise I wouldn't fuss about my keys being visible.
I recommend the following solution and the reasons behind it:
Firstly, create a file named .Renviron (just create it with a text editor, like the one in RStudio). Since the file name starts with a dot, the file will be hidden (on Mac/Linux, for example). Type the following:
AWS_ACCESS_KEY_ID = "your_access_key_id"
AWS_SECRET_ACCESS_KEY = "you_secret_access_key"
AWS_DEFAULT_REGION = "us-east-2"
Secondly, if you are using git, it is advisable to add the following text to your .gitignore file (so that the file is not shared under version control):
# R Environment Variables
.Renviron
Finally you can retrieve the values stored in .Renviron to connect to your databases, S3 buckets and so on:
library(aws.s3)
bucketlist(key = Sys.getenv("AWS_ACCESS_KEY_ID"),
           secret = Sys.getenv("AWS_SECRET_ACCESS_KEY"))
That way your keys are "obscured": they are retrieved by Sys.getenv() from .Renviron, so you can share your code without exposing them.
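With the keys stored in .Renviron, the call from the question no longer needs any hard-coded credentials, since aws.s3 picks up the standard AWS_* variables from the environment. A minimal sketch:
library(aws.s3)

# No Sys.setenv() with literal keys: the AWS_* variables come from .Renviron
inventory <- s3read_using(read.csv, object = "s3://bucket/file.csv")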
Perhaps this solution is too basic, but you can simply create a .txt file with the keys in it, one per line. Then you can use scan() to read that file.
Something like:
Sys.setenv("AWS_ACCESS_KEY_ID" = scan("file.txt",what="character")[1],
"AWS_SECRET_ACCESS_KEY" = scan("file.txt",what="character")[2],
"AWS_DEFAULT_REGION" = "us-east-2")
It is similar to the first solution in the "managing secrets" link in the comments, except that we use a simple text format instead of JSON.
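A small refinement of the same idea is to read the file only once (file name as above):
# Read both keys in a single pass instead of scanning file.txt twice
keys <- scan("file.txt", what = "character")
Sys.setenv("AWS_ACCESS_KEY_ID" = keys[1],
           "AWS_SECRET_ACCESS_KEY" = keys[2],
           "AWS_DEFAULT_REGION" = "us-east-2")
As with .Renviron, file.txt should be added to .gitignore so the keys stay out of version control.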

How do I assign the working directory for rmarkdown/knitr interactive doc on shinyApps.io?

I have a fully functional interactive Shiny doc using knitr/R Markdown.
However, when I try to publish (deploy) it to shinyapps.io, I get an error saying the .PNG file I try to incorporate into the doc (using readPNG from the png package) cannot be opened.
I know the problem is related to working directories.
In my original code I assigned a working directory pointing to my folder (i.e., "C:/Users/NAME/documents/....") that contains both my .rmd file and my .png file. However, obviously that folder doesn't exist on shinyapps.io.
So how exactly do I set my working directory to open the .png file via my doc on shinyapps.io?
I can't find anywhere that explicitly walks through this process. Please help explain this simply/basically for me.
Example:
Assume I have a folder in "C:/Users/NAME/documents/Shiny" that contains 2 files: "shiny.rmd" and "pic.png". I want "pic.png" to be rendered in my shiny.rmd doc.
Currently I have:
---
title: "TroubleShoot"
output: html_document
runtime: shiny
---
```{r ,echo=FALSE,eval=TRUE}
library(png)
library(grid)
direct <- "C:/Users/NAME/documents/Shiny/"
img <- readPNG(paste0(direct,"pic.PNG"))
grid.raster(img)
```
How do I rewrite my code so it works on shinyApps.io?
Do I need to create and name folders in specific ways to place my 2 files into before deploying?
When you hard-code the C: drive path into direct and read it into img, you are pointing the app/markdown at your local drive. You need to use a path relative to the project. I would suggest something like:
direct <- "./"
Which uses the relative path, not the absolute path.
That said, you might be able to skip that step entirely if the .png is in the same folder as the Shiny document: just use the file name and extension on its own.
Also, check your use of ". The syntax highlighting suggests your code won't run properly, as it treats some of your functions and variables as character strings because of misplaced quotes around readPNG(). That's probably just a copy-and-paste typo, though.
In summary, read the image with a relative path, e.g. readPNG("./pic.PNG") passed to grid.raster(), and brush up on how working directories and relative paths work.
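Putting that together, a minimal sketch of the chunk rewritten with a relative path (assuming pic.PNG is deployed in the same folder as the .Rmd):
```{r, echo=FALSE}
library(png)
library(grid)

# pic.PNG sits next to the .Rmd, so a relative path works locally and on shinyapps.io
img <- readPNG("pic.PNG")
grid.raster(img)
```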
In the absence of knowing exactly how you're 'setting' your working directory:
In RStudio, the correct and easiest approach is to make a 'New Project' for every new project. You can do this as the first step of your project, or you can add a project file to an existing directory.
When you do this, it automatically sets your working directory correctly. The way you seem to be doing it is not correct; you are merely supplying a 'path'. If you did not want to use a project for some reason (I can't think of a case where that would be right), you could always use setwd(); however, this needs to be done in every session. A true project doesn't have that requirement.
This will then allow you to deploy your app properly as it will update the working directory to the one on the host machine at shinyapps.io when it is initialised there.
Finally, using the correct relative syntax "./path/to/my.file" to reference all your assets, images, data etc, should correct your issue as stated in the question.

Where to store .xls file for xlsReadWrite in R

I am relatively new to R and am having some trouble working out how to access my data. I have my test.xls file saved in My Documents. How do I access it from R?
library(xlsReadWrite)
DF1 <- read.xls("test.xls") # read 1st sheet
Set the working directory with:
setwd("C:/Documents and Settings/yourname/My Documents")
This link may be useful as a method of making working folders per project and then placing all relevant info in that folder. It's a nice tutorial for making project files that contain everything you need. This is one approach.
http://www.dangoldstein.com/flash/Rtutorial2/Rtutorial2.html
Using setwd() is another approach. I use a combination of the two in my work.
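If you would rather not change the working directory at all, you can also hand the full path straight to the reader; a minimal sketch (replace yourname with your account name):
library(xlsReadWrite)

# Read the first sheet via the full path, leaving the working directory untouched
DF1 <- read.xls(file.path("C:/Documents and Settings/yourname/My Documents", "test.xls"))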
