Import synced OneDrive files (R)

For a project I want to create a shinydashboard app powered by data that various people provide in SharePoint, with the ability to update automatically once a file is edited in SharePoint.
Because of firewall issues at the company I am working for, it seemed like a better idea to sync the SharePoint folder to my local disk and load the files from there.
My code looks like this:
library(tools)      # file_path_sans_ext
library(readxl)     # read_xlsx
library(data.table)

observe({
  invalidateLater(2000, session)
  files <- list.files(path = "../../folder/folder", pattern = "\\.xlsx$", recursive = TRUE)
  for (i in seq_along(files)) {
    assign(basename(file_path_sans_ext(files[i])),
           as.data.table(read_xlsx(paste0("../../folder/folder/", files[i]))))
  }
})
By itself the code works well: it grabs all of the documents and assigns them to data.tables named after their filenames. However, whenever I make a change to one of the documents in SharePoint, the file on my local disk won't update until I open it manually. I can see this because the Preview Pane in File Explorer still shows the unedited document.
My question:
Is there a way in R to force a sync, or a way to make OneDrive sync automatically once the original file is updated? All files combined are about 250 KB, so even if automatic syncing might be too heavy in most cases, I think it's fine here.
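As an aside, here is a minimal sketch of a Shiny-side refinement to the polling above. It does not force OneDrive to sync; it only avoids re-reading unchanged files by using shiny::reactivePoll, with file modification times as the cheap check. Folder paths are taken from the question; the variable name sheets is made up.

library(shiny)
library(readxl)
library(data.table)
library(tools)

# inside the server function, where `session` is available
sheets <- reactivePoll(
  2000, session,
  # cheap check: a fingerprint of file names and modification times
  checkFunc = function() {
    files <- list.files("../../folder/folder", pattern = "\\.xlsx$",
                        recursive = TRUE, full.names = TRUE)
    paste(files, file.mtime(files), collapse = "|")
  },
  # expensive read: only runs when the fingerprint changes
  valueFunc = function() {
    files <- list.files("../../folder/folder", pattern = "\\.xlsx$",
                        recursive = TRUE, full.names = TRUE)
    setNames(lapply(files, function(f) as.data.table(read_xlsx(f))),
             basename(file_path_sans_ext(files)))
  }
)

Downstream reactives would then call sheets() and only recompute when a file actually changed on disk.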

Related

Can I use PowerBI to access SharePoint files, and R to write those files to a local directory (without opening them)?

I have a couple of large .xlsb files in 2FA-protected SharePoint. They refresh periodically, and I'd like to automate pulling them across to a local directory. I can already do this in PowerBI by polling the folder list, filtering to the folder/files that I want, importing them, and using an R script to write the result to an .rds (it doesn't need to be .rds - any compressed format would do). Here's the code:
let
    #"~ Query ~" = "",
    // Address for the SP folder
    SPAddress = "https://....sharepoint.com/sites/...",
    // Poll the content
    Source15 = SharePoint.Files(SPAddress, [ApiVersion = 15]),
    // ... some code to filter the content list down to the 2 .xlsb files I'm interested in -
    // they're listed as nested 'binary' items under column 'Content' within table 'xlsbList'
    // R export within an arbitrary 'add column' instruction
    ExportRDS = Table.AddColumn(xlsbList, "Export", each R.Execute(
        "saveRDS(dataset, file = ""C:/Users/current.user/Desktop/XLSBs/" & [Label] & ".rds"")",
        [dataset = Excel.Workbook([Content])[Data]{0}]))
However, the files are so large that my login times out before the refresh can complete. I've tried using R's file.copy command instead of saveRDS, to pick the files up as binaries (so PowerBI never has to import them):
R.Execute("file.copy(dataset, ""C:/Users/current.user/Desktop/XLSBs/""),[dataset=[Content]])
with dataset=[Content] instead of dataset=Excel.Workbook([Content])[Data]{0} (which gives me a different error, and in any event would have the same runtime issues as before), but it tells me The Parameter 'dataset' isn't a Table. Is there a way to reference what PowerBI sees as binary objects from within nested R (or Python) code, so that I can copy them to a local directory without PowerBI importing them as data?
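Not a confirmed solution, just a sketch of one possible direction. The error suggests R.Execute parameters must be tables, so one workaround might be to wrap the file's bytes as base64 text in a one-cell table and have R decode and write the bytes untouched. Binary.ToText with BinaryEncoding.Base64 and #table are standard M; the base64enc package, the column name b64, and the destination path are assumptions, and this is untested against large files.

// hypothetical: wrap each file's bytes as base64 text in a single-cell table,
// then let R decode and write the bytes untouched
CopyBinary = Table.AddColumn(xlsbList, "Copy", each R.Execute(
    "writeBin(base64enc::base64decode(dataset$b64[[1]]), ""C:/Users/current.user/Desktop/XLSBs/" & [Label] & ".xlsb"")",
    [dataset = #table({"b64"}, {{Binary.ToText([Content], BinaryEncoding.Base64)}})]))

Whether the base64 string for a very large file survives the R marshalling layer is an open question; it at least sidesteps Excel.Workbook parsing.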
Unfortunately I don't have permissions to set the SharePoint site up for direct access from R/Python, or I'd leave PowerBI out entirely.
Thanks in advance for your help

Is there a way to upload multiple files with one containing their names in Shiny?

I was wondering if there is a way to upload several files via one single file containing their names. E.g., I have a .txt with three rows, "File 1.txt", "File 2.txt", and "File 3.txt", and I want to upload those three files in that order (the file with the names is in the same folder as the files to be uploaded).
I don't want to use fileInput(..., multiple = TRUE) because uploaded files are ordered alphabetically by default. With a few files the user can rename them so they sort correctly, but with a lot of files that becomes an annoying task.
I thought I could paste the name onto the last part of the datapath, but when a file is uploaded its datapath points to a local temp directory, not to the file's actual location.
Thanks!
When a file is uploaded by a Shiny web application, it is assigned a new name and stored in a temp folder on the server. All reads are done from this server-side copy. The server does not have privileges to access the client's file system and cannot read any files from the client's PC. So there is no obvious way to solve the stated task as desired, and I'd argue it shouldn't be done even if a way were found, as it opens the gate to unauthorized file-system access.
If the task cannot be solved in any alternative way, I guess you have to consider a UI that lets the user arrange the list of files, with client-side packing and sending to the server.
If there is any sort of pattern to the files you want to process, you might still be able to use multiple = TRUE within fileInput for server-side processing.
My example below arranges by size of the uploaded files, but you could create a custom order on the name attribute as well (assuming you will be using this list as an index for your processing); see the sketch after the example.
library(shiny)
library(magrittr)

ui <- fluidPage(
  fileInput(inputId = "fileinput",
            label = "select",
            multiple = TRUE)
)

server <- function(input, output, session) {
  observeEvent(input$fileinput, {
    print(input$fileinput %>% dplyr::arrange(desc(size)))
  })
}

shinyApp(ui, server)
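Building on that idea, here is a minimal sketch of ordering by a name-list file, assuming the manifest is uploaded together with the data files and is called "manifest.txt" (a made-up name): the server reads the manifest first, then reorders the upload table to match it.

library(shiny)

ui <- fluidPage(
  fileInput("files", "Select your files plus manifest.txt", multiple = TRUE)
)

server <- function(input, output, session) {
  observeEvent(input$files, {
    ff <- input$files                       # columns: name, size, type, datapath
    m  <- which(ff$name == "manifest.txt")  # locate the manifest among the uploads
    req(length(m) == 1)
    wanted  <- readLines(ff$datapath[m])    # desired order, one file name per row
    ordered <- ff[match(wanted, ff$name), ] # rows come back in manifest order
    print(ordered$name)                    # process files in this order
  })
}

shinyApp(ui, server)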

Iterating through files in Google Cloud Storage

I am currently trying to read PDF files stored in my Google Cloud Storage bucket. So far I have figured out how to read one file at a time, but I want to be able to loop through multiple files in the bucket without reading them one by one manually. How can I do this? I have attached my code below.
To iterate over all files in your bucket, move your code for downloading and parsing into the for loop. I also changed the loop to for blob in blob_list[1:]:, since GCS always returns the top-level folder as the first element and you do not want to parse that, as it would result in an error. The folder structure used for testing is "gs://my-bucket/output/file.json....file_n.json".
Output when looping through the folder (for blob in blob_list:):
Output files:
output/
output/copy_1_output-1-to-1.json
output/copy_2_output-1-to-1.json
output/output-1-to-1.json
Output when skipping the first element (for blob in blob_list[1:]:):
Output files:
output/copy_1_output-1-to-1.json
output/copy_2_output-1-to-1.json
output/output-1-to-1.json
Loop through files. Skip the first element:
from google.cloud import storage
import json

# `bucket` and `prefix` are assumed to be set up earlier in the script
blob_list = list(bucket.list_blobs(prefix=prefix))
print('Output files:')
for blob in blob_list[1:]:  # skip the first element: the folder itself
    json_string = blob.download_as_string()
    response = json.loads(json_string)
    # take the OCR response for the first page of this file
    first_page_response = response['responses'][0]
    annotation = first_page_response['fullTextAnnotation']
    print('Full text:\n')
    print(annotation['text'])
    print('END OF FILE')
    print('##########################')
NOTE: If you have a different folder structure than the one tested here, just adjust the index in the for loop.
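If the position of the folder placeholder isn't reliable, filtering by name may be sturdier than skipping by index; a small sketch (untested against your bucket):

# skip "directory" placeholder objects by name instead of by position
blob_list = list(bucket.list_blobs(prefix=prefix))
for blob in blob_list:
    if blob.name.endswith('/'):
        continue  # placeholder object for the folder itself
    response = json.loads(blob.download_as_string())
    # ...parse `response` exactly as above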

Enable tab completion to find file in Julia outside working directory

How can I enable tab completion in Julia for a function I wrote that loads files from outside the working directory? Tab completion is super nice, but it only seems to work for files in my working directory. E.g., I store all of my files in a directory foo, and I wrote a function to conveniently read these files.
const WHEREMUHFILES = "foo/"

function read_foo_file(filename::String)
    readlines(WHEREMUHFILES * filename)
end
However, when I type read_foo_file("ba and hit Tab, tab completion doesn't work: it doesn't search the directory WHEREMUHFILES for bar.txt. Is there a way to enable this?
In reality, I have many different types of files organized in different directories, and read_foo_file populates complex data structures after reading the files in, so a simple workaround such as "put your files in your working directory!" is not what I'm looking for.

How to make Shiny redirect immediately?

I set up a Shiny app that checks for a GET string and presents a link if a file matching the id argument exists. Now I would like the page to redirect straight to the download file when a valid query is detected in the URL. Does anybody know the syntax to insert, e.g., a <meta http-equiv=...> header from server.R?
Motivation: I want to be able to download files directly into an R console session from a URL pointing at a Shiny app. So, a non-geeky user specifies their preliminary statistical model using Shiny, then a statistician downloads it into their usual working environment and takes it the rest of the way. I need to do this server-side rather than with something like JavaScript's window.location, because JavaScript won't be supported client-side.
Here is the server.R
shinyServer(function(input, output, clientData) {
  query  <- reactive(parseQueryString(clientData$url_search))
  revals <- reactiveValues()
  ## obtain ID from GET string
  observe({ revals$id <- query()$id })
  ## alternatively obtain ID from user input, if any
  observe({
    input$submitid
    if (length(id <- isolate(input$manualid)) > 0) revals$id <- id
  })
  ## update filename, path, and existence flag
  observe({
    revals$filename <- filename <- paste0(id <- revals$id, ".rdata")
    revals$filepath <- filepath <- paste0("backups/", filename)
    revals$filexists <- file.exists(filepath) && length(id) > 0
  })
  ## update download handler
  output$download <- downloadHandler(
    filename = function() revals$filename,
    content  = function(oo) if (revals$filexists)
      system(sprintf('cp %s %s', revals$filepath, oo)))
  ## render the download link (or message, or lack thereof)
  output$link <- renderUI({
    cat('writing link widget\n')
    id <- revals$id
    if (length(id) == 0) return(div(""))
    if (revals$filexists) {
      list(span('Session download link:'), downloadLink('download', id))
    } else {
      span(paste0("File for id ", id, " not found"))
    }
  })
})
Here is the ui.R
shinyUI(pageWithSidebar(
  headerPanel(div("Banner Text"), "Page Name"),
  sidebarPanel(),
  mainPanel(
    htmlOutput('link'), br(), br(),
    span(textInput('manualid', 'Please type in the ID of the session you wish to retrieve:'),
         actionButton('submitid', 'Retrieve')))))
Update:
In trying @jeff-allen's suggestion, I ran into another problem: how do I take the filesystem path the files get copied to for downloading and turn it into a valid URL? It's probably possible by fiddling with shell scripts and HTTP config settings on the local host, but how can I do this in a portable way that doesn't require superuser privileges and stays as Shiny-native as possible?
Motivation: I want to be able to download files directly into an R console session from a URL pointing at a Shiny app.
...i.e., this amounts to a very roundabout way of trying to serve static content from a Shiny app. It turns out I don't need to redirect or use downloadHandler at all. As a post on the Shiny forum says, any file I create inside the app's local www directory is served as if it were at the root of my app directory. I.e., if I have my app do save.image(file='www/foo.rdata'), then I can access it from http://www.myhost.com/appname/foo.rdata if the app itself lives at http://www.myhost.com/appname/
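A minimal sketch of that www/ trick, with the host and app names from the example standing in for real ones:

# inside the app: write the session image somewhere under www/
dir.create("www", showWarnings = FALSE)
save.image(file = "www/foo.rdata")

# from the statistician's R console (URL is illustrative):
load(url("http://www.myhost.com/appname/foo.rdata"))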
