Can the pages of a multi-page Streamlit app share a DuckDB file without problems? - streamlit

Hi, I am running a Streamlit app with DuckDB persisted to a file. The app is growing quite large. If I split it into a multi-page app, will I run into problems with database file locks and the like, given that DuckDB stores all data in a single file? Do the pages of a Streamlit multi-page app share the same process and thread? If so, does that mean issues are unlikely, or would it be wiser to avoid such a setup and store the data another way?
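For what it's worth, the pages of a multi-page Streamlit app are served by the same Streamlit server process, and a common pattern is to share one DuckDB connection across them with `st.cache_resource`, so each page is not opening its own handle on the file. A minimal sketch, assuming a database file named `app.duckdb` and a hypothetical helper module (neither is confirmed by the question):

```python
# shared_db.py -- hypothetical helper module that every page imports
import duckdb
import streamlit as st

@st.cache_resource  # one connection object per Streamlit server process
def get_connection(db_path: str = "app.duckdb"):
    # All pages of a multi-page app run in the same server process,
    # so they reuse this single connection instead of each opening
    # (and locking) the database file separately.
    return duckdb.connect(db_path)

# In any page, e.g. pages/1_Reports.py (hypothetical page name):
#   from shared_db import get_connection
#   con = get_connection()
#   df = con.execute("SELECT * FROM my_table LIMIT 10").df()
#   st.dataframe(df)
```

If you ever need several processes (or several apps) pointed at the same file, note that DuckDB allows only one read-write connection to a database file at a time; additional processes can open it with `duckdb.connect(db_path, read_only=True)`. Keeping a single shared writer inside one app process is the simpler setup.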

Related

Not able to save records into the database after publishing to Shiny Server

I have published an application on Shiny Server that takes input from users and saves it into an SQLite database on the back-end.
My concern is that after publishing, when a user opens the form and saves their input, I can't see any record being saved in the database. However, it works perfectly fine when I launch the app from RStudio without publishing.
I put the database file into the Shiny folder before publishing. I believe the issue might be due to the path of the database, so is there a specific folder or path where we are supposed to put the database file?
Any help would be appreciated!
Are you sure that shinyapps.io supports write access for SQLite? There is a community post (see https://community.rstudio.com/t/shinyapps-io-and-sqlite-as-persistent-local-data-storage/19361). From this it is clear that shinyapps.io did not support local data storage at that time and that there were no concrete plans to implement it. That was a year and a half ago, true, but it may simply still not be available at this point. That would mean you can most likely read but not write SQLite.
Hopefully you can find an alternative way to store data here: https://shiny.rstudio.com/articles/share-data.html

How to speed up loading data at start of shiny app

I'm pretty new to using Shiny apps to visualize data. We plan to host our Shiny app on our own server, so we used Docker to deploy it. However, the app is very slow to load, since we have to load many large data frames (up to 10,000,000 rows x 10 columns) that are saved in an RData object.
My first question is: Will the data be loaded each time a user visits/reloads the website?
I was looking into ways to speed up loading the data. One possibility might be to use the feather package, which seems to be faster at loading data tables (see the sketch after this question).
Another option would be to put the data into a database, but I don't have experience with that. I saw there are some nice packages like DBI and RMariaDB that seem to work well with Shiny apps. However, I only find examples where an external database is queried. Is it possible to pack a MySQL database into the Docker image and access it from within the Shiny app, or is the normal procedure to host the database externally?
I'm really new to all this, so I'm not even sure I'm asking the right questions. These are the conditions: we have a lot of data in the form of multiple data tables; they need to be read into our app quickly and queried quickly in response to interactive user input; and we need to dockerize our app in order to deploy it. What is the best approach here?
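On the feather suggestion in the question above: the Feather (Arrow) file format is cross-language, so a quick Python illustration is close in spirit to what the R `feather`/`arrow` packages do. A minimal sketch with hypothetical file names, assuming pandas with pyarrow installed:

```python
import pandas as pd

# One-time conversion step (run offline, not on every app start):
df = pd.read_csv("big_table.csv")       # hypothetical source file
df.to_feather("big_table.feather")      # binary columnar format, needs pyarrow

# At app start, reading Feather back is typically much faster than
# re-parsing a text file or loading a large serialized object:
df = pd.read_feather("big_table.feather")
print(df.shape)
```

The same `.feather` file can be read from R with `arrow::read_feather()`, so a one-time conversion step like this does not tie the data to either language.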

Shiny R: Updating global environment objects from a batch file on a server application?

I have a Shiny app that runs continuously on a server, but this app uses SQL data tables and needs to check for updates once a day. Right now, with no batch file in place, I have to manually stop the app, run an R script that checks for these updates, then re-run the app. The objects that I want to update are currently stored in RStudio's global environment. I've been looking at modifying .RData files because I'm running out of options. Any ideas?
EDIT: I'm aware that I probably have to shut down the app for a few minutes to refresh the tables, but is there a way I can do something like this using a batch file?

SQL CE 4 vs Files on a website

I am looking at moving all data currently loaded from files into drop-downs, variable calculations, user state, session memory (hardcoded…), etc., so that it is loaded from SQL CE instead. A plus would be to have it running in a MemCache or AppFabric layer, but we don't have that luxury, so we are stuck with using Session or file loads for the temporary storage data. The data is too small to be kept in SQL Server, which is also on a different machine on the network, so Compact Edition seems like a good option. It sounds viable, as you get a trimmed-down database on your site versus files/session memory.
You should use Session for temporary storage, but replace all code that uses the filesystem for storage with SQL CE. With files on the filesystem you have to implement some sort of thread isolation for the different users of your ASP.NET application, whereas with SQL CE you get that out of the box.
If you need some sort of search over that data, searching a bunch of files is not very feasible.
In general, I would certainly prefer SQL Server Compact over files.

Copy of Access mdb database being updated by live database

I'm trying to compute statistics for data held in an Access .mdb database. In order to avoid interfering with the live database, I'm working from a copy which I made by simply using copy-paste in Windows Explorer. The copy resides in the same directory, but with a different name.
I'm using R and RODBC to connect to the copy of the file. The strange thing is that new data that is being updated on the original live database is appearing in my queries. This is despite the file timestamps of the copy not changing at all. It is also causing some slowdown in the live database.
My understanding was that .mdb files are standalone; is that not the case? Should I have copied the database in a different way?
It seems that you may have copied the front-end of a front-end/back-end setup. The back-end is where the data is held, and its tables are linked into a front-end that holds the forms, etc. Copying a front-end also copies the links to the back-end, so the data you see is still live.
