Android SQLite DB Open for Application Lifetime - sqlite

I know there have been a bunch of questions and answers on this one already, but I couldn't find any conclusive answer that says what I want to do is OK.
Basically I want to have a singleton SQLiteOpenHelper that gets instantiated in MyApplication.onCreate() (I extend Application). That way, my code only ever has to call the singleton's getWritableDatabase(), and I don't have to worry about managing whether the db is open or closed, or about tearing down and building up connections.
I gather that the connection will remain open until I call the singleton's close() which I plan on never doing, so the connection will remain open as long as the application is running. When the application is killed, the whole process gets killed, so the db connection will get killed with it, as well as any resources tied up with the db. Then, the next time the application is run, I instantiate the singleton again, and go on with my business.
Is this a bad idea? Can I run into issues with not closing the db connection even if the process is killed? The reason I would like to do this is because a lot of my code is not strongly coupled to Activity lifecycles, and it would be a real headache to manage the db lifetime based on it.
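To make this concrete, here's roughly what I have in mind (the class names, database name, and schema below are just for illustration, not my real code):

// AppDbHelper.java
import android.content.Context;
import android.database.sqlite.SQLiteDatabase;
import android.database.sqlite.SQLiteOpenHelper;

public class AppDbHelper extends SQLiteOpenHelper {

    private static AppDbHelper instance;

    // Hold only the application context so the helper never leaks an Activity.
    public static synchronized AppDbHelper getInstance(Context context) {
        if (instance == null) {
            instance = new AppDbHelper(context.getApplicationContext());
        }
        return instance;
    }

    private AppDbHelper(Context context) {
        super(context, "app.db", null, 1);
    }

    @Override
    public void onCreate(SQLiteDatabase db) {
        db.execSQL("CREATE TABLE notes (_id INTEGER PRIMARY KEY, body TEXT)");
    }

    @Override
    public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
        // Schema migrations would go here.
    }
}

// MyApplication.java (registered in the manifest via android:name)
import android.app.Application;

public class MyApplication extends Application {
    @Override
    public void onCreate() {
        super.onCreate();
        // Touch the singleton once; afterwards any code can call
        // AppDbHelper.getInstance(context).getWritableDatabase()
        // and never worry about closing it.
        AppDbHelper.getInstance(this);
    }
}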

Related

Best practice to maintain PSQL+R Shiny connections

I've built an application that does a lot of processing of user data (they can load in their data, map the variables, run analyses, review a dashboard, and download the results/report). It's a pretty heavy application, and I'm running into an issue that I'm not sure how to best solve for.
The problem is that sometimes the session will unexpectedly disconnect from the psql database. This causes problems because just about every corner of the application depends on retrieving or sending information. Basically, the app doesn't work at all without the connection. What's even worse is that the UI doesn't really inform the user of the problem; it just kind of sits there.
The application exists on an EC2 instance within a docker container, served through an HTTPS proxy (Caddy) to the public via a registered domain name. Each new session searches for a global pool connection, and if one does not exist, it creates one, then checks out a local connection and passes that into all the downstream modules.
I'm wondering how others have addressed this problem. Should I:
use a global pool, then check out a single connection and test for a severed connection at the start of each function? This is my current (unfinished) approach and seems not great.
search for a pool connection and checkout a connection at the start of each function, then return at the end? This would take a bit of time to implement (and test), but seems like a reasonable solution.
check for a connection every minute and if one doesn't exist, create one. I'm guessing this would need to happen in each module independently.
Any direction will be greatly appreciated.
Thanks,

sqlite3 + node: when to close db?

I'm using better-sqlite3 on Node, but I suspect my questions are applicable to node-sqlite3 as well.
I basically have 2 simple questions, relating to a server-rendered website:
Do I need to explicitly call .close() on the database? I seem to remember reading somewhere that it will automatically close when the current scope (like the current function) exits. What if I never call .close() in a web server scenario, taking a lot of requests?
If you have a bunch of different components (authentication, authorisation, localisation, payment, etc...) and each component may or may not need to access the database throughout the lifetime of a request (which are quite short-lived, except for payment), is it better to
have one db connection for the lifetime of the server and pass that around
have one db connection for the lifetime of the request and pass that around
open a new connection every time I need something, maybe 2-3 times per request (and close it either explicitly or implicitly when the function returns, if that's a thing)
Thank you
Joshua Wise (better-sqlite3's creator) answered over on GitHub:
Database connections are automatically closed when they are garbage collected, which is non-deterministic. If you want to know that the connection is closed (rather than guessing), you should call .close().
You can just open one database connection for the entire thread (the entire process if you're not using worker threads), and share that connection between every request. Node.js is single-threaded, so you don't have to worry about simultaneous access, even if multiple requests are being handled concurrently. The one caveat is that you should never have a SQLite transaction open across multiple ticks of the event loop (i.e., don't use await between BEGIN and COMMIT), because then other requests could accidentally inject SQL into your transactions. Also, SQLite transactions are serialized (you can't have more than one at a time), so you should open and close them as quickly as possible; keeping them open across ticks of the event loop is bad for performance.

MaxMind: Injecting a new DatabaseReader as a singleton to avoid re-accessing the file again and again

In a .NET Core web app, I want to inject a new DatabaseReader as a singleton. Therefore I use AddSingleton in my Startup class.
services.AddSingleton(x => new DatabaseReader(pathToFile));
Do you think it's a good idea to reuse DatabaseReader?
Thanks
A single connection is a bad idea: if access to the connection is properly locked, it means the website / application could only serve one user at a time.
This means that you are extremely limited in your application's scalability and cannot serve a large number of users.
There is also a problem when your connection is not locked properly: things can get weird.
For example, one thread might dispose the connection while another thread is trying to execute a command against it.
A better option is to use connection pooling, creating a new connection object when you need one. That way you can handle many requests at the same time, and your limiting factor should be the database.
Yes, you should reuse the DatabaseReader across concurrent requests. The reader is thread safe and does not rely on locks for that thread safety.

What happens if Android destroys my app's process and there was a Realm instance still open?

If the OS destroys my app's process and there was a Realm instance still open, but no transactions executing, is there a chance that this will cause problems when my app starts back up again? If not, why not just open Realm instances in the application's custom Application class's onCreate method, store global references to them, and then just let the OS close them if/when it ends your app's process?
There is nothing inherently wrong with that approach on the UI thread, since it is a Looper thread that will auto-update the Realm, but remember that you need a Realm instance for each thread you want to work on.
Realm is an MVCC database, which means it can keep multiple versions of the data alive at the same time. This means that if you keep a Realm instance open on a non-Looper thread, Realm has to keep track of all changes between the oldest and the newest version. This can inflate the file size.
In general we recommend controlling the life cycle as described in the links below. That will prevent any issues.
https://realm.io/docs/java/latest/#closing-realm-instances
https://realm.io/docs/java/latest/#controlling-the-lifecycle-of-realm-instances
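For a background (non-Looper) thread, a minimal sketch of that controlled lifecycle could look like the following. The task and the Person model class are purely illustrative; since Realm implements Closeable, try-with-resources closes the instance as soon as the work is done:

import io.realm.Realm;

// Hypothetical background task; Person is an illustrative RealmObject model.
public class SyncTask implements Runnable {
    @Override
    public void run() {
        // Each thread needs its own Realm instance. Scoping it to the task
        // guarantees it is closed when the work ends, so Realm can release
        // old object versions and the file does not grow unnecessarily.
        try (Realm realm = Realm.getDefaultInstance()) {
            realm.executeTransaction(r -> {
                Person p = r.createObject(Person.class);
                p.setName("example");
            });
        }
    }
}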

Designing an asynchronous task library for ASP.NET

The ASP.NET runtime is meant for short workloads that can be run in parallel. I need to be able to schedule periodic events and background tasks that may or may not run for much longer periods.
Given the above I have the following problems to deal with:
The AppDomain can shut down due to changes (Web.config, bin, App_Code, etc.)
IIS recycles the AppPool on a regular basis (daily)
IIS itself might restart, or for that matter the server might crash
I'm not convinced that running this code inside ASP.NET is the wrong thing to do, because it would allow for a simpler programming model. But doing so would require that an external service periodically makes requests to the app so that the application is kept running, and that all background tasks are programmed with the utmost care. They would have to be able to pause and resume their work in the event of an unexpected error.
My current line of thinking goes something like this:
If all jobs are registered in the database, it should be possible to use the database as a bookkeeping mechanism. In the case of an error, the database would contain all state necessary to resume the operation at the next opportunity.
I'd really appreciate some feedback/advice on this matter. I've been considering running a Windows service and using some RPC solution as well, but it doesn't have the same appeal to me, and I'd instead have a lot of deployment issues and would be synchronizing tasks and code across several applications. Due to my business needs this is less than optimal.
This is a shot in the dark since I don't know what database you use, but I'd recommend you consider dialog timers and activation. Assuming that most of the jobs have to do some data manipulation, and it is likely that all of them do only data manipulation, leveraging activation and timers gives an extremely reliable job-scheduling solution, entirely embedded in the database (no need for an external process/service, and no dependencies outside the database boundary, like msdb), and it ensures scheduled jobs can survive restarts, failover events and even disaster recovery restores. Simply put, once a job is scheduled it will run even if the database is restored one week later on a different machine.
Have a look at Asynchronous procedure execution for a related example.
And if this is too radical, at least have a look at Using Tables as Queues since storing the scheduled items in the database often falls under the 'pending queue' case.
I recommend that you have a look at Quartz.Net. It is open source and it will give you some ideas.
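To give a flavour of the programming model: Quartz.Net is a port of the Java Quartz scheduler, and its fluent API mirrors the original closely. A minimal scheduled job in the Java original looks roughly like this (the job name and interval are just illustrative):

import org.quartz.Job;
import org.quartz.JobBuilder;
import org.quartz.JobDetail;
import org.quartz.JobExecutionContext;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.SimpleScheduleBuilder;
import org.quartz.Trigger;
import org.quartz.TriggerBuilder;
import org.quartz.impl.StdSchedulerFactory;

// A job is just a class with an execute() method; the scheduler fires it
// according to its trigger, and with a durable job store configured the
// schedule survives process restarts.
public class CleanupJob implements Job {
    @Override
    public void execute(JobExecutionContext context) {
        // periodic background work goes here
    }
}

public class SchedulerBootstrap {
    public static void main(String[] args) throws SchedulerException {
        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
        scheduler.start();

        JobDetail job = JobBuilder.newJob(CleanupJob.class)
                .withIdentity("cleanup")
                .build();

        Trigger trigger = TriggerBuilder.newTrigger()
                .startNow()
                .withSchedule(SimpleScheduleBuilder.simpleSchedule()
                        .withIntervalInMinutes(30)
                        .repeatForever())
                .build();

        scheduler.scheduleJob(job, trigger);
    }
}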
Using the database as a state-keeping mechanism is a completely valid idea. How complex it will be depends on how far you want to take it. In many cases you will end up pairing your database logic with a Windows service to achieve the desired result.
FWIW, it is typically not good practice to manually use the thread pool inside an ASP.Net application, though (contrary to what you may read) it actually works quite nicely, other than the huge caveat that you can't guarantee the work will run to completion.
So if you needed a background thread that examined the state of some object every 30 seconds and you didn't care if it fired every 30 seconds or 29 seconds or 2 minutes (such as in a long app pool recycle), an ASP.Net-spawned thread is a quick and very dirty solution.
Asynchronously fired callbacks (such as on the ASP.Net Cache object) can also perform a sort of "behind the scenes" role.
I have faced similar challenges and ultimately opted for a Windows service that uses a combination of building blocks for maximum flexibility. Namely, I use:
1) WCF with implementation-specific types OR
2) Types that are meant to transport and manage objects that wrap a job OR
3) Completely generic, serializable objects contained in a custom wrapper. Since they are just a binary payload, this allows any object to be passed to the service. Once in the service, the wrapper defines what should happen to the object (e.g. invoke a method, gather a result, and optionally make that result available for return).
Ultimately, the web site is responsible for querying the service about its state. This querying can be as simple as polling or can use asynchronous callbacks with WCF (though I believe this also uses some sort of polling behind the scenes).
Let me tell you what I have done.
I have created a class called Atzenta that has a timer (a 1-2 second trigger).
I have also created a table in my temporary database that keeps the jobs. The table knows the job ID, other parameters, the priority, the job status, and messages.
I can add or delete a job through this class. When there is no action to be done, the timer is stopped. When I add a job, the timer starts again. (The timer runs on its own thread, so it can do work in parallel.) I use System.Timers and not other timers for this.
The jobs can have different priorities.
Now let's say I place a job in this table using the Atzenta class. The next time the timer is triggered, it runs the query on this table, finds the first available job, and just runs it. No other jobs run until this one has ended.
All synchronization and flags are handled through the table. In the table I have flags for every job that show whether it is |wait to run|request to run|run|pause|finish|killed|
All jobs are already known functions or classes (e.g. the creation of statistics).
For stop and start, I use global.asax and the Application_Start and Application_End events to start and pause the object that keeps the tasks. For example, when a job is running and I get Application_End, either I wait for it to finish and then stop the app, or I stop the action, notify the table, and start again on Application_Start.
So I call Atzenta.RunTheJob(Jobs.StatisticUpdate, ProductID); this adds the job to the table and opens the timer, and on the next trigger the job runs and I update the statistics for the given product ID.
I use a table in a database to synchronize the many pools that run the same web app, and in fact it does work that way. With a common table, synchronizing the jobs is easy and you avoid two pools running the same job at the same time.
In my back office I have a simple table view to see the status of all jobs.

Resources