LocalStorage or SQLite Database?

I'm currently developing a mobile application which uses AJAX requests to fetch data from a server.
To enable offline navigation in my application, I need to store all of the collected data.
My application is quite data-heavy, because there's a section where the user can view charts (powered by Highcharts).
I'm wondering about the best way to cache the collected data in JSON format.
Is it lightweight and efficient enough to JSON.stringify the data array into local storage, like:
localStorage.setItem("graph_1_datas", JSON.stringify(json_data_array));
Or would it be better to create a database, with a table like this:
TABLE
-----
id
graphId
blockId
x
y
I have 3 graphIds per blockId, and about 10 blockIds...

Storing the JSON strings in local storage should be fairly fast and efficient. Just store a separate entry for each request; that gives you clear, simple code for getting the data from either local storage or the web service.
If you are likely to want to edit the data offline then you may wish to consider an SQLite database as it will make it easier/more efficient to add code to track changes.
You may also want to consider an SQLite database if your object graph gets more complicated and fits a relational database model.
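For illustration, here is a minimal sketch of that "local storage first, web service as fallback" pattern. It assumes jQuery is available for the AJAX call; the endpoint URL is made up, and the key follows the naming from the question.

    // Minimal sketch: read from local storage if present, otherwise fetch and cache.
    // The endpoint URL is hypothetical; the key mirrors the question's naming.
    function getGraphData(graphId, callback) {
        var key = "graph_" + graphId + "_datas";
        var cached = localStorage.getItem(key);
        if (cached !== null) {
            callback(JSON.parse(cached)); // offline or already cached
            return;
        }
        $.getJSON("/api/graphs/" + graphId, function (json_data_array) {
            try {
                localStorage.setItem(key, JSON.stringify(json_data_array));
            } catch (e) {
                // quota exceeded - just use the fresh data without caching
            }
            callback(json_data_array);
        });
    }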

Related

Plain JSON vs in-app DB for React-native?

I have a 50-100 MB dataset that users need to have access to. It's static, so it doesn't make sense to host a server for it. There are two kinds of operations I'll perform on the data:
Reading objects by unique ObjectId. Each object is ~3 KB.
Full text search through ~300,000 strings. Each string is 4-60 characters.
I'm considering storing the data as JSON files. The 300k strings will be stored separately. I'll use https://github.com/nextapps-de/flexsearch or something similar to perform search over them. I've done something similar before with a ~10 MB dataset back in 2016; I just used regex search and it worked flawlessly.
Are there reasons to use RealmDB, SQLite, PouchDB or something else instead of just JSON?
I wish I had come across this question a year ago...
In the office where I currently work, we tried creating an app using PouchDB and React Native. We basically saw PouchDB as an advantage because it wouldn't require our API to send all the data over and over again on every refresh triggered by the user; it would only send the data that had changed, based on the client's checkpoint. As the data on the server was quite heavy (around 6k entries with more than 200 attributes each), we tried at all costs to go easy on the client's data plan.
Months after this implementation was in place, we added a search feature with many different options for sorting and filtering, and not only did we have to throw away our entire PouchDB implementation, we had to start from scratch, replacing all its logic with indexed JSON values. PouchDB's performance was extremely slow; it was taking more than 5 seconds or so to retrieve results, and we just couldn't afford that kind of delay in our scope.
In the end we managed to get a very quick search by running FlexSearch over our indexed JSON. Don't make the same mistake we did; PouchDB cost us too much budget and precious time. It was a terrible choice.
Unfortunately I cannot offer proof or more details from a reputable source; I can only share the terrible personal experience I had when I thought we were reaching the end of a project and we had to start from scratch. It was a mess.
Oh boy, a bountied, opinion-based question!
I have about 5 years' experience with pouchDB specifically, and a little with SQLite. I have only cursory experience with RealmDB - I tried it out and decided it was not a good fit for my hybrid/mobile needs.
pouchDB excels in one area hands down - synchronization/replication, just like its big brother CouchDB. Providing interaction with an offline database that synchronizes with a remote database is huge for many mobile apps. pouchDB is schemaless, leveraging JSON documents. With pouchDB one may choose among several data stores via adapters. As there can be quota headaches¹ for your data size, the right choice may well be the SQLite adapter. pouchDB does not support full text search.
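As a rough sketch of that replication workflow (the database name, remote URL and adapter choice below are hypothetical, and assume the PouchDB script is already loaded):

    // Offline-first replication sketch; names and URL are placeholders.
    var localDB = new PouchDB("customers", { adapter: "websql" }); // or an SQLite adapter in a hybrid app
    var remoteDB = new PouchDB("https://couch.example.com/customers");

    // Two-way, continuous sync: only changed documents travel over the wire.
    localDB.sync(remoteDB, { live: true, retry: true })
        .on("change", function (info) { /* refresh the UI */ })
        .on("error", function (err) { console.log(err); });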
SQLite is what its name implies - a relational database, requiring a schema. An advantage to SQLite is platform support and the size of the database is not subject to quota headaches like web storage (e.g. IndexedDB). SQLite supports full text search, and apps can deploy with a canned database.
Between pouchDB and SQLite lies RealmDB - it is a schema based object database that supports synchronization/replication. Like pouchDB, it does not support full text search.
Now, your requirements:
Looking up object by id
300k static text
full-text search
I read 'static' to mean immutable.
Since your data does not change and full-text search is required, pouchDB and RealmDB would not be good choices. If there were a requirement to enhance, remove or add to the data, either would make sense, as changes to the data on a single server would replicate to the local database practically seamlessly.
SQLite might be a reasonable choice since it supports search and it is possible to deploy a canned database with the app. However, SQLite can be slow in hybrid apps.
So,
pouchDB and RealmDB would be massive overkill and not a good fit.
SQLite would add a fair bit of complexity.
For your specific requirements I'd stay on your path, though I have one concern: it appears flexsearch loads its index into memory. If that carries a performance penalty, then SQLite, with its ability to deploy a canned database and its built-in search facility, may prove a reasonable trade-off versus the complexity.
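To make the flexsearch path concrete, here is a small sketch along the lines the asker describes (the sample strings and IDs are invented; the real index would hold the ~300k strings and, as noted above, live in memory):

    // Build the index once at startup, keyed by ObjectId.
    import { Index } from "flexsearch";

    const strings = {               // illustrative data only
        "obj_001": "aluminium road bike, 54cm frame",
        "obj_002": "carbon gravel bike with disc brakes",
    };

    const index = new Index({ tokenize: "forward" });
    for (const [objectId, text] of Object.entries(strings)) {
        index.add(objectId, text);
    }

    // search() returns matching ObjectIds; each one is then used to load its ~3 KB object.
    const hits = index.search("gravel", 25);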
Good luck!
¹ Quota Headaches
I would say it really just depends on whether you want and NEED to leverage the power of relational queries. Because your data never changes, I would use JSON unless you are trying to perform complex comparisons across your data. In your case it sounds like you are just going to be searching for a particular ObjectId, so JSON is your best bet, especially because you say you won't need to change the data later.
If you organize your JSON so that your ObjectIds are in sorted order, you will easily be able to search quickly.
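A sketch of that idea, assuming the objects are kept in an array sorted by an id field (the field name is an assumption, not from the question):

    // Binary search over objects pre-sorted by id.
    function findById(objects, id) {
        let lo = 0, hi = objects.length - 1;
        while (lo <= hi) {
            const mid = (lo + hi) >> 1;
            if (objects[mid].id === id) return objects[mid];
            if (objects[mid].id < id) lo = mid + 1;
            else hi = mid - 1;
        }
        return null;
    }

    // Or skip the sorting entirely and build a lookup map once:
    // const byId = Object.fromEntries(objects.map(o => [o.id, o]));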

CosmosDB API selection: does it dictate how the data is stored, or only how we communicate with the instance?

When creating a CosmosDB instance, we can choose the API that we will use to communicate with the instance (e.g. SQL, MongoDB, Cassandra, etc.)
What is not clear to me is: does this selection dictate how the data is stored, or only the way we communicate with the instance? For example, if we choose MongoDB, does it mean that CosmosDB will store data in a MongoDB fashion?
The choice of API does not change how the data is stored. Cosmos DB always stores data using something called atom-record-sequence (ARS), which is essentially a set of primitive types, structs and arrays. The database engine translates the native ARS format into the data structures used by the various APIs (i.e. JSON documents, table rows, etc.).
So the answer to your question is that the choice of API only impacts how you communicate with the databases for that Cosmos DB account.
As David Makogon points out in his comment on another answer, while the way the data is stored is the same regardless of the API used, the content of the data will be different, because each API requires its own metadata so that the underlying data can be projected into the format expected by each API.
Here is a good technical overview of how Cosmos works under the hood.
https://azure.microsoft.com/en-us/blog/a-technical-overview-of-azure-cosmos-db/
Data is always stored in the same fashion (as a bunch of JSON documents); only the way you interact with the data changes.
https://learn.microsoft.com/en-us/azure/cosmos-db/introduction#develop-applications-on-cosmos-db-using-popular-open-source-software-oss-apis

Storing large object to InProc session rather than reloading on every page

This is my first post/question so please let me know if/how I can improve it. I found similar questions but nothing quite covered this.
When you store to InProc session you're just storing a reference to the data. So, if I have a public property foo, and I store it in Session("foo") = foo, then I haven't really taken up any additional memory (aside from the 32/64 bits used by the pointer)?
In my case, we are currently reloading foo on every page of our website, so if I were to instead store it in session, then it should take the same amount of space but not need to be reloaded on every page. I'm seeing a lot of people say not to store large objects in session, but if that large object already exists, what difference does it make to have a pointer to it? Of course I would remove the object from session the moment it was no longer needed.
The data we are trying to store is an object specific to the user's current work, but not user data. As an analogy, say the user was a car dealer, and he is looking at all the data for a particular customer. We have multiple pages for this customer, and we want to keep all the customer info loaded on each page. All the customer data is stored in a single XML data column in a SQL table, which we parse on every page.
We have tried binary serialization instead of parsing xml, so we could store with session in state server mode, but we found the performance to actually be worse.
We are running on a single web server.
First off, no. When you store something in the session state, all the data required to store that object is consumed by the website process(es). Just because .NET treats variables like references doesn't mean it actually uses less memory than a non-GC language. It just means that copying that variable around is done efficiently without using reference operators or pointers.
Your question is a bit vague, but you have a few options for persisting data:
1) Send the data to the client as JSON and store it in the browser if it should be per-user and is needed more on the client side than the server side. You can then send pieces of the data with different requests if you need to (put it in hidden fields if you have to use ASPX Web Forms); there is a sketch of this after the list.
2) Store it in the session state if it is a small bit of per user data.
3) Store it in the ASP.NET cache if it is large and common to all users, see here (https://msdn.microsoft.com/en-us/library/6hbbsfk6.aspx).
4) If it is large and user-specific and is used primarily on the server, then you have more of a performance problem. You should see if you can break out any user-specific stuff from the static stuff. If you do that and it's still large, then a database may not be a bad solution. If you are already using DB calls in your application, then looking up this data on every request won't cause too much overhead, and you won't have to regenerate it from scratch. (You should only do this if the data takes considerable time to generate, as a DB call could be slower than just regenerating the data itself.) I recommend writing some sort of middleware (HttpModule or OwinMiddleware) that uses whatever user identity you use for auth to look up the data and then set it on the HttpContext.Current.Items collection. This way the data is usable for the entire request, and you can add logic in the middleware to figure out when to set it.
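A small client-side sketch of option 1 (the endpoint and storage key are hypothetical, not from the question):

    // Fetch the per-user working data once, keep it in the browser for later pages.
    fetch("/customer/12345/summary")
        .then(function (r) { return r.json(); })
        .then(function (customer) {
            sessionStorage.setItem("customer_12345", JSON.stringify(customer));
        });

    // On subsequent pages:
    var customer = JSON.parse(sessionStorage.getItem("customer_12345") || "null");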
I would think that having a large chunk of user-specific data would be a red flag as user data should just be a list of what the user can/can't do and what their preferences are.
If this is static data then it's super simple. The application cache is what you want. The only complication would be if you have multiple servers that need synced data.

MVC3 + MongoDB Architecture: Store models directly to database?

I am currently developing an MVC3 application using MongoDB. I am quite unsure how I should build the architecture. E.g. my app has a page used for managing the profile of a registered user (name, email, some attributes exposed inside enum comboboxes). Hence I have a ManageProfileModel.cs with all the properties to manage. What's the proper way to use this data with MongoDB? Shall I store the ManageProfileModel data inside MongoDB, or do I have to add an additional layer containing domain classes like User.cs, Invoice.cs, ... and store these objects inside MongoDB (these objects being used in the models created)?
I am asking because a model for managing a user profile does not necessarily resemble a user (domain) object. My first approach is to store my (view) models directly inside MongoDB. I am not sure if it's that easy to get my (consistent) data back at a later point.
Thanks!
I would store the models directly in Mongo as-is for most of your data. I'm sure you know this already, but Mongo focuses on denormalization, and so it's different than traditional relational databases that want you to normalize your data.
So for a profile, you might have a user, a set of invoices, a set of addresses etc. As you decide your data models, I would suggest the following:
Consider your UI. If you need user + profile + invoices, go ahead and make a document like that. Makes your life a lot easier.
Don't be afraid to have repeated information stored.
You will constantly be wondering if you should embed a document (adding addresses to a user) or link to a document (put a list of references in an array referencing invoices). The rule I've heard that I think is good: if the data is constantly changing, make a link/reference; if it's immutable or slowly changing, embed it (there's a sketch of this after the list).
If your document will grow a lot over time, consider breaking it up. Mongo has to move your document in memory if it grows too big.
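As an illustration of those rules of thumb, here is roughly what such a document might look like, shown as a mongo shell snippet rather than C# (the field names are invented, not taken from the question):

    // Slowly-changing data embedded, frequently-changing data referenced.
    db.users.insertOne({
        name: "Jane Doe",
        email: "jane@example.com",
        addresses: [                        // embedded: rarely changes
            { street: "1 Main St", city: "Springfield", primary: true }
        ],
        invoiceIds: [                       // referenced: grows and changes often
            ObjectId("64a1f0c2e4b0a1b2c3d4e5f6")
        ]
    });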

Alternatives of Datatable

In my web application, I have a dynamic query that returns a huge amount of data into a DataTable, and this query is often re-run with different parameters, so the database is exhausted.
I want to fetch all the records with no parameters into an object and perform queries (maybe with LINQ) on that object, so the database will not be exhausted.
Which objects can be used instead of a DataTable?
This is one of my pet peeves - people who return all the data from the database.
There is absolutely no need for this unless you are doing reporting.
If you are doing reporting, then you need to increase your hardware capability so that the database can cope. This may also include tuning your database, rearranging tables, reindexing, regular rebuilding of indexes, updating statistics, archiving out old data, etc.
If you are NOT doing reporting, then start limiting how much data can be queried at any one time. Users DO NOT need to see massive quantities of data all at once. They need to see discrete amounts of data presented in a manageable and coherent way.
Another rule of thumb I like to observe is: let your database server do the work. It is made to manipulate lots of data; that is what it is good at, and it should have the power to do it. Pulling back loads of data to the client and then trying to manipulate that data on the client is a foolish thing to do. If your client machines are more powerful than the database server, then you have issues.
Never ever do this (except for caching)!!!
You are trying to implement DB mechanisms yourself, like:
persistent storage
index search and query strategy
replication
and so on
Spend your time on DB optimization instead (optimal schema, indexes, queries, partitioning).
