Difference between sqlite and better-sqlite3 implementations

What's the difference between the sqlite and better-sqlite3 implementations? I have to use better-sqlite3 to create a database for a form (using only Node.js and Express), but the only clear example I found uses sqlite. Is there any difference? If not, great. Otherwise, do you know any useful links on databases and forms with better-sqlite3?
Thanks

In better-sqlite3, you can register custom functions and aggregate functions written in JavaScript, which you can run from within SQL queries.
In better-sqlite3, you can iterate through the cursor of a result set, and then stop whenever you want (you don't have to load the entire result set into memory).
In better-sqlite3, you can conveniently receive query results in many different formats (for example via pluck(), raw(), and expand()).
In better-sqlite3, you can safely work with SQLite's 64-bit integers, without losing precision due to JavaScript's number format.
See https://github.com/JoshuaWise/better-sqlite3/issues/262
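To make those features concrete, here is a minimal sketch using better-sqlite3 (the users table, the shout() function, and the file name are made up for illustration):

const Database = require('better-sqlite3');
const db = new Database('example.db'); // hypothetical file name

db.exec('CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)');

// A custom JavaScript function, callable from inside SQL.
db.function('shout', (s) => s.toUpperCase());

// Iterate lazily and stop whenever you want; the full result set
// is never loaded into memory.
for (const row of db.prepare('SELECT id, shout(name) AS name FROM users').iterate()) {
  if (row.id > 100) break;
}

// Opt in to BigInt so 64-bit integers keep full precision.
const big = db.prepare('SELECT 9007199254740993 AS n').safeIntegers().get();
// big.n is a BigInt, not a lossy Number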

One important difference is: better-sqlite3 allows for synchronous SQLite queries. With sqlite, you can't do that.
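For example (continuing the hypothetical users table from the sketch above):

// better-sqlite3 is synchronous: the row is returned directly.
const user = db.prepare('SELECT * FROM users WHERE id = ?').get(1);

// With the callback-based sqlite3 driver, the same query would be:
// db.get('SELECT * FROM users WHERE id = ?', [1], (err, row) => { /* ... */ });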

Related

Decimal/Date Sorting With Entity Framework Core and SQLite

Using Entity Framework Core (Code First) and SQLite, Guid is stored as binary but Decimal and Date fields are stored as text with Microsoft's provider.
I can understand they might not want the imprecision of DOUBLE for currency amounts and thus use text.
What happens if I need to sort? Is Entity Framework Core smart enough to make sorting work as expected (but slower because it needs to parse everything!), or will it sort alphabetically instead of sorting by number? I don't want it to return 100 before 2.
I'll have to do things like "give me the latest order" so what's the best approach for that? I want to make sure it's going to work.
Am I better off switching to the System.Data.SQLite provider to store dates in UNIX format (which is not supported by Microsoft's provider)? And would I then have to do the parsing back and forth myself, or could it take care of that automatically?
I am still learning System.Data.SQLite myself, but I am aware that you can create and assign a custom collation to your column. The collation can either be assigned to the table column, or only to a particular view or query, using standard SQLite SQL syntax and the COLLATE keyword.
This is not a complete example/tutorial, but for starters visit the Microsoft.Data.Sqlite docs. Also see this Stack Overflow answer. These are just hints, but they provide a consistent method to do this. Remember that SQLite is an in-process DB engine, so it should still be rather efficient and still allow working with the database in a normal fashion, without having to constantly inject custom logic between queries. Once you have the custom collation defined and properly registered, it should be rather seamless, with perhaps the only extra requirement being to append, e.g., COLLATE customDecimal to the ORDER BY clauses.
The custom collation function would convert the string values to an appropriate numeric type and return the comparison. It's very similar to the native .NET IComparer interface and Comparison<T> delegate.

SQLite SELECT very slow on Windows Phone

Simple SQLite SELECT query on Windows Phone is very slow on a high-end device (Lumia 930).
select * from tableName
It's fetching around 15,000 records (yeah, I need them all), and normally I'd expect it not to be this slow. However, it takes around 12-13 seconds to get all the records. I'm using the SQLite.Net-PCL client.
What could be causing it? Is it true that it's due to the very slow wrapper? Is there a workaround, any way to improve it?
EDIT: I tried using SQLite PCL from Microsoft Open Technologies, mapped property by property manually, and got much better results. So it seems that the row count, the column count, and reflection all combine to slow things down. I am now trying to expose similar functionality through SQLite.Net-PCL, the library I'm using, to see how that goes.
EDIT2: I marked Peter's answer as answer to my question as I was able to improve performance dramatically by manually mapping type by type using Prepare call and stepping through row by row.
SQLite can easily return 15,000 records from a simple table in a fraction of a second on a Windows Phone (tested on a Lumia 920).
There are other things causing your poor performance. If you have a huge number of columns, that might be a problem. Depending on how the SQLite wrapper is implemented (I don't know), two possible culprits are use of Reflection to fill your result objects or per-row Async overhead. But again, I don't know how that wrapper is implemented specifically.
The way to speed it up (other than to return less data) is to write your code in C++ and wrap it in a WinRT component to be called by your managed app.
Depending on what information you want from your entities, you may try the Query<>() method, which allows you to write a raw SQL query; you can then select only the fields you are interested in and map them to lighter entities, if that is possible. Even if you are getting all fields for your entity, Query<>() should still be quicker. Also check that you are using the latest SQLite driver for WP.

Accessing CoreData tables from fmdb

I'm using CoreData in my application for DML statements and everything is fine with it.
However, I don't want to use NSFetchedResultsController for simple queries like getting the count of rows, etc.
I've decided to use fmdb, but I don't know the actual table names needed to write SQL. Entity and table names don't match.
I've even looked inside the .sqlite file with TextEdit, but no hope :)
FMResultSet *rs = [db getSchema] doesn't return any rows
Maybe there's a better solution to my problem?
Thanks in advance
Core Data prefixes all its SQL names with Z (its internal bookkeeping tables use Z_). Use the SQLite command-line tools to inspect your persistent store file and see what names it uses.
However, this is a very complicated and fragile solution. The Core Data schema is undocumented and changes without warning, because Core Data does not support direct SQL access. You are likely to make errors accessing the store file directly, and your solution may break at random when the API is next updated.
The Core Data API provides the functionality you are seeking. Just use a fetch request that fetches a specific value, using an NSExpressionDescription to perform a function. This allows you to get information like counts, minimums, maximums, etc. You can create and use such fetches independent of an NSFetchedResultsController.
The Core Data API is very feature rich. If you find yourself looking outside the API for a data solution, chances are you've missed something in the API.

Which one to use? EAV or Blobs in the database?

I am currently working to rework the data system of our application. Basically, it is designed so that people can add all the custom fields they want, with only a few constant/always-there fields.
Our current design is giving us plenty of maintenance problems. What we do is dynamically (at runtime) add a column to the database for each field. We have to have a meta table and other cruft to maintain all of these dynamic columns.
Now we are looking at EAV, but it doesn't seem much better. Basically, we have many different types of fields, so there would be StringValues, IntegerValues, etc. tables... which makes things that much worse.
I am wondering if using JSON or XML blobs in the database may be a better solution, specifically because in most use cases, when we retrieve anything out of these tables, we need the entire row. The problem is that we need to be able to create reports for this data as well. No solution really makes custom queries look easy, and searching across such a blob database will surely be a performance nightmare when reports are run.
Each "row" needs to have anywhere from about 15 to 100 (possibly more) attributes/columns associated with it.
We are using SQL Server 2008, and our application interfacing with the database is a C# web application (so, ASP.NET).
What do you think? Use EAV, or blobs, or something else entirely? (Also, yes, I know a schema-free database like MongoDB would be awesome here, but I can't convince my boss to use it.)
What about the xml data type? Advanced querying is possible against this type.
We've used the xml type with good success. We do most of our heavy lifting at the code level, using LINQ to parse out values. Our schema is somewhat fixed, so that may not be an option for you.
One interesting feature of SQL Server is the sql_variant type. It's fully supported in .NET and quite easy to use. The advantage is that you don't need to create StringValue, IntValue, etc. columns; just one Value column that can contain all the simple types.
This very specific type favors the EAV option, IMHO.
It has some drawbacks though (sorting, distinct selects, etc.). So if you want to use it, make sure you read all the documentation and understand its limits.
Create a table with your known columns plus "X" sparse columns, using sequential names such as DataColumn0001, DataColumn0002, etc. When a new field is defined, just rename one of the spare columns and start inserting data. The great advantage of sparse columns is that they are indexable.
More info at this link.
What you're doing is STUPID: you are using a database that doesn't support your data model. You should work with a medium that meets your needs; that includes NoSQL databases such as RavenDB, MongoDB, DocumentDB, and CouchBase, or Postgres on the RDBMS side, to name several.
You are inherently using the tool in a capacity it was not designed for, and one in which it specifically limits your chances of success. NoSQL database solutions frequently use JSON as the underlying storage because JSON is inherently schemaless. Want to add a property? Sure, go ahead. Want to add a whole sub-collection? Sure, go ahead. NoSQL databases were, in part, created specifically to remove the rigid schema requirements of RDBMSs.
2015 Edit: Postgres now natively supports JSON. This is a viable option within an RDBMS. My answer is still correct that you need to use the correct tool for the problem. It is a polyglot persistence world.

SQLite insert performance

I need to write single lines to a file placed on a file server. I am considering utilizing SQLite to ensure that the writing of the line was successful and not just partially completed. However, insert performance is really essential here. My question is: what is the exact process (as in, read this, write that, and so on) that SQLite goes through when it inserts a row into a table? The table does not have any indexes, primary keys, constraints, or anything else.
This is the most common bottleneck in my experience:
http://www.sqlite.org/faq.html#q19
Except for that, SQLite is really fast. :)
You should use transactions and thus try to avoid fsync()-ing on every INSERT. Take a look here for some benchmarks.
Also, be sure to properly set the two important pragmas:
synchronous (NORMAL)
journal_mode (WAL)
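If you happen to be calling SQLite from Node with better-sqlite3 (the library from the first question), a sketch of both suggestions might look like this (the file and table names are made up):

const Database = require('better-sqlite3');
const db = new Database('log.db'); // hypothetical file name

// The two pragmas mentioned above.
db.pragma('journal_mode = WAL');
db.pragma('synchronous = NORMAL');

db.exec('CREATE TABLE IF NOT EXISTS lines (body TEXT NOT NULL)');
const insert = db.prepare('INSERT INTO lines (body) VALUES (?)');

// One transaction per batch means one fsync() per batch instead of
// one per row, which is where most of the time usually goes.
const insertMany = db.transaction((rows) => {
  for (const body of rows) insert.run(body);
});

insertMany(['first line', 'second line', 'third line']);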
You can use EXPLAIN to see the details of what happens when you execute a statement: http://www.sqlite.org/lang_explain.html
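With better-sqlite3, for instance, EXPLAIN is just another query that returns the bytecode program as rows (reusing the hypothetical lines table from the sketch above):

// Each row describes one VDBE instruction of the INSERT program.
const ops = db.prepare("EXPLAIN INSERT INTO lines (body) VALUES ('x')").all();
console.log(ops.map((op) => op.opcode)); // e.g. Init, OpenWrite, MakeRecord, Insert, ...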
