'Pre-prepared' statements in SQLite3?

Using SQLite in a memory-constrained embedded system with a fixed set of queries, it seems that code and data savings could be made if the queries could be 'pre-prepared'. That is, the prepared statement is produced by (an equivalent of) sqlite3_prepare_v2() at build time, and only _bind(), _step() etc need to be called at runtime, referencing one or more sqlite3_stmt* pointers that are effectively static data. The entire SQL parsing (and query planning?) engine could be eliminated from the target.
I realise that there is considerable complexity hidden behind the sqlite3_stmt* pointer, and that this is highly unlikely to be practical with the current sqlite3 implementation - but is the concept feasible?
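For the sake of discussion, here is a minimal sketch of what the runtime-only side might look like if such a facility existed. The handle g_lookup_stmt is hypothetical, standing in for a statement somehow emitted at build time; only the standard bind/step/reset calls appear, and nothing like this exists in current SQLite:

```c
#include <sqlite3.h>

/* Hypothetical handle produced by an offline "pre-prepare" step at build
   time. In real SQLite, a sqlite3_stmt is created at runtime by
   sqlite3_prepare_v2() and cannot be stored as static data. */
extern sqlite3_stmt *g_lookup_stmt;

/* The runtime code would then need only bind/step/reset - no SQL parser. */
int lookup_value(int key, int *value_out)
{
    int rc;
    sqlite3_bind_int(g_lookup_stmt, 1, key);       /* bind parameter ?1 */
    rc = sqlite3_step(g_lookup_stmt);
    if (rc == SQLITE_ROW)
        *value_out = sqlite3_column_int(g_lookup_stmt, 0);
    sqlite3_reset(g_lookup_stmt);                  /* ready for reuse */
    return rc == SQLITE_ROW ? 0 : -1;
}
```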

This was discussed on the SQLite-users mailing list in 2006. At that time D. Richard Hipp supported a commercial version of SQLite that ran compiled statements on a stripped-down target, which did not have any SQL parser. Perhaps you could check with Hwaci to see if this product is still available.

Related

Turning Datalog queries in to SQL(ite) queries

Datalog is a lovely language for querying relational data. It is simple, clear, composes well, and supports recursive queries without additional syntax.
SQLite is a fantastic embedded database with what seems to be a powerful query engine able to handle recursive queries – see the examples at the bottom of that page for generating Mandelbrot sets and finding all possible solutions to Sudoku puzzles!
I'm interested to know if there is a fairly standard way to translate from a Datalog query into recursive SQL as supported by SQLite, or if there are libraries that provide this facility.
DLVDB is an interpreter for recursive Datalog that uses an ODBC database connection for its extensional data: http://www.dlvsystem.com/dlvdb/
Apart from that, the paper
S. Ceri, G. Gottlob, and L. Tanca. 1989. What You Always Wanted to Know About Datalog (And Never Dared to Ask). IEEE Trans. on Knowl. and Data Eng. 1, 1 (March 1989), 146-166. http://dx.doi.org/10.1109/69.43410
provides theoretical background and some pointers for translating Datalog into relational algebra.
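To make the translation concrete: the classic transitive-closure program

ancestor(X,Y) :- parent(X,Y).
ancestor(X,Y) :- parent(X,Z), ancestor(Z,Y).

maps fairly directly onto a recursive CTE. Below is a hand-written sketch in C using the SQLite API; the parent table and its sample rows are invented for illustration, and no library performs the translation here:

```c
#include <sqlite3.h>
#include <stdio.h>

/* Print each (ancestor, descendant) pair found by the recursive query. */
static int print_row(void *unused, int argc, char **argv, char **names)
{
    (void)unused; (void)argc; (void)names;
    printf("%s -> %s\n", argv[0], argv[1]);
    return 0;
}

int main(void)
{
    sqlite3 *db;
    if (sqlite3_open(":memory:", &db) != SQLITE_OK) return 1;

    /* Extensional data: the parent relation. */
    sqlite3_exec(db,
        "CREATE TABLE parent(p TEXT, c TEXT);"
        "INSERT INTO parent VALUES ('alice','bob'), ('bob','carol');",
        NULL, NULL, NULL);

    /* Intensional rule as a recursive CTE:
       ancestor(X,Y) :- parent(X,Y).
       ancestor(X,Y) :- parent(X,Z), ancestor(Z,Y).  */
    sqlite3_exec(db,
        "WITH RECURSIVE ancestor(x, y) AS ("
        "  SELECT p, c FROM parent"
        "  UNION"
        "  SELECT parent.p, ancestor.y"
        "    FROM parent JOIN ancestor ON ancestor.x = parent.c)"
        "SELECT x, y FROM ancestor;",
        print_row, NULL, NULL);

    sqlite3_close(db);
    return 0;
}
```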

Pros and cons of using packages in Oracle

I am fairly new to using packages. My team is deciding whether or not to use packages in our applications. We have 4 applications that currently use in-line SQL. We are considering putting each SQL statement in a stored procedure and then logically grouping the stored procedures into packages (these stored procedures will be shared among the applications). What are the potential pros and cons of using packages?
Our applications are written in asp.net using c#.
So you want to start a debate about the advantages versus the disadvantages of using a package? Ok, then I will leave the disadvantages part to you, if you could share it with us. I will just share with you the advantages, not in my own words, since it would be a repetition of what Thomas Kyte already said here:
- break the dependency chain (no cascading invalidations when you install a new package body -- if you have procedures that call procedures, compiling one will invalidate your database)
- support encapsulation -- I will be allowed to write MODULAR, easy to understand code, rather than MONOLITHIC, non-understandable procedures
- increase my namespace measurably. Package names have to be unique in a schema, but I can have many procedures across packages with the same name without colliding
- support overloading
- support session variables when you need them
- promote overall good coding techniques, stuff that lets you write code that is modular, understandable, logically grouped together...

If you are a programmer, you would see the benefits of packages over a proliferation of standalone procedures in a heartbeat.
If you're going to have different applications accessing the same tables, with the same business logic happening on the database (e.g. constraints), then stored procs are the way to go. Think of them as the interface between the "front end" (i.e. anything that's not the database) and the database, in much the same way as a web service provides an interface to the mid-tier (I think; I'm really not up on the non-db-related architecture!).
Your publicly callable stored procedures would typically be things like create_a_new_customer, add_a_new_address, retrieve_customer_details, etc., where the logic behind each action is coded and related procedures are grouped into the same package. You wouldn't want to code a series of procedures that just do DML on the tables and expect the applications to work out when to call each procedure.
Here's a handy decision diagram for this question.
Just adding a side effect of using packages...
Suppose I have a package with all the stored procedures and functions that are required for my application to run, and one small stored procedure suddenly fails due to some issue (e.g. an alteration in a table structure) or some other minor problem, so that only that one stored procedure becomes invalid.
In this case, will the entire package become invalid, affecting the entire application?
On the other hand, if I do not use packages and create standalone procs and functions, then only the one proc will fail.

How does executemany() work

I have been using C++ and working with SQLite. In Python, there is an executemany operation in the library, but the C++ library I am using does not have that operation.
I was wondering how the executemany operation optimizes queries to make them faster.
I was looking at the SQLite C/C++ API and saw that there were two functions, sqlite3_reset and sqlite3_clear_bindings, that can be used to clear and reuse prepared statements.
Is this what Python does to batch and speed up executemany queries (at least for inserts)? Thanks for your time.
executemany just binds the parameters, executes the statement, and calls sqlite3_reset, in a loop.
Python does not give you direct access to the statement after it has been prepared, so this is the only way to reuse it.
However, SQLite does not take much time for preparing statements, so this is unlikely to have much of an effect on performance.
The most important thing for performance is to batch statements in a transaction; Python tries to be clever and to do this automatically (independently from executemany).
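For comparison, here is a minimal C sketch of the loop described above, batched into one transaction. The function name insert_many and the table t(x) are made up for illustration:

```c
#include <sqlite3.h>

/* Sketch of an executemany()-style helper for INSERTs: prepare once,
   then bind/step/reset in a loop inside a single transaction.
   Assumes a table t(x INTEGER) already exists. */
int insert_many(sqlite3 *db, const int *values, int n)
{
    sqlite3_stmt *stmt = NULL;
    int rc = sqlite3_prepare_v2(db, "INSERT INTO t(x) VALUES (?1)",
                                -1, &stmt, NULL);
    if (rc != SQLITE_OK) return rc;

    sqlite3_exec(db, "BEGIN", NULL, NULL, NULL);  /* batch into one txn */
    for (int i = 0; i < n; i++) {
        sqlite3_bind_int(stmt, 1, values[i]);
        rc = sqlite3_step(stmt);                  /* SQLITE_DONE on success */
        if (rc != SQLITE_DONE) break;
        sqlite3_reset(stmt);                      /* rewind for reuse */
        sqlite3_clear_bindings(stmt);             /* drop old parameter values */
    }
    sqlite3_exec(db, rc == SQLITE_DONE ? "COMMIT" : "ROLLBACK",
                 NULL, NULL, NULL);

    sqlite3_finalize(stmt);
    return rc == SQLITE_DONE ? SQLITE_OK : rc;
}
```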
I looked into some of the related posts and found the following, which is very detailed on ways to improve SQLite batch insert performance. These principles could effectively be used to create an executemany function.
Improve INSERT-per-second performance of SQLite?
The biggest improvement was indeed, as @CL. said, turning it all into one transaction. The author of the other post also found significant improvement by using and reusing prepared statements and playing with some pragma settings.
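For reference, a minimal sketch of the pragma tweaks that post discusses, assuming an open sqlite3 *db handle; they trade durability for speed, so a crash or power loss while they are active can lose or corrupt data:

```c
/* Durability-for-speed pragmas discussed in the linked post;
   apply before the batch, and only where data loss on crash is acceptable. */
sqlite3_exec(db, "PRAGMA synchronous = OFF", NULL, NULL, NULL);
sqlite3_exec(db, "PRAGMA journal_mode = MEMORY", NULL, NULL, NULL);
```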

Which is better performance-wise: stored procedure or executing a query with dataadapter?

I am reworking a .NET application that so far has been running slowly. Our databases are Oracle, and the code is written in VB. When writing queries, I typically pass the parameters to a middle tier function which builds the raw SQL. I have a database class that has a function ExecuteQuery which takes in a SQL string and returns a DataTable. This uses an OleDbDataAdapter to run the query on the database.
I found some existing code that sends the SQL and a parameter to a stored procedure which, as far as I can tell, opens the query and outputs it to a SYS_REFCURSOR / DataSet.
I don't know why it's set up this way, but could someone tell me which is better performance-wise? Or the pros/cons to doing it this way?
Thanks in advance
Stored procedures and dynamic SQL have the exact same performance; in other words, there is no performance advantage of one over the other. (Incidentally, I am a HUGE believer in using stored procs for everything, for a host of other reasons, but that's not the topic at hand.)
Bottlenecks can occur for many reasons.
For one, if you are actually code generating select statements it is highly probable that those statements are very unoptimized for the data the app needs. For example, doing a SELECT * which pulls 50 columns back versus a SELECT ID, Description which just pulls the two you need in your application at that point. In this example, the amount of data that has to be read from disk, transferred over the network wire, and pushed into objects in memory of the web server isn't trivial.
These will have to be evaluated on a case by case basis.
I would highly suggest that if you have a "slow" application that you need to improve the performance of the very first thing you ought to do is profile the application. What part of it is running slow? It might be inside the database server, it might be in your middle tier, it may even be a function of your network bandwidth or memory / load limitations on your web server. Heck, there might even be a WAIT command lurking somewhere in there placed by some previous programmer that left the company...
In short, you have at this point absolutely no idea on where to begin. So looking at actual code is premature. Go profile the app and see where things are slowing down. You might find that performance may radically improve simply by putting more memory in the database server.... Which is a much cheaper alternative than rewriting, testing and deploying vast amounts of code.
A stored procedure will definitely have better performance than building a raw query in code and executing it, but the important thing to realize is that this difference won't be your performance issue. Many other things affect performance far more than whether a query is a stored procedure. Even if you run a stored procedure and process the results using adapters, DataTables, and DataSets, you are still incurring a lot of overhead, especially if you pass those large objects around (I have seen cases where DataSets are returned wrapped in web service calls). So don't focus on that; focus on caching data, writing good queries, creating the proper indexes, and minimizing the use of DataSets and DataTables. That will yield better benefits than just moving queries to stored procedures.

Writing updates to OLAP cube

What is the easiest way to write user-entered measure values (a sales forecast) to the SQL Server Analysis Services OLAP cube from a .NET client application?
I'm aware that the underlying fact table can be updated with DML statements and that the cube can be reprocessed, but I'm looking for alternatives.
Regards,
Aleksandar
We use the Ranet OLAP pivot table for editing cube data.
See the sample 'Simple PivotTable Widget - PivotTable with Updateable' and the article 'Writing updates to OLAP cube'.
I nearly got into a project like this once. It did not go ahead, which I was very grateful for, after looking into the work involved. My advice to you is to run away!!!
You do not have to update the actual cube data or reprocess, though - depending on how complex your user-entered data is going to be. I believe this is covered in Microsoft's standard MDX course, the notes of which you may be able to find online (sorry, I've since disposed of my copy). This depends on whether you want to learn MDX, though, which is not easy.
I think you can use ADOMD.NET to do writeback. You can use AdomdCommand to wrap UPDATE CUBE statements.
ADOMD .Net
http://msdn.microsoft.com/en-us/library/ms123483(v=SQL.100).aspx
The link below talks about some of the issues with this approach if you are doing too many updates together.
http://www.developmentnow.com/g/112_2006_1_0_0_677198/writeback-in-ADOMD-NET.htm
