SQLite and recursive data

Say I have a table T where a row can contain a reference to another row in the same table. Generally speaking, I want to build a recursive entity. What is the most appropriate and efficient way to do that using SQLite?
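For illustration, one common way to model this in SQLite is a self-referencing foreign key plus a recursive common table expression (WITH RECURSIVE, supported since SQLite 3.8.3) to walk the chain. A minimal sketch, with made-up table and column names:

CREATE TABLE t (
    id        INTEGER PRIMARY KEY,
    parent_id INTEGER REFERENCES t(id),  -- NULL for root rows
    payload   TEXT
);

-- Walk the whole subtree below the row with id = 1.
WITH RECURSIVE subtree(id, parent_id, payload) AS (
    SELECT id, parent_id, payload FROM t WHERE id = 1
    UNION ALL
    SELECT t.id, t.parent_id, t.payload
    FROM t JOIN subtree ON t.parent_id = subtree.id
)
SELECT * FROM subtree;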

Related

Does SQLite support bulk-loading (sort-then-indexing)?

When constructing an index tree from existing data, there is a bulk-loading algorithm; see for example:
https://en.wikipedia.org/wiki/B%2B_tree#Bulk-loading
https://www.youtube.com/watch?v=HJgXVxsO5YU
When creating an index on a non-empty table, does SQLite use bulk loading or build the index by insertions? From my performance test it seems that SQLite builds the index by insertion, because inserting into a table after indexing it and creating the index after insertion take similar amounts of time.
Do we know why bulk-loading is not used? Does it not work well in practice?
Bulk loading requires that the data is already sorted.
SQLite implements sorting by inserting the rows into a temporary index, so using it for bulk loading would not be productive.
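For reference, the two orderings compared in the performance test above look roughly like this (table, column, and index names are made up):

-- Variant A: create the index first, then insert the rows.
CREATE TABLE ta(x INTEGER, y TEXT);
CREATE INDEX ta_x ON ta(x);
-- ... many INSERT INTO ta VALUES(...) statements here ...

-- Variant B: insert the rows first, then create the index.
CREATE TABLE tb(x INTEGER, y TEXT);
-- ... many INSERT INTO tb VALUES(...) statements here ...
CREATE INDEX tb_x ON tb(x);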

How to import a data frame with RSQLite while specifying column constraints?

I am trying to put a large data frame into a new table of a database. It could be done simply via:
dbWriteTable(conn=db,name="sometablename",value=my.data)
However, I want to specify the primary keys, foreign keys, and the column types such as NUMERIC, TEXT, and so on.
Is there anything I can do? Should I create a table with my columns first and then add the data frame to it?
RSQLite assumes your data.frame is already all set before writing it to disk; there is not much to specify in the writing call. So I see two options: adjust things either before firing the query that writes the table, or after. I usually write the table from R to disk, then polish it using dbGetQuery to alter table attributes. The only problem with this workflow is that SQLite has very limited support for altering tables.
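If you instead go the route the question suggests (create the table first, then load the data frame), the DDL with the constraints might look roughly like this (table, column, and key names are made up):

CREATE TABLE sometablename (
    id        INTEGER PRIMARY KEY,
    parent_id INTEGER REFERENCES othertable(id),
    label     TEXT NOT NULL,
    amount    NUMERIC
);

After running this (e.g. with dbExecute or dbGetQuery), the data frame can be written with dbWriteTable(..., append = TRUE) so the existing column definitions and constraints are kept.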

SQLite and autoindexing

I recently began exploring indexing in SQLite. I'm able to successfully create an index with my desired columns.
After doing this I took a look at the database to confirm that the index was created successfully, only to find that SQLite had already auto-created indexes for each of my tables:
"sqlite_autoindex_tablename_1"
These auto-generated indexes use two columns of each table, the two columns that make up my composite primary key. Is this just normal behavior for SQLite when using composite primary keys?
Since I'll be doing most of my queries on these two columns, does it make sense to manually create indexes that are exactly the same thing?
New to indices so really appreciate any support/feedback/tips, etc -- thank you!
SQLite requires an index to enforce the PRIMARY KEY constraint -- without an index, enforcing the constraint would slow dramatically as the table grows in size. Constraints and indexes are not identical, but I don't know of any relational database that does not automatically create an index to enforce primary keys. So yes, this is normal behavior for any relational database.
If the purpose of creating an index is to optimize searches where you have an indexable search term involving the first column of the index, then there's no reason to create an additional index on the column(s) -- SQLite will use the automatically created one.
If your searches will involve the second column of the index without including an indexable term for the first column, you will need to create your own index. Neither SQLite (nor any other relational database I know of) can use a composite index to optimize filtering when the leading columns of the index are not specified in the search.
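As a concrete illustration (made-up table and column names): with a composite primary key on (a, b), a search on a alone, or on a and b, can use the automatic index, while a search on b alone needs its own index:

CREATE TABLE t(a INTEGER, b INTEGER, c TEXT, PRIMARY KEY(a, b));

-- These can use sqlite_autoindex_t_1:
SELECT * FROM t WHERE a = 1;
SELECT * FROM t WHERE a = 1 AND b = 2;

-- This cannot; it needs an explicit index such as the one below:
SELECT * FROM t WHERE b = 2;
CREATE INDEX t_b ON t(b);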

sqlite3 insert into dynamic table

I am using sqlite3 (maybe sqlite4 in the future) and I need something like dynamic tables.
I have many tables with the same format: values_2012_12_27, values_2012_12_28, ... (the number of tables is dynamic), and I want to select dynamically the table that receives some data.
I am using sqlite3_prepare with INSERT INTO ? VALUES(?,?,?). Of course this fails to compile (syntax error near "?"). Is there a nice and simple way to do this in SQLite?
Thanks
Using SQL parameters is not possible for identifiers such as table or column names.
If you don't want to keep so many prepared statements around, just prepare them on the fly whenever you need one.
If your database were properly normalized, you would have a single big values table with an extra date column.
This organization is usually to be preferred, unless you have measured both and found that the better performance (if it actually exists) outweighs the overhead of managing multiple tables.
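A sketch of the normalized layout described above, with made-up names: a single table with a date column, so the INSERT can be prepared once and only the bound values change:

CREATE TABLE daily_values (
    day TEXT NOT NULL,   -- e.g. '2012-12-27'
    a   REAL,
    b   REAL,
    c   REAL
);

-- Prepared once; every slot is an ordinary parameter:
INSERT INTO daily_values(day, a, b, c) VALUES (?, ?, ?, ?);

-- Per-day queries filter on the date column instead of picking a table:
SELECT a, b, c FROM daily_values WHERE day = '2012-12-27';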

How to determine the position of a specific character/string in a SQLite string column value?

I have values in a SQLite table* that contain a number of strings, of different lengths, joined by periods, something like this:
SomeApp.SomeNameSpace.InterestingString.NotInteresting
SomeApp.OtherNameSpace.WantThisOne.ReallyQuiteDull
SomeApp.OtherNameSpace.WantThisOne.AlsoDull
SomeApp.DifferentNameSpace.AlwaysWorthALook.LittleValue
I'd like to extract (in this case) the third period-delimited substring so I could write something like
SELECT interesting_string, COUNT(*)
FROM ( SELECT third_part_of_period_delimited_string(name) interesting_string )
GROUP BY interesting_string;
Obviously I can do this any number of ways programmatically; I'm wondering if there's any way to achieve this in a SQLite SELECT query?
* It's a SharpDevelop Profiler database, if anyone's curious
No.
You can, as you mention, work with the strings after you have selected them from the database. Or you can split them up into separate columns when they are stored.
If you do not have access to the code that is storing the data, you might want to consider reading the data in its entirety, splitting the strings and storing the split out tokens in separate columns in a new table. If the data is not too large, you might look at storing this table in a new memory database to give excellent performance.
Whether this is worthwhile depends on whether one pass to split the data strings can be made use of many times. If the data is constantly changing, then this scheme would probably not work well.
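If you do split the tokens out into a new table as described, the query from the question becomes straightforward; a minimal sketch with made-up names:

CREATE TABLE name_parts (
    name_id INTEGER,  -- key back to the original row
    part1   TEXT,
    part2   TEXT,
    part3   TEXT,
    part4   TEXT
);

-- Populated programmatically by splitting each name on '.', after which:
SELECT part3 AS interesting_string, COUNT(*)
FROM name_parts
GROUP BY part3;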

Resources