Teradata: MULTISET vs SET table performance

So SET tables involve a performance impact, as for every new row inserted or updated in the table, Teradata checks if a duplicate row already exists in the table, which can be a serious issue if there is a large number of records. However, we can improve its performance by defining a unique constraint on any column in the SET table, such as a Unique Primary Index (UPI). This helps us avoid the additional overhead of performing the duplicate row check, as the UPI guarantees that there will be no duplicate rows.
Does this mean that a SET table with a UPI/USI will have the same performance as a MULTISET table with a UPI/USI? Please explain.
And if your table has a Unique Primary Index, should you create it as a SET or a MULTISET table?

There will be no performance difference between a SET and a MULTISET table with a UPI; there is just a difference in INSERT ... SELECT behavior: a SET table silently ignores duplicate rows (the number of rows inserted is less than the number of rows selected), while a MULTISET table throws an error (the duplicate row check is done before the uniqueness check).
But adding a USI will not prevent duplicate row checks: a new row has to be inserted into the base table first to create its ROWID before it can be inserted into the USI subtable.
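For illustration, here is a minimal DDL sketch of the two variants discussed above (the table and column names are made up):
CREATE SET TABLE sales_set (sale_id INTEGER, amount DECIMAL(10,2))
UNIQUE PRIMARY INDEX (sale_id);
CREATE MULTISET TABLE sales_multi (sale_id INTEGER, amount DECIMAL(10,2))
UNIQUE PRIMARY INDEX (sale_id);
With the UPI in place, an INSERT ... SELECT whose source contains exact duplicate rows is silently thinned out on the SET table, while the MULTISET table raises a uniqueness violation, as described above.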

Related

How to modify column type fast in SQLite?

I have a 3 GB SQLite database. I want to modify column type of one of the table columns.
I know that sqlite does not support altering columns and this can only be done by recreating a table.
That's how I do it:
BEGIN TRANSACTION;
ALTER TABLE tbl RENAME TO tbl_;
CREATE TABLE tbl (a INTEGER, b TEXT, c TEXT);
INSERT INTO tbl SELECT * FROM tbl_;
DROP TABLE tbl_;
COMMIT;
I thought that since I use a transaction, the database size would not increase during this process, but it did. There is not enough space on my disk to double the database size. Is it normal for the database size to increase within a transaction? Is there any other way of modifying a column type without increasing the database size? This process also takes a lot of time. Unexpectedly, most of the time is taken by the DROP TABLE statement; it's even longer than the INSERT statement. Why is dropping a table slower than copying data from one table to another?
Thanks in advance!

How is rowid determined on sqlite insertion?

I'd like to predict (reverse engineer really) the rowid of any to-be-inserted row in a sqlite table (to reconstruct a stream of sqlite insertions using the rowid of some tables as foreign key in other tables). The insertion may happen after an arbitrary sequence of insertions and deletions. How is the rowid determined by sqlite on insertion?
Is it an ever incrementing counter?
int64_t next_rowid() {
static int64_t r = 0;
return ++r;
}
Maybe the smallest row not in use?
// Algorithm sketch: reuse the smallest rowid no longer in use (needs <set> and <cstdint>)
static std::set<int64_t> deleted;
static int64_t top = 0;
int64_t next_rowid() {
    if (deleted.empty()) deleted.insert(++top);
    int64_t r = *deleted.begin();   // smallest unused rowid
    deleted.erase(deleted.begin());
    return r;
}
void delete_rowid(int64_t r) {
    deleted.insert(r);
}
Some other scheme?
Unspecified?
https://sqlite.org/autoinc.html
SQLite is single-threaded, so in most cases it effectively performs select max(id) + 1 from the_table. From that perspective it is really hard to tell what the sequence was. You can, however, produce a valid sequence by treating deleted rows as not present. Or maybe I missed something.
Edit
As CL spotted, AUTOINCREMENT works in a more stable way, so you can't get the same id twice. And from that you can see that something was deleted in the meantime...
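A quick sketch of the non-AUTOINCREMENT behavior described above (the table and column names are made up):
CREATE TABLE t(x);                             -- implicit rowid, no AUTOINCREMENT
INSERT INTO t(x) VALUES ('a'), ('b'), ('c');   -- rowids 1, 2, 3
DELETE FROM t WHERE rowid = 3;
INSERT INTO t(x) VALUES ('d');                 -- gets rowid 3 again: max(rowid) + 1
With AUTOINCREMENT declared, the last insert would get rowid 4 instead, so the gap reveals the deletion.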
First, there are two rowid determination algorithms, depending upon whether or not AUTOINCREMENT has been specified.
AUTOINCREMENT means that the rowid is guaranteed to increase within the limitations of the size of the number (9223372036854775807). If that number is reached, then any subsequent insert attempt fails with an SQLITE_FULL exception.
Without AUTOINCREMENT, in the above scenario, the algorithm will try to find an unused rowid, so the resultant rowid may be lower than other existing rowids.
Neither algorithm guarantees an increment of 1; rather, they will usually increment by 1.
AUTOINCREMENT causes a table named sqlite_sequence to be created; the last used rowid for each table is held in its seq column. Note that it can be manipulated/altered: add one record, then change the value to 100, and the next insert will likely get rowid 101.
The name column is the name of the table that the row is for.
As a test, I changed the name column to a non-existent table name (the last sequence was 101); inserting a record still resulted in rowid 102, so it would appear that even in the absence of the respective row in sqlite_sequence, the algorithm still locates a higher rowid.
I then lowered the sequence to 2; the next rowid was 103.
So the guarantee of a higher rowid seems to be thorough.
I next added a second row to sqlite_sequence for the same table, with a sequence number of 600. The insert came up with a rowid of 104.
As SQLite possibly selects the first row according to id, I then changed the id of the legitimate row from 2 (1 is the one that was changed to a non-existent table name) to 20; 3 is the rowid of the rogue/2nd-entry row. The inserted rowid was 601.
In an attempt to fool SQLite, I deleted the newly added row in the table and the row with rowid 3 (sequence value 601) in the sqlite_sequence table. SQLite was fooled: the rowid of the inserted row was 105.
As such, the algorithms appear to be along the lines of:
a) where AUTOINCREMENT isn't specified: 1 greater than the highest rowid in the table into which the row is being inserted, unless that would exceed 9223372036854775807, in which case an unused rowid will be sought.
b) where AUTOINCREMENT is specified: 1 greater than the greater of the highest rowid in the table into which the row is being inserted and the sequence stored in the first row for that table in the sqlite_sequence table. Note that sqlite_sequence may be updated even if the insert then does not take place, e.g. if the insert fails due to constraints.
Much of the above is based upon this
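A small sketch of case b) (the table name is made up), showing the sequence table at work:
CREATE TABLE t(id INTEGER PRIMARY KEY AUTOINCREMENT, x);
INSERT INTO t(x) VALUES ('a'), ('b'), ('c');              -- ids 1, 2, 3
DELETE FROM t WHERE id = 3;
INSERT INTO t(x) VALUES ('d');                            -- id 4, not 3: the stored sequence prevents reuse
SELECT name, seq FROM sqlite_sequence WHERE name = 't';   -- seq is now 4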

SQLite - Selecting not indexed column in GROUP BY

I have a situation similar to the question below.
Mysql speed up max() group by
SELECT MAX(id) id, cid FROM table GROUP BY cid
To optimize the above query (shown in that question), creating an index on (cid, id) does the trick.
However, when I add a column that is not indexed to the SELECT, the query drastically slows down.
For example,
SELECT MAX(id) id, cid, newcolumn FROM table GROUP BY cid
If I create an index on (cid, id, newcolumn), the query time drops back to minimal. It seems I need to index all the columns I select while using GROUP BY.
Is there any way other than indexing all the columns to be selected?
When all the columns used in the query are part of the index (which is then called a covering index), SQLite can get all values from the index and does not need to access the table itself.
When adding a column that is not indexed, each record must be looked up in both the index and the table.
Furthermore, the order of the records in the table is unlikely to be the same as the order in the index, so the table's pages are not read in order, and are read multiple times, which means that caching will not work as well.
The newcolumn values must be read from either the table or an index; there is no other mechanism to store data.
tl;dr: no
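For reference, a sketch of the covering-index approach described above (the table and column names follow the question, with the table name quoted because TABLE is a keyword; the index name is made up):
CREATE INDEX idx_cid_id_newcolumn ON "table"(cid, id, newcolumn);
SELECT MAX(id) id, cid, newcolumn FROM "table" GROUP BY cid;
EXPLAIN QUERY PLAN should then report something like "USING COVERING INDEX idx_cid_id_newcolumn" for the query.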

Will the maximum number of rows decrease after records are deleted from a table, as the row id keeps incrementing?

This might be a beginner's question, but when testing my sqlite database, I found that when I delete a row, the row id keeps incrementing when I insert a new row and doesn't reuse, for instance, the row id of a deleted row. So, what will happen if the row id reaches its maximum value while there are fewer rows in the table?
This is documented:
If the table has previously held a row with the largest possible ROWID, then new INSERTs are not allowed and any attempt to insert a new row will fail with an SQLITE_FULL error.
If you omit the AUTOINCREMENT keyword, IDs will still autoincrement, but can be reused if you delete the last row or if the values overflow:
If the largest ROWID is equal to the largest possible integer (9223372036854775807) then the database engine starts picking positive candidate ROWIDs at random until it finds one that is not previously used.
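A quick sketch of that overflow case (the table name is made up):
CREATE TABLE t(x);                                                 -- rowid table without AUTOINCREMENT
INSERT INTO t(rowid, x) VALUES (9223372036854775807, 'at the limit');
INSERT INTO t(x) VALUES ('next');                                  -- a random unused rowid is picked
With AUTOINCREMENT declared on an INTEGER PRIMARY KEY, the second insert would instead fail with SQLITE_FULL.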
When you use an auto-incrementing row number, you have to keep the largest possible value in mind. If your row count approaches that limit, you would have to use a bigger data type, but usually the integer limit isn't crossed, because a database designer should keep an eye on normalization.
If your data really grows that big, you are stuck with slow queries anyway; they will take a huge amount of time. SQLite is mainly useful for low-end devices, which are not well suited to handling big data.

How can I insert a new table row into every other row in an existing table?

Ok, I have a sqlite db that has roughly 100 rows. It is kind of a strange thing that I'm trying to do, but I need to insert a new row between each of the existing rows.
I have been trying to use the Insert statement as follows, but haven't had any luck:
insert into t1(column1) values("hello") where id%2 == 0
So I'm basically trying to use the %-operator to tell me if the id is even or odd. For every even id number, I'd like to insert a new row.
What am I missing? What can I do differently? How can I insert a new row into every other row and have the index updated as well?
Thanks
Your question assumes that the rows have some kind of built-in order to them, and that you can insert rows between other rows. That's not true.
It is true that rows have an order on disk, and that the id column is usually assigned in order, but that's an implementation detail. When you perform a query, the database is free to return the rows in any order it chooses, unless you specify what you want with an ORDER BY clause.
Now, I'm assuming what you really want is to insert rows between the existing rows in id order. One way to get what you want would look like this:
UPDATE t1 SET id = id * 2;
INSERT INTO t1 (id, column1) SELECT id + 1, 'hello' FROM t1;
The UPDATE would double the ids of all the existing rows (so 1, 2, 3 becomes 2, 4, 6); then the INSERT would query t1 and insert a new set of rows with id values one greater than the existing ones (ids 3, 5, 7), filling the gaps.
I haven't tested the above statements, so I don't know if they would work or if they require some extra trickery (like a temporary table) since we are querying and updating the same table in one statement. Also I may have made a syntax error.
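One possible way to avoid id collisions while doubling, sketched here and untested, is to go through negative intermediate values instead of a temporary table (column1 is taken from the question; id is assumed to be an INTEGER PRIMARY KEY):
BEGIN TRANSACTION;
-- Flip to negative doubled values first so no intermediate id collides with an existing one
UPDATE t1 SET id = -id * 2;
UPDATE t1 SET id = -id;
-- Insert a 'hello' row after each existing row, filling the odd ids
INSERT INTO t1 (id, column1) SELECT id + 1, 'hello' FROM t1;
COMMIT;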
Don't consider the rows as pre-ordered in the database. A database will store them as they come in, or according to an index. It's your task to order them on retrieval (i.e. when you query for data) according to your needs.
