MariaDB - Inserting historical data into a system-versioned (temporal) table

I have some tables in MariaDB whose changes I have been tracking with a separate "changelog" table that is updated every time a record changes. However, I recently learned about temporal (system-versioned) tables in MariaDB and I would like to switch to that method, as it is a much more elegant way of tracking changes. I'm wondering, however, if there is a way to transfer my existing "changelog" data over to the new system-versioned tables.
So I was hoping I could somehow insert new rows with specified values for the table, including the row_start and row_end columns, without that insert triggering the table to create another historical row... is this possible? I tried just doing an "insert into (id, row_start, row_end, etc) values(x, y, z)", but that results in an unknown column "row_start" error.

Old question, but starting with 10.11, MariaDB allows direct insertion of historical data via a system variable (also available as a command-line option).
https://mariadb.com/kb/en/system-versioned-tables/#system_versioning_insert_history
system_versioning_insert_history
Description: Allows direct inserts into ROW_START and ROW_END columns if secure_timestamp allows changing timestamp.
Commandline: --system-versioning-insert-history[={0|1}]
Scope: Global, Session
Dynamic: Yes
Type: Boolean
Default Value: OFF
Introduced: MariaDB 10.11.0
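A minimal sketch of using this to migrate changelog rows, assuming a system-versioned table t with columns id and data, the default invisible versioning column names row_start and row_end, and a secure_timestamp setting that permits changing the timestamp:
SET SESSION system_versioning_insert_history = ON;
-- historical versions are inserted with an explicit validity range
INSERT INTO t (id, data, row_start, row_end)
VALUES (1, 'old value', '2020-01-01 00:00:00', '2020-06-01 00:00:00');
-- the current version is inserted as usual
INSERT INTO t (id, data) VALUES (1, 'current value');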

Related

MariaDB table missing but I can't recreate it

Something went wrong during a structure synchronization between two databases.
One of our production databases is now missing a key table, 'customers' (which just about every other table has foreign keys to).
I'm trying to recreate the table from last night's backup (I don't want to restore the entire db - just recreate this table as the data in it does not change that much and I don't want to lose the transactional data from today)
The hassle seems to be that all the foreign key data for this table still exists in INFORMATION_SCHEMA.KEY_COLUMN_USAGE, and I am getting 121 and 150 errors when I try to run the CREATE TABLE query.
I've manually deleted all FKs to the missing table and I am still getting errno 150 when trying to recreate the table. Any ideas where else there might be lost references to this table that are stopping me from creating it again?
This was eventually resolved by repeatedly consulting the output of SHOW ENGINE INNODB STATUS.
The missing table had various indexes - for example, on the customer name there was an index "customer_name_idx". The CREATE TABLE query asked for this index to be created. The SHOW ENGINE INNODB STATUS output was "could not create table because index customer_name_idx already exists."
There was no reference to this index, to any primary key or to the table itself in any of the meta-data tables - I checked
INFORMATION_SCHEMA.INNODB_SYS_INDEXES
INFORMATION_SCHEMA.TABLE_SCHEMA
INFORMATION_SCHEMA.STATISTICS
INFORMATION_SCHEMA.TABLES
so I could not explain why this error was being thrown.
My guess, after the fact, is that MySQL is holding a cached copy of the information_schema meta data in memory and was consulting that, and maybe that only gets refreshed if you restart MySQL?
The solution was to give the indexes new names as a short-term fix, and to rename them during our next scheduled downtime.
Once the renamed indexes were in place, the table could be created and the backup data reinstated.

System-versioned tables: need historical data for a new empty table for testing / development

I want to track the changes of a table using MariaDB system versioning.
For these changes I need to create a visualization grouped by month.
How can I create test data inside the database table to test my query properly? It is a new empty database table, so I need to create test data in the past. I am not able to wait for a year or so to get proper test data.
Any ideas?
Loading example data in the past requires changing the current time for the statement, like:
set statement timestamp=unix_timestamp('2021-03-04') for
insert into .....
ref: fiddle
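Expanding on that snippet, a rough sketch (assuming a system-versioned table t with columns id and amount, and a secure_timestamp setting that allows changing the timestamp); each statement's timestamp becomes the row_start/row_end recorded by the versioning engine, giving one historical row per month:
SET STATEMENT timestamp=UNIX_TIMESTAMP('2021-01-15') FOR
    INSERT INTO t (id, amount) VALUES (1, 100);
SET STATEMENT timestamp=UNIX_TIMESTAMP('2021-02-15') FOR
    UPDATE t SET amount = 150 WHERE id = 1;
SET STATEMENT timestamp=UNIX_TIMESTAMP('2021-03-15') FOR
    UPDATE t SET amount = 200 WHERE id = 1;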
As documented, you cannot set system_versioning_asof to change the result of DML (it only affects queries), as the fiddle example shows.
Starting with 10.11 MariaDB allows direct insertion of historical data.
https://mariadb.com/kb/en/system-versioned-tables/#system_versioning_insert_history
system_versioning_insert_history
Description: Allows direct inserts into ROW_START and ROW_END columns if secure_timestamp allows changing timestamp.
Commandline: --system-versioning-insert-history[={0|1}]
Scope: Global, Session
Dynamic: Yes
Type: Boolean
Default Value: OFF
Introduced: MariaDB 10.11.0
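On 10.11 or later, a sketch under similar assumptions (a system-versioned table t with columns id and amount and the default row_start/row_end names) that seeds one historical version per month, which a monthly GROUP BY can then aggregate:
SET SESSION system_versioning_insert_history = ON;
INSERT INTO t (id, amount, row_start, row_end) VALUES
    (1, 100, '2021-01-15', '2021-02-15'),
    (1, 150, '2021-02-15', '2021-03-15');
-- the current version gets row_start = now(); backdate it with SET STATEMENT timestamp=... if needed
INSERT INTO t (id, amount) VALUES (1, 200);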

Creation of Flyway "schema_version" fails for dashDB

I'm using Flyway to manage db migration on IBM dashDB. This database organizes table content 'by column' by default, which in particular makes the creation of the "schema_version" table fail.
To get it to work, the table creation SQL statement simply needs to include the "ORGANIZE BY ROW" directive:
CREATE TABLE ... (
    ...
) ORGANIZE BY ROW
What would be the best approach to handle this issue? I'm looking for a solution that does not impact the default table organization.
Thanks for helping,
Cheers.
dashDB will perform best when all tables are column-based. When you start to mix row and column based tables, many operations are then performed in "compensation" which basically means they won't take full advantage of the columnar engine.
There are currently some compatibility reasons why a columnar table cannot be created and thus a row-based table must be used, but neither the original DDL nor the error is stated, so I can't tell in this case. If you can provide the full CREATE TABLE statement and the resulting error (if you have it), I can possibly provide an alternative solution that would allow you to still use all column-based tables.
If you only want to change a particular table from column-organized to row-organized, then an "ORGANIZE BY ROW" clause on the table definition would be the recommended way to approach this. (This seems to be what you're doing.)
Changing the default table organization will change how tables are created when you don't put an "ORGANIZE BY" clause in your table DDL.
If you have admin privileges on your dashDB instance you can change the default table organization via 'Run SQL' in the dashDB console or using a dashDB client (for example: clp/clpplus).
Set default table organization to ROW:
call ADMIN_CMD('UPDATE DB CFG USING DFT_TABLE_ORG ROW');
Set default table organization to COLUMN: (default dashDB configuration)
call ADMIN_CMD('UPDATE DB CFG USING DFT_TABLE_ORG COLUMN');
Analytics will perform much better with Column organized tables so it's recommended to have the majority of your tables as column organized.
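If the goal is to keep COLUMN as the database default, one possible pattern (a sketch only; the orchestration around Flyway is an assumption) is to flip the default just long enough for Flyway to create its "schema_version" table, then flip it back:
-- temporarily make newly created tables row-organized
call ADMIN_CMD('UPDATE DB CFG USING DFT_TABLE_ORG ROW');
-- ... run "flyway migrate" here so schema_version is created row-organized ...
-- restore the dashDB default so later tables are column-organized again
call ADMIN_CMD('UPDATE DB CFG USING DFT_TABLE_ORG COLUMN');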

Updating an SQLite database via an ODBC linked table in Access

I am having an issue with an SQLite database. I am using the SQLite ODBC driver from http://www.ch-werner.de/sqliteodbc/ . I installed the 64-bit version and created the ODBC data source with these settings:
I open my Access database and link to the datasource. I can open the table, add records, but cannot delete or edit any records. Is there something I need to fix on the ODBC side to allow this? The error I get when I try to delete a record is:
The Microsoft Access database engine stopped the process because you and another user are attempting to change the same data at the same time.
When I edit a record I get:
The record has been changed by another user since you started editing it. If you save the record, you will overwrite the changed the other user made.
Save record is disabled. Only copy to clipboard or drop changes is available.
My initial attempt to recreate your issue was unsuccessful. I used the following on my 32-bit test VM:
Access 2010
SQLite 3.8.2
SQLite ODBC Driver 0.996
I created and populated the test table [tbl1] as documented here. I created an Access linked table and when prompted I chose both columns ([one] and [two]) as the Primary Key. When I opened the linked table in Datasheet View I was able to add, edit, and delete records without incident.
The only difference I can see between my setup and yours (apart from the fact that I am on 32-bit and you are on 64-bit) is that in the ODBC DSN settings I left the Sync.Mode setting at its default value of NORMAL, whereas yours appears to be set to OFF.
Try setting your Sync.Mode to NORMAL and see if that makes a difference.
Edit re: comments
The solution in this case was the following:
One possible workaround would be to create a new SQLite table with all the same columns plus a new INTEGER PRIMARY KEY column which Access will "see" as AutoNumber. You can create a unique index on (what are currently) the first four columns to ensure that they remain unique, but the new "identity" (ROWID) column is what Access would use to identify rows for CRUD operations.
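A rough sketch of that workaround, with hypothetical names (mytable, key1 through key4, other_data) standing in for the real table and columns:
CREATE TABLE mytable_new (
    id INTEGER PRIMARY KEY,              -- ROWID alias; Access treats it as AutoNumber
    key1 TEXT, key2 TEXT, key3 TEXT, key4 TEXT,
    other_data TEXT
);
-- keep the original four-column key unique
CREATE UNIQUE INDEX ux_mytable_new_key ON mytable_new (key1, key2, key3, key4);
-- copy the existing data across
INSERT INTO mytable_new (key1, key2, key3, key4, other_data)
    SELECT key1, key2, key3, key4, other_data FROM mytable;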
I had this problem too. I have a table with a primary key on a VARCHAR(30) (TEXT) field.
Adding an INTEGER PRIMARY KEY column didn't help at all. After lots of testing I found the issue was with a DATETIME field I had in the table. I removed the DATETIME field and I was able to update record values in MS-Access datasheet view.
So now any DATETIME fields I need in SQLite, I declare as VARCHAR(19) so they come into Access via ODBC as text. Not perfect, but it works. (And of course SQLite doesn't have a real DATETIME field type anyway, so TEXT is just fine and will convert OK.)
I confirmed it's a number conversion issue. With an empty DATETIME field, I can add a time of 01-01-2014 12:01:02 via Access's datasheet view, if I then look at the value in SQLite the seconds have been rounded off:
sqlite> SELECT three from TEST where FLoc='1020';
2014-01-01 12:01:00.000
SYNCMODE should also be NORMAL not OFF.
Update:
If you have any text fields with a defined length (e.g. foo VARCHAR(10)) and the field contents contain more characters than the field definition allows (which SQLite permits), MS-Access will also barf when trying to update any of the fields on that row.
I've searched all similar posts as I had a similar issue with SQLite linked via ODBC to Access. I had three tables, two of them allowed edits, but the third didn't. The third one had a DATETIME field and when I changed the data type to a TEXT field in the original SQLite database and relinked to access, I could edit the table. So for me it was confirmed as an issue with the DATETIME field.
After running into this problem, not finding a satisfactory answer, and wasting a lot of time trying other solutions, I eventually discovered that what others have mentioned about DATETIME fields is accurate but another solution exists that lets you keep the proper data type. The SQLite ODBC driver can convert Julian day values into the ODBC SQL_TIMESTAMP / SQL_TYPE_TIMESTAMP types by looking for floating point values in the column, if you have that option enabled in the driver. Storing dates in this manner gives the ODBC timestamp value enough precision to avoid the write conflict error, as well as letting Access see the column as a date/time field.
Even storing sub-second precision in the date string doesn't work, which is possibly a bug in the driver because the resulting TIMESTAMP_STRUCT contains the same values, but the fractional seconds must be lost elsewhere.
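A sketch of that approach, assuming the driver option for recognizing Julian day timestamps is enabled in the DSN; julianday() and datetime() are built-in SQLite functions, and TEST/three/FLoc are the names from the example above:
-- store the date/time as a floating point Julian day instead of a text string
UPDATE TEST SET three = julianday('2014-01-01 12:01:02') WHERE FLoc = '1020';
-- convert back to text to verify what was stored
SELECT datetime(three) FROM TEST WHERE FLoc = '1020';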

Teradata: Is there a way to generate DDL from a view or select statement?

I am using a global application user account to access database A. This user account does not have permissions to modify database A's schema (ie, create tables, modify tables, etc). This user also has access to database B, but only views. I need to run SQL to feed data from a view in database B into a table in database A.
In a perfect world, I would be able to use this SQL:
create table database_a.mytable as (select * from database_b.myview) with no data
However, the user can't create tables in database A. If I could get the DDL of the select statement then I could log in under my personal account (which doesn't have any access to database B) and run the DDL in database A to create the table.
The only other option is to manually write the SQL, but I don't want to do that, especially since this view I am wanting to copy has many columns of varying data types and sizes.
Edit: I may be getting closer. I just experimented with this:
show (select * from database_b.myview)
However, it generated the DDL of every single table that is used in the view itself, as well as the definition for the view. This doesn't really help me since I just want the schema of the select statement itself. In other words, I need what would be generated if I were to use the create table as statement mentioned above.
Edit for Rob: Perhaps "DDL" was the wrong term to use. Using show view db.myview just shows the definition of the view, not the schema it represents. In my above example of create table as, I show how you can create a table that mimics the schema of a result set returned by a select. It generates DDL on the back end for creating a table and then executes that DDL to actually create the table. You can then say show table db.newtable and see the new table's DDL. I want to get that DDL directly from a select statement so that I can copy it, log out of the app account and into my personal account, and then execute the DDL to create the table.
This is only to save me the headache of having to type out the DDL manually by hand to save time and reduce typing errors, especially since the source view has so many columns. That said, I think hitting up the DBA or writing some snazzy stored procedure to do dynamic stuff would be a bit over the top for my needs. I think there has to be a way to get the DDL for creating a table schema directly from a select statement.
Generate DDL Statements for objects:
SHOW TABLE {DatabaseB}.{Table1};
SHOW VIEW {DatabaseB}.{View1};
Breakdown of columns in a view:
HELP VIEW {DatabaseB}.{View1};
However, without the ability to create the object in the target database DatabaseA you don't have much leverage. Obviously, if the object already existed, INSERT INTO ... SELECT ... FROM DatabaseB.Table1 or MERGE INTO would be options that you have already explored.
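For completeness, a sketch of that load once the target table exists (using the same placeholder naming as below):
INSERT INTO {DatabaseA}.{TableA}
SELECT *
FROM {DatabaseB}.{View1};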
Alternative Solution
Would it be possible to have a stored procedure created that dynamically created the table based on the view name that is provided? The global application account would simply need privilege to execute the procedure. Generally the user creating the stored procedure would need the permissions to perform the actions contained within the stored procedure. (You have some additional flexibility with this in Teradata 13.10.)
There are some caveats with this approach. You are attempting to materialize views that could reference anywhere from hundreds to billions of records. These aren't simple 1:1 views that are put on top of the target tables. Trying to determine the required space in the target database to materialize the view will be difficult. Performance can and will vary depending on the complexity of the view and the data volumes. This will not be a fast-path or data block optimized operation.
As a DBA, I would be concerned with this approach being taken on by a global application account without fully understanding the intent. I trust you have an open line of communication with the DBA(s) involved for supporting this system. I'm sure there are reasons for your madness that can't be disclosed here.
Possible Solution - VOLATILE TABLE
Unless the implicit privilege for CREATE TABLE has been revoked from the global application account this solution should work.
Volatile tables do not require perm space. Their table definitions persist for the duration of the session, and any data inserted into them relies on the spool space of the user who instantiated it.
CREATE VOLATILE TABLE {Global Application UserID}.{TableA_Copy} AS
(
SELECT *
FROM {DatabaseB}.{TableA}
)
WITH NO DATA
NO PRIMARY INDEX
ON COMMIT PRESERVE ROWS;
SHOW TABLE {Global Application UserID}.{TableA_Copy};
I opted to use a Teradata 13.10 feature called NO PRIMARY INDEX. By default, CREATE TABLE AS will take the first column of the SELECT statement and make it the PRIMARY INDEX of the table. This could lead to skewing and perm space issues in your testing depending on the data demographics. You can specify an explicit PRIMARY INDEX on your own as you understand the underlying data. (See the DDL manuals for details on the syntax if you're uncertain.)
The use of ON COMMIT PRESERVE ROWS for the intent of this example is probably extraneous. But in reality if you popped any data into that table for testing this clause would be beneficial in Teradata mode as the data would otherwise be lost immediately after the CREATE TABLE or any other data manipulation was performed against the volatile table.
