Creation of Flyway "schema_version" fails for dashDB

I'm using Flyway to manage db migration on IBM dashDB. By default, this database organizes table content by column, which in particular makes the creation of the "schema_version" table fail.
To get it to work, the table creation SQL statement just needs to include the "ORGANIZE BY ROW" directive:
CREATE TABLE ... (
    ...
) ORGANIZE BY ROW
What would be the best approach to handle this issue? I'm looking for a solution that does not impact the default table organization.
Thanks for helping,
Cheers.

dashDB will perform best when all tables are column-based. When you start to mix row- and column-based tables, many operations are then performed in "compensation", which basically means they won't take full advantage of the columnar engine.
There are currently some compatibility reasons why a columnar table cannot be created and thus a row-based table must be used, but neither the original DDL nor the error is stated, so I can't tell in this case. If you can provide the full CREATE TABLE statement and the resulting error (if you have it), I can possibly provide an alternative solution that would allow you to still use all column-based tables.

If you only want to change a particular table from column-organized to row-organized, then adding "ORGANIZE BY ROW" to the table definition is the recommended way to approach this. (This seems to be what you're doing.)
Changing the default table org will change how tables are created when you don't put an "ORGANIZE BY" clause in your table DDL.
If you have admin privileges on your dashDB instance you can change the default table org via 'Run SQL' in the dashDB console or using a dashDB client (for example: clp/clpplus).
Set default table organization to ROW:
call ADMIN_CMD('UPDATE DB CFG USING DFT_TABLE_ORG ROW');
Set default table organization to COLUMN: (default dashDB configuration)
call ADMIN_CMD('UPDATE DB CFG USING DFT_TABLE_ORG COLUMN');
Analytics will perform much better with column-organized tables, so it's recommended to have the majority of your tables column-organized.
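For illustration, a minimal sketch of a row-organized table created while DFT_TABLE_ORG stays COLUMN (the table and column names here are hypothetical, not Flyway's actual metadata DDL):
-- Hypothetical example: only this table is row-organized; the
-- database default (DFT_TABLE_ORG COLUMN) is left untouched.
CREATE TABLE MYSCHEMA.MIGRATION_HISTORY (
    installed_rank INTEGER NOT NULL,
    version        VARCHAR(50),
    description    VARCHAR(200) NOT NULL,
    success        SMALLINT NOT NULL,
    PRIMARY KEY (installed_rank)
) ORGANIZE BY ROW;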

Related

SQLite Importer will overwrite my database when I load my application?

I have an Ionic App using SQLite. I don't have any problems with implementation.
The issue is that I need to import an SQL file using SQLitePorter to populate the database with configuration info.
But also, on the same database I have user info, so my question is:
Every time I start the app, will it import the sql file, fill the database, and probably overwrite my user data too, since it is all in the same database?
I assume that you can always init your table using string queries inside your code. The problem is not that you are importing a .sql file. Right?
According to https://www.sqlitetutorial.net/sqlite-create-table/, you should always create a table with the [IF NOT EXISTS] switch. Writing a query like:
CREATE TABLE [IF NOT EXISTS] [schema_name].table_name (
column_1 data_type PRIMARY KEY);
you let SQLite decide whether it's going to create the table, without the risk of overwriting an existing one. You can trust that SQLite is smart enough not to overwrite any information, especially if you use a 'BEGIN TRANSACTION' - 'COMMIT' procedure.
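As a minimal sketch of such an idempotent initialization (table and column names are hypothetical):
BEGIN TRANSACTION;
-- Created only on the first run; subsequent runs are a no-op
CREATE TABLE IF NOT EXISTS settings (
    key   TEXT PRIMARY KEY,
    value TEXT
);
COMMIT;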
I give my answer assuming that you have imported data and user data in distinct tables, so you can manipulate what you populate and what you don't. Is that right?
What I usually do, is to have a sql file like this:
DROP TABLE IF EXISTS configuration_a;
DROP TABLE IF EXISTS configuration_b;
CREATE TABLE configuration_a (...);
INSERT INTO configuration_a (...);
CREATE TABLE configuration_b (...);
INSERT INTO configuration_b (...);
CREATE TABLE IF NOT EXISTS user_data (...);
This means that every time the app starts, I am updating with the configuration data I have at that time (that's why we use http.get to fetch any configuration file from a remote repo in the future) and creating user data only if the user_data table is not there (hopefully only on the initial start).
Conclusion: In my opinion, it's always good practice to trust the database product and let it handle any transaction that would be risky to implement yourself in your code, since it provides the tools for that. For example, the [IF NOT EXISTS] keyword is always safer than implementing a table checker yourself.
I hope that helps.
PS: Regarding the create-database procedure: SQLite connects to a database file, and if the file doesn't exist, it creates it. For someone comfortable with the sqlite command line, typing
sqlite3 /home/user/db/configuration.db
will connect you to this db and, if the file is not there, create it.

MariaDB - Inserting historical data into a system versioned (temporal) table

I have some tables in MariaDB whose changes I have been tracking with a separate "changelog" table that updates every time a record is updated. However, I have recently learned about temporal data tables in MariaDB and would like to switch to that method, as it is a much more elegant way of tracking changes. I'm wondering, however, if there is a way to transfer my "changelog" table over to the new system-versioned tables.
So I was hoping I could somehow insert new rows with the specified values for the table, also specify the row_end and row_start columns, and have that not trigger the table to create another historical row... is this possible? I tried just doing an "insert into (id, row_start, row_end, etc) values (x, y, z)" but that results in an unknown column "row_start" error.
Old question, but starting with 10.11 MariaDB allows direct insertion of historical data using a command line option or setting.
https://mariadb.com/kb/en/system-versioned-tables/#system_versioning_insert_history
system_versioning_insert_history
Description: Allows direct inserts into ROW_START and ROW_END columns if secure_timestamp allows changing timestamp.
Commandline: --system-versioning-insert-history[={0|1}]
Scope: Global, Session
Dynamic: Yes
Type: Boolean
Default Value: OFF
Introduced: MariaDB 10.11.0
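A minimal sketch of what migrating a changelog row might look like (table, columns, and timestamps are hypothetical, and this assumes secure_timestamp permits changing timestamps):
-- Session-scoped switch, MariaDB 10.11+
SET @@session.system_versioning_insert_history = 1;
-- Insert a historical row with an explicit validity period
INSERT INTO my_versioned_table (id, data, row_start, row_end)
VALUES (1, 'old value', '2020-01-01 00:00:00', '2021-06-01 00:00:00');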

Providing default value for unmapped column in SQL Compare

Is it possible to provide a default value or a query to provide a value to an unmapped column in the target table using Redgate SQL Data Compare?
To explain the scenario: I have a configuration database that holds settings data for several database instances. The data is all the same shape, but the config database has an additional InstanceID field in most tables. This allows me to filter my compare to only compare against the InstanceID relating to the source instance database. However, if I generate insert scripts, they fail because the target InstanceID fields are non-nullable. I want to provide a default value that is then used in the insert scripts. Is this doable?
SQL Data Compare doesn't have an easy way of doing this I'm afraid.
There is one way to do it - you could create a view that selects everything from the source table along with a computed column, which just provides the "default value" that you want to insert. Then you can map the view to the table in the target database and compare them, deploying from the result.
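As a sketch (the table, view, and the literal 42 are hypothetical):
-- View over the source table that supplies the unmapped InstanceID
CREATE VIEW dbo.Settings_WithInstance AS
SELECT s.*, CAST(42 AS int) AS InstanceID
FROM dbo.Settings AS s;
You would then map dbo.Settings_WithInstance to the target table in the project's table mappings and compare as usual.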
I hope this helps.

Teradata: Is there a way to generate DDL from a view or select statement?

I am using a global application user account to access database A. This user account does not have permissions to modify database A's schema (ie, create tables, modify tables, etc). This user also has access to database B, but only views. I need to run SQL to feed data from a view in database B into a table in database A.
In a perfect world, I would be able to use this SQL:
CREATE TABLE database_a.mytable AS (SELECT * FROM database_b.myview) WITH NO DATA
However, the user can't create tables in database A. If I could get the DDL of the select statement then I could log in under my personal account (which doesn't have any access to database B) and run the DDL in database A to create the table.
The only other option is to manually write the SQL, but I don't want to do that, especially since this view I am wanting to copy has many columns of varying data types and sizes.
Edit: I may be getting closer. I just experimented with this:
show (select * from database_b.myview)
However, it generated the DDL of every single table that is used in the view itself, as well as the definition of the view. This doesn't really help me since I just want the schema of the select statement itself. In other words, I need what would be generated if I were to use the create table as statement mentioned above.
Edit for Rob: Perhaps "DDL" was the wrong term to use. Using show view db.myview just shows the definition of the view, not the schema it represents. In my above example of create table as, I show how you can create a table that mimics the schema of a result set returned in a select. It generates a DDL on the back end for creating a table and then executes that DDL to actually create the table. You can then say show table db.newtable and see the new table's DDL. I want to get that DDL directly from a select statement so that I can copy it, log out of the app account, into my personal account, and then execute the DDL to create the table.
This is only to save me the headache of having to type out the DDL manually by hand to save time and reduce typing errors, especially since the source view has so many columns. That said, I think hitting up the DBA or writing some snazzy stored procedure to do dynamic stuff would be a bit over the top for my needs. I think there has to be a way to get the DDL for creating a table schema directly from a select statement.
Generate DDL Statements for objects:
SHOW TABLE {DatabaseB}.{Table1};
SHOW VIEW {DatabaseB}.{View1};
Breakdown of columns in a view:
HELP VIEW {DatabaseB}.{View1};
However, without the ability to create the object in the target database DatabaseA, you don't have much leverage. Obviously, if the object already existed, INSERT ... SELECT ... FROM DatabaseB.Table1 or MERGE INTO would be options that you have already explored.
Alternative Solution
Would it be possible to have a stored procedure created that dynamically created the table based on the view name that is provided? The global application account would simply need privilege to execute the procedure. Generally the user creating the stored procedure would need the permissions to perform the actions contained within the stored procedure. (You have some additional flexibility with this in Teradata 13.10.)
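A rough sketch of that idea (all names are hypothetical, and the dynamic-SQL call is an outline rather than tested code; check the DDL manuals for your release):
CREATE PROCEDURE admin_db.materialize_view (IN view_name VARCHAR(128))
BEGIN
  DECLARE sql_stmt VARCHAR(4000);
  -- Build the CREATE TABLE ... WITH NO DATA statement dynamically
  SET sql_stmt = 'CREATE TABLE database_a.' || view_name ||
                 ' AS (SELECT * FROM database_b.' || view_name ||
                 ') WITH NO DATA NO PRIMARY INDEX';
  -- DBC.SysExecSQL executes dynamic SQL from inside a stored procedure
  CALL DBC.SysExecSQL(:sql_stmt);
END;
The global application account would then only need EXECUTE privilege on admin_db.materialize_view.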
There are some caveats with this approach. You are attempting to materialize views that could reference anywhere from hundreds to billions of records. These aren't simple 1:1 views that are put on top of the target tables. Trying to determine the required space in the target database to materialize the view will be difficult. Performance can and will vary depending on the complexity of the view and the data volumes. This will not be a fast-path or data block optimized operation.
As a DBA, I would be concerned with this approach being taken on by a global application account without fully understanding the intent. I trust you have an open line of communication with the DBA(s) involved for supporting this system. I'm sure there are reasons for your madness that can't be disclosed here.
Possible Solution - VOLATILE TABLE
Unless the implicit privilege for CREATE TABLE has been revoked from the global application account this solution should work.
Volatile tables do not require perm space. Their table definitions persist for the duration of the session, and any data inserted into them relies on the spool space of the user who instantiated it.
CREATE VOLATILE TABLE {Global Application UserID}.{TableA_Copy} AS
(
SELECT *
FROM {DatabaseB}.{TableA}
)
WITH NO DATA
NO PRIMARY INDEX
ON COMMIT PRESERVE ROWS;
SHOW TABLE {Global Application UserID}.{TableA_Copy};
I opted to use a Teradata 13.10 feature called NO PRIMARY INDEX. By default, CREATE TABLE AS will take the first column of the SELECT statement and make it the PRIMARY INDEX of the table. This could lead to skewing and perm space issues in your testing depending on the data demographics. You can specify an explicit PRIMARY INDEX on your own as you understand the underlying data. (See the DDL manuals for details on the syntax if you're uncertain.)
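For example, the explicit variant might look like this (the PRIMARY INDEX column is hypothetical; pick one you know distributes evenly):
CREATE VOLATILE TABLE {Global Application UserID}.{TableA_Copy} AS
(
SELECT *
FROM {DatabaseB}.{TableA}
)
WITH NO DATA
PRIMARY INDEX (Customer_Id)
ON COMMIT PRESERVE ROWS;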
The use of ON COMMIT PRESERVE ROWS for the intent of this example is probably extraneous. But in reality if you popped any data into that table for testing this clause would be beneficial in Teradata mode as the data would otherwise be lost immediately after the CREATE TABLE or any other data manipulation was performed against the volatile table.

SQLite Modify Column

I need to modify a column in a SQLite database but I have to do it programmatically due to the database already being in production. From my research I have found that in order to do this I must do the following:
1. Create a new table with the new schema
2. Copy data from the old table to the new table
3. Drop the old table
4. Rename the new table to the old table's name
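As a sketch with hypothetical table and column names (the change here adds NOT NULL and a default to "name"):
BEGIN TRANSACTION;
CREATE TABLE items_new (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL DEFAULT 'unnamed'  -- the modified constraint/default
);
INSERT INTO items_new (id, name)
    SELECT id, COALESCE(name, 'unnamed') FROM items;
DROP TABLE items;
ALTER TABLE items_new RENAME TO items;
COMMIT;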
That seems like a ridiculous amount of work for something that should be relatively easy. Is there not an easier way? All I need to do is change a constraint on an existing column and give it a default value.
That's one of the better-known drawbacks of SQLite (no MODIFY COLUMN support in ALTER TABLE); it's on the list of SQL features that SQLite does not implement.
edit: Removed the bit that mentioned it might be supported in a future release, as the page was updated to indicate that is no longer the case.
If the modification is not too big (e.g. change the length of a varchar), you can dump the db, manually edit the database definition and import it back again:
echo '.dump' | sqlite3 test.db > test.dump
then open the file with a text editor, search for the definition you want to modify and then:
cat test.dump | sqlite3 new-test.db
As said here, these kinds of features are not implemented by SQLite.
As a side note, you could accomplish your first two steps with a CREATE TABLE ... AS SELECT:
CREATE TABLE tmp_table AS SELECT id, name FROM src_table
When I ran "CREATE TABLE tmp_table AS SELECT id, name FROM src_table", I lost all the column type information (e.g., a time field turned into an integer field).
As initially stated, it seems like it should be easier, but here is what I did to fix it. I had this problem because I wanted to change the NOT NULL constraint on a column, and SQLite doesn't really help there.
Using the 'SQLite Manager' Firefox addon browser (use what you like). I created the new table by copying the old create statement, made my modification, and executed it. Then to get the data copied over, I just highlighted the rows, R-click 'Copy Row(s) as SQL', replaced "someTable" with my table name, and executed the SQL.
Various good answers already given to this question, but I also suggest taking a look at the sqlite.org page on ALTER TABLE which covers this issue in some detail: What (few) changes are possible to columns (RENAME|ADD|DROP) but also detailed workarounds for other operations in the section Making Other Kinds Of Table Schema Changes and background info in Why ALTER TABLE is such a problem for SQLite. In particular the workarounds point out some pitfalls when working with more complex tables and explain how to make changes safely.
