Teradata: Is there a way to generate DDL from a view or select statement?

I am using a global application user account to access database A. This user account does not have permissions to modify database A's schema (i.e., create tables, modify tables, etc.). This user also has access to database B, but only to views. I need to run SQL to feed data from a view in database B into a table in database A.
In a perfect world, I would be able to use this SQL:
create table database_a.mytable as (select * from database_b.myview) with no data
However, the user can't create tables in database A. If I could get the DDL of the select statement then I could log in under my personal account (which doesn't have any access to database B) and run the DDL in database A to create the table.
The only other option is to manually write the SQL, but I don't want to do that, especially since this view I am wanting to copy has many columns of varying data types and sizes.
Edit: I may be getting closer. I just experimented with this:
show (select * from database_b.myview)
However, it generated the DDL of every single table that is used in the view itself, as well as the definition for the view. This doesn't really help me since I just want the schema of the select statement itself. In other words, I need what would be generated if I were to use the create table as statement mentioned above.
Edit for Rob: Perhaps "DDL" was the wrong term to use. Using show view db.myview just shows the definition of the view, not the schema it represents. In my above example of create table as, I show how you can create a table that mimics the schema of a result set returned in a select. It generates a DDL on the back end for creating a table and then executes that DDL to actually create the table. You can then say show table db.newtable and see the new table's DDL. I want to get that DDL directly from a select statement so that I can copy it, log out of the app account, into my personal account, and then execute the DDL to create the table.
This is only to save me the headache of having to type out the DDL manually by hand to save time and reduce typing errors, especially since the source view has so many columns. That said, I think hitting up the DBA or writing some snazzy stored procedure to do dynamic stuff would be a bit over the top for my needs. I think there has to be a way to get the DDL for creating a table schema directly from a select statement.

Generate DDL Statements for objects:
SHOW TABLE {DatabaseB}.{Table1};
SHOW VIEW {DatabaseB}.{View1};
Breakdown of columns in a view:
HELP VIEW {DatabaseB}.{View1};
However, without the ability to create the object in the target database DatabaseA, you don't have much leverage. Obviously, if the object already existed, INSERT INTO ... SELECT ... FROM DatabaseB.Table1 or MERGE INTO would be options that you have already explored.
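For example, if the target table had already been created for you (object names here are placeholders for your own), the load itself could be as simple as:
INSERT INTO DatabaseA.MyTable
SELECT *
FROM DatabaseB.MyView;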
Alternative Solution
Would it be possible to have a stored procedure created that dynamically created the table based on the view name that is provided? The global application account would simply need privilege to execute the procedure. Generally the user creating the stored procedure would need the permissions to perform the actions contained within the stored procedure. (You have some additional flexibility with this in Teradata 13.10.)
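As a rough, untested sketch of that idea (procedure, database, and view names are placeholders, and the exact dynamic SQL syntax may need adjusting for your release), the procedure could build the CREATE TABLE statement as a string and run it with DBC.SysExecSQL:
REPLACE PROCEDURE DatabaseA.MaterializeViewShell (IN src_view VARCHAR(128), IN tgt_table VARCHAR(128))
BEGIN
  -- Build a CREATE TABLE ... AS ... WITH NO DATA statement from the parameters
  -- and execute it dynamically; the procedure's creator needs the CREATE TABLE right.
  CALL DBC.SysExecSQL('CREATE TABLE DatabaseA.' || :tgt_table ||
       ' AS (SELECT * FROM DatabaseB.' || :src_view ||
       ') WITH NO DATA NO PRIMARY INDEX');
END;
The global application account would then only need the EXECUTE PROCEDURE privilege on it.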
There are some caveats with this approach. You are attempting to materialize views that could reference anywhere from hundreds to billions of records. These aren't simple 1:1 views that are put on top of the target tables. Trying to determine the required space in the target database to materialize the view will be difficult. Performance can and will vary depending on the complexity of the view and the data volumes. This will not be a fast-path or data block optimized operation.
As a DBA, I would be concerned with this approach being taken on by a global application account without fully understanding the intent. I trust you have an open line of communication with the DBA(s) involved for supporting this system. I'm sure there are reasons for your madness that can't be disclosed here.
Possible Solution - VOLATILE TABLE
Unless the implicit privilege for CREATE TABLE has been revoked from the global application account, this solution should work.
Volatile tables do not require perm space. Their table definitions persist for the duration of the session, and any data inserted into them relies on the spool space of the user who instantiated them.
CREATE VOLATILE TABLE {Global Application UserID}.{TableA_Copy} AS
(
SELECT *
FROM {DatabaseB}.{TableA}
)
WITH NO DATA
NO PRIMARY INDEX
ON COMMIT PRESERVE ROWS;
SHOW TABLE {Global Application UserID}.{TableA_Copy};
I opted to use a Teradata 13.10 feature called NO PRIMARY INDEX. By default, CREATE TABLE AS will take the first column of the SELECT statement and make it the PRIMARY INDEX of the table. This could lead to skewing and perm space issues in your testing depending on the data demographics. You can specify an explicit PRIMARY INDEX on your own as you understand the underlying data. (See the DDL manuals for details on the syntax if you're uncertain.)
The use of ON COMMIT PRESERVE ROWS is probably extraneous for the intent of this example. In practice, though, if you put any data into that table for testing, this clause would be beneficial in Teradata mode, as the data would otherwise be lost immediately after the CREATE TABLE or any other data manipulation performed against the volatile table.
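Picking up on the explicit PRIMARY INDEX point: if you do know a reasonable distribution column, a variant like the following (column name is a placeholder) would avoid the skewing concern while still matching the view's shape:
CREATE VOLATILE TABLE {TableA_Copy} AS
(
SELECT *
FROM {DatabaseB}.{TableA}
)
WITH NO DATA
PRIMARY INDEX ({SomeWellDistributedColumn})
ON COMMIT PRESERVE ROWS;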

Related

Oracle APEX PL/SQL process to insert multi-select items into association table for m:m relationship failing silently

I am implementing a form on a table that allows the end user to create a new project. This form contains a shuttle that allows the user to select the disposal site(s) (one or more) that the project pertains to. I would like to use the output of the shuttle values to populate an association table between projects and disposal sites, which is a many-to-many relationship.
This is my approach so far:
Created an additional VARCHAR2(4000) column in the projects table (called 'Shuttle') to store the shuttle output. The shuttle output in this column looks something like 'CA-AT-D109Z2:CA-AT-D115:CA-AT-D174Z2'.
Created a process to split the value on ':' and then add the values to the association table using the following PL/SQL code:
declare
  cursor c_values is
    select t.column_value as disposal_sites
      from table(apex_string.split(:P28_SHUTTLE, ':')) t
     where t.column_value is not null;
begin
  for c in c_values loop
    insert into MP_MDB_PROJECT_2_DSITE (PROJECTIDFK, DISPOSALSITEIDFK)
    values (:P28_PROJECTNUMBER, c.disposal_sites);
  end loop;
end;
The process/code enters the values from the shuttle into the association table in a loop as expected for the disposal site, but PROJECTIDFK (the key that is '1' in the 1:m relationship) remains blank. The code doesn't throw an error, so I am having trouble debugging.
I think the problem may be that the project number is computed after submission based on the user's selections. Therefore, when the process runs it finds :P28_PROJECTNUMBER to be null. Is there a way to ensure the computation that determines :P28_PROJECTNUMBER takes place first and is then followed by the PL/SQL process?
All help appreciated
If the form you're implementing is a native APEX form, then you can use the attribute "Return Primary Key(s) after Insert" in the "Automatic Row Processing - DML" process to ensure that the primary key page item contains the inserted value for any processes executed after that process.
Just make sure the process that handles the shuttle data is executed after the DML process.

Creation of Flyway "schema_version" fails for dashDB

I'm using Flyway to manage db migration on IBM dashDB. This database organizes table content by column by default, which in particular makes the creation of the "schema_version" table fail.
To get it to work, the table creation SQL statement just needs to include the "ORGANIZE BY ROW" directive:
CREATE TABLE "schema_version" (
    ...
) ORGANIZE BY ROW
What would be the best approach to handle this issue? I'm looking for a solution that does not impact the default table organization.
Thanks for helping,
Cheers.
dashDB will perform best when all tables are column-based. When you start to mix row and column based tables, many operations are then performed in "compensation" which basically means they won't take full advantage of the columnar engine.
There are currently some compatibility reasons why a columnar table cannot be created and thus a row-based table must be used, but neither the original DDL nor the error is stated, so I can't tell in this case. If you can provide the full CREATE TABLE statement and the resulting error (if you have it), I can possibly provide an alternative solution that would allow you to still use all column-based tables.
If you only want to change a particular table from column-organized to row-organized, then an "ORGANIZE BY ROW" clause on the table definition would be the recommended way to approach this. (This seems to be what you're doing.)
Changing the default table organization will change how tables are created when you don't put an "ORGANIZE BY" clause in your table DDL.
If you have admin privileges on your dashDB instance, you can change the default table organization via 'Run SQL' in the dashDB console or using a dashDB client (for example: clp/clpplus).
Set default table organization to ROW:
call ADMIN_CMD('UPDATE DB CFG USING DFT_TABLE_ORG ROW');
Set default table organization to COLUMN: (default dashDB configuration)
call ADMIN_CMD('UPDATE DB CFG USING DFT_TABLE_ORG COLUMN');
Analytics will perform much better with column-organized tables, so it's recommended to have the majority of your tables column organized.
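If you do go the default-organization route, one possible (untested) sequence is to flip the default only long enough for Flyway to create its metadata table, then flip it back so everything else stays column organized:
call ADMIN_CMD('UPDATE DB CFG USING DFT_TABLE_ORG ROW');
-- run the first "flyway migrate" (or "flyway baseline") here so "schema_version"
-- is created as a row-organized table
call ADMIN_CMD('UPDATE DB CFG USING DFT_TABLE_ORG COLUMN');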

Accessing a TEMP TABLE in a TRIGGER on a VIEW

I need to parameterize a view, and I am doing so by creating a TEMP TABLE which has the parameters for the view.
CREATE TEMP TABLE parms (parm1 INTEGER, parm2 INTEGER);
CREATE VIEW tableview AS ...
The VIEW is rather complex, but it basically uses these two parameters to kick start a recursive CTE, and there isn't any other way that I have found to express the view without these parameters.
The parameters must be stored in a temporary table because each connection should be able to have its own view with different parameters.
In any case, this works fine for creating the view itself, so long as I create the same TEMP TABLE at the start of any queries that use the view, e.g.:
CREATE TEMP TABLE parms (parm1 INTEGER, parm2 INTEGER);
INSERT INTO parms (parm1,parm2) VALUES (5,66);
SELECT * FROM tableview;
I am able to do the same thing to create a trigger to allow inserts on the view:
CREATE TEMP TABLE parms (parm1 INTEGER, parm2 INTEGER);
CREATE TRIGGER tableinsert INSTEAD OF INSERT ON tableview ...
However, when I try to do an actual INSERT (re-creating the TEMP TABLE first as before) I get an error:
no such table: main.parms
If I create a non-temporary table, I do not get this error, but then I have the problem that different connections can't have their own separate views.
I have reviewed the documentation for triggers, and it mentions caveats of using temporary triggers on a non-temporary table, but I don't see anything regarding the reverse.
I did find a reference elsewhere that indicated that "the table... must exist in the same database as the table or view to which the trigger is attached". I thought a temporary table was part of the current database, is this not true? Is there some way to make this true?
I also tried accessing the parms table as temp.parms in the TRIGGER, but got the error:
qualified table names are not allowed on INSERT, UPDATE, and DELETE
statements within triggers
If I can't use a temporary table, is there some way to work around it to accomplish the same thing?
Update: Ok, so it seems to be an SQLite limitation. After digging around a bit in the SQLite source code, it seems to be pretty trivial to allow SELECT access to a temporary table in a trigger. However, allowing UPDATE access appears to be a lot harder.
Temporary objects are created in a separate database named temp, so they are not accessible from triggers in other databases.
The remaining mechanism to get a connection-specific value into a trigger is to use a user-defined function.
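As a sketch of that approach (table, column, and function names here are made up): the application registers a scalar function on each connection, e.g. with sqlite3_create_function, and the trigger calls it instead of reading a temp table:
-- assumes get_parm1() has been registered on this connection by the application
CREATE TRIGGER tableinsert INSTEAD OF INSERT ON tableview
BEGIN
  INSERT INTO base_table (parent_id, value)
  VALUES (get_parm1(), NEW.value);
END;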

Determine flyway variables from earlier SQL step

I'd like to use Flyway for a DB update in a situation where a DB already exists with production data in it. The problem I'm looking at now (and I have not found a nice solution for yet) is the following:
There is an existing DB table with numeric IDs, e.g.
create table objects ( obj_id number, ...)
There is a sequence "obj_seq" to allocate new obj_ids.
During my DB migration I need to introduce a few new objects, hence I need new object IDs. However, I do not know at development time what ID numbers these will be.
There is a DB trigger which later references these IDs. To improve performance I'd like to avoid determining the actual IDs every time the trigger runs, but rather put the IDs directly into the trigger.
Example (very simplified) of what I have in mind:
insert into objects (obj_id, ...) values (obj_seq.nextval, ...)
select obj_seq.currval from dual
-> store this in variable "newID"
create trigger on some_other_table
when new.id = newID
...
Now, is it possible to dynamically determine/use such variables? I have seen the flyway placeholders but my understanding is that I cannot set them dynamically as in the example above.
I could use a Java-based migration script and do whatever string magic I like - so, that would be a way of doing it, but maybe there is a more elegant way using SQL?
Many thx!!
tge
If the table you are updating contains only reference data, get rid of the sequence and assign the IDs manually.
If it contains a mix of reference and user data, you need to select the id based on values in other columns.
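As a rough illustration of the first suggestion (table, column names, and values are placeholders): if the rows being added are pure reference data, the migration can assign the IDs itself, and the trigger can then use those literals directly with no runtime lookup:
-- hypothetical Flyway migration, e.g. V2__add_reference_objects.sql
insert into objects (obj_id, obj_name) values (1001, 'NEW_OBJECT_X');

create or replace trigger some_other_table_trg
before insert on some_other_table
for each row
begin
  if :new.id = 1001 then  -- hand-assigned reference ID, known at development time
    null; -- ... actual trigger logic here ...
  end if;
end;
/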

ASP.NET SqlDataSource update and create FK reference

The short version:
I have a grid view bound to a data source which has a SelectCommand with a left join in it because the FK can be null. On Update I want to create a record in the FK table if the FK is null and then update the parent table with the new records ID. Is this possible to do with just SqlDataSources?
The detailed version:
I have two tables: Company and Address. The column Company.AddressId can be null. On my ascx page I am using a SqlDataSource to select a left join of company and address, and a GridView to display the results. By having the SqlDataSource's UpdateCommand and DeleteCommand execute two statements separated by a semi-colon, I am able to use the GridView's Edit and Delete functionality to update both tables simultaneously.
The problem I have is when Company.AddressId is null. What I need to have happen is for the data source to create a record in the Address table, update the Company table with the new Address.ID, and then proceed with the update as usual. I would like to do this with just data sources if possible for consistency's/simplicity's sake. Is it possible to have my data source do this, or perhaps add a second data source to the page to handle some of this?
Once I have that working I can probably figure out how to make it work with the InsertCommand as well, but if you are on a roll and have an answer for how to make that fly too, feel free to provide it.
Thanks.
"execute two statements separated by a semi-colon"
I don't see any reason why it wouldn't be possible to do both an INSERT and UPDATE in two statements with SqlDataSource just like you are doing here.
However, just so you know, if you have a lot of traffic or many users using the application at the same time, you can run into concurrency issues where one user does something that affects another user, and unexpected results can cascade and mess up your data. In general, for things like what you are doing - an INSERT and UPDATE involving primary or foreign keys - SQL TRANSACTIONs are usually used. But you must execute them as SQL stored procedures (or functions) on your SQL database. You are still able to call them from your SqlDataSource, however, by simply telling it that you are calling a stored procedure.
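For example, a procedure along these lines (table and column names are assumptions, not taken from your schema) could be wired up as the SqlDataSource's UpdateCommand with UpdateCommandType="StoredProcedure":
CREATE PROCEDURE dbo.Company_Update
    @CompanyId   INT,
    @CompanyName NVARCHAR(100),
    @Street      NVARCHAR(200),
    @City        NVARCHAR(100)
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRANSACTION;

    DECLARE @AddressId INT;
    SELECT @AddressId = AddressId FROM Company WHERE Id = @CompanyId;

    IF @AddressId IS NULL
    BEGIN
        -- no address yet: create one and point the company at it
        INSERT INTO Address (Street, City) VALUES (@Street, @City);
        SET @AddressId = SCOPE_IDENTITY();
        UPDATE Company SET AddressId = @AddressId WHERE Id = @CompanyId;
    END
    ELSE
    BEGIN
        UPDATE Address SET Street = @Street, City = @City WHERE Id = @AddressId;
    END

    UPDATE Company SET Name = @CompanyName WHERE Id = @CompanyId;

    COMMIT TRANSACTION;
END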
