Is it possible to run a Teradata query in Excel that uses Volatile tables?

My Teradata query creates a volatile table that is joined to existing views. When I link the query to Excel, the following error pops up: "Teradata: [Teradata Database] [3932] Only an ET or null statement is legal after a DDL Statement". Is there a workaround for someone who does not have write permissions in Teradata to create a real view or table? I want to avoid linking to Teradata in SQL and running an open query to pull in the data needed.
This is for Excel 2016 64-bit and Teradata version 15.10.1.12.

Normally this error will occur if you are using ANSI mode or have issued a BT (Begin Transaction) in BTET mode.
Here are a few workarounds to try:
1. Issue an ET; statement (commit) after the CREATE VOLATILE TABLE statement. If you are using ANSI mode, use COMMIT; instead of ET;. If you are unsure, try each one in turn; only one will be valid, but both do the same thing. Make sure your volatile table is defined with ON COMMIT PRESERVE ROWS. See the sketch after this list.
2. Try using BTET mode (a.k.a. Teradata mode) when establishing the session. The Teradata ODBC driver has a Session Mode setting in its DSN configuration for this.
3. Try using a Global Temporary table. These work similarly to volatile tables, except that you define them once and the definition sticks around. That is, you can create it in, say, BTEQ or SQL Assistant. The definition is common to all users and sessions (i.e. your Excel session), but the content is transient and unique to each session (like a volatile table). If you do not have CREATE GLOBAL TEMPORARY TABLE permissions, ask your DBA.
4. Move the SELECT part of your INSERT into the volatile table into the query that selects the data from the volatile table. See the simple example below.
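For point 1, here is a minimal sketch of the statement sequence (ET; assumes BTET/Teradata mode; in ANSI mode replace it with COMMIT;):
create volatile table tmp (id Integer)
ON COMMIT PRESERVE ROWS;
ET; -- commit the DDL so the next statement is legal (use COMMIT; in ANSI mode)
insert into tmp
select customer_number
from customer
where X = Y and yr = 2019
;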
Here is a simple example to illustrate point 4.
Current:
create volatile table tmp (id Integer)
ON COMMIT PRESERVE ROWS;
insert into tmp
select customer_number
from customer
where X = Y and yr = 2019
;
select a,b,c
from another_tbl A join TMP T ON
A.id = T.id
;
Becomes:
select a,b,c
from another_tbl A join (
select customer_number as id
from customer
where X = Y and yr = 2019
) AS T
ON
A.id = T.id
;
Or better yet, just join your tables directly.
Note: the first sequence (CREATE TABLE, INSERT INTO, and SELECT) is a three-statement series. It will return three "result sets": the first two will be row counts, and the last will be the actual data. Most programs (including, I think, Excel) cannot process multiple-result-set responses. This is one of the reasons it is difficult to use Teradata macros with client tools like Excel.
The latter solution (a single select) avoids this potential problem.
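Going back to point 3, the one-time Global Temporary table setup would look like this (a minimal sketch, run once in BTEQ or SQL Assistant; names follow the example above). Note that the per-session insert-then-select pair still returns two result sets, so the single-select rewrite above may remain the simplest option for Excel.
create global temporary table tmp_gtt (id Integer)
ON COMMIT PRESERVE ROWS; -- definition is permanent; rows are private to each session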

Related

How to introduce indexing to a SQLite query in Android?

In my Android application, I use Cursor c = db.rawQuery(query, null); to query data from a local SQLite database, and one of the query strings looks like the following:
SELECT t1.* FROM table t1
WHERE NOT EXISTS (
SELECT 1 FROM table t2
WHERE t2.start_time = t1.start_time AND t2.stop_time > t1.stop_time
)
However, the query gets very slow as the database grows. I have been trying to introduce indexing to speed it up, but so far without much success, so some help would be great; it is also hard to find examples of this for Android applications.
You can create a composite index for the columns start_time and stop_time:
CREATE INDEX idx_name ON table_name(start_time, stop_time);
You can read in The SQLite Query Optimizer Overview:
The ON and USING clauses of an inner join are converted into additional terms of the WHERE clause prior to WHERE clause analysis ...
and:
If an index is created using a statement like this:
CREATE INDEX idx_ex1 ON ex1(a,b,c,d,e,...,y,z);
Then the index might be used if the initial columns of the index (columns a, b, and so forth) appear in WHERE clause terms. The initial columns of the index must be used with the = or IN or IS operators. The right-most column that is used can employ inequalities.
You may have to uninstall the app from the device so that the database is deleted and recreated on the next run, or increase the database version number so that you can create the index in the onUpgrade() method.
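To verify that SQLite actually uses the index, you can prefix the query with EXPLAIN QUERY PLAN. A quick sketch (my_table stands in for the question's table name):
CREATE INDEX idx_time ON my_table(start_time, stop_time);
EXPLAIN QUERY PLAN
SELECT t1.* FROM my_table t1
WHERE NOT EXISTS (
SELECT 1 FROM my_table t2
WHERE t2.start_time = t1.start_time AND t2.stop_time > t1.stop_time
);
-- The plan for the subquery should report something like
-- "SEARCH my_table AS t2 USING INDEX idx_time (start_time=? AND stop_time>?)"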

Need to get data from a table using database link where database name is dynamic

I am working on a system where I need to create a view. I have two databases:
1.CDR_DB
2.EMS_DB
I want to create the view on EMS_DB using a table from CDR_DB. I am trying to do this via a dblink.
The dblink is created at runtime, i.e. the DB name is decided when the user installs the database, and the dblink is derived from that db name.
My issue is that I am trying to create a view from a table whose name is decided at run time. Please see the query below:
select count(*)
from (SELECT CONCAT('cdr_log#', alias) db_name
FROM ems_dbs a,
cdr_manager b
WHERE a.db_type = 'CDR'
and a.ems_db_id = b.cdr_db_id
and b.op_state = 4 ) db_name;
In this query, cdr_log#"db_name" is the runtime table name (db_name gets created at runtime).
When I run the above query, I do not get the desired result; it returns '1'.
When running only the sub-query from the above query:
SELECT CONCAT('cdr_log#', alias) db_name
FROM ems_dbs a,
cdr_manager b
WHERE a.db_type = 'CDR'
and a.ems_db_id = b.cdr_db_id
and b.op_state = 4;
I get the desired result, i.e. cdr_log#cdrdb01,
but when I run the full query, the result is '1'.
Also, when I run:
select count(*) from cdr_log#cdrdb01;
I get the result '24', which is correct.
The expected result is the same output as from the query:
select count(*) from cdr_log#cdrdb01;
---24
But the full query mentioned initially returns '1'.
Please let me know a way to solve the above problem. I found a way to do it via a procedure, but I'm not sure how I can invoke that procedure.
Can this be done as part of a subquery, as I have used above?
You're not going to be able to create a view that will dynamically reference an object over a database link unless you do something like create a pipelined table function that builds the SQL dynamically.
If the database link is created and named dynamically at installation time, it would probably make the most sense to create any objects that depend on the database link (such as the view) at installation time too. Dynamic SQL tends to be much harder to write, maintain, and debug than static SQL so it would make sense to minimize the amount of dynamic SQL you need. If you can dynamically create the view at installation time, that's likely the easiest option. Even better than directly referencing the remote object in the view, particularly if there are multiple objects that need to reference the remote object, would probably be to have the view reference a synonym and create the synonym at install time. Something like
create synonym cdr_log_remote
for cdr_log#<<dblink name>>;
create or replace view view_name
as
select *
from cdr_log_remote;
If you don't want to create the synonym/ view at installation time, you'd need to use dynamic SQL to reference the remote object. You can't use dynamic SQL as the SELECT statement in a view so you'd need to do something like have a view reference a pipelined table function that invokes dynamic SQL to call the remote object. That's a fair amount of work but it would look something like this
-- Define an object that has the same set of columns as the remote object
create type typ_cdr_log as object (
col1 number,
col2 varchar2(100)
);
create type tbl_cdr_log as table of typ_cdr_log;
create or replace function getAllCDRLog
return tbl_cdr_log
pipelined
is
l_rows tbl_cdr_log; -- collection type, so bulk collect works
l_sql varchar2(1000);
l_dblink_name varchar2(100);
begin
SELECT alias db_name
INTO l_dblink_name
FROM ems_dbs a,
cdr_manager b
WHERE a.db_type = 'CDR'
and a.ems_db_id = b.cdr_db_id
and b.op_state = 4;
l_sql := 'SELECT col1, col2 FROM cdr_log#' || l_dblink_name;
execute immediate l_sql
bulk collect into l_rows;
for i in 1 .. l_rows.count
loop
pipe row( l_rows(i) );
end loop;
return;
end;
create or replace view view_name
as
select *
from table( getAllCDRLog );
Note that this will not be a particularly efficient way to structure things if there are a large number of rows in the remote table since it reads all the rows into memory before starting to return them back to the caller. There are plenty of ways to make the pipelined table function more efficient but they'll tend to make the code more complicated.

Create temporary table

I'm coming from a SQL Server environment where you can declare a temp table with #table, but as I've read, you can't do this in Oracle.
I want to get a value for 500,000 hardcoded ids from a table, but as the IN clause has a limit of 1000 elements, I need to find another way. Is the best way to create a temporary table, insert the hardcoded values, and then join the other table which contains the values I need?
My client (Toad) has autocommit set to off and I don't want to commit anything; I want it to be session-based, so that when I close the database client the temporary table disappears. Is the code below the right way to do this in Oracle?
CREATE GLOBAL TEMPORARY TABLE Test(HardcodedId number(10))
ON COMMIT DELETE ROWS;
I've also tried to use an inner join and, in the join, select the hardcoded values from dual, but this creates a column for each value and I'm not able to use a reference to join with. Is it possible to select all the values as a single column from dual?
You can use something like this (500 UNION ALLs):
select * from (
select '1' as id from dual
union all
select '2' as id from dual
...) q
Then you can join q with other tables on its id column.
For your situation, I would use a GTT (global temporary table) - which you have already researched by the looks.
The advantage of a GTT is that it's a permanent object (so no need to constantly create and drop it) and the data "stored" in it is on a session basis.
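A minimal sketch of that approach, reusing the question's table (other_table and its id column are illustrative; note that ON COMMIT PRESERVE ROWS keeps the rows for the whole session, whereas the ON COMMIT DELETE ROWS in the question would empty the table at every commit):
-- Run once: the definition is permanent, the contents are per-session
CREATE GLOBAL TEMPORARY TABLE Test (HardcodedId number(10))
ON COMMIT PRESERVE ROWS;
-- Then, in each session:
INSERT INTO Test (HardcodedId) VALUES (1);
INSERT INTO Test (HardcodedId) VALUES (2);
-- ... load the remaining ids ...
SELECT t.*
FROM other_table t
JOIN Test x ON x.HardcodedId = t.id;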

Not sure about the type of SQL Server lock to use for synchronization

I have an ASP.NET web application that populates the SQL Server 2008 database table like this:
INSERT INTO tblName1 (col1, col2, col3)
VALUES(1, 2, 3)
I also have a separate service application that processes the contents of that table (on the background) by first renaming that table, and then by creating an empty table as such:
SET XACT_ABORT ON
BEGIN TRANSACTION
--Rename table
EXEC sp_rename 'tblName1', 'temp_tblName1'
--Create new table
CREATE TABLE tblName1(
id INT NOT NULL IDENTITY(1,1) PRIMARY KEY,
col1 INT,
col2 INT,
col3 INT
)
COMMIT
SET XACT_ABORT OFF
--Begin working with the 'temp_tblName1' table
What I am not sure about is which SQL lock I need to use on the tblName1 table in this situation.
PS. To give you the frequency with which these two code samples run: the first may run several times a second (although usually less frequently), and the second runs twice a day.
As some of the comments have suggested, consider doing this differently. You may benefit from using the snapshot isolation level. Using snapshot isolation requires ALLOW_SNAPSHOT_ISOLATION to be set to ON on the database. This setting is off by default, so you'll want to check whether you can turn it on.
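For example (the database name is illustrative):
ALTER DATABASE [YourDatabase] SET ALLOW_SNAPSHOT_ISOLATION ON;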
Once you are able to use snapshot isolation, you would not need to change your INSERT statement, but your other process could change to something like:
SET XACT_ABORT ON
SET TRANSACTION ISOLATION LEVEL SNAPSHOT
BEGIN TRANSACTION
-- Do whatever this process does, but don't rename the table.
-- If you want to get rid of the old records:
DELETE [tblName1] WHERE 1 = 1
-- Then
COMMIT TRANSACTION
In case you really do need to create a new non-temporary table for some reason, you may need to do so before entering the transaction, as there are some limits on what you are allowed to do during snapshot isolation.
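If you do need the new table, a sketch of that ordering, reusing the question's column list (the new table name is illustrative):
-- DDL runs outside the snapshot transaction
CREATE TABLE tblName1_new (
id INT NOT NULL IDENTITY(1,1) PRIMARY KEY,
col1 INT,
col2 INT,
col3 INT
)
SET TRANSACTION ISOLATION LEVEL SNAPSHOT
BEGIN TRANSACTION
-- work with the tables here
COMMIT TRANSACTION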

System or catalog tables - DBA_ / DBC - outputs are volatile

I am trying to get a list of all tables/views (in other words, all objects) where a particular field is referenced, using the system or catalog tables. I am using the following query:
select *
from dba_col_comments
where column_name like('SXX_AXXX_%')
order by 1;
However, the output is volatile: when I repeatedly run the same query without any changes, the output varies. For instance, it produced 9300 records, then 9350 a couple of minutes later, and then 9347 a couple of minutes after that.
I am observing the same behaviour in Teradata as well.
My theory would be that, in a production environment, temporary objects that are created probably get an entry in the system/catalog tables.
Any thoughts/directions?
In Teradata you will find that as global temporary tables are instantiated (referenced by an SQL statement), records are added to the data dictionary table TVM. These records are then dropped after the session logs off, leaving just the base table record associated with the original CREATE GLOBAL TEMPORARY TABLE statement that was submitted.
You can find these instances using the view DBC.AllTempTables.
In Teradata, volatile tables are not maintained within the data dictionary.
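For example, a quick way to see which global temporary tables are currently materialized (assuming you have access to the DBC views):
SELECT *
FROM DBC.AllTempTables;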
EDIT - Your mileage may vary, but this should get you started on Teradata:
SELECT D1.DatabaseNameI AS DatabaseName_
, T1.TVMNameI AS TableName_
, F1.FieldName AS ColumnName_
FROM "DBC".TVM T1
INNER JOIN
"DBC".Dbase D1
ON D1.DatabaseId = T1.DatabaseId
INNER JOIN
"DBC".TVFields F1
ON F1.DatabaseId = T1.DatabaseId
AND F1.TableId = T1.TVMId
WHERE F1.FieldName = 'MyColumn'
--AND D1.DatabaseNameI IN ('{Database1}', ... '{Database99}') -- Filter on databases
AND F1.FieldType in ('i', 'i1', 'i2', 'i8') -- Integer, ByteInt, SmallInt, BigInt
--AND T1.TableKind IN ('T') -- Optional Filter to just tables.
AND NOT EXISTS
(SELECT 'x'
FROM "DBC".TempTables TT1
WHERE Tt1.TableId = T1.TVMId
)
;
