System or catalog tables - DBA_ / DBC - outputs are volatile - oracle11g

I am trying to get a list of all tables/views (in other words, all objects) where a particular field is referenced, using the system or catalog tables. I am using the following query.
select *
from dba_col_comments
where column_name like('SXX_AXXX_%')
order by 1;
However, the output is volatile. When I repeatedly run the same query without any changes, the output varies. For instance, it produced 9,300 records, then 9,350 a couple of minutes later, and then 9,347 a couple of minutes after that.
I am observing the same behaviour in Teradata as well.
My theory is that, in a production environment, temporary objects that are created probably get an entry in the system/catalog tables.
Any thoughts/directions?

In Teradata you will find that as global temporary tables are instantiated (referenced by an SQL statement), records should be added to the data dictionary table TVM. These records are then dropped after the session logs off, leaving just the base table record associated with the original CREATE GLOBAL TEMPORARY TABLE statement that was submitted.
You can find these instances using the view DBC.AllTempTables.
In Teradata, volatile tables are not maintained within the data dictionary.
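For example, a quick way to see which global temporary tables are currently materialized is to query that view directly (a minimal sketch; the exact column list of DBC.AllTempTables varies slightly between releases, so SELECT * is used here):
-- List global temporary table instances materialized by currently active sessions.
SELECT *
FROM DBC.AllTempTables;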
EDIT - Your mileage may vary, but this should get you started on Teradata:
SELECT D1.DatabaseNameI AS DatabaseName_
     , T1.TVMNameI      AS TableName_
     , F1.FieldName     AS ColumnName_
FROM "DBC".TVM T1
INNER JOIN "DBC".Dbase D1
        ON D1.DatabaseId = T1.DatabaseId
INNER JOIN "DBC".TVFields F1
        ON F1.DatabaseId = T1.DatabaseId
       AND F1.TableId    = T1.TVMId
WHERE F1.FieldName = 'MyColumn'
--AND D1.DatabaseNameI IN ('{Database1}', ... '{Database99}') -- Filter on databases
  AND F1.FieldType IN ('i', 'i1', 'i2', 'i8') -- Integer, ByteInt, SmallInt, BigInt
--AND T1.TableKind IN ('T') -- Optional filter to just tables
  AND NOT EXISTS
      (SELECT 'x'
       FROM "DBC".TempTables TT1
       WHERE TT1.TableId = T1.TVMId)
;

Related

How to introduce indexing to sqlite query in android?

In my Android application, I use Cursor c = db.rawQuery(query, null); to query data from a local SQLite database, and one of the query strings looks like the following:
SELECT t1.* FROM table t1
WHERE NOT EXISTS (
SELECT 1 FROM table t2
WHERE t2.start_time = t1.start_time AND t2.stop_time > t1.stop_time
)
However, the issue is that the query gets very slow when the database gets huge. I have been trying to introduce indexing to speed up the query, but so far I have not been very successful, so it would be great to have some help here, as it is also hard to find examples of this for Android applications.
You can create a composite index for the columns start_time and stop_time:
CREATE INDEX idx_name ON table_name(start_time, stop_time);
You can read in The SQLite Query Optimizer Overview:
The ON and USING clauses of an inner join are converted into
additional terms of the WHERE clause prior to WHERE clause analysis
...
and:
If an index is created using a statement like this:
CREATE INDEX idx_ex1 ON ex1(a,b,c,d,e,...,y,z);
Then the index might be used if the initial columns of the index
(columns a, b, and so forth) appear in WHERE clause terms. The initial
columns of the index must be used with the = or IN or IS operators.
The right-most column that is used can employ inequalities.
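This rule fits the query above: the leading index column start_time is used with =, and the right-most used column stop_time with an inequality (>). To confirm that SQLite actually picks the index up, you can prefix the query with EXPLAIN QUERY PLAN (a sketch reusing the idx_name/table_name placeholders from above):
-- Placeholders idx_name / table_name / start_time / stop_time as used above.
EXPLAIN QUERY PLAN
SELECT t1.*
FROM table_name t1
WHERE NOT EXISTS (
    SELECT 1
    FROM table_name t2
    WHERE t2.start_time = t1.start_time
      AND t2.stop_time > t1.stop_time
);
-- The plan for the correlated subquery should mention the index, e.g.
-- "SEARCH t2 USING COVERING INDEX idx_name (start_time=? AND stop_time>?)".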
You may have to uninstall the app from the device so that the database is deleted and then rerun the app to recreate it, or increase the database version number so that you can create the index in the onUpgrade() method.

Is it possible to run a Teradata query in Excel that uses Volatile tables?

My Teradata query creates a volatile table that is used to join to existing views. When linking the query to Excel, the following error pops up: "Teradata: [Teradata Database] [3932] Only an ET or null statement is legal after a DDL Statement". Is there a workaround for this for someone who does not have write permissions in Teradata to create a real view or table? I want to avoid linking to Teradata in SQL and running an open query to pull in the data needed.
This is for Excel 2016 64-bit, using Teradata version 15.10.1.12.
Normally this error will occur if you are using ANSI mode or have issued a BT (Begin Transaction) in BTET mode.
Here are a few workarounds to try:
1. Issue an ET; statement (commit) after the CREATE VOLATILE TABLE statement (see the sketch after this list). If you are using ANSI mode, use COMMIT; instead of ET;. If you are unsure, try each one in turn; only one will be valid, but both do the same thing. Make sure your volatile table includes ON COMMIT PRESERVE ROWS.
2. Try using BTET mode (a.k.a. Teradata mode) when establishing the session. I do not remember where, but there will be a setting in the ODBC configuration for this.
3. Try using a global temporary table. These work similarly to volatile tables, except that you define them once and the definition sticks around. That is, you can create it in, say, BTEQ or SQL Assistant. The definition is common to all users and sessions (i.e. your Excel session), but the content is transient and unique to each session (like a volatile table).
4. Move the SELECT part of your INSERT into the volatile table into the query that selects the data from the volatile table. See the simple example below.
5. If you do not have permission to create global temporary tables, ask your DBA.
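For point 1, a minimal sketch of the statement sequence, reusing the table names from the example below (in ANSI mode replace ET; with COMMIT;):
CREATE VOLATILE TABLE tmp (id INTEGER)
ON COMMIT PRESERVE ROWS;
ET;  -- commit the DDL; use COMMIT; instead if the session runs in ANSI mode

INSERT INTO tmp
SELECT customer_number
FROM customer
WHERE X = Y AND yr = 2019;

SELECT a, b, c
FROM another_tbl A
JOIN tmp T ON A.id = T.id;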
Here is a simple example to illustrate point 4.
Current:
create volatile table tmp (id Integer)
ON COMMIT PRESERVE ROWS;
insert into tmp
select customer_number
from customer
where X = Y and yr = 2019
;
select a,b,c
from another_tbl A join TMP T ON
A.id = T.id
;
Becomes:
select a,b,c
from another_tbl A join (
select customer_number
from customer
where X = Y and yr = 2019
) AS T
ON
A.id = T.id
;
Or better yet, just Join your tables directly.
Note: the first sequence (create table, insert into, and select) is a three-statement series. This will return three "result sets": the first two will be row counts, the last will be the actual data. Most programs (including, I think, Excel) cannot process multiple result-set responses. This is one of the reasons it is difficult to use Teradata macros with client tools like Excel.
The latter solution (a single select) avoids this potential problem.

How can I return inserted ids for multiple rows in SQLite?

Given a table:
CREATE TABLE Foo(
Id INTEGER PRIMARY KEY AUTOINCREMENT,
Name TEXT
);
How can I return the IDs of multiple rows inserted at the same time using:
INSERT INTO Foo (Name) VALUES
('A'),
('B'),
('C');
I am aware of last_insert_rowid() but I have not found any examples of using it for multiple rows.
What I am trying to achieve can be seen in this SQL Server example:
DECLARE @InsertedRows AS TABLE (Id BIGINT);
INSERT INTO [Foo] (Name) OUTPUT Inserted.Id INTO @InsertedRows VALUES
('A'),
('B'),
('C');
SELECT Id FROM @InsertedRows;
Any help is very much appreciated.
This is not possible. If you want to get three values, you have to execute three INSERT statements.
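In other words, with plain SQL you would run one INSERT per row and read last_insert_rowid() after each statement, for example:
INSERT INTO Foo (Name) VALUES ('A');
SELECT last_insert_rowid();  -- id assigned to 'A'
INSERT INTO Foo (Name) VALUES ('B');
SELECT last_insert_rowid();  -- id assigned to 'B'
INSERT INTO Foo (Name) VALUES ('C');
SELECT last_insert_rowid();  -- id assigned to 'C'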
Given SQLite3 locking:
An EXCLUSIVE lock is needed in order to write to the database file. Only one EXCLUSIVE lock is allowed on the file and no other locks of any kind are allowed to coexist with an EXCLUSIVE lock. In order to maximize concurrency, SQLite works to minimize the amount of time that EXCLUSIVE locks are held.
And how Last Insert Rowid works:
...returns the rowid of the most recent successful INSERT into a rowid table or virtual table on database connection D.
It should be safe to assume that while a writer executes its batch INSERT into a ROWID table, there can be no other writer to make the generated primary keys non-consecutive. Thus the inserted primary keys are [lastrowid - rowcount + 1, lastrowid]. Or, in the Python sqlite3 API:
cursor.execute(...) # multi-VALUE INSERT
assert cursor.rowcount == len(values)
lastrowids = range(cursor.lastrowid - cursor.rowcount + 1, cursor.lastrowid + 1)
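The same range can also be read back in plain SQL on the same connection, immediately after the multi-row INSERT (a sketch that relies on the same monotonic-rowid assumption; changes() returns the row count of the last completed statement):
INSERT INTO Foo (Name) VALUES ('A'), ('B'), ('C');
-- Assumes no other write ran on this connection in between and that rowids
-- were assigned monotonically (see the quote below).
SELECT last_insert_rowid() - changes() + 1 AS first_id,
       last_insert_rowid() AS last_id;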
In normal circumstances, that is, when you don't mix explicitly provided keys with keys that are expected to be generated, or, as the AUTOINCREMENT documentation states:
The normal ROWID selection algorithm described above will generate monotonically increasing unique ROWIDs as long as you never use the maximum ROWID value and you never delete the entry in the table with the largest ROWID.
The above should work as expected.
This Python script can be used to test correctness of the above for multi-threaded and multi-process setup.
Other databases
For instance, MySQL InnoDB (at least in the default innodb_autoinc_lock_mode = 1 "consecutive" lock mode) works in a similar way (though obviously under much more concurrent conditions) and guarantees that inserted PKs can be inferred from lastrowid:
"Simple inserts" (for which the number of rows to be inserted is known in advance) avoid table-level AUTO-INC locks by obtaining the required number of auto-increment values under the control of a mutex (a light-weight lock) that is only held for the duration of the allocation process, not until the statement completes

Create temporary table

I'm coming from a SQL Server environment where you can declare a temp table with #table, but as I've read, you can't do this in Oracle.
I want to get a value for 500,000 hardcoded IDs from a table, but as the IN clause has a limit of 1,000 items I need to find another way. Is the best way to create a temporary table, insert the hardcoded values, and then join the other table which contains the values I need?
My client (Toad) has autocommit set to off and I don't want to commit anything; I want it to be session-based, so that when I close the database client the temporary table disappears. Is the code below the right way to do it in Oracle?
CREATE GLOBAL TEMPORARY TABLE Test(HardcodedId number(10))
ON COMMIT DELETE ROWS;
I've also tried to use an inner join and, in the join, select the hardcoded values from dual, but this creates a column for each value and I'm not able to use a reference to join with. Is it possible to get all the values into a single column from dual?
You can use something like this (500 UNION ALLs):
select * from (
select '1' as id from dual
union all
select '2' as id from dual
...) q
Then you can join this with other tables.
For your situation, I would use a GTT (global temporary table) - which you have already researched by the looks.
The advantage of a GTT is that it's a permanent object (so no need to constantly create and drop it) and the data "stored" in it is on a session basis.
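A minimal sketch of that approach (apart from the GTT itself, which reuses the definition from the question, the other table and column names are hypothetical placeholders):
-- One-time setup: the definition is permanent, the rows are private to each session.
-- ON COMMIT PRESERVE ROWS keeps the rows until the session ends, which matches the
-- "disappear when I close the client" requirement; ON COMMIT DELETE ROWS would empty
-- the table at every commit instead.
CREATE GLOBAL TEMPORARY TABLE Test (HardcodedId NUMBER(10))
ON COMMIT PRESERVE ROWS;

-- Per session: load the hardcoded ids (batched inserts), then join instead of a huge IN list.
INSERT INTO Test (HardcodedId) VALUES (12345);
-- ... more inserts ...

SELECT t.*
FROM target_table t
JOIN Test i ON i.HardcodedId = t.id;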

sqlite: SELECTing with UNION when the other table does not exist

I have two sqlite3 databases which have tables with identical schemas and (possibly) overlapping data, for example a Temperature table in both databases. If I want to get all the columns from both tables combined, I first ATTACH the other database:
sqlite> ATTACH DATABASE 'old.sqlite' AS Old;
and then combine them with UNION like this:
sqlite> SELECT * FROM Temperature UNION SELECT * FROM Old.Temperature;
This works fine.
However, sometimes there is just one table. For example, for humidity I might have just one Humidity table and no counterpart in the other database. In this case the query fails:
sqlite> SELECT * FROM Humidity UNION SELECT * FROM Old.Humidity;
SQL error: no such table: Old.Humidity
What I would like to get is all columns from the tables that do exist, and not to fail just because the other table doesn't exist.
I don't know beforehand which tables exist in which databases. I only have the table names from all the databases combined into one list. And the part of the codebase that's reading the data expects to get all the columns in one query.
It is not possible to create a query dynamically in SQL.
(SQLite is designed to be used from within an application written in a 'real' programming language.)
You have to check beforehand whether the table exists (use PRAGMA table_info, or test whether a query using that table works).
Then execute a query either with or without UNION.
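A minimal sketch of that check against the attached database's catalog; the application then builds the query with or without the UNION branch:
-- Returns one row if the attached database contains a Humidity table, none otherwise.
SELECT name
FROM Old.sqlite_master
WHERE type = 'table' AND name = 'Humidity';
-- Alternative: PRAGMA table_info returns the column list when the table exists
-- and an empty result when it does not.
PRAGMA Old.table_info(Humidity);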
