Update completed. No rows changed - Teradata

1. The below COLLECT STATS statement on a volatile table in a Teradata BTEQ script returns "No rows changed". Can someone help me understand whether the stats are collected or not?
collect stats on ORDER_VT column (ORDER_ID);
** Update completed. No rows changed.
*** Total elapsed time was 16 seconds.
2. Trying to collect stats on the volatile table below in two different ways:
Collecting stats on all columns at a time
Collecting stats on each column individually
What makes the difference here?
Create multiset volatile table TEST1
as (
select
COLMN1,
COLMN2,
COLMN3
from TABLE1 T1
inner join TABLE2 T2
on T1.KEY1=T2.KEY1
AND T1.KEY2=T2.KEY2
)WITH DATA PRIMARY INDEX(COLMN1,COLMN2,COLMN3)
ON COMMIT PRESERVE ROWS;
Collect stats on TEST1 column(COLMN1,COLMN2,COLMN3);
Collect stats on TEST1 column(COLMN1 );
Collect stats on TEST1 column(COLMN2 );
Collect stats on TEST1 column(COLMN3 );

#1: Stats on a volatile table are stored in memory only, not in the data dictionary -> no rows updated, but the statistics are collected.
#2: The 1st collect creates a multi-column statistic; the other ones are single-column statistics.
To see the actual data use help stats on test1; and show stats values on test1;
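For illustration, a minimal sketch against the TEST1 table above, mirroring the statements from the answer (for a volatile table no dictionary rows are written, so BTEQ will still report "No rows changed" even though the statistics exist in memory):
-- Multi-column statistic on the full PI, plus a single-column statistic
COLLECT STATS ON TEST1 COLUMN (COLMN1, COLMN2, COLMN3);
COLLECT STATS ON TEST1 COLUMN (COLMN1);
-- Inspect what was actually collected
HELP STATS ON TEST1;           -- one summary row per collected statistic
SHOW STATS VALUES ON TEST1;    -- detailed collected values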

Related

sqlite - copy subset of tables and columns into new db-file

I have a database A.db, which contains tables t1, t2 and t3.
Now I want to create a new database B.db, which contains t1 and some chosen columns col1 and col4 from t2.
With .import I get hundreds of errors and it seems to work only for full tables.
.output sounds like I just save the output as it would be printed.
Basically, I need an insert into foo select ... across different files. How can I do this?
First you must attach A.db to your current database and give it an alias like adb.
Then write the insert statement just like you would if all the tables existed in the same database, qualifying the column names with the database alias.
It's good practice to list inside parentheses, in the insert into ... statement, all the column names of the table foo for which you will set values from the other 2 tables, and also to make sure that the order of those columns matches the order of the columns in the select list:
attach database 'pathtoAdatabase/A.db' as adb;
insert into foo (column1, column2, .......)
select adb.t1.column1, adb.t1.column2, ...., adb.t2.col1, adb.t2.col4
from adb.t1 inner join adb.t2
on <join condition>
Replace <join condition> with the condition on which you will join the 2 tables to build the rows that you will insert into foo, something like:
adb.t1.id = adb.t2.id
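For concreteness, a minimal end-to-end sketch run from a sqlite3 shell opened on B.db, using purely hypothetical schemas foo(id, name, col1, col4) in B.db and t1(id, name), t2(id, col1, col4) in A.db:
-- attach the source database under the alias adb (path is illustrative)
attach database 'pathtoAdatabase/A.db' as adb;
-- the column order in foo (...) matches the order of the select list
insert into foo (id, name, col1, col4)
select adb.t1.id, adb.t1.name, adb.t2.col1, adb.t2.col4
from adb.t1 inner join adb.t2
on adb.t1.id = adb.t2.id;
-- detach when done
detach database adb;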

Recursive SQLite CTE with JSON1 json_each

I have a SQLite table where one column contains a JSON array containing 0 or more values. Something like this:
id|values
0 |[1,2,3]
1 |[]
2 |[2,3,4]
3 |[2]
What I want to do is "unfold" this into a list of all distinct values contained within the arrays of that column.
To start, I am using the JSON1 extension's json_each function to extract a table of values from a row:
SELECT
value
FROM
json_each(
(
SELECT
values
FROM
my_table
WHERE
id == 2
)
)
Where I can vary the id (2, above) to select any row in the table.
Now, I am trying to wrap this in a recursive CTE so that I can apply it to each row across the entire table and union the results. As a first step I replicated (roughly) the results from above as follows:
WITH RECURSIVE result AS (
SELECT null
UNION ALL
SELECT
value
FROM
json_each(
(
SELECT
values
FROM
my_table
WHERE
id == 2
)
)
)
SELECT * FROM result;
As the next step I had originally planned to make id a variable and increment it (in a similar manner to the first example in the documentation), but I haven't been able to get that to work.
I have gone through the other examples in the documentation, but they are somewhat more complex and I haven't been able to distill those down to see how they might apply to this problem.
Can someone provide a simple example of how to solve this (or a similar problem) with a recursive CTE?
Of course, my goal is to solve the problem with or without CTEs, so I'm also happy to hear if there is a better way...
You do not need a recursive CTE for this.
To call json_each for multiple source rows, use a join:
SELECT t1.id, t2.value
FROM my_table AS t1
JOIN json_each((SELECT "values" FROM my_table WHERE id = t1.id)) AS t2;
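A usage note: json_each is a table-valued function, so the comma-join form shown in the SQLite JSON1 documentation, passing the column directly, should work as well; adding DISTINCT then yields the originally requested list of unique values (the column is quoted because values is a keyword):
-- one row per distinct array element across the whole table;
-- rows with an empty array simply contribute nothing
SELECT DISTINCT t2.value
FROM my_table AS t1, json_each(t1."values") AS t2
ORDER BY t2.value;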

Teradata - duplication column error

I want to make a volatile table using teradata.
In the select statement I am using multiple columns from different tables.
However, some of the columns in the different tables have same names.
Therefore, I am getting a 'duplication column error'.
The question is - is there any workaround to bypass this error?
Is it possible to add, for example, the table name to the column name?
This is how my code looks:
CREATE MULTISET VOLATILE TABLE test
AS (
SEL *
FROM Table_A Left JOIN Table_B
...
)
WITH DATA
ON COMMIT PRESERVE ROWS
Instead of doing a select *, select individual column names and put aliases next to them. This will bypass the error.
A select-all statement only works if you're working off one table. If you're retrieving all data from multiple tables, you have to specify the columns in your select statement.
CREATE MULTISET VOLATILE TABLE test AS
(
SELECT Table_A.*
, Table_B.*
FROM Table_A
LEFT JOIN Table_B ON ...
...
)
WITH DATA PRIMARY INDEX(«PI»)
ON COMMIT PRESERVE ROWS
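Note that Table_A.* plus Table_B.* will still trigger the duplicate-column error whenever the two tables share a column name, so the shared names have to be listed and aliased explicitly. A sketch with purely hypothetical column names:
CREATE MULTISET VOLATILE TABLE test AS
(
SELECT a.KEY1                 -- shared join key, keep a single copy
     , a.STATUS AS A_STATUS   -- hypothetical duplicated columns, aliased apart
     , b.STATUS AS B_STATUS
     , b.AMOUNT
FROM Table_A AS a
LEFT JOIN Table_B AS b
ON a.KEY1 = b.KEY1
)
WITH DATA PRIMARY INDEX(KEY1)
ON COMMIT PRESERVE ROWS;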

How to insert all the records in a data frame into the database in R?

I have a data frame data_frm which has following columns:
emp_id | emp_sal | emp_bonus | emp_desig_level
  ...  |   ...   |    ...    |       ...
  ...  |   ...   |    ...    |       ...
And I want to insert all the records present in this data frame into the database table tab1. I executed this query but I got an error:
for(record in data_frm)
{
write_sql <- paste("Insert into tab1 (emp_id,emp_sal,emp_bonus,emp_desig_level) values (",data_frm[,"emp_id"],",",data_frm[,"emp_sal"],",",data_frm[,"emp_bonus"],",",data_frm[,"emp_desig_level"],")",sep="")
r <- dbSendQuery(r,write_sql)
}
I get error as:
Error in data_frm[, "emp_id"] : incorrect number of dimensions
How do I insert all the records from the data frame into the database?
NOTE: I want to insert all the records of the data frame using an insert statement.
dbWriteTable(conn, "RESULTS", results2000, append = T) # to protect current values
dbWriteTable(conn, "RESULTS", results2000, append = F) # to overwrite values
From the RDBI homepage at sourceforge. Hope that helps...
In your for loop, you need to iterate over row indices and index the data frame as:
data_frm[record,"column_name"]
Otherwise your loop is trying to insert entire columns instead of just the particular record.
for(record in seq_len(nrow(data_frm)))
{
write_sql <- paste("Insert into tab1 (emp_id,emp_sal,emp_bonus,emp_desig_level) values (",data_frm[record,"emp_id"],",",data_frm[record,"emp_sal"],",",data_frm[record,"emp_bonus"],",",data_frm[record,"emp_desig_level"],")",sep="")
res <- dbSendQuery(r, write_sql)  # keep the connection object r, don't overwrite it
dbClearResult(res)
}
Answered here
Copied one more time:
Recently I had a similar issue.
Problem description: an MS SQL Server database with a schema. The task is to save an R data.frame object to a predefined database table without dropping it.
Problems I faced:
Some packages' functions do not support schemas, or require installing the development version from GitHub
You can save a data.frame only after a drop (delete table) operation (I needed just a "clear table" operation)
How I solved the issue:
Using plain RODBC::sqlQuery, writing the data.frame row by row.
The solution (a couple of functions) is available here or here

tSQLt AssertEqualsTable - unexpected results when table schema doesn't match

I noticed the other day that you can write a test where there are more columns in the Actual table than in the Expected table, and the test will still pass if the data matches in the columns that exist in both.
Here is an example:
if exists(select * from INFORMATION_SCHEMA.ROUTINES where ROUTINE_SCHEMA='UnitTests_FirstTry' and ROUTINE_NAME='test_AssertEqualsTable_IgnoresExtraColumnsInActual')
begin
drop procedure UnitTests_FirstTry.test_AssertEqualsTable_IgnoresExtraColumnsInActual
end
go
create procedure UnitTests_FirstTry.test_AssertEqualsTable_IgnoresExtraColumnsInActual
as
begin
IF OBJECT_ID(N'tempdb..#Expected') > 0 DROP TABLE [#Expected];
IF OBJECT_ID(N'tempdb..#Actual') > 0 DROP TABLE [#Actual];
create table #expected( a int null) --, b int null, c varchar(10) null)
create table #actual(a int, x money null)
insert #expected (a) values (1)
insert #actual (a, x) values (1, 22.51)
--insert #expected (a, b, c) values (1,2,'test')
--insert #actual (a, x) values (1, 22.51)
exec tSQLt.AssertEqualsTable '#expected', '#actual'
end
go
exec tSQLt.Run 'UnitTests_FirstTry.test_AssertEqualsTable_IgnoresExtraColumnsInActual'
go
I noticed this when I removed some extra columns from the Expected table of a test that no longer needed those columns, but I forgot to remove the same columns from the Actual table and my test still passed, which to me was a bit off-putting.
This only happens when the Actual table has more columns. If the expected has more columns an error is generated. Is this correct? Does anyone know what the reasoning is behind this behavior?
Although not particularly well documented in this respect, the AssertEqualsTable routine only looks at the data in the table - not whether the columns are the same. To check whether the table structures are the same, use AssertResultSetsHaveSameMetaData. I wrote a bit about this in this article.
You can of course run both in the same test, and the test will only pass if both checks pass.
I guess the reason for the split would be because there may be rare instances where you care about either the data or the metadata being consistent for your test, but not both.
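For example, inside the test procedure above the two asserts can be paired like this (a sketch; the metadata check fails on the extra money column x before any data comparison happens):
exec tSQLt.AssertResultSetsHaveSameMetaData 'select * from #expected', 'select * from #actual'
exec tSQLt.AssertEqualsTable '#expected', '#actual'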
