tSQLt AssertEqualsTable - unexpected results when table schema doesn't match

I noticed the other day that you can write a test where there are more columns in the Actual table than in the Expected table, and the test will still pass if the data matches in the columns that exist in both.
Here is an example:
if exists(select * from INFORMATION_SCHEMA.ROUTINES where ROUTINE_SCHEMA='UnitTests_FirstTry' and ROUTINE_NAME='test_AssertEqualsTable_IgnoresExtraColumnsInActual')
begin
drop procedure UnitTests_FirstTry.test_AssertEqualsTable_IgnoresExtraColumnsInActual
end
go
create procedure UnitTests_FirstTry.test_AssertEqualsTable_IgnoresExtraColumnsInActual
as
begin
IF OBJECT_ID(N'tempdb..#Expected') IS NOT NULL DROP TABLE [#Expected];
IF OBJECT_ID(N'tempdb..#Actual') IS NOT NULL DROP TABLE [#Actual];
create table #expected( a int null) --, b int null, c varchar(10) null)
create table #actual(a int, x money null)
insert #expected (a) values (1)
insert #actual (a, x) values (1, 22.51)
--insert #expected (a, b, c) values (1,2,'test')
--insert #actual (a, x) values (1, 22.51)
exec tSQLt.AssertEqualsTable '#expected', '#actual'
end
go
exec tSQLt.Run 'UnitTests_FirstTry.test_AssertEqualsTable_IgnoresExtraColumnsInActual'
go
I noticed this when I removed some no-longer-needed columns from the Expected table of a test but forgot to remove the same columns from the Actual table, and my test still passed, which to me was a bit off-putting.
This only happens when the Actual table has more columns. If the Expected table has more columns, an error is generated. Is this correct? Does anyone know the reasoning behind this behavior?

Although not particularly well documented in this respect, the AssertEqualsTable routine only compares the data in the tables - not whether the columns are the same. To check that the table structures match, use AssertResultSetsHaveSameMetaData. I wrote a bit about this in this article.
You can of course run both in the same test, and the test will only pass if both checks pass.
I guess the reason for the split would be because there may be rare instances where you care about either the data or the metadata being consistent for your test, but not both.
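For example, the test above would stop passing silently if it also asserted the metadata; a minimal sketch (AssertResultSetsHaveSameMetaData takes the expected and actual commands as strings):
exec tSQLt.AssertResultSetsHaveSameMetaData 'select * from #expected', 'select * from #actual'
exec tSQLt.AssertEqualsTable '#expected', '#actual'
With the extra money column still present in #actual, the metadata assert fails even though the data assert passes.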

Related

I need the equivalent of this Count with Case for Firebird 3 database

I need the equivalent of this Count with Case for a Firebird 3 database. I get an error when I try it:
SQL error code = -104.
Invalid usage of boolean expression.
I was just recently introduced to the Case command and I can't seem to rework it myself. I managed to get it to work with SQLite just fine.
The intent is to do an AND operation; the WHERE clause can't simply AND the two conditions because the keywords are in separate rows.
SELECT Count((CASE WHEN keywords.keyword LIKE '%purchased%'
THEN 1 END) AND
(CASE WHEN keywords.keyword LIKE '%item%'
THEN 1 END)) AS TRows
FROM products
LEFT OUTER JOIN keywords_products ON
products.product_rec_id = keywords_products.product_rec_id
LEFT OUTER JOIN keywords ON
keywords_products.keyword_rec_id = keywords.keyword_rec_id
WHERE (keywords.keyword LIKE '%purchased%' OR
keywords.keyword LIKE '%item%')
I have three SQLite tables, a products table, a keywords_products table, and a keywords table.
CREATE TABLE products (
product_rec_id INTEGER PRIMARY KEY NOT NULL,
name VARCHAR (100) NOT NULL
);
CREATE TABLE keywords_products (
keyword_rec_id INTEGER NOT NULL,
product_rec_id INTEGER NOT NULL
);
CREATE TABLE keywords (
keyword_rec_id INTEGER PRIMARY KEY NOT NULL,
keyword VARCHAR (50) NOT NULL UNIQUE
);
The keywords_products table holds the record id of a product and the record id of a keyword. Each product can be assigned multiple keywords from the keywords table.
The keywords table looks like this:
keyword_rec_id keyword
-------------- -----------
60 melee
43 scifi
87 water
The keywords_products table looks like this (one keyword can be assigned to many products):
keyword_rec_id product_rec_id
-------------- --------------
43 1
60 1
43 2
87 3
The products table looks like this:
product_rec_id name
-------------- --------------
1 Scifi Melee Weapons
2 Scifi Ray Weapon
3 Lake House
I'm assuming you want to count how many rows there are where both conditions are true.
The error occurs because you can't use AND between integer values. The values must be true booleans.
So, change your code to
Count((CASE WHEN keywords.keyword LIKE '%purchased%'
THEN TRUE END) AND
(CASE WHEN keywords.keyword LIKE '%item%'
THEN TRUE END))
However, that is far too complex. You can simplify your expression to
count(nullif(
keywords.keyword LIKE '%purchased%' and keywords.keyword LIKE '%item%',
false))
The use of NULLIF is needed because COUNT will count all non-NULL values (as required by the SQL standard), and false is non-NULL as well. So to achieve the (assumed) desired effect, we transform false to NULL using NULLIF.
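For reference, here is the simplified expression dropped into the original query (table aliases added here just for brevity):
SELECT count(nullif(
         k.keyword LIKE '%purchased%' AND k.keyword LIKE '%item%',
         false)) AS TRows
FROM products p
LEFT OUTER JOIN keywords_products kp ON
  p.product_rec_id = kp.product_rec_id
LEFT OUTER JOIN keywords k ON
  kp.keyword_rec_id = k.keyword_rec_id
WHERE (k.keyword LIKE '%purchased%' OR
       k.keyword LIKE '%item%')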
You have to use ONE single CASE expression with multiple WHEN branches.
Combining distinct CASE expressions with Boolean operators makes no sense - a CASE expression is not a Boolean function in itself.
You can see rules and an example at CASE.
case
when Age >= 18 then 'Yes'
when Age < 18 then 'No'
end;
Rework your two CASE clauses into a single CASE expression following this pattern.
However, you should only use CASE when you cannot move the filters and conditions into the standard parts of the SQL select. The normal approach is to minimize the data the SQL engine has to fetch, using pre-filtering. CASE is post-filtering: it makes the SQL engine fetch all the data, whether needed or not, and then discard the unneeded rows - redundant work that slows down the process.
In your case you have already extracted the condition into the WHERE clause, which is good.
SELECT
...
WHERE (keywords.keyword LIKE '%purchased%')
OR (keywords.keyword LIKE '%item%')
Since you pre-filter the data stream so it always contains "purchased" or "item", your CASE clause would return 1 on every row selected under this WHERE pre-filtering. Hence, just remove the redundant CASE clause and put "1" instead.
SELECT Count(1)
FROM products
LEFT JOIN keywords_products ON products.product_rec_id = keywords_products.product_rec_id
LEFT JOIN keywords ON keywords_products.keyword_rec_id = keywords.keyword_rec_id
WHERE (keywords.keyword LIKE '%purchased%')
OR (keywords.keyword LIKE '%item%')
Now, given that the WHERE clause is logically processed after the JOINs, your query has de facto turned the LEFT JOINs into INNER JOINs (the WHERE clause simply discards rows whose "keyword" column is NULL), but again in an unreliable and inefficient way. Since you do not want the "keyword is NULL" rows anyway, just convert your left joins to normal (inner) joins.
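A sketch of the final query with the left joins converted accordingly:
SELECT Count(1) AS TRows
FROM products
INNER JOIN keywords_products ON products.product_rec_id = keywords_products.product_rec_id
INNER JOIN keywords ON keywords_products.keyword_rec_id = keywords.keyword_rec_id
WHERE (keywords.keyword LIKE '%purchased%')
OR (keywords.keyword LIKE '%item%')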

SUM totals by FOR ALL ENTRIES itab keys

I want to execute a SELECT query on a database table that has 6 key fields, let's assume they are keyA, keyB, ..., keyF.
As input parameters to my ABAP function module I do receive an internal table with exactly that structure of the key fields, each entry in that internal table therefore corresponds to one tuple in the database table.
Thus I simply need to select all tuples from the database table that correspond to the entries in my internal table.
Furthermore, I want to aggregate an amount column in that database table in exactly the same query.
In pseudo SQL the query would look as follows:
SELECT SUM(amount) FROM table WHERE (keyA, keyB, keyC, keyD, keyE, keyF) IN {internal table}.
However, this representation is not possible in ABAP OpenSQL.
Only a single column (such as keyA) can be specified there, not a composite key. Furthermore, I can only use 'selection tables' (those with SIGN, OPTION, LOW, HIGH) after the keyword IN.
Using FOR ALL ENTRIES seems feasible; however, in that case I cannot use SUM, since aggregation is not allowed in the same query.
Any suggestions?
For selecting records for each entry of an internal table, normally the for all entries idiom in ABAP Open SQL is your friend. In your case, you have the additional requirement to aggregate a sum. Unfortunately, the result set of a SELECT statement that works with for all entries is not allowed to use aggregate functions. In my eyes, the best way in this case is to compute the sum from the result set in the ABAP layer. The following example works in my system (note in passing: using the new ABAP language features that came with 7.40, you could considerably shorten the whole code).
report zz_ztmp_test.

start-of-selection.
  perform test.

* Database table ZTMP_TEST:
*   ID    - key field     - type CHAR10
*   VALUE - non-key field - type INT4
*   Content: 'A' 10, 'B' 20, 'C' 30, 'D' 40, 'E' 50
types: ty_entries type standard table of ztmp_test.

* ---
form test.
  data: lv_sum    type i,
        lt_result type ty_entries,
        lt_keys   type ty_entries.

  perform fill_keys changing lt_keys.

  if lt_keys is not initial.
    select * into table lt_result
      from ztmp_test
      for all entries in lt_keys
      where id = lt_keys-id.
  endif.

  perform get_sum using lt_result
                  changing lv_sum.

  write: / lv_sum.
endform.

form fill_keys changing ct_keys type ty_entries.
  append:
    'A' to ct_keys,
    'C' to ct_keys,
    'E' to ct_keys.
endform.

form get_sum using it_entries type ty_entries
             changing value(ev_sum) type i.
  field-symbols: <ls_test> type ztmp_test.
  clear ev_sum.
  loop at it_entries assigning <ls_test>.
    add <ls_test>-value to ev_sum.
  endloop.
endform.
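As a concrete taste of the 7.40 shortening mentioned in the note above, a minimal sketch of the same flow (same hypothetical table ZTMP_TEST and the ty_entries type from the listing):
data lt_keys type ty_entries.
data lt_result type ty_entries.

" build the key table inline instead of via a FORM routine
lt_keys = value #( ( id = 'A' ) ( id = 'C' ) ( id = 'E' ) ).

if lt_keys is not initial.
  select * from ztmp_test
    for all entries in @lt_keys
    where id = @lt_keys-id
    into table @lt_result.
endif.

" sum the VALUE column without an explicit loop
data(lv_sum) = reduce i( init s = 0
                         for ls in lt_result
                         next s = s + ls-value ).
write: / lv_sum.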
I would use FOR ALL ENTRIES to fetch all the related rows, then LOOP over the resulting table and add up the relevant field into a total. If you have ABAP 7.40 or later, you can use the REDUCE operator to avoid having to loop over the table manually:
DATA(total) = REDUCE i( INIT sum = 0
FOR wa IN itab NEXT sum = sum + wa-field ).
One possible approach is summarizing on the fly inside a SELECT loop, using the SELECT ... ENDSELECT statement.
Sample with calculating all order lines/quantities for the plant:
TYPES: BEGIN OF ls_collect,
         werks TYPE t001w-werks,
         menge TYPE ekpo-menge,
       END OF ls_collect.
DATA: lt_collect TYPE TABLE OF ls_collect.

SELECT werks
  FROM t001w
  INTO TABLE @DATA(lt_werks)
  UP TO 100 ROWS.

SELECT werks, menge
  FROM ekpo
  FOR ALL ENTRIES IN @lt_werks
  WHERE werks = @lt_werks-werks
  INTO @DATA(order).

  COLLECT order INTO lt_collect.

ENDSELECT.
The sample has no business sense and is placed here just for educational purposes.
Another, more robust and modern approach is CTEs (Common Table Expressions), available since ABAP 7.51. Among other things, this technique is specifically intended for total/subtotal tasks:
WITH
+plants AS (
    SELECT werks UP TO 100 ROWS
      FROM t001w ),
+orders_by_plant AS (
    SELECT e~werks, SUM( menge ) AS menge
      FROM ekpo AS e
      INNER JOIN +plants AS m
        ON e~werks = m~werks
      GROUP BY e~werks )
SELECT werks, menge
  FROM +orders_by_plant
  ORDER BY werks
  INTO TABLE @DATA(lt_sums).

cl_demo_output=>display( lt_sums ).
The first table expression, +plants, stands in for your internal table; the second, +orders_by_plant, totals the quantities selected for those plants; and the last query is the final output query.

Oracle: is it possible to compare values in Long column with values in Clob column

There is a table X with a LONG column and a table Y with a CLOB column. Data has been migrated from table X to table Y, and now I need to verify that the data was converted correctly. I had the idea of using casting, but it seems LONG values cannot be converted to VARCHAR in select statements. Any ideas are highly appreciated.
Eg:
SELECT LONG_COLUMN FROM TABLE_X
MINUS
SELECT CLOB_COLUMN FROM TABLE_Y
You have two problems: long can't easily be converted or compared to other datatypes, and clob can't be used in a minus operation!
Fortunately, both of these can be overcome by using PL/SQL. Longs and clobs can be implicitly converted to varchar2 when they are selected in PL/SQL blocks. You can load these into a nested table, then use the multiset except operator to find the differences between them:
create table long_t ( x long );
create table lob_t ( x clob );
insert into long_t values ('1');
insert into long_t values ('2');
insert into lob_t values ('1');
declare
type t is table of varchar2(32767);
longs t;
clobs t;
diff t;
begin
select x bulk collect into longs from long_t;
select x bulk collect into clobs from lob_t;
diff := longs multiset except clobs;
for i in 1 .. diff.count loop
dbms_output.put_line(diff(i));
end loop;
diff := clobs multiset except longs;
for i in 1 .. diff.count loop
dbms_output.put_line(diff(i));
end loop;
end;
/
anonymous block completed
2
If your tables contain more than a couple of thousand rows, you're likely to run out of memory using the above as-is, as the whole table will be loaded at once. If you've got an id or similar column on each table, then it would be best to fetch and compare rows in ranges, e.g. 1-1000, 1001-2000 and so on.
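A hedged sketch of that range-wise approach, assuming a hypothetical numeric id column on both tables (the demo tables above don't have one):
declare
  type t is table of varchar2(32767);
  longs t;
  clobs t;
  diff  t;
begin
  for chunk in 0 .. 9 loop  -- e.g. ids 1-1000, 1001-2000, ...
    select x bulk collect into longs
      from long_t
     where id between chunk * 1000 + 1 and (chunk + 1) * 1000;
    select x bulk collect into clobs
      from lob_t
     where id between chunk * 1000 + 1 and (chunk + 1) * 1000;
    -- report rows present in the long table but not the clob table;
    -- repeat with the operands swapped for the other direction, as above
    diff := longs multiset except clobs;
    for i in 1 .. diff.count loop
      dbms_output.put_line(diff(i));
    end loop;
  end loop;
end;
/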

SQLite default value if null

Let's say I have a table called "Table":
Create Table "Table" (a int not null, b int default 1)
If I do INSERT INTO "Table" (a) VALUES (1), I get back 1 for column a and 1 for column b, as the default value for column b is 1.
BUT if I do INSERT INTO "Table" (a, b) VALUES (1, null), I get back 1 for column a and an empty value for column b. Is there a way to fall back to a column's default value when a null is given?
No, if you are doing:
INSERT INTO my_table (a, b) values (1, null)
you are explicitly asking for a null value in column b.
In other RDBMSs you could technically use a trigger to override that behavior, but in SQLite you can't.
If you don't want nulls in column b, you should declare it as a non-nullable field, as you did with a.
This solution for MySQL should mostly work for SQLite. The main, albeit major, difference is that there doesn't seem to be a Default() function in SQLite like there is in MySQL. I was able to replicate this functionality by keeping track of my database schema in PHP and then manually inserting the default value as the second argument to Coalesce(). See this Gist for example code.
If the column with the default value can have the NOT NULL constraint, then you can do either of the following (a short demonstration follows below):
create the table using this syntax:
CREATE TABLE "Table" (a INT NOT NULL, b INT NOT NULL ON CONFLICT REPLACE DEFAULT 1);
and insert as usual, or
create the table as in your question (but with b declared NOT NULL, so that the REPLACE resolution applies)
and insert using this syntax:
INSERT OR REPLACE INTO "Table" (a, b) VALUES(1, null);
https://database.guide/convert-null-values-to-the-columns-default-value-when-inserting-data-in-sqlite/
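A quick demonstration of the first variant (a hypothetical table name t is used here to avoid quoting "Table"):
CREATE TABLE t (a INT NOT NULL, b INT NOT NULL ON CONFLICT REPLACE DEFAULT 1);
INSERT INTO t (a, b) VALUES (1, NULL); -- NOT NULL conflict on b is resolved by REPLACE
SELECT * FROM t;                       -- returns 1|1: the NULL became the default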
If the column with the default value cannot have the NOT NULL constraint (i.e. NULL must remain insertable), as in your question:
you will have to omit the column with the default value from the insert query so that it gets its default.
Ideal would be:
INSERT INTO "Table" (a, b) VALUES(1, COALESCE(NULL, DEFAULT))
, as in some other SQL dialects, which might be supported in a future release:
https://sqlite.org/forum/info/d7384e085b808b05

To create a trigger in PL/SQL

I have a problem: I want to create a trigger that checks whether the pincode number is exactly six digits or not.
Please give me a detailed answer.
I suppose you mean a character column in your question. If so, you can achieve this without a trigger, using a check constraint like this:
create table test_1
(
pincode varchar2(20)
constraint test_1$chk$id check (ltrim(pincode, '0123456789') is null and length(pincode)= 6)
);
Running these inserts:
insert into test_1 (pincode) values ('45sdgf65');
insert into test_1 (pincode) values ('4565');
will result in the error:
ORA-02290: check constraint (ANDREW.TEST_1$CHK$ID) violated
But this one goes ok:
insert into test_1 (pincode) values ('034565');
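If a trigger is nevertheless required, as the question literally asks, here is a minimal sketch enforcing the same rule as the constraint above (trigger name is my own invention):
create or replace trigger test_1$trg$pincode
before insert or update of pincode on test_1
for each row
begin
  -- same rule as the check constraint: digits only, exactly six of them
  if ltrim(:new.pincode, '0123456789') is not null
     or length(:new.pincode) != 6 then
    raise_application_error(-20001, 'pincode must be exactly six digits');
  end if;
end;
/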
