How to create a trigger in PL/SQL - plsql

I have a problem: I want to create a trigger that checks whether the pincode number is exactly six digits or not.
Please give me a detailed answer.

I suppose you mean a character column in your question. If so, you can achieve this without a trigger by using a check constraint like this:
create table test_1
(
pincode varchar2(20)
constraint test_1$chk$id check (ltrim(pincode, '0123456789') is null and length(pincode)= 6)
);
Running these inserts:
insert into test_1 (pincode) values ('45sdgf65');
insert into test_1 (pincode) values ('4565');
Will result in the error:
ORA-02290: check constraint (ANDREW.TEST_1$CHK$ID) violated
But this one goes ok:
insert into test_1 (pincode) values ('034565');
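If you really do need a trigger (for instance, because you cannot add a constraint to an existing table), a minimal sketch of the same check as a trigger on the TEST_1 table above could look like this (the trigger name and error number are arbitrary):
create or replace trigger test_1$trg$pincode
before insert or update of pincode on test_1
for each row
begin
  -- reject any value that is not exactly six digit characters
  if ltrim(:new.pincode, '0123456789') is not null
     or length(:new.pincode) != 6
  then
    raise_application_error(-20001, 'PINCODE must be exactly six digits');
  end if;
end;
/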

Related

SUM totals by FOR ALL ENTRIES itab keys

I want to execute a SELECT query on a database table that has 6 key fields, let's assume they are keyA, keyB, ..., keyF.
As input parameters to my ABAP function module I do receive an internal table with exactly that structure of the key fields, each entry in that internal table therefore corresponds to one tuple in the database table.
Thus I simply need to select all tuples from the database table that correspond to the entries in my internal table.
Furthermore, I want to aggregate an amount column in that database table in exactly the same query.
In pseudo SQL the query would look as follows:
SELECT SUM(amount) FROM table WHERE (keyA, keyB, keyC, keyD, keyE, keyF) IN {internal table}.
However, this representation is not possible in ABAP OpenSQL.
Only a single column (such as keyA) can be stated there, not a composite key. Furthermore, I can only use 'selection tables' (those with SIGN, OPTION, LOW, HIGH) after the keyword IN.
Using FOR ALL ENTRIES seems feasible, however in this case I cannot use SUM since aggregation is not allowed in the same query.
Any suggestions?
For selecting records for each entry of an internal table, normally the for all entries idiom in ABAP Open SQL is your friend. In your case, you have the additional requirement to aggregate a sum. Unfortunately, the result set of a SELECT statement that works with for all entries is not allowed to use aggregate functions. In my eyes, the best way in this case is to compute the sum from the result set in the ABAP layer. The following example works in my system (note in passing: using the new ABAP language features that came with 7.40, you could considerably shorten the whole code).
report zz_ztmp_test.

start-of-selection.
  perform test.

* Database table ZTMP_TEST:
*   ID    - key field     - type CHAR10
*   VALUE - non-key field - type INT4
*   Content: 'A' 10, 'B' 20, 'C' 30, 'D' 40, 'E' 50
types: ty_entries type standard table of ztmp_test.

* ---
form test.
  data: lv_sum    type i,
        lt_result type ty_entries,
        lt_keys   type ty_entries.

  perform fill_keys changing lt_keys.

  if lt_keys is not initial.
    select * into table lt_result
           from ztmp_test
           for all entries in lt_keys
           where id = lt_keys-id.
  endif.

  perform get_sum using    lt_result
                  changing lv_sum.

  write: / lv_sum.
endform.

form fill_keys changing ct_keys type ty_entries.
  append: 'A' to ct_keys,
          'C' to ct_keys,
          'E' to ct_keys.
endform.

form get_sum using    it_entries type ty_entries
             changing value(ev_sum) type i.
  field-symbols: <ls_test> type ztmp_test.

  clear ev_sum.
  loop at it_entries assigning <ls_test>.
    add <ls_test>-value to ev_sum.
  endloop.
endform.
I would use FOR ALL ENTRIES to fetch all the related rows, then LOOP over the resulting table and add up the relevant field into a total. If you have ABAP 7.40 or later, you can use the REDUCE operator to avoid having to loop over the table manually:
DATA(total) = REDUCE i( INIT sum = 0
                        FOR wa IN itab
                        NEXT sum = sum + wa-field ).
One possible approach is to aggregate on the fly inside a SELECT loop, i.e. using the SELECT ... ENDSELECT statement.
Here is a sample that totals the order quantities per plant:
TYPES: BEGIN OF ls_collect,
         werks TYPE t001w-werks,
         menge TYPE ekpo-menge,
       END OF ls_collect.
DATA: lt_collect TYPE TABLE OF ls_collect.

SELECT werks UP TO 100 ROWS
  FROM t001w
  INTO TABLE @DATA(lt_werks).

SELECT werks, menge
  FROM ekpo
  INTO @DATA(order)
  FOR ALL ENTRIES IN @lt_werks
  WHERE werks = @lt_werks-werks.

  COLLECT order INTO lt_collect.

ENDSELECT.
The sample has no business meaning and is placed here just for educational purposes.
Another, more robust and modern approach is a CTE (Common Table Expression), available since ABAP 7.51. This technique is intended, among other things, for exactly such total/subtotal tasks:
WITH
+plants AS (
  SELECT werks UP TO 100 ROWS
    FROM t001w ),
+orders_by_plant AS (
  SELECT e~werks, SUM( menge ) AS menge
    FROM ekpo AS e
    INNER JOIN +plants AS m
      ON e~werks = m~werks
    GROUP BY e~werks )
SELECT werks, menge
  FROM +orders_by_plant
  ORDER BY werks
  INTO TABLE @DATA(lt_sums).

cl_demo_output=>display( lt_sums ).
The first table expression, +plants, stands in for your internal table; the second, +orders_by_plant, holds the quantity totals for those plants; and the last query is the final output query.

Query a manual list of data items

I would like to run a query involving joining a table to a manually generated list but am stuck trying to generate the manual list. There is an example of what I am attempting to do below:
SELECT
*
FROM
('29/12/2014', '30/12/2014', '30/12/2014') dates
;
Ideally I would want my output to look like:
29/12/2014
30/12/2014
31/12/2014
What's your Teradata release?
In TD14 there's STRTOK_SPLIT_TO_TABLE:
SELECT *
FROM TABLE (STRTOK_SPLIT_TO_TABLE(1 -- any dummy value
,'29/12/2014,30/12/2014,30/12/2014' -- any delimited string
,',' -- delimiter
)
RETURNS (outkey INTEGER
,tokennum INTEGER
,token VARCHAR(20) CHARACTER SET UNICODE) -- modify to match the actual size
) AS d
You can easily put this in a Derived Table and then join to it.
inkey (here the dummy value 1) is a numeric or string column, usually a key. Can be used for joining back to the original row.
outkey is the same as inkey.
tokennum is the ordinal position of the token in the input string.
token is the extracted substring.
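For example, a sketch of joining back to your own data through a derived table (my_table and its date_col are hypothetical names standing in for your table and date column):
SELECT t.*
FROM my_table AS t
JOIN (
   SELECT token
   FROM TABLE (STRTOK_SPLIT_TO_TABLE(1, '29/12/2014,30/12/2014,31/12/2014', ',')
        RETURNS (outkey INTEGER,
                 tokennum INTEGER,
                 token VARCHAR(20) CHARACTER SET UNICODE)
        ) AS d
) AS dates
ON t.date_col = CAST(dates.token AS DATE FORMAT 'DD/MM/YYYY')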
Try this:
select '29/12/2014'
union
select '30/12/2014'
union
...
It should work in Teradata as well as in MySQL.

tSQLt AssertEqualsTable - unexpected results when table schema doesn't match

I noticed the other day that you can write a test where there are more columns in the Actual table than in the Expected table, and the test will still pass if the data matches in the columns that exist in both.
Here is an example:
if exists(select * from INFORMATION_SCHEMA.ROUTINES where ROUTINE_SCHEMA = 'UnitTests_FirstTry' and ROUTINE_NAME = 'test_AssertEqualsTable_IgnoresExtraColumnsInActual')
begin
    drop procedure UnitTests_FirstTry.test_AssertEqualsTable_IgnoresExtraColumnsInActual
end
go

create procedure UnitTests_FirstTry.test_AssertEqualsTable_IgnoresExtraColumnsInActual
as
begin
    IF OBJECT_ID(N'tempdb..#Expected') IS NOT NULL DROP TABLE [#Expected];
    IF OBJECT_ID(N'tempdb..#Actual') IS NOT NULL DROP TABLE [#Actual];

    create table #expected( a int null) --, b int null, c varchar(10) null)
    create table #actual(a int, x money null)

    insert #expected (a) values (1)
    insert #actual (a, x) values (1, 22.51)

    --insert #expected (a, b, c) values (1,2,'test')
    --insert #actual (a, x) values (1, 22.51)

    exec tSQLt.AssertEqualsTable '#expected', '#actual'
end
go

exec tSQLt.Run 'UnitTests_FirstTry.test_AssertEqualsTable_IgnoresExtraColumnsInActual'
go
I noticed this when I removed some extra columns from the Expected table of a test that no longer needed those columns, but I forgot to remove the same columns from the Actual table and my test still passed, which to me was a bit off-putting.
This only happens when the Actual table has more columns. If the Expected table has more columns, an error is generated. Is this correct? Does anyone know the reasoning behind this behavior?
Although not particularly well documented in this respect, the AssertEqualsTable routine only looks at the data in the table, not whether the columns are the same. To check whether the table structures are the same, use AssertResultSetsHaveSameMetaData. I wrote a bit about this in this article.
You can of course run both in the same test, and the test will only pass if both checks pass.
I guess the reason for the split is that there may be rare instances where you care about either the data or the metadata being consistent for your test, but not both.
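A minimal sketch of such a combined test (the procedure name and the trivial tables are illustrative only, not taken from the question above):
create procedure UnitTests_FirstTry.test_DataAndMetaDataMatch
as
begin
    create table #expected (a int null);
    create table #actual (a int null);

    insert #expected (a) values (1);
    insert #actual (a) values (1);

    -- fails if the column definitions of the two result sets differ
    exec tSQLt.AssertResultSetsHaveSameMetaData
         'select * from #expected', 'select * from #actual';

    -- fails if the data differs
    exec tSQLt.AssertEqualsTable '#expected', '#actual';
end
go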

SQLite default value if null

Let's say I have a table called "Table":
Create Table "Table" (a int not null, b int default 1)
If I do INSERT INTO "Table" (a) VALUES (1), I will get back 1 for column a and 1 for column b, as the default value for column b is 1.
BUT if I do INSERT INTO "Table" (a, b) VALUES (1, null), I will get back 1 for column a and NULL for column b. Is there a way to fall back to a column's default value when a NULL is given?
No, if you are doing:
INSERT INTO my_table (a, b) values (1, null)
you are explicitly asking for a NULL value in column b.
In some RDBMSs you could technically use a trigger to override that behavior, but in SQLite you can't.
If you don't want NULLs in column b, then you should declare it as a non-nullable field, as you have done with a.
This solution for MySQL should mostly work for SQLite. The main, albeit major, difference is that there doesn't seem to be a Default() function in SQLite like there is in MySQL. I was able to replicate this functionality by keeping track of my database schema in PHP and then manually inserting the default value as the second argument to Coalesce(). See this Gist for example code.
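For illustration, a sketch of that approach against the table from the question (the literal 1 stands in for the default value that the application layer looked up from the schema, and :b is the possibly-NULL bound parameter):
INSERT INTO "Table" (a, b) VALUES (1, COALESCE(:b, 1));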
If the column with the default value can have the NOT NULL constraint, then you can either:
create the table using this syntax:
CREATE TABLE "Table" (a INT NOT NULL, b INT NOT NULL ON CONFLICT REPLACE DEFAULT 1);
and insert as usual, or
create the table as in your question but with b declared NOT NULL DEFAULT 1,
and insert using this syntax:
INSERT OR REPLACE INTO "Table" (a, b) VALUES (1, NULL);
https://database.guide/convert-null-values-to-the-columns-default-value-when-inserting-data-in-sqlite/
If the column with the default value cannot have the NOT NULL constraint (NULL must remain insertable), as in your question:
you will have to omit the column with the default value from the insert statement so that it gets its default.
Ideal would be:
INSERT INTO "Table" (a, b) VALUES (1, COALESCE(NULL, DEFAULT))
as in some other SQL dialects; this might be supported in a future release:
https://sqlite.org/forum/info/d7384e085b808b05
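A small self-contained sketch of both NOT NULL variants described above (the table names t1 and t2 are arbitrary):
-- Variant 1: conflict resolution declared on the column itself
CREATE TABLE t1 (a INT NOT NULL, b INT NOT NULL ON CONFLICT REPLACE DEFAULT 1);
INSERT INTO t1 (a, b) VALUES (1, NULL);            -- b is stored as 1

-- Variant 2: plain NOT NULL DEFAULT, resolution supplied by the statement
CREATE TABLE t2 (a INT NOT NULL, b INT NOT NULL DEFAULT 1);
INSERT OR REPLACE INTO t2 (a, b) VALUES (1, NULL); -- b is stored as 1

SELECT * FROM t1;  -- 1|1
SELECT * FROM t2;  -- 1|1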

sqlite, UPDATE OR REPLACE

I do something like
UPDATE OR REPLACE someTable SET a=1, b=2 WHERE c=3
I expect that if the row doesn't exist it will be inserted into the database. But nothing happens and I get no errors. How can I insert data, replace it if it already exists, and use a WHERE clause for the condition (instead of replacing because of a unique ID)?
Careful, INSERT OR REPLACE doesn't have the expected behaviour of an "UPDATE OR REPLACE".
If you don't set the values for all fields, INSERT OR REPLACE is going to replace them with default values, whereas with an UPDATE you expect to keep the old values.
See my answer here for an example: SQLite - UPSERT *not* INSERT or REPLACE
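To illustrate that difference, here is a hedged sketch (someTable here is a stand-in with a UNIQUE column c and an extra column d that the statements below never mention):
CREATE TABLE someTable (a INT, b INT, c INT UNIQUE, d INT DEFAULT 0);
INSERT INTO someTable (a, b, c, d) VALUES (9, 9, 3, 42);

-- INSERT OR REPLACE deletes the conflicting row and inserts a fresh one,
-- so d falls back to its default (0) even though it was never listed:
INSERT OR REPLACE INTO someTable (a, b, c) VALUES (1, 2, 3);
SELECT d FROM someTable WHERE c = 3;   -- returns 0

-- an UPDATE would have kept d at 42, but inserts nothing when no row matches:
-- UPDATE someTable SET a = 1, b = 2 WHERE c = 3;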
Try the following (note that an INSERT statement cannot take a WHERE clause; for INSERT OR REPLACE to act as an upsert keyed on c, that column needs a UNIQUE constraint):
INSERT OR REPLACE INTO someTable (a, b, c) VALUES (1, 2, 3)
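If your SQLite is 3.24 or newer, a sketch of the UPSERT syntax referenced in the linked answer (again assuming c carries a UNIQUE constraint) gives the asked-for behaviour while leaving unlisted columns untouched on update:
INSERT INTO someTable (a, b, c) VALUES (1, 2, 3)
ON CONFLICT(c) DO UPDATE SET a = excluded.a, b = excluded.b;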
