We are currently facing an issue with window functions in SAP HANA SPS12.
One of our errors occurs when we use the STRING_AGG function.
Here is the code:
/*
CREATE TABLE TEST_STR_AGG (
GROUP_ID varchar(1)
, CLASS_ID varchar(5)
, MEMBER varchar(5)
);
*/
TRUNCATE TABLE TEST_STR_AGG;
INSERT INTO TEST_STR_AGG VALUES ('A', 'A_XX1', 'A0001');
INSERT INTO TEST_STR_AGG VALUES ('A', 'A_XX1', 'A0002');
INSERT INTO TEST_STR_AGG VALUES ('A', 'A_XX1', 'A0003');
INSERT INTO TEST_STR_AGG VALUES ('A', 'A_XX2', 'A0004');
INSERT INTO TEST_STR_AGG VALUES ('A', 'A_XX2', 'A0005');
INSERT INTO TEST_STR_AGG VALUES ('A', 'A_XX3', 'A0006');
INSERT INTO TEST_STR_AGG VALUES ('B', 'B_XX1', 'B0001');
INSERT INTO TEST_STR_AGG VALUES ('B', 'B_XX2', 'B0002');
INSERT INTO TEST_STR_AGG VALUES ('B', 'B_XX3', 'B0003');
INSERT INTO TEST_STR_AGG VALUES ('B', 'B_XX4', 'B0004');
INSERT INTO TEST_STR_AGG VALUES ('B', 'B_XX4', 'B0005');
INSERT INTO TEST_STR_AGG VALUES ('B', 'B_XX4', 'B0006');
SELECT GROUP_ID
, CLASS_ID
, STRING_AGG( MEMBER, ' ; ' ORDER BY MEMBER ASC ) as MEMBERS
FROM TEST_STR_AGG
GROUP BY GROUP_ID
, CLASS_ID ;
STRING_AGG used to work perfectly with an ORDER BY clause before a patch was installed. Now it only works on a small number of rows, as in the example above. When we work on more than 500k rows, some rows disappear from the result if we add the ORDER BY clause inside STRING_AGG; without it, the query works.
We have the same issue with the FIRST_VALUE and LAST_VALUE functions.
It seems to be a core optimization rule which is corrupting the results...
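For reference, this is the kind of sanity check we use to spot the dropped rows (a sketch against the sample table above; it only needs LENGTH and REPLACE, so it should run on any revision):
-- Report groups where the aggregated string holds fewer members than COUNT(*)
SELECT GROUP_ID, CLASS_ID, ROW_COUNT
     , LENGTH(MEMBERS) - LENGTH(REPLACE(MEMBERS, ';', '')) + 1 AS AGG_COUNT
FROM (
    SELECT GROUP_ID, CLASS_ID
         , COUNT(*) AS ROW_COUNT
         , STRING_AGG(MEMBER, ';' ORDER BY MEMBER ASC) AS MEMBERS
    FROM TEST_STR_AGG
    GROUP BY GROUP_ID, CLASS_ID
)
WHERE LENGTH(MEMBERS) - LENGTH(REPLACE(MEMBERS, ';', '')) + 1 <> ROW_COUNT;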
Does anybody know anything about this, please?
Many thanks
Yep, it's a known bug. Don't have the SAP note ready right now, but it's fixed in a current revision.
Found the SAP note for this:
2365540 - Aggregation Function AVG() Returns ? / NULL Values When Used in Combination With STRING_AGG Including an ORDER BY Clause
Solution
Apply SAP HANA database Revision >= 112.07 (SPS11) or >= 122.02 (SPS12).
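To check which revision you are running before and after patching, you can query the M_DATABASE monitoring view (assuming your user is allowed to read it):
-- Returns the full version string, e.g. 1.00.122.02.xxxxxxxxxx
SELECT VERSION FROM "SYS"."M_DATABASE";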
I'm very new to tSQLt and am having some difficulty with what should really be a very simple test.
I have added a column to the SELECT statement executed within a stored procedure.
How do I test in a tSQLt test that the column is included in the resultset from that stored procedure?
Generally, when adding a column to the output of a stored procedure, you will want to test that the column both exists and is populated with the correct data. Since a test that checks the data also implicitly checks that the column exists, we can design a single test that does both:
CREATE PROCEDURE MyTests.[test stored procedure values MyNewColumn correctly]
AS
BEGIN
-- Create Actual and Expected table to hold the actual results of MyProcedure
-- and the results that I expect
CREATE TABLE MyTests.Actual (FirstColumn INT, MyNewColumn INT);
CREATE TABLE MyTests.Expected (FirstColumn INT, MyNewColumn INT);
-- Capture the results of MyProcedure into the Actual table
INSERT INTO MyTests.Actual
EXEC MySchema.MyProcedure;
-- Create the expected output
INSERT INTO MyTests.Expected (FirstColumn, MyNewColumn)
VALUES (7, 12);
INSERT INTO MyTests.Expected (FirstColumn, MyNewColumn)
VALUES (25, 99);
-- Check that Expected and Actual tables contain the same results
EXEC tSQLt.AssertEqualsTable 'MyTests.Expected', 'MyTests.Actual';
END;
Generally, the stored procedure you are testing relies on other tables or other stored procedures. Therefore, you should become familiar with FakeTable and SpyProcedure as well: http://tsqlt.org/user-guide/isolating-dependencies/
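For example, a minimal FakeTable setup might look like this (MySchema.MySourceTable is a hypothetical table that MyProcedure is assumed to read from):
-- Swap the real table for an empty, constraint-free copy so the test
-- fully controls its contents
EXEC tSQLt.FakeTable 'MySchema.MySourceTable';
-- Seed only the rows this test needs
INSERT INTO MySchema.MySourceTable (FirstColumn, MyNewColumn)
VALUES (7, 12), (25, 99);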
Another option if you are just interested in the structure of the output and not the content (and you are running on SQL2012 or greater) would be to make use of sys.dm_exec_describe_first_result_set_for_object in your test.
This dmo (dynamic management object) returns a variety of information about the first result set returned for a given object.
In my example below, I have only used a few of the columns returned by this dmo but others are available if, for example, your output includes decimal data types.
In this test, I populate a temporary table (#expected) with information about how I expect each column to be returned - such as name, datatype and nullability.
I then select the equivalent columns from the dmo into another temporary table (#actual).
Finally I use tSQLt.AssertEqualsTable to compare the contents of the two tables.
Having said all that, whilst I frequently write tests to validate the structure of views or tables (using tSQLt.AssertResultSetsHaveSameMetaData), I have never found the need to just test the result set contract for procedures. Dennis is correct, you would typically be interested in asserting that the various columns in your result set are populated with the correct values and by the time you've covered that functionality you should have covered every column anyway.
if object_id('dbo.myTable') is not null drop table dbo.myTable;
go
if object_id('dbo.myTable') is null
begin
create table dbo.myTable
(
Id int not null primary key
, ColumnA varchar(32) not null
, ColumnB varchar(64) null
)
end
go
if object_id('dbo.myProcedure') is not null drop procedure dbo.myProcedure;
go
create procedure dbo.myProcedure
as
begin
select Id, ColumnA, ColumnB from dbo.myTable;
end
go
exec tSQLt.NewTestClass @ClassName = 'myTests';
if object_id('[myTests].[test result set on SQL2012+]') is not null drop procedure [myTests].[test result set on SQL2012+];
go
create procedure [myTests].[test result set on SQL2012+]
as
begin
; with expectedCte (name, column_ordinal, system_type_name, is_nullable)
as
(
-- The first row sets up the data types for the #expected but is excluded from the expected results
select cast('' as nvarchar(200)), cast(0 as int), cast('' as nvarchar(200)), cast(0 as bit)
-- This is the result we are expecting to see
union all select 'Id', 1, 'int', 0
union all select 'ColumnA', 2, 'varchar(32)', 0
union all select 'ColumnB', 3, 'varchar(64)', 1
)
select * into #expected from expectedCte where column_ordinal > 0;
--! Act
select
name
, column_ordinal
, system_type_name
, is_nullable
into
#actual
from
sys.dm_exec_describe_first_result_set_for_object(object_id('dbo.myProcedure'), 0);
--! Assert
exec tSQLt.AssertEqualsTable '#expected', '#actual';
end
go
exec tSQLt.Run '[myTests].[test result set on SQL2012+]'
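For comparison, the structural contract could also be asserted with tSQLt.AssertResultSetsHaveSameMetaData, mentioned above; a sketch against the same dbo.myProcedure (the isnull() wrapper forces not-null metadata to match the table definition):
create procedure [myTests].[test result set metadata on SQL2012+]
as
begin
    -- Compares column names, types and nullability of the two result sets,
    -- ignoring the actual data
    exec tSQLt.AssertResultSetsHaveSameMetaData
        'select isnull(cast(0 as int), 0) as Id
              , isnull(cast('''' as varchar(32)), '''') as ColumnA
              , cast(null as varchar(64)) as ColumnB'
      , 'exec dbo.myProcedure';
end
go
exec tSQLt.Run '[myTests].[test result set metadata on SQL2012+]'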
I have one table named Test with columns named ID,Name,UserValue,AverageValue
ID,Name,UserValue,AverageValue (As Appears on Table)
1,a,10,NULL
2,a,20,NULL
3,b,5,NULL
4,b,10,NULL
5,c,25,NULL
I know how to average the numbers via (SELECT Name, AVG(UserValue) FROM Test GROUP BY Name)
Giving me:
Name,Column1(AVG(Query)) (As Appears on GridView1 via databind when I run the website)
a,15
b,7.5
c,25
What I need to do is make the table appear as follows, by inserting the calculated AVG() into the AverageValue column server-side:
ID,Name,UserValue,AverageValue (As Appears on Table)
1,a,10,15
2,a,20,15
3,b,5,7.5
4,b,10,7.5
5,c,25,25
Conditions:
The AVG(UserValue) must be inserted into Test table AverageValue.
If new entries are made, the AverageValue should be updated to match AVG(UserValue).
So what I am looking for is a SQL command that is something like this:
INSERT INTO Test (AverageValue) VALUES (SELECT Name, AVG(UserValue) FROM Test GROUP BY Name)
I have spent a considerable amount of time searching on Google for an example, but have had no luck. Any examples would be greatly appreciated. Many thanks in advance.
Try this:
with toupdate as (
select t.*, avg(uservalue) over (partition by name) as newavg
from test t
)
update toupdate
set AverageValue = newavg;
The CTE toupdate is an updatable CTE, so you can just use it in an update statement as if it were a table.
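Note: if UserValue is an INT column, SQL Server's AVG() returns an integer, so name b would get 7 rather than 7.5. Casting inside the window function keeps the decimal part:
with toupdate as (
    select t.*,
           -- cast before averaging so the result keeps its decimal part
           avg(cast(uservalue as decimal(10, 2))) over (partition by name) as newavg
    from test t
)
update toupdate
set AverageValue = newavg;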
I believe this will do the trick for you. I use the merge statement a lot! It's perfect for doing things like this.
Peace,
Katherine
use [test_01];
go
if object_id (N'tempdb..##test', N'U') is not null
drop table ##test;
go
create table ##test (
[id] [int] identity(1, 1) not null,
[name] [nvarchar](max) not null,
[user_value] [int] not null,
[average_value] [decimal](5, 2),
constraint [pk_test_id] primary key([id])
);
go
insert into ##test
([name], [user_value])
values (N'a',10),
(N'a',20),
(N'b',5),
(N'b',10),
(N'c',25);
go
with [average_builder] as (select [name],
avg(cast([user_value] as [decimal](5, 2))) as [average_value]
from ##test
group by [name])
merge into ##test as target
using [average_builder] as source
on target.[name] = source.[name]
when matched then
update set target.[average_value] = source.[average_value];
go
select [id], [name], [user_value], [average_value] from ##test;
go
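On the question's second condition (keeping AverageValue current as new entries arrive): both statements above are one-shot, so they would need to be re-run after each load. If recomputing on read is acceptable, a view sidesteps the stored column entirely (a sketch; dbo.Test stands in for the question's table):
create view dbo.TestWithAverage
as
select [ID], [Name], [UserValue],
       -- computed on every read, so it never goes stale
       avg(cast([UserValue] as decimal(5, 2))) over (partition by [Name]) as [AverageValue]
from dbo.Test;
go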
I am using a SELECT within an INSERT to add a previous record's value. This requires the following code:
insert into My_table
values ('a', (select value_with_sp_char from table where criterion_to_guarantee_single_row=true), 'b', 'c');
Now, whenever value_with_sp_char contains a character like _, &, %, ., a comma, or -, the query fails.
Any ideas on how I can get that value inserted correctly?
Yes, you are right, I solved this.
I was not entirely truthful in the way I represented this question.
I was trying to add the value to a variable, like so:
declare
  lv_txt_var varchar2(255) := '';
begin
  select value_with_sp_char into lv_txt_var from table where criterion_to_guarantee_single_row=true;
  if (input_param = null) then
    insert into table values ('a', lv_txt_var, 'b', 'c');
  end if;
end;
When I used the above block, it failed because of the special character. However, when I modified it to use the SELECT-based insert, it worked.
You really don't need PL/SQL for this insert. It's better to write it this way:
INSERT INTO table_a
SELECT 'a', value_with_sp_char, 'b', 'c'
FROM table_b
WHERE criterion_to_guarantee_single_row = true;
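One cause worth ruling out (an assumption on my part, since the client isn't named in the question): if these statements are run through SQL*Plus or SQL Developer, an ampersand in the data is treated as a substitution variable and will prompt for, or mangle, the value. Disabling substitution before running the insert avoids that:
-- Stop the client from treating '&' in string literals as a substitution variable
SET DEFINE OFF;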
I am trying to insert 15530 records into a certain table using the SQLite3 shell, but I get the error below. I searched and found that SQLITE_MAX_COMPOUND_SELECT, which defaults to 500, is the reason, but I don't know how to change it from the shell.
"Error: too many terms in compound SELECT"
INSERT INTO table_name (my_id, my_name) VALUES
(1, 'Aaliyah'),
(2, 'Alvar Aalto'),
(3, 'Willie Aames'),
...
(15530, 'name');
The multiple-value INSERT INTO syntax was introduced in SQLite 3.7.11, so the original syntax is fine on recent versions of SQLite. On older versions, you can use an alternative syntax.
However, the limit SQLITE_MAX_COMPOUND_SELECT cannot be raised at runtime, so you need to split your inserts into batches of 500 rows each. This will be more efficient than inserting one row per query. E.g.
BEGIN;
INSERT INTO table_name (id, name) VALUES (1, 'foo'), ..., (500, 'bar');
INSERT INTO table_name (id, name) VALUES (501, 'baz'), ..., (1000, 'zzz');
...
COMMIT;
INSERT INTO doesn't work that way on versions before 3.7.11.
Try this:
BEGIN TRANSACTION;
INSERT INTO author (author_id, author_name) VALUES (1, 'Aaliyah');
INSERT INTO author (author_id, author_name) VALUES (2, 'Alvar Aalto');
INSERT INTO author (author_id, author_name) VALUES (3, 'Willie Aames');
...
END TRANSACTION;
http://www.sqlite.org/lang_insert.html
I have a TableA in SchemaA and a TableB in SchemaB.
SchemaB.TableB has product information taken from SchemaA.TableA (that is the main database for product profiles).
Whenever an update happens to product information in SchemaA.TableA, that update should be reflected in SchemaB.TableB.
How can I write a trigger for it?
I have ProductID in both tables.
Why not create a BEFORE UPDATE trigger? The changes to tableB will only commit if the entire transaction commits.
EDIT: if you want updates to tableB, try this:
--drop table testtab_a;
create table testtab_a
(
col1 varchar2(10) primary key,
col2 varchar2(10)
);
--drop table testtab_b;
create table testtab_b
(
col1 varchar2(10) primary key,
col2 varchar2(10)
);
insert into testtab_a values ('A', 'B');
insert into testtab_a values ('X', 'B');
insert into testtab_a values ('Z', 'C');
insert into testtab_b values ('A', 'B');
insert into testtab_b values ('X', 'B');
insert into testtab_b values ('Z', 'C');
CREATE OR REPLACE TRIGGER testtab_tr
BEFORE UPDATE
ON testtab_a REFERENCING NEW AS NEW OLD AS OLD
FOR EACH ROW
begin
update testtab_b
set col1 = :new.col1,
col2 = :new.col2
where col1 = :old.col1;
end;
select * from testtab_a;
select * from testtab_b;
update testtab_a set col2 = 'H' where col1 = 'A';
EDIT2: If the two tables live in different schemas in the same database, you can simply qualify the table with its schema name; a dblink is only needed when they sit in separate databases.
Inside the trigger use:
update someSchema.testtab_b
set col1 = :new.col1,
    col2 = :new.col2
where col1 = :old.col1;
Make sure you have the proper grants set up by the DBAs to do your updates, as well as any synonyms you may need (depending on your environment).
Finally, if you are trying to keep these two tables in sync, don't forget about inserts and deletes, which can be handled via a similar trigger; see the sketch below.
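A sketch of such a combined trigger (same example tables as above; column handling kept deliberately minimal):
CREATE OR REPLACE TRIGGER testtab_sync_tr
AFTER INSERT OR UPDATE OR DELETE
ON testtab_a
FOR EACH ROW
BEGIN
    IF INSERTING THEN
        INSERT INTO testtab_b (col1, col2) VALUES (:new.col1, :new.col2);
    ELSIF UPDATING THEN
        UPDATE testtab_b
           SET col1 = :new.col1,
               col2 = :new.col2
         WHERE col1 = :old.col1;
    ELSIF DELETING THEN
        DELETE FROM testtab_b WHERE col1 = :old.col1;
    END IF;
END;
/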
This is NOT the way to do replication, however, and this approach should be used very sparingly.