Snowflake - Recursion exceeded max iteration count (100)

WITH RECURSIVE T1 (USER_NAME, ID, PARENT_ID, LVL, ROOT_ID, PATH) AS (
    -- ANCHOR MEMBER.
    SELECT USR_NM, USR_NO,
           MNGR_EMPLY_NBR AS PARENT_ID,
           1 AS LVL,
           USR_NO AS ROOT_ID,
           TO_CHAR(FRST_NM || ' ' || LST_NM) AS PATH
    FROM EMPLOYEE
    UNION ALL
    -- RECURSIVE MEMBER.
    SELECT T2.USR_NM, T2.USR_NO,
           T2.MNGR_EMPLY_NBR AS PARENT_ID,
           T1.LVL + 1,
           T1.ROOT_ID,
           T1.PATH || '|' || T2.FRST_NM || ' ' || T2.LST_NM AS PATH
    FROM EMPLOYEE T2, T1
    WHERE T2.MNGR_EMPLY_NBR = T1.ID
)
SELECT * FROM T1;
While running the above code I'm getting "Recursion exceeded max iteration count (100)" in Snowflake. Can anyone guide me to a solution, or is there a way to rewrite this code without recursion?

This is standard infinite-recursion protection.
You can read about it here: Potential for Infinite Loops
To raise the limit, you must submit a request to Snowflake Support stating the value you need.
Referring to the documentation:
In theory, constructing a recursive CTE incorrectly can cause an infinite loop. In practice, Snowflake prevents this by limiting the number of iterations that the recursive clause will perform in a single query. The MAX_RECURSIONS parameter limits the number of iterations.
To change MAX_RECURSIONS for your account, please contact Snowflake Support.
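Separately, check whether the recursion can converge at all. As written, the anchor member has no WHERE clause, so every employee seeds the hierarchy, and any cycle in MNGR_EMPLY_NBR (for example, two employees recorded as each other's manager) will iterate until the limit. A hedged sketch of both fixes, assuming top-level employees have a NULL manager number and that name paths are unique enough to detect cycles:
WITH RECURSIVE T1 (USER_NAME, ID, PARENT_ID, LVL, ROOT_ID, PATH) AS (
    -- Anchor: seed only the top of the hierarchy.
    SELECT USR_NM, USR_NO, MNGR_EMPLY_NBR, 1, USR_NO,
           TO_CHAR(FRST_NM || ' ' || LST_NM)
    FROM EMPLOYEE
    WHERE MNGR_EMPLY_NBR IS NULL
    UNION ALL
    SELECT T2.USR_NM, T2.USR_NO, T2.MNGR_EMPLY_NBR, T1.LVL + 1, T1.ROOT_ID,
           T1.PATH || '|' || T2.FRST_NM || ' ' || T2.LST_NM
    FROM EMPLOYEE T2
    JOIN T1 ON T2.MNGR_EMPLY_NBR = T1.ID
    -- Guard: stop if this employee is already on the path (a data cycle).
    WHERE POSITION(T2.FRST_NM || ' ' || T2.LST_NM, T1.PATH) = 0
)
SELECT * FROM T1;
Guarding on the name path is approximate if two employees share a name; tracking IDs in a separate path column is more robust.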

The limit is about to be removed as part of the 2022_02 behavior change bundle.
https://community.snowflake.com/s/article/Hierarchical-Data-Queries-Iteration-Limits-No-Longer-Enforced
Here is the pending behaviour change log: https://community.snowflake.com/s/article/Pending-Behavior-Change-Log
2022_02
Disabled by default in 6.7 (Mar 9-10); can be enabled for testing
Planned to be enabled by default in March
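If you want to test the new behaviour ahead of the rollout, Snowflake's bundle-management system functions can report and toggle the bundle per account (a hedged sketch; both calls require the ACCOUNTADMIN role):
-- Check the bundle's current status for this account...
SELECT SYSTEM$BEHAVIOR_CHANGE_BUNDLE_STATUS('2022_02');
-- ...and opt in ahead of the default enablement.
SELECT SYSTEM$ENABLE_BEHAVIOR_CHANGE_BUNDLE('2022_02');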

Related

Function returning varchar inside select

Trying to generalize the SQL that splits a string/varchar into records. Here is the working SQL:
SELECT test.* FROM test JOIN (
SELECT level nbr, REGEXP_SUBSTR('1,3', '(.*?)(,|$)', 1, level, NULL, 1) value
FROM dual CONNECT BY level <= REGEXP_COUNT('1,3', ',')+1 ORDER BY level
) requested ON test.id=requested.value
What I mean by generalizing is moving the recurring SQL (in this case, the bit between the parentheses in the working SQL above) into a procedure/function so it can be reused. In this case I'm trying to find a way to insert a generated inner select statement. This is how the generalized SQL might look:
SELECT t.* FROM table t JOIN (<GENERATED_INNER_SELECT>) my ON t.x=my.x;
However, I haven't succeeded yet. Calling my function to generate the inner select statement directly resulted in:
ORA-00900: invalid SQL statement
And using the function in the generalized SQL resulted in:
ORA-00907: missing right parenthesis
None of these errors make any sense to me in this context.
Perhaps you can help? Check out the full case on dbfiddle.
If you generate a SQL fragment to use as a subquery then the overall statement that embeds that as a subquery would have to be executed dynamically too.
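For illustration only, a hedged sketch of that dynamic route (generate_inner_select stands in for the question's fragment-building function and is not defined here):
DECLARE
  l_sql VARCHAR2(4000);
  l_cur SYS_REFCURSOR;
BEGIN
  -- Embed the generated fragment in the outer statement...
  l_sql := 'SELECT t.* FROM test t JOIN ('
        || generate_inner_select('1,3')
        || ') requested ON t.id = requested.value';
  -- ...and then the embedding statement must be executed dynamically too.
  OPEN l_cur FOR l_sql;
  -- fetch from l_cur as usual
END;
/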
It would be simpler to have the function actually doing the split itself, and returning a collection - as a schema-level collection type:
CREATE TYPE T_NUMBERS AS TABLE OF NUMBER
/
CREATE OR REPLACE FUNCTION split(p_string VARCHAR2, p_separator VARCHAR2 DEFAULT ',')
RETURN T_NUMBERS AS
  L_NUMBERS T_NUMBERS;
BEGIN
  -- Extract each value between separators, collecting them into the nested table.
  SELECT REGEXP_SUBSTR(p_string, '(.*?)(' || p_separator || '|$)', 1, level, NULL, 1)
  BULK COLLECT INTO L_NUMBERS
  FROM dual
  CONNECT BY level <= REGEXP_COUNT(p_string, p_separator) + 1;
  RETURN L_NUMBERS;
END split;
/
SELECT * FROM TEST
WHERE id MEMBER OF (split('1,3'))
/
ID NAM
---------- ---
1 foo
3 foe
or if you prefer the table collection expression approach:
SELECT t.*
FROM TABLE(split('1,3')) tmp
JOIN test t ON t.id = tmp.column_value;
It would be even simpler if the query could be called with a collection of numbers in the first place, but without seeing how the call is being made - and the string generated - it's hard to say exactly how you'd need to change that. You could even use a built-in collection type then, instead of having to define your own:
SELECT t.*
FROM TABLE(SYS.ODCINUMBERLIST(1,3)) tmp
JOIN test t ON t.id = tmp.column_value;
but it relies on the caller being able to pass the numbers in rather than a string (note the lack of single quotes...)

How to write sqlite transaction that rolls back on any error

I have searched extensively on this and I have found a lot of people asking the question but no answers that included code examples to help me understand.
I'd like to write a transaction (in sql using the command line sqlite3 interface) that performs several update statements, and if any of them fail for any reason, rolls back the transaction. The default behaviour appears to be to roll-back the statement that failed but commit the others.
This tutorial appears to advise that it's sufficient to add begin; and rollback; before and after the statements, but that's not true because I've tried it with deliberate errors and the non-error statements were definitely committed (which I don't want).
This example really confuses me because the two interlocutors seem to give conflicting advice at the end - one says that you need to write error handling (without giving any examples) whereas the other says that no error handling is needed.
My MWE is as follows:
create table if not exists accounts (
id integer primary key not null,
balance decimal not null default 0
);
insert into accounts (id, balance) values (1,200),(2,300);
begin transaction;
update accounts set balance = balance - 100 where id = 1;
update accounts set balance = balance + 100 where id = 2;
update accounts set foo = 23; -- deliberate error
commit;
The idea is that none of these changes should be committed.
The sqlite3 command-line shell is intended to be used interactively, so it allows you to continue after an error.
To abort on the first error instead, use the -bail option:
sqlite3 -bail my.db < mwe.sql
If you are executing line by line, then the idea is that you first run these commands:
create table if not exists accounts (
id integer primary key not null,
balance decimal not null default 0
);
insert into accounts (id, balance) values (1,200),(2,300);
begin transaction;
update accounts set balance = balance - 100 where id = 1;
update accounts set balance = balance + 100 where id = 2;
update accounts set foo = 23; -- deliberate error
At this point, if you have no errors, you run the commit:
commit;
All the updates should be visible if you open a second connection and query the table.
On the other hand, if you got an error, instead of committing you roll back:
rollback;
All the updates should be rolled back.
If you are doing it programmatically, e.g. in Java, you would enclose the updates in a try-catch block, and commit at the end of the try, or roll back inside the catch.

Not able to display average

I want to display an average score, but it's not getting displayed even though the code executes. Here is my code:
set serveroutput on size 10000;
declare
s_student_id grade.student_id%type;
g_score grade.score%type;
begin
for c in (select distinct grade.student_id, avg(grade.score) into s_student_id, g_score from grade inner join class on grade.class_id = class.class_id group by grade.student_id having count(class.course_id) > 4)
loop
dbms_output.put_line('Student' || c.student_id || ' :' || g_score);
end loop;
exception
when no_data_found then dbms_output.put_line('There are no students who selected more than 4 courses');
end;
/
Output:
anonymous block completed
Student1 :
I think this is what you're after:
set serveroutput on size 10000;
declare
v_counter integer := 0;
begin
for rec in (select grade.student_id,
avg(grade.score) g_score
from grade
inner join class on grade.class_id = class.class_id
group by grade.student_id
having count(class.course_id) > 4)
loop
v_counter := v_counter + 1;
dbms_output.put_line('Student: ' || rec.student_id || ', avg score: ' || rec.g_score);
end loop;
if v_counter = 0 then
raise no_data_found;
end if;
exception
when no_data_found then
dbms_output.put_line('There are no students who selected more than 4 courses');
end;
/
There are several points to note:
Good formatting of your sql statements (and pl/sql) will aid you when it comes to understanding, debugging and maintaining your code. If you can read it easily, chances are you'll understand it more quickly.
If you're using a cursor-for-loop, you don't need the into clause - that's only for when you are using an explicit select statement. You also don't need to declare your own variables to hold the data returned by the cursor - the cursor-for-loop declares the record variable to return the row into for you - in your example, that would be c, which I've renamed to rec for clarity.
Giving identifiers names that reflect what they are/do is also essential for ease of maintenance, readability etc.
When referring to the contents of the field from the cursor, use the record variable, e.g. rec.student_id, rec.g_score. Thus, it is important to give your columns aliases if you're doing anything other than a straight select (e.g. I've given avg(grade.score) an alias, but I didn't need to bother for grade.student_id)
If there are no records returned by the cursor, you will never get a no_data_found exception. Instead, you'll have to check to see if you had any rows returned - the easiest way to do this is to have some sort of counter. Once the loop has completed, you can then check the counter. If it shows that no rows were returned, you can then raise the no_data_found error yourself - or, more simply, you could skip the exception block and just put the dbms_output statement there instead. YMMV.
If you are going to go with the exception block, in production code you would most likely want to raise an actual error. In that case you would use RAISE or, if you need to pass a user-defined error message out, RAISE_APPLICATION_ERROR (a minimal sketch follows these notes).
Finally, I'm guessing this is some sort of homework question, and as such, the presence of the dbms_output statements is ok. However, out in the real world, you only ever want to use dbms_output for ad-hoc debugging or in non-production code because relying on dbms_output to pass information around to calling code is just asking for trouble. It's not robust, and there are far better, reliable methods of passing data around.
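For illustration, a minimal hedged sketch of the RAISE_APPLICATION_ERROR route (the -20001 error number is an arbitrary pick from the user-defined range -20000 to -20999):
begin
  raise no_data_found; -- stand-in for "the cursor loop processed no rows"
exception
  when no_data_found then
    raise_application_error(-20001,
      'There are no students who selected more than 4 courses');
end;
/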

Stored procedure slow when called from web, fast from Management Studio

I have a stored procedure that times out every single time it's called from the web application.
I fired up SQL Profiler and traced the calls that time out, and finally found out these things:
When I executed the statements from within SQL Server Management Studio, with the same arguments (in fact, I copied the procedure call from the Profiler trace and ran it), it finished in 5-6 seconds on average.
But when called from the web application, it takes in excess of 30 seconds (in the trace), so my web page actually times out by then.
Apart from the fact that my web application has its own user, everything else is the same (same database, connection, server, etc.).
I also tried running the query directly in Management Studio as the web application's user, and it doesn't take more than 6 seconds.
How do I find out what is happening?
I am assuming it has nothing to do with the fact that we use BLL > DAL layers or table adapters, as the trace clearly shows the delay is in the actual procedure. That is all I can think of.
EDIT: I found out from this link that ADO.NET sets ARITHABORT to true - which is good most of the time, but sometimes this happens, and the suggested workaround is to add the WITH RECOMPILE option to the stored proc. In my case it's not working, but I suspect it's something very similar to this. Does anyone know what else ADO.NET does, or where I can find the spec?
I've had a similar issue arise in the past, so I'm eager to see a resolution to this question. Aaron Bertrand's comment on the OP led to Query times out when executed from web, but super-fast when executed from SSMS, and while the question is not a duplicate, the answer may very well apply to your situation.
In essence, it sounds like SQL Server may have a corrupt cached execution plan. You're hitting the bad plan with your web server, but SSMS lands on a different plan since there is a different setting on the ARITHABORT flag (which would otherwise have no impact on your particular query/stored proc).
See ADO.NET calling T-SQL Stored Procedure causes a SqlTimeoutException for another example, with a more complete explanation and resolution.
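A quick way to test the plan-cache theory from SSMS (a hedged sketch; dbo.YourSlowProc and @param1 are placeholder names) is to match the application's ARITHABORT setting, or to force the next execution to compile a fresh plan:
-- Match the application's connection setting (ADO.NET and SSMS often differ on ARITHABORT),
-- then run the proc with the same arguments as the web app:
SET ARITHABORT OFF;
EXEC dbo.YourSlowProc @param1 = 123;

-- Or mark the procedure so its next execution compiles a fresh plan:
EXEC sp_recompile N'dbo.YourSlowProc';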
I also experienced queries running slowly from the web and fast in SSMS, and I eventually found out that the problem was something called parameter sniffing.
The fix for me was to change all the parameters that are used in the sproc to local variables.
e.g. change:
ALTER PROCEDURE [dbo].[sproc]
@param1 int
AS
SELECT * FROM [Table] WHERE ID = @param1
to:
ALTER PROCEDURE [dbo].[sproc]
@param1 int
AS
DECLARE @param1a int
SET @param1a = @param1
SELECT * FROM [Table] WHERE ID = @param1a
Seems strange, but it fixed my problem.
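For what it's worth, the local-variable trick works because the optimizer can no longer sniff the parameter value at compile time. On SQL Server 2008 and later, the OPTIMIZE FOR UNKNOWN hint requests the same behaviour directly; a hedged sketch against the same example:
ALTER PROCEDURE [dbo].[sproc]
@param1 int
AS
-- The hint tells the optimizer to use average statistics
-- instead of the sniffed value of @param1:
SELECT * FROM [Table] WHERE ID = @param1
OPTION (OPTIMIZE FOR UNKNOWN)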
Not to spam, but in the hope this helps others: our system was seeing a high rate of timeouts.
I tried marking the stored procedure for recompilation with sp_recompile, and this resolved the issue for that one SP.
Ultimately, a larger number of SPs were timing out, many of which had never done so before. After using DBCC DROPCLEANBUFFERS and DBCC FREEPROCCACHE, the incidence of timeouts dropped significantly. There are still isolated occurrences: some where I suspect plan regeneration is taking a while, and some where the SPs are genuinely under-performant and need re-evaluation.
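For reference, the two DBCC commands mentioned above clear the entire plan cache and buffer pool respectively, so every query on the instance pays a recompilation/re-read cost afterwards - reasonable during an incident, heavy-handed as a routine fix:
DBCC FREEPROCCACHE;    -- drop all cached execution plans
DBCC DROPCLEANBUFFERS; -- drop clean data pages from the buffer pool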
Could it be that some other DB call made before the web application calls the SP is keeping a transaction open? That could be a reason for this SP to wait when called by the web application. I'd isolate the call in the web application (put it on a new page) to ensure that some prior action in the web application isn't causing this issue.
You can target specific cached execution plans via:
SELECT cp.plan_handle, st.[text]
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(plan_handle) AS st
WHERE [text] LIKE N'%your troublesome SP or function name etc%'
And then remove only the execution plans causing issues via, for example:
DBCC FREEPROCCACHE (0x050006003FCA862F40A19A93010000000000000000000000)
I've now got a job running every 5 minutes that looks for slow running procedures or functions and automatically clears down those execution plans if it finds any:
if exists (
    SELECT cpu_time, *
    FROM sys.dm_exec_requests req
    CROSS APPLY sys.dm_exec_sql_text(sql_handle) AS sqltext
    --order by req.total_elapsed_time desc
    WHERE ([text] LIKE N'%your troublesome SP or function name etc%')
    and cpu_time > 8000
)
begin
    SELECT cp.plan_handle, st.[text]
    into #results
    FROM sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_sql_text(plan_handle) AS st
    WHERE [text] LIKE N'%your troublesome SP or function name etc%'

    delete #results where text like 'SELECT cp.plan_handle%'
    --select * from #results

    declare @handle varbinary(max)
    declare @handleconverted varchar(max)
    declare @sql varchar(1000)

    DECLARE db_cursor CURSOR FOR
    select plan_handle from #results

    OPEN db_cursor
    FETCH NEXT FROM db_cursor INTO @handle
    WHILE @@FETCH_STATUS = 0
    BEGIN
        --e.g. DBCC FREEPROCCACHE (0x050006003FCA862F40A19A93010000000000000000000000)
        print @handle
        set @handleconverted = '0x' + CAST('' AS XML).value('xs:hexBinary(sql:variable("@handle"))', 'VARCHAR(MAX)')
        print @handleconverted
        set @sql = 'DBCC FREEPROCCACHE (' + @handleconverted + ')'
        print 'DELETING: ' + @sql
        EXEC(@sql)
        FETCH NEXT FROM db_cursor INTO @handle
    END
    CLOSE db_cursor
    DEALLOCATE db_cursor

    drop table #results
end
Simply recompiling the stored procedure (a table function in my case) worked for me.
Like @Zane said, it could be due to parameter sniffing. I experienced the same behaviour, so I took a look at the execution plan of the procedure and at the same statements run as a plain batch (I copied all the statements from the procedure, declared the parameters as variables, and assigned the variables the same values the parameters had). However, the execution plans looked completely different: the sp execution took 3-4 seconds, while the plain batch with the exact same values returned instantly.
After some googling I found an interesting read about that behaviour: Slow in the Application, Fast in SSMS?
When compiling the procedure, SQL Server does not know that the value of @fromdate changes, but compiles the procedure under the assumption that @fromdate has the value NULL. Since all comparisons with NULL yield UNKNOWN, the query cannot return any rows at all, if @fromdate still has this value at run-time. If SQL Server would take the input value as the final truth, it could construct a plan with only a Constant Scan that does not access the table at all (run the query SELECT * FROM Orders WHERE OrderDate > NULL to see an example of this). But SQL Server must generate a plan which returns the correct result no matter what value @fromdate has at run-time. On the other hand, there is no obligation to build a plan which is the best for all values. Thus, since the assumption is that no rows will be returned, SQL Server settles for the Index Seek.
The problem was that I had parameters which could be left null, and if they were passed as null they would be initialised with a default value.
create procedure dbo.[procedure]
@dateTo datetime = null
as
begin
if (@dateTo is null)
begin
select @dateTo = GETUTCDATE()
end
select foo
from dbo.[table]
where createdDate < @dateTo
end
After I changed it to
create procedure dbo.[procedure]
@dateTo datetime = null
as
begin
declare @to datetime = coalesce(@dateTo, getutcdate())
select foo
from dbo.[table]
where createdDate < @to
end
it worked like a charm again.
--BEFORE
CREATE PROCEDURE [dbo].[SP_DEMO]
(
@ToUserId bigint = null
)
AS
BEGIN
SELECT * FROM tbl_Logins WHERE LoginId = @ToUserId
END

--AFTER CHANGING IT, IT WORKS FINE
CREATE PROCEDURE [dbo].[SP_DEMO]
(
@ToUserId bigint = null
)
AS
BEGIN
DECLARE @Toid bigint = null
SET @Toid = @ToUserId
SELECT * FROM tbl_Logins WHERE LoginId = @Toid
END

PL/SQL parser to identify the operation on table

I am writing a PL/SQL parser to identify the operations (SELECT, INSERT, DELETE) performed on tables when I run a procedure, function or package.
GOAL: The goal of this tool is to identify which tables will be affected by running the procedure/function, in order to prepare better test cases.
Any better ideas or tools would really help a lot.
INPUT:
A SQL file with a procedure, or a proc file.
OUTPUT required is:
SELECT from: First_table, secondTable
-> In procedure XYZ --This is if the procedure is calling one more procedure
INSERT into: SomeTable
INSERT into: SomeDiffTable
-> END of procedure XYZ --End of one more procedure.
DELETE from: xyzTable
INSERT into: OnemoreTable
My requirement is: when I am parsing proc1 and it calls another proc2, I have to go inside proc2 to find out which operations are performed there, then come back to proc1 and continue.
For this I have to store all the procedures somewhere, and while parsing I have to check each token (word between spaces) against that temporary storage to find out whether it is a procedure or not.
My logic takes a lot of time. Can anybody suggest better logic to achieve my GOAL?
There's also the possibility of triggers being involved. That adds an additional layer of complexity.
I'd say you're better off mining DBA_DEPENDENCIES with a recursive query to determine impact analysis in the abstract; it won't capture dynamic SQL, but nothing will 100% of the time. In your case, proc1 depends on proc2, and proc2 depends on whatever it depends on, and so forth. It won't tell you the nature of the dependency - INSERT, UPDATE, DELETE, SELECT - but it's a beginning.
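A hedged sketch of that mining query (APP_SCHEMA and PROC1 are placeholder names; NOCYCLE stops the walk if objects depend on each other):
SELECT LEVEL AS depth,
       owner, name, type,
       referenced_owner, referenced_name, referenced_type
FROM dba_dependencies
START WITH owner = 'APP_SCHEMA' AND name = 'PROC1'
CONNECT BY NOCYCLE PRIOR referenced_owner = owner
                AND PRIOR referenced_name = name
                AND PRIOR referenced_type = type;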
If you're really interested in determining the actual impact of a single-variable-value run of a procedure, implement it in a non-production system, and then turn auditing on your system up to 11:
begin
  for i in (select owner, object_type, object_name
            from dba_objects
            where owner in ([list of application schemas])
              and object_type in ('TABLE', 'PACKAGE', 'PROCEDURE', 'FUNCTION', 'VIEW'))
  loop
    -- AUDIT acts on the owner-qualified object name
    execute immediate 'AUDIT ALL ON ' || i.owner || '.' || i.object_name ||
                      ' BY SESSION';
  end loop;
end;
/
Run your test, and see which objects got touched as a result of the execution by mining the audit trail. It's not bulletproof, as it only audits objects that got touched by that execution, but it does tell you how they got touched.
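For example, a hedged sketch of mining the trail afterwards (this assumes traditional auditing writing to the database, i.e. AUDIT_TRAIL=DB; widen or narrow the time window to bracket your test run):
SELECT username, owner, obj_name, action_name, timestamp
FROM dba_audit_trail
WHERE timestamp > SYSDATE - 1/24  -- roughly the last hour
ORDER BY timestamp;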
