How to expand the rows limit in Teradata Studio? - teradata

I have a SQL command like the one below:
SELECT * FROM READ_NOS_FM (
ON (SELECT cast (NULL as JSON))
USING
LOCATION('/s3/bigXXXXXX.s3.amazonaws.com/JSONDATA/09380000/')
RETURNTYPE('NOSREAD_RECORD')
) AS D;
Result: the query returns over 2,000 rows, and then the output is blocked.
https://imgur.com/a/51F3jgA

Related

Spool space error when inserting large result set to table

I have a SQL query in Teradata that returns a result set of ~160m rows in (I guess) a reasonable time: depending on how good a day the server is having, it runs in 10-60 minutes.
I recently got access to space to save it as a table; however, using my initial query with the "insert into" command, I get error 2646 - no more spool.
The query structure is:
insert into <test_DB.tablename>
with smaller_dataset as
(
select
*
from
(
select
items
,case items
from
<Database.table>
QUALIFY ROW_NUMBER() OVER (PARTITION BY A,B ORDER BY C desc , LAST_UPDATE_DTM DESC) = 1
where 1=1
and other things
) T --irrelevant alias for subquery
QUALIFY ROW_NUMBER() OVER (PARTITION BY A, B ORDER BY C desc) = 1)
, employee_table as
(
select
items
,max(J1.field1) J1_field1
,max(J2.field1) J2_field1
,max(J3.field1) J3_field1
,max(J4.field1) J4_field1
from smaller_dataset S
self joins J1,J2,J3,J4
group by
non-aggregate items
)
select
items
case items
from employee_table
;
How can I break up the return into smaller chunks to prevent this error?
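One common workaround is to split the INSERT into bounded slices keyed on an indexed column, so each statement spools only a fraction of the rows. The sketch below is purely illustrative: Teradata is not available here, so Python's built-in sqlite3 stands in, and `big_source`/`target` are invented names (the real SELECT would be the CTE-based query from the question).

```python
import sqlite3

# Illustrative sketch only: sqlite3 stands in for Teradata, and
# "big_source"/"target" are invented table names.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE big_source (id INTEGER PRIMARY KEY, payload TEXT);
CREATE TABLE target (id INTEGER PRIMARY KEY, payload TEXT);
""")
conn.executemany("INSERT INTO big_source VALUES (?, ?)",
                 [(i, f"row{i}") for i in range(1, 101)])

chunk = 25  # rows per slice; tune so each slice fits in spool
lo, hi = conn.execute("SELECT MIN(id), MAX(id) FROM big_source").fetchone()
for start in range(lo, hi + 1, chunk):
    # Each INSERT .. SELECT touches only one bounded id range.
    conn.execute("""
        INSERT INTO target
        SELECT id, payload FROM big_source
        WHERE id BETWEEN ? AND ?
    """, (start, start + chunk - 1))
    conn.commit()

print(conn.execute("SELECT COUNT(*) FROM target").fetchone()[0])  # 100
```

In Teradata the same shape applies: run one INSERT ... SELECT per id range (or per partition), or ask the DBA whether your spool allocation can be raised.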

I'd like to keep characters only (no signs, numbers, or spaces at all).

It should be done with SQLite,
just like this;
Yes, I know it is quite an easy task if I use a UDF (User-Defined Function),
but I have severe difficulty with that,
so I am looking for another way (a no-UDF way) to achieve my goal.
Thanks
For your reference,
I leave a link to my failed attempt to make a UDF (using AutoHotkey):
SQLite/AutoHotkey, I have problem with Encoding of sqlite3_result_text return function
I believe that you could base the resolution on :-
WITH RECURSIVE eachchar(counter,rowid,c,rest) AS (
SELECT 1,rowid,'',mycolumn AS rest FROM mytable
UNION ALL
SELECT counter+1,rowid,substr(rest,1,1),substr(rest,2) FROM eachchar WHERE length(rest) > 0 LIMIT 10000 -- cap on recursion; must exceed the total character count across all rows
)
SELECT group_concat(c,'') AS mycolumn, myothercolumn, mycolumn AS original
FROM eachchar JOIN mytable ON eachchar.rowid = mytable.rowid
WHERE length(c) > 0
AND (
unicode(c) BETWEEN unicode('a') AND unicode('z')
OR unicode(c) BETWEEN unicode('A') AND unicode('Z')
)
GROUP BY rowid;
As a demo, perhaps consider the following :-
/* Create the Test Environment */
DROP TABLE IF EXISTS mytable;
CREATE TABLE IF NOT EXISTS mytable (mycolumn TEXT, myothercolumn);
/* Add the Testing data */
INSERT INTO mytable VALUES
('123-abc_"D E F()[]{}~`!##$%^&*-+=|\?><<:;''','A')
,('123-xyz_"X Y Z()[]{}~`!##$%^&*-+=|\?><<:;''','B')
,('123-abc_"A B C()[]{}~`!##$%^&*-+=|\?><<:;''','C')
;
/* split into individual characters, then concatenate only the required characters */
WITH RECURSIVE eachchar(counter,rowid,c,rest) AS (
SELECT 1,rowid,'',mycolumn AS rest FROM mytable
UNION ALL
SELECT counter+1,rowid,substr(rest,1,1),substr(rest,2) FROM eachchar WHERE length(rest) > 0 LIMIT 10000 -- cap on recursion; must exceed the total character count across all rows
)
SELECT group_concat(c,'') AS mycolumn, myothercolumn, mycolumn AS original
FROM eachchar JOIN mytable ON eachchar.rowid = mytable.rowid
WHERE length(c) > 0
AND (
unicode(c) BETWEEN unicode('a') AND unicode('z')
OR unicode(c) BETWEEN unicode('A') AND unicode('Z')
)
GROUP BY rowid;
/* Cleanup Test Environment */
DROP TABLE IF EXISTS mytable;
This results in a letters-only mycolumn alongside the original columns.
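For reference, here is a runnable version of the same recursive-CTE technique, using Python's built-in sqlite3 module; the table contents are simplified stand-ins for the test data above.

```python
import sqlite3

# Runnable sketch of the recursive-CTE letter filter; the input
# strings are simplified stand-ins for the test data above.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE mytable (mycolumn TEXT, myothercolumn TEXT);
INSERT INTO mytable VALUES ('a1b2c3', 'A'), ('X-Y_Z', 'B');
""")

rows = conn.execute("""
WITH RECURSIVE eachchar(rowid_, c, rest) AS (
    SELECT rowid, '', mycolumn FROM mytable
    UNION ALL
    -- peel off one character per step until the remainder is empty
    SELECT rowid_, substr(rest, 1, 1), substr(rest, 2)
    FROM eachchar WHERE length(rest) > 0
)
SELECT group_concat(c, '') AS letters_only, myothercolumn
FROM eachchar JOIN mytable ON eachchar.rowid_ = mytable.rowid
WHERE length(c) > 0
  AND (unicode(c) BETWEEN unicode('a') AND unicode('z')
       OR unicode(c) BETWEEN unicode('A') AND unicode('Z'))
GROUP BY rowid_
""").fetchall()
print(rows)  # e.g. [('abc', 'A'), ('XYZ', 'B')]
```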

Use fields of outer query in group by of subquery

My table: CREATE TABLE T(id INT PRIMARY KEY, value INT UNIQUE)
I thought this query would produce the median of value in the table. But SQLite v3.9.1 gives me the error no such column: ot.value on the line with group by, yet it processes the line with where successfully, although that uses a similar expression. What is the problem with the query?
select
ot.id,
ot.value
from T as ot
where (
select count(c) > count(DISTINCT c) from (
select count(*) c from T as it
where it.value != ot.value
group by it.value < ot.value
) LessMore
)
The same query succeeds in PostgreSQL and prints what was expected. MySQL gives the error Error Code: 1054. Unknown column 'ot.value' in 'where clause'
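As an aside, if the end goal is just the median, SQLite can compute it without correlating an outer column into a nested GROUP BY at all, using the ORDER BY / LIMIT / OFFSET idiom. The sketch below assumes an odd row count (an even count would need the two middle rows averaged):

```python
import sqlite3

# Median via ORDER BY/LIMIT/OFFSET; assumes an odd number of rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE T(id INT PRIMARY KEY, value INT UNIQUE)")
conn.executemany("INSERT INTO T VALUES (?, ?)",
                 [(1, 10), (2, 30), (3, 20), (4, 50), (5, 40)])

median = conn.execute("""
SELECT value FROM T
ORDER BY value
LIMIT 1 OFFSET (SELECT (COUNT(*) - 1) / 2 FROM T)
""").fetchone()[0]
print(median)  # 30
```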

Use for loop in Teradata SQL

I would like to do something like this in Teradata SQL, as a MACRO or PROCEDURE:
CREATE MACRO insertloop ( val1 VARCHAR( 1000)) AS
(
sublist_i = ' SELECT sublist from table3 '
FOR sublist_i in sublist :
INSERT INTO table5
SELECT t.id, t.address, sum(t.amount)
FROM table2 AS t
WHERE
t.id in sublist_i
AND t.address = :val1
GROUP BY t.id, t.address
);
Explanation:
table3 contains lists of ids (in blocks of 1000 ids):
(12, 546, 999)
(45,789)
(970, 990, 123)
Main reason:
table2 is very large (1 billion records).
A full join requires too much memory, so we need
to create a table table3 containing disjoint lists of ids
and iterate over those lists.
But I am not sure how to correct this MACRO to make it valid.
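A Teradata MACRO cannot loop; this usually becomes either a stored procedure with a cursor FOR loop, or driver code on the client. The sketch below shows the client-side shape in Python against sqlite3 (purely illustrative: the table names follow the question, but the comma-separated encoding of sublist is an assumption, as is the val1 value):

```python
import sqlite3

# Client-side loop over the id blocks in table3; sqlite3 stands in
# for Teradata, and the comma-separated sublist format is assumed.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table2 (id INT, address TEXT, amount INT);
CREATE TABLE table3 (sublist TEXT);  -- one comma-separated id block per row
CREATE TABLE table5 (id INT, address TEXT, total INT);
INSERT INTO table2 VALUES (12,'NY',5),(12,'NY',7),(546,'NY',3),(45,'LA',9);
INSERT INTO table3 VALUES ('12,546,999'),('45,789');
""")

val1 = 'NY'  # plays the role of the macro parameter :val1
for (sublist,) in conn.execute("SELECT sublist FROM table3").fetchall():
    ids = [int(x) for x in sublist.split(",")]
    placeholders = ",".join("?" * len(ids))  # one ? per id in the block
    conn.execute(f"""
        INSERT INTO table5
        SELECT id, address, SUM(amount)
        FROM table2
        WHERE id IN ({placeholders}) AND address = ?
        GROUP BY id, address
    """, (*ids, val1))

print(conn.execute("SELECT id, address, total FROM table5 ORDER BY id").fetchall())
```

In Teradata proper, the equivalent server-side construct is a stored procedure with a FOR cursor loop over table3 issuing one INSERT ... SELECT per block.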

SQL Paging on more than 10 lac Records

I am using MS SQL 2008 R2. One of my tables has more than 10 lac rows (1 lac is 10^5, or 100,000, so 10 lac is 1,000,000).
I want to bind this to an ASP GridView. I tried custom paging with a page size and index, but the grid is not bound; a timeout error occurred.
I also tried executing the stored procedure directly, but it takes a long time.
How can I optimize this procedure?
My procedure
ALTER PROCEDURE SP_LOAN_APPROVAL_GET_LIST
    @USERCODE NVARCHAR(50) ,
    @FROMDATE DATETIME = NULL ,
    @TODATE DATETIME = NULL ,
    @PAGESIZE INT ,
    @PAGENO INT ,
    @TOTALROW BIGINT OUTPUT
AS
BEGIN
SELECT *
FROM ( SELECT DOC_NO ,
              DOC_DATE_GRE ,
              EMP_CODE ,
              EMP_NAME_ENG as Name ,
              LOAN_AMOUNT ,
              DESC_ENG as Description ,
              REMARKS ,
              ROW_NUMBER() OVER(
                  ORDER BY ( SELECT 1 )
              ) AS [ROWNO]
       from VW_PER_LOAN
       Where ISNULL( POST_FLAG , 'N' ) = 'N'
         and ISNULL( CANCEL_FLAG , 'N' ) != 'Y'
         and DOC_DATE_GRE between ISNULL( @FROMDATE , DOC_DATE_GRE )
                              and ISNULL( @TODATE , DOC_DATE_GRE )
         and BRANCH in ( SELECT *
                         FROM DBO.FN_SSP_GetAllowedBranches(@USERCODE)
                       )
     ) T
WHERE T.ROWNO BETWEEN ((@PAGENO-1)*@PAGESIZE)+1 AND @PAGESIZE*(@PAGENO)
SELECT @TOTALROW = COUNT(*)
from VW_PER_LOAN
Where ISNULL( POST_FLAG , 'N' ) = 'N'
  and ISNULL( CANCEL_FLAG , 'N' ) != 'Y'
  and DOC_DATE_GRE between ISNULL( @FROMDATE , DOC_DATE_GRE ) and ISNULL( @TODATE , DOC_DATE_GRE )
  and BRANCH in ( SELECT *
                  FROM DBO.FN_SSP_GetAllowedBranches(@USERCODE)
                )
END
Thanks
The first thing to do is to look at your execution plan and discuss it with a DBA if you don't understand it.
The obvious thing that stands out is that your where clause has pretty much every column reference wrapped in some sort of function. That makes them expressions and makes the SQL optimizer unable to use any covering indexes that might exist.
It looks like you are calling a table-valued function as an uncorrelated subquery. That would worry me with respect to performance. I'd probably move that out of the query. Instead run it just once and populate a temporary table.
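Beyond that, if the grid pages forward sequentially, keyset (seek) paging is usually much cheaper than re-numbering the whole result set with ROW_NUMBER() on every call: each page seeks past the last key of the previous page via an index. A sketch with Python/sqlite3 (illustrative table and column names; in SQL Server 2008 the same query would use SELECT TOP (@PAGESIZE) ... WHERE DOC_NO > @LASTDOC ORDER BY DOC_NO):

```python
import sqlite3

# Keyset paging sketch: sqlite3 stands in for SQL Server, and the
# "loans" table is an invented stand-in for VW_PER_LOAN.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE loans (doc_no INTEGER PRIMARY KEY, emp_name TEXT)")
conn.executemany("INSERT INTO loans VALUES (?, ?)",
                 [(i, f"emp{i}") for i in range(1, 1001)])

def fetch_page(last_doc_no, page_size):
    return conn.execute("""
        SELECT doc_no, emp_name FROM loans
        WHERE doc_no > ?            -- seek past the previous page's last key
        ORDER BY doc_no
        LIMIT ?
    """, (last_doc_no, page_size)).fetchall()

page1 = fetch_page(0, 50)               # first page
page2 = fetch_page(page1[-1][0], 50)    # resume after doc_no 50
print(page1[0][0], page2[0][0])  # 1 51
```

The trade-off is that keyset paging only supports next/previous navigation cheaply; jumping to an arbitrary page number still needs the ROW_NUMBER() approach.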
