ROWNUM equivalent in Teradata - teradata

Is there anything equivalent to "ROWNUM" in Teradata? I have to implement the query below in Teradata; it runs fine in Oracle. Any ideas or suggestions?
INSERT INTO ADDRES(CITY,STATEPROVINCEID) SELECT 'sample',AA.ID FROM
AA WHERE ROWNUM<=1000

As there's no ORDER BY you can simply use:
INSERT INTO ADDRES(CITY,STATEPROVINCEID)
SELECT TOP 1000 'sample',AA.ID
FROM AA
But this is not random, it's just the first 1000 rows found on an AMP.
To get sampled rows:
INSERT INTO ADDRES(CITY,STATEPROVINCEID)
SELECT 'sample',AA.ID
FROM AA
SAMPLE 1000
If you are a statistician and need a true random sample switch to:
SAMPLE RANDOMIZED ALLOCATION 1000
You can also get multiple samples, up to 16, e.g.
SAMPLE 1000,2000 -- use column SAMPLEID to know which row belongs to which sample (see the sketch below, after the stratified example)
or a fractional sample:
SAMPLE 0.1 -- 10% of the rows
or a stratified sample, i.e. samples from different groups:
SAMPLE WHEN col< 0 THEN 10
WHEN col <100 THEN 20
ELSE 50
END
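For the multiple-sample form mentioned above, a minimal sketch of selecting SAMPLEID alongside the data (using the table from the question) could be:
SELECT AA.ID, SAMPLEID  -- SAMPLEID is 1 for rows in the first sample, 2 for the second
FROM AA
SAMPLE 1000, 2000;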

I'm not sure whether it will help in your situation, but for future reference, Teradata has a ROW_NUMBER() function. It works pretty much like everyone else's:
ROW_NUMBER() OVER ([PARTITION BY <column>] ORDER BY <column1> [, <column2> ...])
Teradata has the added advantage of being able to constrain on it using QUALIFY, instead of having to use a derived table.
Select
...
from
...
QUALIFY ROW_NUMBER() over (order by ...) <= n
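Applied to the query from the question (assuming ordering by AA.ID is acceptable, since QUALIFY needs an ORDER BY), a minimal sketch would be:
INSERT INTO ADDRES(CITY,STATEPROVINCEID)
SELECT 'sample', AA.ID
FROM AA
QUALIFY ROW_NUMBER() OVER (ORDER BY AA.ID) <= 1000;  -- keeps the 1000 lowest IDs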

Related

Oracle 11g PLSQL - Splitting A Record Out Into Constituent Records Over Time - Row Generating

I have a dataset (a view) that has a numeric field "WR_EST_MHs". If that field exceeds a certain number of man hours (120 or 60, depending on 2 other fields' values), I need to split it out into constituent records and spread those hours over future weeks.
The OH_UG_Key and 1kMCM_Flag fields determine the threshold for splitting. For example, if the OH_UG = 1 AND 1kMCM_Flag = 'N' and the WR_EST_MHs > 120, then spread the WR_EST_MHs value over as many records as is necessary, in 120 MH increments, changing only the WRSchedDate and WRSchedDate_Key fields (advancing each by one week).
Each OH_UG / 1kMCM_Flag / WR_EST_MHs scenario is as follows:
This is an example of what I need to do:
I thought that something like this might work, but I haven't worked with levels before:
with cte as
(Select * from "STJOF"."vfactScheduledWAWork"
)
select WR_Key, WP_Key, WRShedDate, DistSA_Key_Hash, CrewHQ_Key_Hash, Priority_Key_Hash, JobType_Key_Hash, WRStatus_Key_Hash, PerfBy_Key, OHUG_Key, 1kMCM_Flag, WR_EST_MHs
from cte cross join table(cast(multiset(select level from dual
  connect by level >= WR_EST_MHs / 120
  ) as sys.odcinumberlist))
order by WR_Key;
I also thought this could be done with a "tally table" which I have a little experience with. I really don't know where to begin on this one.
So I would say that a "Tally Table" will work if it is applied correctly. (Or, in this case, a tally view.)
First, break the logic for the hour breakout into a function, so we don't have CASE WHEN expressions scattered everywhere, like so:
CREATE OR REPLACE FUNCTION get_hour_breakout(in_ohug_key IN NUMBER, in_1kmcm_flag IN VARCHAR2, in_tot_hours IN NUMBER)
RETURN NUMBER
IS
  hours NUMBER;
BEGIN
  hours :=
    case when in_ohug_key=2 and in_1kmcm_flag='N' and in_tot_hours>60 then 60 else
      case when in_ohug_key=2 and in_1kmcm_flag='Y' and in_tot_hours>60 and in_tot_hours<=120 then 60 else
        case when in_ohug_key=2 and in_1kmcm_flag='Y' and in_tot_hours>120 then 120 else
          120
        end
      end
    end;
  RETURN(hours);
END get_hour_breakout;
This way, if the hour breakout logic changes, it can be tweaked in one place.
Second, join to a dynamic "tally" view like so:
select wr_key,
WP_Key,
wrscheddate+idxkey.nnn*7 wrscheddate,
to_char(wrscheddate+idxkey.nnn*7,'yyyymmdd') WRSchedDate_Key,
OHUG_Key,
kMCM_Flag,
case when (wr_est_mhs-idxkey.nnn*get_hour_breakout(ohug_key, kmcm_flag, wr_est_mhs))>=get_hour_breakout(ohug_key, kmcm_flag, wr_est_mhs) then get_hour_breakout(ohug_key, kmcm_flag, wr_est_mhs) else wr_est_mhs-idxkey.nnn*get_hour_breakout(ohug_key, kmcm_flag, wr_est_mhs) end wr_est_mhs
from yourView vwrk inner join (SELECT ROWNUM-1 nnn
FROM ( SELECT 1 just_a_column
FROM dual
CONNECT BY LEVEL <= 52
)
) idxkey on vwrk.wr_est_mhs/get_hour_breakout(ohug_key, kmcm_flag, wr_est_mhs) > idxkey.nnn
By using CONNECT BY LEVEL we, in effect, generate a set of zero-indexed rows; then, by joining to it where the hours divided by the breakout are greater than the generated index (nnn), we get the right number of rows for each group.
For example, if the function returns 120 and the hours are 100 you get a single row, so it stays 1 to 1. If the function returns 120 and the hours are 500, however, you get 5 rows because 500/120 = 4.1666…, which in the join gives rows 4, 3, 2, 1, 0. Then the rest is simple math to determine the number of hours per breakout (rows 0 through 3 each get the full 120 hours and row 4 gets the remaining 20).
This could also be improved by moving the function call into the lower view so it is only called once per row. The inline tally view could also be made into its own view, depending on how much maintainability you need to build into it.

Make each distinct value of a column a new column and count in SQLite

I have a similar question like the one here: distinct values as new columns & count
But instead of having only 3 values (in the case above: drivers), I have about 1 million, so I cannot list all of them in my code. How can I do that in SQLite?
So I kind of want something like the code below to be repeated for i= 1 to length(DISTINCT(driver)):
SELECT model
, COUNT(model) as drives
, SUM(distance) as distance
, SUM(CASE WHEN driver=DISTINCT(driver)[i] THEN 1 ELSE 0 END) AS DISTINCT(driver)[i]
FROM new_table
GROUP BY model;
SQLite has no mechanism for dynamic SQL. You have to read the list of all possible drivers from the database, and construct the query with a separate SUM(CASE...) column for each value in your program.
But a large number of columns is inefficient, and once you go beyond 2000 columns it will not work at all (SQLite's default column limit).
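As a rough sketch of that approach (table and column names taken from the question), you can even let SQLite assemble the statement text and then execute the generated string from your program as a second step:
SELECT 'SELECT model, COUNT(model) AS drives, SUM(distance) AS distance, '
       || GROUP_CONCAT('SUM(CASE WHEN driver = ' || QUOTE(driver)
                       || ' THEN 1 ELSE 0 END) AS "' || driver || '"', ', ')
       || ' FROM new_table GROUP BY model;'
FROM (SELECT DISTINCT driver FROM new_table);
(This assumes driver names contain no embedded double quotes; escape them if they might.)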
It might be a better idea to return each matrix entry individually:
SELECT model,
driver,
COUNT(*) AS drives_for_this_model_and_driver
FROM new_table
GROUP BY model, driver
ORDER BY model, driver;

What does sqlite do when you use group by and don't give it an aggregate function?

It doesn't give an error so what is it supposed to do?
From my experimentation it gives you the last row from the data set.
Say you do the following, against a table where column A holds 1, 2, 3, 4 and every row has the same C value ('a'):
select A, AVG(B), C from table group by C
Then it will say:
4, 4.3, a
Standard SQL does not allow this.
SQLite (and MySQL) just give the value from some random record in the group.
(It happens to be the last one in this case because of the way the computation is implemented.)
SQLite (beginning with version 3.7.11) guarantees that when you use MIN or MAX, such unaggregated values come from a record that matches the MIN/MAX.
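For example, a minimal sketch with a table t(A, B, C) like the one above:
SELECT A, MAX(B), C
FROM t
GROUP BY C;  -- since 3.7.11, A is taken from a row holding the group's maximum B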

Loop through variable number of tables

I run a simulation with a varying number of iterations, and each iteration creates an output table like table1, table2, table3... They all have the same structure:
ID | value
but varying number of rows.
For each table, I want to compute the average of the 'value' column and show them in a new table with the column "averages" like:
tableNumber | averageValue
1 | 516
2 | 512
3 | 521
... | ...
Is this possible in SQLite if the number of tables is quite high? And if not, how can I achieve this in a different way?
Thanks a lot in advance :-)
Instead of creating different tables, put the results in the same table, and have a column which indicates which batch or set the row belongs to. Then when you query the table you can filter on that column so that you're working only with the desired batch/set. Put an index on that column to improve the efficiency of the query and make it run faster. There will be no need to save the average results to separate tables either. Your query can produce the result without your having to persist the results as data in another table.
select batch, avg(value) as AvgValue
from simulation
where batch = 100
group by batch
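If you also want the tableNumber | averageValue layout from the question for every batch at once, something along these lines should work (the index name is just an assumption):
CREATE INDEX IF NOT EXISTS idx_simulation_batch ON simulation(batch); -- speeds up per-batch filtering
SELECT batch AS tableNumber, AVG(value) AS averageValue
FROM simulation
GROUP BY batch
ORDER BY batch;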

Fastest Way to Count Distinct Values in a Column, Including NULL Values

The Transact-Sql Count Distinct operation counts all non-null values in a column. I need to count the number of distinct values per column in a set of tables, including null values (so if there is a null in the column, the result should be (Select Count(Distinct COLNAME) From TABLE) + 1.
This is going to be repeated over every column in every table in the DB. Includes hundreds of tables, some of which have over 1M rows. Because this needs to be done over every single column, adding Indexes for every column is not a good option.
This will be done as part of an ASP.net site, so integration with code logic is also ok (i.e.: this doesn't have to be completed as part of one query, though if that can be done with good performance, then even better).
What is the most efficient way to do this?
Update After Testing
I tested the different methods from the answers given on a good representative table. The table has 3.2 million records, dozens of columns (a few with indexes, most without). One column has 3.2 million unique values. Other columns range from all Null (one value) to a max of 40K unique values. For each method I performed four tests (with multiple attempts at each, averaging the results): 20 columns at one time, 5 columns at one time, 1 column with many values (3.2M) and 1 column with a small number of values (167). Here are the results, in order of fastest to slowest
Count/GroupBy (Cheran)
CountDistinct+SubQuery (Ellis)
dense_rank (Eriksson)
Count+Max (Andriy)
Testing Results (in seconds):
Method             20_Columns   5_Columns   1_Column (Large)   1_Column (Small)
1) Count/GroupBy         10.8         4.8               2.8               0.14
2) CountDistinct         12.4         4.8               3                 0.7
3) dense_rank           226          30                 6                 4.33
4) Count+Max             98.5        44                16                12.5
Notes:
Interestingly enough, the two methods that were fastest (by far, with only a small difference between them) were both methods that submitted separate queries for each column (and in the case of result #2, the query included a subquery, so there were really two queries submitted per column). Perhaps the gains that would be achieved by limiting the number of table scans are small in comparison to the performance hit taken in terms of memory requirements (just a guess).
Though the dense_rank method is definitely the most elegant, it seems that it doesn't scale well (see the result for 20 columns, which is by far the worst of the four methods), and even on a small scale just cannot compete with the performance of Count.
Thanks for the help and suggestions!
SELECT COUNT(*)
FROM (SELECT ColumnName
FROM TableName
GROUP BY ColumnName) AS s;
The inner GROUP BY produces one row per distinct value, including NULL; the outer COUNT(*) counts all of those rows, whereas COUNT(ColumnName) would ignore the NULL row.
I think you should try to keep the number of table scans down and count all columns in one table in one go. Something like this could be worth trying.
;with C as
(
select dense_rank() over(order by Col1) as dnCol1,
dense_rank() over(order by Col2) as dnCol2
from YourTable
)
select max(dnCol1) as CountCol1,
max(dnCol2) as CountCol2
from C
Test the query at SE-Data
A development on OP's own solution:
SELECT
COUNT(DISTINCT acolumn) + MAX(CASE WHEN acolumn IS NULL THEN 1 ELSE 0 END)
FROM atable
Run one query that counts the number of distinct values and adds 1 if there are any NULLs in the column (using a subquery):
Select Count(Distinct COLUMNNAME) +
Case When Exists
(Select * from TABLENAME Where COLUMNNAME is Null)
Then 1 Else 0 End
From TABLENAME
You can try:
count(
distinct coalesce(
your_table.column_1, your_table.column_2
-- cast them if the two columns are not the same type
)
) as COUNT_TEST
The coalesce function lets you combine two columns by replacing NULL values in the first with values from the second.
I used this in my case and it gave the correct result.
Not sure this would be the fastest but might be worth testing. Use case to give null a value. Clearly you would need to select a value for null that would not occur in the real data. According to the query plan this would be a dead heat with the count(*) (group by) solution proposed by Cheran S.
SELECT
COUNT( distinct
(case when [testNull] is null then 'dbNullValue' else [testNull] end)
)
FROM [test].[dbo].[testNullVal]
With this approach you can also count more than one column:
SELECT
COUNT( distinct
(case when [testNull1] is null then 'dbNullValue' else [testNull1] end)
),
COUNT( distinct
(case when [testNull2] is null then 'dbNullValue' else [testNull2] end)
)
FROM [test].[dbo].[testNullVal]
