Want to run an insert statement in a loop in Teradata

I'm inserting data into a table from another table using the query below in Teradata, and I want to keep running this statement until the table reaches 20GB, so I want to run it in a loop. However, when I wrote a loop, it gives an "invalid query" error when I try to execute it. Could you please help me, as I'm new to Teradata? Thanks.
insert into schema1.xyx select * from schema2.abc;

if/loop/etc. are only allowed in Stored Procedures.
Looping a billion times will be quite inefficient (and will result in much more than 20GB). Better to check the current size of table abc from dbc.TableSizeV, calculate how many loops you need, and then use a cross join:
insert into schema1.xyx
select t.*
from schema2.abc AS t
cross join
    ( -- calculated number of loops
      select top 100000 *
      -- any table with a large number of rows
      from sys_calendar.calendar
    );
But it is much easier to use sampling.
Calculate the number of rows needed and then:
insert into schema1.xyx
select *
from schema2.abc
sample with replacement 100000000;
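As a rough sketch of the size calculation both approaches need - the 20GB target and table names come from the question, and the standard dbc.TableSizeV columns are assumed:
-- How many rows of schema2.abc are needed to reach roughly 20GB.
-- CurrentPerm is per-AMP, so it is summed across all AMPs.
SELECT CAST(20e9 * rc.row_cnt / sz.current_bytes AS BIGINT) AS rows_needed
FROM (SELECT SUM(CurrentPerm) AS current_bytes
      FROM dbc.TableSizeV
      WHERE DatabaseName = 'schema2'
        AND TableName = 'abc') AS sz
CROSS JOIN (SELECT COUNT(*) AS row_cnt FROM schema2.abc) AS rc;
The result can go straight into the SAMPLE clause, or be divided by the row count to get the number of cross-join copies.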

Related

Snowflake, Python/Jupyter analysis

I am new to Snowflake, and running a query to get a couple of days' data - this returns more than 200 million rows and takes a few days. I tried running the same query in Jupyter - and the kernel restarts/dies before the query ends. Even if it got into Jupyter - I doubt I could analyze the data in any reasonable timeline (but maybe using dask?).
I am not really sure where to start - I am trying to check the data for missing values, and my first instinct was to use Jupyter - but I am lost at the moment.
My next idea is to stay within Snowflake - and check the columns there with case statements (e.g. sum(case when column_value = '' then 1 else 0 end) as number_missing_values).
Does anyone have any ideas/direction I could try - or know if I'm doing something wrong?
Thank you!
Not really the answer you are looking for, but:
sum(case when column_value = '' then 1 else 0 end) as number_missing_values
When you say missing value, this will only find values that are an empty string.
This can also be written in a simpler form as:
count_if(column_value = '') as number_missing_values
The database already knows how many rows are in a table, and it knows how many nulls each column has. If you are loading data into a table, it might make more sense not to load empty strings and to use null instead; then, for no compute cost, you can run:
count(*) - count(column) as number_empty_values
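As a sketch of combining those checks over a whole table in one pass (my_table, order_id, and customer_name are hypothetical names):
select count(*)                      as total_rows,
       count(*) - count(order_id)    as null_order_ids,     -- nulls, at no real compute cost
       count_if(customer_name = '')  as empty_customer_names
from my_table;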
Also of note, if you have two tables in Snowflake you can compare them via MINUS,
aka:
select * from table_1
minus
select * from table_2
This is useful for finding missing rows, but you do have to do it in both directions.
Then you can HASH rows, or hash the whole table via HASH_AGG.
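A minimal sketch of the HASH_AGG comparison, reusing the hypothetical table_1/table_2 from above:
-- If the two aggregate hashes match, the tables hold the same rows.
select (select hash_agg(*) from table_1) as hash_1,
       (select hash_agg(*) from table_2) as hash_2,
       hash_1 = hash_2 as tables_match;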
But normally when looking for missing data, you have an external system, so the driver is 'what can that system handle' and finding common ground.
Also, in the past we were searching for bugs in our processing that caused duplicate data (where we needed/wanted no duplicates), so then the above, and COUNT DISTINCT-like commands, come in useful.

How to get multiple random numbers in sqlite3

I'm trying to sample multiple records from a big table randomly. Currently, the way I can do it is to make random numbers in Python and use them in sqlite3. Though I know that sqlite's random() gives a random number, I don't know how to get multiple (DISTINCT) random numbers in sqlite. If I could make a table filled with random numbers, that would be OK.
If you know how many rows you want, you can use something like:
SELECT DISTINCT col1, col2, etc FROM yourtable ORDER BY random() LIMIT 10;
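And if you really do want a table filled with random numbers, a recursive CTE is one way to sketch it (the count and range here are arbitrary; values can repeat, so apply DISTINCT afterwards if you need unique numbers):
WITH RECURSIVE rand(n, r) AS (
    SELECT 1, abs(random()) % 1000
    UNION ALL
    SELECT n + 1, abs(random()) % 1000 FROM rand WHERE n < 10
)
SELECT r FROM rand; -- 10 random integers in the range 0-999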

How to Query for Recent Rows in SQLITE3

I'm using SQLite3 and trying to query for recent rows, so I'm having SQLite3 insert a unix timestamp into each row with strftime('%s','now'). My table looks like this:
CREATE TABLE test(id INTEGER PRIMARY KEY, time);
INSERT INTO test (time) VALUES (strftime('%s','now')); --Repeated
SELECT * FROM test;
1|1516816522
2|1516816634
3|1516816646 --etc lots of rows
Now I want to query for only recent entries, for example, I'm trying to get all rows with a time within the last hour. I'm trying the following SQL query:
SELECT * FROM test WHERE time > strftime('%s','now')-60*60;
However, that always returns all rows regardless of the value in the time column. I really don't know what's going on.
Also, if I put WHERE time > strftime('%s','now') it'll return nothing (which is expected), but if I put WHERE time > strftime('%s','now')-1 then it'll return everything. I don't know why.
Here's one more example:
sqlite> SELECT *, strftime('%s','now')-1 AS window FROM test WHERE time > window;
1|1516816522|1516817482
2|1516816634|1516817482
3|1516816646|1516817482
It seems that SQLite3 thinks the values in the middle column are greater than the values in the right column!?
This isn't at all what I expect. Can someone please tell me what's going on? Thanks!
The purpose of strftime() is to format values, so it returns a string.
When you try to do computations with its return value, the database must convert it into a number. Your table, however, stores the timestamps as strings (the time column has no declared type), and in SQLite's sort order a number always compares as less than any string, so time > some_number is true for every row.
You must ensure that both values in a comparison have the same data type.
The best way to do this is to store numbers in the table:
INSERT INTO test (time)
VALUES (CAST(strftime('%s','now') AS INTEGER));
(Or just declare the column type as something with numeric affinity.)
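A sketch of that alternative - with numeric affinity on the column, the string from strftime() is converted on insert automatically:
CREATE TABLE test(id INTEGER PRIMARY KEY, time INTEGER);
INSERT INTO test (time) VALUES (strftime('%s','now'));

-- Both sides of the comparison are now numbers, so this behaves as expected.
SELECT * FROM test WHERE time > strftime('%s','now') - 60*60;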

sqlite3: Intersect two tables where one value BETWEEN two others

I have two tables, one has single entries like this:
'rs47' 1027
The other has ranges:
'gene1' 1000 1500
These tables are huge, so I am trying to figure out the most efficient way to get all entries from table 1 where the entries are within any range in table 2.
I don't think that INTERSECT can be used like this. I know how to use SELECT to do this for a single entry:
SELECT name FROM 'table2' INDEXED BY 'start_end' WHERE 1027 BETWEEN start AND end
But I am not sure how to do that for every record in a table. Any ideas?
To check whether corresponding rows exist in the other table, you can use a correlated subquery:
SELECT *
FROM Table1
WHERE EXISTS (SELECT 1
              FROM Table2
              WHERE Table1.Value BETWEEN Table2.StartValue AND Table2.EndValue);
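The same check can also be written as a join (a sketch with the same column names; DISTINCT guards against a value that falls into several ranges producing duplicate rows, which EXISTS avoids by construction):
SELECT DISTINCT Table1.*
FROM Table1
JOIN Table2
  ON Table1.Value BETWEEN Table2.StartValue AND Table2.EndValue;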

sqlite subqueries with group_concat as columns in select statements

I have two tables: one contains a list of items, called watch_list, with some important attributes, and the other is just a list of prices, called price_history. What I would like to do is group 10 of the lowest prices into a single column with a group_concat operation, and then create a row with the item attributes from watch_list along with the 10 lowest prices for each item in watch_list.

First I tried joins, but then I realized that the operations were happening in the wrong order, so there was no way I could get the desired result with a join operation. Then I tried the obvious thing and just queried price_history for every row in watch_list and glued everything together in the host environment, which worked but seemed very inefficient. Now I have the following query, which looks like it should work, but it's not giving me the results that I want. I would like to know what is wrong with the following statement:
select w.asin, w.title,
       (select group_concat(lowest_used_price) from price_history as p
        where p.asin = w.asin limit 10) as lowest_used
from watch_list as w
Basically I want the limit operation to happen before group_concat does anything, but I can't think of a SQL statement that will do that.
Figured it out. As somebody once said, "All problems in computer science can be solved by another level of indirection," and in this case an extra select subquery did the trick:
select w.asin, w.title,
       (select group_concat(lowest_used_price)
        from (select lowest_used_price from price_history as p
              where p.asin = w.asin
              order by lowest_used_price -- needed so limit picks the 10 lowest prices, not 10 arbitrary ones
              limit 10)) as lowest_used
from watch_list as w
