Pass array to Redshift parametrized insertion - odbc

I have the following Redshift query executed via ODBC / C++:
INSERT INTO _tmp_limit_0(val) VALUES(?);
_tmp_limit_0 is a temporary table.
I want to bind a string array to the ? parameter with SQLBindParameter to insert multiple rows in one statement.
The problem is that if I query the rows in the table, it seems that only one row has been inserted:
SELECT COUNT(*) AS cnt FROM _tmp_limit_0;
Can anybody tell me what I did wrong? How can I insert multiple rows in one query using SQL (the solution should work across multiple relational databases)?
Thank you

Each row's data needs to be within its own set of parentheses, and the sets are separated by commas. Like this:
INSERT INTO table(val) VALUES (1), (2), (3);
If you want to insert multiple columns of data in multiple rows, then the column values are comma-separated within each set of parentheses. Like:
INSERT INTO table(val1, val2) VALUES (1, 9), (2, 8), (3, 7);
See - https://docs.aws.amazon.com/redshift/latest/dg/r_INSERT_30.html
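Applied to the original statement, the same idea in parametrized form might look like this (a sketch: one marker per row, each bound individually with SQLBindParameter rather than as a single array):
INSERT INTO _tmp_limit_0(val) VALUES (?), (?), (?);
Keeping a single marker and binding a parameter array instead relies on ODBC parameter sets (SQL_ATTR_PARAMSET_SIZE); whether the Redshift ODBC driver turns that into multiple inserted rows depends on the driver.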

Related

Want to insert statement in a loop in Teradata

I'm inserting data into a table from another table using the query below in Teradata, and I want to run this statement until the table reaches 20 GB, so I want to run it in a loop. However, the loop I wrote gives a "query invalid" error when I try to execute it. Could you please help me, as I'm new to Teradata? Thanks.
insert into schema1.xyx select * from schema2.abc;
if/loop/etc. are only allowed in Stored Procedures.
Looping a billion times will be quite inefficient (and will result in much more than 20 GB). Better to check the current size of table abc in dbc.TableSizeV, calculate how many loops you need, and then cross join:
insert into schema1.xyx
select t.*
from schema2.abc AS t
cross join
 ( -- calculated number of loops
   select top 100000 *
   -- any table with a large number of rows
   from sys_calendar.calendar
 ) AS loops;
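A sketch of the size check mentioned above (schema/table names taken from the question; the 20 GB target is hard-coded here):
-- how many copies of abc are needed to reach roughly 20 GB
select cast(20e9 / sum(CurrentPerm) as integer) as loops_needed
from dbc.TableSizeV
where DatabaseName = 'schema2'
  and TableName = 'abc';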
But it is much easier to use sampling. Calculate the number of rows needed and then:
insert into schema1.xyx
select *
from schema2.abc
sample with replacement 100000000;
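And a sketch of the row-count calculation for the sample, again assuming the names from the question:
-- rows needed = target size / average bytes per row of abc
select cast(20e9 / sum(CurrentPerm) * max(cnt) as bigint) as rows_needed
from dbc.TableSizeV
cross join (select count(*) as cnt from schema2.abc) as c
where DatabaseName = 'schema2'
  and TableName = 'abc';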

SQLite UNION ALL Query with Different Columns

Using UNION ALL with queries that have a different number of columns returns the following error: sqlite3.OperationalError: SELECTs to the left and right of UNION ALL do not have the same number of result columns.
I tried this answer, but I think it is now outdated and does not work. I tried to find something in the documentation, but I couldn't find it.
Neither UNION nor UNION ALL works.
This answer is a bit complex for me to understand.
What would be the workaround to achieve this? A column with NULL - how do I do that?
Update:
Also, I don't know the number or names of the tables, as my program allows the user to create and manipulate data.
To find out the tables in the database, I use:
SELECT name FROM sqlite_master WHERE type='table';
and to find out the columns I use this:
[i[0] for i in cursor.description]
Simple: the number of columns (and ideally their types) on each side of a UNION must match.
You can make them identical by casting columns to the correct type, or setting missing columns to NULL.
I think you just need to compensate for the missing columns by adding 'empty' columns.
CREATE TABLE test_table1(k INTEGER, v INTEGER);
CREATE TABLE test_table2(k INTEGER);
INSERT INTO test_table1(k,v) VALUES(4, 5);
INSERT INTO test_table2(k) VALUES(4);
SELECT * FROM test_table1 UNION ALL SELECT *,'N/A' FROM test_table2;
4|5
4|N/A
Here I've added another pseudo-column with 'N/A', so the SELECT from test_table2 has two columns before the UNION ALL happens.
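The same idea with an explicit NULL instead of a placeholder string, which is what the question asks about (same two test tables as above):
SELECT k, v FROM test_table1 UNION ALL SELECT k, NULL FROM test_table2;
4|5
4|
The empty second field in the last row is how the sqlite3 shell prints NULL by default.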

SQLite: Add up integer tables based on shared indicators

I am aggregating numbers from different sqlite databases into a single output database table.
I need to add up integer columns i1,i2,i3 in the output table based on three indicating columns a,b,c that tell me which rows to update:
ATTACH DATABASE "out.db" AS output;
INSERT INTO output.rows(a,b,c,i1,i2,i3)
SELECT DISTINCT "some_value", b, c, 0, 0, 0 FROM main.rows
ON CONFLICT IGNORE;
#THE FOLLOWING LINES MIGHT SHOW WHAT I MEAN...
UPDATE output.rows SET i1=i1+i1_,i2=i2+i2_, i3=i3+i3_
WHERE a="some_value" AND b=b_ and c=c_
SELECT i1_, i2_, i3_, b_, c_ FROM main.rows;
I do not want to type in all the combinations of a,b,c. As you can see, a does not come from main but from external information (the filename).
In newer versions of SQLite that support UPSERT, the following seems to work:
ATTACH DATABASE "$out.db" AS output;
INSERT INTO output.rows(a,b,c,i1,i2,i3)
SELECT "some_value", b, c, i1, i2, i3 FROM main.rows WHERE true
ON CONFLICT (a,b,c) DO UPDATE SET i1=i1+excluded.i1, i2=i2+excluded.i2, i3=i3+excluded.i3;
In my case, the columns i1,i2,i3 coming from main actually had different names (say I1,I2,I3) than their counterparts in output, so the UPDATE was clearer (i1=i1+I1). I had failed to reference them as main.rows.i1 inside the UPDATE statement; the excluded prefix above is what resolves that ambiguity.
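One prerequisite worth noting (an assumption about the schema, since it isn't shown in the question): ON CONFLICT (a,b,c) only works if output.rows has a matching uniqueness guarantee on those columns, for example:
CREATE UNIQUE INDEX IF NOT EXISTS output.idx_rows_abc ON rows(a, b, c);
The index name here is made up; a UNIQUE(a,b,c) constraint in the table definition works just as well.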

SQLite: Replacing highly redundant strings with an index to another table

I have a table t with around 500,000 rows. One of the columns (stringtext) contains a very long string, and I have now discovered that there are in fact only 80 distinct strings. I'd like to declutter table t by moving the strings into a separate table, s, and merely referencing them in t.
I have created a separate table of the long strings, including what is effectively an explicit row-index number using:
CREATE TEMPORARY TABLE stmp AS
SELECT DISTINCT
stringtext
FROM t;
CREATE TABLE s AS
SELECT _ROWID_ AS stringindex, stringtext
FROM stmp;
(It was creating this table that showed me there were only a few distinct strings).
How can I now replace stringtext in t with the corresponding stringindex from s?
I would think about something like UPDATE t SET stringtext = (SELECT stringindex FROM s WHERE s.stringtext = t.stringtext), and I would recommend first creating an index on s(stringtext), as SQLite might not be smart enough to build a temporary index on its own. And then a VACUUM would be in order.
Untested.
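Spelled out, the suggestion might look like the following sketch (the index name is made up):
CREATE INDEX idx_s_stringtext ON s(stringtext);
UPDATE t
SET stringtext = (SELECT stringindex
                  FROM s
                  WHERE s.stringtext = t.stringtext);
VACUUM;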

SQLite split column by integer versus character

I have several data tables where the "SampleName" column has integer and character values.
I'd like to split that column into two: "Date" (int) and "SiteName" (char).
Also, if there is a command or function that can reorganize the integers into date values, that would be super helpful.
Any idea how to do this for multiple tables in one database?
Below is an example of the tables I'm looking at.
Note that SQLite does not support a formal date type. Instead, you can consider storing the date portion of the column as text, using text for the site name as well. Assuming you already have Date and SiteName columns in your table, you could try the following:
UPDATE yourTable
SET
Date = SUBSTR(SampleName, 1, 8),
SiteName = SUBSTR(SampleName, 10, 3)
In most other databases, you could then also drop the SampleName column if it no longer served a purpose. Older versions of SQLite don't support dropping a column; there the closest thing is to drop the table and recreate it without the SampleName column (SQLite 3.35+ does add ALTER TABLE ... DROP COLUMN).
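If the leading digits of SampleName are a yyyymmdd value (an assumption about the data, since the example table isn't reproduced here), the date portion can instead be stored in ISO 8601 form so SQLite's built-in date functions can work with it:
UPDATE yourTable
SET Date = SUBSTR(SampleName, 1, 4) || '-' ||
           SUBSTR(SampleName, 5, 2) || '-' ||
           SUBSTR(SampleName, 7, 2);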
