Issue with join in Teradata

I have a very strange problem when I do the following join on two different tables in Teradata. It does not return any results, indicating that there is no common value between table_A and table_B.
SELECT *
FROM table_A a
JOIN table_B b ON a.id = b.id;
0 rows returned
However, when I run the following two queries, I do get results, indicating that the id column in both tables has at least one row with the value 'John'.
SELECT id
FROM table_A
WHERE id = 'John';
1 row returned
SELECT id
FROM table_B
WHERE id = 'John';
1 row returned
The data type for all the columns in table_A and table_B is null

Can you try using the below approach:
ON trim(UPPER(table_A.id)) = trim(UPPER(table_B.id))
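Leading/trailing spaces or case differences between the two id columns can make values that look identical fail an equality join; TRIM and UPPER normalize both sides before comparing. A minimal sketch of the fix applied to the original query (assuming both id columns are character types):
SELECT *
FROM table_A a
JOIN table_B b
  ON TRIM(UPPER(a.id)) = TRIM(UPPER(b.id));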

Related

unique one column and return all data with MariaDB [duplicate]

My database structure contains the columns id, name, value, dealer. I want to retrieve the row with the lowest value for each dealer. I've been trying to mess around with MIN() and GROUP BY, but still no solution.
Solution 1:
SELECT t1.* FROM your_table t1
JOIN (
SELECT MIN(value) AS min_value, dealer
FROM your_table
GROUP BY dealer
) AS t2 ON t1.dealer = t2.dealer AND t1.value = t2.min_value
Solution 2 (recommended, much faster than Solution 1):
SELECT t1.* FROM your_table t1
LEFT JOIN your_table t2
ON t1.dealer = t2.dealer AND t1.value > t2.value
WHERE t2.value IS NULL
This problem is well known, so there is a special page for it in MySQL's manual.
Check this: Rows Holding the Group-wise Maximum/Minimum of a Certain Column
select id,name,MIN(value) as pkvalue,dealer from TABLENAME
group by id,name,dealer;
Here you group all rows by id, name, dealer, and then you get the minimum value as pkvalue.
SELECT MIN(value),dealer FROM table_name GROUP BY dealer;
First you need to resolve the lowest value for each dealer, and then retrieve the rows having that value for a particular dealer. I would do it this way:
SELECT a.*
FROM your_table AS a
JOIN (SELECT dealer,
             MIN(value) AS m
      FROM your_table
      GROUP BY dealer) AS b
  ON (a.dealer = b.dealer
      AND a.value = b.m)
Try the following:
SELECT dealer, MIN(value) as "Lowest value"
FROM value
GROUP BY dealer;
select id, name, value, dealer from yourtable
where (dealer, value) in (select dealer, min(value) from yourtable group by dealer);
These answers seem to miss the edge case of a dealer having multiple rows with the minimum value when you only want to return one row.
If you only want one row for each dealer, you can use ROW_NUMBER: partition the table by dealer, then order the rows within each partition by value and id. We have to assume that you want the row with the smallest id.
SELECT ord_tbl.id,
       ord_tbl.name,
       ord_tbl.value,
       ord_tbl.dealer
FROM (SELECT your_table.*,
             ROW_NUMBER() OVER (PARTITION BY dealer ORDER BY value ASC, id ASC) AS row_num
      FROM your_table
     ) AS ord_tbl
WHERE ord_tbl.row_num = 1;
Be careful, though, that value, id and dealer are indexed. If not, this will do a full table scan and can get pretty slow...
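For example, a single composite index covering the partition and ordering columns of the window function (the index name here is hypothetical) could look like this:
CREATE INDEX idx_dealer_value_id ON your_table (dealer, value, id);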

SELECT SUM of each row and the next row

The table has three columns: id, numbers1 and numbers2. We need to sum numbers1 of each row with numbers2 of the next row: the first row with the second, the second with the third, and so forth:
CREATE TABLE tb1 (id INTEGER PRIMARY KEY AUTOINCREMENT,numbers1,numbers2);
INSERT INTO tb1 (numbers1,numbers2) values(1,10);
INSERT INTO tb1 (numbers1,numbers2) values(2,20);
INSERT INTO tb1 (numbers1,numbers2) values(3,30);
INSERT INTO tb1 (numbers1,numbers2) values(4,40);
INSERT INTO tb1 (numbers1,numbers2) values(5,50);
I want to get:
21
32
43
54
With reference to getting the correct row index per record here:
How to use ROW_NUMBER in sqlite
I was able to create the required result with the following query:
SELECT
    num1 + coalesce(b_num2, 0)
FROM (
    SELECT
        num1,
        (select count(*) from test as b where a.id >= b.id) as cnt
    FROM test as a) as a
LEFT JOIN
    (SELECT num2 as b_num2,
            (select count(*) from test as b where a.id >= b.id) as cnt
     FROM test as a
    ) as b
ON b.cnt = a.cnt + 1
Explanation:
By joining two copies of the same table on consecutive record indexes, each record is paired with the next one, and num1 of the current record is summed with num2 of the next record. I do not know how you want to deal with the last row, as it has no next row, so I assume it adds nothing and the result is just the value of num1.
For one row with a specific ID x, you can get values from the next row by searching for ID values larger than x, and taking the first such row:
SELECT ...
FROM tb1
WHERE id > x
ORDER BY id
LIMIT 1;
You can then use this as a correlated subquery to get that value for each row:
SELECT numbers1 + (SELECT T2.numbers2
FROM tb1 AS T2
WHERE T2.id > T1.id
ORDER BY T2.id
LIMIT 1) AS sum
FROM tb1 AS T1
WHERE sum IS NOT NULL; -- this omits the last row, where the subquery returns NULL

SQLite: copy fields from one table to another table using a condition [duplicate]

I have two tables, table_a and table_b, which share a column named user_name.
I want to copy column_b_1 and column_b_2 from table_b into column_a_1 and column_a_2 of table_a, respectively, where the user_name is the same. How do I do that in a SQL statement?
As long as you have suitable indexes in place this should work alright:
UPDATE table_a
SET
column_a_1 = (SELECT table_b.column_b_1
FROM table_b
WHERE table_b.user_name = table_a.user_name )
, column_a_2 = (SELECT table_b.column_b_2
FROM table_b
WHERE table_b.user_name = table_a.user_name )
WHERE
EXISTS (
SELECT *
FROM table_b
WHERE table_b.user_name = table_a.user_name
)
UPDATE in sqlite3 did not support a FROM clause for a long time, which made this a little more work than in other RDBMSs. However, UPDATE FROM was implemented in SQLite 3.33 (2020-08-14), as mentioned at: https://stackoverflow.com/a/63079219/895245
If performance is not satisfactory, another option might be to build up the new rows for table_a, using a SELECT that joins table_a with table_b, into a temporary table. Then delete the data from table_a and repopulate it from the temporary table, as sketched below.
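A rough sketch of that temporary-table route, assuming table_a contains only user_name, column_a_1 and column_a_2 (extend the column lists otherwise; tmp_a is a hypothetical name):
CREATE TEMPORARY TABLE tmp_a AS
SELECT a.user_name,
       COALESCE(b.column_b_1, a.column_a_1) AS column_a_1,
       COALESCE(b.column_b_2, a.column_a_2) AS column_a_2
FROM table_a a
LEFT JOIN table_b b ON b.user_name = a.user_name;

DELETE FROM table_a;

INSERT INTO table_a (user_name, column_a_1, column_a_2)
SELECT user_name, column_a_1, column_a_2 FROM tmp_a;

DROP TABLE tmp_a;
The LEFT JOIN plus COALESCE keeps the existing values for rows of table_a that have no match in table_b, roughly mirroring what the correlated UPDATE above does.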
Starting from SQLite version 3.15, the UPDATE syntax admits a column-name list
in the SET part, so the query can be written as
UPDATE table_a
SET
(column_a_1, column_a_2) = (SELECT table_b.column_b_1, table_b.column_b_2
FROM table_b
WHERE table_b.user_name = table_a.user_name )
which is not only shorter but also faster
the last "WHERE EXISTS" part
WHERE
EXISTS (
SELECT *
FROM table_b
WHERE table_b.user_name = table_a.user_name
)
is actually not necessary
It could be achieved using UPDATE FROM syntax:
UPDATE table_a
SET column_a_1 = table_b.column_b_1
,column_a_2 = table_b.column_b_2
FROM table_b
WHERE table_b.user_name = table_a.user_name;
Alternatively:
UPDATE table_a
SET (column_a_1, column_a_2) = (table_b.column_b_1, table_b.column_b_2)
FROM table_b
WHERE table_b.user_name = table_a.user_name;
UPDATE FROM - SQLite version 3.33.0
The UPDATE-FROM idea is an extension to SQL that allows an UPDATE statement to be driven by other tables in the database. The "target" table is the specific table that is being updated. With UPDATE-FROM you can join the target table against other tables in the database in order to help compute which rows need updating and what the new values should be on those rows.
There is an even better solution for updating one table from another table:
;WITH a AS
(
SELECT
song_id,
artist_id
FROM
online_performance
)
UPDATE record_performance
SET
op_song_id=(SELECT song_id FROM a),
op_artist_id=(SELECT artist_id FROM a)
;
UPDATE tbl1
SET field1 = <values>,
    field2 = <values>
WHERE <tbl1 primary key> IN (SELECT <tbl2 column referencing tbl1 primary key>
                             FROM tbl2
                             WHERE <tbl2 column referencing tbl1 primary key> = <values>);
The accepted answer was very slow for me, in contrast to the following:
CREATE TEMPORARY TABLE t1 AS SELECT c_new AS c1, table_a.c2 AS c2 FROM table_b INNER JOIN table_a ON table_b.c=table_a.c1;
CREATE TEMPORARY TABLE t2 AS SELECT t1.c1 AS c1, c_new AS c2 FROM table_b INNER JOIN t1 ON table_b.c=t1.c2;

Update one column in table1 from value in table2

I am trying to update one column in table1 from a column in table2. Here
is what I am doing, but I am getting an ORA error:
ORA-01427: single-row subquery returns more than one row.
update table1 a
set a.art_num = (
select b.art_num from table2 b
where a.comp_id = b.comp_id );
Thanks so much in advance!
This happens because your subquery returns more than one result.
You should check this one:
select b.art_num
from table2 b
where a.comp_id = b.comp_id
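To see which comp_id values are causing the error, a quick check along these lines (using the same table and column names as above) can help; every comp_id it returns makes the subquery non-scalar:
SELECT comp_id, COUNT(*) AS matching_rows
FROM table2
GROUP BY comp_id
HAVING COUNT(*) > 1;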
You could try SELECT DISTINCT (see the Oracle documentation for DISTINCT) in the subquery:
update table1 a
set a.art_num = (
    select distinct b.art_num
    from table2 b
    where a.comp_id = b.comp_id );

pyodbc - access column by name

I have 2 tables with 150 columns each, and I am trying to join them, fetch the result set row by row, and process it:
qry = '''select a.*, b.*
from table_a a
full outer join table_b b
on a.id = b.id'''
table_row = conn.execute(qry) #execute method yields a generator
Now, I need to access the result set, which is a generator, and determine the values of each and every column of table-1 and table-2.
For example, if table-1 and table-2 both have a column named name, I need to compare them.
How can I access the result set by column name? I'm using pyodbc,
i.e. resultset.table1.name = resultset.table2.name
Use the ISO information schema views (I'm using SQL Server in the example) to return column names for each table, substituting database and schema parameter values as appropriate.
Merge the resulting lists into a set containing column names present in both tables.
Use this set to build a string of column names to select from each table, aliasing each column by prefixing it with a table name. Defining column aliases will allow you to differentiate columns by table.
Execute the select query and print values for comparison.
Code sample
# assumes connection, cursor already setup
# build SQL for retrieving column names
sql = '''SELECT COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_CATALOG = ? AND TABLE_SCHEMA = ?
AND TABLE_NAME = ?'''
# get column names from table_a
rows = cursor.execute(sql, ('database', 'schema', 'table_a')).fetchall()
table_a_columns = [column[0] for column in rows]
# get column names from table_b
rows = cursor.execute(sql, ('database', 'schema', 'table_b')).fetchall()
table_b_columns = [column[0] for column in rows]
# get unique matching columns from lists
matches = set(table_a_columns).intersection(table_b_columns)
# get string of column names to use in query, setting column alias prefixed with
# table name for each column
column_alias = 'a.{0} as a_{0}, b.{0} as b_{0}'
columns = ', '.join([column_alias.format(column) for column in matches])
sql = 'SELECT {} FROM table_a a FULL OUTER JOIN table_b b ON a.id = b.id'
sql = sql.format(columns)
# print values to compare
for row in cursor.execute(sql):
    print(row)
There's probably a less complicated way, but it's eluding me.
