record types that weren't found for a specific value in oracle query - oracle11g

I have this query:
Select distinct p_id, p_date, p_city
from p_master
where p_a_id in (1,2,5,8,2,1,10,02)
My actual IN clause contains 200 values. How do I find out which of them weren't returned by the query? Each value in the IN clause may or may not have a matching record. I want a list of all the p_a_id values for which no record was found.
Please help.

This will do the trick but I'm sure there's an easier way to find this out :-)
with test1 as
  (select '1,2,5,8,2,1,10,02' str from dual)
select *
from (select trim(x.column_value.extract('e/text()')) cols
      from test1 t,
           table(xmlsequence(xmltype('<e><e>' || replace(t.str, ',', '</e><e>') || '</e></e>').extract('e/e'))) x) cols
left outer join
     (select count(*), p_a_id
      from p_master
      where p_a_id in (1,2,5,8,2,1,10,02)
      group by p_a_id) p
  on p.p_a_id = cols.cols
where p_a_id is null;
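If the XML splitting feels heavy, a possibly simpler sketch (assuming the 200 values can be listed in a derived table; only a few are shown here) is to anti-join the value list against p_master:
select v.p_a_id
from (select 1 p_a_id from dual union all
      select 2 from dual union all
      select 5 from dual union all
      select 8 from dual union all
      select 10 from dual) v
where not exists (select 1
                  from p_master m
                  where m.p_a_id = v.p_a_id);
Each row returned is a value from the list that has no record in p_master.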


Is it possible to compare a value to multiple columns in an "IN" clause?

select m.value
from MY_TABLE m
where m.value in (select m2.some_third_value, m2.some_fourth_value
                  from MY_TABLE_2 m2
                  where m2.first_val member of v_my_array
                     or m2.second_val member of v_my_array_2)
Is it possible to write a select similar to this, where m.value is compared to two columns and has to match at least one of those? Something like where m.value in (select m2.first_val, m2.second_val). Or is writing two separate selects unavoidable here?
No. When the sub-query in an IN clause returns multiple columns, the expression list on the left-hand side of the IN must contain the same number of columns. This pairwise query compares each row's column tuple against the tuples returned by the sub-query. The statement below
SELECT *
FROM table_main m
WHERE ( m.col_1, m.col_2 ) IN (SELECT s.col_a, s.col_b
                               FROM table_sub s)
is equivalent to
SELECT *
FROM table_main m
WHERE EXISTS (SELECT 1
              FROM table_sub s
              WHERE m.col_1 = s.col_a
                AND m.col_2 = s.col_b)
One way to search both columns in a single SELECT statement is to OUTER JOIN the second table to the first. Note that the WHERE clause below repeats the join predicate, so the LEFT JOIN effectively behaves like an inner join, and a row of table_main can appear once per matching row of table_sub:
SELECT m.*
FROM table_main m
LEFT JOIN table_sub s ON (m.col_1 = s.col_a OR m.col_1 = s.col_b)
WHERE m.col_1 = s.col_a
OR m.col_1 = s.col_b
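If those duplicates are a concern, a minimal sketch of an alternative (same tables and columns assumed as above) is to use EXISTS with an OR, so each table_main row appears at most once:
SELECT m.*
FROM table_main m
WHERE EXISTS (SELECT 1
              FROM table_sub s
              WHERE m.col_1 = s.col_a
                 OR m.col_1 = s.col_b)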

No more spool space in Teradata while trying Update

I'm trying to update a table with a lot of rows (388,000).
This is the query:
update DL_RG_ANALYTICS.SH_historico
from
(
    SELECT CAST((MAX_DIA - DIA_PAGO) AS INTEGER) AS DIAS_AL_CIERRE_1
    FROM (SELECT *
          FROM DL_RG_ANALYTICS.SH_historico A
          LEFT JOIN (SELECT ANO||MES AS ANO_MES, MAX(DIA) AS MAX_DIA
                     FROM DL_RG_ANALYTICS.SH_CALENDARIO
                     GROUP BY 1) B
          ON A.ANOMES = B.ANO_MES
         ) M
) N
SET DIAS_AL_CIERRE = DIAS_AL_CIERRE_1;
Any help is appreciated.
The first thing I'd do is replace the SELECT * with only the columns you need. You can also remove the M derived table to make the query easier to read:
UPDATE DL_RG_ANALYTICS.SH_historico
FROM (
SELECT CAST((MAX_DIA - DIA_PAGO) AS INTEGER) AS DIAS_AL_CIERRE_1
FROM DL_RG_ANALYTICS.SH_historico A
LEFT JOIN (
SELECT ANO || MES AS ANO_MES, MAX(DIA) AS MAX_DIA
FROM DL_RG_ANALYTICS.SH_CALENDARIO
GROUP BY 1
) B ON A.ANOMES = B.ANO_MES
) N
SET DIAS_AL_CIERRE = DIAS_AL_CIERRE_1;
What indexes are defined on the SH_CALENDARIO table? If there is a composite index on (ANO, MES), then you should rewrite your LEFT JOIN sub-query to GROUP BY those two columns, since you concatenate them together anyway (see the sketch below). In general, you want to perform joins, GROUP BY and OLAP functions on indexed columns, so there is less row redistribution and they run more efficiently.
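A minimal sketch of that rewrite, assuming ANO and MES are the indexed columns and ANOMES is their concatenation:
SELECT ANO, MES, MAX(DIA) AS MAX_DIA
FROM DL_RG_ANALYTICS.SH_CALENDARIO
GROUP BY ANO, MES;
-- then join with: ON A.ANOMES = B.ANO || B.MES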
Also, this query updates all rows in the table with the same value. Is this intended, or do you want extra columns in your WHERE clause so the update is correlated row by row (see the sketch below)?
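A minimal sketch of such a correlated update; the ID column here is purely hypothetical, so substitute whatever uniquely identifies a row in SH_historico:
UPDATE H
FROM DL_RG_ANALYTICS.SH_historico H,
     (
        SELECT A.ID,  -- hypothetical unique key column
               CAST((B.MAX_DIA - A.DIA_PAGO) AS INTEGER) AS DIAS_AL_CIERRE_1
        FROM DL_RG_ANALYTICS.SH_historico A
        LEFT JOIN (
            SELECT ANO, MES, MAX(DIA) AS MAX_DIA
            FROM DL_RG_ANALYTICS.SH_CALENDARIO
            GROUP BY ANO, MES
        ) B ON A.ANOMES = B.ANO || B.MES
     ) N
SET DIAS_AL_CIERRE = N.DIAS_AL_CIERRE_1
WHERE H.ID = N.ID;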

executing multiple select statements to pick a value in where condition using Case statement

I have a CASE statement with SELECT statements in the WHERE condition.
I need to pick the value by referring to two tables: if the value doesn't exist in Table A, it has to come from Table B.
Sel * from Table A
where city = (case when (sel distinct city from Table A) is null
then (sel city from Table B) end)
The expected output is as shown below:
Sel * from Table A
where City = 'XYZ'
If the value is not present in Table A, it has to come from the Table B sub-query and be used in the WHERE condition.
The one thing you need to be careful with here is to make sure you return a single value in your scalar sub-queries -- (sel distinct city from Table A) and (sel city from Table B). If you can always guarantee that, then I think your query will work as is (a sketch of the completed pattern is below).
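A minimal sketch of that pattern, assuming each scalar sub-query returns at most one row, and adding the ELSE branch the pseudocode appears to be missing:
SELECT *
FROM TableA
WHERE City = CASE
                 WHEN (SELECT DISTINCT city FROM TableA) IS NULL
                     THEN (SELECT city FROM TableB)
                 ELSE (SELECT DISTINCT city FROM TableA)
             END;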
A safer way is to guarantee you always get exactly one row. Here's one option:
SELECT *
FROM TableA
WHERE City = (
    SELECT city
    FROM (
        -- Get all cities from TableA
        SELECT city, 1 AS table_priority
        FROM tableA
        UNION ALL
        -- Get all cities from TableB
        SELECT city, 2
        FROM tableB
    ) src
    QUALIFY ROW_NUMBER() OVER(ORDER BY src.table_priority, src.city) = 1 -- Return one row
)

Query fails to execute after converting a column from Varchar2 to CLOB

I have an Oracle query:
select id from (
    select ID, ROW_NUMBER() over (partition by LATEST_RECEIPT order by ID) rownumber
    from Table
    where LATEST_RECEIPT in
    (
        select LATEST_RECEIPT from Table
        group by LATEST_RECEIPT
        having COUNT(1) > 1
    )
) t
where rownumber <> 1;
The data type of LATEST_RECEIPT was previously varchar2(4000) and this query worked fine. Since the length of the column needed to be extended, I changed it to CLOB, after which the query fails. Could anyone help me fix this issue or provide a workaround?
You can change your inner query to look for other rows with the same LATEST_RECEIPT value but a different ID (assuming ID is unique); if another row exists then that is equivalent to your count being greater than one. But you can't simply test two CLOB values for equality; you need to use dbms_lob.compare:
select ID
from your_table t1
where exists (
    select null from your_table t2
    where dbms_lob.compare(t2.LATEST_RECEIPT, t1.LATEST_RECEIPT) = 0
    and t2.ID != t1.ID
    -- or if ID isn't unique: and t2.ROWID != t1.ROWID
);
Applying the row number filter is trickier, as you also can't use a CLOB in the analytic partition by clause. As André Schild suggested, you can use a hash; here passing the integer value 3, which is the equivalent of dbms_crypto.hash_sh1 (though in theory that could change in a future release!):
select id from (
    select ID, ROW_NUMBER() over (partition by dbms_crypto.hash(LATEST_RECEIPT, 3)
                                  order by ID) rownumber
    from your_table t1
    where exists (
        select null from your_table t2
        where dbms_lob.compare(t2.LATEST_RECEIPT, t1.LATEST_RECEIPT) = 0
        and t2.ID != t1.ID
        -- or if ID isn't unique: and t2.ROWID != t1.ROWID
    )
)
where rownumber > 1;
It is of course possible to get a hash collision, and if that happened - you had two latest_receipt values which both appeared more than once and both hashed to the same value - then you could get too many rows back. That seems pretty unlikely, but it's something to consider.
So, rather than ordering, you can just look for rows which have the same latest_receipt and a lower ID:
select ID
from your_table t1
where exists (
    select null from your_table t2
    where dbms_lob.compare(t2.LATEST_RECEIPT, t1.LATEST_RECEIPT) = 0
    and t2.ID < t1.ID
);
Again that assumes ID is unique. If it isn't then you could still use rowid instead, but you would have less control over which rows were found - the lowest rowid isn't necessarily the lowest ID. Presumably you're using this to find rows to delete. If you actually don't mind which row you keep and which you delete then you could still do:
and t2.ROWID < t1.ROWID
But since you are currently ordering that probably isn't acceptable, and hashing might be preferable, despite the small risk.
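Since the rows being found are presumably the duplicates to remove, a minimal sketch of the corresponding delete, under the same assumption that ID is unique:
delete from your_table t1
where exists (
    select null from your_table t2
    where dbms_lob.compare(t2.LATEST_RECEIPT, t1.LATEST_RECEIPT) = 0
    and t2.ID < t1.ID
);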

How to delete multiple rows with 2 columns as composite primary key in SQLite?

I need to delete some rows in a SQLite table with two columns as primary key, like this:
DELETE FROM apt_lang
WHERE (apt_fk, apt_lang_fk) NOT IN ((42122,"en"),(42123,"es"),(42123,"en"))
This works on Oracle and MySQL but not in SQLite.
Can anybody help me?
First, find out which rows match your list of key pairs (these are the rows you want to keep).
The easiest way is with a join:
SELECT *
FROM apt_lang
JOIN (SELECT 42122 AS apt_fk, 'en' AS apt_lang_fk UNION ALL
      SELECT 42123 , 'es' UNION ALL
      SELECT 42123 , 'en' )
USING (apt_fk, apt_lang_fk)
To use this with a DELETE, either check with EXISTS for a match:
DELETE FROM apt_lang
WHERE NOT EXISTS (SELECT 1
                  FROM apt_lang AS a2
                  JOIN (SELECT 42122 AS apt_fk, 'en' AS apt_lang_fk UNION ALL
                        SELECT 42123 , 'es' UNION ALL
                        SELECT 42123 , 'en' )
                  USING (apt_fk, apt_lang_fk)
                  WHERE apt_fk = apt_lang.apt_fk
                    AND apt_lang_fk = apt_lang.apt_lang_fk)
or get the ROWIDs of the subquery and check against those:
DELETE FROM apt_lang
WHERE rowid NOT IN (SELECT apt_lang.rowid
                    FROM apt_lang
                    JOIN (SELECT 42122 AS apt_fk, 'en' AS apt_lang_fk UNION ALL
                          SELECT 42123 , 'es' UNION ALL
                          SELECT 42123 , 'en' )
                    USING (apt_fk, apt_lang_fk))
This should work on SQLite 3.15 or later, which added support for row values (note that string literals should use single quotes):
DELETE FROM apt_lang WHERE (apt_fk, apt_lang_fk) NOT IN (VALUES (42122,'en'),(42123,'es'),(42123,'en'))
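As a sanity check before running the destructive statement, the same row-value predicate (again assuming SQLite 3.15+) can be used in a SELECT to preview what would be deleted:
SELECT *
FROM apt_lang
WHERE (apt_fk, apt_lang_fk) NOT IN (VALUES (42122,'en'),(42123,'es'),(42123,'en'));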
Yes, it's possible to delete rows from a SQLite table based on a subquery that builds on multiple columns. This can be done with SQLite's concatenation operator "||". It might help to show an example.
Setup:
create table a (x,y);
insert into a values ('A','B');
insert into a values ('A','C');
create table b (x,y);
insert into b values ('A','C');
insert into b values ('A','X');
Show Tables:
select * from a;
A|B
A|C
select * from b;
A|C
A|X
Assuming you want to delete from table a the rows where columns x and y don't have a match in table b, the following statement will accomplish that.
delete from a where x||y not in (select a.x||a.y from a,b where a.x=b.x and a.y=b.y);
Result:
select * from a;
A|B
Summary
This relies on concatenating several columns into one with the "||" operator. Note that it will work on calculated values too, but it might require casting them. Here are a few conversions to note with the "||" operator...
select 9+12|| 'test';
21 -- Note we lost 'test'
select cast(9+12 as text)|| 'test';
21test -- Good! 'test' is there.
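One caveat with plain concatenation is that different column pairs can collide when the values run together (for example 'AB'||'C' equals 'A'||'BC'). A hedged variant that avoids this, assuming a character such as '|' never appears in the data, is to concatenate with a separator:
delete from a where x||'|'||y not in (select x||'|'||y from b);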
