OK, so this is my code:
DECLARE
    V_INVENTORY_ITEM     INVENTORY.ITEM%TYPE;
    V_INVENTORY_PRICE    INVENTORY.PRICE%TYPE;
    V_INVENTORY_ONHAND   INVENTORY.ONHAND%TYPE;
    V_TRANS_ITEM         TRANSACTION.ITEM%TYPE;
    V_TRANS_CODE         TRANSACTION.CODE%TYPE;
    V_NEW_INVE_ITEM      NEW_INVENTORY.ITEM%TYPE;
    V_NEW_INVE_SOLD      NEW_INVENTORY.SOLD%TYPE;
    V_NEW_INVE_RETURNED  NEW_INVENTORY.RETURNED%TYPE;
    V_NEW_INVE_ONHAND    NEW_INVENTORY.ONHANDNEW%TYPE;
    V_NEW_INVE_PURCHASED NEW_INVENTORY.PURCHASED%TYPE;
    V_NEW_INVE_ORIGINAL  NEW_INVENTORY.ONHANDORIG%TYPE;

    CURSOR INVEN_CURSOR IS
        SELECT ITEM, PRICE, ONHAND
        FROM INVENTORY
        ORDER BY ITEM;

    CURSOR TRANS_CURSOR IS
        SELECT ITEM, CODE
        FROM TRANSACTION
        WHERE V_INVENTORY_ITEM = ITEM
        ORDER BY ITEM;
BEGIN
    OPEN INVEN_CURSOR;
    LOOP
        FETCH INVEN_CURSOR INTO V_INVENTORY_ITEM, V_INVENTORY_PRICE, V_INVENTORY_ONHAND;
        EXIT WHEN INVEN_CURSOR%NOTFOUND;

        V_NEW_INVE_SOLD      := 0;
        V_NEW_INVE_RETURNED  := 0;
        V_NEW_INVE_ONHAND    := 0;
        V_NEW_INVE_PURCHASED := 0;
        V_NEW_INVE_ORIGINAL  := V_INVENTORY_ONHAND;

        OPEN TRANS_CURSOR;
        LOOP
            FETCH TRANS_CURSOR INTO V_TRANS_ITEM, V_TRANS_CODE;
            EXIT WHEN TRANS_CURSOR%NOTFOUND;

            IF V_TRANS_CODE = 'P' THEN
                V_NEW_INVE_ONHAND    := V_INVENTORY_ONHAND + 1;
                V_NEW_INVE_PURCHASED := V_NEW_INVE_PURCHASED + 1;
                V_NEW_INVE_ORIGINAL  := V_INVENTORY_ONHAND;
            END IF;
            IF V_TRANS_CODE = 'R' THEN
                V_NEW_INVE_RETURNED := V_NEW_INVE_RETURNED + 1;
                V_NEW_INVE_ONHAND   := V_INVENTORY_ONHAND + 1;
                V_NEW_INVE_ORIGINAL := V_INVENTORY_ONHAND;
            END IF;
            IF V_TRANS_CODE = 'S' THEN
                V_NEW_INVE_SOLD     := V_NEW_INVE_SOLD + 1;
                V_NEW_INVE_ONHAND   := V_INVENTORY_ONHAND - 1;
                V_NEW_INVE_ORIGINAL := V_INVENTORY_ONHAND;
            END IF;
        END LOOP;

        INSERT INTO NEW_INVENTORY
        VALUES (V_INVENTORY_ITEM, V_NEW_INVE_SOLD, V_NEW_INVE_PURCHASED,
                V_NEW_INVE_RETURNED, V_INVENTORY_ONHAND, V_NEW_INVE_ONHAND);
        CLOSE TRANS_CURSOR;
    END LOOP;
    CLOSE INVEN_CURSOR;
END;
/
I'm trying to update an inventory table. This block reads the transaction table and populates a new table (new_inventory).
Something is wrong in my IF statements, because every variable comes out as 0.
Any suggestions?
My tables:
SQL> select * from inventory;
ITEM PRICE ONHAND
--------------- ---------- ----------
BALL 12.99 5
PEN 1.99 10
PENCIL 2.99 1
PAPER 5.99 3
ERASER .99 6
BACKPACK 19.99 10
STAPLER 3.99 12
RULER 4.99 9
NOTEBOOK 6.99 12
9 rows selected.
SQL>
SQL> select * from transaction;
ITEM CO
--------------- --
BALL P
BALL R
BALL S
BALL S
BALL S
PEN R
PEN S
PEN S
PEN P
PENCIL S
PENCIL R
PENCIL S
PENCIL P
PAPER S
PAPER S
PAPER S
ERASER R
ERASER S
ERASER S
ERASER P
BACKPACK S
BACKPACK S
BACKPACK S
BACKPACK P
STAPLER R
STAPLER S
RULER S
NOTEBOOK S
NOTEBOOK S
NOTEBOOK S
NOTEBOOK S
NOTEBOOK S
NOTEBOOK S
33 rows selected.
SQL>
SQL> select * from new_inventory;
ITEM SOLD RETURNED ONHAND
--------------- ---------- ---------- ----------
BACKPACK 0 0 0
BALL 0 0 0
ERASER 0 0 0
NOTEBOOK 0 0 0
PAPER 0 0 0
PEN 0 0 0
PENCIL 0 0 0
RULER 0 0 0
STAPLER 0 0 0
9 rows selected.
Try putting the following statements before opening the TRANS_CURSOR:
V_NEW_INVE_SOLD := 0;
V_NEW_INVE_RETURNED := 0;
V_NEW_INVE_ONHAND := 0;
There is no need to a) reinvent the join (you have two cursors on which you are manually doing a nested-loop join; why do that, when the Oracle optimizer is perfectly capable of joining two tables together and deciding the best join method itself?) or b) do the calculations and inserts row by row (aka slow-by-slow).
Instead, you can achieve the whole thing in a single INSERT statement, like so:
insert into new_inventory (item,
                           new_inve_purchased,
                           new_inve_returned,
                           new_inve_sold,
                           orig_onhand,
                           new_inve_onhand) -- Amend as appropriate; I guessed at what the new_inventory column names were.
select item,
       nvl(new_inve_purchased, 0) new_inve_purchased,
       nvl(new_inve_returned, 0)  new_inve_returned,
       nvl(new_inve_sold, 0)      new_inve_sold,
       nvl(onhand, 0)             orig_onhand,
       nvl(onhand, 0)
         + nvl(new_inve_purchased, 0)
         + nvl(new_inve_returned, 0)
         - nvl(new_inve_sold, 0)  new_inve_onhand
from   (select inv.item,
               inv.onhand,
               trn.code
        from   inventory inv
               inner join transaction trn on (inv.item = trn.item))
pivot  (sum(1) for code in ('P' as new_inve_purchased,
                            'R' as new_inve_returned,
                            'S' as new_inve_sold));
The benefits of using a single SQL statement to do the work are:
It's easier to debug - you can run the select statement on its own to see what it's doing
It'll be more performant; you're letting the database SQL engine do the majority of the work, rather than having PL/SQL talk to SQL, and SQL returning results back to PL/SQL for each row in the inventory table.
Because it's much more compact than the corresponding PL/SQL, there's less code to keep track of, making it much easier to read and understand.
Note also that I've specified the list of columns that you're inserting into (although I had to guess at their names - you'll need to amend as appropriate!).
This is good practice (especially if it's code that will end up in production!) as failing to do so could lead to trouble down the line when someone adds a new column into the table. Best be specific now, and avoid such problems entirely!
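Oracle's PIVOT isn't available in every engine, but the same idea (one conditional aggregate per transaction code) is easy to sanity-check elsewhere. Here is a minimal sketch using Python's sqlite3 with the BALL row from the question's data; the output column layout is my assumption, and the table is named transactions because transaction is a keyword in SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.execute("CREATE TABLE inventory (item TEXT, price REAL, onhand INTEGER)")
c.execute("CREATE TABLE transactions (item TEXT, code TEXT)")
c.execute("INSERT INTO inventory VALUES ('BALL', 12.99, 5)")
c.executemany("INSERT INTO transactions VALUES (?, ?)",
              [("BALL", "P"), ("BALL", "R"), ("BALL", "S"),
               ("BALL", "S"), ("BALL", "S")])

# Conditional sums play the role of Oracle's PIVOT clause:
# each SUM(code = 'X') counts the matching transaction rows per item.
row = c.execute("""
    SELECT inv.item,
           SUM(trn.code = 'S')              AS sold,
           SUM(trn.code = 'P')              AS purchased,
           SUM(trn.code = 'R')              AS returned,
           inv.onhand                       AS orig_onhand,
           inv.onhand + SUM(trn.code = 'P')
                      + SUM(trn.code = 'R')
                      - SUM(trn.code = 'S') AS new_onhand
    FROM inventory inv
    JOIN transactions trn ON trn.item = inv.item
    GROUP BY inv.item
""").fetchone()
print(row)  # ('BALL', 3, 1, 1, 5, 4)
```

Like the PIVOT version, this stays a single SQL statement, so the SELECT can be run on its own to debug it before wiring it into an INSERT.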
Related
I have a table TABLE in an SQLite database with columns DATE and GROUP. I want to select the first 10 entries in each group. After researching similar topics here on Stack Overflow, I came up with the following query, but it runs very slowly. Any ideas how to make it faster?
select * from TABLE as A
where (select count(*) from TABLE as B
where B.DATE < A.DATE and A.GROUP == B.GROUP) < 10
This is the result of EXPLAIN QUERY PLAN (TABLE = clients_bets):
Here are a few suggestions:
Use a covering index (an index containing all the data needed in the subquery, in this case the group and date)
create index some_index on some_table(some_group, some_date)
Additionally, rewrite the subquery to make it less dependent on the outer query:
select * from some_table as A
where rowid in (
select B.rowid
from some_table as B
where A.some_group == B.some_group
order by B.some_date limit 10 )
The query plan changes from:
0 0 0 SCAN TABLE some_table AS A
0 0 0 EXECUTE CORRELATED LIST SUBQUERY 1
1 0 0 SEARCH TABLE some_table AS B USING COVERING INDEX idx_1 (some_group=?)
to
0 0 0 SCAN TABLE some_table AS A
0 0 0 EXECUTE CORRELATED SCALAR SUBQUERY 1
1 0 0 SEARCH TABLE some_table AS B USING COVERING INDEX idx_1 (some_group=? AND some_date<?)
While it is very similar, the query seems noticeably faster. I'm not sure why.
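To see the rewritten query work end to end, here is a small self-contained check using Python's sqlite3 (the table and column names are the stand-ins from the answer, not the real schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.execute("CREATE TABLE some_table (some_group TEXT, some_date INTEGER)")
c.execute("CREATE INDEX some_index ON some_table(some_group, some_date)")
# Two groups with 15 rows each; we want the first 10 rows of each group.
c.executemany("INSERT INTO some_table VALUES (?, ?)",
              [(g, d) for g in ("a", "b") for d in range(15)])

# The correlated subquery picks, per group, the rowids of the
# 10 earliest dates; the outer query keeps only those rows.
rows = c.execute("""
    SELECT * FROM some_table AS A
    WHERE rowid IN (
        SELECT B.rowid
        FROM some_table AS B
        WHERE A.some_group = B.some_group
        ORDER BY B.some_date
        LIMIT 10)
""").fetchall()
print(len(rows))  # 20: ten earliest dates per group
```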
I am currently on Oracle 11.2. Below is a snippet of the code. I want to extract a range of elements from the record-type collection for every page number passed.
Rather than doing it in the query itself, I want to extract from the table type.
Suppose the collection is filled with 13 records:
page passed with 1 should give elements from 1 to 5
page =2 -> 6 to 10
page =3 -> 11 to 13
I don't want to put the page logic in the SELECT statement.
I am not getting the correct output when I pass page 2 and onwards.
I don't have the exact code right now, but when I get to the office tomorrow morning I will update the correct code that is inside the loop.
create or replace procedure p1 (page number) is
    TYPE rec_typ IS RECORD (col1 VARCHAR2(5), col2 VARCHAR2(50), col3 NUMBER(10));
    TYPE rec_tab IS TABLE OF rec_typ INDEX BY BINARY_INTEGER;
    t_tab rec_tab;
    f_tab rec_tab;
    n     number := 0;
BEGIN
    SELECT * BULK COLLECT INTO t_tab FROM test;
    FOR j IN page*5-4 .. page*5
    LOOP
        IF t_tab.EXISTS(j) THEN
            n := n + 1;
            f_tab(n) := t_tab(j);  -- associative arrays don't need EXTEND
        END IF;
    END LOOP;
END;
Use the LIMIT option; it will help you.
Refer to: http://www.dba-oracle.com/plsql/t_plsql_limit_clause.htm
Sample query (in this code you can pass 13 as your parameter):
create or replace procedure p1 (page number) is
    TYPE rec_typ IS RECORD (col1 VARCHAR2(5), col2 VARCHAR2(50), col3 NUMBER(10));
    TYPE rec_tab IS TABLE OF rec_typ INDEX BY BINARY_INTEGER;
    t_tab rec_tab;
    f_tab rec_tab;
    n     number := 0;
    j     number := 1;
    CURSOR c IS
        SELECT * FROM test;
BEGIN
    OPEN c;
    LOOP
        FETCH c BULK COLLECT INTO t_tab LIMIT 5;
        EXIT WHEN t_tab.COUNT = 0;
        IF t_tab.EXISTS(j) THEN
            n := n + 1;
            f_tab(n) := t_tab(j);
        END IF;
        j := j + 1;
    END LOOP;
    CLOSE c;
END;
Note: this is only sample code; combine it with your own logic. Hope it helps.
Instead of the collection you can use ROWNUM and a pagination approach. Hope the snippet below helps.
CREATE OR REPLACE PROCEDURE p1 (
    page NUMBER)
IS
    TYPE rec_typ IS RECORD (
        col1 VARCHAR2(5),
        col2 VARCHAR2(50),
        col3 NUMBER(10));
    TYPE rec_tab IS TABLE OF rec_typ INDEX BY BINARY_INTEGER;
    t_tab   rec_tab;
    lv_sql  VARCHAR2(32767);
    lv_cond VARCHAR2(32767);
BEGIN
    lv_cond :=
        CASE
            WHEN page = 1 THEN ' AND a.rn BETWEEN 1 AND 5 '
            WHEN page = 2 THEN ' AND a.rn BETWEEN 6 AND 10 '
            WHEN page = 3 THEN ' AND a.rn BETWEEN 11 AND 15 '
            ELSE ''
        END;
    lv_sql := 'SELECT
                   a.col1,
                   a.col2,
                   a.col3
               FROM
                   (SELECT t.*, ROWNUM rn FROM test t) a
               WHERE 1 = 1 ' || lv_cond;
    EXECUTE IMMEDIATE lv_sql BULK COLLECT INTO t_tab;
    FOR z IN t_tab.FIRST .. t_tab.LAST
    LOOP
        DBMS_OUTPUT.PUT_LINE(t_tab(z).col1 || ' ' || t_tab(z).col2 || ' ' || t_tab(z).col3);
    END LOOP;
END;
/
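Both answers hard-code the page boundaries, but the underlying page-to-range arithmetic is the same in every variant and can be sketched on its own (plain Python, simulating a 13-element collection as in the question):

```python
PAGE_SIZE = 5

def page_range(page, total):
    """Return the 1-based (first, last) element numbers for a page."""
    first = (page - 1) * PAGE_SIZE + 1
    last = min(page * PAGE_SIZE, total)  # clip the final partial page
    return first, last

items = list(range(1, 14))  # a collection of 13 records, numbered 1..13

def page_slice(page):
    first, last = page_range(page, len(items))
    return items[first - 1:last]  # convert 1-based bounds to a 0-based slice

print(page_slice(1))  # [1, 2, 3, 4, 5]
print(page_slice(2))  # [6, 7, 8, 9, 10]
print(page_slice(3))  # [11, 12, 13]
```

The same formula could replace the three-branch CASE in the dynamic SQL, e.g. ' AND a.rn BETWEEN ' || ((page-1)*5+1) || ' AND ' || (page*5), so the procedure handles any page number rather than just 1 through 3.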
I have a table ops containing operations on an account, the balance of this account, and an index giving the chronological order of these operations.
idx op_sum account_balance
1 200 200
2 -30 170
4 -20 160 -- this operation has idx=4 so the balance is accurate!
3 10 180
A trigger is ensuring that idx stays unique.
CREATE TRIGGER on_insert_before BEFORE INSERT ON ops
WHEN (SELECT op_sum FROM ops WHERE idx=NEW.idx)
BEGIN
UPDATE ops SET idx=idx+1 WHERE idx>=NEW.idx;
END;
I now want to add a trigger that calculates account_balance when a new operation is added, while considering the chronological order and updating the rows with a higher idx if any.
As an example, if I do this: INSERT INTO ops(idx, op_sum) VALUES(2, -90);
my table should look like this:
idx op_sum account_balance
1 200 200
3 -30 80
5 -20 70
4 10 90
2 -90 110
I tried things like this:
CREATE TRIGGER on_insert_after AFTER INSERT ON ops
FOR EACH ROW
BEGIN
UPDATE ops SET account_balance=
(CASE WHEN idx=1
THEN op_sum
ELSE (SELECT account_balance FROM ops WHERE idx=idx-1)+op_sum
END);
END;
But it didn't work; to be more exact, this part doesn't work: (SELECT account_balance FROM ops WHERE idx=idx-1).
I have also been experimenting with a recursive common table expression, which gives me the accurate results:
WITH RECURSIVE
cnt(x,y,z) AS (VALUES(1,200,200) UNION ALL
SELECT
x+1,
(SELECT op_sum FROM ops WHERE idx=x+1),
z+(SELECT op_sum FROM ops where idx=x+1)
FROM cnt where x<(select max(idx) from ops))
SELECT x,y,z FROM cnt;
But I would like to know if there is a way to do it inside my table, with triggers.
To get the new value from a different row, you must use a correlated subquery:
UPDATE ops SET account_balance=
(CASE WHEN idx=1
THEN op_sum
ELSE op_sum + (SELECT account_balance
FROM ops AS previous_ops
WHERE previous_ops.idx = ops.idx - 1)
END);
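As a quick check of the correlated form, here is a minimal sqlite3 run where only the last row's balance is missing, so the statement's row-by-row evaluation order cannot affect the result:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.execute("CREATE TABLE ops (idx INTEGER PRIMARY KEY,"
          " op_sum INTEGER, account_balance INTEGER)")
c.executemany("INSERT INTO ops VALUES (?, ?, ?)",
              [(1, 200, 200), (2, -30, 170), (3, 10, None)])

# Inside the subquery the table is renamed to previous_ops, so the
# bare name "ops" refers to the row the UPDATE is currently visiting.
c.execute("""
    UPDATE ops SET account_balance =
        (CASE WHEN idx = 1
              THEN op_sum
              ELSE op_sum + (SELECT account_balance
                             FROM ops AS previous_ops
                             WHERE previous_ops.idx = ops.idx - 1)
         END)
""")
balances = c.execute("SELECT account_balance FROM ops ORDER BY idx").fetchall()
print(balances)  # [(200,), (170,), (180,)]
```

The rename is the whole trick: without the alias, idx-1 would compare a row's idx to itself minus one and match nothing, which is exactly why the original attempt returned nothing.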
Can somebody tell me why the CASE WHEN makes this so slow, and how to optimize or fix it, please?
It is needed so that the pinned items are put first, in order, in the result.
I could probably do the sorting after the SQL query, but I think it would be faster, when done right, to do it inside the SQL query.
slow query ~490ms
SELECT
places.id AS place_id,
url,
title,
thumbnails.score AS score,
thumbnails.clipping AS clipping,
thumbnails.lastModified AS lastModified,
EXISTS (SELECT 1 FROM pinned pi WHERE pi.place_id = places.id) AS pinned
FROM places
LEFT JOIN thumbnails ON (thumbnails.place_id = places.id)
LEFT JOIN pinned j ON (j.place_id = places.id) WHERE (hidden == 0)
ORDER BY case when j.id is null then 1 else 0 end,
j.id,
frecency DESC LIMIT 24
Removing the 'CASE WHEN' part:
query ~6ms
SELECT
places.id AS place_id,
url,
title,
thumbnails.score AS score,
thumbnails.clipping AS clipping,
thumbnails.lastModified AS lastModified,
EXISTS (SELECT 1 FROM pinned pi WHERE pi.place_id = places.id) AS pinned
FROM places
LEFT JOIN thumbnails ON (thumbnails.place_id = places.id) WHERE (hidden == 0)
ORDER BY frecency DESC LIMIT 24
Table info:
var Create_Table_Places =
'CREATE TABLE places (' +
'id INTEGER PRIMARY KEY,' +
'url LONGVARCHAR,' +
'title LONGVARCHAR,' +
'visit_count INTEGER DEFAULT 0,' +
'hidden INTEGER DEFAULT 0 NOT NULL,' +
'typed INTEGER DEFAULT 0 NOT NULL,' +
'frecency INTEGER DEFAULT -1 NOT NULL,' +
'last_visit_date INTEGER,' +
'dateAdded INTEGER,' +
'lastModified INTEGER' +
')';
var Create_Table_Thumbnails =
'CREATE TABLE thumbnails (' +
'id INTEGER PRIMARY KEY,' +
'place_id INTEGER UNIQUE,' +
'data LONGVARCHAR,' +
'score REAL,' +
'clipping INTEGER,' +
'dateAdded INTEGER,' +
'lastModified INTEGER' +
')';
var Create_Table_Pinned =
'CREATE TABLE pinned (' +
'id INTEGER PRIMARY KEY,' +
'place_id INTEGER UNIQUE,' +
'position INTEGER,' +
'dateAdded INTEGER,' +
'lastModified INTEGER' +
')';
To find out whether there are fundamental differences in the execution of queries, use EXPLAIN QUERY PLAN.
In SQLite 3.7.almost15, your queries have the following plans:
selectid order from detail
-------- ----- ---- ------
0 0 0 SCAN TABLE places (~100000 rows)
0 1 1 SEARCH TABLE thumbnails USING INDEX sqlite_autoindex_thumbnails_1 (place_id=?) (~1 rows)
0 2 2 SEARCH TABLE pinned AS j USING COVERING INDEX sqlite_autoindex_pinned_1 (place_id=?) (~1 rows)
0 0 0 EXECUTE CORRELATED SCALAR SUBQUERY 1
1 0 0 SEARCH TABLE pinned AS pi USING COVERING INDEX sqlite_autoindex_pinned_1 (place_id=?) (~1 rows)
0 0 0 USE TEMP B-TREE FOR ORDER BY
selectid order from detail
-------- ----- ---- ------
0 0 0 SCAN TABLE places (~100000 rows)
0 1 1 SEARCH TABLE thumbnails USING INDEX sqlite_autoindex_thumbnails_1 (place_id=?) (~1 rows)
0 0 0 EXECUTE CORRELATED SCALAR SUBQUERY 1
1 0 0 SEARCH TABLE pinned AS pi USING COVERING INDEX sqlite_autoindex_pinned_1 (place_id=?) (~1 rows)
0 0 0 USE TEMP B-TREE FOR ORDER BY
These two plans are almost identical, except for the duplicate pinned lookup.
If your SQLite doesn't execute the queries this way, update it.
In your first query, you can remove the subquery for the pinned field because you are already joining with the pinned table, and the subquery executes exactly the same lookup that is done for the join; use j.id IS NOT NULL instead.
Your CASE WHEN has the purpose of sorting the NULLs after the other values.
You can get the same effect by converting all NULLs to some value that is sorted after numbers, such as a string:
... ORDER BY IFNULL(j.id, ''), frecency DESC
However, in theory, this should not have much of a runtime difference from CASE WHEN.
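The IFNULL trick relies on SQLite's cross-type ordering, where any numeric value sorts before any text value, so mapping NULL to '' pushes those rows to the end. This can be checked directly:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.execute("CREATE TABLE t (id INTEGER)")
c.executemany("INSERT INTO t VALUES (?)", [(3,), (None,), (1,)])

# IFNULL(id, '') maps NULL to the text value '', and in SQLite's
# cross-type sort order every number comes before every string.
rows = c.execute("SELECT id FROM t ORDER BY IFNULL(id, '')").fetchall()
print(rows)  # [(1,), (3,), (None,)]
```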
In my Oracle database I want to create a function or procedure with a cursor that uses a dynamic table name. Here is my code:
CREATE OR REPLACE Function Findposition (model_in IN varchar2, model_id IN number)
    RETURN number
IS
    cnumber number;
    TYPE c1 IS REF CURSOR;
    c2 c1;
BEGIN
    open c2 FOR 'SELECT id, ROW_NUMBER() OVER ( ORDER BY id) AS rownumber FROM ' || model_in;
    FOR employee_rec in c2
    LOOP
        IF employee_rec.id = model_id then
            cnumber := employee_rec.rownumber;
        end if;
    END LOOP;
    close c2;
    RETURN cnumber;
END;
Help me to solve this problem.
There is no need to declare a c1 type for a weakly typed ref cursor. You can just use the SYS_REFCURSOR type.
You can't mix implicit and explicit cursor calls like this. If you are going to OPEN a cursor, you have to FETCH from it in a loop and you have to CLOSE it. You can't OPEN and CLOSE it but then fetch from it in an implicit cursor loop.
You'll have to declare a variable (or variables) to fetch the data into. I declared a record type and an instance of that record but you could just as easily declare two local variables and FETCH into those variables.
ROWID is a reserved word so I used ROWPOS instead.
Putting that together, you can write something like
CREATE OR REPLACE Function Findposition (
    model_in IN varchar2,
    model_id IN number)
    RETURN number
IS
    cnumber number;
    c2 sys_refcursor;
    type result_rec is record (
        id     number,
        rowpos number
    );
    l_result_rec result_rec;
BEGIN
    open c2 FOR 'SELECT id, ROW_NUMBER() OVER ( ORDER BY id) AS rowpos FROM ' || model_in;
    loop
        fetch c2 into l_result_rec;
        exit when c2%notfound;
        IF l_result_rec.id = model_id
        then
            cnumber := l_result_rec.rowpos;
        end if;
    END LOOP;
    close c2;
    RETURN cnumber;
END;
SQL> /
Function created.
I believe this returns the result you expect
SQL> create table foo( id number );
Table created.
SQL> insert into foo
2 select level * 2
3 from dual
4 connect by level <= 10;
10 rows created.
SQL> select findposition( 'FOO', 8 )
2 from dual;
FINDPOSITION('FOO',8)
---------------------
4
Note that from an efficiency standpoint, you'd be much better off writing this as a single SQL statement rather than opening a cursor and fetching every row from the table every time. If you are determined to use a cursor, you'd want to exit the cursor when you've found the row you're interested in rather than continuing to fetch every row from the table.
From a code clarity standpoint, many of your variable names and data types seem rather odd. Your parameter names seem poorly chosen-- I would not expect model_in to be the name of the input table, for example. Declaring a cursor named c2 is also problematic since it is very non-descriptive.
You can do this; you don't need a loop when you are using a dynamic query:
CREATE OR REPLACE Function Findposition(model_in IN varchar2,model_id IN number)
RETURN number IS
cnumber number;
TYPE c1 IS REF CURSOR;
c2 c1;
BEGIN
open c2 FOR 'SELECT rownumber
FROM (
SELECT id,ROW_NUMBER() OVER ( ORDER BY id) AS rownumber
FROM '||model_in || '
) WHERE id = ' || model_id;
FETCH c2 INTO cnumber;
close c2;
return cnumber;
END;
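As an aside, the single-statement shape suggested in the first answer is easy to sanity-check outside Oracle as well. With SQLite 3.25+ (which also supports ROW_NUMBER) and the same data as the transcript above, the query skeleton returns the same position; the dynamic table name is dropped here since it is only the query shape being checked:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.execute("CREATE TABLE foo (id INTEGER)")
# Same data as the transcript: 2, 4, 6, ..., 20.
c.executemany("INSERT INTO foo VALUES (?)", [(n * 2,) for n in range(1, 11)])

# Number the rows by id, then pick out the row number for id = 8.
pos = c.execute("""
    SELECT rownumber FROM (
        SELECT id, ROW_NUMBER() OVER (ORDER BY id) AS rownumber
        FROM foo)
    WHERE id = ?
""", (8,)).fetchone()[0]
print(pos)  # 4, matching FINDPOSITION('FOO', 8) in the transcript
```

Filtering inside a single statement lets the engine stop as soon as the row is found, instead of PL/SQL fetching and discarding every row of the table.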