How to put an update in an API function - PL/SQL

As a rule, it's better to hide single-row fetches inside a function, so instead of:
BEGIN
  SELECT name
    INTO l_name
    FROM mytable
   WHERE primary_key = id_primary_key;
END;
it would be better to develop a package:
PACKAGE mypackage
IS
  FUNCTION fnc_name (id_primary_key IN mytable.primary_key%TYPE)
    RETURN mytable.name%TYPE;
END mypackage;
and then execute:
BEGIN
  l_name := mypackage.fnc_name (id_primary_key);
END;
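For completeness, a matching package body might look something like this (a minimal sketch; NO_DATA_FOUND handling is left to the caller):
PACKAGE BODY mypackage
IS
  FUNCTION fnc_name (id_primary_key IN mytable.primary_key%TYPE)
    RETURN mytable.name%TYPE
  IS
    l_name mytable.name%TYPE;
  BEGIN
    -- The single-row fetch now lives in exactly one place
    SELECT name
      INTO l_name
      FROM mytable
     WHERE primary_key = id_primary_key;
    RETURN l_name;
  END fnc_name;
END mypackage;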
But what about updating?
I mean, if I want to build the same kind of API for updates, but each call typically needs to update only a few columns of the table, how would you design such an API?
Oracle version 10g
Thanks!
Mark

You are starting to develop a "table API" or "TAPI". These are problematic, for the reason you have mentioned: if the TAPI's update procedure updates all 20 columns of the table from 20 parameters, but in a particular case you only need to update 3 columns, how should you call it? One way is to simply pass all 20 values even though you are not changing most of them. Another is to give each parameter a "funny" default like CHR(0) and have the API's UPDATE look like:
UPDATE mytable
   SET column1 = CASE WHEN p_column1 = CHR(0) THEN column1 ELSE p_column1 END,
       ...
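For illustration, a matching signature might look like this (a sketch; the procedure and parameter names are made up):
PROCEDURE upd_mytable
( p_primary_key IN mytable.primary_key%TYPE
, p_column1     IN mytable.column1%TYPE DEFAULT CHR(0)  -- "funny" default marks "not supplied"
, p_column2     IN mytable.column2%TYPE DEFAULT CHR(0)
-- ... one defaulted parameter per updatable column ...
);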
A different (I'd say better) approach is the "Transaction API" or XAPI. Here you build a separate procedure for each business transaction that might need to update the table. For example:
PROCEDURE terminate_employee
( p_empid INTEGER
, p_termination_date DATE
, p_termination_reason VARCHAR2
);
This procedure will use a simple SQL update statement to update the 3 columns that need to be updated when terminating an employee.
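Its body is then a plain static UPDATE touching only those columns (a sketch; the table and column names are assumed):
PROCEDURE terminate_employee
( p_empid              INTEGER
, p_termination_date   DATE
, p_termination_reason VARCHAR2
)
IS
BEGIN
  -- A simple static update of just the columns this transaction owns
  UPDATE employees
     SET termination_date   = p_termination_date
       , termination_reason = p_termination_reason
   WHERE employee_id = p_empid;
END terminate_employee;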
Some would say SQL is an API to the database!

Related

Need to get data from a table using database link where database name is dynamic

I am working on a system where I need to create a view. I have two databases:
1. CDR_DB
2. EMS_DB
I want to create the view on EMS_DB using a table from CDR_DB. This I am trying to do via a dblink. The dblink is created at runtime, i.e. the DB name is decided when the user installs the database, and the dblink is derived from that name.
My issue is that I am trying to create the view from a table whose name is decided at runtime, using a query like the one below:
select count(*)
from (SELECT CONCAT('cdr_log@', alias) db_name
        FROM ems_dbs a,
             cdr_manager b
       WHERE a.db_type = 'CDR'
         and a.ems_db_id = b.cdr_db_id
         and b.op_state = 4) db_name;
In this query, cdr_log@"db_name" is the runtime table name (db_name gets created at runtime). When I run the above query, I don't get the desired result; it returns '1'.
When running only the sub-query from the above query:
SELECT CONCAT('cdr_log@', alias) db_name
  FROM ems_dbs a,
       cdr_manager b
 WHERE a.db_type = 'CDR'
   and a.ems_db_id = b.cdr_db_id
   and b.op_state = 4;
I get the desired result, i.e. cdr_log@cdrdb01, but when I run the full query, the result is '1'.
Also, when I run
select count(*) from cdr_log@cdrdb01;
I get the result '24', which is correct.
The expected result is the same output as the query:
select count(*) from cdr_log@cdrdb01;
---24
but the full query shown first returns '1'.
Please let me know a way to solve this problem. I found a way to do it via a procedure, but I'm not sure how to invoke that procedure. Can it be done as part of a sub-query, as I have used above?
The outer query returns 1 because count(*) is counting the rows returned by the sub-query, and the sub-query returns a single row containing a string; SQL never interprets that string value as a table name. So you're not going to be able to create a view that dynamically references an object over a database link unless you do something like create a pipelined table function that builds the SQL dynamically.
If the database link is created and named dynamically at installation time, it would probably make the most sense to create any objects that depend on the database link (such as the view) at installation time too. Dynamic SQL tends to be much harder to write, maintain, and debug than static SQL so it would make sense to minimize the amount of dynamic SQL you need. If you can dynamically create the view at installation time, that's likely the easiest option. Even better than directly referencing the remote object in the view, particularly if there are multiple objects that need to reference the remote object, would probably be to have the view reference a synonym and create the synonym at install time. Something like
create synonym cdr_log_remote
  for cdr_log@<<dblink name>>;
create or replace view view_name
as
select *
from cdr_log_remote;
If you don't want to create the synonym/view at installation time, you'd need to use dynamic SQL to reference the remote object. You can't use dynamic SQL as the SELECT statement in a view, so you'd need to do something like have a view reference a pipelined table function that invokes dynamic SQL to query the remote object. That's a fair amount of work, but it would look something like this:
-- Define an object that has the same set of columns as the remote object
create type typ_cdr_log as object (
  col1 number,
  col2 varchar2(100)
);

create type tbl_cdr_log as table of typ_cdr_log;

create or replace function getAllCDRLog
  return tbl_cdr_log
  pipelined
is
  l_rows        tbl_cdr_log;    -- a collection, not a scalar, for BULK COLLECT
  l_sql         varchar2(1000);
  l_dblink_name varchar2(100);
begin
  -- Look up the dblink name at runtime
  SELECT alias
    INTO l_dblink_name
    FROM ems_dbs a,
         cdr_manager b
   WHERE a.db_type = 'CDR'
     and a.ems_db_id = b.cdr_db_id
     and b.op_state = 4;

  -- Project the columns through the object constructor so the result
  -- can be bulk collected into the collection of objects
  l_sql := 'SELECT typ_cdr_log(col1, col2) FROM cdr_log@' || l_dblink_name;

  execute immediate l_sql
    bulk collect into l_rows;

  for i in 1 .. l_rows.count
  loop
    pipe row( l_rows(i) );
  end loop;

  return;
end;

create or replace view view_name
as
select *
  from table( getAllCDRLog );
Note that this will not be a particularly efficient way to structure things if there are a large number of rows in the remote table since it reads all the rows into memory before starting to return them back to the caller. There are plenty of ways to make the pipelined table function more efficient but they'll tend to make the code more complicated.
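For example, one common refinement (sketched below on the same assumed types) is to open a ref cursor and fetch with BULK COLLECT ... LIMIT, so only a batch of rows is held in memory at a time:
create or replace function getAllCDRLog
  return tbl_cdr_log
  pipelined
is
  l_rc          sys_refcursor;
  l_rows        tbl_cdr_log;
  l_dblink_name varchar2(100);
begin
  -- (same dblink lookup as in the version above)
  SELECT alias
    INTO l_dblink_name
    FROM ems_dbs a, cdr_manager b
   WHERE a.db_type = 'CDR'
     and a.ems_db_id = b.cdr_db_id
     and b.op_state = 4;

  open l_rc for 'SELECT typ_cdr_log(col1, col2) FROM cdr_log@' || l_dblink_name;
  loop
    -- Only 100 rows are in memory at any time
    fetch l_rc bulk collect into l_rows limit 100;
    for i in 1 .. l_rows.count
    loop
      pipe row( l_rows(i) );
    end loop;
    exit when l_rc%notfound;
  end loop;
  close l_rc;
  return;
end;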

How to add a date with an INSERT INTO SELECT in PL/SQL?

For my homework I need to make a package containing a procedure that removes all records from the table dwdimartist, then fills it with all records from the table artist and sets a column to today's date.
This is what I've created. It works, but I don't know if it's performant enough. Is it better to use a cursor that loops over each record in the table artist?
CREATE OR REPLACE PACKAGE BODY dwh AS
  PROCEDURE filldwh IS
    today_date DATE := SYSDATE;
  BEGIN
    DELETE FROM dwdimartist;
    INSERT INTO dwdimartist (artistkey, name)
      (SELECT artistid, name FROM artist);
    UPDATE dwdimartist SET added = today_date;
  END filldwh;
END dwh;
A simple SQL statement like yours is a better choice than a cursor or an implicit loop.
One possible improvement: do it in one step, without the UPDATE, by setting the date during the INSERT.
INSERT INTO dwdimartist (artistkey, name, added)
(SELECT artistid, name, sysdate FROM artist);
Hope it helps
You don't need to use cursors. You can hardly beat INSERT ... SELECT, since it's pure SQL and in most cases works better than any procedural construct thanks to native DBMS optimization. However, you can do better by reducing the number of I/O operations: you don't need the UPDATE here. Simply add SYSDATE to your SELECT and insert everything together.
insert into dwdimartist(artistkey, name, added) select artistid, name, sysdate from artist;
Another thing to improve is your DELETE. While it's a small table for learning purposes and you won't feel any difference between DELETE and TRUNCATE, for real ETL tasks and big tables you definitely will. You cannot use TRUNCATE directly in PL/SQL, but there are three ways to do it:
execute immediate 'truncate table dwdimartist';
DBMS_UTILITY.EXEC_DDL_STATEMENT('truncate table ...');
the DBMS_SQL package
However, remember that TRUNCATE, like any DDL command, issues an implicit COMMIT. So DELETE may sometimes be the only option; but even then, you should avoid using it to clear an entire large table if you can.
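Putting both suggestions together, a revised body might look like this (a sketch against the tables from the question):
CREATE OR REPLACE PACKAGE BODY dwh AS
  PROCEDURE filldwh IS
  BEGIN
    -- TRUNCATE is DDL, so it needs dynamic SQL (and it commits implicitly)
    EXECUTE IMMEDIATE 'truncate table dwdimartist';
    -- Set the load date during the insert; no separate UPDATE pass
    INSERT INTO dwdimartist (artistkey, name, added)
    SELECT artistid, name, SYSDATE FROM artist;
  END filldwh;
END dwh;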

Why TableAdapter doesn't recognize @parameter

I am using the TableAdapter Query Configuration Wizard in Visual Studio 2013 for getting data from my database. For some queries like this:
SELECT *
FROM ItemsTable
ORDER BY date_of_creation desc, time_of_creation desc
OFFSET (@PageNumber - 1) * @RowsPerPage ROWS
FETCH NEXT @RowsPerPage ROWS ONLY
it doesn't recognize @PageNumber as a parameter and cannot generate a function with these arguments, while it works fine for queries like:
Select Top (@count) * from items_table
Why does the TableAdapter fail to generate a function with the mentioned arguments for the first query, while it generates one fine for the second (e.g. tableAdapter.GetDataByCount(int count))?
Am I forced to use a stored procedure? If so, since I don't know anything about them, how?
Update: The problem occurs in the TableAdapter Configuration Wizard in the DataSet editor (VS 2013). It doesn't generate functions with these parameters, and sometimes it says @RowsPerPage should be declared, even though it should generate a function with this argument. I found that it happens when a @parameter_name is used in a clause other than SELECT or WHERE; in this query, for example, we used it in the OFFSET clause.
I can't tell you how to fix it in ASP, but here is a simple stored procedure that should do the same thing:
CREATE PROCEDURE dbo.ReturnPageOfItems
(
    @pageNumber  INT,
    @rowsPerPage INT
)
AS
BEGIN
    SELECT *
    FROM dbo.ItemsTable
    ORDER BY date_of_creation DESC,
             time_of_creation DESC
    OFFSET (@pageNumber - 1) * @rowsPerPage ROWS
    FETCH NEXT @rowsPerPage ROWS ONLY;
END;
This will also perform better than simply passing the query, because SQL Server will reuse the cached query plan created for the procedure on its first execution. It is best practice not to use SELECT *, as that can cause maintenance trouble if there are schema changes to the table(s) involved, so I encourage you to spell out the columns you're actually interested in. The documentation for the CREATE PROCEDURE command spells out the various options in greater detail, but the code above should work fine as is.
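You'd then call it like this (the values are illustrative):
EXEC dbo.ReturnPageOfItems @pageNumber = 1, @rowsPerPage = 20;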
If you need to grant access to your application user so they can use this proc, that code is
GRANT EXECUTE ON OBJECT::dbo.ReturnPageOfItems TO userName;

Increase performance on insert cursor?

I would like to ask how you would increase the performance of the INSERT in this code. I need to use dynamic PL/SQL to fetch the data, but I don't know the best way to improve the INSERT. Bulk insert, maybe? Please show a code example if possible.
-- This is how I use cur_HANDLE:
cur_HANDLE INTEGER;

cur_HANDLE := dbms_sql.open_cursor;
DBMS_SQL.PARSE(cur_HANDLE, W_STMT, DBMS_SQL.NATIVE);
DBMS_SQL.DESCRIBE_COLUMNS2(cur_HANDLE, W_NO_OF_COLS, W_DESC_TAB);
-- (DBMS_SQL.DEFINE_COLUMN and DBMS_SQL.EXECUTE calls omitted here)
LOOP
  -- Fetch a row
  IF DBMS_SQL.FETCH_ROWS(cur_HANDLE) > 0 THEN
    DBMS_SQL.COLUMN_VALUE(cur_HANDLE, 9, cont_ID);
    DBMS_SQL.COLUMN_VALUE(cur_HANDLE, 3, proj_NR);
  ELSE
    EXIT;
  END IF;
  INSERT INTO w_Contracts VALUES (counter, cont_ID, proj_NR);
  counter := counter + 1;
END LOOP;
You should do database actions in sets whenever possible, rather than row-by-row inserts. You don't tell us what CUR_HANDLE is, so I can't really rewrite this, but you should probably do something like:
INSERT INTO w_contracts
SELECT ROWNUM, cont_id, proj_nr
FROM ( ... some table or joined tables or whatever... )
Though if your first value there is a primary key, it would probably be better to assign it from a sequence.
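For instance (the sequence name here is assumed):
INSERT INTO w_contracts
SELECT w_contracts_seq.NEXTVAL, cont_id, proj_nr
FROM ( ... some table or joined tables or whatever... );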
Solution 1) You can populate a PL/SQL array inside the loop, and then just after the loop insert the whole array in one step using:
FORALL i in contracts_tab.first .. contracts_tab.last
INSERT INTO w_contracts VALUES contracts_tab(i);
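Fleshed out, that looks something like this (a sketch; the collection type and the column names of w_contracts are assumed from the INSERT in the question, and cur_HANDLE is the DBMS_SQL cursor from your code, already parsed and executed):
-- Declarations (in the declaration section):
TYPE t_contracts IS TABLE OF w_contracts%ROWTYPE INDEX BY PLS_INTEGER;
contracts_tab t_contracts;
counter       PLS_INTEGER := 0;

-- Inside the fetch loop, fill the array instead of inserting row by row:
LOOP
  EXIT WHEN DBMS_SQL.FETCH_ROWS(cur_HANDLE) = 0;
  counter := counter + 1;
  contracts_tab(counter).counter := counter;  -- key column name assumed
  DBMS_SQL.COLUMN_VALUE(cur_HANDLE, 9, contracts_tab(counter).cont_id);
  DBMS_SQL.COLUMN_VALUE(cur_HANDLE, 3, contracts_tab(counter).proj_nr);
END LOOP;

-- After the loop: one bulk bind instead of one INSERT per row
FORALL i IN 1 .. contracts_tab.COUNT
  INSERT INTO w_contracts VALUES contracts_tab(i);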
Solution 2) If v_stmt (the W_STMT in your code) contains a valid SQL statement, you can insert the data directly into the table using:
EXECUTE IMMEDIATE 'INSERT INTO w_contracts (counter, cont_id, proj_nr)
SELECT rownum, 9, 3 FROM ('||v_stmt||')';
"select statement is assembled from a website, ex if user choose to
include more detailed search then the select statement is changed and
the result looks different in the end. The whole application is a web
site build on dinamic plsql code."
This is a dangerous proposition, because it opens your database to SQL injection: the scenario in which Bad People subvert your parameters to expand the data they can retrieve or to escalate privileges. At the very least you need to be using DBMS_ASSERT to validate user input.
Of course, if you are allowing users to pass whole SQL strings (you haven't provided any information regarding the construction of W_STMT) then all bets are off. DBMS_ASSERT won't help you there.
Anyway, as you have failed to give the additional information we actually need, please let me spell it out for you:
will the SELECT statement always have the same column names from the same table name, or can the user change those two?
will you always be interested in the third and ninth columns?
how is the W_STMT string assembled? How much control do you have over its projection?

How to find out which package/procedure is updating a table?

Is it possible to find out which package, or which procedure within a package, is updating a table?
Due to a certain project being handed over without proper documentation (the person who handed it over has since left), data that we know we have updated keeps reverting to some strange source value. We are guessing this could be a database job or scheduler running an update command without our knowledge. I am hoping there is a way to find out where the updating code is being called from, perhaps by putting a trigger on the table we are monitoring that records the source.
Any ideas?
Thanks.
UPDATE: I poked around and found out how to trace a statement back to its owning PL/SQL object.
In combination with what Tony mentioned, you can create a logging table and a trigger that looks like this:
CREATE TABLE statement_tracker
( SID NUMBER
, serial# NUMBER
, date_run DATE
, program VARCHAR2(48) null
, module VARCHAR2(48) null
, machine VARCHAR2(64) null
, osuser VARCHAR2(30) null
, sql_text CLOB null
, program_id number
);
CREATE OR REPLACE TRIGGER smb_t_t
AFTER UPDATE
ON smb_test
BEGIN
INSERT
INTO statement_tracker
SELECT ss.SID
, ss.serial#
, sysdate
, ss.program
, ss.module
, ss.machine
, ss.osuser
, sq.sql_fulltext
, sq.program_id
FROM v$session ss
, v$sql sq
WHERE ss.sql_address = sq.address
AND ss.SID = USERENV('sid');
END;
/
In order for the trigger above to compile, you'll need to grant the owner of the trigger these permissions, when logged in as the SYS user:
grant select on V_$SESSION to <user>;
grant select on V_$SQL to <user>;
You will likely want to protect the insert statement in the trigger with some condition so that it only logs when the change you're interested in is occurring; on my test server this statement runs rather slowly (about 1 second), so I wouldn't want to log all updates. Of course, in that case, you'd need to change the trigger to a row-level one so that you could inspect the :new or :old values. If you are really concerned about the overhead of the select, you can change it to not join against v$sql and instead just save the SQL_ADDRESS column, then schedule a job with DBMS_JOB to go and update the sql_text column with a second update statement, thereby offloading the update into another session and not blocking your original update.
Unfortunately, this will only tell you half the story. The statement you're going to see logged is going to be the most proximal statement - in this case, an update - even if the original statement executed by the process that initiated it is a stored procedure. This is where the program_id column comes in. If the update statement is part of a procedure or trigger, program_id will point to the object_id of the code in question - you can resolve it thusly:
SELECT * FROM all_objects where object_id = <program_id>;
In the case where the update statement was executed directly from the client, I don't know what program_id represents, but you wouldn't need it; you'd have the name of the executable in the "program" column of statement_tracker. If the update was executed from an anonymous PL/SQL block, I'm not sure how to track it back; you'll need to experiment further.
It may be, though, that the osuser/machine/program/module information may be enough to get you pointed in the right direction.
If it is a scheduled database job then you can find out what scheduled database jobs exist and look into what they do. Other things you can do are:
look at the dependency views, e.g. ALL_DEPENDENCIES, to see which packages/triggers etc. use that table (see the example query below). Depending on the size of your system that may return a lot of objects to trawl through.
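For example (MYTABLE is a placeholder for the table in question):
select owner, name, type
from all_dependencies
where referenced_name = 'MYTABLE'
and referenced_type = 'TABLE';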
Search all the database source code for references to the table like this:
select distinct type, name
from all_source
where lower(text) like lower('%mytable%');
Again that may return a lot of objects, and of course there will be some "false positives" where the search string appears but isn't actually a reference to that table. You could even try something more specific like:
select distinct type, name
from all_source
where lower(text) like lower('%insert into mytable%');
but of course that would miss cases where the command was formatted differently.
Additionally, could there be SQL scripts being run through "cron" jobs on the server?
Just write an "after update" trigger and, in this trigger, log the result of DBMS_UTILITY.FORMAT_CALL_STACK in a dedicated table.
The purpose of this function is precisely to give you the complete call stack of all the stored procedures and triggers that were fired to reach your code.
I am writing from the mobile app, so I can't give you more detailed examples, but if you google for it you'll find many.
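A minimal sketch of that idea (the table and trigger names are illustrative):
CREATE TABLE update_call_stacks
( logged_at  DATE
, call_stack VARCHAR2(4000)
);

CREATE OR REPLACE TRIGGER trg_log_mytable_updates
AFTER UPDATE ON mytable
BEGIN
  -- FORMAT_CALL_STACK lists every PL/SQL unit on the current call stack,
  -- so each logged row shows which procedure or trigger issued the UPDATE
  INSERT INTO update_call_stacks (logged_at, call_stack)
  VALUES (SYSDATE, SUBSTR(DBMS_UTILITY.FORMAT_CALL_STACK, 1, 4000));
END;
/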
A quick and dirty option, if you're working locally and are only interested in the first thing that alters the data, is to throw an error in the trigger instead of logging. That way you get the usual stack trace with a lot less typing, and you don't need to create a new table:
CREATE OR REPLACE TRIGGER trg_catch_update  -- trigger name added; pick your own
AFTER UPDATE ON table_of_interest
BEGIN
  RAISE_APPLICATION_ERROR(-20001, 'something changed it');
END;
/
