Can we compile a procedure or function if there is a DDL lock on that procedure? - plsql

The procedure cannot be compiled; the compile just hangs and nothing happens.
I have tried the following query to check whether a blocking session is causing this, but I did not find any blocked session.
select *
from v$session
where blocking_session is not null
order by blocking_session;

You won't find it as a blocking session, nor in gv$lock, because it's a library object lock rather than a data object lock. Instead, look at gv$access and/or dba_ddl_locks. You can also look at gv$session for a plsql_object_id or plsql_entry_object_id matching the object_id of your procedure in either dba_objects or dba_procedures. That's not a sure-fire way of catching everything though, if you have chained PL/SQL calls... but gv$access and/or dba_ddl_locks will have what you need for sure.
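For instance, a minimal sketch of that lookup against dba_ddl_locks, joined back to v$session (MY_SCHEMA and MY_PROCEDURE are placeholders for your own object):
select l.session_id, s.serial#, s.username, l.type, l.mode_held
from dba_ddl_locks l
join v$session s on s.sid = l.session_id
where l.owner = 'MY_SCHEMA'       -- placeholder schema
and l.name = 'MY_PROCEDURE';      -- placeholder procedure name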
If something is executing your procedure, you will have to wait for it to complete, or kill that session, before you can compile it. It's a weak point in Oracle that prevents us from pushing code changes in without kicking everybody out first. But that's the way it is.
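If you do decide to kill the blocking session, the sid and serial# from the query above feed straight into the kill statement (a sketch; the values shown are placeholders and you need the ALTER SYSTEM privilege):
alter system kill session '123,45678' immediate;   -- '<sid>,<serial#>' of the session holding the lock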

I found that there are a lot of rows in the table, and the procedure I am trying to compile references that table too. So I stopped the compilation process at that point and executed a rollback statement. It therefore takes time to roll back millions of rows one by one; I have tracked that process with the following script.
select s.sid, s.serial#, s.client_info, t.addr, sum(t.used_ublk)
from v$transaction t, v$session s
where t.addr = s.taddr
group by s.sid, s.serial#, s.client_info, t.addr;
I just had to wait; I could not find any way to do anything except wait. Once all the transactions were rolled back, I tried to compile the procedure again and it compiled.


Progress-4GL: How does transaction scope apply to external program calls?

I need some help understanding transaction scoping for procedures/programs outside the current program.
Suppose I have three programs: program A, program B, and program C. Inside program A, I have a procedure with some lines in it wrapped inside a do transaction (not strongly typed) block. Within that do transaction block, it calls program B. Upon return from program B there is an undo, leave statement. Within the same transaction block, it calls program C and has an undo, leave after that call too.
My question is: if, within the transaction block, program B executes without errors but program C returns an error, will the undo, leave after the program C call also undo the work that happened inside program B?
procedure do_something:
    /* some processing.... */
    do transaction:
        error-message = "".
        {run programB.p}
        if error-message <> "" then undo, leave.
        /* some further processing... */
        error-message = "".
        {run programC.p}
        if error-message <> "" then undo, leave.
    end. /* end of do transaction */
end procedure.
Yes. In the example that you describe everything gets rolled back.
It is not so much that it is "extended" per se, but that the transaction includes everything that happens in that session from the point in time when it is enabled all the way until it is either committed or rolled back: internal procedures, external procedures, user-defined functions, methods of classes, trigger code, etc.
"In that session" is important - if you call a procedure on an app server, that activity is NOT included, since it is its own process with its own distinct transaction context.
When app servers are involved, things get messy. The original caller has no (built-in) capability to know what to roll back in the called app server session. The app server call could return an error that causes the caller to roll back if it encounters problems, but the caller could also decide to trap and ignore that error.
Yes. Everything happening in the transaction block will be undone.

Creating many batches (SysOperation Framework) very quickly doing similar processes - "Cannot edit a record in LastValue (SysLastValue)"?

I have a SysOperation Framework process that creates a ReliableAsynchronous batch to post packing slips and several get created at a time.
Depending on how quickly I click to create them, I get:
Cannot edit a record in LastValue (SysLastValue).
An update conflict occurred due to another user process deleting the record or changing one or more fields in the record.
And
Cannot create a record in LastValue (SysLastValue). User ID: t edit a, Class.
The record already exists.
On a couple of them in the BatchHistory. I have this.parmLoadFromSysLastValue(false); set, and I'm not sure how to prevent writing to the SysLastValue table.
Any idea what could be going on?
I get this exception a lot too, so I've made a habit of catching DuplicateKeyException in my service operation. When it is thrown, catch it and retry (up to 5 times by default).
The error occurs when a lot of processes run simultaneously, like you are doing now.
DuplicateKeyException can be caught inside a transaction, so you could improve things by putting a try/catch around the code that does the insert into the SysLastValue table, if you can find that code.
As far as I can see these are the only two occurrences where a record is inserted into this table (except maybe in the kernel):
InventUnusedDimCleanUp.serialize()
SysAutoSemaphore.autoSemaphore()
Put a breakpoint there and see if that code is executed. If so, you can add a try/catch with retry and see if that "fixes" it.
You could also use the tracing cockpit and the trace parser to figure out where that record is inserted if it's not one of those two.
My theory about LoadFromSysLastValue: I believe setting this.parmLoadFromSysLastValue(false) does not work because it is only taken into account when the dialog is started, not when your operation is executed. When running in batch, no SysLastValue will be used to initialize your data contract, as you want it to use the exact parameters you supplied in your data contract.
It's because of the code calling SysOperationController.savelast() while in batch. My solution is to set loadFromSysLastValue to false in SysOperationController.loadFromSysLastValue() as part of the in-batch check:
if (!this.isInBatch())
{
    .....
}
//Begin
else
{
    loadFromSysLastValue = false;
}
//End

PL/SQL wait for update in Oracle

How do I create a PL/SQL function which waits for an update on some row for a specified timeout and then returns?
What I want to accomplish is this: I have a long-running process which updates its status in an ASYNC_PROCESS table by process_id. I need a function that returns true/false when this process has completed, but I also need it to wait some time for the process to complete, returning on timeout or returning immediately with true when the process has completed. I don't want to use sleep(1 sec), because then I would have a one-second lag. I don't want to use sleep(1 msec), because then I would be wasting CPU resources (and still have a 1 ms lag).
Is there a good way an experienced programmer would accomplish this?
The function will be called from .NET (so I need minimal lag between the DB operation and the .NET/UI side).
Thanks,
Beef
I think the most sensible thing to do in this case is to use update triggers on that ASYNC_PROCESS table.
You should also look into the DBMS_ALERT package. Here's an edited excerpt from that doc:
Create an alert:
DBMS_ALERT.REGISTER('emp_table_alert');
Create a trigger on your table to fire the alert:
CREATE TRIGGER emptrig AFTER INSERT ON emp
BEGIN
DBMS_ALERT.SIGNAL('emp_table_alert', 'message_text');
END;
From your .NET code, you can then use something that calls this:
DBMS_ALERT.WAITONE('emp_table_alert', :message, :status, :timeout);
Make sure you read the docs for what :status and :timeout do.
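Putting those pieces together on the PL/SQL side, a minimal sketch of the waiting function the question asks for might look like this (wait_for_process and async_process_alert are hypothetical names; adapt the alert name to whatever your trigger signals, and note that a status of 0 means the alert fired while 1 means the timeout expired):
create or replace function wait_for_process(p_timeout_secs in number)
    return boolean
as
    l_message varchar2(1800);
    l_status  integer;
begin
    dbms_alert.register('async_process_alert');    -- hypothetical alert name
    -- Blocks until the alert is signalled and committed, or the timeout
    -- (in seconds) expires.
    dbms_alert.waitone('async_process_alert', l_message, l_status, p_timeout_secs);
    dbms_alert.remove('async_process_alert');
    return l_status = 0;   -- true = process completed, false = timed out
end wait_for_process;
/
In practice you would probably check the ASYNC_PROCESS row first and only fall back to waitone if the process is still running, since a registration only receives alerts signalled after the register call.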
You should look at Oracle Advanced Queuing. It offers the kind of functions you're looking for.
You'll probably need a separate queue table where a trigger on ASYNC_PROCESS inserts messages. You then use the AQ functions to retrieve (or wait for) the next message in the queue table.
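A rough sketch of the dequeue side under that design (the queue name proc_status_q is made up for illustration; the queue itself would be created beforehand with DBMS_AQADM.CREATE_QUEUE_TABLE, CREATE_QUEUE and START_QUEUE, and the trigger on ASYNC_PROCESS would enqueue into it):
declare
    l_dequeue_options    dbms_aq.dequeue_options_t;
    l_message_properties dbms_aq.message_properties_t;
    l_msgid              raw(16);
    l_payload            raw(2000);
    e_dequeue_timeout    exception;
    pragma exception_init(e_dequeue_timeout, -25228);
begin
    -- Block for up to 30 seconds waiting for a completion message.
    l_dequeue_options.wait := 30;
    dbms_aq.dequeue(queue_name         => 'proc_status_q',   -- placeholder queue name
                    dequeue_options    => l_dequeue_options,
                    message_properties => l_message_properties,
                    payload            => l_payload,
                    msgid              => l_msgid);
    commit;
exception
    when e_dequeue_timeout then
        null;   -- no completion message arrived within the timeout
end;
/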
If you're doing this in C#.NET, why wouldn't you simply spawn a worker thread to do the update (via ODAC)? Why hand the responsibility over to Oracle when (it seems) you want a .NET process to make the update call (in the background) and have the main process be notified of its completion?
See here and here for examples, although there are several approaches in .NET for this (delegates, events, async callbacks, thread pools, etc)

Can I call SQLExecute after SQLFreeStmt without SQLPrepare?

I have the following sequence of code calls:
SQLPrepare
SQLExecute
SQLFreeStmt(hstmt, SQL_CLOSE);
//It works till here
SQLExecute //Now it fails.
Why am I required to call SQLPrepare again? I just freed the cursor; I shouldn't have to prepare the SQL statement again.
The correct behaviour is that SQLPrepare / SQLExecute / SQLFreeStmt(stmt, SQL_CLOSE) should allow another SQLExecute on the same statement handle without re-preparing; you can see this as a valid state transition in the ODBC Programmer's Guide. You might use this sequence if you had done a SQLPrepare and only fetched some of the rows instead of all of them: without the SQLFreeStmt(stmt, SQL_CLOSE), or without fetching until SQL_NO_DATA was returned, you would get an invalid cursor state if you issued another SQLExecute. SQLFreeStmt(stmt, SQL_DROP), on the other hand, is equivalent to SQLFreeHandle(SQL_HANDLE_STMT, stmt) and frees the entire statement handle, meaning you cannot use it again at all.
SQLFreeStmt(hstmt, SQL_CLOSE) cleans up everything to do with that statement handle; take a look at the summary:
SQLFreeStmt stops processing associated with a specific statement, closes any open cursors associated with the statement, discards pending results, or, optionally, frees all resources associated with the statement handle.
If you don't want to use SQLPrepare again you can use SQLExecDirect instead.

When is the code between a PL/SQL Package begin/end block executed?

I have the PL/SQL code that is similar to the following snippet:
create or replace
package body MY_PACKAGE as

    type array_type is table of char index by varchar2(1);
    lookup_array array_type;

    function DO_SOMETHING(input nvarchar2)
    return varchar2 as
    begin
        -- Do something here with lookup_array
    end DO_SOMETHING;

    procedure init_array as
    begin
        lookup_array('A') := 'a';
        lookup_array('B') := 'b';
        -- etc
    end init_array;

begin
    init_array;
end MY_PACKAGE;
It uses a static lookup array to process data supplied to DO_SOMETHING. My question is, when is init_array called and lookup_array loaded into memory? When the package is compiled? When it is called for the first time? Is it called more than once? Is there a better way to implement a static lookup array?
Thanks!
You can refer to this link:
http://www.dba-oracle.com/plsql/t_plsql_lookup_tables.htm
"This means the procedure is executed during package initialization. As a result during the lifetime of the session, the procedure is never called manually unless a refresh of the cached table is required."
Q1. "When is init_array called and lookup_array loaded into memory? When the package is compiled? When it is called for the first time? Is it called more than once?"
init_array is called when any function or procedure in the package is called - i.e. "just in time". It will be called whenever the package state is lost (i.e. it may be called more than once per session).
This has implications for the scenario where package state is lost - e.g. when someone recompiles the package. In this scenario, the following sequence occurs:
Your session calls do_something - init_array is called first, then do_something executes - your session now has some memory allocated in its PGA to hold the array.
My session recompiles the package. At this stage, your session's memory that is allocated for that package is marked "invalid".
Your session calls do_something - Oracle detects that your session's memory is marked invalid, and issues ORA-04061 "existing state of xxx has been invalidated".
If your session calls do_something again, it proceeds without error - it first calls init_array and then executes do_something.
Q2. "Is there a better way to implement a static lookup array?"
I don't see any real problems with this approach, so long as you take into account the behaviour described above.
In some cases I've seen people put the init call at the start of each function/procedure that needs the array - i.e. whenever do_something is called, it checks whether it needs to initialise and, if so, calls init_array. The advantage of this approach is that you can customise init_array to initialise only the bits that that particular function/procedure needs - which might be worthwhile if init_array does a lot of work, since it avoids a big one-time startup overhead for each session.
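For example, a minimal sketch of that lazy-initialisation pattern, reusing the names from the question (the g_initialized guard flag and the null return are additions for illustration):
create or replace package body MY_PACKAGE as

    type array_type is table of char index by varchar2(1);
    lookup_array  array_type;
    g_initialized boolean := false;   -- added guard flag

    procedure init_array as
    begin
        if not g_initialized then
            lookup_array('A') := 'a';
            lookup_array('B') := 'b';
            -- etc
            g_initialized := true;
        end if;
    end init_array;

    function DO_SOMETHING(input nvarchar2)
    return varchar2 as
    begin
        init_array;   -- cheap no-op after the first call in the session
        -- Do something here with lookup_array
        return null;  -- placeholder
    end DO_SOMETHING;

end MY_PACKAGE;
/
Note that the guard flag lives in the package state too, so after an ORA-04061 both the flag and the array are reset together and the next call simply re-initialises.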
