PL/SQL wait for update in Oracle - plsql

How do I create a PL/SQL function which waits for an update on some row for a specified timeout and then returns?
What I want to accomplish is this: I have a long-running process which updates its status in the ASYNC_PROCESS table by process_id. I need a function which returns true/false when this process has completed, but I also need this function to wait some time for the process to complete, returning on timeout, or returning immediately with true when the process has already completed. I don't want to use sleep(1 sec), because then I would have a 1 second lag. I don't want to use sleep(1 msec), because then I would be burning CPU resources (and still have a 1 msec lag).
How would an experienced programmer accomplish this?
The function will be called from .NET (so I need minimal lag between the DB operation and .NET/UI).
THNX,
Beef

I think the most sensible thing to do in this case is to use update triggers on that ASYNC_PROCESS table.
You should also look into the DBMS_ALERT package. Here's an edited excerpt from that doc:
Register interest in an alert:
DBMS_ALERT.REGISTER('emp_table_alert');
Create a trigger on your table to fire the alert:
CREATE TRIGGER emptrig AFTER INSERT ON emp
BEGIN
DBMS_ALERT.SIGNAL('emp_table_alert', 'message_text');
END;
From your .NET code, you can then use something that calls this:
DBMS_ALERT.WAITONE('emp_table_alert', :message, :status, :timeout);
Make sure you read the docs for what :status and :timeout do.
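If the caller were Java/JDBC rather than .NET, the waiting side could look roughly like the sketch below (the alert name comes from the excerpt above; the helper name, the combined REGISTER/WAITONE block and the timeout handling are my own assumptions, and the same anonymous-block pattern works from ODP.NET):

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.Types;

// Blocks until the trigger signals the alert or the timeout expires.
// Returns the signalled message, or null if WAITONE timed out (status = 1).
// Note: alerts signalled before REGISTER runs are not seen; register earlier
// in the session if that matters for your process.
static String waitForAlert(Connection conn, int timeoutSeconds) throws Exception {
    String plsql =
        "BEGIN " +
        "  DBMS_ALERT.REGISTER('emp_table_alert'); " +
        "  DBMS_ALERT.WAITONE('emp_table_alert', ?, ?, ?); " +
        "END;";
    try (CallableStatement cs = conn.prepareCall(plsql)) {
        cs.registerOutParameter(1, Types.VARCHAR);  // message passed to DBMS_ALERT.SIGNAL
        cs.registerOutParameter(2, Types.INTEGER);  // 0 = alert fired, 1 = timed out
        cs.setInt(3, timeoutSeconds);
        cs.execute();
        return cs.getInt(2) == 0 ? cs.getString(1) : null;
    }
}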

You should look at Oracle Advanced Queuing. It offers the kind of functionality you're looking for.
You'll probably need a separate queue table where a trigger on ASYNC_PROCESS inserts messages. You then use the AQ functions to retrieve (or wait for) the next message in the queue table.
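Very roughly, and sticking with a Java caller just for illustration, a blocking dequeue might look like the sketch below. It assumes a single-consumer queue named PROC_STATUS_Q with a RAW payload has already been created with DBMS_AQADM and is fed by the trigger on ASYNC_PROCESS; the queue name, helper name and payload format are made up for this example.

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.Types;

// Waits up to timeoutSeconds for the next message on the queue.
// If nothing arrives in time, Oracle raises ORA-25228 (dequeue timeout),
// which surfaces here as a SQLException.
static String waitForNextStatus(Connection conn, int timeoutSeconds) throws Exception {
    String plsql =
        "DECLARE " +
        "  opts  DBMS_AQ.DEQUEUE_OPTIONS_T; " +
        "  props DBMS_AQ.MESSAGE_PROPERTIES_T; " +
        "  msgid RAW(16); " +
        "  body  RAW(2000); " +
        "BEGIN " +
        "  opts.wait := ?; " +
        "  DBMS_AQ.DEQUEUE(queue_name         => 'PROC_STATUS_Q', " +
        "                  dequeue_options    => opts, " +
        "                  message_properties => props, " +
        "                  payload            => body, " +
        "                  msgid              => msgid); " +
        "  COMMIT; " +
        "  ? := UTL_RAW.CAST_TO_VARCHAR2(body); " +
        "END;";
    try (CallableStatement cs = conn.prepareCall(plsql)) {
        cs.setInt(1, timeoutSeconds);
        cs.registerOutParameter(2, Types.VARCHAR);
        cs.execute();
        return cs.getString(2);  // e.g. "1234:COMPLETED", if that is what the trigger enqueues
    }
}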

If you're doing this in C#.NET, why wouldn't you simply spawn a worker thread to do the update (via ODAC)? Why hand the responsibility over to Oracle when (it seems) you want a .NET process to make the update call (in the background) and have the main process be notified of its completion?
See here and here for examples, although there are several approaches in .NET for this (delegates, events, async callbacks, thread pools, etc.).

Related

Can we compile a procedure or function if there is a DDL lock on that procedure?

The procedure cannot be compiled? The compile statement just keeps running and nothing happens.
I have tried the following query to check whether some blocking session is causing this, but I did not find any blocked session.
select *
from v$session
where blocking_session is not null
order by blocking_session;
You won't find it as a blocking session, nor in gv$lock, because it's a library object lock rather than a data object lock. Instead, look at gv$access and/or dba_ddl_locks. You can also look at gv$session for a plsql_object_id or plsql_entry_object_id matching the object_id of your procedure in either dba_objects or dba_procedures. That's not a sure-fire way of catching everything though, if you have chained PL/SQL calls... but gv$access and/or dba_ddl_locks will have what you need for sure.
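For example, a quick way to see who is holding the library lock (shown via JDBC only to keep the examples on this page in one language; the embedded query is the interesting part, and the owner/name parameters are placeholders for your own schema and procedure):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Lists sessions holding or requesting a library (DDL) lock on one object.
static void printDdlLocks(Connection conn, String owner, String name) throws Exception {
    String sql = "SELECT session_id, mode_held, mode_requested "
               + "FROM dba_ddl_locks WHERE owner = ? AND name = ?";
    try (PreparedStatement ps = conn.prepareStatement(sql)) {
        ps.setString(1, owner);   // e.g. "MY_SCHEMA"
        ps.setString(2, name);    // e.g. "MY_PROC", the procedure you cannot compile
        try (ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                // any row here means some session still has the code pinned
                System.out.printf("sid=%d held=%s requested=%s%n",
                    rs.getInt("session_id"),
                    rs.getString("mode_held"),
                    rs.getString("mode_requested"));
            }
        }
    }
}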
If something is executing your procedure, you will have to wait for them to complete, or kill their session, before you can compile it. It's a weak point in Oracle, that prevents us from pushing code changes in without kicking everybody out first. But that's the way it is.
I found that there are a lot of rows in the table, and the procedure I was trying to compile touches that table as well. So I stopped the compilation process at that point and executed a rollback statement; it then takes time to roll back millions of rows one by one. I tracked that process with the following script:
select s.sid, s.serial#, s.client_info, t.addr, sum(t.used_ublk)
from v$transaction t, v$session s
where t.addr = s.taddr
group by s.sid, s.serial#, s.client_info, t.addr;
In the end I just had to wait; I could not find anything to do except wait. Once all the transactions were rolled back, I tried to compile the procedure again and it compiled.

How to gracefully shut down reactive kafka-consumer and commit last processed record?

My painful hunt for this feature is fully described in a disgustingly long question: Several last offsets aren't getting commited with reactive kafka, which shows my multiple attempts with different failures.
How would one subscribe to ReactiveKafkaConsumerTemplate<String, String> so that it processes the records synchronously (for simplicity), and acks/commits every 2s AND upon manual cancellation of the stream? I.e. it works, acking/committing every 2s; then a signal arrives via REST/JMX/whatever, the stream terminates, and the last processed Kafka record is acked/committed.
After a lot of attempts I was able to come up with the following solution. It seems to work, but it's kinda ugly, because it's very "white-box": the outer flow depends heavily on stuff happening inside other methods. Please criticise and suggest improvements. Thanks.
kafkaReceiver.receive()
    .flatMapSequential(receivedKafkaRecord -> processKafkaRecord(receivedKafkaRecord), 16)
    .takeWhile(e -> !stopped)
    .sample(configuration.getKafkaConfiguration().getCommitInterval())
    .concatMap(offset -> {
        log.debug("ack/commit offset {}", offset.offset());
        offset.acknowledge();
        return offset.commit();
    })
    .doOnTerminate(() -> log.info("stopped."));
What didn't work:
A) you cannot use Disposable.dispose, since that would break the stream and your latest processed record won't be committed.
B) you cannot put take() on top of the stream, as that would cancel the stream and you won't be able to commit either.
C) not sure how I'd be able to incorporate error handling here.
Because of what didn't work, stream termination is triggered by a boolean field named stopped, which can be set from anywhere (see the sketch at the end of this answer).
Flow explained:
flatMapSequential: because of the inner parallelism and the need to commit offset N only if everything up to N-1 has been processed.
processKafkaRecord returns Mono<ReceiverOffset>, i.e. the offset of the processed record, so there is something to ack/commit. When stopped, the method skips processing and returns Mono.empty.
takeWhile stops the stream once stopped is set; it has to be placed here because a whole sample interval could consist only of "empties".
The rest is simple: sample at the given interval and commit in order. If the sample window contains no record, the commit is skipped. Finally we log that the stream has terminated.
If anyone knows how to improve this, please criticise.
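For completeness, here is a rough sketch of one way the stopped flag could be flipped and the final commit awaited before shutdown. The class and method names (ConsumerLifecycle, stopGracefully, consumerPipeline) are made up for illustration; consumerPipeline() is assumed to build exactly the Flux shown above, reading the same stopped field in its takeWhile.

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import reactor.core.Disposable;
import reactor.core.publisher.Flux;

class ConsumerLifecycle {
    private volatile boolean stopped = false;
    private final CountDownLatch terminated = new CountDownLatch(1);
    private Disposable subscription;

    void start() {
        subscription = consumerPipeline()          // the chain from the answer above
            .doOnTerminate(terminated::countDown)  // fires after the last ack/commit
            .subscribe();
    }

    // Call this from a REST endpoint, JMX bean, or JVM shutdown hook.
    void stopGracefully() throws InterruptedException {
        stopped = true;                            // takeWhile() completes the stream
        if (!terminated.await(30, TimeUnit.SECONDS)) {
            subscription.dispose();                // last resort; may lose the final commit
        }
    }

    private Flux<Void> consumerPipeline() {
        // build the kafkaReceiver.receive()... chain exactly as in the answer,
        // with .takeWhile(e -> !stopped) referring to the field above
        throw new UnsupportedOperationException("elided; see the snippet above");
    }
}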

Creating many batches (SysOperation Framework) very quickly doing similar processes - "Cannot edit a record in LastValue (SysLastValue)"?

I have a SysOperation Framework process that creates a ReliableAsynchronous batch to post packing slips and several get created at a time.
Depending on how quickly I click to create them, I get:
Cannot edit a record in LastValue (SysLastValue).
An update conflict occurred due to another user process deleting the record or changing one or more fields in the record.
And
Cannot create a record in LastValue (SysLastValue). User ID: t edit a, Class.
The record already exists.
On a couple of them in the BatchHistory. I have this.parmLoadFromSysLastValue(false); set. I'm not sure how to prevent writing to SysLastValue table.
Any idea what could be going on?
I get this exception a lot too, so I've created the habit of catching DuplicateKeyException in my service operation. When it is thrown, catch it and retry (for a default of 5x).
The error occurs when a lot of processes run simultaneously, like you are doing now.
DuplicateKeyException can be caught inside a transaction, so you could improve this by putting a try/catch around the code that does the insert into the SysLastValue table, if you can find that code.
As far as I can see, these are the only two occurrences where a record is inserted into this table (except maybe in the kernel):
InventUnusedDimCleanUp.serialize()
SysAutoSemaphore.autoSemaphore()
Put a breakpoint there and see if that code is executed. If so you can add a try/catch with retry and see if that "fixes" it.
You could also use the tracing cockpit and the trace parser to figure out where that record is inserted if it's not one of those two.
My theory about LoadFromSysLastValue: I believe setting this.parmLoadFromSysLastValue(false) does not work, since it is only taken into account when the dialog is started, not when your operation is executed. When in batch, no SysLastValue will be used to initialize your data contract anyway, as you want it to use the exact parameters you supplied in the data contract.
It's because of code calling SysOperationController.savelast() while in batch. My solution is to set loadFromSysLastValue to false in SysOperationController.loadFromSysLastValue() as part of the in-batch check:
if (!this.isInBatch())
{
.....
}
//Begin
else
{
loadFromSysLastValue = false;
}
//End

create a queue of process in classic asp

Here is the problem:
There is a classic ASP app which calls lame.exe to encode MP3s many times per day, and there is no control over how lame.exe gets called by several users; in other words, there is no queue for that purpose.
So here is what I am thinking about:
// below code is all pseudo-code
// process_flag, mp3 and processID all reside in a database

function addQ(string mp3)
    add a record to the database
    set its process_flag to undone
    then go to checkQ
end function

function checkQ()
    if there are processes in the queue list with process_flag set to undone
        sort them by processID asc
        for each processID
            processQ(processID)
        end for
    end if
end function

function processQ(int processID)
    run lame.exe with the help of wscript.exe
    after the job is done, set the process_flag to done
end function
So I just want to know: is there a better solution, or any other approaches out there?
Regards.
Looks like a reasonable approach for classic asp.
Just make sure that in your checkQ function, you are only retrieving queue items that have the process_flag set to undone, or you might be trying to re-process the same items over and over.
Read this article for another approach using MSMQ - it starts by creating a new Public Queue, then sending messages to it from your asp page. It also required an additional executable to process queued items.
This is a perfect application for MSMQ. Let proven code handle the reliable messaging, concurrency control etc. so you can just focus on the application logic.

best practices for using sqlite for a database queue

I am using an SQLite database for a producer-consumer queue.
One or more producers INSERT one row at a time with a new autoincremented primary key.
There is one consumer (implemented in Java, using the sqlite-jdbc library), and I want it to read a batch of rows and delete them. It seems like I need transactions to do this, but trying to use SQLite with transactions doesn't seem to work right. Am I overthinking this?
If I do end up needing transactions, what's the right way to do this in Java?
Connection conn;
// assign here
boolean success = false;
try {
    // do stuff
    success = true;
} finally {
    if (success)
        conn.commit();
    else
        conn.rollback();
}
See this trail for an introduction on transaction handling with Java JDBC.
As for your use case, I think you should use transactions, especially if the consumer is complex. The tricky part is always to decide when a row has been consumed and when it should be considered again. For example, if you have an error before the consumer can actually do its job, you'll want a rollback. But if the row contains illegal data (like a text in a number field), then the rollback will turn into an infinite loop.
Normally, with SQLite you have to use explicit (not implicit!) transactions, so you need something like BEGIN TRANSACTION. Of course, it could be that your Java binding has this incorporated, but good bindings don't do it silently.
So you might want to add the necessary transaction start yourself; with JDBC that usually means calling conn.setAutoCommit(false) before the batch and conn.commit() afterwards (there might also be a specialised method in your binding).
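To make that concrete, here is a minimal sketch of the consumer side using sqlite-jdbc, assuming a table simply called queue with id and payload columns (those names, like the single-consumer assumption from the question, are mine): the batch read and the delete either commit together or roll back together.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

class QueueConsumer {
    // Reads up to batchSize rows and deletes them in one explicit transaction.
    // Safe only with a single consumer, as described in the question.
    static List<String> consumeBatch(Connection conn, int batchSize) throws SQLException {
        List<String> payloads = new ArrayList<>();
        long lastId = -1;
        conn.setAutoCommit(false);                 // start an explicit transaction
        try {
            try (Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery(
                     "SELECT id, payload FROM queue ORDER BY id LIMIT " + batchSize)) {
                while (rs.next()) {
                    lastId = rs.getLong("id");
                    payloads.add(rs.getString("payload"));
                }
            }
            if (lastId >= 0) {
                try (PreparedStatement del =
                         conn.prepareStatement("DELETE FROM queue WHERE id <= ?")) {
                    del.setLong(1, lastId);
                    del.executeUpdate();
                }
            }
            conn.commit();                         // read + delete succeed together
        } catch (SQLException e) {
            conn.rollback();                       // leave the rows queued on failure
            throw e;
        } finally {
            conn.setAutoCommit(true);
        }
        return payloads;
    }
}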
