Oracle DB trigger fails, then succeeds with same input - oracle11g

In my project, there is a DB trigger that gets input from an application. The application populates a table, the trigger takes the table rows as inputs, and populates a different table as output. The trigger has been performing fine for years.
A few months ago, the trigger started failing for a huge number of inputs, giving a general exception. When the errored-out inputs are manually reprocessed, they get processed correctly. So I have now written a second trigger that searches for errored-out entries and updates their status to "not processed", after which the original trigger processes them correctly.
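In essence, the retry pass boils down to something like the sketch below; the table and column names are placeholders for illustration, not my real schema.
-- Hypothetical sketch of the retry pass: reset errored-out rows so the
-- original trigger picks them up again. Names are made up for illustration.
UPDATE input_staging
SET status = 'NOT PROCESSED'
WHERE status = 'ERROR';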
While this took care of the problem, I still cannot figure out why the errors happen in the first place. If it were a trigger problem, the issue would be reproducible with the same input, but it isn't: any errored-out input, when processed again, passes with flying colours.
What could be the problem here? When does an Oracle DB trigger throw a general exception for an input, but never a second time for the same input?

Related

How does PeopleSoft App Engine program flow occur

I am learning more about PeopleSoft Application Engine program flow. From what I've read in PeopleBooks, any actions within a step that specify a Do Select, Do When or Do While perform a looping activity, where all subsequent actions (within that step) are looped through one row at a time.
I have seen some App Engine programs, including the one below, where a Do Select action occurs in a step, followed by a Call Section action that executes another section of the program. Does this mean that the loop still iterates over the called section one row at a time, just as any other actions would be repeated within the calling step?
My second question is specific to the App Engine program below. In the highlighted PeopleCode action at the bottom of the program, you can see it runs PeopleCode to check/compare data elements and then Exit. My question is whether this code runs within the context of the looping action above, executing one row at a time, or whether it executes by looking at everything in the buffer at the same time. I would think it can only process row by row, as it needs to correctly exit/break from the step. Hopefully my question makes sense, but I'm happy to clarify if needed. Thanks!
Both of your assumptions are correct.
If you call another program section within a Do ..., then that call gets executed once for every row that is returned by the Do .... Within the context of the called section, the data in your state records and temp tables will be the same as it was when you hit the Call Section action.
When you execute a PeopleCode action, it executes with whatever data is in the state records and temp tables at that time.

Second ODBC UPDATE call without a 1s delay causes first to not happen

I'm making a couple of calls in a loop into a custom API to update a table in an SQL database, and I found that if I perform the second one immediately, the first one does not actually change the database. If I wait one second between calls, it works.
These two calls were originally only made after individual UI button presses, so this is likely the first time anyone has tried doing them twice in such quick succession. We had a feature request that now requires it, though.
A hardcoded sleep() is good for tracking down the issue, but it really goes against the grain to consider that a solution. So I'd like to know what needs to be done in ODBC to ensure a previous operation on a table has completed so that the next one won't fail. But again, I'm a total ODBC noob, so I'm not familiar with how its API is supposed to be used (and of course the author of this code left the company over 6 months ago).
Tracking through the layers of API code, I found
Everything ends in Windows ODBC calls.
A single handle for a single connection (from SQLAllocHandle ) is used for all calls.
The query in question is roughly UPDATE table_name SET ... BEGIN INSERT INTO table_name (...) VALUES (...); END (a generic sketch of that shape of batch is shown after the call sequence below)
The call sequence for each query seems to be:
SQLCancel();
check(SQLPrepare());
check(SQLExecute());
SQLCancel();
Where check() is:
if (code != SQL_SUCCESS && code != SQL_SUCCESS_WITH_INFO) {
    exception();
}
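For what it's worth, that prepared batch has the shape of a standard "update, then insert if nothing was updated" upsert, presumably with a condition between the UPDATE and the BEGIN...END block that is elided above. A generic SQL Server-style sketch of that shape, with placeholder names and ODBC parameter markers rather than the actual query, might look like:
-- Hypothetical upsert batch; placeholder table/column names, ? are ODBC
-- parameter markers bound before SQLExecute().
UPDATE my_table SET col_a = ?, col_b = ? WHERE id = ?;
IF @@ROWCOUNT = 0
BEGIN
    INSERT INTO my_table (id, col_a, col_b) VALUES (?, ?, ?);
END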
The main issue I see here is that warnings will be totally ignored. But my noob reading of things is that if the query is still running at the end of the call, it should have gone into exception() with something like HYT00 (Timeout expired), right?
The only other thing I can think is that another thread might be calling this API on the same connection, and cancelling the operation with its SQLCancel(). I'll go triple check that, but I'm pretty sure that's not happening.

A specific aspx page hangs on postback forever

In my application, out of 500 or so pages, only one specific page hangs forever on postback. It keeps loading and never stops (I waited for 30 minutes).
The problem is that this happens only in one or two odd cases; normally the same page works fine. It is a data entry page, so basically the user enters some data and we save it in 2-3 different tables using a transaction. If I enter the data five times, it might hang once, at random. I tried saving the exact same data five times and it hung only twice, so clearly the data is not at fault.
I also checked the database tables, and nothing seems to be locked either.
I am not sure exactly why it is happening. I know it is an extremely weird request, but I just want a few suggestions for debugging.
Never mind, I finally got it. It was really a stupid problem. I had three insert queries and one select query in the transaction. The insert queries were fine; the select query was the one causing the timeout and locking the transaction. I added an index on the table used by the select query, and now it works perfectly.
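For anyone hitting something similar: the fix was just an ordinary index on the column the select filters on. A hypothetical example (the table and column names are made up, not the actual schema):
-- Hypothetical fix: index the column the blocking SELECT filters on,
-- so it no longer scans the table while the transaction holds locks.
CREATE NONCLUSTERED INDEX IX_OrderEntry_CustomerId
ON dbo.OrderEntry (CustomerId);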

Is context.executeQueryAsync a transactional operation?

Let's say I update multiple items in a loop and then call executeQueryAsync() on the ClientContext class, and this call returns an error (the failed callback is invoked). Can I be sure that not a single one of the items I wanted to update was updated? Or is there a chance that some of them will get updated and some will not? In other words, is this operation transactional? Thank you; I cannot find a single post about it.
I am asking about the CSOM model, not server-side solutions.
SharePoint handles its internal updates in a transactional manner, so updating a document actually involves multiple calls to the database, and if one of them fails the other changes are rolled back so that nothing is left half updated on a failure.
However, that is not made available to us as external developers. If you create an update that updates 9 items within your executeQueryAsync call and it fails on #7, the first 6 will not be rolled back. You are going to have to write code to handle the failures, and if rolling back is important, you will have to roll back the changes manually in your own code.

SQL Server database hangs on trigger execution

We have implemented 6-7 triggers on a table, four of which are UPDATE triggers. All four require long processing because of data manipulation and conditions. Whenever a trigger executes, every page on the website stops responding and hangs for all other users on other systems as well. Even when we execute an UPDATE statement in SQL Server Management Studio on the table holding the triggers, it hangs. Can we resolve this hanging issue by shifting the trigger code into a stored procedure and calling that stored procedure after the table's UPDATE statement?
I think the triggers block table access for other users while they execute. If not, can anyone suggest a solution for this?
Triggers are dangerous - they get fired whenever things happen, and you have no control over when and how often they fire.
You should definitely NOT do any time-consuming processing in a trigger! A trigger should be super fast, and lean.
If you need processing - let the trigger record the info needed into a separate "command" table, and have another process (e.g. a scheduled SQL Agent job) that checks that table for commands to be executed, and then executes those commands - separately, independently of the main application, in a separate execution path.
Don't block your main app by doing excessive data processing / manipulation in a trigger! That's the wrong place to do this!
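A minimal sketch of that command-table pattern, assuming made-up table, column and procedure names (your schema will obviously differ):
-- Keep the trigger lean: just queue the affected keys, no heavy work here.
CREATE TRIGGER trg_MyTable_QueueUpdate ON dbo.MyTable
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.CommandQueue (MyTableId, QueuedAt)
    SELECT i.Id, SYSDATETIME()
    FROM inserted AS i;
END;
GO
-- The heavy processing lives in a procedure that a scheduled SQL Agent job
-- runs separately, outside the users' transactions.
CREATE PROCEDURE dbo.ProcessCommandQueue
AS
BEGIN
    SET NOCOUNT ON;
    -- ... do the long-running data manipulation here, then clear the queue ...
END;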
Can we resolve this hanging issue by shifting this trigger code into the stored procedure
and call this stored procedures after update statement of the table?
You have a box that weighs a ton. Does it get lighter when you put it into some nice packaging?
A trigger is already compiled. Putting it into a stored procedure is just dressing it up differently.
Your problem is that you abuse triggers to do heavy processing - something they should not do by design. Change the design.
Because I think trigger block the table access to the other user at the time of execution.
Well, triggers do NO SUCH THING - so you think wrong.
A trigger does what it is told to do, and an empty trigger takes zero locks (the locks come from whatever statement fired it). If you do set up a table-wide lock, fire whoever did that and redesign.
Triggers should be fast and light, and be over quickly. NO heavy processing in them.
Without actually seeing the triggers it's impossible to diagnose this confidently but here goes...
The trigger won't set up a lock as such, but if it sets off other UPDATE statements they'll require locks, and if those UPDATE statements fire other triggers then you could have a chain reaction that produces the kind of grief you seem to be experiencing.
If that sounds like what might be happening, then removing the triggers and doing the processing explicitly by running a stored procedure at the end may fix it. If the stored procedure is rubbish you'll still have problems, but at least they'll be easier to fix. Try to ensure that the stored procedure only updates the records that need updating.
The main problem with shifting the functionality to a stored procedure that you run after the update is ensuring that it is in fact run every time.
If your asp.net skills are stronger than your T-SQL skills then this should be a far easier problem to solve than untangling a web of SQL triggers.
The other issue is that between the update completing and the stored procedure completing, the records will be in an intermediate state, showing the initial change but not the remaining ones. This may or may not be a problem in your case.
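Concretely, the calling code ends up responsible for a sequence like the one below (all names are made up); every code path that performs the update has to remember the second step.
-- Hypothetical names. The window between the two statements is where other
-- readers can observe the intermediate state described above.
DECLARE @Id int = 42;  -- key of the row being changed
UPDATE dbo.MyTable SET Status = 'Submitted' WHERE Id = @Id;
EXEC dbo.DoPostUpdateProcessing @Id = @Id;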
