Spring JDBC: Oracle transaction errors out after 120 seconds - spring-mvc

For a particular requirement, I have to iterate through a list of 50,000 records and insert them into the database. The requirement is that if any one of the 50,000 records fails, all the other records should be rolled back, so we did not commit anywhere during processing. But this resulted in the following error:
[2/1/16 14:01:47:939 CST] 000000be SystemOut O ERROR
org.springframework.jdbc.UncategorizedSQLException:
PreparedStatementCallback; uncategorized SQLException for SQL [INSERT
INTO ...) VALUES (...)]; SQL state [null]; error code [0]; Current
thread has not commited in more than [120] seconds and may incur
unwanted blocking locks. Please refactor code to commit more
frequently.
Now, when we implemented batching (we use the PreparedStatement.executeBatch() method to insert data in batches of 100), the above error no longer arises. autoCommit is set to true by default for the batching, so a commit happens after every batch execution.
Could anyone suggest how we can handle the rollback mechanism in this case? If the 50th batch execution fails, we want all 49 previous batch executions to be reverted. We are using Spring Data/JDBC, an Oracle 11g database, and WebSphere Application Server. I have read that the 120-second commit timeout above can also be configured in WebSphere's JTA settings. Is that so? Please suggest alternatives or other possible solutions.
Thank you in advance.

You must set autocommit to false and commit only at the end, once all your batches have executed successfully.
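The pattern (every batch inside one transaction, a single commit at the very end, rollback on any failure) can be sketched with Python's stdlib sqlite3 module as a self-contained stand-in for the Spring/Oracle setup; the table and column names here are invented for the illustration. In JDBC terms the equivalent is Connection.setAutoCommit(false), executeBatch() per chunk, then one commit() or rollback().

```python
import sqlite3

def insert_all_or_nothing(conn, rows, batch_size=100):
    """Insert rows in batches inside a single transaction.
    Commit only after every batch succeeds; roll back on any failure."""
    try:
        cur = conn.cursor()
        for start in range(0, len(rows), batch_size):
            batch = rows[start:start + batch_size]
            cur.executemany("INSERT INTO records (id, name) VALUES (?, ?)", batch)
        conn.commit()          # single commit at the very end
        return True
    except sqlite3.Error:
        conn.rollback()        # any failure reverts every prior batch
        return False

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, name TEXT)")

# A bad row (duplicate primary key) in the last batch undoes everything.
rows = [(i, f"row{i}") for i in range(250)] + [(0, "duplicate")]
ok = insert_all_or_nothing(conn, rows)
count = conn.execute("SELECT COUNT(*) FROM records").fetchone()[0]
print(ok, count)   # False 0
```

Note that this deliberately re-introduces the long-running transaction, so for 50,000 rows you would still need to raise the 120-second transaction timeout on the WebSphere side.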

Related

WSO2 ESB 6.1.0 Batch Processing

I have a requirement to process 10 million records in MS SQL database using WSO2 ESB.
Input file can be XML or Flat file.
I have created a dataservice provided in WSO2 ESB.
Now, when I start the process to read from XML and insert into the MS SQL database, I want to commit every 5,000 records during processing via the ESB, so that if record 5001 fails, I can restart processing from record 5001 instead of from 0.
First problem: the commit is happening for all records at once. I want to configure it so that it processes 5,000 records, commits to the DB, and then proceeds with the next set of records. Additionally, if the batch job fails after processing 10,000 records, I want it to restart from record 10,001 and not from 0.
Please suggest ideas.
Thanks,
Abhishek
This is a more or less common pattern. Create an agent/process that continuously reads from an IPC buffer (memory or a file).
The ESB endpoint simply writes into the buffer.
The agent is responsible for retrying and/or notifying asynchronously if it ultimately cannot commit.
What you can do is write the start and end record offsets to a file accessible to the ESB. When the schedule starts, it picks up the offset from the file (5000 in your case) and processes that batch via DSS. If the DSS response is successful, you increment the offset and update the file (now 10000). If the DSS response is not successful, the file still says 10000; once you find the root cause of the failure, fix it, and rerun the schedule, it resumes from record 10000, and on success writes 15000 to the file. This continues until the end condition is met.
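The checkpoint-file idea above can be sketched in Python standing in for the ESB scheduler; the checkpoint file name, table, and batch size are invented for the example. Each batch is committed, then the offset in the file is advanced, so a rerun resumes from the last committed batch.

```python
import os
import sqlite3
import tempfile

CHECKPOINT = os.path.join(tempfile.gettempdir(), "progress.txt")  # hypothetical
BATCH = 5000

def read_offset():
    """Resume point; 0 when no checkpoint exists yet."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return int(f.read().strip())
    return 0

def write_offset(offset):
    with open(CHECKPOINT, "w") as f:
        f.write(str(offset))

def process(records, conn):
    """Commit every BATCH records, then advance the checkpoint, so a
    rerun after a failure resumes from the last committed batch."""
    offset = read_offset()
    while offset < len(records):
        batch = records[offset:offset + BATCH]
        conn.executemany("INSERT INTO target (val) VALUES (?)", batch)
        conn.commit()            # this batch is now durable...
        offset += len(batch)
        write_offset(offset)     # ...and the restart point moves forward
    return offset

if os.path.exists(CHECKPOINT):
    os.remove(CHECKPOINT)        # start fresh for this demo
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target (val TEXT)")
records = [(f"r{i}",) for i in range(12000)]
print(process(records, conn))    # 12000: batches of 5000, 5000, 2000
```

Writing the checkpoint only after the commit means a crash between the two replays at most one batch on restart, which is safe as long as the inserts are idempotent or the partial batch is cleaned up before retrying.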

When to check for "database is locked" errors in SQLite

On the Linux server where our web app runs, we also have a small app that uses SQLite (it is written in C).
For performing database actions we use the following commands:
sqlite3_prepare_v2
sqlite3_bind_text or sqlite3_bind_int
sqlite3_step
sqlite3_finalize
Every now and then there was a concurrency situation and I got the following error:
database is locked
So I thought: "This happens when one process writes a certain record and the
other one is trying to read exactly the same record."
So after every step command where this collision could occur, I checked for this error. When it happened, I waited a few milliseconds and then tried again.
But the sqlite error "database is locked" still occurred.
So I changed every step command and the code lines after it. Somehow I thought that this "database is locked" error could only occur with the step command.
But the error kept coming.
My question is now:
Do I have to check for error_code == 5 ("database is locked") after every sqlite3 command?
Thanks a lot in advance
If you're receiving error code 5 (busy) you can limit this by using an immediate transaction. If you're able to begin an immediate transaction, SQLite guarantees that you won't receive a busy error until you commit.
Also note that SQLite doesn't have row-level locking; the entire database is locked. With a WAL journal, you can have one writer and multiple readers. With other journaling modes, you can have either one writer or multiple readers, but not both simultaneously.
SQLite Documentation on 'SQLITE_BUSY'
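To make the guarantee concrete, here is a small sketch using Python's stdlib sqlite3 module (which wraps the same C API): a second BEGIN IMMEDIATE fails with "database is locked" while the first connection holds the write lock, and a retry loop around acquiring the transaction confines the busy handling to one place. The retry count and delay are arbitrary.

```python
import sqlite3
import tempfile
import time

db_path = tempfile.NamedTemporaryFile(suffix=".db", delete=False).name

# isolation_level=None -> no implicit transactions; timeout=0 -> fail fast
writer = sqlite3.connect(db_path, isolation_level=None, timeout=0)
other = sqlite3.connect(db_path, isolation_level=None, timeout=0)
writer.execute("CREATE TABLE t (x INTEGER)")

def with_immediate_txn(conn, work, retries=5, delay=0.01):
    """BEGIN IMMEDIATE takes the write lock up front; once it succeeds,
    the statements inside the transaction won't raise SQLITE_BUSY."""
    for _ in range(retries):
        try:
            conn.execute("BEGIN IMMEDIATE")
            break
        except sqlite3.OperationalError:     # "database is locked"
            time.sleep(delay)
    else:
        raise RuntimeError("could not acquire the write lock")
    work(conn)
    conn.execute("COMMIT")

# While the first connection holds an immediate transaction...
writer.execute("BEGIN IMMEDIATE")
writer.execute("INSERT INTO t VALUES (1)")
# ...a second immediate transaction fails straight away with SQLITE_BUSY.
try:
    other.execute("BEGIN IMMEDIATE")
    locked = False
except sqlite3.OperationalError:
    locked = True
writer.execute("COMMIT")

with_immediate_txn(other, lambda c: c.execute("INSERT INTO t VALUES (2)"))
print(locked, other.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # True 2
```

In the C API the equivalent is executing BEGIN IMMEDIATE via sqlite3_exec() (or a prepared statement) and retrying only that one statement on SQLITE_BUSY, instead of checking after every sqlite3_step call.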

How to find out the process description from a process id on Redshift?

I'm trying to debug a deadlock on Redshift:
SQL Execution failed ... deadlock detected
DETAIL: Process 7679 waits for AccessExclusiveLock on relation 307602 of database 108260; blocked by process 7706.
Process 7706 waits for AccessShareLock on relation 307569 of database 108260; blocked by process 7679.
Is there a sql query to get a description for process ids 7679 and 7706?
select * from stl_query where pid=XXX
This will give you the query text, which will help you identify your query.
You can also query stv_locks to check if there are any current updates in the database, and stl_tr_conflict will display all the lock conflicts on the table.

C# timeout exception for a simple query

I'm getting "Timeout expired.
The timeout period elapsed prior to completion of the operation."
If I execute the same query in SQL Server Management Studio it executes fine, but executing the query through the program does not succeed.
Where could I have gone wrong?
The query is very simple, and it's throwing the exception.
The timeout is set to 90 seconds.
Select isnull(max(voucherid),0)+1 from XXX
dbmanager.executequery(con,"uspXXX",parameter);
Rebuild your statistics and/or indexes on that table.
See this answer.

Just started getting AIR SQLite Error 3128 Disk I/O error occurred

We have a new beta version of our software with some changes, but not around our database layer.
We've just started getting Error 3128 reported in our server logs. It seems that once it happens, it happens for as long as the app is open. The part of the code where it is most apparent is where we log data every second via SQLite. We've generated 47k errors on our server this month alone.
3128 Disk I/O error occurred. Indicates that an operation could not be completed because of a disk I/O error. This can happen if the runtime is attempting to delete a temporary file and another program (such as a virus protection application) is holding a lock on the file. This can also happen if the runtime is attempting to write data to a file and the data can't be written.
I don't know what could be causing this error. Maybe an anti-virus program? Maybe our app is getting confused and writing data on top of each other? We're using async connections.
It's causing lots of issues and we're at a loss. It has happened in our older version, but maybe 100 times in a month rather than 47,000 times. Either way I'd like to make it happen "0" times.
Possible solution: Exception Message: Some kind of disk I/O error occurred
Summary: There is probably not a problem with the database but a problem creating (or deleting) the temporary file once the database is opened. AIR may have permissions to the database, but not to create or delete files in the directory.
One answer that has worked for me is to use the PRAGMA statement to set the journal_mode value to something other than DELETE. You do this by issuing a PRAGMA statement in the same way you would issue a query statement.
PRAGMA journal_mode = OFF
Unfortunately, if the application crashes in the middle of a transaction while the OFF journaling mode is set, the database file will very likely become corrupt. [1]
[1] http://www.sqlite.org/pragma.html#pragma_journal_mode
The solution was to make sure database deletes, updates, and inserts only happened one at a time, by adding a little wrapper. On top of that, we had to watch for error 3128 and retry. I think this is because we have a trigger running that could lock the database after we inserted data.
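That wrapper can be sketched as follows, with Python's stdlib sqlite3 module standing in for the AIR runtime; the class name, retry count, and delay are invented for the sketch, but the shape (one lock serializing writes, plus a retry on the transient lock error) is the same.

```python
import sqlite3
import threading
import time

class SerializedDb:
    """Serialize all writes through one lock and retry transient lock
    errors (in AIR the transient error was 3128; here it surfaces as
    sqlite3.OperationalError)."""

    def __init__(self, path, retries=3, delay=0.05):
        self.conn = sqlite3.connect(path, check_same_thread=False)
        self.lock = threading.Lock()
        self.retries = retries
        self.delay = delay

    def execute(self, sql, params=()):
        with self.lock:                       # one write at a time
            last_error = None
            for _ in range(self.retries):
                try:
                    self.conn.execute(sql, params)
                    self.conn.commit()
                    return
                except sqlite3.OperationalError as e:
                    last_error = e
                    time.sleep(self.delay)    # transient: wait, retry
            raise last_error

db = SerializedDb(":memory:")
db.execute("CREATE TABLE log (msg TEXT)")
threads = [
    threading.Thread(target=db.execute,
                     args=("INSERT INTO log VALUES (?)", (f"m{i}",)))
    for i in range(10)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(db.conn.execute("SELECT COUNT(*) FROM log").fetchone()[0])  # 10
```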