I am using an sqlite database for a producer-consumer queue.
One or more producers INSERT one row at a time with a new autoincremented primary key.
There is one consumer (implemented in Java, using the sqlite-jdbc library), and I want it to read a batch of rows and delete them. It seems like I need transactions to do this, but my attempts to use transactions with SQLite don't seem to work right. Am I overthinking this?
If I do end up needing transactions, what's the right way to do this in Java?
Connection conn;
// assign here
conn.setAutoCommit(false); // without this, commit()/rollback() below won't behave as intended
boolean success = false;
try {
    // do stuff
    success = true;
}
finally {
    if (success)
        conn.commit();
    else
        conn.rollback();
}
See this trail for an introduction to transaction handling with Java JDBC.
As for your use case, I think you should use transactions, especially if the consumer is complex. The tricky part is always deciding when a row has been consumed and when it should be considered again. For example, if an error occurs before the consumer can actually do its job, you'll want a rollback. But if the row contains illegal data (like text in a number field), then the rollback will turn into an infinite loop, because the same bad row will be read again on every attempt.
Normally, SQLite transactions are explicit (not implicit!), so you need something like "BEGIN TRANSACTION". Of course, it could be that your Java binding has this incorporated -- but good bindings don't.
So you might want to add the necessary transaction start (there might be a specialized method in your binding; with JDBC, conn.setAutoCommit(false) is the usual way to get one).
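Putting the pieces together, here is a minimal consumer sketch, assuming a table named queue with columns id (the autoincremented primary key) and payload -- adjust the names to your schema. It reads a batch, remembers the highest id it saw, and deletes everything up to that id in the same transaction, which is safe as long as there is only one consumer:

import java.sql.*;
import java.util.ArrayList;
import java.util.List;

public class QueueConsumer {

    // Reads up to batchSize rows and deletes them atomically.
    public List<String> consumeBatch(Connection conn, int batchSize) throws SQLException {
        conn.setAutoCommit(false); // SELECT + DELETE in one transaction
        try {
            List<String> payloads = new ArrayList<>();
            long maxId = -1;
            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT id, payload FROM queue ORDER BY id LIMIT ?")) {
                ps.setInt(1, batchSize);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        maxId = rs.getLong("id");
                        payloads.add(rs.getString("payload"));
                    }
                }
            }
            if (maxId >= 0) {
                // With a single consumer, everything up to maxId was just read.
                try (PreparedStatement ps = conn.prepareStatement(
                        "DELETE FROM queue WHERE id <= ?")) {
                    ps.setLong(1, maxId);
                    ps.executeUpdate();
                }
            }
            conn.commit();
            return payloads;
        } catch (SQLException e) {
            conn.rollback();
            throw e;
        }
    }
}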
What would a Cosmos stored procedure look like that would set the PumperID field for every record to a default value?
We need to do this to repair some data, so the procedure would visit every record that has a PumperID field (not all docs have this) and set it to a default value.
Assuming a one-time data maintenance task, arguably the simplest solution is to create a single-purpose .NET Core console app and use the SDK to query for the items that require changes and perform the updates. I've used this approach to rename properties, for example. This works for any Cosmos database and doesn't require deploying any stored procs or anything else.
Ideally, it is designed to be idempotent so it can be run multiple times if several passes are required to catch new data coming in. If the item count is large, one could optionally use the SDK operations to scale up throughput on start and scale back down when finished. For performance, run it close to the endpoint, on an Azure Virtual Machine or Function.
For scenarios where you want to iterate through every item in a container and update a property, the best way to accomplish this is to use the Change Feed Processor and run the operation in an Azure Function or VM. See Change Feed Processor to learn more, and the examples to get started.
With Change Feed you will want to start it reading from the beginning of the container. To do this, see Reading Change Feed from the beginning.
Then, within your delegate, you will read each item off the change feed, check its value, and then call ReplaceItemAsync() to write it back if it needed to be updated.
static async Task HandleChangesAsync(IReadOnlyCollection<MyType> changes, CancellationToken cancellationToken)
{
    Console.WriteLine("Started handling changes...");
    foreach (MyType item in changes)
    {
        if (item.PumperID == null)
        {
            item.PumperID = "some value";
            //call ReplaceItemAsync(), etc.
        }
    }
    Console.WriteLine("Finished handling changes.");
}
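For reference, the same wiring sketched with the Java SDK (azure-cosmos v4) -- the container handles, host name, and the setStartFromBeginning option here are assumptions based on that SDK's builder API, so verify them against the current docs:

import com.azure.cosmos.ChangeFeedProcessor;
import com.azure.cosmos.ChangeFeedProcessorBuilder;
import com.azure.cosmos.CosmosAsyncContainer;
import com.azure.cosmos.models.ChangeFeedProcessorOptions;
import com.fasterxml.jackson.databind.JsonNode;
import java.util.List;

public class PumperIdBackfill {

    public static ChangeFeedProcessor build(CosmosAsyncContainer feedContainer,
                                            CosmosAsyncContainer leaseContainer) {
        // Read the container from the start so existing records are visited.
        ChangeFeedProcessorOptions options = new ChangeFeedProcessorOptions();
        options.setStartFromBeginning(true);

        return new ChangeFeedProcessorBuilder()
                .hostName("pumper-id-backfill") // any unique instance name
                .feedContainer(feedContainer)
                .leaseContainer(leaseContainer)
                .options(options)
                .handleChanges((List<JsonNode> docs) -> {
                    for (JsonNode doc : docs) {
                        if (doc.get("PumperID") == null || doc.get("PumperID").isNull()) {
                            // set the default value and write the item back,
                            // e.g. via feedContainer.replaceItem(...)
                        }
                    }
                })
                .buildChangeFeedProcessor();
    }
}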
In a clustered Intershop environment, we see a lot of error messages. I suspect that the communication between the application servers is not reliable.
Caused by: com.intershop.beehive.orm.capi.common.ORMException:
Could not UPDATE object: com.intershop.beehive.bts.internal.orderprocess.basket.BasketPO
Is there a safe way for the local application server to load the latest instance?
BasketPO basket = null;
try {
    BasketPOFactory factory = (BasketPOFactory) NamingMgr.getInstance().lookupFactory(BasketPOFactory.FACTORY_NAME);
    try (ORMObjectCollection<BasketPO> baskets = factory.getObjectsBySQLWhere("uuid=?", new Object[]{basketID}, CacheMode.NO_CACHING)) {
        if (null != baskets && !baskets.isEmpty()) {
            basket = baskets.stream().findFirst().get();
        }
    }
}
catch (Throwable t) {
    Logger.error(this, t.getMessage(), t);
}
Does the ORMObject#refresh method help?
try {
    if (null != basket)
        basket.refresh();
}
catch (Throwable t) {
    Logger.error(this, t.getMessage(), t);
}
You experience that error because an optimistic lock "fails". To understand the problem better, I'll try to explain how optimistic locking works, in particular in the Intershop ORM layer.
There is a column named OCA in the PO tables (OCA == optimistic control attribute?). Imagine that two servers (or two different threads/transactions) try to update the same row in a table. For performance reasons, there is no DB locking involved by default (e.g., by issuing SELECT ... FOR UPDATE). Instead, the first thread/server increments the OCA by one when it updates the row successfully within its transaction.
The second thread/server knows the value of the OCA from the time that it created its own state. It then tries to update the row by issuing a similar query:
UPDATE ... OCA = OCA + 1 ... WHERE UUID = <uuid> AND OCA = <old_oca>
Since the OCA has already been incremented by the first thread/server, this update fails (in reality, it updates 0 rows), and the exception that you posted above is thrown when the ORM layer detects that no rows were updated.
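Stripped of the ORM, the pattern the second thread/server runs looks roughly like this (a hand-rolled JDBC sketch; the table and column names are placeholders, not Intershop's actual generated SQL):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public final class OptimisticUpdate {

    // Returns true if this writer won the race, false if the row changed underneath it.
    static boolean updateBasketTotal(Connection conn, String uuid, long knownOca, double total)
            throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "UPDATE BASKET SET TOTAL = ?, OCA = OCA + 1 WHERE UUID = ? AND OCA = ?")) {
            ps.setDouble(1, total);
            ps.setString(2, uuid);
            ps.setLong(3, knownOca);
            // 0 updated rows means another transaction bumped the OCA first --
            // exactly the condition on which the ORM layer throws its ORMException.
            return ps.executeUpdate() == 1;
        }
    }
}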
Your problem is not the inter-server communication but rather the fact that either:
multiple servers/threads try to update the same object;
there are direct updates in the database that bypass the ORM layer (less likely);
To solve this you may:
Avoid that situation altogether (highly recommended by me :-) );
Use the ISH locking framework (very cumbersome, IMHO);
Use pessimistic locking supported by the ISH ORM layer and Oracle (beware of potential performance issues, deadlocks, bugs);
Use Java locking - but since the servers run in different JVMs, this is rarely an option;
OFFTOPIC remarks: I'm not sure why you use getObjectsBySQLWhere when you know the primary key (uuid). As far as I remember, ORMObjectCollections should be closed if not iterated completely.
UPDATE: If the cluster is not configured correctly and the multicasts can't be received from the nodes, you won't be able to resolve the problems programmatically.
The "ORMObject.refresh()" marks the cached shared state as invalid. Next access to the object reloads the state from the database. This impacts the performance and increase the database server load.
BUT:
The "refresh()" method does not reload the PO instance state if it already assigned to the current transaction.
It would be best to investigate and fix the server communication issues.
Another possibility is that it isn't a communication problem (multicast between nodes in the cluster, I assume), but simply two requests trying to update the basket at the same time, for example two AJAX requests updating something on the basket.
I would avoid trying to "fix" the ORM; that would only cause more harm than good. Rather, investigate further and post back more information.
I have a SysOperation Framework process that creates a ReliableAsynchronous batch to post packing slips and several get created at a time.
Depending on how quickly I click to create them, I get:
Cannot edit a record in LastValue (SysLastValue).
An update conflict occurred due to another user process deleting the record or changing one or more fields in the record.
And
Cannot create a record in LastValue (SysLastValue). User ID: t edit a, Class.
The record already exists.
On a couple of them in the BatchHistory. I have this.parmLoadFromSysLastValue(false); set, and I'm not sure how to prevent writing to the SysLastValue table.
Any idea what could be going on?
I get this exception a lot too, so I've gotten into the habit of catching DuplicateKeyException in my service operation. When it is thrown, I catch it and retry (5 times by default).
The error occurs when a lot of processes run simultaneously, like you are doing now.
A DuplicateKeyException can be caught inside a transaction, so you could improve on this by putting a try/catch around the code that does the insert into the SysLastValue table, if you can find that code.
As far as I can see, these are the only two occurrences where a record is inserted into this table (except maybe in the kernel):
InventUnusedDimCleanUp.serialize()
SysAutoSemaphore.autoSemaphore()
Put a breakpoint there and see if that code is executed. If so, you can add a try/catch with retry and see if that "fixes" it.
You could also use the tracing cockpit and the trace parser to figure out where that record is inserted, if it's not one of those two.
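The bounded retry described above has this shape (sketched in Java for illustration only; in X++ you would catch Exception::DuplicateKeyException and use the retry statement, and the exception type below is just the closest JDBC analogue):

import java.sql.SQLIntegrityConstraintViolationException;

public class RetryOnDuplicate {

    static final int MAX_ATTEMPTS = 5; // the "5 times by default" mentioned above

    interface Insert {
        void run() throws SQLIntegrityConstraintViolationException;
    }

    static void insertWithRetry(Insert insert) throws SQLIntegrityConstraintViolationException {
        for (int attempt = 1; ; attempt++) {
            try {
                insert.run(); // e.g. the insert into SysLastValue
                return;       // success
            } catch (SQLIntegrityConstraintViolationException e) {
                if (attempt >= MAX_ATTEMPTS) {
                    throw e;  // give up after the configured number of attempts
                }
                // otherwise loop and retry
            }
        }
    }
}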
My theory about LoadFromSysLastValue: I believe setting this.parmLoadFromSysLastValue(false) does not work because it is only taken into account when the dialog is started, not when your operation is executed. When running in batch, no SysLastValue should be used to initialize your data contract, since you want it to use the exact parameters you supplied in your data contract.
It's because of the code calling SysOperationController.savelast() while in batch. My solution is to set loadFromSysLastValue to false in SysOperationController.loadFromSysLastValue() as part of the in-batch check:
if (!this.isInBatch())
{
    .....
}
//Begin
else
{
    loadFromSysLastValue = false;
}
//End
In the past I have avoided ORMs and always handcrafted parameterised queries, etc. This is very time consuming and a real pain when first developing an application. Recently I decided to have another look at ORMs, specifically the Sqlite.NET ORM.
I would like to use SQLite ORM features but also be able to run a batch of native SQL commands to prepopulate a database.
We are using the SqliteNetExtensions-MvvmCross dll to enable one-to-many relationships etc., and this all looks fine. My issue comes when I want to seed the database with configuration data. I was hoping to simply provide a sql file containing a series of sql statements to be run one after another.
I have grabbed the SQLite.NET code from GitHub and run the tests. I have then extended the StringQueryTests class, which has a simple [Product] table, to do the following:
[Test]
public void AlanTest()
{
    StringBuilder sb = new StringBuilder(200);
    sb.Append(" DELETE FROM Product;");
    sb.Append(" INSERT INTO Product VALUES (1,\"Name1\",1,1);");
    sb.Append(" INSERT INTO Product VALUES (2,\"Name2\",2,3);");

    db.Execute(sb.ToString());
}
When I run this, it does not throw an error, and in fact the behaviour seems to be that it only runs the first command. If I paste the contents of sb.ToString() into a sqlite database query window, it works just fine.
Is this the expected behaviour? If so, how do I go about overcoming it so that I can use an approach like the above? I don't really want to have to create objects to manage all the SQL statements if possible.
I can see that there are a number of approaches that could be adopted to overcome this issue - has anyone got a workaround or suggestions that they think can solve it?
Kind regards
Alan.
I just ran into this issue too. I found a blog post that explains why.
Here is what the post says in case it goes missing.
All of the code [in sqlite-net] correctly checks the result codes and throws exceptions accordingly.
Although I haven't posted all relevant code here, I did review it, and the real origin of this behavior is elsewhere - in the native sqlite3.dll sqlite3_prepare_v2 method. Here's the relevant part of the documentation:
These routines only compile the first statement in zSql, so *pzTail is left pointing to what remains uncompiled.
Since sqlite-net doesn't do anything with the uncompiled tail, only the first statement in the command is actually executed. The remainder is silently ignored. In most cases you won't notice that when using sqlite-net. You will either use its micro ORM layer or execute individual statements. The only common exception that comes to mind is trying to execute DDL or migration scripts, which are typically multi-statement batches.
Can't you do:
[Test]
public void AlanTest()
{
    var queries = new List<string> ()
    {
        " DELETE FROM Product",
        " INSERT INTO Product VALUES (1,\"Name1\",1,1)",
        " INSERT INTO Product VALUES (2,\"Name2\",2,3)"
    };

    db.BeginTransaction ();
    queries.ForEach (query => db.Execute (query));
    db.Commit ();
}
You don't really need the transaction; it just gives you faster execution and a single checkpoint to roll back to...
How do I create a PL/SQL function that waits for an update on some row for a specified timeout and then returns?
What I want to accomplish is this: I have a long-running process which updates its status in the ASYNC_PROCESS table by process_id. I need a function that returns true/false depending on whether that process has completed, but I also need this function to wait some time for the process to complete, returning on timeout, or returning immediately with true when the process has completed. I don't want to use sleep(1 sec), because then I have a 1 sec lag. I don't want to use sleep(1 msec), because then I am wasting CPU resources (and still have a 1 msec lag).
How would an experienced programmer accomplish this?
That function will be called from .NET (so I need minimal lag between the DB operation and the .NET/UI side).
THNX,
Beef
I think the most sensible thing to do in this case is to use update triggers on that ASYNC_PROCESS table.
You should also look into the DBMS_ALERT package. Here's an edited excerpt from that doc:
Create an alert:
DBMS_ALERT.REGISTER('emp_table_alert');
Create a trigger on your table to fire the alert:
CREATE TRIGGER emptrig AFTER INSERT ON emp
BEGIN
    DBMS_ALERT.SIGNAL('emp_table_alert', 'message_text');
END;
From your .NET code, you can then use something that calls this:
DBMS_ALERT.WAITONE('emp_table_alert', :message, :status, :timeout);
Make sure you read the docs for what :status and :timeout do.
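As an illustration of the wait side, here is a minimal JDBC sketch (connection details are placeholders; an ODP.NET call would bind the same four parameters):

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Types;

public class AlertWaiter {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "password")) {
            // Register interest in the alert on this session.
            try (CallableStatement cs = conn.prepareCall(
                    "{ call DBMS_ALERT.REGISTER('emp_table_alert') }")) {
                cs.execute();
            }
            // Block until the trigger signals the alert, or 30 seconds pass.
            try (CallableStatement cs = conn.prepareCall(
                    "{ call DBMS_ALERT.WAITONE(?, ?, ?, ?) }")) {
                cs.setString(1, "emp_table_alert");
                cs.registerOutParameter(2, Types.VARCHAR); // message from SIGNAL()
                cs.registerOutParameter(3, Types.INTEGER); // 0 = alert fired, 1 = timed out
                cs.setInt(4, 30);                          // timeout in seconds
                cs.execute();
                System.out.println("status=" + cs.getInt(3)
                        + ", message=" + cs.getString(2));
            }
        }
    }
}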
You should look at Oracle Advanced Queuing. It offers the kind of functions you're looking for.
You'll probably need a separate queue table where a trigger on ASYNC_PROCESS inserts messages. You then use the AQ functions to retrieve (or wait for) the next message in the queue table.
If you're doing this in C#.NET, why wouldn't you simply spawn a worker thread to do the update (via ODAC)? Why hand the responsibility over to Oracle when (it seems) you want a .NET process to make the update call (in the background) and have the main process be notified of its completion?
See here and here for examples, although there are several approaches in .NET for this (delegates, events, async callbacks, thread pools, etc.).