What strategy does JobLockService use to enforce synchronization? Does it lock the whole repository, or is there another technique?
When I write code such as:
QName lockQName = QName.createQName(prefix, localName, resolver);
String lockString = jobLockService.getLock(lockQName, 30000L); // getLock also takes a time-to-live (ms)
LockToken lockToken = new LockToken();
lockToken.set(lockString);
// something happens here, such as creating, updating, or deleting a node
// some other processing here
jobLockService.releaseLock(lockString, lockQName); // releaseLock takes both the token and the lock QName
As you can see from that code, I use JobLockService. What happens once the lock is acquired? Does it lock the repository at all and prevent any other processes from accessing the repository?
I'm asking about the actual techniques used to enforce the synchronization.
Also, what about the LockToken here? What is the benefit of it?
Thanks in advance, your replies are very highly appreciated.
JobLockService does not put any lock on the actual content stored in the repository, let alone lock the whole repository. After you have successfully called JobLockService.getLock, any thread is free to update whichever node it wants to edit. It is your code that must ensure that blocks that have to execute with controlled concurrency first try to obtain the same lock.
That LockToken object you create seems of no use and can be dropped.
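To make that concrete, here is a minimal sketch of the usual pattern, assuming a standard Alfresco setup (the lock name, time-to-live, and doWork() are placeholders):

// a minimal sketch; every block that must not run concurrently has to request
// the same lock QName first - nothing else in the repository is blocked
QName lockQName = QName.createQName(NamespaceService.SYSTEM_MODEL_1_0_URI, "myBatchJob");
long ttl = 30000L; // time-to-live of the lock, in milliseconds

String token = jobLockService.getLock(lockQName, ttl); // throws LockAcquisitionException if the lock is held
try {
    doWork(); // placeholder for your create/update/delete logic
} finally {
    jobLockService.releaseLock(token, lockQName);
}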
I am using the query below, which has a handle, but I can see that nothing happens whether or not I delete the handle's object. Still, everyone says to always delete the object in a FINALLY block. Why do we need to delete these objects? What happens if we don't delete them? And how can we observe that?
finally:
if valid-handle(hQueryTest) then delete object hQueryTest no-error.
if valid-handle(hQuerytestvalue) then delete object hQuerytestvalue no-error.
end finally.
OpenEdge simply does not have a reference-count-based garbage collector for handle-based objects. So the object the query handle points to will remain in the AVM's memory forever. If that happens on an AppServer, the memory consumption of the AppServer process may grow slightly but steadily.
OpenEdge has the concept of WIDGET-POOLs, which can help with memory management.
You can enable the DynObjects.* log entry types to get insight into the lifetime of dynamic objects, whether handle-based or class-based.
I have a set of events that have been refactored into another package. This works as-is until I execute an event replay. Digging deeper, I noticed the payloadtype column in the domainevententry table and figured changing it would be sufficient, but it seems the XML root element of the event needs to be changed as well. I am hoping there is a simple way to do this.
I cannot find any examples of upcasting to different packages or of using XStream aliasing, so any help would be greatly appreciated.
Thanks
As you noticed, the default payload type stored in events is the fully qualified class name. This ensures that out-of-the-box serialization and deserialization work as intended. However, moving classes around means the payload type can no longer be found, requiring some adjustment to be made.
You could use the EventTypeUpcaster, as described in the Reference Guide. The EventTypeUpcaster is dedicated to adjusting the payload type, and thus can also be used to deal with changing package names.
When using the (default) XStreamSerializer, aliasing the tags would indeed also work. How to set aliases can be seen here, for example. As noted in that sample, the alias is added to the XStream instance. The XStreamSerializer uses an XStream instance to support de-/serialization from/to XML. To adjust the XStream instance, you can simply use the builder paradigm on the XStreamSerializer. The JavaDoc of the builder should be specific enough to help you figure out how to use it.
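As a minimal sketch of that aliasing approach (the old and new event class names below are placeholders for your own types):

import com.thoughtworks.xstream.XStream;
import org.axonframework.serialization.xml.XStreamSerializer;

// map the old fully qualified name (the XML root tag of already-stored events)
// onto the relocated class; both names are placeholders
XStream xStream = new XStream();
xStream.alias("com.example.oldpkg.MyEvent", com.example.newpkg.MyEvent.class);

// hand the customized XStream instance to the serializer via its builder
XStreamSerializer serializer = XStreamSerializer.builder()
        .xStream(xStream)
        .build();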
I went the long way round with this, but it seems to work. As always, back up your database before executing large-volume changes. I also restarted the service using the database on completion. Needless to say, I will make sure the events are in logical packages before deploying next time :)
Database Engine: Postgres 10
Table: domainevententry
-- rewrite both the payload type and the payload XML of the affected events
update domainevententry
set
payloadtype = '<new.package.Classname>',
-- write the rewritten XML back as a new large object
payload = lo_from_bytea(0, decode(REPLACE(
subquery.output,
'<old.package.Classname>',
'<new.package.Classname>'
), 'escape'))
from (
-- read each payload large object as text so the class name can be replaced
SELECT eventidentifier, payloadtype, encode(lo_get(payload::oid), 'escape') as output FROM domainevententry
WHERE eventidentifier in (
'<event guid 1>',
'<event guid 2>'
)
AND payloadtype = '<old.package.Classname>'
) as subquery
where domainevententry.eventidentifier = subquery.eventidentifier;
Once that completed, I needed to update the OWNER of the new large objects:
ALTER LARGE OBJECT <LargeObjectId> OWNER TO database_role;
Probably not the most elegant solution, but given the time constraints I have, it did the job. There may be encoding issues with this solution for the large objects, but it all worked out for me in the end. Feel free to share any optimizations that would make the above more suitable.
Firing off the Axon Framework replays rebuilt the projections and everything lined up.
In a clustered Intershop environment, we see a lot of error messages. I suspect that the communication between the application servers is not reliable.
Caused by: com.intershop.beehive.orm.capi.common.ORMException:
Could not UPDATE object: com.intershop.beehive.bts.internal.orderprocess.basket.BasketPO
Is there a safe way for the local application server to load the latest instance?
BasketPO basket = null;
try {
    BasketPOFactory factory = (BasketPOFactory) NamingMgr.getInstance().lookupFactory(BasketPOFactory.FACTORY_NAME);
    try (ORMObjectCollection<BasketPO> baskets = factory.getObjectsBySQLWhere("uuid=?", new Object[]{basketID}, CacheMode.NO_CACHING)) {
        if (null != baskets && !baskets.isEmpty()) {
            basket = baskets.stream().findFirst().get();
        }
    }
} catch (Throwable t) {
    Logger.error(this, t.getMessage(), t);
}
Does the ORMObject#refresh method help?
try {
    if (null != basket) {
        basket.refresh();
    }
} catch (Throwable t) {
    Logger.error(this, t.getMessage(), t);
}
You experience that error because an optimistic lock "fails". To understand the problem better, I'll try to explain how optimistic locking works, in particular in the Intershop ORM layer.
There is a column named OCA in the PO tables (OCA == optimistic control attribute?). Imagine that two servers (or two different threads/transactions) try to update the same row in a table. For performance reasons there is no DB locking involved by default (e.g. by issuing SELECT ... FOR UPDATE). Instead, the first thread/server increments the OCA by one when it successfully updates the row within its transaction.
The second thread/server knows the value of the OCA from the time that it created its own state. It then tries to update the row by issuing a similar query:
UPDATE ... OCA = OCA + 1 ... WHERE UUID = <uuid> AND OCA = <old_oca>
Since the OCA has already been incremented by the first thread/server, this update fails (in reality, it updates 0 rows), and the exception you posted above is thrown when the ORM layer detects that no rows were updated.
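Outside any framework, the same compare-and-swap style update looks roughly like this (plain JDBC; the table and column names are invented for illustration):

import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.ConcurrentModificationException;

// a framework-free sketch of an optimistic-locking update; "basket" and its
// columns are made-up names, not the actual Intershop schema
static void updateBasketTotal(Connection con, String uuid, long ocaSeen, BigDecimal newTotal)
        throws SQLException {
    try (PreparedStatement ps = con.prepareStatement(
            "UPDATE basket SET total = ?, oca = oca + 1 WHERE uuid = ? AND oca = ?")) {
        ps.setBigDecimal(1, newTotal);
        ps.setString(2, uuid);
        ps.setLong(3, ocaSeen); // the OCA value read when this transaction loaded the row
        if (ps.executeUpdate() == 0) {
            // another transaction already bumped the OCA: our state is stale
            throw new ConcurrentModificationException("Optimistic lock failed for basket " + uuid);
        }
    }
}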
Your problem is not the inter-server communication but rather the fact that either:
multiple servers/threads try to update the same object;
there are direct updates in the database that bypass the ORM layer (less likely);
To solve this you may:
Avoid that situation altogether (highly recommended by me :-) );
Use the ISH locking framework (very cumbersome, IMHO);
Use pessimistic locking, supported by the ISH ORM layer and Oracle (beware of potential performance issues, deadlocks, bugs);
Use Java locking - but since the servers run in different JVMs this is rarely an option.
OFF-TOPIC remarks: I'm not sure why you use getObjectsBySQLWhere when you know the primary key (uuid). As far as I remember, ORMObjectCollections should be closed if not iterated completely.
UPDATE: If the cluster is not configured correctly and the multicasts can't be received by the nodes, you won't be able to resolve the problems programmatically.
The "ORMObject.refresh()" marks the cached shared state as invalid. Next access to the object reloads the state from the database. This impacts the performance and increase the database server load.
BUT:
The "refresh()" method does not reload the PO instance state if it already assigned to the current transaction.
Would be best to investigate and fix the server communication issues.
Another possibility is that it isn't a communication problem (multicast between nodes in the cluster, I assume), but that there are simply two requests trying to update the basket at the same time, for example two AJAX requests updating something on the basket.
I would avoid trying to "fix" the ORM; it would only cause more harm than good. Rather, investigate further and post back more information.
In Meteor, I have a collection that the client subscribes to. In some cases, instead of publishing the documents that exist in the collection on the server, I want to send down some bogus data. That's fine using the this.added function in the publish.
My problem is that I want to treat the bogus doc as if it were a real document; specifically, this gets troublesome when I want to update it. For the real docs I run RealDocs.update, but doing that on the bogus doc fails since there is no representation of it on the server (and I'd like to keep it that way).
A collection API that allowed me to pass something like local = true would be fantastic, but I have no idea how difficult that would be to implement, and I'm not too fond of modifying the core code.
Right now I'm stuck at creating BogusDocs = new Meteor.Collection(null), but that makes populating the collection more difficult, since I have to either hard-code fixtures in the client code or use a method to get the data from the server, and I have to make sure I call BogusDocs.update instead of RealDocs.update as soon as I'm dealing with bogus data.
Maybe I could actually insert the data on the server and make sure it's removed later, but the data really has nothing to do with the server side collection so I'd rather avoid that.
Any thoughts on how to approach this problem?
After some further investigation (the Evented Mind site), it turns out that one can modify the local collection without making calls to the server. This is done by running the same methods as usual, but on MyCollection._collection instead of on MyCollection. MyCollection.update() thus becomes MyCollection._collection.update(). So, using a simple wrapper, one can pass the usual arguments to an update call to update the collection as usual (which will call the server, which in turn will trigger your allow/deny rules), or add 'local' as the last argument to perform the update only in the client collection. Something like this should do it:
DocsUpdateWrapper = function () {
  var args = Array.prototype.slice.call(arguments);
  var lastIndex = args.length - 1;
  if (args[lastIndex] === 'local') {
    // update only the client-side collection: no server call, no allow/deny rules
    Docs._collection.update.apply(Docs._collection, args.slice(0, lastIndex));
  } else {
    // a normal update that goes through the server and your allow/deny rules
    Docs.update.apply(Docs, args);
  }
};
(This could of course be extended to a DocsWrapper that allows for insertions and removals too. I didn't try this function yet, but it should serve well as an example.)
The biggest benefit of this, IMO, is that we can use the exact same calls to retrieve documents from the local collection, regardless of whether they are local or also live on the server. By adding a simple boolean to the doc we can keep track of which documents are only local and which are not (an improved DocsWrapper could check for that boolean, so we could even omit passing the 'local' argument), so we know how to update them.
There are some people working on local storage in the browser:
https://github.com/awwx/meteor-browser-store
You might be able to adapt some of their ideas to provide "fake" documents.
I would use the transform feature on the collection to make an object that knows what to do with itself (on the client). Give it the correct update method (real/bogus), then call .update on it rather than a general one.
You can put the code from this.added into the transform process.
You can also set up a local minimongo collection and insert into it in the callback:
FoundAgents = new Meteor.Collection(null, transform: Agent.transformData)
FoundAgents.remove({})
Meteor.call 'Get_agentsCloseToOffer', me, ping, (err, data) ->
  if err
    console.log JSON.stringify err, null, 2
  else
    _.each data, (item) ->
      FoundAgents.insert item
Maybe this is interesting for you as well: I created two examples with native Meteor local collections at MeteorPad. The first pad shows an example with a plain reactive recordset: Sample_Publish_to_Local-Collection. The second uses the collection's .observe method to listen to data: Collection.observe().
I need to force re-reading data from the DB within one PHP execution, using Propel. I already have a somewhat hacky solution (calling init%modelName% for the corresponding classes), but I want something better.
Is there a single call or service config option for that? Something like killing the whole instance pool.
About the service: we use Symfony2, and only in one specific case do we not need the cache, so we could even create a separate environment for that.
You can globally disable instance pooling by calling Propel::disableInstancePooling() (and Propel::enableInstancePooling() to re-enable it).
Otherwise, you can rely on the PEER classes, which contain generated methods like clearInstancePool() and clearRelatedInstancePool().
I needed to update related objects and found out that clear%modelName% should be called.
init%modelName% deletes all entries, after which related entries can never be read. clear[Related]InstancePool() didn't help.
$foo = FooQuery::create()->findOne();
// meanwhile somebody else updated the DB and Propel doesn't know about it:
mysql_query("INSERT INTO `foo_bars` (`foo_id`, `bar_id`) VALUES (".$foo->getId().", 1)");
// here we need some magic to force Propel to re-read the relation table:
$foo->clearFooBars();
// now the entries will be re-read:
$foo->getFooBars();