null BroadcasterFactory before Nettosphere initialization - atmosphere

I want to inject a BroadcasterFactory into a Publisher-style class before I have constructed my Nettosphere via its builders. But a call to BroadcasterFactory.getDefault() returns null before it is initialized by the construction of my Nettosphere. I guess I could build a BroadcasterFactory myself first, but that seems likely to interfere with the Nettosphere construction process.
At a high level, I want access to Broadcasters (one per connection) from the backend in order to push individual messages to clients.
I could keep my own map of Broadcasters, but BroadcasterFactory already does this and I don't really want to manage two stores of Broadcasters.
Thanks :)

Which version of Atmosphere are you using?
Just tested with 2.0.0-SNAPSHOT and it worked.
I suspect this was a regression in an earlier version.
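If upgrading is not an option, one workaround is to defer the injection until after the Nettosphere has been built, because building the server is what initializes the default factory. A minimal sketch of that approach (ChatHandler and Publisher are hypothetical names standing in for your own handler and publisher classes):
import org.atmosphere.cpr.Broadcaster;
import org.atmosphere.cpr.BroadcasterFactory;
import org.atmosphere.nettosphere.Config;
import org.atmosphere.nettosphere.Nettosphere;

public class Server {
    public static void main(String[] args) {
        // Build and start Nettosphere first; this initializes the default BroadcasterFactory.
        Config config = new Config.Builder()
                .host("127.0.0.1")
                .port(8080)
                .resource(ChatHandler.class) // hypothetical AtmosphereHandler
                .build();
        Nettosphere server = new Nettosphere.Builder().config(config).build();
        server.start();

        // Only now is the default factory non-null, so inject it afterwards.
        BroadcasterFactory factory = BroadcasterFactory.getDefault();
        Publisher publisher = new Publisher(factory); // hypothetical Publisher-style class

        // One Broadcaster per connection id; lookup(id, true) creates it on demand,
        // so the factory itself acts as the single store of Broadcasters.
        Broadcaster perClient = factory.lookup("/client/42", true);
        perClient.broadcast("hello");
    }
}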

Axon Framework Event Package Refactoring

I have a set of events that have been refactored into another package. This works as-is until I execute an event replay. Digging deeper, I noticed a payloadtype column in the domainevententry table and figured changing it would be sufficient, but it seems the XML root element of the event needs to be changed as well. I am hoping there is a simple way to do this.
I cannot find any examples of upcasting to a different package or of using XStream aliasing, so any help would be greatly appreciated.
Thanks
As you noticed, the default payload type stored in events is the fully qualified class name. This ensures that serialization and deserialization work as intended out of the box. However, moving classes to another package means the stored payload type can no longer be resolved, so some adjustment is required.
You could use the EventTypeUpcaster, as described in the Reference Guide. The EventTypeUpcaster is dedicated to adjusting the payload type, and thus can also be used to deal with changing package names.
When using the (default) XStreamSerializer, aliasing the tags would indeed also work. How to set aliases can be seen here, for example. As shown in that sample, the alias is added to the XStream instance. The XStreamSerializer uses an XStream instance to support de-/serialization from/to XML. To adjust the XStream instance, you can simply use the builder paradigm on the XStreamSerializer. The JavaDoc of the builder should be specific enough to show you how to use it.
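For illustration, a rough sketch of both approaches against a recent Axon 4 API; the package and event names are placeholders, and the exact EventTypeUpcaster builder methods should be verified against the Axon version you are on:
import com.thoughtworks.xstream.XStream;
import org.axonframework.serialization.upcasting.event.EventTypeUpcaster;
import org.axonframework.serialization.xml.XStreamSerializer;

public class SerializationConfig {

    // Option 1: an EventTypeUpcaster that rewrites the stored payload type
    // (register it as an event upcaster in your Axon configuration).
    public static EventTypeUpcaster relocationUpcaster() {
        return EventTypeUpcaster
                .from("com.example.oldpkg.MyEvent", null) // stored payload type and revision
                .to("com.example.newpkg.MyEvent", null);  // relocated class and revision
    }

    // Option 2: alias the old XML root element so the XStreamSerializer
    // deserializes it into the relocated class.
    public static XStreamSerializer serializer() {
        XStream xStream = new XStream();
        xStream.alias("com.example.oldpkg.MyEvent", com.example.newpkg.MyEvent.class);
        return XStreamSerializer.builder()
                .xStream(xStream)
                .build();
    }
}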
I went the long way round with this, but it seems to work. As always, back up your database before executing large-volume changes. I also restarted the service that uses the database on completion. Needless to say, I will make sure the events are in logical packages before deploying next time :)
Database Engine: Postgres 10
Table: domainevententry
UPDATE domainevententry
SET
    payloadtype = '<new.package.Classname>',
    payload = lo_from_bytea(0, decode(REPLACE(
        subquery.output,
        '<old.package.Classname>',
        '<new.package.Classname>'
    ), 'escape'))
FROM (
    SELECT eventidentifier, payloadtype, encode(lo_get(payload::oid), 'escape') AS output
    FROM domainevententry
    WHERE eventidentifier IN (
        '<event guid 1>',
        '<event guid 2>'
    )
    AND payloadtype = '<old.package.Classname>'
) AS subquery
WHERE domainevententry.eventidentifier = subquery.eventidentifier;
Once that completed, I needed to update the OWNER of the large object:
ALTER LARGE OBJECT <LargeObjectId> OWNER TO database_role;
Probably not the most elegant solution, but given my time constraints it did the job. There may be encoding issues with this approach for the large object, but it all worked out for me in the end. Feel free to share any optimizations that would make the above more suitable.
Firing off the Axon Framework replays rebuilt the projections and everything lined up.

intershop ORMException could not update - refresh ORMObject

In a clustered Intershop environment, we see a lot of these error messages. I suspect the communication between the application servers is not reliable.
Caused by: com.intershop.beehive.orm.capi.common.ORMException:
Could not UPDATE object: com.intershop.beehive.bts.internal.orderprocess.basket.BasketPO
Is there a safe way for the local application server to load the latest instance?
BasketPO basket = null;
try {
    // Look up the basket factory and query by UUID, bypassing the ORM cache.
    BasketPOFactory factory = (BasketPOFactory) NamingMgr.getInstance()
            .lookupFactory(BasketPOFactory.FACTORY_NAME);
    try (ORMObjectCollection<BasketPO> baskets =
            factory.getObjectsBySQLWhere("uuid=?", new Object[]{basketID}, CacheMode.NO_CACHING)) {
        if (null != baskets && !baskets.isEmpty()) {
            basket = baskets.stream().findFirst().get();
        }
    }
}
catch (Throwable t) {
    Logger.error(this, t.getMessage(), t);
}
Does the ORMObject#refresh method help?
try {
    if (null != basket) {
        basket.refresh();
    }
}
catch (Throwable t) {
    Logger.error(this, t.getMessage(), t);
}
You experience that error because an optimistic lock "fails". To understand the problem better, I'll try to explain how optimistic locking works, in particular in the Intershop ORM layer.
There is a column named OCA in the PO tables (OCA == optimistic control attribute?). Imagine that two servers (or two different threads/transactions) try to update the same row in a table. For performance reasons there is no DB locking involved by default (e.g. by issuing SELECT ... FOR UPDATE). Instead, the first thread/server increments the OCA by one when it updates the row successfully within its transaction.
The second thread/server knows the value of the OCA from the time it created its own state. It then tries to update the row by issuing a similar query:
UPDATE ... OCA = OCA + 1 ... WHERE UUID = <uuid> AND OCA = <old_oca>
Since the OCA has already been incremented by the first thread/server, this update fails (in reality, it updates 0 rows), and the exception you posted above is thrown when the ORM layer detects that no rows were updated.
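For illustration only, here is the same check-and-increment pattern in plain JDBC; this is not the Intershop implementation, and the table/column names (BASKET, TOTAL, OCA, UUID) are made up for the example:
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class OptimisticUpdateSketch {

    // Returns true if the update went through with the OCA we read;
    // retries once if another transaction incremented the OCA in the meantime.
    static boolean updateTotal(Connection con, String uuid, double newTotal) throws SQLException {
        for (int attempt = 0; attempt < 2; attempt++) {
            long oca;
            try (PreparedStatement read = con.prepareStatement(
                    "SELECT OCA FROM BASKET WHERE UUID = ?")) {
                read.setString(1, uuid);
                try (ResultSet rs = read.executeQuery()) {
                    if (!rs.next()) {
                        return false; // row no longer exists
                    }
                    oca = rs.getLong(1);
                }
            }
            try (PreparedStatement update = con.prepareStatement(
                    "UPDATE BASKET SET TOTAL = ?, OCA = OCA + 1 WHERE UUID = ? AND OCA = ?")) {
                update.setDouble(1, newTotal);
                update.setString(2, uuid);
                update.setLong(3, oca);
                if (update.executeUpdate() == 1) {
                    return true; // our OCA was still current
                }
                // 0 rows updated: somebody else incremented the OCA first.
                // This is exactly the situation in which the ORM layer throws the ORMException.
            }
        }
        return false;
    }
}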
Your problem is not the inter-server communication but rather the fact that either:
multiple servers/threads try to update the same object;
there are direct updates in the database that bypass the ORM layer (less likely);
To solve this you may:
Avoid that situation altogether (highly recommended by me :-) );
Use the ISH locking framework (very cumbersome, IMHO);
Use pessimistic locking supported by the ISH ORM layer and Oracle (beware of potential performance issues, deadlocks, bugs);
Use Java locking - but since the servers run in different JVM-s this is rarely an option;
OFF-TOPIC remarks: I'm not sure why you use getObjectsBySQLWhere when you know the primary key (uuid). As far as I remember, ORMObjectCollections should be closed if not iterated completely.
UPDATE: If the cluster is not configured correctly and the multicasts can't be received from the nodes, you won't be able to resolve the problems programmatically.
The "ORMObject.refresh()" marks the cached shared state as invalid. Next access to the object reloads the state from the database. This impacts the performance and increase the database server load.
BUT:
The "refresh()" method does not reload the PO instance state if it already assigned to the current transaction.
Would be best to investigate and fix the server communication issues.
Other possibility is that it isn't a communication problem (multicast between node in the cluster i assume), but that there are simply two request trying to update the basket at the same time. Example two ajax request to update something on the basket.
I would avoid trying to "fix" the orm, it would only cause more harm than good. Rather investigate further and post back more information.

How to get Contract State from vaultService of MockNode in Corda M12.1?

I have created the MockNetwork and MockNodes for testing the CorDapp.
Then I successfully executed the flows with a state; this stored the states on the ledger.
I'm able to fetch previously stored states using:
mockNode1.rpcOps.vaultAndUpdates().first
.filterStatesOfType<SsiState>()
But I am unable to fetch the same states using the vaultService of mockNode1:
mockNode1.services.vaultService.track().first.states
or
mockNode1.vault.track().first.states
What could be the cause?
The solution would be to rebase to Corda M13. In M12.1, the new vault query interface (query(), track()) was only partially implemented, which is why it is not behaving as expected.
Alternatively, if you wish to remain on M12.1, you can use mockNode1.services.vaultService.states() instead. It is worth noting that this method will be deprecated going forward in favour of the new interface, which you initially tried to use and which is defined here: https://docs.corda.net/api-vault.html

Clear propel cache (instance pool)

I need to force Propel to re-read data from the DB within a single PHP execution. I already have a somewhat hacky solution: calling init%modelName% for the corresponding classes, but I would like something better.
Is there a single call or service config option for that, like killing the whole instance pool?
About the service: we use Symfony2 and only need to bypass the cache in one specific case, so we could even create a separate environment for that.
You can globally disable instance pooling by calling Propel::disableInstancePooling() (Propel::enableInstancePooling() is useful to re-enable instance pooling).
Otherwise, you can rely on the PEER classes, which contain generated methods like clearInstancePool() and clearRelatedInstancePool().
I needed to update related objects and found out that clear%modelName% should be called.
init%modelName% deletes all entries, so related entries could never be read. clear[Related]InstancePool() didn't help.
$foo = FooQuery::create()->findOne();
// meanwhile somebody else updated the DB and Propel doesn't know about that:
mysql_query("INSERT INTO `foo_bars` (`foo_id`, `bar_id`) VALUES (".$foo->getId().", 1)");
// here we need some magic to force Propel to re-read the relation table:
$foo->clearFooBars();
// now the entries will be re-read:
$foo->getFooBars();

MVVM Light Messenger executing multiple times

I am using MVVM Light and am using messages to communicate between ViewModels to let a ViewModel know when it is OK to execute something. My problem is that I register for a message and it is then received multiple times, so to keep my program from executing something more than once I have to create boolean flags to check whether it has already been received. Any idea why this happens and how I can stop it?
Make sure you unregister your message handlers once you do not need them anymore. The Messenger keeps a reference to the registered methods and this prevents them from being garbage collected.
Therefore, for ViewModels: make sure that you call Cleanup once you are done (or implement IDisposable and call Cleanup from there).
For Views (Controls, Windows, or similar) call Messenger.Unregister in an event that occurs on the teardown of the view, e.g. the Unloaded event.
This is a known behaviour of MVVM Light and has been discussed in several places.
Very old question but I solved the problem by doing this:
static bool isRegistered = false;
and then, in the constructor:
if( !isRegistered )
{
Messenger.Default.Register<MyMessage>(this, OnMessageReceived);
isRegistered = true;
}
I have seen this issue before. It had to do with Messenger.Default.Register being called more than once. The MVVM Light Messenger class will register the same item 'x' number of times; this is why, when you call Send, you receive it many times.
Does anyone know how to prevent MVVM Light from registering multiple times?
Really old, but I thought I would answer just in case somebody needs it. I was fairly new to Silverlight at the time, and the issue ended up being a memory leak: the ViewModel, of which multiple instances existed, was still in memory.
As other contributors mentioned, the same message is being registered multiple times. I have noticed this behavior while navigating to View X and then navigating back to View Z, where the message is registered in the constructor of the Z ViewModel. One solution is to set the NavigationCacheMode property to Required:
<Page
........
........
NavigationCacheMode="Required">
