I'm new to Axon Framework and struggling to understand what processing groups are and what they are used for.
If you guys can expand on this, it would be greatly appreciated.
I'm trying to have 2 instances of the same application running on different hosts with a single database (event store). However, I get the error below. The first host works fine but the second doesn't.
Should I assign different processing groups to them?
org.axonframework.eventhandling.tokenstore.UnableToClaimTokenException: Unable to claim token 'projections[0]'. It is owned by '1#xxxxx-yyyyy-zzzzz'
Regards,
Carlo
Welcome!
Let me go through your doubts and try to help you :)
What is a Processing Group?
A Processing Group is a logical way to group Event Handlers. You can define one on your Event Handling Component using the @ProcessingGroup("processingGroupName") annotation; if you do not provide a name, the default is the component's full.package.name. Keep in mind that a Tracking Event Processor is created for each Processing Group.
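For example, here is a minimal sketch in Java (the class and event names are made up for illustration; the group name "projections" matches the token name in your log):

```java
import org.axonframework.config.ProcessingGroup;
import org.axonframework.eventhandling.EventHandler;

// Hypothetical read-model component: all @EventHandler methods in it belong to
// the "projections" processing group, so Axon creates one Tracking Event
// Processor named "projections" for it.
@ProcessingGroup("projections")
public class CardSummaryProjection {

    // Hypothetical event type, used only for illustration.
    public static class CardIssuedEvent {
        public final String cardId;
        public CardIssuedEvent(String cardId) { this.cardId = cardId; }
    }

    @EventHandler
    public void on(CardIssuedEvent event) {
        // Update the read model / projection here.
    }
}
```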
What is a Processing Group used for?
As said before, it is very much related to the Tracking Event Processor. Each TEP claims its Tracking Token, to avoid the same event being processed multiple times across threads/nodes. You can read about it in depth here.
What about your problem?
From the shared log and your comment that you have 2 instances, it just means one instance has already claimed that token and the other one cannot claim it. If the first one releases it, they will race to claim it again, so either of them can end up with it. There are ways to have both instances processing events in parallel, and you can check the docs here and here.
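As a hedged sketch of the parallel-processing option, assuming a recent Axon 4.x Java configuration API (exact registration methods vary slightly between versions, so please verify against the docs linked above), you can split the "projections" processor into multiple token segments so each node can claim one:

```java
import org.axonframework.config.Configurer;
import org.axonframework.config.DefaultConfigurer;
import org.axonframework.eventhandling.TrackingEventProcessorConfiguration;

public class AxonParallelConfig {
    public static void main(String[] args) {
        Configurer configurer = DefaultConfigurer.defaultConfiguration();

        // Give the "projections" processor two token segments so that two nodes
        // (or two threads on one node) can each claim a segment and process
        // events in parallel instead of fighting over a single token.
        configurer.eventProcessing(ep -> ep.registerTrackingEventProcessorConfiguration(
                "projections",
                conf -> TrackingEventProcessorConfiguration
                        .forParallelProcessing(2)       // threads per node
                        .andInitialSegmentsCount(2)));  // total token segments
    }
}
```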
To sum up, there is nothing wrong with this log line, and I believe it was logged as INFO and not as ERROR.
I hope this has clarified your doubts about it.
I have a Firebase Event called conversion_performed. It has a parameter success.
Can I trigger a conversion only if the parameter success is "true"? Or do I need to have two different events: conversion_succeeded and conversion_failed?
Well, after reading some documentation: Google Analytics for Firebase provides event reports that help you understand how users interact with your app. You can create custom tags to track user behavior and create custom analytics triggers to process those events properly.
In particular, there is a kind of tag called a conversion; based on this definition, conversions are classified as macro and micro. I think this classification is important for your question.
In the case of a macro conversion, if your trigger processes a large amount of information and performs multiple operations for a single event/conversion, I think you should split it into two tags to isolate each kind of processing. Likewise, if you need to follow very different data-processing paths depending on success, I strongly suggest splitting the event in two.
On the other hand, in the case of a micro conversion, if the trigger does not have much work to do and knowing whether the conversion succeeded is part of a single processing path, I suggest keeping just one conversion trigger.
In a nutshell, your question depends entirely on the nature of your problem, and I suggest two criteria for deciding whether or not to split your conversion trigger (see the sketch after the list).
Heavy processing - separate your tags in the app/web, or create several Pub/Sub triggers to split up the processing.
Isolated behavior - if the trigger's execution is completely different depending on the success of the conversion, I recommend two conversions and two triggers.
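For illustration, here is a rough sketch of both options, assuming the Firebase Android SDK in Java (ConversionLogger is a made-up helper class; the event and parameter names come from your question):

```java
import android.content.Context;
import android.os.Bundle;
import com.google.firebase.analytics.FirebaseAnalytics;

public class ConversionLogger {

    private final FirebaseAnalytics analytics;

    public ConversionLogger(Context context) {
        this.analytics = FirebaseAnalytics.getInstance(context);
    }

    // Option 1: a single event carrying the outcome as a parameter.
    public void logSingleEvent(boolean success) {
        Bundle params = new Bundle();
        params.putString("success", success ? "true" : "false");
        analytics.logEvent("conversion_performed", params);
    }

    // Option 2: two separate events, which makes it easier to treat each
    // outcome as its own conversion and route it to its own trigger.
    public void logSplitEvents(boolean success) {
        analytics.logEvent(success ? "conversion_succeeded" : "conversion_failed", null);
    }
}
```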
I have completely rewritten the answer to try to go deeper into your problem, because I think the first version was very shallow. I hope this answer is helpful to you.
I need to capture session-level details in a table. I have 150 workflows for which I need to maintain an audit table holding info like session start time, session end time, applied rows, rejected rows, etc.
I cannot use workflow variables with the Assignment task method, because I need a reusable solution since I have 150 workflows.
I tried using a Post-Session Success Command task with built-in variables such as $PM#TableName etc.
But there is no built-in variable to capture the session start time and end time.
The last option I thought could help is to extract the stats from the session logs. Can anyone please explain how to achieve this?
Please let me know if there is any way to deal with this issue.
Thanks in advance
Here's the Framework to get this done:
http://powercenternotes.blogspot.com/2014/01/an-etl-framework-for-operational.html
Each session has a $SessionName.StartTime as well as a $SessionName.EndTime variable available.
It's a terrible idea to try and parse the logs. Feasible, but I'd avoid it.
All of this information is already available in the Informatica Repository, for every workflow and every task of the workflow. That should be used instead.
On our project I store some data on the blockchain; my question is how I can know when this data changes (so that we can send emails, SMS, and web notifications to end users).
My first thoughts are listed below, but it seems none of them is the best choice:
Query the database every few seconds. A very crude way, but it seems it could work.
Use an RPC interceptor and check everything sent to the node.
Use flows and subflows.
Use a scheduled service.
Do you have any good ideas about this? Please kindly reply, thanks a lot.
OK, so a few points: there is no "real" blockchain in Corda; there are consumed/unconsumed states. I think you should reformulate your question a bit better: do you want to be notified when a state gets consumed? When a new state is created? Try to explain it in "Corda" terms :D
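To give one concrete direction, here is a hedged sketch in Java, assuming you connect to the node over RPC (the host, port, credentials, and the VaultWatcher class are made up for illustration). You can subscribe to vault updates, which tell you when states are produced or consumed:

```java
import net.corda.client.rpc.CordaRPCClient;
import net.corda.core.contracts.ContractState;
import net.corda.core.messaging.CordaRPCOps;
import net.corda.core.messaging.DataFeed;
import net.corda.core.node.services.Vault;
import net.corda.core.utilities.NetworkHostAndPort;

public class VaultWatcher {
    public static void main(String[] args) {
        // Hypothetical connection details; replace with your node's RPC settings.
        CordaRPCClient client = new CordaRPCClient(new NetworkHostAndPort("localhost", 10006));
        CordaRPCOps proxy = client.start("user1", "password").getProxy();

        // Track all ContractStates; the feed gives a snapshot plus an Observable of updates.
        DataFeed<Vault.Page<ContractState>, Vault.Update<ContractState>> feed =
                proxy.vaultTrack(ContractState.class);

        feed.getUpdates().subscribe(update -> {
            // New states recorded in the vault.
            update.getProduced().forEach(stateAndRef ->
                    System.out.println("Produced: " + stateAndRef.getState().getData()));
            // Previously recorded states that have now been consumed.
            update.getConsumed().forEach(stateAndRef ->
                    System.out.println("Consumed: " + stateAndRef.getState().getData()));
            // This is where you could send emails, SMS, or web notifications.
        });
    }
}
```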
I'm fairly new to XProc and XPath, but I've been asked to solve the following problem:
Step 2 receives data from step 1 via the secondary port. Step 2 contains a p:for-each, which saves a document into a folder for each element that passes through the for-each. (Part A)
These documents (let's say I receive 6 documents from the for-each) sit in the same directory, get picked up by p:directory-list, and are eventually stored in one single document containing the full path of every document the for-each created. (Part B)
So far, so good.
The problem is that Part A seems to be too slow: Part B already tries to read the data Part A stores while the directory is still empty. In other words, I'm having a performance / synchronization problem.
And now comes the question:
Is it possible to let the pipeline wait and to let it continue as soon as a certain event occurs?
That's what I'm imagining:
Part B waits as long as necessary until the directory Part A stores the data in is no longer empty. I read something about dbxml:breakpoint, but unfortunately I couldn't find more information than the name and a short description of what it seems to do:
Set a breakpoint, optionally based upon a condition, that will cause pipeline operation to pause at the breakpoint, possibly requiring user intervention to continue and/or issuing a message.
It would be awesome if you know more about it and could give an example of how it's used. It would also help if you know a workaround or another way to solve this problem.
UPDATE:
After searching Google for half an eternity, I found SMIL, whose timesheets seem to do the trick. Does anyone have experience with combining XML / XProc and SMIL?
Back towards the end of 2009 I proposed the concept of 'Orchestrating XProc with SMIL' http://broadcast.oreilly.com/2009/09/xproc-and-smil-orchestrating-p.html in a blog post on the O'Reilly Network.
However, I'm not sure that this (XProc + Time) is the solution to your problem. It's not entirely clear to me from your description what's happening. Are you implying that you're trying to write something to disk and then read it in a subsequent step? You need to keep things in the pipeline in order to ensure you can connect outputs to subsequent inputs.
I'm very new to SignalR, and have rolled up a very simple app that will take questions for moderation at conferences (it felt like a straightforward use case).
I have 2 hubs at the moment:
- Question (for asking questions)
- Speaker (these should receive questions and allow moderation, but that will come later)
Solution lives at https://github.com/terrybrown/InterASK
After watching a video by David Fowler and Damian Edwards (http://channel9.msdn.com/Shows/Web+Camps+TV/Damian-Edwards-and-David-Fowler-Demonstrate-SignalR)
and another that I can't find the URL for at the moment, I thought I'd go with 'groups' as the concept to keep messages flowing to the right people.
I implemented IConnected and IDisconnect as I'd seen in one of the videos, and while debugging I can see Connect fire (and on reload I can see Disconnect fire), but it seems nothing I do adds a person to a group.
The SignalR documentation suggests "Groups are not persisted on the server so applications are responsible for keeping track of what connections are in what groups so things like group count can be achieved", which I guess is telling me that I need to keep some method (static or otherwise?) of tracking who is in a group?
Certainly I don't seem to be able to send to groups at the moment, though I have no problem broadcasting to anyone currently connected to the app and implementing the same JS method (2 machines on the same page).
I suspect I'm just missing something - I read a few of the other questions on here, but none of them seem to mention IConnected/IDisconnect, which tells me these are either new (and nobody is using them) or that they're old (and nobody is using them).
I know this could be considered a subjective question, but what I'm looking for is just a simple means of managing the groups so that I can do what I want: send a question from one hub and have people connected to a different hub receive it - groups felt like the cleanest solution for this?
Many thanks folks.
Terry
As you seem to understand, which groups a logical connection (a user, if you will) is in is something you, the application writer, are responsible for maintaining across network disconnects/reconnects. If you look at the way JabbR does this, it maintains the state of which "rooms" a user is in in its database. Upon reconnecting, the user's identity helps place the current connection back into the proper set of groups that represent the specific "rooms".