When my Spring application comes up and attempts to issue any command via the send method, a NoHandlerForCommandException is thrown. This happens just after the application starts; after a few moments the handler is found and everything works as expected.
How can I know that the command bus and all other command handling components are set up before I initiate any command?
I have read somewhere on Stack Overflow that in an upcoming version of Axon Framework an event would be emitted once the command handling configuration has been set up or has received its start signal; has that been introduced?
I believe the issue you are talking about is this one, which is not done yet, but you can follow it there.
As for your problem, the only way to deal with it right now is to wait a few seconds before you start your testing (not the best approach).
There are ways to check via the Axon Server API whether the command handlers are already registered, but that is not an easy task and not pretty either, so I would stick with the wait approach for now, until it gets properly fixed.
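If a fixed sleep feels too fragile, you can also retry the very first dispatch until a handler shows up. A minimal sketch, assuming a Spring-injected CommandGateway and that the failure surfaces as NoHandlerForCommandException (class and method names other than the Axon ones are hypothetical):

```java
import org.axonframework.commandhandling.NoHandlerForCommandException;
import org.axonframework.commandhandling.gateway.CommandGateway;

public class StartupCommandSender {

    private final CommandGateway commandGateway; // injected by Spring

    public StartupCommandSender(CommandGateway commandGateway) {
        this.commandGateway = commandGateway;
    }

    // Retries the first command until a handler is registered, instead of
    // sleeping for a fixed amount of time before the first send.
    public void sendWithRetry(Object command, int maxAttempts) throws InterruptedException {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                commandGateway.sendAndWait(command);
                return;
            } catch (NoHandlerForCommandException e) {
                // Handler not registered yet; back off briefly and try again.
                Thread.sleep(500);
            }
        }
        throw new IllegalStateException("No handler found after " + maxAttempts + " attempts");
    }
}
```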
I am working on an app which has a web component (visited via a browser) and a background task processing component, to which the web component delegates some long-running work.
I've just hit an issue where I refreshed my web browser only to find the page loading indefinitely (first spotted in AJAX calls, but later in normal requests as well).
The cause was not apparent, but as soon as I shut down the background Symfony command, which also uses the EntityManager, the browser gets unblocked and the request proceeds.
My app uses RabbitMQ to store job requests, which are published by the web component. The Symfony command uses the same "backbone" to create a RabbitMQ consumer and consume those jobs.
I tried, without any result:
Restarting Apache
Restarting RabbitMQ
Purging RabbitMQ queue
Using different EntityManagers for web and command
I use OldSoundRabbitMqBundle (link) to facilitate communication between those two.
The web component gets stuck regardless of the action being called (i.e. not only actions related to the RabbitMQ producer).
Has anyone stumbled upon similar issue?
This happens on my dev box; I haven't got around to giving it a spin on a production server, nor would I until I find out more about this.
It would seem that I misused the locking mechanism in Postgres. The task processing component is indeed a long-running task, but since it is a Symfony command, the Doctrine connection is established as early as possible.
Now comes the tricky part: I used the LOCK TABLE statement to lock some tables against concurrent access (EXCLUSIVE mode). Since the connection (as opposed to the entity manager) is never closed, those locks are left intact until I restart the command (every 10th task).
This was the root cause.
I am still investigating some edge cases, but since I moved to advisory locking, I have had no more lock-ups.
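For reference, a minimal sketch of the advisory-lock pattern, shown here in plain JDBC rather than the original PHP/Doctrine setup (connection details and the lock key are placeholders); pg_advisory_lock and pg_advisory_unlock are built-in Postgres functions that lock an application-defined key instead of a table:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class AdvisoryLockSketch {

    // Arbitrary application-defined lock key; web and worker must agree on it.
    private static final long JOB_LOCK_KEY = 42L;

    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/app", "app", "secret")) {

            // Blocks until the lock is free, but does not lock the tables
            // themselves, so ordinary queries from the web component keep working.
            try (PreparedStatement lock = conn.prepareStatement("SELECT pg_advisory_lock(?)")) {
                lock.setLong(1, JOB_LOCK_KEY);
                lock.execute();
            }

            // ... process one job here ...

            // Release explicitly; an advisory lock is otherwise held
            // for the lifetime of the session.
            try (PreparedStatement unlock = conn.prepareStatement("SELECT pg_advisory_unlock(?)")) {
                unlock.setLong(1, JOB_LOCK_KEY);
                unlock.execute();
            }
        }
    }
}
```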
I have a process which I will be invoking manually for the first time in the production environment. The thing is, the process stops when the server goes down or is stopped. In that scenario I will not be able to invoke the process manually every time, since it is a production environment and that is not feasible. So I need to know: how can I invoke a process automatically once the server is up?
I have heard that one way is to write a custom component that starts the process using a LiveCycle implementation class.
Please let me know how to go about it.
Any help regarding this is much appreciated!
Thanks
There are at least two ways you can do this.
The first is the custom component route: you invoke the process on component life-cycle start, which ensures that the invocation happens every time your component is deployed.
The second is the servlet route: you invoke the process in the servlet's initialisation, which ensures the server has started.
The servlet implementation is a better fit for the purpose; the only downside is that you need to package and deploy it separately, as it won't be part of the LCAs.
You can find code samples for invoking LC processes through the APIs in the Adobe docs. You can use the Java API, the web service API or REST, whichever you are more comfortable with.
http://help.adobe.com/en_US/livecycle/9.0/programLC/help/index.htm
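A rough sketch of the servlet route, assuming the standard LiveCycle client SDK classes (ServiceClientFactory and friends); the connection properties and the process name are placeholders for your environment, and the servlet should be declared with load-on-startup in web.xml so init() runs as soon as the server is up:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;

import com.adobe.idp.dsc.InvocationRequest;
import com.adobe.idp.dsc.clientsdk.ServiceClientFactory;
import com.adobe.idp.dsc.clientsdk.ServiceClientFactoryProperties;

// Declare with <load-on-startup>1</load-on-startup> in web.xml so init()
// fires as soon as the application server has deployed the web app.
public class ProcessStarterServlet extends HttpServlet {

    @Override
    public void init() throws ServletException {
        try {
            // Connection settings are placeholders; adjust to your environment.
            Properties props = new Properties();
            props.setProperty(ServiceClientFactoryProperties.DSC_DEFAULT_EJB_ENDPOINT, "jnp://localhost:1099");
            props.setProperty(ServiceClientFactoryProperties.DSC_TRANSPORT_PROTOCOL,
                    ServiceClientFactoryProperties.DSC_EJB_PROTOCOL);
            props.setProperty(ServiceClientFactoryProperties.DSC_SERVER_TYPE, "JBoss");
            props.setProperty(ServiceClientFactoryProperties.DSC_CREDENTIAL_USERNAME, "administrator");
            props.setProperty(ServiceClientFactoryProperties.DSC_CREDENTIAL_PASSWORD, "password");
            ServiceClientFactory factory = ServiceClientFactory.createInstance(props);

            // "MyLongRunningProcess" and "invoke" are hypothetical; use your own
            // process name, operation name and input parameters.
            Map<String, Object> params = new HashMap<String, Object>();
            InvocationRequest request = factory.createInvocationRequest(
                    "MyLongRunningProcess", "invoke", params, false); // false = asynchronous
            factory.getServiceClient().invoke(request);
        } catch (Exception e) {
            throw new ServletException("Failed to start the LiveCycle process on startup", e);
        }
    }
}
```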
When I run it in Flash Builder (debug mode) the remote object is called successfully, but whenever I build the application (an AIR application), the remote object returns neither a result nor a fault; the busy cursor shows for about 3 seconds, then nothing at all.
Any idea how to get more detailed fault information than the regular fault event or result event?
Or has anyone had the same experience?
UPDATE:
Actually, it failed only for ONE service method; for the other methods (some of which take longer to call) the service call works fine.
CASE SOLVED
So the problem was not in the service call, but in my result conversion, which caused the advanced data grid to fail to render.
Best regards
ktutnik.
Try using software like Charles to see what happens during the network call.
Kind of an open question that I run into once in a while -- if you have a stateful or stateless EJB, or possibly a direct servlet process, that may with the wrong parameters start running long on a production system, how could you effectively add a manual 'kill switch' so that an administrator can kill that specific thread/process?
You can't, or at least you shouldn't, interfere with application server threads directly. So a "kill switch" looks decidedly inappropriate to me in a Java EE environment.
I do, however, understand the problem you have, but would rather suggest taking an asynchronous approach where you split your job into smaller work units.
I did that using EJB Timers and was happy with the result: an initial timer is created for the first work unit. When the app server executes that timer, it registers a second one corresponding to the 2nd work unit, and so on. Information can be passed from one work unit to the next because EJB Timers support the storage of custom information. Also, timer execution and registration are transactional, which works well with a database. You can even shut down and restart the application server with this approach. Before each work unit ran, we checked in the database whether the job had been cancelled in the meantime.
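A minimal sketch of that timer-chaining pattern, assuming EJB 3.1; the WorkUnitInfo payload, the bean name and the cancellation check are illustrative, not part of any particular API:

```java
import java.io.Serializable;

import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.ejb.Timeout;
import javax.ejb.Timer;
import javax.ejb.TimerConfig;
import javax.ejb.TimerService;

@Stateless
public class ChunkedJobBean {

    @Resource
    private TimerService timerService;

    // Kicks off the job by scheduling the first work unit.
    public void startJob(String jobId) {
        scheduleWorkUnit(jobId, 0);
    }

    @Timeout
    public void executeWorkUnit(Timer timer) {
        WorkUnitInfo info = (WorkUnitInfo) timer.getInfo();

        // Check in the database whether the job was cancelled in the meantime;
        // if so, simply return without registering a follow-up timer.

        // ... do the actual work for info.step here ...

        boolean moreWorkLeft = true; // derived from your own job state
        if (moreWorkLeft) {
            // Registering the next timer happens in the same transaction as this
            // execution, so the chain survives crashes and server restarts.
            scheduleWorkUnit(info.jobId, info.step + 1);
        }
    }

    private void scheduleWorkUnit(String jobId, int step) {
        // Persistent single-action timer carrying the custom info payload.
        timerService.createSingleActionTimer(
                1000, new TimerConfig(new WorkUnitInfo(jobId, step), true));
    }

    // Custom info carried from one work unit to the next.
    public static class WorkUnitInfo implements Serializable {
        final String jobId;
        final int step;

        WorkUnitInfo(String jobId, int step) {
            this.jobId = jobId;
            this.step = step;
        }
    }
}
```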
I'm inheriting a legacy project, and there's a page that calls a method that makes a web service call. Load and performance testing has detected that this page sometimes takes a really long time to close, but usually it's fine. When one request is hanging, all other requests for that page hang until the first one resolves, and then they all resolve.
I think this might have something to do with the web service being instantiated and not disposed, but I don't know how I can be more sure. I tried to add a delegate to the Dispose method but that doesn't seem to ever fire. (I'm not sure it would without Dispose being called explicitly, but that doesn't really matter.)
So what can I look for on the production server or any deployed environment to watch those requests pile up and go out (or get handled in an orderly manner, if they aren't the problem)?
You might consider using a tool like .NET Memory Profiler. You can use it to attach to your running application, and it can find and report all undisposed objects.
I think they have a free two-week trial.
In your web app code, you could write to a log (event log, text file) right before you send a request, and again right after you get the response. Include some identifying information about the request. Add timestamps.
If you only want to use this while debugging, wrap it in an #if DEBUG block.
Or, if you want to test in release mode, you could put a boolean in appSettings in web.config so you can turn the logging on and off.