Is there a way to configure Rebus SagaFixture Bus retry counts? - rebus

It looks like every time I set up a mock object to throw an exception within a Rebus SagaFixture, the bus retries delivery several times when the exception is thrown. This causes async issues in the test assertions on the saga data and exceptions, because the state keeps mutating after the saga was first initialized.

If you want to configure the number of delivery attempts, you can use one of the saga fixture factory methods that has a maxDeliveryAttempts parameter, e.g.:
using var fixture = SagaFixture.For(() => new MySaga(), maxDeliveryAttempts: 1);
This test shows that the number of caught exceptions can be relied on when the max delivery attempts are set: https://github.com/rebus-org/Rebus.TestHelpers/blob/master/Rebus.TestHelpers.Tests/Bugs/TestAsyncConcurrencyChallengeWithSagaFixture.cs
Unfortunately, the generic factory method (e.g. SagaFixture.For<MySaga>()) does not currently have any additional parameters, but I'll add that some time later today. 🙂
Update: Rebus.TestHelpers 7.1.1 accepts the same parameters in the generic factory method, so it's now possible to call SagaFixture.For<MySaga>(maxDeliveryAttempts: 1)
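For illustration, here is a minimal sketch of the kind of test this enables. MySaga, MyMessage and IMyService are hypothetical types, Moq and NUnit are assumed as the mocking/test frameworks, and the exact fixture API may differ slightly between Rebus.TestHelpers versions; with maxDeliveryAttempts: 1 the throwing mock should only ever be hit once:
using System;
using Moq;
using NUnit.Framework;
using Rebus.TestHelpers;

[TestFixture]
public class SagaRetryTests
{
    [Test]
    public void ThrowingDependencyIsOnlyInvokedOnce()
    {
        // Hypothetical saga dependency that the handler calls and that we force to throw
        var service = new Mock<IMyService>();
        service.Setup(s => s.DoWork()).Throws(new InvalidOperationException("boom"));

        // One delivery attempt => no retries, so assertions on saga state and exceptions
        // are not disturbed by the state mutating after the first attempt
        using var fixture = SagaFixture.For(() => new MySaga(service.Object), maxDeliveryAttempts: 1);

        fixture.Deliver(new MyMessage("some-correlation-id"));

        service.Verify(s => s.DoWork(), Times.Once());
    }
}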

Related

In Disassembler pipeline component - Send only last message out from GetNext() method

I have a requirement where I will be receiving a batch of records. I have to disassemble the batch and insert the data into a DB, which I have completed. But I don't want any message to come out of the pipeline except the last, custom-made message.
I have extended FFDasm and called Disassemble(), and then GetNext() returns every debatched message, and they are failing as there are no subscribers. I want to send nothing out from GetNext() until the last message.
Please help if anyone has already implemented this requirement. Thanks!
If you want to send only one message from GetNext, you have to override the Disassemble method, call the base Disassemble, and drain all of the messages there (you can enqueue these messages to manage them in GetNext), like this:
public new void Disassemble(IPipelineContext pContext, IBaseMessage pInMsg)
{
    try
    {
        base.Disassemble(pContext, pInMsg);

        IBaseMessage message = base.GetNext(pContext);
        while (message != null)
        {
            // Only store one message
            if (this.messagesCount == 0)
            {
                // _messages is a Queue<IBaseMessage>
                this._messages.Enqueue(message);
                this.messagesCount++;
            }
            message = base.GetNext(pContext);
        }
    }
    catch (Exception ex)
    {
        // Manage errors
    }
}
Then in the GetNext method you have the queue, and you can return whatever you want:
public new IBaseMessage GetNext(IPipelineContext pContext)
{
    // Return null once the queue is empty so the pipeline knows there are no more messages
    return _messages.Count > 0 ? _messages.Dequeue() : null;
}
The recommended approach is to publish the messages after the disassemble stage to the BizTalk MessageBox database and use a DB adapter to insert them into the database. Publishing messages to the MessageBox and using an adapter gives you more options for design and performance, and decouples your DB insert from the receive logic. Also, if in future you want to reuse the same messages for something else, you will be able to do so.
Even then, if for any reason you have to insert from the pipeline component, do the following:
Please note that the GetNext() method of the IDisassembler interface is not invoked until the Disassemble() method is complete. Based on this, you can use the following approach, assuming you have encapsulated FFDASM within your own custom component:
Insert all disassembled messages in the Disassemble method itself and enqueue only the last message into a Queue class variable. In GetNext(), return the dequeued message, and return null when the queue is empty. You can optimize the DB insert by inserting multiple rows at a time and saving them in batches, depending on volume. Please note this approach may encounter performance issues depending on the size of the file and the number of rows being inserted into the DB.
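A minimal sketch of that approach could look like the following, following the same method-shadowing pattern as the snippet above and assuming the custom component derives from the flat file disassembler (FFDasmComp); InsertRowsFromMessage is a hypothetical placeholder for the actual (ideally batched) DB insert:
using System.Collections.Generic;
using Microsoft.BizTalk.Component;
using Microsoft.BizTalk.Component.Interop;
using Microsoft.BizTalk.Message.Interop;

public class BatchingFlatFileDisassembler : FFDasmComp
{
    // Holds only the last debatched message, so it is the only one published
    private readonly Queue<IBaseMessage> _lastMessage = new Queue<IBaseMessage>();

    public new void Disassemble(IPipelineContext pContext, IBaseMessage pInMsg)
    {
        base.Disassemble(pContext, pInMsg);

        IBaseMessage current = base.GetNext(pContext);
        while (current != null)
        {
            // Hypothetical helper: insert the rows of this debatched message into the DB
            InsertRowsFromMessage(pContext, current);

            IBaseMessage next = base.GetNext(pContext);
            if (next == null)
            {
                _lastMessage.Enqueue(current);
            }
            current = next;
        }
    }

    public new IBaseMessage GetNext(IPipelineContext pContext)
    {
        // Publish the last message once; returning null signals there is nothing more
        return _lastMessage.Count > 0 ? _lastMessage.Dequeue() : null;
    }

    private void InsertRowsFromMessage(IPipelineContext pContext, IBaseMessage message)
    {
        // Placeholder: read message.BodyPart.GetOriginalDataStream() and perform the
        // batched insert / stored procedure call here.
    }
}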
I am calling DBInsert SP from GetNext()
Oh...so...sorry to say, but you're doing it wrong and actually creating a bunch of problems doing this. :(
This is a very basic scenario to cover with BizTalk Server. All you need is:
A Pipeline Component to Promote BTS.InterchangeID (a minimal sketch is shown at the end of this answer)
A Sequential Convoy Orchestration Correlating on BTS.InterchangeID and using Ordered Delivery.
In the Orchestration, call the SP, transform to SOAP, call the SOAP endpoint, whatever you need.
As you process the Messages, check for BTS.LastInterchangeMessage, then perform your close-out logic.
To be 100% clear, there are no practical 'performance' issues here. By guessing about 'performance' you've actually created the problem you were trying to solve, and created a bunch of support issues for later on, sorry again. :( There is no reason not to use an Orchestration.
As noted, 25K records isn't a lot. Be sure to have the Receive Location and Orchestration in different Hosts.
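For the first bullet, a minimal sketch of the Execute method of such a promoting component might look like this (the IBaseComponent/IComponent/IPersistPropertyBag plumbing is omitted; BTS.InterchangeID lives in the standard system-properties namespace):
using Microsoft.BizTalk.Component.Interop;
using Microsoft.BizTalk.Message.Interop;

public class PromoteInterchangeId
{
    private const string SystemProperties =
        "http://schemas.microsoft.com/BizTalk/2003/system-properties";

    public IBaseMessage Execute(IPipelineContext pContext, IBaseMessage pInMsg)
    {
        // InterchangeID is already written to the context by the engine; promote it
        // so the sequential convoy orchestration can correlate on it.
        object interchangeId = pInMsg.Context.Read("InterchangeID", SystemProperties);
        if (interchangeId != null)
        {
            pInMsg.Context.Promote("InterchangeID", SystemProperties, interchangeId);
        }

        return pInMsg;
    }
}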

Spring batch : get actual item in write error listener while using Async item processor

Application background:
I am using an AsyncItemProcessor and passing a composite processor as the delegate. When I get an exception in the processors, my write error listener gets called.
The onWriteError method signature is (Exception exception, List items).
Issue:
All items in the list passed to the onWriteError method are Future tasks. If I call the get method on a Future task, it throws the same exception that caused the write error.
How can I get the original item in the writer listener methods during async execution?
I couldn't provide any actual code because my company policy prohibits me from posting code in online forums.
You can't. It's like going back in time. That is one of the disadvantages of the AsyncItemProcessor/AsyncItemWriter combination. As the javadoc points out in the AsyncItemProcessor:
Because the Future is typically unwrapped in the ItemWriter, there are lifecycle and stats limitations (since the framework doesn't know what the result of the processor is). While not an exhaustive list, things like StepExecution.filterCount will not reflect the number of filtered items and ItemProcessListener.onProcessError(Object, Exception) will not be called.

Lambda as a leaky state machine?

I have a micro-service which is involved in an OAuth 1 interaction. I'm finding myself in a situation where two runs of the Lambda function with precisely the same starting state have very different outcomes (where state is considered the "event" passed in, environment variables, and "stageParameters" from the API Gateway).
Here's a Cloudwatch log that shows two back-to-back runs:
You can see that while the starting state is identical, the execution path changes pretty quickly. In the second case (the failure case), you see the log entry "Auth state changed: null" ... that is very odd indeed, because this is logged before even the first line of code of the handler is executed. Here's the beginning of the function's handler:
export const handler = (event, context, cb) => {
console.log('EVENT:\n', JSON.stringify(event, null, 2));
So where is this premature log entry coming from? Well, one must assume that it is somehow left over from prior executions. Let me demonstrate ... it is in fact an event listener that was set up in the prior execution. This function interacts with a Firebase DB, and the first time it connects it sets the following up:
auth.signInWithEmailAndPassword(username, password)
    .then((result) => {
        auth.onAuthStateChanged(this.watchAuthState);
    });
where the watchAuthState function is simply:
watchAuthState(user) {
    console.log(`Auth state changed:\n`, JSON.stringify(user, null, 2));
}
This seems to mean that when the function runs a second time, it is already "initialized" with the Firebase DB, but apparently the authentication has been invalidated. My number one aim is to just get back to a predictive state model and have it execute precisely the same each time.
If there are sneaky ways to reuse cached state between Lambda executions in resource-useful ways, then I guess that too would be interesting, but only if we can do that while achieving the predictive state machine.
Regarding the log order, look at the ID that comes after each timestamp at the beginning of each line. I believe this is the invocation ID. The two lines you have highlighted in orange are from different invocations of the function. The EVENT log is the first line to get logged from the invocation with the ID ending in 754ee. The "Auth state changed: null" line is a log entry coming from the earlier invocation of the function, with the invocation ID ending in c40d5.
It looks like you are setting auth state to null at the end of an invocation, but the Firebase connection is global, so the second function invocation thinks the Firebase connection is already initialized, but then it throws errors because the authentication was nulled out.
My number one aim is to just get back to a predictive state model and have it execute precisely the same each time.
Then you need to be aware of Lambda container reuse, and not use any global variables.

Why is ASP.NET HttpContext.Current not set when starting a task with current synchronization context

I was playing around with asynchronous features of .NET a little bit and came up with a situation that I couldn't really explain. When executing the following code inside a synchronous ASP.NET MVC controller
var t = Task.Factory.StartNew(() => {
        var ctx = System.Web.HttpContext.Current;
        // ctx == null here
    },
    CancellationToken.None,
    TaskCreationOptions.None,
    TaskScheduler.FromCurrentSynchronizationContext()
);
t.Wait();
ctx is null within the delegate. Now, to my understanding, the context should be restored when you use the TaskScheduler.FromCurrentSynchronizationContext() task scheduler. So why isn't it here? (I can, by the way, see that the delegate gets executed synchronously on the same thread.)
Also, from msdn, a TaskScheduler.FromCurrentSynchronizationContext() should behave as follows:
All Task instances queued to the returned scheduler will be executed through a call to the Post method on that context.
However, when I use this code:
var wh = new AutoResetEvent(false);
SynchronizationContext.Current.Post(s => {
    var ctx = System.Web.HttpContext.Current;
    // ctx is set here
    wh.Set();
    return;
}, null);
wh.WaitOne();
The context is actually set.
I know that this example is little bit contrived, but I'd really like to understand what happens to increase my understanding of asynchronous programming on .NET.
Your observations seem to be correct; it is a bit puzzling.
You specify the scheduler as TaskScheduler.FromCurrentSynchronizationContext(). This associates a new SynchronizationContextTaskScheduler, which captures SynchronizationContext.Current and posts queued tasks to it.
So the task scheduler has access to the same SynchronizationContext, and that should reference LegacyAspNetSynchronizationContext. So surely it appears that HttpContext.Current should not be null.
In the second case, when you use the SynchronizationContext directly (refer to the MSDN article), the thread's context is shared with the posted callback:
"Another aspect of SynchronizationContext is that every thread has a "current" context. A thread's context isn't necessarily unique; its context instance may be shared with other threads."
SynchronizationContext.Current is provided by LegacyAspNetSynchronizationContext in this case, and it internally has a reference to the HttpApplication.
When the Post method has to invoke the registered callback, it calls HttpApplication.OnThreadEnter, which ultimately results in the current request context being set as HttpContext.Current.
All the classes referenced here are defined as internal in the framework, which makes it a bit difficult to investigate further.
PS: Both SynchronizationContext objects do in fact point to LegacyAspNetSynchronizationContext.
I was googling for HttpContext info some time ago and I found this:
http://odetocode.com/articles/112.aspx
It's about threading and HttpContext. There is a good explanation:
The CallContext provides a service extremely similar to thread local storage (except CallContext can perform some additional magic during a remoting call). Thread local storage is a concept where each logical thread in an application domain has a unique data slot to keep data specific to itself. Threads do not share the data, and one thread cannot modify the data local to a different thread. ASP.NET, after selecting a thread to execute an incoming request, stores a reference to the current request context in the thread’s local storage. Now, no matter where the thread goes while executing (a business object, a data access object), the context is nearby and easily retrieved.
Knowing the above, we can state the following: if, while processing a request, execution moves to a different thread (via QueueUserWorkItem, or an asynchronous delegate, as two examples), HttpContext.Current will not know how to retrieve the current context, and will return null. You might think one way around the problem would be to pass a reference to the worker thread.
So you have to keep a reference to HttpContext.Current in some variable, and that variable can then be accessed from the other threads you create in your code.
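For example, a minimal sketch of that workaround inside the original MVC controller scenario, capturing the context into a local variable on the request thread before starting the task (bearing in mind that HttpContext is not thread-safe, so it should be used carefully from other threads):
public ActionResult Index()
{
    // Capture the request context while we are still on the ASP.NET request thread
    var capturedContext = System.Web.HttpContext.Current;

    var t = Task.Factory.StartNew(() =>
    {
        // Use the captured reference; HttpContext.Current may be null on this thread
        var userAgent = capturedContext.Request.UserAgent;
    });

    t.Wait();
    return View();
}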
Your results are odd - are you sure there's nothing else going on?
Your first example (with Task) only works because Task.Wait() can run the task body "inline".
If you put a breakpoint in the task lambda and look at the call stack, you will see that the lambda is being called from inside the Task.Wait() method - there is no concurrency. Since the task is being executed with just normal synchronous method calls, HttpContext.Current must return the same value as it would from anywhere else in your controller method.
Your second example (with SynchronizationContext.Post) will deadlock and your lambda will never run.
This is because you are using an AutoResetEvent, which doesn't "know" anything about your Post. The call to WaitOne() will block the thread until the AutoResetEvent is Set. At the same time, the SynchronizationContext is waiting for the thread to be free in order to run the lambda.
Since the thread is blocked in WaitOne, the posted lambda will never execute, which means the AutoResetEvent will never be set, which means the WaitOne will never be satisfied. This is a deadlock.

Best way to implement 1:1 asynchronous callbacks/events in ActionScript 3 / Flex / AIR?

I've been utilizing the command pattern in my Flex projects, with asynchronous callback routes required between:
whoever instantiated a given command object and the command object,
the command object and the "data access" object (i.e. someone who handles the remote procedure calls over the network to the servers) that the command object calls.
Each of these two callback routes has to be able to be a one-to-one relationship. This is due to the fact that I might have several instances of a given command class running the exact same job at the same time but with slightly different parameters, and I don't want their callbacks getting mixed up. Using events, the default way of handling asynchronicity in AS3, is thus pretty much out since they're inherently based on one-to-many relationships.
Currently I have done this using callback function references with specific kinds of signatures, but I was wondering if someone knew of a better (or an alternative) way?
Here's an example to illustrate my current method:
I might have a view object that spawns a DeleteObjectCommand instance due to some user action, passing references to two of its own private member functions (one for success, one for failure: let's say "deleteObjectSuccessHandler()" and "deleteObjectFailureHandler()" in this example) as callback function references to the command class's constructor.
Then the command object would repeat this pattern with its connection to the "data access" object.
When the RPC over the network has successfully been completed (or has failed), the appropriate callback functions are called, first by the "data access" object and then the command object, so that finally the view object that instantiated the operation in the first place gets notified by having its deleteObjectSuccessHandler() or deleteObjectFailureHandler() called.
I'll try one more idea:
Have your Data Access Object return their own AsyncTokens (or some other objects that encapsulate a pending call), instead of the AsyncToken that comes from the RPC call. So, in the DAO it would look something like this (this is very sketchy code):
public function deleteThing( id : String ) : DeferredResponse {
    var deferredResponse : DeferredResponse = new DeferredResponse();
    var asyncToken : AsyncToken = theRemoteObject.deleteThing(id);

    var result : Function = function( o : Object ) : void {
        deferredResponse.notifyResultListeners(o);
    };

    var fault : Function = function( o : Object ) : void {
        deferredResponse.notifyFaultListeners(o);
    };

    asyncToken.addResponder(new ClosureResponder(result, fault));
    return deferredResponse;
}
The DeferredResponse and ClosureResponder classes don't exist, of course. Instead of inventing your own you could use AsyncToken instead of DeferredResponse, but the public version of AsyncToken doesn't seem to have any way of triggering the responders, so you would probably have to subclass it anyway. ClosureResponder is just an implementation of IResponder that can call a function on success or failure.
Anyway, the way the code above does its business is that it calls an RPC service, creates an object encapsulating the pending call, and returns that object. Then, when the RPC returns, one of the closures (result or fault) gets called, and since they still have references to the scope as it was when the RPC call was made, they can trigger the methods on the pending call/deferred response.
In the command it would look something like this:
public function execute() : void {
    var deferredResponse : DeferredResponse = dao.deleteThing("3");
    deferredResponse.addEventListener(ResultEvent.RESULT, onResult);
    deferredResponse.addEventListener(FaultEvent.FAULT, onFault);
}
or, you could repeat the pattern, having the execute method return a deferred response of its own that would get triggered when the deferred response that the command gets from the DAO is triggered.
But. I don't think this is particularly pretty. You could probably do something nicer, less complex and less entangled by using one of the many application frameworks that exist to solve more or less exactly this kind of problem. My suggestion would be Mate.
Many of the Flex RPC classes, like RemoteObject, HTTPService, etc. return AsyncTokens when you call them. It sounds like this is what you're after. Basically the AsyncToken encapsulates the pending call, making it possible to register callbacks (in the form of IResponder instances) to a specific call.
In the case of HTTPService, when you call send() an AsyncToken is returned, and you can use this object to track the specific call, unlike the ResultEvent.RESULT, which gets triggered regardless of which call it is (and calls can easily come in in a different order than they were sent).
The AbstractCollection is the best way to deal with persistent objects in Flex / AIR. The GenericDAO provides the answer.
A DAO is the object which performs CRUD operations and other common operations on a ValueObject (known as a POJO in Java). A GenericDAO is a reusable DAO class which can be used generically.
Goal:
In the Java IBM GenericDAO, to add a new DAO, the steps to be done are simply:
Add a ValueObject (POJO).
Add an hbm.xml mapping file for the ValueObject.
Add the 10-line Spring configuration file for the DAO.
Similarly, in the AS3 project Swiz DAO, we want to attain a similar feat.
Client Side GenericDAO model:
As we are working in a client-side language, we should also manage a persistent object collection for every ValueObject.
Usage and source: http://github.com/nsdevaraj/SwizDAO
