Rebus - Run unit tests and wait for Rebus to complete all threads

I am building an integration test where I am using InMemNetwork to run the test.
There is a Thread.Sleep call just before an assert, but that is a dodgy way of testing and it slows our tests down a lot.
I am also doing some integration tests using SagaFixtures and a simple IBus implementation that runs synchronously, but it's all a bit tedious with registering handlers, running them and deferring messages.
Is there a way to wait on all threads in use by Rebus until they have finished executing, without augmenting production code with things like ManualResetEvent (as used in Rebus's own tests)?

I usually use SagaFixture as you do, and then I use FakeBus to inject into saga handlers in order to capture their actions.
Most of my tests are unit tests of simple handlers, though, and I will often inject "real" services, e.g. implementations of IThis and IThat that go to a real database.
For a couple of scenarios though I spin up multiple endpoints with an in-mem transport, and then I usually implement an extension on InMemNetwork that helps me wait for particular events to be published or something like that – it could look like this in a test:
var updated = await Network.WaitForNext<WhateverUpdated>(subscriberAddress, timeoutSeconds: 20);
where WaitForNext is simply an extension method that polls the queue specified by subscriberAddress for the next message and tries to deserialize it as WhateverUpdated.
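For reference, here is a minimal sketch of what such an extension method could look like, assuming Rebus's default JSON serialization and using Newtonsoft.Json directly for deserialization; the helper below is my own illustration, not necessarily what the author's version does:
using System;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json;
using Rebus.Transport.InMem;

public static class InMemNetworkExtensions
{
    public static async Task<TMessage> WaitForNext<TMessage>(
        this InMemNetwork network, string queueName, int timeoutSeconds = 10)
    {
        var deadline = DateTime.UtcNow.AddSeconds(timeoutSeconds);

        while (DateTime.UtcNow < deadline)
        {
            // GetNextOrNull pops the next queued message (if any) from the given in-mem queue
            var message = network.GetNextOrNull(queueName);

            if (message != null
                && message.Headers.TryGetValue("rbs2-msg-type", out var type)
                && type.StartsWith(typeof(TMessage).FullName))
            {
                var json = Encoding.UTF8.GetString(message.Body);
                return JsonConvert.DeserializeObject<TMessage>(json);
            }

            await Task.Delay(100);
        }

        throw new TimeoutException(
            $"Did not receive a {typeof(TMessage).Name} in queue '{queueName}' within {timeoutSeconds} s");
    }
}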
I hope that can give you some inspiration :)

For some scenarios I use the following approach to wait for Rebus to complete all message processing. The Rebus endpoints are hosted in separate exes, and the Rebus file system transport is used for the integration tests (normally it's Azure Service Bus). The integration test spins up the exes, and in each exe Rebus is configured with 0 workers, so it does nothing. Then in the test we have a WaitForMessagesProcessed() method that configures a number of workers and blocks until there are no more messages to be processed.
Here is roughly how it looks in code:
public class MessageProcessor
{
    private const string BaseDirectory = @"c:\baseDirectory";

    private readonly string queueName;
    private readonly IBus bus;
    private int messagesWaitingForProcessing;

    public MessageProcessor(string queueName)
    {
        this.queueName = queueName;

        // 'adapter' is the container adapter / handler activator created elsewhere
        this.bus = Configure.With(adapter)
            .Transport(t => t.UseFileSystem(BaseDirectory, this.queueName))
            .Options(o =>
            {
                o.SetNumberOfWorkers(0);
            })
            .Events(e =>
            {
                e.BeforeMessageSent += (thebus, headers, message, context) =>
                {
                    // When sending to itself, the message is not queued on the network.
                    var destinations = context.Load<Rebus.Pipeline.Send.DestinationAddresses>();
                    if (destinations.Any(a => a == this.queueName))
                        this.messagesWaitingForProcessing++;
                };
                e.AfterMessageHandled += (thebus, headers, message, context, args) =>
                {
                    this.messagesWaitingForProcessing--;
                };
            })
            .Start();
    }

    public async Task WaitForMessagesProcessed()
    {
        this.DetermineMessagesWaitingForProcessing();

        while (this.messagesWaitingForProcessing > 0)
        {
            this.bus.Advanced.Workers.SetNumberOfWorkers(2);

            while (this.messagesWaitingForProcessing > 0)
            {
                await Task.Delay(100);
            }

            this.bus.Advanced.Workers.SetNumberOfWorkers(0);
            this.DetermineMessagesWaitingForProcessing();
        }
    }

    public void DetermineMessagesWaitingForProcessing()
    {
        this.messagesWaitingForProcessing = Directory
            .GetFiles(GetDirectoryForQueueNamed(this.queueName), "*.rebusmessage.json")
            .Length;
    }

    private static string GetDirectoryForQueueNamed(string queueName)
    {
        return Path.Combine(BaseDirectory, queueName);
    }
}
A test could look like this:
[TestMethod]
public async Task Test()
{
    var endpoint1 = LaunchExe("1");
    var endpoint2 = LaunchExe("2");

    endpoint1.DoSomeAction();
    await endpoint1.WaitForMessagesProcessed();

    Assert.AreEqual("expectation", endpoint1.Query());
}

Related

Multiple Workers in .net core worker services?

The .NET Core 3.0 worker service template looks as follows:
public class Program
{
    public static void Main(string[] args)
    {
        CreateHostBuilder(args).Build().Run();
    }

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureServices(services =>
            {
                services.AddHostedService<Worker>();
            });
}
The "Worker" class is derived from BackgroundService. It loops to write log to console every 1000 ms.
My questions:
Can I run multiple "Worker"s simultaneously? (I know I can create another class "Worker2". But can I run two copies of same class "Worker"?)
If yes, how I can configure two "Worker" with different configuration or parameters, say, two Workers with different looping intervals? (Because instance of "Worker" class is created by DI framework. I don't know how I can pass different config/parameters to two different instance of "Worker")
You can have a "parent" worker that launches the "real" workers like this...
protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
    while (!stoppingToken.IsCancellationRequested)
    {
        var workers = new List<Task>();

        foreach (var delay in _config.LoopIntervals)
            workers.Add(DoRealWork(delay, stoppingToken));

        await Task.WhenAll(workers.ToArray());
    }
}
Then...
private async Task DoRealWork(int delay, CancellationToken stoppingToken)
{
    while (!stoppingToken.IsCancellationRequested)
    {
        _logger.LogInformation("worker {delay} checking in at {time}", delay, DateTimeOffset.Now);
        await Task.Delay(delay, stoppingToken);
    }
}
_config gets populated from appSettings.json and passed in to the constructor of the Worker like this...
var cfg = hostContext.Configuration.GetSection(nameof(WorkerConfig)).Get<WorkerConfig>();
services.AddSingleton(cfg);
services.AddHostedService<Worker>();
and the appSettings...
{
  "WorkerConfig": {
    "LoopIntervals": [ 1000, 2000, 3000 ]
  }
}
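The WorkerConfig class itself is not shown in the answer; a minimal shape matching the JSON above (my assumption) would be:
public class WorkerConfig
{
    public int[] LoopIntervals { get; set; }
}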
The reason why "AddHostedService" does not accept the same class registered twice:
I checked the GitHub source of "AddHostedService" and found that it is implemented as follows:
services.TryAddEnumerable(ServiceDescriptor.Singleton<IHostedService, THostedService>());
According to the Microsoft documentation of the "TryAddEnumerable" method, the service is only added if the collection contains no other registration for the same service and implementation type. This is the reason why I cannot run two copies of "Worker".
("TryAddEnumerable" is used intentionally to avoid multiple instances being created for the same service; ref: https://github.com/aspnet/Extensions/issues/1078)
Solution:
So I can add two workers by using "AddSingleton" directly...
services.AddSingleton<IHostedService, Worker>();
services.AddSingleton<IHostedService, Worker>();
It works.
Passing parameters to the service constructor:
Then I modified the Worker class constructor by adding a second parameter for the loop interval. Finally, I use an implementation factory when registering the service, as follows:
services.AddSingleton<IHostedService>(sp => new Worker(sp.GetService<ILogger<Worker>>(), 1000));
services.AddSingleton<IHostedService>(sp => new Worker(sp.GetService<ILogger<Worker>>(), 2000));
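For completeness, here is a rough sketch of what the modified Worker might look like with the extra interval parameter (the parameter name and log message are my assumptions):
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;

public class Worker : BackgroundService
{
    private readonly ILogger<Worker> _logger;
    private readonly int _intervalMs;

    public Worker(ILogger<Worker> logger, int intervalMs)
    {
        _logger = logger;
        _intervalMs = intervalMs;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            // Each registered instance logs on its own interval
            _logger.LogInformation("Worker ({Interval} ms) running at: {Time}", _intervalMs, DateTimeOffset.Now);
            await Task.Delay(_intervalMs, stoppingToken);
        }
    }
}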
Hi, I am also looking for a solution to run multiple instances of a background service.
Thanks to Raymond Wong, the solution worked perfectly.
I just want to extend it further to make it configurable using AppSettings:
"AppSettings":{
"MaxInstances":2
}
Then, when registering the background service:
var limit = appSettings.MaxInstances;

for (int i = 0; i < limit; i++)
{
    services.AddSingleton<IHostedService, Worker>();
}
So we can control the number of instances from the Azure configuration.
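For reference, a minimal sketch of how appSettings could be bound before that loop, following the template shown earlier (the AppSettings class shape is assumed from the JSON above):
public class AppSettings
{
    public int MaxInstances { get; set; }
}

// registration inside CreateHostBuilder
.ConfigureServices((hostContext, services) =>
{
    var appSettings = hostContext.Configuration.GetSection("AppSettings").Get<AppSettings>();

    for (int i = 0; i < appSettings.MaxInstances; i++)
    {
        services.AddSingleton<IHostedService, Worker>();
    }
});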

Is it possible to create an async interceptor using Castle.DynamicProxy?

We basically have a class like the one below that uses Castle.DynamicProxy for interception.
using System;
using System.Collections.Concurrent;
using System.Reflection;
using System.Threading;
using System.Threading.Tasks;
using Castle.DynamicProxy;

namespace SaaS.Core.IoC
{
    public abstract class AsyncInterceptor : IInterceptor
    {
        private readonly ILog _logger;

        private readonly ConcurrentDictionary<Type, Func<Task, IInvocation, Task>> wrapperCreators =
            new ConcurrentDictionary<Type, Func<Task, IInvocation, Task>>();

        protected AsyncInterceptor(ILog logger)
        {
            _logger = logger;
        }

        void IInterceptor.Intercept(IInvocation invocation)
        {
            if (!typeof(Task).IsAssignableFrom(invocation.Method.ReturnType))
            {
                InterceptSync(invocation);
                return;
            }

            try
            {
                CheckCurrentSyncronizationContext();

                var method = invocation.Method;

                if ((method != null) && typeof(Task).IsAssignableFrom(method.ReturnType))
                {
                    var taskWrapper = GetWrapperCreator(method.ReturnType);

                    Task.Factory.StartNew(
                        async () => { await InterceptAsync(invocation, taskWrapper).ConfigureAwait(true); },
                        CancellationToken.None,
                        TaskCreationOptions.AttachedToParent,
                        // this will use the current synchronization context
                        TaskScheduler.FromCurrentSynchronizationContext()).Wait();
                }
            }
            catch (Exception ex)
            {
                // This is not really burying the exception:
                // it comes back in invocation.ReturnValue, which is a
                // Task that failed with the same exception as ex.
            }
        }

        ....
Initially this code was:
Task.Run(async () => { await InterceptAsync(invocation, taskWrapper); }).Wait();
But we were losing HttpContext after any call to this, so we had to switch it to:
Task.Factory.StartNew
So we could pass in the TaskScheduler.FromCurrentSynchronizationContext()
All of this is bad because we are really just swapping one thread for another thread. I would really love to change the signature of
void IInterceptor.Intercept(IInvocation invocation)
to
async Task IInterceptor.Intercept(IInvocation invocation)
And get rid of the Task.Run or Task.Factory and just make it:
await InterceptAsync(invocation, taskWrapper);
The problem is that Castle.DynamicProxy's IInterceptor won't allow this. I really want to do an await in the Intercept. I could do .Result, but then what is the point of the async call I am calling? Without being able to do the await, I lose the benefit of being able to yield this thread's execution. I am not stuck with Castle Windsor's DynamicProxy, so I am looking for another way to do this. We have looked into Unity, but I don't want to replace our entire AutoFac implementation.
Any help would be appreciated.
All of this is bad because we are really just swapping one thread for another thread.
True. Also because the StartNew version isn't actually waiting for the method to complete; it will only wait until the first await. But if you add an Unwrap() to make it wait for the complete method, then I strongly suspect you'll end up with a deadlock.
The problem is Castle.DynamicProxy IInterecptor won't allow this.
IInterceptor does have a design limitation that it must proceed synchronously. So this limits your interception capabilities: you can inject synchronous code before or after the asynchronous method, and asynchronous code after the asynchronous method. There's no way to inject asynchronous code before the asynchronous method. It's just a limitation of DynamicProxy, one that would be extremely painful to correct (as in, break all existing user code).
To do the kinds of injection that is supported, you have to change your thinking a bit. One of the valid mental models of async is that a Task returned from a method represents the execution of that method. So, to append code to that method, you would call the method directly and then replace the task return value with an augmented one.
So, something like this (for return types of Task):
protected abstract void PreIntercept(); // must be sync
protected abstract Task PostInterceptAsync(); // may be sync or async

// This method will complete when PostInterceptAsync completes.
private async Task InterceptAsync(Task originalTask)
{
    // Asynchronously wait for the original task to complete
    await originalTask;

    // Asynchronous post-execution
    await PostInterceptAsync();
}

public void Intercept(IInvocation invocation)
{
    // Run the pre-interception code.
    PreIntercept();

    // *Start* the intercepted asynchronous method.
    invocation.Proceed();

    // Replace the return value so that it only completes when the post-interception code is complete.
    invocation.ReturnValue = InterceptAsync((Task)invocation.ReturnValue);
}
Note that the PreIntercept, the intercepted method, and PostInterceptAsync are all run in the original (ASP.NET) context.
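For methods returning Task<T> the same idea applies; a sketch (my own extrapolation, not part of the original answer) could look like the snippet below. Wiring it up requires inspecting invocation.Method.ReturnType and invoking the generic wrapper, e.g. via reflection or a cached delegate per result type.
// Completes with the original result once the post-interception code has run.
private async Task<T> InterceptAsync<T>(Task<T> originalTask)
{
    var result = await originalTask;

    // Asynchronous post-execution, still in the original context
    await PostInterceptAsync();

    return result;
}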
P.S. A quick Google search for async DynamicProxy resulted in this. I don't have any idea how stable it is, though.

Regulate network calls in SyncAdapter onPerformSync

I'm sending several Retrofit calls via SyncAdapter onPerformSync and I'm trying to space out the HTTP calls with a try/catch sleep statement. However, this is blocking the UI, which only becomes responsive again after all the calls are done.
What is a better way to regulate network calls (with a sleep timer) in the background in onPerformSync without blocking the UI?
@Override
public void onPerformSync(Account account, Bundle extras, String authority,
                          ContentProviderClient provider, SyncResult syncResult) {
    String baseUrl = BuildConfig.API_BASE_URL;

    Retrofit retrofit = new Retrofit.Builder()
            .baseUrl(baseUrl)
            .addConverterFactory(GsonConverterFactory.create())
            .build();

    service = retrofit.create(HTTPService.class);

    Call<RetroFitModel> RetroFitModelCall = service.getRetroFit(apiKey, sortOrder);
    RetroFitModelCall.enqueue(new Callback<RetroFitModel>() {
        @Override
        public void onResponse(Response<RetroFitModel> response) {
            if (!response.isSuccess()) {
            } else {
                List<RetroFitResult> retrofitResultList = response.body().getResults();
                Utility.storeList(getContext(), retrofitResultList);

                for (final RetroFitResult result : retrofitResultList) {
                    RetroFitReview(result.getId(), service);
                    try {
                        // Sleep for SLEEP_TIME before running RetroFitReports & RetroFitTime
                        Thread.sleep(SLEEP_TIME);
                    } catch (InterruptedException e) {
                    }
                    RetroFitReports(result.getId(), service);
                    RetroFitTime(result.getId(), service);
                }
            }
        }

        @Override
        public void onFailure(Throwable t) {
            Log.e(LOG_TAG, "Error: " + t.getMessage());
        }
    });
}
The "onPerformSync" code is executed within the "SyncAdapterThread" thread, not within the Main UI thread. However this could change when making asynchronous calls with callbacks (which is our case here).
Here you are using an asynchronous call of the Retrofit "call.enqueue" method, and this has an impact on thread execution. The question we need to ask at this point:
Where callback methods are going to be executed?
To get the answer to this question, we have to determine which Looper is going to be used by the Handler that will post callbacks.
In case we are playing with handlers ourselves, we can define the looper, the handler and how to process messages/runnables between handlers. But this time it is different, because we are using a third-party framework (Retrofit). So we have to know: which looper is used by Retrofit?
Please note that if Retrofit didn't already define its looper, you could have caught an exception saying that you need a looper to process callbacks. In other words, an asynchronous call needs to be made in a looper thread in order to post callbacks back to the thread from where it was executed.
According to the source code of Retrofit (Platform.java):
static class Android extends Platform {
    @Override CallAdapter.Factory defaultCallAdapterFactory(Executor callbackExecutor) {
        if (callbackExecutor == null) {
            callbackExecutor = new MainThreadExecutor();
        }
        return new ExecutorCallAdapterFactory(callbackExecutor);
    }

    static class MainThreadExecutor implements Executor {
        private final Handler handler = new Handler(Looper.getMainLooper());

        @Override public void execute(Runnable r) {
            handler.post(r);
        }
    }
}
You can notice "Looper.getMainLooper()", which means that Retrofit will post messages/runnables into the main thread message queue (you can do research on this for further detailed explanation). Thus the posted message/runnable will be handled by the main thread.
So that being said, the onResponse/onFailure callbacks will be executed in the main thread. And it is going to block the UI if you are doing too much work (Thread.sleep(SLEEP_TIME)). You can check it yourself: just set a breakpoint in the "onResponse" callback and check which thread it is running in.
So how to handle this situation? (the answer to your question about Retrofit use)
Since we are already in a background thread (SyncAdapterThread), there is no need to make asynchronous calls in your case. Just make a synchronous Retrofit call and then process the result, or log a failure. This way, you will not block the UI.
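A rough sketch of the synchronous variant, reusing the names from the question (HTTPService, RetroFitModel, SLEEP_TIME, etc.); the exact response API may differ slightly depending on your Retrofit version:
@Override
public void onPerformSync(Account account, Bundle extras, String authority,
                          ContentProviderClient provider, SyncResult syncResult) {
    try {
        // execute() runs the request synchronously on the SyncAdapter thread,
        // so sleeping here never touches the UI thread
        Response<RetroFitModel> response = service.getRetroFit(apiKey, sortOrder).execute();

        if (response.isSuccess()) {
            List<RetroFitResult> results = response.body().getResults();
            Utility.storeList(getContext(), results);

            for (RetroFitResult result : results) {
                RetroFitReview(result.getId(), service);
                Thread.sleep(SLEEP_TIME); // throttle before the next two calls
                RetroFitReports(result.getId(), service);
                RetroFitTime(result.getId(), service);
            }
        }
    } catch (IOException | InterruptedException e) {
        Log.e(LOG_TAG, "Error: " + e.getMessage());
    }
}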

SignalR - access clients from server-side business logic

I have a requirement to start a process on the server that may run for several minutes, so I was thinking of exposing the following hub method:-
public async Task Start()
{
    await Task.Run(() => _myService.Start());
}
There would also be a Stop() method that allows a client to stop the running process, probably via a cancellation token. I've also omitted code that prevents it from being started if already running, error handling, etc.
Additionally, the long-running process will be collecting data which it needs to periodically broadcast back to the client(s), so I was wondering about using an event - something like this:-
public async Task Start()
{
    _myService.AfterDataCollected += AfterDataCollectedHandler;

    await Task.Run(() => _myService.Start());

    _myService.AfterDataCollected -= AfterDataCollectedHandler;
}

private void AfterDataCollectedHandler(object sender, MyDataEventArgs e)
{
    Clients.All.SendData(e.Data);
}
Is this an acceptable solution or is there a "better" way?
You don't need to use SignalR to start the work; you can use the application's already existing framework / design / API for this, and only use SignalR for the pub-sub part.
I did this for my current customer's project: a user starts a piece of work, and all tabs belonging to that user are updated using SignalR. I used an open source library called SignalR.EventAggregatorProxy to abstract the domain from SignalR. Disclaimer: I'm the author of said library.
http://andersmalmgren.com/2014/05/27/client-server-event-aggregation-with-signalr/
Edit: using the .NET client, your code would look something like this:
public class MyViewModel : IHandle<WorkProgress>
{
    public MyViewModel(IEventAggregator eventAggregator)
    {
        eventAggregator.Subscribe(this);
    }

    public void Handle(WorkProgress message)
    {
        // Act on work progress
    }
}

How does a WebMethod behave during an Ajax call?

When calling a WebMethod on a web page using jQuery, we define it as static.
However, a static method exists only once per class. What happens when multiple web requests are made?
Do they really execute asynchronously, or
are all the requests pipelined, waiting for the WebMethod to accept them?
I created a sample console program to simulate the scenario with a static method and found the calls to execute in sequential order.
class Program
{
    static int count = 10;

    static void Main(string[] args)
    {
        new Program().foobar();
        Console.ReadLine();
    }

    public void foobar()
    {
        Parallel.Invoke(() => work("one"), () => work("two"), () => work("three"), () => work("four"));
    }

    static void work(string str)
    {
        Thread.Sleep(3000);
        count++;
        Console.WriteLine(str + " " + count);
    }
}
Can you please shed some light on this concept?
They will not execute sequentially. Creating multiple client apps in a client-server scenario would be a better example, since your console app inherently runs everything sequentially.
That said, with static methods you just need to be aware of shared resources, data, etc. Local data is fine.
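To make that concrete, here is a small sketch of the distinction (the class and method names are made up): the static field is shared by all concurrent requests and needs synchronization, while locals are per call.
public partial class MyPage : System.Web.UI.Page
{
    private static int _sharedCounter; // shared across all concurrent requests

    [System.Web.Services.WebMethod]
    public static int DoWork(int input)
    {
        int local = input * 2; // local variable: one per call, no synchronization needed

        // shared state: increment atomically (or take a lock) to stay correct under concurrency
        int newValue = System.Threading.Interlocked.Increment(ref _sharedCounter);

        return local + newValue;
    }
}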
