I would like to execute a workflow synchronously on the calling thread using WorkflowApplication.
The link http://msmvps.com/blogs/theproblemsolver/archive/2011/01/07/doing-synchronous-workflow-execution-using-the-workflowapplication.aspx provides one example, but the Idle and Abort events are still executed on separate threads.
Is there something in the framework that already provides fully synchronous execution, or will I have to write it myself?
The workflow runtime, regardless of the host you choose, is always asynchronous. There is nothing you can do about it beyond using a different SynchronizationContext or blocking the thread until the workflow is done. Ron Jacobs has a similar approach using a ManualResetEvent with his Workflow Episodes.
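For reference, a minimal sketch of that blocking approach with WorkflowApplication (the root activity name MyActivity is a placeholder; note the Completed/Aborted callbacks still run on workflow threads):
// Minimal sketch: block the calling thread until the workflow finishes.
var done = new ManualResetEvent(false);
var app = new WorkflowApplication(new MyActivity());
app.Completed += e => done.Set();
app.Aborted += e => done.Set();
app.Run();       // the workflow still executes on runtime threads...
done.WaitOne();  // ...but the calling thread waits here until it is finished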
Two years later... The best way is:
class SynchronousSynchronizationContext : SynchronizationContext
{
    // Force asynchronous posts to run synchronously on the calling thread.
    public override void Post(SendOrPostCallback d, object state)
    {
        this.Send(d, state);
    }
}
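A minimal usage sketch (MyActivity is a placeholder root activity): with this context the scheduler pumps its work items on the calling thread, so Run should not return until the workflow completes or goes idle, and the callbacks run on the calling thread too.
var app = new WorkflowApplication(new MyActivity())
{
    SynchronizationContext = new SynchronousSynchronizationContext()
};
app.Completed += e => Console.WriteLine("Completed on the calling thread");
app.Run(); // with the synchronous context, this pumps the workflow on the current thread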
I have a .NET Core Web API application. The app contains models for db access and a way to send emails to users. My end goal is to call a method nightly (to email users that their registration expired and to mark it expired in the database).
So, in short, I can build an endpoint and call it manually every night. Or build a windows service to call the endpoint. Or build a windows service to do the work. But I want to keep the logic in one application.
My ideal solution would be to have a timer running inside my app and calling a method in a service every 24 hours. Of course, that's not possible, so I am looking at Hangfire. The official documentation seems to indicate that there is a lot of overhead.
Hangfire keeps background jobs and other information that relates to the processing inside a persistent storage. Persistence helps background jobs to survive on application restarts, server reboots, etc.
Do I need this if I just want to call a method?
Background jobs are processed by Hangfire Server. It is implemented as a set of dedicated (not thread pool’s) background threads that fetch jobs from a storage and process them. Server is also responsible to keep the storage clean and remove old data automatically.
Do I even need jobs?
Is there a way to JUST call a method without all this overhead with Hangfire?
tl;dr: Are there options to opt out of the dashboard, database connectivity, etc and just have Hangfire work as a timer?
My ideal solution would be to have a timer running inside my app and calling a method in a service every 24 hours. Of course, that's not possible...
It's very possible, actually, using IHostedService. You should take some time to read the full documentation, but simply, for your scenario, you'd just need something like:
// Requires: using System;
//           using System.Threading;
//           using System.Threading.Tasks;
//           using Microsoft.Extensions.Hosting;
internal class NightlyEmailHostedService : IHostedService, IDisposable
{
    private Timer _timer;

    public Task StartAsync(CancellationToken cancellationToken)
    {
        // Fire immediately, then every 24 hours.
        _timer = new Timer(DoWork, null, TimeSpan.Zero, TimeSpan.FromHours(24));
        return Task.CompletedTask;
    }

    private void DoWork(object state)
    {
        // send email
    }

    public Task StopAsync(CancellationToken cancellationToken)
    {
        _timer?.Change(Timeout.Infinite, 0);
        return Task.CompletedTask;
    }

    public void Dispose()
    {
        _timer?.Dispose();
    }
}
Then, in Startup.cs just add:
services.AddHostedService<NightlyEmailHostedService>();
Now, that's an extremely naive approach. It basically just kicks off a timer that will run once every 24 hours, but depending on when your app started, it may not always be at night. In reality, you'd likely want to have the timer run every minute or so, and check against a particular time you actually want the email to go out. There's an interesting implementation of handling cron-style times via an IHostedService you might want to reference.
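A rough sketch of that idea, as a variation of the class above (the 02:00 target time and the _lastRunDate field are assumptions, not part of the original answer):
internal class NightlyEmailHostedService : IHostedService, IDisposable
{
    private Timer _timer;
    private DateTime _lastRunDate = DateTime.MinValue;

    public Task StartAsync(CancellationToken cancellationToken)
    {
        // Poll every minute instead of sleeping 24 hours.
        _timer = new Timer(DoWork, null, TimeSpan.Zero, TimeSpan.FromMinutes(1));
        return Task.CompletedTask;
    }

    private void DoWork(object state)
    {
        var now = DateTime.Now;
        // Run once per day, the first time we tick past 02:00 local time.
        if (now.TimeOfDay >= TimeSpan.FromHours(2) && _lastRunDate.Date < now.Date)
        {
            _lastRunDate = now.Date;
            // send email
        }
    }

    public Task StopAsync(CancellationToken cancellationToken)
    {
        _timer?.Change(Timeout.Infinite, 0);
        return Task.CompletedTask;
    }

    public void Dispose() => _timer?.Dispose();
}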
The long and short is that it's very possible to do this all in your app, without requiring anything additional like Hangfire. However, you have to do a bit more work than you would using something like Hangfire, of course.
Since the very beginning of writing ASP.NET applications, whenever I wanted to add threading there have been three simple ways to accomplish it within my ASP.NET application:
Using the System.Threading.ThreadPool.
Using a custom delegate and calling its BeginInvoke method.
Using custom threads with the aid of System.Threading.Thread class.
The first two methods offer a quick way to fire off worker threads for your application. But unfortunately, they hurt the overall performance of your application since they consume threads from the same pool used by ASP.NET to handle HTTP requests.
Then I wanted to use the new Task or async/await to write an IHttpAsyncHandler. One example is what Drew Marsh explains here: https://stackoverflow.com/a/6389323/261950
My guess is that using Task or async/await still consumes a thread from the ASP.NET thread pool, which I don't want, for the obvious reason.
Could you please tell me if I can run a Task (async/await) on a background thread, as with the System.Threading.Thread class, rather than on a thread from the thread pool?
Thanks in advance for your help.
Thomas
This situation is where Task, async, and await really shine. Here's the same example, refactored to take full advantage of async (it also uses some helper classes from my AsyncEx library to clean up the mapping code):
// First, a base class that takes care of the Task -> IAsyncResult mapping.
// In .NET 4.5, you would use HttpTaskAsyncHandler instead.
public abstract class HttpAsyncHandlerBase : IHttpAsyncHandler
{
    public abstract Task ProcessRequestAsync(HttpContext context);

    IAsyncResult IHttpAsyncHandler.BeginProcessRequest(HttpContext context, AsyncCallback cb, object extraData)
    {
        var task = ProcessRequestAsync(context);
        return Nito.AsyncEx.AsyncFactory.ToBegin(task, cb, extraData);
    }

    void IHttpAsyncHandler.EndProcessRequest(IAsyncResult result)
    {
        Nito.AsyncEx.AsyncFactory.ToEnd(result);
    }

    void IHttpHandler.ProcessRequest(HttpContext context)
    {
        // Synchronous fallback: run the async pipeline and block until it completes.
        var handler = (IHttpAsyncHandler)this;
        handler.EndProcessRequest(handler.BeginProcessRequest(context, null, null));
    }

    public virtual bool IsReusable
    {
        get { return true; }
    }
}

// Now, our (async) Task implementation
public class MyAsyncHandler : HttpAsyncHandlerBase
{
    public override async Task ProcessRequestAsync(HttpContext context)
    {
        using (var webClient = new WebClient())
        {
            var data = await webClient.DownloadDataTaskAsync("http://my resource");
            context.Response.ContentType = "text/xml";
            context.Response.OutputStream.Write(data, 0, data.Length);
        }
    }
}
(As noted in the code, .NET 4.5 has a HttpTaskAsyncHandler which is similar to our HttpAsyncHandlerBase above).
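For comparison, on .NET 4.5 the handler itself would shrink to roughly this sketch using the built-in base class (the placeholder URL is carried over from above):
public class MyAsyncHandler : HttpTaskAsyncHandler
{
    public override async Task ProcessRequestAsync(HttpContext context)
    {
        using (var webClient = new WebClient())
        {
            var data = await webClient.DownloadDataTaskAsync("http://my resource");
            context.Response.ContentType = "text/xml";
            context.Response.OutputStream.Write(data, 0, data.Length);
        }
    }
}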
The really cool thing about async is that it doesn't take any threads while doing the background operation:
An ASP.NET request thread kicks off the request, and it starts downloading using the WebClient.
While the download is going, the await actually returns out of the async method, leaving the request thread. That request thread is returned back to the thread pool - leaving 0 (zero) threads servicing this request.
When the download completes, the async method is resumed on a request thread. That request thread is briefly used just to write the actual response.
This is the optimal threading solution (since a request thread is required to write the response).
The original example also uses threads optimally - as far as the threading goes, it's the same as the async-based code. But IMO the async code is easier to read.
If you want to know more about async, I have an intro post on my blog.
I've been looking for information on the internet for a couple of days. Let me sum up what I've found so far:
ASP.NET ThreadPool facts
As Andrés said: when will async/await not consume an additional ThreadPool thread? Only when you are using BCL async methods, which use an IOCP thread to execute the I/O-bound operation.
Andrés continues with: ...If you are trying to async-execute some sync code or your own library code, that code will probably use an additional ThreadPool thread unless you explicitly use the IOCP ThreadPool or your own ThreadPool.
But as far as I know you can't choose whether to use an IOCP thread, and writing a correct thread-pool implementation is not worth the effort; I doubt anyone would do a better job than the one that already exists.
ASP.NET uses threads from a common language runtime (CLR) thread pool to process requests. As long as there are threads available in the thread pool, ASP.NET has no trouble dispatching incoming requests.
Async delegates use the threads from ThreadPool.
When should you start thinking about implementing asynchronous execution?
When your application performs relatively lengthy I/O operations (database queries, Web service calls, and other I/O operations)
If you want to do I/O work, then you should be using an I/O thread (I/O Completion Port), and specifically you should be using the async callbacks supported by whatever library class you're using. Their names start with Begin and End (see the short sketch after this list).
If requests are computationally cheap to process, then parallelism is probably an unnecessary overhead.
If the incoming request rate is high, then adding more parallelism will likely yield few benefits and could actually decrease performance, since the incoming rate of work may be high enough to keep the CPUs busy.
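For example, the Begin/End (APM) pattern mentioned above looks like this; while the request is in flight, no ASP.NET worker thread is blocked (a small illustrative sketch):
var request = WebRequest.Create("http://example.com/");
request.BeginGetResponse(asyncResult =>
{
    // This callback runs when the response arrives.
    using (var response = request.EndGetResponse(asyncResult))
    using (var stream = response.GetResponseStream())
    {
        // process the response stream
    }
}, null);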
Should I create new threads?
Avoid creating new threads like you would avoid the plague.
If you are actually queuing enough work items to prevent ASP.NET from processing further requests, then you are already starving the thread pool! If you are running literally hundreds of CPU-intensive operations at the same time, what good would it do to have another worker thread to serve an ASP.NET request when the machine is already overloaded?
And the TPL?
TPL can adapt to use available resources within a process. If the server is already loaded, the TPL can use as little as one worker and make forward progress. If the server is mostly free, they can grow to use as many workers as the ThreadPool can spare.
Tasks use threadpool threads to execute.
References
http://msdn.microsoft.com/en-us/magazine/cc163463.aspx
http://blogs.msdn.com/b/pfxteam/archive/2010/02/08/9960003.aspx
https://stackoverflow.com/a/2642789/261950
Saying that "0 (zero) threads will be servicing this request" is not entirely accurate.
I think you mean "from the ASP.NET ThreadPool", and in the general case that will be correct.
When async/await will not consume an additional ThreadPool thread?
Only in the case you are using BCL Async methods (like the ones provided by WebClient async extensions) that uses an IOCP thread to execute the IO bound operation.
If you are trying to async-execute some sync code or your own library code, that code will probably use an additional ThreadPool thread unless you explicitly use the IOCP ThreadPool or your own ThreadPool.
Thanks,
Andrés.
The Parallel Extensions team has a blog post on using TPL with ASP.NET that explains how TPL and PLINQ use the ASP.NET ThreadPool. The post even has a decision chart to help you pick the right approach.
In short, PLINQ uses one worker thread per core from the threadpool for the entire execution of the query, which can lead to problems if you have high traffic.
The Task and Parallel methods on the other hand will adapt to the process's resources and can use as little as one thread for processing.
As far as the Async CTP is concerned, there is little conceptual difference between the async/await construct and using Tasks directly. The compiler uses some magic to convert awaits to Tasks and Continuations behind the scenes. The big difference is that your code is much MUCH cleaner and easier to debug.
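To make that concrete, here is a conceptual (not literal) illustration of the transformation, reusing a WebClient download as in the earlier example:
using System;
using System.Net;
using System.Threading.Tasks;

static class AwaitIllustration
{
    // With async/await:
    static async Task ProcessAsync(WebClient client, string url)
    {
        var data = await client.DownloadDataTaskAsync(url);
        Console.WriteLine(data.Length);
    }

    // Conceptually what the compiler produces (simplified; the real generated
    // code is a state machine, not a plain continuation):
    static Task ProcessWithContinuation(WebClient client, string url)
    {
        return client.DownloadDataTaskAsync(url)
                     .ContinueWith(t => Console.WriteLine(t.Result.Length));
    }
}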
Another thing to consider is that async/await and TPL (Task) are not the same thing.
Please read this excellent post http://blogs.msdn.com/b/ericlippert/archive/2010/11/04/asynchrony-in-c-5-0-part-four-it-s-not-magic.aspx to understand why async/await doesn't mean "using a background thread".
Going back to our topic here, in your particular case where you want to perform some expensive calculations inside an AsyncHandler you have three choices:
1) Leave the code inside the AsyncHandler, so the expensive calculation will use the current thread from the ThreadPool.
2) Run the expensive calculation code on another ThreadPool thread by using Task.Run or a delegate.
3) Run the expensive calculation code on another thread from your custom thread pool (or the IOCP ThreadPool).
The second case MIGHT be enough for you, depending on how long your "calculation" process runs and how much load you have. The safe option is #3, but it is a lot more expensive in coding/testing. I also recommend always using .NET 4 for production systems with an async design, because there are some hard limits in .NET 3.5.
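A sketch of option 2 using the async handler pattern from the earlier answer (ExpensiveCalculation is a hypothetical method; Task.Run requires .NET 4.5, on .NET 4.0 you would use Task.Factory.StartNew instead):
public class CalculationHandler : HttpTaskAsyncHandler
{
    public override async Task ProcessRequestAsync(HttpContext context)
    {
        // Offload the CPU-bound work to another ThreadPool thread,
        // freeing the request thread while it runs.
        var result = await Task.Run(() => ExpensiveCalculation());
        context.Response.Write(result);
    }

    private static string ExpensiveCalculation()
    {
        // hypothetical CPU-intensive work
        return "done";
    }
}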
There's a nice implementation of HttpTaskAsyncHandler for .NET 4.0 in the SignalR project. You may want to check it out: http://bit.ly/Jfy2s9
I have an ASP .NET website running on GoDaddy in a shared environment. The application is a subscription-based service with options for recurring billing to users.
Every hour, we need to synchronize user data with our payment processor to update users who have upgraded or cancelled their accounts. The payment processor does not have a mechanism for calling a URL or otherwise notifying us of changes.
The problem: we need to create a background thread that runs some code at a predefined interval. There are some good articles about background tasks in .NET, but I am sure there is a simpler way around this. Maybe an application-wide timer that can call a function, etc.
The limitation: Shared environment does not allow windows services, external applications, full-trust, etc.
Since this is a production application, I would like to use the safest approach possible rather than arm-twisting IIS.
I had a similar problem: I'm developing an ASP.NET proof of concept and use a background thread that performs a task that could take several hours. The problem is that ASP.NET can recycle the AppDomain at any time (killing my background thread).
To prevent this, you can register your background thread with ASP.NET so it will notify your thread to shut down. To do this, implement the following interface:
public interface IRegisteredObject
{
    void Stop(bool immediate);
}
And register your object with ASP.NET using the following static method:
HostingEnvironment.RegisterObject(this);
When ASP.NET tears down the AppDomain, it will first attempt to call Stop method on all registered objects. In most cases, it’ll call this method twice, once with immediate set to false. This gives your code a bit of time to finish what it is doing. ASP.NET gives all instances of IRegisteredObject a total of 30 seconds to complete their work, not 30 seconds each. After that time span, if there are any registered objects left, it will call them again with immediate set to true.
By preventing the Stop method from returning (by holding a lock while the worker is busy), we stop ASP.NET from shutting down the AppDomain until our work is finished.
private readonly object _lock = new object();
private bool _shuttingDown;

public void Stop(bool immediate)
{
    // Block here until any in-progress work (which holds _lock) has finished.
    lock (_lock)
    {
        _shuttingDown = true;
    }
    HostingEnvironment.UnregisterObject(this);
}

public void DoWork(Action work)
{
    lock (_lock)
    {
        // Don't start new work once shutdown has been requested.
        if (_shuttingDown)
        {
            return;
        }
        work();
    }
}
Use a Task instead of an Action to benefit from cancellation options. For your specific case you could start a timer that executes tasks like this (see the sketch below).
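A rough sketch of that idea, putting the pieces above together (the JobHost class name, the SyncWithPaymentProcessor method and the hourly interval are assumptions):
public class JobHost : IRegisteredObject
{
    private readonly object _lock = new object();
    private bool _shuttingDown;
    private Timer _timer;

    public JobHost()
    {
        HostingEnvironment.RegisterObject(this);
        // Run the sync-with-payment-processor job once an hour.
        _timer = new Timer(_ => DoWork(SyncWithPaymentProcessor),
                           null, TimeSpan.Zero, TimeSpan.FromHours(1));
    }

    public void Stop(bool immediate)
    {
        lock (_lock) { _shuttingDown = true; }
        _timer.Dispose();
        HostingEnvironment.UnregisterObject(this);
    }

    public void DoWork(Action work)
    {
        lock (_lock)
        {
            if (_shuttingDown) return;
            work();
        }
    }

    private static void SyncWithPaymentProcessor()
    {
        // call the payment processor and update the database
    }
}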
PS: This is a hack, and ASP.NET isn't meant to run background tasks, so use a Windows service or a WCF service when possible! I use this approach since it simplifies development, maintenance and installation.
For more information see my source: http://haacked.com/archive/2011/10/16/the-dangers-of-implementing-recurring-background-tasks-in-asp-net.aspx
To update for 2018 - The Hangfire NuGet package is perfect for this
Since there were no answers, I thought I'd post my solution in case it helps others.
Not the ideal approach by any means, but for those who might gain from it: I created a cron job on another Linux hosting account we had to call the required ASP.NET URL. A management horror, but it does the job.
Do I have to lock access to instance members?
Example:
public class HttpModule : IHttpModule
{
    //...
    Dictionary<int, int> foo;

    void UseFoo(int a, int b)
    {
        foo[a] = b;
    }
}
It's not crystal clear to me so far from the MSDN documentation, but I found a forum post from someone who claims to know the answer. It sounds like you shouldn't expect bad stuff to happen with your implementation, but you should be aware that foo's state will not necessarily be shared across all requests, since your HttpModule will be created once per HttpApplication that IIS chooses to keep in its pool.
I wanted to offer my findings related to this question, as I have observed them in IIS6:
I have been dealing with this issue extensively in IIS6 and have found some interesting results using log4net and reflection to capture execution history. What I found is that there is extensive 'thread management' going on behind the scenes. There seems to be a 'primary' series of threads that corresponds 1:1 to HttpApplication instances. These threads, however, do not exclusively handle the pipeline for your request; various sub-threads can be called when these instances are accessed. Subsequent requests and resource requests used by your application seem to share some persistent information relating to your original request, yet they are never handled entirely by the initial thread, which indicates some kind of relationship. I could not discern any concrete pattern (other than what I've previously described) as to which elements were handed off to other threads; it was seemingly random. My conclusion from this evidence is that some sort of hierarchical pooling seems to be occurring, where an unknown subset of reference elements is inherited by the child threads through the parent reference.
So, as an answer, I would say that HttpModules are shared between threads. In terms of locking instance values, this would be applicable if the values apply to all requests which use the module and must maintain some state. I could see this being useful when trying to maintain stateful instance values which are expensive to compute, so that they can be reused in subsequent requests.
This issue had been troubling me for some time; hopefully this info helps someone.
I recently found an article which touches on this question slightly:
http://www.dominicpettifer.co.uk/Blog/41/ihttpmodule-gotchas---the-init---method-can-get-called-multiple-times
It doesn't mention threads, but only says that the worker process will instantiate as many HttpApplication objects as it thinks it needs, then it'll pool them for performance reasons, reusing instances as new requests come in before sending them back into the pool.
Following the code from the link, you can be sure that your init code is executed once in a thread-safe manner:
private static bool HasAppStarted = false;
private readonly static object _syncObject = new object();

public void Init(HttpApplication context)
{
    if (!HasAppStarted)
    {
        lock (_syncObject)
        {
            if (!HasAppStarted)
            {
                // Run application StartUp code here
                HasAppStarted = true;
            }
        }
    }
}
I've been meaning to set up a test app to run this and test it, just to see if it's true, but I haven't had the time.
The article posted by Jim is interesting, but as Jim says it does not mention anything about thread safety.
I guess you would only need the lock mechanism if you are initializing static members or performing "only once" initializations, i.e. initializing a static resource.
I couldn't conclude from either MSDN or the article mentioned by Jim that we need the lock mechanism when initializing non-static class variables.
I am writing a custom Windows Workflow Foundation activity, that starts some process asynchronously, and then should wake up when an async event arrives.
All the samples I’ve found (e.g. this one by Kirk Evans) involve a custom workflow service, that does most of the work, and then posts an event to the activity-created queue. The main reason for that seems to be that the only method to post an event [that works from a non-WF thread] is WorkflowInstance.EnqueueItem, and the activities don’t have access to workflow instances, so they can't post events (from non-WF thread where I receive the result of async operation).
I don't like this design, as this splits functionality into two pieces, and requires adding a service to a host when a new activity type is added. Ugly.
So I wrote the following generic service, which I call from the activity's async event handler and which can be reused by various async activities (error handling omitted):
class WorkflowEnqueuerService : WorkflowRuntimeService
{
    public void EnqueueItem(Guid workflowInstanceId, IComparable queueId, object item)
    {
        this.Runtime.GetWorkflow(workflowInstanceId).EnqueueItem(queueId, item, null, null);
    }
}
Now, in the activity code, I can obtain and store a reference to this service, start my async operation, and when it completes, use the service to post an event to my queue. The benefit of this is that I keep all the activity-specific code inside the activity, and I don't have to add new services for each activity type.
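A rough sketch of what that looks like inside a WF 3.x activity (the queue name and the StartAsyncOperation helper are assumptions I made for illustration):
protected override ActivityExecutionStatus Execute(ActivityExecutionContext context)
{
    // Make sure the queue we will wait on exists.
    var queuingService = context.GetService<WorkflowQueuingService>();
    if (!queuingService.Exists("ResultQueue"))
        queuingService.CreateWorkflowQueue("ResultQueue", false); // non-transactional queue

    var enqueuer = context.GetService<WorkflowEnqueuerService>();
    Guid instanceId = this.WorkflowInstanceId;

    // Start the async operation; its callback runs on a non-WF thread and
    // posts the result back into this workflow instance's queue.
    StartAsyncOperation(result => enqueuer.EnqueueItem(instanceId, "ResultQueue", result));

    return ActivityExecutionStatus.Executing; // wait for the queued item to arrive
}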
But seeing the official and internet samples do this with specialized, non-reusable services, I would like to check: is this approach OK, or am I creating some problems here?
There is a potential problem here with regard to workflow persistence.
If you create long-running workflows that are persisted to a database so the runtime can restart them, those workflows are not reloaded into memory until some external event reloads them. But here the workflows are responsible for triggering that event themselves, which they cannot do until they are reloaded. And we have a catch-22 :-(
The proper way to do this is using an external service. And while this might feel like dividing the code into two places, it really isn't. The reason is that the workflow is responsible for the big picture, i.e. what should be done, and the runtime service is responsible for the actual implementation, i.e. how it should be done. That way you can change the how without changing the why and when.
A follow-up: regardless of all the reasons why it "should be done" using a service, this will be directly supported by .NET 4.0, which provides a clean way for an activity to start asynchronous work while suspending the persistence of the activity.
See
http://msdn.microsoft.com/en-us/library/system.activities.codeactivitycontext.setupasyncoperationblock(VS.100).aspx
for details.