Hosting WF as Windows Service - workflow-foundation-4

I am trying to construct a simple Windows workflow to monitor a directory for inbound files and do some DB updates, using Windows WF 4.0. Currently I am planning to build a 'WCF Workflow Service' and host it as a Windows service running 24/7 (with a daily service shutdown and startup).
Further in the future I am planning to consume this service from an ASP.NET/WPF application to build a basic dashboard.
Considering the idea of directory polling for files with WF hosted in a Windows service, does it seem to be a good idea? What can the cons of this be?
Please advise if there are any drawbacks to this, or whether it can be achieved by better means.

I'm actually doing this, but it is a bit more complex than you think, and should be avoided if possible.
You should not block from within an Activity; if it is expected to be a long-running Activity that is waiting for input from the outside (a FileSystemWatcher event, for instance), the workflow should idle itself and wait to be woken from the outside.
How I did this was I created a workflow extension that hosted the FileSystemWatcher. Once the Activity was ready to watch for a file, it created a bookmark and passed it to the extension.
The extension then started the FSW, holding onto the bookmark.
When a FSW event was fired, the extension resumed the bookmark, passing in an object that contained details about the event. The Activity did what was needed with the event, then re-scheduled itself.
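This is not the answerer's exact code, but a minimal sketch of that extension/bookmark pattern; the names WaitForFile and FileWatchExtension are placeholders, and re-arming, unregistration, and error handling are omitted:

using System;
using System.Activities;
using System.Activities.Hosting;
using System.Collections.Generic;
using System.IO;

// Activity that creates a bookmark, hands it to the extension, and idles the workflow.
public sealed class WaitForFile : NativeActivity<FileSystemEventArgs>
{
    public InArgument<string> Path { get; set; }

    // Must return true so the runtime is allowed to go idle while the bookmark is pending.
    protected override bool CanInduceIdle { get { return true; } }

    protected override void Execute(NativeActivityContext context)
    {
        var extension = context.GetExtension<FileWatchExtension>();
        Bookmark bookmark = context.CreateBookmark("FileArrived", OnFileArrived);
        extension.RegisterBookmark(Path.Get(context), bookmark);
    }

    private void OnFileArrived(NativeActivityContext context, Bookmark bookmark, object value)
    {
        // The extension passes the FileSystemEventArgs in when it resumes the bookmark.
        Result.Set(context, (FileSystemEventArgs)value);
    }
}

// Extension that owns the FileSystemWatcher and resumes the bookmark when a file arrives.
public sealed class FileWatchExtension : IWorkflowInstanceExtension, IDisposable
{
    private WorkflowInstanceProxy instance;
    private FileSystemWatcher watcher;

    public IEnumerable<object> GetAdditionalExtensions() { yield break; }

    public void SetInstance(WorkflowInstanceProxy instance) { this.instance = instance; }

    public void RegisterBookmark(string path, Bookmark bookmark)
    {
        watcher = new FileSystemWatcher(path);
        watcher.Created += (s, e) =>
            instance.BeginResumeBookmark(bookmark, e,
                ar => instance.EndResumeBookmark(ar), null);
        watcher.EnableRaisingEvents = true;
    }

    public void Dispose()
    {
        if (watcher != null) watcher.Dispose();
    }
}

The host has to register the extension before the workflow runs, e.g. workflowApplication.Extensions.Add(() => new FileWatchExtension()), or host.WorkflowExtensions.Add(...) when using WorkflowServiceHost.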
Normally I wouldn't have done this, but I had some requirements that forced me to use WF4 to accomplish this goal. If I didn't have to use WF4, I would have just spun up the FSW within the service and consumed the events.
Unless you expect to need a lot of flexibility in configuring what is done with the FSW event, and expect that to change relatively often once the service is deployed, I'd skip WF4.

Related

Make .net core service run in multiple machines to make it highly available but do the work by only one node

I have a .Net core application that consists of some background tasks (hosted services) and WEB APIs (which controls and get statuses of those background tasks). Other applications (e.g. clients) communicate with this service through these WEB API endpoints. We want this service to be highly available i.e. if a service crashes then another instance should start doing the work automatically. Also, the client applications should be able to switch to the next service automatically (clients should call the APIs of the new instance, instead of the old one).
The other important requirement is that the task (computation) this service performed in the background can’t be shared between two instances. We have to make sure only one instance does this task at a given time.
What I have done up to now is run two instances of the same service and use a SQL Server-based distributed locking mechanism (SqlDistributedLock) to acquire a lock. If a service acquires the lock, it goes and does the operation while the other node waits to acquire the lock. If one service crashes, the next node is able to acquire the lock. On the client side, I used a Polly-based retry mechanism to switch the calling URL to the next node to find the working node.
But this design has an issue: if the node which acquired the lock loses connectivity to the SQL server, then the second service manages to acquire the lock and starts doing the work while the first service is still in the middle of doing the same.
I think I need some sort of leader election (it seems I have done it wrongly). Can anyone help me with a better solution for this kind of a problem?
This problem is not specific to .NET or any other framework, so please make your question more general to make it more accessible. Generally the solution to this problem lies in the domain of Enterprise Integration Patterns, so consult those references, as the status quo may change.
At first sight and based on my own experience developing distributed systems, I suggest two solutions:
use a load balancer or gateway to distribute requests between your service instances.
use a shared message queue broker to put requests in and let each service instance dequeue a request for processing.
Either is fine; I use both in my own designs.
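For illustration, a rough sketch of the second option, assuming a RabbitMQ broker (the queue name and processing logic are placeholders). Each instance runs this worker and competes for messages; the broker delivers any given message to only one consumer, so only one instance performs a given task at a time:

using System;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

class Worker
{
    static void Main()
    {
        var factory = new ConnectionFactory { HostName = "localhost" };
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            channel.QueueDeclare(queue: "background-tasks", durable: true,
                                 exclusive: false, autoDelete: false, arguments: null);

            // Deliver one unacknowledged message at a time so an instance only holds one task.
            channel.BasicQos(prefetchSize: 0, prefetchCount: 1, global: false);

            var consumer = new EventingBasicConsumer(channel);
            consumer.Received += (sender, ea) =>
            {
                // Do the actual computation here; acknowledge only after it finishes so an
                // unfinished task is redelivered if this instance crashes mid-work.
                channel.BasicAck(deliveryTag: ea.DeliveryTag, multiple: false);
            };
            channel.BasicConsume(queue: "background-tasks", autoAck: false, consumer: consumer);

            Console.ReadLine(); // keep the worker alive
        }
    }
}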

Windows scheduler API with console application Vs .net scheduler tools with asp.net mvc to execute long running processes inside my asp.net MVC

I am working on an ASP.NET MVC 5 web application, deployed under Windows 2012 and IIS 8. My application has many CRUD operations, implemented as action methods.
But my web application will also be performing a scheduled, long-running network scan process; the network scan will mainly do the following steps:-
Get the list of our servers and vms from our database.
Get the scanning username and password for each server and vm from a third party tool, using Rest API.
Call some PowerShell scripts to retrieve the servers' & VMs' info such as network info, memory, name, etc. (see the sketch after this list).
Update our ERP system with the scan info using Rest API.
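For step 3, a minimal sketch of invoking a PowerShell script from .NET; the cmdlet, property name, and class name are placeholders, and the System.Management.Automation package is assumed:

using System.Management.Automation;

public class ServerInfoCollector
{
    public void Collect(string serverName)
    {
        using (PowerShell ps = PowerShell.Create())
        {
            // Placeholder script: in practice this would be the existing scan script.
            ps.AddScript("Get-CimInstance Win32_OperatingSystem -ComputerName " + serverName);

            foreach (PSObject result in ps.Invoke())
            {
                var totalMemory = result.Properties["TotalVisibleMemorySize"].Value;
                // ...map the values and push them to the ERP system via its REST API.
            }
        }
    }
}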
Now I did a pilot project using the following approach:-
I defined a Model method inside my ASP.NET MVC application to do the above 4 steps.
Then I installed the Hangfire tool, which calls the scan method on a predefined schedule (a sketch of this registration appears below).
Also I created a View inside my ASP.NET MVC application which allows users to set the Hangfire schedule settings (this requires an IIS reset on the host server for Hangfire to pick up the new settings).
Now I ran a test scan for around 150 servers, which took around 40 minutes to complete, and it worked well. The only thing I noted is that if I set the schedule to run during non-business hours (when no activity is made on IIS), then Hangfire will not be able to call the job, and once the first request is made the missed jobs will run. I overcame this limitation by defining a Windows task which calls IIS every 15 minutes, to keep the application pool alive, and it worked well...
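A minimal sketch of the kind of Hangfire registration described in the pilot, assuming an OWIN Startup class; NetworkScanner.RunScan and the storage name are placeholders:

using Hangfire;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        GlobalConfiguration.Configuration.UseSqlServerStorage("HangfireConnection");

        app.UseHangfireDashboard();
        app.UseHangfireServer();

        // Schedule the scan; in practice the cron expression would come from the settings View.
        RecurringJob.AddOrUpdate("network-scan",
            () => new NetworkScanner().RunScan(),
            Cron.Daily(2)); // 02:00 every day
    }
}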
Now the other approach I am reading about for doing the above is as follows:-
Instead of defining a Model method inside ASP.NET MVC to do the scan, I can create a separate console application to do it.
Then, inside my ASP.NET MVC application, create a View which allows users to create and schedule a task inside the Windows Task Scheduler, by integrating with the Task Scheduler API.
This Windows task would then call the console application.
Now I am not sure which approach is better and why. Generally speaking, long-running/background jobs should not run under IIS. But at the same time, defining these long-running processes as console apps and scheduling them with the Windows Task Scheduler creates extra dependencies for my web application, and adds extra effort when moving the application from one server to another (for example, from test to live).
Besides this, I read that tools such as Hangfire, Quartz, and others are designed to allow running long-running tasks inside IIS, eliminating the need to create console applications and schedule them with the Task Scheduler.
So can anyone advise on this?
In my opinion, if it is possible to solve the scheduling problem on the web application side, there is no need to create a scheduler task or a new console application for triggering purposes. The problem you will probably face when using a scheduler in a web application is a common one: the scheduler works like a charm while debugging the web application, but fails to trigger after publishing to IIS. At that point the problem is generally related to IIS rather than to the scheduler (Quartz.NET, Hangfire, etc.). Although there are lots of articles and workarounds posted on the web, unfortunately only some of them work properly, and most of them require lots of changes to the web and machine configuration.
However, there are solutions to this kind of scheduling problem, and I believe it is worth giving Keep Alive Service For IIS 6.0/7.5 a try. Just install it on the server to which you publish your application, and your published application will stay alive after application pool recycling, IIS/application restarts, etc. It is also used in our MVC application to send weekly notification mails and has been working for months without any problem. Here is the sample code that I use in our MVC application. For more information, please see Scheduled Tasks In ASP.NET With Quartz.Net and Quartz.NET CronTrigger.
*Global.asax:*
protected void Application_Start()
{
    JobScheduler.Start();
}
*EmailJob.cs:*
using Quartz;

public class EmailJob : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        SendEmail();
    }
}
*JobScheduler.cs:*
using Quartz;
using Quartz.Impl;

public class JobScheduler
{
    public static void Start()
    {
        IScheduler scheduler = StdSchedulerFactory.GetDefaultScheduler();
        scheduler.Start();

        IJobDetail job = JobBuilder.Create<EmailJob>().Build();

        ITrigger trigger = TriggerBuilder.Create()
            .WithIdentity("trigger1", "group1")
            .StartNow()
            .WithSchedule(CronScheduleBuilder
                .WeeklyOnDayAndHourAndMinute(DayOfWeek.Monday, 10, 00)
                //.WithMisfireHandlingInstructionDoNothing() // Do not fire if the firing is missed
                .WithMisfireHandlingInstructionFireAndProceed() // MISFIRE_INSTRUCTION_FIRE_NOW
                .InTimeZone(TimeZoneInfo.FindSystemTimeZoneById("GTB Standard Time")) // (GMT+02:00)
            )
            .Build();

        scheduler.ScheduleJob(job, trigger);
    }
}
Also I created a View inside my ASP.NET MVC application which allows users to set the Hangfire schedule settings (this requires an IIS reset on the host server for Hangfire to pick up the new settings).
You're resetting your webserver to update a task's schedule? That doesn't sound healthy. What you might do is keep track of what the scheduled time should be, and on execution, check whether the current time is within a certain range of the scheduled time (or whether the job has already been executed), and otherwise abort the job.
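For example, a rough sketch of that guard; ScheduleStore and NetworkScanner are hypothetical placeholders for your own settings store and scan logic:

using System;

public class ScanJob
{
    public void Run()
    {
        // Read the user-configured time from the database rather than baking it into
        // Hangfire's schedule, so changing it never requires touching IIS.
        TimeSpan scheduled = ScheduleStore.GetScheduledTime();
        TimeSpan now = DateTime.Now.TimeOfDay;

        bool withinWindow = Math.Abs((now - scheduled).TotalMinutes) <= 15;
        if (!withinWindow || ScheduleStore.AlreadyRanToday())
            return; // abort: not the configured slot, or already executed

        ScheduleStore.MarkRanToday();
        new NetworkScanner().RunScan();
    }
}

The recurring Hangfire job can then fire fairly often (say every 15 minutes), while this check decides whether the scan actually runs.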
The only thing I noted is that if I set the schedule to run during non-business hours (when no activity is made on IIS), then Hangfire will not be able to call the job, and once the first request is made the missed jobs will run. I overcame this limitation by defining a Windows task which calls IIS every 15 minutes, to keep the application pool alive, and it worked well...
Hangfire's documentation has a page about running delayed tasks that mentions what you need to change to accommodate this.
Using Windows' Task Scheduler doesn't seem like a good idea; it's not meant for the execution of ad-hoc, short-lived tasks. You probably need elevation to create tasks, and you'd probably need to define another scheduled task to clean up the mountain of tasks that would exist after a few dozen background jobs have been executed.
You're also correct that using Windows' Task Scheduler would make it more difficult to move your application around.

Scheduled Task or Timer Class

Speaking of server resources (in general) and background processes: would it be better to use a separate executable and a Windows scheduled task, or to use the Timer class and make use of the same resources as your application?
There are a few pros and cons to both methods, but what I'm wondering is this: would making use of shared resources (thread pools and the like) be better than separate resources? Sure, the process would be taking resources from the app, but isn't it technically already doing that either way?
You have given too little context to really understand the whole picture. How does the timer trigger the activity at a certain time if the application is closed or there is nobody connected (logged on)? This stays roughly the same for both ASP.NET and a Windows client, because IIS takes the application down when nobody has connected for a while.
In my opinion a Windows scheduled task is way better, because you decouple from the IIS application pool / application lifecycle, you get better separation, and you can be sure that at the scheduled time the call will be executed and the activity started.
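To make the trade-off concrete, here is a minimal sketch of the in-process option using System.Threading.Timer (the work method is a placeholder); the timer lives and dies with the process/app pool, which is exactly the drawback described above:

using System;
using System.Threading;

public static class InProcessScheduler
{
    private static Timer timer; // keep a reference so the timer is not garbage collected

    public static void Start()
    {
        // Fires every 15 minutes on a thread-pool thread, but only while the process is alive;
        // an app pool recycle or idle shutdown silently stops it.
        timer = new Timer(_ => DoBackgroundWork(), null,
                          TimeSpan.Zero, TimeSpan.FromMinutes(15));
    }

    private static void DoBackgroundWork()
    {
        // Placeholder for the actual background process.
    }
}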

How to get workflow Blocking Bookmarks(statemachine) without relying on workflow persistence service

I need to get the next activities (transitions) that my workflow is blocked on as soon as the workflow enters a new state, without relying on the workflow persistence service. I found out that workflow persistence only starts to hit the database when the workflow instance goes idle, which introduces latency when more than one workflow instance is running; this poses a serious problem for me. I need the blocking bookmarks to be in sync with my workflow status, which I set in a code activity when the workflow enters its new state. From CodeActivityContext and NativeActivityContext there is no API to get this information (the next transitions), and both the StateMachine class and the State class are sealed, so there is no way to tap into them. I am using the blocking bookmarks to indicate to the UI how the workflow can proceed, so that I can drive the workflow from the UI. I am hosting the state machine using WorkflowServiceHost with IIS. I am wondering why I am the only one running into this issue; I have been struggling with it for some time.
Thanks in advance.
Your best option is to use a TrackingParticipant, where you can see exactly what is going on in a workflow as it executes. From the TrackingParticipant you can then save the bookmarks and have the UI reuse them.
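A minimal sketch of such a participant for a state machine; the logging target is a placeholder, and in practice you would persist the state and bookmark information wherever the UI reads it from:

using System;
using System.Activities.Statements.Tracking;
using System.Activities.Tracking;

public class StateTrackingParticipant : TrackingParticipant
{
    protected override void Track(TrackingRecord record, TimeSpan timeout)
    {
        // StateMachineStateRecord is emitted whenever the state machine enters a state.
        var stateRecord = record as StateMachineStateRecord;
        if (stateRecord != null)
        {
            // Save this (together with the transitions you defined for that state) to your own
            // store so the UI knows which triggers/bookmarks the instance is currently waiting on.
            Console.WriteLine("Instance {0} entered state {1}",
                stateRecord.InstanceId, stateRecord.StateName);
        }
    }
}

The participant is added to the host, e.g. host.WorkflowExtensions.Add(new StateTrackingParticipant()); with no TrackingProfile set it receives all tracking records.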

Long-running thread process under ASP.NET + WCF

Duplicate
This is a close duplicate of Dealing with a longer running process in WCF. Please consider posting your answer to that one instead of this.
Original Question
I'm implementing the business layer of an application that must run some background processes at scheduled times. The business layer is made up of several WCF services all running under the same web application.
The idea is to define a set of 'tasks' that must be run at different times (e.g. every 5 minutes, every day at 23:00, etc.). That wouldn't be hard to implement as a Windows service, but the problem is that the tasks need access to data caches living in the services, so this 'scheduler' must run under the IIS context in order to access that data.
What I'm doing currently is using a custom ServiceHostFactory in one of the WCF services which spawns a child thread and returns. The child thread sleeps and wakes up every X minutes to see if there are scheduled tasks and executes them.
But I'm worried about IIS randomly killing my thread when it recycles the application pool or after some inactive time (eg. no activity on any of the WCF services, which listen for requests from the presentation layer). The thread must run uninterrupted regardless of activity on the services. Is this really possible?
I have found an article by someone doing the same thing, but his solution seems to be pinging the server from the child thread itself regularly. Hopefully there is a better solution.
I have at some point implemented a Windows Service that would load a web page on a regular basis. The purpose of that was that the site was hosting a Workflow Foundation runtime, and we wanted to ensure that the web application was brought back up after IIS recycled the application pool. Perhaps the same approach can be used in this case: have a service (or Scheduled Task in Windows; even simpler) run every x minutes and load a page that will check for tasks.
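A minimal sketch of that pinger, runnable from a Windows Service timer or a Scheduled Task; the URL is a placeholder for whatever page checks for due tasks:

using System;
using System.Net;

class KeepAlivePing
{
    static void Main()
    {
        using (var client = new WebClient())
        {
            // Waking the site brings the app pool back up and lets the in-process
            // scheduler check whether any tasks are due.
            client.DownloadString("http://localhost/MyApp/Scheduler/CheckPendingTasks");
        }
    }
}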
Is it a possibility to run a Windows Service, or to place applications in the Windows Task Scheduler, to execute methods in the WCF services at certain times? Maybe use a BackgroundWorker inside the WCF service. Another option would be for the WCF service to spawn other applications to do the business logic, passing the appropriate data, or pointers to the data in memory (unsafe).
