I have an ASP.NET web application hosted on two different IIS web servers (Server A and Server B) for HTTP request clustering. In the application, a user can create and kick off (run) a manual Windows Task Scheduler task on the IIS web server where the website is hosted (but in my case it is hosted on two different web servers for load balancing).
The first time a user creates a task from the web UI, the HTTP request goes to Server A, so the manual Windows task is created on Server A. The next time the user tries to kick off the task, the HTTP request goes to Server B, but there is no such task on Server B (the first request created it on Server A). The second request cannot find the task on Server B and displays an alert that no Windows scheduled task was found.
For example, Server A has one task (MyScheduler), but Server B has no task with the same name.
How can I solve this?
After a lot of research I found that the Windows Task Scheduler API lets you specify the target server on which you want to create, read, run, and delete tasks.
The TaskService.TargetServer property gets the name of the computer that is running the Task Scheduler service the user is connected to. You can find more details about TaskService here:
https://msdn.microsoft.com/en-us/library/windows/desktop/aa383474(v=vs.85).aspx
Suppose we have Server A and Server B as mentioned in the question above. We can define Server A as the target server, so the task is always created on a single machine in the clustered environment.
Example:
TaskService taskService = new TaskService(targetServer: "ServerA");
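For a fuller picture, here is a rough sketch assuming the managed Task Scheduler wrapper that exposes the TaskService class; the names "ServerA" and "MyScheduler" come from the question, everything else is illustrative:

using System;
using Microsoft.Win32.TaskScheduler;

// Always talk to the same target server, no matter which web server
// handled the HTTP request.
using (TaskService taskService = new TaskService(targetServer: "ServerA"))
{
    // Create (or update) the manual task on the target server.
    TaskDefinition definition = taskService.NewTask();
    definition.RegistrationInfo.Description = "Task created from the web UI";
    definition.Actions.Add(new ExecAction("notepad.exe"));
    taskService.RootFolder.RegisterTaskDefinition("MyScheduler", definition);

    // Later, kick off the same task from either web server.
    taskService.GetTask("MyScheduler")?.Run();
}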
We have a microservice with a Hangfire scheduler deployed in a multi-server environment. Currently the Hangfire job triggers from all 3 servers simultaneously, resulting in duplication of the process across all 3 servers. Is there any way to restrict the Hangfire job to execute on only 1 server at a time?
In your web application, add logic to enable Hangfire on only one physical server. This is what I have in my Startup.cs (run when the application restarts):
// if "isHangfireOn" = 1 inside web.config, enable hangfire on server.
// ONLY one physical server should run with this setting enabled.
if (("1".Equals(ConfigurationManager.AppSettings["isHangfireOn"].ToString())))
{
//Specify the use of Sqlserver for timed task persistence
GlobalConfiguration.Configuration.UseSqlServerStorage("hangfireconfig");
//enable server
app.UseHangfireServer();
//enable Dashboard panel
app.UseHangfireDashboard();
//Cyclic execution of tasks
RecurringJob.AddOrUpdate(() => Email.SendEmail(), Cron.Daily(amHour, amMins), TimeZoneInfo.Local);
}
Then in your web.config, enable "isHangfireOn" on ONE server only. Disable it for any other servers.
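For reference, a minimal sketch of the corresponding appSettings entry (the key name is taken from the snippet above):

<appSettings>
  <!-- Set to "1" on exactly ONE physical server; use "0" everywhere else. -->
  <add key="isHangfireOn" value="1" />
</appSettings>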
I am trying to integrate Jenkins and Web Deploy v3.5 over an HTTP connection. The server runs IIS 10 on Windows Server 2016. The build fails with the error:
Web deployment task failed. (Could not complete the request to remote agent URL 'http://IPAddress:8172/MSDeploy.axd?site=WebSite'.)
I am using the following command:
/property:configuration=Dev /p:DeployOnBuild=True /p:DeployTarget=MsDeployPublish /p:CreatePackageOnPublish=False /p:AllowUntrusted=True /p:MsDeployPublishMethod=WMSvc /p:MsDeployServiceUrl="http://IpAddress:8172/MSDeploy.axd" /p:DeployIisAppPath="WebSite" /p:AllowUntrustedCertificate=True /p:Username=SomeUsername /p:Password=SomePassword
Troubleshooting:
Port 8172 is open for Jenkins.
The Web Deploy services are running.
Users have been given sufficient rights to the directory.
The Web Deploy user has been added to the Administrators group.
I suspect that the Web Deploy tool does not work over an HTTP connection. Is that true?
Web Deploy is just a way of deploying. The service runs on the server and listens on port 8172. I do not understand why you are using a full web address when all you need is the connection to the server (IP or domain only!).
Example: 0.00.000.000 or example.org
Check whether you installed the handler too. You need both the Web Deploy service and the handler running.
I resolved the problem by taking the following steps:
1. MS Web Deploy only works over a secure connection; it must be called via https:// (see the corrected property after these steps).
2. Configured the three rules described under Management Service Delegation Rules:
2.1 createApp, delegated to the WDeployConfigWriter user
2.2 setAcl
2.3 contentPath, iisApp
Reference: https://learn.microsoft.com/en-us/iis/publish/using-web-deploy/configure-the-web-deployment-handler
For step 2.1, the WDeployConfigWriter user needs to be created manually. The Web Deploy tool uses two users: WDeployAdmin and WDeployConfigWriter.
Reference: https://blog.richardszalay.com/2013/08/02/manually-creating-wdeployadmin-and-wdeployconfigwriter/
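For the first point, only the MsDeployServiceUrl property from the command in the question needs to change to the https scheme; the other arguments stay the same:

/p:MsDeployServiceUrl="https://IpAddress:8172/MSDeploy.axd"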
I have created a Scheduler Job in Azure with an HTTPS action to periodically ping an ASP.NET web application deployed on a VM. The web application is configured with an HTTPS binding using an SSL certificate. I have configured IIS security using the IIS Crypto 2.0 tool.
The Scheduler Job fails with the error:
Http Action - Request to host 'www.somesamplehost.com' failed: SendFailure The underlying connection was closed: An unexpected error occurred on a send.
The same URL works fine when I try to access it from any browser.
The issue was due to the SSL certificate. I have an ECC SSL certificate for my web application, and the Azure Scheduler Job does not support making requests to a web application with that kind of certificate.
Microsoft suggested an alternative to the Scheduler Job: a feature in the Azure Portal called "Logic Apps". You can set it up to periodically make requests to a specific URL, and Logic Apps has far more capabilities than a Scheduler Job.
When I run my two services locally, I can hit service A, which sends a command to service B, which picks it up and processes it. Pretty straightforward. However, when I publish these to my web server and send a request to service A, the command reaches service B (I can see the message in service B's queue) but it does not get picked up and processed. I created an endpoint on service B that simply returns an OK response -- if I call this endpoint, effectively "touching" the service, everything kicks on and the messages get processed from that point on.
I figured maybe this had something to do with late compiling, so I changed the publish settings to precompile on publish, but I get the same result.
Is there a way to have the service start processing as soon as it is published? It is also worth noting that both services are Web API 2.
Another option (probably more “standard”) would be to move the “handlers” into a Windows Service instead of a web application.
For this Windows Service you can leverage the NServiceBus Host which will turn a standard class library into a Windows Service for you. They have a good amount of documentation about this here: https://docs.particular.net/nservicebus/hosting/nservicebus-host/?version=Host_6
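As a rough sketch (not the questioner's code; the transport and persistence choices are just the usual NServiceBus Host conventions), the class library only needs an endpoint configuration class and the host turns it into a Windows Service:

using NServiceBus;

// Picked up by NServiceBus.Host.exe; install as a Windows Service
// with "NServiceBus.Host.exe /install".
public class EndpointConfig : IConfigureThisEndpoint
{
    public void Customize(EndpointConfiguration endpointConfiguration)
    {
        // Handlers in this assembly will process commands from the MSMQ queue.
        endpointConfiguration.UseTransport<MsmqTransport>();
        endpointConfiguration.UsePersistence<InMemoryPersistence>();
    }
}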
I would argue that this is more stable, as you separate sending commands (web application / Web API) from processing commands and publishing events (NSB host). The host can sit on the web server itself, or you can put it on a different server.
Our default architecture is to have a separate server run our NSB hosts, since web applications and NSB hosts scale differently. If you run the NSB host on the web server, a web app that gets too much traffic can take the NSB host's processing down with it. You can always start simple with one server for both, monitor it, and then move things around as traffic increases.
Not sure if this is the "right way" to do things, but what I ended up doing was setting up each site to be always running and to auto-initialize:
App pool set to "Always Running"
Website set to preloadEnabled = true
Web.config has a system.webServer entry for application initialization with doAppInitAfterRestart="true" (see the snippet below)
"Application Initialization" role feature added to the server
With those things being set, the deployment process is basically to publish the site and iisreset. If there are better options, I'm still looking :)
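For reference, the web.config piece from the list above looks roughly like this (the initializationPage value is just an example):

<system.webServer>
  <applicationInitialization doAppInitAfterRestart="true">
    <add initializationPage="/" />
  </applicationInitialization>
</system.webServer>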
We've implemented a "background service" in our ASP.NET web app that receives messages from MSMQ at random intervals, without requiring an HTTP request to start up the application. We auto-start this web app using a serviceAutoStartProvider.
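For context, the provider is wired up roughly like this (a simplified sketch; the class name and the MessageListener helper are placeholders for our actual code):

public class MsmqPreloadClient : System.Web.Hosting.IProcessHostPreloadClient
{
    // IIS calls this when the app pool starts (serviceAutoStartProvider),
    // without waiting for an HTTP request.
    public void Preload(string[] parameters)
    {
        MessageListener.Start(); // starts the MSMQ receive loop
    }
}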
Everything works when IIS initially starts up, when the server is rebooted, and so on; we receive messages just fine. BUT if we just stop the site in IIS (without touching the application or app pool), the application stops receiving MSMQ messages. And when we start the website again, the serviceAutoStartProvider is not called again, so our app does not start listening for MSMQ messages again!
If we issue an HTTP request against the web app after the IIS site has been stopped and started again, it starts listening for MSMQ messages again.
Shouldn't our "background service" web app continue to listen to MSMQ messages even if the IIS site is stopped? It won't get any requests, but I think it should continue to run.
What exactly happens in an ASP.NET application/app pool when the IIS site is stopped? Are any events fired that we can hook into? The app pool claims to be "started" in IIS Manager, but no code is running in it.
Why isn't our serviceAutoStartProvider called when the site is started again? I believe it is "by design", since the application isn't really stopped. But the application isn't running either; it has to be woken up by an actual HTTP request.
When an IIS web app shuts down (e.g. because no new HTTP(S) requests arrived within the idle timeout), the .NET app domain (inside the app pool worker process) completely closes and unloads. This includes all background threads, including those used by the .NET thread pool.
A web app can be configured with a longer (or no) idle timeout, and then background worker threads can continue to process work.
But it would be better to run such workers in a dedicated service process that is managed completely separately.
Or, even better, use IIS application hosting with WCF to create the MSMQ listener. I understand that in this case the integration of Windows Process Activation Services (WAS) with IIS will restart the web app if a new message arrives after it has been shut down.
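A rough sketch of what such a WCF contract could look like (names are illustrative; the endpoint would use netMsmqBinding with net.msmq activation configured in web.config, so WAS can spin the app up when a message lands in the queue):

using System.ServiceModel;

[ServiceContract]
public interface IQueueListener
{
    // Queued operations must be one-way.
    [OperationContract(IsOneWay = true)]
    void Process(string payload);
}

public class QueueListener : IQueueListener
{
    public void Process(string payload)
    {
        // Handle the dequeued MSMQ message here.
    }
}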
I would host the MSMQ listener in a Windows service. Why couple it to IIS?
UPDATE
Actually, what I mean is: why couple MSMQ and ASP.NET in the same app pool?
You can now use "Application Initialization" feature of IIS8 (and IIS7.5), more information including version availability and usage documentation can be found at:
http://www.iis.net/learn/get-started/whats-new-in-iis-8/iis-80-application-initialization
This replaces "Application Warm-Up Module" which is no longer supported, and provides us with proper control over component/service initialization in an "always running" scenario.