I have a fairly long process running in a standard postback. Then I have the following page method to return progress to a repeating JavaScript function that is supposed to report on progress.
My simple page method:
[WebMethod]
public static int GetProgress()
{
    return (int)(HttpContext.Current.Session["ActivationResources.ImportProgress"] ?? 0);
}
My client-side script:

function startProgress() {
    window.setInterval(updateImportProgress, 500);
}
var importProgress = 0;

function updateImportProgress() {
    PageMethods.GetProgress(function (result, response, context) {
        if (result == importProgress) {
            $("#messageLabel").append(" .");
        }
        else {
            $("#messageLabel").html("Busy importing resources - " + result + "%");
        }
        importProgress = result;
    });
}
The updateImportProgress function is called, but Firebug reports that the POST for GetProgress is 'aborted'. Why could this be? I suspect that this is because the call to the static method is blocked by the actual executing method whose progress I am trying to monitor. A breakpoint in the GetProgress method is never hit.
I had this issue before. As a workaround, I implemented the code in an .ashx handler and that way avoided touching the same page. It should work for your code as well, since you simply report a session variable.
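Something like this should do (a minimal sketch; the handler name is mine, and IReadOnlySessionState is an assumption - it gives the handler read access to the session without requesting the session writer lock):

// ProgressHandler.ashx.cs -- hypothetical name
using System.Web;
using System.Web.SessionState;

public class ProgressHandler : IHttpHandler, IReadOnlySessionState
{
    public void ProcessRequest(HttpContext context)
    {
        // Same session key as in the question's page method.
        int progress = (int)(context.Session["ActivationResources.ImportProgress"] ?? 0);
        context.Response.ContentType = "text/plain";
        context.Response.Write(progress.ToString());
    }

    public bool IsReusable
    {
        get { return true; }
    }
}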
Btw, note that your code breaks when you have 2 requests running in a single session. That can happen when you open multiple tabs of the same page in the browser.
I have implemented a custom receiver for Microsoft ASP.NET WebHooks by implementing WebHookHandler.
public class Web_WebHookHandler : WebHookHandler
{
    public Web_WebHookHandler()
    {
        this.Receiver = CustomWebHookReceiver.ReceiverName;
    }

    public override Task ExecuteAsync(string generator, WebHookHandlerContext context)
    {
        SendNotification();
        return Task.FromResult(true);
    }

    private void SendNotification()
    {
        Task.Factory.StartNew(() =>
        {
            // doing some processing
        });
    }
}
Whenever some event gets fired, it hits the receiver above 3 times. I have tried everything, but nothing made any difference. Please help me sort it out.
Try adding the below code in ExecuteAsync before the return, i.e.:

context.Response = new System.Net.Http.HttpResponseMessage(System.Net.HttpStatusCode.Gone);
return Task.FromResult(true);
Actually, the WebHooks dispatcher inspects the response from your receiver and retries if a proper response is not sent back. So in order to tell the dispatcher that the request has been processed and everything is okay, you need to set context.Response and also return Task.FromResult(true). Otherwise it will keep retrying, at least 3 times.
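Put together, the handler's ExecuteAsync would look something like this (a sketch assembled from the snippets above):

public override Task ExecuteAsync(string generator, WebHookHandlerContext context)
{
    SendNotification();

    // Tell the dispatcher the webhook was handled so it does not retry.
    context.Response = new System.Net.Http.HttpResponseMessage(System.Net.HttpStatusCode.Gone);
    return Task.FromResult(true);
}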
When calling a WebMethod on a web page using jQuery, we define the method as static. However, a static method always has only one instance. What happens when multiple web requests are made? Do they really execute concurrently, or are all the requests pipelined, waiting for the WebMethod to accept them?
I created a sample console program to simulate the scenario with static-method work, and found the calls to execute in sequential order.
class Program
{
    static int count = 10;

    static void Main(string[] args)
    {
        new Program().foobar();
        Console.ReadLine();
    }

    public void foobar()
    {
        Parallel.Invoke(() => work("one"), () => work("two"), () => work("three"), () => work("four"));
    }

    static void work(string str)
    {
        Thread.Sleep(3000);
        count++;
        Console.WriteLine(str + " " + count);
    }
}
Can you please shed some light on this concept?
They will not execute sequentially. If you created multiple apps in a client-server scenario it would be a better example, since your console app inherently runs everything sequentially.

That said, with static methods you just need to be aware of shared resources, shared data, etc. Local data is fine.
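To illustrate the distinction (my sketch, not from the original posts): locals in a static method are created per call and are safe under concurrency, while static fields are shared by every concurrent request and need synchronization:

using System.Threading;

static class Demo
{
    // Shared by every concurrent caller -- access must be synchronized.
    static int sharedCount;

    public static int Work(int input)
    {
        int local = input * 2;                   // local variable: one copy per call, always safe
        Interlocked.Increment(ref sharedCount);  // shared field: use Interlocked or a lock
        return local;
    }
}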
Consider the following (based on the default MVC template), which is a simplified version of some "stuff" that happens in the background - it completes fine, and shows the expected result, 20:
public ActionResult Index()
{
    var task = SlowDouble(10);
    string result;
    if (task.Wait(2000))
    {
        result = task.Result.ToString();
    }
    else
    {
        result = "timeout";
    }
    ViewBag.Message = result;
    return View();
}
internal static Task<long> SlowDouble(long val)
{
    TaskCompletionSource<long> result = new TaskCompletionSource<long>();
    ThreadPool.QueueUserWorkItem(delegate
    {
        Thread.Sleep(50);
        result.SetResult(val * 2);
    });
    return result.Task;
}
However, now if we add some async into the mix:
public static async Task<long> IndirectSlowDouble(long val)
{
    long result = await SlowDouble(val);
    return result;
}
and change the first line in the route to:
var task = IndirectSlowDouble(10);
then it does not work; it times out instead. If we add breakpoints, the return result; in the async method only happens after the route has already completed - basically, it looks like the system is unwilling to use any thread to resume the async operation until after the request has finished. Worse: if we had used .Wait() (or accessed .Result), it would totally deadlock.

So: what is with that? The obvious workaround is "don't involve async", but that is not easy when consuming libraries etc. Ultimately, there is no functional difference between SlowDouble and IndirectSlowDouble (although there is obviously a structural difference).
Note: the exact same thing in a console / winform / etc will work fine.
It's to do with the way the synchronization context is implemented in ASP.NET (pre-.NET 4.5). There are tons of questions about this behavior:
Task.WaitAll hanging with multiple awaitable tasks in ASP.NET
Asp.net SynchronizationContext locks HttpApplication for async continuations?
In ASP.NET 4.5, there's a new implementation of the sync context that's described in this article.
http://blogs.msdn.com/b/webdev/archive/2012/11/19/all-about-httpruntime-targetframework.aspx
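For reference, opting into the new behavior amounts to targeting 4.5 in web.config, per the linked article (a minimal sketch; the application must actually run on .NET 4.5 for the attribute to take effect):

<!-- web.config: opt in to the .NET 4.5 behaviors, including the new sync context -->
<system.web>
  <httpRuntime targetFramework="4.5" />
</system.web>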
When you use .Result there is always a possibility of deadlock, because .Result is blocking by nature. The way to avoid deadlocks is to not block on Tasks (you should use async and await all the way down). The subject is described in detail here:
Don't Block on Async Code
One fix is to add ConfigureAwait:
public static async Task<long> IndirectSlowDouble(long val)
{
    long result = await SlowDouble(val).ConfigureAwait(false);
    return result;
}
Another fix is to use async/await throughout:
public async Task<ActionResult> Index()
{
    var task = IndirectSlowDouble(10);
    long result = await task;
    ViewBag.Message = result.ToString();
    return View();
}
I've heard a LOT in the past about how programming with Threads and Tasks is very dangerous to the naive. Well, I'm naive, but I've got to learn sometime. I am making a program (really, it's a Generic Handler for ASP.NET) that needs to call a 3rd party and wait for a response. While waiting, I'd like to have the handler continue doing some other things, so I am trying to figure out how to do the 3rd-party web request asynchronously. Based on some answers to some other questions I've received, here is what I've come up with, but I want to make sure I won't get into big problems when my handler is called multiple times concurrently.
To test this I've built a console project.
class Program
{
    static void Main(string[] args)
    {
        RunRequestAsynch test = new RunRequestAsynch();
        test.TestingThreadSafety = Guid.NewGuid().ToString();
        Console.WriteLine("Started:" + test.TestingThreadSafety);

        Task tTest = new Task(test.RunWebRequest);
        tTest.Start();

        while (test.Done == false)
        {
            Console.WriteLine("Still waiting...");
            Thread.Sleep(100);
        }
        Console.WriteLine("Done. " + test.sResponse);
        Console.ReadKey();
    }
}
I instantiate a separate object (RunRequestAsynch), set some values on it, and then start it. While that is processing, I just output a string to the console window.
public class RunRequestAsynch
{
    public bool Done = false;
    public string sResponse = "";
    public string sXMLToSend = "";
    public string TestingThreadSafety = "";

    public RunRequestAsynch() { }

    public void RunWebRequest()
    {
        Thread.Sleep(500);
        // HttpWebRequest stuff goes here
        sResponse = TestingThreadSafety;
        Done = true;
        Thread.Sleep(500);
    }
}
So... if I run 1000 of these simultaneously, I can count on the fact that each instance has its own memory and properties, right? And the line "Done = true;" in one instance won't fire and cause every instance of the Generic Handler to die, right?
I wrote a .bat file to run several instances, and the guid I set on each specific object seems to stay the same for each instance, which is what I want...but I want to make sure I'm not doing something really stupid that will bite me in the butt under full load.
I don't see any glaring problems; however, you should consider using Task.Factory.StartNew instead of Start. Each task will only be executed once, so there isn't any problem with multiple tasks running simultaneously.

If you want to simplify your code a little and take advantage of Factory.StartNew, in your handler you could do something like this (from what I remember of your last question):
Task<byte[]> task = Task.Factory.StartNew<byte[]>(() => // begin task
{
    // Replace with your web request; I guessed that it's downloading data.
    // Change this to whatever makes sense.
    using (var wc = new System.Net.WebClient())
        return wc.DownloadData("Some Address");
});

// Call the method that parses the XML here; it runs in parallel with the download.

byte[] result = task.Result; // Wait for the task to finish and fetch the result.
Some scenarios to ponder: there is legacy code with the following implementations, Example 1 and Example 2. If we try to implement the MSDN recommendation, the legacy code fails.
Here is a Legacy code example:
Example 1:
void Page_Load() {
    // ... some code
    if (condition) {
        // some condition
    } else {
        RedirectPage(url);
    }
    // another code block
    // some other conditions
}
Example 2:
a. File1.ascx

void Page_Load() {
    try {
        // ... some code
        base.CheckPreference();
        RedirectPage(defaultPage);
    }
    catch (Exception ex) {
        ExceptionHandling.GetErrorMessage(ex);
    }
}
b. BaseClass.cs // this is the base class

void CheckPreference() {
    try {
        if (condition) {
            RedirectPage(url1);
        } else if (condition2) {
            RedirectPage(url2);
        } else {
            // update session
        }
    }
    catch (Exception ex) {
        ExceptionHandling.GetErrorMessage(ex);
        throw;
    }
}

void RedirectPage(string url) {
    Response.Redirect(url);
}
One possible way is to add a boolean field to the class, e.g. endExecution, and set the field to true whenever RedirectPage is called. We also have to update the RedirectPage code; see the snippet below:
// Updated code - MSDN recommendation.
void RedirectPage(string url) {
    Response.Redirect(url, false);
    this.Context.ApplicationInstance.CompleteRequest();
    endExecution = true;
}
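Callers then check the flag after any call that might redirect, e.g. (a sketch; endExecution is the field suggested above):

void Page_Load() {
    // ... some code
    base.CheckPreference();
    if (endExecution) {
        return; // a redirect was issued; skip the rest of the page logic
    }
    RedirectPage(defaultPage);
}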
Please suggest some other better ways to improve the legacy code implementation.
Probably the most unintuitive thing for folks issuing a redirect is that in our minds we've already returned from the method when we call Response.Redirect (or whatever the equivalent is in your language/platform of the day). In reality, all we've done is call a method.

Bottom line is that you have to stop processing the request to avoid trying to commit two responses for the same request. That would throw an exception on just about any platform I've worked with.
ASP.NET MVC improved this with ActionResult, so that you are returning from the method (and terminating the remainder of request processing) with code that looks like this:
return Redirect(url);
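For example, a minimal action along these lines (the repository lookup is a hypothetical stand-in for whatever produces the redirect condition):

public ActionResult Details(int id)
{
    var item = _repository.Find(id); // hypothetical data lookup
    if (item == null)
    {
        // One statement both issues the redirect and ends the method.
        return Redirect("/NotFound");
    }
    return View(item);
}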
Bottom line is that you need to get in the habit of returning from your event right after you perform your redirect. Any deviation from that habit needs to be documented in the code, along with the reason why. This will help make the application behave the way you expect.
The approach that you've taken is perfectly reasonable.