I ran a test in the Page_Load method of an ASP.NET page:
protected void Page_Load(object sender, EventArgs e)
{
    Task.Factory.StartNew(() =>
    {
        while (true)
        {
            Response.Write("Hi");
            Thread.Sleep(100);
        }
    });
    Thread.Sleep(2000);
    Response.Write("hello");
}
I found that the task gets killed when Page_Load finishes.
Is that true, or is the task still alive?
If I want the task to stay alive, how can I do that?
Using the Response object from multiple threads is not safe. Also, once a request has ended, using its Response is not going to succeed. But let's assume for a moment that you had properly locked it:
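A sketch of what that locking might look like (the static lock object is illustrative, not part of the original code):

private static readonly object _responseLock = new object();

protected void Page_Load(object sender, EventArgs e)
{
    Task.Factory.StartNew(() =>
    {
        while (true)
        {
            // Serialize access to Response across threads.
            lock (_responseLock)
            {
                Response.Write("Hi");
            }
            Thread.Sleep(100);
        }
    });
}

Even with the lock, the writes start failing as soon as the request completes, because the Response is torn down together with the request.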
Even then, the task is not killed, because ASP.NET does not even know that the task exists. How could it? You never hand a reference to that task to ASP.NET. Most likely you haven't really looked at the error you are getting: the task almost certainly died with an exception the moment it touched the disposed Response.
Don't use ASP.NET objects in an unsafe way if you want that task to continue to run.
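If you need background work to outlive the request, register it with the runtime so ASP.NET knows about it. On .NET 4.5.2 and later, one option is HostingEnvironment.QueueBackgroundWorkItem (a minimal sketch; note that registered work still does not survive an app domain recycle, it only gets a short grace period at shutdown):

using System.Web.Hosting;

protected void Page_Load(object sender, EventArgs e)
{
    // ASP.NET tracks this work item and tries to let it finish
    // (honoring the token) before tearing the app domain down.
    HostingEnvironment.QueueBackgroundWorkItem(cancellationToken =>
    {
        while (!cancellationToken.IsCancellationRequested)
        {
            // Do work that does NOT touch Request/Response here.
            Thread.Sleep(100);
        }
    });
}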
Related
I have written an application that uses background workers for long-running tasks. At times, after a task is completed, the application freezes. It doesn't do it right away; it happens after the application sits idle for a little while.
To find out where it is hanging, I ran it in my development environment and waited for it to freeze. I then went to Debug > Break All. It is hanging in the Main() method in Program.cs:
static class Program
{
    [STAThread]
    static void Main()
    {
        Application.EnableVisualStyles();
        Application.SetCompatibleTextRenderingDefault(false);
        Application.Run(new Main());
    }
}
The Application.Run line is highlighted as the place where the application is hung. When I hover my cursor over the caret in the left border, I get a tooltip saying "This is the next statement to execute when this thread returns from the current function."
In looking at this code I realized that it is constructing the "main" form of the application, which I named "Main". So my first question is: does it matter that the current method is also named "Main"? If so, what are the ramifications of renaming the form, if that is possible?
If that is not an issue, then I would imagine it comes back to the background worker. The application never freezes if those long-running tasks are never run. I know that you should never access the UI from a background worker thread, and I don't think I'm doing that, but here is some code in case someone spots something:
First I start the worker from the UI thread, passing in an argument:
bgwInternal.RunWorkerAsync(clients);
In the DoWork method it calculates and creates invoices for the passed-in argument (clients). It creates PDF files and saves them to disk. None of that work touches the UI. It does use the ProgressChanged event handler to update a progress bar and a label in the UI:
private void bgwInternal_ProgressChanged(object sender, ProgressChangedEventArgs e)
{
    pgbProgress.Value = e.ProgressPercentage;
    lblProgress.Text = e.ProgressPercentage.ToString();
}
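For context, the DoWork handler described above would be shaped roughly like this (a simplified sketch; CreateInvoicePdf, the List<Client> argument type, and the loop structure are hypothetical stand-ins, not the original code):

private void bgwInternal_DoWork(object sender, DoWorkEventArgs e)
{
    var clients = (List<Client>)e.Argument; // hypothetical argument type
    for (int i = 0; i < clients.Count; i++)
    {
        if (bgwInternal.CancellationPending)
        {
            e.Cancel = true;
            return;
        }
        // Pure computation and disk I/O; no UI access on this thread.
        CreateInvoicePdf(clients[i]);
        // ReportProgress marshals to the UI thread via ProgressChanged.
        bgwInternal.ReportProgress((i + 1) * 100 / clients.Count);
    }
}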
And finally the RunWorkerCompleted event handler:
private void bgwInternal_RunWorkerCompleted(object sender, RunWorkerCompletedEventArgs e)
{
    if (e.Error != null)
    {
        MessageBox.Show("Error occurred during invoice creation.\r\n\r\nError Message: " + e.Error.Message,
            "Error", MessageBoxButtons.OK, MessageBoxIcon.Error);
    }
    else if (!e.Cancelled)
    {
        MessageBox.Show("Invoice Creation Complete", "Complete", MessageBoxButtons.OK, MessageBoxIcon.Information);
    }
    else
    {
        MessageBox.Show("Invoice Creation Cancelled", "Cancelled", MessageBoxButtons.OK, MessageBoxIcon.Information);
    }
    btnCreateInv.Enabled = true;
    btnClose.Enabled = true;
    btnCancel.Enabled = false;
}
Could it be hanging because I'm accessing UI elements in this event handler?
One final note: I had been using Application.DoEvents() in a loop:
while (bgwInternal.IsBusy)
{
    Application.DoEvents();
}
But I commented that out to see if it would make a difference, and it did not.
Not having a lot of multithreading experience, I chose background worker threads because they are simple and straightforward. Other than using Debug > Break All, I really don't know how to track down the exact reason this is happening.
Any thoughts or ideas would be greatly appreciated.
Thanks.
I have a Global.asax file, and I use these two functions to update a cache value every 15 seconds:
protected void Application_Start(object sender, EventArgs e)
{
    Context.Cache.Insert("value", "some value", null, DateTime.Now.AddSeconds(15),
        Cache.NoSlidingExpiration, CacheItemPriority.Default, new CacheItemRemovedCallback(updating));
}

private void updating(string key, object value, CacheItemRemovedReason reason)
{
    Context.Cache.Insert("value", "updated value", null, DateTime.Now.AddSeconds(15),
        Cache.NoSlidingExpiration, CacheItemPriority.Default, new CacheItemRemovedCallback(updating));
}
But it gives me a NullReferenceException: the context is null. Why can't I use Context in the updating function?
Application_Start doesn't have any request context.
The first event that does is Application_BeginRequest.
Application_Start occurs when the website gets fired up for the first time, or after it has been recycled.
To keep the cache item renewed, I suggest you do that in Application_BeginRequest: check whether the item is there, and if not, insert it again.
That way it only uses memory while the site is actually being hit.
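A minimal sketch of that approach; it uses HttpRuntime.Cache, which is reachable without a request context (the removed-item callback runs with no current HttpContext, which is why Context.Cache throws there):

protected void Application_BeginRequest(object sender, EventArgs e)
{
    // Re-insert the item if it has expired or was never added.
    if (HttpRuntime.Cache["value"] == null)
    {
        HttpRuntime.Cache.Insert("value", "some value", null,
            DateTime.Now.AddSeconds(15), Cache.NoSlidingExpiration);
    }
}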
I saw an example of a forever-iframe implementation (a Comet simulation), so I decided to test it, but with an asynchronous approach so that there is no blocking.
Pretty simple:
I have a page (index.html) with a hidden iframe whose src points to AdminPush.aspx:
protected void Page_Load(object sender, EventArgs e)
{
    UpdateMessage();
}

protected void UpdateMessage()
{
    HttpContext.Current.Response.ContentType = "text/html";
    Response.Write("<script>parent.UpdateMessage(" + DateTime.Now.Second + ")</script>");
    Response.Flush();

    // async part goes here!
    this.RegisterAsyncTask(new PageAsyncTask(async cancellationToken =>
    {
        await Task.Delay(2000, cancellationToken);
        UpdateMessage();
    }));
}
On the AdminPush.aspx page I added:
Async="true"
On the HTML page (index.html) I added:
function UpdateMessage(Message)
{
    console.log(Message);
}

function setupAjax() // body onload calls it
{
    var iframe = document.createElement("iframe");
    iframe.src = "adminpush.aspx";
    iframe.style.display = "none";
    document.body.appendChild(iframe);
}
So basically the iframe is injected with script commands, which update the parent of the iframe, index.html.
It is working.
But when I tested it, it stopped updating after 45 seconds.
I thought it had to do with the requestTimeout property in web.config, but it wasn't that.
It was the missing AsyncTimeout property on the AdminPush.aspx page.
Question #1:
According to MSDN, AsyncTimeout:
Gets or sets a value indicating the time-out interval used when
processing asynchronous tasks.
But it also says:
A TimeSpan that contains the allowed time interval for completion of
the asynchronous task. The default time interval is 45 seconds.
Please notice that I only "delay" 2 seconds each time.
At first I set the timeout to 1 minute, but then it failed as well. I assumed the timeout applied to each operation, not to the sum of all async operations.
Why is it like that? It is supposed to be a timeout for a single async task, yet it behaves as a timeout over the sum of all tasks.
The wording here is misleading. Any clarification?
Question #2:
I need to set it to the maximum value. What is that value? And still, I need to support a browser for a very long time, so I'm afraid even the maximum won't help either.
Is there any way I can RESET this value (after n cycles)?
I know that there are other solutions/libraries like SignalR which do this job; still, that does not prevent learning how other things are done.
The idea of asynchronous pages is to free IIS so more users can be served; if you create a page that "never" finishes, you will eat up all your resources.
That being said... if you still want to do it...
We "know" (from the documentation) that asynchronous pages work by splitting the execution of the page in two: everything BEFORE the background tasks and everything AFTER them. That way IIS can process more requests while the background tasks finish their work. (There is more to it, but that is enough for now.)
So they "must" be creating some kind of task manager (like a root/main task) that executes all the registered tasks in sequence. IIS starts processing the page, fires up the task manager, frees IIS, the task manager keeps processing the tasks, and when it finishes, it returns control to IIS.
That would explain why AsyncTimeout governs all the registered tasks together instead of one-by-one (the timeout is actually applied to the task manager).
I tested a variation of your code with a timeout of 6000 seconds and it works:
C#:
protected void Page_Load(object sender, EventArgs e)
{
    Page.RegisterAsyncTask(new PageAsyncTask(ProcessTask));
}

protected async Task ProcessTask()
{
    await Task.Delay(1000);
    Response.Write(DateTime.Now.ToLongTimeString() + "<br/>");
    Response.Flush();
    Page.RegisterAsyncTask(new PageAsyncTask(ProcessTask));
}
aspx:
<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="Sample03.Default" Async="true" AsyncTimeout="6000" %>
Hope it helps.
I have an ASP.NET page that creates a service reference to a WCF service and makes calls in multiple places in the page. I instantiate the client in Page_Load and store it in an instance variable:
private FooClient _serviceClient;

protected void Page_Load(object sender, EventArgs e)
{
    _serviceClient = new FooClient();
    _serviceClient.GetAllFoos();
}

protected void btnSave_Click(object sender, EventArgs e)
{
    _serviceClient.SaveFoo();
}
I just discovered that I need to dispose of the client when I am done using it, or else the connections will be kept alive and will block incoming connections once I reach the maximum number of connections. Where would be the best place to dispose of these references? I was thinking of doing it in the OnUnload event.
Is there a better way of doing this?
Personally, I would open the FooClient only when I need it: not in Page_Load, but in the methods that make the web service calls. That way you know exactly what happens to it. I usually take the following approach:
var client = OpenClient();
try
{
    // Perform operation(s) on client.
}
finally
{
    CloseClient(client);
}
This way you are sure you close your proxy, whatever happens (if there are exceptions you need to catch, simply add a catch clause). The CloseClient method should look like the one in PaulStack's answer.
Another benefit of this approach is that multiple calls don't interfere with each other. Suppose one of your web service calls leads to an unexpected exception: the client channel is now in a faulted state, and therefore unusable for any other calls.
And third, suppose an exception occurs that you cannot or do not want to catch. I'm not sure Page_Unload is even called in that case (nor which other page methods will run). That would also leave connections open.
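A CloseClient along those lines might look like this (a sketch of the standard close-or-abort pattern; the method name simply matches the pseudocode above):

private void CloseClient(FooClient client)
{
    // A faulted channel cannot be closed gracefully; abort it instead.
    if (client.State == CommunicationState.Faulted)
    {
        client.Abort();
        return;
    }
    try
    {
        client.Close();
    }
    catch (CommunicationException)
    {
        client.Abort();
    }
    catch (TimeoutException)
    {
        client.Abort();
    }
}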
According to the MSDN documentation and personal experience, do something as follows:
try
{
    ...
    client.Close();
}
catch (CommunicationException e)
{
    ...
    client.Abort();
}
catch (TimeoutException e)
{
    ...
    client.Abort();
}
catch (Exception e)
{
    ...
    client.Abort();
    throw;
}
This allows the client to be closed correctly, or aborted when necessary, rather than leaving it to be disposed at some predefined later time: only keep the connection open as long as you definitely have to. Personally, I don't like leaning on IDisposable for this, as it's very heavy in performance.
Our ASP.NET 2 web application handles exceptions very elegantly. We catch exceptions in Global.asax, in Application_Error. From there we log the exception and show a friendly message to the user.
However, this morning we deployed the latest version of our site. It ran fine for half an hour, but then the app pool crashed. The site did not come back up until we restored the previous release.
How can I make the app pool crash and skip the normal exception handler? I'm trying to replicate this problem, but with no luck so far.
Update: we found the cause. One of our pages was screen-scraping another page, but the URL was configured incorrectly and the page ended up screen-scraping itself infinitely, causing a stack overflow exception.
The most common error that I have seen cause a "pool crash" is a recursive call loop:
public string sMyText
{
    // Bug: the getter reads the property itself and the setter assigns to it,
    // so any access recurses until a StackOverflowException kills the pool.
    get { return sMyText; }
    set { sMyText = value; }
}

Just access sMyText...
In order to do this, all you need to do is throw an exception (without handling it, of course) from outside the context of a request.
For instance, an exception raised on another thread should do it:
protected void Page_Load(object sender, EventArgs e)
{
    // Create a thread to throw an exception
    var thread = new Thread(() => { throw new ArgumentException(); });

    // Start the thread to throw the exception
    thread.Start();

    // Wait a short while to give the thread time to start and throw
    Thread.Sleep(50);
}
More information can be found in the MS Knowledge Base.
Aristos' answer is good. I've also seen it done with a careless override in the Page life cycle, when someone changed the overridden method from OnInit to OnLoad without changing the base call, so it recursed in circles through the life cycle:
protected override void OnLoad(EventArgs e)
{
    // some other, most likely rubbish, code
    base.OnInit(e); // bug: should be base.OnLoad(e)
}
You could try throwing a ThreadAbortException.