Posting xml with ServerXMLHTTP timeout - asp.net

I'm working on two websites. One is an existing classic asp site which posts xml to a new asp.net (.net 3.5) website. The classic asp site is using msxml's serverxmlhttp object in vbscript to send this xml over. The whole thing works until I make a seemingly unrelated change to the asp.net site.
When I add a few lines of code that use System.Speech.Synthesis to generate a wav file from text, the classic ASP site's serverxmlhttp.send command times out. As far as I can tell the asp.net page is working fine; it makes it through the few new lines of code without an issue (the wav file is generated). The speech code that causes the issue finishes well before the timeout.
It seems like the asp.net page was actually sending some sort of acknowledgement back to the classic page which is no longer getting sent. I should also point out that the speech code was throwing an exception saying it needed to be asynchronous, which I fixed by adding Async="true" to the @Page directive. However, it works when async="true"; it's just those speech lines that break it. The "problem code" is just
SpeechSynthesizer speaker = new SpeechSynthesizer();
speaker.Volume = 100;
speaker.SelectVoiceByHints(System.Speech.Synthesis.VoiceGender.Female, System.Speech.Synthesis.VoiceAge.Adult, 0);
try
{
    speaker.SetOutputToWaveFile("c:\\test\\output.wav");
}
catch (Exception ex)
{
    retVal = false;
}
speaker.Speak(msgText);
speaker.SetOutputToDefaultAudioDevice();
Does anyone have any suggestions on what could be wrong or what I could use to help debug this?

It seems like the asp.net page was actually sending some sort of acknowledgement back to the classic page which is no longer getting sent
It sounds like you should investigate it more so you can tell us the server's response behavior before and after the change. Also, please indicate the exception that was thrown.
My guess would be that these APIs don't work well in a service process, though really I have no clue. I'm curious about the exception; you're not clear about what you made asynchronous.
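If it is the speech API misbehaving inside the worker process, one low-risk experiment (a sketch, not a confirmed fix) is to isolate the synthesizer on its own thread and dispose it, which also releases the wav file handle before the page finishes. msgText is the same variable as in the question's code:

using System;
using System.Speech.Synthesis;
using System.Threading;

// Sketch: run the synthesis on a dedicated thread and Dispose() the
// synthesizer, which closes the output wav file as a side effect.
bool retVal = true;
Thread worker = new Thread(() =>
{
    try
    {
        using (SpeechSynthesizer speaker = new SpeechSynthesizer())
        {
            speaker.Volume = 100;
            speaker.SelectVoiceByHints(VoiceGender.Female, VoiceAge.Adult, 0);
            speaker.SetOutputToWaveFile("c:\\test\\output.wav");
            speaker.Speak(msgText); // msgText comes from the surrounding page code
        }
    }
    catch (Exception)
    {
        retVal = false; // mirror the question's error signalling
    }
});
worker.Start();
worker.Join(); // wait for synthesis to finish before the response completes

If the send stops timing out with the synthesizer isolated like this, that would point at the speech objects holding something open in the worker process.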

Related

What actually happens when you Stop Debugging?

I have two ASP.NET applications: one in VB, one in C#.
When a user logs in with certain credentials, the VB app should re-route them to the C# app. Likewise, certain credentials in the C# app get re-routed to the VB app.
VB -> C# works. This functionality was written by a third party. (The C# application is essentially just a rewrite of our VB app, but more modern. However, the entire package isn't being rewritten).
I've tried to reverse the code so that the C# app will call a stored procedure in the DB to create a token, redirect the browser to the VB app which calls a procedure to get that token and set some Session variables.
I don't have it quite working right; one of the major issues is that the browser simply does not navigate off the C# login page to the VB page. If I run Profiler on the DB, however, I can see that "load token" stored procedure being called. That must mean the code is getting executed, but the browser isn't redirecting correctly, right?
More importantly, and the reason I'm posting this question however is I don't understand what's actually happening when I stop debugging my app. I set a break point immediately following the call to create that token in the DB. So I run my application, log in, trigger the break point and I can see the good data in the DB. If I immediately Stop Debugging, the load token procedure still gets called. How!?
Here's the code;
In my LoginController:
public ActionResult ValidateUser(Login objLogin)
{
    var ds = LoginData.ValidateUser(objLogin);
    string url = "someUrl/" + ds.Tables["Key"].Rows[0][0].ToString();
    System.Web.HttpContext.Current.Response.Redirect(url, true);
    return Json(objLogin, JsonRequestBehavior.AllowGet);
}
That redirect points to the landing page in the VB application, which parses out the key from the URL and passes that as a parameter to another DB SP... However, the browser never navigates off my login page regardless if I stop debugging or not.
Frankly, I'm not entirely sure what the return statement does; if I try to step into it, it just continues on as if I hit "Play". The application resumes control and just sits at the login page. It's part of the third-party rewrite. The VB app was very old and pretty unstructured; the new C# rewrite uses MVC. I'm familiar with the principles but I'm not an expert on it, especially not in .NET.
And in LoginData
public DataSet ValidateUser(Login objLogin)
{
    DataSet dsData;
    using (SqlCommand sqlCommand = new SqlCommand("Validate_User_Main"))
    {
        // execute this procedure; assign results to dsData
    }
    string authKey = GetAuthKey(dsData.userId);
    DataTable dtTemp = new DataTable("key"); // putting break point here, after the key gets created but before the redirect is called in LoginController.ValidateUser
    dtTemp.Columns.Add("Key");
    DataRow drTemp = dtTemp.NewRow();
    drTemp[0] = authKey;
    dtTemp.Rows.Add(drTemp);
    dsData.Tables.Add(dtTemp);
    return dsData;
}
Edit: If I close my browser window while still waiting on my breakpoint, then stop debugging, that "load token" call isn't made. If I simply Stop Debugging but leave my browser open, it gets called. So it must be redirecting "behind the scenes", right? I don't understand...
When you stop debugging, the debugger is detached. This simply means that it stops tracking the running code. The code keeps running, as you have seen, but now you can't set breakpoints, watch variables, etc.
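On the redirect part of the question: calling Response.Redirect and then returning JSON from the same MVC action is an odd mix. A minimal sketch of the more conventional shape, assuming the action doesn't actually need to hand JSON back to a script:

public ActionResult ValidateUser(Login objLogin)
{
    var ds = LoginData.ValidateUser(objLogin);
    string url = "someUrl/" + ds.Tables["Key"].Rows[0][0].ToString();

    // Let MVC issue the 302 itself instead of aborting the request
    // pipeline with Response.Redirect(url, true).
    return Redirect(url);
}

One caveat: if the login form posts via AJAX, the browser won't follow a server-issued redirect on its own; the script would have to read the target URL out of the response and set window.location itself, which could explain the page never navigating.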

Have Page Method Unhandled Exceptions Behave as Other ASP.Net Unhandled Exceptions

I have a webform that has a single page method. My other web apps log unhandled exceptions to the system event log on the web server. Since the other developers that I work with expect to see entries for app errors in the event log, I would like this app to do the same.
However, I have the app send error emails when an exception is caught from calling code inside the page method. It is not writing to the event log when this occurs. Note: the page method re-throws the exception after calling my email notification method.
From what I've read so far it seems that ASP.Net logs errors to the event log by default. I imagine that the same is not true for Page Methods/WebMethods because they basically throw the exception to the client code calling it.
Is there a trivial way to have that exception bubble up properly so that it writes to the event log? No other apps write to the event log directly from what I've seen so I don't think the application could create a new source since our security people keep a lot of things locked down (with good intentions, yay security).
[WebMethod]
public static object MyPseudoWebMethod()
{
    try
    {
        // My exception-spawning unreliable code here
    }
    catch (Exception ex)
    {
        // Cleanup ...
        SendErrorNotification(ex); // static method, so no "this"
        throw; // <-- This doesn't bubble up but I'd love for it to!
    }
}
Hmm interesting problem. You are right in that WebMethod exceptions do NOT follow normal exception flow.
The Application_Error event is not fired if your web method throws an
exception. The reason for this is that the HTTP handler for XML Web
services consumes any exception that occurs while an XML Web service
is executing and turns it into a SOAP fault before the
Application_Error event is called.
(from here)
The above page suggests using a SOAP extension to catch that exception before it's swallowed, but here's how I'd do it if you don't want to do that:
1) Make a new 'error receiving' ASPX page that takes whatever params you want to record in your error log. For example, have this page take in a POST field named "ExceptionDetails" or whatever else you wish to capture. This page will NOT be viewed directly in a browser, so it doesn't need any ASPX controls or anything, but using a MasterPage on it won't hurt anything.
2) In the code-behind of this new page, grab whatever POSTs you are sending in and new-up an Exception with whatever details you need. Immediately throw this exception. Doing this means the exception will follow whatever flow other unhandled exceptions follow in the app (logging, emailing, etc.). (There's a sketch of this page below, after these steps.)
3) On the page that calls the WebMethod from JS, wrap the calls to the WebMethod in a try-catch.
4) In the catch block, print out whatever message you want in the browser, and initiate a new AJAX post to that new error-receiving ASPX page, passing along whatever POST fields you made that page look for.
The new AJAX call will NOT change ANYTHING in the user's perception by default. The AJAX call fires off a new request to that page, and the ASPX page itself is actually totally unaware that it's AJAX and not a normal browser request. Any cookies/session/authentication data that's currently set is available to the AJAXed page as well, if you are recording a user's ID or anything. If you look at the returned response in a tool like Firebug, you will see that it's actually the Yellow Screen of Death's HTML (unless you have a custom 500 page, in which case it's that HTML that comes back).
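A minimal sketch of the error-receiving page from step 2; the page name ErrorRelay.aspx and the ExceptionDetails field are just the illustrative names used above:

using System;
using System.Web.UI;

// Code-behind for the hypothetical ErrorRelay.aspx page. It exists only to
// re-throw whatever the AJAX catch block posted, so the exception re-enters
// the normal unhandled-exception flow (Application_Error, event log, email).
public partial class ErrorRelay : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        string details = Request.Form["ExceptionDetails"] ?? "(no details posted)";
        throw new Exception(details);
    }
}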
This is simply how the legacy ASMX web services work.
The only workaround is to stop using them (which you should do anyway, unless you're stuck with .NET 2.0). WCF doesn't have this problem.

Cookies NULL On Some ASP.NET Pages (even though it IS there!)

I'm working on an ASP.NET application and I'm having difficulty in understanding why a cookie appears to be null.
On one page (results.aspx) I create a cookie, adding entries every time the user clicks a checkbox. When the user clicks a button, they're taken to another page (graph.aspx) where the contents of that cookie are read.
The problem is that the cookie doesn't seem to exist on graph.aspx. The following code returns null:
Request.Cookies["MyCookie"];
The weird thing is this is only an issue on our staging server. This app is deployed to a production server and it's fine. It also works perfectly locally.
I've put debug code on both pages:
StringBuilder sb = new StringBuilder();
foreach (string cookie in Request.Cookies.AllKeys)
{
    sb.Append(cookie + "<br />");
}
this.divDebugOutput.InnerHtml = sb.ToString();
On results.aspx (where there are no problems), I can see the cookies are:
MyCookie
__utma
__utmb
__utmz
_csoot
_csuid
ASP.NET_SessionId
__utmc
On graph.aspx, you can see there is no 'MyCookie'
__utma
__utmb
__utmz
_csoot
_csuid
ASP.NET_SessionId
__utmc
With that said, if I take a look with Firecookie, I can see that the same cookie does in fact exist on BOTH pages! WTF?!?!?!?! (ok, rant over :-) )
Has anyone seen something like this before? Why would ASP.NET claim that a cookie is null on one page, and not null on another?
This was happening because I was running the app under a different virtual directory. When I ran it on the original one, it worked.
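If anyone hits the same symptom, one defensive move worth trying (a sketch; the Path value and entry name are assumptions about the scope you want) is to set the cookie's Path explicitly in the results.aspx code-behind, so a page served under a different virtual directory still receives it:

// Create the cookie with an explicit, site-wide path on results.aspx.
HttpCookie myCookie = new HttpCookie("MyCookie");
myCookie["SelectedBox"] = "checked";            // illustrative entry name
myCookie.Path = "/";                            // visible under every path on the host
myCookie.Expires = DateTime.Now.AddMinutes(30);
Response.Cookies.Add(myCookie);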
I would suggest loading the IIS Debug Diagnostics tools. It is entirely possible that on that particular server there is a resource problem or unhandled exception that is killing that particular cookie AFTER it is added to the response but before it is flushed to the user. This can be caused by a series of exceptions occurring in rapid succession, which triggers rapid-fail protection and shuts down the w3wp.exe process your page is running under. When the process is spun back up to feed the response, the cookie is gone and all that goes out is the rendered HTML.
You might also try turning off rapid fail protection or altering memory settings/recycling settings on the application pool.

Is it best to handle SQL Exceptions or rather use a customError page and the Application_Error method in Global.asax?

I am using ASP.NET 2.0. I found myself in an awkward situation because I could not find the cause of an error. Up until now I have been using this idiom for accessing data.
try
{
    // access data
}
catch (SqlException exception)
{
    Response.Redirect("error.aspx");
}
finally
{
    // Dispose of connections, commands, etc.
}
Now I can never see what the actual error was because it is always handled. Sometimes I can run SQL Profiler, catch the last statement, and run it in SQL Query Analyzer to see if there is a problem. That is obviously terrible.
So I thought the global.asax Application_Error method would save me, and I would email myself the exception. But this method seems to be called only for unhandled exceptions. So my question is: is it better not to handle the exception at all, let the system send the email, and use a customError page? Or is it better to ram the exception into the session, redirect to error.aspx, and try to do something with the error there? Is there a way to call Application_Error again? Because if I throw the exception again from error.aspx, I get the yellow screen of death.
In my opinion, use a library (log4net, for example) to log your exceptions and then throw the exception again; let ASP.NET redirect to a custom error page via the web.config customErrors section.
Log4Net or Enterprise Library's Exception Handling Application Block both have email sending features.
Also take a look at ELMAH, very smart and pluggable exception handling module.
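A minimal sketch of that log-and-rethrow approach in Global.asax, assuming a log4net logger (ELMAH or the Enterprise Library would slot into the same place):

private static readonly log4net.ILog Log =
    log4net.LogManager.GetLogger(typeof(Global)); // assumes the Global.asax class is named Global

protected void Application_Error(object sender, EventArgs e)
{
    Exception ex = Server.GetLastError();
    if (ex is HttpUnhandledException && ex.InnerException != null)
    {
        ex = ex.InnerException; // unwrap ASP.NET's page-level wrapper
    }

    Log.Error("Unhandled exception", ex); // or send the email here

    // Deliberately not calling Server.ClearError(), so the
    // <customErrors> redirect to error.aspx still happens.
}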
Logging would be invaluable here. In your catch blocks, log the error out to a log file. It's a good idea to get in the habit of doing this, as it can be a real life saver to see what's going on. Have a look at NLog, which is a logging library. There are various tools out there, too, that allow you to analyse logs produced by NLog.
Since you are using ASP.NET 2.0, your best bet is to do nothing.
ASP.NET added a feature called ASP.NET Health Monitoring. By default, this will log detailed exception information to the Application event log. It can be configured to send different kinds of problem to different destinations.
So, simply do nothing, and everything will be fine.
You can simply log the exception somewhere before redirecting to the Error page.
Something like :
try
{
    // access data
}
catch (SqlException exception)
{
    LogException(exception);
    Response.Redirect("error.aspx");
}
finally
{
    // Dispose of connections, commands, etc.
}
Thus you can have it both ways: the customer is directed to the error page, and your exception is still logged somewhere for you to review so you can get to the bottom of it.
By the way, there are many free logging libraries you can use, notably log4net and the Enterprise Library.

Soap Post Failure

So, I'm trying to use soap to communicate with a webservice and getting errors. What is frustrating about this particular issue is that it works perfectly fine with my local copy of the webservice (yes, I tried turning off my firewall) and used to work fine with a previous version of the webservice and client. I suspect I could (though I'll have to look up how to do this) add an action parameter to what the client is sending. However, I am very curious why it was able to work previously without one.
Edit Clarification: I checked against an old version of the program, saw the same problem, and the relevant code was the same (unless I missed something subtle). I know the actual server program is the same on both the local copy and the remote copy, even though it only works locally. I thus suspect there is some sort of configuration setting I can change to make it work.
Error message: "soap:Client: Unable to handle request without a valid action parameter. Please supply a valid soap action."
VB Client Code
'WEB_SERVICE_URL_CONST = http://site.com/foo.asmx
'domDoc.xml = <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"><soap:Body><TestConnection xmlns="http://site.com"/></soap:Body></soap:Envelope>
Dim oXml As New XMLHTTPRequest
oXml.Open "POST", WEB_SERVICE_URL_CONST, False, "\"
oXml.setRequestHeader "Content-Type", "text/xml"
'note: no SOAPAction request header is set here, which is what the fault message complains about
oXml.send domDoc.xml
C# Server Code
[WebMethod]
public int TestConnection()
{
    return 1;
}
Are you sure the first two lines are not commented out in the real version? The sample code seems to have no value for WEB_SERVICE_URL_CONST and domDoc.xml, and an empty SOAP request would indeed not specify any action.
EDIT: I find the "used to work fine with a previous version of the webservice and client" bit confusing. If you actually changed both the client and the server, then what part did not change?
The problem was with the xmlns="http://site.com" namespace. When I tested locally, I was using xmlns="http://localhost" by mistake. In fact, this should not change on the client or the server (on the server it is set via WebService(Namespace = ...)), regardless of where I am testing... but if it does change, the client and server need to match.
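For reference, a sketch of the two places that have to agree, using the placeholder namespace from the question (the class name Foo is just illustrative):

using System.Web.Services;

// The Namespace here must match the xmlns the client puts on the
// <TestConnection> element in its SOAP body.
[WebService(Namespace = "http://site.com")]
public class Foo : WebService
{
    [WebMethod]
    public int TestConnection()
    {
        return 1;
    }
}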
