ASP.NET renders incomplete HTML

I have an ASP.NET C# (.NET 3.5) page which contains several user controls. I am noticing that sometimes the HTML loaded in the browser is incomplete; it seems to get cut off.
Any suggestions on how to troubleshoot what the root cause is?

This can be symptomatic of server errors or proxy problems. I would use Fiddler to check what's going back and forth between your browser and the server. If you are getting any 500 (server error) response codes, that would be a good place to look.
Another thing to check would be JavaScript errors on the page, because depending on what your scripts are doing, errors can prevent other content from loading in some cases.

womp probably has most of the bases covered, but the other angle that can lead to issues like this is exceptions getting eaten while still causing processing to stop, thereby sending half the page or some such.
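Purely as an illustration of that failure mode (everything here is hypothetical, not code from the question), a swallowed exception mid-render means the client gets only what was written before the throw:

using System;
using System.IO;

class TruncationDemo
{
    static void Main()
    {
        var writer = new StringWriter();
        try
        {
            writer.Write("<html><body><h1>Results</h1>");
            RenderResults(writer);          // throws partway through
            writer.Write("</body></html>"); // never runs
        }
        catch (Exception)
        {
            // eaten: no log, no error page; the truncated HTML is all the client gets
        }
        Console.WriteLine(writer); // prints "<html><body><h1>Results</h1><ul><li>first row</li>"
    }

    static void RenderResults(TextWriter w)
    {
        w.Write("<ul><li>first row</li>");
        throw new InvalidOperationException("control blew up mid-render");
    }
}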

Verify that your HTML is being written to the page by viewing the source code of the page after it loads. My guess is that the HTML that is being output is invalid, and that the browser isn't able to properly display it. Make sure all your HTML tags are properly closed and balanced.
It could also be an issue with the request being ended midway through. Try removing one control at a time from the page and see if the situation improves. If it does, you'll know which control is to blame.

It is quite unlikely that it is the same problem, but I had this happen before on a page that had a custom filter attached to Response.Filter, which reformatted the output to fix up some .NET SEO problems. The filter we wrote had a bug where one regular expression consumed a bit too much of the output in some instances and broke the page.
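As a purely hypothetical sketch of that setup (the class name and the regex are invented, not the actual filter from this anecdote), a response filter hooks in like this, and a greedy pattern is all it takes to eat half the page:

using System.IO;
using System.Text;
using System.Text.RegularExpressions;

// Hypothetical Response.Filter that post-processes the rendered HTML.
// A real filter also has to handle Write arriving in chunks.
public class SeoRewriteFilter : MemoryStream
{
    private readonly Stream _inner;

    public SeoRewriteFilter(Stream inner) { _inner = inner; }

    public override void Write(byte[] buffer, int offset, int count)
    {
        string html = Encoding.UTF8.GetString(buffer, offset, count);

        // Bug: ".*" is greedy, so on some pages this match swallows far
        // more markup than intended and the output comes out truncated.
        html = Regex.Replace(html, "<span class=\"seo\">.*</span>", "");

        byte[] bytes = Encoding.UTF8.GetBytes(html);
        _inner.Write(bytes, 0, bytes.Length);
    }
}

// Wired up in Page_Load or an HttpModule:
// Response.Filter = new SeoRewriteFilter(Response.Filter);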

Related

PageRequestManagerParserErrorException on Live Server - Help!

I'm really hoping that someone can help. I have tomorrow to get this right or we're in trouble. I apologise as I only have the details from memory, being at home.
The error is this:
Sys.WebForms.PageRequestManagerParserErrorException: The message received from the server could not be parsed. Common causes for this error are when the response is modified by calls to Response.Write(), response filters, HttpModules, or server trace is enabled.
Details: Error parsing near '<!DOCTYPE html P'.
It appears in both FF and IE so far, and only happens on the live servers.
It occurs within partial postbacks (or attempted partial postbacks, as they stop working and don't update) within an UpdatePanel on a form that is on every page. It doesn't always do it, but once it's broken, it's consistent. I have a suspicion that it kicks in after the pages in question have been posted back to by another page, but I'm not 100% sure about this. This form is absolutely integral to the function of the site.
I've googled and googled, and I've seen the lists of the usual causes (Response.Write, tracing etc.) as well as the not-so-usual causes (e.g. firewalls messing with headers), but none seem to apply, and some doubly do not apply because this issue does not occur at all on our staging server. I don't know if slower load times would affect it.
Any help would be greatly appreciated!
Edit: I'm using .net 2.0 and the ajax framework 1.0.
Thanks to Greg for his comment, but in the end I found the issue, which is indeed one of the culprits in several posts regarding this that I had looked at previously, but missed! It was my output caching causing the problem, and though we've since looked at more aspects of the caching, this particular issue was fixed by a) turning off the caching (obviously!) or b) setting VaryByParam="*". Further info here: <Removed by OP, dead link>
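For reference, that fix corresponds to an OutputCache directive along these lines (the Duration value here is just an illustrative placeholder):

<%@ OutputCache Duration="60" VaryByParam="*" %>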

How to speed up Google adsense and analytics loading time?

This might fall under the category of "you can't", but I thought it might be prudent to at least see if there is something I can do about this.
According to FireBug, the major bottleneck in my page loading times seems to be a gap between the loading of the html and the loading of Google adsense and analytics. Notice in the screenshot below that the initial GET only takes 214 ms, and that adsense + analytics loading takes roughly 130 ms combined. However, the entire page load time is 1.12 seconds due to that large pause in between the initial GET and the adsense/analytics loading.
If it makes any difference at all, the site is running off of the ASP.NET MVC RC1 stack.
(Firebug screenshot: http://kevinwilliampang.com/pics/firebug.jpg)
Update: After removing AdSense and Analytics, I'm still seeing a slow response time. Hovering over the initial GET request, I see the following timings: 96 ms Receiving Data, 736 ms DOMContentLoaded (event), 778 ms load (event). I'm guessing then that the performance hit is a result of my own jQuery JavaScript that has processing tied to the $(document).ready() event?
You should place your analytics code at the bottom of the page so that everything else loads first. Other than that, I don't think there's much you can do.
edit: Actually, I just found this interesting blog post on a way to speed up analytics by hosting your own urchin.js file. Maybe it's worth a look.
I've never seen anything like that using Firebug on Stack Overflow and we use Analytics as well.
I just ran a trace, and I see the request for http://www.google-analytics.com/__utm.gif?... happening directly after the DOMContentLoaded event (the blue line). So I'd suspect AdSense first. Have you tried disabling that?
As it goes, I happen to have rather heavily researched this just this week. Long story short, you are screwed. As others have said, the best you can do is put it at the bottom of the list of requests and make the rest of your code depend on ready rather than onload events; jQuery is really good here. Some portion of the JS is static, so you could clone that locally if you keep an eye on it for maintenance purposes.
The Google code isn't quite as helpful as it could be in this area*, but it's their ballgame, and anything you do to change it is going to be both complex and risky. In theory, wrapping it in a non-blocking script call in the header is possible, but it would be unlikely to gain you a benefit given the additional abstraction, and ultimately with AdSense your payload is an HTML source, not script.
* It's possible Google have a good reason, but nothing I can deduce from the code they expose.
Probably not anything you can do aside from putting those includes right before the closing body tag, if you haven't already. JavaScript includes block parallel HTTP requests, which is why they should be kept out of <head>.
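As a sketch of that layout with the classic Urchin include (the account ID is a placeholder):

<body>
    <!-- ...page content first, so it renders without waiting on third parties... -->

    <!-- analytics last, just before the closing tag -->
    <script src="http://www.google-analytics.com/urchin.js" type="text/javascript"></script>
    <script type="text/javascript">
        _uacct = "UA-xxxxxx-x";  // placeholder account ID
        urchinTracker();
    </script>
</body>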
Surely Google's servers will be the fastest part of the loading, given that your ISP and most ISPs will have a local cache of the files too?
You could inject the script into the head on page load perhaps, but I'm not sure how that affects urchin.js.
Could be that your page simply takes that long to parse? It seems nothing network-related is happening. It simply waits around a second before the adsense/analytics requests are even fired off.
I don't suppose you have a few hundred tables on the page or something? ;)

Invalid Webresource.axd parameters being generated

Original Question:
We have an odd error with WebResource.axd URL generation. (It does not seem to be related to the fairly common "WebResource.axd Padding is invalid and cannot be removed" issue.)
We have an ASP.NET web page that, when created, adds a script reference to WebResource.axd.
In this case, we're seeing that the WebResource.axd link occasionally turns into garbage past a certain point, replaced by what looks like JavaScript. Worse yet, the URL generation failure seems to be inconsistent.
In our case, the link should (and usually does) look like:
/WebResource.axd?d=D-wd7RbHCvSp_p0mHAmE4g2&t=633464867255568315
All well and good. However, we are getting errors logged from users, and the URL they're trying to access looks like this (in one case):
/WebResource.axd?d=D-wd7RbHCvS/../../images/icons/Ico_resize.gif')}}function%20ShowFilter_Manufacturer(){var%20div.......
[the remaining encoded javascript from that link has been removed as irrelevant]
Stranger yet, we got a few of these in rapid succession from the same user, who was apparently trying to reload the page, each URL slightly different.
/WebResource.axd?d=D-wd7RbHCvS<garbage>
/WebResource.axd?d=D-wd7RbHCvSp<garbage>
/WebResource.axd?d=D-wd7RbHCvSp_<garbage>
In some cases the garbage is encoded JavaScript; I've seen portions of a URL, completely empty parameter strings... I don't see an obvious pattern.
As an aside, should it be relevant: I don't believe this WebResource is anything other than a stock resource automatically included by .NET when certain features are used on a page, in this case a field validator. Looking at the contents of the actual WebResource.axd reveals a very standard-looking set of JavaScript functions that seem designed to handle generic .NET events. Not anything we've created.
Has anyone seen anything like this? (or better, has anyone understood why this was happening, and come up with a way to eliminate it?)
EDIT 0: Some additional information:
Item 1: In response to one answer, we made sure that our scripts are encased with CDATA tags, since our doctype is xhtml transitional:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
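For reference, the CDATA wrapping we used follows the usual pattern for inline scripts under an XHTML doctype:

<script type="text/javascript">
//<![CDATA[
    // inline script goes here
//]]>
</script>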
Unfortunately, though we had high hopes, it does not seem to have solved the problem. We've noticed this more often with IE 8 as the browser, which would lend some credence to the idea that this is browser-related (perhaps the way the browser parses the stream), but why we would get subtly different responses on subsequent attempts baffles me.
Item 2:
It turns out that the omitted sections seem to be blocks of fairly regular size. Someone reported that he was seeing 1k or 4k blocks missing, and I would agree (so far I've only looked at two cases, and both were missing 4096 bytes of data).
According to this post:
http://bytes.com/topic/asp-net/answers/861764-invalid-viewstate-system-string-decryptstringwithiv
It seems that the problem is caused by the way browsers render pages differently when the doctype is not specified.
Here is another interesting post I found on this subject, though still not the solution:
http://blog.aproductofsociety.org/?p=11
The above page has "Response.Cache.SetNoStore()" as a possible solution in the comments; I'll try that next to see if it helps.
Microsoft has responded to this issue:
Note: this is a bug in Internet Explorer 8. The Internet Explorer team has been investigating this issue.
-=Impact=- Thus far, we believe the problem has no impact on the end-user's experience with the web application; the only negative effect is the spurious/malformed requests sent by the JavaScript speculative-download engine. When the script is actually needed by the parser, it will properly be downloaded and used at that time.
-=Circumstances=- The spurious-request appears to occur only in certain timing situations, only when a META HTTP-EQUIV tag containing a Content-Type with a CHARSET directive appears in the document, and only when a JavaScript SRC URL spans the 4096th byte of the HTTP response body.
-=Workaround=- Hence, we currently believe this issue can be mitigated by declaring the CHARSET of the page using the HTTP Content-Type header rather than specifying it within the page.
So, rather than putting
<META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=utf-8">
in your head tag, send the following HTTP response header instead:
Content-Type: text/html; charset=utf-8
Note that specification of the charset in the HTTP header results in improved performance in all browsers, because the browser's parsers need not restart parsing from the beginning upon encountering the character set declaration. Furthermore, using the HTTP header helps mitigate certain XSS attack vectors.
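In ASP.NET terms, a minimal sketch of emitting that header per-response (you could equally set it site-wide in IIS or via web.config's globalization element) would be:

protected void Page_Load(object sender, EventArgs e)
{
    // Declare the charset via the HTTP header instead of a META tag:
    Response.ContentType = "text/html";
    Response.Charset = "utf-8"; // emits: Content-Type: text/html; charset=utf-8
}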
NOTE: There have been reports that this problem still happens when the META HTTP-EQUIV is not on the page. We will update this comment when we have more investigation.
Posted by Microsoft on 6/30/2009 at 12:25 PM
I am from the ASP.NET team -- we are looking for a customer willing to work with us on researching this issue. If anyone is able to reliably reproduce the problem by requesting their own pages and checking the logs, and is willing to work with our support group, please respond or send me a direct message. Thanks!
What version of .NET are you compiling against? What happens if you change your build to build for an older or newer version? (not sure if this would do anything but it's worth a try)
If it's still happening, I think you should post a bug about it on Microsoft Connect. They should come back to you pretty quickly.
Have you got any HttpHandlers or modules registered in the web.config that modify the rendered HTML before it's sent to the user?
These can often be:
Minifying JS and CSS
Ensuring HTML is valid
Might be worth a look.
This is an old post... but I came across it through a Google search and it reminded me of this...
http://www.troyhunt.com/2010/09/fear-uncertainty-and-and-padding-oracle.html
Could it have been related?
This was eventually fixed by Microsoft:
http://blogs.msdn.com/b/ieinternals/archive/2010/04/01/ie8-lookahead-downloader-fixed.aspx

CSS changes not reflecting on site

Whenever we make changes to the CSS, it generally takes 24 hours for those changes to show up on my site. I have tried clearing the server cache and browser cache, but that doesn't help either. Is there any other way to make CSS changes show up immediately after updating?
It happens in all browsers. When I check in the browser, I can access my CSS file via two paths. For example, I store my CSS in a folder named "Cssfolder" and the file is named 135.css.
So when I access the paths Cssfolder/135.css and cssfolder/135.css, one of them shows me the latest CSS whereas the other shows me the old CSS. Notice the "C" is capital in one path and lowercase in the other.
Thanks.
I've found this to be a pretty common problem in a lot of my projects. I would suggest two things...
If it's just an app that you're working on, you can use the CSS Cachebuster during development.
Following the idea behind the Cachebuster, I have found that adding the timestamp of the CSS file as a query string on the CSS link often helps tell the browser that the file is different... something like whatever.css?12212009035543
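A minimal sketch of that idea in ASP.NET (the helper class name and the file path are invented for illustration):

using System.IO;
using System.Web;

public static class CssCacheBuster
{
    // Appends the file's last-write timestamp as a query string, so the
    // URL changes whenever the file does, e.g. "135.css?v=20091221035543".
    public static string Versioned(string virtualPath)
    {
        string physical = HttpContext.Current.Server.MapPath(virtualPath);
        string stamp = File.GetLastWriteTimeUtc(physical).ToString("yyyyMMddHHmmss");
        return virtualPath + "?v=" + stamp;
    }
}

Used from markup along the lines of:
<link rel="stylesheet" type="text/css" href='<%= CssCacheBuster.Versioned("/Cssfolder/135.css") %>' />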
You might want to use a monitoring tool, like Live Http Headers for Firefox, to see the requests and responses to and from the server. This usually solves a lot of problems for me. Take a look at the "Expire" headers and conditional requests (like: "If-modified-since"). This said, take a look at server and client local times and timezones - it might be that they differ significantly and conditional GET requests "seem to be" handled correctly, because of future or otherwise mangled timestamps.
You can force the current CSS to load directly from the server by appending a random unique value to the URL, like http://example.com/Cssfolder/135.css?983274928374 and http://example.com/cssfolder/134.css?08973249827. There's no way that this would ever get cached unless you use the same random value twice.
This way you learn where to look further for the solution to your problem: At the server, the ISP/a proxy or your browser.
You really need to see whether this is server side or client side. If the server is still serving the old CSS then clearly you've got no chance on the client side.
I've occasionally seen cases where I've had to open the CSS directly in the browser, and then the next time I went to the real page, it used the new CSS. Usually just hitting refresh does it.
Do you have any web caches like Akamai involved anywhere?
If you try to go to the CSS page from a computer which has never seen the old version, which version does it show?
EDIT: Changed answer to reflect edits in question.
I have dealt with this issue in the past, and ended up writing an HttpModule to handle it.
It's pretty simple: it finds all script/CSS links in the head tag (they need to have runat="server") and appends the assembly version number to the link, in the same way as Tim K describes. This way I'm sure my clients always fetch the newest CSS/scripts when my app is updated in production, and I never have to deal with this issue again.
Maybe Internet Service Provider cache, as in this case?
I was perplexed by this issue then someone said Ctrl+F5. Worked for me :)
When I am developing and need to be sure that I am seeing changes as I work, I stick the CSS in the page itself, i.e.
<style type="text/css">
/* your css */
</style>
Or you could constantly change the name of the css file itself, not very useful in a production environment, but perhaps okay while developing.
I know it doesn't solve the problem, but for developing it is okay.

Large JSP response is truncated :(

I have a JSP accessed through JBoss. It renders a list (a search result).
If the response gets big, approximately larger than 200k, the response is truncated. I can see how the page just ends in the middle of a tag in Firefox. IE totally freaks out, and so does Fiddler.
Responses smaller than 200k are no problem.
Has anyone experienced this?
I don't know where to look for the problem... any suggestions are welcome.
If your JSP renders a very complex html page, then it might just be the browsers tripping over their own feet. Can you retrieve the page via wget or curl? Is it truncated then, too?
Add this to your code:
<%@ page buffer="none" %>
My best guess so far is that in normal (i.e. buffered) mode the output is written to a buffer, and if the server somehow considers the page 'finished' prematurely, part of the output is left stuck in the ether (the buffer).
When you disable the buffer, the output from the JSP is sent to the client as soon as it is generated.
Maybe it has something to do with flushing the buffer? That number (200k) rang a bell from a problem I once had. Place a page directive like this:
<%@ page buffer="500kb" autoFlush="true" %>
and play with the buffer size and autoFlush values.
I second Henning's suggestion. I have used JSPs on JBoss to return multi megabyte responses, I would look at the code or possibly an intermediate proxy server rather than JBoss.
Thank you all again. During the past few days I've experienced a disk crash, vomiting children and a trip to Spain.
Since the disk crash I cannot reproduce this behavior!
I have not lost any code and I have the exact same JBoss. But I have a slightly different Java and Firefox version. No Fiddler installed (although I had it turned off on my old machine).
I still have no clue what caused it. But also I don't care anymore :P
