I'm really hoping that someone can help. I have until tomorrow to get this right or we're in trouble. I apologise, as I only have the details from memory, being at home.
The error is this:
Sys.WebForms.PageRequestManagerParserErrorException: The message received from the server could not be parsed. Common causes for this error are when the response is modified by calls to Response.Write(), response filters, HttpModules, or server trace is enabled.
Details: Error parsing near '
<!DOCTYPE html P'.
It appears in both FF and IE so far, and only happens on the live servers.
It occurs during partial postbacks (or attempted partial postbacks; they stop working and don't update) within an UpdatePanel on a form that is on every page. It doesn't always do it, but once it's broken, it's consistently broken. I suspect that it kicks in after the pages in question have been posted back to by another page, but I'm not 100% sure about this. This form is absolutely integral to the function of the site.
I've googled and googled, and I've seen the lists of the usual causes (Response.Write, tracing etc.) as well as the not-so-usual causes (e.g. firewalls messing with headers), but none seem to apply, and some definitely do not apply because this issue does not occur at all on our staging server. I don't know if slower load times would affect it.
Any help would be greatly appreciated!
Edit: I'm using .NET 2.0 and the ASP.NET AJAX framework 1.0.
Thanks to Greg for his comment, but in the end I found the issue, which is indeed one of the culprits listed in several of the posts I had looked at previously, but which I missed! It was my output caching causing the problem, and though we've since looked at more aspects of the caching, this particular issue was fixed by a) turning off the caching (obviously!) or b) setting VaryByParam="*". Further info here: <Removed by OP, dead link>
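For anyone hitting the same wall, here is a minimal sketch of option b) as applied to an @ OutputCache page directive (the Duration value is just an example; adjust to your own setup):

<%-- before: the cache ignores postback parameters, so an async postback can be answered with a cached full page (hence the '<!DOCTYPE html P' in the parser error) --%>
<%@ OutputCache Duration="60" VaryByParam="none" %>

<%-- after: vary the cached output by all parameters --%>
<%@ OutputCache Duration="60" VaryByParam="*" %>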
I have created a little flight simulator based on the Milktruck demo. The problem is that the plugin crashes every couple of refreshes, and I have no idea why. It happens on all browsers.
You can see the game here
http://www.stepupforisrael.com/plane-game/0205/Mingame.htm
Any clues will be welcome...
I see: the issue is that after a few refreshes you get the error ERR_BRIDGE_OTHER_SIDE_PROBLEM, and the detail is 'bad status'.
I believe this is a bug with how you are loading the plug-in itself; I can reproduce it using a very basic set-up. Also, it looks as if there is already a bug report for it here:
http://code.google.com/p/earth-api-samples/issues/detail?id=736
From reading around and testing, it appears that loading the plug-in using the Google Loader resolves the issue. To do this:
Firstly, remove the onload 'init' call from the body element.
So that
<body onload='init();' onunload="//GUnload()">
becomes
<body onunload="GUnload()">
Then use the Google Loader callback to handle the initialization. To do that, place this line at the end of your script block, right before the closing script tag.
google.setOnLoadCallback(init);
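Putting the two steps together, here is a minimal sketch, assuming the standard sample setup where the earth module is loaded via google.load and init() creates the plug-in instance ('map3d' and the callback names are the usual sample placeholders):

<script type="text/javascript" src="https://www.google.com/jsapi"></script>
<script type="text/javascript">
  google.load("earth", "1"); // load the Earth API module

  function init() {
    // create the plug-in inside the div with id="map3d"
    google.earth.createInstance("map3d", initCallback, failureCallback);
  }

  function initCallback(instance) {
    instance.getWindow().setVisibility(true); // show the plug-in once ready
  }

  function failureCallback(errorCode) {
    // handle a failed plug-in load here
  }

  // run init only once the loader has finished, instead of from body onload
  google.setOnLoadCallback(init);
</script>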
My thinking is that the onload event occurs immediately after a page is loaded. However, sometimes the plugin has not finished authentication when this event occurs, hence the intermittent error. You can read more about this here: https://groups.google.com/forum/?fromgroups#!topic/google-earth-browser-plugin/vvXKanCJbJU
We had our own share of pain with that error, and we tried the solution described in another answer (GUnload etc.) without success. The problem was solved when we moved our code from the rather murky hosting we were using to Amazon EC2; the error stopped immediately. Was it caused by a timeout between our original host and Google's servers? We don't know; all we know is that the move saved us...
I am using the code to place a LinkedIn Follow button (generated here: https://developer.linkedin.com/plugins/follow-company) on this page, http://new.janeirodigital.com (see the Client Testimonials section). However, the buttons sometimes show up and sometimes don't.
I have read a little about cross-domain issues that may cause this, but I have not been able to find a workaround. If you visit that page in Chrome and open the error console, you'll see a couple of errors regarding the LinkedIn button, like:
Unsafe Javascript attempt to access frame with URL mysite.com from frame with URL http://platform.linkedin.com/js/....Domains, protocols and ports must match.
Has anybody experienced this problem before?
Any help appreciated
Daniel
The messages you are seeing in the developer console in Chrome are unrelated to your problem. They are an artifact of the cross-domain communication and as you'll notice, you see them for Facebook and Twitter as well on the same page.
That said, viewing your page I am also seeing intermittent 403s for some of the backend calls that the FollowCompany plugin is making. I have alerted our NOC to the issue and they should be investigating now.
Reviewing your page, it seems you have done everything necessary and are set, so once we fix the operational issue you should be good to go.
My apologies for any inconvenience!
-Jeremy
I just found that this happens when there is more than one Follow button on the page. If you delete the first one, the second shows up without problems, but the others still have the same problem...
Hope this helps your team, Jeremy!
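For reference, the failing setup is a single in.js include plus several IN/FollowCompany tags on the same page, roughly like this (the data-id values below are placeholders):

<script src="http://platform.linkedin.com/in.js" type="text/javascript"></script>
<!-- the first button renders; with more than one, the others intermittently fail -->
<script type="IN/FollowCompany" data-id="1234" data-counter="right"></script>
<script type="IN/FollowCompany" data-id="5678" data-counter="right"></script>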
I have an ASP.NET (C#, .NET 3.5) page which contains several user controls. I am noticing that sometimes the HTML loaded in the browser is incomplete; it seems to get cut off.
Any suggestions on how to troubleshoot the root cause?
This can be symptomatic of server errors or proxy problems. I would use Fiddler to check what's going back and forth between your browser and the server. If you are getting any 500 (server error) response codes, that would be a good place to look.
Another thing to check would be javascript errors on the page, because depending on what your javascripts are doing, errors can prevent loading of other content in some cases.
womp probably has most of the bases covered, but the other angle that can lead to issues like this is exceptions getting swallowed while still halting processing, so that only half the page (or some such) gets sent.
Verify that your HTML is being written to the page by viewing the source code of the page after it loads. My guess is that the HTML that is being output is invalid, and that the browser isn't able to properly display it. Make sure all your HTML tags are properly closed and balanced.
It could also be an issue with the request being ended midway through. Try removing one control at a time from the page and see if the situation improves. If it does, you'll know which control is to blame.
It is quite unlikely that it is the same problem, but I had this happen before on a page that attached a custom filter to Response.Filter to reformat the output and fix up some .NET SEO problems. The filter we wrote had a bug where one regular expression consumed a bit too much of the copy in some instances and broke the output.
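For anyone unfamiliar with the mechanism: a response filter is a Stream wrapped around Response.Filter, so any rewriting bug inside its Write method silently truncates what the browser receives. A rough sketch of the shape of such a filter (the class name is made up, and the chunk handling is simplified; real code must cope with multi-byte characters split across chunks):

using System;
using System.IO;
using System.Text;

public class RewriteFilter : Stream
{
    private readonly Stream _inner;
    public RewriteFilter(Stream inner) { _inner = inner; }

    public override void Write(byte[] buffer, int offset, int count)
    {
        string chunk = Encoding.UTF8.GetString(buffer, offset, count);
        // Any rewriting (e.g. regex-based SEO fix-ups) happens here; a greedy
        // expression that consumes too much of the chunk breaks the output.
        byte[] rewritten = Encoding.UTF8.GetBytes(chunk);
        _inner.Write(rewritten, 0, rewritten.Length);
    }

    public override void Flush() { _inner.Flush(); }
    public override bool CanRead { get { return false; } }
    public override bool CanSeek { get { return false; } }
    public override bool CanWrite { get { return true; } }
    public override long Length { get { throw new NotSupportedException(); } }
    public override long Position
    {
        get { throw new NotSupportedException(); }
        set { throw new NotSupportedException(); }
    }
    public override int Read(byte[] buffer, int offset, int count) { throw new NotSupportedException(); }
    public override long Seek(long offset, SeekOrigin origin) { throw new NotSupportedException(); }
    public override void SetLength(long value) { throw new NotSupportedException(); }
}

// attached per request, e.g. in Global.asax or an HttpModule:
// Response.Filter = new RewriteFilter(Response.Filter);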
This might fall under the category of "you can't", but I thought it might be prudent to at least see if there is something I can do about this.
According to FireBug, the major bottleneck in my page loading times seems to be a gap between the loading of the html and the loading of Google adsense and analytics. Notice in the screenshot below that the initial GET only takes 214 ms, and that adsense + analytics loading takes roughly 130 ms combined. However, the entire page load time is 1.12 seconds due to that large pause in between the initial GET and the adsense/analytics loading.
If it makes any difference at all, the site is running off of the ASP.NET MVC RC1 stack.
Screenshot: http://kevinwilliampang.com/pics/firebug.jpg
Update: After removing AdSense and Analytics, I'm still seeing a slow response time. Hovering over the initial GET request, I see the following timings: 96 ms Receiving Data, 736 ms DOMContentLoaded (event), 778 ms 'load' (event). I'm guessing, then, that the performance hit is a result of my own jQuery code that has processing tied to the $(document).ready() event?
You should place your analytics code at the bottom of the page so that everything else loads first. Other than that, I don't think there's much you can do.
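For example, the classic urchin snippet moved to just before the closing body tag (the UA account id below is a placeholder for your own):

  <script src="http://www.google-analytics.com/urchin.js" type="text/javascript"></script>
  <script type="text/javascript">
    _uacct = "UA-XXXXXX-X"; // your Analytics account id
    urchinTracker();
  </script>
</body>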
edit: Actually, I just found this interesting blog post on a way to speed up analytics by hosting your own urchin.js file. Maybe it's worth a look.
I've never seen anything like that using Firebug on Stack Overflow and we use Analytics as well.
I just ran a trace, and I see the request for
http://www.google-analytics.com/__utm.gif?...
happening directly after the DOMContentLoaded event (the blue line). So I'd suspect the AdSense first. Have you tried disabling that?
As it happens, I researched this rather heavily just this week. Long story short: you are screwed. As others have said, the best you can do is put it at the bottom of the list of requests and make the rest of your code depend on ready rather than onload events; jQuery is really good here. Some portion of the JS is static, so you could clone that locally if you keep an eye on it for maintenance purposes.
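For example, with jQuery, hooking DOM-ready instead of window.onload means your own code runs as soon as the document is parsed, without waiting on the tracker requests:

// fires once the DOM is parsed; analytics/adsense can still be downloading
$(document).ready(function () {
  // page initialisation here
});

// by contrast, window.onload waits for every external resource to finish
window.onload = function () { /* runs much later */ };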
The Google code isn't quite as helpful as it could be in this area*, but it's their ballgame, and anything you do to change it is going to be both complex and risky. In theory, wrapping it with a non-blocking script call in the header is possible, but it would be unlikely to gain you a benefit given the additional abstraction, and ultimately with AdSense your payload is an HTML source, not script.
* it's possible Google has a good reason, but nothing I can deduce from the code they expose
Probably not anything you can do aside from putting those includes right before the closing body tag, if you haven't already. JavaScript includes block parallel HTTP requests, which is why they should be kept out of <head>.
Surely Google's servers will be the fastest part of the loading, given that your ISP and most ISPs will have a local cache of the files too?
You could inject the script into the head on page load, perhaps, but I'm not sure how that affects urchin.js.
Could be that your page simply takes that long to parse? It seems nothing network-related is happening. It simply waits around a second before the adsense/analytics requests are even fired off.
I don't suppose you have a few hundred tables on the page or something? ;)
Original Question:
We have an odd error with WebResource.axd url generation. (It does not seem to be related to the fairly common "WebResource.axd Padding is invalid and cannot be removed" issue.)
We have an ASP.NET web page that, when created, adds a script reference to WebResource.axd.
In this case, we're seeing that the WebResource.axd link occasionally turns into garbage past a certain point, replaced by what looks like JavaScript. Worse yet, the url generation failure seems to be inconsistent.
In our case, the link should (and usually does) look like:
/WebResource.axd?d=D-wd7RbHCvSp_p0mHAmE4g2&t=633464867255568315
All well and good. However, we are getting errors logged from users...and the url they're trying to access looks like (in one case):
/WebResource.axd?d=D-wd7RbHCvS/../../images/icons/Ico_resize.gif')}}function%20ShowFilter_Manufacturer(){var%20div.......
[the remaining encoded javascript from that link has been removed as irrelevant]
Stranger yet, we got a few of these in rapid succession from the same user, who was apparently trying to reload the page...each url slightly different.
/WebResource.axd?d=D-wd7RbHCvS<garbage>
/WebResource.axd?d=D-wd7RbHCvSp<garbage>
/WebResource.axd?d=D-wd7RbHCvSp_<garbage>
In some cases the garbage is encoded JavaScript, I've seen portions of a url...completely empty parameter strings...I don't see an obvious pattern.
As an aside, in case it's relevant: I don't believe this WebResource is anything other than a stock WebResource automatically included by .NET when certain features are used on a page; in this case, a field validator. Looking at the contents of the actual WebResource.axd reveals a very standard-looking set of JavaScript functions that seem designed to handle generic .NET events. Not anything we've created.
Has anyone seen anything like this? (or better, has anyone understood why this was happening, and come up with a way to eliminate it?)
EDIT 0: Some additional information:
Item 1: In response to one answer, we made sure that our scripts are encased with CDATA tags, since our doctype is xhtml transitional:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
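That is, each inline script block is wrapped like this:

<script type="text/javascript">
//<![CDATA[
  // script body here
//]]>
</script>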
Unfortunately, though we had high hopes, it does not seem to have solved the problem. We've noticed this more often with IE 8 as the browser, which would lend some credence to the idea that this is browser-related, perhaps in the way the browser parses the stream... but why we would get subtly different responses on subsequent attempts baffles me.
Item 2:
It turns out that the omitted sections seem to be blocks of fairly regular size. Someone reported seeing 1 KB or 4 KB blocks missing, and I would agree so far (the two cases I've looked at were each missing 4096 bytes of data).
According to this post:
http://bytes.com/topic/asp-net/answers/861764-invalid-viewstate-system-string-decryptstringwithiv
It seems that the problem is caused by the way browsers render pages differently when the doctype is not specified.
Here is another interesting post I found on this subject, though still not the solution:
http://blog.aproductofsociety.org/?p=11
The above page suggests "Response.Cache.SetNoStore()" in the comments as a possible solution; I'll try that next to see if it helps.
Microsoft has responded to this issue:
Note: this is a bug in Internet Explorer 8. The Internet Explorer team has been investigating this issue.
-=Impact=- Thus far, we believe the problem has no impact on the end-user's experience with the web application; the only negative effect is the spurious/malformed requests sent by the JavaScript speculative-download engine. When the script is actually needed by the parser, it will properly be downloaded and used at that time.
-=Circumstances=- The spurious-request appears to occur only in certain timing situations, only when a META HTTP-EQUIV tag containing a Content-Type with a CHARSET directive appears in the document, and only when a JavaScript SRC URL spans the 4096th byte of the HTTP response body.
-=Workaround=- Hence, we currently believe this issue can be mitigated by declaring the CHARSET of the page using the HTTP Content-Type header rather than specifying it within the page.
So, rather than putting
<META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=utf-8">
In your head tag, instead, send the following HTTP response header:
Content-Type: text/html; charset=utf-8
Note that specification of the charset in the HTTP header results in improved performance in all browsers, because the browser's parsers need not restart parsing from the beginning upon encountering the character set declaration. Furthermore, using the HTTP header helps mitigate certain XSS attack vectors.
NOTE: There have been reports that this problem still happens when the META HTTP-EQUIV is not on the page. We will update this comment when we have more investigation.
Posted by Microsoft on 6/30/2009 at 12:25 PM
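For reference, in ASP.NET the suggested header can be produced site-wide from web.config (a sketch; the globalization element's responseEncoding attribute controls the charset sent in the Content-Type header):

<configuration>
  <system.web>
    <globalization responseEncoding="utf-8" />
  </system.web>
</configuration>

Or per page, e.g. in Page_Load:

Response.Charset = "utf-8";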
I am from the ASP.NET team -- we are looking for a customer willing to work with us on researching this issue. If anyone is able to reliably reproduce the problem by requesting their own pages and checking the logs, and is willing to work with our support group, please respond or send me a direct message. Thanks!
What version of .NET are you compiling against? What happens if you change your build to target an older or newer version? (Not sure if this would do anything, but it's worth a try.)
If it's still happening, I think you should post a bug about it on Microsoft Connect. They should come back to you pretty quickly.
Have you got any HttpHandlers or HttpModules registered in web.config that modify the rendered HTML before it's sent to the user?
These can often be:
Minifying JS and CSS
Ensuring HTML is valid
Might be worth a look.
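For example, anything registered like this in web.config sits in the response pipeline and is worth auditing (the module name and type below are made up):

<system.web>
  <httpModules>
    <add name="HtmlMinifier" type="MySite.Modules.HtmlMinifierModule, MySite" />
  </httpModules>
</system.web>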
This is an old post... but I came across it through a Google search, and it reminded me of this:
http://www.troyhunt.com/2010/09/fear-uncertainty-and-and-padding-oracle.html
Could it have been related?
This was eventually fixed by Microsoft:
http://blogs.msdn.com/b/ieinternals/archive/2010/04/01/ie8-lookahead-downloader-fixed.aspx