I'm trying to debug a weird issue where a control is being rendered with multiple (different) id attributes. I believe something is manually adding the incorrect id, but I'm having issues figuring out when, and myControl.Attributes is empty.
Is it possible to set a breakpoint midway through Page_Render and read what has currently been written to the Output stream?
Note that Response.OutputStream.CanRead is false.
Or better yet, is there a way to view the string that would be rendered by a control at a given point in time?
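To illustrate what I'm after, something along these lines (just a sketch -- I don't know whether it's safe to call mid-render):

    // Sketch: render one control into a string for inspection in the debugger.
    using (var sw = new System.IO.StringWriter())
    using (var hw = new System.Web.UI.HtmlTextWriter(sw))
    {
        myControl.RenderControl(hw);
        string html = sw.ToString();   // the markup the control would emit right now
    }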
I'm using VS2010 and the built-in visual Report Designer to create RDLC templates for rendering reports with sub-reports as PDF files in an ASP.NET application using a ReportViewer control and the .LocalReport member. The code iterates over a set of records, producing one report (with its sub-reports) for each record.
I noticed recently that for a small number of the reports, one of the sub-reports was failing with the "Error: Subreport could not be shown" message. What's puzzling me about this case, in contrast to the many posts about this error that I've read (and the previous times I've wrestled with it myself), is that it is only occurring for a subset of cases; from what I've seen elsewhere, the problem is usually all-or-nothing: the error appears every time until a solution is found, and then never again.
So... what could cause this error for only a subset of records? I can run the offending sub-report directly without errors; I can open the .xsd file and preview the DataSet for the offending records without errors; I can run the query behind the DataSet in SQL Server Mgt Studio without errors. I'm not sure where else to look for the cause(s) of this problem, which only appears when I run the report with sub-reports.
I tracked this down to an out-of-date .xsd file (DataSet) -- somewhere along the way a table column string width was increased, but the DataSet was not updated or regenerated, so it still had the old width limit on that element, e.g., <xs:maxLength value="50" /> in the .xsd XML instead of the new width of 125 characters. The error was being thrown for those cases where at least one record in the subreport had a data value (string) in that column that exceeded the old width of 50.
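For reference, the corrected facet in the .xsd (the enclosing element names will vary with your schema):

    <xs:restriction base="xs:string">
      <xs:maxLength value="125" />
    </xs:restriction>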
An important clue came from adding a handler for the DataSet's .Selected event; I was already using the .Selecting event to set the sub-report's parameter (to tie it to the parent record), but I couldn't see anything useful when breaking in that event. However, examining the event args variable in the .Selected event, after the selection should have occurred, I found an Exception ("Exception has been thrown by the target of an invocation") with an InnerException ("Failed to enable constraints. One or more rows contain values violating non-null, unique, or foreign-key constraints"). There was also a stack trace which indicated the point of failure was executing Adapter.Fill(dataTable).
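A sketch of that kind of handler, assuming the sub-report data comes through an ObjectDataSource (adjust for whatever data source type you use):

    protected void SubreportData_Selected(object sender, ObjectDataSourceStatusEventArgs e)
    {
        // e.Exception is only populated once the select has actually run.
        if (e.Exception != null)
        {
            System.Diagnostics.Debug.WriteLine(e.Exception.Message);
            if (e.Exception.InnerException != null)
                System.Diagnostics.Debug.WriteLine(e.Exception.InnerException.Message);
        }
    }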
While this turned out to be pretty misleading -- I had no such constraints in place on the tables involved in the query behind the DataSet -- it at least got me focusing on the specific records in the subreports. After much fruitless searching for anomalies in the subreport record data in SQL Server Mgt Studio, I eventually started removing the records one-by-one from one of the offending subreport cases, re-running the report each time to see if I had fixed the error. Eventually I removed a subreport record and the report worked -- the remaining subreport records appeared!
Now I had a specific sub-report record to examine more closely. By chance (wish I could call it inspired intuition...), I decided to edit that record in the web app instead of looking at it as I had been in SQL Server. One of the fields was flagged with an alert saying the string value was too long! That was a mystery to me for a moment: if the string value was too long, how could it already be saved in the database?! I double-checked the column definition in the table, and found it was longer than what the web-app front-end was trying to enforce. I then realized that the column had been expanded without updating the app UI, and I suspected immediately that the .xsd file also had not been updated... Bingo!
There are probably a number of morals to this story, and it leaves me with a familiar and unwelcome feeling that I'm not doing some things as intelligently as I ought. One moral: always update (or better, and usually simpler, just re-build) your .xsd DataSet files whenever you change a query or table that it's based on... easier said than remembered, however. The queasy feeling I have is that there must be some way that I haven't figured out to avoid building brittle apps, where a column width that's defined in the database is also separately coded into the UI and/or code-behind to provide user feedback and/or do data validation... suggestions on how to manage that more robustly are welcome!
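One idea I may try (a sketch only, assuming SQL Server; nothing here is tested): read the authoritative width from INFORMATION_SCHEMA at runtime and drive the UI validation from that, so the limit lives in exactly one place.

    // Sketch: fetch a column's declared max length from SQL Server.
    static int GetMaxLength(System.Data.SqlClient.SqlConnection conn,
                            string table, string column)
    {
        using (var cmd = new System.Data.SqlClient.SqlCommand(
            "SELECT CHARACTER_MAXIMUM_LENGTH FROM INFORMATION_SCHEMA.COLUMNS " +
            "WHERE TABLE_NAME = @t AND COLUMN_NAME = @c", conn))
        {
            cmd.Parameters.AddWithValue("@t", table);
            cmd.Parameters.AddWithValue("@c", column);
            return (int)cmd.ExecuteScalar();   // e.g. 125 for the widened column
        }
    }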
I have a table which uses a large number of form fields (the HTML variety - i.e. without runat=server). When a postback occurs, these populate the Request.Form object, and they appear to be inserted in the same order as they're defined in the page HTML.
Is this behaviour documented and consistent across browsers? I'd like to be able to access the elements by index, which would provide a simple way to find the fields, given that they may be inserted or deleted on the client side.
Edit:
Each row in the table has a hidden field which contains the row ID. This field is named according to the order it was displayed at render time. e.g. the first row has a field like <input type="hidden" name="row0" value="RowID_555252" />, and so on.
Of course the row numbers will be wrong as soon as a row is inserted or deleted in the middle of the table, so the only solution I can think of is to use JavaScript to update the row numbers of the entire table whenever the rows move about. The backend would then retrieve rows in order by scanning Request.Form for row0, row1, etc. until the element is null.
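For the backend scan, a minimal sketch of that loop (the "row" prefix matches the naming above):

    // Read renumbered row IDs back in display order; stop at the first gap.
    var rowIds = new System.Collections.Generic.List<string>();
    for (int i = 0; Request.Form["row" + i] != null; i++)
    {
        rowIds.Add(Request.Form["row" + i]);   // e.g. "RowID_555252"
    }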
Is this behaviour documented ...
Yes it is.
The overall algorithm is here: http://dev.w3.org/html5/spec/constraints.html#concept-form-submit and this defines that it uses a form data set built using the algorithm at http://dev.w3.org/html5/spec/constraints.html#constructing-the-form-data-set.
While that algorithm is quite complicated, in essence it says that the form elements will be put into the form data set in node order. That's not quite the same thing as the order in which they appear in the page HTML; for instance, the elements can be moved around by JavaScript.
There are further algorithms to turn the form data set into query strings or HTTP content but these too preserve the node order.
There are known to be web pages that depend on this order. (The HTML5 parser has a strange quirk where input elements of most types, placed inside tables but not inside table cells are ejected from the table through a process known as foster parenting, but input elements of type "hidden" are not ejected in this way. This happens because that's the only way to preserve the legacy submit ordering behaviour of browsers.)
...and consistent across browsers?
The whole algorithm of what gets submitted is definitely not consistent - for example, the submissions resulting from clicking on an input element of type "image" are known to vary significantly.
I believe that the order of the submitted elements may well be consistent across browser implementations. However, I would not rely on it being so, and encourage you to find a more robust solution.
Is this behaviour documented and consistent across browsers?
No, it is not documented, and it is not guaranteed to be consistent across browsers. The order you are seeing happens to be an implementation detail of the browser(s) you have used.
You could of course use the index, but you cannot assume that this will correspond to the order of the form elements. Furthermore, it is brittle - what happens if you add a new field at the start of the form? Your logic completely breaks.
I have a page where I am pulling a dataset from the database, a few thousand records. I get it when the page is loaded and store it in the cache. Each time an operation is performed on the page, I check the cache to see if it's still there, and if not, go get it again (20-minute expiration); fairly typical setup.
When I run the page, the initial data loads fine, and a default RowFilter is applied to the data. When I change the value of a dropdown (which changes the RowFilter), the page hangs for a moment, then returns a javascript error:
Line: 80772370 (yes, that's line 80 million...)
Char: 17
Error: Syntax error
Code: 0
URL: -the url of the page I'm on-
This error is repeated EXACTLY 20 times.
When I re-run the page and the operation that renders that error, I get a different line number (for example, the next time I ran it after I posted the above message, the line is at 80718666), exactly 20 times again.
Now a few curveballs:
I was having the exact same issues when I was using the Session to store the data rather than the cache.
I do not have this problem in the development environment (this is happening in QA). The web.config files for each environment are nearly identical; the primary difference between them is that QA uses a separate sessionState server. This is why I moved from Session variables to the cache in the first place.
When the search criteria are intended to return no results at all, the page performs as it should (shows no results).
Now this hasn't exactly been my best week, so maybe I'm missing something big, but I could use some guidance.
Thanks SO community.
If you use an UpdatePanel, remove it temporarily to see what the real error is, because right now the error is hidden inside a JavaScript return string, at the position you mention.
After you find your error, put the UpdatePanel back.
My guess is that the error is a null object/control that has been cached, and you forgot to check whether it is null.
When you cache parts of your page and controls, you need to check in your code-behind whether they are null before using them.
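A minimal sketch of that check (the cache key and reload method are assumptions, not your actual names):

    // Re-check the cached data on every use; the entry may have expired.
    DataSet data = Cache["ReportData"] as DataSet;
    if (data == null)
    {
        data = LoadReportData();   // hypothetical: reload from the database
        Cache.Insert("ReportData", data, null,
            DateTime.UtcNow.AddMinutes(20),   // matches your 20-minute expiration
            System.Web.Caching.Cache.NoSlidingExpiration);
    }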
I have always seen a lot of hidden fields used in web applications. I have worked with code that was written to use a lot of hidden fields, with the data values from the visible fields sent back and forth to them. But I fail to understand why the hidden fields are used: I can almost always think of ways to solve the same problem without them. How do hidden fields help in design?
Can anyone tell me what exactly is the advantage that hidden fields provide? Why are hidden fields used?
Hidden fields are just the easiest way; that is why they are used quite a bit.
Alternatives:
storing data in a session server-side (with sessionid cookie)
storing data in a transaction server-side (with transaction id as the single hidden field)
using URL path instead of hidden field query parameters where applicable
Main concerns:
the value of the hidden field cannot be trusted to not be tampered with from page to page (as opposed to server-side storage)
big data needs to be posted every time, which could be a problem, and is not possible for some data (for example, uploaded images)
Main advantages:
no sticky sessions that spill between pages and multiple browser windows
no server-side cleanup necessary (for expired data)
accessible to client-side scripts
Suppose you want to edit an object. Now it's helpful to put the ID into a hidden field. Of course, you must never rely on that value (i.e. make sure the user has appropriate rights upon insert/update).
Still, this is a very convenient solution. Showing the ID in a visible field (e.g. read-only text box) is possible, but irritating to the user.
Storing the ID in a session / cookie is prohibitive, because it disallows multiple opened edit windows at the same time and imposes lifetime restrictions (session timeout leads to a broken edit operation, very annoying).
Using the URL is possible, but breaks design rules, i.e. use POST when modifying data. Also, since it is visible to the user it creates uglier URLs.
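Putting that together, a minimal sketch (the control name and helper methods are assumptions):

    // Markup: <asp:HiddenField ID="hfCustomerId" runat="server" />
    protected void Save_Click(object sender, EventArgs e)
    {
        int id = int.Parse(hfCustomerId.Value);
        if (!CurrentUserMayEdit(id))            // never trust the posted value
            throw new UnauthorizedAccessException();
        UpdateCustomer(id);                     // hypothetical update logic
    }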
Most typical use I see/use is for IDs and other things that really don't need to be on the page for any reason other than that they're needed at some point to be sent back to the server.
-edit, should've included more detail-
Say, for instance, you have some object you want to update -- the UI sends back a collection of values, and the server at that point may or may not know "hey, this is a customer object." So you fire off a request to the server saying "hey, give me ID 7," and now you have your customer object as the system knows it. The updates are applied, validated, whatever, and now your UI gets the completed result.
I guess a good excuse/argument is using LINQ. Try to update an object in LINQ without getting it from the DB first. It has no real idea that it's something it can keep track of until you get the full object.
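A sketch of what that means in LINQ to SQL (the DataContext and entity names are assumed): the ID you kept in a hidden field is what lets the server fetch a tracked object before applying the posted values.

    using (var db = new ShopDataContext())      // hypothetical DataContext
    {
        var customer = db.Customers.Single(c => c.Id == postedId);   // ID from the hidden field
        customer.Name = postedName;             // apply values posted by the UI
        db.SubmitChanges();                     // change tracking issues the UPDATE
    }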
Here's one reason: it's a convenient way of passing data between client code (JavaScript) and the server side.
There are many useful scenarios.
One is to "store" some data on a page which should not be entered by a user. For example, store the user ID when generating a page; this value will then be auto-submitted with the form back to the server.
One other scenario is security. Add some hidden token to the page and check for its existence on the server. This will help identify whether a form was submitted via the browser or by some bot which just posted to some URL on your site.
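A minimal sketch of such a token in Web Forms (the control name is an assumption):

    // Markup: <asp:HiddenField ID="hfToken" runat="server" />
    protected void Page_Load(object sender, EventArgs e)
    {
        if (!IsPostBack)
        {
            string token = Guid.NewGuid().ToString("N");
            Session["FormToken"] = token;       // remember what we issued
            hfToken.Value = token;              // travels with the form
        }
        else if (hfToken.Value != (string)Session["FormToken"])
        {
            throw new HttpException(403, "Form token mismatch.");   // likely a bot
        }
    }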
It keeps things out of the URL (as in the querystring) so it keeps that clean. It also keeps things out of Session that may not necessarily need to be in there.
Other than that, I can't think of too many other benefits.
They are generally used to store state as an interaction progresses. Cookies could be used instead, but some people disable them. Could also use a single hidden field to point at server-side state, but then there are session-stickiness issues.
If you are using a hidden field in the form, you are increasing the weight of the form by including an extra control.
If there is no need for a hidden field, you shouldn't use one: it is not advisable from a security point of view, and it is not good programming practice because it also affects the performance of the application.
In pages that have a viewstate that spans 10-15KB what would be an optimal value for
<pages maxPageStateFieldLength="">
in the web.config in order to reduce the risk of potential truncation leading to viewstate validation errors?
The maxPageStateFieldLength doesn't limit the size of the view state, so it won't be truncated.
What it does is simply control how much of the view state will be placed in one hidden field. If the view state is larger than the maxPageStateFieldLength value the view state will simply be split into several hidden fields.
The default value is -1, meaning that the view state is put in a single hidden field, which is normally optimal.
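For example, to split a 10-15KB view state into chunks of roughly 4KB each (the value is purely illustrative, not a recommendation):

    <system.web>
      <pages maxPageStateFieldLength="4000" />
    </system.web>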
I am currently running into a problem where we are getting "HttpException Due to Invalid Viewstate" errors for only Firefox users. I am also seeing "System.Web.HttpException: An error occurred while communicating with the remote host. The error code is 0x80070001" exceptions paired up with the viewstate ones.
I believe that Firefox has possibly got a max size on (possibly) hidden form fields -- I need to verify this. I'm now looking to chunk up the viewstate using maxPageStateFieldLength to hopefully resolve this (in the short term). The longer-term solution is to refactor the aspx page to do a paging query for the grid (instead of pulling down all the rows in one go - which is not a good thing to do).
IMHO, you should put in a maximum that is fairly large but not unlimited. I'd say 1MB is a start.