Request.Form treats Newlines differently in TextArea in ASP .NET - asp.net

I have two pages which have text area controls on them. When the user submits one page, newlines are treated as char(13) + char(10). But on the other page, newlines are treated as char(10). I've confirmed this by looking at the Request.Form dictionary.
The two pages are hosted in the same ASP .NET 4.0 Web Forms application, and the pages look exactly the same from a markup perspective. I'm logged in as the same user in the same browser.
When I use JavaScript in the browser to check for char(10) and char(13) in the control, both pages contain only a char(10).
It seems as if IIS/ASP.NET is configured to handle form requests differently on the two pages, but I can't figure out where the difference would be. What causes this behavior?

Different operating systems use different character sequences to represent a new line.
On Windows it is CR + LF, on Linux it is LF, and on classic Mac OS it is CR.
CR = Carriage Return
LF = Line Feed
You can see the line-ending characters if you copy/paste the text into Notepad++ and select View > Show all characters.
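Whatever the cause, the safe approach on the server is to normalize line endings as soon as you read the value, so it no longer matters whether a page delivered CR+LF or LF. A minimal, self-contained sketch (plain string literals stand in for the two Request.Form values, so the content shown is an assumption):

using System;

class NewlineNormalizationDemo
{
    static void Main()
    {
        // Simulated Request.Form values: one page posted CR+LF, the other LF only.
        string fromPageA = "line one\r\nline two";
        string fromPageB = "line one\nline two";

        // After normalization both values compare equal.
        Console.WriteLine(Normalize(fromPageA) == Normalize(fromPageB)); // True
    }

    // Collapse CR+LF (and any stray CR) down to LF.
    static string Normalize(string s)
    {
        return s.Replace("\r\n", "\n").Replace("\r", "\n");
    }
}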

Related

Visual FoxPro 9.0 report show unicode

I am using Visual FoxPro 9 and I want to print Unicode characters in a report (frx).
There are some ways to extend the report listener to show Unicode. I need code that extends the ReportListener to show Unicode.
I've never had to work with Unicode within VFP either, or spent any time working with Reports, but the Help for the Render method of the ReportListener does mention Unicode:
cContentsToBeRendered
Indicates the text to be rendered for Expression (Field) and Label layout elements.
For Picture layout elements sourced from a file, cContentsToBeRendered contains the filename.
When specifying a filename for an image, ReportListener provides cContentsToBeRendered
as a DBCS string, which is the standard format for strings in Visual FoxPro.
However, when indicating text to be rendered, ReportListener provides
cContentsToBeRendered as a Unicode string, appropriately translated to the correct
locale using any regional script information associated with this layout control in
its report definition file (frx) record.
If your derived class sends the text value through some additional processing, such as
storage in a table, you can use the STRCONV() function, and its optional regional
script parameter, to convert the string to DBCS first. For more information, see
STRCONV( ) Function.
I could be incorrect, but I believe VFP does NOT support Unicode and only works with the base ASCII character set. Then again, I've never needed to use Unicode either, and I have used FoxPro since the beginning of its lifetime.
I would imagine Rick Strahl's article Using Unicode in Visual FoxPro
Web and Desktop Applications would be fairly definitive on the topic.

Testing htmlencode issue in asp.net application

In our application, developed in HTML5 and JavaScript, whenever a user submits the form (which contains a comments text field) with text containing <, >, or #, we get the following error:
{"Message":"A potentially dangerous Request.Form value was detected from the client ......
The dev team has fixed this issue, saying that they handled it on the server side.
Now I want to test different scenarios to make sure this issue won't come back, including other special characters.
Can anyone suggest scenarios I can test here, apart from entering the special characters in the comments text box and submitting the form?
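For building test cases, here is a rough sketch of inputs that have historically been able to trip ASP.NET request validation; the exact rules are internal to the framework and vary by version, so treat these strings as starting points rather than a definitive list:

using System;

class RequestValidationTestInputs
{
    static void Main()
    {
        // Candidate comment values to submit through the form; each has been known
        // to trigger "A potentially dangerous Request.Form value was detected"
        // in at least some ASP.NET versions (a sketch, not a specification).
        string[] candidates =
        {
            "<script>alert(1)</script>",   // classic script tag
            "<b>bold</b>",                 // any element-looking markup
            "</div>",                      // closing tag
            "<!-- comment -->",            // markup comment
            "<?php echo 1 ?>",             // '<' followed by '?'
            "&#60;script&#62;",            // numeric character references ("&#")
        };

        foreach (string candidate in candidates)
            Console.WriteLine(candidate);
    }
}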

Why is ASP.NET 4 / IIS7 html-encoding my query string?

We've switched one of our test environments to using .NET 4 on IIS7. Production is using .NET 2.
Certain urls, such as
http://www.example.com/page.aspx?param1=<foo>&param2=<foo>
Aren't getting caught by our string-index code that looks for < or > in Request.Url.ToString(). Why? Because they're showing up as &lt;foo&gt; when we check. This worked in .NET 2.
What is going on?
NOTE: there are no mistakes in the formatting. I really mean HTML encode.
All data in a query string needs to be URL-encoded in order to be parsed correctly, so if you want to get back what you entered you need to URL-decode the query string:
Server.UrlDecode(Request.QueryString.ToString());
http://msdn.microsoft.com/en-us/library/6196h3wt.aspx :
URL encoding ensures that all browsers will correctly transmit text in URL strings. Characters such as a question mark (?), ampersand (&), slash mark (/), and spaces might be truncated or corrupted by some browsers. As a result, these characters must be encoded in tags or in query strings where the strings can be re-sent by a browser in a request string.
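As an illustration, here is a small diagnostic sketch (a hypothetical Web Forms page; the properties used are standard ASP.NET, but the page itself and the write-out are assumptions) that puts the raw and decoded views of the same request side by side:

using System;
using System.Web;

// Hypothetical diagnostic page: compare what the string-index check inspects
// (Request.Url.ToString()) with the still-encoded query and its decoded form.
public partial class QueryDebugPage : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        string full = Request.Url.ToString();          // the value the existing check looks at
        string rawQuery = Request.Url.Query;           // query portion, still %-encoded
        string decoded = Server.UrlDecode(rawQuery);   // what the user actually typed

        Response.ContentType = "text/plain";
        Response.Write(full + Environment.NewLine);
        Response.Write(rawQuery + Environment.NewLine);
        Response.Write(decoded);
    }
}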

Is IIS performing an illegal character substitution? If so, how to stop it?

Context: ASP.NET MVC running in IIS, with a UTF-8 %-encoded URL.
Using the standard project template, and a test-action in HomeController like:
public ActionResult Test(string id)
{
    return Content(id, "text/plain");
}
This works fine for most %-encoded UTF-8 routes, such as:
http://mydevserver/Home/Test/%e4%ba%ac%e9%83%bd%e5%bc%81
with the expected result 京都弁
However using the route:
http://mydevserver/Home/Test/%ee%93%bb
the url is not received correctly.
Aside: %ee%93%bb is %-encoded code-point 0xE4FB; basic-multilingual-plane, private-use area; but ultimately - a valid unicode code-point; you can verify this manually, or via:
string value = ((char) 0xE4FB).ToString();
string encoded = HttpUtility.UrlEncode(value); // %ee%93%bb
Now, what happens next depends on the web-server; on the Visual Studio Development Server (aka cassini), the correct id is received - a string of length one, containing code-point 0xE4FB.
If, however, I do this in IIS or IIS Express, I get a different id, specifically "î“»", code-points: 0xEE, 0x201C, 0xBB. You will immediately recognise the first and last as the start and end of our percent-encoded string... so what happened in the middle?
Well: byte 0x93 in the Windows-1252 (ANSI) code page is “, the left double quotation mark, which is Unicode code-point 0x201C.
It looks to me very much like IIS has performed some kind of quote-translation when processing my url. Now maybe this might have uses in a few scenarios (I don't know), but it is certainly a bad thing when it happens in the middle of a %-encoded UTF-8 block.
Note that HttpContext.Current.Request.RawUrl also shows this translation has occurred, so this does not look like an MVC bug; note also Darin's comment, highlighting that it works differently in the path vs the query portion of the url.
So (two-parter):
is my analysis missing some important subtlety of unicode / url processing?
how do I fix it? (i.e. make it so that I receive the expected character)
id = Encoding.UTF8.GetString(Encoding.Default.GetBytes(id));
This will give you your original id.
IIS uses the Default (ANSI) encoding for path characters. Your URL-encoded string is decoded using that encoding, which is why you're getting a weird string back.
To get the original id you can convert it back to bytes and then get the string using the UTF-8 encoding.
See Unicode and ISAPI Filters
ISAPI Filter is an ANSI API - all values you can get/set using the API
must be ANSI. Yes, I know this is shocking; after all, it is 2006 and
everything nowadays are in Unicode... but remember that this API
originated more than a decade ago when barely anything was 32bit, much
less Unicode. Also, remember that the HTTP protocol which ISAPI
directly manipulates is in ANSI and not Unicode.
EDIT: Since you mentioned that it works with most other characters, I'm assuming that IIS has some sort of encoding-detection mechanism which is failing in this case. As a workaround, though, you can prefix your id with this char and then easily detect whether the problem occurred (if the char is missing). Not an ideal solution, but it will work. You can then write a custom model binder and a wrapper class in ASP.NET MVC to make your consumption code cleaner.
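A sketch of that custom-model-binder idea; the class name and the parameter-level registration are illustrative (not an existing API), and it simply applies the same byte round-trip shown above:

using System.Text;
using System.Web.Mvc;

// Illustrative model binder: undo IIS's ANSI decode of the path segment by
// round-tripping the bound string through the default code page and UTF-8.
public class Utf8PathStringBinder : IModelBinder
{
    public object BindModel(ControllerContext controllerContext,
                            ModelBindingContext bindingContext)
    {
        var result = bindingContext.ValueProvider.GetValue(bindingContext.ModelName);
        if (result == null || string.IsNullOrEmpty(result.AttemptedValue))
            return result == null ? null : result.AttemptedValue;

        byte[] ansiBytes = Encoding.Default.GetBytes(result.AttemptedValue);
        return Encoding.UTF8.GetString(ansiBytes);
    }
}

Usage (also illustrative):

public ActionResult Test([ModelBinder(typeof(Utf8PathStringBinder))] string id)
{
    return Content(id, "text/plain");
}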
Once upon a time, URLs themselves were not in UTF-8; they were in the ANSI code page. This reflects the fact that they are often used to select, well, pathnames in the server's file system. In ancient times, IE had an option to control whether you wanted to send UTF-8 URLs or not.
Perhaps buried in the bowels of the IIS config there is a place to specify the URL encoding, and perhaps not.
Ultimately, to get around this, I had to use request.ServerVariables["HTTP_URL"] and some manual parsing, with a bunch of error-handling fallbacks (additionally compensating for some related glitches in Uri). Not great, but only affects a tiny minority of awkward requests.
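For reference, a rough sketch of that workaround; HTTP_URL is a standard IIS server variable, but the parsing, the fallback, and the helper name here are assumptions, and the real error handling is omitted:

using System.Text;
using System.Web;

// Rough sketch: read the URL as the client sent it (still %-encoded) from the
// HTTP_URL server variable and percent-decode it as UTF-8 ourselves.
public static class RawUrlHelper
{
    public static string GetUtf8Path(HttpRequest request)
    {
        string raw = request.ServerVariables["HTTP_URL"];
        if (string.IsNullOrEmpty(raw))
            return request.Path;                              // fallback: whatever IIS decoded

        int queryStart = raw.IndexOf('?');
        string pathOnly = queryStart >= 0 ? raw.Substring(0, queryStart) : raw;

        return HttpUtility.UrlDecode(pathOnly, Encoding.UTF8);
    }
}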

when assigning location.href, please explain url encoding (in asp.net and firefox)

In some javascript, I have:
var url = "find.aspx?" + "location=" + encodeURIComponent( address );
alert( url );
location.href = url;
where the value of address is the string "Seattle, WA".
In the alert I see
find.aspx?location=Seattle%2C%20WA
as I expect.
But on the server side, when I look at Request.Url, the relevant substring I see is
find.aspx?location=Seattle, WA
And in the Firefox url window I see
find.aspx?location=Seattle%2C WA
So I'm getting three different representations whereas I would expect that in all three places I should see what I see in the alert. My expectation is that the url I assign to location.href should show up as-is in the browser url window, and should be passed as-is to the server in Request.Url (and I would need to decode the values on the server before using them). What's happening?
Firefox converts certain encoded characters into their literal forms as a way to be friendly to users. It will also convert spaces typed into the address bar into %20 for the server.
Update: The reason Firefox doesn't display the comma unencoded is because commas are allowed in URLs, but spaces are not, so it knows that a space is going to be unambiguously interpreted, whereas the pre-encoded comma is different from a non-encoded comma to some servers. see: Can I use commas in a URL?
ASP is probably trying to help you out by auto-un-encoding the string for you.
Update: It looks like ASP.NET unencodes Request.Url for you by default, as mentioned here: QueryString malformed after URLDecode. They also mention that you can use HttpRequest.Url.Query to access the un-decoded version.
The alert is the only thing not doing any "magic" for you.
For the alert, you are doing the encoding yourself. It would probably look the same as on the server side if you removed encodeURIComponent.
On the server side, ASP.NET will always show you the unencoded form. This is to make it easier to directly map to files that also have text that needed to be (un)encoded.
Note that you can replace every letter with its UTF-8 representation in URL encoding and it will still be the same URL. I.e., type the following in the browser window and it will still work: %66%69%6E%64.aspx?location=Seattle%2C%20WA. To encode only the necessary characters, use UrlEncode on the server side if you create a link yourself.
URL encoding can become fairly tricky. You asked for an explanation. To know the correct escape for a certain character, you need to know how that character looks in UTF-8. The hexadecimal value of each UTF-8 byte then becomes a %XX escape for your character. Sometimes it is a single %XX, but a character can take up to four bytes (most Chinese characters take three, for instance).
URL encoding works one way only: never double-encode or double-decode. This is prohibited by the specification. Also, because you can encode any character, it is not always possible (as you found out) to round-trip encoding and decoding. If you decode and re-encode, it is quite possible that the resulting string is different, yet syntactically equivalent.
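A small sketch tying those two points together: a %XX escape is just the hex of the character's UTF-8 bytes, and decoding followed by re-encoding can produce a different but equivalent string. The character 京 and the Seattle query are borrowed from examples elsewhere on this page; the code assumes a reference to System.Web:

using System;
using System.Text;
using System.Web;

class UrlEncodingDemo
{
    static void Main()
    {
        // The escape of a character is the hex of its UTF-8 bytes:
        // '京' is U+4EAC, whose UTF-8 bytes E4 BA AC give %E4%BA%AC.
        byte[] bytes = Encoding.UTF8.GetBytes("京");
        Console.WriteLine(string.Concat(Array.ConvertAll(bytes, b => "%" + b.ToString("X2"))));

        // Decode + re-encode is not guaranteed to reproduce the original text:
        // the %20 comes back as '+', a different but equivalent encoding of a space.
        string original = "Seattle%2C%20WA";
        string decoded = HttpUtility.UrlDecode(original);    // Seattle, WA
        string reencoded = HttpUtility.UrlEncode(decoded);   // Seattle%2c+WA
        Console.WriteLine(decoded);
        Console.WriteLine(reencoded);
    }
}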
In HTML, URL encoding is sometimes combined with HTML encoding. I.e., the ampersand is valid in a URL, but not in HTML: find.aspx?city=A&name=B becomes find.aspx?city=A&amp;name=B when written into an HTML document. However, browsers are lenient and will accept wrongly HTML-encoded strings.
Finally, a note on the browser: if you type a space into a link, even inside an <a> tag, it will escape the space (or other character) for you. Likewise, browsers nowadays show the odd characters (é, ï etc.) in the address bar, but when the request goes out over HTTP, the browser will correctly do the encoding for you.
Update: about answering your question of needing a "definitive" reference or proof.
While I couldn't find any on the internet, I decided to look for it myself using Reflector. Going through the methods that set, for instance, the HttpRequest.QueryString, you quickly encounter the private method HttpRequest.FillInQueryStringCollection which then calls HttpValueCollection.FillfromEncodedBytes. Somewhat near the end of that method, HttpUtility.UrlDecode is called for the values. Conclusion: do not call it yourself, to prevent double decoding.
You can see this for yourself when you download Reflector and disassemble the .NET libs of System.Web.
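To see the same thing without Reflector, here is a short sketch (a hypothetical HTTP handler; the point is only that Request.QueryString values arrive already decoded, so decoding them again risks mangling legitimate input):

using System;
using System.Web;

// Hypothetical handler: Request.QueryString["location"] is already decoded by
// ASP.NET, so a second UrlDecode is redundant and can corrupt values that
// legitimately contain '+' or '%' sequences.
public class QueryEchoHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // e.g. request: find.aspx?location=Seattle%2C%20WA
        string once = context.Request.QueryString["location"];   // "Seattle, WA"
        string twice = HttpUtility.UrlDecode(once);               // no-op here, harmful for '+'/'%'

        context.Response.ContentType = "text/plain";
        context.Response.Write(once + Environment.NewLine + twice);
    }

    public bool IsReusable
    {
        get { return false; }
    }
}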
For your example you can change this line
var url = "find.aspx?" + "location=" + encodeURIComponent( address );
to
var url = "find.aspx?" + "location=" + address;
and see the address as it is. But if the address variable contains an '&' character, your query string will be corrupted, so you use encodeURIComponent to encode those parts of the URL.
On the server side all these encoded strings are decoded back. In other words, encodeURIComponent is just there so that the address variable (whether it contains an '&' character or not) reaches the server side correctly.
