I've noticed that some ASP.NET sites have the following tacked onto some of their internal URLs:
somePage.aspx?enc=looks_like_a_base_64_encoded_string_here=
Any idea what purpose it serves? I've tried passing it through a base-64 decoder, but the result isn't human-readable. It looks like it's usually 64 bytes, though.
Just wondering!
I imagine it is a set of query string parameters; Googling around for examples of this behavior, I stumbled on this blog post that talks about encrypting your query string parameters.
There is nothing special in .NET for a query string parameter named "enc".
Most likely a number of devs are just using the same mechanism for encoding query string parameters.
Who knows, maybe someone started calling this the "Encoded Query String Pattern" ;)
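For illustration, here's a minimal sketch of what such a scheme might look like (everything here is hypothetical, and the key handling in particular is deliberately oversimplified):

    using System;
    using System.Security.Cryptography;
    using System.Text;

    // Hypothetical helper: packs a query string like "userId=100&page=2"
    // into a single opaque value for ?enc=..., which is why the result
    // looks like Base64, trailing '=' padding and all.
    static class QueryStringCrypto
    {
        // A real application would load the key and IV from protected
        // configuration rather than hard-coding zeroed arrays.
        static readonly byte[] Key = new byte[32]; // 256-bit AES key
        static readonly byte[] IV = new byte[16];  // 128-bit IV

        public static string Encrypt(string queryString)
        {
            using (var aes = Aes.Create())
            using (var enc = aes.CreateEncryptor(Key, IV))
            {
                byte[] plain = Encoding.UTF8.GetBytes(queryString);
                byte[] cipher = enc.TransformFinalBlock(plain, 0, plain.Length);
                return Convert.ToBase64String(cipher);
            }
        }

        public static string Decrypt(string encValue)
        {
            using (var aes = Aes.Create())
            using (var dec = aes.CreateDecryptor(Key, IV))
            {
                byte[] cipher = Convert.FromBase64String(encValue);
                byte[] plain = dec.TransformFinalBlock(cipher, 0, cipher.Length);
                return Encoding.UTF8.GetString(plain);
            }
        }
    }

Note that raw Base64 can contain '+' and '/' characters, so the value would still need URL-encoding before going into a query string.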
I have solved a problem with a solution I found here on SO, but I am curious whether another idea I had is as bad as I think it might be.
I am debugging a custom security Attribute we have on/in several of our controllers. The Attribute currently redirects unauthorized users using a RedirectResult. This works fine except when calling the methods with Ajax. In those cases, the error returned to our JS consists of a text string of all the HTML of our error page (the one we redirect to) as well as the HTTP code and text 200/OK. I have solved this issue using the "IsAjaxRequest" method described in the answer to this question. Now I am perfectly able to respond differently to Ajax calls.
Out of curiosity, however, I would like to know what pitfalls might exist if I were to instead have solved the issue by doing the following. To me it seems like a bad idea, but I can't quite figure out why...
The ActionExecutingContext ("filterContext") has an HttpContext, which has a Request, which in turn has an AcceptTypes string collection. I notice that on my Ajax calls, which expect JSON, the value of filterContext.HttpContext.Request.AcceptTypes[0] is "application/json". I am wondering what might go wrong if I were to check this string against one or more expected content types and respond to them accordingly. Would this work, or is it asking for disaster?
I would say it works perfectly, and I have been using that approach for years.
The whole point of request headers is to let the client tell the server what it accepts and expects.
I suggest you read more here about Web API and how it uses exactly that technique.
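By way of illustration, here is a minimal sketch of an MVC action filter that branches on the accept types, as discussed above (the attribute name, the redirect target, and the authorization check are all hypothetical):

    using System;
    using System.Linq;
    using System.Net;
    using System.Web.Mvc;

    // Hypothetical sketch: inspect the request's Accept types inside a
    // filter and answer Ajax/JSON callers differently from browsers.
    public class JsonAwareSecurityAttribute : ActionFilterAttribute
    {
        public override void OnActionExecuting(ActionExecutingContext filterContext)
        {
            bool authorized = false; // the real security check would go here
            if (authorized)
                return;

            string[] acceptTypes = filterContext.HttpContext.Request.AcceptTypes;
            bool wantsJson = acceptTypes != null
                && acceptTypes.Any(t => t.StartsWith("application/json", StringComparison.OrdinalIgnoreCase));

            if (wantsJson)
            {
                // Answer the Ajax caller with a real 401 and a JSON body
                // instead of a 200/OK full of error-page HTML.
                filterContext.HttpContext.Response.StatusCode = (int)HttpStatusCode.Unauthorized;
                filterContext.Result = new JsonResult
                {
                    Data = new { error = "unauthorized" },
                    JsonRequestBehavior = JsonRequestBehavior.AllowGet
                };
            }
            else
            {
                filterContext.Result = new RedirectResult("~/Error/Unauthorized");
            }
        }
    }

One caveat: clients don't always put the interesting type first (browsers commonly lead with text/html or */*), so checking AcceptTypes[0] alone is brittle; scanning the whole list, or combining it with Request.IsAjaxRequest(), is safer.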
I want to use the Microsoft AntiXss library for my project. When I use the Microsoft.Security.Application.Encoder.HtmlEncode(str) function to safely show some value in my web page, it encodes Farsi characters, which I consider safe. For instance, it converts لیست to &#1604;&#1740;&#1587;&#1578;. Am I using the wrong function? How can I print the user input in my page safely?
I'm currently using it like this:
<h2>@Encoder.HtmlEncode(ViewBag.UserInput)</h2>
I think I messed up! Razor views encode values unless you use @Html.Raw, right? Well, I had already encoded the string, and Razor encoded it again. So in the end it just got encoded twice, hence the weird-looking characters (the numeric Unicode values)!
If your encoding (let's assume it's Unicode by default) supports Farsi, it's almost always safe to use Farsi in ASP.NET MVC without any additional effort.
First of all, escape-on-input is just wrong: you've taken some input and applied a transformation that is totally irrelevant to that data. It's generally wrong to encode your data immediately after you receive it from the user. You should store the data in its raw form in your database and encode it only when you display it to the user, according to the possible vulnerabilities of the current output context. For example, the 'dangerous' HTML characters are not 'dangerous' for SQL or Android etc., and that's one of the main reasons why you shouldn't encode the data when you store it on the server. One more reason: when you HTML-encode a string you can end up with 6-7 times more characters, which can be a problem with server-side constraints on string length. When you store the data in SQL Server you should escape, validate, and sanitize it only for that context and guard only against its vulnerabilities (like SQL injection).
Now, for ASP.NET MVC and Razor, you don't need to HTML-encode your strings because it's done by default, unless you use Html.Raw(); you should generally avoid that (or HTML-encode yourself when you do use it). Also, if you double-encode your data you'll end up with corrupted output :)
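To make the double-encoding failure concrete, a small sketch (the encoded output shown in the comments is representative):

    using Microsoft.Security.Application;

    string input = "لیست";

    // First pass: AntiXss replaces the Farsi characters with numeric
    // entities, roughly: &#1604;&#1740;&#1587;&#1578;
    string once = Encoder.HtmlEncode(input);

    // If this pre-encoded string is then written out with Razor's normal
    // @ syntax, Razor encodes it again, escaping the ampersands:
    // &amp;#1604;&amp;#1740;... so the browser displays the literal
    // entity text instead of the Farsi word.

So the fix is to pick one encoder: either write @ViewBag.UserInput and let Razor encode it, or pre-encode and emit the result with @Html.Raw.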
I hope this helps clear things up.
I have looked through lots of posts and have not been successful in determining how to get rid of the pesky "d" in the response coming from my ASMX web service, as in {"d":{"Response":"OK","Auth-Key":"JKPYZFZU"}}.
This is being created by my method, public Dictionary UserDevice, when it returns the Dictionary object.
I would be perfectly happy if the damn thing just wouldn't put it all into the d object!
Basically, JSON array notation ['hello'] is valid JavaScript by itself, whereas JSON object notation {'d': ['hello'] } is not valid JavaScript by itself. The consequence is that the array notation is executable, which opens up the possibility of XSS attacks. Wrapping your data in an object by default helps prevent this.
You can read more about why it's there in a post by Dave Ward. (Edit: as pointed out by @user1334007, Chrome flags that site as unsafe now.)
A comment by Dave Reed on that article is particularly informative:
It’s one of those security features that has a very easy-to-misunderstand purpose. The protection isn’t really against accidentally executing the alert in your example. Although that is one benefit of ‘d’, you’d still have to worry about that while evaluating the JSON to convert it to an object.
What it does do is prevent the JSON response from being wholesale executed as the result of an XSS attack. In such an attack, the attacker could insert a script element that calls a JSON webservice, even one on a different domain, since script tags support that. And, since it is a script tag after all, if the response looks like javascript it will execute as javascript. The same XSS attack can overload the object or array constructors (among other possibilities) and thereby get access to that JSON data from the other domain.
To successfully pull that off, you need (1) an XSS-vulnerable site (good.com) — any site will do, (2) a JSON webservice that returns a desired payload on a GET request (e.g. bank.com/getaccounts), (3) an evil location (evil.com) to which to send the data you captured from bank.com while people visit good.com, and (4) an unlucky visitor to good.com who just happened to be logged into bank.com using the same browser session.
Protecting your JSON service from returning valid javascript is just one thing you can do to prevent this. Disallowing GET is another (script tags always do GET). Requiring a certain HTTP header is another (script tags can’t set custom headers or values). The webservice stack in ASP.NET AJAX does all of these. Anyone creating their own stack should be careful to do the same.
You are probably using some kind of framework that automatically wraps your web service JSON responses with the d element.
I know that Microsoft's JSON serializer adds the d on the server side, and the client-side AJAX code that deserializes the JSON string expects it to be there.
I think jQuery works this way too.
You can read a little more about this at Rick Strahl's blog.
And there is a way for you to return pure JSON (without the 'd' element) using the WCF "Raw" programming model.
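For reference, a minimal ASMX sketch of the setup that produces the wrapped response described in the question (class and namespace names hypothetical):

    using System.Collections.Generic;
    using System.Web.Script.Services;
    using System.Web.Services;

    [WebService(Namespace = "http://example.com/")]
    [ScriptService] // this is what enables JSON responses for script callers
    public class DeviceService : WebService
    {
        [WebMethod]
        [ScriptMethod(ResponseFormat = ResponseFormat.Json)]
        public Dictionary<string, string> UserDevice()
        {
            // The ASP.NET AJAX stack serializes this return value as
            // {"d":{"Response":"OK","Auth-Key":"JKPYZFZU"}}
            return new Dictionary<string, string>
            {
                { "Response", "OK" },
                { "Auth-Key", "JKPYZFZU" }
            };
        }
    }

On the client side, you simply unwrap it: the payload is response.d.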
I have been wondering how TinyURL works.
I would like to develop something similar for my site, but like most people, I use GUIDs for IDs. When an object is created, should I then generate a 10-character random string to use as the public ID, or is there a smarter approach?
Example of old url: www.mysite.com/default.aspx?userId={id}
Example of new url: www.mysite.com/pwzd4r9niy
You can use any kind of random string generator or GUID for this. I don't think there is a much smarter approach. (Palantir offers a nice alternative though: hashing the incoming URL.)
The rest is relatively straightforward: keep a database table with IDs and target URLs; when a request comes in, look up the ID and do a header redirect to the target URL.
More discussion in this blog post.
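A minimal sketch of that lookup-and-redirect flow (the handler, the ID alphabet, and the database call are all hypothetical):

    using System;
    using System.Web;

    // Hypothetical handler: maps www.mysite.com/pwzd4r9niy to its target.
    public class ShortUrlHandler : IHttpHandler
    {
        const string Alphabet = "abcdefghijklmnopqrstuvwxyz0123456789";
        static readonly Random Rng = new Random();

        // Generates a 10-character public ID like "pwzd4r9niy"; a real
        // system would check the table for collisions before saving it.
        public static string NewShortId()
        {
            var chars = new char[10];
            for (int i = 0; i < chars.Length; i++)
                chars[i] = Alphabet[Rng.Next(Alphabet.Length)];
            return new string(chars);
        }

        public void ProcessRequest(HttpContext context)
        {
            string id = context.Request.Url.AbsolutePath.Trim('/');
            string target = LookupTargetUrl(id);
            if (target != null)
                context.Response.Redirect(target, true); // the header redirect
            else
                context.Response.StatusCode = 404;
        }

        public bool IsReusable { get { return true; } }

        static string LookupTargetUrl(string id)
        {
            // SELECT TargetUrl FROM ShortUrls WHERE PublicId = @id
            return null; // database lookup omitted in this sketch
        }
    }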
There are also redirection services out there now that use words from a dictionary to build a URL.
Sadly, EvilURL is gone! It used to create "short" URLs like
http://evilURL.com/donkey_porn-shotguns/cracking-virus-exploit
It was the only URL redirection service that was really worthwhile. :)
And, as a bit of trivia, http://to is the shortest redirection service (and, I think, the shortest web URL) known to man.
Just hash the entire string, to a reasonable length.
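A sketch of that idea, truncating an MD5 of the URL (hypothetical; truncation increases the collision risk, so a real system would still check for duplicates):

    using System;
    using System.Security.Cryptography;
    using System.Text;

    static class UrlHasher
    {
        // Hashes the full URL and keeps only the first few characters,
        // mapped onto a lowercase alphanumeric alphabet.
        public static string ShortHash(string url, int length)
        {
            const string alphabet = "abcdefghijklmnopqrstuvwxyz0123456789";
            using (var md5 = MD5.Create())
            {
                byte[] hash = md5.ComputeHash(Encoding.UTF8.GetBytes(url));
                var sb = new StringBuilder();
                foreach (byte b in hash)
                    sb.Append(alphabet[b % alphabet.Length]);
                return sb.ToString(0, length); // e.g. length = 10
            }
        }
    }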
I am curious whether it is out-of-date to use the query string for an ID. We have a webapp running on .NET 2.0. When we display the detail of something (say, a product) we use a query string like this: http://www.somesite.com/Shop/Product/Detail.aspx?ProductId=100
We use the query string so that users can save the link somewhere and come back any time later. I suppose we will use URL rewriting sooner or later, but in the meantime I would like to know your opinion. Thanks. Cheers, X.
A common strategy is to use an item ID in the URL, coupled with some keywords that describe the item. This is good from a user's perspective, because they can easily see what a URL refers to if they save it somewhere. More importantly, it's useful from a SEO (Search Engine Optimisation) point of view, as search engines will - it is said - rate a given URL more highly if it contains the keywords someone is searching for.
You can see this approach on this very site, where the ID after 'questions' is used for the database query and the text is purely for the benefit of users and search engines.
Whether you use a straightforward query string, or a more advanced approach that makes the ID look like part of the folder path, is up to you. It's largely a matter of personal taste.
Yes, it is old-fashioned!
However, if you are thinking about changing it to a RESTful implementation as others have suggested, then you should continue to support the old querystring addresses by implementing an HTTP 301 redirect that forwards from the querystring URLs to the new RESTful URLs. This will ensure that any user's old links and bookmarks continue to work, while telling the search engine bots that the URL has changed.
Since your post is tagged ASP.Net, there is a good write-up on how you can support both, using the new ASP.Net routing mechanism here: http://msdn.microsoft.com/en-us/magazine/dd347546.aspx
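A sketch of the 301 half of that advice, in the code-behind of the old query string page (the route shape and the slug lookup are hypothetical):

    using System;
    using System.Web.UI;

    // Old Detail.aspx: permanently redirect to the new RESTful URL so
    // bookmarks keep working and search engines update their index.
    public partial class Detail : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            string productId = Request.QueryString["ProductId"];
            if (string.IsNullOrEmpty(productId))
                return;

            string slug = LookupProductSlug(productId);
            Response.Status = "301 Moved Permanently";
            Response.AddHeader("Location", "/products/" + productId + "/" + slug);
            Response.End();
        }

        static string LookupProductSlug(string id)
        {
            return "some-product-name"; // placeholder for a real lookup
        }
    }

(.NET 2.0 has no Response.RedirectPermanent, which only arrived in .NET 4.0, hence the manual status line and Location header.)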
Nothing wrong with query string parameters. They are simple to create and understand. A lot of sites use fancy URLs like 'www.somesite.com/Shop/Product/white_sox_t_shirt', which is cool and sort-of user-friendly, but more work for us poor developers.
Using query strings is not outdated at all; they just have to be used in the right places. However, never place anything in the query string that could be a security issue, and remember that anything you read from the query string could have been modified, so you should validate all input.
It's not outdated, but another alternative is a more RESTful approach:
yourwebsite.com/products/100/usb-coffee-maker
The reasons are that a) search engines usually ignore any URL with a query string (so the product.aspx?id=100 page may never get indexed), and b) having the name in the URL purely for display purposes supposedly helps SEO as well.
Permanent links are best for SEO. Also, what if your product moves to another database and the ID of the product needs to change?
I don't think there is much chance that a product's name or its manufacturer will change.
E.g. Apple/iPhone won't change :) That seems to me like a good permalink.