How to encrypt the address bar URL using ASP.NET?

How can I encrypt this URL in ASP.NET (VB.NET) so that the user cannot read the query string in their browser's address bar?
http://localhost:2486/volvobusesindia/passenger_info.aspx?from=Delhi&to=Manali&journey=21-Nov-2010

You can't. And before someone suggests using POST, that doesn't really hide anything. It's trivial to use Wireshark, Firebug, etc. either way.
Any communication between the user's machine and your server, in either direction, encrypted or unencrypted, can be monitored by the user.
EDIT: An alternative is to generate a unique GUID or session identifier, then keep track of the meaning on the server. This is not encryption, but it may serve the desired purpose.
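A minimal sketch of that token approach (shown in PHP for brevity, since the pattern is language-agnostic even though the question is about ASP.NET; the parameter name t and the session key are made up):

<?php
// Map an opaque token to the real parameters, server-side.
session_start();

// When generating the link: store the real values under a random token.
$token = bin2hex(random_bytes(16));
$_SESSION['routes'][$token] = ['from' => 'Delhi', 'to' => 'Manali', 'journey' => '21-Nov-2010'];
$url = 'passenger_info.aspx?t=' . $token;   // the user only ever sees the token

// When handling the request: resolve the token back to the real parameters.
$params = $_SESSION['routes'][$_GET['t'] ?? ''] ?? null;
if ($params === null) {
    http_response_code(404);   // unknown or expired token
    exit;
}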

You can do some really good obfuscating, but you probably want to roll your own: if you are using this for security, you don't want everybody knowing how to decode your encoding.
We do it by using a single querystring parameter that contains ALL of the information we need from the request in our own format. Of course, this does mean giving up all of the handy .Request[] methods, but you've got to make the trade off somewhere.
A fully encrypted URL can also get obscenely long with everything thrown in there. For example, this is a link that will display an image of a ring with the word "Landrum" on it (in both directions). The image is created the moment you request it, from the information contained in the encrypted query string.
http://www.flipscript.com/data/default/images/catalog/medium/AMBIRingTitanBlue_G1F88E4X57,409-945,591O0M0S2V6.jpgx?xq=45C35129$6zvtnw6m1280kwz8ucqjt6jjb2vtea43bio5ixmnge-5r4i-o1o32j43y58nv
I hope that helps a bit! There is no "out of the box" solution, but this one works pretty well.
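Purely as an illustration of the single-parameter idea (this is not the poster's actual scheme), here is a sketch in PHP using the openssl extension; the key, the parameter name q, and the function names are all invented for the example:

<?php
// Pack all request data into one opaque, URL-safe query parameter.
$key = 'replace-with-a-32-byte-secret-key!!!';   // illustrative only

function encode_params(array $params, string $key): string {
    $iv = random_bytes(16);
    $ct = openssl_encrypt(http_build_query($params), 'aes-256-cbc', $key, OPENSSL_RAW_DATA, $iv);
    return rtrim(strtr(base64_encode($iv . $ct), '+/', '-_'), '=');   // URL-safe base64
}

function decode_params(string $blob, string $key): array {
    $raw = base64_decode(strtr($blob, '-_', '+/'));
    $iv  = substr($raw, 0, 16);
    parse_str(openssl_decrypt(substr($raw, 16), 'aes-256-cbc', $key, OPENSSL_RAW_DATA, $iv), $params);
    return $params;
}

$q = encode_params(['from' => 'Delhi', 'to' => 'Manali'], $key);
// The link becomes page.php?q=...; the handler calls decode_params($_GET['q'], $key).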

Instead of hiding it, you could call this page internally from within some other page and do whatever you wish with the returned results (e.g. display them on your site). That would guarantee that the user never has a chance to see the actual URL being called.

Related

Script exploits in ASP.NET - Is setting validateRequest="true" good advice?

I was reading about ASP.NET script exploits, and one of the suggestions is the following (emphasis is mine; it is suggestion #3 in the section "Guarding Against Scripting Exploits" on that page):
If you want your application to accept some HTML (for example, some formatting instructions from users), you should encode the HTML at the client before it is submitted to the server. For more information, see How to: Protect Against Script Exploits in a Web Application by Applying HTML Encoding to Strings.
Isn't that really bad advice? I mean, an attacker could send the HTML via curl or something similar, and the HTML would then reach the server un-encoded, which can't be good(?)
Am I missing something here or mis-interpreting the statement?
Microsoft's statement is not wrong, but it is far from complete, and taken on its own it is dangerous.
Since validateRequest is true by default, you do indeed need to encode special HTML characters on the client just to get them past request validation and into the server in the first place.
But they should have emphasized that this is certainly not a replacement for server-side filtering and validation.
Specifically, if you must accept HTML, the strongest advice is to use whitelisting instead of blacklist filtering (i.e. allow a very specific set of HTML tags and eliminate all the others). Using the Microsoft AntiXSS library is highly recommended for strong user input filtering; it's far better than reinventing the wheel yourself.
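To make the two server-side rules concrete, here is a minimal sketch (in PHP for illustration; the question itself is about ASP.NET, where the AntiXSS library plays this role):

<?php
// Rule 1: encode by default, so user input renders as inert text.
echo htmlspecialchars($_POST['comment'] ?? '', ENT_QUOTES, 'UTF-8');

// Rule 2: if some HTML must be allowed, whitelist specific tags rather than
// filtering out "bad" ones. strip_tags() is only a crude stand-in here: it
// keeps attributes on allowed tags, so a real application needs a proper
// whitelist sanitizer (HTML Purifier, AntiXSS, etc.) instead.
$safe = strip_tags($_POST['bio'] ?? '', '<b><i><em><strong><p>');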
I don't think that advice is good...
From my experience I would totally agree with your thought and replace that advice with the following:
all input has to be checked server-side first thing on arrival
all input that can possibly contain "active content" (like HTML or JavaScript) has to be escaped on arrival and never sent to any client until full sanitization has taken place
I would never trust the client to send trusted data. As you stated there are simply too many ways that data can be submitted. Even non-malicious users may be able to bypass the system on the client if they have JavaScript disabled.
However, following the link from that item makes it clear what they mean by point #3:
You can help protect against script exploits in the following ways:
Perform parameter validation on form variables, query-string variables, and cookie values. This validation should include two types of verification: verification that the variables can be converted to the expected type (for example, convert to an integer, convert to date-time, and so on), and verification of expected ranges or formatting. For example, a form post variable that is intended to be an integer should be checked with the Int32.TryParse method to verify the variable really is an integer. Furthermore, the resulting integer should be checked to verify the value falls within an expected range of values.
Apply HTML encoding to string output when writing values back out to the response. This helps ensure that any user-supplied string input will be rendered as static text in the browsers instead of executable script code or interpreted HTML elements.
HTML encoding converts HTML elements using HTML-reserved characters so that they are displayed rather than executed.
I think this is just a case of a misplaced word: there is no way you can perform this level of validation on the client, and in the examples contained in the link it is clearly server-side code being presented, without any mention of the client.
Edit:
Request validation is also enabled by default, right? So as far as Microsoft is concerned, the focus of protection is clearly on the server.
I think the author of the article misspoke. If you go to the linked page, it talks about encoding data before it's sent back to the client, not the other way around. I think this is just an editing error, and the author intended to say the opposite: encode it before it's returned to the client.

Encrypt IDs in URL variables

I am developing an HTTP server application (in PHP, it so happens). I am concerned about table IDs appearing in URLs. Is it possible to encrypt URL variables and values to protect my application?
Oh OK, so for sensitive information it's best to use sessions then. Are table IDs etc. safe to put in GET variables?
Yes, sensitive information must not leave your server in the first place. Use sessions.
As for "are table ids safe in the URL": I don't know, is there anything bad a user could do knowing a table id? If so, you need to fix that. Usually you need to pass some kind of id around though, whether that's the "native table id" or some other random id you dream up usually doesn't matter. There's nothing inherently insecure about showing the id of a record in the URL, that by itself means absolutely nothing. It's how your app uses this id that may or may not open up security holes.
Additionally think about whether a user can easily guess other ids he's not supposed to know and whether that means anything bad for your security.
Security isn't a one-off thing, you need to think about it in every single line of code you write.
Sounds like you want to pass sensitive information as a GET param.
Don't do that - use $_SESSION if you can.
However, if you just want your params URL-encoded (e.g. a space becomes +), use urlencode():
$a = 'how are you?';
echo urlencode($a); // how+are+you%3F
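And for completeness, a minimal sketch of the $_SESSION approach suggested above (the key name is made up):

<?php
// page1.php - keep the sensitive value server-side instead of in the URL.
session_start();
$_SESSION['record_id'] = 42;      // set when the user picks a record
header('Location: page2.php');    // note: no id appears in the URL
exit;

// page2.php would then call session_start() and read $_SESSION['record_id'].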
You can encrypt what you pass before you transmit, or you can run the entire communication over an encrypted channel (https or ssh for instance).
Your GET variables are called whatever you choose to call them, and assigned whatever values you choose to give them. So, yes: they can certainly be encrypted or, if you'd rather, simply obscured. If you're planning to encrypt variables, then PHP has quite a few options available.
For the above, I'd recommend using something like urlencode.
In general I'd suggest using POST instead of GET, assuming you're getting your variables from a form element. On the other hand it might be even wiser to use session variables.
Maybe this article can give you more ideas...
http://www.ibm.com/developerworks/library/os-php-encrypt/index.html

Why are hidden fields used?

I have always seen a lot of hidden fields used in web applications. I have worked with code that uses a lot of hidden fields, shuttling values from the visible fields back and forth to them. But I fail to understand why hidden fields are used; I can almost always think of ways to solve the same problem without them. How do hidden fields help in design?
Can anyone tell me what exactly is the advantage that hidden fields provide? Why are hidden fields used?
Hidden fields are just the easiest way; that is why they are used quite a bit.
Alternatives:
storing data in a session server-side (with sessionid cookie)
storing data in a transaction server-side (with transaction id as the single hidden field)
using URL path instead of hidden field query parameters where applicable
Main concerns:
the value of the hidden field cannot be trusted to not be tampered with from page to page (as opposed to server-side storage)
large data has to be posted back with every request, which can be a problem, and is not possible for some data (for example uploaded images)
Main advantages:
no sticky sessions that spill between pages and multiple browser windows
no server-side cleanup necessary (for expired data)
accessible to client-side scripts
Suppose you want to edit an object. Now it's helpful to put the ID into a hidden field. Of course, you must never rely on that value (i.e. make sure the user has appropriate rights upon insert/update).
Still, this is a very convenient solution. Showing the ID in a visible field (e.g. read-only text box) is possible, but irritating to the user.
Storing the ID in a session / cookie is prohibitive, because it disallows multiple opened edit windows at the same time and imposes lifetime restrictions (session timeout leads to a broken edit operation, very annoying).
Using the URL is possible, but breaks design rules, i.e. use POST when modifying data. Also, since it is visible to the user it creates uglier URLs.
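A minimal sketch of that pattern, with PHP standing in for the server side (current_user_can_edit() and update_record() are hypothetical helpers):

<?php
// The record id travels in a hidden field, but the server never trusts it:
// authorization is re-checked before the id is used.
$id = (int) ($_POST['id'] ?? 0);
if (!current_user_can_edit($id)) {      // hypothetical permission check
    http_response_code(403);
    exit('Not allowed');
}
update_record($id, $_POST);             // hypothetical persistence call
?>
<form method="post" action="edit.php">
  <input type="hidden" name="id" value="<?= $id ?>">
  <input type="text" name="title">
  <button type="submit">Save</button>
</form>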
The most typical use I see (and use) is for IDs and other things that don't need to be on the page for any reason other than that they need to be sent back to the server at some point.
-edit, should've included more detail-
say, for instance, you have some object you want to update. The UI sends back a collection of values, and the server at that point may or may not know "hey, this is a customer object". So you fire off a request to the server saying "hey, give me ID 7", and now you have your customer object as the system knows it. The updates are applied and validated, and your UI gets the completed result.
I guess a good example/argument is using LINQ. Try to update an object in LINQ without getting it from the DB first: it has no real idea that it's something it can keep track of until you have the full object.
Here's one reason: it's a convenient way of passing data between client-side code (JavaScript) and the server side.
There are many useful scenarios.
One is to "store" some data on a page which should not be entered by a user. For example, store the user ID when generate a page, then this value will be auto-submitted with the form back to the server.
One other scenario is security. Add some hidden token to the page and check its existence on the server. This will help identify whether a form was submitted via the browser or by some bot which just posted to some url on your site.
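A minimal sketch of that hidden-token check (essentially a CSRF token; the field and session names are made up):

<?php
session_start();
if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    // Reject posts that don't carry the token we issued with the form.
    if (!hash_equals($_SESSION['token'] ?? '', $_POST['token'] ?? '')) {
        http_response_code(403);
        exit('Form was not issued by this site');
    }
    // ... handle the legitimate submission ...
}
$_SESSION['token'] = bin2hex(random_bytes(16));   // issue a fresh token
?>
<form method="post">
  <input type="hidden" name="token" value="<?= $_SESSION['token'] ?>">
  <button type="submit">Send</button>
</form>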
It keeps things out of the URL (as in the querystring) so it keeps that clean. It also keeps things out of Session that may not necessarily need to be in there.
Other than that, I can't think of too many other benefits.
They are generally used to store state as an interaction progresses. Cookies could be used instead, but some people disable them. Could also use a single hidden field to point at server-side state, but then there are session-stickiness issues.
Using a hidden field in a form adds another control to the form and so increases its weight.
If there is no need for a hidden field, you shouldn't use one: it is questionable from a security point of view, it is not good programming practice, and it can also affect the application's performance.

Keeping track after the back button

I want to write a web app order system using the REST methodology for the first time. I understand the concept of the "message id" when things get posted to a page, but this scenario comes up: once a user posts to the web app, you can keep track of their state with an id attached to the URI. But what happens if they hit the browser's back button and land on the entry point of the app, where they didn't have any id? They then lose their state in the transaction.
I know you can always give them a cookie but you can't do that if they have cookies turned off and, worst case thinking here, they also have javascript turned off.
Now, I understand the answer may be "Yes, that's what will happen", that's the end of the story, and I can live with that but, being new to this, is there something I'm missing?
REST doesn't really have states server-side; you simply point to resources. User sessions aren't tracked; instead cookies are used to track application state. However, if you find that you really do need session state, then you are going to have to break REST and track it on the server.
A few things to consider:
How many of your users have cookies disabled anyway? How many even know how to do that?
Is it really likely that your users will have JS turned off? If so, how will you accomplish PUT (edit) and DELETE (delete) without AJAX?
EDIT: Since you do not want to force cookies and JavaScript, then you cannot have a truly RESTful system. But you can somewhat fake it. You are going to need to track a user server-side. You could do this with a session object, as found in your language/framework of choice or by adding a field to the database for whatever you want to know. Of course, when the user hits the back button, they will likely be going to a cached page. If that's not OK, then you will need to modify the headers to disallow caching. Basically, it gets more complicated if you don't use cookies, but not unmanageable.
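If you do need to disallow caching, the headers involved look something like this (a minimal PHP sketch):

<?php
// Ask browsers and proxies not to cache the page, so the back button
// triggers a fresh request instead of showing a stale copy.
header('Cache-Control: no-store, no-cache, must-revalidate');
header('Pragma: no-cache');   // for older HTTP/1.0 agents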
What about the missing PUT and DELETE HTTP methods? You can fake those with POSTs and a hidden parameter specifying whether or not you are making something new, editing something, or deleting a record. The GET shouldn't really change.
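A sketch of that fake-PUT/DELETE trick (the hidden field name "_method" is a common convention, invented here for illustration):

<?php
// Server side: honour the override before dispatching.
$method = strtoupper($_POST['_method'] ?? $_SERVER['REQUEST_METHOD']);
if ($method === 'DELETE') {
    // delete the record
} elseif ($method === 'PUT') {
    // update the record
}
?>
<form method="post" action="/items/5.php">
  <input type="hidden" name="_method" value="DELETE">
  <button type="submit">Delete item</button>
</form>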
The answer is that your application (in a REST scenario) simply doesn't keep track of what happens. All state is managed by the client, and state transitions are effected through URI navigation. The "State Transfer" part of REST refers to client navigation to new URIs which are new states.
A URI accessed with GET is effectively a read-only operation as per both the HTTP spec and the REST methodology. That means if the client "backs up" to some previous URI, "the worst" that happens is another GET is made and more data is loaded.
Let's say the client does this (using highly simplified pseudo-HTTP)...
GET //site.com/product/123
This retrieves information (or maybe a page) about product ID 123, which presumably includes a reference to a URI which can be used to POST that item into the user's shopping cart. So the user decides to buy the item. Again, it's oversimplified but:
POST //site.com/shoppingcart/
{productid = 123}
The return from this might be a representation of the shopping cart, or a reference to the added item (which could be used on the shoppingcart URI with DELETE to remove the item again), or a variety of other things (such as deeper XML describing the cart contents with other URIs pointing to the cart items and back to the original products). It's all up to you.
But the "state" is defined by whatever the client is doing. It isn't tracked on the server at all, although you will certainly keep a copy of his shopping cart contents in your database for some period of time. (I once returned to a website two years later and my shopping cart items were still there...) But it's up to him to keep track of the ID. To your server app it's just another record, presumably with some kind of expiration.
In this way you don't need cookies, and javascript is entirely dependent on the client implementation. It's difficult to build a decent REST client without script -- you could probably build something with XSLT and only return XML from the server, but that's probably more pain than anyone needs.
The starting point is to really understand REST, then design the system, then build it. It definitely doesn't lend itself to building it on the fly like most other systems do (right or wrong).
This is an excellent article that gives you a fairly "pure" look at REST without getting too abstract and without bogging you down with code:
http://www.infoq.com/articles/subbu-allamaraju-rest
It is true that the "S" in REST stands for "state" and the "T" for "transfer". But the state is kept on the client, not on the server. The client always has all the information necessary to decide for itself in which direction it wants to change the state.
The way you describe it, your system is not restful.

GET vs. POST does it really really matter?

Ok, I know the difference in purpose. GET is to get some data. Make a request and get data back. POST should be used for CRUD operations other than read I believe. But when it comes down to it, does the server really care if it's receiving a GET vs. POST in the end?
According to the HTTP RFC, GET should not have any side-effects, while POST may have side-effects.
The most basic example of this is that GET is not appropriate for anything like a purchase-transaction or posting an article to a blog, while POST is appropriate for actions-that-have-consequences.
By the RFC, you can hold a user responsible for actions done by POST (such as a purchase), but not for GET actions. Bots always use GET for this reason.
From RFC 2616, section 9.1.1 ("Safe Methods"):
Implementors should be aware that the software represents the user in their interactions over the Internet, and should be careful to allow the user to be aware of any actions they might take which may have an unexpected significance to themselves or others.
In particular, the convention has been established that the GET and HEAD methods SHOULD NOT have the significance of taking an action other than retrieval. These methods ought to be considered "safe". This allows user agents to represent other methods, such as POST, PUT and DELETE, in a special way, so that the user is made aware of the fact that a possibly unsafe action is being requested.
Naturally, it is not possible to ensure that the server does not generate side-effects as a result of performing a GET request; in fact, some dynamic resources consider that a feature. The important distinction here is that the user did not request the side-effects, so therefore cannot be held accountable for them.
It does if a search engine is crawling the page, since they will be making GET requests but not POST. Say you have a link on your page:
http://www.example.com/items.aspx?id=5&mode=delete
Without some sort of authorization check performed before the delete, it's possible that Googlebot could come in and delete items from your page.
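A minimal guard for such an endpoint might look like this (PHP sketch; user_is_authorized() and delete_item() are hypothetical):

<?php
// Never perform a destructive action on GET: require POST plus authorization.
if ($_SERVER['REQUEST_METHOD'] !== 'POST') {
    header('Allow: POST');
    http_response_code(405);          // a crawler issuing GET is refused here
    exit;
}
if (!user_is_authorized()) {          // hypothetical permission check
    http_response_code(403);
    exit;
}
delete_item((int) ($_POST['id'] ?? 0));   // hypothetical delete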
Since you're the one writing the server software (presumably), then it cares if you tell it to care. If you handle POST and GET data identically, then no, it doesn't.
However, the browser definitely cares. Refreshing or clicking back to a page you got as a response to a POST pops up the little "Are you sure you want to submit data again" prompt, for example.
GET has data limit restrictions based on the sending browser:
The spec for URL length does not dictate a minimum or maximum URL length, but implementation varies by browser. On Windows: Opera supports ~4050 characters, IE 4.0+ supports exactly 2083 characters, Netscape 3 -> 4.78 support up to 8192 characters before causing errors on shut-down, and Netscape 6 supports ~2000 before causing errors on start-up
If you use a GET request to alter back-end state, you run the risk of bad things happening if a webcrawler of some kind traverses your site. Back when wikis first became popular, there were horror stories of whole sites being deleted because the "delete page" function was implemented as a GET request, with disastrous results when the Googlebot came knocking...
"Use GET if: The interaction is more like a question (i.e., it is a safe operation such as a query, read operation, or lookup)."
"Use POST if: The interaction is more like an order, or the interaction changes the state of the resource in a way that the user would perceive (e.g., a subscription to a service), or the user be held accountable for the results of the interaction."
source
You should be aware of a few subtle security differences. See my question:
GET versus POST in terms of security?
Essentially, the important thing to remember is that GET requests go into the browser history and are transmitted through proxies in plain text, so you don't want any sensitive information, like a password, in a GET.
Obvious maybe, but worth mentioning.
By HTTP specifications, GET is safe and idempotent and POST is neither. What this means is that a GET request can be repeated multiple times without causing side effects.
Even if your server doesn't care (and this is unlikely), there may be intermediate agents between your client and the server, all of whom have this expectation. For example, proxies that cache data at your ISP or other providers for improved performance. The same expectation holds for accelerators, for example a prefetching plugin for your browser.
Thus a GET request can be cached (based on certain parameters), and if it fails, it can be automatically repeated without any expectation of harmful effects. So your server really should strive to fulfill this contract.
On the other hand, POST is not safe, not idempotent and every agent knows not to cache the results of a POST request, or retry a POST request automatically. So, for example, a credit card transaction would never, ever be a GET request (you don't want accounts being debited multiple times because of network errors, etc).
That's a very basic take on this. For more information, you might consider the "RESTful Web Services" book by Ruby and Richardson (O'Reilly press).
For a quick take on the topic of REST, consider this post:
http://www.25hoursaday.com/weblog/2008/08/17/ExplainingRESTToDamienKatz.aspx
The funny thing is that most people debate the merits of PUT v POST. The GET v POST issue is, and always has been, very well settled. Ignore it at your own peril.
GET has limitations on the browser side. For instance, some browsers limit the length of GET requests.
I think a more appropriate answer is that you can pretty much do the same things with both. It is not so much a matter of preference, however, as a matter of correct usage. I would recommend you use your GETs and POSTs as they were intended to be used.
Technically, no. All GET does is put the data in the first line of the HTTP request (as part of the URL), while POST puts it in the request body.
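To make that concrete, here is roughly what the two requests look like on the wire (simplified; the host and the q parameter are invented):

GET /search.php?q=hello HTTP/1.1
Host: example.com

versus:

POST /search.php HTTP/1.1
Host: example.com
Content-Type: application/x-www-form-urlencoded
Content-Length: 7

q=hello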
However, how the "web infrastructure" treats the differences makes a world of difference. We could write a whole book about it. However, I'll give you some "best practises":
Use "POST" for when your HTTP request would change something "concrete" inside the web server. Ie, you're editing a page, making a new record, and so on. POSTS are less likely to be cached, or treated as something that's "repeatable without side-effects"
Use "GET" for when you want to "look at an object". Now, such a look might change something "behind the scenes" in terms of caching or record keeping, but it shouldn't change anything "substantial". Ie, I could repeat my GET over and over and nothing bad would happen, except for inflated hit counts. GETs should be easily bookmarkable, so a user can go back to that same object later on.
The parameters to the GET (the stuff after the ?, traditionally) should be considered "attributes to the view" or "what to view" and so on. Again, it shouldn't actually change anything: use POST for that.
And, a final word, when you POST something (for example, you're creating a new comment), have the processing for the post issue a 302 to "redirect" the user to a new URL that views that object. Ie, a POST processes the information, then redirects the browser to a GET statement to view the new state. Displaying information as a result of a POST can also cause problems. Doing the redirection is often used, and makes things work better.
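A minimal sketch of that POST-then-redirect flow (create_comment() and the URLs are hypothetical; the redirect status matches the 302 mentioned above, though 303 "See Other" is the stricter choice):

<?php
// Post/Redirect/Get: process the POST, then redirect to a GET view,
// so refresh and the back button don't re-submit the form.
if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    $id = create_comment($_POST['body']);                   // hypothetical
    header('Location: /comment.php?id=' . $id, true, 302);
    exit;                                                   // never render from the POST
}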
Should the user be able to bookmark the resulting page? Another thing to think about is some browsers/servers incorrectly limit the GET URI length.
Edit: corrected char length restriction note - thanks ars!
It depends on the software at the server end. Some libraries, like CGI.pm in Perl, handle both by default. But there are situations where you more or less have to use POST instead of GET, at least for pushing data to the server: large amounts of data (where the corresponding GET URL would become too long), binary data (to avoid lots of encoding/decoding trouble), multipart files, non-parsed headers (for continuous updates, pre-AJAX style...) and similar.
The server technically couldn't care one way or the other about what kind of request it receives. It will blindly execute any request coming across the wire.
Which is the problem. If you have an action that destroys or modifies data in a GET action, Google will tear your site up as it crawls through indexing.
The server usually doesn't care, but it's mostly about following good practices, as you mentioned. The client side also matters: as mentioned, you usually cannot bookmark a POSTed page, and some browsers have limits on URL length for really long GET queries.
Since GET is intended for specifying the resource you want to get, depending on the exact software on the server side, the web server (or the load balancer in front of it) may impose a size limit on GET requests to prevent denial-of-service attacks...
Be aware that browsers may cache GET requests but will generally not cache POST requests.
Yes, it does matter. GET and POST are quite different, really.
You are right in that normally, GET is for "getting" data from the server and displaying a page, while POST is for "posting" data back to the server. Internally, your scripts get the same data whether it's GET or POST, so no, the server doesn't really care.
The main difference is GET parameters are specified in URLs, while POST is not. This is why POST is used for signup and login forms - you don't want your password in a URL. Similarly, if you're viewing different pages or displaying a specific view of some data, you normally want a unique URL.
It really does matter. I have gathered 11 things you should know about them.
11 things you should know about GET vs POST
No, the server shouldn't care, except for the points in jbruce2112's answer and the fact that uploading files requires POST.
