Ok to pass IDs in query string? - asp.net

Is it okay to pass IDs in the query string? For example:
example.com/viewperson.aspx?personid=22d62e18-2383-42ca-ba6d-a535355b98bb
Does the risk change (i.e., is it lower) for an intranet site?
If it's a public site, assume anyone can see the URL (shoulder surfing, browser history logging, etc.). Even though we're under SSL, I still assume the URL is "out there". Obviously, security will be applied to disallow an unauthenticated user from viewing the page. But is it still a risk?
An added benefit of using the query string is bookmarking: you can save a few records (persons, in this case) that you want to call back up later without having to go through the front door and search again.
I would never pass anything meaningful, but maybe an ID is meaningful enough not to pass?
An alternative would be a cookie or session variable, of course.

Assuming these are resources you want protected, it's not a risk, provided that requests for resources by ID are both authenticated and authorized. Meaning: you should verify that the request comes from a logged-in user and that the user has access to that resource.
So if I belong to company 5, I shouldn't be able to open /companies/4.
Otherwise there are no issues, and there is no alternative approach I am aware of (by which I mean you must provide an identifier somehow). A sketch of such a check is below.
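As a minimal illustration (assuming an ASP.NET MVC-style controller; the repository and current-user accessors are hypothetical), the check might look like this:

[Authorize] // reject unauthenticated requests outright
public ActionResult ViewPerson(Guid personId)
{
    // Look up the record by the ID taken from the query string.
    var person = _personRepository.FindById(personId); // hypothetical repository
    if (person == null)
        return HttpNotFound();

    // Authorization: a guessable or leaked ID is harmless as long as
    // ownership is verified on every request.
    if (person.CompanyId != _currentUser.CompanyId) // hypothetical current-user accessor
        return new HttpStatusCodeResult(403); // authenticated, but not allowed

    return View(person);
}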

Related

How can I communicate to the user's browser that a POST request it made is side-effect-free?

I have to add a page to my website that will be accessed via a POST request. The request is side-effect-free, hence it is safe for the user to use their browser's "Refresh" button on the page. The reason why it has to be POST and not GET is that the volume of data needed to characterise the request is large (it includes a collection of arbitrarily many GUIDs identifying resources to be operated upon at a later stage in the process).
When the user of a browser refreshes a page that was the result of a POST request, the browser will typically warn them that the form will be resubmitted and may cause an action to be repeated. This is not a concern in this case, because as I said, the action of requesting this page is side-effect-free. I therefore want to communicate to the user's browser that no such warning should be presented to the user if they use the "Refresh" function. How can I do this?
You cannot prevent the browser from warning the user about resubmitting a POST request.
References
The Mozilla forums (Mozilla being Firefox's predecessor) discussed the feature extensively starting in 2002. Some discussion of other browsers also occurs. It is clear that the decision was made to keep the warning, and although workarounds were suggested, they were not adopted.
Google Chrome (2008) and other subsequent browsers also included the feature.
The reasons for this relate to the difference between GET and POST in RFC 2616: Hypertext Transfer Protocol -- HTTP/1.1 (1999).
GET
retrieve whatever information is identified by the Request-URI
POST
request that the origin server accept the entity enclosed in the request as a new subordinate of the resource
This implies that whilst a GET request only retrieves data, a POST request modifies data in some way. As per the discussion on the Mozilla forum, the decision was that allowing the warning to be disabled created more risk for the user than the inconvenience of keeping it.
Solutions
Instead, a solution is to use the session to store the data from the POST request, and to redirect the user with a GET request to a URL whose handler looks in the session for the original request parameters.
This assumes the server-side application has session support and that it's enabled.
1. The user submits a POST request with data that generates a specific result: POST /results
2. The server stores that data in the session under a known key.
3. The server responds with a 302 redirect to a chosen URL (it could be the same one).
4. The client requests the new page with a GET request: GET /results
5. The server identifies that the incoming GET request is asking for the results of the previous POST request, and retrieves the data from the session using the known key.
If the user refreshes the page, steps 4 and 5 are repeated, as in the sketch below.
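A minimal sketch of this flow, assuming ASP.NET Core MVC with session state enabled (routes and names are illustrative):

using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;

public class ResultsController : Controller
{
    private const string SessionKey = "results-query"; // the "known key" from step 2

    [HttpPost("/results")]
    public IActionResult Submit(string query)
    {
        // Step 2: stash the POSTed data in the session.
        HttpContext.Session.SetString(SessionKey, query);
        // Step 3: 302 redirect so the browser's last request is a GET.
        return Redirect("/results");
    }

    [HttpGet("/results")]
    public IActionResult Show()
    {
        // Steps 4-5: rebuild the page from the session; a refresh repeats this safely.
        var query = HttpContext.Session.GetString(SessionKey);
        if (query == null)
            return Redirect("/"); // session expired, or nothing was posted
        return View(model: query);
    }
}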
To make the solution more robust, the POST data could be stored under a unique key that is passed as part of the path or query in the 302 redirect: GET /results?set=1. This would enable multiple different pages to be viewed and refreshed, for example in different browser tabs. Consideration must be given to ensuring that the unique key is valid and does not allow access to other session data.
Some systems, such as Kibana, Grafana, pastebin.com and many others, go one step further: the POST request values are stored in a persistent data store, and a unique short URL is returned to the user. The short URL can be used in GET requests and shared with other users to view the same result of what was originally a POST request.
You can solve this problem by implementing the Post/Redirect/Get pattern.
You typically get a browser warning when trying to re-send a POST request, for good reason. Consider a form where you enter personal data to register an account or order a product. If you double-submitted your data, you might register twice or buy the same thing two times (a purely theoretical example, of course). Thus, the user should be warned when trying to send the same POST request several times. This behaviour is intentional and cannot be disabled, but it can be avoided by using the aforementioned PRG pattern.
[Diagram of the Post/Redirect/Get pattern, from Wikipedia, published under the LGPL.]
In simple words, this pattern can be used to avoid double submissions of form data that could cause undesired results. You configure your server to answer the affected incoming POST requests with a redirect using status code 303 ("See Other"). The user is then redirected (via a GET request) to a confirmation page showing that the request was successful and will now be processed. If the user reloads that page, they will be redirected to the same page without re-submitting the POST request.
However, this strategy might not always work. If the server has not yet received the first submission (because of traffic, for instance) and the user re-submits, a second POST request could still be sent.
If you provide more information on your tech stack, I can expand my answer by adding specific code samples.
You can't prevent all browsers from showing this "Are you sure you want to re-submit this form?" popup when the user refreshes a page that is the result of a POST request. So you will have to turn this POST request into a GET request if you want to prevent this popup when your users hit F5 on that page.
And for a search form, which you kind of admitted this was for, turning a POST into a GET has its own problems.
For starters, are you sure you need POST to begin with? Is the data really too large to fit in the query string? Taking a reasonable limit of 1024 characters (around 30 GUIDs, give or take some space for repeated &q=): why do the search parameters need to be GUIDs to begin with? If you can map them or look them up somehow, you could limit the size of each parameter to a handful of characters instead of 32 for an undashed GUID; at 5 characters per key you could suddenly fit around 200 parameters in the query string. A toy sketch of such a mapping follows.
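Purely as an illustration (the in-memory store is hypothetical; a real one would be shared and persistent), mapping GUIDs to short keys might look like:

using System;
using System.Collections.Generic;

// Translates long GUID parameters into short lookup keys so that more of
// them fit within query string length limits.
public class ShortKeyMap
{
    private readonly Dictionary<string, Guid> _byKey = new();
    private readonly Dictionary<Guid, string> _byGuid = new();
    private int _counter;

    // Returns a short, stable key (e.g. "k1", "k2") for a GUID.
    public string KeyFor(Guid id)
    {
        if (_byGuid.TryGetValue(id, out var existing))
            return existing;
        var key = "k" + (++_counter); // a handful of characters instead of 32
        _byKey[key] = id;
        _byGuid[id] = key;
        return key;
    }

    // Resolves a short key from the query string back to the original GUID.
    public Guid Resolve(string key) => _byKey[key];
}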
Still not enough? Then you need a POST indeed.
One approach, mentioned in comments, is using AJAX, so your search form doesn't actually submit, but instead it sends the query data in the background through a JavaScript HTTP POST request and updates the page with the results. This has the benefit that refreshing the page doesn't prompt, as there's only a GET as far as the browser is concerned, but there's one drawback: search results don't get a unique URL, so you can't cache, bookmark or share them.
If you don't care about caching or URL bookmarking, then AJAX definitely is the simplest option here and you need to read no further.
For all non-AJAX approaches, you need to persist the query parameters somewhere, enabling a Post/Redirect/Get pattern. This pattern ends up with a page that is the result of a GET request, which users can refresh without said popup. What the other answers are quite handwavy about is how to do this properly.
Options are:
Serverside session
When POSTing to the server, you can let the server persist the query parameters in the session (all major server-side frameworks support sessions), then redirect the user to a generic /search-results page which, on the server side, reads the parameters from the session, queries the database, and presents the results.
Drawbacks:
Sessions generally time out, and they do so for good reasons. If your user hits F5 after, say, 20 minutes, their session data is gone, and so are their query parameters.
Sessions are shared between browser tabs. If your user is searching for thing A in tab 1 and for thing B in tab 2, the parameters of whichever tab submitted last will overwrite the other's when it is refreshed.
Sessions are per browser. There's generally no trivial way to share sessions (apart from putting the session ID in the URL, but see the first bullet), so you can't bookmark or share your search results.
Local storage / cookies
You could think "but cookies can contain more data than the query string", but just no. Apart from having a size limit as well, they're also shared between tabs, can't (easily) be shared between users, and can't be bookmarked.
Local storage isn't an option either, because while it can contain far more data, it doesn't get sent to the server. It's local storage.
Serverside persistent storage
If your search queries actually are that complex that you need multiple KB of query parameters, then you could probably benefit from persisting the query parameters in a database.
So for each search request, you create a new search_query database record that contains the appropriate parameters for the query-to-execute. And, given that search results aren't private, you could even write some code that first looks up whether the given parameter combination has been used before, and reuse the existing record.
So you get a unique search_id that points to a set of parameters with which you can perform the query. Now you can redirect your user so they perform a GET request to this page (a sketch follows the trade-offs below):
/search-results?search_id=Xxx
And there you render the results for the given query. Benefits:
You can cache, bookmark and share the URL /search-results?search_id=Xxx
You can refresh the page displaying the search results without an annoying popup
Each browser tab displays its own search results
Of course this approach also has drawbacks:
Unless you use an unguessable key for search_id, users can enumerate earlier searches by other users
Each search costs permanent serverside storage, unless you decide to evict earlier searches based on some criteria
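A minimal sketch of the approach, assuming ASP.NET Core MVC (ISearchQueryStore is a hypothetical persistence abstraction backed by the search_query table):

using System;
using Microsoft.AspNetCore.Mvc;

public class SearchController : Controller
{
    private readonly ISearchQueryStore _store; // hypothetical store

    public SearchController(ISearchQueryStore store) => _store = store;

    [HttpPost("/search")]
    public IActionResult Submit(Guid[] resourceIds)
    {
        // Persist the (possibly very large) parameter set and get a key back.
        // Using a GUID as search_id keeps earlier searches from being enumerable.
        Guid searchId = _store.Save(resourceIds);
        return Redirect($"/search-results?search_id={searchId}");
    }

    [HttpGet("/search-results")]
    public IActionResult Results(Guid search_id)
    {
        var parameters = _store.Load(search_id);
        if (parameters == null)
            return NotFound(); // evicted, or never existed
        // Run the actual query with the stored parameters and render the page;
        // this URL can now be cached, bookmarked, shared and refreshed freely.
        return View(parameters);
    }
}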

Tell if someone is accesing my HTTP-resources directly?

Is there a way to find out whether anyone is embedding an image hosted on my website directly in their own website?
I have a website, and I just want to make sure no one is using my bandwidth.
Sure, there are methods, some of which can be trusted a little more than others.
Using Referer-Header
There is an HTTP header named Referer, which most often contains a string representing the URL the user visited to get to the current request.
You can see it as a "I came from here"-header.
If it were guaranteed to always exist, it would be a piece of cake to prevent people from leeching your bandwidth; since this is not the case, relying on this value alone is pretty much a gamble (it may be absent at times).
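As a sketch (ASP.NET Core middleware assumed; the /images path and domain are illustrative), such a check might look like:

// Registered in the app's request pipeline. Remember: the Referer header is
// optional and trivially spoofed, so treat this as a deterrent, not security.
app.Use(async (context, next) =>
{
    if (context.Request.Path.StartsWithSegments("/images"))
    {
        string referer = context.Request.Headers["Referer"].ToString();
        // Allow empty referers to avoid flagging real visitors;
        // block only explicit foreign referers.
        if (referer.Length > 0 &&
            !referer.StartsWith("https://example.com/", StringComparison.OrdinalIgnoreCase))
        {
            context.Response.StatusCode = 403; // likely hotlinking
            return;
        }
    }
    await next();
});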
Using Cookies
Another way of telling whether a user is a true visitor of your website is to use cookies: a user who doesn't have a cookie and tries to access a specific resource (such as an image) could get a message saying "sorry, only real visitors of example.com get access to this image".
Too bad that nothing forces a client to implement and handle cookies.
Using links with a set expiration time [RECOMMENDED]
This is probably the safest option, though it's the hardest to implement.
Using links that are only valid for N hours makes it impossible to leech your bandwidth without going to the trouble of implementing some sort of crawler that regularly crawls your site and extracts the current access token required to fetch a resource (such as an image).
When a user visits the site, a token valid for N hours is generated and appended to the path of every resource sent back to the visitor. The token is mandatory and expires after those N hours.
If the user tries to access an image with an invalid or non-existent token, you could send back either 404 or 403 as the HTTP status code (preferably the latter, since the request is effectively forbidden). A sketch of such tokens follows.
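A minimal sketch of expiring, HMAC-signed tokens (assuming .NET 5+; the secret handling is illustrative):

using System;
using System.Security.Cryptography;
using System.Text;

public static class ResourceToken
{
    // In practice, load this from secure configuration, not source code.
    private static readonly byte[] Secret = Encoding.UTF8.GetBytes("server-side-secret");

    // The token covers the resource path and an expiry timestamp,
    // and is appended to the resource URL, e.g. /img/a.png?token=...
    public static string Create(string path, TimeSpan lifetime)
    {
        long expires = DateTimeOffset.UtcNow.Add(lifetime).ToUnixTimeSeconds();
        using var hmac = new HMACSHA256(Secret);
        string sig = Convert.ToHexString(
            hmac.ComputeHash(Encoding.UTF8.GetBytes($"{path}|{expires}")));
        return $"{expires}.{sig}";
    }

    public static bool Validate(string path, string token)
    {
        var parts = token.Split('.');
        if (parts.Length != 2 || !long.TryParse(parts[0], out long expires))
            return false;
        if (DateTimeOffset.UtcNow.ToUnixTimeSeconds() > expires)
            return false; // token has expired
        using var hmac = new HMACSHA256(Secret);
        string expected = Convert.ToHexString(
            hmac.ComputeHash(Encoding.UTF8.GetBytes($"{path}|{expires}")));
        // Constant-time comparison avoids timing side channels.
        return CryptographicOperations.FixedTimeEquals(
            Encoding.UTF8.GetBytes(expected), Encoding.UTF8.GetBytes(parts[1]));
    }
}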
There are however some quirks worth mentioning:
Crawlers from search engines might not visit the whole site at a given moment inside the N hours; make sure they can access the whole content of your site. You can identify them by the value of the User-Agent header.
Don't be tempted to lower the lifespan of your token below any reasonable time; remember that some users are on slow connections. A token of 5 seconds might sound cool, but real users could get flagged erroneously.
Never put a token on a resource that people should be able to reach from an external point (search engines, for one), such as the page containing the images you wish to protect.
If you do this by accident, you will mostly harm the reputation of your site.
Additional thoughts...
Please remember that any method implemented to make it impossible for leechers to hotlink your resources should never result in true visitors being flagged as bandwidth leeches. You probably want to ease up on the restriction rather than make it stronger.
I'd rather have 10 normal visitors and 2 leechers than no leechers but only 5 normal users (because I accidentally flagged 5 of the real visitors as leechers without thinking too much).

How do I send a user ID between different applications in ASP.NET?

I have two web applications, both developed in ASP.NET. Now I want to provide a feature that lets the user click through from a URL in application site A (one virtual directory in IIS) to a URL in application site B (another virtual directory in IIS).
I have two ideas for implementing this, but both have issues. I want to know what the optimum solution would be.
Solution 1: use a cookie, so that both application sites can retrieve the user ID by reading it. But I am afraid that if cookies are disabled in the browser, this "jump" feature will never work.
Solution 2: when the user is redirected to a URL in the other site, I could append the user ID to the URL, e.g. http://www.anotherapplicationsite.com/somesuburl?userID=foo. But I am afraid that this exposes the user ID too easily, which raises security issues.
I work with this sort of thing a lot. What you're looking for sounds like a candidate for a Single Sign-On solution or Federated Security.
You might try doing something similar to the following:
Create a simple DB or other sort of table storage with two columns, "nonce" and "username".
When you build the link to the other site, create a GUID or other unique identifier to use as a one-time nonce, passing it in the query string (?id=). Insert an entry into the table with the current authenticated username and the unique identifier you created.
When you reach the destination of your link, pass the unique identifier to a webservice that matches the identifier against the username inserted into the database before the jump to the second site (secure this with SSL).
If the nonce checks out with a valid username, you're all set. The webservice should remove the used entry, so the table stays more or less empty whenever you are not in the middle of a transaction.
It is also good to include a datetime in your nonce/username table and expire entries after 60 seconds or less, to minimize the risk of replay attacks. We also require client certificates for external applications calling the webservice, in order to verify the identity of the caller. Internal applications don't really necessitate client certificates.
A nice thing about this approach is that it scales fairly well to as many sites as you like.
It's not perfect security, but we've never had a significant compromise with such a system. A rough sketch of the flow is below.
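A rough sketch of the nonce handoff (the storage abstraction and URLs are illustrative):

using System;

public class CrossSiteHandoff
{
    private readonly INonceTable _nonceTable; // hypothetical nonce/username storage

    public CrossSiteHandoff(INonceTable nonceTable) => _nonceTable = nonceTable;

    // Site A: build the cross-site link with a one-time nonce.
    public string BuildCrossSiteLink(string username)
    {
        Guid nonce = Guid.NewGuid();
        // Store (nonce, username, issued-at) so it can be redeemed exactly once.
        _nonceTable.Insert(nonce, username, DateTime.UtcNow);
        return $"https://www.anotherapplicationsite.com/somesuburl?id={nonce}";
    }

    // Site B: redeem the nonce via the (SSL-protected) webservice.
    public string RedeemNonce(Guid nonce)
    {
        var entry = _nonceTable.Find(nonce);
        if (entry == null)
            return null; // unknown, or already used
        _nonceTable.Delete(nonce); // one-time use: remove immediately
        if (DateTime.UtcNow - entry.IssuedAt > TimeSpan.FromSeconds(60))
            return null; // expired: keeps the replay window small
        return entry.Username; // caller can now establish the local session
    }
}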
As long as you have a good authentication system in place on the second website, I think solution 2 is the one for you, taking into account the remark Andrew made about sensitive IDs, of course.
For more information on encryption, check the documentation of the FormsAuthentication.Encrypt method. I think they even do something with writing a value to a cookie in that example.
If you put the user ID in a query string and that's all the second app uses to allow login, what's to keep me from manually typing in other users' IDs? You'd still have to prompt for a password on the new site.
I'd use a database to hold login information and have both sites reference that same DB. Use it like you'd use a session.
I don't think 1) will work due to browser security (cookies from one domain cannot be read by another domain). I would go with 2), except I would encrypt the querystring value.
EDIT: For more info on cookie privacy/security issues, check out the "Privacy and third-party cookies" section here.
What are you using as the user's id? If you are using their social security number or email (something sensitive) then you are going to want to encrypt the value before you put it on the query string. Otherwise (if the user's id is something ambiguous like an integer or a GUID) it should be fine to put the id on the query string.
Across domains, you cannot share the session, so I was thinking about POST.
idea 1
If you're afraid of "showing" the username in the address bar, why not send a POST?
<form name="myForm" method="post" action="http://www.mydomain.com/myLandingPage.aspx">
  <!-- hidden inputs need a name attribute to be submitted -->
  <input type="hidden" name="userid" id="userid" value="myUsername" />
  <input type="submit" value="click here" />
</form>
but then... of course, "View Source" will show it
idea 2
then... I remembered that I do the same thing, but sending an encrypted string like:
http://www.anotherapplicationsite.com/somesuburl?userID=HhN01vcEEtMmwdNFliM8QYg+Y89xzBOJJG+BH/ARC7g=
You can use the Rijndael algorithm for this; the link below has VB and C# code:
http://www.obviex.com/samples/EncryptionWithSalt.aspx
Then on site 2, just decrypt and check whether the user exists... if they do, continue; if not, report that the user tried to tamper with the query string :)
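A minimal sketch of that idea using the Aes class (Rijndael's standardized form; .NET 6+ assumed, and the key handling is illustrative only):

using System;
using System.Security.Cryptography;
using System.Text;

public static class QueryStringCrypto
{
    // Both sites must share this key; load it from secure config in practice.
    private static readonly byte[] Key = new byte[32];

    public static string Encrypt(string userId)
    {
        using var aes = Aes.Create();
        aes.Key = Key;
        aes.GenerateIV(); // fresh IV per message
        byte[] cipher = aes.EncryptCbc(Encoding.UTF8.GetBytes(userId), aes.IV);
        // Prepend the IV so the receiving site can decrypt.
        byte[] packed = new byte[aes.IV.Length + cipher.Length];
        aes.IV.CopyTo(packed, 0);
        cipher.CopyTo(packed, aes.IV.Length);
        // Base64 contains characters like '+' that must be URL-encoded.
        return Uri.EscapeDataString(Convert.ToBase64String(packed));
    }

    public static string Decrypt(string token)
    {
        byte[] packed = Convert.FromBase64String(Uri.UnescapeDataString(token));
        using var aes = Aes.Create();
        aes.Key = Key;
        byte[] iv = packed[..16];     // AES uses a 16-byte IV
        byte[] cipher = packed[16..];
        return Encoding.UTF8.GetString(aes.DecryptCbc(cipher, iv));
    }
}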

JSON Security

Do page methods and JSON have security risks? (I don't use cookies.) For example, I have a page method and I am sending a user ID as a parameter, but I don't want to show it to the user. Can the user get the user ID from the page method?
Yes, they can (see the user ID). Any communication between the server and the client can be seen by the user. Take a look with Fiddler or Firebug to see what goes on. You can treat it the same as any regular GET or POST request.
I know of no reason not to use it. Without knowing any of the background I can't give a definitive answer on whether I would choose it, but in general there is no reason not to; just apply the same security you would use for HTTP GET and POST requests in regular form submissions.
It has the same security risks as a regular GET or POST; it is just another format for sending data back and forth. If you were using a regular POST, anyone would be able to see the user ID just the same.
So if you don't want people messing with the user ID, you could send along some sort of encrypted string dependent on the user ID, for validation, to name one of many possible solutions.
JSON has no security by itself; it's an unencrypted data format.
JSON can use FormsAuthentication security just like pages. What I usually do, if I don't want the end user to see an identifier, is store that value (or something I can use to look up that value) in User.Identity.Name.
The most complicated part of this approach is that the JSON may not return anything if you aren't authenticated. To work around this, I tend to include a non-authenticated endpoint that returns JSON telling you whether the user is logged in or not.
I am hiding the user ID parameter in a hidden field and am just concerned whether it can be changed in that process. Thanks for all of your support.
If the user ID is in a hidden form field, then it is completely exposed to anyone who views the page source in the browser. Not only can they see the user ID, but they can see how you are sending it to the server.
In general, you never trust the client with sensitive data. Assume that they can always manipulate the response.
The way to securely pass messages is to give the user a session token in the form of a string. This session token should be generated with a fair amount of randomness and include the username in the algorithm. Take a look at resources on hashing and salting (and note that MD5 is no longer considered secure for this). With this token, the assumption is that the user cannot reverse-engineer the contents; since they do not have the server-side secret, they cannot tamper with it. Your server will have to decrypt the session token to retrieve the user ID, of course.
This in itself does not mean your application is completely secure - it only fixes one of potentially many issues.

How do I prevent replay attacks?

This is related to another question I asked. In summary, I have a special case of a URL where, when a form is POSTed to it, I can't rely on cookies for authentication or to maintain the user's session, but I somehow need to know who they are, and I need to know they're logged in!
I think I came up with a solution to my problem, but it needs fleshing out. Here's what I'm thinking. I create a hidden form field called "username", and place within it the user's username, encrypted. Then, when the form POSTs, even though I don't receive any cookies from the browser, I know they're logged in because I can decrypt the hidden form field and get the username.
The major security flaw I can see is replay attacks. How do I prevent someone from getting ahold of that encrypted string, and POSTing as that user? I know I can use SSL to make it harder to steal that string, and maybe I can rotate the encryption key on a regular basis to limit the amount of time that the string is good for, but I'd really like to find a bulletproof solution. Anybody have any ideas? Does the ASP.Net ViewState prevent replay? If so, how do they do it?
Edit: I'm hoping for a solution that doesn't require anything stored in a database. Application state would be okay, except that it won't survive an IIS restart or work at all in a web farm or garden scenario. I'm accepting Chris's answer, for now, because I'm not convinced it's even possible to secure this without a database. But if someone comes up with an answer that does not involve the database, I'll accept it!
If you hash in a time-stamp along with the user name and password, you can close the window for replay attacks to within a couple of seconds. I don't know if this meets your needs, but it is at least a partial solution.
There are several good answers here and putting them all together is where the answer ultimately lies:
Block-cipher encrypt (with AES-256+) and hash (with SHA-2+) all state/nonce-related information that is sent to the client. Hackers will otherwise just manipulate the data, view it to learn the patterns, and circumvent everything else. Remember... it only takes one open window.
Generate a one-time, random and unique nonce per request that is sent back with the POST request. This does two things: it ensures that the POST response goes with THAT request, and it allows tracking one-time use of a given GET/POST pair (preventing replay).
Use timestamps to make the nonce pool manageable. Store the timestamp in an encrypted cookie per #1 above. Throw out any requests older than the maximum response time or session length for the application (e.g., an hour).
Store a "reasonably unique" digital fingerprint of the machine making the request with the encrypted time-stamp data. This will prevent another trick wherein the attacker steals the clients cookies to perform session-hijacking. This will ensure that the request is coming back not only once but from the machine (or close enough proximity to make it virtually impossible for the attacker to copy) the form was sent to.
There are ASP.NET and Java/J2EE security-filter-based products that do all of the above with zero coding. Managing the nonce pool for large systems (like a stock-trading company, bank, or high-volume secure site) is not a trivial undertaking if performance is critical. I would recommend looking at those products rather than trying to program this for each web application.
If you really don't want to store any state, I think the best you can do is limit replay attacks by using timestamps and a short expiration time. For example, server sends:
{Ts, U, HMAC({Ts, U}, Ks)}
Where Ts is the timestamp, U is the username, and Ks is the server's secret key. The user sends this back to the server, and the server validates it by recomputing the HMAC on the supplied values. If it's valid, you know when it was issued, and can choose to ignore it if it's older than, say, 5 minutes.
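A minimal sketch of issuing and validating such a token (HMAC-SHA256 assumed; names mirror the formula above):

using System;
using System.Security.Cryptography;
using System.Text;

public static class StatelessToken
{
    // Ks: the server's secret key. Load from secure config in practice.
    private static readonly byte[] Ks = Encoding.UTF8.GetBytes("server-secret");

    public static string Issue(string username)
    {
        long ts = DateTimeOffset.UtcNow.ToUnixTimeSeconds(); // Ts
        string payload = $"{ts}|{username}"; // assumes usernames contain no '|'
        using var hmac = new HMACSHA256(Ks);
        string mac = Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(payload)));
        return $"{payload}|{mac}"; // {Ts, U, HMAC({Ts, U}, Ks)}
    }

    // Recompute the HMAC over the supplied values and enforce a maximum age.
    public static bool Validate(string token, TimeSpan maxAge, out string username)
    {
        username = null;
        var parts = token.Split('|');
        if (parts.Length != 3 || !long.TryParse(parts[0], out long ts))
            return false;
        using var hmac = new HMACSHA256(Ks);
        string expected = Convert.ToBase64String(
            hmac.ComputeHash(Encoding.UTF8.GetBytes($"{parts[0]}|{parts[1]}")));
        if (expected != parts[2])
            return false; // forged or corrupted token
        if (DateTimeOffset.UtcNow.ToUnixTimeSeconds() - ts > maxAge.TotalSeconds)
            return false; // older than, say, 5 minutes: reject as a possible replay
        username = parts[1];
        return true;
    }
}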
A good resource for this type of development is The Do's and Don'ts of Client Authentication on the Web
You could use some kind of random challenge string that's used along with the username to create the hash. If you store the challenge string on the server in a database you can then ensure that it's only used once, and only for one particular user.
In one of my apps, to stop replay attacks, I insert IP information into my session object. Every time I access the session object in code, I pass Request.UserHostAddress with it and compare to make sure the IPs match up. If they don't, then obviously someone other than that person made the request, so I return null. It's not the best solution, but it is at least one more barrier against replay attacks.
Can you use memory or a database to maintain any information about the user or request at all?
If so, then on the request for the form I would include a hidden form field whose contents are a randomly generated number. Save this token in the application context or some sort of store (a database, flat file, etc.) when the request is rendered. When the form is submitted, check the application context or database to see whether that randomly generated number is still valid (however you define valid; maybe it expires after X minutes). If so, remove the token from the list of "allowed tokens".
Thus any replayed request would include the same token, which is no longer considered valid by the server.
I am new to some aspects of web programming but I was reading up on this the other day. I believe you need to use a Nonce.
(Replay attacks can easily be all about IP/MAC spoofing, plus you're challenged by dynamic IPs.)
It is not just replay you are after here; in isolation it is meaningless. Just use SSL and avoid handcrafting anything.
ASP.NET ViewState is a mess; avoid it. While PKI is heavyweight and bloated, at least it works without inventing your own security "schemes". So if I could, I'd use it, and always go for mutual authentication. Server-only authentication is quite useless.
The ViewState includes security functionality. See this article about some of the built-in security features in ASP.NET. ViewState is validated against the machineKey in machine.config on the server, which ensures that each postback is valid.
Further down in the article, you will also see that if you want to store values in your own hidden fields, you can use the LosFormatter class to encode the value in the same way that the ViewState is encoded.
private string EncodeText(string text) {
    // LosFormatter serializes the value in the same limited-object-serialization
    // format that ViewState uses (note: this encodes, it does not encrypt).
    StringWriter writer = new StringWriter();
    LosFormatter formatter = new LosFormatter();
    formatter.Serialize(writer, text);
    return writer.ToString();
}
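For completeness, the reverse operation would look something like this (LosFormatter.Deserialize accepts the encoded string and returns the original object):

private string DecodeText(string encoded) {
    // Deserialize returns object; cast back to the type that was serialized.
    LosFormatter formatter = new LosFormatter();
    return (string)formatter.Deserialize(encoded);
}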
Use https... it has replay protection built in.
If you only accept each key once (say, make the key a GUID, and then check when it comes back), that would prevent replays. Of course, if the attacker responds first, then you have a new problem...
Is this WebForms or MVC? If it's MVC, you could use the AntiForgery token. It seems similar to the approach you mention, except that it uses basically a GUID and sets a cookie with the GUID value for that post. For more on that, see Steve Sanderson's blog: http://blog.codeville.net/2008/09/01/prevent-cross-site-request-forgery-csrf-using-aspnet-mvcs-antiforgerytoken-helper/
Another thing: have you considered checking the referrer on the postback? This is not bulletproof, but it may help.
