How to use the randomly generated code for secure access? - asp.net

I have been thinking of securing the login system by using a long random password generated on the server and sent to the client as a hidden field. I would then append the credentials to that password and encrypt the result. But then I realized that since the way I append is visible in the JavaScript code, decryption will be fairly easy. So, is there any way this appending technique can be made safe, or is SSL the only option?

Anything that is sent via HTTP is visible and could be intercepted. Use HTTPS at least for login pages.

Have a look at the way AntiForgeryToken is implemented in MVC. Here is a good link that could help you figure out what you need to do and give you some ideas.
http://weblogs.asp.net/dixin/archive/2010/05/22/anti-forgery-request-recipes-for-asp-net-mvc-and-ajax.aspx
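For reference, the standard MVC pattern is two-sided: the view helper emits a hidden token field (paired with a cookie), and an attribute on the action validates both on postback. A minimal sketch (the controller, action and model names are illustrative):

```csharp
// View (Razor): Html.AntiForgeryToken() renders a hidden
// __RequestVerificationToken field and sets a matching cookie.
@using (Html.BeginForm("Login", "Account", FormMethod.Post))
{
    @Html.AntiForgeryToken()
    @Html.TextBoxFor(m => m.UserName)
    @Html.PasswordFor(m => m.Password)
    <input type="submit" value="Log in" />
}

// Controller: the attribute rejects the request unless the
// form token and the cookie token match.
[HttpPost]
[ValidateAntiForgeryToken]
public ActionResult Login(LoginModel model)
{
    // The token has already been validated by the time we get here.
    return RedirectToAction("Index", "Home");
}
```

Note that the anti-forgery token protects against CSRF; it does nothing to hide the credentials in transit, so you still want HTTPS on the login page.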

If you want to minimize the impact of "bad guys" in the middle, SSL is the best way. And when there is a proven way (SSL), why reinvent the wheel?

Related

How do I bypass ASP.NET validation

I have a legal contract for this purpose.
I was trying to perform an XSS attack on a website which uses ASP.NET. Its form validation is preventing me from entering a payload. Is there any way to bypass that?
Certain Unicode characters make it through.
But it really depends on what you are trying to achieve. If you need to prove the point that request validation is not enough to protect against XSS, then you really need to find such a payload.
The more common task would be: find the parts of the application that are affected by missing output encoding. For that you would need to remove request validation in a test environment.

CSRF protection while making use of server side caching

Situation
There is a site at examp.le that costs a lot of CPU/RAM to generate, and a leaner examp.le/backend that performs various tasks to read, write and serve user-specific data for authenticated requests. A lot of resources could be saved by utilizing a server-side cache on the examp.le site (but not on examp.le/backend) and just asynchronously grabbing all user-specific data from the backend once the page arrives at the client. (Total loading time may even be lower, despite the need for an additional request.)
Threat model
CSRF attacks. Assuming (maybe foolishly) that examp.le is reliably safeguarded against XSS code injection, we still need to consider scripts on a malicious site exploit.me that cause the victim's browser to run a request against examp.le/backend, with their authorization cookies included automagically, and cause the server to perform some kind of data mutation on behalf of the user.
Solution / problem with that
As far as I understand, the commonly used countermeasure is to include another token in the generated examp.le page. The server can verify this token is linked to the current user's session and will only accept requests that can provide it. But I assume caching won't work very well if we are baking a random token into every response to examp.le?
So then...
I see two possible solutions: One would be some sort of "hybrid caching" where each response to examp.le is still programmatically generated, but that program just merges small dynamic parts into some cached output. It wouldn't work with caching systems that operate on higher layers of the server stack, let alone a CDN, but it still might have its merits. I don't know if there is a standard way or libraries to do this, or more specifically if there are solutions for WordPress (which happens to be the culprit in my case).
The other (preferred) solution would be to get an initial anti-CSRF token directly from examp.le/backend. But I'm not quite clear about the implications of that. If the script on exploit.me could somehow obtain that token, the whole mechanism would make no sense to begin with. The way I understand it, if we leave exploitable browser bugs and security holes out of the picture and consider only requests coming from a non-obscure browser visiting exploit.me, then the Origin header (HTTP_ORIGIN) can be absolutely trusted to be tamper-proof. Is that correct? But then that begs the question: wouldn't we get mostly the same amount of security in this scenario by only checking the authentication cookie and Origin header, without throwing tokens back and forth?
I'm sorry if this question feels a bit all over the place, but I'm partly still in the process of getting the whole picture clear ;-)
First of all: Cross-Site Scripting (XSS) and Cross-Site Request Forgery (CSRF) are two different categories of attacks. I assume you meant to tackle the CSRF problem only.
Second of all: it's crucial to understand what CSRF is about. Consider the following.
A POST request to examp.le/backend changes some kind of crucial data.
The request to examp.le/backend is protected by authentication mechanisms, which generate valid session cookies.
I want to attack you. I do it by sending you a link to a page I have forged at cats.com/best_cats_evr.
If you are logged in to examp.le in one browser tab and you open cats.com/best_cats_evr in another, the code will be executed.
The code on cats.com/best_cats_evr will send a POST request to examp.le/backend. The cookies will be attached, as there is no reason why they should not be. You will perform a change on examp.le/backend without knowing it.
So, having said that, how can we prevent such attacks?
The CSRF case is very well known to the community, and it makes little sense for me to write everything down myself. Please check the OWASP CSRF Prevention Cheat Sheet, as it is one of the best pages you can find on this topic.
And yes, checking the origin would help in this scenario. But checking the origin will not help if I find an XSS vulnerability in examp.le/somewhere_else and use it against you.
What would also help is not using POST requests (as they can be forged without triggering an origin check) but using e.g. PUT, where CORS preflight should help... But this quickly turns out to be too much rocket science for the dev team to handle, and sticking to good old anti-CSRF tokens (supported by default in every framework) should help.
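As a sketch of the origin check discussed above (classic ASP.NET assumed; the helper name is mine), understood as a defence-in-depth measure rather than a replacement for tokens:

```csharp
// Reject state-changing requests whose Origin (or, failing that,
// Referer) host does not match our own host.
private static bool IsSameOrigin(HttpRequest request)
{
    string origin = request.Headers["Origin"] ?? request.Headers["Referer"];
    if (string.IsNullOrEmpty(origin))
        return false; // be conservative: no header, no mutation

    Uri originUri;
    if (!Uri.TryCreate(origin, UriKind.Absolute, out originUri))
        return false;

    return string.Equals(originUri.Host, request.Url.Host,
                         StringComparison.OrdinalIgnoreCase);
}
```

The caller would invoke this for every mutating request to the backend and answer 403 on a mismatch.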

Asp.net Page access through IP address control

Is it possible to create a page in ASP.NET that allows access only to a user with a defined IP address? My goal is to add a page "test" (not linked to my website), and I want to define a rule that only a specified IP address can access it.
How can I implement this through ASP.NET?
You could try putting the page(s) in a separate folder and password protect it, then, give the password to your user, so they may access the content. You could go as far as password protecting each file. This helps if your website is password protected or has a login.
You could also create a sub-domain for that user specifically.
These are just a few. I'm sure you'll get better suggestions here on SO!
You could go for a programmatic solution. However, I would use IIS functions to block the access. Less code, easier to configure, and no hassle in your development/test environment.
Assumption: you are using IIS, since it is ASP.NET. But other web servers should have similar solutions.
You can add IP restrictions to the directory (meaning you would have to put your page in a separate directory). Example here: http://www.therealtimeweb.com/index.cfm/2012/10/18/iis7-restrict-by-ip
Obviously there are a lot of other and arguably better ways to grant access to a page if what you really want is for a specific "user" or "group" to have access, but assuming that you really want the access control to be based on IP, the answer may still depend on peripheral concerns such as what web server you are using. IIS, for example, has some features for IP-based security that you could check out.
Assuming though that you really, really want to check IPs and that you want to do it in code, you would find information about the calling environment in the Request of the current HttpContext, i.e. context.Request.UserHostAddress.
If you want to reject calls based on this information, you should probably do that as early as possible. In the HttpApplication.BeginRequest event you could check if the call is targeted for the page in question and reject the request if the UserHostAddress is not to your liking.
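A sketch of such an early rejection in Global.asax (the page path and the IP address are placeholders; in a real setup the list would come from configuration):

```csharp
// Global.asax.cs (requires using System; using System.Linq; using System.Web;)
protected void Application_BeginRequest(object sender, EventArgs e)
{
    HttpContext context = HttpContext.Current;

    // Only guard the restricted page; let everything else through.
    if (!context.Request.Path.Equals("/test.aspx",
                                     StringComparison.OrdinalIgnoreCase))
        return;

    string[] authorizedIps = { "203.0.113.17" }; // placeholder address

    if (!authorizedIps.Contains(context.Request.UserHostAddress))
    {
        context.Response.StatusCode = 401;
        context.Response.End(); // stop the pipeline before the page executes
    }
}
```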
If you prefer to make this control in the actual page, do it in some early page event.
To manage the acceptable IP(s), rather than hard-coding them into your checking code, I suggest you work with a ConfigurationSection or similar. Your checking code could be something similar to:
// authorizedIpConfiguration holds a comma-separated list, e.g. "10.0.0.1, 192.168.0.5"
var authorizedIps = authorizedIpConfiguration
    .Split(',')
    .Select(ipString => ipString.Trim())
    .ToList();

bool isValid = authorizedIps.Any()
               && authorizedIps.Contains(context.Request.UserHostAddress);
If the check fails, you should alter the response accordingly, i.e. at least set its status code to 401 (http://en.wikipedia.org/wiki/List_of_HTTP_status_codes).
NB: There are a lot of things to consider when implementing security features, and the general recommendation would probably stand as "don't do it" - it's so easy to falter. Try to use well-proven concepts and "standard implementations" if possible. The above example should not in itself be considered to provide a "secure" solution, as there are generally speaking many ways that restricted data can leak from your solution.
EDIT: From your comment to the answer given by nocturns2, it seems you want to restrict access to the local computer? If so, then there is a much easier and cleaner solution: just check the Request.IsLocal property. It will return true only for requests originating from the local computer; see HttpRequest.IsLocal Property.
(Also, you should really make sure that this "debug page" is not published at all when deploying your solution. If you manage that properly and securely, then perhaps you do not even need the access check any more. If you want debugging options in a "live" environment, you should probably look to HttpContext.Current.Trace or some other logging functionality.)
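The IsLocal variant is a one-liner, e.g. in the debug page itself:

```csharp
protected void Page_Load(object sender, EventArgs e)
{
    // Serve this page only to requests coming from the machine itself.
    if (!Request.IsLocal)
    {
        Response.StatusCode = 403;
        Response.End();
    }
}
```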

What is the simplest way for an app to communicate with a website in asp.net?

I have a desktop application.
Users register to use it.
When they register, I need to make sure their email address is unique.
So I need to send a request to a website that keeps a list of all email addresses and returns the results.
What is the simplest, quickest way to do this in ASP.NET?
I could do this:
Send a webrequest:
http://www.site.com/validate.aspx?email=a@a.a
And the aspx returns xml:
<response>Valid</response>
There are a few other simple tasks like this.
There is no need for heavy security, nor to make the system particularly robust against high demand.
I have used web services before but that seems like too much overhead for this simple task.
Is there an elegant API that wraps up this communication system, as it must be very common?
HTTP has you covered. Just check the response code from your server.
http://en.wikipedia.org/wiki/List_of_HTTP_status_codes
I suggest you use
409 Conflict
Indicates that the request could not be processed because of a conflict in the request
but HTTP 418 is my favorite since I like OpenGL.
Actually, I'd go with something like what you described. It works for me unless you need something else. And I've seen a number of online services work exactly this way.
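A minimal server side for this could be a generic handler that answers with nothing but a status code (the handler name and the EmailExists lookup are illustrative):

```csharp
// Validate.ashx: GET /Validate.ashx?email=... returns
// 200 if the address is free, 409 if it is already registered.
public class Validate : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        string email = context.Request.QueryString["email"];

        if (string.IsNullOrEmpty(email))
        {
            context.Response.StatusCode = 400; // missing parameter
            return;
        }

        // Stand-in for the real lookup against your e-mail store.
        context.Response.StatusCode = EmailExists(email) ? 409 : 200;
    }

    private static bool EmailExists(string email)
    {
        return false; // illustrative only
    }

    public bool IsReusable
    {
        get { return true; }
    }
}
```

On the desktop side, note that HttpWebRequest.GetResponse() throws a WebException for 4xx responses, so a 409 surfaces in the catch block rather than as a normal response.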

How to encrypt information in aspx page?

I know it's a silly question, but
my client asked for encrypting some information from their payment system to prevent users from stealing personal information.
The system is web-based and written in ASP.NET.
We have tried some annoying solutions such as JavaScript no-right-click or CSS no-print,
but apparently my client didn't like them.
So, are there any commercial solutions to encrypt information in the HTML produced by aspx pages?
Or can someone tell me how to persuade my client to drop this "prevent stealing" idea in a web-based system?
If your client is worried about data being stolen "over the wire", do what Jaxidian mentioned and use SSL.
If your client is worried about users stealing data from pages they view, then tell them there's nothing they can do in a web app to stop that. Users need to download a page to view on their computers so no matter what you do, HTML web pages can always have their content downloaded by a user, even if you add some hoops to make it more difficult.
The only way to stop a user from stealing data from pages they view is to not make your app web-based. You'll have to write a native app that gets installed on users' machines with strict DRM in order to stop them from copying content. And even then, DRM can be cracked. Just look at Sony.
If your client was referring to encrypting data within your database, then you should look into AES Encryption in .NET.
SSL Certificates
Verisign
Thawte
There are many others, some trusted and others not trusted - do your homework.
<Edit> Here is a very thorough step-by-step tutorial explaining how you would go about using an SSL Cert in IIS.</Edit>
I came up with a really silly answer for my client.
I tried encoding the information in the aspx with Base64, like
string encoded = Convert.ToBase64String(Encoding.UTF8.GetBytes("Something"))
and decoding the data with the jQuery Base64 plugin.
The aspx looks like:
<span class="decoding"><%=encoded%></span>
with a jQuery script that takes every .decoding element and decodes it:
$(function() {
    $.base64.is_unicode = true;
    $(".decoding").each(function() {
        $(this).html($.base64.decode($(this).html()));
    });
});
So the source data will look like a meaningless string, which is what my client wants,
along with some evil JavaScript to prevent printing and clean the user's clipboard.
I have completed a website with zero usability
and still can't prevent anything! Well done :)
