How to encrypt information in an ASPX page? - asp.net

I know it's a silly question, but my client has asked me to encrypt some information from their payment system to prevent users from stealing personal information.
The system is web-based and written in ASP.NET.
We have tried some annoying solutions such as JavaScript no-right-click and CSS no-print,
but apparently my client didn't like them.
So are there any commercial solutions for encrypting information in the HTML that ASPX pages produce?
Or can someone tell me how to persuade my client to give up this "prevent stealing" idea in a web-based system?

If your client is worried about data being stolen "over the wire", do what Jaxidian mentioned and use SSL.
If your client is worried about users stealing data from pages they view, then tell them there's nothing they can do in a web app to stop that. Users have to download a page to view it on their computers, so no matter what you do, the content of an HTML page can always be saved by the user, even if you add some hoops to make it more difficult.
The only way to stop a user from stealing data from pages they view is to not make your app web-based. You'll have to write a native app that gets installed on users' machines with strict DRM in order to stop them from copying content. And even then, DRM can be cracked. Just look at Sony.
If your client was referring to encrypting data within your database, then you should look into AES Encryption in .NET.
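For reference, here is a minimal sketch of AES encryption using the built-in System.Security.Cryptography.Aes class. The method shape and key handling are my own illustration, not a prescribed API; manage keys securely and add error handling before using anything like this in production.

    using System.IO;
    using System.Security.Cryptography;
    using System.Text;

    static class AesSketch
    {
        // Encrypts plaintext with AES (CBC mode by default) and returns
        // IV + ciphertext so the receiver can decrypt with the same key.
        public static byte[] Encrypt(string plaintext, byte[] key)
        {
            using (var aes = Aes.Create())
            {
                aes.Key = key;
                aes.GenerateIV();
                using (var ms = new MemoryStream())
                {
                    ms.Write(aes.IV, 0, aes.IV.Length); // prepend the IV
                    using (var cs = new CryptoStream(ms, aes.CreateEncryptor(), CryptoStreamMode.Write))
                    using (var sw = new StreamWriter(cs, Encoding.UTF8))
                    {
                        sw.Write(plaintext);
                    }
                    return ms.ToArray(); // MemoryStream.ToArray works even after close
                }
            }
        }
    }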

SSL Certificates
Verisign
Thawte
There are many others, some trusted and others not trusted - do your homework.
Edit: Here is a very thorough step-by-step tutorial explaining how you would go about using an SSL cert in IIS.

I came up with a really silly answer for my client.
I tried encoding the information in the ASPX with Base64, like so:
string encoded = Convert.ToBase64String(Encoding.UTF8.GetBytes("Something"));
and decoding the data with the jQuery Base64 plugin.
The ASPX looks like:
<span class="decoding"><%=encoded%></span>
with a jQuery script that takes every .decoding element and decodes it:
$(function() {
    $.base64.is_unicode = true;
    $(".decoding").each(function() {
        $(this).html($.base64.decode($(this).html()));
    });
});
so the source data looks like a meaningless string, which is what my client wants,
along with some evil JavaScript to prevent printing and clear the user's clipboard.
I have completed a website with zero usability,
and it still can't prevent anything! Well done :)

Related

Do I need extra XSS security for ASP.NET 4 websites?

From what I understand about what ASP.NET does, and from my own testing with various XSS attacks, my ASP.NET 4 website does not seem to require any extra XSS prevention.
Do you think an ASP.NET 4.0 website needs any XSS security beyond its default options? I cannot enter any JavaScript or any tags into my text fields that are then immediately printed onto the page.
Disclaimer - this is based on a very paranoid definition of what "trusted output" is, but when it comes to web security, I don't think you CAN be too paranoid.
Taken from the OWASP page linked to below: "Untrusted data is most often data that comes from the HTTP request, in the form of URL parameters, form fields, headers, or cookies. But data that comes from databases, web services, and other sources is frequently untrusted from a security perspective. That is, it might not have been perfectly validated."
In most cases, you do need more protection if you are taking input from ANY source and outputting it to HTML. This includes data retrieved from files, databases, etc - much more than just your textboxes. You could have a website that is perfectly locked down and have someone go directly to the database via another tool and be able to insert malicious script.
Even if you're taking data from a database where only a trusted user is able to enter the data, you never know if that trusted user will inadvertently copy and paste in some malicious script from a website.
Unless you absolutely positively trust any data that will be output on your website and there is no possible way for a script to inadvertently (or maliciously in case of an attacker or disgruntled employee) put dangerous data into the system, you should sanitize all output.
If you haven't already, familiarize yourself with the info here: https://www.owasp.org/index.php/XSS_%28Cross_Site_Scripting%29_Prevention_Cheat_Sheet
and go through the other known threats on the site as well.
In case you miss it, the Microsoft.AntiXss library is a very good tool to have at your disposal. In addition to a better version of the HtmlEncode function, it also has nice features like GetSafeHtmlFragment() for when you WANT to include untrusted HTML in your output and have it sanitized. This article shows proper usage: http://msdn.microsoft.com/en-us/library/aa973813.aspx The article is old, but still relevant.
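For illustration, usage looks roughly like this (method names as I recall them from the Microsoft.Security.Application namespace; in older AntiXSS versions GetSafeHtmlFragment lives on the AntiXss class rather than Sanitizer, so check the version you install):

    using Microsoft.Security.Application;

    public static class OutputEncoding
    {
        // Encode untrusted text for an HTML context before writing it out.
        public static string ForHtml(string userInput)
        {
            return Encoder.HtmlEncode(userInput);
        }

        // Context matters: values emitted inside a script block need the
        // JavaScript encoder, not the HTML encoder.
        public static string ForScript(string userInput)
        {
            return Encoder.JavaScriptEncode(userInput);
        }

        // When you WANT to allow a subset of HTML from an untrusted source,
        // sanitize it instead of encoding it wholesale.
        public static string ForHtmlFragment(string untrustedHtml)
        {
            return Sanitizer.GetSafeHtmlFragment(untrustedHtml);
        }
    }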
Sorry Dexter, ASP.NET 4 sites do require XSS protection. You're probably thinking that the built-in request validation is sufficient, and whilst it does an excellent job, it's not foolproof. It's still essential that you validate all input against a whitelist of acceptable values.
The other thing is that request validation is only any good for reflected XSS, that is, XSS which is embedded in the request. It won't help you at all with persistent XSS, so if you have other data sources where the input validation has not been as rigorous, you're at risk. As such, you always need to encode your output, and encode it for the correct markup context (HTML, JavaScript, CSS). AntiXSS is great for this.
There's lots more info specifically as it relates to ASP.NET in OWASP Top 10 for .NET developers part 2: Cross-Site Scripting (XSS).

Scraping ASP.NET with Python and urllib2

I've been trying (unsuccessfully, I might add) to scrape a website created with the Microsoft stack (ASP.NET, C#, IIS) using Python and urllib/urllib2. I'm also using cookielib to manage cookies. After spending a long time profiling the website in Chrome and examining the headers, I've been unable to come up with a working solution to log in. Currently, in an attempt to get it to work at the most basic level, I've hard-coded the encoded URL string with all of the appropriate form data (even View State, etc..). I'm also passing valid headers.
The response that I'm currently receiving reads:
29|pageRedirect||/?aspxerrorpath=/default.aspx|
I'm not sure how to interpret the above. Also, I've looked pretty extensively at the client-side code used in processing the login fields.
Here's how it works: you enter your username/password and hit a 'Login' button. Pressing the Enter key also simulates this button press. The input fields aren't in a form. Instead, there are a few onClick events on said Login button (most of which are just for aesthetics), but the one in question handles validation. It does some rudimentary checks before sending the credentials off to the server side. Based on the web resources, it definitely appears to be using .NET AJAX.
When logging into this website normally, you POST to the domain with form data containing your username and password, among other things. Then some sort of URL rewrite or redirect takes you to a content page at url.com/twitter. When attempting to access url.com/twitter directly, it redirects you to the main page.
I should note that I've decided to leave the URL in question out. I'm not doing anything malicious, just automating a very monotonous check once every reasonable increment of time (I'm familiar with compassionate screen scraping). However, it would be trivial to associate my StackOverflow account with that account in the event that it didn't make the domain owners happy.
My question is: I've been able to successfully log in and automate services in the past, none of which were .NET-based. Is there anything different that I should be doing, or maybe something I'm leaving out?
For anyone else that might be in a similar predicament in the future:
I'd just like to note that I've had a lot of success with a Greasemonkey user script in Chrome to do all of my scraping and automation. I found it to be a lot easier than Python + urllib2 (at least for this particular case). The user scripts are written in 100% JavaScript.
When scraping a web application, I use either:
1) WireShark ... or...
2) A logging proxy server (that logs headers as well as payload)
I then compare what the real application does (in this case, how your browser interacts with the site) with the scraper's logs. Working through the differences will bring you to a working solution.
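If the scraper happens to be .NET-based, you can get a poor man's version of that logging proxy in-process with a DelegatingHandler; the same diff-the-traffic approach applies regardless of language. A sketch under that assumption (the class name is mine):

    using System;
    using System.Net.Http;
    using System.Threading;
    using System.Threading.Tasks;

    // Logs every request and response (headers included) so the output can
    // be diffed line-by-line against a browser capture of the same login.
    class LoggingHandler : DelegatingHandler
    {
        public LoggingHandler() : base(new HttpClientHandler()) { }

        protected override async Task<HttpResponseMessage> SendAsync(
            HttpRequestMessage request, CancellationToken cancellationToken)
        {
            Console.WriteLine("--> {0} {1}", request.Method, request.RequestUri);
            Console.WriteLine(request.Headers);
            HttpResponseMessage response = await base.SendAsync(request, cancellationToken);
            Console.WriteLine("<-- {0} {1}", (int)response.StatusCode, response.ReasonPhrase);
            Console.WriteLine(response.Headers);
            return response;
        }
    }

    // Usage: var client = new HttpClient(new LoggingHandler());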

How can I prevent/make it hard to download my flash video?

I want to at least prevent normal users from downloading my Flash video.
What's the best way to do it?
Create an HttpHandler, add a token (e.g. a time id), and set the cache control to no-cache, so that only users with the correct token can view the video. Is that feasible?
It is a requirement from the client that the video must not be downloadable by users and should be watchable only on the particular website.
I want to know if this works:
http://www.somesite.com/video.swf?time=1248319067
The server generates a token (time in the above example) so that the user gets only one request to this link. If the user wants to watch the video again, he needs to go back to our website to get a new token. Is this enough to prevent novices from downloading?
I can't download this Flash video with the DownloadHelper Firefox plugin:
http://news.bbc.co.uk/2/hi/americas/8164177.stm
Update (13:49, 2009/07/23):
The above file can be downloaded using some video-download software.
The video files on the following Chinese site are well protected (I can't download them using many video-download tools):
http://programme.tvb.com/drama/abrideforaride/video/
Do you know how it is done?
I don't think there is an easy way to stop people from getting your videos if they want them;
there are plenty of Firefox plugins that allow downloading from even YouTube and many other places, and I imagine those plugins would defeat any attempt you made to hide your videos.
It's not too terribly different from taking an image from Flickr: they put a clear GIF over the image you want to view, so that when you right-click and save you get "the shield" image; however, that can be defeated by the lowly Print Screen button.
If you want to keep casual users from getting your file, use a Flash control, buffer a minute or two of the video, and make the Flash authenticate with the server to get those files. That seems reasonable to me.
I don't think there really is an easy way to limit people from getting at it. You're sending them the video; that is how they are able to view it. Any user could also just use FRAPS or a similar tool to capture the video from the screen.
If your worry is the video being copied and used elsewhere, you can watermark it or use a few other copy-protection methods that will let you identify your work on other sites. If you're worried about people copying it for personal use, you really have no way of stopping them; you are sending it to them.
Edit: due diligence would be to inform your customer of how easy it is to copy the work they will be posting. Most clients really have no idea how easy it is.
This is how I like to tackle this issue.
This method works by creating a ticket to download the content over one HTTP request. Any second attempt to use the same ticket fails, so an extension (or a user) trying to download the content again gets nothing; the Flash player becomes the only way to fetch the content. There is one downside to this approach: users will not be able to skip to a part of the video that has not been downloaded yet, and in some standard player implementations that may even stop the video from loading. Any ideas on this will be highly appreciated.
I begin by writing a PHP script that takes a video_id, file name, or local path to your video file (depending on the storage infrastructure of your video collection) in a GET request, along with a unique hash value. The hash should be hard to guess and generated with a secret key so it can be validated as coming from our receiver (the Flash player); if an attacker sends a used hash or an invalid one (one that does not satisfy our key), we do not send the file. The PHP script then opens the video file and sends its content with the correct video MIME type (for FLV that is video/x-flv), after making sure the hash has not been used before and was validly generated from your secret encryption key.
Then, when the page with the Flash player loads, we give the .php URL with the right GET parameters to the player as the video URL. (If it is a prude player that only allows .flv files, you can always configure .htaccess to parse .flv files as PHP scripts in that specific folder only, rename your .php file to .flv, and try your luck.) To generate the hash key, you might take the server's current time, append a salt value such as another key known to both scripts, and encrypt this final concatenation with your secret key.
When the video-gateway PHP script receives a file name and hash key, it decrypts the hash key, verifies that it was validly generated by the sister script, and makes sure not to send the video again for the same hash key.
For added security you can reset the secret key every day using either a cron job or a bootstrap mechanism. To prevent duplicate use of hash keys you can store them in a MySQL database, in files, or in a NoSQL store (depending on your needs and infrastructure).
Make sure the file is requested by the same user agent the hash key was generated for, in case an attacker tries to cURL or wget your video's unused URL before the Flash player gets a chance to consume the hash key. The attacker would then have to imitate the browser's user agent in their command-line tool as well. However, please note that this is not your average champ.
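The answer above describes the gateway in PHP; here is a rough C#/ASP.NET translation of the same single-use-ticket idea. The class and storage choices are illustrative assumptions, and a real deployment would persist tickets outside process memory.

    using System;
    using System.Collections.Concurrent;
    using System.Web;

    public class VideoTicketHandler : IHttpHandler
    {
        // Maps an unused ticket to the video file it unlocks. Use a database
        // or distributed cache in production; process memory is for the sketch.
        static readonly ConcurrentDictionary<string, string> Tickets =
            new ConcurrentDictionary<string, string>();

        // Called by the page that embeds the player; the returned ticket goes
        // into the video URL handed to the Flash player.
        public static string IssueTicket(string videoPath)
        {
            string ticket = Guid.NewGuid().ToString("N");
            Tickets[ticket] = videoPath;
            return ticket;
        }

        public void ProcessRequest(HttpContext context)
        {
            string ticket = context.Request.QueryString["ticket"];
            string videoPath;
            // TryRemove makes the ticket single-use: a second request with
            // the same ticket (a download helper, wget, etc.) gets a 403.
            if (ticket == null || !Tickets.TryRemove(ticket, out videoPath))
            {
                context.Response.StatusCode = 403;
                return;
            }
            context.Response.ContentType = "video/x-flv";
            context.Response.CacheControl = "no-cache";
            context.Response.TransmitFile(videoPath);
        }

        public bool IsReusable { get { return true; } }
    }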
It sounds like you need to add authorization and authentication.
You could put the Flash video under a different folder in your ASP.NET application and add a web.config file in that folder to deny access to unauthorized users. For example:
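(The example appears to have been lost from the original post; the standard web.config fragment for denying anonymous users looks like this, placed in the protected folder:)

    <?xml version="1.0"?>
    <configuration>
      <system.web>
        <authorization>
          <!-- "?" denotes anonymous (unauthenticated) users -->
          <deny users="?" />
        </authorization>
      </system.web>
    </configuration>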
Then you need to enable authentication for your website. The simplest method is forms authentication; a trivial example with a hard-coded username and password is provided here.
There are loads of things you can do with the authentication framework in ASP.NET; I suggest googling a bit.
The only way to do this is with a trusted client, DRM, and an encrypted source.
Your player opens a connection, the user has a connection to the stream, you perform some magic authentication with their token, and then you transmit the encrypted data to them.
If you don't do this, anyone can download your video and save it out.
With all that aside, however, someone can still run screen capture and save your video that way. This is again where DRM comes in: one of the key features of DRM on Windows clients is that the buffer cannot be sniffed, as it sits on the protected media pathway.
I guess it's a question of how to protect your revenue, but dealing with pirates is always going to be a problem for software devs, no matter what their business is.
I have a solution that I'm going to try myself (as I have the same worries), but I know it involves a lot of extra time and work...
Solution: using Flash, compress the video into an SWF file. Before compressing, add some ActionScript to the movie for authentication. Suggestions for authentication:
1. Test the URL.
2. Create a dedicated Flash player that has handshake code checked by the video SWF.
I like #2 better, and as an extra measure you can overlay an ID code on the video, so if someone captures the video using screen-recording software, you'd at least be able to track the original source of the copied video... and exact suitable retribution...
Simply put, you can't prevent it.
But you can make it difficult.
Here are some ideas that come to mind:
1. First of all, add your identifier to the video (someone can always still download it).
2. The hard way: make an AJAX call back to the server every N seconds to check a randomly generated key stored in the session. After every postback, clear the player's buffer and restart the video from where it was (using JavaScript).
Use JavaScript again to keep the video source from being discovered via "view source".
3. Serve all your videos at URLs like http://www.example.com/viewvideo/1 or ../?id=1.
Add a blank image overlay with a transparent background.
Serve the original video and a blank video somewhere on the page with a normal extension and the style attribute "display:none". (This will create problems for some download helpers.)
4. Every time you serve a video, CHECK that the request comes from a browser (i.e. check the User-Agent).
5. Set a cookie with some random value combined with the id of the video. Check it client-side and server-side, then serve the video.
6. On the focusout event, hide the video with JavaScript. Put a resume button in the Flash and leave the frame unchanged (like pause, but with no original video in the buffer).
7. Combine these methods.
These are randomly generated ideas;
they're untested, and I'm not claiming they guarantee no video downloading.
I have attempted two ways to prevent the downloading, and both fail:
Using JavaScript to dynamically generate the object tag for the Flash.
Using the token idea proposed in the question.
What annoys me most is that a simple Save As from the Firefox browser could easily bypass both tricks.
The only viable way so far is to use an empty SWF file to load another SWF file in. Combined with the token idea, it works.
In my answer: you can't stop image/video theft, but you can make it harder for normal users. You can't make it harder for programmers like us (I mean thieves who know a little web programming). There are some tricks you can try:
1.) Use Flash, as YouTube and many other sites such as http://www.funnenjoy.com do.
2.) Overlap the content with a DIV or set it as a background picture (but users with a little sense can easily save all resources by opening Inspect Element or other developer options).
3.) You can disable right-click and specific keys like Ctrl+S, and other possibilities, with JavaScript, but the main drawback is that if the user disables JavaScript, all our tricks fall down.
4.) Save the image in a non-web-accessible directory (if you have full access to the web server) and read the file with a server-side language like PHP every time the image/video is required; change the image id from time to time, or create a script that automatically changes the id after every access.
5.) Use .htaccess in Apache to prevent other sites from hotlinking your images. You can use this site to automatically generate the .htaccess: http://www.htaccesstools.com/hotlink-protection/

How to restrict what files a desktop app can download from an online server

The closest example I can think of is iTunes. I'm thinking about a system where a server stores loads of files, and each user only has access to those they have paid for. Using a desktop app, they can download these to their local PC where they are stored as regular files.
How might one approach this? I can see a couple of possible options, and have some initial thoughts, but would welcome feedback on these or other ideas. If you post your preferred design, people can vote on them!
1) Use HTTP requests, and the response is the file data. Then a simple servlet (or similar) can act as a control on which files are downloaded.
PROs: easy to do
CONs: seems a little hacky; how would you display a progress bar?
2) Use sockets, and a custom server app which pipes data to the client.
PROs: perhaps more performant(?), can send data in nicely sized chunks
CONs: a little more work on the client side, and quite a bit more to write a custom server-side app that runs 24/7
Thanks in advance. Someone please edit my tags, I can't think of the right ones!
Use HTTP requests, and the response is the file data. Then a simple servlet (or similar) can act as a control on which files are downloaded. PROs: easy to do CONs: seems a little hacky, how would you display a progress bar?
I don't see why this is hacky. Your app would authenticate using the user's username and password (if you want it to work like iTunes) and fetch files according to permission level. A progress bar is easy to do because you will get the Content-Length header in the response, as sketched below. It's a more flexible approach than FTP - but if FTP already does everything you need, go for that.
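A minimal sketch of that point for a .NET client (the method name and buffer size are my own choices): stream the body to disk while reporting progress against the Content-Length header.

    using System;
    using System.IO;
    using System.Net;

    static void DownloadWithProgress(string url, string destination)
    {
        var request = (HttpWebRequest)WebRequest.Create(url);
        using (var response = (HttpWebResponse)request.GetResponse())
        using (Stream input = response.GetResponseStream())
        using (FileStream output = File.Create(destination))
        {
            long total = response.ContentLength; // -1 if the server omits it
            long done = 0;
            var buffer = new byte[8192];
            int read;
            while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
            {
                output.Write(buffer, 0, read);
                done += read;
                if (total > 0)
                    Console.WriteLine("{0:P0} downloaded", (double)done / total);
            }
        }
    }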
As said, FTP is what you need. To control per-user, per-file permissions, you can create one system user and then apply filesystem-level ACLs. An FTP server like PureFTPd will then let you log in with system accounts under the specified permissions.

Finding the right caching and compression strategy for asp.net

I'm trying to figure out the best way to do caching for a website I'm building. It relies heavily on screen scraping the Wikipedia website. Here is the process I'm currently using:
The user requests a topic from Wikipedia via my site (i.e. http://www.wikipedia.org/wiki/Kevin_Bacon becomes http://www.wikipediamaze.com/wiki?topic=Kevin_Bacon). NOTE: because IIS can't handle requests that end in a '.', I'm forced to use the querystring parameter.
Check to see if I've already stored the formatted HTML in my database; if so, just display it to the user.
Otherwise I perform a web request to wikipedia
Decompress the stream if needed.
Do a bunch of DOM manipulation to get rid of the stuff I don't need (and inject stuff I do need).
Store the html in my database for future requests
Return the html to the browser
Because it relies on screen scraping and DOM manipulation, I'm trying to keep things speedy so that I only have to do this once per topic instead of for every single request. Here are my questions:
Is there a better way of doing caching, or additional things I can do to help performance?
I know ASP.NET has a built-in caching mechanism, but will it work the way I need it to? I don't want to have to retrieve the (pretty heavy) HTML from the database on every request, but I DO need to store the HTML so that every user gets the same page. I only ever want to get the data from Wikipedia once.
Is there anything I can do with compression to get it to the browser quicker, and if so, can the browser handle uncompressing and displaying the HTML? Or is this not even a consideration? The only reason I'm asking is that some of the pages Wikipedia sends me through HttpWebRequest come through as a gzip stream.
Any and all suggestions, guidance, etc. are much appreciated.
Thanks!
You can try enabling OutputCache for your page with VaryByParam="topic". That stores a copy of the page in memory if multiple clients request it; when the page is not in memory, the server can retrieve it from your database. The beauty of OutputCache is that you can even store a gzipped version of the HTML (use VaryByContentEncodings), as in the directive below.
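The page directive would look something like this (VaryByContentEncodings was added in .NET 3.5 SP1, if memory serves; the Duration is just an example value):

    <%@ OutputCache Duration="3600" VaryByParam="topic" VaryByContentEncodings="gzip;deflate" %>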
If it's a problem for you to decompress the stuff you get from Wikipedia, then don't send an Accept-Encoding header. That should force Wikipedia to send the page to you uncompressed.
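Alternatively, if you would rather keep compression on the wire, HttpWebRequest can handle the gzip for you. A sketch; as far as I know, setting AutomaticDecompression also sends the matching Accept-Encoding header:

    using System.Net;

    var request = (HttpWebRequest)WebRequest.Create(
        "http://en.wikipedia.org/wiki/Kevin_Bacon");
    // Transparently decompresses gzip/deflate responses, so the
    // DOM-manipulation code downstream always sees plain HTML.
    request.AutomaticDecompression =
        DecompressionMethods.GZip | DecompressionMethods.Deflate;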
Caching strategy: write the HTML to a static file and let users download from that file.
Compression strategy: check out Google's PageSpeed Best Practices.
