How vulnerable to XSS attacks is Flash? - apache-flex

The reason I ask is that I'm telling a vendor of ours they have to use the MS AntiXSS library with the ASP.NET UI components they make, but they also work with Flex to build Flash-based UIs - and I was wondering if there's an equivalent for Flash (assuming it's vulnerable).

If I understand correctly...
ASP.NET is used to make web pages, and all UI components they make will be running in a browser as HTML / JavaScript. Is that correct?
If that is the case, I can understand why preventing cross-site scripting would be important.
With Flex (which runs in the Flash Player), everything is compiled down into a binary file, called a SWF. Most of the time, the SWF runs inside the Flash Player, which runs in the browser as a plugin. There would be no way to hack an individual Flex component using XSS.
I don't believe the code you write needs to be protected from cross-site scripting. Your biggest fear is player vulnerabilities, which you don't have much control over.
None of this should be a reason not to validate user input.

The short answer is: the Flash player has a lot of features in place to prevent XSS attacks, but they're built into the player itself, so there isn't any particular library you need to use. If you don't call any security-related APIs, and don't put config files on your server, then security-wise, you are already using the most restrictive settings available. (Assuming you also pay attention to how you make use of user input.)
More generally, APIs that have the potential to lead to XSS vulnerabilities are as a rule disabled in cross-site situations unless you actively enable them. For example, if an HTML page on your site loads in a Flash file from another site, and that Flash content tries to, say, make JavaScript calls into your page, those calls will be blocked by default unless you allow them. Similarly, if Flash content on your site loads in components from another site, those components will not be able to introspect into their parent unless you call APIs to allow them to. There are also various restrictions on what happens when another site tries to load in Flash content from your site without your having allowed it.
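To make the "config files on your server" point concrete: the file in question is crossdomain.xml at the web root. A minimal sketch, with a placeholder domain - if the file doesn't exist at all, cross-domain loads of your data are simply refused, which is the restrictive default described above:

    <?xml version="1.0"?>
    <!-- crossdomain.xml: absent by default, which denies all cross-domain
         loads of this site's data; each allow-access-from is an opt-in -->
    <cross-domain-policy>
        <!-- placeholder domain: permit only this partner site -->
        <allow-access-from domain="partner.example.com" />
    </cross-domain-policy>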
For all the details, I highly recommend this excellent overview:
Creating more secure SWF web applications
With all that said, since you also asked about sanitizing user input, it's worth noting that since AS3 has no equivalent of an eval command, there is never any question of user input being executed as script. However, any user input that relates to content being loaded could be a vector for an XSS attack. (For example, if you append a user-input string to a URL you then load, the user could cause your site to load in their malicious SWF.) But such a case is no different from a situation where you load in a benign 3rd-party SWF, and someone later replaces it with malicious content. Hence, in the context of Flash, protecting against XSS attacks is not so much about sanitizing user input as it is about making sure that externally loaded content is not granted permission to run as if it were locally trusted.
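To make that URL example concrete on the ASP.NET side (a hedged sketch; the class name and allow-list are hypothetical, not a standard API): before a user-supplied string is appended to a URL your page or SWF will load, check that the resulting URL points somewhere you trust.

    // Hypothetical helper: accept a user-supplied URL only if it is
    // well-formed http(s) and its host is on an explicit allow-list.
    using System;
    using System.Linq;

    static class UrlGuard
    {
        static readonly string[] allowedHosts = { "assets.example.com" };

        public static bool IsSafeToLoad(string userUrl)
        {
            Uri uri;
            if (!Uri.TryCreate(userUrl, UriKind.Absolute, out uri))
                return false;
            if (uri.Scheme != Uri.UriSchemeHttp && uri.Scheme != Uri.UriSchemeHttps)
                return false;
            return allowedHosts.Contains(uri.Host, StringComparer.OrdinalIgnoreCase);
        }
    }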
And further, it's often useful or necessary to relax the default restrictions if you want to do something interesting with 3rd-party content (like Flash avatars, components, or even banner ads); in those situations it's important for the site admin to understand what they are allowing, and how to prevent the relaxed restrictions from exposing a vulnerability.

Related

How to block anyone from downloading a web page from the browser using Ctrl+S or through the browser's download option?

I am trying to restrict the user from downloading the page as an .html or .aspx file from the browser.
Or is there a way to change the content of the file if it's downloaded?
This is a complex area, with lots of moving parts. The short answer is "there is no way to do this with 100% success; there are a few things you can do which make it harder".
Firstly, you can include JavaScript to disable the right-click context menu (sketched after these points). This doesn't stop Ctrl+S, but might discourage casual attempts.
Secondly, you can use DRM in the browser (though this is primarily aimed at protecting media content). As browser support is all over the show, this isn't realistic right now.
Thirdly, you could write your site as a single-page web application, and build some degree of authentication into the "retrieve content" logic (also sketched below). This way, saving the page to disk wouldn't bring the content along, just the "page furniture". However, any mechanism you include to only download content when you think you should is likely to be easily subverted by anyone who is moderately motivated.
Also, any steps you take to stop people persisting your pages locally are likely to break the caching mechanisms on which the internet depends for performance, so your site would likely be dramatically slower.
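Sketches of the first and third points (ASP.NET/C#, to match the question's .aspx context; the page and handler names are hypothetical, and neither measure stops a determined user):

    using System;
    using System.Web;
    using System.Web.UI;

    // Point one: emit a script that suppresses the right-click context menu.
    // Purely a deterrent - it does not affect Ctrl+S or "view source".
    public partial class ProtectedPage : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            const string script =
                "document.addEventListener('contextmenu', function (e) { e.preventDefault(); });";
            ClientScript.RegisterStartupScript(
                GetType(), "disableContextMenu", script, true);
        }
    }

    // Point three: a content endpoint (e.g. mapped to content.ashx) that the
    // page's script calls after load; saving the HTML shell to disk captures
    // only the page furniture, not this response.
    public class ContentHandler : IHttpHandler
    {
        public void ProcessRequest(HttpContext context)
        {
            if (!context.Request.IsAuthenticated)
            {
                context.Response.StatusCode = 401; // no session, no content
                return;
            }
            context.Response.ContentType = "text/html";
            context.Response.Write("<p>The protected article body...</p>");
        }

        public bool IsReusable { get { return true; } }
    }

As noted above, a logged-in user can still point their own HTTP client at the endpoint, so this raises the bar rather than closing the door.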
No, you can't stop them.
Consider how the web actually works here: once the user has visited your website and loaded your page into their browser, they have already downloaded it - the web page was transmitted from your server to their computer and appeared on their screen.
All they have to do then is click the Save button to keep it permanently on their disk. That doesn't involve downloading it again, it just copies the page data from a temporary folder to a permanent one. Of course it's also possible for people to use another HTTP client (i.e. not a browser, but maybe an existing program, or some code they wrote themselves) to visit the URL of your page and save the returned contents.
It's not clear what problem you think you would solve by stopping people from saving pages. Saving the page is something done within the browser - you as a site developer don't control the user's browser, so you can't prevent that. And if you stop them from downloading your page in the first place then - by definition - you also stop them from using your website...which kind of defeats the point of having one :-).
If you've got some sort of worry about security, you'll have to clarify exactly what you are concerned about, and maybe you can get advice about a sensible way to deal with it.

Preventing cross-site scripting attacks on Flash container pages

I have a website with a Flex application. The Flex application has no user input - except for clicks for navigation. The website also uses no scripting language - i.e. no PHP, ASP, JSP or CFM.
The website just consists of one page which contains the flash file for the flex application. The source code of this page is here: http://pastebin.com/n5b4RxqT
I have been advised (by a software program used by my client) that this website is vulnerable to a reflective type XSS attack and have been advised to 'sanitize' all user input.
I am a noob with respect to XSS, and would respectfully point out that, AFAIK, there is no user input. What should I sanitize, and how?
Thanks in advance
Using AC_OETags is the old way of doing things and is deprecated. What you want to do is use SWFObject, which is now the default in Flash Builder 4.

Should I embed CSS/JavaScript files in a web application?

I've recently started embedding JavaScript and CSS files into our common library DLLs to make deployment and versioning a lot simpler. I was just wondering if there is any reason one might want to do the same thing with a web application, or if it's always best to just leave them as regular files in the web application, and only use embedded resources for shared components?
Would there be any advantage to embedding them?
I had to make this same decision once. The reason I chose to embed my JavaScript/CSS resources into my DLL was to prevent tampering with these files (by curious end users who've purchased my web application) once the application's deployed.
I doubt the validity of Easement's comment about how browsers download JavaScript files. I'm pretty sure that the embedded JavaScript/CSS files are recreated temporarily by ASP.NET before the page is sent to the browser, in order for the browser to be able to download and use them. I'm curious about this and I'm going to run my own tests. I'll let you know how it goes....
-Frinny
Of course, anyone who knew what they were doing could open the assembly in Reflector and extract the JS or CSS. But that would be a heck of a lot more work than just using something like Firebug to get at this information. A regular end user is unlikely to have the desire to go to all of this trouble just to mess with the resources. Anyone who's interested in this type of thing is likely to be a malicious user, not the end user. You have probably got a lot of other problems with regard to security if a user is able to run a tool like Reflector on your DLL, because by that point your server's already been compromised. Security was not the factor in my decision to embed the resources.
The point was to keep users from doing something silly with these resources, like delete them thinking they aren't needed or otherwise tamper with them.
It's also a lot easier to package the application for deployment purposes because there are fewer files involved.
It's true that the DLL (class library) used by the pages is bigger, but this does not make the pages any bigger. ASP.NET generates the content that needs to be sent down to the client (the browser). There is no more content being sent to the client than what is needed for the page to work. I do not see how the class library helping to serve these pages will have any effect on the size of data being sent between the client and server.
However, Rjlopes has a point: it might be true that the browser is not able to cache embedded JavaScript/CSS resources. I'll have to check it out, but I suspect that Rjlopes is correct: the JavaScript/CSS files will have to be downloaded each time a full-page postback is made to the server. If this proves to be true, this performance hit should be a factor in your decision.
I still haven't been able to test the performance differences between using embedded resources, resex, and single files because I've been busy with my own endeavors. Hopefully I'll get to it later today because I am very curious about this and the browser caching point Rjlopes has raised.
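For reference, the standard mechanism under discussion is the WebResource attribute plus the WebResource.axd handler. A minimal sketch - the resource and class names are made up:

    using System.Web.UI;

    // In the class library: the file's Build Action is Embedded Resource, and
    // this attribute makes it servable; the name is default namespace + path.
    [assembly: WebResource("MyCompany.Web.Scripts.app.js", "text/javascript")]

    // In a control (or page) that needs it: ask for the WebResource.axd URL
    // and register a script include pointing at it.
    public class ScriptHost : Control
    {
        protected override void OnPreRender(System.EventArgs e)
        {
            base.OnPreRender(e);
            string url = Page.ClientScript.GetWebResourceUrl(
                typeof(ScriptHost), "MyCompany.Web.Scripts.app.js");
            Page.ClientScript.RegisterClientScriptInclude("appJs", url);
        }
    }

WebResource.axd serves these with cache headers, which is consistent with the "304 Not Modified" observation below.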
Reason for embedding: Browsers don't download JavaScript files in parallel. You have a locking condition until the file is downloaded.
Reason against embedding: You may not need all of the JavaScript code. So you could be increasing the bandwidth/processing unnecessarily.
Regarding the browser cache, as far as I've noticed, the response for WebResource.axd says "304 Not Modified". So, I guess, they've been taken from the cache.
You know that if somebody wants to tamper with your JS or CSS they just have to open the assembly with Reflector, go to the Resources, and edit what they want (it probably takes a lot more work if the assemblies are signed).
If you embed the JS and CSS in the page you make the page bigger (more KB to download on each request) and the browser can't cache the JS and CSS for subsequent requests. The good news is that you have fewer requests (at least 2 fewer if you are like me and combine multiple JS and CSS files into one), plus JavaScript files have the problem of being downloaded serially.

Programmatically set name of file to upload in webpage

Is there a way to programmatically set the name of a file to be uploaded from a web page? I suspect that browser security restrictions make this impossible, but I'm hoping someone will prove me wrong.
I have a web application that needs to let the administrator upload HTML. The admin selects the HTML file, then the app uploads that file, plus figures out all the supporting files (images, stylesheet, etc) and uploads them too. There doesn't seem to be a way to programmatically upload the supporting files from a web page, since the user has to specify each file explicitly.
Currently I have a separate Windows app to do this, but it would be ideal to have this functionality integrated with the rest of the app. My back end is ASP.NET with C#.
There is no way to programmatically grab files from a user's computer via the browser. This would be a security violation if a website could just grab things.
Yes, you can (in modern browsers)...
You can get and set the value of HTMLInputElement.files.
See this answer.
No, you cannot do this without a client-side application or special plug-in.
Browser security doesn't allow the server to obtain information about the hard drive contents of the client.
You may be able to do this using some form of browser plug-in. This is more work for you (and there are potential security implications beyond those found when you just have users run your app). However, it may provide a more integrated experience for your users. I'd hesitate to eliminate the application completely, though. Browser compatibility issues are common.

Should you code ASP.NET sites so they can live in a VDIR?

Back in the days before the Visual Studio web server, we would host our local dev sites inside IIS. If you have a workstation version of IIS, that means only one website. What if you are working on several websites? Easy: create them in VDIRs, e.g. http://localhost/ProjectA, http://localhost/ProjectB.
Living in a VDIR doesn't sound so hard. Make sure all your images/CSS/links are relative paths, and use the "~" a lot. Sounds like good practice. Hardcoding images etc. so they only work when the application is served from "/" sounds like bad practice.
There are some nuances anywhere you have to build a link (mostly not common scenarios):
e.g. PROD: http://prodserver.com/images/up.jpg -> DEV: http://localhost/ProjectA/images/up.jpg
links / images in emails
Flash/JavaScript/Silverlight requests for data from the server where the response contains links to images
the full link to a PayPal IPN page (the page PayPal POSTs its response to)
So... are you doing this? Advantages / disadvantages? Any other gotchas I've missed?
I always avoid hardcoded paths, URLs, etc. unless there's a specific reason to do otherwise. Things inevitably change, and there's always the jump from your dev site to production.
The part that is usually the biggest nuisance is reusable client behaviors that need to reference other paths, and that can themselves be reused in pages across the application's directory structure.
I like the idea of a handler that responds to "globalvars.ashx" (or something like that; there are many ways to handle this) which dynamically emits (and allows caching of) the application's global properties.
Say the handler responsible for globalvars.ashx writes the result of something like this:
String.Format("var ApplicationProperties = {{ RootPath:\"{0}\" }};", Request.ApplicationPath);
Your JS behaviors could theoretically reference that property object at any point via ApplicationProperties.RootPath.
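A minimal sketch of such a handler (the class and file names are illustrative):

    using System;
    using System.Web;

    // Possible backing for "globalvars.ashx": emits a script defining
    // ApplicationProperties, so client code never hardcodes the VDIR.
    public class GlobalVarsHandler : IHttpHandler
    {
        public void ProcessRequest(HttpContext context)
        {
            context.Response.ContentType = "text/javascript";
            context.Response.Write(String.Format(
                "var ApplicationProperties = {{ RootPath:\"{0}\" }};",
                context.Request.ApplicationPath));
            // Safe to cache: the application path doesn't vary per request.
            context.Response.Cache.SetCacheability(HttpCacheability.Public);
        }

        public bool IsReusable { get { return true; } }
    }

The page then pulls it in with a script tag pointing at globalvars.ashx before any behavior that reads ApplicationProperties.RootPath.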
In short, yes. The disadvantages of not doing so outweigh the benefits. I actually think your first two points can also be mostly mitigated by using app-relative paths ("~"), but nevertheless, some scenarios such as "integration-level" ones (like PayPal) may indeed prove tricky.
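Concretely, an app-relative path resolves against wherever the application is rooted; a quick sketch:

    using System.Web;

    static class PathDemo
    {
        // Yields "/images/up.jpg" when the app is the site root, and
        // "/ProjectA/images/up.jpg" when it lives in a VDIR.
        public static string UpArrowUrl()
        {
            return VirtualPathUtility.ToAbsolute("~/images/up.jpg");
        }
    }

(Inside a page or control, ResolveUrl("~/images/up.jpg") does the same job. The tricky cases from the question - emails and the PayPal IPN link - still need the host name prepended, which is why they remain gotchas.)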
But at the end of the day, if you need to host your app in a virtual directory, you are virtually guaranteed problems if you haven't coded your app to be vdir-friendly from the get-go. I know I have.
Some background/context: My current production environment at work is almost always a virtual directory anyway, so I do this by necessity. And I've never had a problem when an application was created as a root-level website. This certainly wouldn't be the case if it was the other way around.
