We have an existing ASP.NET 3.5 application. We carried out a security audit and found that the application is vulnerable to XSS attacks.
As it is an existing application with more than 100 modules, is there an easy way to fix this with some configuration (minimal effort, rather than going to every page and encoding the output)?
We also have a few pages where we need to allow an HTML editor as well.
Please suggest an easy and quick way to resolve this.
As it is an existing application with more than 100 modules, is there an easy way to fix this with some configuration (minimal effort, rather than going to every page and encoding the output)?
There are hacks, like the built-in ASP.NET “Request Validation” or the “Security Runtime Engine” in the Microsoft Anti-XSS library, which attempt to filter out what they think might be an attack on the way into your application.
These are generally a very poor solution in the long term, because they can't address all possible attacks and they will break some input that isn't an attack. But they might be a useful temporary workaround until you can fix the app properly - escaping everywhere it puts strings into a containing context - and set suitable development standards and controls to stop more context-naïve templating getting into the codebase.
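By way of illustration, the proper fix is simply to encode at each point of output. A minimal sketch for a Web Forms code-behind (the page and control names here are invented for the example):

    using System;
    using System.Web;
    using System.Web.UI;

    // Hypothetical page and control names, purely for illustration.
    public partial class ProfilePage : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            string userSuppliedName = Request.QueryString["name"] ?? string.Empty;

            // Encode for the HTML body context before writing it into the page.
            NameLiteral.Text = HttpUtility.HtmlEncode(userSuppliedName);

            // For other containing contexts (attributes, URLs, JavaScript),
            // use the matching encoder, e.g. HttpUtility.HtmlAttributeEncode
            // or HttpUtility.UrlEncode, rather than reusing HtmlEncode everywhere.
        }
    }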
We also have a few pages where we need to allow an HTML editor as well.
You will need an HTML sanitiser library to filter out all but a whitelist of acceptable tags and attributes. Here is one.
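As a rough sketch of what the sanitising step looks like (this one uses the Sanitizer class from the Microsoft AntiXSS library; substitute whichever sanitiser you pick):

    using Microsoft.Security.Application;

    public static class HtmlInput
    {
        // Reduce submitted rich-text to a whitelisted HTML fragment before storing it.
        public static string Clean(string editorHtml)
        {
            if (string.IsNullOrEmpty(editorHtml))
            {
                return string.Empty;
            }

            // GetSafeHtmlFragment strips scripts, event handlers and other
            // non-whitelisted markup while keeping basic formatting tags.
            return Sanitizer.GetSafeHtmlFragment(editorHtml);
        }
    }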
I just began working with ASP.NET and I'm trying to bring with me some coding standards I find healthy. Among those standards are multilingual support and the use of resources for easily handling future changes.
Back when I coded desktop applications, every text had to be translated, so it was common practice to have language files for every language I wanted to offer to customers. In those files I would map every single text, from button labels to error messages. In ASP.NET, with the help of Visual Studio, I can use the IDE to generate such resource files (from Tools -> Generate Local Resource), but then I would have to fill my web pages with labels - at least that is what I've learned from articles and tutorials. However, such an approach looks a bit odd and I'm tempted to guess it doesn't smell that good either. Now to the questions:
Should I keep every single text in my website as a label and manage its content in the resource files? It looks/feels odd, especially for a text with several paragraphs.
Whenever I add or remove something (e.g. a button) in an .aspx file, I would have to add it to the resource file as well, because generating the resource file again would simply override all my previous changes to it. That doesn't feel like reusable code at all to me.
Perhaps I got it all wrong from the tutorials, as this doesn't seem like a standardized matter - especially if it requires recompiling the entire application whenever some change has to be made.
Best practices for ASP.NET Web Forms localization have not really changed much over the years. If you don't have much dynamic content then you can get away with implicit localization and bind web forms controls (form elements and yes, even labels) to resource files. Explicit localization is useful if you want a bit more control over where localized text is rendered in a control with multiple captions or something you've created yourself. You don't need to look very far for instructional steps from MS on how to do either of these.
Walkthrough: Using Resources for Localization with ASP.NET
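For what it's worth, explicit lookups from code-behind come down to the built-in resource methods. A small sketch (the control and resource names are invented):

    using System;
    using System.Web.UI;

    public partial class ContactPage : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // Local resources sit next to the page in App_LocalResources
            // (e.g. ContactPage.aspx.resx and its culture-specific variants).
            TitleLabel.Text = (string)GetLocalResourceObject("PageTitle");

            // Global resources sit in App_GlobalResources
            // (here a hypothetical Messages.resx).
            WelcomeLabel.Text = (string)GetGlobalResourceObject("Messages", "Welcome");
        }
    }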
If your localization requirements are more dynamic - for example, you want to easily provision new languages, centralize resources, or provision new string captions along a new dimension (like per client) - then you need to get a bit more creative. .NET allows you to extend the resource provider, and you can implement a database backend that allows for easy administration of localized resources.
Extending the ASP.NET 2.0 Resource-Provider Model, Building a Database Resource Provider
Extending Resource-Provider for storing resources in the database (a more recent implementation)
Or you could just roll your own!
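If you do go down that road, the core of it is a ResourceProviderFactory plus an IResourceProvider that reads from your own store. A bare-bones sketch (TranslationStore.LookUpInDatabase is a stand-in for whatever data access you already have):

    using System.Globalization;
    using System.Resources;
    using System.Web.Compilation;

    public class DbResourceProviderFactory : ResourceProviderFactory
    {
        public override IResourceProvider CreateGlobalResourceProvider(string classKey)
        {
            return new DbResourceProvider(classKey);
        }

        public override IResourceProvider CreateLocalResourceProvider(string virtualPath)
        {
            return new DbResourceProvider(virtualPath);
        }
    }

    public class DbResourceProvider : IResourceProvider
    {
        private readonly string _resourceSet;

        public DbResourceProvider(string resourceSet)
        {
            _resourceSet = resourceSet;
        }

        public object GetObject(string resourceKey, CultureInfo culture)
        {
            // Hypothetical data-access helper: fetches the translated string
            // for (resource set, key, culture) from the database.
            return TranslationStore.LookUpInDatabase(_resourceSet, resourceKey, culture);
        }

        // Only needed by implicit localization/designer tooling; explicit lookups never call it.
        public IResourceReader ResourceReader
        {
            get { throw new System.NotSupportedException(); }
        }
    }

You then point ASP.NET at the factory via the resourceProviderFactoryType attribute of the globalization element in web.config.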
I've also dug up a duplicate SO post. It's a few years old, but speaking from experience I believe the advice found on the referenced CodeProject page is still true (for Web Forms): Globalization and localization demystified in ASP.NET 2.0
I hope that helps! If you have any more specific questions regarding localization, please add them to your question or the comments.
We're currently evaluating development with Sitecore 6 for a project. The client already bought it, so using another CMS isn't an option. The proposed setup would have Sitecore as our site's content data provider, which would be consumed by a site built in ASP.NET MVC 3. We'd use Sitecore's libraries to retrieve data from the Sitecore database on the server side.
In some cases we may also want to consume content data via client-side AJAX calls. I've been working on prototypes for this to see what data I can get back from a custom proxy service. This service calls GetOuterXml on the item, converts the XML to JSON, and sends the JSON back to the calling script. So far I'm finding this method limiting, as it appears GetOuterXml only returns fields and values that were set on the specific item, ignoring, for example, the template's standard values fields and their defaults. I tried Item.Fields.ReadAll(), but it still wouldn't return the standard values. Also, there are circular references in the Item graph (item.Fields[0].Item.Fields[0]...), which has made serialization quite difficult without writing something totally custom.
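For context, the flattening I've been experimenting with to dodge the circular references looks roughly like this (a simplified sketch of the prototype):

    using System.Collections.Generic;
    using System.Web.Script.Serialization;
    using Sitecore.Data.Fields;
    using Sitecore.Data.Items;

    public static class ItemJson
    {
        // Project an Item's fields into a flat dictionary so the serializer
        // never walks back into the Item graph (avoiding the circular references).
        public static string ToJson(Item item)
        {
            item.Fields.ReadAll(); // still doesn't pull in standard values, as noted above

            var fields = new Dictionary<string, string>();
            foreach (Field field in item.Fields)
            {
                fields[field.Name] = field.Value;
            }

            var payload = new Dictionary<string, object>
            {
                { "id", item.ID.ToString() },
                { "name", item.Name },
                { "fields", fields }
            };

            return new JavaScriptSerializer().Serialize(payload);
        }
    }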
Needless to say, I've been running into many roadblocks on my path down this particular road and am definitely leaning toward doing things the Sitecore way. However, my team really wants to use MVC for this project, so before I push back on this, I feel it's my responsibility to do some due diligence and reach out to the community to see if anyone else has tried this.
So my question is: as a Sitecore developer, have you ever used Sitecore purely as a content data provider on the client side and/or server side? If you have, did you encounter similar issues, and were you able to resolve them? I know that by using Sitecore in this way you lose a lot of features, such as content routing/aliasing, OMS, and the rendering and layout engine, among others. I'm not saying we're definitely going down this path; we're just at the R&D phase of using Sitecore and determining how it would best be utilized by our team and our development practices. Any constructive input is greatly appreciated.
Cheers,
Frank
I don't have experience with trying to use Sitecore solely as a data provider, but my first reaction to what you're suggesting is DON'T!
Sitecore offers extremely rich functionality which is directly integrated into ASP.NET and configured from within the Sitecore UI. Stripping that off and rebuilding it in MVC is not so much reinventing the wheel as reinventing the car.
I think that in 6.4 you can use some MVC alongside Sitecore, so you may be able to provide a sop to your colleagues with that.
I'm thinking of developing the following but wondering if it already exists out there:
I need a SQL based solution for assigning and managing localization text values for an asp.net site instead of using RESX files. This helps maintain text on the site without having to take it down for deployment whenever an update is required.
Thanks.
We actually went down that path and ended up with a really, really slow web site - ripping out the SQL-based translation mechanism and using ASP.NET resources gave us a significant performance boost. So I can't really recommend you do the same thing... (and yes, we were caching and optimizing for throughput and everything - the SQL-based stuff was still significantly slower).
You get what you pay for: the SQL-based approach was more flexible in terms of being able to "translate" on the fly and fix typos and such. But in the end, in our app (Web Forms, .NET 2.0 at the time), using resources proved to be the only viable way to go.
We did this (SQL-based translation) and we are really happy with the result! We developed an interface for translation agencies to perform updates to the pages online. As a side effect, the solution started to serve as a content management system. If you cache your data, performance is not an issue. The downside is that we invested multiple hundreds of hours into our solution (I would guess something around 600 hours, but I could check).
We ended up with a hybrid solution where users could edit content in a database, but the application then created a .resx which was deployed manually.
You could also bypass server-side translation altogether and do the translation in jQuery on the client, which is an approach I have used successfully.
I'm not sure about the website restart, but using .NET MVC is at least very convenient and I haven't noticed that restart problem - and if it does occur, how often do you really need to update the .resx files? For bigger projects I usually create a solution with multiple projects, one of them for localization, something like this:
MyApp.Localization
    Model
    Page
    File1.resx
MyApp.Core
MyApp.Web
Then in the Web project I add a reference to the Localization project and use it like:
#MyApp.Localization.Model.Customer.CustomerName
#MyApp.Localization.Page.About.PageTitle
#MyApp.Localization.File1.Paragraph1
Every time I change the translated text, I either upload an updated .dll or copy over the .resx files.
NOTE: You need to set your .resx files' access modifier to Public so they can be accessed as strongly typed resources.
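Consuming them from the Web project is then just normal strongly-typed access (the namespaces below follow my example folders; yours will depend on the project's default namespace):

    // Anywhere in MyApp.Web, after referencing MyApp.Localization and
    // setting each .resx's code generator to public (per the note above).
    public static class LabelExamples
    {
        public static string AboutTitle()
        {
            return MyApp.Localization.Page.About.PageTitle;
        }

        public static string FirstParagraph()
        {
            return MyApp.Localization.File1.Paragraph1;
        }
    }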
I created a SQL-based translation scheme, but I only load the translations needed for a given page when it is requested - just the ones for that particular page.
Those get loaded into a dictionary object when the page loads and are cached for the session. Lookups are then just text replacements against that dictionary.
Pretty much all of it is dynamically generated, and it includes user-defined content that must be translated, so flexibility is key.
Performance is quite good - the SQL queries to retrieve all the data take much longer (relatively speaking).
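In outline it works something like this (the repository call and key names are placeholders for our actual data access):

    using System.Collections.Generic;
    using System.Web;

    public static class PageTranslations
    {
        // Fetch only the current page's strings, once per session, and keep them
        // in Session so later lookups are just dictionary hits.
        public static string Lookup(string pageKey, string textKey, string language)
        {
            string cacheKey = "xlat:" + language + ":" + pageKey;
            var map = HttpContext.Current.Session[cacheKey] as Dictionary<string, string>;

            if (map == null)
            {
                // Hypothetical data-access helper, roughly: SELECT TextKey, TextValue
                // FROM Translations WHERE PageKey = @page AND Language = @lang
                map = TranslationRepository.LoadForPage(pageKey, language);
                HttpContext.Current.Session[cacheKey] = map;
            }

            string value;
            return map.TryGetValue(textKey, out value) ? value : textKey;
        }
    }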
I am about to write a tender. The solution might be a PHP based CMS. Later I might want to integrate an ASP.NET framework and make it look like one site.
What features would make this relatively easy?
Would OpenId and similar make a difference?
In the PHP world, Joomla is supposed to be more integration-friendly than Drupal. What are the important differences here?
Are there specific frameworks in ASP.NET, Python, or Ruby that are more open to integration than others?
The most important thing is going to be putting as much of the look and feel as possible into a format that can be shared by all the platforms. That means you should develop a standard set of CSS and (X)HTML files which can be imported (or directly presented) by any of those platform options. Think of it as writing a dynamic library that can be loaded by different programs.
Using OpenID for authentication, if all of your platform options support it, would be nice, but remember that each platform is going to require additional user metadata be stored for each user (preferences, last login, permissions/roles, etc) which you'll still have to wrangle between them. OpenID only solves the authentication problem, not the authorization or preferences problems.
Lastly, since there are so many options, I would stick to cross-platform solutions. That will leave you the most options going forward. There's no compelling advantage IMHO to using ASP.NET if there's a chance you may one day integrate with other systems or move to another system.
I think the most important thing is to choose the right server. The server needs to have adequate modules available. Apache would be a good choice, as it supports everything you want, including mod_aspnet (which I didn't test, but many people say it works).
If you think ASP.NET integration is certainly going to come, I would choose Windows as the OS, as it will certainly make things easier.
You could also install a reverse proxy that decides which server renders the content based on the request - if the user requests an .aspx page, the proxy connects to IIS on the Windows box; if it asks for PHP, it can connect to the other server. The problem with this approach is shared memory and state, which could be solved with careful design - for example, a shared database holding all the state information and model data.
OpenID doesn't make a difference - there are libs for any framework you choose.
We have a legacy ASP.NET-powered site running on an IIS server; the site was developed by a central team and is used by multiple customers. Each customer, however, has their own copy of the site's .aspx files plus a web.config file. This is causing problems, as changes made by well-meaning support engineers to the copies of the source .aspx files are not being folded back into the central source, so our code base is diverging. Our current folder structure looks something like:
OurApp/Source aspx & default web.config
Customer1/Source aspx & web.config
Customer2/Source aspx & web.config
Customer3/Source aspx & web.config
Customer4/Source aspx & web.config
...
This is something I'd like to change so that each customer has just a customised web.config file and all the customers share a common set of source files. So something like:
OurApp/Source aspx & default web.config
Customer1/web.config
Customer2/web.config
Customer3/web.config
Customer4/web.config
...
So my question is, how do I set this up? I'm new to ASP.NET and IIS, as I usually use PHP and Apache at home, but we use ASP.NET and IIS here at work.
Source control is used and I intend to retrain the support engineers, but is there any way to avoid having multiple copies of the source .aspx files? I hate that sort of duplication!
If you're dead-set on the single app instance, you can accomplish what you're after using a custom ConfigurationSection in your single web.config. For the basics, see:
http://haacked.com/archive/2007/03/12/custom-configuration-sections-in-3-easy-steps.aspx
http://msdn.microsoft.com/en-us/library/2tw134k3.aspx
Example XML might be:
<YourCustomConfigSection>
    <Customers>
        <Customer Name="Customer1" SomeSetting="A" Another="1" />
        <Customer Name="Customer2" SomeSetting="B" Another="2" />
        <Customer Name="Customer3" SomeSetting="C" Another="3" />
    </Customers>
</YourCustomConfigSection>
Now, in your ConfigSection properties, expose Name, SomeSetting, and Another. When a property is accessed or set, use a condition (the request domain or something else that uniquely identifies the customer) to decide which Customer element to use.
With the proper implementation, the app developers don't need to be aware of what's going on behind the scenes. They just use CustomSettings.Settings.SomeSetting and don't worry about which Customer is accessing the app.
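A minimal sketch of the backing classes for that XML (naming follows the example above; adjust to suit your conventions):

    using System.Configuration;

    public class YourCustomConfigSection : ConfigurationSection
    {
        [ConfigurationProperty("Customers")]
        public CustomerCollection Customers
        {
            get { return (CustomerCollection)this["Customers"]; }
        }
    }

    [ConfigurationCollection(typeof(CustomerElement), AddItemName = "Customer")]
    public class CustomerCollection : ConfigurationElementCollection
    {
        protected override ConfigurationElement CreateNewElement()
        {
            return new CustomerElement();
        }

        protected override object GetElementKey(ConfigurationElement element)
        {
            return ((CustomerElement)element).Name;
        }
    }

    public class CustomerElement : ConfigurationElement
    {
        [ConfigurationProperty("Name", IsKey = true, IsRequired = true)]
        public string Name
        {
            get { return (string)this["Name"]; }
        }

        [ConfigurationProperty("SomeSetting")]
        public string SomeSetting
        {
            get { return (string)this["SomeSetting"]; }
        }

        [ConfigurationProperty("Another")]
        public string Another
        {
            get { return (string)this["Another"]; }
        }
    }

Register the section under configSections in web.config, and have the helper that picks the right CustomerElement key off the request domain or whatever else identifies the customer.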
I know it might seem annoying, but the duplication is actually a good thing. The problem here is with your process, not with the way the systems are set up.
Keeping the sites separate is actually a good thing. Whilst it looks like "duplication", it's actually not - it's separation. Changes made to production code by your support engineers should be actively discouraged.
You should be looking at changing your process to change once, deploy everywhere. This will make everything a lot easier for you in the long run.
To actually answer your question: no, you can't do it. The reason is that web.config isn't designed to store per-customer settings; it's designed to store per-application-instance settings. In your case, you need an application instance per customer, which means separate config files.
For your system to work, you need to be able to preemptively tell the application which config file to use, which isn't possible without some sort of input from the user.
Use an external source control application and keep rolling out updates as required.
It isn't really a good idea to let your live site be updated by support engineers in real time anyway.
Depending on what is actually in the web.config, and which settings differ between customers, you could opt to use a single web.config and store the other customer-specific configuration options in a database or some other custom XML/text file. As long as the customer-specific settings in web.config don't have anything to do with how IIS operates, and you are just using it to store values, this solution might work out well for you.
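A rough sketch of that idea, keyed off the request's host name (the store class here is hypothetical; it could sit over a database table or a per-customer XML file):

    using System.Collections.Specialized;
    using System.Web;

    public static class CustomerSettings
    {
        // Look up the current customer's settings by the host name of the request,
        // leaving web.config holding only the settings IIS/ASP.NET actually needs.
        public static string Get(string settingName)
        {
            string host = HttpContext.Current.Request.Url.Host;

            // Hypothetical store: loads a name/value set per customer host,
            // e.g. from a CustomerSettings table or a custom XML file.
            NameValueCollection settings = CustomerSettingsStore.LoadByHost(host);

            return settings[settingName];
        }
    }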
Thank you all again for your answers. After reading through them and having a think, what I will do is leave the multiple instances alone for now and try to improve our update process first. Then I will develop a new version of the application that keeps the client configuration information in the database layer and picks the client based on the request domain or URL, as someone suggested. That way I can have a single application instance cleanly supporting multiple different client configurations.
Most of the client configuration data is really presentation or data-source related - nothing complicated. I think we ended up with multiple application instances mostly because the original programmer hadn't been expecting multiple customers and didn't design for that, so when someone came along later and added a second customer they just duplicated the application, which is wasteful as each instance is about 99.99% identical to the original.
I am implementing this as we speak.
In the main web.config, I have one entry per installation. It points me toward the custom config file I built for each client (and toward the custom master page, CSS, images, etc.).
Using WebConfigurationManager.OpenWebConfiguration, I open the per-client web.configs in their subdirectories. I determine which one to use with System.Web.HttpContext.Current.Request.Url.OriginalString, working out the URL that called me. Based on that URL, I know which web.config to use.
From that point forward the clients all use the same codebase. They have their own databases too.
The idea of having to update 30-40 installations whenever we make an update scares me to death. We do not want to support 30-40 codebases, so there won't be customization beyond the master page, CSS, and images.
I wrote a custom class library that knows how to switch to the proper web.config and read the custom section I built with all our settings.
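The switching itself boils down to something like this (the folder-resolution helper is a stand-in for my mapping in the main web.config):

    using System.Configuration;
    using System.Web;
    using System.Web.Configuration;

    public static class ClientConfig
    {
        // Open the web.config that lives in the client's sub-directory,
        // chosen from the URL the request came in on.
        public static Configuration OpenForCurrentRequest()
        {
            string url = HttpContext.Current.Request.Url.OriginalString;

            // Hypothetical helper: maps the calling URL to the client's
            // sub-directory using the per-installation entries in the main web.config.
            string clientFolder = InstallationMap.ResolveFolder(url); // e.g. "/Customer1"

            return WebConfigurationManager.OpenWebConfiguration(clientFolder);
        }

        public static string GetClientAppSetting(string key)
        {
            Configuration config = OpenForCurrentRequest();
            KeyValueConfigurationElement element = config.AppSettings.Settings[key];
            return element == null ? null : element.Value;
        }
    }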
The only issue I have now is the FormsAuthentication cookie. I need to be able to switch that as well; unfortunately, the property for the name is read-only.
If I understand correctly, it sounds like you have multiple deployments (one for each client) where the only difference is the web.config, right?
First off, although I don't know your unique situation, I would generally urge you to stay with separate installs. It usually allows much more flexibility. Off the top of my head: are you ever going to have customizations, or different clients running different versions? Are you sure? The easiest way to stay flexible here is to keep going with separate installs.
In my opinion, it isn't ugly at all if your practices are aligned properly. Based on some things you mentioned, you have trouble in that area - obviously, possible source control buy-in/training issues. But you are aware of that. I would also take a hard look at your deployment procedures and so on. I have a feeling you might have further issues in that area, and I mean absolutely no offense.
That said, let's say you want to move forward with this.
You didn't say whether all the clients share a single common database, but I'm thinking not, since designing that type of system is often not worth the extra complexity (which can be severe in systems of any size), so people often opt to keep them separate.
What that means is that you have to store your connection string somewhere. Usually that would be web.config... so that seems to break our plan.
Really, the apparent elegance of this setup is almost always wildly offset by the challenges it introduces. If I thought about it hard enough, I could maybe find a way around this - introducing another database that intelligently manages connection strings, or delving into keeping all your login info directly in web.config (which is possible but... not ideal) - however my gut says the work would be wasted, because some day you will end up going back to how you're doing it now.
Also: changing code directly in production is obviously not best practice here. But if you are on a monolithic shared platform with any amount of traffic, that can never, ever happen. Food for thought.
Let me know if I'm missing something!