I'd like to know what the security risks are of running an ASP.NET application under an administrator account.
I might end up doing this, and I'd like to be aware of the known security holes it would open.
I'm connecting to SQL Server using SQL authentication, so excessive privileges for executing queries are off the list.
I am having trouble coming up with a scenario where this would actually make sense -- you can always delegate specific permissions to a named user to get them the specific admin-style rights they need.
As for the question at hand, the direct risk isn't any greater than for any other web application, inasmuch as a web app is a big honking hole through your firewall. The indirect risk is very, very scary. You are trying to turn the clock back to 2000, when IIS 5 was set up to run as Local System, making every single case of "IIS can be made to run arbitrary commands" into "anyone can own your box over port 80."
If you do have to do this, I'd consider putting a firewall behind the server too; that way, when it does get rooted, you've still got some defenses. I'd also use unique service accounts rather than shared ones, and so on.
You should rarely need to run IIS under an administrative account. It's usually a sign of poorly written code. For example, I have seen it done when an app needs to shell out to a batch job or executable and has to run it under an admin context to work (i.e., very poorly coded applications).
You don't want to run any services under an admin context. Not your IIS service, and especially not your database service. Any exploit triggered on your system will take on admin privileges, leading to a complete box compromise. Notice that in newer versions of Windows, both client and server, far fewer services run as either admin or system. That reflects Microsoft's own hard-won lesson that running apps with excessive privileges is a very bad idea.
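If you're not sure what your worker process is actually running as, a quick sanity check is only a few lines. A minimal sketch, meant for a throwaway test page rather than production code:

    using System.Security.Principal;

    // What identity is the worker process using, and is it a member
    // of the local Administrators group?
    WindowsIdentity identity = WindowsIdentity.GetCurrent();
    WindowsPrincipal principal = new WindowsPrincipal(identity);
    bool isAdmin = principal.IsInRole(WindowsBuiltInRole.Administrator);
    Response.Write(identity.Name + " (admin: " + isAdmin + ")");

If that prints an admin account, fix the application pool identity before worrying about anything else.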
The risk is effectively unbounded: a single mistake in your code, or an as-yet-undiscovered IIS/ASP.NET security vulnerability, and you would be giving potential crackers complete, unfettered access to the server.
Compared to running it under the default account(s), where access would be pretty restricted.
There's really no reason to need to do this anymore, especially with the newer Windows servers, ASP.NET versions and IIS.
Our company releases updates to our rich-client application (written mainly with ASP.NET, WCF services, and ASP.NET AJAX) on our client's Windows Server 2008 IIS 7 web server. Once in a while we have large releases of updates, and sometimes there are bugs that users catch right after the release that were not caught during automated testing or stage testing. Is there a way to smoothly deploy ASP.NET code on IIS 7 while users are still on, without disrupting workflows that only touch code that was not affected?

I've found that if I just copy the code from stage (without the web.config) manually and paste it into the production web root folder, nobody is really kicked off. But I'm wondering if there are any side effects of this strategy for users diligently working in the application. Do any other connections get interrupted, and how are they handled in this situation (e.g., SQL connections, WCF service calls, whether users keep the same session and whether that has any impact)?

If I chose this method, I'd have something in the web.config that displays a message to every user (in the master page), like a banner that says "Please log off, and clear your cache", so they would see updates to the issues addressed. But this would only be relevant to the impacted users.
If someone doesn't think this is a good strategy for minor updates and has a better one, like changing a web.config setting that sends users to a different server while the deployment takes place, or some other methodology, my ears are listening. Obviously the latter sounds safer, but I just don't know how it could be done. I've read about load-balanced servers, but I thought that type of setup was for different purposes, like coping with a server going down. Or would it be the best solution here, since you can take one site down at a time? I'm open to any ideas.
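For reference, the banner I have in mind would be wired up roughly like this (just a sketch; the MaintenanceMessage appSetting and the BannerLabel control are names I made up):

    using System;
    using System.Configuration;

    // In the master page's code-behind: show a banner whenever a
    // MaintenanceMessage appSetting is present in web.config.
    protected void Page_Load(object sender, EventArgs e)
    {
        string message = ConfigurationManager.AppSettings["MaintenanceMessage"];
        BannerLabel.Visible = !string.IsNullOrEmpty(message);
        BannerLabel.Text = message;   // e.g. "Please log off and clear your cache"
    }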
I used to stress about minimal-impact releases too, but now we just take the site down. The reality is twofold:
You cannot guarantee that everything someone is working on right now is NOT something you're about to update. Consider this: A user is working on x.aspx and is in the middle of a postback. You drop a new x.aspx.
With enough notice, maintenance windows are a way of life. Users should expect that, from time to time, you need exclusive access to the application to make updates, etc.
It's just too hard to keep all the plates in the air when you really don't know what someone might be working on while you deploy. Especially if database updates are in the mix!
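One mechanism that makes the take-it-down approach cleaner: since ASP.NET 2.0, dropping a file named app_offline.htm into the application root unloads the app domain and serves that page for every request until the file is removed. A rough sketch of a deployment step (the path is made up):

    using System.IO;

    // Take the site offline; ASP.NET serves app_offline.htm for every
    // request while the file exists in the application root.
    string siteRoot = @"C:\inetpub\wwwroot\MyApp";   // hypothetical path
    File.WriteAllText(Path.Combine(siteRoot, "app_offline.htm"),
        "<html><body>Down for maintenance. Back shortly.</body></html>");

    // ... copy the new build into siteRoot ...

    // Bring the site back up.
    File.Delete(Path.Combine(siteRoot, "app_offline.htm"));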
If there is a load balancer in the mix: you would remove the server(s) with the old code and add the server(s) with the new code. This lets the traffic die out on the old-code servers without kicking people off, while the new server(s) pick up the traffic.
We do this with a new release, one server at a time, until we have replaced all of the code in our server farm. It gives the application time to bake in the real world. If issues come up, and they have, you only have to revert a single server. Using the load balancer makes that easy.
It is (usually) a seamless transition. Of course, you need to make sure your app can handle any database changes, etc.
OK, so this is my dilemma... I have an ASP.NET MVC site that, under some conditions, pegs the processor on the IIS boxes it's running on. I don't have access to these servers (it's a farm of about five IIS 6 boxes behind a NetScaler). I am doing some logging to a SQL database, but the problem is that when the CPU pegs, my database starts timing out. The IIS servers are hosted in-house, but I can't get access to them.
And to make things even more complicated, I can't reproduce any of these issues in my QA environment (which I don't have access to either). QA is set up similarly to our prod environment, but it runs on a single box that isn't behind a NetScaler.
So, any thoughts on the best way to try to track down where my issues lie? Thanks!
Since you are already logging to a database, why don't you log to a second database installed on another computer? That way, when your MVC application starts pegging the CPU, the logging database won't be affected, since it is working on another machine.
Or you could log to an FTP folder that you can access.
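Along the same lines, you could make the logging call itself defensive, so a struggling database doesn't take your logging down with it. A rough sketch (the connection string, table and file path are all made up):

    using System;
    using System.Data.SqlClient;
    using System.IO;

    public static class SafeLogger
    {
        // Hypothetical connection string to a logging database on another box.
        private const string LogConnectionString =
            "Server=LOGBOX;Database=Logs;Integrated Security=SSPI;";

        public static void Log(string message)
        {
            try
            {
                using (SqlConnection conn = new SqlConnection(LogConnectionString))
                using (SqlCommand cmd = new SqlCommand(
                    "INSERT INTO LogEntry (LoggedAt, Message) VALUES (GETDATE(), @msg)", conn))
                {
                    cmd.Parameters.AddWithValue("@msg", message);
                    conn.Open();
                    cmd.ExecuteNonQuery();
                }
            }
            catch (Exception)
            {
                // Logging DB is timing out or unreachable: append to a local
                // file instead so the entry isn't lost with the outage.
                File.AppendAllText(@"C:\logs\fallback.log",   // hypothetical path
                    DateTime.Now + " " + message + Environment.NewLine);
            }
        }
    }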
Hope I helped.
Regards.
ASP.NET Trace. Haven't used MVC, but I'm assuming it still works...
http://msdn.microsoft.com/en-us/library/y13fw6we%28VS.71%29.aspx
If you want to know what is going on with the system, you could read from the Event Viewer programmatically:
http://support.microsoft.com/kb/815314
That should help you learn what is going on with the system, and you can build a web interface around it to surface whatever information you want to look at.
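To give a flavour of what the KB article covers, reading the Application log only takes a few lines (a minimal sketch; a real page would filter by source and page through the entries, and the worker process needs permission to read the log):

    using System;
    using System.Diagnostics;

    // Pull the last hour's errors from the Application event log so they
    // can be shown on a diagnostics page instead of via remote Event Viewer.
    EventLog applicationLog = new EventLog("Application");
    foreach (EventLogEntry entry in applicationLog.Entries)
    {
        if (entry.EntryType == EventLogEntryType.Error &&
            entry.TimeGenerated > DateTime.Now.AddHours(-1))
        {
            Response.Write(entry.TimeGenerated + " " + entry.Source +
                ": " + entry.Message + "<br/>");
        }
    }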
I have recently started using a shared host for my clients to see project progress, or to play with a new technology myself. It seems like every time I deploy a new project that runs fine locally, I run into something new on the shared host.
Do you have a shared hosting deployment checklist?
What are the common problems you run into when deploying to a shared host?
Medium Trust. If you are developing code to go onto a shared host, you should set your local application to run in medium trust; otherwise you can almost guarantee you'll hit security issues with code that executes fine in full trust but dies in a medium-trust environment.
This MSDN article explains about medium trust in more detail:
http://msdn.microsoft.com/en-us/library/ms998341.aspx
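Matching the host's restrictions locally is a one-line change in web.config:

    <!-- Run locally under the same restrictions most shared hosts impose -->
    <system.web>
      <trust level="Medium" />
    </system.web>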
They don't always offer you direct access to the database (Enterprise Manager / Management Studio).
You end up using some weird web GUI for creating database entities, one that doesn't accept otherwise-valid SQL syntax, so you have to update all your queries and stored procedures to accommodate its custom changes and restrictions.
One of mine is file I/O permission problems, for example writing to a file on the web server from ASP.NET. You have to use the host's online tool to grant anything beyond read access.
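It's worth coding for defensively, since the failure shows up as a runtime exception rather than at deploy time. Something like this (a sketch that assumes it runs inside a page, for Server.MapPath; the path is just an example):

    using System;
    using System.IO;
    using System.Security;

    string path = Server.MapPath("~/App_Data/output.txt");
    try
    {
        File.AppendAllText(path, DateTime.Now + ": saved" + Environment.NewLine);
    }
    catch (UnauthorizedAccessException)
    {
        // NTFS permissions deny the write for the worker process account.
    }
    catch (SecurityException)
    {
        // Partial trust (e.g. Medium) denies FileIOPermission for this path.
    }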
No preview site.
That is, a host based path to your web application without actually pointing the DNS to it.
Example:
http://www234.your-shared-host.com/preview/user/bla/default.aspx (don't try it, it's just an example..)
Inconvenient cancellation procedure
In some of the shared hosts I used, I found that to cancel I had to make a phone call. Nothing over the web, not even an email.
I bet the host thinks most people won't bother calling until it's really needed. They're right.
I recently joined a firm and when analyzing their environment I noticed that the SharePoint web.config had the trust level set to Full. I know this is an absolutely terrible practice and was hoping the stackoverflow community could help me outline the flaws in this decision.
Oh, it appears this decision was made to allow the developers to deploy dlls to the Bin folder without creating CAS policies. Sigh.
Just to clarify, and to make matters worse: we are also deploying third-party code to this web application.
Todd,
The book, "Programming Microsoft ASP.Net 3.5", by Dino Espisito provides some sound reasoning for not allowing Full Trust in ASP.Net applications.
Among other reasons, Dino states that web applications exposed to the internet are "one of the most hostile environments for computer security you can imagine." And:
"a publicly exposed fully trusted application is a potential platform for hackers to launch attacks. The less an application is trusted, the more secure that application happens to be."
I'm surprised the StackOverflow community did not outline the problem with Full Trust better. I was hoping for the same thing so I didn't have to go digging through my pile of books to find the answer, lazy me.
If they're ignoring the CAS policies, it might be a tough sell to get them to dial it back, since it makes their job a little harder (or, at least, a little less forgiving). Changing security practices is always tough - like when I had to convince my boss that using the sa account in the SQL connection strings of our web applications was a bad idea - but hang in there.
Full Trust allows the application to escalate to control of any resource on the computer. You'd have to have a security flaw in your application for that to happen, and they'll probably claim they've prevented any escalation through astute programming, but remind them: if something does happen, wouldn't they rather the web application didn't have control of the whole computer? I mean, just in case?
EDIT: I was a little overzealous with my language here. Full Trust would allow the application to control whatever it wants, but only if the application pool process has sufficient rights to do it. So if you're running as a limited user with no rights on the server beyond what the application needs, then I suppose there's essentially no risk to "Full Trust". The reality is that the app pool owner most likely has a number of rights you wouldn't want your app to have (and in some cases many, many more), so it's much safer to limit app security and grant the additional rights the application needs individually. Thanks for the correction, Barry.
Flaws? Many. But the most damning thing is straight out of the CAS utility:
"...it allows full access to your computer's resources such as the file system or network access, potentially operating outside the control of the security system."
That means, code granted Full Trust can execute any other piece of code (managed or otherwise) on the system, can call across the network to any machine, can do anything in the file system (including changing permissions on restricted files - even OS files).
Most web programmers would say "that's not a problem, it's just my code," which is fine... until a security flaw crops up in their code that allows an attacker to use it to do unsavoury things. Then previously-granted Full Trust becomes quite unfortunate.
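To make that concrete, here's a small illustration. Under Full Trust the read below succeeds; under Medium Trust the CLR refuses with a SecurityException before the file is ever touched, whether the caller is your code or an attacker's:

    using System.IO;
    using System.Security;

    try
    {
        // Outside the application root: allowed under Full Trust,
        // denied under Medium Trust.
        string hosts = File.ReadAllText(@"C:\Windows\System32\drivers\etc\hosts");
    }
    catch (SecurityException)
    {
        // Medium Trust limits FileIOPermission to the application's own tree.
    }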
I honestly have found SharePoint to be too restrictive.
Take a look at the following page to see what can and cannot be done based on trust levels:
http://msdn.microsoft.com/en-us/library/ms916855.aspx
One problem I ran into immediately was that I could not use the Caching Application Block. We were using this application block instead of ASP.NET caching because we had used an MVP pattern and might open up a WinForms application.
Another problem is no reflection; this caused the About page to fail, because the version number is pulled from the assembly's metadata.
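The offending call was nothing exotic; presumably the usual way of pulling a version string, something like:

    using System.Reflection;

    // Read the version for the About page from assembly metadata; under a
    // restrictive trust level this can throw a SecurityException.
    string version = Assembly.GetExecutingAssembly().GetName().Version.ToString();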
I think the best solution is not to use SharePoint as an application host. I would only use SharePoint as an application host if the amount of coding were so small that it didn't affect the trust level and it would be less work than setting up a new application. If the coding you're doing is starting to hit the walls of the trust level, move your application into a proper ASP.NET environment. But that is just me, and I am biased. Maybe you should aim for a Medium trust level as a compromise.
I use full trust on my development machines, so I can deploy to the BIN folder when building new code.
I trust my own code, and I run it from the GAC in production because creating CAS policies is a pain.
The third-party thing would have me worried, however:
Most third-party solutions found on the web also deploy to the GAC (presumably for the same reasons). This gives them full rights regardless of trust level.
It feels like it comes down to whether you trust the third parties - and do you really trust your own developers?
What would a hacker do?
The scenario where a hacker drops an evil DLL into your BIN folder doesn't strike me as very realistic; besides, if he can do that, he can probably change the trust level too.
I would appreciate any advice regarding tools and practices I could use to confirm my recently completed website is performing correctly.
Although I am confident the code is not producing errors and is functionally operating as it should, I have little understanding of how to identify IIS, SQL Server and Windows performance/concurrency issues. For example, if the website were briefly hit by a huge deluge of traffic, how would I know that the event had ever happened, and how would I know whether the website coped with it?
The website was written using ASP.NET 2.0 and C# running on Windows 2003 R2 Standard Edition, SQL Server 2005 Workgroup Edition and IIS 6.
Consider using a logging mechanism that also raises alerts, so that when a database call takes too long (indicating high server load), the logger raises a warning. Check out log4net.
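A rough sketch of the idea with log4net (the repository class, method and threshold are all made up; a WARN entry can then be routed to email via an appender such as log4net's SmtpAppender):

    using System.Diagnostics;
    using log4net;

    public class OrderRepository   // hypothetical data-access class
    {
        private static readonly ILog Log =
            LogManager.GetLogger(typeof(OrderRepository));

        public void SaveOrder(/* ... */)
        {
            Stopwatch timer = Stopwatch.StartNew();
            // ... the actual database call goes here ...
            timer.Stop();

            if (timer.ElapsedMilliseconds > 2000)   // threshold is arbitrary
            {
                Log.WarnFormat("SaveOrder took {0} ms - possible server load issue",
                    timer.ElapsedMilliseconds);
            }
        }
    }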
Regarding tools and practices, I recommend Badboy and JMeter for load-testing your site. Badboy is simple and can generate URLs that can also be used in JMeter. The latter does a very good job of load-testing your site. Run tests over a long period, and use different hardware setups to see how adding more web/app servers affects performance.
Also, check out PerfMon, a tool that lets you monitor a local or remote Windows server for contention rates, CPU load and so on.
You can use a load-generating tool like WebLoad to capture and then replay (with possible variations through scripting) user interactions with your application's UI, using lots of threads and connections.
As mentioned, load-generation tools are quite helpful. One thing you can add on the database side is SQL tracing. Set up a test plan with very specific steps, and as you step through your plan, trace the SQL that runs on the server.
This way, you can identify if certain actions are causing unnecessary/duplicate database calls. Also, you may discover very large and non-performant queries being run for very simple actions.
For SQL Server, use the sys.dm_exec_requests DMV and check for CPU usage, reads, writes, blocking and so on:
    -- Current requests: who is blocking whom and what each request is
    -- waiting on; the remaining columns include cpu_time, reads and writes.
    SELECT blocking_session_id, wait_type, *
    FROM sys.dm_exec_requests;