Securing large static files without hogging ASP.NET threads using IIS

While my ASP.NET code is streaming a large file, is it tying down a thread completely? In other words, if 8 people are downloading large files and I only have 8 threads available, will no further requests be processed?
In any case, I need to find an alternative way of securing large static files, preferably by letting IIS serve them directly after the user has been authorized, in order to free the application server from having to deal with something that IIS, Nginx, etc. can do better without hitting any managed code.
I believe Nginx allows this if your app puts the "X-Accel-Redirect" header in its response: http://kovyrin.net/2006/11/01/nginx-x-accel-redirect-php-rails/.
Apache and Lighttpd have the same feature.
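For reference, the nginx side of that mechanism looks roughly like this (a sketch; the paths are hypothetical):

location /protected/ {
    internal;               # nginx refuses direct client requests for this prefix
    alias /var/www/files/;  # X-Accel-Redirect: /protected/report.zip -> /var/www/files/report.zip
}

The application authenticates the user, then replies with an empty response carrying the header "X-Accel-Redirect: /protected/report.zip", and nginx streams the file itself.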
Any advice?

Returning the URL of the file is an appropriate solution.
You can prevent unauthorised users who have that URL from downloading the file by using the standard authentication providers in ASP.NET. If you turn runAllManagedModulesForAllRequests on (see http://www.iis.net/configreference/system.webserver/modules), the user's authentication will be verified when they hit the URL; if they are authorised, they will be allowed to access the file.
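For example, a minimal web.config sketch:

<configuration>
  <system.webServer>
    <!-- Routes every request (static files included) through the managed
         pipeline, so the ASP.NET authentication modules run first. -->
    <modules runAllManagedModulesForAllRequests="true" />
  </system.webServer>
</configuration>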
In either case, downloading doesn't lock threads, just execution. This is why the maxConnections setting has a default value of 4294967295. (See http://www.iis.net/configreference/system.applicationhost/sites/site/limits)


WordPress logging requests into a database

I am trying to create a plugin which logs HTTP requests from users into a database. So far I've logged the requests for PHP files by hooking my function to the init action. But now I want to know whether I can also log requests for files such as images, documents, etc. Is any PHP code executed when someone requests such files? Thank you.
Not by default, no. The normal mod_rewrite rules WordPress uses (not to be confused with WP's own rewrite rules) specifically exclude any existing files such as images, CSS, or JavaScript files. Those are handled directly by Apache.
You obviously could add a custom script that runs on each request, logs the access to the database, reads those files and prints their content to the client, but it would come at a considerable cost, I'm afraid.
Apache, albeit not the fastest web server around, is much, much faster at delivering a file to a client than running a PHP script, setting up a database connection, logging, and so on would be.
You'd get much higher server load, and probably noticeably slower page loads.
Instead, I recommend that you parse the access logs. They will most likely contain all of the data you're looking for, and if you have access to the server configuration, you can also log specific headers sent by the client. You can easily do this with a cron job that runs once a day, and it doesn't even have to run on the same server.
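For example, a nightly cron job along these lines could load the log into a database (a rough sketch; the log path, table layout, and regex are assumptions based on Apache's default "combined" format):

#!/usr/bin/env python3
# Parse an Apache "combined" access log and load each request into SQLite.
import re
import sqlite3

LINE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) \S+'
)

db = sqlite3.connect("requests.db")
db.execute("""CREATE TABLE IF NOT EXISTS hits
              (ip TEXT, time TEXT, method TEXT, path TEXT, status INTEGER)""")

with open("/var/log/apache2/access.log") as log:
    for line in log:
        m = LINE.match(line)
        if m:
            db.execute("INSERT INTO hits VALUES (?, ?, ?, ?, ?)",
                       (m["ip"], m["time"], m["method"], m["path"],
                        int(m["status"])))
db.commit()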

Suspicious PUT and GET requests in IIS logs

The Situation
I have come across some very suspicious PUT and GET requests in my IIS server logs. After Googling the requester's address, I found information linking the IPs to known hacking teams. After each PUT there is an immediate GET for the same resource that the PUT attempted to upload to my server.
Question 1:
Would this be considered a remote code execution attack?
Additional Testing Completed By Me:
The IIS logs show that the response given for the PUT request was 412 "Invalid file type all files are not uploaded".
I have turned on Failed Request Tracing and attempted to upload text files using cURL; I got the same response and was not able to upload a file.
Question 2:
What can I do to help prevent this type of attack from being successful?
I can turn on IIS request filtering, but I am concerned that if I deny PUT, my IIS application may be negatively impacted for any future web services.
Question 1: Would this be considered a remote code execution attack?
It is impossible to determine the intentions of the attacker from the information given. They could be looking to gain code execution, or they may simply settle for uploading their own content to your server for you to host, or try to deface your site with it.
Question 2: What can I do to help prevent this type of attack from being successful?
Server configuration and patching. The best advice I can give you is to reduce the attack surface: only enable the features you need. If you're not using PUT in your application, then disable it, and only re-enable it if needed. Make sure you have the latest updates for your OS installed.
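If you do decide to block PUT, IIS request filtering can do it declaratively in web.config. A sketch (test it against any services you host before relying on it):

<configuration>
  <system.webServer>
    <security>
      <requestFiltering>
        <verbs allowUnlisted="true">
          <!-- Reject PUT outright; IIS answers with 404.6 (verb denied) -->
          <add verb="PUT" allowed="false" />
        </verbs>
      </requestFiltering>
    </security>
  </system.webServer>
</configuration>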
Security is a wide subject. You need everything from secure code when developing applications to rigorous security testing after.

IIS Request Logging

We are looking to add some performance measuring into our LOB web application. Is there a way to log all requests in IIS, including the details of the request, the upload speed and time, the latency, and the download speed and time?
We will store this in a log file so the customer can send it to us for analysis (the customer hosts our LOB web application internally).
Thanks
IIS 7 natively provides logging features. It will give you basic information about requests (status code, date, call duration, IP, referrer, ...). It's already a good starting point, and it's very easy to enable in IIS Manager.
Advanced Logging, distributed as a separate download or via the Web Platform Installer (WPI), gives you a way to log additional information (HTTP headers, HTTP responses, custom fields, ...).
That's the best you can do without dipping into ASP.NET.
There is no out-of-the-box solution for your problem. As Cybermaxs suggests, you can use the W3C logs to get information about requests, but those logs do not break down the request/response times in the way you seek.
You have two options:
1) Write a native IIS module (C++, implementing CHttpModule from HTTPSERV.H) which intercepts all the relevant events and logs the times you require. The problem with this solution is that writing these modules can be tricky and error-prone.
2) Leverage IIS's Failed Request Tracing (http://www.iis.net/learn/troubleshoot/using-failed-request-tracing/troubleshoot-with-failed-request-tracing), which causes IIS to write detailed logs that include a breakdown of time spent per request in a verbose, parseable XML format. You can enable Failed Request Tracing even for successful requests. The catch is that an individual XML file is generated for each request, so you'll have to manage the directory (and the Failed Request Tracing configuration) so that this behaviour doesn't cause too much pain for your customer.
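For example, a site-level web.config along these lines traces every request, whatever its status code (a sketch; tracing must also be enabled for the site in IIS Manager):

<configuration>
  <system.webServer>
    <tracing>
      <traceFailedRequests>
        <add path="*">
          <traceAreas>
            <add provider="WWW Server" areas="RequestNotifications" verbosity="Verbose" />
          </traceAreas>
          <!-- "Failed" just means "matches this definition": match everything -->
          <failureDefinitions statusCodes="100-999" />
        </add>
      </traceFailedRequests>
    </tracing>
  </system.webServer>
</configuration>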

Increase request queue limit of ASP.NET

Not sure if my title is technically correct but I have a problem.
I have an ASP.NET 4.5 site on IIS 8 and use ASP.NET to control/limit file downloads.
Instead of letting IIS serve large (10-20 MB) static files such as ZIP archives, I serve them through ASP.NET.
It works fine until around 500 to 700 users start downloading. After that, ASP.NET starts to queue all requests to the domain until the active request count drops below some predetermined number.
Static content such as HTML isn't affected. If I enable more than one worker process, it handles more requests, but that brings the issue of managing session state.
There is no queue issue if I let IIS serve the files.
Is there any way of increasing the request queue length of ASP.NET?
You can modify the request queue limit and the maximum concurrent requests and threads allowed per CPU:
<system.web>
  <applicationPool
      maxConcurrentRequestsPerCPU="12"
      maxConcurrentThreadsPerCPU="0"
      requestQueueLimit="5000" />
</system.web>
(Note that in integrated mode this <applicationPool> element is read from the Aspnet.config file in the .NET Framework directory, not from your site's web.config; the linked post covers the details.)
http://blogs.msdn.com/b/rakkimk/archive/2009/07/08/iis7-improving-asp-net-performance-concurrent-requests-while-on-integrated-mode.aspx
However, I would suggest serving larger static files via a CDN if that is possible in your situation. The default limits are in place for a reason: if you lift the configured values too far above the defaults, you may start to experience performance issues and runtime errors.
Amazon's S3/CloudFront (and probably other CDNs) provide a range of options to control access to files. For example, signed requests enable "you to leverage the fine grained access control that IAM User policies provides while also reducing your exposure by enabling you to further restrict and limit the request to a predefined time for each one of your users."
http://aws.amazon.com/articles/5050/
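For illustration, generating a time-limited pre-signed URL for an S3 object with boto3 looks roughly like this (the bucket and key names are made up):

import boto3

s3 = boto3.client("s3")

# URL is valid for 10 minutes; after that S3 rejects the request,
# so a leaked link has a limited shelf life.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-downloads", "Key": "big-installer.zip"},
    ExpiresIn=600,
)
print(url)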

What are your experiences implementing/using WebDAV?

For a current project, I was thinking of implementing WebDAV to present a virtual file store that clients can access. I have only done Google research so far but it looks like I can get away with only implementing two methods:
GET, PROPFIND
I think that this is great. I was just curious though. If I wanted to implement file uploading via:
PUT
I haven't implemented it, but it seems simple enough. My only concern is whether a progress meter will be displayed for the user if they are using the standard Vista Explorer or OS X Finder.
I guess I'm looking for some stories from people experienced with WebDAV.
For many WebDAV clients, and even for read-only access, you will also need to support OPTIONS. If you want to support uploads, PUT obviously is required, and some clients (Mac OS X?) will require locking support.
(By the way, RFC 4918 is the authoritative source of information.)
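For example, the bare minimum OPTIONS handling looks something like this (a WSGI sketch, not production code):

# Minimal WSGI app: answer OPTIONS with a DAV compliance-class header
# so WebDAV clients agree to talk to us; everything else is stubbed.
def application(environ, start_response):
    if environ["REQUEST_METHOD"] == "OPTIONS":
        start_response("200 OK", [
            ("DAV", "1"),  # advertise "1, 2" once LOCK/UNLOCK exist
            ("Allow", "OPTIONS, GET, PROPFIND"),
            ("Content-Length", "0"),
        ])
        return [b""]
    start_response("405 Method Not Allowed", [("Content-Length", "0")])
    return [b""]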
I implemented most of the WebDAV protocol in about a day's work: http://github.com/nfarina/simpledav
I wrote it in Python to run on Google App Engine, and I expect any other language would be a similar effort. All in all, it's about two pages of code.
I implemented the following methods: OPTIONS, PROPFIND, MKCOL, DELETE, MOVE, PUT, GET. So far I've tested Transmit and Cyberduck, and both work great with it.
Hopefully this can provide some guidance for the next person out there interested in implementing a WebDAV server. It's not a difficult protocol; it's just very dense with abstract language like 'depth' and 'collections' and blah.
Here's the spec: http://www.webdav.org/specs/rfc4918.html
But the best way to understand the protocol is to watch a client interacting with a working server. I used Transmit to connect to Box.net's WebDAV server and monitored traffic with Charles Proxy.
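For reference, an abridged Depth: 1 PROPFIND exchange per RFC 4918 looks like this (the 207 body is trimmed to a single resource):

PROPFIND /docs/ HTTP/1.1
Host: example.com
Depth: 1

HTTP/1.1 207 Multi-Status
Content-Type: application/xml; charset=utf-8

<?xml version="1.0" encoding="utf-8"?>
<D:multistatus xmlns:D="DAV:">
  <D:response>
    <D:href>/docs/</D:href>
    <D:propstat>
      <D:prop>
        <D:resourcetype><D:collection/></D:resourcetype>
        <D:displayname>docs</D:displayname>
      </D:prop>
      <D:status>HTTP/1.1 200 OK</D:status>
    </D:propstat>
  </D:response>
</D:multistatus>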
A bit late to the party, but I've implemented most of the WebDAV protocol and I can tell you with confidence that you'll need to implement most of the protocol.
For OS X you'll need class-2 WebDAV support, which includes LOCK and UNLOCK. (I found it particularly difficult to fully implement the HTTP If: header, but for Finder you'll only need a bit of that.)
These are some of my personal findings:
http://sabre.io/dav/clients/windows/
http://sabre.io/dav/clients/finder/
Hope this helps
If you run Apache Jackrabbit under, say, Tomcat, it can be configured to offer WebDAV and store uploaded files. Perhaps that will be a useful model, or even a good enough replacement for the planned implementation.
Apache Jackrabbit Support for WebDAV
Also, you may want to be aware of the BitKinex client (free 30 day trial), which I have found to be a useful tool for testing a WebDAV server.
BitKinex Home Page
We use WebDAV internally to provide a folder-based view of some file shares to clients outside of our firewall. We're using IIS6 for this.
Basically, it boils down to creating a virtual directory in IIS that maps to each network file system you want to make available via WebDAV. Set it up with the content coming from "A share located on another computer" -- use the UNC path to the share for the Network Directory value. We turn on all options except "Index this resource". Disable all default content pages. Turn on Windows Integrated Authentication (ours is set up to use SSL as well). I have the root set up to deny access to anonymous users and allow access to any authenticated user. We also have a wildcard MIME mapping (.* to application/octet-stream). Enable the WebDAV web service extension in IIS. You also need to set up the web server to delegate permissions to all the file servers it may be accessing so it can pass on the user's credentials.
If you have Macintosh clients, you may also need an ISAPI filter that maps 401 to 403 errors for Darwin clients. Microsoft and Apple disagree on how to handle the situation where you don't have permission to write to a directory: Apple keeps resending the credentials on a 401 (Unauthorized) error, and translating it to a 403 (Forbidden) keeps that from happening. By default, Apple likes to write a "dot" file to every directory it accesses, so navigating through directories where you don't have write access will end up crashing the Finder if you don't have the filter. I have source code for this if needed.
This is all off the top of my head. It's possible (probable?) that I may have missed something. Feel free to contact me via the contact information on my web site if you have problems.
We have a WebDAV servlet in our web-based product.
I've found Apache Jackrabbit a good help in implementing it. However, WebDAV is a serious P.I.T.A. when it comes to client-side support.
Many client implementations differ widely in their behavior, and you will most likely have to support several different kinds of buggy implementations.
Some examples:
Windows Vista only supports authentication over SSL.
Most Windows-based WebDAV clients assume your WebDAV server/servlet is a SharePoint server and act accordingly (thus not according to the WebDAV protocol).
One example of this is that you NEED to allow an unauthenticated LOCK request on the root of your server (i.e. yourdomain.com/, not yourdomain.com/where/webdav/should/live), or else you won't be able to get write access in MS Windows.
(This is a serious P.I.T.A. on a Tomcat machine, where your stuff usually lives at server.com/servlets/paths/thelocation.)
Most (all?) versions of MS Office respond differently to WebDAV links.
I guess my point is that integrating WebDAV support into an existing product can be a LOT harder than you would expect, and if possible I would advise using a (semi-)standalone WebDAV server such as the Jackrabbit WebDAV server or Apache mod_dav.
I've found OS X's Finder WebDAV support to be really finicky. In order to get read-write support, you have to implement LOCK, in addition to other bits.
I wrote a WebDAV interface to a Postgres database, where Python modules were stored in the database in a hierarchical folder-like structure. Accessing it with cadaver worked fine, and IIRC a GUI Windows browser worked too, but Finder refused to mount the share as anything other than read-only.
So, I don't know if it would give a progress bar. The files I was dealing with were small enough that a read/copy from them was virtually instantaneous. I think a copy of a large file using the Finder would probably show a progress bar; it does for any other type of mounted share.
Here is another open-source project for WSGI WebDAV:
http://code.google.com/p/wsgidav/
It's where I picked up the PyFileServer project.
