As part of a CMS, I have created a custom VirtualPathProvider which is designed to serve a single file in place of an actual file structure. I have it set up such that if a file actually exists on the server, that file will be served. If the file does not exist, the virtual content stored for that address will be served instead. This is similar to the concept of serving a website from files stored in a database, though in this case the content is stored in XML files on the server.
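For illustration, here is a minimal sketch of such a provider, assuming the stored XML content can be looked up by virtual path; CmsPathProvider, LoadStoredContent, and CmsVirtualFile are illustrative names, not part of ASP.NET:

    using System.IO;
    using System.Text;
    using System.Web.Hosting;

    // Minimal sketch: physical files win; otherwise the stored content is served.
    public class CmsPathProvider : VirtualPathProvider
    {
        public override bool FileExists(string virtualPath)
        {
            return base.FileExists(virtualPath) || LoadStoredContent(virtualPath) != null;
        }

        public override VirtualFile GetFile(string virtualPath)
        {
            if (base.FileExists(virtualPath))
                return base.GetFile(virtualPath);                  // real file on disk
            return new CmsVirtualFile(virtualPath, LoadStoredContent(virtualPath));
        }

        private static string LoadStoredContent(string virtualPath)
        {
            // Placeholder: look up the XML content stored for this address.
            return null;
        }

        private class CmsVirtualFile : VirtualFile
        {
            private readonly string _content;

            public CmsVirtualFile(string virtualPath, string content) : base(virtualPath)
            {
                _content = content;
            }

            public override Stream Open()
            {
                return new MemoryStream(Encoding.UTF8.GetBytes(_content ?? string.Empty));
            }
        }
    }

The provider would be registered at application start with HostingEnvironment.RegisterVirtualPathProvider(new CmsPathProvider()).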
This setup works perfectly when a request is made to a specific page. For example, if I ask for "www.mysite.com/foobar.aspx", the content that is stored for "foobar.aspx" will be served. Further, if I ask for "www.mysite.com/subdir/foobar.aspx", the appropriate content will also be served.
The problem is this: If I ask for something like "www.mysite.com/foobar", things begin to fall apart. If the directory exists on disk (and doesn't have a configured default page in IIS, such as index.aspx), I will get a "Directory Listing Denied" error. If the directory does not exist, I'll simply get a 404 - Resource Not Found.
I've tried several things, and so far nothing I've done has made a bit of difference. It seems as though IIS is simply noting the nonexistence of a directory (or default file in an existing directory) and serving up its own error code, without ever asking my application what to do with the request. If it ever did get to the application, I would be able to solve the problem, but as it stands, I'm quite lost. Does anyone know if there is some setting in IIS that is causing this?
I've looked for every resource I can find on the subject, and am coming up empty. I know this should be possible, because I have read tutorials on serving content from both databases and ZIP files. HELP!
P.S. I am running IIS 6 and .NET 3.5.
IIS will only pass a request to the ASP.NET process if it is configured to do so for that particular extension. By default that means .aspx, .ascx, and so on; in other words, if you request a .html file, ASP.NET will never see that HTTP request. The same applies to a request with no extension at all.
To change this behavior, add a wildcard mapping to the ASP.NET process. Load IIS Manager, go to the Properties for your web site and look at the Home Directory tab. Click on "Configuration" and there you will see the extension-to-application mappings.
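On IIS 6 the usual approach is to insert a wildcard application map in that dialog pointing at the aspnet_isapi.dll for your .NET version (for .NET 2.0/3.5 this is typically %windir%\Microsoft.NET\Framework\v2.0.50727\aspnet_isapi.dll) and to uncheck "Verify that file exists", so that extensionless requests such as /foobar are handed to ASP.NET and reach your VirtualPathProvider. Be aware that routing every request through ASP.NET adds some overhead for static content.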
I was trying to run a .cshtml file but it gave an error:
Server Error in '/' Application.
This type of page is not served.
Description: The type of page you have requested is not served because it has been explicitly forbidden. The extension '.cshtml' may be incorrect. Please review the URL below and make sure that it is spelled correctly.
Requested URL: /index.cshtml
So I searched for a solution and found that I had to edit the web.config file in the root directory (here it is My Site), but there is no such file there; there is only the index.cshtml that I created.
I even searched the IIS and IIS Express folders in Program Files\, but there was no such file there either.
If you are using WebMatrix, the mistake may have been your choice of starting point.
If you want to create a new Web Pages site, you must start from a template in WebMatrix's Template Gallery. Note that the Empty Site template differs from the Empty Site option outside the Template Gallery: the template includes the files that are needed (binaries, packages, and the web.config).
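If creating the site from the proper template is not an option, a minimal web.config for a Web Pages site looks roughly like the sketch below; the version numbers are assumptions and should match the ASP.NET Web Pages version you actually have installed:

    <?xml version="1.0"?>
    <configuration>
      <appSettings>
        <!-- Lets ASP.NET serve .cshtml pages directly; the version value is an assumption. -->
        <add key="webpages:Version" value="2.0.0.0" />
        <add key="webpages:Enabled" value="true" />
      </appSettings>
      <system.web>
        <compilation debug="true" targetFramework="4.0" />
      </system.web>
    </configuration>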
As the server error suggests, .cshtml files are not served directly. They are server-side files that make up your application; each is just one piece of a much bigger picture.
If you launched your web application in debug mode and the URL in the browser was something like http://localhost:2932/Views/Home/Index.cshtml, just drop the /Views/Home/Index.cshtml part of the URL.
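For context, this is the MVC convention behind that URL; HomeController is the usual default name, though your project may differ:

    using System.Web.Mvc;

    public class HomeController : Controller
    {
        // GET / (or /Home/Index) renders the view at ~/Views/Home/Index.cshtml.
        public ActionResult Index()
        {
            return View();
        }
    }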
Suppose the URL http://example.com/test.php. If I type this URL into the browser address bar, the PHP code is executed and its output is returned to me. Fine. But what if, instead of executing it, I wanted to view its source as plain text? Is there a way to issue such a request?
I believe that there must be some way, and my concern is that an outsider could retrieve sensitive code, such as configuration files, by guessing their location. For example, Joomla installations have a configuration.php in their root folder. If someone retrieves such a file as plain text, the database credentials have been seriously compromised. Obviously, this could be prevented with proper permissions, but it's just too common to set 0777 permissions on everything and forget about access denials.
For PHP: if the server is properly configured, there is no way to download the source. File permissions won't help either way, as the webserver needs to be able to read the files, and it is the one serving the content. However, a webserver can, for instance, be configured to serve them with x-httpd-php-source, or the PHP/webserver configuration may be broken. That is why files which don't need direct access (db config, class definitions, etc.) should be kept outside the document root: then there is no way those files will get served by accident even when the webserver config is incorrect or failing. If your current hoster does not allow you to store files outside the document root, switch hosting a.s.a.p.
There is a way to issue such a request and download the source of http://example.com/test.php only if the server is configured to provide a URL for it. Usually it isn't, so usually there is no such request to issue.
I'm creating a robots.txt file for my website, but looking through my project structure, I'm not sure what to disallow.
Do I need to disallow standard .NET MVC directories and files like /App_Data, /web.config, /Controllers, /Models, /Global.asax? Or will those not be indexed already?
What about directories like /bin and /obj?
If I want to disallow a page, do I disallow /Views/MyPage/Index.cshtml, or /MyPage?
Also, when specifying the sitemap in the robots.txt file, can I use my Web.sitemap, or does it need to be a different xml file?
'robots.txt' refers to paths as they are publicly seen by Web crawlers.
There's nothing particularly special about a crawler: it merely uses HTTP to request pages from your site precisely like a user does.
So, given that your MVC site is properly configured, files like /web.config or the paths you mention won't be visible to the outside world, as neither IIS nor your application will be configured to serve them. Even if a crawler were pointed at those paths, it would receive a 404 Not Found and continue.
Similarly, your .cshtml or .aspx content files won't be seen with those extensions. Rather, a Web crawler will see precisely what you'll show to users.
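As a concrete sketch (the /MyPage path and the sitemap URL are placeholders), a robots.txt for an MVC site disallows the public URL of the page, not its .cshtml view path. Note also that the Sitemap line expects a sitemaps.org-style XML sitemap served at a public URL; the ASP.NET Web.sitemap navigation file uses a different schema, so you would publish a separate sitemap.xml for crawlers:

    User-agent: *
    Disallow: /MyPage

    Sitemap: http://www.example.com/sitemap.xml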
I have just started working for a new company as a web developer. Previous research told me that their site is built in ASP.NET, which isn't a problem; I just don't have any experience with it. All my experience is in HTML, CSS, PHP and JS.
Upon gaining access via FTP, I noticed there is no traditional index.bla, so I went to the homepage on their website, and instead of index, it was default.aspx.
Is this "default.aspx" file the .NET replacement / equivalent of an index file, and does it work in the same way?
Yes. In IIS (the web server) you can specify which files will be shown when a directory (like the root, when accessed through http://www.sitename.tld/) is requested.
You can configure which files will be shown and in what order in the web site's properties (IIS 6).
So when a user requests a directory on that site, IIS will search for "Default.htm"; if that isn't found it will look for "Default.asp", and so on. If none of the default documents are found, you will either see the directory's contents (only if directory browsing is enabled, which it isn't by default) or an error saying you aren't allowed to view the directory's contents.
In Apache this is set through the DirectoryIndex directive in httpd.conf.
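For example, a line such as the following (file names are illustrative) plays the same role as the IIS default-document list:

    DirectoryIndex index.html index.php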
Yes. index is an arbitrary name that Apache defaults to. The index page can be named anything, and with IIS it is usually default.
I have a fully working web site that I ported to a new hosting company.
On some pages I have links to PDFs on the server (they do exist!).
On the old server there was no problem.
On the new one, when a user clicks on a link: error 404, the file does not exist...
Should I look in the web.config? I don't know where to start.
Thanks,
John
Start with the file read permissions.
You need to read the log files, or the Event Viewer, to see what the problem really is.
This is probably as simple as the files not being in the same location relative to the page as they used to be; e.g. there was a /pdfs folder in the root of the web site where all the files lived, but now they are just in the root folder and the links were not updated.
You've not said which version of IIS you're using. However, for IIS 5 this has been answered over at ServerFault; see https://serverfault.com/questions/79094/serve-pdf-fies-in-iis
It should be similar for IIS6. It's possible your hosting provider may have revoked the MIME type so IIS no longer recognises it.
If your hosting company isn't forthcoming, what you may end up needing to do is write a "file provider" page that takes the file to download on the query string (obviously with some sanity checking so folk can't request any old file) and simply writes it out, bypassing what IIS would do normally.
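A rough sketch of that idea, assuming the PDFs live in a ~/pdfs folder and the handler is exposed as something like GetPdf.ashx (both names are placeholders):

    using System;
    using System.IO;
    using System.Web;

    // Streams a PDF named on the query string, with basic sanity checks so
    // arbitrary files can't be requested.
    public class PdfHandler : IHttpHandler
    {
        public void ProcessRequest(HttpContext context)
        {
            string name = Path.GetFileName(context.Request.QueryString["file"] ?? string.Empty);
            string fullPath = context.Server.MapPath("~/pdfs/" + name);

            if (name.Length == 0 ||
                !name.EndsWith(".pdf", StringComparison.OrdinalIgnoreCase) ||
                !File.Exists(fullPath))
            {
                context.Response.StatusCode = 404;
                return;
            }

            context.Response.ContentType = "application/pdf";
            context.Response.TransmitFile(fullPath);
        }

        public bool IsReusable { get { return true; } }
    }

Links would then point at GetPdf.ashx?file=whatever.pdf instead of at the PDF directly.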