Where to put a new ASP.NET website? - asp.net

Where's the best place for a production ASP.NET application? I mean a place that requires less permission manipulation on folders, and is probably the experts' choice.
Under C:\inetpub\wwwroot, C:\inetpub, or elsewhere?
In development/test phases I usually put it under C:\inetpub\wwwroot and create a new web application without setting bindings. But for the production version with bindings I'm not sure where the right place is.

You can put it anywhere you like; the key thing is to ensure that the app pool it is running under is set to run as a low-privileged user (like NT AUTHORITY\NETWORK SERVICE), then ensure that user has Read (and possibly Browse if you want it) permissions on the folder you put your web app in. Very seldom (if ever) will the user need Write or Modify permissions on the folder.
On a new system I had a lot of problems modifying batch files and setting permissions.
Setting permissions should not be a problem; you should set the same basic permissions I mentioned above for the user you want to run the app pool as. You can use PowerShell or WMI for this, and you should use the same permissions no matter what folder you install into.
You could always wrap all this up into an installer, then it can be as simple as hitting Next.. Next... Finish... in an installer wizard to set up your website on any machine. Doing this in an installer also gives you some certainty that nothing has been missed.
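As a minimal sketch, granting those permissions from PowerShell might look like the following (the folder path and app pool name are placeholders; adjust them to your environment):

# Grant read & execute on the web app folder to the app pool identity (example names)
$path     = 'D:\WebSites\MyApp'
$identity = 'IIS AppPool\MyAppPool'   # or 'NT AUTHORITY\NETWORK SERVICE'
$acl  = Get-Acl $path
$rule = New-Object System.Security.AccessControl.FileSystemAccessRule -ArgumentList $identity, 'ReadAndExecute', 'ContainerInherit,ObjectInherit', 'None', 'Allow'
$acl.AddAccessRule($rule)
Set-Acl -Path $path -AclObject $acl

The same script runs unchanged whichever folder you deploy to, which is exactly why the location itself matters so little.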

Personally I have a 'Development' folder on my D: drive which is then subdivided into different categories depending on the work. I generally don't use the inetpub directory, and any permission issues I come across I just set directly on the relevant folder within my own development structure.
In production environments I've used in the past, we've generally done the same thing, mainly to help backup scenarios really, but also because there's no strict need to use the default IIS directories - you're free to structure things how you like.

Personally, I always create a new folder (in the root of a drive) called WebSites. I then make sure it has the appropriate permissions for the website process(es) (aka App Pools).
eg.
C:\
|_WebSites
   |_www.Foo.com
   |_www.Bah.com
It also makes it easier to manage because you don't have to hunt through the folder structures to find any/all websites.
But technically, it can be (more or less) anywhere - just needs to have the correct permissions set.
Bonus Answer
I also remove the Default Website from IIS .. which in effect means I can also delete c:\inetpub\wwwroot.
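A minimal sketch of setting one of these sites up in PowerShell, assuming the WebAdministration module and example names (www.Foo.com on port 80):

# Create the folder and register the site in IIS (names and paths are examples)
Import-Module WebAdministration
New-Item -ItemType Directory -Path 'C:\WebSites\www.Foo.com' -Force | Out-Null
New-Website -Name 'www.Foo.com' -PhysicalPath 'C:\WebSites\www.Foo.com' -HostHeader 'www.foo.com' -Port 80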

You can put the website anywhere on the server's hard disk. Just make sure it is a secure folder, and I also recommend not putting it on the same drive as the OS, in case that drive fails and you need to format it.
C:\inetpub\wwwroot and C:\inetpub are just the default places nothing more.

Really depends on how the production server is configured and how operations likes to operate over there. Typically we setup a second "data" drive on servers for a few reasons:
a) Back in the old days, there were a lot of canonicalization attacks where the attacker would try to navigate from c:\inetpub to c:\winnt\cmd.exe. Putting things on a different drive prevented this sort of thing.
b) Recovery -- if the OS gets hosed, you can pretty easily reinstall/reimage or move the data disks to another box and get things stood up fast.
c) It's typically a lot easier to do things like swap the non-OS disk in case you need more disk space or faster disks or whatever.
Basically, off the OS drive is a good idea. Though virtualization and modern deployment tools make lots of this matter less.

Related

How to Copy Kentico Instance to a Local Machine?

I just started a new position replacing a developer who left abruptly working on a project that is based in the Kentico CMS. I am completely unfamiliar with ASP and Kentico, so the answer here needs to be tailored for a total beginner. I am familiar with other languages (PHP, Ruby, SQL, etc.) but have no idea where to begin with this.
So, what I want to do is copy everything from our production site (db and all) to my local machine so I can develop on it more easily. I have already exported the db into an SQL file, copied all the files in our Kentico instance folder into GitHub, and cloned the repository on my local machine. I assume since Kentico is already "set up," going through the installation process in their documentation is not the way to go about this.
Any help would be incredibly appreciated!
David, basically there are a few "pieces" to running Kentico locally. Since, as you mentioned, Kentico is already set up, you should have an easier go of it.
A database with the necessary Kentico tables (it sounds like you already have this).
The codebase (all of the code files that you copied to GitHub).
A valid license for any domain you want to run Kentico on. Was the site already public facing? Do you know what licenses you have, or can you log into CMS Desk on the site that you copied everything from?
Set up IIS for your local website. If you are unsure on this one I can explain further, but basically you need to add a new site, point it to the root code folder for your site, and set the domain to be a domain you have a Kentico license key for. You'll also need to change the app pool settings to "integrated" mode (most likely) and also set the appropriate version of .NET (if it's a recent version of Kentico you'll want .NET 4.0). See the sketch after this list.
Next you'll need to edit your hosts file to add the domain and point it to your localhost IP address. So add a line like "127.0.0.1 dev.yourdomain.com" or the equivalent.
Edit the web.config file so that your code can connect to your database. You will need to edit the connection string accordingly to point to the database on your machine.
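A minimal sketch of the IIS and hosts-file pieces in PowerShell (the domain, paths, and app pool name are placeholders; run elevated):

# Create an app pool on .NET 4.0 / integrated mode, register the site, and map the dev domain locally
Import-Module WebAdministration
New-WebAppPool -Name 'Kentico'
Set-ItemProperty 'IIS:\AppPools\Kentico' -Name managedRuntimeVersion -Value 'v4.0'
Set-ItemProperty 'IIS:\AppPools\Kentico' -Name managedPipelineMode -Value 'Integrated'
New-Website -Name 'dev.yourdomain.com' -PhysicalPath 'C:\Dev\KenticoSite' -HostHeader 'dev.yourdomain.com' -Port 80 -ApplicationPool 'Kentico'
Add-Content 'C:\Windows\System32\drivers\etc\hosts' "`r`n127.0.0.1 dev.yourdomain.com"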
Once you have done these steps, your site should start to run just as it had before. I didn't give great detail on all of these pieces so let me know what problems you encounter so I can further clarify. More information about the current situation would also help.
One other note I would make: if you need the client to be able to review your work, it will most likely be more efficient and easier for you to leave the original database on the web server, and (if possible) connect to it remotely from your local machine. Since almost any change you make will result in a database change in Kentico, I find it much easier to be working on 1 database for development with distributed codebases. Otherwise you will probably need to overwrite the other database with your changes constantly and this can be annoying. If you leave the database on the server and just connect remotely, you can just ftp (or use git) to push files to the server that you have edited locally.

Random 401 errors on an auto compiled asp.net site when updated pages are pushed

We have an asp.net web site that is deployed on several IIS servers. The site is compile-on-demand as opposed to a pre-compiled web application.
Normally deployments go fine but every now and again we get a 401 for one of the deployed pages on one of the servers. There is nothing special about which page or which server apart from the fact that it's generally the higher traffic pages that it happens to.
The only way to rectify this is to deploy the same page again.
The ACLs look fine on the files themselves so the thought is that there is a file locking issue in the Temporary ASP.NET Files folder when the specific page is re-compiled.
Has anyone seen this before or have any suggestions how to avoid this?
Note: This only seems to have happened since we moved to .net 4.0
As far as I can tell we are getting a 401.3 "Denied by resource ACL" (http://support.microsoft.com/kb/907273), but I have not been able to confirm this.
Those kinds of locks have always been a problem with live site deployment. The reason it's hard to replicate is because you are mid-request when copying/compiling on the server, and this ends up confusing IIS.
We operate a Blue/Green deployment strategy on a 4 tier architecture which has a web site over 4 servers at the top tier. Due to the complexity the architecture introduced for deployments, we needed a way to deploy without disturbing any traffic to the "live" site. Following Fowler's advice, but not quite in the same way, we came up with a solution that means we have 2 sites on each server (a blue and a green, or in our case site A and site B). The live site has the appropriate host header, and once we have deployed and tested to the non-live site, we then flip the headers of the 2 sites so that what was once live is now the non-live site, and vice-versa. The effect is, a robust deployment that can be done in hours and with the highest level of confidence.
This of course complicates your configuration and deployment slightly, but it's worth the effort. I guess it kind of goes without saying that you want to script both the deployment, and the host header swapping.
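A minimal sketch of the host-header flip, assuming two IIS sites named SiteA and SiteB and example host names (a temporary "parking" header avoids a duplicate binding mid-swap):

# Swap which site answers on the live host header (all names are examples)
Import-Module WebAdministration
$live  = 'www.acme.com'
$stage = 'staging.acme.com'
Set-WebBinding -Name 'SiteA' -BindingInformation "*:80:$live" -PropertyName HostHeader -Value 'parked.acme.local'
Set-WebBinding -Name 'SiteB' -BindingInformation "*:80:$stage" -PropertyName HostHeader -Value $live
Set-WebBinding -Name 'SiteA' -BindingInformation '*:80:parked.acme.local' -PropertyName HostHeader -Value $stage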
When I deploy to a server I bring the site down for a minute (or however long the deployment takes); it may be down anyway during this time as pages are recompiled, so it is not too much of a hit. You can do this by creating a file in the root of the app called app_offline.htm (it needs to be at least 512 bytes in length). Once that file is created you can copy the resources to the folder knowing there will not be any locking issues; then, when the copy is complete, remove the app_offline file.
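A minimal PowerShell sketch of this approach (the site and build paths are placeholders):

# Take the app offline, copy the new build, bring it back
$site = 'D:\WebSites\MyApp'
Set-Content "$site\app_offline.htm" ('<html><body>Down for maintenance.</body></html>' + (' ' * 600))   # pad past 512 bytes
Copy-Item 'D:\Deploy\MyApp\*' $site -Recurse -Force
Remove-Item "$site\app_offline.htm"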
For those that want to achieve a .net website deployment without these issues, one option is to copy the new website files into a new folder first ( not the active website). Then you just change IIS to point to the new folder after all copying is complete.
This can be done in a single server environment for those of us on more limited resources without multiple servers per website.
At my work we write PowerShell scripts to deploy websites. The PowerShell script creates a new directory with a timestamp, copies the new deployment there, then tells IIS to point the website to the new directory (leaving the old directories "orphaned" but still there).
If we really messed something up, we can simply revert by pointing IIS back at the previous time-stamped directory. Otherwise, if everything tests OK, we can delete the old folder.
This technique works well because you are never writing over a file while it is in use, and it still results in essentially zero downtime. The only effect you will see is the normal .NET "warm up" that occurs any time you change the code-behinds or assemblies.
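A minimal sketch of that script (the site name and paths are examples):

# Deploy into a timestamped folder and repoint the existing IIS site at it
Import-Module WebAdministration
$stamp  = Get-Date -Format 'yyyyMMdd-HHmmss'
$target = "D:\Deployments\MySite\$stamp"
New-Item -ItemType Directory -Path $target | Out-Null
Copy-Item 'D:\Build\MySite\*' $target -Recurse
Set-ItemProperty 'IIS:\Sites\MySite' -Name physicalPath -Value $target   # old folder stays on disk for rollback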
I had several answers suggesting a new environment to deploy. This is something we have been considering for the long term but it's hard to justify the extra work when we regularly deploy only one or two files without a problem. I was really more interested in finding out what is actually happening and why.
In terms of a workaround, and this might sound obvious after the fact, a simple app_pool recycle solves the permissions issue and is much easier than testing for the issue and redeploying the file until the problem goes away.

Junctions or Virtual Directories for Web Applications?

I see that junctions are a common way of referencing shared code in many projects. However, I have not seen them used in web applications before.
Our team is exploring the possibility of abandoning virtual directories in favor of junctions to simplify our build process. My goal is to compile a list of pros and cons in order to make an informed decision regarding this change.
Is it more appropriate to use junctions or virtual directories on web application projects?
Environment is ASP.NET, IIS6/IIS7, VS.NET.
Virtual directories vs. junctions is like comparing apples to pears: they both create a sort of virtual copy of a directory, just as apples and pears are both fruit, but the comparison ends there.
First off, since Windows Vista, the new thing is symbolic links (which are essentially the same as junctions but can also point to a file or remote SMB path).
Symbolic links enable you to, for example, share every part of a web application except its Web.config and a stylesheet. This is something virtual directories can never do.
Also, virtual directories participate in ASP.NET's change monitoring. If you delete a file or directory from within your application, for instance, ASP.NET kills your app after the request completes, resulting in loss of session state, etc. If instead of a virtual directory you use a symbolic link, the change will not be noticed and your app will keep on churning.
It's important to keep in mind that symbolic links are not an everyday feature in Windows. Yes, you can see in Explorer that a file or directory is linked, but it's not instantly visible what it is linked to. Also, from code it is much harder to see if a file is linked, so if you accidentally delete the file that is being linked to from a million symbolic links, all those symbolic links suddenly 'stop existing'.
Symbolic links also speed up deployment of multiple instances of the same application, since the only thing you have to do is copy a few actual files, and then create symbolic links to the source files for all the rest.
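A minimal sketch of that kind of deployment, assuming PowerShell 5+ run elevated (all paths are placeholders): everything is linked back to a shared source except Web.config, which each instance gets as its own copy.

# Link all files from the shared codebase into a new instance folder, except Web.config
$source   = 'D:\Apps\Shared\MyApp'
$instance = 'D:\WebSites\Customer1'
New-Item -ItemType Directory -Path $instance -Force | Out-Null
Get-ChildItem $source | Where-Object { $_.Name -ne 'Web.config' } | ForEach-Object {
    New-Item -ItemType SymbolicLink -Path (Join-Path $instance $_.Name) -Target $_.FullName | Out-Null
}
Copy-Item (Join-Path $source 'Web.config') $instance   # per-instance copy you can edit freely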
With virtual directories you need IIS installed on each environment. However, with both approaches you need to manually maintain all references after each change (for example, when someone adds one more reference), which is not convenient.
Consider using a VCS with a referencing system, for example SVN with externals. In that case you will have:
Automatic updates of references on each environment.
The ability to pin references to different versions of the external code. This avoids situations where every dependent application has to change after each change to the external code.

SCM for ASP.net

As part of my overall development practices review I'm looking at how best to streamline and automate our ASP.net web development practices.
At the moment, our process goes something like this:
Designer builds frontend as static HTML/CSS on a network share. This gets tweaked until signed off. (e.g. http://myserver/acmesite_design)
Once signed off, developer takes over and copies over frontend HTML/CSS to a new directory on the same server (e.g. http://myserver/acmesite_development)
Multiple developers work on local copy until project is complete.
Developer publishes code to an external publicly accessible server for a client to review/signoff.
Edits made locally based on feedback.
Republish to external server.
Signoff
Developer publishes to live public server
What goes wrong? Lots of things!
Version Control — this is obviously a must and is being introduced
Configuration errors — many times there are environment-specific paths and variables (such as DB names, image upload directories, web server paths, etc.) which incorrectly get copied from local to staging to live, with very embarrassing results.
I'm pretty confident I've got no. 1 under control. What about configuration management? Does anyone have any advice on how best to manage an application's structure within ASP.NET apps to minimize these kinds of problems?
I found that using SVN, NAnt and NUnit with Cruise Control.net solves a lot of the issues you describe. I think it works well for small groups and it's all free. Just need to learn how to use them.
CruiseControl.net helps you put together builds and continuous integration.
Use NAnt or MSBuild to do different environment builds (DEV, TEST, PROD, etc).
http://confluence.public.thoughtworks.org/display/CCNET/Welcome+to+CruiseControl.NET
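As a minimal sketch (the solution name and configuration names are examples, and this assumes msbuild is on the PATH), the environment-specific builds can be as simple as passing a different configuration to MSBuild from your CruiseControl.NET tasks or a PowerShell script:

# One build script, different configuration per environment
msbuild AcmeSite.sln /t:Rebuild /p:Configuration=TEST
msbuild AcmeSite.sln /t:Rebuild /p:Configuration=PROD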
You got the most important part right. Use version control. Subversion is a good choice.
I usually store configuration along with the site; i.e. when coding a PHP-based site I have a file named config.php-dist. If you want the site to work at all you'll have to copy + edit in all the required parameters (this avoids storing passwords in version control). The -dist file should have reasonable defaults.
Upload directories should be relative if possible; actually all directories should be relative. I'm not experienced in ASP.NET, but if it's anything like PHP the current directory is always the directory of the file being requested. If you channel all requests through a single file (i.e. index.asp), then this can even be found programmatically. Or you could find it programmatically by using the equivalent of dirname(__FILE__) in your configuration file.
I also recommend installing IIS (or whatever web server you are using) on all development workstations (including the designers'). It makes life easier as no one can step on each other's toes. All one has to do is add test hosts to the hosts file (\windows\system32\drivers\etc\hosts, iirc) in addition to adding a site to the local IIS. This plays well with version control (check out, add the site to IIS and the hosts file, edit edit edit, commit).
One thing that really helps is making sure you keep your paths relative where you can and centralise them where you can't. When I've been working with ASP.NET I have tended to use web.config to store any configuration and path-related data that can't be found programmatically. It is quite possible to find information like your current application path programmatically through the Request object - it's worth looking in some detail at what the environment makes available to you.
One way to make sure you don't end up with something that is dependent on the path name is having a continuous integration server execute your test suite against your application, deploying to a randomly generated file path on each run. As soon as someone introduces a dependency on the file path, the build fails.

Running a SWF from file:/// without having the user change their Flash Player security settings

I have a Flex app that does a fair amount of network traffic: it uses ExternalInterface to make some JavaScript calls (for SCORM), it loads XML files, images, video and audio, and it has a series of modules that it could be loading at some point...
So the problem is - we now have a requirement where the user needs to run this content locally on a machine that is not connected to the internet (which means they can't connect to Adobe's site to change their security settings). As you can imagine, when the user double-clicks the HTML page to launch this thing, they are greeted with a security warning that the SWF is trying to communicate with a domain other than the one it's in. We can't wrap it in an exe or an AIR app, so unless there is some way to tweak some obscure security settings, we may be hosed. Any ideas?
What you are trying to do is exactly the problem solved by AIR. You should really give it a try, it's not that hard to pick up. If you really really can't use AIR (you didn't specify why, so I assume it's just because you don't want to have to learn a new system), then modifying the security config file will solve the problem.
Basically what you need to do is create a 'trust' file in the "Global FlashPlayerTrust" directory. This can be done by your installer (which installs all the javascript, SWF, html, etc files onto the local machine). You should create the directory if it does not exist. The directory for each OS is:
Windows - %WINDIR%\System32\Macromed\Flash\FlashPlayerTrust
Mac - /Library/Application Support/Macromedia/FlashPlayerTrust
Linux - /etc/adobe/FlashPlayerTrust
Next, you need to create the trust file. You can name it anything, so pick a unique name that would be unlikely to conflict with others. Something like CompanyName.cfg. It's a text file, with one path per line. You can trust either one SWF at a time, or an entire directory. Example:
C:\Program Files\MyCompany\CoolApp
C:\Program Files\MyCompany\OtherApp\Main.swf
To test that it's working, inside your flash movie you can check System.security.sandboxType (ActionScript 1 or 2), or Security.sandboxType (ActionScript 3). It should have the value of "localTrusted"
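If it helps, a minimal sketch of what an installer step could do on Windows (the cfg file name and trusted path are examples):

# Create the global FlashPlayerTrust directory if needed and drop a trust file into it
$trustDir = Join-Path $env:WINDIR 'System32\Macromed\Flash\FlashPlayerTrust'
if (-not (Test-Path $trustDir)) { New-Item -ItemType Directory -Path $trustDir | Out-Null }
Set-Content (Join-Path $trustDir 'MyCompany.cfg') 'C:\Program Files\MyCompany\CoolApp'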
I hesitate to say "you can't do it", but in my experience, there's no way to do what you're describing. Anyone, if I'm wrong, I'd love to know the trick.
Sorry that I haven't actually tried this to see if it works or not ... but ...
Page 20 (and/or 26) of this document may be of help. The document is referenced here. In a nutshell it describes directories which contain cfg files which in turn contain lists of locations on disk which should be regarded as trusted. An installer for the application would then be responsible for creating appropriate .cfg files in the desired location (global or for the installing user).
The short answer is that if your swf is compiled with use-network to true, it isn't going to work.
Is it possible to compile a version with use-network to false? Or is it running on an Intranet that is closed off from the Internet and still communicating with the LMS?
It is possible. Please check whether the SWFs you are calling from the main SWF have the "Access local files only" property enabled or not.
Did you try to specify the authorized domain with:
System.security.allowDomain("www.yourdomain.com");
