I'm about at the end of my rope with this one, so please lend your ears and hands.
I've been trying to migrate a rather large IIS7 web server, and so far I'm appalled by the state of msdeploy for what I need to do. Enough on that; here's my current problem:
The server I'm migrating from has all the sites on a D:\ drive. It isn't possible for us to have a D:\ drive in the new environment, so I need to change all the site paths to C:\.
I obviously don't want to do this for each site manually, so I thought I'd edit the paths in applicationHost.config. However, IIS doesn't seem to care one bit: all the paths in the file now say C:\, yet IIS still tries to reference D:\.
When I look at site settings, it still says D:. Where in the WORLD is this value coming from?
It's on a 64-bit server, and I checked the OTHER applicationHost.config in \syswow64\, and that has magically changed to C:\ as well.
Any help, or perhaps a completely different way of migrating from D:\ to C:\ is greatly appreciated.
Thanks.
"Fixed" it by copying applicationHost.config to a 2003 box, editing it there, and copying it back. Apparently 2008 "pretends" to save your file (i.e. it shows my changes when I open it in Notepad) but reads it back from either some cached version or another file elsewhere.
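For what it's worth, the symptom is consistent with WOW64 file-system redirection: if the text editor used is a 32-bit process, its writes to %windir%\System32\inetsrv\config are silently redirected to the \SysWOW64\ copy, so the file IIS actually reads never changes. An alternative that avoids hand-editing the file entirely is to rewrite the physical paths through the IIS configuration API. A minimal PowerShell sketch, assuming the WebAdministration module is available (it ships with IIS 7.5; IIS 7.0 needs the separate PowerShell snap-in) and that every site simply moves from D:\ to the same path on C:\:

    Import-Module WebAdministration

    # Repoint every site whose physical path lives on D:\ to the same path on C:\
    Get-Website | ForEach-Object {
        $old = $_.physicalPath
        if ($old -like 'D:\*') {
            $new = 'C:' + $old.Substring(2)
            Set-ItemProperty "IIS:\Sites\$($_.Name)" -Name physicalPath -Value $new
        }
    }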
I want to copy everything (contents, code, build files, and configuration files) from an IIS 7.0 server that is running a live site to another machine that has IIS 8 installed.
The destination computer has other websites installed, so I don't want to disturb the configuration of those sites.
Do I first copy the code from the source, create a folder on the destination and copy the files there, and then follow up with the configuration settings?
Probably meant for Server Fault... but here goes:
1. Review/inventory the current application, e.g. the "config files" for any explicit settings that may have to do with paths, connections (db), certificates (SSL), and/or expected dependencies (assemblies, framework version/s, other web applications, etc.). Hopefully the application is documented and/or the original developers are handy to assist in this inventory process. The point is to first "know" what you're moving before doing anything; this is the most important step. The rest is more or less config.
2. Create the folder on the target machine. It doesn't have to be in C:\inetpub\wwwroot, but you'll have to modify permissions as necessary. Also, if there's an existing app in wwwroot, possible headaches can arise with settings inheritance.
3. Copy the files from source -> target.
4. IIS Manager -> set up the new "web site"/"Application" as necessary (AppPool, etc.); see the sketch after this list.
5. Test, test, test.
6. Fallback plan (if all goes to hell, you need to be able to "revert").
7. Applicable network stuff: DNS, etc. (the moment of truth).
8. Coffee. Lots. Pray you don't have to go to #6 :)
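Step 4 can also be scripted instead of clicked through in IIS Manager. A rough sketch using appcmd from PowerShell; the site name, path, and binding are hypothetical placeholders:

    # Hypothetical names, path and binding -- substitute your own
    $appcmd = "$env:windir\system32\inetsrv\appcmd.exe"

    & $appcmd add apppool /name:"MySitePool" /managedRuntimeVersion:"v4.0"
    & $appcmd add site /name:"MySite" /physicalPath:"C:\Sites\MySite" `
                       /bindings:"http/*:80:mysite.example"
    & $appcmd set app "MySite/" /applicationPool:"MySitePool"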
Hth...
I just started a new position, replacing a developer who left abruptly, on a project that is based on the Kentico CMS. I am completely unfamiliar with ASP.NET and Kentico, so the answer here needs to be tailored for a total beginner. I am familiar with other languages (PHP, Ruby, SQL, etc.) but have no idea where to begin with this.
So, what I want to do is copy everything from our production site (db and all) to my local machine so I can develop on it more easily. I have already exported the db into an SQL file, copied all the files in our Kentico instance folder into GitHub, and cloned the repository on my local machine. I assume that, since Kentico is already "set up", going through the installation process in their documentation is not the way to go about this.
Any help would be incredibly appreciated!
David, basically there are a few "pieces" to running Kentico locally. Since, as you mentioned, Kentico is already set up, you should have an easier go of it.
1. A database with the necessary Kentico tables (it sounds like you already have this).
2. The codebase (all of the code files that you copied to GitHub).
3. A valid license for any domain you want to run Kentico on. Was the site already public facing? Do you know what licenses you have, or can you log into the CMS Desk on the site that you copied everything from?
4. Set up IIS for your local website (a sketch follows this list). If you are unsure on this one I can explain further, but basically you need to add a new site, point it to the root code folder for your site, and set the domain to be a domain you have a Kentico license key for. You'll also need to change the app pool settings to "Integrated" mode (most likely) and also set the appropriate version of .NET (if it's a recent version of Kentico you'll want .NET 4.0).
5. Next you'll need to edit your hosts file to add the domain and point it to your localhost IP address. So add a line like "127.0.0.1 dev.yourdomain.com" or the equivalent.
6. Edit the web.config file so that your code can connect to your database. You will need to edit the connection string accordingly to point to the database on your machine.
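To make steps 4 and 5 concrete, here is a rough PowerShell sketch. The domain, folder, and pool name are hypothetical, and it assumes the WebAdministration module and an elevated prompt; step 6 is then just editing the connection string in web.config by hand.

    Import-Module WebAdministration

    # Hypothetical values -- use your own licensed domain and code folder
    $domain   = 'dev.yourdomain.com'
    $codeRoot = 'C:\Kentico\CMS'

    # Step 4: a new site on an Integrated-mode, .NET 4.0 application pool
    New-WebAppPool -Name 'KenticoLocal'
    Set-ItemProperty 'IIS:\AppPools\KenticoLocal' -Name managedRuntimeVersion -Value 'v4.0'
    Set-ItemProperty 'IIS:\AppPools\KenticoLocal' -Name managedPipelineMode -Value 'Integrated'
    New-Website -Name 'KenticoLocal' -PhysicalPath $codeRoot -Port 80 `
                -HostHeader $domain -ApplicationPool 'KenticoLocal'

    # Step 5: point the licensed domain at the local machine
    Add-Content "$env:windir\System32\drivers\etc\hosts" "127.0.0.1 $domain"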
Once you have done these steps, your site should start to run just as it had before. I didn't give great detail on all of these pieces so let me know what problems you encounter so I can further clarify. More information about the current situation would also help.
One other note I would make: if you need the client to be able to review your work, it will most likely be more efficient and easier for you to leave the original database on the web server, and (if possible) connect to it remotely from your local machine. Since almost any change you make will result in a database change in Kentico, I find it much easier to be working on 1 database for development with distributed codebases. Otherwise you will probably need to overwrite the other database with your changes constantly and this can be annoying. If you leave the database on the server and just connect remotely, you can just ftp (or use git) to push files to the server that you have edited locally.
I have the following scenario:
I publish a page which contains multiple binaries which is then received by an HTTP Receiver and Deployed using an in-process Deployer all hosted in IIS in a dedicated application pool running as the Local Service user.
The Page is stored in the Broker Database, and the binaries are published to the local file system using a path like "D:\Binaries\Preview".
The preview folder is shared to a domain user as a read-only share at something like \\machinename\PreviewBinaries so that the binaries can be displayed by the web application.
Nine times out of ten everything works fine, but occasionally publishing fails, and it seems to be because the binaries cannot be overwritten due to being locked by another process. I have used Process Monitor and other tools to try to establish what might be locking these files (to no avail). Sometimes I can manually delete the images, and then publishing works again. If I restart IIS on the server, I can always delete the files and publish.
Does anyone have any suggestions on what processes could be locking these images? Has anyone seen this issue before? Could there be any issues with the fact that I am publishing to a share? Or could SiteEdit 2009 possibly be locking these files, since this only seems to occur on our preview server while live (no SiteEdit) seems fine?
Thanks in advance
If you're on Windows 2008, you can try to delete the file from disk; the error dialog will then tell you what process has locked the file. But given that restarting IIS unlocks the file, it seems quite likely that it is IIS that keeps a lock on them.
I don't see how SiteEdit 2009 could cause a lock on these files. Given that you can have your preview server on another box, SiteEdit only talks to that server through HTTP. It never accesses the files on the preview server directly and not even through a CD API. Just regular requests to your web server, just like a visitor would.
Again, not a direct answer but I wanted to share this anyway:
I've seen a similar situation where I published Pages to the Broker Database and Binaries to the file system. When I changed the Identity of the Application Pool to Network Service this problem disappeared, and I haven't looked into it further.
OK, well it seems the offending code was in the Presentation Framework we are using. The framework used Response.TransmitFile(binaryPath) to asynchronously transmit the binaries to the clients. It seems that this puts a temporary lock handle on the binaries (even when they are on a read only share).
We have removed this line of code and modified the application to serve binaries in another way (we now rewrite the path so that IIS can transmit the files directly). This seems to have solved the issue, and it has improved site performance.
Thanks for all your suggestions, it helped me rule out all the things that were not causing the issue, so I was able to find the root cause.
Are there any anti-virus or indexing services running? These tend to take very short-lived locks at just the moment you don't want them to. Particularly with anti-virus, this typically happens just as one process relinquishes its lock and just before your other process tries to take one. If this is the issue, then setting up some exclusion directories should help.
I see you have used Process Monitor, but have you tried Sysinternals Process Explorer? "Find -> Find Handle or DLL" is pretty useful for this kind of thing. Or if you prefer a command-line tool, Sysinternals also makes handle.exe, which dumps everything out for you; a sketch follows.
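For reference, a minimal example of the handle.exe approach, assuming handle.exe (from the Sysinternals suite) is on the PATH and the prompt is elevated; the folder is the one from the question:

    # List every open handle whose path contains the binaries folder;
    # the output includes the owning process name and PID
    & handle.exe -accepteula "D:\Binaries\Preview"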
We have an ASP.NET web site that is deployed on several IIS servers. The site is compile-on-demand as opposed to a pre-compiled web application.
Normally deployments go fine but every now and again we get a 401 for one of the deployed pages on one of the servers. There is nothing special about which page or which server apart from the fact that it's generally the higher traffic pages that it happens to.
The only way to rectify this is to deploy the same page again.
The ACLs look fine on the files themselves so the thought is that there is a file locking issue in the Temporary ASP.NET Files folder when the specific page is re-compiled.
Has anyone seen this before or have any suggestions how to avoid this?
Note: This only seems to have happened since we moved to .NET 4.0.
As far as I can tell, we are getting a 401.3, Denied by resource ACL (http://support.microsoft.com/kb/907273), but I have not been able to confirm this.
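One way to gather evidence for the 401.3 theory is to dump the ACLs on both the deployed page and its shadow copies under Temporary ASP.NET Files. A sketch; the site path is hypothetical:

    # ACL on the deployed page itself (path is hypothetical)
    icacls 'C:\inetpub\wwwroot\MySite\ProblemPage.aspx'

    # ACLs on the shadow copies ASP.NET compiles to (the .NET 4.0, 64-bit location)
    icacls "$env:windir\Microsoft.NET\Framework64\v4.0.30319\Temporary ASP.NET Files" /t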
Those kinds of locks have always been a problem with live-site deployment. The reason it's hard to replicate is that you are mid-request when copying/compiling on the server, and this ends up confusing IIS.
We operate a Blue/Green deployment strategy on a four-tier architecture which has a web site over four servers at the top tier. Due to the complexity the architecture introduced for deployments, we needed a way to deploy without disturbing any traffic to the "live" site. Following Fowler's advice, but not quite in the same way, we came up with a solution that means we have two sites on each server (a blue and a green, or in our case site A and site B). The live site has the appropriate host header, and once we have deployed to and tested the non-live site, we flip the host headers of the two sites so that what was live is now non-live, and vice versa. The effect is a robust deployment that can be done in hours and with the highest level of confidence.
This of course complicates your configuration and deployment slightly, but it's worth the effort. It probably goes without saying that you want to script both the deployment and the host-header swap; a sketch of the swap follows.
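A sketch of the host-header flip using the WebAdministration module; the site names and domains are hypothetical stand-ins for your blue/green pair:

    Import-Module WebAdministration

    # Site A is live on www.example.com; Site B has just been deployed and
    # tested on staging.example.com. Swap the two host headers to go live.
    Set-WebBinding -Name 'SiteA' -BindingInformation '*:80:www.example.com' `
                   -PropertyName HostHeader -Value 'staging.example.com'
    Set-WebBinding -Name 'SiteB' -BindingInformation '*:80:staging.example.com' `
                   -PropertyName HostHeader -Value 'www.example.com'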
When I deploy to a server, I bring the site down for a minute (or however long the deployment takes); it may be down anyway during this time as pages are recompiled, so it is not too much of a hit. You can do this by creating a file in the root of the app called app_offline.htm (it needs to be at least 512 bytes long). Once that file is created you can copy the resources to the folder knowing there will not be any locking issues; when the copy is complete, remove the app_offline.htm file. A sketch follows.
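A minimal sketch of that sequence, with hypothetical paths and a plain file copy standing in for the real deployment:

    $site = 'D:\Sites\MySite'   # hypothetical site root

    # Take the app offline; pad past 512 bytes so browsers' "friendly
    # error pages" don't swallow the message
    $msg = '<html><body>Down for maintenance.</body></html>'
    Set-Content "$site\app_offline.htm" $msg.PadRight(600)

    Copy-Item 'C:\Build\MySite\*' $site -Recurse -Force   # deploy without locking issues
    Remove-Item "$site\app_offline.htm"                   # bring the app back up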
For those who want to achieve a .NET website deployment without these issues, one option is to copy the new website files into a new folder first (not the active website folder). Then you just change IIS to point to the new folder after all copying is complete.
This can be done in a single server environment for those of us on more limited resources without multiple servers per website.
At my work we write PowerShell scripts to deploy websites. The PowerShell script creates a new directory with a timestamp, copies the new deployment there, then tells IIS to point the website at the new directory (leaving the old directories "orphaned" but still there).
If we really messed something up, we can simply revert by pointing IIS back at the previous date stamp directory. Otherwise if everything tests ok, we can delete the old folder.
This technique works well because you are never writing over a file while it is in use, and it still gives you zero downtime. The only effect you will see is the normal .NET "warm up" that occurs any time you change the code-behinds or assemblies. A sketch of the approach follows.
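Something like the following captures the idea; the folder layout and site name are made up:

    Import-Module WebAdministration

    # A new timestamped folder per deployment; old ones stay behind for rollback
    $stamp  = Get-Date -Format 'yyyyMMdd-HHmmss'
    $target = "D:\Deployments\MySite\$stamp"

    Copy-Item 'C:\Build\MySite' $target -Recurse   # fresh files IIS has never touched
    Set-ItemProperty 'IIS:\Sites\MySite' -Name physicalPath -Value $target

Rolling back is then just pointing physicalPath at the previous timestamped folder.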
I had several answers suggesting a new environment to deploy. This is something we have been considering for the long term but it's hard to justify the extra work when we regularly deploy only one or two files without a problem. I was really more interested in finding out what is actually happening and why.
In terms of a workaround, and this might sound obvious after the fact: a simple app pool recycle solves the permissions issue, and it is much easier than testing for the issue and redeploying the file until the problem goes away. The recycle itself is a one-liner, sketched below.
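A sketch of the recycle, assuming the WebAdministration module; the pool name is hypothetical:

    Import-Module WebAdministration
    Restart-WebAppPool 'MySitePool'   # recycle the pool serving the affected site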
I have an ongoing situation where, when I try to upload files via FTP, I get an error that the DLL is locked and cannot currently be overwritten. This only happens to DLLs; normal files (aspx, ascx, css, etc.) can be overwritten fine.
Our Setup
We have two web servers that are kept in sync via DFS, which is managed from a separate server.
They all belong to the same domain.
They all do internal transfers on gigabit Ethernet cards on a private network.
Our Problem
We develop in VS2010 and build the site we are working on. When it gets to a level where it needs to be checked on the server, it's hit and miss whether we can overwrite the DLLs in the BIN folder. I only started experiencing this issue when we migrated from our old, unreliable sync tool to the super Windows 2008 DFS tool. It's a good tool and works well, but it is the only thing I can think of that could be causing this issue.
To actually overwrite the file I need to take down all the sites that are using this base level code which then releases the lock on the DLL and I can upload it.
I come to you today in desperation; I am fed up with having to take sites down every so often just so I can upload a DLL.
It is my understanding that ASP.NET caches the DLLs in a temporary folder, so god knows why the lock remains on the DLL itself in the BIN folder.
The weird thing is, this does not always happen; it can go for weeks without doing it, or, like recently, it's around every day that I have to take the IIS sites down so I can upload.
As of writing this, I cannot upload to FTP even though I have taken the sites down.
Could anyone please shed any light on this so I can actually just get on with my work rather than messing with this every ten minutes? It's bad enough that VS2010 is so unstable and Visual SourceSafe only checks in what it wants, without this being an issue as well!
Try using Unlocker to free the handles.
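If Unlocker doesn't pan out, a lighter-weight variant of taking whole sites down is to stop only the application pools that load the shared DLL, upload, and start them again. A sketch with hypothetical pool names:

    Import-Module WebAdministration

    $pools = 'SiteAPool', 'SiteBPool'   # the pools that load the shared DLL

    $pools | ForEach-Object { Stop-WebAppPool $_ }
    # ... upload the DLL over FTP while the worker processes are stopped ...
    $pools | ForEach-Object { Start-WebAppPool $_ }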