Updating a Classic ASP website without interrupting service - asp-classic

A couple of questions:
1) How can I update a Classic ASP website/page without interrupting service (users getting an error or service unavailable message) or shutting the website down temporarily?
2) When updating/restoring a MSSQL DB via SQL Server Management Studio, will the website users get an error message?
Thanks in advance.

A smart practice is to use at least one separate development environment with the same setup as your production environment, and to debug all changes there to make sure they work. Once the entire site is running and tested in that identical environment, you should be able to simply move the files into production and they should work. This model is only effective if you can actually keep the environments as close to identical as possible.
When updating/restoring a MSSQL DB
Be careful with your terminology; UPDATE and RESTORE are two very different commands.
If the database is locked by the changes being made, it will be inaccessible to users and may cause error messages depending on your IIS and code setup. Scheduling a maintenance period and blocking user access to any pages that access the database will help you avoid messy errors and avoid revealing information about your infrastructure while the changes are being made.
It seems like you might want to do some basic research on both development and databases to make sure you understand what you're doing and can cover all of your bases. Looking up commands like RESTORE and UPDATE and using them correctly is crucial.
For example, when you rewrite one or more of your website files via FTP, in that very moment when rewriting is taking place, users will get a 500 Service Unavailable error. How can I avoid this?
This really shouldn't happen, but you can avoid any window where a half-written file is served by uploading the files to a different folder first and then syncing them into place with a diff tool such as WinMerge (which also helps you keep track of changes and revert quickly) when the upload is done.

Related

Updating code on production server when using Go

When I develop and update files on a production server with PHP I just copy the files on the fly and everything seems to work without interrupting the server.
But to update the code of a Go server and application I would need to kill the server, copy the src files to the server, run go install, and then start the server again. This would interrupt the service, and if I do it quite often it is going to look very bad to the users of the service.
How can I update files without the downtime when using Go with Go's http server?
PHP is an interpreted language, which means you provide your code in source format and the PHP interpreter will read it and execute it (it may create a more compact binary form so that it doesn't have to analyze the source again when needed).
Go is a compiled language: it compiles into a native executable binary. Going further, the binary is statically linked, which means all the code and libraries your app refers to are compiled and linked in when the executable is created. This implies you can't just "drop in" new Go modules into a running application.
You have to stop your running application and start the new version. You can however minimize the downtime: only stop the running application when the new version of the executable is already created and ready to be run. You may choose to compile it on a remote machine and upload the binary to the server, or upload the source and compile it on the server, it doesn't matter.
With this you could decrease the downtime to a maximum of a few seconds, which your users won't notice. Also, you shouldn't be updating every hour; you can't really achieve significant updates in just an hour of coding. You could schedule updates daily (or even less frequently), and you could schedule them for hours when your traffic is low.
If even a few seconds downtime is not acceptable to you, then you should look for platforms which handle this for you automatically without any downtime. Check out Google App Engine - Go for example.
The grace library will allow you to do graceful restarts without annoyance for your users: https://github.com/facebookgo/grace
Yet in my experience restarting Go applications is so quick that, unless you have a high-traffic website, it won't cause any trouble.
First of all, don't do it in that order. Copy and install first. Then you could stop the old process and run the new one.
If you run multiple instances of your app, then you can do a rolling update, so that when you bounce one server, the other ones are still serving. A similar approach is to do blue-green deployments, which has the advantage that the code your active cluster is running is always homogeneous (whereas during a rolling deploy, you'll have a mixture until they've all rolled), and you can also do a blue-green deployment where you normally have only one instance of your app (whereas rolling requires more than one). It does however require you to have double the instances during the blue-green switch.
One thing you'll want to take into consideration is in-flight requests -- you may want to make sure that in-flight requests continue to go to old-code servers until they're finished.
You can also look into Platform-as-a-Service solutions, that can automate a lot of this stuff for you, plus a whole lot more. That way you're not ssh'ing into production servers and copying files around manually. The 12 Factor App principles are always a good place to start when thinking about ops.

Quartz.NET: in the Asp.Net Web vs Console Application

I need to run 4 background jobs for cleaning temp files and processing some files. I have chosen Quartz.NET for the job.
I have an ASP.NET website which accepts uploaded files that will be processed by the Quartz jobs at night.
First I thought about making a console application for the Quartz jobs, keeping the website and the jobs totally decoupled.
But then I've seen that I will need some config values (connection string and paths to files) that are in the ASP.NET web.config. So a question came to my mind:
Should I run the jobs through the ASP.NET instance or should I do this in a console application?
Furthermore, I want that when the Quartz jobs start running, the website shows a special page (like "We are processing the files...").
What I care about most is performance; I don't want the website to be affected by the Quartz jobs, nor the jobs' performance affected by the website.
So, what should I do? Have you done something like this and can you give me any advice?
Should I run the jobs through the ASP.NET instance or should I do this in a console application?
If you want to have to manually trigger them each night, sure. But a console application run by the host system's task scheduler seems like a more automated solution. A web application is more of a request/response system; it's not really suited to periodic or long-running actions. Scheduling some sort of background operation on the host, such as a scheduled console application or a Windows service, would serve that purpose better.
Note that if it truly needs to be unattended and run even when nobody is logged in to the server console, a Windows service may be a better approach than a console application.
I've seen that I will need some config values (connection string and paths to files) that are in the ASP.NET web.config
Console applications have App.config files, which serve the same purpose. You can use that.
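For illustration, a minimal sketch of such a console host, assuming the Quartz.NET 2.x-style API; the job class, connection string name, app setting and cron schedule below are placeholders, not anything from your project:

```csharp
using System;
using System.Configuration;   // App.config access (reference System.Configuration)
using Quartz;
using Quartz.Impl;

public class CleanupJob : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        // Hypothetical: read the same settings the website uses, but from App.config.
        string connectionString =
            ConfigurationManager.ConnectionStrings["MainDb"].ConnectionString;
        string uploadPath = ConfigurationManager.AppSettings["UploadPath"];

        // ... clean temp files / process the uploaded files here ...
    }
}

public static class Program
{
    public static void Main()
    {
        IScheduler scheduler = new StdSchedulerFactory().GetScheduler();
        scheduler.Start();

        IJobDetail job = JobBuilder.Create<CleanupJob>()
            .WithIdentity("nightlyCleanup")
            .Build();

        // Placeholder schedule: run every night at 02:00.
        ITrigger trigger = TriggerBuilder.Create()
            .WithCronSchedule("0 0 2 * * ?")
            .Build();

        scheduler.ScheduleJob(job, trigger);

        Console.WriteLine("Scheduler running. Press Enter to exit.");
        Console.ReadLine();
        scheduler.Shutdown(true);   // wait for running jobs to finish
    }
}
```

If you host this as a Windows service instead, the scheduling code stays the same; only the Main/host wrapper changes.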
I want that when the Quartz jobs start running, the website shows a special page
You definitely want to keep the two decoupled, but you may be able to accomplish this easily enough. Maybe have some sort of status flag in the database which indicates whether any particular record is "currently being processed". The website can simply look for any records with that flag when a page loads and display the message.
There are likely a couple of different ways to synchronize status here; it doesn't really matter which you choose. What does matter is that the systems remain decoupled and that any status which is statically persisted is handled carefully, so that an errant process can't leave an incorrect status behind. (For example, a background task sets a status of "processing" and then fails in some way. The website would forever indicate that it's processing.)
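As a minimal sketch of that idea, assuming a hypothetical UploadBatch table with a Status column (the table, column and connection string names are placeholders):

```csharp
using System.Configuration;
using System.Data.SqlClient;

public static class ProcessingStatus
{
    // Called by the website (e.g. in Page_Load) to decide whether to show
    // the "We are processing the files..." page.
    public static bool IsProcessing()
    {
        string connectionString =
            ConfigurationManager.ConnectionStrings["MainDb"].ConnectionString;

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "SELECT COUNT(*) FROM UploadBatch WHERE Status = 'Processing'", connection))
        {
            connection.Open();
            return (int)command.ExecuteScalar() > 0;
        }
    }
}
```

The Quartz job would set Status to 'Processing' when it starts and to 'Done' (or 'Failed') in a finally block, so a crash can't leave the flag stuck at "processing".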

Deployed binaries getting locked during SDL Tridion 2011 SP1 publishing

I have the following scenario:
I publish a page which contains multiple binaries which is then received by an HTTP Receiver and Deployed using an in-process Deployer all hosted in IIS in a dedicated application pool running as the Local Service user.
The Page is stored in the Broker Database, and the binaries are published to the local file system using a path like "D:\Binaries\Preview".
The preview folder is shared to a domain user as a read-only share at something like \\machinename\PreviewBinaries so that the binaries can be displayed by the web application.
Nine times out of ten everything works fine, but occasionally publishing fails, and it seems to be because the binaries cannot be overwritten due to being locked by another process. I have used Process Monitor and other tools to try and establish what might be locking these files (to no avail). Sometimes I can manually delete the images, and then publishing works again. If I restart IIS on the server I can always delete the files and publish.
Does anyone have any suggestions on what processes could be locking these images? Has anyone seen this issue before? Could there be any issues with the fact that I am publishing to a share? Or could SiteEdit 2009 possibly be locking these files, as the problem only seems to occur on our preview server while live (which has no SiteEdit) seems fine?
Thanks in advance
If you're on Windows 2008, you can try and delete the file from disk. It will then tell you what process has locked the file. But given that restarting IIS unlocks the file, it seems quite likely that it is IIS that keeps a lock on them.
I don't see how SiteEdit 2009 could cause a lock on these files. Given that you can have your preview server on another box, SiteEdit only talks to that server through HTTP. It never accesses the files on the preview server directly and not even through a CD API. Just regular requests to your web server, just like a visitor would.
Again, not a direct answer but I wanted to share this anyway:
I've seen a similar situation where I published Pages to the Broker Database and Binaries to the file system. When I changed the Identity of the Application Pool to Network Service this problem disappeared, and I haven't looked into it further.
OK, well it seems the offending code was in the presentation framework we are using. The framework used Response.TransmitFile(binaryPath) to transmit the binaries to the clients asynchronously. It seems that this puts a temporary lock handle on the binaries (even when they are on a read-only share).
We have removed this line of code and modified the application to serve binaries in another way (we now rewrite the path so that IIS can transmit the files directly). This seems to have solved the issue, and improved site performance.
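For anyone facing something similar, here is a minimal sketch of the kind of change described, assuming the IIS 7+ integrated pipeline; the URL scheme, virtual directory name and paths are illustrative only, not the actual framework code:

```csharp
// Global.asax.cs (sketch)
using System;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        // Hypothetical URL scheme: /binary/<filename> maps onto the shared preview
        // folder, which is exposed to IIS as the virtual directory "/Preview".
        // Rewriting here, before a handler is chosen, lets the static-file handler
        // transmit the file, so our code never opens it with Response.TransmitFile
        // and never holds a handle on it.
        string path = Request.AppRelativeCurrentExecutionFilePath;
        if (path.StartsWith("~/binary/", StringComparison.OrdinalIgnoreCase))
        {
            string fileName = VirtualPathUtility.GetFileName(path);
            Context.RewritePath("~/Preview/" + fileName);
        }
    }
}
```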
Thanks for all your suggestions, it helped me rule out all the things that were not causing the issue, so I was able to find the root cause.
Are there any anti-virus or indexing services running? These tend to take very short-lived locks at just the moment you don't want them to. With anti-virus in particular, this typically happens just as one process relinquishes its lock and just before your other process tries to take one. If this is the issue, then setting up some exclusion directories should help.
I see you have used Process Monitor, but have you tried Sysinternals Process Explorer? "Find -> Find Handle or DLL" is pretty useful for this kind of thing. Or if you prefer a command-line tool, Sysinternals also make handle.exe, which dumps everything out for you.

Random 401 errors on an auto compiled asp.net site when updated pages are pushed

We have an asp.net web site that is deployed on several IIS servers. The site is compile-on-demand as opposed to a pre-compiled web application.
Normally deployments go fine but every now and again we get a 401 for one of the deployed pages on one of the servers. There is nothing special about which page or which server apart from the fact that it's generally the higher traffic pages that it happens to.
The only way to rectify this is to deploy the same page again.
The ACLs look fine on the files themselves so the thought is that there is a file locking issue in the Temporary ASP.NET Files folder when the specific page is re-compiled.
Has anyone seen this before or have any suggestions how to avoid this?
Note: This only seems to have happened since we moved to .NET 4.0.
As far as I can tell we are getting a 401.3 "Denied by resource ACL" (http://support.microsoft.com/kb/907273),
But I have not been able to confirm this.
Those kinds of locks have always been a problem with live-site deployment. The reason it's hard to replicate is that you are mid-request when copying/compiling on the server, and this ends up confusing IIS.
We operate a blue/green deployment strategy on a 4-tier architecture which has a web site over 4 servers at the top tier. Because of the complexity the architecture introduced for deployments, we needed a way to deploy without disturbing any traffic to the "live" site. Following Fowler's advice, but not quite in the same way, we came up with a solution in which we have 2 sites on each server (a blue and a green, or in our case site A and site B). The live site has the appropriate host header, and once we have deployed to and tested the non-live site, we flip the host headers of the 2 sites so that what was once live is now the non-live site, and vice versa. The effect is a robust deployment that can be done in hours and with the highest level of confidence.
This of course complicates your configuration and deployment slightly, but it's worth the effort. I guess it kind of goes without saying that you want to script both the deployment, and the host header swapping.
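If you script the host header swap against the IIS 7+ managed API, a minimal sketch might look like this; the site names and host headers are placeholders, and you would typically wrap both directions of the swap plus the deployment itself in the same script:

```csharp
using Microsoft.Web.Administration;   // reference Microsoft.Web.Administration.dll

public static class HostHeaderSwap
{
    public static void Swap()
    {
        using (var manager = new ServerManager())
        {
            Site siteA = manager.Sites["AcmeSiteA"];   // placeholder site names
            Site siteB = manager.Sites["AcmeSiteB"];

            // Give the previously non-live site the live host header, and vice versa.
            siteA.Bindings.Clear();
            siteA.Bindings.Add("*:80:staging.example.com", "http");

            siteB.Bindings.Clear();
            siteB.Bindings.Add("*:80:www.example.com", "http");

            manager.CommitChanges();
        }
    }
}
```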
When I deploy to a server I bring the site down for a minute (or however long the deployment takes). It may be down anyway during this time as pages are recompiled, so it is not too much of a hit. You can do this by creating a file in the root of the app called app_offline.htm (it needs to be at least 512 bytes in length). Once that file is created you can copy the resources to the folder knowing there will not be any locking issues; when the copy is complete, remove the app_offline.htm file.
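As a rough sketch of that sequence (the folder paths are placeholders; app_offline.htm is the file name ASP.NET watches for, padded past 512 bytes so browsers don't replace the response with a "friendly" error page):

```csharp
using System;
using System.IO;

public static class OfflineDeploy
{
    public static void Deploy(string siteRoot, string packageFolder)
    {
        string offlineFile = Path.Combine(siteRoot, "app_offline.htm");

        // Taking the app offline: ASP.NET unloads the app domain and serves
        // this page for every request while the file exists.
        string message = "<html><body>Site maintenance in progress...</body></html>";
        File.WriteAllText(offlineFile, message.PadRight(600));   // > 512 bytes

        try
        {
            // Copy the new files over the site (no locking issues now).
            foreach (string source in Directory.GetFiles(packageFolder, "*", SearchOption.AllDirectories))
            {
                string relative = source.Substring(packageFolder.Length).TrimStart('\\');
                string target = Path.Combine(siteRoot, relative);
                Directory.CreateDirectory(Path.GetDirectoryName(target));
                File.Copy(source, target, true);
            }
        }
        finally
        {
            // Bring the site back online.
            File.Delete(offlineFile);
        }
    }
}
```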
For those who want to achieve a .NET website deployment without these issues, one option is to copy the new website files into a new folder first (not the active website folder). Then you just change IIS to point to the new folder after all copying is complete.
This can be done in a single server environment for those of us on more limited resources without multiple servers per website.
At my work we write PowerShell scripts to deploy websites. The PowerShell script creates a new directory with a timestamp, copies the new deployment there, then tells IIS to point the website to the new directory (leaving the old directories "orphaned" but still there).
If we really messed something up, we can simply revert by pointing IIS back at the previous timestamped directory. Otherwise, if everything tests OK, we can delete the old folder.
This technique works well because you are never writing over a file while it is in use, and it still results in essentially zero downtime. The only effect you will see is the normal .NET "warm up" that occurs any time you change the code-behinds or assemblies.
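The repointing step can also be scripted against the IIS managed API rather than PowerShell; a minimal sketch, assuming IIS 7+, Microsoft.Web.Administration, and placeholder site and folder names:

```csharp
using System;
using System.IO;
using Microsoft.Web.Administration;   // reference Microsoft.Web.Administration.dll

public static class RepointSite
{
    public static void PointToNewRelease(string siteName, string releasesRoot)
    {
        // e.g. D:\Sites\Acme\Releases\20140131-2215 (placeholder layout)
        string newFolder = Path.Combine(releasesRoot, DateTime.Now.ToString("yyyyMMdd-HHmm"));
        Directory.CreateDirectory(newFolder);
        // ... copy the new deployment into newFolder here ...

        using (var manager = new ServerManager())
        {
            // Repoint the site's root virtual directory at the new folder.
            var root = manager.Sites[siteName].Applications["/"].VirtualDirectories["/"];
            root.PhysicalPath = newFolder;
            manager.CommitChanges();
        }
    }
}
```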
Several answers suggested deploying to a new environment. This is something we have been considering for the long term, but it's hard to justify the extra work when we regularly deploy only one or two files without a problem. I was really more interested in finding out what is actually happening and why.
In terms of a workaround, and this might sound obvious after the fact, a simple app_pool recycle solves the permissions issue and is much easier than testing for the issue and redeploying the file until the problem goes away.

SCM for ASP.net

As part of my overall development practices review I'm looking at how best to streamline and automate our ASP.net web development practices.
At the moment, our process goes something like this:
Designer builds frontend as static HTML/CSS on a network share. This gets tweaked until signed off. (e.g. http://myserver/acmesite_design)
Once signed off, developer takes over and copies over frontend HTML/CSS to a new directory on the same server (e.g. http://myserver/acmesite_development)
Multiple developers work on local copy until project is complete.
Developer publishes code to an external publicly accessible server for a client to review/signoff.
Edits made locally based on feedback.
Republish to external server.
Signoff
Developer publishes to live public server
What goes wrong? Lots of things!
Version Control — this is obviously a must and is being introduced
Configuration errors — many many times, there are environment specific paths and variables (such as DB names, image upload directories, web server paths etc. etc.) which incorrectly get copied from local to staging to live etc. etc. with very embarrassing results.
I'm pretty confident I've got no. 1 under control. What about configuration management? Does anyone have any advice as to how best to manage an application's structure within ASP.NET apps to minimize these kinds of problems?
I found that using SVN, NAnt and NUnit with CruiseControl.NET solves a lot of the issues you describe. I think it works well for small groups and it's all free; you just need to learn how to use them.
CruiseControl.net helps you put together builds and continuous integration.
Use NAnt or MSBuild to do different environment builds (DEV, TEST, PROD, etc).
http://confluence.public.thoughtworks.org/display/CCNET/Welcome+to+CruiseControl.NET
You got the most important part right. Use version control. Subversion is a good choice.
I usually store configuration along with the site; i.e. when coding a PHP-based site I have a file named config.php-dist. If you want the site to work at all you'll have to copy + edit in all the required parameters (this avoids storing passwords in version control). The -dist file should have reasonable defaults.
Upload directories should be relative if possible; actually, all directories should be relative. I'm not experienced in ASP.NET, but if it's anything like PHP the current directory is always the directory of the file being requested. If you channel all requests through a single file (e.g. index.asp), then this can even be found programmatically. Or you could find it programmatically by using the equivalent of dirname(__FILE__) in your configuration file.
I also recommend installing IIS (or whatever web server you are using) on all development workstations (including the designers'). It makes life easier as no one can step on each other's toes. All you have to do is add test hosts to the hosts file (\windows\system32\drivers\etc\hosts, iirc) in addition to adding a site to the local IIS. This plays well with version control (checkout, add site to IIS and hosts file, edit edit edit, commit).
One thing that really helps is making sure you keep your paths relative where you can and centralise them where you can't. When I've been working with ASP.NET I have tended to use web.config to store any configuration and path-related data that can't be found programmatically. It is quite possible to find information like your current application path programmatically through the Request object; it's worth looking in some detail at what the environment makes available to you.
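For example, a few of the values ASP.NET exposes for this (a quick sketch; the "UploadDir" app setting is a placeholder):

```csharp
using System.Configuration;
using System.Web;

public static class AppPaths
{
    public static string ResolveUploadDir(HttpRequest request, HttpServerUtility server)
    {
        // Physical root folder of the application on this machine.
        string physicalRoot = HttpRuntime.AppDomainAppPath;

        // Virtual path of the application (e.g. "/" or "/acmesite").
        string virtualRoot = request.ApplicationPath;

        // Resolve an app-relative path to a physical one, wherever the app is deployed.
        return server.MapPath("~/" + ConfigurationManager.AppSettings["UploadDir"]);
    }
}
```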
One way to make sure you don't end up with something that depends on the path name is to have a continuous integration server execute your test suite against your application. Each time this happens you create a random file path. As soon as someone introduces a dependency on the file path, the build will fail.
