Web Deploy 3.0 very slow on large ASP.NET MVC website

I am using Web Deploy over the web for a large website, and it is redeploying all of the image files on every single deployment, which takes quite a while.
I am considering copying the entire web deployment package to the server first and deploying from there, rather than deploying remotely from a build server.
Is there a setting for Web Deploy so that the package does not overwrite the image files on every deploy?

MSDeploy only deploys files that it deems to have changed. By default, this involves comparing the files' Last Modified timestamps. If your source control / build process causes these to be updated on every build, then every file will look changed and will be redeployed.
There is an alternative. If you pass -useCheckSum to the command line, it will compare the contents of the files (via a checksum).
It would be my preference to fix the issue that's causing your last modified dates to change, though, as -useCheckSum is obviously going to be slower.
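For illustration, a content sync that uses checksum comparison might look like the following (the paths, site name, server URL, and credentials are placeholders, not values from the question):

msdeploy.exe -verb:sync ^
    -source:contentPath="C:\Build\MySite" ^
    -dest:contentPath="Default Web Site/MySite",computerName="https://deployserver:8172/msdeploy.axd",userName="deployUser",password="********",authType="Basic" ^
    -useCheckSum

With -useCheckSum, unchanged image files are skipped even if their timestamps were touched by the build, at the cost of hashing every file on both sides.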

Related

Deploy large artifact for static site

I'm developing a website backed by a static site generator that will build about 100K static HTML pages. Currently, my workflow is to build the project on my local machine and use an FTP tool to upload the output folder (about 40 GB) to a remote production server, which is a long and painful process that can take about 24 hours.
I'm wondering if there's a recommended way to set up a better build & deployment process to make it faster and more automated?
With an extremely large number of pages, build times have long been the main pain point of static-site generators. The solution is to defer generating some pages from build time to request time.
For example, you can statically generate only the top 10,000 most-requested pages at build time to keep your build times low. Then, at request time, you can use Next.js with Incremental Static Regeneration to build the remaining pages.
Let's say a request comes in for one of the other 90,000 pages you haven't statically generated. Instead of being served a pre-rendered page, that first request hits the server to fetch the data and generate the static page. Then that page is cached. When another person visits it, they will see the static page (which is much faster than talking directly to the server).
You can also invalidate the cache using the revalidate flag. For example, you could make your page fetch new information every minute using revalidate: 60.
Deploying a Next.js app to Vercel using this setup should reduce your build & deploy times to less than 10 minutes, while still creating a performant static site. Simply git push to your repository and the GitHub integration will build and deploy your application for you. No more FTP!
Gatsby recently added a feature to address this (or to try to), called incremental builds. Basically, it caches all your data and rebuilds and redeploys only the files affected by data or code changes.
In practice, though, it seems to be a feature restricted to Gatsby Cloud and similar hosted platforms, and it is not available on a custom deployment server. Here's an example of achieving it on Netlify.
Before these recent changes in Gatsby's policy, you only needed to add an environment variable to your build command:
GATSBY_EXPERIMENTAL_PAGE_BUILD_ON_DATA_CHANGES=true gatsby build
Keep in mind that this option is not suitable for all deployment servers: if you regenerate your /public folder from scratch on every build, it won't work.
Another option is to use a huge Amazon machine for the build and kill it once the deploy is done.

Temporary ASP.NET files & TeamCity

Our TeamCity agent machines have been struggling for disk space recently. I did a little snooping on each of the machines and found that the Temporary ASP.NET Files folder in the .NET installation directory was taking up more than 10 GB of space on each box, with the individual folders inside it around 5 MB each.
C:\Windows\Microsoft.NET\Framework\v4.0.30319\Temporary ASP.NET Files
I've done my research on the subject and I know that the files are a by-product of ASP.NET's dynamic compilation, and I also understand how IIS uses them for request optimisation (see: Understanding ASP.NET Dynamic Compilation).
What I don't understand is why no one else is complaining about how these files are taking up disk space on their build servers when they really only need to be used on their web servers.
Surely someone out there has run into this problem before, can anyone offer me a solution other than
Disabling dynamic compilation (outlined here)
Doing a brute force scheduled job deletion (outlined here)
These files are created when a website is actually run under IIS. They are not created when it is built and should not be on your build server. I am using TeamCity to build websites and do not have this issue. The files are created on the webservers of course, but not the build server.
Are you launching the website (for UI unit testing) perhaps?
In the end I had to settle for a PowerShell script that runs once a month as a scheduled task. I followed Bredan's process outlined in this SysAdminSpot post, with only a few modifications.
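The result is roughly along these lines (a simplified, hypothetical sketch rather than the actual script from that post; the framework paths and the 30-day age threshold are assumptions):

# Remove dynamic-compilation output that hasn't been touched in 30 days on the build agent.
$tempDirs = @(
    'C:\Windows\Microsoft.NET\Framework\v4.0.30319\Temporary ASP.NET Files',
    'C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Temporary ASP.NET Files'
)
foreach ($dir in $tempDirs) {
    if (Test-Path $dir) {
        Get-ChildItem $dir -Directory |
            Where-Object { $_.LastWriteTime -lt (Get-Date).AddDays(-30) } |
            Remove-Item -Recurse -Force -ErrorAction SilentlyContinue
    }
}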

Rapid CSS Testing with JSP

I am new to web development and am unfamiliar with some of the best methods for testing front-end changes. On my current project I am using WebLogic to deploy my JSP project through Eclipse.
The webpages build fine and everything works, but currently I am restarting my localhost server every time I make a change to the CSS or the JSP, and a single round trip can take 3-5 minutes. Is there a more efficient way of testing the CSS other than restarting my server every time? What about JSP?
Edit (follow-up):
For anyone who stumbles across this: after reading up on the different staging methods as #better_use_mkstemp suggested, I couldn't figure out how to explicitly configure Eclipse for stage or nostage deployment, but I did find some nifty things.
You can do an incremental publish (expand the drop-down in the Servers view in Eclipse, right-click the project that is deployed on a running server, and click "Incremental Publish"; this pushes only what has changed since the last publish), which helps a bit.
The other thing I found was an automatic deploy when a file changes, but that had it pushing while I was still editing the file which was a bit annoying. Using this feature you can also have it auto deploy each time there is a new version, but I didn't play around with that since we're not changing the version number yet.
Check how you are deploying your application. If you deploy using nostage mode, the server(s) will keep looking in the original deployment directory and will automatically detect jsp changes for refreshes.
This is as opposed to stage mode, where the original war/ear deployment is actually copied to each individual server.
Read more here: http://docs.oracle.com/cd/E11035_01/wls100/deployment/deploy.html#wp1027044
Sounds like if you keep dropping your updated deployment files into the same directory with nostage, you won't need a restart.
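For reference, the same nostage deployment from the command line looks roughly like this (the admin URL, credentials, and names are placeholders):

java weblogic.Deployer -adminurl t3://localhost:7001 -username weblogic -password <admin-password> -deploy -nostage -name myapp -source /path/to/myapp -targets myserver

With -nostage the server reads the application from that source directory in place, so updated JSP and CSS files dropped into it are picked up without a full redeploy.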

What is the difference between copying a whole website to the server and using tools like MSDeploy?

Maybe I'm totally outdated, but for the last four years I've been using a simple FTP upload to publish new websites, without even building them within Visual Studio. Just a bunch of ASPX and CS files, as they sit in Visual Studio.
I do understand that compiling the project gives me some security defence, so people who have access to the server won't be able to read those files in text editors, and I will avoid first-time compilation, but is that so important?
I mean, if you have access to the server you can always do a lot more harm than just reading CS files instead of DLLs.
First-time compilation usually takes no more than a minute; just finding the compiled version of the site takes as much time.
Now I'm watching a Pluralsight video that explains the new MSDeploy tool available for ASP.NET, and I can't see any good reason to use it.
So what's wrong with the old-fashioned way of just sending files via FTP, without compiling or using fancy tools?
I did a speed test, and with MSDeploy I can deploy a website twice as fast as with old-fashioned FTPing. So instead of 4 minutes it will take 2.
Now from another perspective: suppose I already have a live project on the web and have to change Default.aspx because of a typo in some HTML tag. Deployment via MSDeploy will take 10 times longer than uploading that one file.
Maybe I'm missing something?
MSDeploy does things which FTPing to a site can't do. Need to change a machine.config? You're unlikely to have FTP write access to the folder which contains it. Want to change a server setting in a server-version-independent manner? FTP won't do that. Etc. FTP works fine for copying files to folders in which you have write access, but that's all it can do.
When you deploy a project you can do a lot of things with it.
You can set up a job in your deploy that packages all your JavaScript into one file and all your CSS into one file.
You can set up a job in your deployment that changes a bunch of config settings to match your production server settings (rather than your development settings).
The idea of deployment is that you take your current development website and transform it into a production website without having to do any of that manually.
The most important thing is that when you can only deploy your website, you will never forget to package your JS or forget to remove some debugging code, because you can't just sneakily update a single file.

How do you deploy your ASP.NET applications to live servers?

I am looking for the different techniques/tools you use to deploy an ASP.NET web application project (NOT an ASP.NET web site) to production.
I am particularly interested in the workflow happening between the time your Continuous Integration build server drops the binaries at some location and the time the first user request hits those binaries.
Are you using some specific tools or just XCOPY? How is the application packaged (ZIP, MSI, ...)?
When an application is deployed for the first time how do you setup the App Pool and Virtual Directory (do you create them manually or with some tool)?
When a static resource changes (CSS, JS or image file) do you redeploy the whole application or only the modified resource? How about when an assembly/ASPX page changes?
Do you keep track of all deployed versions for a given application and in case something goes wrong do you have procedures of restoring the application to a previous known working state?
Feel free to complete the previous list.
And here's what we use to deploy our ASP.NET applications:
We add a Web Deployment Project to the solution and set it up to build the ASP.NET web application
We add a Setup Project (NOT Web Setup Project) to the solution and set it to take the output of the Web Deployment Project
We add a custom install action and in the OnInstall event we run a custom build .NET assembly that creates an App Pool and a Virtual Directory in IIS using System.DirectoryServices.DirectoryEntry (This task is performed only the first time an application is deployed). We support multiple Web Sites in IIS, Authentication for Virtual Directories and setting identities for App Pools.
We add a custom task in TFS to build the Setup Project (TFS does not support Setup Projects so we had to use devenv.exe to build the MSI)
The MSI is installed on the live server (if there's a previous version of the MSI it is first uninstalled)
We have all of our code deployed in MSIs using Setup Factory. If something has to change we redeploy the entire solution. This sounds like overkill for a css file, but it absolutely keeps all environments in sync, and we know exactly what is in production (we deploy to all test and uat environments the same way).
We do rolling deployment to the live servers, so we don't use installer projects; we have something more like CI:
"live" build-server builds from the approved source (not the "HEAD" of the repo)
(after it has taken a backup ;-p)
robocopy publishes to a staging server ("live", but not in the F5 cluster)
final validation done on the staging server, often with "hosts" hacks to emulate the entire thing as closely as possible
robocopy /L is used automatically to distribute a list of the changes in the next "push", to alert of any goofs
as part of a scheduled process, the cluster is cycled, deploying to the nodes in the cluster via robocopy (while they are out of the cluster)
robocopy automatically ensures that only changes are deployed.
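A rough sketch of those robocopy steps (the share names and paths are invented for illustration, not the actual environment):

# List-only pass: report what the next push would change, without copying anything
robocopy D:\Builds\MySite \\staging\wwwroot\MySite /MIR /L /NJH /NJS

# Actual deployment to a node that has been pulled out of the cluster
robocopy D:\Builds\MySite \\web01\wwwroot\MySite /MIR /R:2 /W:5

By default robocopy only copies files whose size or timestamp differ, which is what keeps each push small.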
Re the App Pool etc; I would love this to be automated (see this question), but at the moment it is manual. I really want to change that, though.
(it probably helps that we have our own data-centre and server-farm "on-site", so we don't have to cross many hurdles)
Website
Deployer:
http://www.codeproject.com/KB/install/deployer.aspx
I publish website to a local folder, zip it, then upload it over FTP. Deployer on server then extracts zip, replaces config values (in Web.Config and other files), and that's it.
Of course, for the first run you need to connect to the server and set up the IIS website and the database, but after that, publishing updates is a piece of cake.
Database
For keeping databases in sync I use http://www.red-gate.com/products/sql-development/sql-compare/
If the server is behind a bunch of routers and you can't connect to it directly (which SQL Compare requires), use https://secure.logmein.com/products/hamachi2/ to create a VPN.
I deploy mostly ASP.NET apps to Linux servers and redeploy everything for even the smallest change. Here is my standard workflow:
I use a source code repository (like Subversion)
On the server, I have a bash script that does the following:
Checks out the latest code
Does a build (creates the DLLs)
Filters the files down to the essentials (removes code files for example)
Backs up the database
Deploys the files to the web server in a directory named with the current date
Updates the database if a new schema is included in the deployment
Makes the new installation the default one so it will be served with the next hit
Checkout is done with the command-line version of Subversion and building is done with xbuild (msbuild work-alike from the Mono project). Most of the magic is done in ReleaseIt.
On my dev server I essentially have continuous integration but on the production side I actually SSH into the server and initiate the deployment manually by running the script. My script is cleverly called 'deploy' so that is what I type at the bash prompt. I am very creative. Not.
In production, I have to type 'deploy' twice: once to check-out, build, and deploy to a dated directory and once to make that directory the default instance. Since the directories are dated, I can revert to any previous deployment simply by typing 'deploy' from within the relevant directory.
Initial deployment takes a couple of minutes and reversion to a prior version takes a few seconds.
It has been a nice solution for me and relies only on the three command-line utilities (svn, xbuild, and releaseit), the DB client, SSH, and Bash.
I really need to update the copy of ReleaseIt on CodePlex sometime:
http://releaseit.codeplex.com/
Simple XCopy for ASP.NET: zip it up, SFTP it to the server, and extract it into the right location. For the first deployment, manual setup of IIS.
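A minimal sketch of that flow, assuming a UNC share plus PowerShell remoting in place of SFTP (the server and path names are invented):

# Package the published output, copy it to the server, and extract it in place.
# \\webserver\deployments is assumed to map to C:\deployments on the server.
Compress-Archive -Path .\publish\* -DestinationPath .\site.zip -Force
Copy-Item .\site.zip -Destination \\webserver\deployments\site.zip
Invoke-Command -ComputerName webserver -ScriptBlock {
    Expand-Archive -Path C:\deployments\site.zip -DestinationPath C:\inetpub\wwwroot\mysite -Force
}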
Answering your questions:
XCopy
Manually
For static resources, we only deploy the changed resource.
For DLLs, we deploy the changed DLL and ASPX pages.
Yes, and yes.
Keeping it nice and simple has saved us a lot of headaches so far.
Are you using some specific tools or just XCOPY? How is the application packaged (ZIP, MSI, ...)?
As a developer for BuildMaster, this is naturally what I use. All applications are built and packaged within the tool as artifacts, which are stored internally as ZIP files.
When an application is deployed for the first time how do you setup the App Pool and Virtual Directory (do you create them manually or with some tool)?
Manually - we create a change control within the tool that reminds us of the exact steps to perform in future environments as the application moves through its testing environments. This could also be automated with a simple PowerShell script, but we do not add new applications very often so it's just as easy to spend the 1 minute it takes to create the site manually.
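For illustration, such a PowerShell script could use the WebAdministration module; the pool name, site name, host header, and path below are placeholders:

Import-Module WebAdministration

# Create an application pool and a website pointing at an existing folder (placeholder names).
New-WebAppPool -Name 'MyAppPool'
Set-ItemProperty 'IIS:\AppPools\MyAppPool' -Name managedRuntimeVersion -Value 'v4.0'
New-Website -Name 'MyApp' -Port 80 -HostHeader 'myapp.example.com' -PhysicalPath 'D:\Sites\MyApp' -ApplicationPool 'MyAppPool'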
When a static resource changes (CSS, JS or image file) do you redeploy the whole application or only the modified resource? How about when an assembly/ASPX page changes?
By default, the process of deploying artifacts is set-up such that only files that are modified are transferred to the target server - this includes everything from CSS files, JavaScript files, ASPX pages, and linked assemblies.
Do you keep track of all deployed versions for a given application and in case something goes wrong do you have procedures of restoring the application to a previous known working state?
Yes, BuildMaster handles all of this for us. Restoring is mostly as simple as re-executing an old build promotion, but sometimes database changes need to be manually restored, and data loss can occur. The basic rollback process is detailed here: http://inedo.com/support/tutorials/performing-a-deployment-rollback-with-buildmaster
web setup/install projects - so you can easily uninstall it if something goes wrong
Unfold is a Capistrano-like deployment solution I wrote for .NET applications. It is what we use on all of our projects and it's a very flexible solution. It solves most of the typical problems for .NET applications as explained in this blog post by Rob Conery.
it comes with a good "default" behavior, in the sense that it does a lot of standard stuff for you: getting the code from source control, building, creating the application pool, setting up IIS, etc
releases based on what's in source control
it has task hooks, so the default behaviour can be easily extended or altered
it has rollback
it's all powershell, so there aren't any external dependencies
it uses powershell remoting to access remote machines
Here's an introduction and some other blog posts.
So to answer the questions above:
How is the application packaged (ZIP, MSI, ...)?
Git (or another SCM) is the default way to get the application onto the target machine. Alternatively, you can perform a local build and copy the result over the PowerShell remoting connection.
When an application is deployed for the first time how do you setup the App Pool and Virtual Directory (do you create them manually or with some tool)?
Unfold configures the application pool and website application using Powershell's WebAdministration Module. It allows us (and you) to modify any aspect of the application pool or website
When a static resource changes (CSS, JS or image file) do you redeploy the whole application or only the modified resource? How about when an assembly/ASPX page changes?
Yes, unfold does this: each deploy is installed next to the others. That way we can easily roll back when something goes wrong. It also allows us to easily trace a deployed version back to a source control revision.
Do you keep track of all deployed versions for a given application?
Yes, unfold keeps old versions around. Not all versions, but a number of versions. It makes rolling back almost trivial.
We've been improving our release process for the past year and now we've got it down pat. I'm using Jenkins to manage all of our automated builds and releases, but I'm sure you could use TeamCity or CruiseControl.
So upon checkin, our "normal" build does the following:
Jenkins does a SVN update to fetch the latest version of the code
A NuGet package restore is done running against our own local NuGet repository
The application is compiled using MsBuild. Setting this up is an adventure, because you need to install the correct MsBuild and then the ASP.NET and MVC DLLs on your build box. (As a side note, when I had <MvcBuildViews>true</MvcBuildViews> entered in my .csproj files to compile the views, msbuild was randomly crashing, so I had to disable it)
Once the code is compiled the unit tests are run (I'm using nunit for this, but you can use anything you want)
If all the unit tests pass, I stop the IIS app pool, deploy the app locally (just a few basic XCOPY commands to copy over the necessary files) and then restart IIS (I've had problems with IIS locking files, and this solved it). A rough sketch of this step appears after this list.
I have separate web.config files for each environment; dev, uat, prod. (I tried using the web transformation stuff with little success). So the right web.config file is also copied across
I then use PhantomJS to execute a bunch of UI tests. It also takes a bunch of screenshots at different resolutions (mobile, desktop) and stamps each screenshot with some information (page title, resolution). Jenkins has great support for handling these screenshots and they are saved as part of the build
Once the integration UI tests pass the build is successful
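The local deploy step mentioned above boils down to something like the following sketch (the pool name, paths, and config file name are placeholders, not the actual build scripts):

Import-Module WebAdministration

# Stop the pool so IIS releases its file locks, copy the build output across,
# drop in the environment-specific config, then start the pool again.
Stop-WebAppPool -Name 'MySiteAppPool'
robocopy 'C:\Jenkins\workspace\MySite\build' 'C:\inetpub\wwwroot\MySite' /MIR
Copy-Item 'C:\Jenkins\workspace\MySite\configs\web.dev.config' 'C:\inetpub\wwwroot\MySite\web.config' -Force
Start-WebAppPool -Name 'MySiteAppPool'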
If someone clicks "Deploy to UAT":
If the last build was successful, Jenkins does another SVN update
The application is compiled using a RELEASE configuration
A "www" directory is created and the application is copied into it
I then use winscp to synchronise the filesystem between the build box and UAT
I send an HTTP request to the UAT server and make sure I get back a 200 (see the sketch after this list)
This revision is tagged in SVN as UAT-datetime
If we've got this far, build is successful!
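The smoke test and tagging steps can be as simple as this sketch (the URL and repository paths are placeholders):

# Fail the deployment if the UAT site does not answer with a 200.
$response = Invoke-WebRequest -Uri 'http://uat.example.com/' -UseBasicParsing
if ($response.StatusCode -ne 200) { throw "UAT smoke test failed with status $($response.StatusCode)" }

# Tag the deployed revision so it can later be promoted to production.
$tag = 'UAT-' + (Get-Date -Format 'yyyyMMdd-HHmm')
svn copy https://svn.example.com/repo/trunk "https://svn.example.com/repo/tags/$tag" -m "Deployed to UAT"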
When we click "Deploy to Prod":
The user selects a UAT Tag that was previously created
The tag is "switched" to
Code is compiled and synced with Prod server
Http request to Prod server
This revision is tagged in SVN as Prod-datetime
The release is zipped and stored
All up a full build to production takes about 30 secs which I'm very, very happy with.
Upsides to this solution:
It's fast
Unit tests should catch logic errors
When a UI bug gets into production, the screenshots will hopefully show what revision # caused it
UAT and Prod are kept in sync
Jenkins shows you a great release history to UAT and Prod with all of the commit messages
UAT and Prod releases are all tagged automatically
You can see when releases happen and who did them
The main downsides to this solution are:
Whenever you do a release to Prod you need to do a release to UAT. This was a conscious decision we made because we wanted to always ensure that UAT is always up to date with Prod. Still, it's a pain.
There's quite a few configuration files floating around. I've attempted to have it all in Jenkins, but there's a few support batch files needed as part of the process. (These are also checked in).
DB upgrade and downgrade scripts are part of the app and run at app startup. It works (mostly), but it's a pain.
I'd love to hear any other possible improvements!
Back in 2009, where this answer hails from, we used CruiseControl.net for our Continuous Integration builds, which also outputted Release Media.
From there we used Smart Sync software to compare against a production server that was out of the load balanced pool, and moved the changes up.
Finally, after validating the release, we ran a DOS script that primarily used RoboCopy to sync the code over to the live servers, stopping/starting IIS as it went.
At the last company I worked for we used to deploy using an rSync batch file to upload only the changes since the last upload. The beauty of rSync is that you can add exclude lists to exclude specific files or filename patterns. So excluding all of our .cs files, solution and project files is really easy, for instance.
We were using TortoiseSVN for version control, and so it was nice to be able to write in several SVN commands to accomplish the following:
First off, check the user has the latest revision. If not, either prompt them to update or run the update right there and then.
Download a text file from the server called "synclog.txt" that details who the SVN user is, what revision number they are uploading and the date and time of the upload. Append a new line for the current upload and then send it back to the server along with the changed files. This makes it extremely easy to find out what version of the site to roll back to on the off chance that an upload causes problems.
In addition to this there is a second batch file that just checks for file differences on the live server. This can highlight the common problem where someone would upload but not commit their changes to SVN. Combined with the sync log mentioned above we could find out who the likely culprit was and ask them to commit their work.
And lastly, rSync allows you to take a backup of the files that were replaced during the upload. We had it move them into a backup folder, so if you suddenly realised that some of the files should not have been overwritten, you can find the last backed-up version of every file in that folder.
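The relevant switches are --exclude for the source-file patterns and --backup/--backup-dir for keeping copies of anything that gets replaced; a stripped-down example (the host, paths, and patterns are invented for illustration):

rsync -rtvz --exclude "*.cs" --exclude "*.csproj" --exclude "*.sln" --backup --backup-dir=backup/ ./site/ deploy@webserver:/sites/mysite/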
While the solution felt a little clunky at the time I have since come to appreciate it a whole lot more when working in environments where the upload method is a lot less elegant or easy (remote desktop, copy and paste the entire site, for instance).
I'd recommend NOT just overwriting existing application files, but instead creating a directory per version and repointing the IIS application to the new path.
This has several benefits:
Quick to revert if needed
No need to stop IIS or the app pool to avoid locking issues
No risk of old files causing problems
More or less zero downtime (usually just a pause as the new app domain initialises)
The only issue we've had is resources being cached if you don't restart the app pool and rely on the automatic appdomain switch.
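A sketch of the repointing step using the WebAdministration module (the site name and release folder layout are placeholders):

Import-Module WebAdministration

# Each release is copied into its own folder; switching the live site is a single property change,
# and reverting is the same command pointed at the previous folder.
$newRelease = 'D:\Sites\MySite\releases\v42'
Set-ItemProperty 'IIS:\Sites\MySite' -Name physicalPath -Value $newRelease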

Resources