I am experimenting with using Subversion to deploy updates to my ASP.NET application. One issue I am facing is that whenever the working copy (containing the build) is updated, the ".svn" folder inside bin gets updated too, and this causes the ASP.NET appdomain to recycle. I don't want this to happen unless something in bin has actually changed.
Is there a way to tell ASP.Net to ignore the ".svn" folder inside of bin? Basically not watch that folder for changes?
If this does not work out, I'll be using a staging folder outside the web folders to download the builds onto the servers and then use scripts to patch/update the actual web folders.
[Edit:] The svn export option will not keep my deployment under version control, I want to be able to do "svn update" on the web folders to deploy and rollback releases.
If you use svn export instead of svn checkout to get the files from your repository you will not get the .svn folder on your server.
[Edit] Another option would be to delete "bin" from your repository (and possibly commit it to another one, if you need revisions), and then just copy the bin folder to your webroot manually when it changes. Remember to add "bin" to your svn:ignore list.
You probably want to add the "Bin" directory to your svn:ignore list; it should not be committed anyway as it contains compiled code, not source code.
In any case as your final deployment "svn export" is probably a better choice, as others have noted.
Have you thought about using a Continuous Integration server?
Continuous Integration is a development practice that encourages frequent commits to the repository, each of which is verified by an automated build.
The more often you commit the better granularity you have over rollbacks and also the less that can be broken between commits.
The tools listed below all work with subversion and can be combined with MSBuild on the server to produce an automated build & deployment system.
MSBuild lets you exclude certain files (e.g. code-behind) when copying to the live directory. In addition, some files may need both a development version and a live version; in that case you can write a "transform" step for MSBuild that makes sure the file is correct for the live server when copying (e.g. web.config).
Hudson - http://java.net/projects/hudson/
Draco - http://draconet.sourceforge.net/
CruiseControl - http://cruisecontrol.sourceforge.net/
Well, unfortunately if you do this then you will get, as you're experiencing, an AppDomain restart. So unless you do as Espo has said and use svn export, you'll see this issue.
Would it be easier to write a 2 line batch file that svn updates a local copy and then copies the files across?
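For example, a minimal sketch of that idea in shell (the paths are made up, and the same thing works as a Windows batch file with xcopy or robocopy):
svn update /srv/staging/myapp
rsync -a --exclude='.svn' /srv/staging/myapp/ /srv/www/myapp/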
An app pool recycle should not be that big a deal. Is your issue perhaps that your users are losing their session when this happens? If so, switch to StateServer or SQLServer sessions instead of using InProc.
Subversion 1.7 and up doesn't create .svn folders in each subdirectory anymore, which makes it possible to do what you want without the .svn folders getting in the way.
I'm a little late to the game but I'll throw my 2 cents:
You can use svn export passing a -r REV param. This enables you to roll back your app to the specified revision.
So you can use:
svn export REPOSITORY DESTINATION --force to update to HEAD (the current state of your application)
or
svn export REPOSITORY -r REV DESTINATION --force to update to another revision (maybe you should be using tags)
Don't forget the --force param so it can replace the existing files in DESTINATION.
Total newbie question, but what is the best practice when it comes to using SSH with Git? I'm working on a WordPress project. In the root I have gulp and other dev files/folders like SASS and Scripts that I don't need on the server, and in the same project I have my WordPress folder that contains a theme and a few custom plugins. As you can imagine, when the theme or any of the plugins are ready to be deployed I don't want to pull everything in my repository onto the server. So far, as a newbie, I've always just pulled and pushed the entire repository and used FTP to upload what I need to the server. So how is this done with SSH and Git, and is there a better way to set this up?
EDIT: To make my question a little bit more clear let me give you an example of what I think my issue is. In my main project folder, I have a SASS folder next to my WordPress folder. All I really need to deploy to the server is the WordPress folder. My build process that happens on my dev machine combines all of the SASS files into a single CSS that is then placed into the WordPress folder. I need the SASS folder to be tracked by Git so that any other developer can pull them and continue developing so I can't have git ignore it. However none of those SASS files need to be on the server for WordPress to work either. I just simply need to deploy the WordPress folder and everything that's in it.
I understand the idea of creating a bare repository on the server and using a post-receive hook so that the git folder sitting outside your web root checks the files out into the web root. But that's basically how Git and SSH work, and that's not answering my concern.
Not with Git
Git is not designed to pull only specific files or directories. A repository is a directed acyclic graph with binary blobs as objects, and sometimes multiple objects are packed together into a single larger pack file.
Due to Git design, your specific request is not possible.
Alternatives
post-receive hook
If your website only contains simple static files then it's okay to push to a git repository over SSH. In practice the repository is unlikely to get large as long as you don't commit non-text files.
Take for example the following setup.
/var/lib/www - the Apache web dir, which is a cloned copy of www.git
/var/lib/www.git - a bare git repository.
/var/lib/www.git/hooks/post-receive - A server-side git hook. It can be a shell script that updates the www working copy whenever this repository is pushed to.
Sample post-receive hook script:
#!/bin/bash
# Deploy the working copy whenever the bare repository receives a push.
cd /var/lib/www
# Hooks run with GIT_DIR set to the bare repo; unset it so the commands
# below operate on the working copy we just changed into.
unset GIT_DIR
git fetch origin master
git reset --hard origin/master
Package the build as a tar.gz
At the end of your build you can package your files into a tar.gz. This file should be hosted somewhere (perhaps GitHub releases if you're using GitHub). Some enterprises use on-premise artifact hosting like Nexus or Artifactory.
The idea being: you have a tested artifact that has a specific sha256sum. The artifact you test is the exact same artifact which eventually goes to production.
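A minimal sketch of producing such an artifact and recording its checksum (the file and directory names are just examples):
tar czf myapp-1.4.2.tar.gz build/
sha256sum myapp-1.4.2.tar.gz > myapp-1.4.2.tar.gz.sha256
sha256sum -c myapp-1.4.2.tar.gz.sha256   # verify later, e.g. on the server before deploying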
Diving into more detail such as continuous integration, continuous delivery, and the software development life cycle might be out of scope for your question.
No best practice.
Git is for source control, not for deployment. There is no best practice for using git this way because git is not a deployment tool. You also don't need git history on your server. In fact, you don't need git at all unless you insist on using it for deployment. You are welcome to use it this way but it's not ideal because of exactly the kind of problem you're asking about.
What is the best practice?
There are a number of tools you could use to handle your deployments. Most of the tools generally let you set up a series of steps that let you deploy the code you want into the environment you want. You could go with simple tools such as Phing or Deployer in the PHP world, or something more sophisticated like Puppet or Chef if you have more complex needs. You could just write your own bash scripts if what you need is really very simple. I recommend Phing or Deployer given the info you've provided. https://deployer.org/ https://www.phing.info/
You'll just configure whichever tool you want to ssh into your target box and copy over only the files you want into the directory you want on the server, in whatever way you would like to do that. Usually, you have the script copy files into a temp dir, tarball them up, ssh them over and untar them. After that, you'll usually do some additional work on the server to move files around, change symlinks, whatever else you might need to do.
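As a rough illustration of that pattern in plain shell (host name and paths are hypothetical; a tool like Deployer or Phing wraps these steps for you):
tar czf /tmp/release.tar.gz -C build wordpress
scp /tmp/release.tar.gz deploy@example.com:/tmp/
ssh deploy@example.com 'tar xzf /tmp/release.tar.gz -C /var/www/ && rm /tmp/release.tar.gz'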
What about compiled SASS, ES6 js files, or modern static stuff?
All you need to do is add steps to handle the static files and put them where you want them to go. Include the generated static files in your tarball when you push things up, and put them in the right directories on the server once you untar it.
When you configured your SASS compiler, and whatever other pre-compiled static code you may have, you configured it to create a destination file; that is, the file(s) of actual CSS and JS that it generates. That's all you need to bring along, and if the destination directory is set to be inside your WordPress theme, you may not even have to pay much special attention to its handling. You may need to move them somewhere else once they are on the server, but that all depends on the specific setup of your server, which I think is outside the scope of this question.
Additional Notes:
You didn't ask about this, but I thought it was worth mentioning that you shouldn't be sending the entire WordPress installation every time you update. Just like you don't need the uncompiled SASS code, you also don't need to be repackaging core WordPress. You don't even need to be committing core WordPress; it's a dependency and you shouldn't be changing it.
All that should be getting committed by you is your theme and plugin code, and the uncompiled static files. Compiled static files and external dependencies like the WordPress core don't belong in your git history. For deployment purposes, WordPress should already be installed. The stuff in your tarballs should just be plugins and themes, and additional static files if they aren't already in there for some reason.
TLDR;
Don't use git for this. Use a tool like Phing or Deployer. Build your static files into your theme, and create Phing/Deployer scripts that tarball up only the code you want, SSH it over to your server, and untar it into the directories you want. If you have some special location on the server for your static files, just make sure to add steps in your script for that.
So, based on your question and comment, there are three computers involved. There is a web server (when you say "server", I take it as a Web server in this scenario, or the server computer that runs a Web server program). There is another server where your git repo is hosted. And, there is your dev workstation. Is this correct?
It seems like, you have a cloned git repo on your Web server. Your current practice/workflow appears to be (1) (based on your expression "SSH'ed into my server") you log into the web server via SSH (just like Telnet) from your workstation (SSH is just a protocol, which can be used for different purposes). (2) you pull from your repo on hosted service (e.g., github), and (3) deploy it to your "www" directory on the same server. Is this correct?
(I can think of an alternative scenario based on your use of the word "FTP", etc., but let's focus on the above scenario, for now.)
Now, your question is, whenever you "pull" (on your Web server), you feel like you are pulling everything from your repo on your hosted service. And, is there a better way? Am I understanding your question correctly?
If so, as another commenter suggested, git (and, any version control system, in general) is very good at fetching "deltas" only. If you are worried about "fetching everything" every time you pull (the step (2) above), then your worry is unfounded.
Now, the question is, why do you have a git repo on your Web server, if that is indeed the case? This is a pretty legit setup and I've done this before (e.g., on EC2). But, as a best practice, people generally don't do that on "production" servers. It's because you have to "build" your web app, and you really don't want to do that on production servers.
The next question is, what do you exactly do in Step (3)? The build process (whatever process you use) typically generates an "output" which can be directly deployed to the web server. (The convention is the output is generally a single folder, "public", "www", "dist", or whatever, or a single file (e.g., tar.gz, zip, jar, war), etc.) Regardless of whether you build the deployable output on your dev workstation (or, a build machine) or on your Web server, you don't generally do "deltas" in this context. Even if you've only changed a single file (say, a CSS file), you generally build the whole output again (instead of, say, just replacing the changed CSS file only). When you use FTP to upload files, etc., you can selectively upload certain files and/or directories, etc., but as a general practice, we don't do that. We always build the complete output from scratch and deploy it to the Web server. (This is mainly to reduce the potential deployment errors and increase the reliability.)
So, to answer your question, (A) If you are pulling git repo on your Web server, you should really change that practice, and move the build process to your dev computer or a dedicated build machine. (BTW, services like github, gitlab, TFS, ... provide the build service for you.) (B) If you are currently selectively FTP'ing your web app files to your Web server, then you should really consider adopting some kind of formal build, and deployment, process moving forward.
After your SASS build process is done use scp or rsync to move the files to the prod server:
scp -r /[local wordpress dir]/wp-content/themes/your-theme/ username@your.prod.server.com:/path/to/dir/wordpress/wp-content/themes/
scp -r /[local wordpress dir]/wp-content/plugins/* username@your.prod.server.com:/path/to/dir/wordpress/wp-content/plugins/
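rsync can do the same job but only transfers files that have actually changed; a hedged equivalent using the same placeholder paths:
rsync -avz --delete /[local wordpress dir]/wp-content/themes/your-theme/ username@your.prod.server.com:/path/to/dir/wordpress/wp-content/themes/your-theme/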
I am working on a project using Git over SSH with Bitbucket. Below is the process I use; it may work for you as well (if not, please correct me):
Step 1 -> Set up Git and create a repo in Bitbucket.
Step 2 -> Set up the project locally and link it to the repo.
Step 3 -> Connect to the server using SSH.
Step 4 -> Work locally, then commit and push all changes to the repo.
Step 5 -> Run git pull over SSH so all the changes are deployed to the server.
I use this process and I like it. I also use a .gitignore file for anything that does not need to be pushed to the repo.
Thanks
I'm going to start using grunt-rev & grunt-usemin with grunt-watch for my web development needs (a RESTful Web App specifically).
I have a local development machine which will run grunt-watch to attach revision identifiers on my JS files. I git commit and git push my tree to a git repo, and then ask the production server to git pull the changes from the git repo to show them to the web visitors.
The problem is that I don't want my git repo to store different filenames (due to grunt-rev) on each commit. That would be bad, because then I wouldn't be able to do git diff between commits without having my screen get flooded with the contents of files that appear and disappear, and also it could (sometimes) take up a lot more storage than if it only stored the small diffs of the files.
The only solution I see is to add the build directory containing the versioned filenames in my .gitignore, so as to not store those files (with the constantly changing filenames) in git. But wouldn't that mean that I would have to run grunt-watch on my production server as well, in order to produce the build directory with the versioned filenames there as well? But that gets complicated: a new process has to run on the remote server, maybe with its small chances of error in processing the files. Not the solution I was hoping for.
Do you people have another solution? What would you suggest I do?
What I do to solve this is remove the previous "build" files before committing and deploying a new one. There is no need to keep older generated files, because you can always rebuild them from the source files (which are in git).
I'm using asp.net and WebDeploy to publish the latest bits of my website. The production site has a couple folders that I would like to keep in sync, though.
Since it's an asp.net site, I would rather not have my entire repository on the server when I can get by with just the views and dll. Additionally, I would rather not add the extra clutter of additional class projects & files to my production server file system.
How do I keep the folders in production in sync with the master branch in git? An automated solution would be optimal.
You could create a git hook that uses git archive to export your files as a zip (or tar) into a tmp folder, uncompress them there, and finally transfer them to your server.
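A minimal sketch of that idea (the paths and host are hypothetical):
git archive --format=tar.gz -o /tmp/site.tar.gz HEAD
scp /tmp/site.tar.gz deploy@example.com:/tmp/
ssh deploy@example.com 'tar xzf /tmp/site.tar.gz -C /var/www/site'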
Also, check this answer Do a "git export" (like "svn export")?
Setting up a scheduled task to commit those files to their respective git repositories will work. It's not ideal, but it will make sure those repositories are relatively up to date with the assets on the live site.
I am looking for different techniques/tools you use to deploy an ASP.NET web application project (NOT ASP.NET web site) to production?
I am particularly interested of the workflow happening between the time your Continuous Integration Build server drops the binaries at some location and the time the first user request hits these binaries.
Are you using some specific tools or just XCOPY? How is the application packaged (ZIP, MSI, ...)?
When an application is deployed for the first time how do you setup the App Pool and Virtual Directory (do you create them manually or with some tool)?
When a static resource changes (CSS, JS or image file) do you redeploy the whole application or only the modified resource? How about when an assembly/ASPX page changes?
Do you keep track of all deployed versions for a given application and in case something goes wrong do you have procedures of restoring the application to a previous known working state?
Feel free to complete the previous list.
Here's what we use to deploy our ASP.NET applications:
We add a Web Deployment Project to the solution and set it up to build the ASP.NET web application
We add a Setup Project (NOT Web Setup Project) to the solution and set it to take the output of the Web Deployment Project
We add a custom install action and in the OnInstall event we run a custom build .NET assembly that creates an App Pool and a Virtual Directory in IIS using System.DirectoryServices.DirectoryEntry (This task is performed only the first time an application is deployed). We support multiple Web Sites in IIS, Authentication for Virtual Directories and setting identities for App Pools.
We add a custom task in TFS to build the Setup Project (TFS does not support Setup Projects, so we had to use devenv.exe to build the MSI; a sketch of that invocation follows this list)
The MSI is installed on the live server (if there's a previous version of the MSI it is first uninstalled)
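The devenv.exe call looks roughly like this (the solution and project names are made up, and the devenv path depends on your Visual Studio version):
"C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\devenv.com" MySolution.sln /Build Release /Project MySetupProject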
We have all of our code deployed in MSIs using Setup Factory. If something has to change we redeploy the entire solution. This sounds like overkill for a css file, but it absolutely keeps all environments in sync, and we know exactly what is in production (we deploy to all test and uat environments the same way).
We do rolling deployment to the live servers, so we don't use installer projects; we have something more like CI:
"live" build-server builds from the approved source (not the "HEAD" of the repo)
(after it has taken a backup ;-p)
robocopy publishes to a staging server ("live", but not in the F5 cluster)
final validation done on the staging server, often with "hosts" hacks to emulate the entire thing as closely as possible
robocopy /L is used automatically to distribute a list of the changes in the next "push", to alert of any goofs
as part of a scheduled process, the cluster is cycled, deploying to the nodes in the cluster via robocopy (while they are out of the cluster)
robocopy automatically ensures that only changes are deployed (a rough sketch follows this list).
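Something along these lines (server and share names are hypothetical); /MIR mirrors the directory tree and copies only changed files, and /L lists what would change without copying anything:
robocopy \\buildserver\drops\site \\web01\c$\inetpub\site /MIR /L
robocopy \\buildserver\drops\site \\web01\c$\inetpub\site /MIR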
Re the App Pool etc; I would love this to be automated (see this question), but at the moment it is manual. I really want to change that, though.
(it probably helps that we have our own data-centre and server-farm "on-site", so we don't have to cross many hurdles)
Website
Deployer:
http://www.codeproject.com/KB/install/deployer.aspx
I publish the website to a local folder, zip it, then upload it over FTP. Deployer on the server then extracts the zip, replaces config values (in Web.config and other files), and that's it.
Of course for the first run you need to connect to the server and set up the IIS website and the database, but after that publishing updates is a piece of cake.
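A rough sketch of the package-and-upload step (folder name, host, and credentials are placeholders):
zip -r site.zip publish/
curl -T site.zip ftp://ftp.example.com/incoming/ --user deploy:secret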
Database
For keeping databases in sync I use http://www.red-gate.com/products/sql-development/sql-compare/
If the server is behind a bunch of routers and you can't connect to it directly (which SQL Compare requires), use https://secure.logmein.com/products/hamachi2/ to create a VPN.
I deploy mostly ASP.NET apps to Linux servers and redeploy everything for even the smallest change. Here is my standard workflow:
I use a source code repository (like Subversion)
On the server, I have a bash script that does the following:
Checks out the latest code
Does a build (creates the DLLs)
Filters the files down to the essentials (removes code files for example)
Backs up the database
Deploys the files to the web server in a directory named with the current date
Updates the database if a new schema is included in the deployment
Makes the new installation the default one so it will be served with the next hit
Checkout is done with the command-line version of Subversion and building is done with xbuild (msbuild work-alike from the Mono project). Most of the magic is done in ReleaseIt.
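A rough sketch of what such a script might look like (paths, names, and the database/ReleaseIt steps are placeholders; the real script does more):
#!/bin/bash
REL=/var/www/releases/$(date +%Y%m%d-%H%M%S)
svn checkout --quiet http://svn.example.com/myapp/trunk /tmp/myapp
xbuild /tmp/myapp/MyApp.sln /p:Configuration=Release
# back up the database and apply any new schema here...
mkdir -p "$REL" && cp -r /tmp/myapp/MyApp/bin "$REL/"
# make the new installation the default one
ln -sfn "$REL" /var/www/current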
On my dev server I essentially have continuous integration but on the production side I actually SSH into the server and initiate the deployment manually by running the script. My script is cleverly called 'deploy' so that is what I type at the bash prompt. I am very creative. Not.
In production, I have to type 'deploy' twice: once to check-out, build, and deploy to a dated directory and once to make that directory the default instance. Since the directories are dated, I can revert to any previous deployment simply by typing 'deploy' from within the relevant directory.
Initial deployment takes a couple of minutes and reversion to a prior version takes a few seconds.
It has been a nice solution for me and relies only on the three command-line utilities (svn, xbuild, and releaseit), the DB client, SSH, and Bash.
I really need to update the copy of ReleaseIt on CodePlex sometime:
http://releaseit.codeplex.com/
Simple XCopy for ASP.NET. Zip it up, SFTP it to the server, extract it into the right location. For the first deployment, manually set up IIS.
Answering your questions:
XCopy
Manually
For static resources, we only deploy the changed resource.
For DLL's we deploy the changed DLL and ASPX pages.
Yes, and yes.
Keeping it nice and simple has saved us a lot of headaches so far.
Are you using some specific tools or just XCOPY? How is the application packaged (ZIP, MSI, ...)?
As a developer for BuildMaster, this is naturally what I use. All applications are built and packaged within the tool as artifacts, which are stored internally as ZIP files.
When an application is deployed for the first time how do you setup the App Pool and Virtual Directory (do you create them manually or with some tool)?
Manually - we create a change control within the tool that reminds us the exact steps to perform in future environments as the application moves through its testing environments. This could also be automated with a simple PowerShell script, but we do not add new applications very often so it's just as easy to spend the 1 minute it takes to create the site manually.
When a static resource changes (CSS, JS or image file) do you redeploy the whole application or only the modified resource? How about when an assembly/ASPX page changes?
By default, the process of deploying artifacts is set up so that only files that are modified are transferred to the target server; this includes CSS files, JavaScript files, ASPX pages, and linked assemblies.
Do you keep track of all deployed versions for a given application and in case something goes wrong do you have procedures of restoring the application to a previous known working state?
Yes, BuildMaster handles all of this for us. Restoring is mostly as simple as re-executing an old build promotion, but sometimes database changes need to be manually restored, and data loss can occur. The basic rollback process is detailed here: http://inedo.com/support/tutorials/performing-a-deployment-rollback-with-buildmaster
Web setup/install projects - so you can easily uninstall the application if something goes wrong.
Unfold is a capistrano-like deployment solution I wrote for .net applications. It is what we use on all of our projects and it's a very flexible solution. It solves most of the typical problems for .net applications as explained in this blog post by Rob Conery.
it comes with a good "default" behavior, in the sense that it does a lot of standard stuff for you: getting the code from source control, building, creating the application pool, setting up IIS, etc
releases based on what's in source control
it has task hooks, so the default behaviour can be easily extended or altered
it has rollback
it's all powershell, so there aren't any external dependencies
it uses powershell remoting to access remote machines
Here's an introduction and some other blog posts.
So to answer the questions above:
How is the application packaged (ZIP, MSI, ...)?
Git (or another scm) is the default way to get the application on the target machine. Alternatively you can perform a local build and copy the result over the PowerShell remoting connection.
When an application is deployed for the first time how do you setup the App Pool and Virtual Directory (do you create them manually or with some tool)?
Unfold configures the application pool and website application using PowerShell's WebAdministration module. It allows us (and you) to modify any aspect of the application pool or website.
When a static resource changes (CSS, JS or image file) do you redeploy the whole application or only the modified resource? How about when an assembly/ASPX page changes?
Yes, unfold does this; every deploy is installed next to the others, so we can easily roll back when something goes wrong. It also allows us to easily trace a deployed version back to a source control revision.
Do you keep track of all deployed versions for a given application?
Yes, unfold keeps old versions around. Not all versions, but a number of versions. It makes rolling back almost trivial.
We've been improving our release process for the past year and now we've got it down pat. I'm using Jenkins to manage all of our automated builds and releases, but I'm sure you could use TeamCity or CruiseControl.
So upon checkin, our "normal" build does the following:
Jenkins does a SVN update to fetch the latest version of the code
A NuGet package restore is done running against our own local NuGet repository
The application is compiled using MSBuild. Setting this up is an adventure, because you need to install the correct MSBuild and then the ASP.NET and MVC DLLs on your build box. (As a side note, when I had <MvcBuildViews>true</MvcBuildViews> in my .csproj files to compile the views, MSBuild was randomly crashing, so I had to disable it.)
Once the code is compiled, the unit tests are run (I'm using NUnit for this, but you can use anything you want)
If all the unit tests pass, I stop the IIS app pool, deploy the app locally (just a few basic XCOPY commands to copy over the necessary files) and then restart IIS (I've had problems with IIS locking files, and this solved it)
I have separate web.config files for each environment: dev, uat, prod. (I tried using the web transformation stuff with little success.) So the right web.config file is also copied across
I then use PhantomJS to execute a bunch of UI tests. It also takes a bunch of screenshots at different resolutions (mobile, desktop) and stamps each screenshot with some information (page title, resolution). Jenkins has great support for handling these screenshots and they are saved as part of the build
Once the integration UI tests pass the build is successful
If someone clicks "Deploy to UAT":
If the last build was successful, Jenkins does another SVN update
The application is compiled using a RELEASE configuration
A "www" directory is created and the application is copied into it
I then use winscp to synchronise the filesystem between the build box and UAT
I send an HTTP request to the UAT server and make sure I get back a 200
This revision is tagged in SVN as UAT-datetime (a sketch of the tagging command follows this list)
If we've got this far, build is successful!
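The tagging step is presumably just an svn copy; a hedged example with made-up repository URLs:
svn copy http://svn.example.com/myapp/trunk http://svn.example.com/myapp/tags/UAT-20130516-1142 -m "Tag UAT release"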
When we click "Deploy to Prod":
The user selects a UAT Tag that was previously created
The tag is "switched" to
Code is compiled and synced with Prod server
Http request to Prod server
This revision is tagged in SVN as Prod-datetime
The release is zipped and stored
All up a full build to production takes about 30 secs which I'm very, very happy with.
Upsides to this solution:
It's fast
Unit tests should catch logic errors
When a UI bug gets into production, the screenshots will hopefully show which revision caused it
UAT and Prod are kept in sync
Jenkins shows you a great release history to UAT and Prod with all of the commit messages
UAT and Prod releases are all tagged automatically
You can see when releases happen and who did them
The main downsides to this solution are:
Whenever you do a release to Prod you need to do a release to UAT. This was a conscious decision we made because we wanted to ensure that UAT is always up to date with Prod. Still, it's a pain.
There are quite a few configuration files floating around. I've attempted to have it all in Jenkins, but there are a few supporting batch files needed as part of the process. (These are also checked in.)
DB upgrade and downgrade scripts are part of the app and run at app startup. It works (mostly), but it's a pain.
I'd love to hear any other possible improvements!
Back in 2009, where this answer hails from, we used CruiseControl.net for our Continuous Integration builds, which also outputted Release Media.
From there we used Smart Sync software to compare against a production server that was out of the load balanced pool, and moved the changes up.
Finally, after validating the release, we ran a DOS script that primarily used RoboCopy to sync the code over to the live servers, stopping/starting IIS as it went.
At the last company I worked for we used to deploy using an rSync batch file to upload only the changes since the last upload. The beauty of rSync is that you can add exclude lists to exclude specific files or filename patterns. So excluding all of our .cs files, solution and project files is really easy, for instance.
We were using TortoiseSVN for version control, and so it was nice to be able to write in several SVN commands to accomplish the following:
First off, check the user has the latest revision. If not, either prompt them to update or run the update right there and then.
Download a text file from the server called "synclog.txt" that details who the SVN user is, what revision number they are uploading and the date and time of the upload. Append a new line for the current upload and then send it back to the server along with the changed files. This makes it extremely easy to find out what version of the site to roll back to on the off chance that an upload causes problems.
In addition to this there is a second batch file that just checks for file differences on the live server. This can highlight the common problem where someone would upload but not commit their changes to SVN. Combined with the sync log mentioned above we could find out who the likely culprit was and ask them to commit their work.
And lastly, rsync allows you to take a backup of the files that were replaced during the upload. We had it move them into a backup folder, so if you suddenly realised that some of the files should not have been overwritten, you could find the last backed-up version of every file in that folder.
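Putting those pieces together, the rsync invocation might have looked something like this (the excludes, paths, and backup directory are guesses):
rsync -avz --exclude='*.cs' --exclude='*.sln' --exclude='*.csproj' --backup --backup-dir=/var/backups/site/$(date +%Y%m%d-%H%M%S) ./site/ deploy@www.example.com:/var/www/site/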
While the solution felt a little clunky at the time I have since come to appreciate it a whole lot more when working in environments where the upload method is a lot less elegant or easy (remote desktop, copy and paste the entire site, for instance).
I'd recommend NOT just overwriting existing application files, but instead creating a directory per version and repointing the IIS application to the new path (see the appcmd sketch at the end of this answer).
This has several benefits:
Quick to revert if needed
No need to stop IIS or the app pool to avoid locking issues
No risk of old files causing problems
More or less zero downtime (usually just a pause as the new appdomain initialises)
The only issue we've had is resources being cached if you don't restart the app pool and rely on the automatic appdomain switch.
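On IIS 7 and later the repointing step can be scripted with appcmd; a hedged example (site name and path are hypothetical):
%windir%\system32\inetsrv\appcmd.exe set vdir "Default Web Site/" -physicalPath:C:\releases\2016-05-02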
I am using MSBuild to Publish a web site, then copy the published site to a web server on the same network. I set the copy command to "SkipUnchangedFiles."
It works swimmingly, but SkipUnchangedFiles won't work because when I use AspNetCompiler to publish the website, each and every file is "new": its date is set to the moment of publishing, so even if the contents of a given file have not changed, the timestamp is different and it gets copied over anyway.
Is there a workaround that will prevent files whose contents have not changed from being copied?
Depending on how you're publishing the site, you may be able to do Incremental Build instead of a full build.
There is no existing process for this, as the deployment process isn't aware of the deployment target's filesystem.
If it were, you could do a diff using a tool like Beyond Compare and then grab only the changed items and copy those across.
To automate this, you are probably going to have to dig into writing MSBuild targets or post-build scripts.
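One way to approximate "copy only what actually changed" is a checksum-based sync rather than a timestamp-based one; a hedged sketch assuming rsync is available (paths are placeholders):
rsync -rc --delete /path/to/publish/output/ webserver:/path/to/site/
The -c flag compares file contents, so files whose contents are unchanged are skipped even though AspNetCompiler gave them fresh timestamps.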