The autogenerated PrecompiledApp.config is causing me some headaches.
I'm automating the deployment of an older web site, and 50% of the time when I deploy I get this error:
System.IO.IOException: The process cannot access the file '\\web.prod.local\c$\Sites\Website\PrecompiledApp.config' because it is being used by another process.
Content:
<precompiledApp version="2" updatable="true"/>
To the best of my knowledge, web sites use a shadow-copy feature to allow updating the site at runtime, with things such as app.config, etc.
However, this one file seems to be an exception.
Can anyone suggest a workaround besides stopping the website while deploying?
Kind regards
Judging by the path in the error message, you're copying the files over a network share while deploying. Updating files directly over a network share, FTP, etc. is bad practice, and it is actually the cause of your error: network deployment is slow, and while some files are still being updated/uploaded, ASP.NET on the server is already trying to recycle the app, copy the files to the "Temporary ASP.NET Files" folder, and so on.
Deployment best practice:
ZIP your precompiled site, upload, then run UNZIP on the server remotely
Here's how you run UNZIP remotely:
plink -ssh -l USERNAME -pw PASSWORD web.prod.local c:\Sites\Website\unzip -q -o c:\Sites\Website\site.zip -d c:\Sites\Website\
"plink" is a free SSH tool for windows (command-line) you need it on your dev machine
"web.prod.local" is your server address.
"c:\Sites\Website\" is the path your website on the server
You need SSH installed on your server to run commands remotely, the simplest option is too install the free tool: "freesshd" (google it)
Drop "unzip.exe" on the server as well, you see it's being called right there. Simplest way is to drop it right into the c:\Sites\Website\
PS. This is just an example, you can come up with your own solution
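For example, the whole push could look something like this from the dev machine (7-Zip's 7z.exe and PuTTY's pscp are assumed to be available; names and paths are placeholders, and the exact remote path format for pscp depends on your SSH server):
rem 1) zip the precompiled output locally
7z a site.zip .\PrecompiledSite\*
rem 2) copy the single zip to the server (pscp ships alongside plink)
pscp -pw PASSWORD site.zip USERNAME@web.prod.local:c:/Sites/Website/site.zip
rem 3) unpack it in place in one quick operation
plink -ssh -l USERNAME -pw PASSWORD web.prod.local c:\Sites\Website\unzip.exe -q -o c:\Sites\Website\site.zip -d c:\Sites\Website\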
Related
I have a couple of development machines that I code my changes on and one production server where I have deployed my Symfony application. Currently my deployment process is tedious and consists of the following workflow:
Determine the files changed in the last commit:
svn log -v -r HEAD
FTP those files to the server as the regular user
As root manually copy those files to their destination and, if required because the file is new, change the owner to the apache user
The local user does not have access to the apache directories which is why I must use root. I'm always worried that something will go wrong either due to a forgotten file during the FTP or the copy to the apache src directory.
I was thinking that instead I should FTP the entire Symfony app/ and src/ directories, along with composer.json, to the server as the regular user, then come up with a script using rsync to sync all of the files.
New workflow would be:
FTP app/ src/ composer.json to the server in the local user's project directory
Run the sync script to sync the files
clear the cache
Is this a good solution or is there something better for Symfony projects?
This question is similar and gives an example of the rsync, but the pros and cons of this method are not discussed. Ideally I'd like to get the method that is the most reliable and easy to setup preferably without the need to install new software.
Basically, any automated solution would be better than plain rsync or FTP. As you mentioned, there are multiple things to do: copy files, clear the cache, run migrations, generate assets; the list goes on.
Here you will find a list of potential solutions:
http://symfony.com/doc/current/cookbook/deployment/tools.html#using-build-scripts-and-other-tools
From my experience with Symfony I can recommend capifony; it takes a while to understand, but it pays off.
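If you want to stay with a simple script for now, a minimal sketch of those steps might look like this (the host, paths and Symfony 2.x console commands are assumptions about your setup):
#!/bin/bash
# push the code, excluding generated folders
rsync -az --delete --exclude 'app/cache/' --exclude 'app/logs/' ./ deploy@prod.example.com:/var/www/myapp/
# finish the deployment on the server: dependencies, cache, assets
ssh deploy@prod.example.com 'cd /var/www/myapp \
  && composer install --no-dev --optimize-autoloader \
  && php app/console cache:clear --env=prod --no-debug \
  && php app/console assetic:dump --env=prod'
A tool like capifony automates exactly this kind of sequence for you, plus releases and rollbacks.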
When developing for ASP.NET using Visual Studio for Web, it is really convenient that when you deploy your website to the web server, it cleverly checks which files have changed and only uploads those files. In addition, if you delete some files from your source, it detects that too and deletes those files from the web server, since they are no longer needed.
I started developing with the LAMP stack and am wondering how you can deploy to a web server in a similar way.
I tried using Filezilla and on copy/pasting the source files to the web server, you have these options if there are similar files:
-Overwrite
-Overwrite if source is newer
-Overwrite if different size
-Overwrite if different size or source newer
"Overwrite if source is newer" works, kind of, but it only checks the date modified, not the content of the file. Also, the above method does not delete files from the web server that were deleted from the source.
Is there a better way to do this with Filezilla? (or maybe use some other program?)
Thanks.
You can use rsync to accomplish this.
When you want to push out changes, you would do something like this from your production server:
rsync -av user@<developmentIp>:/web/root/* /production/web/root/
The pattern is rsync --flags [user@host:]/source/dir [user@host:]/destination/dir
You only need the user@host part for remote hosts. The user must have SSH access to the host.
A couple of small suggestions:
The command can be run from the source or the destination. I find it better to run it from the destination, for permission reasons (i.e. you're reading from the remote and writing to the local).
Do some tests first; I always mix up the directory arguments: do I need the trailing slash, should I use the star, ...
Read the man page; there are a lot of available options that may be helpful (--delete, --exclude, -a).
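For example, run from the destination box, with a dry run first (the hostname and the exclude are just placeholders):
# -n (dry run) shows what would change without touching anything
rsync -avn --delete --exclude 'cache/' user@dev.example.com:/web/root/ /production/web/root/
# same command without -n to actually sync; --delete removes files that were deleted at the source
rsync -av --delete --exclude 'cache/' user@dev.example.com:/web/root/ /production/web/root/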
I've setup a Vagrant box that runs my webserver to host my Symfony2 application.
Everything works fine except the folder synchronization.
I tried 2 things:
config.vm.synced_folder LOCALFOLDER, HOSTFOLDER
config.vm.synced_folder LOCALFOLDER, HOSTFOLDER, type: "rsync"
Option 1: The first option works; I don't actually know how the files are shared, but it works.
Files are copied in both way, but the application is SUPER slow.
Symfony is generating cache files which might be the issue, but I don't really know how to troubleshoot this and see what is happening.
Option 2: Sync is only done one way (from my local machine to the Vagrant box), which covers most cases and is fast.
The issue is that when I use the Symfony command line on the Vagrant box to generate some files, they are not copied back to my local machine.
My question is:
What is the best way to get two-way syncing working? With option 1, how can I exclude some files from syncing (as that might be the issue)?
With Option 2 how can I make sure changes on remote are copied to my local machine?
If the default synced-folder strategy (VirtualBox shared folders, I imagine) is slow for your use case, you can choose a different one and, if you need to, keep the two-way sync (an NFS sketch follows this list):
If your host OS is Linux or Mac OS X, you can go with NFS.
If your host OS is Windows you can instead choose SMB.
Rsync is very fast but, as you've pointed out, is one-way only.
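A minimal Vagrantfile sketch for the NFS case (the private-network IP and guest path are just examples; NFS needs a host-only network):
# NFS keeps the two-way sync but is usually much faster than VirtualBox shared folders
config.vm.network "private_network", ip: "192.168.33.10"
config.vm.synced_folder ".", "/var/www/project", type: "nfs"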
As Vagrant doesn't seem to offer a "built-in" way to do this, here is what I did:
Configure a Vagrant rsync synced folder for the folders that will contain application-generated files (in Symfony2 it is your Bundle/Entity folder); see the Vagrantfile sketch after this list. Note that I didn't sync the root folder, because some folders don't need to be rsynced (cache/logs, ...) and also because it took way too long for the rsync process to walk all the folders/subfolders when I know that only the Entity folder will receive generated files.
As the rsync back has to be done from the Vagrant box to the host, I use the vagrant-rsync-back plugin and run it manually every time I use a command that generates code.
https://github.com/smerrill/vagrant-rsync-back#getting-started
Create a watcher on my local machine that tracks any change in the code and rsyncs it to the Vagrant box.
https://gist.github.com/laurentlemaire/e423b4994c7452cddbd2
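A rough Vagrantfile sketch of that setup (the guest path and bundle name are assumptions for a typical Symfony2 layout):
# one-way rsync just for the folder that receives generated code
config.vm.synced_folder "src/Acme/DemoBundle/Entity", "/var/www/project/src/Acme/DemoBundle/Entity", type: "rsync", rsync__exclude: [".git/"]
After running a generator inside the box, the plugin's vagrant rsync-back command pulls the results back to the host.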
Vagrant mounts your project root as the /vagrant folder inside the box as a two-way share.
You can run your commands there to get the required files synced. Any I/O will be painfully slow (as you already mentioned), but you will get your files. For everything else, use your one-way synced folder.
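For example, to generate entities straight into the two-way share (the bundle name is just a placeholder):
cd /vagrant && php app/console doctrine:generate:entities AcmeDemoBundle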
I am just starting to get my head wrapped around continuous deployment with Jenkins, but I am running into some roadblocks and I haven't really found very many good, definitive resources on the topic in regards to ASP.NET applications.
I have set up a local build server that successfully pulls down code from an SVN repo and builds it OK with MSBuild. This works well so far, but now I'd like to automate pushing this compiled code to a development server.
My problem is this: from what I gather based on what I've read (which may be an incorrect assumption...), the staging server is typically within the same network as the build server, meaning you can share network resources, servers, etc.
In my case, I want to run the Jenkins server on a remote VPS, then deploy to other remote VPSes (so, essentially individual isolated machines communicating with each other).
I have seen a lot of terms, but I am very new in my sysadmin/DevOps skills.
So, my question is this:
Is it even possible, using Jenkins on a VPS, to deploy to any particular server I choose? (I have full access to all of them, so if it's a security thing, I can fix that... but they are not within the same network/domain.)
What is the method to achieve this? I've seen xcopy, Web Deployment Packages (msdeploy), batch scripts, etc. mentioned, but not really a guidance behind what to use in what situations. Are any of these methods useful to achieve my goal?
Thanks for any help or guidance!
How is your PowerShell? ;) You should check out psake.
psake is a build automation tool written in PowerShell. It avoids the angle-bracket tax associated with executable XML by leveraging the PowerShell syntax in your build scripts. psake has a syntax inspired by rake (aka make in Ruby) and bake (aka make in Boo), but is easier to script because it leverages your existing command-line knowledge.
psake is pronounced sake – as in Japanese rice wine. It does NOT rhyme with make, bake, or rake.
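A minimal default.ps1 sketch to give you the flavour (the solution name and the deploy script are placeholders, not part of psake itself):
# run with: Invoke-psake .\default.ps1
properties {
    $configuration = 'Release'
}

Task Default -Depends Build

Task Build {
    # exec fails the build if the command returns a non-zero exit code
    exec { msbuild .\MyWebApp.sln /p:Configuration=$configuration }
}

Task Deploy -Depends Build {
    # placeholder step: call msdeploy, a copy script, or whatever fits your target servers
    exec { & .\deploy-to-dev.ps1 }
}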
You can deploy your files to the target server over SSH; Jenkins does support SSH transfers. All you need to do is set up an SSH server (e.g. CopSSH) and a user account with admin permissions, and configure Jenkins to transfer over SSH.
Create host configurations in the main Jenkins configuration
Add an SSH Server
Add the public key to the remote server (the build server)
Click "Test Configuration"
Save
Configure a job to Publish Over SSH (Post Build Action)
Add Transfer Set.
Refer to Publish Over SSH for more details.
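For example, the transfer set's "Exec command" could run something like this on the remote server once the files have landed (paths and names are placeholders):
rem unpack the uploaded build and recycle the site's app pool
c:\tools\unzip.exe -o c:\deploy\build.zip -d c:\inetpub\wwwroot\MySite
c:\windows\system32\inetsrv\appcmd recycle apppool /apppool.name:"MySite"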
We are getting frequent errors in the Event Viewer, Application section. The source is ASP.NET 4.0.30319.0, category is File Monitoring. The Event ID is 1185. Text says "Failed to start monitoring changes to "file-path-here" because the network BIOS command limit has been reached." Then there is a reference to Microsoft knowledge base article 810886.
The question is: what process or service is doing this file monitoring, and why? We are not aware of how this is running or how it started. The monitoring seems to look at various folders on our web site, some are .NET folders, some are not.
We are looking for explanation of what is causing this monitoring; then we will try to address the errors.
When ASP.NET starts running a site, it monitors one basic file in the root of the web site: app_offline.htm. If it finds it, it stops the application and serves only this file.
If it finds that other files have changed, it recompiles them if necessary, but it keeps showing app_offline.htm if it exists and does not run the site.
Once you remove app_offline.htm, the web pages start running again, but ASP.NET still monitors for this file, whether it exists or not.
So this is the ASP.NET monitoring you are looking for; it is the default behaviour of ASP.NET. If you have installed other software, or something else on the computer has set up extra monitoring, that is a different matter. Do you have a very large number of ASP.NET web sites on the same server, say 500 or more? If not, then maybe start looking for other software that sets up this monitoring of your files.
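As an aside, this behaviour is often used deliberately during deployments; a rough sketch (paths are placeholders):
rem take the site offline, copy the new build, then bring it back online
echo Site is being updated > c:\Sites\Website\app_offline.htm
xcopy /E /Y c:\deploy\newbuild\* c:\Sites\Website\
del c:\Sites\Website\app_offline.htm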
Analysis
To find out for yourself what is happening, download Handle from Sysinternals and run it, send the output to a text file, like handle.exe >> result.txt, and look at the results.
http://technet.microsoft.com/en-us/sysinternals/bb896655
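For instance, to limit the output to the IIS worker processes (assuming the worker is w3wp.exe on your IIS version):
handle.exe -p w3wp > result.txt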
Look there to see whether any suspicious program has a huge number of files open, and which program it is. Monitored files and directories are shown like this:
runningprogram.exe pid: 1352 ServerName\User
AC: File (RW-) D:\Monitor1
E8: File (RW-) D:\Monitor2
F8: File (RW-) D:\Monitor3
408: File (RWD) D:\InetPub\MySite
More
I checked my servers and found that a blog-creation program had added monitoring on every blog directory. I do not know why, but that is the way they made it: it monitors every blog for some reason. Maybe you have something similar that creates a lot of file/directory monitoring.
The monitoring is being done by IIS (or the aspnet process with IIS6). It's watching for changes to files so that the site can be recompiled when needed.
You didn't mention your environment, but I used to run into this problem frequently when trying to run websites from Windows XP when the sites were located on a remote file share. I think the error comes up due to a limitation in CIFS (the network stack for file shares). Windows Server didn't seem to have the same limitations.
So, a few possible fixes:
Switch to Windows Server (or possibly Win 7)
Switch to a Web Application (doesn't allow recompiles)
Move your files from a remote share to a local drive