Deploying websites with Git (or generally: a deployment workflow) - WordPress

I've been building websites for some years now, but I never cared much about a good workflow, so I did bad things like working directly on the production server.
I want to improve all of that, which is how I came across Git and tools like Wordmove (for WordPress).
I tried to visualize what I want or what I think could work:
[workflow visualization diagram]
Now I think something in there is "wrong" or "not so good" and could be done better, but I don't really know what, or how to do it.
So I have my local machine where I develop, then a Bitbucket repository, a staging server to show the customer the current status, and a production server which is the customer's live server.
I'd appreciate some help understanding how this should work. :P

As of Git 2.3 there is a feature called "push to deploy"; you can search the documentation for it or read here.
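The relevant setting is receive.denyCurrentBranch. A minimal sketch, with the server path and remote name as placeholders:

    # On the server: a non-bare clone whose working tree is the web root.
    # New in Git 2.3: a push to the checked-out branch now updates the
    # working tree instead of being rejected.
    git config receive.denyCurrentBranch updateInstead

    # On your local machine:
    git remote add production ssh://user@example.com/var/www/site
    git push production master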

What type of websites are you making? WordPress, Drupal, etc.? It looks like you're on the right track.
As that diagram shows, I'd recommend creating a development, staging, and production branch for each project and setting up a webhook for the repo that listens for pushes and deploys (and builds) accordingly. This way you can deploy to private servers to "stage" your project/features for the client before merging into production.
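As a rough, tool-agnostic illustration of the "deploy per branch" idea, the script a webhook endpoint invokes could look something like this (branch names and target paths are placeholders):

    #!/bin/bash
    # Hypothetical deploy script run by a webhook endpoint after a push
    # notification; the pushed branch name is passed as the first argument.
    BRANCH="$1"

    case "$BRANCH" in
      development) TARGET=/var/www/dev.example.com ;;
      staging)     TARGET=/var/www/staging.example.com ;;
      production)  TARGET=/var/www/example.com ;;
      *)           exit 0 ;;   # ignore pushes to other branches
    esac

    cd "$TARGET" || exit 1
    git fetch origin "$BRANCH"
    git reset --hard "origin/$BRANCH"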
stackahoy.io is built to do exactly this. It's free for 1 repo and unlimited branches. Some benefits to using Stackahoy are:
Maintain deployments for your git repositories in one place
Maintain static configuration files (stuff you keep in the .gitignore file)
Perform post-deployment scripts
Securely and instantly deploy your code based on the branch that was pushed and see real-time logs as it deploys.
Deploy to multiple servers at once (good for load-balanced applications)
Disclaimer: I work for Stackahoy and would be happy to answer any Q's.

Related

Sync WordPress plugin updates to Git repo

I have a WordPress site that I am finally getting into Git. Previously I was editing changes locally and uploading via FTP. I want to come up with a strategy to be able to do the following:
Update my Git repository with any changes made on the FTP server (plugin updates etc.), which I will then sync down into my local copy
Push locally tested changes into Git and then deploy via FTP (I know how to do this part)
I don't know the best way to approach 1, and I'm very new to using Git (I usually use SVN).
Thanks
Using a conventional development workflow on WordPress is challenging.
Because WP puts so much in the database, including site configuration, syncing changes from one environment to another is a real pain.
We have a number of WP installs (against my better judgement) and organise our workflow like this.
You might be able to lighten this process if you're not part of a dev team and are going it alone.
Local Dev
Codebase changes only, including shortcode development and templating. This lives under a Git workflow (branches, tags, etc.). Plugin/framework updates are installed here and tested before being added to the gitball.
Beta Live
All content is built here, with any code-based changes being requested as needed. Testing is carried out here: responsive, UAT, etc.
Code changes are pushed up via deployment scripts, which trigger either a Git pull or a file sync via ssh/rsync. Here's a good resource on how to use Git hooks for this.
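For example, a bare repository on the server can deploy itself from a post-receive hook. A minimal sketch (the web root path is a placeholder):

    #!/bin/bash
    # hooks/post-receive in the bare repository on the beta server:
    # check the pushed files out into the web root.
    GIT_WORK_TREE=/var/www/beta.example.com git checkout -f master

    # Alternatively, sync built files over SSH with rsync:
    # rsync -az --delete ./public/ deploy@beta.example.com:/var/www/beta.example.com/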
Live
For live deployment, we use a WP database migration plugin to get content across, and the Git repository to push tagged releases up to the live server. We also use the sync tool to pull up-to-date copies of the database down to our local dev machines.
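If you would rather script the release step, the tagging side is plain Git, and the database side can be done with WP-CLI instead of a plugin. A hedged sketch (the "live" remote, tag, filenames and URLs are placeholders, and WP-CLI is not necessarily the sync tool referred to above):

    # Push a tagged release to the live server's repository:
    git tag -a v1.4.0 -m "Release 1.4.0"
    git push live master v1.4.0

    # On the staging/beta box, export the content database:
    wp db export release-1.4.0.sql
    # On the live box, import it and rewrite environment-specific URLs:
    wp db import release-1.4.0.sql
    wp search-replace 'https://staging.example.com' 'https://www.example.com'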
I'm sure there are lots of other opinions on what's best, and how to implement them.
My opinion
The fact is that WP is a bad choice for websites that require a lot of under-the-hood attention. It was never designed for this, and consequently it's a real pain to force into a proper development cycle. If I must use a CMS, I generally use Drupal for sites that need more functionality; it's a steeper learning curve, but more than worth it.
You could follow a git flow pattern. This consists of three main branches:
master
release
develop
Perform all of your development in the develop branch, which you would build and test locally. Then, when you are ready to test on a server, merge your changes into the release branch for testing. When everything checks out, you can merge into your master branch. From there you can either manually FTP the files from the master branch like you have been, or set up a deployment hook to automate deployment to your server when changes are detected.
As your project grows you can introduce a couple of other branches to further maintain your build. You can base a new feature branch off of develop when you need to add a new feature, then merge it back into develop, following the flow back up again. You can also use a hotfix branch to take care of any bugs "on the fly" straight off the master branch.
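A minimal command-line pass through that flow (branch names as above, the remote is assumed to be origin):

    git checkout develop
    # ...develop and test locally, commit, then promote for server testing:
    git checkout release && git merge develop
    # ...once the release checks out, promote to master:
    git checkout master && git merge release
    git push origin master        # a deployment hook can fire on this push

    # A feature branch, based off develop and merged back in:
    git checkout -b feature/new-widget develop
    # ...work, commit...
    git checkout develop && git merge feature/new-widget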
Here is a good example of the git flow pattern:
http://nvie.com/posts/a-successful-git-branching-model/
This article goes into a lot more detail about the git flow pattern and is an awesome resource!
Be sure that whenever you start a new development phase you merge from master to develop so you know you are starting with the most recent version of your project.

Automated Deployment and Upgrade Strategy for ASP.Net MVC Application

I am working on an ASP.NET MVC 4 project where the same project needs to be deployed to many clients on a daily basis; each client has its own domain/subdomain, a separate app pool, and a separate database (MS SQL).
Doing each deployment manually can take at least 1-2 hours if everything goes well. Is there any way I can automate this?
Moreover, we also need to update all of the apps when a new version is released, maybe one by one or all of them at the same time. Doing this manually could take weeks, and once we have more clients it will no longer be possible to do these updates by hand.
The update involves suspending the app for some time, taking a full backup of the files and the database, updating the application code/files in the app folder, upgrading the database with a script, starting the app again, and running a diagnostic script to check whether the update was successful; if not, we need to work out what went wrong.
How can we automate these updates? Any ideas on how to approach this would be great.
As a developer for BuildMaster, I can say that this scenario, known as the "Core Version" pattern, is a common one. If you're OK with a paid solution, you can set up deployment plans within the tool that do exactly what you described.
As a more concrete example, we experience this exact situation in a slightly different way. BuildMaster has a set of 60+ extensions that rely on a specific SDK version. In our recent 4.0 release, we had to re-deploy every extension because of breaking API changes within the SDK. This is essentially equivalent to having a bunch of customers and deploying to them all at once. We have set up our deployment plans such that any time we create a new release of the SDK application, we have the option to set a variable that says to build every extension that relies on the SDK:
In BuildMaster, the idea is to promote a build (i.e. an immutable object that travels through various environments like Dev, Test, Staging, Prod) to its final environment (where it becomes the deployed build for the release). In your case, this would be pushing your MVC application to its final environment, and that would then trigger the deployments of all dependent applications (i.e. your customers' instances of your application). For our SDK, the plan looks like this:
For your scenario, you would only need the single action, "Promote Build". As I mentioned before, any dependents would then be promoted to their final environments, so all your customer deployments would kick off once that action is run during deployment. As an example, our Azure extension's deployment plan for its final environment looks like this (internal URLs redacted):
You may have noticed that these plans are marked "Shared", which means every extension we have has the exact same deployment plan, but utilizes different variables to handle the minor differences like names, paths, etc.
Since this is such an enormous topic I could go on for ages, but I think that should be sufficient for your use-case if you wanted to try it out.
There are others, but you could set up Team Foundation Server (TFS) to deploy automated builds.
http://msdn.microsoft.com/en-us/library/ff650529.aspx
I find the easiest way to do this from an MVC project is to create a publish profile.
This is done by right-clicking your project, selecting Publish, and then configuring it to your needs.
Then from TFS you create a new build definition; this kicks off a wizard which takes you through it.
There are quite a few options which would be too long to go into for every scenario.
The main change I usually find the most important is to set an MSBuild Argument to deploy with the publish profile.
This can be found at Process > Advanced > MSBuild Arguments.
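For reference, the publish-profile argument usually looks something like this (the profile name is a placeholder):

    /p:DeployOnBuild=true /p:PublishProfile=StagingProfile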
Once this is configured correctly it's a simple case of right-clicking and queuing a new build to build and deploy.
You will need a different publish profile/build configuration per deployment environment.
For backups I use a PowerShell script which can be called manually or from TFS.
You also have a drop folder in TFS which keeps a backup of x many releases.
The databases are automatically configured via SQL Server to back up; TBH I didn't set that up, it was a DB admin who is also involved with releases.
From a dev testing side I use jMeter (http://jmeter.apache.org/) to run some automated scripts that check that users can login and view certain screens, just to confirm nothing major has gone wrong. However there is usually a testing team to run more detailed tests, again not setup by me.
All of the above will probably take you some time to set up, but in the long run it will literally save you weeks of time over a year.
A free alternative to TFS is http://www.cruisecontrolnet.org/, I have used this in the past too and is pretty good.
You can automate your .Net deployments with Beanstalk, which will give you a way to trigger deployments with a single click, watch progress, manage permissions and see history of deployments. Check out this guide on the topic:
http://guides.beanstalkapp.com/deployments/deploy-dotnet.html
I hope you will find it useful.
P.S. - I work at Beanstalk.

Automated Deployment of ASP.NET Website with a Continuous Deployment model

For our company websites we develop in a style that would be most accurately called Continuous Deployment (http://toni.org/2010/05/19/in-praise-of-continuous-deployment-the-wordpress-com-story/)
We have a web farm, and every site is located on two servers for failover purposes. We wish to automate the process of deploying to our dev server, test server, and prod servers. Our websites are ASP.NET websites (not web applications), so we don't do builds, we just push the pages. We generally change our main site anywhere from 2-20 times a day depending on many variables. These changes can range from simple text/HTML changes to adding new pages/functionality.
Currently we use TFS for our source control, and each site is its own repository. We don't have branches; you manually push your changes to dev (we develop on local machines) when they need to be reviewed, then after sign-off you check in your changes and push them to prod manually (this ensures that the source files are what is actually on prod and merges happen right then). Obviously there is a lot of room for error here, and it happens, infrequently, that a dev forgets a critical file and brings down the entire site.
It seems that most deployment tools are built upon a build process, which isn’t something we are likely to move to since we need to be extremely agile in meeting our business needs. What we’d really like, we think, is something like the following:
Dev gets latest version of website from source control, which is essentially a copy of production
Dev does work on his own box.
When it is ready to be reviewed by the task owner, he creates some form of changeset (either checking in the change or something else) and a tool would then push that change to dev
After review (and further possible changes) dev then is able to have the tool push all the changes for this task to test (UAT)
After this sign off the changes are automatically pushed to prod and checked into the prod branch, or whatever, for anyone else to get
We’ve toyed around with the idea of having three branches of the code in TFS, but we aren’t sure how you would pull from one branch (prod) but check in to a dev branch (none of us are TFS gurus).
So, how have other people handled this scenario? We are not wedded to TFS as a source control provider if there is something out there that would do what we wanted, nor do we require a vertical solution, we are more than happy to plug in different components as long as they can be plugged in, or even develop our own if we have to.
Thanks in advance.
I'm sure you can do this with TFS, but I've successfully done what you're describing with Mercurial, and it works pretty well.
With Mercurial (I'd assume Git is the same way), you can pull from one repo, and push to another. So you'd pull from Prod, do your change, commit locally, push to Dev, and then whenever you're happy, push the changesets into Prod (either from Dev or from Local, doesn't matter).
To get started, you'd have a Prod repo, then clone it into a Dev repo - both of these are on your server. At this point they're identical.
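In command-line terms, the workflow above looks roughly like this (server paths and repo names are placeholders):

    # One-time setup on the server:
    hg clone /repos/prod /repos/dev

    # Day to day, on your own box:
    hg pull ssh://server//repos/prod       # start from what is on Prod
    hg update
    # ...make changes, then:
    hg commit -m "Fix header layout"
    hg push ssh://server//repos/dev        # stage on Dev for review
    # once signed off:
    hg push ssh://server//repos/prod       # promote the changesets to Prod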
TortoiseHG on Windows gives you the ability to easily choose which repository you're synchronizing with. You can keep both the Prod and Dev repos in your list, and decide which one to use when it's time to pull or push.

How should I set up a Development to Production Plone workflow?

I worked with Plone 2 several years ago, but its workflow and security systems (not to mention the new theming engine) seem the best fit for a current project. Plone 4 looks much better from a usability standpoint, but in its drive not to force a particular setup on users, it has become a bit of a maze as to how best to host it on our public-facing Ubuntu servers. I’ve got Plone installed using the Unified Installer (in /home/plone/Plone/zinstance), and have been running it in the foreground for development, but I’m now going to need to set things up for the move to making this website live.
I want to achieve a development instance on the project server that we can use for our own customisations (with additional local instances on our desktops for testing before checking into the development instance), then a site that our client can test things on before they go live, and finally the live website.
In particular:
I’m assuming that the live site will be based on ZEO, while testing and development will be standalone; does that make sense?
At present, everything lives in /home/plone/Plone, while all our other (generally PHP CMS) sites are hosted from /srv/<domain>/www; everything for the domain is then backed up in one go. What's the best way to lay out the different Plone instances to fit into this?
What’s the best way to move our changes to buildouts and products through the cycle from development to testing and then to a live site? We currently have a basic deployment process using Mercurial for other systems.
Current best practice is to use buildout to define a fully reproducible deployment.
Generally, you use a production.cfg and a development.cfg configuration that share configuration via inclusion, to tweak the deployment to the specific needs of either environment; you can expand on this model as needed. A big project might have a staging.cfg to deploy new features to a test environment first, for example.
Whether or not you use ZEO in development depends entirely on your use cases. Most development setups certainly do not need it, but in a large deployment that includes async workers you may find you need a ZEO server when developing anyway. Again, buildout will make switching your development environment around for changing needs easier, especially when combined with supervisord to manage the various daemons used in the setup.
For the biggest deployment I manage, we use a combination of Subversion and Git, but only for historical reasons. This is all being moved to one Git repository. In the end, the repository will hold the complete buildout with its various configurations: development.cfg, staging.cfg and per-production-machine configuration files (one for each server in the production cluster). For a 3-machine ZEO setup, that'd be one zeo.cfg plus instances-1.cfg and instances-2.cfg, for example. The core development eggs are stored in the same repository in src/.
Development takes place exclusively in per-feature and per-issue branches. Once complete, a branch is merged into the staging branch, and the staging server is updated and restarted. Once the customer has signed off on each merged branch, and a roll-out has been planned, we merge the approved branches to the main branch and tag a release. The production cluster machines are then switched to that new tag and restarted.
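In Git terms, that cycle looks roughly like this (the branch, tag and config-file names are illustrative, not our actual ones):

    git checkout -b feature/1234-new-portlet main
    # ...develop, then stage for customer sign-off:
    git checkout staging && git merge feature/1234-new-portlet
    # after sign-off and roll-out planning:
    git checkout main && git merge feature/1234-new-portlet
    git tag -a 2.5.0 -m "Release 2.5.0"
    git push origin main 2.5.0
    # on each production cluster machine:
    git fetch && git checkout 2.5.0 && bin/buildout -c instances-1.cfg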
We also use Jenkins for continuous integration testing; both the main and the staging branches are put through their tests at least once a day, letting us catch problems early.
All daemons are managed by a dedicated supervisord, which is started via a crontab entry (@reboot), not the native OS init.d structure. This way we can manage the running daemons entirely with buildout. The buildouts even generate logrotate configurations and munin monitoring plugins; these are symlinked into the OS locations as needed.
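The crontab entry amounts to something like this (the buildout path is a placeholder, and the buildout is assumed to generate a bin/supervisord script):

    @reboot cd /srv/plone/buildout && bin/supervisord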
Because a buildout is entirely self-contained, it doesn't matter where you set up the production buildout on a machine. Do pick something consistent, document it and stick to that. Make sure you have sufficient disk space for your daemon needs and otherwise don't worry too much about it.
ZEO or not - for production or development - depends on personal needs and preferences. Many devs use ZEO for production and development and many do not. ZEO is needed for scaling (multiple app servers)
Nobody cares...install Plone where you want it or need it...re-running buildout after moving an installation fixes relocation issues. It is your system, not ours...you decide.
Usually buildout configurations live in a repository and can be pulled onto the production server from the repository; otherwise you need to copy over the related buildout files yourself. A typical installation works like this (roughly sketched in shell after this list):
run virtualenv
check out the buildout configuration from Git/Subversion
run bootstrap
run buildout
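A rough shell equivalent of those steps (the repository URL, directory and config-file names are placeholders):

    virtualenv ./py
    git clone https://example.com/project-buildout.git
    cd project-buildout
    ../py/bin/python bootstrap.py -c production.cfg
    bin/buildout -c production.cfg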

Does anybody create installers to deploy internal asp.net web applications?

I've always deployed my web applications via FTP (sometimes even xcopy), and then manually run database scripts myself.
I started deploying this way in the '90s, but lately I've seen a few web apps with installers. I'm starting to question whether I'm locked into an outdated process. I'm a consultant and my apps are usually internal, so I don't worry about distributing them and having others install them.
But I'm curious; does anybody create installers to deploy internal asp.net web applications?
If so, why? (Voluntarily, mandated, or part of an automation process)
And have you had any problems doing it this way?
Absolutely. We use it for all of our apps. That way we create the installer and run it in the QA and UAT environments to test, and we know exactly what is going to happen in production. There are no guesses as to what order someone might do something in, or whether they missed a step. It makes things a lot easier.
Ooh, I forgot about the automated process too. We have systems in place (Ant Hill Pro) which automatically deploy it to the proper environments. The QA people don't have to wait for something to be done, because it's all done at 2 am. If they need to rerun the build with updates, the devs check the code in and we push a button, and it's automatically deployed. No waiting for the build engineer, because he's in a meeting or sick or whatever.
You always want to have an automated way to build and deploy - it greatly reduces the chances of a one-off error if you forget a certain step. Also, it allows you to offload the deploy to someone else easily without having to teach them 100 customized steps. Whether the project is internal or not, all applications should follow best practices.
Personally I'm a bit like the OP; generally I just deploy using FTP, but in saying that typically my applications are internal, or in the case of other projects, 100% managed by me.
I've also been thinking about this lately however, and have started to think about how using proper deployment may improve the process - having to document a detailed install process can be a real pain.
I use PowerShell and found it really easy to automate lots of tasks. It will probably feel a bit different at the very beginning, but in the end you will see that it's all about the power of the .NET libraries!
I have used the "Web Setup Project" to create an MSI that installed the output of a "Web Deployment Project" for an internal app. Our server admin wasn't up to the task of doing a 50-step manual install. For my current app, my server admin doesn't like the 'black box' feel of MSI installers and prefers getting a pile of files and a 50-step deployment manual. (See a pattern here? Ask your server admin what he wants.)
The Web Setup Project doesn't make it immediately obvious how to install to anything other than the "Default Website"; other than that, it made the installation process repeatable and created a built-in way to roll back (by just running the installer from one version ago).
This of course assumes that your virtual directory doesn't hold any user-modified content; I wouldn't trust an MSI to properly merge user-created and new files.
We use the "XCopy" deploy model here, since the Ops folks have their own method of setting up security on a new web application on the server.
However, we did need to use an installer when we had to install a web application that was using a newer version of Crystal Reports since it had to do something special with a key and we didn't have a full blown version of CR on the server itself. So keep that in mind when working with third party apps, they may need to do some kind of merge module that the MSI handles easily.
Yep... we have an app that needs a lot of prerequisites set up: a web service, a Windows service, user accounts, security, folder creation, GAC bits, etc. I rolled it all up into a nice MSI with custom actions that can install and uninstall cleanly. It saved about an hour's worth of work per deployment on a new box.
A lot of the other smaller apps are just deployed by doing Publish Website to a local folder then ftp'ing the contents to the target.
It greatly depends upon the scale of your project, your environment, and your internal user base. I rarely deploy with an MSI because we are too small an operation to have multiple environments (except for SharePoint, which is different altogether). We develop and use VS to deploy web apps to a development box, and assuming they are approved we then use VS again to deploy to the live box.
The only proviso is that we have multiple copies of the web.config (suffixed with test, dev and live), and we then delete the suffix off the relevant file depending upon where it's been deployed.
It's probably not the best methodology (I know it's not), but it works and it aids rapid deployment of small to medium sized solutions in a small-scale user environment.
F5ToDebug...
You're saying it's OK to take shortcuts if you don't have time to do it properly?
"who's going to test the code on the test environment?" You said it yourself that you have config files for _test - why would that not be a suitable test?

Resources