I am currently working on a Drupal project. Being a developer, I have often heard disagreements on whether it is better to develop on a local stack or on a shared development server. To anyone here with Drupal experience, do you have any insight or advice for me?
There's nothing Drupal-specific about this.
Everybody should work on their own machine, in a local development environment. There are plenty of options for making your own computer run a website locally; start with MAMP, maybe.
Use Git (and the Gitflow workflow) to commit your changes to a repo that your co-workers can pull from into their own local environments.
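A minimal sketch of that day-to-day loop, assuming a Gitflow-style `develop` branch and a shared `origin` remote (branch and remote names are illustrative):

```bash
# Branch off develop, commit locally, publish for co-workers to pull
git checkout -b feature/my-change develop
# ...edit and test in your local environment...
git add -A
git commit -m "Describe the change"
git push -u origin feature/my-change   # co-workers can now fetch and review it
```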
Use deployment routines or webhooks to have your changes pulled onto the live server automatically on every release.
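One common way to wire that up is a server-side Git hook; here's a sketch, assuming the live server hosts a bare repo and the site's web root is the illustrative /var/www/site:

```bash
#!/bin/sh
# hooks/post-receive in the bare repository on the live server:
# every push checks the latest code out into the web root
GIT_WORK_TREE=/var/www/site git checkout -f master
```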
When developing, things always move in two directions (see the sketch after this list):
Database copies move down (from live to dev to local).
Code changes move up (from local to dev to live).
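In Drupal terms, the "database down" half might look like this with Drush, assuming you have site aliases @live and @self configured (a sketch, not the only way):

```bash
drush sql-dump --result-file=backup.sql   # snapshot your local DB first
drush sql-sync @live @self                # pull the live database down into local
```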
What good can come of multiple people working on one and the same code instance at once? Nothing. Imagine all the conflicts that arise when two people work on one file at the same time. Insane!
I have a simple WordPress website built with the Twenty Twelve theme. I need to transfer it from the server to my PC so that I can edit it, then transfer it back to the server. Now that I have copied the WordPress directory from the server to my PC, it asks me to reinstall. How can I do this? Am I missing something? Is it possible to work on the website locally and, once it is done, transfer it to the server?
My reasons for doing this are:
- I don't want to lose data by doing something wrong.
- I want to have a copy of the website on my local machine.
- It's easier to work on the local machine offline than to be online accessing the server.
From my point of view, I would suggest you back up the full website and database from the server (every hosting control panel has an option to do this).
Then connect via FTP and edit your files directly on the server; that way there aren't lots of changes to make when you want to move from local to server or server to local.
Cheers!
Editing the files on a live site directly is a terrible, terrible idea. It invites any number of points of failure. If you're actively developing a site things will break at some point - it's part of the developing process - and you definitely want to break things locally, not on a live site.
There are actually several steps involved, all too long to go into in great detail, but here's an overview of what is required along with a few links to get you started. It seems laborious the first time, but once you've figured it all out it actually only takes 10 minutes.
First, you will need a local MySQL and PHP environment to install it into. As you're on a PC, look at WAMP; here's a fairly good tutorial on installing it.
If you have a lot of content on the live site you might want to import it into your local environment; you'll need to export the database (probably using phpMyAdmin) and then import it into your local database. You will also need to update a couple of database options so that they point to your localhost. It's basically the reverse of the process detailed here: you'll be changing your-site.com to localhost:8888. If your site is relatively simple you could skip this stage.*
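For the export/import step, a command-line alternative to phpMyAdmin (a sketch; database names and credentials are illustrative):

```bash
# On the live server (or download the dump your host's panel produces):
mysqldump -u live_user -p live_db > site.sql
# On your PC, after creating an empty database in WAMP:
mysql -u root -p local_db < site.sql
```

One caveat: WordPress stores some URLs inside serialized PHP arrays, so a plain text search-and-replace on the dump can corrupt data; a tool like WP-CLI's `wp search-replace` handles the serialization for you.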
Now you should be able to update your local copy of wp-config.php with the database connection details for your local database (usually it's just localhost for the host name and root for the user and password). With that in place you should now be able to install WP.
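If you happen to have WP-CLI installed, you can set those values without hand-editing the file (a sketch; the values shown are common local defaults, so adjust to your setup):

```bash
wp config set DB_NAME local_db
wp config set DB_USER root
wp config set DB_PASSWORD root
wp config set DB_HOST localhost
```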
Now that it's installed you can edit away to your heart's content on your local copy, safe in the knowledge that anything you do on your local copy doesn't affect your live copy. When you're ready to push your changes live you can use FTP to copy your local files to your live environment.
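If your host also allows SSH, rsync is a less error-prone way to push than plain FTP (a sketch; the user, host, and paths are illustrative):

```bash
# Push local files up, excluding the config that differs between environments
rsync -avz --exclude wp-config.php ./ user@example.com:/var/www/html/
```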
* For a lot of projects I don't actually go to the trouble of syncing databases; if there's a live site whose content the client can change, it often becomes a futile exercise to sync a database with a moving target. In those instances I'll just use a comprehensive set of test content that contains every conceivable type of content that could possibly be inserted into the live site.
You're going to love working on your project locally. Here's an article I found on WPMU DEV (http://premium.wpmudev.org/blog/how-to-install-wordpress-locally-for-pcwindows-with-xampp/) which gives you a step-by-step guide for PC/Windows.
For our company websites we develop in a style that would be most accurately called Continuous Deployment (http://toni.org/2010/05/19/in-praise-of-continuous-deployment-the-wordpress-com-story/)
We have a web farm and every site is located on two servers for failover purposes. We wish to automate the process of deploying to our dev server, test server, and prod servers. Our websites are ASP.NET websites (not web applications), so we don't do builds; we just push the pages. We generally change our main site anywhere from 2 to 20 times a day depending on many variables. These changes can range from simple text/HTML changes to adding new pages or functionality.
Currently we use TFS for our source control, and each site is its own repository. We don't have branches; you manually push your changes to dev (we develop on local machines) when they need to be reviewed, then after sign-off you check in your changes and push them to prod manually (this ensures that the source files are what is actually on prod, and merges happen right then). Obviously there is a lot of room for error here, and it happens, infrequently, that a dev forgets a critical file and brings down the entire site.
It seems that most deployment tools are built upon a build process, which isn’t something we are likely to move to since we need to be extremely agile in meeting our business needs. What we’d really like, we think, is something like the following:
The dev gets the latest version of the website from source control, which is essentially a copy of production
Dev does work on his own box.
When it is ready to be reviewed by the task owner, the dev creates some form of changeset (either by checking in the change or something else) and a tool then pushes that change to dev
After review (and possible further changes), the dev can have the tool push all the changes for this task to test (UAT)
After sign-off, the changes are automatically pushed to prod and checked into the prod branch, or whatever, for anyone else to get
We’ve toyed around with the idea of having three branches of the code in TFS, but we aren’t sure how you would pull from one branch (prod) but check in to a dev branch (none of us are TFS gurus).
So, how have other people handled this scenario? We are not wedded to TFS as a source control provider if there is something out there that would do what we want, nor do we require a vertical solution; we are more than happy to plug in different components as long as they can be plugged in, or even to develop our own if we have to.
Thanks in advance.
I'm sure you can do this with TFS, but I've successfully done what you're describing with Mercurial, and it works pretty well.
With Mercurial (I'd assume Git is the same way), you can pull from one repo, and push to another. So you'd pull from Prod, do your change, commit locally, push to Dev, and then whenever you're happy, push the changesets into Prod (either from Dev or from Local, doesn't matter).
To get started, you'd have a Prod repo, then clone it into a Dev repo - both of these are on your server. At this point they're identical.
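A sketch of that setup in Mercurial commands (server paths and URLs are illustrative):

```bash
# On the server: Dev starts life as an identical clone of Prod
hg clone /srv/hg/prod /srv/hg/dev

# On a developer machine:
hg clone ssh://server//srv/hg/prod site && cd site
# ...edit, then commit locally...
hg commit -m "Describe the change"
hg push ssh://server//srv/hg/dev      # publish for review
hg push ssh://server//srv/hg/prod     # promote once signed off
```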
TortoiseHG on Windows gives you the ability to easily choose which repository you're synchronizing with. You can keep both the Prod and Dev repos in your list, and decide which one to use when it's time to pull or push.
I worked with Plone 2 several years ago, but its workflow and security systems (not to mention the new theming engine) seem the best fit for a current project. Plone 4 looks much better from a usability standpoint, but in its drive not to force a particular setup on users, it has become a bit of a maze as to how best to host it on our public-facing Ubuntu servers. I’ve got Plone installed using the Unified Installer (in /home/plone/Plone/zinstance), and have been running it in the foreground for development, but I’m now going to need to set things up for the move to making this website live.
I want to achieve a development instance on the project server that we can use for our own customisations (with additional local instances on our desktops for testing before checking into the development instance), then a site that our client can test things on before they go live, and finally the live website.
In particular:
I’m assuming that the live site will be based on ZEO, while testing and development will be standalone; does that make sense?
At present, everything lives in /home/plone/Plone, while all our other (generally PHP CMS) sites are hosted from /srv/<domain>/www; everything for the domain is then backed up in one go. What’s the best way to layout the different Plone instances to fit into this?
What’s the best way to move our changes to buildouts and products through the cycle from development to testing and then to a live site? We currently have a basic deployment process using Mercurial for other systems.
Current best practice is to use buildout to define a fully reproducible deployment.
Generally, you use a production.cfg and a development.cfg configuration that share configuration via inclusion, to tweak the deployment to the specific needs of either environment; you can expand on this model as needed. A big project might have a staging.cfg to deploy new features to a test environment first, for example.
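A minimal sketch of how that sharing works, assuming a base.cfg holding the common parts (file names are illustrative; the instance options shown are from the common plone.recipe.zope2instance recipe):

```bash
cat > development.cfg <<'EOF'
[buildout]
extends = base.cfg

[instance]
debug-mode = on
verbose-security = on
EOF
bin/buildout -c development.cfg   # build the development flavour
```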
Whether or not you use ZEO in development depends entirely on your use cases. Most development setups certainly do not need it, but in a large deployment that includes async workers you may find you need a ZEO server when developing anyway. Again, buildout will make switching your development environment around for changing needs easier, especially when combined with supervisord to manage the various daemons used in the setup.
For the biggest deployment I manage, we use a combination of Subversion and Git, but only for historic reasons; this is all being moved to one Git repository. In the end, the repository will hold the complete buildout with its various configurations: development.cfg, staging.cfg, and per-production-machine configuration files (one for each server in the production cluster). For a 3-machine ZEO setup, that'd be one zeo.cfg plus instances-1.cfg and instances-2.cfg, for example. The core development eggs are stored in the same repository in src/.
Development takes place exclusively in per-feature and per-issue branches. Once complete, a branch is merged into the staging branch, and the staging server is updated and restarted. Once the customer has signed off on each merged branch, and a roll-out has been planned, we merge the approved branches to the main branch and tag a release. The production cluster machines are then switched to that new tag and restarted.
We also use Jenkins for continuous integration testing; both the main and the staging branches are put through their tests at least once a day, letting us catch problems early.
All daemons are managed by a dedicated supervisord, which is started via a crontab entry (@reboot), not the native OS init.d structure. This way we can manage the running daemons entirely with buildout. The buildouts even generate logrotate configurations and Munin monitoring plugins; these are symlinked into the OS locations as needed.
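The crontab entry for that is a one-liner (the path is illustrative, matching the Unified Installer layout mentioned in the question):

```bash
# crontab -e, then add:
@reboot /home/plone/Plone/zinstance/bin/supervisord
```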
Because a buildout is entirely self-contained, it doesn't matter where you set up the production buildout on a machine. Do pick something consistent, document it, and stick to that. Make sure you have sufficient disk space for your daemons' needs and otherwise don't worry too much about it.
ZEO or not, for production or development, depends on personal needs and preferences. Many devs use ZEO for both production and development, and many do not. ZEO is needed for scaling (multiple app servers).
Nobody cares...install Plone where you want it or need it...re-running buildout after moving an installation fixes relocation issues. It is your system, not ours...you decide.
Usually buildout configurations live in a repository and can be pulled onto the production server from there; otherwise you need to copy over the related buildout files yourself somehow. A typical installation works like this (a concrete sketch follows the list):
run virtualenv
checkout the buildout configuration from git/subversion
run bootstrap
run buildout
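Concretely, those steps might look like this (the repository URL and configuration name are illustrative):

```bash
virtualenv ./venv
git clone https://example.com/yourorg/site-buildout.git buildout
cd buildout
../venv/bin/python bootstrap.py
bin/buildout -c production.cfg
```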
I am switching away from Dreamweaver to Aptana. I am using CodeIgniter (PHP).
I'm trying to figure out a good environment setup. It is a small shop, 3 users. We have 2 external servers: one for test builds of sites, and the other is the live server for the sites.
I am used to Dreamweaver, where you can open a file from the remote site and it pulls it to your local machine. This does not happen in Aptana: it will open the remote file but then just saves it back to the remote site.
So I need to come up with a way for the users to keep their files up to date, as multiple people are working on the same sites.
The only thing I have figured out so far is to just work off the remote connections node and, at the end of the day, do a sync so your local copy has the latest files.
I would prefer to stay away from local testing environments. We have a testing server and that is what it is for.
thanks
Sounds like a good opportunity to introduce version control like git there. Aptana has some git integration built in so getting started with that should not be too difficult.
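A minimal shared setup might look like this (a sketch; server paths and host names are illustrative, and it assumes the site's files have already been committed once):

```bash
# On your test server: a bare repository everyone shares
git init --bare /srv/git/site.git

# On each developer's machine:
git clone ssh://testserver/srv/git/site.git site
cd site
# ...edit locally, then:
git add -A
git commit -m "Describe the change"
git pull --rebase    # pick up the other developers' work
git push             # publish yours
```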
I'm accustomed to writing desktop applications in C#/.NET, where I have a nice little solution folder which is also under version control. So at any time, on any computer, I can check out whatever version of my software I want, run the compiler, and have a working copy of my program.
Now I'm looking into developing websites, where the files and data are a lot more dispersed. I'm using ASP.NET, but really my question is more general and could apply to any website framework.
I'm trying to understand the proper work-flow between developing my website, a version control server, and the actual live website that users will see. Obviously this can vary a lot depending on the type and scale of the website, but I'm only considering a pretty simple site. I'm just getting started with this stuff.
The diagram below shows my current idea. All the source files for the site would be stored on a subversion server, which I would check out onto my local computer. My local computer would have a local database which I would use for development of the site. Next I would publish to a test version on my hosted server, which would point to a separate test database. This test database may periodically be replaced by a copy of the live database.
If all goes well I would then publish to a beta version of the site which points to the live data. Users could then check out the beta version to provide feedback. Finally if there are still no problems the source files for the live site would be updated.
Does this make sense? Does anyone have any comments on how this could be improved? Are there any good books or online tutorials available on developing these kinds of workflows?
Also, the one thing that I'm really not sure about is how to manage changes to the actual schema of the database. I figure with each version I could generate a SQL script that can be used to update the Test and Live databases on the host. However, I'd also like to be able to easily set up a new database for any version of my site without having to run every update SQL script for every version up to the desired version. Is the best solution to use an ORM like NHibernate or SubSonic, so I could always generate my database schema directly from my code?
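One way to get both properties without committing to an ORM, offered as a sketch: keep a folder of numbered scripts plus a table recording the last one applied, so a new database is built by replaying the folder from the start. Shown here with SQL Server's sqlcmd since you're on ASP.NET; the server, database, and folder names are illustrative:

```bash
# Build any environment by replaying the numbered scripts from the start;
# for an existing database, track the last-applied number in a version table
# and start the loop from there instead.
for f in migrations/0*.sql; do
    sqlcmd -S myserver -d MySite -i "$f"
done
```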
I currently use a workflow very similar to this. It's not flawless, but for the most part it works well. It sounds like you pretty much have the important parts figured out.
Where I work we have a powerful web server. Our subversion repos also live on this server. For myself, as a LAMP developer, I do all of my development on my local Linux machine (or VM) with a local MySQL database, operating on a local working copy.
At all times I maintain two primary branches of the application: A Dev branch (trunk) and a Live branch (which reflects the current production version). My repo looks like this:
/repo/trunk/ [My current active development version]
/repo/archive/ [All other versions not in active development live here]
/repo/archive/2010.12/ [This happens to be the current production version]
On the web server we maintain three separate instances of the software: Live (or Production), Beta, and Dev. The following should illustrate how we use these versions:
Dev - Always points to our development version at `/repo/trunk`. Uses a non-versioned config file to point to the development database. Development is never physically done here, however; instead we work on our local machines, where our working copies point to the trunk, and do all our development testing locally. After several commits from multiple developers, though, it's important to test on the Dev server to make sure we're on the right track and no one is breaking someone else's changes.
Live - Always points to the most recent stable production version. In this case, it points to /repo/archive/2010.12/. This version is world-accessible.
Beta - Most of the time, Beta will be a mirror of what's on Live and points to the same production version in our repository (/repo/archive/2010.12). Any time we need bug fixes on Live, we make them here, test them, and then commit and update Live. (We also merge them into the trunk, if necessary.)
When the version in the trunk is deemed complete and ready for testing, I create a new archive branch in the repository for the upcoming release (i.e. 2011.01) by svn-copying the existing production branch. Then I merge the trunk into the new branch and commit, so we have a version that mirrors what's currently on Dev. Of course, active development for the next release can now continue on Dev, while we beta test this new archived version on Beta. When beta testing is complete, we commit any fixes and switch Live to the new version (/repo/archive/2011.01).
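The branch-and-merge step in Subversion commands (a sketch; the repository URL is illustrative):

```bash
# Branch the current production version for the upcoming release
svn copy http://svnserver/repo/archive/2010.12 \
         http://svnserver/repo/archive/2011.01 \
         -m "Create the 2011.01 release branch"

# Merge trunk into the new branch in a working copy, then commit
svn checkout http://svnserver/repo/archive/2011.01 release-2011.01
cd release-2011.01
svn merge http://svnserver/repo/trunk .
svn commit -m "Merge trunk into 2011.01 for beta testing"
```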
Now, you've probably figured out that the database merging is a bit trickier. We use MySQL and, to my knowledge, there's no suitable versioning system for MySQL. So with each migration (VM to Dev, Dev to Beta, Beta to Live) I'll first make a backup of the current database, then make the necessary changes. Personally I use a commercial version of SQLyog, which allows me to synchronize the database schema (super handy).
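The backup half of that migration step can be as simple as this (the database name is illustrative):

```bash
# Snapshot schema and data, including stored routines and triggers,
# before touching anything
mysqldump -u root -p --routines --triggers live_db > live_db_$(date +%F).sql
```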