I have a very scary problem. The last couple of times I have pushed code to production, SilverStripe has marked a table or two as obsolete, even when the changes made are unrelated to that class. When I run the build a second time the table comes back, but with no rows.
The really odd thing is that this only seems to happen on our production environment (of course).
On staging and production we run sake dev/build in a post-deployment hook through Beanstalk, which is when the obsolete tables are being created.
I read in another question that this could happen when the class has no $db fields or $has_one relationships defined, but that is not the case for us; the page has both set, and more.
Server configuration:
SilverStripe: 3.1 (up to date)
PHP: Dev 5.6.16, Staging 5.5.14, Production 5.5.28
MySQL: Dev 5.6.27, Staging 5.1.73, Production 5.1.73
It sounds to me like it could be a config cache of some sort.
I am not sure what other information is required to diagnose this; just let me know and I will provide additional info.
I am not sure exactly what was causing this, but it looks like our automated deployment process had left behind a couple of directories and files. We have moved to Composer, redeployed the affected projects from scratch, and everything is behaving now.
Related
I've been using EasyAdminBundle in a Symfony 4 application generated using Flex (composer req admin) for quite some time now. Recently I added a few entities to the existing EasyAdmin configuration. In my development environment they show up as expected, but as soon as I switch to production these new entries are gone. I have already cleared/warmed the cache, checked for production-specific configuration files, read the Symfony docs several times, watched the EasyAdmin tutorial series on KnpUniversity and even deployed to a second server. ;-)
There must be some kind of production-specific configuration I'm totally missing. Does anyone have a hint where to look?
I really appreciate any idea! Thanks!
I've come across a really strange scenario that I'm hoping someone else has run into (best case), or that someone with a lot of CRM experience can point me towards where to look for a possible solution. I can't seem to find anything online that's close to this issue.
We have four environments: DEV, DEV-IST, UAT and PreProd. We progress CRM deployments in the following order: DEV -> DEV-IST -> UAT -> PreProd. We haven't gone live with our development, hence I've not included Production.
We progress CRM Deployments using the following steps.
1. Go to DEV.
2. Export any unmanaged solutions as managed solutions. We then copy those managed solutions into the 'PkgFolder' of the package deployer.
3. We then take a backup of the DEV-IST CRM (Organisation A) database and restore it to the DEV database server as Organisation A. We then run the package deployer on Organisation A so that it now has all the changes we made in development. Before we do anything further, we log in to Organisation A and go to the following screen: Home -> Settings -> Customisations -> Customize the System. I can see all the custom entities under the Entities node.
4. I now back up Organisation A (which now includes my dev updates) and copy the backup file to the DEV-IST database server.
5. I go to Deployment Manager on DEV-IST, disable the organisation and then delete it.
6. Now I go to the SQL database server for DEV-IST, restore the backup from step 4 above and overwrite the Organisation A database.
7. I go back into Deployment Manager on DEV-IST and use the import organisation option to re-import Organisation A, which now has my dev updates. The above process ensures that we deploy updates with very little downtime.
However, my issue is that after step 7, when I log in to the organisation and go to Home -> Settings -> Customisations -> Customize the System... I can't see any of the custom entities!
Any ideas what might be causing this? Strangely, I've noticed that I can't see the custom entities in DEV-IST and PreProd but can see them in the DEV and UAT environments.
P.S. I've already tried clicking 'Publish all customizations' and it didn't have any effect.
Your deployment process is really complicated and I can't see any reason why you are doing the customization transfer in such a complex way. You should simply be transferring Solutions with your customizations from the DEV environment to the DEV-IST environment, and then to UAT and PreProd.
Backing up your database on one environment and restoring it on another is certainly not a faster way (you stated that this process still gives you a short downtime of the environment).
Also, if you insist on doing deployment this way, you should not only remove your organization from Deployment Manager but also remove the database from the server (and then restore the backed-up database).
If you are simply customizing CRM for your client/company, you should not use managed solutions. Managed solutions are meant for reusable customizations that you want to share with or sell to others. Please read this great post by Shan McArthur, which will help you better understand the problem:
https://community.adxstudio.com/blogs/shan/2014-01-17-converting-crm-solutions-from-managed-to-unmanaged/
And if you must use managed solutions, you should only do so on your final PROD environment, not on your staging environments (where you should have full control over customizations so you can provide hotfixes).
EDIT:
I would add this as a comment, but I cannot comment yet, so I have to edit my response instead. I replicated your process and I can see all the entities in all environments, so there must be something specific that you are doing which is causing this behaviour.
Have you checked that, after you restored the database, all the custom entities exist in the restored database? (Every entity is a separate table with the same name as the entity.) I would also compare all the solution-related tables in the database where you can see the custom entities against the database where you cannot (that would be SolutionBase, SolutionComponentBase, MetadataSchema.Entity and all the other MetadataSchema.* tables).
A last experiment you can do is to save a copy of the db where you cannot see the custom entities, remove the managed solution, import the managed solution again (the standard way) and check if the entities are there. I bet they would be, so the next step would be to compare the databases again (all the metadata/solution tables) with a comparison tool such as Red Gate's; maybe that would point you in the right direction.
Basically the metadata in CRM is built this way: take everything from the default solution, then apply all the managed solutions in the order they were imported, and build the resulting metadata. Your case looks like CRM somehow "forgot" to apply the managed solutions over the default solution (I believe this might be an unknown bug in Deployment Manager, which should set something during organization import). I hope you will discover this when you compare your database data between environments.
I understand that this is not a valid response and normally I would not write it as an answer but as a comment, but I cannot comment yet, unfortunately :(
I finally got to the bottom of this issue and thought I'd post the resolution here for anyone else who comes along with the same problem. 'Someone' had made a registry edit on the CRM server to get CRM to return only the top 250 records on any query. As the custom entities fell beyond that limit, they weren't being shown. I've since reverted the change and I'm able to see the custom entities now.
I'm not a WordPress developer, but I'm trying to determine the optimal way for a team of folks to work with WordPress. For a Rails project or most anything else, it's easy to work locally and deploy upstream, but my understanding is that WordPress doesn't make this quite as easy. Maybe this is a myth?
From what I gather, it's not uncommon for URLs and file paths to be stored in the database, which seems like it would make it difficult to deploy a WP project from dev --> stg --> prd (where each environment has its own URL and possibly different file paths), much less for individual developers to have their own dev environment that would need to be "merged" into a unified copy for deployment.
I could configure all developer sandboxes to use a single database, but here again, if URLs and file paths are stored then nothing is gained.
There are a series of smaller questions here, but the more I think about those, the more I realize that what I'm really asking for is advice about how to structure things for optimal development of a WordPress site that will be hacked on by a team of developers. I'd prefer the sandboxed approach we use for other projects, but I have no idea if/how things can be unified once all development is complete.
Warning: incoming wall of text..
@Rob, WP is hell when it comes to working in teams; however, with a little work (and some symlink magic) you can set up your WP projects so that the working files for your themes or plugins reside separately from the WP core. Some of this uses WP's built-in mechanisms; some of it relies on SVN externals (hint). I'll let you google that since it's outside the scope of your question.
A note on WP GUIDs
WARNING: DO NOT replace GUIDs. WP GUIDs are there for external feed readers. Feed readers use the GUID to determine whether the content is recent. Changing it basically tells those readers that every entry in the feed is new (especially for posts). That introduces a lot of extra overhead for legacy content that you just don't need. GUIDs are a legacy feature that should have been changed a long time ago to UUIDs. Technically you can use anything in the GUID field, but WP uses the permalink to populate that field -- legacy.
The only time it is ever acceptable to change the GUID is for new WP projects where the content is brand-spankin' new.
To answer your question:
WP stores explicit references to the current domain in a dozen places in its DB. These locations are a pain to track down and change, and the last thing you want to do is make manual edits to a *.sql dump file that you're going to import into production. It just smacks of bad development practices.
There are a couple of ways to get around this, but it means a little bit of work if you're already further down your development lifecycle. I'll address the first case.
Case 1: Project Onset
When you're starting the project, you'll likely have a development sandbox and DB ready. You'll likely have WP already installed by now, so it's essentially clean for all intents and purposes.
The first thing you're going to want to do is change how your config file works. Most folks stick with the standard wp-config.php file (beyond a team production project, there's not really any reason to edit it). However, you can set it up with some logic to include developer-specific or environment-specific config files. For example:
wp-config.php
$current_environment = $_SERVER[ 'HTTP_HOST' ]; // e.g. jack.local, jill.local
switch( $current_environment )
{
    case 'jack.local' : include( 'wp-config-jack.php' ); break; // Jack's sandbox
    case 'jill.local' : include( 'wp-config-jill.php' ); break; // Jill's sandbox
    default           : include( 'wp-config-remote.php' ); break; // Staging & Production
}
The next thing you're going to want to do is move the normal contents of the wp-config.php file into a wp-config-remote.php file for use on staging/production. Next, edit your wp-config-remote.php file so that you can use one config file across multiple environments (staging, production). An if(...) or switch(...) block is all you need, e.g.
if( (strpos( $_SERVER[ "HTTP_HOST" ], "localhost" ) !== false) || (strpos( $_SERVER[ "HTTP_HOST" ], "local" ) !== false) )
(There are better ways to write that condition... this is just a crude example.)
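For illustration, here is a minimal sketch of what a wp-config-remote.php might look like; the host names, database names and credentials below are placeholders of my own, not values from any real setup:

<?php
// wp-config-remote.php -- shared by staging and production (sketch only).
if ( strpos( $_SERVER[ 'HTTP_HOST' ], 'staging.example.com' ) !== false ) {
    // Staging settings (placeholders)
    define( 'DB_NAME',     'wp_staging' );
    define( 'DB_USER',     'wp_staging_user' );
    define( 'DB_PASSWORD', 'staging-password' );
    define( 'DB_HOST',     'staging-db.internal' );
} else {
    // Production settings (placeholders)
    define( 'DB_NAME',     'wp_production' );
    define( 'DB_USER',     'wp_production_user' );
    define( 'DB_PASSWORD', 'production-password' );
    define( 'DB_HOST',     'production-db.internal' );
}
// ...followed by the rest of the usual wp-config.php contents
// (table prefix, auth keys/salts, ABSPATH / wp-settings.php bootstrap), as noted above.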
Configure all of your WP settings specific to each of your remote environments. Hopefully you'll be checking this into a source control repository.
That basically frees you up to let your team have config settings specific to their environment, while letting you check in settings for each of the remote environments once.
The second thing you're going to want to do is build a mechanism to intercept and filter domain-specific links. The intent behind this mechanism is to replace any references to the current domain with a token/placeholder. I've outlined the technique to do this here: http://www.farfromfearless.com/2010/09/07/url-token-replacement-techniques-for-wordpress-3-0/
It basically amounts to creating a filter that acts on the content before it's submitted to the DB and before the content is rendered to the page. The technique is transparent in that it won't affect normal editing practices. You can still create your content in the editor, reference other pages, posts, images, etc. and they'll show up just fine while editing in different environments.
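The linked article has the full write-up, but the idea can be sketched as a tiny plugin like the one below; the token name and the specific hooks used here are illustrative choices of mine, not necessarily what the article uses:

<?php
/*
Plugin Name: Domain Token Replacement (sketch)
*/

// Placeholder stored in the DB instead of the real domain (name is arbitrary).
define( 'DOMAIN_TOKEN', '{{SITE_URL}}' );

// Before post content is written to the DB, swap the current domain for the token.
add_filter( 'content_save_pre', function ( $content ) {
    return str_replace( home_url(), DOMAIN_TOKEN, $content );
} );

// When content is rendered, swap the token back for this environment's domain.
add_filter( 'the_content', function ( $content ) {
    return str_replace( DOMAIN_TOKEN, home_url(), $content );
} );

Because the swap happens on save and on render, editors never see the token; links keep working in every environment.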
In recent projects, I've wrapped all of this and a few other WP "normalization" features into a single bootstrap plugin that I set & forget.
Case 2: Project Ongoing
Now, in your case, you're further along in your development lifecycle. It's going to take some work to replace those domain references, but if you follow the steps I've outlined above you should only ever have to do this once. The link I supplied above gives you the SQL you'll need to do that job. It's important to note that in a multi-site environment, you'll need to do this for every "sub-site" you've created.
Once you've updated your DB, I suggest implementing the steps in CASE 1 so you don't have to repeat the steps again.
Bonus: synchronizing content
Synchronizing content is a pain. What I've done in recent projects is have clients work on the staging server and promote changes upstream to production. So then, that leaves you with synchronizing downstream to your sandbox(es). Write a shell script that dumps a copy of SPECIFIC content tables from your staging DB and imports them into your sandbox DB (effectively replacing the content tables). You should be able to see the benefit of the domain-token-replacement technique.
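As a rough sketch of that dump-and-import step, written in PHP to match the rest of this answer; the host names, credentials and the exact list of "content" tables are assumptions you would adjust for your own setup:

<?php
// sync-content.php -- pull the content tables from staging into a sandbox DB (sketch).
// The table list is a guess at what counts as "content"; extend it to suit (wp_term_relationships, etc.).
$tables = 'wp_posts wp_postmeta wp_comments wp_terms wp_term_taxonomy';
$dump   = 'mysqldump -h staging-db.example.com -u wp_readonly -pSTAGING_PW wp_staging ' . $tables;
$import = 'mysql -h 127.0.0.1 -u root -pLOCAL_PW wp_sandbox';

// Stream the dump straight into the sandbox DB, replacing those tables.
passthru( $dump . ' | ' . $import, $exit_code );
exit( $exit_code );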
Images that aren't checked into source control, e.g. client-uploaded images, should be pushed to a common location, e.g. an S3 bucket. There are WP plugins that can help you with that. That'll save a lot of time having to synchronize assets across environments.
I hope this helps you out -- if not, there's always SilverStripe ;)
It's easy: we build on a dev server and move to the live server with these SQL queries:
UPDATE wp_posts SET guid = REPLACE(guid, 'devserver.com', 'liveserver.com');
UPDATE wp_posts SET post_content = REPLACE(post_content, 'devserver.com', 'liveserver.com');
UPDATE wp_options SET option_value = REPLACE(option_value, 'devserver.com', 'liveserver.com');
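One caveat if you go this route: some rows in wp_options (and some postmeta) hold PHP-serialized arrays, and a plain REPLACE() can corrupt them when the old and new domains differ in length, because the recorded string lengths no longer match. Below is a hedged sketch of a serialization-safe pass over wp_options, run from any script that bootstraps WordPress; the domains mirror the queries above, everything else is an assumption:

<?php
// Replace the old domain inside wp_options without breaking serialized values (sketch).
global $wpdb;

$from = 'devserver.com';
$to   = 'liveserver.com';

$rows = $wpdb->get_results( "SELECT option_id, option_value FROM {$wpdb->options}" );
foreach ( $rows as $row ) {
    $value    = maybe_unserialize( $row->option_value );
    // map_deep() (WP 4.4+) walks arrays/objects and applies the callback to each leaf value.
    $replaced = map_deep( $value, function ( $item ) use ( $from, $to ) {
        return is_string( $item ) ? str_replace( $from, $to, $item ) : $item;
    } );
    if ( $replaced !== $value ) {
        $wpdb->update(
            $wpdb->options,
            array( 'option_value' => maybe_serialize( $replaced ) ),
            array( 'option_id'    => $row->option_id )
        );
    }
}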
Last year I wrote a bash script to mirror a live MU installation into a sandbox. It's not perfect and not ideal, but a good starting point. It consists of mirroring the databases, files and rewriting the mirrored database to reflect the sandbox.
See http://pp19dd.com/2011/01/bash-script-to-mirror-wordpress-mu-installation-into-a-sandbox/
It's important for developers to be able to take live and exact content snapshots to replicate conditions.
I just ran into this issue myself launching a new website. My solution was to use Vagrant. Vagrant is also platform agnostic so you could be developing on a Mac and a teammate is using Windows. Same Vagrant project runs on both.
I wrote a guide on how to set Vagrant up with WordPress from a production environment running locally on your machine. I don't use WordPress that often, but every time I do it's always a hassle to set up Apache and PHP on my Mac and then make sure all the WordPress site URLs are updated in the database.
Once you've configured your Vagrant project, it's a single command for any developer on your team to be up and running with a local instance of WordPress. In short, Vagrant will mount your project directory from your host machine in the guest machine and run Apache, MySQL and PHP through the guest machine. You still use your host machine's IDE (as you normally would) and your host machine's browser. There is no uploading of files anywhere; it's just code, save, refresh browser, all on your local machine.
http://www.distilnetworks.com/wordpress-development-with-vagrant/ <- Explains how to set it all up with Vagrant
https://gist.github.com/markmalek/fd2e6e65385400d9cd47 <- the shell script when provisioning Vagrant
The shell script could probably be a lot better, this is what worked for me but I would love to hear some better suggestions or ideas. I'm new to Vagrant and now use it for some of our other projects so thought it would be a good fit here as well.
I do update the GUID in the shell script, which I've read in another answer you shouldn't do because feed readers use it. In this case it's irrelevant since it's just for your local instance of WordPress, but I wouldn't make this change in production. See this answer for a better explanation.
Not a big deal: simply back up your whole site with the All-in-One WP Migration plugin and import it on the live server installation; the plugin will replace all URLs automatically.
At work we currently use the following deployment strategy:
1. Run a batch script to clear out all Temporary ASP.NET files.
2. Run a batch script that compiles every ASPX file into its own DLL (ASP.NET Web Site, not Web Application).
3. Copy each individually changed file (ASPX and DLL) to the appropriate folder on the live server.
4. Open up the Deployment Scripts folder and run each SQL script (table modifications, stored procs, etc.) manually on the production database.
5. Say a prayer before going to sleep (joking on this one, maybe).
6. Test first thing the next morning and hope for the best - fix bugs as they come up.
We have been bitten a few times in the past because someone will forget to run a script, or think they ran something when they didn't, or overwrite a sproc related to some module because there are two files (one in a Sprocs folder and one in a [ModuleName]Related folder), or copy the wrong DLL (since they can have the same names, with a random alphanumeric suffix generated by .NET).
This seems highly inefficient to me - a lot of manual things and very error prone. It can sometimes take 2-3 or more hours for a developer to perform a deployment (we do them late at night, like around midnight) due to all the manual steps and remembering what files need to be copied, where they need to be copied, what scripts need to be run, making sure scripts are run in the right order, etc.
There has got to be an easier way than taking two hours to copy and paste individual ASPX pages, DLLs, images, stylesheets and the like and run some 30+ SQL scripts manually. We use SVN as our source control system (mainly just for update/commit though, we don't do branching) but have no unit tests or testing strategy. Is there some kind of tool that I can look into to help us make our deployments smoother?
I did not go through all of it, but the "You're deploying it wrong" series from Troy Hunt might be a good place to look.
Points discussed in the series :
Config transforms
Build Automation
Continuous Integration
We have four stages before it can be deployed.
Development
QA
UAT
Production
We have build scripts (inside bamboo build server) running against QA and against UAT. Our DBA is the only person who can run create scripts against QA, UAT, and PROD. Anything that goes from QA -> UAT is like a test run deployment. UAT gets reverted by copying the production systems down again.
When we release into Production we just create a whole new site, point it at the UAT database and test that environmentally it is working fine. Then, when this is working well, we flick the 'switch', point the production IIS record at the new site, and change the DB connection to point at the Prod DB.
Because we are using a completely different folder structure, all of our files get copied up, so there is no chance of missing one. Because we have had test runs of the deployment into UAT, we know we haven't missed a DB script (DB scripts are generally combined into one). Because we have tested a shadow copy of the IIS website, we know that environmentally it should work. We can then do all this set-up during the day and do the final switch-flicking at midnight or whenever, reducing the impact on devs.
tl;dr: Automated build and deploy; UAT system for test-running the deployment; deployment during work hours; flick switch/run DB update at midnight.
I am a developer for BuildMaster, a tool which can very easily automate the steps you have outlined above, and we have a limited version free for a team of 5 developers.
Most of your pain points will disappear the moment you set up the deployment automation - mainly the batch script execution and the file-by-file copying. Once you're fully automated, you can even schedule the deployment for night time and only have to worry about it if there's an error in the process (you can set up a notifier for a failed build).
On the database side, you can integrate your database with BuildMaster as well and if you upload the scripts into the tool it will keep track of which ones were run against which database.
To see how to set up a simple web application deployment plan, you can run one of the example applications included. You can also check out: http://inedo.com/support/tutorials/lunchmaster/part-1 to see how to create one yourself - it's slightly outdated since we've made it even easier to get started out-of-the-box but the main concepts are the same.
Please see this blog post and associated talk by Scott Hanselman titled "Web Deployment Made Awesome"
Blog
Video
As for SQL Deployment, you might want to consider one of the following:
RikMigrations
Migrator.NET
FluentMigrator
Mantee Introduction & Source
Have a User Acceptance Testing (UAT) environment which is completely isolated from your development environment, and only accessible to the UAT manager.
Set up a UAT build which you can manually trigger upon each release. When triggered, this should send all your deployment files as well as a deployment checklist to the UAT manager, who will redeploy all files to the UAT environment and run any database upgrade scripts.
Once the applications users and testers have signed off the UAT release, the UAT manager can be authorised to deploy to the PRODUCTION environment using the exact same procedure and checklists as the UAT release. This will guarantee that you never miss any deployment steps, and test the deployment process prior to moving it into production.
Caveats: I'm in an environment where we can't use MSI, batch files, etc. for the final deployment.
Things that helped:
A build server that does the full compilation and runs all unit tests and integration tests. Why find out you have something in an aspx page that doesn't compile on deployment night? (I admit your question doesn't make it clear whether compilation is happening on deployment night.)
I have a page that administrators can reach that exercises environment and deployment failure points, e.g. connect to db, connect to reporting services, send an email, read and write to the temp folder.
Also, put all the things that the administrator needs to change into a file external to web.config. The connection strings and app settings sections natively support a way to do this (i.e. don't reinvent the web.config system just to create a separate file).
Here is an article on how to do better integration tests: http://suburbandestiny.com/Tech/?p=601 There is a ton of good literature on how to do unit tests, but often, if your app already exists, you will have to refactor until unit testing becomes possible. If that isn't an option, then don't be a purist; put together some integration tests that are as fast and repeatable as possible.
Keep your dependencies in bin instead of the GAC, since it's easier to tell an administrator to copy files than it is to teach them to administer the GAC.
Can you help me understand how to do Drupal website deployment and development?
Suppose I developed version 1.0 of the Berty&Frank website. I copied everything to their production server and it is alive and kicking now. The site is already full of content and is growing.
I am asked to add additional features to the website. I am now experimenting with how I can implement them in a dev version. I am creating/deleting content types, filling the created nodes with demo data just to see how they look, etc. Now I have found the approach I want, and I want to upgrade the production website to the same structure as my dev version. How do I do that?
Is the only way to manually repeat every change I made in the dev version?
I would explore the Aegir project for the future management of your website. It allows you to clone a site, then to upgrade the site to a new "platform" which could be the next release of Drupal or another Drupal system (such as OpenAtrium).
More can be found at the aegir wiki.
You can export/import views and content types, but a lot of settings etc. are stored in the db. This gives you a few options:
One is to use something like Backup & Migrate to import your settings from dev. This won't work if you have test data, though, as you would overwrite the db.
The other option is to manually repeat on the live site what you did in dev.
A third option could be to take a fresh dump of the live site, make all the settings changes in that db in the dev environment, and overwrite the live db with it. You could lose some comments etc., but that shouldn't be a big deal.
I use Subversion, and just do an update on my production server when I am satisfied with the code on my development server (actually, I have a staging server that is a duplicate of the production machine, so I update that before production and can catch any bugs that might pop up).
For database changes, I haven't found anything better than just keeping track of my changes (usually adding/modifying CCK fields) and performing the same changes to the production database. I also download my production database regularly, so that dev and staging have almost the same content. That helps to minimize the confusion.
Read http://www.drupal.org/upgrade/