I've come across a really strange scenario that I'm hoping someone else has come across (best case), or that someone with a lot of CRM experience can point me in the direction of a possible solution. I can't seem to find anything online that's close to this issue.
We have 4 environments: DEV, DEV-IST, UAT and PreProd. We progress CRM deployments in the following order: DEV -> DEV-IST -> UAT -> PreProd. We haven't gone live with our development yet, hence I've not included Production.
We progress CRM Deployments using the following steps.
1. Go to DEV.
2. Export our unmanaged solutions as managed solutions. We then copy those managed solutions into the 'PkgFolder' of the package deployer.
3. Take a backup of the DEV-IST CRM database (Organisation A) and restore it to the DEV database server as Organisation A. We then run the package deployer against Organisation A so that it now has all the changes we'd made in development. Before we do anything further, we log in to Organisation A and go to the following screen: Home -> Settings -> Customisations -> Customize the System. I can see all the custom entities under the Entities node.
4. Back up Organisation A (which now includes my dev updates) and copy the backup file to the DEV-IST database server.
5. Go to Deployment Manager on DEV-IST, disable the organisation and then delete it.
6. Go to the SQL database server for DEV-IST, restore the backup from step 4 above and overwrite the Organisation A database.
7. Go back into Deployment Manager on DEV-IST and use Import Organization to re-import Organisation A, which is now updated with my dev changes.

The above process ensures that we deploy updates with very little downtime.
However, my issue is that after step 7, when I log in to the organisation and go to Home -> Settings -> Customisations -> Customize the System... I can't see any of the custom entities!
Any ideas what might be causing this? Strangely, I've noticed that I can't see custom entities in DEV-IST and PreProd but can see them in the DEV and UAT environments.
PS. I've already tried clicking 'Publish all customizations' and it didn't have any effect.
Your deployment process is really complicated, and I can't see any reason why you are doing the customization transfer in such a complex way. You should simply be transferring Solutions with your customizations from the DEV environment to the DEV-IST environment, and then on to UAT and PreProd.
Backing up your database on one environment and restoring it on another is certainly not the faster way (you stated yourself that this process still gives you a short downtime of the environment).
Also, if you insist on doing deployments this way, you should not only remove your organization from Deployment Manager but also remove the database from the server (and then restore the backed-up database).
If you are simply customizing CRM for your client/company, you should not use managed solutions. Managed solutions are meant for reusable customizations that you want to share with or sell to others. Please read this great post by Shan McArthur, which will give you a better understanding of the problem:
https://community.adxstudio.com/blogs/shan/2014-01-17-converting-crm-solutions-from-managed-to-unmanaged/
And if you must use managed solutions, you should only do that on your final PROD environment, not on your staging environments (where you should have full control over customizations so you can provide hotfixes).
EDIT:
I would add this as a comment, but I cannot comment yet, so I have to edit my response instead. I replicated your process and I can see all the entities in all environments, so there must be something specific you are doing which is causing this behaviour. Have you checked that, after you restored the database, all the custom entities exist in the restored database? (Every entity is a separate table with the same name as the entity.) I would also compare all the solution-related tables between a database where you can see the custom entities and one where you cannot (so that would be SolutionBase, SolutionComponentBase, MetadataSchema.Entity and all the other MetadataSchema.* tables). A last experiment you can do is to save a copy of the DB where you cannot see the custom entities, remove the managed solution, import the managed solution again (the standard way) and check whether the entities are there. I bet they will be, so the next step would be to again compare the databases (all the metadata/solution tables) with a Red Gate comparison tool; maybe that will point you in the right direction.
Basically the metadata in CRM is built this way: take everything from the default solution, then apply all the managed solutions in the order they were imported, and build the resulting metadata. Your case looks like CRM somehow "forgot" to apply the managed solutions over the default solution (I believe this might be an unknown bug in Deployment Manager, which should set something during organization import). I hope you will discover this when you compare your database data between environments.
I understand that this is not a full answer and normally I would write it as a comment rather than an answer, but unfortunately I cannot comment yet :(
I finally got to the bottom of this issue and thought I'd update it here for anyone else who comes along with the same problem. 'Someone' had put a registry edit on the CRM server to make CRM return only the top 250 records on any query. As the custom entities fell beyond that number, they weren't being shown. I've since reverted this change and I'm able to see the custom entities now.
I have a very scary problem. The last couple of times I have pushed code to production, SilverStripe has marked a table or two as obsolete, even if the changes made are unrelated to that class. When I run a build for a second time the table is back, but with no rows.
The really odd thing is that this only seems to happen on our production environment (of course).
On staging and production we run sake dev/build in a post-deployment hook through Beanstalk, which is when the obsolete tables are being created.
I read in another question that this could happen because the class doesn't have $db defined or doesn't have a $has_one relationship. But that is not the case for us; the page has both set, and more.
Server configuration:
SilverStripe: 3.1 (up to date)
PHP: Dev 5.6.16, Staging 5.5.14, Production 5.5.28
MySQL: Dev 5.6.27, Staging 5.1.73, Production 5.1.73
It sounds to me like it could be a config cache of some sort.
I am not sure what other information is required to diagnose this; just let me know and I will get additional info.
I am not sure exactly what was causing this, but it looks like our automated deployment process had left behind a couple of directories and files. We have moved to Composer, deployed the affected projects from scratch, and everything is behaving now.
I'm trying to set up a deployment chain for some of our ASP.NET applications. The tool of choice is Web Deploy (msdeploy) - for now. Unfortunately I'm stuck on a problem.
A high level overview of the chain is thus:
Web developer creates the code and checks it in SVN;
Buildserver sees the update and builds the msdeploy .zip package of the website;
The .zip package is automatically put inside our installer and sent to various clients;
The clients run the installer on their webserver(-s);
The installer uses msdeploy internally to deploy the .zip package and create a new website or upgrade an existing one.
Msdeploy makes it easy to deploy a new instance, but I'm stumped about how to perform an "upgrade" install. The main problem is the web.config file. Each client will most certainly have made some customizations there to suit their specific environment. The installer itself offers to set some more critical parameters at the first-time installation (achieved by msdeploy's parameter mechanism), but they can do others by hand.
On the other hand, we developers also occasionally make changes to web.config, adding some new settings or removing obsolete ones. So I can't just tell msdeploy to ignore the file entirely. I need some kind of advanced XML modification mechanism. It could be a script that the developers maintain, but then it needs to be run ONLY at upgrades, not new installs.
I've no idea how to accomplish this.
Besides that, sometimes there's also some completely weird upgrade logic. For example, the application comes with our company logo, but some clients have replaced that .png file to show their own logo. Recently we needed to update the logo - but only for clients that hadn't replaced it with their own.
Similarly, there might be some cache folders that might need to be cleaned at SOME upgrades but not at others. Or folders with user content that may not be touched (but come with default content at the initial installation). Etc.
How do you normally achieve this dual behavior for msdeploy packages? Do I really need to create 2 distinct packages for every application?
Suggestion from personal experience:
Isolate customisations
Your customers should have the ability to customise their setup, and the best way is to provide them with something like an override file. That way you install the new package and follow by superimposing your customer's customisations on top of your standard setup. If it's a brand new install then there will be nothing to superimpose.
top-level
    standard files
        images                  <- never touched or changed by the customer
        settings.txt
    customer files
        images                  <- customer hacks this to their heart's content
        settings.txt_override
Yes, this does mean that some kind of merging process needs to happen and there needs to be some script that does that but this approach has several advantages.
For settings that suddenly become redundant just issue a warning to that effect
If a customer has their own logo provide the ability to specify this in the override file
The message is clear to customers. Stay off standard files.
If customers request more customisable settings, then during upgrades write the defaults into the override file if they do not already exist there.
Vilx, in answer to your question, the logic for knowing whether it is an upgrade or not must be contained in the script itself.
To run an upgrade script before installation
msdeploy -verb:sync -source:contentPath="C:\Test1" -dest:contentPath="C:\Test2" -preSync:runcommand="c:\UpgradeScript.bat"
Or to run an upgrade script after installation
msdeploy -verb:sync -source:contentPath="C:\Test1" -dest:contentPath="C:\Test2" -postSync:runcommand="c:\UpgradeScript.bat"
More info here
As to how you know it's an upgrade: your script could check for a text file called "version.txt", and if it exists the upgrade bat script will run, with the version number contained within the text file. A bit basic, but it should work.
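A minimal sketch of what that check could look like inside the upgrade script (the site root path and the version.txt name are assumptions, not part of any standard):

@echo off
REM UpgradeScript.bat -- rough sketch; the site root path and version.txt name are assumptions.
REM If version.txt exists we treat the run as an upgrade, otherwise as a fresh install.
set SITEROOT=C:\inetpub\wwwroot\MyApp

if not exist "%SITEROOT%\version.txt" (
    echo No version.txt found - fresh install, nothing to upgrade.
    goto :eof
)

set /p OLDVERSION=<"%SITEROOT%\version.txt"
echo Upgrading from version %OLDVERSION% ...
REM run the per-version upgrade steps here (config merges, cache cleanup, logo handling, etc.)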
This also has the added advantage of letting you merge customers' custom settings between versions more elegantly, as you know which properties could be overridden for that particular version.
These are some general suggestions (not specific to msdeploy), but I hope they help:
I think you'll need to provide several installers anyway: for the initial setup and for each version-to-version upgrade.
I would suggest letting your clients merge the config files themselves. You could provide them with a detailed description of what was added/changed/removed, and/or include a utility that simplifies the merge. Maybe this and this links will give you some pointers.
As for the replaced logos and other client customizations, I think the best approach would be to support branding your application. I mean: move all branding details to a place where your new/upgrade installers won't touch them.
As for the rest of the adjustments made by your clients, they make those at their own risk, so the only help you can provide is a detailed list of changes (maybe even the list of changed files since the previous version) and a How-To article about merging the sources with tools like Araxis Merge or similar.
Or... you could create a utility, included in the installer, which tries to do all the tricky merging on the client's machine. I would not recommend this way, as it requires a lot of effort/resources to maintain.
One more thing: you could focus on backing up the previous client copy before the upgrade. That way, even if a client has trouble upgrading, it will always be possible to roll back. The only other thing is to provide a good feedback channel your clients can use to report their troubles. This feedback will let you figure out what troubles your clients have and how to make their upgrade process more comfortable.
I would build on what the others have said, but I would do it with transformations and strict documentation about who configures what. The way you have it now relies on customer intervention against a config that is mission-critical to the app deploy process.
Create three config file areas. One for development, one for the "production generic" build, and one that is an empty template for the customer to edit.
The development instance should be self-explanatory. This is the transform that takes the production generic template and creates a web config for your development server. (It sounds like you are shooting for a CI-type process here.)
The "production generic" transform should set the app up for a hypothetically perfect instance of the app. This is what the install would look like if the architect had his way.
The customer transform is used by the customers to set up the web config as required to meet their own needs. Write some documentation and see what happens. Edit the docs as you help customers through the process.
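If you go the transform route, the customer layer can simply be a small XDT transform applied over the generic web.config at deploy time. A rough sketch (the file name, keys and values below are invented for illustration, not taken from your project):

<?xml version="1.0"?>
<!-- Web.Customer.config (hypothetical) - applied on top of the production generic web.config -->
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <appSettings>
    <!-- Customer-specific SMTP host; only this attribute is changed -->
    <add key="SmtpServer" value="mail.customer.example"
         xdt:Transform="SetAttributes" xdt:Locator="Match(key)" />
  </appSettings>
  <connectionStrings>
    <!-- Customer-specific connection string -->
    <add name="Main" connectionString="Server=CUSTOMERSQL;Database=AppDb;Integrated Security=True"
         xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
  </connectionStrings>
</configuration>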
Is that what you were looking for? Thoughts?
I'm not a WordPress developer, but I'm trying to determine the optimal way for a team of folks to work with WordPress. For a Rails project or most anything else, it's easy to work locally and deploy upstream, but my understanding is that WordPress doesn't make this quite as easy. Maybe this is a myth?
From what I gather, it's not uncommon for URLs and file paths to be stored in the database, which seems like it would make it difficult to deploy a WP project from dev --> stg --> prd (where each environment has its own URL and possibly different file paths), much less for individual developers to have their own dev environment that would need to be "merged" into a unified copy for deployment.
I could configure all developer sandboxes to use a single database, but here again, if URLs and file paths are stored then nothing is gained.
There are a series of smaller questions here, but the more I think about those, the more I realize that what I'm really asking for is advice about how to structure things for optimal development of a WordPress site that will be hacked on by a team of developers. I'd prefer the sandboxed approach we use for other projects, but I have no idea if/how things can be unified once all development is complete.
Warning: incoming wall of text..
@Rob, WP is hell when it comes to working in teams; however, with a little work (and some symlink magic) you can set up your WP projects so that the working files for your themes or plugins reside separately from the WP core. Some of this uses WP's built-in mechanisms, some of it is related to SVN externals (hint). I'll let you google that since it's outside the scope of your question.
A note on WP GUIDs
WARNING: DO NOT replace GUIDs. WP GUIDs are there for external feed readers. Feed readers use the GUID to determine whether content is recent. Changing it basically tells those readers that every entry in the feed is new (especially for posts). That introduces a lot of extra overhead for legacy content that you just don't need. GUIDs are a legacy feature that should have been changed to UUIDs a long time ago. Technically, you can use anything in the GUID field, but WP uses the permalink to populate that field -- legacy.
The only time it is ever acceptable to change the GUID is for new wp projects where content is brand-spankin' new.
To answer your question:
WP stores explicit references to the current domain in a dozen places in its DB. These locations are a pain to track down and change, and the last thing you want to do is make manual edits to a *.sql dump file that you're going to import into production. It just smacks of bad development practices.
There are a couple of ways to get around this, but it means a bit of work if you're already further down your development lifecycle. I'll address the first case.
Case 1: Project Onset
When you're starting the project, you'll likely have a development sandbox and DB ready. You'll likely have WP already installed by now, so it's essentially clean for all intents and purposes.
The first thing you're going to want to do is change how your config file works. Most folks keep with the standard wp-config.php file (beyond a team production project, there's not really any reason to edit it.) However, you can set it up with some logic to include developer-specific or environment-specific config files. For example:
wp-config.php
$current_environment = $_SERVER['HTTP_HOST'];

switch( $current_environment )
{
    case 'jack.local' : include( 'wp-config-jack.php' );   break; // Jack's sandbox
    case 'jill.local' : include( 'wp-config-jill.php' );   break; // Jill's sandbox
    default           : include( 'wp-config-remote.php' ); break; // Staging & Production
}
The next thing you're going to want to do is put the normal contents of the wp-config.php file into a wp-config-remote.php file for use on staging/production. Then edit your wp-config-remote.php file so that you can use one config file across multiple environments (staging, production). An if(...) or switch(...) block is all you need, e.g.
if( (strpos( $_SERVER[ "HTTP_HOST" ], "localhost" ) !== false) || (strpos( $_SERVER[ "HTTP_HOST" ], "local" ) !== false) )
(There are better ways to write that condition... this is just a crude example.)
Configure all of your WP settings specific to each of your remote environments. Hopefully you'll be checking this into a source control repository.
That basically frees you up to let your team have config settings specific to their environment, while letting you check in settings for each of the remote environments once.
The second thing you're going to want to do is build a mechanism to intercept and filter domain-specific links. The intent behind this mechanism is to replace any references to the current domain with a token/placeholder. I've outlined the technique to do this here: http://www.farfromfearless.com/2010/09/07/url-token-replacement-techniques-for-wordpress-3-0/
It basically amounts to creating a filter that acts on the content before it's submitted to the DB and before the content is rendered to the page. The technique is transparent in that it won't affect normal editing practices. You can still create your content in the editor, reference other pages, posts, images, etc. and they'll show up just fine while editing in different environments.
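As a rough illustration of the idea (this is not the code from the linked article; the token and function names are made up), the filters boil down to something like this in a small plugin:

// Swap the current domain for a placeholder before content hits the DB,
// and swap it back when content is rendered. The token name is arbitrary.
define( 'MYPROJECT_DOMAIN_TOKEN', '{{site_url}}' );

function myproject_tokenize_domain( $content ) {
    return str_replace( home_url(), MYPROJECT_DOMAIN_TOKEN, $content );
}
add_filter( 'content_save_pre', 'myproject_tokenize_domain' );

function myproject_untokenize_domain( $content ) {
    return str_replace( MYPROJECT_DOMAIN_TOKEN, home_url(), $content );
}
add_filter( 'the_content', 'myproject_untokenize_domain' );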
In recent projects, I've wrapped all of this and a few other WP "normalization" features into a single bootstrap plugin that I set & forget.
Case 2: Project Ongoing
Now, in your case, you're further along in your development lifecycle. It's going to take some work to replace those domain references, but if you follow the steps I've outlined above you should only ever have to do this once. The link I supplied above gives you the SQL you'll need to do that job. It's important to note that in a multi-site environment you'll need to do this for every "sub-site" you've created.
Once you've updated your DB, I suggest implementing the steps in CASE 1 so you don't have to repeat the steps again.
Bonus: synchronizing content
Synchronizing content is a pain. What I've done in recent projects is have clients work on the staging server and promote changes upstream to production. That leaves you with synchronizing downstream to your sandbox(es). Write a shell script that dumps a copy of SPECIFIC content tables from your staging DB and imports them into your sandbox DB (effectively replacing the content tables). You should be able to see the benefit of the domain-token-replacement technique here.
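A bare-bones sketch of such a script (the hosts, credentials and table list are placeholders you'd adjust for your own setup):

#!/usr/bin/env bash
# Pull the content tables from the staging DB and load them into the local sandbox DB.
STAGING_HOST="staging.example.com"
STAGING_DB="wp_staging"
SANDBOX_DB="wp_sandbox"
CONTENT_TABLES="wp_posts wp_postmeta wp_terms wp_term_taxonomy wp_term_relationships"

mysqldump -h "$STAGING_HOST" -u deploy -p"$STAGING_PASS" "$STAGING_DB" $CONTENT_TABLES > /tmp/wp_content.sql
mysql -u root -p"$SANDBOX_PASS" "$SANDBOX_DB" < /tmp/wp_content.sql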
Images that aren't checked into source control, e.g. client images, should be pushed to a common location such as an S3 bucket. There are WP plugins that can help you with that. That'll save a lot of time otherwise spent synchronizing assets across environments.
I hope this helps you out -- if not, there's always SilverStripe ;)
It's easy: we build on a dev server and move to the live server with these SQL queries:
UPDATE wp_posts SET guid = REPLACE(guid, 'devserver.com', 'liveserver.com');
UPDATE wp_posts SET post_content = REPLACE(post_content, 'devserver.com', 'liveserver.com');
UPDATE wp_options SET option_value = REPLACE(option_value, 'devserver.com', 'liveserver.com');
Last year I wrote a bash script to mirror a live MU installation into a sandbox. It's not perfect and not ideal, but it's a good starting point. It consists of mirroring the databases and files, and rewriting the mirrored database to reflect the sandbox.
See http://pp19dd.com/2011/01/bash-script-to-mirror-wordpress-mu-installation-into-a-sandbox/
It's important for developers to be able to take live and exact content snapshots to replicate conditions.
I just ran into this issue myself launching a new website. My solution was to use Vagrant. Vagrant is also platform agnostic so you could be developing on a Mac and a teammate is using Windows. Same Vagrant project runs on both.
I wrote a guide on how to set up Vagrant with WordPress from a production environment, running locally on your machine. I don't use WordPress that often, but every time I do it's always a hassle to set up Apache and PHP on my Mac and then make sure all the WordPress site URLs are updated in the database.
Once you've configured your Vagrant project, it's a single command for any developer on your team to be up and running with a local instance of WordPress. In short, Vagrant will mount your project directory from your host machine in the guest machine and run Apache, MySQL and PHP through the guest machine. You still use your host machine's IDE (as you normally would) and your host machine's browser. There is no uploading of files anywhere; it's just code, save, refresh browser, all on your local machine.
http://www.distilnetworks.com/wordpress-development-with-vagrant/ <- Explains how to set it all up with Vagrant
https://gist.github.com/markmalek/fd2e6e65385400d9cd47 <- the shell script when provisioning Vagrant
The shell script could probably be a lot better; this is what worked for me, but I would love to hear some better suggestions or ideas. I'm new to Vagrant and now use it for some of our other projects, so I thought it would be a good fit here as well.
I do update the GUID in the shell script, which I've read in another answer you shouldn't do because feed readers use it. In this case it's irrelevant since it's just for your local instance of WordPress, but I wouldn't make this change in production. See that answer for a better explanation.
Not a big deal: simply back up your whole site with the All-in-One WP Migration plugin and import it on the live server installation; the plugin will replace all URLs automatically.
I have recently deployed my web role to Windows Azure. In the properties of my WebRole I have set Enable Diagnostics.
I can also see that it correctly maps to a storage account once deployed by viewing the configuration file of the hosted service.
I have not set up anything else for diagnostics; I am not aware that I need to do anything else.
I am now setting up AzureWatch (by paraleap) to monitor my instances however it reports that WADPerformanceCountersTable does not exist.
I am very new to Azure, don't have a clue how the diagnostics work, and can't find anything on Google that shows me how. Could someone please show me the way?
OK, I figured it out and will leave this here for others to follow.
Step 1
If you follow http://dunnry.com/blog/2012/02/27/SettingUpDiagnosticsMonitoringInWindowsAzure.aspx, Windows Azure Diagnostics will start saving data into your attached blob storage, full of diagnostic information.
Special Note: these count towards your storage transactions, which is why you will see them go up.
Step 2
However, I needed the WADPerformanceCountersTable, which should have been located in the tables section of the storage account but was never created. I needed this to use services like AzureWatch to monitor and spin instances up or down.
Special Note: these are performance counters, a specific subset of diagnostic information, and they aren't stored in the blob section by default.
Step 3
In your project you need to add which performance counters to monitor in the WebRole.cs.
Special Note: You won't have this if you just added an existing project to an Azure deployment project. Unless you specifically started the project from scratch and chose the Azure templates, you will need to create it manually. You will also need to add Microsoft.WindowsAzure.Diagnostics, Microsoft.WindowsAzure.ServiceRuntime and Microsoft.WindowsAzure.StorageClient as references. The best way to see how it all works is to create a blank project from an Azure template and copy over the necessary items.
Step 4
Next you need to define which performance counters to monitor. Here is a great sample: http://code.msdn.microsoft.com/windowsazure/Windows-Azure-PerformanceCo-7d80ebf9
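For reference, the end result in WebRole.cs looks roughly like this with the 1.x diagnostics assemblies (the specific counters and intervals below are only examples, not the ones from the linked sample):

using System;
using Microsoft.WindowsAzure.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Start from the default configuration and add the counters we care about.
        DiagnosticMonitorConfiguration config = DiagnosticMonitor.GetDefaultInitialConfiguration();

        config.PerformanceCounters.DataSources.Add(new PerformanceCounterConfiguration
        {
            CounterSpecifier = @"\Processor(_Total)\% Processor Time",
            SampleRate = TimeSpan.FromSeconds(30)
        });
        config.PerformanceCounters.DataSources.Add(new PerformanceCounterConfiguration
        {
            CounterSpecifier = @"\Memory\Available MBytes",
            SampleRate = TimeSpan.FromSeconds(30)
        });

        // How often the samples are pushed to the WADPerformanceCountersTable.
        config.PerformanceCounters.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);

        DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);

        return base.OnStart();
    }
}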
Extra Reference
Microsoft also has a few steps you can follow here that might help out if things still aren't working: http://msdn.microsoft.com/en-us/library/windowsazure/hh411521.aspx
Take a look at:
http://dunnry.com/blog/2012/02/27/SettingUpDiagnosticsMonitoringInWindowsAzure.aspx
There is also a lot of information on:
http://msdn.microsoft.com/en-us/library/windowsazure/gg433048.aspx
I have been developing a Drupal 6 site on my PC using XAMPP. I'm done now, and everything looks peachy.
Problem is, I need to put all my content (including custom modules and themes) up onto a staging server which only has a fresh Drupal 6 install on it. I can't imagine having to set up all my custom content types and whatnot all over again on the staging server.
So I ask, how does one go about doing what I need to do? Which is essentially duplicating my Drupal install from my PC, to the staging server.
The staging server is running Linux, and I develop on a Windows PC, if that helps.
Thanks in advance.
Copy all the files up from development to live, mysqldump your database and load that dump on the live server. Then all you have to do is change the settings.php file to point at the right database, if for some reason 'localhost' is not also your MySQL host.
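A minimal sketch of that, assuming you have shell access on both sides (the hostnames, paths and database names here are made up):

#!/usr/bin/env bash
# Run from the local XAMPP checkout (e.g. via Cygwin/Git Bash) or adapt to your tooling.
mysqldump -u root -p drupal_dev > drupal_dev.sql

# Copy the site files and the dump to the staging server.
rsync -avz --exclude='.svn' ./ deploy@staging.example.com:/var/www/drupal/
scp drupal_dev.sql deploy@staging.example.com:/tmp/

# Then on the staging server:
#   mysql -u drupal -p drupal_stage < /tmp/drupal_dev.sql
#   and edit sites/default/settings.php so $db_url points at the drupal_stage database.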
The quickest solution is probably the backup_migrate module. It is only a tool for copying your database; you could also use phpMyAdmin or similar instead if you wanted. The backup_migrate module does have some good default settings for which tables to skip (like the cache tables). All the settings etc. that are not defined in code are stored in your DB, so you only need to copy the DB to be set. You can choose to exclude some tables, like the node or user tables, if you don't want to bring over your test data.
If you don't use Subversion, then you'll have to manually copy the files (rsync, scp, whatever) and the DB (mysqldump).
what we usually do is have a hierarchy of independent subversion repos as follows:
core
sites/all/modules/contributed
sites/all/modules/custom
sites/all/themes/ (we develop our own and don't use contributed themes)
sites/all/libraries
then we use the svn:externals properties so that if you check out "core" you get every associated repo.
we have 2 main developers, with 4 other guys who may also contribute code to the site. Each has their own local dev environment, and we all share a common sandbox where we make sure the stuff we wrote doesn't break someone else's module (it has happened before!).
we use svn commit hooks to update the beta/staging/sandbox site upon commit.
with all that setup, [re]deploying a site is a simple matter of going to the proper folder and issuing a "svn co http://repolocation/reponame ." and then updating the DB.
two last things to consider:
we are moving from svn to git
the Features module will allow you to save the changes you make to your site (views, content types, etc.) and package all that into a deployable module so you don't have to duplicate your efforts. We are also looking into using this ourselves.
I hope this helps you.
I second using backup_migrate. It's great.
When I'm installing a fresh site from development to production, I:
backup the site using backup_migrate module
copy all the files up to the server
edit the sites/default/settings.php to have the right database path and account info
do an import of the last backup_migrate dump (usually using mysql < backupfilename.sql, unless I already have Drupal set up and backup_migrate installed, in which case I use the GUI)
But take a look here for the official version:
http://drupal.org/node/776864
Now, you didn't ask, but when the site is live and users are contributing content, moving future development versions of your site from development/staging to production without blowing away live content is a whole different problem, and one that Drupal doesn't have a good answer for...
Andy-