I asked about developer setup for Tridion DTAP (development, test, acceptance, and production) in another question, but I understand the example didn't match a typical DTAP scenario.
Chris Summers explains the CM-side well in his Fifth Tridion Environment post. But for clarity, could I get help understanding the ideal setup?
CMS environments
D = Development
T = Test
A = Acceptance
P = Production
I understand that authors typically use only Production, publishing from the CMS Production environment to "Live" and "Staging." The other environments are for development.
CD?
Does "Live and Staging" apply to each of the other environments--does that mean 8 content delivery setups (per website)?
And if so, where is it okay to consolidate--fewer CMS environments? Fewer target types?
I believe you should have Live and Staging in every environment. The main reason I see for this (assuming you use SiteEdit only in your staging environment) is to validate that SiteEdit syntax is output only to the correct targets.
It is not uncommon to have code that checks which target you are publishing to and alters the output accordingly. If this is the case, it is essential to test with the same collection of targets and target types that the production environment has.
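To illustrate, here is a minimal sketch of that pattern as a .NET template building block; the class name and the "Staging" target title are my assumptions, while the TOM.NET Transform signature and the PublishingContext property are the standard way to inspect the current target:

    using Tridion.ContentManager.Templating;
    using Tridion.ContentManager.Templating.Assembly;

    // Hypothetical TBB that varies output by publication target.
    // If logic like this exists, every environment needs the same set of
    // targets/target types, or this branch can never be tested before Production.
    public class TargetAwareTbb : ITemplate
    {
        public void Transform(Engine engine, Package package)
        {
            // PublicationTarget can be null during CMS preview, so guard first.
            var target = engine.PublishingContext.PublicationTarget;

            // "Staging" is an assumed target title; match it to your own setup.
            if (target != null && target.Title == "Staging")
            {
                // e.g. only emit SiteEdit markup when publishing to this target
                package.PushItem("IsStaging",
                    package.CreateStringItem(ContentType.Text, "true"));
            }
        }
    }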
Additional reasons for having matching target setups in the lower environments include the need to validate security models, where rights to deploy to certain targets are given to distinct users or groups, and the "Minimal Level of Approval" feature on targets when combined with workflow.
Yes. This is the model enterprises want, and we often build it as a standard practice so that a consistent environment model is maintained across all environments.
SiteEdit and workflow play a big role in the Staging environment and, if they are being used, must be present in every environment.
Can you consolidate CMS environments? Obviously yes, by using more target types/CD setups per CMS, but there are implications for your DTAP model (e.g., combining one CMS for both QA and UAT).
I'm setting up a data warehouse in my company. In my experience, the initial tables you insert data into, before you transform it into something user-friendly, are called staging tables.
However, the tech team here uses dev, staging, and production environments for software development. I've asked, and they said that seeing something called *_staging in a production environment would seem really confusing to them.
This naming clash must be a common one, so I'm wondering: is it normal practice to just put up with it, or is there a standard alternative name?
In traditional data warehouse design these are usually called "staging tables", although they are sometimes called "raw tables". You might mitigate the confusion by using the common contraction "stg" instead of "staging".
Increasingly, and especially in big data projects, the term "data lake" is used to refer to the repository that stores copies of source system data. A "data lake" is essentially persistent staging with some direct access for analytics allowed.
I've never worked with any CMS and simply want to play with one. Since I come from a .NET background, I was thinking about choosing Orchard Core CMS.
Let's imagine a very simple scenario: together with my colleague, I'd like to create a blog. As I'm used to working on web-based business systems and applications, it's normal for me to work with a code repository, multiple environments (dev/test/stage/prod), CI/CD, and database changes via migrations or scripts.
Now the question is: do I need all of this when working on our blog with a CMS?
To be more specific, a few questions:
Should I create the blog locally (on my PC) using the CMS, write a few articles, and then deploy it to the web? Or should I create the blog on the internet and add articles directly in the prod environment?
How do I synchronize databases between environments (dev/prod)?
I can add that, as I do not expect many visitors on the website, I was thinking of using Orchard Core CMS together with SQLite. I also expect to customize code, add new modules, extend existing ones, etc., not only add content (articles). Please take that into consideration when answering the question.
So basically my question is: what should the workflow be for a person who wants to create, administer, and maintain a CMS site (let it be a blog), either alone or as a team?
Should I work and create content locally, then publish it and somehow synchronize both the application and the database (the database is my main question mark, especially how to do that properly using SQLite)?
Or should all the changes, code and content, simply be managed directly on a server, let's call it the production environment?
Excuse me if the question is silly or hard to understand, but I'm looking for any advice, as I really didn't find any good examples or information about this; or maybe I'm searching in a totally wrong direction.
Thanks in advance.
Great question, not at all silly ;)
When dealing with a CMS, you need to think about the data/content in very different terms from the code/modules, despite the fact that the boundary between them is not always completely obvious.
For Orchard, the recommendation is not to install modules in production, but to have a dev - staging - production type of environment: install new modules on a dev environment, test them in staging, and then deploy to production when it's safe to do so. Depending on the scale of the project, staging may be skipped for a more agile dev-to-prod setting, but the idea remains the same, and is not very different from any modular application.
Then you have the activation and configuration of the settings of the modules you deploy. Because in a CMS like Orchard, those settings are considered data and stored in the database, they should be handled like content. This includes metadata such as the very shape of the content of your site: content types are data.
Data is typically not deployed like code is, with staging and prod environments (although it can be, to a degree; more on that in a moment). One reason for this is that a CMS will often feature user-provided data, such as reviews, ratings, comments, or usage stats. Synchronizing all that in both directions is very impractical. Another, even more important reason is that the very point of using a CMS is to let non-technical owners of the site manage content themselves in a fast and direct manner.
The difference between code and data is also visible in the way you secure their changes: for code, usual source control is still the rule, whereas for the content, you'll set up database backups.
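On the backup side, and since you mention SQLite: here is a minimal sketch of an online backup using Microsoft.Data.Sqlite's BackupDatabase API. The database path is an assumption; check your site's App_Data folder for the actual file name.

    using System;
    using System.IO;
    using Microsoft.Data.Sqlite;

    // Hypothetical one-off backup tool for a SQLite-backed site.
    // The source path is an assumption; Orchard Core keeps its SQLite
    // database under App_Data (per tenant), so adjust it to your install.
    class SqliteBackup
    {
        static void Main()
        {
            var stamp = DateTime.UtcNow.ToString("yyyyMMdd-HHmmss");
            Directory.CreateDirectory("backups");

            using var source = new SqliteConnection(
                "Data Source=App_Data/Sites/Default/yessql.db");
            using var destination = new SqliteConnection(
                $"Data Source=backups/site-{stamp}.db");

            source.Open();
            destination.Open();

            // Online backup: safe to run while the site is serving requests.
            source.BackupDatabase(destination);
        }
    }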
Also important to mention is the structure of the database. You typically don't have to worry about this until you write your own modules: Orchard comes with a rich data migration feature that makes sure the database structure gets updated with the code that uses it. So don't worry about that, the database will just update itself as you deploy code to production.
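As a rough illustration of what such a migration looks like in module code (a sketch assuming Orchard Core's DataMigration base class; the "BlogPost" content type is hypothetical):

    using OrchardCore.ContentManagement.Metadata;
    using OrchardCore.ContentManagement.Metadata.Settings;
    using OrchardCore.Data.Migration;

    // Each numbered step runs once per environment; Orchard records the
    // current version in the database and applies only the missing steps,
    // which is why the schema "updates itself" as you deploy code.
    public class Migrations : DataMigration
    {
        private readonly IContentDefinitionManager _contentDefinitionManager;

        public Migrations(IContentDefinitionManager contentDefinitionManager)
        {
            _contentDefinitionManager = contentDefinitionManager;
        }

        public int Create()
        {
            // "BlogPost" is a hypothetical content type for this example.
            _contentDefinitionManager.AlterTypeDefinition("BlogPost", type => type
                .Creatable()
                .Draftable());

            return 1; // schema version after this step
        }

        public int UpdateFrom1()
        {
            // A later release would put its changes here and return 2.
            return 2;
        }
    }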
Finally, I must mention that some CMS sites do need to be able to stage content and test it before exposing it to end users. There are variations of that: in some cases, being able to draft and preview content items is enough. Orchard supports that out of the box: any content type can be marked draftable. When that is not enough, there is an optional feature called Deployments that enables rich content deployment workflows that can be repeated, scheduled, and validated. An important point concerning that module is that the deployment only applies to the subset of the site's content you decide it should apply to (and excludes, obviously, stuff like user-provided content).
So in summary, treat code and modules as something you deploy in a one-way fashion from the dev box all the way to production, with ordinary source control and deployment methods, and treat data depending on the scenario: from simple direct-in-production database instances with a good backup policy, to drafts stored in production, and then all the way to complex content deployment rules.
We are a small team of 3 developers. We have a mix of classic ASP code and ASPX pages. All the code is contained in one solution with multiple projects. We are currently not using any version control software; we have just installed TFS 2013 and want to move to using its version control. Our current environment is set up as follows.
Development environment - new code or changes to existing code.
Test Environment - once the code from development passes unit testing, it is moved here to allow users to test changes.
Staging Environment - this is a mirror of production. Once the users have accepted the changes in test, we migrate the code here to test and make sure it works against the mirror copy of the database (SQL).
Production Environment - code is not modified in this environment.
All of this is done manually, and now that our staff has grown from 1 to 2 to 3 developers over the last 6 months, we need to make use of version control. What we are not sure of is how to implement this same environment using TFVC. Do we need to install TFS in each environment and keep 4 separate copies of the code? And how do we migrate the code between environments using TFS? We need help and suggestions on how to set this up. We want to keep it simple, since there are only 3 of us.
Normally you would have one TFS server that holds the sources for all of your environments. Many people implement a branching strategy to support different versions of source code deployed as part of different releases or in different staging environments.
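As a minimal sketch (the folder names are assumptions, not anything TFS requires), a small team could start with something like:

    $/YourTeamProject
        /Main          <- stable code; Test, Staging and Production builds come from here
        /Dev           <- day-to-day work; merged into Main after unit testing
        /Releases
            /Release1  <- branched from Main at go-live, used for hotfixes

Branches are created and merged with standard TFVC operations (tf branch and tf merge, or Source Control Explorer), and each environment simply gets its build from the corresponding branch; there is no need for a separate TFS installation per environment.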
Many people treat TFS as a development tool, and as such it ends up in the development "network". We recommend treating TFS as a production server, though: it contains your source code (intellectual property and a large investment in knowledge and time), and you might also use it to hold your Product Backlog (which could contain sensitive information on where your company wants to move in the future). If you were to lose any of them it would be a great loss. So make sure you treat the TFS server as something holding value, and implement a proper backup & restore and disaster recovery procedure.
Helpful links:
ALM Rangers Planning Guide
ALM Rangers Version Control Guide (aka Branching Guide)
We are using Tridion 2011 SP1 and the DD4T framework.
We have websites on both the Staging and Live servers, published from Tridion 2011 SP1 using different targets (Staging and Live). Now I am planning to add the staging servers to the Live target, so that publishing to Live also publishes to Staging. I am not going to use the Staging target after this.
My question is: will this create any problems or issues? Does it have any disadvantages?
Thanks,
Jagadeesh.
So, all you want to do is to publish to Staging also when you publish to Live?
If this is all you want to do, then the easiest approach is to chain both Publication Targets to the same Target Type. Open the Staging Publication Target and, on the Advanced tab, link it to the Live Target Type (and unlink it from the Staging Target Type).
You should probably also remove the Staging Target Type so as to not confuse your editors.
PS - I am answering here, but your question is not a Stack Overflow/programming question. You should have asked this on Server Fault instead.
2nd part of the question: Will it cause issues?
Think about why you had Staging to begin with, since you're going to lose that now. You are probably removing the ability to implement Experience Manager (ex-SiteEdit), but maybe that's not a requirement. Staging is also typically a smaller environment. If you're doing this because you need more capacity on your Live server, then you should consider buying a new server instead and linking it to the same database (since you're using DD4T, there's no nastiness related to file system replication or multiple deployers).
We are looking to build a cube in Microsoft SQL Server Analysis Services, but would like to be able to use some of the automated testing infrastructure we already have, such as CruiseControl for automated builds, deployments, and tests.
I am looking for anyone who can give me pointers on building tests against Analysis Services, as well as any experience with adding these to a build pipeline.
Also, if automation is not possible, some manual test methods would be welcome.
Recently I came upon the BI.Quality project on CodePlex, and from what I can tell it's very easy to learn and to integrate into an existing deployment process.
There is another framework named NBi. It offers additional features compared to BI.Quality, such as checking the existence of a measure, dimension, or attribute, the order of members, and the count of members. Also, when comparing two result sets, it's often easier to spot the difference between them with NBi. The test suites are edited in a single XML file validated by an XSD (a better user experience).
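If you'd rather begin with plain unit tests that CruiseControl can already run through its NUnit task, here is a hedged sketch using ADOMD.NET; the connection string, cube, and measure names are placeholders for your environment. Frameworks like BI.Quality and NBi essentially industrialize this pattern.

    using System;
    using Microsoft.AnalysisServices.AdomdClient;
    using NUnit.Framework;

    // Hypothetical smoke test for a deployed SSAS cube. Connection string,
    // cube, and measure names are assumptions; substitute your own.
    [TestFixture]
    public class CubeSmokeTests
    {
        private const string ConnectionString =
            "Data Source=localhost;Catalog=AdventureWorksDW"; // assumed

        [Test]
        public void TotalSalesIsPopulatedAfterDeployment()
        {
            using (var connection = new AdomdConnection(ConnectionString))
            {
                connection.Open();

                var command = connection.CreateCommand();
                command.CommandText =
                    "SELECT [Measures].[Sales Amount] ON COLUMNS " +
                    "FROM [Adventure Works]"; // assumed cube/measure names

                var cellSet = command.ExecuteCellSet();
                var total = Convert.ToDouble(cellSet.Cells[0].Value);

                // In a real test the baseline would come from a SQL query
                // against the source warehouse rather than a constant.
                Assert.That(total, Is.GreaterThan(0));
            }
        }
    }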