What are good resources for learning how to manage builds and releases? - build-process

I recently took on the responsibility for managing our company's builds and releases. We ship our products as both a web service and as a licensed product that customers can install on their internal servers.
My job involves making sure QA has the builds they need for testing; depending on their current focus, those builds may come from the main development branch or from feature-specific branches, and may be for either of two different products. It also means releasing our products internally for dogfooding, which means deploying to an internal server. Finally, I cut official builds for our customers: creating new versions of the installer for those who install on their own servers, and pushing updates to the website for our hosted customers.
So far, I've picked up the Pragmatic Programmers' Ship It! and Release It!, both of which seem useful. What other books should I pick up and read? Are there communities or well-known bloggers I should follow that deal specifically with the challenges of building, deploying, and shipping web services to our own servers and to customers' internal servers?

I really liked Pragmatic Project Automation.

CM Crossroads is an excellent resource.

Related

How to implement Agile Scrum for a testing team that does BAU and project work simultaneously

Recently, I have taken over the role of test manager for a company. I am looking at avenues to implement agile. The team does BAU tasks and project work in parallel.
Basically, the work is shared between the team members. We have internal releases, which are handled by our in-house developers, and external releases, which are handled by our COTS supplier. I am planning to implement the following:
Bring internal projects and BAU tasks under Agile
All external releases and projects to remain waterfall, as we depend on other agencies and external vendors.
Does this approach make sense or should I split the team into projects and BAU? Please share your experiences and thoughts.
I would say the big problem here is: why is there a testing team? Scrum doesn't have testing teams - development and testing are done in the same sprint, by the same team.
If there is a BAU dev team and a project dev team, then think about splitting your "testing team" and injecting people into each of those teams.

Alfresco Community Enterprise Feature Comparison

I've seen this question but the answers are simply not good enough. I've searched the web and could not find a clear listing of the main differences.
I am particularly surprised to see contradictions in the above link, which holds only four short answers.
So the question is, beyond support, what are (all) the differences between Alfresco Community and Enterprise editions (for the current versions of course)?
Are there functional or technical features that are available in the Enterprise edition but not in the Community edition?
I find it strange that it's so difficult to get a clear list. Looking at the forums to find this answer is not a serious option from a business perspective.
So far, I have found this link to be useful, but it's from 2009.
In particular, I find the platform support interesting, with the Community edition supporting only a LAMP-style stack:
Linux
MySQL
Tomcat
OpenLDAP
Firefox
And the enterprise edition supporting:
Windows
SQL Server
WebLogic, WebSphere
AD/Kerberos
IE and Safari
Apparently, these features are only available in the enterprise edition:
JMX monitoring
Runtime administration: What's that exactly? And what's in the Community edition then?
Runtime indexing consistency check and update: What's in the Community edition then?
High performance and availability: How is that implemented, and what's in the Community edition then?
Storage policies
Open source and proprietary technology stack support: which ones exactly? Which ones are supported in the Community edition?
If anyone could guide me towards serious documentation about these differences, that would be great.
I also went through the wiki but could not find an answer to my questions in there.
Differences between Enterprise and Community vary in detail from version to version and are mainly visible to administrators. We see or maintain both flavors of Alfresco in midsize to very large environments, and I would say it's more or less a question of taste and budget which edition is the best decision for you. Excellent skills in infrastructure and Java are highly advisable for both editions to run Alfresco in production.
The technical differences are not so dramatic that Community cannot provide very similar functionality for users - so if you're actually facing a decision, you should focus on a good technical partner, the support services, and maybe the fact that you only get official patches with the Enterprise subscription, not with Community. By the way, Alfresco Enterprise is not open source, but this is not a real point of interest for most end users. You can access the code as a subscription customer, but it is not publicly available/accessible.
The main differences in features have more or less already been named:
Administration
Enterprise has more views and settings in the admin web GUI. In Community you can access most configuration only from the command line. This may sound like a restriction, but in real life administrators prefer the command line and scripting automation anyway.
Enterprise lets you change some Alfresco settings at runtime (most settings still require a restart). Some can be changed in the GUI, and more in the JMX interface. You're also able to stop and start subsystems like the CIFS protocol server; we use this feature to switch a system into read-only mode. This is what is meant by "runtime administration". Community requires a restart of the service for most configuration changes, although it is possible to work around this with advanced scripting (e.g. Groovy) or by implementing modules.
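A hedged note for reference, since details vary by version and setup: the Enterprise JMX interface can usually be reached with a standard JMX client such as jconsole via a service URL of roughly this shape (host, port, and credentials are placeholders for your own installation):

service:jmx:rmi:///jndi/rmi://your-alfresco-host:50500/alfresco/jmxrmi

From there you can browse the Alfresco MBeans and, for example, stop or start the CIFS subsystem as described above.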
Indexing
Runtime indexing consistency check and update is not the self-healing functionality you might expect. You will have to learn (at least for now) that you have to recreate the Alfresco index from time to time even in Enterprise environments, and that it is better to focus on good strategies for speeding up recreation or for setting up standby indexes, instead of hunting failed indexing transactions with the check and update methods. For major document model changes you need to recreate the index anyway.
High performance and availability
This is mainly the cluster and replication functionality, which is no longer available in Community. It's similar to MS clusters: it's a lot of work for very little extra availability, since some concepts are missing. The price is high in terms of complexity and can end up costing robustness. Even with Enterprise support it's a hard job to keep an Alfresco cluster running - so you need very good arguments for going this way. But of course: it's possible and available!
High performance: there shouldn't be any difference - and if there is, I'm very curious about the explanation.
Technology stack
The main difference is the database support. In Community you can only choose between MySQL and Postgres (no Oracle or MS SQL for Community). All other technologies are independent of Enterprise or Community (AD, Kerberos, OS, browser, ...).
Java container: I believe over 95% of all Alfresco installations run in Tomcat. That's the configuration which is documented, tested, and scales. Using WebLogic or WebSphere gives you no added value, just new challenges - quite the contrary: you have to solve most issues yourself and can't benefit from others' experience.
Storage policies: I'm not quite sure and should check in 4.2.x whether the Content Store Selector / storage policies are still available in Community, but they were there in the 3.x versions.
[Edit]: storage policies have been removed in Community 4.2.x:
NoSuchBeanDefinitionException: No bean named 'storeSelectorContentStoreBase' is defined
If there is a real need for this functionality, someone may re-enable that feature by coding a module for Community.
Regards
This page explains the difference between the editions:
https://wiki.alfresco.com/wiki/Enterprise_Edition
This page is the canonical, comprehensive list of the differences.
If you are considering an Enterprise Subscription and you have a question that isn't answered by what you can find on that page, you should talk to your account rep.
Well, regarding JMX monitoring and the other items you listed:
Runtime administration: Alfresco Enterprise allows you to perform certain actions on Alfresco subsystems without restarting the server. This allows you to be very fast during debugging/developing and also when making changes in a production environment. You can also access the JMX interface, which supports JMX Remoting.
In Community there is no consistency check or update until you restart the server (during startup you can validate/check/rebuild your indexes). There is an option in alfresco-global.properties (or the original repository.properties config file) for that. If you have some inconsistencies in an Alfresco Community index, you're going to have a bad time.
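To illustrate the option mentioned above, a minimal sketch of the relevant setting in alfresco-global.properties could look like this (the exact modes available depend on your Alfresco version, so check the documentation for yours):

# Validate the Lucene indexes against the repository at startup;
# switch to FULL to rebuild them entirely (slow on large repositories).
index.recovery.mode=VALIDATE
#index.recovery.mode=FULL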
Alfresco Enterprise has a specific license for clustering your architecture; the Community edition doesn't support those setups. Replicating and clustering Alfresco is one of the main improvements in performance/scalability/availability you can achieve.
The storage policies allow you to use content store selectors in Alfresco Enterprise. You can manage a primary and a secondary file store, and map/connect these stores in your architecture. The Community edition allows you to use only one content store at a time.
These include everything inside Alfresco (the Spring Framework, Apache Lucene/Solr, Tomcat, and so on), because with the Enterprise license you also have full support for everything inside the Alfresco package. The difference is that Community is based on daily builds, supported by the community, and therefore not guaranteed. Enterprise support helps you resolve many problems that you might encounter during development and in production, not only Alfresco-related ones, but also some configuration issues on supported platforms (Windows/Linux), your web application servers, and so on.
Hope it helps.

Proper DTAP setup for Content Delivery

I've had this setup, but it didn't seem quite right.
How would you improve Content Delivery (CD) development across multiple .NET (customer) development teams?
CMS Server -> Presentation Server Environments
CMS Production -> Live and Preview websites
CMS Combined Test + Acceptance (internally called "Staging") -> Live ("Staging")
CMS Development (DEV) -> Live (Dev website) and sometimes Developer local machines (laptops)
Expectations and restrictions:
Multiple teams and multiple websites
Single DEV CMS license (typical for customers, I believe?)
Enough CD licenses for each developer
Preferably developers could program and run changes locally--was this a reasonable expectation?
Worked
We developed ASP.NET pages using the Content Delivery API against the same broker database for local machines and CD DEV. Local machines had CD dlls, their own license files, and ran/debug fine with queries and component presentation calls.
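For context, the local code in question was along these lines - a minimal C# sketch against the Content Delivery API, where the publication and item ids are made up for illustration and the exact class names may differ between Tridion versions:

// Minimal sketch: fetch a dynamic component presentation from the broker database.
// The ids below are placeholders, not real TCM items.
using System;
using Tridion.ContentDelivery.DynamicContent;

class BrokerDemo
{
    static void Main()
    {
        // Publication 5 is a placeholder; use your own publication id.
        var factory = new ComponentPresentationFactory(5);

        // Component 123 rendered with component template 456 (both placeholders).
        ComponentPresentation cp = factory.GetComponentPresentation(123, 456);
        if (cp != null)
        {
            Console.WriteLine(cp.Content);
        }
    }
}

This ran identically on laptops and on CD DEV, as long as each machine had the CD dlls, a license file, and access to the broker database.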
Bad
We occasionally published to both the Dev presentation server and developer machines, which doesn't seem right in hindsight, but I think it was to get schema files onto our local machines. But yes, we didn't trust the Dev broker database.
Problematic:
Local machines sometimes needed Tridion-published pages but we couldn't reliably publish to local machines:
Setting multiple publication destinations for a single "Local Machine" publication target wouldn't work--we'd often take these "servers" home.
VPN blocked access to laptops offsite (used "incoming" folder at the time).
Managing publication targets for each developer and setting up CD for each new laptop was good practice (as in exercise, not necessarily as a good idea) but just a little tedious.
Would these hindsight approaches apply?
Synchronize physical files from Dev to local machines on our own?
Don't run presentation sites locally (localhost) but rather build, upload dll, and test from Dev?
We were simply missing a fourth CMS environment? As much as we liked our Sales Guy, we weren't interested in purchasing another CM license.
How could you better set up .NET CD for several developers in an organization?
Edit: @DominicCronin pointed out that this is only a subset of a proper DTAP setup. I updated my terms and created a separate question to clarify DTAP with Tridion.
The answer to this one depends heavily on the publishing model you choose.
When using a dynamic model with a framework like DD4T, a single dev environment will suffice. There is one CMS and one CD server in that environment, and everything is published to a broker database. The CD environment can be used as an auto-build system; the developers work purely locally on a localhost website (which gets its data from the dev broker database), and their changes are checked into a VCS (on which the auto build can be based).
This setup can make do with only a single CMS because there is hardly any code developed on the CMS side (templates are standardized and all work is done on the CD side).
It gets more complex if you are using a static or broker publishing model. Then I think the solution is indeed to split Dev up into Unit-Dev and Dev, as indicated by Nuno and Chris.
That approach requires coding on both the CMS and CD side, so every developer benefits hugely from having their own local CMS and CD environment.
Talk to your Tridion account manager and agree a license package that suits the development model you want to have. Of course, they want to maximise their income, but the various things that get counted are really meant to ensure that big customers pay accordingly, and smaller customers get something they can afford at a price that reflects the benefits they get. In fact, setting up a well-thought-out development pipeline with a focus on quality is the very thing that will ensure good customer satisfaction and a long-running engagement.
OK - so the account managers still have internal rules to follow, but they also have a fair amount of autonomy in coming to a sensible deal with a customer. I'm not saying this will always work, but it's way better than blindly assuming that they are going to insist on counting every server the same way.
On the technical side - sure, try to have local developer setups and a common master dev server, à la Chris's 5th suggestion. These days, your common dev environment should probably be seen as a build/integration server: the first place where the team guarantees all the tests will run.
Requirements for CM and CD development aren't very different, although you may be able to publish to multiple developer targets from one CM if there's not much CM development going on. (This is somewhat true of MVC-ish approaches, but it's no silver bullet.)

Migrating an ASP.Net App to Azure

I'm getting close to finishing a public-facing ASP.Net app and I'm starting to weigh deployment options. I'm an ASP.Net/SQL Server veteran but a noob when it comes to Azure. I'm wondering how others have felt about the learning curve of effectively migrating local dev ASP.Net/SQL Server apps into the Azure cloud.
More specifically:
How steep is the learning curve towards understanding administration and programming concepts, and do you think it's worth the investment?
What is Microsoft's support like if I have catastrophic problems with my cloud infrastructure and my live site is down? My expectation is a large price tag for a not-so-urgent SLA.
Will my non-Azure ASP.Net app require significant modification and/or coupling to run in the Azure environment?
Thanks
I answered a similar question a while back, here. Azure has evolved since then:
Azure's AppFabric Cache is currently in CTP (community technology preview) and will go live some time later this year (sorry, I can't quote a date). With a single configuration change, you'll be able to enable the ASP.NET session state provider without changing any code, and have your session state available to all of your web role instances.
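For what it's worth, that single configuration change is a web.config edit roughly like the following - the provider and type names here are the CTP-era ones and may change by release, so treat this as a sketch:

<!-- Sketch: point ASP.NET session state at the AppFabric cache (CTP-era names). -->
<sessionState mode="Custom" customProvider="AppFabricCacheSessionStoreProvider">
  <providers>
    <add name="AppFabricCacheSessionStoreProvider"
         type="Microsoft.Web.DistributedCache.DistributedCacheSessionStateStoreProvider, Microsoft.Web.DistributedCache"
         cacheName="default" />
  </providers>
</sessionState>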
With Azure v1.3, which rolled out in November, you have the ability to run tasks at startup with elevated privileges (e.g. to run an MSI to install some prerequisite control suite).
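Such a startup task is declared in ServiceDefinition.csdef roughly like this (the role name and .cmd file are placeholders):

<!-- Sketch: run an install script with elevated privileges when the role starts. -->
<WebRole name="MyWebRole">
  <Startup>
    <Task commandLine="InstallPrerequisites.cmd" executionContext="elevated" taskType="simple" />
  </Startup>
</WebRole>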
For monitoring, you can take advantage of Microsoft System Center, which now supports Azure directly. Alternatively, you can look into 3rd-party options such as AzureWatch.
With Azure's extra-small instance, you can run a site for approx. $44 monthly. You mentioned catastrophic failures and SLA. With Azure, you need a minimum of two instances for the SLA to take effect (your virtual machines are then located in physically different areas of the data center, in separate fault domains). So you're looking at approx. $90/month to run a site with 99.95% uptime. Only you can determine whether this is worth it to you. Yes, you can host with a simple hosting provider for significantly less (such as GoDaddy). However, if your site fails there, you have to wait for the failure to be detected and for the site to be reinstalled on a separate box. Also, you share each server with potentially dozens of other tenants, which will impact your site's performance. With Azure, at most 8 tenants will occupy a box, depending on how many cores you configure your virtual machines to use. And it's incredibly simple to scale up or down to handle traffic increases and decreases.
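Scaling up or down is then just a matter of editing the instance count in ServiceConfiguration.cscfg (role name again a placeholder) and pushing the new configuration:

<!-- Sketch: two instances of the web role - the minimum for the SLA to apply. -->
<Role name="MyWebRole">
  <Instances count="2" />
</Role>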
My personal experience is that there isn't much documentation and you have to search through blogs/forums to find answers to more advanced questions. If you have a nicely designed app then there shouldn't be much of a problem with porting - you can google for Azure versions of the ASP.NET providers, e.g. membership.
The biggest disadvantage may be cost: you have to do your own maths, but for me it turned out that VPS hosting is much cheaper than Azure.
I would say that unless you get considerable savings on infrastructure, don't move to Azure just for the sake of doing it. A hosted server with SQL and IIS will give you fewer problems and a bit more freedom.
I see an excellent answer by David Makogon already. The following might be helpful for you as well. The last episode of the Connected Show podcast was about migrating World Maps to Azure. If you are considering moving to Azure, it is certainly worth listening to, as they explain the challenges they faced during the migration.
You could take a look at Moving Applications to the Cloud on the Microsoft Windows Azure Platform on MSDN.
Cheers.

Lucene.Net and incubation status

I'm evaluating options to make search more powerful on our .Net website. I need to look into whether we purchase software/hardware such as the Google Search Appliance (GSA) or develop the solution using a framework such as Lucene.Net.
We're a startup, and the GSA provides a lot of good functionality out of the box, but we would need two boxes, with the second as the backup/dev environment, and things start getting expensive...
We have used SQL Server full text in the past, but we're keen to provide very intuitive "Googlesque" type searching to our site and we've struggled to do everything we want with SQL Server.
But I am not sure what "incubator status" for the Lucene.Net project actually implies. Should I be considering a project that is in incubator status? Is it not active? Will it at some point move into a more active status or be archived off?
Thanks
Lucene.NET is a currently active and updated project. The fact that it is hosted as an incubator project under Apache is a good thing, not a negative one. As you can read on the Apache Incubator site, Lucene.NET is awaiting a review and final approval, but this doesn't mean it's unstable or unsupported.
Concerning your main question, I think using it for the development stage would be an acceptable choice if you're a startup.
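To give a feel for how little code a basic scenario takes, here is a minimal index-and-search sketch against the Lucene.Net 2.x API; the directory path and field names are made up for illustration:

// Minimal Lucene.Net 2.x sketch: build an index on disk, then search it.
using System;
using Lucene.Net.Analysis.Standard;
using Lucene.Net.Documents;
using Lucene.Net.Index;
using Lucene.Net.QueryParsers;
using Lucene.Net.Search;

class SearchDemo
{
    static void Main()
    {
        var analyzer = new StandardAnalyzer();

        // Create a fresh index ("index-dir" is a placeholder path).
        var writer = new IndexWriter("index-dir", analyzer, true);
        var doc = new Document();
        doc.Add(new Field("title", "Intuitive site search with Lucene.Net",
                          Field.Store.YES, Field.Index.TOKENIZED));
        writer.AddDocument(doc);
        writer.Optimize();
        writer.Close();

        // Parse a free-text query and run it against the index.
        var searcher = new IndexSearcher("index-dir");
        var query = new QueryParser("title", analyzer).Parse("site search");
        Hits hits = searcher.Search(query);
        for (int i = 0; i < hits.Length(); i++)
        {
            Console.WriteLine(hits.Doc(i).Get("title"));
        }
        searcher.Close();
    }
}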
I am not sure what "incubator status" for the Lucene.Net project actually implies
It means that the project, which was an external project, is being evaluated by Apache for inclusion in the Apache "stable" - I guess they have to make sure the processes are right, that there isn't patented code in there, etc.
It has NO reflection on the code. Lucene.NET trunk is stable (v2.1), and the downloadable version (v2.0) is also stable, but not "as stable" or as up to date.
If you have more questions, I'd suggest you jump on the mailing list (http://incubator.apache.org/lucene.net/) and ask George or DIGY. I've been using it on commercial projects - both internal (http://www.topgear.com for example) and packaged (not sure I can say, but it's an email archiver) since 1.xx, and it works GREAT.
I'd suggest you have a look at Solr, too. It uses the Java Lucene and is basically an external search server: you push info into it, rather than having it trawl your site. It's on the Apache Lucene site.
Log4net was in incubation status for a long time in the Apache project. It was still recommended and used extensively. I'd be OK with using Lucene.Net for a couple of reasons. First, as @ste09 says, incubation status is a good thing. Second, Lucene (the Java version) is a full-fledged project at Apache. Similar to log4j/log4net, I think this bodes well for Lucene.Net making it out of incubation status.
