How to get a handle on all this middleware?

My organization has recently been wrestling with the question of whether we should be incorporating different middleware products / concepts into our applications. Products we are looking at include Pegasystems, Oracle BPM / BPEL, BizTalk, Fair Isaac Blaze, and so on.
But I'm having a hard time getting a handle on all this. Before I go forward with evaluating the usefulness (positive or negative) of these different products, I'm trying to get an understanding of all the different concepts in this space. I'm overwhelmed by an alphabet soup of BPM, ESB, SOA, CEP, WF, BRE, ERP, etc. Some products seem to cover several of those aspects; others focus on just one. The terms all seem very ambiguous and conflated with each other.
Is there a good resource out there to get a handle on all these different middleware concepts / patterns? A book? A website? An article that sums it up well? Bonus points if there is a resource that maps the various popular products to the pattern(s) they address.
Thanks,
~ Justin

I've spent the last 3-4 years blogging on the topics you mentioned (http://www.UdiDahan.com), as well as writing my own lightweight ESB (http://www.NServiceBus.com), and many more years working and consulting in this space. The main conclusion I've come to is that strong business analysis and technology-agnostic architecture are needed - no tool or technology can prevent a mess by itself.
There is the Enterprise Integration Patterns book, which provides a good catalog of the technical patterns involved but doesn't touch on the necessary business analysis. I've found that Value Networks (http://en.wikipedia.org/wiki/Value_network_analysis) are a good starting point for identifying business boundaries to which IT boundaries can then be aligned; that alignment is what yields the benefits of SOA and justifies the use of an ESB across those boundaries.
CEP, WF, and BRE should be used within a boundary and not across them.
ERP packages tend to cross boundaries and, as such, should be integrated piecemeal into the boundaries mentioned - DDD anti-corruption layers can be used to insulate custom logic from those apps.
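For illustration, here is a minimal sketch of what such an anti-corruption layer could look like in Java (all names here - CustomerDirectory, ErpClient, and so on - are made up for the example, not any particular ERP's API): the domain codes against its own interface, and one small adapter translates the ERP package's records into the domain model, so the package's quirks stay at the boundary instead of leaking into custom logic.

    /** The domain's own view of a customer; no ERP field names appear here. */
    final class Customer {
        final String number;
        final String displayName;
        Customer(String number, String displayName) {
            this.number = number;
            this.displayName = displayName;
        }
    }

    /** What the rest of the domain depends on. */
    interface CustomerDirectory {
        Customer findByNumber(String customerNumber);
    }

    /** Stand-ins for whatever API the ERP vendor actually exposes. */
    interface ErpClient { ErpCustomerRecord fetchCustomer(String custNo); }
    interface ErpCustomerRecord { String custNo(); String name1(); String name2(); }

    /** The anti-corruption layer: translation happens here and only here. */
    final class ErpCustomerDirectory implements CustomerDirectory {
        private final ErpClient erp;
        ErpCustomerDirectory(ErpClient erp) { this.erp = erp; }

        @Override
        public Customer findByNumber(String customerNumber) {
            ErpCustomerRecord record = erp.fetchCustomer(customerNumber);
            // ERP naming and formats are normalized here, once.
            return new Customer(record.custNo(), (record.name1() + " " + record.name2()).trim());
        }
    }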
Hope that helps.

IBM and Oracle have SOA certifications. Since they're the leaders in the marketplace (Gartner Magic Quadrant), I would read about how they define SOA and ESBs (along with the methodology and the components needed to support SOA, like governance, a registry, etc.). It'll give you the high-level overview you're looking for and show the use cases "all this middleware" is trying to solve.

What is "Privacy by Design"? And how to achieve it?

I noticed that tutanota and mega.io mention "Privacy by design" on their homepages. So I became curious and found the Wikipedia page about Privacy by design, but it seems to be an abstract concept (a collection of principles). However, I was looking for something more concrete - do A and B, or implement Y and Z. For example, mega.io uses Zero Knowledge Encryption (User-Controlled End-to-End Encryption). What other features does a product need to have to be called a "Privacy by Design" service?
By their very nature, abstract principles do not concern themselves with implementation detail. There are many different ways to implement them, and mandating one approach over another is simply out of scope – what matters is the net effect. It's also applicable to non-tech environments, paper records, etc; it's not exclusive to web dev.
Privacy by design (PbD) is a term coined by Ann Cavoukian, a former Information and Privacy Commissioner in Ontario, Canada, and it consists of a collection of principles, as that Wikipedia page describes. PbD is also referenced by the GDPR. I've given various talks on privacy and security at tech conferences around the world – you can see one of my slide decks on PbD.
So how do you use them in web development? Take the second principle: "Privacy as the default". This means that if a person using your web app does nothing special, their privacy must be preserved. Among other things, that means you should not load any tracking scripts (perhaps not even remote content) and should not set any cookies that are not strictly necessary. If you do want to track users (and thus break their privacy to some extent), then you need to take actual laws into account, such as the EU ePrivacy Directive, which is what requires consent for cookies and trackers.
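As a small, hedged illustration of that default (a sketch only - the cookie name and snippet URL are invented, and it assumes the Java Servlet API): render the analytics snippet only after an explicit opt-in, so a visitor who does nothing gets no trackers and no tracking cookies.

    import javax.servlet.http.Cookie;
    import javax.servlet.http.HttpServletRequest;

    public final class ConsentGate {

        /** True only if the visitor has explicitly opted in to analytics. */
        public static boolean hasAnalyticsConsent(HttpServletRequest request) {
            Cookie[] cookies = request.getCookies();
            if (cookies == null) {
                return false; // no cookies at all: the privacy-preserving default
            }
            for (Cookie c : cookies) {
                if ("consent-analytics".equals(c.getName()) && "granted".equals(c.getValue())) {
                    return true;
                }
            }
            return false;
        }

        /** Returns the tracking markup, or an empty string when there is no consent. */
        public static String analyticsSnippetFor(HttpServletRequest request) {
            if (!hasAnalyticsConsent(request)) {
                return ""; // default: no third-party script, no remote content
            }
            return "<script src=\"https://analytics.example.com/tag.js\" async></script>";
        }
    }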
So although the principle itself did not require these measures, it influenced the technical decisions you needed to make in your implementation in order to comply with the spirit of the principle. If that happens, the principle has done its job.
So what you have to do in order to claim privacy by design (though it's not like you get a badge!) is to introspect and consider how these principles apply to your own services, then act on those observations and make sure that the things you design and build conform to the principles. This is a difficult process (especially at first), but there are tools to help you perform "privacy impact assessments" (also part of GDPR), such as the excellent PIA tool from the French data protection authority (CNIL).
If you're thinking about PbD, it's worth looking at two other important lists: the data protection principles that have been the basis of pretty much all European legislation since the 1980s, including GDPR, and the six lawful bases for processing in GDPR. If you get your head around these three sets of concerns, you'll have a pretty good background on how you might choose to implement something privacy-preserving, and also a good set of critical guidelines that will help you spot privacy flaws in products and services. A great example of this is Google Tag Manager; it's a privacy train wreck, but I'll leave it to you to contemplate why!
Minor note: the GDPR links I have provided are not to the official text of GDPR, but a reformatted version that is much easier to use.

How much unity across different teams?

Our company builds several (Java) applications that loosely communicate with each other via web services, remote EJB, and occasionally via shared data in a DB.
Each of those applications is built and maintained by its own team: one or two people for the smaller apps and almost ten for the largest one. The total number of developers is approximately 25 FTE.
One problem we're facing is that there are some big egos among the teams. Historically, the team behind the largest app has set up a code convention and general guidelines. For instance, our IDE is NetBeans, we use Hg for SCM, we build with Ant, and we emphasize using as much from Java EE as possible; if that doesn't suffice, use an external library, and only resort to writing something yourself as a last resort. Writing yet another logging framework, ORM, CMS, or web framework is pretty much not allowed under these guidelines.
Now some of the smaller teams go against this: they have started using Eclipse, Git, and Maven, and their approach is to write as much as possible themselves, only looking at existing tools if time is short or they 'just don't feel like writing it themselves'. Where the main team uses log4j, one of the smaller teams just started writing its own logging framework.
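To give a concrete picture of the main team's guideline (a hedged sketch - SLF4J here is just one common facade, shown as an example rather than what we actually mandate), the idea is to code against an existing facade and pick a binding such as log4j per application, so nobody has a reason to write their own logging framework:

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class OrderService {
        // Code against the facade; the actual logging framework is chosen per application.
        private static final Logger log = LoggerFactory.getLogger(OrderService.class);

        public void placeOrder(String orderId) {
            log.info("placing order {}", orderId); // parameterized logging, no string concatenation
            // ... business logic ...
        }
    }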
There have been talks going on about all teams adhering to the same standards, but these have been 'troublesome' at best.
Now the big question I'd like to ask: does it actually matter that different teams do things differently? As long as each separate app implements its requirements and provides the agreed-upon interfaces, should we really force everyone to use Hg, Ant, the same code conventions, and so on?
There is not much harm in letting each team use the technologies that work best for them. In fact, if you restrict teams to the "standard" way of doing things, you'll stifle innovation and hurt morale.
But you don't want things to diverge too much. There are a few things you can do to prevent libraries and tools from getting out of hand. The first is to regularly rotate members through the teams to cross-pollinate ideas. That way the best ideas will spread across the teams.
You can also enforce a "rule of 3", which simply says it is ok to introduce a second library, tool, logging approach, whatever. But as soon as you want to introduce a 3rd one, you have to remove one of the first two. In other words it is ok to have 2 competing logging frameworks but if there are 3 logging frameworks, choose one to kill.
A third idea is to have developers give regular presentations to the entire developer group to demonstrate the pros and cons of each idea or approach. Encourage lots of discussion and constructive criticism. The purpose is to try many things and let everyone find the best way as a group.
Finally, Management 3.0 talks a lot more in depth about how teams make decisions. Well worth the read.

Feasibility of Scrum with certain modifications to the philosophy [closed]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 11 years ago.
I would prefer if those who answer this question state whether or not they have experience developing in an Agile Environment or if they are speaking from a theoretical standpoint.
Backstory:
Let's say there is an opportunistic company that develops technologically innovative products (multi-touch interfaces, speech recognition devices, etc.), all of which are fundamentally unrelated. However, as one may see, the key advantage of working on products like these is that libraries can be created / extracted from the product and sold to other companies, developers, etc. Thus, working in an incremental fashion is advantageous, as it allows the milestones to be separated from the final product.
Question 1: Is this advantageous from a business standpoint? Have any of you encountered separating libraries out into individual products within your company?
Question 2: If products are indeed created in such an incremental manner, does Scrum seem like a valid methodology to apply?
Let's assume that this incremental process of creating components to piece together into a final application is set in place. The development team is initially very small, 6 or 7 people. For the fun of it, let's call this team a Guild. The company is just starting out, and they need to make something profitable. For argument's sake, let's say the Guild developed the FaceAPI Library. All of this was done within the Scrum methodology, let's say in one sprint. Now, the company has enough funding to employ 7 more people. These new 7 people are put into their own Guild, and their skills mirror the skills of the original Guild.
So now, this company has 2 Guilds, and 1 library off which to develop. Let's say that the one Guild is tasked with creating Product1 using the original library, and the other Guild is tasked with extending the library with more features. These two "sprints" would be carried out concurrently, and at the end the updated library would be merged into the application. As you can see, it is possible that some modifications might need to be made to the library by the team working on Product1, in which case the merge will be non-trivial.
In any case, this is the general idea. The company would have individual Guilds, or teams of people (Question 3: What do you think of this idea? Since teams are smaller, they would want to hire members that have good synergy. Is this likely to increase overall morale and productivity?), which would carry out sprints concurrently. Because of the nature of the service the company offers, the teams would work with more or less the same components and parts of the applications; however, their sprints could be planned so that the teams could always carry out work without impediments. Each Guild would be a self-contained unit with its own testers, designers, and QA staff.
Final Questions:
As developers or testers, what are your opinions on a company that functions in this manner? Does it foster leadership skills in developers? Does it sound appealing? Does it sound destined to fail?
Anyone with knowledge or experience with Scrum: does it seem to apply naturally in this kind of environment?
Has anyone worked for a company that functions similarly to the above description? If you don't mind answering, what was it called? Was it successful?
To start with, I have worked on three more-or-less Scrum projects so far.
There are a couple of unclear things in your story. What is the company aiming for - developing libraries or final products? To me the two seem fairly conflicting, especially for a small company.
Another thing: starting development with a library itself, without any real users, doesn't sound very agile to me. IMO an agile setup would start the other way around: develop a concrete product first, refactoring the design as dictated by the concrete situation, possibly arriving at some sort of layered architecture in which the lower layer(s) could be extracted into a reusable library. Then start developing more concrete products, looking for possibilities to reuse code between the projects, and evolving the design of the common library - again, as dictated by the concrete usage and needs of its clients (the product development teams).
At some point, library development would probably require its own team - in the beginning, it might suffice to have its design and its backlog coordinated between the different teams.
Regarding your question about teams treading on each other's code - this is what source control is for. Fork for the new stuff, then in the next sprint reintegrate and stabilise.
Regarding Question 2: Scrum is an incremental approach, so if the design lends itself to incremental segments of work then of course it's appropriate.
Regarding Question 3: how could it ever be a bad thing to hire "people that would work well within them and that they would want to work with"?
Team organization and system structure are deeply interdependent; see Conway's Law.
This means that if you have two separate teams working on two separate code modules (the library team and the product team), you will need a clearly defined communication channel between the teams, and the code they develop will reflect those channels in its design. Traditionally, what this means is that you end up defining an API or interface for the library, which acts as a contract against which each team can develop. Agile practices normally adopt a more emergent design philosophy, so it can be difficult to create an API that makes sense up front.
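As a hedged sketch of that kind of contract (borrowing the FaceAPI name from the question; the methods and types are invented for illustration), the two Guilds might agree on a small interface for the next increment and each develop against it:

    import java.util.List;

    /** The contract the library Guild implements and the product Guild consumes. */
    interface FaceApi {
        /** Detects faces in the given image bytes and returns zero or more bounding boxes. */
        List<FaceBox> detectFaces(byte[] imageData);
    }

    /** A shared value object; new fields are only added at agreed sprint boundaries. */
    final class FaceBox {
        final int x, y, width, height;
        FaceBox(int x, int y, int width, int height) {
            this.x = x; this.y = y; this.width = width; this.height = height;
        }
    }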
The way most agile teams get around this is by time boxing development to manageable increments. So while it might be unrealistic to design the entire API, the product team and library team could probably agree on an API design enough for 2 weeks of work. Write the code, deploy, design for the next iteration, and repeat. This way communication paths between the teams and code modules are established so the two teams can work independently without stepping on one another's toes.
Another option I've seen used recently is to have larger teams managed with a Kanban/Limited WIP process. Having everyone on the same team managed by a Kanban allows for more organic and flexible self-organization which means your system will be able to evolve more easily. By keeping work-in-progress highly visible it increases communication and by limiting work-in-progress you constrain developers from clobbering each other by keeping the system from evolving too far in any one direction. Combined with a solid VCS you should be good to go.
Finally, another option is that you take some time to really think about your architecture before diving into development. Using a software architecture design process such as the Architecture Centric Design Methodology (ACDM) in a limited "spike 0" kind of role could help you resolve many of the issues commonly encountered when allowing emergent design. By the end of the design sprint, you'll be able to lay out a plan that makes much more sense for what you need to do. And remember, just because it's a design phase doesn't mean you don't write code - quite the opposite. ACDM advocates strongly for experimentation.

How to balance DRY principle with minimizing dependencies?

I'm having a problem with the DRY principle (Don't Repeat Yourself) and minimizing dependencies that revolves around Rete rules engines.
Rules engines in large IT organizations tend to be Enterprise (note the capital "E" - that's serious business). All rules must be expressed once, nice and DRY, and centralized in an expensive rules engine. A group maintains the rules engine and are the keepers of the rules sets.
When that IT organization is part of an American insurance company, there tend to be lots of rules. There are rules that apply to all states and products, but each state tends to evolve its own laws for different products, so the rules need to reflect these quirks. The categories are many: actuarial, underwriting, even for ordering credit and motor vehicle reports from 3rd party bureaus.
The problem that I have from a design standpoint is that centralizing rules and processing is certainly nice and DRY, but there are costs:
Additional network hops to access the centrally located rules service and return results;
Additional complexity if the rules engine is exposed as a SOAP web service - consumers have to package up SOAP requests and OXM the response back to their own domain;
Additional interfaces between the enterprise group that maintains the rules engine, the business that sets and maintains the rules, and the developers that consume them;
Additional complexity - sometimes a data-driven solution might be enough.
Additional dependencies - components that don't have control over their own rules have to worry about external dependencies on the rules engine for testing, deployment, releases, etc.
These problems crop up with lots of other Enterprise technologies (e.g., B2B gateways, ESBs, etc.)
The same Enterprise groups also tout SOA as a foundational principle. But my understanding of proper service design is that services should tile the business space and be idempotent, independent, and isolated. How can a service be independent and isolated if its rules are maintained somewhere else?
I'd like to err on the side of simplicity, arguing that eliminating dependencies should take precedence over centralization if the rules can be shown to apply only in isolated circumstances. I'm not sure the argument will win the day.
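To make the isolation argument concrete, here is a minimal sketch (hypothetical names; not any real engine's API) of the seam I have in mind: consumers depend on a small local interface, the central engine sits behind one adapter, and a simple data-driven implementation can serve isolated cases and tests.

    import java.util.Map;

    /** What consuming components depend on - not the engine's SOAP contract. */
    interface UnderwritingRules {
        RulesDecision evaluate(Map<String, Object> facts);
    }

    final class RulesDecision {
        final boolean approved;
        final String reason;
        RulesDecision(boolean approved, String reason) {
            this.approved = approved;
            this.reason = reason;
        }
    }

    /** Local, data-driven implementation: enough for quirks owned by a single component, and for tests. */
    final class LocalUnderwritingRules implements UnderwritingRules {
        @Override
        public RulesDecision evaluate(Map<String, Object> facts) {
            Object age = facts.get("driverAge");
            if (age instanceof Integer && (Integer) age < 16) {
                return new RulesDecision(false, "driver under minimum age");
            }
            return new RulesDecision(true, "ok");
        }
    }

    // A CentralRulesEngineAdapter implementing the same interface would wrap the SOAP
    // client, so only that one class knows about the enterprise rules service.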
So my questions are:
Where do you fall on the centralization versus independence argument?
What's your experience with Enterprise tools like rules engines?
How can I make the argument for isolation stronger?
If my view is incorrect, what argument would you make in favor of centralization?
In the long run, easy maintenance of the whole thing would be an absolute requirement.
So DRY should be honoured at all costs, even if that involves a loss of performance here and there, some additional configuration issues, and other "minor" problems.
Also, "independent" is different from "self-contained".
Otherwise imagine the situation when you need to change something and you have to contact a lot of different parties to force them to update. With DRY you also solve the problem of having incompatible versions running at the same moment for a brief period of time.
So:
Centralization > Independence (at least in the system you describe)
A single point of truth for the rules engine (everybody on the same page)
Remind them of the cost of maintenance as the years pass
I find your view correct.
Your question is very Enterprise-specific, and I'm more into desktop stuff, so I hope this answer is not too general.
I liked the concept of Don't Repeat Yourself, until I found out how it was being codified and ossified. I liked it because it agreed with me (duh!) and my own ideas about how to make code more maintainable and less error-prone.
Basically, I see greater maintainability as requiring more of a learning curve on the part of the maintainer. I don't think there's an easy way around that. Here's an example of how to increase maintainability by a good factor, but not without a learning curve.

How many apps should an internal development group be building/maintaining?

I've always been of the opinion that an internal development group should really only be building/maintaining three applications:
An internal composite/pluggable/extendable application.
The company website.
(Optional) A mobile version of #1 for field employees.
I'm a consultant, and everywhere I go, my clients have dozens of one-off applications, on the web and on the desktop, for every need, no matter how closely related it is to the others. Someone comes to IT and says "I need this", and IT developers turn around and write another one-off ASP.NET application or another WinForms app.
What's your opinion? Should I embrace the "as many apps as we want/need" movement? I assume it's common, but is it sensible?
EDIT:
A colleague pointed out that it depends on the focus of the development - are you making apps or are you making a system? I guess to me, internal development is about making a system; development of shippable software products, like MS Word, iTunes, and Photoshop, is about making apps.
All of them?
Wow do I ever agree with you. The problem is that many one-off applications will (at some point) each have many one-off maintenance requests. Anything from business rule updates to requests for new reports. At some point the ratio of apps that need to be maintained to available development staff is going to be stretched/taxed.
From my (perhaps limited?) vantage point, I'm starting to think #1 and #3 could be boiled down to SharePoint. Most one-off applications where I work (a large 500+ attorney law firm) consist of one or more of the following:
A wiki
A blog
Some sort of list (or lists joined together in some type of relationship), which can be sorted and arranged in different ways.
A report (either a SharePoint data view or a SQL Server report works just fine)
Or, the user just wants to "make a web page" and add content to it. But only they should be able to edit it. Except when they're out of the office, and then, etc...
Try to build any one of the above using [name your technology], and you've got lots of maintenance cycles to look forward to (versus a relatively minor Sharepoint change).
If I could restate what I think is your point: why not put most of your dev cycles to work improving and maintaining a single application that can support most of your business' one-off needs, rather than cranking out an unending stream of smallish speciality apps?
This question depends on so many things, and is subjective besides. I've worked at companies that needed several different apps because they do business in discrete silos. In such a case, an internal group may not both build and maintain apps; it may build several, with another group responsible for maintenance.
Also, what do you mean by "app"? If you broaden the term enough, then you could say "it's all just one big app".
In short, I think the main considerations are the capacity of the group and what the business needs are.
I think there should be internal development teams, each of which owns a system that may contain multiple applications. To take a few examples of what I mean by systems:
ERP - If you are a manufacturer of products, you may need a system to keep track of inventory, the accounting of books and money, and other planning elements. Such systems come in a wide range of scales, but I suspect in most cases some customization is done, and that is where a team is used - and it may end up doing that over and over if the company is successful and a new system is needed to replace the previous one, as these can take years to get fully up and running. The application for the shop floor is likely not the same one the CFO needs in order to write the quarterly earnings numbers, to give two examples.
CRM - How about tracking all customer interactions within an organization, which can be useful for sales and marketing departments? Again, there are many different solutions, and generally there is customization work, which falls to another team. The sales team may have one view of the data, but if the company has a support arm, it may want different data about a customer to help it.
CMS - Now, here I can see your three applications making sense, but note what else there is beyond simple content.
I don't think I'd want to work somewhere everything is a home-grown solution and no outside code is used at all. Lots of existing code can be put to good use: tools, but also components like DB servers and development IDEs.
So what's the alternative to several one-off applications? One super-huge application that runs anything and everything? That seems even worse to me...

Resources