Branching and merging BizTalk orchestrations and maps

If you make changes to a BizTalk orchestration (.odx) or map (.btm) in two branches, e.g. in TFS or Git, is there a robust and well-defined way of merging the changes from one branch to the other?

Unfortunately, there's no good way to really merge or diff ODX and BTM files. BTM files in particular get rough because they tend to be stored on a single line. ODX files contain plenty of GUIDs that change, as well as designer information that's difficult to merge. This generally means that if there are differences I end up taking the server version or keeping the local one and working with the other developer to re-apply the changes.
Your best strategy is probably to put code that will change into a helper library and to call it from expression shapes and/or ExternalAssembly scripting functoids. Source control will work well for branching/merging changes to a .NET library.
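For instance, a minimal sketch of such a helper (hypothetical names; BizTalk requires the assembly to be strongly named and GACed so orchestrations and maps can reach it):

// Logic that changes often lives here, where it diffs and merges cleanly.
namespace MyCompany.Integration.Helpers
{
    public static class OrderHelper
    {
        public static string BuildReference(string customerId, int orderNumber)
        {
            return string.Format("{0}-{1:D6}", customerId, orderNumber);
        }
    }
}

An expression shape then contains only a one-liner such as strRef = MyCompany.Integration.Helpers.OrderHelper.BuildReference(custId, ordNum); so branch differences concentrate in the C# project rather than the ODX.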
Obviously this won't be able to capture all differences. You should also try to modularize orchestrations when possible (use call/start orchestration, partner correlation, etc.) so that the individual artifacts are small and won't require as many concurrent changes.
One other possibility for maps is to have them refer to external XSLT that is source controlled - but then you lose the value of the mapper designer.

Related

How to structure a proper 3-tier (no ORM) web project

I'm working on a legacy web project, so there is no ORM (EF, NHibernate) available here.
The problem is that the structure feels tedious whenever I implement a new function.
Let's say I have a biz object Team.
Now if I want to implement GetTeamDetailsByOrganisation, following the current coding style in the project, I need to (sketched in code below):
In Team's DAL, create a method GetTeamDetailsByOrganisation
Create a method GetTeamDetailsByOrganisation on the biz object Team that calls the DAL method I just created
In Team's BAL, wrap the biz object Team's method in yet another method, maybe with the same name, GetTeamDetailsByOrganisation
Have the page controller class call the BAL method
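In code, the repetition looks roughly like this (a sketch; only Team and GetTeamDetailsByOrganisation come from the project, everything else is hypothetical):

using System.Data;

// Data access layer: the only place with a real implementation.
public class TeamDal
{
    public DataTable GetTeamDetailsByOrganisation(int organisationId)
    {
        // ... raw ADO.NET query against the Team tables ...
        return new DataTable();
    }
}

// Biz object: a pure pass-through to the DAL.
public class Team
{
    public DataTable GetTeamDetailsByOrganisation(int organisationId)
    {
        return new TeamDal().GetTeamDetailsByOrganisation(organisationId);
    }
}

// BAL: the same signature wrapped yet again.
public class TeamBal
{
    public DataTable GetTeamDetailsByOrganisation(int organisationId)
    {
        return new Team().GetTeamDetailsByOrganisation(organisationId);
    }
}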
It just doesn't feel right. Is there a good practice or pattern that can solve my problem here?
I know the tedium you speak of from similarly (probably worse) structured projects. Obviously there are multiple sensible answers to this problem, but it all depends on your constraints and goals.
If the project is primarily in maintenance mode with few or no new features being added, I might accept that that's the way things are. Although it sounds like you are adding at least some new features.
Is it possible to use a code generator? A project I worked on had a lot of tedium like this, apparently because the code base was originally built with a code generator that was lost to the sands of time. I ended up recreating the template, which saved me a lot of time, sanity, and defects.
If the project is still under active development maybe it makes sense to perform some sort of large architectural change. My current project is currently in this category. We're decoupling code and adding repositories as we go. It's a slow process that takes diligence and discipline by the whole dev team. Each time a team takes on a story they tax that story with rewriting some of the legacy code in that area. To help facilitate this we gave a presentation to the rest of the team to get buy-in and understanding. We also created some documentation for our dev team that lists out the steps to take and the things to watch out for. In the past 6 months we've made a ton of progress. We don't have the duplication you speak of, but we have tight coupling issues which makes unit testing impossible without this refactor.
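To illustrate the kind of seam we carve out (a sketch with hypothetical names, not our actual code): callers are rewritten against a small interface, and the existing ADO.NET code moves behind it unchanged, which is what finally makes unit testing possible.

using System.Data;

// The seam: callers depend on this interface instead of concrete data access,
// so tests can substitute a fake implementation.
public interface ITeamRepository
{
    DataTable GetTeamDetailsByOrganisation(int organisationId);
}

// Production implementation: the existing ADO.NET code moves in here unchanged.
public class SqlTeamRepository : ITeamRepository
{
    public DataTable GetTeamDetailsByOrganisation(int organisationId)
    {
        // ... existing query code ...
        return new DataTable();
    }
}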
This is less likely to fit your scenario, but it may also be a possibility to take a certain subset of features and separate those out into separate services that can be rewritten using a better platform and patterns. The old codebase can interoperate at the service layer if needed. You likely make changes in certain areas more than others, so the areas of heavy change might be top priority to move to a dedicated service. This has the benefit of allowing you to create a modern code base without having to rewrite the entire application from scratch all at once. This strategy is what Netflix has done to rewrite their platform piece by piece as they move it to the cloud.

How many files are too many in an ASP.NET MVC project?

I've seen some teams that start breaking into multiple projects from the beginning and others build behemoth single projects. The large project teams say that one massive project is easier to maintain than multiple smaller projects.
In general, how many files is too many?
The answer, as George suggested, is that it depends... but you may have a large project with many Areas (a new feature in ASP.NET MVC 2.0). The time to break something out into another project is when you want to reuse it in another project, since you don't want something so coupled that it needs quite a few changes to work.
So you need to analyze and understand the reusability of your projects. Ideally you would not want one big project with everything... you can divide it into libraries, helpers, models, etc. But again, it depends on how and what you are implementing, and sometimes one large project also works.
There is probably no right or wrong answer here, but from experience, some teams know that certain components logically belong in a separate project, therefore they break them out initially.
Other teams, finding that a project is unmaintainable in its current form, decide to break it down logically into more manageable parts.
As developers, we should always be breaking down problems into more manageable and consumable bits of work. The concept applies to solutions/projects that grow to a size that is just not favorable.
Short and simple answer.
If it becomes too big and messy, break it down.
We don't worry so much about 'files in project', rather we subdivide by 'projects contain logically related function'. Our base libraries are divided by functional area (UI, data access, etc), then the app components by function - reporting, contract maintenance/info, various odd & sundry table maintenance things, deal maintenance/info, rights maintenance/info, etc. (some of the terminology is domain-specific)
Given our translated-from-client/server app is fairly large, we decided that logically related separation would provide a simpler maintenance scheme.
If all your projects are in one solution, there's not really much of a difference (until you get beyond 10 or so projects). If you only plan on having one app, keep it in a single project but separate by folders if you feel that is easier.
We typically separate our projects by tiers... i.e. a web tier project, a model/business logic tier, and a data access or OR/M tier. It makes it easier for us to manage and conceptually think about the various apps. It also helps prevent us from mixing concerns (i.e. you probably wouldn't want your model accessing the System.Web.Mvc namespace, but if everything is in one project it's easier for a developer to slip such 'features' in).

Data Binding in 3 layered architecture?

Does data binding fit in a 3-layered architecture? Is dropping a GridView on a web form and binding it to a LinqDataSource or SqlDataSource considered bad? The way I see it, that's the presentation layer talking directly to the data access layer. I once heard a "professional developer" say never ever do this, so what's the alternative if you shouldn't?
The way you are doing it is OK for a small project, but if you want your app to have the flexibility to support both Windows and web front ends in the future, then you should use layers.
Please follow this link http://www.dotnetspider.com/resources/1566-n-Tier-Architecture-Asp.aspx
You should have a middle tier between Presentation and Data Access layers, the middle tier is pulled out from the presentation tier and, as its own layer, it controls an application’s functionality by performing detailed processing.
The main tasks of the business layer are business validation and business workflow.
When you build your business logic components into an SDK, you are effectively disconnecting it from your Web application, and any input validation that it performs. Therefore, your business logic components are the last line of defense to make sure that only valid values make it into your database.
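A hedged sketch of what that last line of defense can look like (hypothetical names; the point is only that the business layer re-validates regardless of what the UI checked):

using System;

public class TransferService
{
    public void Transfer(int fromAccountId, int toAccountId, decimal amount)
    {
        // Business validation: never trust that the caller already checked.
        if (amount <= 0)
            throw new ArgumentOutOfRangeException("amount", "Transfer amount must be positive.");
        if (fromAccountId == toAccountId)
            throw new ArgumentException("Cannot transfer to the same account.");
        // ... business workflow: debit, credit, write audit record ...
    }
}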
Data binding is, of course, necessary to effectively display data.
Tooling is great and can boost productivity. It is equally important to understand what the tooling is generating, even at a basic level, in order to be able to effectively utilize the generated code.
The reaction you describe seems a bit extreme. If a wizard can generate some code that works for ya, then use it. If you don't understand the generated code then that is the next priority; learn about what it is doing and why. In the meantime, you have a page that people can put eyes on regardless of how it got there.
I am a bit pragmatic when it comes to tools. You do what you have to do. But, if after [insert appropriate internship length] you are still using code gen and cannot customize or fix it then you (as in the royal you, not the you you) are being lazy or stupid or both. ;-)
OT: (almost) never say never unless you want to lessen the impact of what you are trying to communicate.
my 2 pesos.
When you're doing a small project or a prototype, go with the LinqDataSource or SqlDataSource. However, the downsides of those data sources are serious enough that you should think hard about whether they are appropriate. If you're doing a multi-layered or multi-tier architecture, they simply don't fit. But even if your architecture isn't that strict, ask yourself how big this application is going to be and how likely it is that you will make changes to the system in the future. How much time is it going to take you when you want to make a change to the database?
I've seen projects where the developers used those data sources because those were the constructs shown in those nice ASP.NET videos. However, when the projects grew from prototypes into big production applications (yes, I've seen it happen; the prototype seemed good enough), the lack of compile-time support (your queries are defined in markup!) made it very hard to make any change to the system.
When you need to make a change to the system, that will be the moment you see that the cost of the change is a magnitude bigger than the time you saved by flattening your architecture.
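For contrast, a sketch of the compile-time-checked alternative (all names hypothetical): the query lives behind a typed business-layer method, so a schema change breaks the build instead of a page at runtime.

using System.Collections.Generic;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Renaming a column or property now breaks the build here,
// not a SqlDataSource buried in some page's markup.
public class CustomerService
{
    public IList<Customer> GetActiveCustomers()
    {
        // ... delegate to the data access layer ...
        return new List<Customer>();
    }
}

The page then binds with something like CustomersGrid.DataSource = new CustomerService().GetActiveCustomers(); followed by CustomersGrid.DataBind(); (grid name made up, of course).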

Abstraction or not?

The other day I stumbled onto a rather old Usenet post by Linus Torvalds: the infamous "You are full of bull****" post in which he defends his choice of plain C for Git over something more modern.
In particular, this post made me think about the enormous number of abstraction layers that accumulate one over the other where I work. Mine is a Windows .NET environment. I must say that I like C# and the .NET environment; it really makes most things easy.
Now, I come from a very different background made of Unix technologies like C and a plethora of scripting languages; to me, OOP is just one, and not always the best, programming paradigm. I often struggle (in a working kind of way, of course!) with my colleagues (one in particular), because they appear to be of the "any problem can be solved with an additional level of abstraction" church, while I'm more of the "keep it simple" school. I think there is a very different mental approach to problems that maybe comes from exposure to different cultures.
As a very simple example: for the first project I did here, I needed some configuration for an application. I wrote a ten-line class to load and parse a text file, located in the program's root dir, containing colon-separated key/value pairs, one per row. It worked.
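Roughly the ten-line loader I mean (a sketch from memory, not the original code):

using System.Collections.Generic;
using System.IO;

public static class SimpleConfig
{
    public static Dictionary<string, string> Load(string path)
    {
        var settings = new Dictionary<string, string>();
        foreach (var line in File.ReadAllLines(path))
        {
            var parts = line.Split(new[] { ':' }, 2); // key:value, one per row
            if (parts.Length == 2)
                settings[parts[0].Trim()] = parts[1].Trim();
        }
        return settings;
    }
}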
In the end, to standardize the approach to the configuration problem, we now have a library, deployed on every machine running each configured program, that calls a service that, at startup, loads an XML file containing references to other XML files, one per application, which contain the configurations themselves.
Now, it is extensible and made up of fancy reusable abstractions, providers and all, but I still think that if one day we really do reuse part of it, then given the time it took to build, we could have written the needed code from scratch or copy/pasted the old code and modified it.
What are your thoughts about it? Can you point out some interesting reference dealing with the problem?
Thanks
Abstraction makes it easier to construct software and understand how it is put together, but it complicates fully understanding certain issues around performance and security, because the abstraction layers introduce certain kinds of complexity.
Torvalds' position is not absurd, but he is an extremist.
Simple answer: programming languages provide data structures and ways to combine them. Use these directly at first, do not abstract. If you find you have representation invariants to maintain that are at a high risk of being broken due to a large number of usage sites possibly outside your control, then consider abstraction.
To implement this, first provide functions and convert the call sites to use them without hiding the representation. Hide the data representation only when you're satisfied your functional representation is sufficient. Make sure at this time to document the invariant being protected.
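In C# terms, that first step might look like this (a sketch; the invariant and names are purely illustrative):

using System;
using System.Collections.Generic;

// The representation (a plain dictionary) stays visible to callers, but
// mutations funnel through a function that documents and guards the invariant.
public static class Stock
{
    // Invariant protected: quantities never go negative.
    public static void Remove(Dictionary<string, int> stock, string sku, int count)
    {
        if (count < 0)
            throw new ArgumentOutOfRangeException("count");
        int available;
        if (!stock.TryGetValue(sku, out available) || available < count)
            throw new InvalidOperationException("Insufficient stock for " + sku);
        stock[sku] = available - count;
    }
}

Only once every call site goes through Remove would you consider hiding the dictionary behind a class.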
An "extreme programming" version of this: do not abstract until you have test cases that break your program. If you think the invariant can be breached, write the case that breaks it first.
Here's a similar question: https://stackoverflow.com/questions/1992279/abstraction-in-todays-languages-excited-or-sad.
I agree with @Steve Emmerson - 'Coders at Work' would give you some excellent perspective on this issue.

Architecture for Satellite Parts of a Larger Application

I work for a firm that provides certain types of financial consulting services in most states in the US. We currently have a fairly straightforward CRUD application that manages clients and information about assets and services we perform for each. It only concerns itself with the fundamental data points and processes that are common to all locations--the least common denominator.
Now we want to implement support for tracking disparate data points and processes that vary from state to state while preserving the core nationally-oriented system. Like this:
[diagram omitted] (source: flickr.com)
The stack I'm working with is ASP.Net and SQL Server 2008. The national application is a fairly straightforward web forms thing. Its data access layer is a repository wrapper around LINQ to SQL entities and datacontext. There is little business logic beyond CRUD operations currently, but there would be more as the complexities of each state were introduced.
So, how to implement the satellite pieces...
Just start glomming on the functionality and pursue a big ball of mud
Build a series of satellite apps that re-use the data-access layer but are otherwise stand-alone
Invest (money and/or time) in a rules engine (a la Windows Workflow) and isolate the unique bits for each state as separate rule-sets
Invest (time) in a plugin framework a la MEF and implement each state's functionality as a plugin
Something else
The ideal user experience would appear as a single application that seamlessly adapts its presentation and processes to whatever location the user is working with. This is particularly useful because some users work with assets in multiple states. So there is a strike against number two.
I have no experience with MEF or WF so my question in large part is whether or not mine is even the type of problem either is intended to address. They both kinda sound like it based on the hype, but could turn out to be a square peg for a round hole.
In all cases each state introduces new data points, not just new processes, so I would imagine the data access layer would grow to accommodate the addition of new tables and columns, but I'm all for alternatives to that as well.
Edit: I tried to think of some examples I could share. One might be that in one state we submit certain legal filings involving client assets. The filing has attributes and workflow that are different from other states that may require similar filings, and the assets involved may have quite different attributes. Other states may not have comparable filings at all, still others may have a series of escalating filings that require knowledge of additional related entities unique to that state.
Start with the Strategy design pattern, which basically allows you to outline a "placeholder" to be replaced by concrete classes at runtime.
You'll have to sketch out a clear interface between the core app and the "plugins", and have each strategy implement it. Then, at runtime, when you know which state the user is working on, you can instantiate the appropriate state strategy class (perhaps using a factory method) and call the generic methods on it, e.g. something like
IStateStrategy stateStrategy = StateSelector.GetStateStrategy("TX"); //State id from db, of course...
stateStrategy.Process(nationalData);
Of course, each of these strategies should use the existing data layer, etc.
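The snippet above assumes an interface and selector roughly like this (a sketch; the concrete strategy classes and NationalData are hypothetical):

using System;

public class NationalData { /* the shared, nationally-oriented fields */ }

public interface IStateStrategy
{
    void Process(NationalData nationalData);
}

public class TexasStrategy : IStateStrategy
{
    public void Process(NationalData nationalData)
    {
        // Texas-specific filings, attributes, and workflow go here.
    }
}

public static class StateSelector
{
    public static IStateStrategy GetStateStrategy(string stateId)
    {
        switch (stateId)
        {
            case "TX": return new TexasStrategy();
            default: throw new NotSupportedException("No strategy registered for " + stateId);
        }
    }
}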
The (apparent) downside of this solution is that you'll be hardcoding the rules for each state, and you cannot transparently add new rules (or new states) without changing the code. Don't be fooled, that's not a bad thing - your business logic should be implemented in code, even if it's dependent on runtime data.
Just a thought: whatever you do, completely code 3 states first (with 2 you're still tempted to repeat identical code, with more it's too time-consuming if you decide to change the design).
I must admit I'm completely ignorant about rules engines or WF. But wouldn't it be possible to just have one big stupid ASP.NET include file with the per-state instructions separated from the main logic, without any additional language/program?
Edit: Or is it just the fact that each state has quite a lot of completely different functionality, not just some bits?
