I have inherited the support and development of BizTalk applications as part of my development role.
I'm a general C# developer, so I was pleased to see that I can create classes and call their methods from the Expression shape of an orchestration.
This means I can do all the data manipulation using code I am familiar with (and faster in), rather than learning the BizTalk ways.
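For example (the class, method, and variable names here are my own invention), an Expression shape call reads like ordinary C#:

// In an Expression shape (hypothetical names):
vTotal = MyCompany.Helpers.PriceCalculator.ComputeTotal(msgOrder.SubTotal, msgOrder.DiscountPercent);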
At this point I am not concerned with whether it's a good idea or not.
Are there any purely technical reasons I should not do this?
You would have to make sure that whatever external methods you are calling are multi-thread capable and can handle high throughput.
If you don't achieve the above, you will either get some very strange issues (caused by cross-thread contamination) or you will create a bottleneck in BizTalk that reduces message throughput.
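As a rough sketch (the class and member names are mine, not from the question), the distinction looks like this:

public static class OrderHelper
{
    // BAD: shared mutable state; concurrent orchestration instances
    // running on different threads will trample each other's values.
    private static string _lastCustomer;

    // GOOD: a pure, stateless method is safe to call from the many
    // threads BizTalk uses to run orchestration instances in parallel.
    public static decimal ApplyDiscount(decimal total, decimal discountPercent)
    {
        return total - (total * discountPercent / 100m);
    }
}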
You also need to make sure that errors are handled, retried, and propagated back correctly to the calling orchestration on failure. I came across one solution where the developer had, for some reason, decided to call a web service using an external class. Every so often this web service would throw an error, but the class would pass the error message back to the orchestration as if it were a valid response message. This caused a failure later in the orchestration, when it tried to use the message and it did not match the expected message. When I was allocated budget, I replaced this class with a properly configured send port, which also automatically retried the message when it encountered the web service error, after which it processed successfully.
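A minimal sketch of the fix inside such a class, assuming a hypothetical client and response shape, is to surface faults as exceptions rather than returning them as data:

// Hypothetical response shape and client, for illustration only:
public class ServiceResponse { public bool IsFault; public string FaultText; public string Body; }
public static class LegacyServiceClient
{
    public static ServiceResponse Send(string request) { return new ServiceResponse(); }
}

public static class ServiceWrapper
{
    public static string CallService(string request)
    {
        var response = LegacyServiceClient.Send(request);
        // Throw on a fault so the orchestration's exception handling (and
        // any retry scope) kicks in, instead of passing the fault text on
        // as if it were a valid response message.
        if (response.IsFault)
            throw new System.ApplicationException(response.FaultText);
        return response.Body;
    }
}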
Technically, you are adding deployment and maintenance complexity with the external assemblies; for example, any change to a contract will require changes in both the assembly and the orchestration.
And you're losing all the advantages of the BizTalk mapping engine for data transformation, which is in general an easy part to learn.
I'll say this: it's very important for future maintainability to develop BizTalk apps the "BizTalk way".
For example, if you did message manipulation in an external class that should have been done in a Map or with XPath, I would fail that in Code Review and you'd have to refactor.
The reason is that whoever takes this over from you should be able to expect a BizTalk app. I've seen situations such as you describe, and they make it harder to upgrade, enhance, and support new business requirements, because the developer now has to accommodate both the BizTalk and the external processes.
Technically, using .NET classes does not break anything in BizTalk; lots of BizTalk components are based on the .NET Framework, such as adapters, pipeline components, etc. In my view, use the best of both BizTalk and .NET depending on the scenario. For example, if you need to map an inbound XML to an outbound XML, use BizTalk maps, as they are a lot easier and quicker to implement; in this scenario, using a .NET class is more tedious than using a map. I don't think there is a big learning curve in using BizTalk features such as maps, orchestrations, and pipeline components.
ASP.NET is known to exhibit what is called "thread agility". In short, it means that multiple threads may be employed to fulfill a single request, although not more than one thread at a time. This is an optimization that means a thread waiting for asynchronous I/O may be returned to the pool and used to service other requests.
However, ASP.NET does not migrate all thread-related data when moving a request. Microsoft either forgot to do so, or thought that using thread-local storage (made easy by the ThreadStatic attribute) was something only the people coding ASP.NET themselves should do.
Based on quick googling, it seems to me that the only way to avoid the issue is to rely on HttpContext instead. The context is indeed migrated if ASP.NET decides to switch threads mid-request, so this overcomes the problem. But it creates a brand new headache instead: It ties your application logic to HttpContext, and therefore to a web context. That's not acceptable in all situations (in fact, I'd say it's unacceptable in most). Besides, since HttpContext is sealed and has internal constructors, you cannot mock or stub it, and therefore your logic also becomes untestable.
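To make the contrast concrete, here is a minimal sketch (the key name is mine) of the two storage options discussed above:

using System.Web;

public static class RequestData
{
    // NOT safe: a [ThreadStatic] field stays with the physical thread,
    // so its value is lost if ASP.NET resumes the request on another thread.
    [ThreadStatic]
    private static object _perThread;

    // Safe under thread agility: HttpContext.Items travels with the request,
    // but it couples this code to System.Web and is hard to unit test.
    public static object PerRequest
    {
        get { return HttpContext.Current.Items["my.request.data"]; }
        set { HttpContext.Current.Items["my.request.data"] = value; }
    }
}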
According to this (old) blog post, CallContext does NOT work, which is pretty infuriating given that a call context is conceptually precisely a logical thread!
Is there a simple way to reliably implement "per-LOGICAL-thread" isolation that will work in asp.net contexts as well as other contexts?
If not, does anyone know of a lightweight third-party framework that solves the problem? Does StructureMap behave correctly when ASP.NET migrates threads?
I would like a general answer, but in case anyone wonders, the specific use case I'm looking at is for use of Entity Framework in a SharePoint context. We're unfortunately stuck with SP-2010 and EF 3.5 for a while. EF basically requires that data is saved using the same context as they were originally read from - or else you have to keep track of changes yourself. I would like to introduce a "current model" concept. The first time the model is called upon in processing each HTTP request it should be instantiated, and then that same model instance should be used for the duration of the request. But the code relying on "Model.Current" should also work if executed in the context of a timer job. I'm fine with the timer job code explicitly disposing of the model when done with it (a task I'd like to give to a handler for HttpApplication.EndRequest in the SharePoint web context).
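In pseudo-real code, and setting aside for a moment my own testability objection above, what I'm imagining is something like this sketch (MyModel stands in for the EF context; all names are placeholders):

using System;
using System.Web;

public class MyModel : IDisposable   // stands in for the EF context
{
    public void Dispose() { }
}

public static class ModelContext
{
    [ThreadStatic]
    private static MyModel _nonWebFallback; // e.g. for timer job threads

    public static MyModel Current
    {
        get
        {
            if (HttpContext.Current != null)
            {
                // One instance per HTTP request; Items travels with the
                // request even if ASP.NET switches threads mid-request.
                var model = (MyModel)HttpContext.Current.Items["current.model"];
                if (model == null)
                    HttpContext.Current.Items["current.model"] = model = new MyModel();
                return model;
            }
            // Outside the web context (timer job): caller disposes explicitly.
            return _nonWebFallback ?? (_nonWebFallback = new MyModel());
        }
    }
}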
There may be reasons not to do this, and that's interesting too, but I would really appreciate learning of a way to achieve "logical thread isolation" in an ASP.NET context, as it would be remarkably useful.
There is a nice post related to the problem: Implicit Async Context ("AsyncLocal").
If I got everything right, the logical call context, i.e. CallContext.LogicalGetData and CallContext.LogicalSetData, does migrate data correctly, as long as that data is immutable and you live in the world past .NET 4.5. The immutability limitation is a tough nut, but still... it's the way to go.
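A minimal sketch of that usage, on .NET 4.5 or later:

using System.Runtime.Remoting.Messaging;

class Example
{
    static void Main()
    {
        // Store only immutable data: the logical call context flows a
        // shallow copy across async and thread transitions, so mutations
        // would not be seen consistently on every branch of execution.
        CallContext.LogicalSetData("RequestId", "abc-123");
        var requestId = (string)CallContext.LogicalGetData("RequestId");
    }
}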
I'm trying to refactor an existing project into PureMVC. This is an Adobe AIR desktop app taking advantage of the SQLite library included with AIR and building upon it with a few other libraries:
Paul Robertson's excellent async SQLRunner
promise-as3 implementation of asynchronous promises
websql-js documentation for good measure
I made my current implementation of the database layer similar to websql-js's promise-based SQL access layer, and it works pretty well; however, I am struggling to see how it can work in PureMVC.
Currently, I have my VOs that will be paired with DAOs (data access objects) for database access. Where I'm stuck is how to track the dbFile and sqlRunner instances across the entire program. The DAOs will need to know about the sqlRunner, or at the very least, the dbFile. Should the sqlRunner be treated as singleton-esque? Or created for every database query?
Finally, how do I expose the dbFile or sqlRunner to the DAOs? In my head right now I see keeping these in a DatabaseProxy that would be exposed to other proxies, and instantiate DAOs when needed. What about a DAO factory pattern?
I'm very new to PureMVC but I really like the structure and separation of roles. Please don't hesitate to tell me if this implementation simply will not work.
Typically in PureMVC you would use a Proxy to fetch remote data and populate the VOs used by your View, so in that respect your proposed architecture sounds fine.
DAOs are not a pattern I've ever seen used in conjunction with PureMVC (which is not to say that nobody does or should). However, if I were setting out to write a CRUD application in PureMVC, I would probably think in terms of a Proxy (or proxies) to read information from the database, and Commands to write it back.
I am very new to remoting in Flex. I am using Flex 4.5 and talking to a web application built by someone else on the team using AMF. They have used Zend_AMF to serialize and unserialize the data.
One of the main issues I am facing at the moment is that I will need to talk to a lot of services (about 60 or so).
From the examples on remoting I have seen online and from Adobe, it seems that I need to define a RemoteObject for EACH service:
<mx:RemoteObject id="testservice" fault="testservice_faultHandler(event)" showBusyCursor="true" destination="account"/>
With so many services, I think I might have to define about 60 of those, which I don't think is very elegant.
At the same time, I have been playing with Pinta to test out the AMF endpoint. Pinta seems to allow one to define an arbitrary number of services, methods, and parameters without any of these limitations. Digging through the source, I find that they have actually drilled down deep into the remoting and are handling a lot of low-level stuff.
So, the question is: is there a way to approach this problem without having to define loads of RemoteObjects, and without having to go down too deep and start handling low-level remoting events ourselves?
Cheers
It seems unusual for an application to require that many RemoteObjects. I've worked on extremely large applications, and we typically end up with no more than ~6-10 RemoteObject declarations.
Although you don't give a lot of specifics in your post about the variations of RemoteObjects, I suspect you may be confusing RemoteObject with Operation.
You typically declare a RemoteObject instance for every endpoint in your application. However, that endpoint can (and normally does) expose many different methods to be invoked. Each of these server-side methods is surfaced as a client-side Operation.
You can explicitly declare these if you wish, however the RemoteObject builds Operations for you if you don't declare them:
var remoteObject:RemoteObject = new RemoteObject("account"); // destination from your example

// Creates an Operation for the saveAccount RPC call, invokes it,
// and returns the AsyncToken:
var token:AsyncToken = remoteObject.saveAccount(account);
token.addResponder(this);
// ... etc.
If you're interacting with a single server layer, you can often get away with a single RemoteObject pointing to a single destination on the API, which exposes many methods. This approach is often referred to as an API façade, and it can be very useful if backed with a solid dependency-injection discipline on the API.
Another common approach is to segregate your API methods by logical business area, e.g., AccountService, ShoppingCartService, etc. This has the benefit of being able to mix and match protocols between services (e.g., AccountService may run over HTTPS).
How you choose to split up these RemoteObjects is up to you. However, 60 in a single application sounds a bit suspect to me.
I'm currently working with web services that return objects such as a list of files, e.g. a File array.
I wanted to know whether it's best practice to bind this type of object directly to my front-end code (for example, a repeater/ListView), or whether to first parse it into my own list of "file" classes, e.g. CustomFile[].
If the web service changes, it will break my front-end code. However, if I create my own CustomFile class, I would only need to change my code in one place to fix the issue. It just seems like a lot of extra work to recreate the same classes from a web service, so I wanted to know what the best practice is for this type of work.
There is a delicate balancing act in properly encapsulating implementation details. Too little encapsulation is a maintenance nightmare as small changes in any area break the application. Too many layers is a different kind of maintenance headache altogether.
In this particular case I would create a small layer in your application to encapsulate the web service calls. This will ease your maintenance in both the application and the service as they will be loosely coupled.
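A minimal sketch of such a layer (the property names on the generated proxy type are assumed):

using System.Linq;

// Stub standing in for the service proxy's generated type:
public class ServiceFile { public string Name; public long Size; }

// The rest of the application depends only on CustomFile:
public class CustomFile
{
    public string Name { get; set; }
    public long Size { get; set; }
}

public static class FileServiceGateway
{
    // A change in the service's File type is absorbed in this one method.
    public static CustomFile[] ToCustomFiles(ServiceFile[] serviceFiles)
    {
        return serviceFiles
            .Select(f => new CustomFile { Name = f.Name, Size = f.Size })
            .ToArray();
    }
}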
It sounds like you have already answered your own question. Best practice is to create your own custom class for the reasons you point out, but it is significant extra work.
If the web service isn't likely to change, then just use the existing classes; but if you need to cater for change, then create your own.
Returning a class is fine as long as your client knows how to deserialize it. If it's truly a web service, where you don't have control over both ends of the conversation, it's more common to start with schemas for the XML request and response streams. That decouples the client from the web service a bit more and makes any client that can send XML over HTTP and consume an XML response fair game.
I started to use Silverlight/Flex and immediately bumped into asynchronous service calling. I'm used to solving data access problems in an OO way with one server-fetch mechanism or another.
I have the following trivial code example:
public double ComputeOrderTotal(Order order)
{
    double total = 0;
    // Orderlines are lazy loaded
    foreach (Orderline line in order.Orderlines)
    {
        // Article and Customer are lazy loaded
        total = total + line.Article.Price - order.Customer.Discount;
    }
    return total;
}
If I understand correctly, this code is impossible in Flex/Silverlight: the lazy loading forces you to work with callbacks. IMO the simple example above will become a BIG mess.
Can anyone give me a structured way to implement the above ?
Edit:
The problem is the same for Flex and Silverlight; pseudocode would do fine.
It's not really ORM-related, but most ORMs use lazy loading, so I'll remove that tag.
The problem is lazy loading in the model.
The above example would be very doable if all the data was in memory, but we assume some of it has to be fetched from the server.
Closures don't help, since sometimes the data is already loaded and no asynchronous fetch is needed.
Yes, I must agree that O/R mapping is usually done on the server side of your application.
In Silverlight, the asynchronous way of execution is the desired pattern to use when working with services. Why services? Because, as I said before, there is no O/R mapping tool at the moment that can be used on the client side (Silverlight). The best approach is to have your O/R-mapped data exposed by a service that can be consumed by a Silverlight application. The best way at the moment is to use ADO.NET Data Services to transport the data, and on the client side to manage the data using LINQ to Services. What is really interesting about ADS (the former Astoria project) is that it is designed to be used with the Entity Framework, but the good folks also implemented support for IQueryable, so basically you can hook up any data provider that supports LINQ. To name a few, you can consider LINQ to SQL, Telerik OpenAccess, LLBLGen, etc. To push updates back to the server, the data source is required to support the ADS IUpdatable interface.
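For illustration, building a client-side query might look like this sketch (the service URI, the "Orders" entity set, and the entity shape are all assumptions):

using System;
using System.Data.Services.Client;
using System.Linq;

public class Order   // placeholder client-side entity
{
    public int Id { get; set; }
}

class Demo
{
    static void Main()
    {
        var ctx = new DataServiceContext(new Uri("http://example.com/Data.svc"));
        // The LINQ expression is translated into the request URL; note that
        // on Silverlight the query must then be executed asynchronously
        // (the BeginExecute/EndExecute pattern).
        var query = ctx.CreateQuery<Order>("Orders").Where(o => o.Id > 100);
    }
}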
You can see exactly how this can be done in a series of blog posts that I have prepared here: Getting Started with ADO.NET Data Services and Telerik Open Access
I can't speak to Silverlight, but Flex is a web-browser client technology and does not have any database driver embedded in the Flash runtime. You can do HTTP interactions with a web server instead. It is there, in the middle-tier web server, where you will do any ORM with respect to a database connection, such as Java JDBC. Hibernate ORM and iBATIS are two popular choices in the Java middle-tier space.
Also, because of this:
Fallacies of Distributed Computing
You do not do synchronous interactions from a Flex client to its middle-tier services. Synchronous network operations have become verboten these days and are the hallmark of a poorly designed application: for the reasons enumerated at the above link, the app can (and often will) exhibit a very bad user experience.
You instead make async calls to retrieve data, load the data into your client app's model object(s), and proceed to implement operations on the model. With Flex and BlazeDS you can also have the middle-tier push data to the client and update the client's model objects asynchronously. (Data binding is one way to respond to data being updated in an event driven manner.)
All this probably seems very far afield from the nature of the inquiry in your posting, but your posting indicates you're starting from an entirely incorrect footing in how to understand client-side technologies that have asynchronous and event-driven programming baked into their fundamental architecture. These RIA client technologies are designed this way completely on purpose, so you will need to learn their mode of thinking if you want to have a good and productive experience using them.
I go into this in more detail, and with a Flex perspective, in this article:
Flex Async I/O vs Java and C# Explicit Threading
In my direct experience with Flex, I think this discussion is getting too complicated.
Your conceptual OO view is no different between sync and async. The only difference is that you use event handlers to deal with the host conversation in the DAL, rather than something returned from a method call. And that often happens entirely on the host side, and has nothing to do with Flex or Silverlight. (If you are using AIR for a workstation app, then it might be in client code, but the same applies; likewise if you are using prolonged AJAX. Silverlight, of course, has no AIR equivalent.)
I've been able to design everything I need without any other changes required to accommodate async.
Flex has a single-threaded model. If you make a synchronous call to the web server, you'd block the entire GUI of the application. The user would have a frozen application until the call completes (or times out on a network error condition, etc.).
Of course, real RIA programs aren't written that way: their GUI remains accessible and responsive to the user through async calls. This also makes it possible to have real progress indicators that offer cancel buttons and the like, if the nature of the interaction warrants it.
Old, bad-user-experience Web 1.0 applications exhibited this synchronous behaviour in their interactions with the web tier.
As my linked article points out, the async single-threaded model, coupled with ActionScript 3 closures, is a good thing, because it's a much simpler programming model than the alternative: writing multi-threaded apps. Multi-threading was the approach used when writing client-server Java Swing or C# .NET WinForms applications in order to achieve a similarly responsive, fluid-at-all-times user experience in the GUI.
Here's another article that delves into this whole subject matter of asynchronous, messaging/event-driven distributed app architecture:
Building Effective Enterprise Distributed Software Systems
data-driven communication vs behavior-driven communication
Silverlight is a client technology, and the object-relational mapping happens completely on the server, so you have to forget about ORM in Silverlight.
Following your example, what you have to do is create a web service (SOAP, REST...) that can give your Silverlight client the complete Order object.
Once you have the object, you can work with it without communicating with the server, in a normal, synchronous way.
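For example, a hypothetical WCF-style contract for such a service might look like this (Order stands in for the fully loaded entity from the question):

using System.ServiceModel;

public class Order { /* the fully loaded entity from the question */ }

[ServiceContract]
public interface IOrderService
{
    // The server performs the O/R mapping and eager-loads the graph, so
    // the returned Order arrives with its order lines, article prices,
    // and customer discount already populated; nothing lazy-loads on
    // the client.
    [OperationContract]
    Order GetOrder(int orderId);
}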
Speaking about Silverlight, you should definitely check out RIA Services.
Simply put, it brings the DataContext from the server to the client, from where you can asynchronously query it (there is no need to write WCF services by hand; it's all done by RIA Services).
C# 5
The async/await construct will do almost exactly what I want.
Watch the presentation by Anders Hejlsberg.
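To connect this back to the original example: with async/await, the lazy loads become awaited service calls, but the method reads almost like the synchronous original. A sketch, where the Load*Async fetch methods and the type stubs are assumptions standing in for the question's model:

using System.Collections.Generic;
using System.Threading.Tasks;

// Minimal stand-ins for the types in the original example:
public class Article { public double Price; }
public class Customer { public double Discount; }
public class Orderline { }
public class Order { }

public class OrderService
{
    // The three fetches below stand in for whatever async service calls
    // replace the lazy loads; their names and shapes are assumptions.
    public async Task<double> ComputeOrderTotalAsync(Order order)
    {
        double total = 0;
        Customer customer = await LoadCustomerAsync(order);
        foreach (Orderline line in await LoadOrderlinesAsync(order))
        {
            Article article = await LoadArticleAsync(line);
            total += article.Price - customer.Discount;
        }
        return total;
    }

    Task<Customer> LoadCustomerAsync(Order o) { return Task.FromResult(new Customer()); }
    Task<List<Orderline>> LoadOrderlinesAsync(Order o) { return Task.FromResult(new List<Orderline>()); }
    Task<Article> LoadArticleAsync(Orderline l) { return Task.FromResult(new Article()); }
}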