How can I approach making my application use event-based architecture? - asp.net

It's a somewhat broad question, and for that I apologise, however I am struggling to get to grips with an approach for turning an overly complex (read: poorly designed) ASP.NET WebForms application into something more maintainable. I believe it can be transformed into something which is largely event-driven.
I don't mean events as in the .NET coded event, but the conceptual business process events, such as creating a new customer or completing an order.
In principle, I would like to be able to register a piece of code to be called whenever an event of a particular nature occurs. Ideally, there would be some well-defined mechanism to filter the events, so that the code is only called for events that meet certain criteria.
At present, I haven't found any frameworks that use this approach, which makes me worry I'm on a doomed path.
Are there any frameworks, patterns or good reads available for how you can approach this sort of design?
Is there a good reason why I should or shouldn't be attempting a solution this way?

The EDA Pattern seems ideal for what you are doing.
Personally I can't see past the ASP.NET MVC Framework for web applications.

Do you actually mean service bus frameworks?
Here are some of them: NServiceBus, MassTransit, Rhino Service Bus

You might check out BizTalk for a message-based infrastructure and a scalable workflow implementation...
I think it might be able to help you.

The DDD group has a lot of discussion about this. If you can pay, I would get Udi Dahan or Greg Young to put you on the right path. Basically, you would identify those events and make them first-class citizens of your model, then use a service bus to send messages when these events occur. Your domain or even other services could subscribe to those events and take appropriate action.
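A minimal in-process sketch of that idea, with events as first-class objects and filtered subscriptions. All names here are invented for illustration; a real service bus adds durable messaging on top of the same subscribe/publish shape.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Predicate;

// Hypothetical business event; the name and fields are illustrative.
record OrderCompleted(String customerId, double total) {}

// A minimal in-process event bus with per-subscriber filtering.
class EventBus<E> {
    private record Subscription<E>(Predicate<E> filter, Consumer<E> handler) {}

    private final List<Subscription<E>> subs = new ArrayList<>();

    // Register a handler that only fires for events matching the filter.
    void subscribe(Predicate<E> filter, Consumer<E> handler) {
        subs.add(new Subscription<>(filter, handler));
    }

    void publish(E event) {
        for (Subscription<E> s : subs) {
            if (s.filter().test(event)) s.handler().accept(event);
        }
    }
}

class EventBusDemo {
    public static void main(String[] args) {
        EventBus<OrderCompleted> bus = new EventBus<>();
        // Only orders over 100 reach this handler; everything else is filtered out.
        bus.subscribe(e -> e.total() > 100,
                      e -> System.out.println("big order from " + e.customerId()));
        bus.publish(new OrderCompleted("c1", 250.0)); // handler fires
        bus.publish(new OrderCompleted("c2", 20.0));  // filtered out, nothing happens
    }
}
```

The business code that creates a customer or completes an order only calls `publish`; any number of handlers can be registered later without touching that code.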

Related

Axon Framework: Should microservices share events?

We are migrating a monolith to a more distributed architecture, and we decided to use Axon Framework.
In Axon, as messages are first-class citizens, you get to model them as POJOs.
Now I wonder: since one event can be dispatched by one service and listened to by any of the others, how should we handle event distribution?
My first impulse is to package them in a separate project as a JAR file, but this goes against a rule for microservices, that they should not share implementations.
Any suggestion is welcome.
Having some form of 'common' module is definitely not uncommon, although I'd personally use that 'common' module for that specific application alone.
I'd generally say you should regard your commands/events/queries as the API of your application. As such, it might be beneficial to share the event structure with other projects, but just not the actual POJO itself. You could, for example, think about using ProtoBuf for this use case, where ProtoBuf describes a schema for your events.
Another thing to think about is to not expose your whole 'event API'. Typically you'll have quite a few fine-grained events, things which other (micro)services in your environment are not interested in. There are, however, always a couple of 'very important events', put differently 'milestone events', which others definitely are interested in.
These milestone events in some scenarios aren't a direct POJO following from your domain, but rather an accumulation of several events.
It is thus not too uncommon to have a service which accumulates these and publishes another event to notify other services. Accumulating these fine-grained internal events and publishing a milestone event in response is typically better suited as the event API within your microservice architecture.
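In miniature, that accumulation might look like the sketch below. The event and class names are invented for illustration and are not Axon API; in a real Axon application the handlers would be `@EventHandler` methods and the external bus a message broker.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.function.Consumer;

// Hypothetical fine-grained internal events plus one milestone event.
record ItemPicked(String orderId) {}
record Packed(String orderId) {}
record Shipped(String orderId) {}
record OrderFulfilled(String orderId) {}

// Listens to fine-grained internal events and publishes a single
// milestone event beyond the service boundary once all steps are done.
class FulfillmentTracker {
    private final Set<String> picked = new HashSet<>();
    private final Set<String> packed = new HashSet<>();
    private final Consumer<OrderFulfilled> externalBus; // stand-in for a broker

    FulfillmentTracker(Consumer<OrderFulfilled> externalBus) {
        this.externalBus = externalBus;
    }

    void on(ItemPicked e) { picked.add(e.orderId()); }
    void on(Packed e)     { packed.add(e.orderId()); }

    void on(Shipped e) {
        // Only the accumulated milestone leaves the service boundary.
        if (picked.contains(e.orderId()) && packed.contains(e.orderId())) {
            externalBus.accept(new OrderFulfilled(e.orderId()));
        }
    }
}

class MilestoneDemo {
    public static void main(String[] args) {
        List<String> published = new ArrayList<>();
        FulfillmentTracker tracker = new FulfillmentTracker(m -> published.add(m.orderId()));
        tracker.on(new ItemPicked("o1"));
        tracker.on(new Packed("o1"));
        tracker.on(new Shipped("o1")); // all steps seen: milestone goes out
        tracker.on(new Shipped("o2")); // incomplete: nothing published
    }
}
```

Other services only ever see `OrderFulfilled`; the three internal events never cross the boundary.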
So those are a couple of ideas for you; I hope they give you some insights.
I'd like to give a clear-cut solution to your question, but such an answer always hides behind 'it depends'.
You are right, the "official" rule is not to share models. So if you have distributed dev teams, I would stick to it.
However, I tend not to follow it strictly when I have components that are decoupled but developed by the same team, or by teams with high interaction ...

Workflow Foundation - Can I make it fit?

I have been researching Workflow Foundation for a week or so now, and have been aware of it, its concepts, and its use cases for many years; I just never had the chance to dedicate any time to going deeper.
We now have some projects where we would benefit from centralized business logic exposed as services; as these projects require many different interfaces on different platforms, I can see "Business Logic Silos" occurring.
I have had a play around with some proofs of concept to discover what is possible and how it can be achieved, and I must say, it's a bit of a fundamental phase shift for a regular C# developer.
There are 3 things that I want to achieve:
Runtime-instantiated state machines
Customizable by the user (perform different tasks in different orders and have unique functions called between states).
WCF exposed
So I have gone down the route of testing state machine workflows, xamlx WCF services, AppFabric-hosted services with persistence and monitoring, loading xamlx services from the database at runtime, etc., but all of these examples seem not to play nicely together. For example, a hosted state machine service in AppFabric has issues with the sequence of service method calls, such as:
"Operation 'MethodName' on service instance with identifier 'efa6654f-9132-40d8-b8d1-5e611dd645b1' cannot be performed at this time. Please ensure that the operations are performed in the correct order and that the binding in use provides ordered delivery guarantees".
Also, if you instantiate workflow services at runtime from a SQL store, they cannot be tracked in AppFabric.
I would like to thank Ron Jacobs for all of his very helpful Hands On Labs and blog posts.
Are there any examples out there that anyone knows of that will tie together all of these concepts?
Am I trying to do something that is not possible or am I attempting this in the right way?
Thanks for all your help and any comments that you can make to assist.
Nick
Regarding the error, it seems like you have modified the WF after it was deployed (is that #2 in your list?), hence the error you mention.
Versioning (or, in this case, modifying a WF after it's been deployed) is something that will be improved in the coming version, but I don't think it will achieve what you need in #2 (if I understood it correctly), as the same WF is used for every instance.

Transaction management in Web services

Our client follows SOA principles and has designed web services that are very fine-grained, like createCustomer, deleteCustomer, etc.
I am not sure if fine-grained services are desirable, as they create transaction-related issues. For example, suppose a business requirement is that every Customer must have an Address when it's created. In this case, the presentation component will invoke createCustomer first and then createAddress. The services internally use simple JDBC to update the respective tables in the db. As a service is invoked by an external component, it has no way of fulfilling the transactional requirement here, i.e. if createAddress fails, the createCustomer operation must be rolled back.
I guess one approach to deal with this is either to design coarse-grained services (that create a Customer and associated Address in one single JDBC transaction), or
perhaps simply to create a reversing service (deleteCustomer) that reverses the action of createCustomer.
Any suggestions? Thanks.
The short answer: services should be designed for the convenience of the service client. If the client is told "call this, then don't forget to call that", you're making their lives too difficult. There should be a coarse-grained service.
A long answer: Can a Customer reasonably be entered with no Address? So we call
createCustomer( stuff but no address)
and the result is a valid (if maybe not ideal) state for a customer. Later we call
changeCustomerAddress ( customerId, Address)
and now the persisted customer is more useful.
In this scenario the API is just fine. The key point is that the system's integrity does not depend upon the client code "remembering" to do something, in this case to add the address. However, more likely we don't want a customer in the system without an address in which case I see it as the service's responsibility to ensure that this happens, and to give the caller the fewest possibilities of getting it wrong.
I would see a coarse-grained createCompleteCustomer() method as by far the best way to go - this allows the service provider to solve the problem once rather then require every client programmer to implement the logic.
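As a sketch of what such a coarse-grained operation might look like. An in-memory map stands in for the database so the example is self-contained; with JDBC the try/catch below would wrap setAutoCommit(false), commit() and rollback() on a single Connection. All class and method names are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

// One atomic operation: either customer and address are both created,
// or neither is visible afterwards.
class CustomerService {
    final Map<String, String> customers = new HashMap<>();
    final Map<String, String> addresses = new HashMap<>();

    void createCompleteCustomer(String id, String name, String address) {
        // Snapshot the state, playing the role of JDBC's setAutoCommit(false).
        Map<String, String> custBackup = new HashMap<>(customers);
        Map<String, String> addrBackup = new HashMap<>(addresses);
        try {
            customers.put(id, name);
            if (address == null || address.isBlank()) {
                throw new IllegalArgumentException("every customer needs an address");
            }
            addresses.put(id, address);
            // "commit": the new entries simply remain in place
        } catch (RuntimeException e) {
            // "rollback": restore the previous state, no half-created customer
            customers.clear(); customers.putAll(custBackup);
            addresses.clear(); addresses.putAll(addrBackup);
            throw e;
        }
    }
}

class TransactionDemo {
    public static void main(String[] args) {
        CustomerService svc = new CustomerService();
        svc.createCompleteCustomer("c1", "Alice", "1 Main St");
        try {
            svc.createCompleteCustomer("c2", "Bob", ""); // fails: address missing
        } catch (IllegalArgumentException expected) {
            // c2 was rolled back; only c1 remains
        }
    }
}
```

The point is that the caller cannot get it wrong: there is no sequence of calls that leaves a customer without an address.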
Alternatives:
a). There are Web Services specs for Atomic Transactions, and major vendors do support these specs. In principle you could actually implement this using fine-grained methods and true transactions. Practically, I think you enter a world of complexity when you go down this route.
b). A stateful interface (work, work, commit) as mentioned by @mtreit. Generally speaking, statefulness either adds complexity or obstructs scalability. Where does the service hold the intermediate state? If in memory, then we require affinity to a particular service instance and hence introduce scaling and reliability problems. If in some state or work-in-progress database, then we have significant additional implementation complexity.
OK, let's start:
"Our client follows SOA principles and has designed web services that are very fine-grained, like createCustomer, deleteCustomer, etc."
No, the client has ignored the SOA principles and put up what most people do: a morass of badly defined interfaces. Following SOA principles, the client would have gone for a coarser interface (such as the OData mechanism to update data) or followed the advice of any book on multi-tiered architecture written in the last 25 years. SOA is just another word for what was invented with CORBA, and most of the mistakes SOA folks make today were well-known design errors ten years ago with CORBA. Not that many of the people doing SOA today have ever heard of CORBA.
"I am not sure if fine-grained services are desirable, as they create transaction-related issues."
Only for users and platforms not supporting web services. Seriously. Naturally you get transactional issues if you ignore transactional issues in your programming. The trick here is that the people further up the food chain did not; just your client decided to ignore common knowledge (again, see my first remark on CORBA).
The people designing web services were well aware of transactional issues, which is why the web service specifications (WS-*) actually contain mechanisms for handling transactional integrity by moving commit operations up to the client calling the web service. The particular spec your client and you should read is WS-AtomicTransaction.
If you use the current technology to expose your web service (i.e. WCF on the MS platform; similar technologies exist in the Java world), then you can expose transaction flow information to the client and let the client handle transaction demarcation. This has its own share of problems - like clients keeping transactions open maliciously - but is still pretty much the only way to handle transactions that are demarcated in the client.
As you give no platform and just mention Java, I am pointing you to an MS example of how that can look:
http://msdn.microsoft.com/en-us/library/ms752261.aspx
Web services, in general, are a lot more powerful and a lot more thought out than most people doing SOA ever realize. Most of the problems they see were solved a long time ago. But then, SOA is just a buzzword for multi-tiered architecture, and most people who think it is the greatest thing since sliced bread just don't know what was around 10 years ago.
As your customer, I would be a lot more careful about the performance side. Fine-grained, non-semantic web services like he defines are a performance hog for non-casual use, because the number of times you cross the network to query or update small items of data means the network latency kills you. Creating an order for, say, 10 goods can easily take 30-40 network calls in this scenario, which can really take a lot of time. SOA has preached, ever since the beginning (if you ignore the ramblings of those who don't know history), NOT to use fine-grained calls but to go for a coarse-grained exchange of documents and/or a semantic approach, much like the OData system.
If transactionality is required, a coarser-grained single operation that can implement transaction-semantics on the server is definitely going to be much simpler to implement.
That said, certainly it is possible to construct some scheme where the target of the operations is not committed until all of the necessary fine-grained operations have succeeded. For instance, have a Commit operation that checks some flag associated with the object on the server; the flag is not set until all of the necessary steps in the transaction have completed, and Commit fails if the flag is not set.
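In miniature, that flag-based scheme might look like the sketch below. All class, method, and field names are hypothetical.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Fine-grained calls stage data on the server, and Commit only
// succeeds once every required step has run for that id.
class StagedCustomerService {
    private final Map<String, String> stagedNames = new HashMap<>();
    private final Map<String, String> stagedAddresses = new HashMap<>();
    private final Set<String> committed = new HashSet<>();

    void createCustomer(String id, String name)   { stagedNames.put(id, name); }
    void createAddress(String id, String address) { stagedAddresses.put(id, address); }

    // The "flag" is simply: all required pieces are present for this id.
    boolean commit(String id) {
        if (stagedNames.containsKey(id) && stagedAddresses.containsKey(id)) {
            committed.add(id);
            return true;
        }
        return false; // commit fails until every step has completed
    }

    boolean isCommitted(String id) { return committed.contains(id); }
}
```

Note how this recreates exactly the stateful-interface problems from alternative (b): the staged data has to live somewhere between calls, and the client can still forget to call commit at all.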
Of course, if having light-weight, fine grained operations is an important design requirement, perhaps the need to have transactionality should be re-thought.

Are middleware apps required to do business logic?

Let's suppose I have a large middleware infrastructure mediating requests between several business components (customer applications, network, payments, etc). The middleware stack is responsible for orchestration, routing, transformation and other stuff (similar to the Enterprise Integration Patterns book by Gregor Hohpe).
My question is: is it good design to put some business logic on the middleware?
Let's say my app A requests some customer data from the middleware. But in order to get this data, I have to supply a customer id and some other parameter. Should the fetching of this parameter be done by the requesting app, or is the middleware responsible for 'facilitating' this and providing an interface that receives customer ids and internally fetches the other parameter?
I realize this is not a simple question (because of the definition of business logic), but I was wondering if it is a general approach or some guidelines.
Apart from the routing, transformation and orchestration, performance should be kept in mind when loading middleware with functional requirements. Middleware should take only a fraction of the entire end-to-end transaction lifetime. This can be achieved only by concentrating on the middleware's core functionality, rather than trying to complement the host systems' functionality.
This is the "Composite Application" pattern; the heart of a Service Oriented Architecture. That's what the ESB vendors are selling: a way to put additional business logic somewhere that creates a composite application out of existing applications.
This is not simple because your composite application is not just routing. It's a proper new composite transaction layered on top of the routing.
Hint. Look at getting a good ESB before going too much further. This rapidly gets out of control and having some additional support is helpful. Even if you don't buy something like Sun's JCAPS or Open ESB, you'll be happy you learned what it does and how they organize complex composite applications.
Orchestration, Routing and Transformation.
You don't do any of these for technical reasons, at random, or just for fun, you do these because you have some business requirement -- ergo there is business logic involved.
The only thing you are missing for a complete business system is calculation and reporting (let us assume you already have security in place!).
Except for very low level networking, OS and storage issues almost everything that comprises a computer system is there because the business/government/end users wants it to be there.
The choice of 'Business Logic' as terminology was very poor and has led to endless distortions of design and architecture.
What most good designers/architects mean by business logic is calculation and analysis.
If you "%s/Business Logic/Calculation/g" most of the architectural edicts make more sense.
The middleware application should do it. System A should have no idea that the other parameter exists, and will certainly have no idea about how to get it.
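In miniature, that facade idea might look like the sketch below. Everything here is hypothetical; the internal lookup and the backend are stubbed with functions so the shape stands out.

```java
import java.util.Map;
import java.util.function.BiFunction;
import java.util.function.Function;

// The middleware exposes a call that takes only the customer id and
// resolves the extra parameter internally, so the calling application
// never learns that it exists.
class CustomerFacade {
    private final Function<String, String> extraParamLookup; // internal source
    private final BiFunction<String, String, Map<String, String>> backend;

    CustomerFacade(Function<String, String> extraParamLookup,
                   BiFunction<String, String, Map<String, String>> backend) {
        this.extraParamLookup = extraParamLookup;
        this.backend = backend;
    }

    // App A calls this with just the id; the facade fills in the rest.
    Map<String, String> getCustomerData(String customerId) {
        String extra = extraParamLookup.apply(customerId);
        return backend.apply(customerId, extra);
    }
}
```

If the backend later changes what the extra parameter is, only the facade changes; none of the requesting applications are affected.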

Scalable/Reusable Authorization Model

Ok, so I'm looking for a bit of architecture guidance, my team is getting a chance to re-cast certain decisions with a new feature that we're building, and I wanted to see what SO thought :-) There are of course certain things that we're not changing, so the solution would have to fit in this model. Namely, that we've got an ASP.NET application, which uses web services to allow users to perform actions on the system.
The problem comes in because, as with many systems, different users need access to different functions. Some roles have access to the Y button, others have access to the Y and B buttons, while another still has access only to B. Most of the time I see this, developers just put in a mishmash of if statements to deal with the UI state. My fear is that, left unchecked, this will become an unmaintainable mess, because in addition to putting authorization logic in the GUI, it needs to be put in the web services (which are called via AJAX) to ensure that only authorized users call certain methods.
So my question to you is: how can a system be designed to reduce the random ad-hoc if statements that check for specific roles, in a way that can be re-used in both GUI/webform code and web service code?
Just for clarity, this is an ASP.NET web application, using webforms, and Script# for the AJAX functionality. Don't let the script# throw you off of answering, it's not fundamentally different than asp.net ajax :-)
Moving from the traditional group, role, or operation-level permission, there is a push to "claims-based" authorization, like what was delivered with WCF.
Zermatt is the codename for the Microsoft class library that will help developers build claims-based applications on the server and client. Active Directory will become one of the STSes an application would be able to authorize against, concurrently with your own as well as other industry-standard servers...
In Code Complete (p. 411) Steve McConnell gives the following advice (which Bill Gates reads as a bedtime story in the Microsoft commercial).
"used in appropriate circumstances, table driven code is simpler than complicated logic, easier to modify, and more efficient."
"You can use a table to describe logic that's too dynamic to represent in code."
"The table-driven approach is more economical than the previous approach [rote object oriented design]"
Using a table-based approach you can easily add new "users" (as in the modeling idea of a user/agent along with its actions). It's a good way to avoid many "if"s. I've used it before for situations like yours, and it kept the code nice and tidy.
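Applied to the role/button problem above, a table-driven check might look like the sketch below. The roles and operation names are invented for illustration.

```java
import java.util.Map;
import java.util.Set;

// One permission table consulted by both the web-form code (to decide
// whether to render a button) and the web-service layer (to reject
// unauthorized AJAX calls), instead of scattered if statements.
class Permissions {
    // role -> operations that role may perform
    private static final Map<String, Set<String>> TABLE = Map.of(
        "clerk",   Set.of("B"),
        "manager", Set.of("Y", "B"),
        "viewer",  Set.of("Y")
    );

    static boolean allowed(String role, String operation) {
        return TABLE.getOrDefault(role, Set.of()).contains(operation);
    }
}
```

Because both the UI and the service layer call the same `allowed` check, the policy lives in one place; adding a role or an operation means editing the table, not hunting through page and service code.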
