Highly configurable and efficient ESB / SOA / integration framework - asynchronous

My plan is to develop or use a Java-based integration framework (ESB, SOA, whatever) that deals with services, under the following constraints:
a Service can be deployed on multiple machines but doesn't have to be present on every one of them
a Service can be deployed and re-deployed (with a newer version) separately
a Service is connected to other services either by:
in-memory connections
(async / sync) remoting to other machines
the routing logic of the Service connections should be configurable on the fly, without re-deploying or restarting anything
I know that OpenESB is close to these requirements; however, it requires redeployment of the service to change the routing (assuming the connections are HTTP BC based). But I'm unfamiliar in this regard with Mule ESB, WSO2, JBoss ESB, or any other open-source ESB... Is there any good solution for this (e.g. configurable in-memory and/or remoting routing)? I don't really care about clustering, as I plan to use the servers separately, and the designated (if required) JMS solution would be HornetQ, if that matters.

You mention several different concepts, but a combination of an ESB pattern, Apache Load Balancer and Maven should get you close. Do not get too hung up on the product; settle on a paradigm/pattern and the decision of the product will be easy - it either does things the way you like or it does not.
Here is the pattern I use.
SOA Design Patterns
This may also interest you: SOA for executives
Cheers

After a long discussion about the pros and cons, we are going to have a HornetQ-based (JMS MQ) solution, where we create message routing rules and sometimes processing code that handles the different kinds of routing. HornetQ is able to handle the in-JVM requirement too, but that part will be covered under the hood.
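To make the direction concrete, here is a minimal sketch of such a routing worker in plain JMS (which HornetQ implements). The queue names and the "routeKey" property are invented for the example; in practice the rules will live in configuration rather than in code:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.naming.InitialContext;

// Consumes from one inbound queue and forwards each message to a destination
// chosen by a routing rule. Keeping the rule here (or in external config)
// means routing can change without touching the services themselves.
public class RoutingWorker {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext(); // JNDI pointing at HornetQ
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("ConnectionFactory");
        Connection connection = cf.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        MessageConsumer in = session.createConsumer(session.createQueue("service.inbound"));
        MessageProducer out = session.createProducer(null); // destination chosen per message
        connection.start();

        while (true) {
            Message message = in.receive();
            // Hypothetical rule: route on a message property set by the sender.
            String routeKey = message.getStringProperty("routeKey");
            String target = "billing".equals(routeKey) ? "billing.inbound" : "default.inbound";
            out.send(session.createQueue(target), message);
        }
    }
}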

Related

Monolithic to Microservices architecture

My company has a .NET project which serves the following use cases:
It listens on a WebSocket Port with certificate based client authentication.
It listens on another WebSocket Port with authorization header based client authentication.
It listens on a TCP Port with certificate based client authentication.
On the above three ports, different sets of client devices are connected.
Now my company wants to convert this application to .NET Core so that it can be deployed on Linux servers to save deployment costs. As the architect, I am thinking in the direction of adopting a microservices architecture along with migrating the application to .NET Core. So I am thinking that the above application can be broken down into three microservices based on the above use cases.
AFAIK a microservices architecture means breaking down your application into smaller services, each of which serves a particular use case. So is breaking this complete application into three different microservices correct or not?
My organization is very new to microservices architecture. I just want to know whether this thinking is correct from an architecture standpoint.
Thanks in advance for your help.
Generally I'd try to break things down based on business domains (or business capabilities) instead of technical reasons.
A good place to start might be reading about domain driven design and bounded contexts - see here - there's some good further reading at the end of that link.
Yes, you are thinking in the right direction.
Here are my suggestions:
You should go for .NET Core and Docker to implement your microservices in a better way.
There are multiple cases where Docker containers will help you in this scenario:
1: Run the same image in multiple containers
2: Manage different containers
3: Run the same image in multiple environments
4: Tag and run images with different versions
And other reasons to go for microservices with Docker:
Microservices are smaller in size
Microservices are easier to develop, deploy, and debug, because a fix only needs to be deployed onto the microservice with the bug, instead of across the board
Microservices can be scaled quickly and can be reused among different projects
Microservices work well with containers like Docker
Microservices are independent of each other, meaning that if one of the microservices goes down, there is little risk of the full application shutting down.
Do some more research on this and you can comfortably go for a microservices architecture.
This may not be answering your question, but I thought it could be useful, especially in light of the fact that your organization is very new to micro-services.
I would recommend to carefully evaluate the advantages and especially disadvantages (complexity) that micro-services architecture introduces.
Just a few examples of things that you will need to think about are log aggregation, communication between services (sync vs async), E2E and integration tests, eventual consistency, etc. Obviously you may end up not having to deal with some of these, but all of them do become a lot more complicated with micro-services.
There should be good business justification to take on the additional complexity (=cost).
Microservices shouldn't be measured on how small they are but on how autonomous and independent they are. Microservices are best designed around business and domain context, as discussed in detail in Identifying domain boundaries.
Since you are starting to build microservices in .NET Core, why not consider serverless microservices? You have plenty of options in the major clouds (AWS, Azure) to build serverless microservices. Serverless is quicker to build, you get a generous free tier, and you don't have to manage clusters. Is there a specific reason you would want to use Kubernetes? You can read more about cloud native and serverless here: Design Cloud native and Serverless

What is the recommended way to interface with a CorDapp via an API?

I am designing a CorDapp, which would require user input as well as API integration, and I am considering various approaches to expose flows and vault queries to the outside world.
The default option seems to be to use Corda RPC. Unless I missed something, there are only Java bindings for it, which effectively restricts the clients to being JVM-based. This is somewhat inconvenient, and ideally I would like something like OpenAPI to make it more open and implementation-agnostic.
Another option is to use some kind of Corda RPC to OpenAPI proxy. I know about Braid, and I'm sure there are others. Braid seems to support deployment as a Corda service packed together with the flows into the CorDapp itself, effectively making it run embedded in the Corda JVM.
Braid can be deployed as a standalone proxy too, which I suppose is option three.
Instinctively I find the embedded mode more attractive, as it reduces the number of moving parts compared to a standalone mode. However, I am concerned that such a model may in fact become discouraged at some point, either because the Corda developers consider it a misuse of the services facility, or because some organisations will not be keen to deploy such code onto their nodes, especially when they may be running multiple CorDapps. I would imagine anything deployed as part of the Corda JVM would at least require more scrutiny due to its potential impact on other things running there, which in turn would reduce agility.
I wonder what approach to integrate with a CorDapp is actually recommended?
Edit 1: I know it is technically possible to embed a webserver into the node and expose a REST API from there, at least in the current version of Corda (4.3 at the time of writing). The question is more about whether it is a good idea to do so, or not, and why.
Take a look at the question I asked on Stack Overflow regarding a front end for a CorDapp. It might be of some help.
Following is the link -
"Corda: Can we develop Dapps that will be run by IIS webserver to talk to Corda platform?"
You can use any front-end technology you want.
As of Corda 3, your backend must be JVM-based, for two reasons:
You need to load various flow, state and other class definitions onto the classpath to pass as arguments to flows, retrieve objects from the vault, etc.
You need to use the CordaRPCClient library to create an RPC connection to the node.
If you really need to write your back-end in another language, there are a few workarounds:
Create a thin Java webserver that sits between your main webserver and the node. The Java webserver translates HTTP requests from the main webserver into RPC calls to the node, and RPC responses from the node into HTTP responses to the main webserver. This is the approach taken by libraries such as Braid.
Use a library such as GraalVM to compile non-JVM languages to JVM bytecode. An example of writing a JVM webserver in JavaScript using GraalVM is available here: https://github.com/nitesh7sid/cordapp-example-nodejs-server-graalvm
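For completeness, this is roughly what that RPC connection looks like from Java using CordaRPCClient. The address and credentials below are placeholders; in practice they come from your node's configuration (rpcSettings / rpcUsers in node.conf):

import net.corda.client.rpc.CordaRPCClient;
import net.corda.client.rpc.CordaRPCConnection;
import net.corda.core.messaging.CordaRPCOps;
import net.corda.core.utilities.NetworkHostAndPort;

public class NodeRpcExample {
    public static void main(String[] args) {
        // Placeholder address and credentials.
        CordaRPCClient client = new CordaRPCClient(NetworkHostAndPort.parse("localhost:10006"));
        CordaRPCConnection connection = client.start("user1", "password");
        try {
            CordaRPCOps proxy = connection.getProxy();
            // Any RPC operation goes through the proxy: vault queries,
            // starting flows, node diagnostics, etc.
            System.out.println(proxy.nodeInfo().getLegalIdentities());
        } finally {
            connection.notifyServerAndClose();
        }
    }
}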

Oracle SOA and MSA

Is it advisable to build MSA-based services on Oracle SOA or any other ESB suite, for that matter? Are there any advantages or disadvantages?
If I am using Java, Spring and JPA over a message queue - say RabbitMQ - I can achieve it in a more controlled environment with lower recurring expenses. Of course, I will end up mixing in tools like Drools or jBPM or similar to achieve things that may be OOTB (out of the box) in the SOA or ESB suite. But scaling a specific service without paying a licence fee for an additional environment should certainly be a good catch, right?
Microservices architecture pattern applies to development of backend systems/services, whereas ESB (e.g. Oracle SOA Suite) is intended as an intermediary layer between consumers and backend services. Backend services contain rich application logic, whereas ESB services provide only intermediary functions like routing, transformation, orchestration etc.
ESB is not intended for rich application logic, though it's possible to do that.
Using an ESB (e.g. Oracle SOA Suite) to host microservices is achievable, but you will incur a big overhead compared to its limited functions and scalability. If you are looking for centralized API management (tracing, security etc.), you can put an API gateway into your architecture instead of a full-scale ESB.

Is BizTalk an ESB?

I am looking into architectural patterns, Enterprise Service Bus (ESB) precisely. Having read the article Enterprise Integration, and with little to no experience, I am wondering whether BizTalk is an ESB or just an EAI tool (hub-and-spoke or bus)?
I found this: NServiceBus and BizTalk, describing BizTalk as a central message broker.
Taking other ESB frameworks into account (NServiceBus and Rhino Service Bus), these frameworks have no central point through which messages are processed.
Is BizTalk an EAI tool rather than an ESB?
Many thanks
BizTalk is promoted by Microsoft as having ESB capabilities - see the BTS ESB Toolkit.
However, the term 'ESB' covers a very broad field, and there is a lot of subjectivity about an exact definition of an ESB. IMHO there are weak points in BizTalk's claim to be comprehensive as an ESB (in a post-2010 definition of the term).
BTS originated in the Hub-and-Spoke EAI era, before ESB became widespread.
BTS is more suited toward asynchronous processes than synchronous processes - latencies will vary depending on load on the system, throttling state, etc.
BTS is cumbersome when it comes to ease of versioning of services and schemas (new deployment is needed)
BTS is cumbersome when it comes to management of MANY services (e.g. Using BizTalk as a facade for all 5000 of your corporate SOA / Web Services will be painful)
FWIW we have found BTS a good fit for:
all of our synchronous and asynchronous EAI (i.e. formalized integration contracts between major LOB systems, and with trading partners), and the large number of adapters assists with integrating a wide range of protocols.
For Business Process and Business Monitoring capabilities
Addressing transactional and delivery reliability - BizTalk has capabilities for retry, tracking, and resumption of suspended messages, which is useful over unreliable networks or when it comes to integration with unreliable systems.
Update, with some further comparative experiences
BTS is very centralised - ultimately, even a multi-server BizTalk cluster / group is dependent on Sql-Server. Queue based ESB products tend to be more decentralised (logically and physically), so loss of a few endpoint or queue servers should not pull the whole enterprise down.
Many queue based ESB's are built on open source technologies, with an eye on avoiding single vendor lock-in
Many contemporary ESB's seem to take a commodity-computing approach to scale out. Scaling out with products like BizTalk can become expensive.
On the plus side, the monitoring and administration capabilities of commercial offerings like BTS should not be underestimated - make sure any ESB you are considering has adequate auditing, instrumenting, retry, and diagnostic (WMI / SNMP / SCOM etc) capabilities - you'll need a dashboard to monitor the health of your bus, and there is nothing worse than not knowing where a message went. Here, centralisation administration and diagnosis is a plus.
BizTalk is a messaging and workflow orchestration platform, on which you can build ESB behaviours and capabilities. To make this easier, and standardise ESB implementation on BizTalk, Microsoft released the BizTalk ESB Toolkit - a set of guidelines, patterns and code.
The concepts of EAI and BPM have been around for a while, so there are many companies that have leveraged BizTalk to create solutions to these problems. Companies that host a full ESB on BizTalk server are far fewer, and adoption has certainly slowed in the advent of WCF/WF/NServiceBus and, of course, Azure Service Bus.
So in summary, BizTalk out of the box is neither EAI nor ESB, but can do both with a number of developers applied to the problem.
By "EAI or ESB" I'm assuming you wanted to know if BizTalk follows the Hub&Spoke or the Bus architecture.
From an architecture-patterns perspective, integration solutions roughly fall under one of two patterns:
The Hub and spoke:
This involves a centralized message broker sending out messages to various receivers, while all the senders send their messages only to this broker.
Thus neither the senders nor the receivers need to be aware of each other.
This is typically what many people refer to as EAI (although it is absolutely possible to implement an EAI solution that follows the BUS pattern).
Solutions following this pattern are easy to develop and administer. All the routing logic is centrally managed at one place - in the hub.
But as you would have guessed, this has a glaring drawback - single point of failure. If the hub crashes everything comes to a halt. Also, this model doesn't scale very well.
BUS:
Enterprise Integration solutions developed around this pattern are generally referred to as ESB. There is no intelligent central authority here. All senders publish their messages on the bus. The receivers need to be intelligent enough to determine which messages are intended for them and take them off the bus.
Thus the senders and the receivers need only be aware of the bus. But here the routing logic is spread across the receivers so there is no single point of failure. Also this model is highly scalable. However such solutions are quite complex and difficult to administer.
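To make the bus idea concrete in a language-neutral way (this is plain JMS, nothing BizTalk-specific; the topic name and the docType property are invented for illustration), a receiver on a bus might look like this - note the routing intelligence lives at the edge, not in a central hub:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.Topic;
import javax.naming.InitialContext;

// Bus pattern in miniature: every sender publishes to the same topic, and
// each receiver declares (via a selector) which messages it considers its own.
public class BusReceiver {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext();
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("ConnectionFactory");
        Connection connection = cf.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        Topic bus = session.createTopic("enterprise.bus");
        MessageConsumer consumer = session.createConsumer(bus, "docType = 'Invoice'");
        connection.start();

        Message message = consumer.receive(); // only invoice messages arrive here
        System.out.println("Invoice service received: " + message);
    }
}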
Coming to the question of which pattern BizTalk follows - it is a hybrid of both these patterns.
The hub-like appearance is very obvious, with its centralized Messaging Engine and a central MessageBox database. This gives one the simplicity and ease of administration which is typical of the hub approach.
But if you look at the BizTalk architecture, one can have a Host with its Host Instances spread across multiple servers. It is also possible to have the different BizTalk databases like MessageBox, Tracking, Ent SSO etc. configured on different servers. This makes BizTalk solutions more scalable and tolerant to faults than the run-of-the-mill hub implementations - which is a behavior usually attributed to the bus approach.
Hope this answers your question.
BizTalk is certainly an ESB. EAI is more of a loose concept - BizTalk can certainly be deployed to support EAI, and it can also do a lot more.
BizTalk is more than an ESB but certainly fits the bill. This link is a little old, but answers your exact question.
EDIT: Here is a more-recent MS link that gets into specifics of implementation.
BizTalk can be used as both EAI and ESB.
As for ESB, the BizTalk server architecture is publish-subscribe: a single message can be published to the MessageBox, which acts as the messaging backbone (the bus). That message can be received by one or more destination systems that are subscribed to it. Of course, there are more capabilities and features you get by using BizTalk Server, like the mapper tool and the use of pipeline components, for example.
For use as EAI, BizTalk offers you orchestrations that manage the business logic, LOB (line of business) adapters to connect to systems (including legacy ones), the mapper tool, a rules engine, and a lot of what you need in order to integrate the different systems in or outside your company.
Absolutely! Biztalk comes from an EIS background, which makes perfect sense for ESB as an infrastructure backplane for service-oriented architectures that span hybrid technical platforms.
At a previous company we chose Biztalk in preference to the IBM ESB product for reasons of functionality and lower cost.
It is Microsoft, so you get what you pay for, but still well worth looking into.
BizTalk Server without the "ESB Toolkit" is not an ESB, because of the following:
It is contract-first; you need to build your message types first.
You need to plan the whole scenario up front to minimize the impact of changes.
Changes require deployment, which increases downtime.
Regarding your question: yes, BizTalk Server is an EAI product.
I agree with most of what's said here. It's a stretch to pitch BizTalk as an all-inclusive ESB solution, even with the ESB Toolkit.
To address a couple points made here ...
•BTS is more suited toward asynchronous processes than synchronous processes - latencies will vary depending on load on the system, throttling state, etc.
BizTalk hosts with unchanged defaults are not ideal for low latency. But those hosts are meant to be tuned. The out-of-the-box configuration is not suitable for any situation where throughput is needed. In my experience, when walking into an organization where BizTalk has been shunned, there is always an untuned single-host setup sitting in the middle of it. It is somewhat analogous to creating tables in a DBMS with no indexes, getting performance issues, and saying the DBMS itself sucks.
•BTS is cumbersome when it comes to ease of versioning of services and schemas (new deployment is needed)
Like with any development platform, you need to have a deployment strategy. If schemas have the version in the namespace, you do not need to redeploy anything. A new version may be deployed without taking anything down.
As far as the service endpoints are concerned, BizTalk can host web services without the use of IIS (BizTalk can use HTTP.SYS to host, just as IIS does). Hosting an in-process service in BizTalk is merely a matter of importing a binding, which can be done without stopping anything in BizTalk. In those endpoints you can implement versioning as well (like http:.../thing/v1, http:.../thing/v2, etc.).
Anyway, ~5 years have passed, so I'm sure you have reached a conclusion before now :)
BizTalk can do both ESB and EAI; it depends on how you design your BizTalk applications.

Addressing scalability and performance in a .NET web application

I'm working on a .NET portal which will have lots of concurrent users, so scalability and performance need to be addressed in the design and architecture.
We plan to use load balancing in the application.
Keeping this in mind,what would be the best way of communicating between IIS web server(hosting aspx,aspx.cs files) and application server (hosting .net assemblies like business logic and data access layer)?
Should it be .NET Remoting or a SOAP web service? Or is there any other approach?
Thanks.
Is there another approach? Yes - don't distribute your objects.
The most scalable approach is to NOT to distribute your objects away from each other. Ask yourself, why do you want to deploy one flavor of code to an "app server" while another flavor of code goes to a "web server"? The communication that goes on between those two layers, if they are distributed, will be much much much much (etc etc) more expensive than a local call.
With today's 64-bit servers, with all of that memory, and the hot CPUs, and with ASP.NET's superior memory management, why not put your business logic and DAL on the same physical machine as the ASPX files? Why not?
If you need to scale, add more servers. Simple.
There are good reasons, of course, to distribute. The most common good reasons have to do with domains of ownership - along several axes: security management, or even budget and control. In other words, to take the latter case, if one team is responsible for running the business logic and a separate team is responsible for building and running the web layer, then it may make sense to distribute those two things to allow independence of management. Most of the good reasons for distributing computer code have their origins in the structures of the human organizations using or developing the code.
There is no good technical reason why a web page should not run on the same CPU, sharing the same CLR VM and memory heap, as the database access layer.
Regardless what you do with distribution, it would be unwise to architect your system with less-than-formal interfaces defining the connections between the layers. If you keep formal interfaces, then it should be no problem for you to measure the perf and efficiency of a distributed approach versus a co-located approach.
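To illustrate that last point about formal interfaces (a sketch only - the question is .NET, but the shape is language-neutral and all names here are invented): the web layer depends on the interface, and whether the implementation is co-located or remote becomes a wiring decision you can measure both ways.

// The formal boundary between the web layer and the business/data layer.
public interface OrderService {
    String findOrderStatus(String orderId);
}

// Co-located implementation: a plain in-process call, no marshalling cost.
class LocalOrderService implements OrderService {
    public String findOrderStatus(String orderId) {
        return "SHIPPED"; // would call the data access layer directly
    }
}

// Distributed implementation: delegates over whichever transport you choose.
// Same interface, so swapping it in is a configuration change, not a rewrite.
class RemoteOrderService implements OrderService {
    public String findOrderStatus(String orderId) {
        throw new UnsupportedOperationException("call the remote middle tier here");
    }
}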
Do you really need an app server? Just how big are we talking, exactly? For example, stackoverflow.com has ~50k uniques a day and doesn't have an app server, so I assume you are talking much bigger than that? Most performance bottlenecks come down to database issues, so I would concentrate on that.
I suggest you take a look at the Patterns and Practices group's guidelines for performance, more specifically Chapter 6 - Improving ASP.NET Performance. I agree with Cheeso that you should seriously consider NOT physically splitting your application layer and UI layer if you can. The P&P guideline has the following notes:
Avoid Unnecessary Process Hops
Although process hops are not as expensive as machine hops, you should avoid process hops where possible. Process hops cause added overhead because they require interprocess communication (IPC) and marshaling. For example, if your solution uses Enterprise Services, use library applications where possible, unless you need to put your Enterprise Services application on a remote middle tier.
Understand the Performance Implications of a Remote Middle Tier
If possible, avoid the overhead of interprocess and intercomputer communication. Unless your business requirements dictate the use of a remote middle tier, keep your presentation, business, and data access logic on the Web server. Deploy your business and data access assemblies to the Bin directory of your application. However, you might require a remote middle tier for any of the following reasons:
You want to share your business logic between your Internet-facing Web applications and other internal enterprise applications.
Your scale-out and fault tolerance requirements dictate the use of a middle tier cluster or of load-balanced servers.
Your corporate security policy mandates that you cannot put business logic on your Web servers.
If you absolutely have to split the application logic up anyways, you could use WCF as a transport mechanism. I'm not sure how it stacks up against remoting when it comes to performance. However, I seem to remember that this is the guideline Microsoft is pushing.
Clemens Vasters (Technical Lead for the Microsoft .NET Service Bus) talks about WCF vs. Remoting in this answer on MSDN forums.
Learn to write asynchronously.
Explore the CCR runtime for example.
Each thread that is blocked waiting for IO responses is one less available to your system.
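As an illustration of the idea (the CCR is .NET; this sketch uses Java's CompletableFuture instead, with slowRemoteCall standing in for any IO-bound request):

import java.util.concurrent.CompletableFuture;

public class AsyncSketch {
    // Stand-in for any slow IO call (database, web service, ...).
    static String slowRemoteCall() {
        try { Thread.sleep(1000); } catch (InterruptedException ignored) { }
        return "result";
    }

    public static void main(String[] args) {
        // Blocking style: this thread is parked for the full second and
        // cannot serve anyone else in the meantime.
        String blocked = slowRemoteCall();
        System.out.println("Blocking call returned: " + blocked);

        // Non-blocking style: the wait happens elsewhere; this thread is free
        // to serve other requests until the result arrives.
        CompletableFuture<String> future =
                CompletableFuture.supplyAsync(AsyncSketch::slowRemoteCall);
        future.thenAccept(result -> System.out.println("Async call returned: " + result));

        System.out.println("Still responsive while waiting...");
        future.join(); // only so the demo does not exit before the result lands
    }
}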
Turn off 'idealised logging' but leave the ability to switch it back on via an admin console. Logging is often a hidden bottleneck.
CACHE CACHE CACHE!
If it was expensive to get the data the first time, don't pay for it the second!
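A minimal sketch of that (in Java; the 20-second TTL and the loader hook are arbitrary choices for the example):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// "Don't pay for it the second time": a tiny expiring-cache sketch.
public class SimpleCache<K, V> {
    private static final long TTL_MILLIS = 20_000;

    private static final class Entry<V> {
        final V value;
        final long loadedAt;
        Entry(V value, long loadedAt) { this.value = value; this.loadedAt = loadedAt; }
    }

    private final Map<K, Entry<V>> entries = new ConcurrentHashMap<>();

    public V get(K key, Function<K, V> expensiveLoader) {
        Entry<V> e = entries.get(key);
        if (e == null || System.currentTimeMillis() - e.loadedAt > TTL_MILLIS) {
            e = new Entry<>(expensiveLoader.apply(key), System.currentTimeMillis());
            entries.put(key, e); // concurrent misses may load twice; still correct
        }
        return e.value;
    }
}

Wrap the expensive lookup in get(key, loader) and repeated requests within the TTL never touch the database.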
Avoid ASP.NET's session state - it can seriously bloat and lead to a large slow-down in page responsiveness.
Modify the http headers to specify short browser caching (5sec - 20sec) (Depends on the nature of the content)
Utilise GZIP while you are at it!
AND USE LOTS OF RAM
Here are my tips:
1) Move all your static files - images, CSS, JS - to a load balancer like nginx. This will greatly reduce the load on the IIS server, leaving it enough free resources to serve the main requests.
2) Think about caching and avoiding database access altogether.
3) Try to implement REST principles as far as possible.
4) Keep session state to a bare minimum - if possible, avoid it altogether.
There are some good performance and scalability points in these articles from Omar Al Zabir.
10 ASP.NET Performance and Scalability Secrets
and
99.99% available ASP.NET and SQL Server SaaS Production Architecture
(also check out his book Building a Web 2.0 Portal with ASP.NET 3.5)
