Monolithic to Microservices architecture - .net-core

My company has a .NET project which serves the following use cases:
It listens on a WebSocket Port with certificate based client authentication.
It listens on another WebSocket Port with authorization header based client authentication.
It listens on a TCP Port with certificate based client authentication.
Different sets of client devices are connected on these three ports.
Now my company wants to port this application to .NET Core so that it can be deployed on Linux servers to save deployment costs. As the architect, I am thinking in the direction of adopting a microservices architecture along with migrating the application to .NET Core. So I am thinking that the above application can be broken down into three microservices based on the above use cases.
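For illustration, here is a minimal sketch of how the certificate-authenticated WebSocket listener might look after porting to ASP.NET Core on Kestrel; the port is a placeholder and a server certificate is assumed to be configured:

    using System.Net;
    using Microsoft.AspNetCore.Builder;
    using Microsoft.AspNetCore.Hosting;
    using Microsoft.AspNetCore.Http;
    using Microsoft.AspNetCore.Server.Kestrel.Https;
    using Microsoft.Extensions.Hosting;

    public class Program
    {
        public static void Main(string[] args) =>
            Host.CreateDefaultBuilder(args)
                .ConfigureWebHostDefaults(web => web
                    .ConfigureKestrel(kestrel =>
                        // Require a client certificate on this port (placeholder: 5001).
                        kestrel.Listen(IPAddress.Any, 5001, listen =>
                            listen.UseHttps(https =>
                                https.ClientCertificateMode = ClientCertificateMode.RequireCertificate)))
                    .Configure(app =>
                    {
                        app.UseWebSockets();
                        app.Run(async context =>
                        {
                            if (!context.WebSockets.IsWebSocketRequest)
                            {
                                context.Response.StatusCode = 400;
                                return;
                            }
                            using var socket = await context.WebSockets.AcceptWebSocketAsync();
                            // ... handle device traffic on the socket here ...
                        });
                    }))
                .Build()
                .Run();
    }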
AFAIK, microservices architecture means breaking down your application into smaller services, each of which serves a particular use case. So is breaking this complete application into three different microservices correct or not?
My organization is very new to microservices architecture. I just want to know whether this thinking is correct from an architectural point of view.
Thanks in advance for your help.

Generally I'd try to break things down based on business domains (or business capabilities) rather than technical concerns.
A good place to start might be reading about domain driven design and bounded contexts - see here - there's some good further reading at the end of that link.

Yes, you are thinking in the right direction.
Here are my suggestions:
You should go for .NET Core and Docker to implement your microservices in a better way.
There are multiple cases where Docker containers help in this scenario:
1: Run the same image in multiple containers
2: Manage different Containers
3: Run the same image in multiple environments
4: Tag and Run image with different versions
And other reasons to go for microservices with Docker:
Microservices are smaller in size
Microservices are easier to develop, deploy, and debug, because a fix only needs to be deployed onto the microservice with the bug, instead of across the board
Microservices can be scaled quickly and can be reused among different projects
Microservices work well with containers like Docker
Microservices are independent of each other, meaning that if one of the microservices goes down, there is little risk of the full application shutting down.
Do more research on these points and you can comfortably go for a microservices architecture.

This may not be answering your question, but I thought it could be useful, especially in light of the fact that your organization is very new to micro-services.
I would recommend carefully evaluating the advantages and especially the disadvantages (complexity) that a micro-services architecture introduces.
Just a few examples of things that you will need to think about are log aggregation, communication between services (sync vs async), E2E and integration tests, eventual consistency, etc. Obviously you may end up not having to deal with some of these, but all of them do become a lot more complicated with micro-services.
There should be good business justification to take on the additional complexity (=cost).
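To make the sync-vs-async communication point concrete, here is a rough sketch contrasting the two call styles; IMessageBus is a hypothetical abstraction over a broker such as RabbitMQ or Azure Service Bus, and the inventory URL is made up:

    using System.Net.Http;
    using System.Threading.Tasks;

    // Hypothetical broker abstraction; not a real library API.
    public interface IMessageBus
    {
        Task PublishAsync<T>(string topic, T message);
    }

    public class OrderPlacedHandler
    {
        private readonly HttpClient _http;
        private readonly IMessageBus _bus;

        public OrderPlacedHandler(HttpClient http, IMessageBus bus)
        {
            _http = http;
            _bus = bus;
        }

        public async Task HandleAsync(int orderId)
        {
            // Synchronous style: the caller blocks on the other service
            // and fails immediately if that service is down.
            var response = await _http.GetAsync($"http://inventory/api/stock/{orderId}");
            response.EnsureSuccessStatusCode();

            // Asynchronous style: publish an event and let subscribers
            // catch up later - this is where eventual consistency enters.
            await _bus.PublishAsync("order-placed", new { OrderId = orderId });
        }
    }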

Microservices shouldn't be measured by how small they are but by how autonomous and independent they are. Microservices are best designed around business and domain context, as discussed in detail in Identifying domain boundaries.
Since you are starting to build microservices in .NET Core, why not consider serverless microservices? You have plenty of options in the major clouds (AWS, Azure) to build serverless microservices. Serverless microservices are quicker to build, you get a generous free tier, and you don't have to manage clusters. Is there a specific reason you would want to use Kubernetes? You can read more about cloud native and serverless here: Design Cloud native and Serverless
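As a flavour of what that looks like, here is a minimal sketch of an HTTP-triggered Azure Function in C#; the function name, route and payload are purely illustrative:

    using Microsoft.AspNetCore.Http;
    using Microsoft.AspNetCore.Mvc;
    using Microsoft.Azure.WebJobs;
    using Microsoft.Azure.WebJobs.Extensions.Http;

    public static class DeviceStatusFunction
    {
        // One small, independently deployable endpoint; no cluster to manage.
        [FunctionName("DeviceStatus")]
        public static IActionResult Run(
            [HttpTrigger(AuthorizationLevel.Function, "get", Route = "devices/{id}/status")]
            HttpRequest req,
            string id)
        {
            // ... look up the device status here ...
            return new OkObjectResult(new { deviceId = id, status = "online" });
        }
    }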

Related

What is the recommended way to interface with a CorDapp via an API?

I am designing a CorDapp, which would require user input as well as API integration, and I am considering various approaches to expose flows and vault queries to the outside world.
The default option seems to be to use Corda RPC. Unless I missed something, there are only Java bindings for it, which effectively restricts clients to being JVM-based. This is somewhat inconvenient, and ideally I would like something like OpenAPI to make it more open and implementation-agnostic.
Another option is to use some kind of Corda RPC to OpenAPI proxy. I know about Braid, and I'm sure there are others. Braid seems to support deployment as a Corda service packed together with the flows into the CorDapp itself, effectively running embedded in the Corda JVM.
Braid can be deployed as a standalone proxy too, which I suppose is option three.
Instinctively I find the embedded mode more attractive, as it reduces the number of moving parts compared to a standalone proxy. However, I am concerned that such a model may in fact become discouraged at some point, either because the Corda developers consider it a misuse of the services facility, or because some organisations will not be keen to deploy such code onto their nodes, especially when they may be running multiple CorDapps. I would imagine anything deployed as part of the Corda JVM would at least require more scrutiny due to its potential impact on everything else running there, which in turn would reduce agility.
So, which approach to integrating with a CorDapp is actually recommended?
Edit 1: I know it is technically possible to embed a webserver into the node and expose a REST API from there, at least in the current version of Corda (4.3 at the time of writing). The question is more about whether it is a good idea to do so, or not, and why.
Take a look at the question I asked on Stack Overflow regarding a front end for a CorDapp. This might be of some help.
Following is the link -
"Corda: Can we develop Dapps that will be run by IIS webserver to talk to Corda platform?"
You can use any front-end technology you want.
As of Corda 3, your backend must be JVM-based, for two reasons:
You need to load various flow, state and other class definitions onto the classpath to pass as arguments to flows, retrieve objects from the vault, etc.
You need to use the CordaRPCClient library to create an RPC connection to the node.
If you really need to write your back-end in another language, there are a few workarounds:
Create a thin Java webserver that sits between your main webserver and the node. The Java webserver translates HTTP requests from the main webserver into RPC calls to the node, and RPC responses from the node into HTTP responses to the main webserver. This is the approach taken by libraries such as Braid.
Use a library such as GraalVM to compile non-JVM languages to JVM bytecode. An example of writing a JVM webserver in Javascript using GraalVM is available here: https://github.com/nitesh7sid/cordapp-example-nodejs-server-graalvm

What exactly is a microservice in SOA?

I've been trying to get my head around some SOA principles and I came across this page which describes a microservice in SOA:
http://soapatterns.org/design_patterns/micro_task_abstraction
Solution: Individual units of non-agnostic logic with specialized processing and deployment requirements are separated using the microservice model and abstracted into a microservice layer in which there is the architectural freedom to tailor environments in support of specialized service processing and deployment requirements.
What are exactly these specialized processing and deployment requirements?
And what's the key difference between a microservice and a task service in SOA? They both appear to me to be non-agnostic, and both can be composed of other entity and utility services.
Service Oriented Architecture (SOA) deals with defining services, the design of what the services are meant to do, their interfaces, governance, service lifecycle, operational SLAs and "usage agreements" (these last two lead to a number of non-functional requirements), among others.
Microservices can be considered more of a development and deployment approach. If SOA is defined correctly (your interfaces are clean, designed with appropriate granularity, loosely coupled), it is easier for you to apply a microservices approach, subject to your technical platform.
Once you have defined a service, your implementation considerations would include infrastructure, application servers, and build and deployment mechanisms. A key defining feature of a microservice deployment model is that as much of the "stack" as possible is self-contained within a single deployable unit (which can be referred to as a container).
Using a Java example, at the very least a microservice might be one service or function (exposed as Rest or SOAP) running in its own standalone JVM (bundled libraries, application/web server) which you can refer to as a container (example here). Contrast this with a more "traditional" approach where you might have one team managing the application server on which you might deploy multiple services or functions. The major benefit is you can avoid conflicting dependencies, your environment is fully in your control (effectively a few lines of code) and your service might perform more predictably as it has no interference from other services contending for the same resources (JVM threads, Data sources to name a few).
Specialised container management tools such as docker and associated scripting allow you to put together any required configuration you need from your environment.
Update 16/Jan/2018
Also note this paragraph on the distinction between SOA and Microservices as concepts. I translate it as: people didn't do SOA right, so a new buzzword was needed. True to an extent. However, you could also see the two as complementary concepts - SOA being more design-heavy, while microservices are a more deployment-focused methodology.

Component distribution using EJBs: always treat as last resort?

A former colleague of mine once told me that remotely distributing EJBs should always be treated as a last resort. According to him, the drawbacks of this approach often outweigh the benefits.
So when would remotely distributing EJBs really be recommended? In what type of situation?
I mean, if I have a web-centric app suffering performance degradation because its server can't handle the load, I can load-balance that server instead... rather than separating out the business components using EJB.
Can anyone enlighten me on this?
It's not all so simple. EJB technology is best suited to integrating applications based on Java EE. Some examples:
A remote EJB is the simplest way to access server-side business functions from a remote application (a remote client). But now that application clients have become thinner and thinner, the role of remote EJBs has faded.
If your business services are spread across several Java applications, EJB services are a simple way (though not the only one) to integrate them.
Some more background regarding the actual problem might help. I was not able to understand the question completely, since it began with EJBs and whether or not we should use them remotely, and ended up talking about performance degradation. How are these two related in the context of the problem you are facing?

Highly configurable and efficient ESB / SOA / integration framework

My plan is to develop or use a Java-based integration framework (ESB, SOA, whatever) that deals with services, with the following constraints:
a Service can be deployed on multiple machines but doesn't have to be present on every one of them
a Service can be deployed and re-deployed (with a newer version) separately
a Service is connected to other services either by:
in-memory connections
(async / sync) remoting to other machines
the routing logic of the Service connections should be configurable on the fly, without re-deploying or restarting anything
I know that OpenESB is close to these requirements; however, it requires redeployment of the service to change the routing (supposing the connections are HTTP BC based). But I'm unfamiliar in this regard with MuleESB, WSO2, JBossESB, or other open-source ESBs... Is there any good solution for this (e.g. configurable in-memory and/or remote routing)? I don't really care about clustering, as I plan to use the servers separately, and the designated JMS solution (if required) would be HornetQ, if that matters.
You mention several different concepts, but a combination of an ESB pattern, an Apache load balancer and Maven should get you close. Do not get too hung up on the product; settle on a paradigm/pattern, and the product decision will be easy: either it does things the way you like or it does not.
Here is the pattern I use.
SOA Design Patterns
This may also interest you: SOA for executives
Cheers
After long discussions about the pros and cons, we are going with a HornetQ-based (JMS MQ) solution, where we create message routing rules and, in some cases, processing code that handles the different kinds of routing. HornetQ is able to handle the in-JVM requirement too, but that part will be covered under the hood.

Addressing scalability and performance in a .NET web application

I'm working on a .NET portal which will have lots of concurrent users,
so scalability and performance need to be addressed in the design and architecture.
We plan to use load balancing in the application.
Keeping this in mind, what would be the best way of communicating between the IIS web server (hosting aspx, aspx.cs files) and the application server (hosting .NET assemblies like the business logic and data access layers)?
Should it be .NET Remoting or a SOAP web service? Or is there another approach?
Thanks.
Is there another approach? Yes - don't distribute your objects.
The most scalable approach is to NOT to distribute your objects away from each other. Ask yourself, why do you want to deploy one flavor of code to an "app server" while another flavor of code goes to a "web server"? The communication that goes on between those two layers, if they are distributed, will be much much much much (etc etc) more expensive than a local call.
With today's 64-bit servers, with all of that memory, and the hot CPUs, and with ASP.NET's superior memory management, why not put your business logic and DAL on the same physical machine as the ASPX files? Why not?
If you need to scale, add more servers. Simple.
There are good reasons, of course, to distribute. The most common good reasons have to do with domains of ownership - along several axes: security management, or even budget and control. In other words, to take the latter case, if one team is responsible for running the business logic and a separate team is responsible for building and running the web layer, then it may make sense to distribute those two things to allow independence of management. Most of the good reasons for distributing computer code have their origins in the structures of the human organizations using or developing the code.
There is no good technical reason why a web page should not run on the same CPU, sharing the same CLR VM and memory heap, as the database access layer.
Regardless what you do with distribution, it would be unwise to architect your system with less-than-formal interfaces defining the connections between the layers. If you keep formal interfaces, then it should be no problem for you to measure the perf and efficiency of a distributed approach versus a co-located approach.
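To sketch that last point, a formal interface lets callers stay oblivious to whether the implementation is co-located or remote, which also makes the two easy to benchmark against each other (all names here are hypothetical):

    // Callers depend only on the interface, so the implementation can be
    // co-located or remote without code changes upstream.
    public interface IOrderService
    {
        decimal GetOrderTotal(int orderId);
    }

    // Co-located implementation: a plain in-process call.
    public class LocalOrderService : IOrderService
    {
        public decimal GetOrderTotal(int orderId)
        {
            // ... read from the DAL on the same machine ...
            return 42.00m;
        }
    }

    // Remote implementation: same contract, but each call crosses the wire.
    // Measuring both against the same interface makes the cost visible.
    public class RemoteOrderService : IOrderService
    {
        public decimal GetOrderTotal(int orderId)
        {
            // ... call the app server via WCF/remoting here ...
            throw new System.NotImplementedException();
        }
    }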
Do you really need an app server? Just how big are we talking, exactly? For example, Stackoverflow.com has ~50k uniques a day and doesn't have an app server, so I assume you are talking much bigger than that? Most performance bottlenecks come down to database issues, so I would concentrate on that.
I suggest you take a look at the Patterns and Practices group's guidelines for performance, more specifically Chapter 6 - Improving ASP.NET Performance. I agree with Cheeso that you should seriously consider NOT physically splitting your application layer and UI layer if you can. The P&P guideline has the following notes:
Avoid Unnecessary Process Hops
Although process hops are not as expensive as machine hops, you should avoid process hops where possible. Process hops cause added overhead because they require interprocess communication (IPC) and marshaling. For example, if your solution uses Enterprise Services, use library applications where possible, unless you need to put your Enterprise Services application on a remote middle tier.
Understand the Performance Implications of a Remote Middle Tier
If possible, avoid the overhead of interprocess and intercomputer communication. Unless your business requirements dictate the use of a remote middle tier, keep your presentation, business, and data access logic on the Web server. Deploy your business and data access assemblies to the Bin directory of your application. However, you might require a remote middle tier for any of the following reasons:
You want to share your business logic between your Internet-facing Web applications and other internal enterprise applications.
Your scale-out and fault tolerance requirements dictate the use of a middle tier cluster or of load-balanced servers.
Your corporate security policy mandates that you cannot put business logic on your Web servers.
If you absolutely have to split the application logic up anyway, you could use WCF as a transport mechanism. I'm not sure how it stacks up against Remoting when it comes to performance, but I seem to remember that this is the approach Microsoft is pushing.
Clemens Vasters (Technical Lead for the Microsoft .NET Service Bus) talks about WCF vs. Remoting in this answer on MSDN forums.
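For reference, a minimal WCF contract might look roughly like this (service and operation names are placeholders); the same contract can be hosted in-process or on a remote middle tier, and the binding you pick (e.g. netTcpBinding versus basicHttpBinding) largely determines the performance profile:

    using System.ServiceModel;

    // Placeholder contract: the formal boundary between the web and app tiers.
    [ServiceContract]
    public interface IInventoryService
    {
        [OperationContract]
        int GetStockLevel(string sku);
    }

    public class InventoryService : IInventoryService
    {
        public int GetStockLevel(string sku)
        {
            // ... query the data access layer ...
            return 0;
        }
    }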
Learn to write asynchronously.
Explore the CCR runtime for example.
Each thread that is blocked waiting for IO responses is one less available to your system.
Turn off 'idealised logging'; leave the ability to switch it back on via an admin console. Logging is often a hidden bottleneck.
CACHE CACHE CACHE!
If it was expensive to get the data the first time, don't pay for it the second!
Avoid ASP.NET's session state - it can seriously bloat memory use and lead to a large slowdown in page responsiveness.
Modify the HTTP headers to specify short browser caching (5-20 seconds, depending on the nature of the content).
Utilise GZIP while you are at it!
AND USE LOTS OF RAM
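A rough sketch of the caching and header tips above, using classic ASP.NET (System.Web); the cache key and durations are arbitrary examples:

    using System;
    using System.Web;
    using System.Web.Caching;

    public static class CatalogCache
    {
        public static object GetCatalog(HttpContext context)
        {
            // Pay for the expensive load once; serve from cache afterwards.
            var catalog = context.Cache["catalog"];
            if (catalog == null)
            {
                catalog = LoadCatalogFromDatabase();
                context.Cache.Insert("catalog", catalog, null,
                    DateTime.UtcNow.AddMinutes(5), Cache.NoSlidingExpiration);
            }

            // Allow browsers/proxies to cache the response briefly (5-20s).
            context.Response.Cache.SetCacheability(HttpCacheability.Public);
            context.Response.Cache.SetMaxAge(TimeSpan.FromSeconds(20));
            return catalog;
        }

        private static object LoadCatalogFromDatabase()
        {
            // ... expensive database query here ...
            return new object();
        }
    }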
Here are my tips:
1) Move all your static files (images, CSS, JS) to a server like nginx. This will greatly reduce the load on the IIS server, leaving it enough free resources to serve the main requests.
2) Think about caching and avoiding database access altogether.
3) Try to implement REST principles as far as possible.
4) Keep session state to a bare minimum - if possible, avoid it altogether.
There are some good performance and scalability points in these articles from Omar Al Zabir.
10 ASP.NET Performance and Scalability Secrets
and
99.99% available ASP.NET and SQL Server SaaS Production Architecture
(also check out his book Building a Web 2.0 Portal with ASP.NET 3.5)
