Microservices extensions/plugins architecture - networking

We are building a platform based on microservices. The platform will provide a number of basic functions that will be used in various independent projects.
We need to design an architecture that allows us to extend the basic functionality and interact with the existing services. The main constraint is that the code of the core services must remain unchanged, while custom solutions built on top of the platform remain easy to reuse.
We are considering several options. For example, suppose a service "foo" provides the functions foo1 and foo2. To extend them, we can create an independent "foobar" service and put it in front of foo: it accepts API requests, executes the custom functions, and then forwards the request to foo. The result is a kind of intermediary service that acts as the link between the project-specific functions and the main platform. The advantage of this approach is complete independence from the code base of the basic services. The main disadvantage is the complexity of implementation and the need to fragment the main service's functions heavily.
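For illustration, here is a minimal sketch of such an intermediary, assuming a C#/ASP.NET Core stack; the upstream address, route, and the custom step are all invented for the example:

// Program.cs of the hypothetical "foobar" intermediary (ASP.NET Core minimal API).
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddHttpClient("foo", c => c.BaseAddress = new Uri("http://foo:8080"));
var app = builder.Build();

app.MapPost("/foo1", async (HttpRequest request, IHttpClientFactory clients) =>
{
    // 1. Run the project-specific behavior first (validation, enrichment, auditing).
    // 2. Relay the original request body to the unchanged foo service.
    var upstream = await clients.CreateClient("foo")
        .PostAsync("/foo1", new StreamContent(request.Body));
    return Results.Stream(await upstream.Content.ReadAsStreamAsync(),
        upstream.Content.Headers.ContentType?.ToString());
});

app.Run();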
The second option under consideration is similar to an approach often used in monolithic applications: a hook system that lets you override the behavior of the system. For example, you can create an independent service that connects events to subscribers.
This approach is more flexible, but still quite difficult to implement. Its main disadvantage is synchronous, blocking network calls.
The third option we are considering is building the microservices themselves so that additional modules can be added to them, letting services be customized at build time. The main code remains unchanged, but inside the process the hook-and-event scheme already mentioned is implemented (at the code level of individual services). The benefit is ease of implementation. The drawback is that customizations become very difficult to implement when multiple services are involved.
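A rough sketch of what option 3 could look like inside one service, in C# (the interfaces and names are only illustrative):

using System.Collections.Generic;

// Extension point published by the core service; custom modules
// compiled into the build implement it.
public interface IFooHook
{
    void BeforeFoo1(Foo1Context context);
}

public class Foo1Context
{
    public string Input { get; set; } = "";
    public bool Cancel { get; set; } // a hook may veto the operation
}

public class FooService
{
    private readonly IEnumerable<IFooHook> _hooks;
    public FooService(IEnumerable<IFooHook> hooks) => _hooks = hooks;

    public void Foo1(string input)
    {
        var context = new Foo1Context { Input = input };
        foreach (var hook in _hooks)
        {
            hook.BeforeFoo1(context);
            if (context.Cancel) return; // a custom module short-circuited the call
        }
        // ...unchanged core foo1 logic...
    }
}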
Perhaps we are reinventing the wheel and good solutions to this problem already exist. If you know of any, or you have a good idea about possible ways to solve this problem, please share.

I don't think there is a single answer to this generic problem. The third solution seems pretty good. You might take advantage of the chain-of-responsibility or request-response pipeline paradigm to implement the open/closed principle fairly easily. This should ensure that you can add or modify features in your foo service without touching the existing code.
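For example, a minimal request-response pipeline in C# (all names invented): each new feature is one more step, and existing steps never change.

using System.Collections.Generic;

public record Request(string Payload);
public record Response(string Payload);

public delegate Response Next(Request request);

public interface IPipelineStep
{
    Response Invoke(Request request, Next next);
}

public class Pipeline
{
    private readonly List<IPipelineStep> _steps = new();

    public Pipeline Add(IPipelineStep step) { _steps.Add(step); return this; }

    public Response Run(Request request, Next terminal)
    {
        // Wrap the steps back to front so they execute in registration order.
        Next next = terminal;
        for (int i = _steps.Count - 1; i >= 0; i--)
        {
            var step = _steps[i];
            var downstream = next;
            next = r => step.Invoke(r, downstream);
        }
        return next(request);
    }
}

A custom step can inspect, modify, or short-circuit the request before handing it to the original foo logic (the terminal handler), which stays closed for modification.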
If you are looking for something that tackles the whole architecture rather than a single microservice, you might want to take a look at Event-Driven microservice design.
This design approach is similar to your option 2, with the difference that it uses a message broker and asynchronous communication to let the microservices communicate.
There are a lot of advantages, such as greater resiliency and looser coupling between services.
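A rough sketch of the asynchronous variant in C#, with the broker hidden behind an assumed IMessageBus abstraction (RabbitMQ, Kafka, or similar would sit behind it; all names are invented):

using System;
using System.Threading.Tasks;

public interface IMessageBus
{
    Task PublishAsync<T>(string topic, T message);
    void Subscribe<T>(string topic, Func<T, Task> handler);
}

public record OrderCreated(Guid OrderId);

public class OrderService
{
    private readonly IMessageBus _bus;
    public OrderService(IMessageBus bus) => _bus = bus;

    public async Task CreateOrderAsync(Guid orderId)
    {
        // ...core logic stays unchanged...
        await _bus.PublishAsync("orders.created", new OrderCreated(orderId));
        // No blocking call: extension services subscribe to the topic
        // and react on their own schedule.
    }
}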

Related

Testing async microservices with no API

We have 5 microservices (more on the way) that communicate with each other asynchronously. 3 of these microservices do not have any API. Those consume data from a message queue, do some processing, and write data into another queue. 2 of these microservices do have APIs, and those also consume data from the queues but send the response back to the caller.
Given that, for testing the service interactions, correctness of contracts, and end-to-end flow:
what would be the best way to test the asynchronous services that read from and write to queues?
would consumer-driven contract tests be applicable anywhere?
I feel end-to-end production testing is possible, but can something more granular and effective be done?
First, let's correct one thing - you do have APIs. The messages that the "API-less" services read must have some defined content or format. That's an API. You should be testing it, both positive and negative.
Before you get to whole-system testing (in a test or staging environment that mimics your production environment), your testing should probably be in layers, much as in any other system.
Unit tests to test the behavior of each class. In your message-handling classes, for example, call each with messages that prompt specific actions and test that it works correctly. These can usually be run very quickly so it's easy to run them often while developing code.
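For example, a pair of xUnit tests in C# against an invented message handler, covering both the positive and the negative path:

using System;
using Xunit;

public record PaymentReceived(string OrderId, decimal Amount);
public record PaymentProcessed(string OrderId);

public class PaymentMessageHandler
{
    public PaymentProcessed Handle(PaymentReceived message)
    {
        if (message.Amount <= 0)
            throw new ArgumentException("Amount must be positive");
        return new PaymentProcessed(message.OrderId);
    }
}

public class PaymentMessageHandlerTests
{
    [Fact]
    public void Valid_message_produces_outgoing_event()
    {
        var result = new PaymentMessageHandler()
            .Handle(new PaymentReceived("42", 10m));
        Assert.Equal("42", result.OrderId);
    }

    [Fact]
    public void Malformed_message_is_rejected()
    {
        Assert.Throws<ArgumentException>(() =>
            new PaymentMessageHandler().Handle(new PaymentReceived("42", -1m)));
    }
}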
Integration tests to make sure your interaction with next-level external systems works. You can use, for example, testcontainers to run isolated instances of the queueing system that your services interact with.
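On a .NET stack, for instance, that could look like the following sketch (assuming the Testcontainers.RabbitMq package and xUnit; adapt to your broker and language):

using System.Threading.Tasks;
using Testcontainers.RabbitMq;
using Xunit;

public class QueueIntegrationTests : IAsyncLifetime
{
    // A throwaway RabbitMQ instance, started in Docker for this test class.
    private readonly RabbitMqContainer _rabbit = new RabbitMqBuilder().Build();

    public Task InitializeAsync() => _rabbit.StartAsync();
    public Task DisposeAsync() => _rabbit.DisposeAsync().AsTask();

    [Fact]
    public async Task Service_consumes_input_and_publishes_output()
    {
        var amqpUri = _rabbit.GetConnectionString();
        // Point the service under test at the container, publish a message
        // to its input queue, then assert on the output queue's contents.
        await Task.CompletedTask; // placeholder for the actual interaction
    }
}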
Exactly what form these tests take depends on what languages and frameworks your system is built with. I briefly looked at the Pact tool you referenced and it is taking a similar approach.
Looks like with Pact it is possible to do message-based contract testing. I think I have a path forward with this.

Can a Service Layer contain multiple services?

I'm rearchitecting a large web forms ASP.Net application, inserting a service layer to take away unwanted responsibility from the presentation layer.
I've seen a lot of examples where all the service methods are contained in one class.
Is this common / best practice? Or is it perfectly feasible to have a number of service classes within the service layer? I'm leaning towards having more than one service and those services being able to talk to each other.
Any guidance, pros/cons?
Richard
P.S. Note that I'm not talking about a web service layer, WCF or otherwise, although that might become more relevant at a later date.
The SOLID principles, specifically the Single Responsibility Principle, suggest that having all of your functionality in one object is a bad idea, and I tend to agree. As your application grows, the single class will become difficult to maintain.
Your comments on Yuriy's answer suggest you want to use an IoC container. Let's consider that in more detail for a moment...
The more functionality this single class contains, the more dependencies it will require. You could well end up with a constructor on the service that has a long list of parameters, simply because the class covers a lot of ground and depends on other functionality at a lower level, such as logging, database communication, authentication, etc. Let's say a consumer of that service wants to call one, and only one, specific method on that class before destroying the instance. Your IoC container will need to inject every dependency that the service could 'possibly' need at runtime, even though the consumer will only use maybe one or two of those dependencies.
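To make that concrete, a caricature of where the single service class tends to end up (all names invented):

public interface ILogger { }
public interface IDatabase { }
public interface IAuthenticator { }
public interface IMailSender { }
public interface IPdfRenderer { }

public class SiteService
{
    // The constructor keeps growing with the class's responsibilities.
    public SiteService(ILogger logger, IDatabase database,
        IAuthenticator authenticator, IMailSender mailSender,
        IPdfRenderer pdfRenderer /* ...and so on */)
    {
    }

    // A caller wanting only this method still pays for the whole graph,
    // because the container must satisfy every constructor parameter.
    public void SendWelcomeEmail(int userId) { /* uses mailSender only */ }
}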
Also, from an operational perspective: if you have more than one developer on the team working on the service layer at the same time, there is more chance of merge conflicts when you are all editing one file. But, as suggested, you could use partial classes to counter that issue.
Usually a dose of practicality alongside a well known pattern or principle is the best way forward.
I would suggest researching Service-Oriented Architecture if you haven't already, as it may help you make some key decisions in your approach to the solution.
Of course, it can.
Moreover, I believe it would be better to extract a few interfaces from this God service-layer class by functionality (e.g. ISecurityService, INotificationService, etc.) and implement each interface in a separate project. You can also use an IoC container to resolve the classes that implement each service interface. This way you can change each service's implementation independently without changing client functionality.
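A minimal sketch of that split, using Microsoft.Extensions.DependencyInjection purely as an example container (any IoC container works; names are illustrative):

using Microsoft.Extensions.DependencyInjection;

public interface ISecurityService { bool Authorize(string user); }
public interface INotificationService { void Notify(string user, string message); }

// Each implementation can live in its own project.
public class SecurityService : ISecurityService
{
    public bool Authorize(string user) => true; // placeholder logic
}

public class NotificationService : INotificationService
{
    public void Notify(string user, string message) { /* send mail, etc. */ }
}

public static class CompositionRoot
{
    public static ServiceProvider Build() =>
        new ServiceCollection()
            .AddScoped<ISecurityService, SecurityService>()
            .AddScoped<INotificationService, NotificationService>()
            .BuildServiceProvider();
}

Clients depend only on the interfaces, so each implementation can be swapped without touching client code.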
At the very least, as a first step you can mark your service super-class as partial, then split it up by functionality into a few .cs (.vb) files with meaningful names and group them together in Visual Studio. This will simplify navigating across the service methods.
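For example (the file names are just a convention):

// SiteService.Security.cs
public partial class SiteService
{
    public bool Authorize(string user) { /* ... */ return true; }
}

// SiteService.Notifications.cs
public partial class SiteService
{
    public void Notify(string user, string message) { /* ... */ }
}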
My take on structuring an application would be to start by splitting the application into two projects, AppX.Web (UI logic) and AppX.Business (business logic), but still keep them in the same VS solution. The clarity in structure between business logic and UI logic helps you understand which services are shared among multiple web pages and which are local to a single web page. You should avoid reusing code directly between the web pages; if you find that this is necessary, then you should probably move that piece of shared code to the business logic layer.
When implementing the business logic project you should try to create separate classes for different types of business logic. These classes can of course talk to each other, but do avoid having the web pages talk to each other.
Once you have separated UI logic from business logic you can continue to break down the AppX.Business code into smaller pieces if necessary. Common examples include:
AppX.Data: A Data Access Layer (DAL) which isolate all data manipulation from the actual business logic
AppX.Dto: Data Transfer Objects (DTO) which can be useful in many scenarios, e.g. when sending data to the client browser for processing by jQuery
AppX.Common: Shared logic which is generic to many other applications, this can be helper classes you have previously created or things which should be reviewed after the project for inclusion in company wide support classes.
Finally, let's talk about going all-in and exposing your business logic as a WCF service. In that case you actually need not change anything in the existing structure. You can either add the WCF service to your existing AppX.Web project or expose it separately in AppX.Service. If you have properly separated business logic from UI logic, the WCF layer can be just a thin wrapper around the business logic.
When implementing the WCF service it is quite possible to do all of that in a single class. No real business logic lives in the WCF service, as it just makes direct calls into the business logic.
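A sketch of such a thin wrapper (the contract and business types are invented for the example):

using System.ServiceModel;

public class OrderDto { public int Id { get; set; } }

// Lives in AppX.Business.
public class OrderLogic
{
    public OrderDto GetOrder(int orderId) => new OrderDto { Id = orderId };
}

[ServiceContract]
public interface IAppXService
{
    [OperationContract]
    OrderDto GetOrder(int orderId);
}

// The WCF class only delegates; no business rules live here.
public class AppXService : IAppXService
{
    private readonly OrderLogic _orders = new OrderLogic();

    public OrderDto GetOrder(int orderId) => _orders.GetOrder(orderId);
}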
If you were building a new application you would consider your overall design up front, but now that you are rearchitecting I think you should work step by step:
Start by creating the AppX.Web and AppX.Business Projects
Identify services and create classes in AppX.Business for those services
Move code from the AppX.Web project into the new classes in AppX.Business, and make sure you call them from the web project.
Continue with additional break-down if you feel you need to do so!

Flex 4.5 remoting objects

I am very new to remoting in Flex. I am using Flex 4.5 and talking to a web application built by someone else on the team using AMF. They have used Zend_AMF to serialize and unserialize the data.
One of the main issues I am facing at the moment is that I will need to talk to a lot of services (about 60 or so).
From examples on remoting I have seen online and from adobe, it seems that I need to define a remoting object for EACH service:
<mx:RemoteObject id="testservice" fault="testservice_faultHandler(event)" showBusyCursor="true" destination="account"/>
With so many services, I think I might have to define about 60 of those, which I don't think is very elegant.
At the same time, I have been playing with Pinta to test out the AMF endpoint. Pinta seems to allow one to define an arbitrary number of services, methods and parameters without any of these limitations. Digging through the source, I find that they have actually drilled down deep into the remoting and are handling a lot of low-level stuff.
So, the question is: is there a way to approach this problem without having to define loads of RemoteObjects and without having to go too deep and handle low-level remoting events ourselves?
Cheers
It seems unusual for an application to require that many RemoteObjects. I've worked on extremely large applications, and we typically end up with no more than ~6-10 RemoteObject declarations.
Although you don't give a lot of specifics in your post about the variations of RemoteObjects, I suspect you may be confusing RemoteObject with Operation.
You typically declare a RemoteObject instance for every end-point in your application. However, that endpoint can (and normally does) expose many different methods to be invoked. Each of these server-side methods results in a client-side Operation.
You can explicitly declare these if you wish, however the RemoteObject builds Operations for you if you don't declare them:
import mx.rpc.AsyncToken;
import mx.rpc.remoting.RemoteObject;

// one RemoteObject per endpoint; "account" is the destination id
var remoteObject:RemoteObject = new RemoteObject("account");
// creates an operation for the saveAccount RPC call, and invokes it,
// returning the AsyncToken
var token:AsyncToken = remoteObject.saveAccount(account);
token.addResponder(this);
//... etc
If you're interacting with a single server layer, you can often get away with a single RemoteObject, pointing to a single destination on the API, which exposes many methods. This approach is often referred to as an API Façade, and can be very useful if backed with a solid dependency injection discipline on the API.
Another common approach is to segregate your API methods by logical business area, eg., AccountService, ShoppingCartService, etc. This has the benefit of being able to mix & match protocols between services (eg., AccountService may run over HTTPS).
How you choose to split up these RemoteObjects is up to you. However, 60 in a single application sounds a bit suspect to me.

The Purpose of a Service Layer and ASP.NET MVC 2

In an effort to understand MVC 2 and attempt to get my company to adopt it as a viable platform for future development, I have been doing a lot of reading lately. Having worked with ASP.NET pretty exclusively for the past few years, I had some catching up to do.
Currently, I understand the repository pattern, models, controllers, data annotations, etc. But there is one thing that is keeping me from completely understanding enough to start work on a reference application.
That one thing is the Service Layer pattern. I have read many blog posts and questions here on Stack Overflow, but I still don't completely understand the purpose of this pattern. I watched the entire video series at MVCCentral on the Golf Tracker application and also looked at the demo code he posted, and it looks to me like the service layer is just another wrapper around the repository pattern that doesn't perform any work at all.
I also read this post: http://www.asp.net/Learn/mvc/tutorial-38-cs.aspx and it seemed to somewhat answer my question, however, if you are using data annotations to perform your validation, this seems unnecessary.
I have looked for demonstrations, posts, etc. but I can't seem to find anything that simply explains the pattern and gives me compelling evidence to use it.
Can someone please provide me with a 2nd grade (ok, maybe 5th grade) reason to use this pattern, what I would lose if I don't, and what I gain if I do?
In an MVC pattern you have responsibilities separated among the three players: Model, View and Controller.
The Model is responsible for doing the business stuff, the View presents the results of the business (also passing user input to the business), while the Controller acts like the glue between the Model and the View, separating the inner workings of each from the other.
The Model is usually backed up by a database so you have some DAOs accessing that. Your business does some...well... business and stores or retrieves data in/from the database.
But who coordinates the DAOs? The Controller? No! The Model should.
Enter the Service layer. The Service layer provides high-level services to the controller and manages the other (lower-level) players (DAOs, other services, etc.) behind the scenes. It contains the business logic of your app.
What happens if you don't use it?
You will have to put the business logic somewhere and the victim is usually the controller.
If the controller is web centric it will have to receive its input and provide response as HTTP requests, responses. But what if I want to call my app (and get access to the business it provides) from a Windows application which communicates with RPC or some other thing? What then?
Well, you will have to rewrite the controller and make the logic client-agnostic. But with the Service layer you already have that; you don't need to rewrite things.
The service layer communicates via DTOs, which are not tied to a specific controller implementation. If the controller (no matter what type of controller) provides the appropriate data (no matter the source), your service layer will do its thing, providing a service to the caller and shielding the caller from all the responsibilities of the business logic involved.
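To make that concrete, a small sketch of such a service in C# (the golf-flavored names are invented, loosely after the Golf Tracker example mentioned above):

using System;

public class GolferDto
{
    public int Id { get; set; }
    public string Name { get; set; } = "";
}

public interface IGolferRepository { int Save(string name); }

public class GolferService
{
    private readonly IGolferRepository _repository;
    public GolferService(IGolferRepository repository) => _repository = repository;

    public GolferDto Register(string name)
    {
        // Business rules live here, not in an MVC controller,
        // so a web app, a Windows client, or a test can all call this.
        if (string.IsNullOrWhiteSpace(name))
            throw new ArgumentException("Name is required");
        return new GolferDto { Id = _repository.Save(name), Name = name };
    }
}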
I have to say I agree with dpb on the above: the wrapper, i.e. the Service Layer, is reusable and mockable. I am currently in the process of including this layer inside my app... here are some of the issues/requirements I am pondering (very quickly :p ) that could be of help to you...
1. Multiple portals (e.g. a bloggers portal, a client portal, an internal portal) which will need to be accessed by many different users. They all must be separate ASP.NET MVC applications (an important requirement).
2. Within the apps themselves some calls to the database will be similar, as will the methods and the way the data is handled from the repository layer. Without doubt some controllers from each module/portal will make exactly the same call, or an overloaded version of it, hence a possible need for a service layer (coded to interfaces) which I will then compile in a separate class project.
3. If I create a separate class project for my service layer I may need to do the same for the data layer, or combine it with the service layer and keep the model away from the web project itself. At least this way, as my project grows, I can throw out the data access layer (e.g. LinqToSql -> NHibernate), or a team member can, without touching code in any other project. The downside could be they could blow everything up lol...

Using a webservice as an interface for a data access layer in .NET

I am currently doing a CRUD project for school and basically they want us to have this kind of structure (3 projects):
Class Library
Contains all the data access logic (retrieving the data from the database with either LINQ or standard ADO.NET).
Web Service
Having a reference to the class library and offering [WebMethod]s that access the methods from the class library
ASP.NET Website
Having a service reference to the web service and using the WebMethods to retrieve the data
Which basically means that we cannot access the class library directly from the website:
Website
   \
    Web Service
       \
        Class Library
Now of course there are multiple solutions to choose from to provide abstraction in the web service, for example separating the methods that retrieve the articles from the methods that retrieve the categories (which are two different entities and have two separate classes in the class library):
I can either do one web service which will have all the methods (GetAllArticles, GetAllCategories, GetArticleByID, etc.) and the website only makes a reference to this one web service. But of course this will result in having all the methods in a single class (that is, a single web service)
Or I can create multiple web services (Articles.asmx, Categories.asmx etc...) and reference all of them from the website and then call the one I need depending on what data I need to retrieve.
To me, though, neither solution seems ideal: with the first, I will have a ton of methods all in one class (no abstraction whatsoever), and with the second, I will have to reference about 10 different web services from the website (one for articles, one for categories, etc.).
At school they told us to use the web service (or web services) to access the class library, and then retrieve the data from the class library using the Webmethods, but both solutions I mentioned earlier on seem a bit dodgy.
Is there a better way of how I might go about in implementing this structure?
In your example you suggested one service for Articles and one service for Categories. That may of course be just an example, but in that case it would not make sense to separate them, because it would be quite likely and common to have queries and/or routines which make use of both Category data and Article data.
With that in mind, it usually makes the most sense to group routines by the degree to which they are related to one another. If you can separate all your possible routines into, say, three clean groups with little to no overlap, then it makes sense to have three services. If you cannot cleanly separate any of your routines, you should limit yourself to one web service.
But regarding your statement "No abstraction" being a downside of one web service - that isn't necessarily true. You can always create layers of abstraction behind the web service, so that the service class itself is simply a facade with many thin method calls that reach into bulkier logic elsewhere.
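For instance, the single service can stay flat while delegating everything (all names here are invented for illustration):

using System.Web.Services;

public class Article { public int Id; public string Title = ""; }
public class Category { public int Id; public string Name = ""; }

// The bulkier logic lives in ordinary classes in the class library.
public class ArticleLogic { public Article[] GetAll() => new Article[0]; }
public class CategoryLogic { public Category[] GetAll() => new Category[0]; }

// The .asmx facade: many thin methods, no real logic of its own.
public class SiteApi : WebService
{
    private readonly ArticleLogic _articles = new ArticleLogic();
    private readonly CategoryLogic _categories = new CategoryLogic();

    [WebMethod]
    public Article[] GetAllArticles() => _articles.GetAll();

    [WebMethod]
    public Category[] GetAllCategories() => _categories.GetAll();
}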
In the end, the only approach that truly matters, because it's the only approach that works, is this: take the simplest approach possible and build only what you need in as few layers as you can; add complexity only when you reach a problem that cannot be solved in a better way than by adding that complexity. It is always easier to add complexity than to take it away.
I would put all the webmethods into the same class. The webmethods should only be wrappers around your business classes, plus whatever input validation and authorisation you need.
Since web services are generally used by machines and not humans, there is no pressing need for a hierarchy. It can become a burden to manage the URLs and the web server if you create a webmethod class per business class.
If you are planning on using the HTML page that browsers automatically generate to call your web services as your main user interface you need to be a little bit careful. For example you won't be able to rely on SOAP headers if you need to authenticate requests.
Why not use ADO.NET Data Services?
