Say, for example, an accounting service needs to schedule ledger update jobs.
Should the accounting service include its own scheduler, or
should job scheduling be a shared service?
I guess the first option makes the service more autonomous?
What other criteria should be involved in making the decision?
A job scheduler is usually an off-the-shelf component (like Quartz), so you'd probably use that (many times it is embedded in the service host anyway).
In any event, when you decide whether to create a separate service vs. having a component/library used internally within services, you should consider the overhead of maintaining it, developing it as a separate unit, etc. (see the nano-services antipattern).
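For illustration, a minimal sketch of embedding an off-the-shelf scheduler in the service itself, assuming Quartz.NET (the job class and cron schedule are placeholders, not anything from the question):

```csharp
using System.Threading.Tasks;
using Quartz;
using Quartz.Impl;

// Hypothetical job that performs the ledger update.
public class LedgerUpdateJob : IJob
{
    public Task Execute(IJobExecutionContext context)
    {
        // ... run the ledger update here ...
        return Task.CompletedTask;
    }
}

// Embedded in the accounting service's startup (inside an async method):
var scheduler = await StdSchedulerFactory.GetDefaultScheduler();
await scheduler.Start();

var job = JobBuilder.Create<LedgerUpdateJob>()
    .WithIdentity("ledger-update")
    .Build();

var trigger = TriggerBuilder.Create()
    .WithCronSchedule("0 0 23 * * ?")  // every day at 23:00
    .Build();

await scheduler.ScheduleJob(job, trigger);
```

This keeps the service autonomous: the schedule ships and deploys with the service instead of living in a shared scheduling service.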
I have a .NET Core application that consists of some background tasks (hosted services) and Web APIs (which control and get statuses of those background tasks). Other applications (e.g. clients) communicate with this service through these Web API endpoints. We want this service to be highly available, i.e. if a service instance crashes, another instance should start doing the work automatically. The client applications should also be able to switch to the next instance automatically (clients should call the APIs of the new instance instead of the old one).
The other important requirement is that the task (computation) this service performs in the background can't be shared between two instances. We have to make sure only one instance does this task at a given time.
What I have done so far: I run two instances of the same service and use a SQL Server-based distributed locking mechanism (SqlDistributedLock) to acquire a lock. If a service acquires the lock, it goes and does the operation while the other node waits to acquire the lock. If one service crashes, the next node can acquire the lock. On the client side, I used a Polly-based retry mechanism that switches the calling URL to the next node to find the working node.
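The locking part looks roughly like this (a simplified sketch, assuming the Medallion.Threading DistributedLock library's SqlDistributedLock; the lock name and connection string are illustrative):

```csharp
using Medallion.Threading.Sql;

// connectionString points at the SQL Server instance shared by both nodes.
var connectionString = "Server=localhost;Database=Locks;Integrated Security=true";
var distributedLock = new SqlDistributedLock("background-task", connectionString);

using (distributedLock.Acquire())  // blocks until this instance holds the lock
{
    // Only the lock holder runs the background computation.
    // ... run the background computation here ...
}
```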
But this design has an issue: if the node that acquired the lock loses connectivity to the SQL server, the second service manages to acquire the lock and starts doing the work while the first service is still in the middle of doing the same.
I think I need some sort of leader election (it seems I have done it wrongly). Can anyone help me with a better solution for this kind of problem?
This problem is not specific to .NET or any other framework, so please make your question more general to make it more accessible. Generally, the solution to this kind of problem lies in the domain of Enterprise Integration Patterns, so consult the references there, as the status quo may change.
At first sight, and based on my own experience developing distributed systems, I suggest two solutions:
use a load balancer or gateway to distribute requests between your service instances.
use a shared message queue broker to put requests in and let each service instance dequeue a request for processing.
Either is fine, and I use both in my own designs.
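For the second option, here is a minimal competing-consumers sketch, assuming a RabbitMQ broker and its .NET client (6.x API); the queue name and processing logic are placeholders:

```csharp
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

var factory = new ConnectionFactory { HostName = "localhost" };
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();

// Every service instance consumes from the same durable queue; the broker
// delivers each message to exactly one consumer, so a task is never
// processed by two instances at once.
channel.QueueDeclare(queue: "background-tasks", durable: true,
                     exclusive: false, autoDelete: false);
channel.BasicQos(prefetchSize: 0, prefetchCount: 1, global: false);

var consumer = new EventingBasicConsumer(channel);
consumer.Received += (sender, ea) =>
{
    var task = Encoding.UTF8.GetString(ea.Body.ToArray());
    // ... perform the background computation for this task ...

    // Ack only after the work is done; if this instance crashes first,
    // the broker redelivers the message to the surviving instance.
    channel.BasicAck(deliveryTag: ea.DeliveryTag, multiple: false);
};
channel.BasicConsume(queue: "background-tasks", autoAck: false, consumer: consumer);
```

This also removes the need for clients to find the active node: they talk to the gateway or enqueue work, not to a specific instance.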
I have a use case with a lot of consumer groups (and one topic per consumer group), so I have to create many ConcurrentMessageListenerContainer instances, one per topic/consumer group. But I would like them to share a common thread pool to keep control of the KafkaConsumer.poll calls and also of how the records are processed. Do you think it is reasonable to do that with Spring Kafka, or do I have to implement my own version, instantiating KafkaConsumers myself?
Using a pool won't help - the container currently uses a dedicated thread for each consumer. There is no support for sharing threads across containers.
I have an ASP.NET MVC 3 application that I am currently working on. I am implementing a service layer, which contains the business logic, and which is utilized by the controllers. The services themselves utilize repositories for data access, and the repositories use entity framework to talk to the database.
So top to bottom is: Controller > Service Layer > Repository (each service layer depends on a single injectable repository) > Entity Framework > Single Database.
I am finding myself creating services such as UserService, EventService, PaymentService, etc.
In the service layer, I'll have functions such as:
- ChargePaymentCard(int cardId, decimal amount) (part of PaymentService)
- ActivateEvent(int eventId) (part of EventService)
- SendValidationEmail(int userId) (part of UserService)
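To make the layering concrete, here is a simplified sketch of how one of these services is wired up (the repository interface and member names are hypothetical):

```csharp
// Hypothetical repository abstraction injected into the service.
public interface IPaymentRepository
{
    decimal GetCardBalance(int cardId);
    void RecordCharge(int cardId, decimal amount);
}

// Service layer: business logic only; data access goes through the repository.
public class PaymentService
{
    private readonly IPaymentRepository _repository;

    public PaymentService(IPaymentRepository repository)
    {
        _repository = repository;
    }

    public void ChargePaymentCard(int cardId, decimal amount)
    {
        // ... business logic: validate the card, talk to the gateway, etc. ...
        _repository.RecordCharge(cardId, amount);
    }
}
```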
Also, as an example of a second place I am using this: I have a simple console application that runs as a scheduled task and utilizes one of these services. There is also an upcoming second web application that will need to use several of these services.
Further, I would like to keep us open to splitting things up (such as our single database) and to moving to a service-oriented architecture down the road, breaking some of these out into web services (conceivably even to be used by non-.NET apps some day). I've been keeping my eyes open for steps that might make the leap to SOA less painful down the road.
I have started down the path of creating a separate assembly (DLL) for each service, but I am wondering if I have started down the wrong path. I'm trying to be flexible and keep things loosely coupled, but is this path helping me (towards SOA or in general), or just adding complexity? Should I instead create a single assembly/DLL containing my entire service layer, and use that single assembly wherever any services need to be used?
I'm not sure the implications of the path(s) I'm starting down, so any help on this would be appreciated!
IMO, the answer depends on many factors of your application.
Assuming that you are building a non-trivial application (i.e. it is not a college/hobby project to learn SOA):
User Service / Event Service / Payment Service
-- Create its own DLL & expose it as a WCF service if more than one application uses the service and it is too risky to share the DLL with different applications
-- These services should not have inter-dependencies & should each focus on its own area
-- Note: these services might share some common services like logging, authentication, data access etc.
Create a Composition Service
-- This service will do the composition of calls across all the other services
-- For example: if an Order is placed & the business flow is Order Placed > Confirm User Exists (User Service) > Raise an OrderPlaced event (Event Service) > Confirm Payment (Payment Service)
-- All such composition of service calls can be handled in this layer
-- Again, depending on the environment, you might choose to expose this service as its own DLL and/or expose it as a WCF service
-- Note: this is the only service which will hold references to the other services & will be the single point of composition
Now, with this layout, you have the option to call a service directly when you want to interact with that service alone, and you call the composition service when you need a business workflow where different services must be composed to complete the transaction.
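As a rough illustration of the composition idea, a minimal sketch with hypothetical interfaces (one possible shape, not a definitive implementation):

```csharp
using System;

// Hypothetical service interfaces; names and signatures are illustrative only.
public interface IUserService { bool UserExists(int userId); }
public interface IEventService { void RaiseOrderPlaced(int orderId); }
public interface IPaymentService { void ConfirmPayment(int orderId); }

// The composition service is the only piece that references the other
// services; it owns the cross-service business flow.
public class OrderCompositionService
{
    private readonly IUserService _users;
    private readonly IEventService _events;
    private readonly IPaymentService _payments;

    public OrderCompositionService(IUserService users, IEventService events,
                                   IPaymentService payments)
    {
        _users = users;
        _events = events;
        _payments = payments;
    }

    // Order Placed > Confirm User Exists > Raise OrderPlaced event > Confirm Payment
    public void PlaceOrder(int userId, int orderId)
    {
        if (!_users.UserExists(userId))
            throw new InvalidOperationException("Unknown user.");

        _events.RaiseOrderPlaced(orderId);
        _payments.ConfirmPayment(orderId);
    }
}
```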
As a starting point, I would recommend going through any of the books on SOA architecture; it will help clear up a lot of concepts.
I tried to be as short as possible to keep this answer meaningful; there are tons of ways of doing the same thing, and this is just one of them.
HTH.
Having one DLL per service sounds like a bad idea. According to Microsoft's guidance, you'd want one large assembly rather than multiple smaller ones, due to the performance implications of loading many small assemblies.
I would split your base or core services into a separate project and keep most (if not all) services in it. Depending on your needs you may have services that only make sense in the context of a web project or a console app and not anywhere else. Those services should not be part of the "core" service layer and should reside in their appropriate projects.
It is better to separate the services from the consumers. In our projects we have two levels of separation: we group all the service interfaces into one Visual Studio project, and all the service implementations into another project.
The consumer of the services needs to reference two DLLs, but it makes the solution more maintainable and scalable. We can have multiple implementations of the services.
For example, the interfaces project can define a contract for WebSearch, and there can be multiple implementations of WebSearch through different search service providers like Google search, Bing search, Yahoo search, etc.
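A minimal sketch of that split, with hypothetical project and type names:

```csharp
using System.Collections.Generic;

// Interfaces project (hypothetical name: Services.Contracts):
// consumers reference this DLL plus whichever implementation they choose.
public interface IWebSearchService
{
    IReadOnlyList<string> Search(string query);
}

// Implementations project (hypothetical name: Services.Implementations):
// one class per provider, all satisfying the same contract.
public class BingSearchService : IWebSearchService
{
    public IReadOnlyList<string> Search(string query)
    {
        // ... call the Bing search API here ...
        return new List<string>();
    }
}

public class GoogleSearchService : IWebSearchService
{
    public IReadOnlyList<string> Search(string query)
    {
        // ... call the Google search API here ...
        return new List<string>();
    }
}
```

Swapping providers then only means binding a different implementation to the same interface.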
I'm trying to write test cases for my web services in a way that lets me roll back any database changes they make. I can try to surround them with a TransactionScope, but how do I specify a context for the transaction? In other words, how does the transaction know which database and server to roll back on? In my case, the SQL server runs locally, as do the web services. Before you tell me to call the web services directly without a client, please understand that the web services have very specific runtime environment settings that would be a royal pain to reproduce for my test cases. Perhaps TransactionScope isn't what I want to use; is there an alternative? Is there a database function I could call to start a transaction? Thanks.
First, you are not doing unit testing. A unit test is about testing a single small unit of code (a function). When you test a function, you create a unit test for each execution path so that you have full coverage of the tested code. But your system under test includes client-to-service communication and service-to-database communication: several tiers of code plus configuration. That is called integration testing.
The problem here is how you designed your service. Does your service flow transactions? Transaction flow allows starting a transaction at your client and passing it to the service (a distributed transaction). It is not the default behavior, and it requires special configuration of the WCF bindings. If you use this approach, you can do the same in your test: start a transaction in the test and roll it back at the end (see the sketch below, after the list of choices). If you didn't design the service to flow transactions, you simply can't use this, because the transaction started in the test will not affect the service. In that case you have several choices:
Create manual compensation. At the end of each test, run custom SQL to move the data back to its initial state. This simulates a rollback. I don't recommend this approach.
Recreate the database at the beginning of each test. This is slow but fully acceptable, because integration tests are usually run only on the build server a few times per day.
Don't test at the WCF service level. The WCF service should be only a thin wrapper on top of business logic or data access logic, so don't test the service level; test the wrapped layer instead. You can probably use transactions there. This approach combines well with the previous one, so that you have a small set of complex integration tests which require database recreation, and a bigger set of tests which can roll back and reuse the same database.
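For the transaction-flow approach, a minimal sketch assuming WCF's standard transaction attributes (the contract, endpoint name, and method are placeholders, and the binding must additionally set transactionFlow="true" in configuration):

```csharp
using System;
using System.ServiceModel;
using System.Transactions;

[ServiceContract]
public interface IPaymentService
{
    [OperationContract]
    [TransactionFlow(TransactionFlowOption.Allowed)]  // client may flow its transaction in
    void ChargePaymentCard(int cardId, decimal amount);
}

public class PaymentService : IPaymentService
{
    // Enlists the service's database work in the flowed transaction.
    [OperationBehavior(TransactionScopeRequired = true)]
    public void ChargePaymentCard(int cardId, decimal amount)
    {
        // ... data access here runs inside the distributed transaction ...
    }
}

public class PaymentServiceTests
{
    public void ChargePaymentCard_ChangesAreRolledBack()
    {
        using (var scope = new TransactionScope())
        {
            // "PaymentServiceEndpoint" is a hypothetical endpoint name from config.
            var factory = new ChannelFactory<IPaymentService>("PaymentServiceEndpoint");
            var client = factory.CreateChannel();
            client.ChargePaymentCard(42, 10.00m);

            // scope.Complete() is deliberately not called, so the
            // distributed transaction rolls back when the scope is disposed.
        }
    }
}
```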
Duplicate
This is a close duplicate of Dealing with a longer running process in WCF. Please consider posting your answer to that one instead of this.
Original Question
I'm implementing the business layer of an application that must run some background processes at scheduled times. The business layer is made up of several WCF services all running under the same web application.
The idea is to define a set of 'tasks' that must run at different times (e.g. every 5 minutes, every day at 23:00, etc.). That wouldn't be hard to implement as a Windows service, but the problem is that the tasks need access to data caches that live in the services, so this 'scheduler' must run under the IIS context in order to access that data.
What I'm doing currently is using a custom ServiceHostFactory in one of the WCF services, which spawns a child thread and returns. The child thread sleeps and wakes up every X minutes to see if there are scheduled tasks, and executes them.
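Roughly, a simplified sketch of that approach (class name and interval are illustrative):

```csharp
using System;
using System.ServiceModel;
using System.ServiceModel.Activation;
using System.Threading;

public class SchedulingServiceHostFactory : ServiceHostFactory
{
    private static Thread _worker;
    private static readonly object _sync = new object();

    protected override ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
    {
        lock (_sync)
        {
            // Spawn the polling thread once, on first activation.
            if (_worker == null)
            {
                _worker = new Thread(() =>
                {
                    while (true)
                    {
                        // ... check for due tasks and execute them ...
                        Thread.Sleep(TimeSpan.FromMinutes(5));
                    }
                }) { IsBackground = true };
                _worker.Start();
            }
        }
        return base.CreateServiceHost(serviceType, baseAddresses);
    }
}
```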
But I'm worried about IIS randomly killing my thread when it recycles the application pool, or after some idle time (e.g. no activity on any of the WCF services, which listen for requests from the presentation layer). The thread must run uninterrupted regardless of activity on the services. Is this really possible?
I have found an article by someone doing the same thing, but his solution seems to be to ping the server regularly from the child thread itself. Hopefully there is a better solution.
I have at some point implemented a Windows service that would load a web page on a regular basis. The purpose was that the site hosted a Workflow Foundation runtime, and we wanted to ensure that the web application was brought back up after IIS recycled the application pool. Perhaps the same approach can be used in this case: have a service (or a Scheduled Task in Windows; even simpler) run every x minutes and load a page that will check for tasks.
Is it possible to run either a Windows service, or to place applications in the Windows Task Scheduler, to execute methods in the WCF service at certain times? Maybe use a BackgroundWorker inside the WCF service. Another option would be for the WCF service to spawn other applications to do the business logic, passing the appropriate data, or pointers to the data in memory (unsafe).