I searched the documentation without finding a clear answer to this question.
I want to create a background worker in ABP Framework (https://docs.abp.io/en/abp/latest/Background-Workers).
But I don't know in which project/layer I should create it. I have the feeling the background worker belongs in the Domain layer. Is that correct? Does it belong in the Application layer instead? Or does it depend on what the background worker does?
What is the best practice?
As you said, I think it depends on what the background worker does. But generally speaking, I think it should be in the domain layer, because it's not a use case triggered by user/actor interaction; it is just a worker service that runs periodically.
We are migrating a monolithic application to a more distributed architecture, and we decided to use AxonFramework.
In Axon, as messages are first-class citizens, you get to model them as POJOs.
Now I wonder: since one event can be dispatched by one service and listened to by any of the others, how should we handle event distribution?
My first impulse is to package them in a separate project as a JAR file, but this goes against a rule for microservices, that they should not share implementations.
Any suggestion is welcome.
Having some form of 'common' module is definitely not uncommon, although I'd personally use that 'common' module for that specific application alone.
I'd generally say you should regard your commands/events/queries as the API of your application. As such, it might be beneficial to share the event structure with other projects, but just not the actual POJO itself. You could, for example, think about using ProtoBuf for this use case, wherein ProtoBuf describes a schema for your events.
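As a sketch of that idea (all names hypothetical), a ProtoBuf schema for a shared event could look like this; other services depend on the schema, not on your POJO:

```proto
// order_events.proto -- hypothetical schema shared with other services
syntax = "proto3";

package com.example.events;

// A coarse-grained event other services may care about; the
// fine-grained internal events never appear in this file.
message OrderCompletedEvent {
  string order_id = 1;
  string customer_id = 2;
  int64 completed_at_epoch_millis = 3;
}
```

Each service then generates its own classes from the schema, so the event structure is shared while the implementation is not.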
Another thing to think about is not exposing your whole 'event API'. Typically you'll have quite a few fine-grained events, things which other (micro)services in your environment are not interested in. There are, however, always a couple of very important events, put differently, 'milestone events', which others definitely are interested in.
These milestone events in some scenarios aren't a direct POJO following from your domain, but rather an accumulation of several events.
It is thus not too uncommon to have a service which accumulates these and publishes another event to notify other services. Accumulating these fine-grained internal events and publishing a milestone event in response to them is typically better suited as the event API within your microservice architecture.
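A minimal, framework-free sketch of such an accumulating component (all names hypothetical; in Axon this would typically be an event handler that dispatches the milestone event on the event bus):

```java
import java.util.HashSet;
import java.util.Set;
import java.util.function.Consumer;

// Accumulates fine-grained internal events and emits a single
// milestone event once every step of the (hypothetical) order
// flow has been observed.
class OrderMilestoneAccumulator {
    private final Set<String> seenSteps = new HashSet<>();
    private final Set<String> requiredSteps = Set.of("PAID", "PACKED", "SHIPPED");
    private final Consumer<String> milestonePublisher;

    OrderMilestoneAccumulator(Consumer<String> milestonePublisher) {
        this.milestonePublisher = milestonePublisher;
    }

    // Called for each fine-grained internal event.
    void on(String internalEvent) {
        seenSteps.add(internalEvent);
        if (seenSteps.containsAll(requiredSteps)) {
            // Publish the coarse-grained, externally visible event.
            milestonePublisher.accept("OrderCompleted");
            seenSteps.clear();
        }
    }
}
```

The milestone event is the only thing that crosses the service boundary; the three internal events stay private to the service.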
So that's a couple of ideas there for you, hope they give you some insights.
I'd like to give a clear cut solution to your question, but such an answer always hides behind 'it depends'.
You are right, the "official" rule is not to share models. So if you have distributed dev-teams, I would stick to it.
However, I tend not to follow it strictly when I have components that are decoupled but developed by the same team, or by teams with high interaction ...
I'm developing a keyboard, so I'm implementing an InputMethodService. I have a requirement to add other features to this keyboard application, but to ship them as a separate application in order to leave the keyboard as a standalone keyboard implementation.
So I need to create a keyboard application and another application with all the other features (including but not limited to: a News Activity, a Messenger, a Lock Screen implementation, and some Widgets).
Those two applications will need to communicate with each other. From my research I found that there are several mechanisms I could use:
A bound service
URI implementation
BroadcastReceivers
My question is: what would be the best implementation for my needs, which are to pass data from one application to another as well as to start activities and other components of one app from the other?
After doing some research on this topic, I found that there are several ways to do this:
Using a bound service that either uses a Messenger object to pass messages between the local process and the remote bound service, or uses AIDL to create an interface that is passed from the remote bound service to the local process so that the two can communicate.
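For the AIDL route, a hypothetical interface definition (names invented for illustration) might look like this; the remote service returns an implementation of it from onBind():

```aidl
// IKeyboardFeatures.aidl -- hypothetical interface exposed by the
// features app; the keyboard app binds to the remote service and
// calls these methods across process boundaries.
package com.example.features;

interface IKeyboardFeatures {
    // Push data from the keyboard app to the features app.
    void sendMessage(String payload);

    // Ask the features app to start one of its own components.
    void openNewsScreen();
}
```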
The second option would be using good old-fashioned BroadcastReceivers. That way it is possible, as always, to fire an Intent from the local process to the remote process and receive some information there.
The choice between the two comes down to how strong you want the connection between the two processes to be and how often they should communicate. If they only need to perform an operation once in a while, BroadcastReceivers are a perfectly good solution. But if you need a more persistent connection, the bound service is the way to go.
One of the things Marcus Zarra recommends in his Core Data book when talking about setting up an app's core data stack is to put calls to addPersistentStoreWithType:configuration:URL:options:error: on a background thread, because it can take an indeterminate amount of time (e.g., to run migrations). Is there a simple way to tell MagicalRecord to do that? It looks like all of its setupCoreDataStack... methods perform everything on the calling (presumably main) thread.
I don't think it makes sense to just move the top-level setup calls onto a background thread, because it wouldn't be safe to start using MR from the main thread until at least the contexts had been created, right? Do I need to implement my own setupCoreDataStackWithAsyncMigration or somesuch thing?
There is the WWDC 2012 example code for setting up iCloud on a background thread (the Shared Core Data sample). You could refactor the CoreDataController to use MagicalRecord (and ignore anything iCloud-related). IIRC the locking mechanism, to stop other threads from accessing the store while the setup is in progress, is already present.
Before you go down that route, measure the time needed to start up on the device. If startup is fast enough for your needs, then you might want to stick with the setup on the main thread.
Migrations can take some time, but migration won't occur on every app launch. Migration time depends on data volume and the complexity of changes between model versions. So again it is a judgment call whether to invest the time to move the migration to a background thread or to keep the user waiting.
I'm implementing a high-traffic client web application that uses a lot of REST APIs as its data access layer against a cloud database. I said client because it consumes REST APIs rather than providing them.
REST APIs are implemented server side as well as client side, and I need to figure out a good solution for caching. The application runs on a web farm, so I'm leaning toward distributed caching like memcached. This caching solution will need to act as a proxy layer between my application and the REST APIs, and support both the client side and the server side.
For example, if I make a call to update a record, I would update it through REST, and I'd like to keep the updated record in the cache so that subsequent calls for that record won't need an extra call to the outside REST services.
I want to minimize REST calls as much as possible and keep the data as accurate as I can, but it doesn't need to be 100% accurate.
What is the best solution for this caching proxy? Is it a standalone application that runs on one of the servers with a local cache, or something built into the current solution using distributed caching? What are your ideas, suggestions, or concerns?
You hit the nail on the head. You need a caching layer that acts as a proxy to your data.
I suggest that you create a layer that abstracts the concept of the cloud a way a bit. Your client shouldn't care where the data comes from. I would create a repository layer that communicates with the cloud and all other data. Then you can put a service layer on top of that that your client would actually call into. Inside this service layer is where you would implement things like your caching layer.
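A minimal sketch of that layering (all type names hypothetical; a plain in-memory map stands in for a distributed cache like memcached):

```java
import java.util.HashMap;
import java.util.Map;

// Repository layer: the only place that knows about the cloud REST API.
interface RecordRepository {
    String fetch(String id);            // stand-in for a GET against the REST API
    void save(String id, String value); // stand-in for a PUT/POST
}

// Service layer the client calls into; caching lives here, so the
// client never knows (or cares) where the data actually came from.
class CachingRecordService {
    private final RecordRepository repository;
    private final Map<String, String> cache = new HashMap<>();

    CachingRecordService(RecordRepository repository) {
        this.repository = repository;
    }

    String get(String id) {
        // Serve from the cache when possible; fall back to the REST layer.
        return cache.computeIfAbsent(id, repository::fetch);
    }

    void update(String id, String value) {
        repository.save(id, value);
        cache.put(id, value); // keep the cache warm after writes
    }
}
```

Swapping the in-memory map for memcached (or Velocity) later only touches the service layer, which is exactly the flexibility the answer argues for.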
I used to always suggest using MemCached, or MemCached Win32 depending on your environment. MemCached Win32 works really well if you are in a Windows world! Look to the Enyim client for MemCached Win32... it is the least problematic of all the ports.
If you are open to it, though, and you are in a .NET world, then you might try Velocity. MS finally got the clue that there was a hole in their caching framework, in that they needed to support the farm concept. Velocity, last time I checked, is not out of beta yet... but it's still worth a look.
I generally suggest using the repository and service layer concepts from day one... even when you don't need them yet. The flexibility they provide is worth having, as you never know which direction your application will be pulled in. Needing to scale is usually the best reason for this flexibility. But usually when you need to scale, you need to scale now, and refactoring in a repository layer and service layer down the road, while not impossible, is usually semi-complex.
We have two client apps (a web app and an agent app) accessing methods on the same service, but with slightly different requirements. My team wants to control behaviour on the service side by passing an ApplicationType parameter to every method - which is essentially an enum containing the name of the calling client application - which is then used as a key for a database lookup to configure the service with client-specific options.
Something about this makes me uneasy as I don't think the service should really have to be aware of which client is calling it. I'm being told that it's easier to do it this way than pass a load of options dynamically through the method call.
Is there anything wrong with the client application telling the service who they are? Or is there really no difference between passing a config key versus a set of parameterized options?
One immediate problem I can see is that if we ever opened the service to another client run by a third party, we'd have to maintain their configuration settings locally for them. At the moment we own both client apps so it's not so much of a problem.
How would you do it?
In a layered solution, you should always consider your layers as onion-like layers, and dependencies should always go inwards, never outwards.
So your GUI/App layer should depend on the businesslogic layer, the businesslogic layer should depend on the data access layer, and similar.
Unless you categorize the clients (web, win, wpf, cli), or generalize it with client profiles (which client applications can configure), I would never pass in the name of the calling application, as this would make the business logic layer aware of and dependent upon the outside layer.
What kind of differences are we talking about that would depend on the type of application? If you elaborate a bit on the differences here, perhaps someone can come up with some helpful advice on other ways to solve this.
But I would definitely look for other ways before going down your described path.
Can't you create two different services, one for each application? The two services would share a lot of code, or call a single internal service with different parameterization depending on which outer service was called.
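Sketched in Java (names hypothetical), the two outer services stay thin and delegate to one internal implementation; neither client ever passes a "client type":

```java
// Internal service holds the shared logic, parameterized explicitly.
class InternalOrderService {
    String findOrders(boolean includeInternalNotes) {
        return includeInternalNotes ? "orders+notes" : "orders";
    }
}

// One thin facade per client application.
class WebOrderService {
    private final InternalOrderService internal = new InternalOrderService();
    String findOrders() { return internal.findOrders(false); }
}

class AgentOrderService {
    private final InternalOrderService internal = new InternalOrderService();
    String findOrders() { return internal.findOrders(true); }
}
```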
From a design perspective, this is no different than having users with different profiles. From a security perspective, I hope your applications are doing something to identify themselves, lest users of one application figure out a way to invoke the other application's logic as a hack. (Imagine an HR application being used by the mafia and a bank at the same time; one customer would be interested in hacking the other customer's application on a shared application host.)
In .NET the design doesn't feel this way because the credentials live on the thread (i.e. when you set the IPrincipal, that info rides on the thread -- it is communicated along with each method call, but not as a parameter).
Maybe what you are looking for in terms of a more elegant design is an ApplicationIdentity attribute. You'd have to write a custom one, I don't know of one in the framework right now.
This is a hard topic to discuss without a solid example.
You are right for feeling that way. Sending in the client type to change behaviour is not correct. It's not a bad idea for logging... but that's about it.
Here is what I would do:
Review each method to see what needs to be different and why.
Create different methods for different usages. The method name should be self explanatory. If you ever need to break compatibility, you have more control (assuming you're not using a versioning system which would be overkill for an in-house-only service).
In some cases request parameters (flags/enum values) are more appropriate.
In some cases, knowing the operating environment is more appropriate (especially for data security). The operating environment is almost always sent during a login request. Something like "attended"/"secure" (agent client) vs. "unattended"/"not secure" (web client). Now you must exchange a session key (an HTTP cookie or an application-level session id). Sessions obviously don't work if you need to be 100% stateless -- especially if you want to scale out without session replication... if you have that requirement, send a structure in every request.
Think of requests like functions in your code. You wouldn't put a magic parameter that changes the behaviour of the function. You would create multiple functions that each behave differently. Whoever is using the function makes the decision which one to call.
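The same point in code (method names hypothetical): instead of one function whose behaviour pivots on a caller-type flag, expose one self-explanatory function per behaviour and let the caller choose:

```java
class ReportService {
    // Avoid: one method whose behaviour pivots on who is calling, e.g.
    //   String buildReport(ClientType caller) { ... }

    // Prefer: explicit methods; whoever uses the service decides which to call.
    String buildSummaryReport()  { return "summary"; }
    String buildDetailedReport() { return "summary+details"; }
}
```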
So why is client type so wrong? Client type has no specific meaning on its own. It has many meanings, and they may change over time. It's simply informational, which is why it's a handy thing to log. An operating environment does have a specific meaning.
Here is a scenario to consider: what if a new client type is developed that is slightly different in a way that would break compatibility with the original request? Now you have two requests: two clients use Request A and one client uses Request B. If you pass in a client type to each request, the server is expected to work for every possible client type. Much harder to test and maintain!