Where should I put logic for querying extra data in a CQRS command flow? (ASP.NET)

I'm trying to implement a simple DDD/CQRS architecture, without event sourcing for now.
Currently I need to write some code for adding a notification to a document entity (a document can have multiple notifications).
I've already created a command NotificationAddCommand, ICommandService and IRepository.
Before inserting a new notification through IRepository, I have to query the current user_id from the DB using the NotificationAddCommand.User_name property.
I'm not sure how to do it right, because I could either:
1) Use an IQuery from the read flow.
2) Pass user_name to the domain entity and resolve user_id in the repository.
Code:
public class DocumentsCommandService : ICommandService<NotificationAddCommand>
{
    private readonly IRepository<Notification, long> _notificationsRepository;

    public DocumentsCommandService(
        IRepository<Notification, long> notifsRepo)
    {
        _notificationsRepository = notifsRepo;
    }

    public void Handle(NotificationAddCommand command)
    {
        // command.user_id = Resolve(command.user_name) ??
        // command.source_secret_id = Resolve(command.source_id, command.source_type) ??
        foreach (var receiverId in command.Receivers)
        {
            var notificationEntity = _notificationsRepository.Get(0);
            notificationEntity.TargetId = receiverId;
            notificationEntity.Body = command.Text;
            _notificationsRepository.Add(notificationEntity);
        }
    }
}
What if I need more complex logic before inserting? Is it ok to use IQuery or should I create additional services?

Reusing your IQuery somewhat defeats the purpose of CQRS. Your read side is supposed to be optimized for pulling data for display/query purposes - meaning it can be denormalized, distributed, etc. in any way you deem necessary, without being restricted by (or having implications for) the command side. A key example: the read side might not be immediately consistent, while your command side obviously needs to be for integrity/validity purposes.
With that in mind, you should look to implement a contract for your write side that will resolve the necessary information for you. Driving from the consumer, that might look like this:
public DocumentsCommandService(
    IRepository<Notification, long> notifsRepo,
    IUserIdResolver userIdResolver)

public interface IUserIdResolver
{
    string ByName(string username);
}
With IUserIdResolver implemented as appropriate.
Of course, if both this and the query side use the same low-level data access implementation (e.g. an immediately-consistent repository), that's fine. What's important is that your reads and writes are sufficiently separated that, if you ever need to change where your read side gets its data (for example, to facilitate a slow offline process), you can swap that out without having to untangle reads from writes.
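As a language-agnostic sketch of this contract (Python here, with invented names - not an ASP.NET API), the command service depends only on the narrow resolver interface, so the backing implementation can be swapped freely:

```python
from typing import Protocol


class UserIdResolver(Protocol):
    """Write-side contract: resolve a user id from a user name."""
    def by_name(self, username: str) -> int: ...


class DbUserIdResolver:
    """An immediately-consistent lookup (a dict stands in for the write DB)."""
    def __init__(self, rows):
        self._rows = rows

    def by_name(self, username: str) -> int:
        return self._rows[username]


class CachingUserIdResolver:
    """A different implementation; the command service never notices the swap."""
    def __init__(self, inner):
        self._inner = inner
        self._cache = {}

    def by_name(self, username: str) -> int:
        if username not in self._cache:
            self._cache[username] = self._inner.by_name(username)
        return self._cache[username]


class NotificationCommandService:
    """Command side: depends only on the narrow contract, not on read-side plumbing."""
    def __init__(self, resolver: UserIdResolver):
        self._resolver = resolver

    def handle(self, user_name: str) -> int:
        return self._resolver.by_name(user_name)
```

Either resolver can be injected without touching the command service, which is exactly the separation the answer argues for.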
Ultimately, the most important thing is to know why you are making the architectural decisions you're making in your scenario - once you do, you will find these sorts of decisions much easier to make one way or the other.

In a project I'm working on I have similar issues. I see three options to solve this problem:
1) What I did was make a UserCommandRepository that has a query option, and inject that repository into the service. Since the few queries I needed were so simplistic (just returning single values), it seemed like a fine trade-off in my case.
2) Another way of handling it is to require the caller to raise the command with the user_id already set, letting them do the querying.
3) A third option is to ask yourself why you need the user_id at all. If it's to set up relations for querying the data, you could also handle that when querying the data (or when propagating your write DB to your read DB).


Does returning an object from an insert method violate the CQRS pattern?

I have implemented MediatR in my ASP.NET Core Web API application.
Our controllers simply send a command or query to MediatR and return the result.
[HttpPost]
public async Task<IActionResult> CreateQuestion(CreateQuestionCommand command)
{
    await Mediator.Send(command);
    return Ok();
}
Because the CQRS pattern says commands should not return any value, our MediatR commands don't return anything.
Everything was fine until we decided to write some BDD tests.
In our BDD tests there is a simple scenario like this:
Scenario: [Try to add a new Question]
    Given [I'm an authorized administrator]
    When [I create a new Question with Title '<Title>' and '<Description>' and '<IsActive>' and '<IndicatorId>']
    Then [A question with Title '<Title>' and '<Description>' and '<IsActive>' and '<IndicatorId>' should be persisted in database]
Examples:
    | Title                | Description                | IsActive | IndicatorId                          |
    | This is a test title | this is a test description | true     | 3cb23a10-107a-4834-8c1a-3fd893217861 |
We set the Id property of the Question object in its constructor, which means we don't know what Id was assigned to the newly created Question, and therefore we can't read it back from the database in the test environment.
My question is: how do I test this in BDD?
This is my test step implementation:
[When(@"\[I create a new Question with Title '([^']*)' and '([^']*)' and '([^']*)' and '([^']*)'>]")]
public async Task WhenICreateANewQuestionWithTitleAndAndAnd(string title, string description, bool isActive, Guid indicatorId)
{
    var command = new CreateQuestionCommand
    {
        Title = title,
        Description = description,
        IndicatorId = indicatorId
    };
    await questionController.CreateQuestion(command);
}

[Then(@"\[A question with Title '([^']*)' and '([^']*)' and '([^']*)' and '([^']*)'> should be persisted in database]")]
public void ThenAQuestionWithTitleAndAndAndShouldBePersistedInDatabase(string title, string description, string isActive, string indicatorId)
{
    // ? how to retrieve the created data - we don't have the Id here
}
How can I retrieve the added Question?
Should I change my command handlers to return the object after inserting it into the database?
And if I do so, wouldn't that be a CQRS violation?
Thank you for your time.
There's a couple of ways to go about this, but it depends on how you actually expect the system to behave. If you're doing BDD you should focus on the kind of behaviour that would be observable by real users.
If you're implementing a system that allows users to create (and save, I infer) questions in a (Q&A?) database, then should they not have the ability to view or perhaps edit the question afterwards?
As the OP is currently phrased, the trouble writing the test seems to imply a lacking capability of the system in general. If the test can't identify the entity that was created, then how can a user?
The CreateQuestion method only returns Ok(), which means that there's no data in the response. Thus, as currently implemented, there's no way for the sender of the POST request to subsequently navigate to or retrieve the new resource.
How are users supposed to interact with the service? This should give you a hint about how to write the test.
A common REST pattern is to return the address of the new resource in the response's Location header. If the CreateQuestion method were to do that, the test would also be able to investigate that value and perhaps act on it.
Another option is to return the entity ID in the response body. Again, if that's the way things are supposed to work, the test could verify that.
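A hedged sketch of these two options (Python, with plain dicts standing in for the controller and database; all names invented): the create handler returns 201 plus a Location header, so the caller - and the test - can navigate to the new resource:

```python
import uuid


class QuestionStore:
    """In-memory stand-in for the database."""
    def __init__(self):
        self.rows = {}


def create_question(store: QuestionStore, command: dict) -> dict:
    """Handle the command, then answer 201 Created with a Location header
    pointing at the new resource (instead of a bare 200 Ok)."""
    question_id = str(uuid.uuid4())
    store.rows[question_id] = {"id": question_id, **command}
    return {"status": 201, "headers": {"Location": f"/questions/{question_id}"}}


def get_question(store: QuestionStore, question_id: str) -> dict:
    """Read side: fetch the resource the Location header points at."""
    return store.rows[question_id]
```

The command handler itself still returns nothing to the domain caller; only the HTTP layer exposes the identifier, which is the usual way to square this with CQRS.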
Other systems are more asynchronous. Perhaps you only put the CreateQuestionCommand on a queue, to be handled later. In such a case I'd write a test that verifies that the command was added to the queue. You could also write a longer-running test that waits for an asynchronous message handler to process the command, but if you want to test something like that, you also need to deal with timeouts.
In a comment, you write:
I think he might add the question and then read all questions from database and see if his question is amongst them. He'd probably test without knowing Id.
Do you expect real users of the system to spelunk around in your database? Unless you do, the database is not part of a system's observable behaviour - it's an implementation detail. Thus, behaviour-driven design shouldn't concern itself with the contents of databases, but how a system behaves.
In short, find out how to observe the behaviour you want the system to have, and then test that.

How to get all aggregates with Axon framework?

I am starting out with the Axon framework and hit a bit of a roadblock.
While I can load individual aggregates using their ID, I can’t figure out how to get a list of all aggregates, or a list of all aggregate IDs.
The EventSourcingRepository class only has load() methods that return one aggregate.
Is there a way to get all aggregates (or aggregate IDs), or am I supposed to keep a list of all aggregate IDs outside of Axon?
To keep things simple I am only using an InMemoryEventStorageEngine for now.
I am using Axon 3.0.7.
First off, I was wondering why you would want to retrieve a complete list of all the aggregates from the Repository.
The Repository interface is set such that you can load an Aggregate to handle commands or to create a new Aggregate.
Asking the question you have, I'd almost guess you're using it for querying purposes rather than command handling.
This however isn't the intended use for the EventSourcingRepository.
One reason you'd want this that I can think of is that you want to implement an API call to publish a command to all Aggregates of a specific type in your application.
Taking that scenario then yes, you need to store the aggregateId references yourself.
But concluding with my earlier question: why do you want to retrieve a list of aggregates through the Repository interface?
Answer Update
Regarding your comment, I've added the following to my answer:
Axon helps you to set up your application with event sourcing in mind, but also with CQRS (Command Query Responsibility Segregation).
That thus means that the command and query side of your application are pulled apart.
The Aggregate Repository is the command side of your application, where you request to perform actions.
It thus does not provide a list-of-aggregates, as a command is an expression of intent on an aggregate. Hence it only requires the Repository user to retrieve one aggregate or create one.
The example you've got that you need of the list of Aggregates is the query side of your application.
The query side (your views/entities) is typically updated based on events (sourced through events).
For any query requirement you have in your application, you'd typically introduce a separate view tailored to your needs.
In your example, that means you'd introduce an Event Handling Component, listening to your Aggregate Events, which updates a Repository with query models of your aggregate.
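As an illustrative sketch of that idea (Python, invented names - not Axon's API), such an event-handling component just folds aggregate identifiers out of the event stream into a query model:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class DomainEvent:
    """Minimal stand-in for an aggregate event."""
    aggregate_id: str
    event_type: str


class AggregateIdProjection:
    """Query-side view, updated from the event stream, so the command-side
    Repository never has to answer 'list all aggregates'."""
    def __init__(self):
        self._ids = set()

    def on(self, event: DomainEvent) -> None:
        # Every event carries its aggregate's identifier; recording them
        # as they arrive keeps the view current.
        self._ids.add(event.aggregate_id)

    def all_ids(self) -> list:
        return sorted(self._ids)
```

In Axon itself this role is played by an event handler feeding whatever read store you choose; the point is that the "all aggregates" question is answered by a projection, not by the Repository.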
The EventStore passed into EventSourcingRepository implements StreamableMessageSource<M extends Message<?>> which is a means of reaching in for the aggregates.
Whilst doing it the framework way with an event handling component will probably scale better (depending on how it's used and the context), I'm pretty sure the event handling components are driven by StreamableMessageSource<M extends Message<?>> anyway. So if we wanted to skip the framework and just reach in, we could do it like this:
List<String> aggregates(StreamableMessageSource<Message<?>> eventStore) {
    return immediatelyAvailableStream(eventStore.openStream(
            eventStore.createTailToken() /* all events in the event store */))
        .filter(e -> e instanceof DomainEventMessage)
        .map(e -> (DomainEventMessage) e)
        .map(DomainEventMessage::getAggregateIdentifier)
        .distinct()
        .collect(Collectors.toList());
}

/*
 Note that the stream returned by BlockingStream.asStream() will block / won't
 terminate as it waits for future elements, so we only drain what is
 immediately available.
*/
static <M> Stream<M> immediatelyAvailableStream(final BlockingStream<M> messageStream) {
    Iterator<M> iterator = new Iterator<M>() {
        @Override
        public boolean hasNext() {
            return messageStream.hasNextAvailable();
        }

        @Override
        public M next() {
            try {
                return messageStream.nextAvailable();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new IllegalStateException("Didn't expect to be interrupted");
            }
        }
    };
    Spliterator<M> spliterator = Spliterators.spliteratorUnknownSize(iterator, Spliterator.ORDERED);
    return StreamSupport.stream(spliterator, false).onClose(messageStream::close);
}

Is there a tangible benefit to using wrapper requests over plain messages in gRPC service calls?

Let's say we have a message containing the ID of some record in the database:
message Record {
uint64 id = 1;
}
We also have an RPC call that returns all of the rows from table DATA in which said record is mentioned:
rpc GetDataForRecord(Record) returns (Data) {}
If we, for example, wrap Record in
message RqData {
    Record id = 1;
}
then once we need to return only "active" data, for example, we won't need to add a
GetActiveDataForRecord
method; instead we could extend RqData as:
message RqData {
    Record id = 1;
    bool use_active = 2;
}
and use
rpc GetDataForRecord(RqData) returns (Data) {}
Clients that know about this new functionality will be able to use it, while older clients will keep passing only the Record part inside the RqData wrapper, without specifying active or not.
Here's the question: is there really a reason to wrap everything in a separate request type like this, or am I overthinking things - will passing plain structures do?
I'm trying to think about the future, but I'm not sure whether I'm overcomplicating things.
In general, making a method-specific request and response is a Good Thing™ and is encouraged. For a Foo method you'd have FooRequest and FooResponse. Having specialized messages for the method allows you to add new "arguments," as you mentioned.
But for some cases it turns out fine to break the pattern and avoid the wrapping; it's a judgement call. Although you're asking from a different perspective, you may be interested in this answer about related methods.
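To illustrate the forward-compatibility argument outside of protobuf, here is a sketch in Python (dataclasses standing in for the generated messages; field and function names are assumptions): a field added to the wrapper defaults to its zero value, so old call sites behave exactly as before.

```python
from dataclasses import dataclass


@dataclass
class Record:
    id: int


@dataclass
class RqData:
    """Method-specific request wrapper. New fields get proto-style zero
    defaults, so callers built against the old shape keep working."""
    record: Record
    use_active: bool = False  # added later; old clients never set it


def get_data_for_record(req: RqData, rows: list) -> list:
    """Stand-in for the GetDataForRecord handler over an in-memory table."""
    matched = [r for r in rows if r["record_id"] == req.record.id]
    if req.use_active:
        matched = [r for r in matched if r["active"]]
    return matched
```

The same method signature serves both generations of client, which is the main payoff of the FooRequest/FooResponse convention.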

Dealing with DB handles and initialization in functional programming

I have several functions that deal with database interactions. (like readModelById, updateModel, findModels, etc) that I try to use in a functional style.
In OOP, I'd create a class that takes DB-connection-parameters in the constructor, creates the database-connection and save the DB-handle in the instance. The functions then would just use the DB-handle from "this".
What's the best way in FP to deal with this? I don't want to hand the DB handle around the entire application. I thought about partially applying the functions to "bake in" the handle, but doing that one by one and handing the results back creates ugly boilerplate.
What's the best practice/design pattern for things like this in FP?
There is a parallel to this in OOP that suggests the right approach is to take the database resource as a parameter. Consider a DB implementation in OOP following SOLID principles. Due to the Interface Segregation Principle, you would end up with an interface per DB method and at least one implementation class per interface:
// C#
public interface IGetRegistrations
{
    public Task<Registration[]> GetRegistrations(DateTime day);
}

public class GetRegistrationsImpl : IGetRegistrations
{
    private readonly DbResource _db;

    public GetRegistrationsImpl(DbResource db)
    {
        _db = db;
    }

    public Task<Registration[]> GetRegistrations(DateTime day)
    {
        ...
    }
}
Then to execute your use case, you pass in only the dependencies you need instead of the whole set of DB operations. (Assume that ISaveRegistration exists and is defined like above).
// C#
public async Task Register(
    IGetRegistrations a,
    ISaveRegistration b,
    RegisterRequest requested)
{
    var registrations = await a.GetRegistrations(requested.Date);
    // examine existing registrations and determine request is valid
    // throw an exception if not?
    ...
    return await b.SaveRegistration( ... );
}
Somewhere above where this code is called, you have to new up the implementations of these interfaces and provide them with DbResource.
var a = new GetRegistrationsImpl(db);
var b = new SaveRegistrationImpl(db);
...
return await Register(a, b, request);
Note: You could use a DI framework here to avoid some of the boilerplate. But I find that to be robbing Peter to pay Paul. You pay as much in learning a DI framework and making it behave as you do in wiring the dependencies yourself. And it is one more piece of tech new team members have to learn.
In FP, you can do the same thing by simply defining a function which takes the DB resource as a parameter. You can pass functions around directly instead of having to wrap them in classes implementing interfaces.
// F#
let getRegistrations (db: DbResource) (day: DateTime) =
    ...

let saveRegistration (db: DbResource) ... =
    ...
The use case function:
// F#
let register fGet fSave request =
    async {
        let! registrations = fGet request.Date
        // call your business logic here
        ...
        do! fSave ...
    }
Then to call it you might do something like this:
register (getRegistrations db) (saveRegistration db) request
The partial application of db here is analogous to constructor injection. Your "losses" from passing it to multiple functions is minimal compared to the savings of not having to define interface + implementation for each DB operation.
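The F# snippet above has a direct analogue in Python via functools.partial; this is an illustrative sketch with a plain dict standing in for the DB handle (all names invented):

```python
from functools import partial


def get_registrations(db: dict, day: str) -> list:
    """Read effect: stubbed as a dict lookup."""
    return db.get(day, [])


def save_registration(db: dict, day: str, name: str) -> None:
    """Write effect: stubbed as a dict append."""
    db.setdefault(day, []).append(name)


def register(f_get, f_save, day: str, name: str) -> bool:
    """Use case: depends only on the two functions it was given,
    never on the db handle itself."""
    if name in f_get(day):
        return False  # already registered
    f_save(day, name)
    return True


# "Constructor injection" via partial application: bake the handle in once.
db = {}
f_get = partial(get_registrations, db)
f_save = partial(save_registration, db)
```

Because register only sees functions, swapping the stubs for real DB calls (or for test doubles) requires no change to the use case.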
Despite being in a functional-first language, the above is in principle the same as the OO/SOLID way - just fewer lines of code. To go a step further into the functional realm, you have to work on eliminating side effects in your business logic. Side effects can include: current time, random numbers, throwing exceptions, database operations, HTTP API calls, etc.
Since F# does not require you to declare side effects, I designate a border area of code where side effects should stop being used. For me, the use case level (register function above) is the last place for side effects. Any business logic lower than that, I work on pushing side effects up to the use case. It is a learning process to do that, so do not be discouraged if it seems impossible at first. Just do what you have to for now and learn as you go.
I have a post that attempts to set the right expectations on the benefits of FP and how to get them.
I'm going to add a second answer here taking an entirely different approach. I wrote about it here. This is the same approach used by MVU to isolate decisions from side effects, so it is applicable to UI (using Elmish) and backend.
This is worthwhile if you need to interleave important business logic with side effects. But not if you just need to execute a series of side effects. In that case just use a block of imperative statements, in a task (F# 6 or TaskBuilder) or async block if you need IO.
The pattern
Here are the basic parts.
Types
Model - The state of the workflow. Used to "remember" where we are in the workflow so it can be resumed after side effects.
Effect - Declarative representation of the side effects you want to perform and their required data.
Msg - Represents events that have happened. Primarily, they are the results of side effects. They will resume the workflow.
Functions
update - Makes all the decisions. It takes in its previous state (Model) and a Msg and returns an updated state and new Effects. This is a pure function which should have no side effects.
perform - Turns a declared Effect into a real side effect. For example, saving to a database. Returns a Msg with the result of the side effect.
init - Constructs an initial Model and starting Msg. Using this, a caller gets the data it needs to start the workflow without having to understand the internal details of update.
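A minimal sketch of the update half of this pattern (Python, toy rate-limited emailer; names invented here - the real implementation is the linked Ump gist): update is a pure function from (model, msg) to (model', effects), which is exactly what makes it testable by equality.

```python
from dataclasses import dataclass


# Msg: events that have happened, driving the workflow forward.
@dataclass(frozen=True)
class DueItems:
    emails: list

@dataclass(frozen=True)
class Sent:
    email: str


# Effect: a declarative description of a side effect - just data.
@dataclass(frozen=True)
class SendEmail:
    email: str


def update(model: list, msg):
    """Pure decision function. Model is the list of pending emails;
    we send one at a time (the 'rate limit' of this toy)."""
    if isinstance(msg, DueItems):
        pending = list(msg.emails)
        effects = [SendEmail(pending[0])] if pending else []
        return pending, effects
    if isinstance(msg, Sent):
        pending = [e for e in model if e != msg.email]
        effects = [SendEmail(pending[0])] if pending else []
        return pending, effects
    return model, []
```

A perform function would then turn each SendEmail into a real SMTP call and feed the resulting Sent msg back into update; tests never need it.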
I jotted down an example for a rate-limited emailer. It includes the implementation I use on the backend to package and run this pattern, called Ump.
The logic can be tested without any instrumentation (no mocks/stubs/fakes/etc). Declare the side effects you expect, run the update function, then check that the output matches with simple equality. From the linked gist:
// example test
let expected = [SendEmail email1; ScheduleSend next]
let _, actual = Ump.test ump initArg [DueItems ([email1; email2], now)]
Assert.IsTrue(expected = actual)
The integrations can be tested by exercising perform.
This pattern takes some getting used to. It reminds me a bit of an Erlang actor or a state machine. But it is helpful when you really need your business logic to be correct and well tested. It also happens to be a proper functional pattern.

Why would I unit test the business layer in MVP?

I created the following sample method in the business logic layer. My database doesn't allow nulls for the name and parent columns:
public void Insert(string catName, long catParent)
{
    EntityContext con = new EntityContext();
    Category cat = new Category();
    cat.Name = catName;
    cat.Parent = catParent;
    con.Category.AddObject(cat);
    con.SaveChanges();
}
So I unit test this, and the tests for an empty name and an empty parent fail. To get around that issue I have to refactor the Insert method as follows:
public void Insert(string catName, string catParent)
{
    // added to pass the test
    if (string.IsNullOrEmpty(catName)) throw new InvalidOperationException("wrong action. name is empty.");
    long parent;
    if (long.TryParse(catParent, out parent) == false) throw new InvalidOperationException("wrong action. parent didn't parse.");

    // real business logic
    EntityContext con = new EntityContext();
    Category cat = new Category();
    cat.Name = catName;
    cat.Parent = parent;
    con.Category.AddObject(cat);
    con.SaveChanges();
}
My entire business layer consists of simple calls to the database. So now I'm validating the data again! I had already planned to do my validation in the UI and test that kind of thing in UI unit tests. What should I test in my business logic method other than validation-related tasks? And if there is nothing to unit test, why does everybody say "unit test all the layers" and similar things, which I found a lot of online?
The technique in testing is to break your program down into smaller parts (smaller components, or even classes) and test those small parts. As you assemble those parts together, you write less comprehensive tests - the smaller parts are already proven to work - until you have a functional, tested program, which you then give to users for "user tests".
It's preferable to test smaller parts because:
It's simpler to write the tests. You need less data, you only set up one object, and you have to inject fewer dependencies.
It's easier to figure out what to test. You know the failure conditions from a simple reading of the code (or, better yet, from the technical specification).
Now, how can you guarantee that your business layer, simple as it is, is correctly implemented? Even a simple database insert can fail if badly written. Besides, how can you protect yourself from changes? Right now the code works, but what will happen in the future if the database is changed or someone updates the business logic?
However, and this is important, you actually don't need to test everything. Use your intuition (also called experience) to understand what needs testing and what doesn't. If your method is simple enough, just make sure the client code is correctly tested.
Finally, you've said that all your validation will occur in the UI. The business layer should be able to validate the data in order to increase reuse in your application. If you don't do that, then whoever changes the code in the future might create a new UI and forget to add the required validations.
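Pushing the validation into the business layer might look like this minimal sketch (Python, with an in-memory list standing in for the database; names invented), which is trivially unit-testable regardless of which UI sits on top:

```python
class ValidationError(Exception):
    """Raised by the business layer when input data is invalid."""


def insert_category(db: list, name: str, parent: int) -> None:
    """Business-layer insert: validation lives here, so every UI
    (web, desktop, API) reuses the same rules."""
    if not name:
        raise ValidationError("name is empty")
    if parent is None or parent < 0:
        raise ValidationError("parent is invalid")
    db.append({"name": name, "parent": parent})
```

A unit test then only needs to assert that valid input is stored and invalid input raises, with no UI involved at all.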
