Structurally there is one project with two organizations. Each organization "resides" in its own cartridges.
There are two gross calculations created, each with its own rule set, and both are registered in the Component Framework.
With this configuration the second defined calculation overrides the first one.
How can this be solved architecturally, so that the basket calculation is separated by organization?
Or will I need one gross calculation with one rule set, where the rules in that set analyze the site/app, and move these calculation classes to a cartridge common to both organizations?
With the described preconditions you can go in the direction #johannes-metzner pointed you to.
The basket calculation resolves the RuleSet implementation by its name, which is resolved by a call to a pipeline extension point.
So you could try to provide your own implementation for the pipeline extension point ProcessBasketCalculation-GetRuleSet with a higher priority than the default implementation. The implementation has to return the RuleSetName specific to your organization. The calculation should then resolve the RuleSet behind that name and use it for the calculation.
You can also provide different implementations per app. So for app X in org1 you can bind RuleSet A, and for app Y in org2 you can bind RuleSet B.
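As a rough sketch of the decision logic such an implementation carries out (this is plain illustrative Java, not actual Intershop API; the class, method and rule set names are hypothetical), the extension point essentially just maps the current organization/app to a rule set name:

```java
// Hypothetical sketch only: maps the current organization/app to the rule set
// name that ProcessBasketCalculation-GetRuleSet should return.
public class OrganizationRuleSetResolver {

    public String resolveRuleSetName(String organizationName) {
        switch (organizationName) {
            case "Org1":
                return "Org1GrossCalculationRuleSet";    // hypothetical name
            case "Org2":
                return "Org2GrossCalculationRuleSet";    // hypothetical name
            default:
                return "DefaultGrossCalculationRuleSet"; // fall back to the default
        }
    }
}
```

The actual wiring, i.e. registering your implementation with a higher priority than the default one, then happens in your cartridge's Component Framework configuration.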
How would one validate a unique constraint using DDD? Let's say that an Entity has a property name that must be unique across the system, and there is a specific EntityRepository method nameExists(name): bool... This is what I found people suggest doing, because the repository is the abstraction of the collection of all the Entities and should be able to perform this check.
So before creating/adding the new Entity, the command / domain service could check for the existence of the newName against the repository, but I think that this will not always work because of concurrency.
In a concurrent scenario where two transactions are started simultaneously, the EntityRepository's nameExists method might return false for both transactions, and as a result two entries with the same name will be incorrectly inserted.
I am sure that I am missing something basic, but the answers I found all point to the repository's exists method - to be honest, others say that a UNIQUE constraint should be put on the DB to catch the concurrency case, but what if one uses Event Sourcing or a persistence layer that does not have unique constraints?
Follow-up question:
What if the uniqueness constraint is to be applied in different levels of a hierarchy?
A Container's name must be unique in the system and then Child names must be unique inside a Container.
Let's say that a transactional DB takes care of the uniqueness at the lowest possible level; what about the domain?
Should I still express the uniqueness logic at the domain level, e.g. with a Domain Service for the system-level uniqueness, and by embedding Child entities inside the Container entity with a business rule (therefore making Container the aggregate root)?
Or should I not bother with "replicating" the uniqueness in the domain and (given there are no other rules to apply between the two) split Container and Child? Will the domain lack expressiveness then?
I am sure that I am missing something basic
Not something basic.
The term we normally use for enforcing a constraint, like uniqueness, across a set of entities is set validation. Greg Young calls your attention to a specific question:
What is the business impact of having a failure?
Most set constraints fall into one of two categories:
constraints that need to be true when the system reaches steady state, but may not hold while work is in progress. In business processes, these are often handled by detecting conflicts in the stored data, and then invoking various mitigation processes to resolve the conflict.
constraints that need to be true always.
The first category includes things like double booking a seat on an airplane; it's not necessarily a problem unless both people show up, and even then you can handle it by bumping someone to another seat, or another flight.
In these cases, you make a best effort - you look at a recent copy of the set, make sure there are no conflicts there, then hope for the best (accepting that some percentage of the time, you'll have missed a change).
See Memories, Guesses and Apologies (Pat Helland, 2007).
The second category is the hard one; to ensure the invariant holds, you have to lock the entire set so that races don't allow two different writers to insert conflicting information.
Relational databases tend to be really good at set validation - putting the entire set into a single database is going to be the right answer (note the assumption that the set is small enough to fit into a single database -- trying to lock two databases at the same time is hard).
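A minimal sketch of that idea in Java/JPA (the entity, table and service names are made up for illustration): keep the repository-style check for fast feedback, but let a database UNIQUE constraint be the actual guarantee, and translate the constraint violation back into a domain error for the writer that loses the race.

```java
import javax.persistence.*;

// Hypothetical entity; the UNIQUE constraint on "name" is what actually
// enforces set-wide uniqueness, even with concurrent writers.
@Entity
@Table(name = "entities",
       uniqueConstraints = @UniqueConstraint(columnNames = "name"))
class MyEntity {
    @Id
    private String id;

    @Column(nullable = false)
    private String name;

    protected MyEntity() {}            // required by JPA

    MyEntity(String id, String name) {
        this.id = id;
        this.name = name;
    }
}

// Application service sketch: nameExists-style check for early feedback,
// commit-time constraint violation as the real safety net.
class RegisterEntityService {
    private final EntityManager em;

    RegisterEntityService(EntityManager em) { this.em = em; }

    void register(String id, String name) {
        Long count = em.createQuery(
                "select count(e) from MyEntity e where e.name = :name", Long.class)
            .setParameter("name", name)
            .getSingleResult();
        if (count > 0) {
            throw new IllegalStateException("name already taken: " + name);
        }
        try {
            em.persist(new MyEntity(id, name));
            em.flush();                // a duplicate surfaces here as a violation
        } catch (PersistenceException e) {
            // the losing writer in a race ends up here
            throw new IllegalStateException("name already taken: " + name, e);
        }
    }
}
```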
Another possibility is to ensure that only one writer can update the set at any given time -- you don't have to worry about losing a race when you are the only one running in it.
Sometimes you can lock a smaller set -- imagine, for example, having a collection of locks with numbers, and the hash code for the name tells you which lock you have to grab.
The simplest version of this is when you can use the name as the aggregate identifier itself.
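For illustration (a sketch, not tied to any particular framework), the lock-striping idea above can be as simple as hashing the name onto a fixed array of locks, so that two writers for the same name always contend on the same lock. Note this only helps when all writers share one process; across processes you are back to a shared store or a single writer.

```java
import java.util.concurrent.locks.ReentrantLock;

// Sketch of lock striping keyed by name: all writers that touch the same name
// hash to the same lock, so a uniqueness check plus insert can run atomically.
public class NameLockStripes {
    private final ReentrantLock[] locks;

    public NameLockStripes(int stripes) {
        locks = new ReentrantLock[stripes];
        for (int i = 0; i < stripes; i++) {
            locks[i] = new ReentrantLock();
        }
    }

    private ReentrantLock lockFor(String name) {
        // floorMod keeps the index non-negative even for negative hash codes
        return locks[Math.floorMod(name.hashCode(), locks.length)];
    }

    public void withNameLock(String name, Runnable criticalSection) {
        ReentrantLock lock = lockFor(name);
        lock.lock();
        try {
            criticalSection.run();     // e.g. nameExists(name) check + insert
        } finally {
            lock.unlock();
        }
    }
}
```

Using the name itself as the aggregate identifier is the degenerate case of this: both writers target the same aggregate, and the store's concurrency check on that one aggregate is what detects the race.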
if one uses Event Sourcing or a persistence layer that does not have unique constraints?
Sometimes, you introduce a persistent store dedicated to the set, just to ensure that you can maintain the invariant. See "microservices".
But if you can't change the database, and you can't use a database with the locking guarantees that you need, and the business absolutely has to have the set valid at all times... then you single thread that part of the work.
Everybody that wants to change a name puts a request into a queue, and the one thread responsible for managing the invariant certifies each and every change.
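A minimal single-writer sketch (the class names are made up; this is just the shape of the idea): every change request goes onto a queue, and one thread owns the set of names and decides each request in order.

```java
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch: a single certifier thread owns the set of taken names, so there is
// no race to lose -- requests are decided strictly one at a time.
public class NameCertifier {
    record Request(String name, CompletableFuture<Boolean> accepted) {}

    private final BlockingQueue<Request> queue = new LinkedBlockingQueue<>();
    private final Set<String> takenNames = new HashSet<>();

    public NameCertifier() {
        Thread worker = new Thread(() -> {
            while (true) {
                try {
                    Request r = queue.take();
                    // add() returns false if the name was already taken
                    r.accepted().complete(takenNames.add(r.name()));
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        }, "name-certifier");
        worker.setDaemon(true);
        worker.start();
    }

    // Callers block until the single writer has certified (or rejected) the change.
    public boolean claim(String name) throws Exception {
        CompletableFuture<Boolean> accepted = new CompletableFuture<>();
        queue.put(new Request(name, accepted));
        return accepted.get();
    }
}
```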
There's no magic; just hard work and trade offs.
Why (in what scenarios) do we need to integrate Spring Web Flow with Spring MVC? Both of these frameworks are used to create web apps, and I do not see any point in integrating them. I would appreciate it if someone could clarify this for me.
This is really late, but I don't see a satisfactory answer to this question and would like to share an approach I tried in a recent project, which I feel is better than the Spring Web Flow approach that is strictly tied to Spring views and adds unnecessarily to the already existing XML load. I created an SPA (Single Page Application) using AngularJS with Spring MVC. In AngularJS I did not use routers or state; instead I created a div inside the controller whose content is driven by a page variable in the controller's scope.
On the server side, to capture all possible transitions from one frame (I am referring to a particular screen in the SPA) to another, I created a tree of rules using MVEL. So in the database I had a structure which stored a tree of rules for every frame. The data for the MVEL expressions was set by the various services each action invoked. Thus, on any action, the following steps were followed:
1) Validate the action.
2) Invoke various services.
3) Capture the data from these services and merge it with the existing data of the user.
4) Feed this captured data into the collection of rules for each frame, along with the details of the current frame.
5) Run the rules of the tree with respect to the current frame and fetch the output.
6) If there is only one transition, that is the final transition. If there are two transitions and one is the default, ignore the default transition and use the other one.
7) Return the template name of the transition to the Angular controller and set the value of the page variable in the controller's scope.
Using this approach, all my services had to do was store data in different data fields for a particular action. None of the complex if-else conditions or complex process definitions (like the ones defined in Spring Web Flow) were required. The MVEL rule engine managed all of that, and since it all lived in the database it could be changed without a server restart.
I believe this generic approach with MVEL is a flexible one that comprehensively handles the problem of a convoluted flow without making the application code a mess or adding unnecessary XML files.
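As a rough sketch of the server-side rule evaluation described above (the rule strings, variable names and surrounding classes are made up; only the MVEL.eval call is real MVEL API), each frame's rules are just expressions evaluated against the merged user data:

```java
import org.mvel2.MVEL;

import java.util.List;
import java.util.Map;

// Sketch: a frame's transition rules stored as MVEL expressions, evaluated
// against the user's accumulated data to decide the next frame/template.
public class FrameTransitionEngine {

    // A single rule: if the expression evaluates to true, go to targetTemplate.
    record TransitionRule(String expression, String targetTemplate, boolean isDefault) {}

    public String nextTemplate(List<TransitionRule> rulesForCurrentFrame,
                               Map<String, Object> userData) {
        String defaultTemplate = null;
        for (TransitionRule rule : rulesForCurrentFrame) {
            if (rule.isDefault()) {
                defaultTemplate = rule.targetTemplate();
                continue;
            }
            Object result = MVEL.eval(rule.expression(), userData);
            if (Boolean.TRUE.equals(result)) {
                return rule.targetTemplate();   // a non-default transition wins
            }
        }
        return defaultTemplate;                 // otherwise fall back to the default
    }
}
```

For example, a rule such as "premiumCustomer == true && cartValue > 1000" evaluated against the user's data map could route to a hypothetical "vip-offer" template.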
We combine both. Web Flow for the multi-step activities, where it doesn't make sense to jump into the middle of a process, and plain-MVC Controllers for the single-step activities. Things you might bookmark individually.
For example, in an appointment-scheduling application, "find my appointment" might be a single Controller that accepts identifying information. "Make a new appointment" is a flow, with multiple steps: selecting a location, date and time, confirming the appointment, etc.
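For the single-step, bookmarkable side, a plain controller is all you need (a sketch; the paths, parameters and view names are hypothetical), while the multi-step "make a new appointment" part lives in a Web Flow definition:

```java
import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;

// Sketch of the single-step activity handled by a plain Spring MVC controller.
@Controller
public class FindAppointmentController {

    @GetMapping("/appointments/find")
    public String find(@RequestParam("confirmationCode") String confirmationCode,
                       Model model) {
        // look up the appointment by the identifying information (omitted here)
        model.addAttribute("confirmationCode", confirmationCode);
        return "appointment-details";   // hypothetical view name
    }
}
```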
If your application has complex flow pages and events that need to be modeled as a finite state machine, then use Web Flow. It is justified for websites where you buy insurance or flight tickets. Web Flow fits when conditions like these hold:
There is a clear start and an end point.
The user must go through a set of screens in a specific order.
The changes are not finalized until the last step.
Once complete, it shouldn't be possible to repeat a transaction accidentally.
In our use case, we need to define certain rules at run-time based on which a node will transact with other nodes in the network. For example, we want to define a rate at the front end and check that the transaction is happening at this rate only for that particular node. In other words, can we define the terms and conditions at run-time, and would this still be called a smart contract, or does a smart contract always need to be hard-coded? Is there any alternate way to look at this?
The contract itself is hard-coded. This is because every node needs to agree that a given transaction is valid according to the contract rules, forever. If the rules varied based on the node, some nodes would consider a transaction valid while others would consider it invalid, leading to inconsistencies in their ledgers.
Instead, you'd have to impose this logic in the flow. Let's say you have a TradeOffer flow that proposes a trade. Each node could install its own response flow that is initiated by the TradeOffer flow. Each node's response flow could impose different conditions. For example, one node might sign any transaction, while another would check that the proposed rate is within specified bounds.
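A sketch of what such a node-specific responder might look like in Java (the TradeOffer flow, the TradeState class and its rate field are hypothetical application pieces; the flow classes are standard Corda ones):

```java
import co.paralleluniverse.fibers.Suspendable;
import net.corda.core.flows.FlowException;
import net.corda.core.flows.FlowLogic;
import net.corda.core.flows.FlowSession;
import net.corda.core.flows.InitiatedBy;
import net.corda.core.flows.SignTransactionFlow;
import net.corda.core.transactions.SignedTransaction;

// Sketch: each node installs its own responder and imposes its own conditions
// before signing -- here, that the proposed rate stays within local bounds.
@InitiatedBy(TradeOfferFlow.class)          // hypothetical initiating flow
public class TradeOfferResponder extends FlowLogic<Void> {

    private static final double MAX_ACCEPTABLE_RATE = 5.0;  // node-local policy

    private final FlowSession counterpartySession;

    public TradeOfferResponder(FlowSession counterpartySession) {
        this.counterpartySession = counterpartySession;
    }

    @Suspendable
    @Override
    public Void call() throws FlowException {
        subFlow(new SignTransactionFlow(counterpartySession) {
            @Override
            protected void checkTransaction(SignedTransaction stx) throws FlowException {
                // TradeState and getRate() are hypothetical application classes
                TradeState proposal = stx.getTx().outputsOfType(TradeState.class).get(0);
                if (proposal.getRate() > MAX_ACCEPTABLE_RATE) {
                    throw new FlowException("Proposed rate exceeds this node's limit");
                }
            }
        });
        return null;
    }
}
```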
To extend Joel's comment, the contract is indeed hard-coded, but there's nothing wrong with putting meta logic in there as long as the code runs the same way every time (i.e. it's deterministic).
What do I mean by this? Well, you can put a String in your state which contains an expression that can then be evaluated (if you refer to https://relayto.com/r3/FIjS0Jfy/VB8epyay73 you can see a very basic maths expression used in a smart contract). There's nothing wrong with making this String as complex as you like, but be aware that potential users of your application will start raising eyebrows if you dumb down the coded verification logic and push it all into a String, because you give up a lot of the validation protection that Corda offers.
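For illustration only (the state, its fields and the tiny expression format are all made up; only Contract and LedgerTransaction are Corda types), a contract could verify against a constraint string carried in the state, as long as the evaluation is deterministic:

```java
import net.corda.core.contracts.Contract;
import net.corda.core.transactions.LedgerTransaction;

// Sketch: the constraint lives in the state as a String such as "rate <= 5.0",
// and the contract evaluates it the same way on every node (deterministically).
public class RateContract implements Contract {

    @Override
    public void verify(LedgerTransaction tx) {
        // TradeState, getRate() and getConstraintExpression() are hypothetical
        TradeState output = tx.outputsOfType(TradeState.class).get(0);

        String[] parts = output.getConstraintExpression().split(" ");
        if (parts.length != 3 || !"rate".equals(parts[0])) {
            throw new IllegalArgumentException("Unsupported constraint expression");
        }
        double limit = Double.parseDouble(parts[2]);
        double rate = output.getRate();

        boolean ok;
        if ("<=".equals(parts[1])) {
            ok = rate <= limit;
        } else if (">=".equals(parts[1])) {
            ok = rate >= limit;
        } else {
            throw new IllegalArgumentException("Unsupported operator");
        }
        if (!ok) {
            throw new IllegalArgumentException("Rate violates the stated constraint");
        }
    }
}
```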
Can somebody please advise how to separate out the models when splitting each service to run on its own? A few of our models currently overlap among services. I went through some material that suggests using the canonical data model pattern; however, I have also read that we should not use the canonical pattern.
One solution would be to keep all the models in a common place, which is what we are doing right now. But that seems problematic for managing the services as one repository per service.
Duplicating models is also fine with me if I can find a good rationale for it.
Vertical slicing and domain decomposition lead to each vertical slice having its own well-defined fields (not entities) that belong together (bounded context / business functions), defining a logical service boundary, and then decomposing that service boundary into business components and autonomous components (the smallest unit of work).
Each component owns the data it modifies and is the only one in the system that can change the state of that data; you can have many copies of the data for readers, but only one logical writer.
Therefore it makes more sense not to share these data models, as they are internal to the component responsible for the data model's state (and are encapsulated).
Data readers can use view models, and these are dictated more by the consumer than the producer. The view/read models reflect the "real"/"current" state of the transactional data (which stays private to the data modifier); the read data can be updated by the data processor after any logical state change.
So it makes more sense to publish view/read models for read-only consumers...
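A small sketch of that split (all class names are hypothetical): the owning component keeps its internal model private and publishes a read model that consumers use to maintain their own copies, instead of sharing the entity itself.

```java
import java.math.BigDecimal;
import java.time.Instant;

// Internal model: owned and mutated only by the ordering component (the single
// logical writer); never shared with other services.
class Order {                                   // hypothetical writer-side entity
    private final String orderId;
    private BigDecimal total;
    private String status;

    Order(String orderId, BigDecimal total) {
        this.orderId = orderId;
        this.total = total;
        this.status = "NEW";
    }

    // The only place this state changes; a read model is published afterwards.
    OrderSummary approve() {
        this.status = "APPROVED";
        return new OrderSummary(orderId, status, total, Instant.now());
    }
}

// Published view/read model: shaped for consumers, carries no behaviour.
// Read-only consumers store and query their own copy of this.
record OrderSummary(String orderId, String status, BigDecimal total, Instant asOf) {}
```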
Check out Udi Dahan's video
Does that make sense?
I'm beginning the process of instrumenting a web application, and using StatsD to gather as many relevant metrics as possible. For instance, here are a few examples of the high-level metric names I'm currently using:
http.responseTime
http.status.4xx
http.status.5xx
view.renderTime
oauth.begin.facebook
oauth.complete.facebook
oauth.time.facebook
users.active
...and there are many, many more. What I'm grappling with right now is establishing a consistent hierarchy and set of naming conventions for the various metrics, so that the current ones make sense and that there are logical buckets within which to add future metrics.
My question is twofold:
What relevant metrics are you gathering that you have found indispensable?
What naming structure are you using to categorize metrics?
This is a question that has no definitive answer but here's how we do it at Datadog (we are a hosted monitoring service so we tend to obsess over these things).
1. Which metrics are indispensable? It depends on the beholder. But at a high level, for each team, any metric that is as close to their goals as possible (which may not be the easiest to gather).
System metrics (e.g. system load, memory, etc.) are trivial to gather but seldom actionable, because it is too hard to reliably connect them to a probable cause.
On the other hand, the number of completed product tours matters to anyone tasked with making sure new users are happy from the first minute they use the product. StatsD makes this kind of stuff trivially easy to collect.
We have also found that the core set of key metrics for any team changes as the product evolves, so there is a continuous editorial process.
Which in turn means that anyone in the company needs to be able to pick and choose which metrics matter to them. No permissions asked, no friction to get to the data.
2. Naming structure: The highest level of the hierarchy is the product line or the process. Our web frontend is internally called dogweb, so all the metrics from that component are prefixed with dogweb.. The next level of the hierarchy is the sub-component, e.g. dogweb.db., dogweb.http., etc.
The last level of hierarchy is the thing being measured (e.g. renderTime or responseTime).
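A sketch of how that convention composes into metric names in code (dogweb, http, db are the example components from above; the helper itself is hypothetical, and the actual send would go through whatever StatsD client you use):

```java
// Hypothetical helper that enforces the product.sub-component.measurement convention.
public final class MetricNames {

    private MetricNames() {}

    public static String of(String product, String subComponent, String measurement) {
        return String.join(".", product, subComponent, measurement);
    }

    public static void main(String[] args) {
        // Prints "dogweb.http.responseTime" and "dogweb.db.queryTime".
        System.out.println(of("dogweb", "http", "responseTime"));
        System.out.println(of("dogweb", "db", "queryTime"));
        // A StatsD client would then record against these names, e.g.
        // statsd.recordExecutionTime(of("dogweb", "http", "responseTime"), elapsedMs);
    }
}
```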
The unresolved issue in graphite is the encoding of metric metadata in the metric name (and selection using *, e.g. dogweb.http.browser.*.renderTime). It's clever but can get in the way.
We ended up implementing explicit metadata in our data model, but this is not in statsd/graphite so I will leave the details out. If you want to know more, contact me directly.