.Net core microservices query from many services - .net-core

I have a .NET 5 microservices project. The client has a search module which queries data from many objects, and these objects live in many services.
The first microservice is for products.
This microservice has a table product { productId, productName }.
The second microservice is for vendors.
This microservice has a table account { vendorId, vendorName }.
The third microservice is for purchases.
This microservice has a table purchase { title, productId (this id comes from the product table in the first microservice), accountId (this id comes from the account table in the second microservice) }.
Now: the user wants to search for purchases where the product name is like "clothes" and the vendor name is like "x".
How can I do this query with the microservice pattern?

Taking a monolithic system and slapping service interfaces on top of each table will just make your life hell, as you will end up reimplementing database logic yourself - like in this question, where you try to recreate database joins.
Putting aside that your partitioning into services doesn't sound right (or needed) for this case: since purchases describe events that happened (even if they are later deleted), they can capture the state of the product and the vendor. So you can augment the data in the purchases to represent the product and vendor data as they were at the time of the purchase. This isolates purchase queries to the purchase service, and has the added benefit of preserving history as products and vendors evolve over time.
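A minimal sketch of that augmentation, with illustrative field names (the snapshot columns are assumptions, not your actual schema):

```python
from dataclasses import dataclass

# At purchase time, the purchase service copies the product and vendor
# fields it needs into its own record, so later queries never leave it.
@dataclass
class Purchase:
    title: str
    product_id: int
    product_name: str   # snapshot taken at purchase time
    account_id: int
    vendor_name: str    # snapshot taken at purchase time

purchases = [
    Purchase("spring order", 1, "clothes - shirt", 10, "x-corp"),
    Purchase("summer order", 2, "shoes", 11, "y-corp"),
]

def search(purchases, product_like, vendor_like):
    """Local query: no cross-service joins needed."""
    return [p for p in purchases
            if product_like in p.product_name and vendor_like in p.vendor_name]

result = search(purchases, "clothes", "x")  # finds "spring order" only
```

The trade-off is duplicated data, but for event-like records such as purchases the duplication is exactly the history you want to keep.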

For this case you would build at least three queue managers in your message queue integration: one each for Purchase, Vendor, and Products.
Then set up the request and reply queues.
The request and reply payloads can be in either JSON or XML, whichever you like.
You need to create a listener which is responsible for the reply queues; you can use something like SignalR streaming to listen continuously.
Once all the integration is complete, you can inject the result directly into your client application.
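As a rough sketch of the request/reply flow described above, here in-process queues stand in for the queue managers, and the payload values are made up:

```python
import json
import queue

# In-process queues stand in for the per-service queue managers.
request_q = queue.Queue()
reply_q = queue.Queue()

def product_service_listener():
    """Drains the request queue and posts JSON replies (illustrative data)."""
    while not request_q.empty():
        req = json.loads(request_q.get())
        reply = {"productId": req["productId"], "productName": "clothes"}
        reply_q.put(json.dumps(reply))

# Client posts a request, the service replies, a listener reads the reply.
request_q.put(json.dumps({"productId": 1}))
product_service_listener()
reply = json.loads(reply_q.get())
```

In a real deployment the queues would be durable broker queues and the listener a long-running subscriber, but the request/reply shape is the same.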

Related

What is the best way to generate an Axon sequence number when working with multiple microservices

I am new to the Axon framework and we are using Axon 3.3.3 with MongoDB as the event store.
We would like to know the best option for generating aggregate ids with microservices, as we see a problem with loading events from the event store.
Example: we have an order service and a product service.
The order service generates aggregate id 101 of type OrderAggregate, and it is stored in the event store.
If the product service also generates id 101, of type ProductAggregate,
then how can we load a particular microservice's events from the event store?
I generally recommend not using sequential numbers. Besides being a process that is hard to scale, you easily bump into duplicates, and the scope of sequential numbers is generally at the entity-type level.
Instead, consider using UUIDs (via UUID.randomUUID()) for your aggregates. They can be generated by the sender of commands, allowing the identifier of an aggregate to be used to consistently route messages, including the creation method, to the same machine. This allows you to configure caching on the handling side and be sure that any updates are sent to the same node where the aggregate was created.
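A small sketch of the idea, with Python's uuid4 standing in for Java's UUID.randomUUID(); the routing function is an illustrative assumption, not Axon's actual router:

```python
import uuid

# Each aggregate gets a globally unique id, so an OrderAggregate and a
# ProductAggregate can never collide in a shared event store.
def new_aggregate_id() -> str:
    return str(uuid.uuid4())

order_id = new_aggregate_id()
product_id = new_aggregate_id()

# The id can also drive consistent routing: hash it to pick a node, so
# all commands for one aggregate land on the same machine.
def route(aggregate_id: str, node_count: int) -> int:
    return hash(aggregate_id) % node_count
```

Because the sender generates the id before the creation command is dispatched, even the very first command for an aggregate can be routed like any later one.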

ASP.NET, concurrency control with WCF

I would like advice from all of you, based on your experience with SOA-based architectures, on how to achieve concurrency control with WCF and nHibernate.
ASP .NET (web app) - 1 tier (Web server)
WCF+nHibernate - 2nd tier (Application server)
Scenario 1
a. User1 opens an Order and edits it.
b. User2 meanwhile opens the same Order and edits it.
How do we prevent User2 from performing this operation? Both WCF calls to "edit order" fetch the record using different nHibernate sessions, as the application server is designed to be stateless.
How do I configure nHibernate/WCF to handle this kind of scenario?
Scenario 2
If entering an order involves multiple pages, with the pages displayed based on business logic (WCF calls determine the next page), should the order data be maintained at the web server (ASP.NET) across these pages and posted on each WCF call, which takes care of the order business logic? Is my thinking right?
I'm aware of the other way to do it, which is to create a temporary order record to maintain intermediate data across pages/WCF calls.
In regards to the first scenario, you don't want to block the user from attempting to make an edit; your system will never scale if you try to enforce pessimistic concurrency.
Rather, you want to use optimistic concurrency. Your models would carry a timestamp (usually an actual datetime value or, better yet, some sort of binary value guaranteed to be unique for the particular context/table). Most ORMs (including nHibernate) support optimistic concurrency: when generating the update, the ORM only updates the record if the primary key matches and the timestamp is unchanged. If the timestamp has changed, you know that someone else edited the record since the last time you fetched the data.
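A toy sketch of the check the ORM performs, with an in-memory dict standing in for the orders table and an integer version standing in for the timestamp column:

```python
# Toy in-memory "orders table" with a version column.
orders = {42: {"status": "open", "version": 7}}

class ConcurrencyError(Exception):
    pass

def update_order(order_id, new_status, expected_version):
    """Update succeeds only if the version still matches (optimistic lock)."""
    row = orders[order_id]
    if row["version"] != expected_version:
        raise ConcurrencyError("row was modified by another user")
    row["status"] = new_status
    row["version"] += 1

update_order(42, "shipped", expected_version=7)        # User1's edit wins

stale_rejected = False
try:
    update_order(42, "cancelled", expected_version=7)  # User2's stale edit
except ConcurrencyError:
    stale_rejected = True                              # tell User2 to reload
```

Both users read version 7; whoever commits first bumps it, and the second update fails the version check, which the UI can surface as "this order was changed, please reload".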
For your second scenario, you might want to consider Windows Workflow, which has tight integration with WCF. Generally though, I'd have a separate table which contains the incomplete order information along with a key that the user can use (perhaps the session id) to look up the data.
Only when you know the order is complete do you move the data from your "staging" table over to your real orders table.

SOA and distributed transactions

When do distributed transactions make sense in a service-oriented architecture?
Distributed transactions are used very often in SOA environments. If you have a composite service calling multiple services, the underlying service calls should be handled as a single transaction. Business processes should allow for a rollback of their steps. If the underlying resources allow for it, you can use two-phase commits, but in many cases that is impossible. In those cases, compensating actions should be performed on the services/resources invoked before the failed step - in other words, undo the succeeded steps in reverse order.
Imaginary example: telecom company provisions a new VoIP product for a customer with 6 service calls:
query inventory to check the customer has the right equipment
configure customer equipment via mediation
update inventory with new configuration
set up rating engine to count CDR's for customer
set up billing software to charge the customer with the correct price plan
update CRM system with the result of the provisioning process
The six steps above should be part of one transaction. For example, if the inventory update fails, you may need to undo the customer equipment configuration.
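A minimal sketch of the compensating-actions idea: each step registers an undo, and a failure triggers the undos of the succeeded steps in reverse order (step names abbreviated from the example above):

```python
# Each step pairs an action with a compensating undo.
log = []

def make_step(name, fail=False):
    def action():
        if fail:
            raise RuntimeError(f"{name} failed")
        log.append(f"do:{name}")
    def undo():
        log.append(f"undo:{name}")
    return action, undo

steps = [
    make_step("check inventory"),
    make_step("configure equipment"),
    make_step("update inventory", fail=True),  # simulate the failure here
]

def run_saga(steps):
    done = []
    try:
        for action, undo in steps:
            action()
            done.append(undo)
    except RuntimeError:
        # Compensate the succeeded steps in reverse order.
        for undo in reversed(done):
            undo()

run_saga(steps)
```

Unlike a two-phase commit, nothing is held locked across services; each compensating action is just another ordinary service call that semantically reverses an earlier one.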
Not really a case of when they make sense. A transaction (distributed or not) is implemented by necessity rather than arbitrary choice, to guarantee consistency. The alternative is to implement a reconciliation process to ensure eventual consistency.
In the classic bank example (money out of account A, into account B), transactional consistency is pretty much essential. In some inventory systems (check inventory, decrement inventory, sell to customer), it may be acceptable for stock levels to be roughly accurate rather than guaranteed, in which case ignoring a failure (decrement inventory, sale fails to complete) could be dealt with by reconciling later.

SOA Style - Sharing data

I'm considering an SOA architecture for a set of services to support a business that I'm consulting for. Previously we used database integration, where each application picked out what it needed from a shared MS SQL database and worked with it. We had various apps integrating with the monster database, including Java, .NET, and Microsoft Access; there was referential integrity, as everything was tightly coupled.
I'm a bit confused about how to support data sharing between services.
Let's take the Product Service, which sits on top of the Product database provided by the wholesaler each month. We build a domain model and sit it on top of the database with Hibernate or whatever; implementation-wise, Product is a large object graph, given the information provided by the wholesaler about the product.
Now let's say the Review Service, Pricing Service, Shipping Service, and Stock Service will subscribe to ProductUpdated, ProductAdded, and ProductDeleted. The problem is that each service only needs part, or some parts, of the information about the Product. Shipping might only need the dimensions and weight. Pricing might only need product id, wholesale cost, volume discount, and price-effective-to date. Review might need product id, product name, and producer.
Is it standard practice just to publish the whole Product (suitable non-subscriber-specific contracts, e.g. ProductUpdated, with a suitable schema representing the whole product object graph) and let the subscribers map whatever they need to their domain models (or, heck, do what they want with it; they might not even have a domain model)?
Or as I write this I'm thinking maybe:
Product Service publishes a ProductAdded message (which does not include product details, just the product's ID and maybe a timestamp)
Pricing Service subscribes to ProductAdded and publishes RequestPricingForProduct message
Product Service Publishes ResultForPricingForProduct message
Hmm... that seems a little better... but it feels like I'm building the contract for the Product Service based on which other services I can identify and what they are going to need; perhaps in future an XYZ Service will require something different. I'm going to stop there, as I think it's getting clearer where I'm confused... perhaps the above will work, because I should expose a way to return whatever should be public.
Any comments or direction greatly appreciated. Sorry if this appears half baked.
I actually think the solution to this problem is to NOT share the data. SOA means that data is owned by a service - it is the technical authority of that data. I suggest reading a few Pat Helland articles, such as Data On The Inside, Data On The Outside.
The only thing that should be shared between these different services is the primary key - the ProductId in your example. Otherwise, for each service, the data that needs to be transactionally consistent goes together.
There does not need to be one "Product". Each service can have a different view of the product in their service. For the Pricing service, you have a productId and a price. For the review service, a productId and a review. And so on.
Where this starts to confuse people is how to display this data in the UI if it's from all these disparate services. How can you show a list of reviews for a product that has the product name from the ProductService and the review text from the ReviewService?
The answer to that is to compose the UI from all the different services. Get the product from the product service and get the review data from the review service and then combine that data in the UI.
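A tiny sketch of that UI composition, with dicts standing in for the two services (all data is made up):

```python
# Each service owns only its slice of "product" data, keyed by productId.
product_service = {1: {"productName": "clothes"}}
review_service = {1: [{"text": "great fit"}, {"text": "runs small"}]}

def product_page(product_id):
    """The UI layer composes the page from both services' data."""
    return {
        "name": product_service[product_id]["productName"],
        "reviews": [r["text"] for r in review_service.get(product_id, [])],
    }

page = product_page(1)  # name from ProductService, reviews from ReviewService
```

The only thing the two services share is the ProductId; neither ever queries the other's database.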
I was in your position recently. The problem with directly exposing the underlying object through the service is that you increase coupling between layers, and there becomes little point in using a Service Oriented Architecture at all. You would not be able to change these objects or business rules without affecting the web service too.
It sounds like you are on the right track. If you are serious about separating your layers, then the most common pattern is to create a new, separate set of message classes just for the web service (potentially one per service) and translate your internal objects back and forth.
For an example of how to set up your service layer in this manner see the "Service Interface" pattern. On the client side of the service, there is an opposite pattern called "Service Gateway".
The Application Architecture Guide 2.0 has a whole chapter dedicated to the types of the decisions you are making (http://apparchguide.codeplex.com/Wiki/View.aspx?title=Chapter%2013%20-%20Service%20Layer%20Guidelines). I would download the whole guide.
Here is the portion most relevant to you. Long story short, if you take the time to create new coarse-grained methods, and message-based objects, you'll end up with a much better web service:
Consider the following guidelines when designing a service interface:
Consider using a coarse-grained interface to batch requests and minimize the number of calls over the network.
Design service interfaces in such a way that changes to the business logic do not affect the interface.
Do not implement business rules in a service interface.
Consider using standard formats for parameters to provide maximum compatibility with different types of clients.
Do not make assumptions in your interface design about the way that clients will use the service.
Do not use object inheritance to implement versioning for the service interface.
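As an illustration of the first guideline, here is a sketch of a coarse-grained, message-based interface that batches product lookups into one call (all class and field names are invented for the example):

```python
from dataclasses import dataclass

# Message-based contract: one coarse-grained request carries a batch of ids,
# instead of one network call per product.
@dataclass
class GetProductsRequest:
    product_ids: list

@dataclass
class GetProductsResponse:
    products: dict  # id -> name

# Internal data the service translates to and from its messages.
_catalog = {1: "clothes", 2: "shoes"}

def handle(request: GetProductsRequest) -> GetProductsResponse:
    """Service interface: maps the message to internal objects and back."""
    found = {pid: _catalog[pid] for pid in request.product_ids if pid in _catalog}
    return GetProductsResponse(products=found)

resp = handle(GetProductsRequest(product_ids=[1, 2, 3]))  # 3 is unknown
```

Because the contract is its own pair of message classes, the internal catalog representation can change without breaking clients, which is the point of the second and third guidelines.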

Able Commerce POS Data Merge

We are building an AbleCommerce 7 web store and trying to integrate it with an existing point-of-sale system. The product inventory will be shared between a physical store and a web store, so we will need to periodically update the quantity on hand for each product to keep the POS and the web store as close to in sync as possible and avoid overselling product in either location. The POS system does have a scheduled export that runs every hour.
My question is, has anyone had any experience with synchronizing data with an Able Commerce 7 web store and would you have any advice on an approach?
Here are the approaches that we are currently considering:
1. Grab exported product data from the POS system and determine which products need to be updated. Make calls to a custom-built web service residing on the server with AbleCommerce to call AbleCommerce APIs and update the web store appropriately.
2. AbleCommerce does have a Data Port utility that can import/export web store data via the AbleCommerce XML format. This would provide all of the merging logic, but there doesn't appear to be a way to programmatically kick off the merge process. Their utility is a compiled Windows application; there is no command-line interface that we are aware of. The Data Port utility calls an ASHX handler on the server.
3. Take an approach similar to #1 above, but attempt to use the Data Port ASHX handler to update the products instead of using our own custom web service. Currently there is no documentation for interfacing with the ASHX handler that we are aware of.
Thanks,
Brian
We've set this up between AbleCommerce and an MAS system. We entered the products into the AbleCommerce system and then created a process to push the inventory, price, and cost information from the MAS system into the ProductVariants table.
The one issue we ran into is that no records exist in the ProductVariants table until you make a change to the variants data. So, we had to write a Stored Procedure to automatically populate the ProductVariants table so that we could do the sync.
I've done this with POS software. It wasn't AbleCommerce, but retail sales and POS software is generic enough (no vendor wants to tell prospects that "you need to operate differently") that it might work.
Sales -> Inventory
Figure out how to tap into the Data Port for near-real-time sales info. I fed this to a Message-Queue-By-DBMS-Table mechanism that was polled and flushed every 30 seconds to update inventory. There are several threads here that discuss MQ via dbms tables.
Inventory -> Sales
Usually there is a little more slack here - otherwise you get into interesting issues about QC inspection failures, in-transit, quantity validation at receiving, etc. But however it's done, you will have a mechanism for events occurring as new on-hand inventory becomes available. Just do the reverse of the first process. A QOH change event causes a message to be queued for a near-real-time polling app to update the POS.
I actually used a single queue table in MSSQL with a column for messagetype and XML for the message payload.
It ends up being simpler than the description might sound. Let me know if you want info offline.
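A rough sketch of that message-queue-by-DBMS-table mechanism, using SQLite in place of MSSQL (the schema and message type are illustrative):

```python
import sqlite3

# A single queue table with a message type and an XML payload, polled
# periodically and flushed; sqlite stands in for MSSQL here.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE msg_queue (id INTEGER PRIMARY KEY, "
             "messagetype TEXT, payload_xml TEXT, processed INTEGER DEFAULT 0)")

def enqueue(msg_type, payload_xml):
    conn.execute("INSERT INTO msg_queue (messagetype, payload_xml) VALUES (?, ?)",
                 (msg_type, payload_xml))

def poll():
    """Fetch pending messages in order, marking each one processed."""
    rows = conn.execute("SELECT id, messagetype, payload_xml FROM msg_queue "
                        "WHERE processed = 0 ORDER BY id").fetchall()
    for row_id, _, _ in rows:
        conn.execute("UPDATE msg_queue SET processed = 1 WHERE id = ?", (row_id,))
    return rows

# A sale event queued by one side, picked up by the other side's poller.
enqueue("SaleCompleted", "<sale><sku>A1</sku><qty>2</qty></sale>")
batch = poll()
```

The same table (and the same polling loop) carries both directions of traffic; the messagetype column tells the consumer whether a row is a sale event or an inventory change.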
