Passing parameters in a WF operation contract - Workflow Foundation 4

I am very new to WF services (I am using 4.0). I created a new project with all the default settings; the problem is how to pass two parameters in my GetData operation contract (the default one, which accepts only one value).
I want to do something like this:
[OperationContract]
int GetData(int first, int second);

Either select Parameters, where you can enter multiple items (the easy way), or create a MessageContract with multiple body members and use that type in the Message option. The latter is more capable and more advanced; the former is the better way to start if you don't have special messaging requirements.
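For the MessageContract route, a minimal sketch might look like the following (GetDataRequest and IDataService are hypothetical names, not part of the default template):

```csharp
// Hypothetical message contract bundling both values into one message body.
[MessageContract]
public class GetDataRequest
{
    [MessageBodyMember]
    public int First { get; set; }

    [MessageBodyMember]
    public int Second { get; set; }
}

// Hypothetical service contract using the message type as the single parameter.
[ServiceContract]
public interface IDataService
{
    [OperationContract]
    int GetData(GetDataRequest request);
}
```

In the workflow designer's Receive activity you would then choose Message content and set the message type to GetDataRequest.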

Validating a Contact has a Unique Email in Axon

I am curious to understand what the best practice approach is when using the Axon Framework to validate that an email field is unique to a Set of emails for a Contact Aggregate.
Example setup
ContactCreateCommand {
    identifier = '123'
    name = 'ABC'
    email = 'info@abc.com'
}
ContactAggregate {
    ContactAggregate(ContactCreateCommand cmd) {
        // 1. cannot validate email here
        AggregateLifecycle.apply(
            new ContactCreatedEvent(/* fields... */)
        );
    }
}
From my understanding of how this might be implemented, I have identified a number of possible ways to handle this, but perhaps there are more.
1. Do nothing in the Aggregate
This approach imposes that the invoker (of the command) does a query to find Contacts by email prior to sending the command, allowing for some milliseconds where eventual consistency allows for duplication.
Drawbacks:
Any "invoker" of the command would then be required to perform this validation check, as it's not possible to do the check inside the Aggregate using an Axon Query Handler.
Duplication can still occur, so all projections built from these events need to handle it somehow
2. Validate in a separate persistence layer
This approach introduces a new persistence layer that the aggregate can use to validate uniqueness.
Inside the ContactAggregate's command handler for ContactCreateCommand, we can issue a query against this persistence layer (e.g. a Postgres table with a unique index on the email column) and validate the email against this database, which contains the full set of addresses.
Drawbacks:
Introduces a persistence layer external to the microservice to guarantee uniqueness across Contacts
Scaling must be considered in the persistence layer; a highly scaled aggregate hitting it could make it a bottleneck
3. Use a Saga and Singleton Aggregate
This approach enhances the previous setup by introducing an Aggregate that can have at most one instance (e.g. its target identifier is always the same). This 'Singleton Aggregate' is responsible solely for encapsulating the Set of all Contact email addresses.
ContactEmailValidateCommand {
    identifier = 'SINGLETON_ID_1'
    email = 'info@abc.com'
    customerIdentifier = '123'
}
UniqueContactEmailAggregate {
    @AggregateIdentifier
    private String identifier;
    private Set<String> emails = new HashSet<>();

    on(ContactEmailValidateCommand cmd) {
        if (emails.contains(cmd.email)) {
            // already taken: duplicate email is invalid
            AggregateLifecycle.apply(
                new ContactEmailInvalidatedEvent(/* fields... */)
            );
        } else {
            // an @EventSourcingHandler would add cmd.email to the set
            AggregateLifecycle.apply(
                new ContactEmailValidatedEvent(/* fields... */)
            );
        }
    }
}
After this check, we can then react appropriately to the ContactEmailValidatedEvent or ContactEmailInvalidatedEvent, the latter of which might invalidate the contact afterwards.
The benefit of this approach is that it keeps the persistence local to the Aggregate, which could give better scaling (as more nodes are added, more aggregates with locally managed Sets exist).
Drawbacks
Quite a lot of boilerplate to replace a simple "create unique index"
This approach allows an 'invalid' Contact to pollute the Event Store forever
The 'Singleton Aggregate' is complex to ensure it truly is a singleton (perhaps there is a simpler or better way)
The 'invoker' of the CreateContactCommand must check to see the outcome of the Saga
What do others do to solve this? I feel option 2 is perhaps the simplest approach, but are there other options?
What you are essentially looking for is Set-Based Validation (I think this blog does a nice job explaining the concept and how to deal with it in Axon). In short: validating that some field is (or is not) contained in a set of data. When doing CQRS, this becomes a somewhat interesting concept to reason about, with several solutions out there (as you've already portrayed).
I think the best solution is summarized under your second option: use a dedicated persistence layer for the email addresses. You'd simply create a very concise model containing just the email addresses, which you would validate prior to issuing the ContactCreateCommand. Note that this persistence layer belongs to the Command Model, as it is used to perform business validation. You'd thus have an example where your Command Model contains not only Aggregates but also Views. And as you've rightfully noted, this View needs to be optimized for its use case, of course. Introducing a cache that is built on application start-up wouldn't be too bad.
To keep this email-addresses view as up to date as possible, it is smartest to ensure it is updated in the same transaction in which the ContactCreatedEvent (which contains a new email address, I assume) is published. You can do this by having a dedicated Event Handling Component for your "Email Addresses View" that is updated through a SubscribingEventProcessor (a SEP). This works because the SEP is invoked by the same thread that publishes the event (your aggregate).
You have a couple of options when it comes to validating against this model prior to sending the command. You could use a MessageDispatchInterceptor that only reacts to the ContactCreateCommand, for example. Or you introduce a Handler Enhancer dedicated to reacting to the ContactCreateCommand to perform this validation. Or you introduce another command, like RequestContactCreationCommand, targeted at a regular component; this component handles the command, validates the model and, if approved, dispatches a ContactCreateCommand.
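The last option can be sketched in plain Java, with an in-memory Set standing in for the email-addresses view (all names here are hypothetical; in a real application the view would be the one kept up to date by the SubscribingEventProcessor, and the handler would dispatch commands via Axon's command gateway):

```java
import java.util.HashSet;
import java.util.Set;

// Stand-in for the "Email Addresses View" of the Command Model.
class EmailAddressesView {
    private final Set<String> emails = new HashSet<>();

    boolean contains(String email) {
        return emails.contains(email);
    }

    void add(String email) {
        emails.add(email);
    }
}

// Regular component handling a hypothetical RequestContactCreationCommand:
// validate against the view first, and only then dispatch the real command.
class ContactCreationHandler {
    private final EmailAddressesView view;

    ContactCreationHandler(EmailAddressesView view) {
        this.view = view;
    }

    /** Returns true if a ContactCreateCommand would be dispatched. */
    boolean handle(String contactId, String email) {
        if (view.contains(email)) {
            return false; // duplicate email: reject instead of dispatching
        }
        // In Axon the view is updated by the ContactCreatedEvent handler;
        // here we update it directly to keep the sketch self-contained.
        view.add(email);
        return true; // here you would dispatch ContactCreateCommand
    }
}
```

The point of the sketch is only the ordering: the uniqueness check happens before the command that creates the aggregate is ever dispatched, so no invalid Contact reaches the Event Store.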
That's my two cents on the situation, hope this helps @vcetinick!

Do Axon state-based aggregates have a way of specifying @CreatedDate and @LastModifiedDate?

When creating an Axon JPA state-based aggregate, is there a way to mark certain fields as the @CreatedDate and @LastModifiedDate (as is possible with Spring Data JPA)?
In other words, does Axon have functionality whereby, if any state of the aggregate is changed, Axon automatically updates the @LastModifiedDate without us having to repeat it in every @CommandHandler?
Try using a @CommandHandlerInterceptor inside your aggregate to intercept all commands and set the lastModifiedDate field:
@CommandHandlerInterceptor
public Object intercept(Object myCommand, InterceptorChain interceptorChain) throws Exception {
    this.lastModifiedDate = Instant.now();
    return interceptorChain.proceed();
}
I believe the proper solution would be to implement Axon's HandlerEnhancerDefinition interface to update these fields. This way you can grab the same timestamp (Instant) from the event that gets persisted in the event store and use that on your state-stored aggregate to make them match.
I wrote a blog post with a working example and a detailed explanation of how to do this: https://michael.jeszenka.com/automatically-updating-timestamp-fields-for-axon-state-stored-aggregates/
Essentially, you implement the wrapHandler() method to specify which types of handlers you want to wrap with your enhancer. Then you define a wrapper class that executes the desired behavior, which in our case is automatically setting the timestamps of the state-stored aggregate. This wrapper class needs to implement the Object handle(Message<?> message, T target) method, which lets us grab the event timestamp from the metadata and use it to set the state-stored aggregate's fields.
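A rough sketch of that enhancer, written against Axon 4's HandlerEnhancerDefinition and WrappedMessageHandlingMember types as I understand them (the TimestampedAggregate interface is a hypothetical marker for aggregates exposing the field; see the linked post for a tested version):

```java
import java.time.Instant;

import org.axonframework.eventhandling.EventMessage;
import org.axonframework.eventsourcing.EventSourcingHandler;
import org.axonframework.messaging.Message;
import org.axonframework.messaging.annotation.HandlerEnhancerDefinition;
import org.axonframework.messaging.annotation.MessageHandlingMember;
import org.axonframework.messaging.annotation.WrappedMessageHandlingMember;

// Hypothetical marker interface for aggregates with a lastModifiedDate field.
interface TimestampedAggregate {
    void setLastModifiedDate(Instant timestamp);
}

public class TimestampEnhancer implements HandlerEnhancerDefinition {

    @Override
    public <T> MessageHandlingMember<T> wrapHandler(MessageHandlingMember<T> original) {
        // Only wrap event (sourcing) handlers; leave all other handlers untouched.
        return original.hasAnnotation(EventSourcingHandler.class)
                ? new TimestampedMember<>(original)
                : original;
    }

    private static class TimestampedMember<T> extends WrappedMessageHandlingMember<T> {
        TimestampedMember(MessageHandlingMember<T> delegate) {
            super(delegate);
        }

        @Override
        public Object handle(Message<?> message, T target) throws Exception {
            if (message instanceof EventMessage && target instanceof TimestampedAggregate) {
                // Use the event's own timestamp so the state-stored aggregate
                // matches what is persisted in the event store.
                ((TimestampedAggregate) target)
                        .setLastModifiedDate(((EventMessage<?>) message).getTimestamp());
            }
            return super.handle(message, target);
        }
    }
}
```

The enhancer would then be registered with the Configurer (or via a META-INF/services entry for HandlerEnhancerDefinition).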

Modifying a Biztalk message from custom code

Disclaimer: I am a complete BizTalk newbie.
I need to be able to read and potentially edit 4 nodes in a BizTalk message. Preferably this should be done from a C# helper class, as I am making a service call and also have unit tests written for this.
I already have this class wired up and working with the XLANGMessage class; the problem I am running into is that at this point in the orchestration the message is a schema-based type and doesn't seem to offer any way for me to modify it.
I've done some reading and found a few ideas but have not been able to confirm if any of these can work from custom code.
1. Write a map to transform the incoming message to the desired type, or
2. Write something like this in your helper component to transform the message:
public XmlDocument TransformMessage(XLANGMessage message)
Then pass the resulting document to a BizTalk message in a Message Assignment shape:
responseMessage = xmlDocument;
You may get better performance if you pass streams instead of messages around.
You can pass messages into and out of C# helper classes easily. The simplest way is to treat input parameters and return values as being of type System.Xml.XmlDocument. The XLANG/s engine will safely cast back and forth between the XLANGMessage type and XmlDocument.
As you are essentially creating a "new" instance of the message (messages are immutable in BizTalk), the call to your helper class needs to be performed in a Message Assignment shape, with the outer Construct shape constructing the copy of your original message.
public static XmlDocument UpdateMyMessage(XmlDocument sourceMessage)
{
    /* Do stuff to your message here */
    return sourceMessage;
}
A best practice to consider is to declare all your C# helper methods as static. This avoids issues with de/serialisation of your helper class during dehydration.
Are BizTalk messages immutable?
Generally speaking they are. However, by creating a "corrective" orchestration and using a pass-by-reference option on the incoming message parameter, an existing message can be modified.

ASP.net Routing - using database queries in order to determine the physical file, and add extra route data from query results

My question is regarding page routing in an ASP.NET (VB) Web Forms website.
I need to route to 2 .aspx pages in multiple ways, e.g.
routes.MapPageRoute("SEO", "{Title}/{Id}", "~/PageA.aspx")
routes.MapPageRoute("Catalogue", "Issue{IssueNumber}-{PageNumber}", "~/PageA.aspx")
but I need to implement some logic involving database queries (LINQ to SQL) on both routes e.g.
Route 1) Check a bit field, if false then physical file = PageA.aspx, true then PageB.aspx
Route 2) Lookup IssueNumber and PageNumber, retrieve PageId and add to RouteData, set physical file = PageA.aspx
I think the best way of doing this, is to implement an IRouteHandler class but I've not been able to determine:
Where to write the database queries in such class
How to set the physical file in the class
Where/how to add a new value to the route data i.e. PageId
Where to check that Id and Number fields are actually integers (constraints?)
I can't find any useful VB.net documentation; any suggestions?
If not, I'm going to have to resort to an intermediate .aspx page (i.e. Transfer.aspx), do the database queries there, store the return values in session variables, and then do a Server.Transfer to PageA.aspx. But this seems like an old-fashioned and inelegant way of doing it. Please help!
Instead of writing your own IRouteHandler, I'd suggest implementing your own Route class. Override GetRouteData and set up the RouteData object that you return according to your needs.
Where to write the database queries in such class
As mentioned above, GetRouteData is the place you are looking for.
How to set the physical file in the class
On the RouteData object you return, set RouteHandler to a new PageRouteHandler instance. You can pass the physical path to PageRouteHandler's constructor.
Where/how to add a new value to the route data i.e. PageId
Use the Values property of the RouteData object.
Where to check that Id and Number fields are actually integers (constraints?)
This should be done with route constraints. The sixth parameter to MapPageRoute, for example, is a RouteValueDictionary with constraints. To simply check that a parameter is an integer, use a regular expression, like so:
routes.MapPageRoute("RouteName", _
    "product/{id}", "~/Products.aspx", _
    True, New RouteValueDictionary(), _
    New RouteValueDictionary() From {{"id", "\d+"}})
See the "\d+" at the end? This is the regular expression that the id parameter needs to match.
If you need more complex constraints you can do that as well, see e.g. http://stephenwalther.com/blog/archive/2008/08/07/asp-net-mvc-tip-30-create-custom-route-constraints.aspx
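Putting those pieces together, a custom Route for the catalogue case might look roughly like this in VB.net (CataloguePageRoute and LookupPageId are hypothetical names; the LINQ to SQL query itself is not shown):

```vb
Imports System.Web
Imports System.Web.Routing

Public Class CataloguePageRoute
    Inherits Route

    Public Sub New()
        ' No fixed handler here; we choose it per request in GetRouteData.
        MyBase.New("Issue{IssueNumber}-{PageNumber}", Nothing)
    End Sub

    Public Overrides Function GetRouteData(httpContext As HttpContextBase) As RouteData
        Dim data As RouteData = MyBase.GetRouteData(httpContext)
        If data Is Nothing Then Return Nothing

        ' Database query (e.g. LINQ to SQL) to resolve the PageId.
        ' LookupPageId is a hypothetical helper wrapping that query.
        Dim pageId As Integer = LookupPageId(
            CInt(data.Values("IssueNumber")), CInt(data.Values("PageNumber")))

        ' Add the extra route value and select the physical file.
        data.Values("PageId") = pageId
        data.RouteHandler = New PageRouteHandler("~/PageA.aspx")
        Return data
    End Function
End Class
```

The route would be registered with routes.Add(New CataloguePageRoute()); the bit-field check for the first route would follow the same pattern, setting RouteHandler to either PageA.aspx or PageB.aspx based on the query result.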

Passing a web service an unknown number of parameters

I'm relatively new to utilizing web services. I'm trying to create one that will accept data from an ASP.NET form whose input controls are created dynamically at runtime, so I don't know how many control values will be passed.
I'm thinking I'll use jQuery's serialize() on the form to get the data, but what should the web service accept as a parameter? I thought maybe I could use serializeArray(), but I still don't know what type of parameter to accept for the JavaScript array.
Finally, I was thinking that I might need to create a simple data transfer object (DTO) to hold the data before sending it along to the web service. I just didn't want to go the DTO route if there was a much simpler way or an established best practice I should follow.
Thanks in advance for any direction you can provide, and let me know if I wasn't clear enough or if you have any questions.
The answer to the headline question (assuming this is an ASP.Net web service) is to use the params keyword in your web service method:
[WebMethod]
public void SendSomething(params string[] somethings)
{
    foreach (string s in somethings)
    {
        // do whatever you're gonna do
    }
}
Examples:
SendSomething("whatever");
SendSomething("whatever 1", "whatever 2", "whatever 3");
In fact, you don't even need the params keyword: using an ordinary array as a parameter will let you pass in an unknown number of values.
Well, I went with creating my own data transfer object, which I guess was always the obvious solution; I was just thinking that there was probably a recognized best practice for handling this.
