I'm writing a custom email sending service for a client. The client also wants message templating, but they didn't specify whether it should live in the messaging service or not. So I'm thinking about best practice here. Should a messaging service be responsible for templating as well? Or should the templating happen before the call to the messaging service? What have you done? What works better and makes the most sense?
This is easy to answer with a question: are you going to use the messaging service for sending all kinds of messages (with or without templates) or just templated ones? (In other words, how reusable should the messaging service's functionality be?)
You mentioned two solutions in your question. Let's call them solution A and solution B.
Since clients constantly change their minds, you might later have to change whichever solution you adopted. Your implementation must be easy to change later on, so you can choose which one to implement like this:
Imagine you have implemented solution A and have to change it into B. How hard would that be, and what would it involve? Let's call this result 1.
Imagine you have implemented solution B and have to change it into A. How hard would that be, and what would it involve? Let's call this result 2.
Compare results 1 and 2, weighing their pros and cons.
Choose the one with the most pros.
You could also opt for a solution C: make the messaging service send all kinds of messages (generic) and include loosely coupled, pluggable templating (more specific). Package them together and you get a specific tool that you can later split with ease, or extend with more templating implementations if needed.
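To make solution C concrete, here is a minimal Java sketch; all type and method names are hypothetical. The sender stays generic, templating is a separate pluggable interface, and the combined service just layers one on top of the other, which keeps them easy to split later:

```java
import java.util.Map;

// Pluggable templating step (any implementation can be swapped in).
interface TemplateEngine {
    String render(String templateName, Map<String, Object> model);
}

// Generic sending, with no knowledge of templates.
interface MessageSender {
    void send(String to, String subject, String body);
}

class TemplatedMessageService {
    private final MessageSender sender;
    private final TemplateEngine templates;

    TemplatedMessageService(MessageSender sender, TemplateEngine templates) {
        this.sender = sender;
        this.templates = templates;
    }

    // Generic path: callers that already have a body bypass templating.
    void send(String to, String subject, String body) {
        sender.send(to, subject, body);
    }

    // Templated path: a thin convenience layered on top of the generic one.
    void sendTemplated(String to, String subject, String templateName,
                       Map<String, Object> model) {
        send(to, subject, templates.render(templateName, model));
    }
}
```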
Just my 2 cents!
We are migrating a monolithic application to a more distributed architecture, and we decided to use AxonFramework.
In Axon, as messages are first-class citizens, you get to model them as POJOs.
Now I wonder: since one event can be dispatched by one service and listened to by any of the others, how should we handle event distribution?
My first impulse is to package the events in a separate project as a JAR file, but this goes against the rule that microservices should not share implementations.
Any suggestion is welcome.
Having some form of 'common' module is definitely not uncommon, although I'd personally use that 'common' module for that specific application alone.
I'd generally say you should regard your commands/events/queries as the API of your application. As such, it might be beneficial to share the event structure with other projects, but not the actual POJO itself. You could, for example, think about using ProtoBuf for this use case, where ProtoBuf describes a schema for your events.
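For illustration, a minimal ProtoBuf schema for a hypothetical event might look like the sketch below; each service then generates its own classes from the shared schema instead of depending on a shared, compiled POJO:

```proto
// events.proto -- the shared contract; event and field names are illustrative.
syntax = "proto3";

package com.example.events;

message OrderShippedEvent {
  string order_id    = 1;
  string customer_id = 2;
  int64  shipped_at  = 3; // epoch millis
}
```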
Another thing to think about is not exposing your whole 'event API'. Typically you'll have quite a few fine-grained events, things that other (micro)services in your environment are not interested in. There are, however, always a couple of 'very important events', put differently, 'milestone events', which others definitely are interested in.
In some scenarios these milestone events aren't a direct POJO following from your domain, but rather an accumulation of several events.
It is thus not uncommon to have a service which accumulates these fine-grained, internal events and publishes a milestone event in response. Those milestone events are typically better suited as the event API within your microservice architecture.
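As a sketch of what such an accumulator could look like on Axon 4 (`@EventHandler` and `EventGateway` are real Axon APIs; the event types and the "fulfilled" rule here are invented):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

import org.axonframework.eventhandling.EventHandler;
import org.axonframework.eventhandling.gateway.EventGateway;

// Hypothetical fine-grained internal events and the milestone event.
class PaymentReceivedEvent {
    public final String orderId;
    public PaymentReceivedEvent(String orderId) { this.orderId = orderId; }
}
class OrderShippedEvent {
    public final String orderId;
    public OrderShippedEvent(String orderId) { this.orderId = orderId; }
}
class OrderFulfilledMilestone {
    public final String orderId;
    public OrderFulfilledMilestone(String orderId) { this.orderId = orderId; }
}

public class OrderMilestonePublisher {
    private final EventGateway eventGateway;
    private final Set<String> paid = ConcurrentHashMap.newKeySet();
    private final Set<String> shipped = ConcurrentHashMap.newKeySet();

    public OrderMilestonePublisher(EventGateway eventGateway) {
        this.eventGateway = eventGateway;
    }

    @EventHandler
    public void on(PaymentReceivedEvent event) {
        paid.add(event.orderId);
        publishIfFulfilled(event.orderId);
    }

    @EventHandler
    public void on(OrderShippedEvent event) {
        shipped.add(event.orderId);
        publishIfFulfilled(event.orderId);
    }

    // Only once all internal events have been seen do we emit the single
    // coarse-grained milestone event that other services subscribe to.
    private void publishIfFulfilled(String orderId) {
        if (paid.contains(orderId) && shipped.contains(orderId)) {
            eventGateway.publish(new OrderFulfilledMilestone(orderId));
        }
    }
}
```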
So that's a couple of ideas there for you, hope they give you some insights.
I'd like to give a clear-cut solution to your question, but such an answer always hides behind 'it depends'.
You are right, the "official" rule is not to share models. So if you have distributed dev-teams, I would stick to it.
However, I tend not to follow it strictly when I have components that are decoupled but developed by the same team, or by teams with high interaction ...
A lot of our use cases for BizTalk involve simply mapping and routing HL7 2.x messages from one system to another. Implementing maps and associating them with send/receive ports is generally straightforward, but we also need to do some content-based filtering on the sending side.
For example, we may want to send ADT A04 and ADT A08 messages to system X only if the sending facility is any of 200 facilities (out of a possible 1000 facilities in our organization), while system Y needs ADT A04, A05, and A08 messages for a totally different set of facilities, and only for renal patients.
Because we're just routing messages and not really managing business processes here, utilizing orchestrations solely to call out to the business rules engine is a little overkill, especially considering that we'd probably need a separate orchestration for each ADT type because of how the schemas work. Is it possible to implement filter rules like this without using orchestrations? The filter functionality of send ports looks a little too rudimentary for what we need, but at the same time I'd rather not develop and manage orchestrations.
You might be able to do this with property schemas...
You need to create a property schema and include the properties (from the other schemas) that you want to use for routing. Once you deploy the schema, those properties will be available for use as filters on the send port. Start from here; you should be able to find examples somewhere...
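For what it's worth, a property schema is just an XSD whose schemaInfo is marked with schema_type="property". A rough, hand-trimmed sketch follows; the element names are invented, and the schema Visual Studio generates will carry additional BizTalk annotations (such as property GUIDs) that are omitted here:

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Illustrative only: a minimal BizTalk property schema for routing. -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           xmlns:b="http://schemas.microsoft.com/BizTalk/2003"
           xmlns="http://MyOrg.Hl7Routing.PropertySchema"
           targetNamespace="http://MyOrg.Hl7Routing.PropertySchema">
  <xs:annotation>
    <xs:appinfo>
      <b:schemaInfo schema_type="property" />
    </xs:appinfo>
  </xs:annotation>
  <xs:element name="SendingFacility" type="xs:string" />
  <xs:element name="TriggerEvent" type="xs:string" />
</xs:schema>
```

Once deployed, the send port filter can reference these promoted properties by their qualified names.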
As others have suggested, you can use a custom pipeline component to call the Business Rules Engine.
And rather than trying to create your own, there is already an open-source one available called the BizTalk Business Rules Engine Pipeline Framework.
By calling BRE from the pipeline you can create complex rules which then set simple context properties on which you can route your messages.
Full disclosure: I've worked with the author of that framework when we were both at the same company.
I have some internal-facing ASP.NET web services that have had numerous API additions over the years. Some of the original web methods, while still available for consumption, have recommended replacements available. I would like to steer consuming clients toward using these new methods so I can retire and eventually remove their elders.
If this were a client API rather than a web service API, I'd just mark the offending methods with the Obsolete attribute. But .NET attributes do not get serialized and are not visible to consuming developers when they add or refresh web references.
What techniques are recommended for obsoleting ASP.NET web methods? Is there anything built into the tooling (VS2005-2010)? I don't want to break any of the existing clients, so I can't simply remove the web methods outright or change their internal behavior to report their usage as erroneous.
Tim, the short answer to this is unfortunately that you have to contact those clients, communicate the change to them, and agree on timelines, etc. There might be something you can do to smooth the process over for them, particularly if they are not IT-savvy clients and had to get their applications built by external contractors.
You can butter this up any way you like for them, from 'the system is going to be replaced' to 'we are making it bigger, better, and faster'.
Additionally, you can build in code to slow the old methods down (NOT RECOMMENDED), so that when clients inquire you can tell them: we no longer support that system; it has been replaced by system 'X'.
If the new methods you are talking about are still just web methods, you can simply point the old ones at the new ones and let existing clients keep using the old signatures.
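The question is about ASP.NET, but the forwarding pattern itself is language-neutral; here is the shape as a Java sketch with invented names:

```java
// Keep the old operation alive as a thin wrapper around its replacement,
// so both stay behaviorally identical until the old one can be retired.
public class CustomerService {

    /** @deprecated Kept only for existing consumers; use getCustomerV2. */
    @Deprecated
    public String getCustomer(String customerId) {
        // The old entry point simply forwards to the replacement.
        return getCustomerV2(customerId);
    }

    public String getCustomerV2(String customerId) {
        // ... the real implementation lives here ...
        return "customer:" + customerId;
    }
}
```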
Another option is to identify the clients stuck on the old methods, get their IP addresses, and lock the old methods down so only those clients can use them; this way you ensure new clients will not attempt to connect to the old methods.
Other than that, I cannot think of anything that will not be a pain or difficult for both you and the client.
I have a small RIA that I built as a learning/make-my-life-easier project that uses Flex and ASP.NET. Currently, my architecture utilizes straight HTTP posts, and the server responds with XML. I posted another question about security in my web app and some people mentioned SOAP. SOAP is something I've never actually used, and I was wondering what the pros/cons are of using SOAP over my current architecture, and subsequently, how much work is required to convert such an application to use SOAP.
Thanks,
Chris
Since you have already implemented your own message schemas for sending and receiving, SOAP in and of itself will not give you any added value. The added value comes from SOAP's support for the WS-* standards, covering security, transactions, and several other goodies. The recommended way to take advantage of all that is to use WCF rather than ASP.NET, because WCF supports the latest versions of those standards.
FYI, when trying to use SOAP with Flex: the XMLDecoder in Flex does not currently decode some complex data types appropriately, making it appear that you are not receiving all the data. I have tracked the error down to the XMLDecoder, where I can see that the correct data is received but is not appropriately packaged in the ResultEvent, requiring me to override the XMLDecoder, which, as I'm sure you can imagine, is not very fun. Just wanted to put in my 2 cents: if you are thinking of moving in that direction, it would probably be nice to know it doesn't always work for very complicated data types (in my case, a collection of objects containing 2 strings and 2 arrays only returned a collection of 1 string). Granted, it does work 99% of the time, but not always.
We have two client apps (a web app and an agent app) accessing methods on the same service, but with slightly different requirements. My team wants to control behaviour on the service side by passing an ApplicationType parameter to every method, essentially an enum containing the name of the calling client application, which is then used as a key for a database lookup to configure the service with client-specific options.
Something about this makes me uneasy as I don't think the service should really have to be aware of which client is calling it. I'm being told that it's easier to do it this way than pass a load of options dynamically through the method call.
Is there anything wrong with the client application telling the service who they are? Or is there really no difference between passing a config key versus a set of parameterized options?
One immediate problem I can see is that if we ever opened the service to another client run by a third party, we'd have to maintain their configuration settings locally for them. At the moment we own both client apps so it's not so much of a problem.
How would you do it?
In a layered solution, you should think of your layers as the layers of an onion, with dependencies always pointing inwards, never outwards.
So your GUI/app layer should depend on the business logic layer, the business logic layer should depend on the data access layer, and so on.
Unless you categorize the clients (web, win, wpf, cli) or generalize with client profiles (which client applications can configure), I would never pass in the name of the calling application, as this would make the business logic layer aware of, and dependent upon, the outer layer.
What kind of differences are we talking about that would depend on the type of application? If you elaborate a bit on the differences here, perhaps someone can come up with some helpful advice on other ways to solve this.
But I would definitely look for other ways before going down your described path.
Can't you create two different services, one for each application? The two services would share a lot of code, or call a single internal service with different parameterization depending on which outer service was called.
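A rough sketch of that shape in Java (all names invented): the internal service takes explicit options, and each outward-facing service bakes in its own configuration, so the core never needs to know who is calling.

```java
// Explicit options the core logic actually depends on.
class ReportOptions {
    final boolean includeSensitiveFields;
    ReportOptions(boolean includeSensitiveFields) {
        this.includeSensitiveFields = includeSensitiveFields;
    }
}

class InternalReportService {
    String buildReport(String accountId, ReportOptions options) {
        // Core logic depends only on the options, never on the caller's name.
        return options.includeSensitiveFields
                ? "full report for " + accountId
                : "redacted report for " + accountId;
    }
}

// Endpoint exposed to the web app: configuration fixed here.
class WebReportService {
    private final InternalReportService core = new InternalReportService();
    String buildReport(String accountId) {
        return core.buildReport(accountId, new ReportOptions(false));
    }
}

// Endpoint exposed to the agent app: different fixed configuration.
class AgentReportService {
    private final InternalReportService core = new InternalReportService();
    String buildReport(String accountId) {
        return core.buildReport(accountId, new ReportOptions(true));
    }
}
```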
From a design perspective, this is no different from having users with different profiles. From a security perspective, I hope your applications are doing something to identify themselves, lest users of one application figure out a way to invoke the other application's logic as a hack. (Imagine an HR application being used by the mafia and a bank at the same time; one customer would be interested in hacking the other customer's application on a shared application host.)
In .NET the design doesn't feel this way because the credentials live on the thread (i.e., when you set the IPrincipal, that info rides on the thread; it is communicated along with each method call, but not as a parameter).
Maybe what you are looking for in terms of a more elegant design is an ApplicationIdentity attribute. You'd have to write a custom one; I don't know of one in the framework right now.
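For illustration, the Java analogue of that thread-riding identity would be something like the hypothetical sketch below: set once per request, read wherever needed, and never passed as a method parameter.

```java
// Hypothetical ambient identity, analogous to .NET's IPrincipal riding on
// the thread: each request thread carries its caller's identity implicitly.
public final class CallerContext {
    private static final ThreadLocal<String> CALLER = new ThreadLocal<>();

    private CallerContext() {}

    public static void set(String applicationIdentity) {
        CALLER.set(applicationIdentity);
    }

    public static String get() {
        return CALLER.get();
    }

    public static void clear() {
        CALLER.remove(); // avoid identity leaks on pooled threads
    }
}
```

A request filter would call CallerContext.set(...) after authenticating the client and CallerContext.clear() in a finally block, leaving the business methods with clean signatures.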
This is a hard topic to discuss without a solid example.
You are right to feel that way. Sending in the client type to change behaviour is not correct. It's not a bad idea for logging... but that's about it.
Here is what I would do:
Review each method to see what needs to be different and why.
Create different methods for different usages. The method name should be self explanatory. If you ever need to break compatibility, you have more control (assuming you're not using a versioning system which would be overkill for an in-house-only service).
In some cases request parameters (flags/enum values) are more appropriate.
In some cases knowing the operating environment is more appropriate (especially for data security). The operating environment is almost always sent during a login request, something like "attended"/"secure" (agent client) vs. "unattended"/"not secure" (web client). Then you exchange a session key (an HTTP cookie or an application-level session id). Sessions obviously don't work if you need to be 100% stateless, especially if you want to scale out without session replication; if you have that requirement, send the structure in every request.
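A minimal sketch of that login exchange in Java (all names hypothetical): the environment is declared exactly once, at login, and a session id stands in for it on every later request.

```java
// Hypothetical login exchange: the client declares its operating environment
// once, then receives a session key that represents it afterwards.
class LoginRequest {
    final String username;
    final String credential;
    final String operatingEnvironment; // e.g. "attended" (agent) vs "unattended" (web)

    LoginRequest(String username, String credential, String operatingEnvironment) {
        this.username = username;
        this.credential = credential;
        this.operatingEnvironment = operatingEnvironment;
    }
}

class LoginResponse {
    final String sessionId; // subsequent requests send only this

    LoginResponse(String sessionId) {
        this.sessionId = sessionId;
    }
}
```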
Think of requests like functions in your code. You wouldn't add a magic parameter that changes the behaviour of a function; you would create multiple functions that each behave differently, and whoever is using them decides which one to call.
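In code terms (hypothetical names), the contrast looks like this:

```java
enum ClientType { WEB, AGENT }

// The "magic parameter" version: behaviour pivots on who is asking, so
// every new client type means revisiting this method.
class MagicParameterService {
    String fetchRecord(String id, ClientType caller) {
        return caller == ClientType.AGENT
                ? "full record " + id
                : "redacted record " + id;
    }
}

// The explicit version: the caller picks the behaviour by picking the method.
class ExplicitService {
    String fetchRecord(String id)         { return "redacted record " + id; }
    String fetchRecordUnmasked(String id) { return "full record " + id; }
}
```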
So why is client type so wrong? Client type has no specific meaning on its own; it has many meanings, and they may change over time. It's simply informational, which is why it is a handy thing to log. An operating environment, by contrast, does have a specific meaning.
Here is a scenario to consider: what if a new client type is developed that is slightly different, in a way that would break compatibility with the original request? Now you have two requests: two clients use request A and one client uses request B. If you pass in a client type to each request, the server is expected to work for every possible client type. Much harder to test and maintain!!