Add another Contract Model to Workflow Service - workflow-foundation-4

I have two workflows hosted through WorkflowServiceHost, and each contains some Receive activities that expose services to the outside world.
Some of the services my workflow services must expose are general purpose. For example, suppose there is a monitoring service that returns tracking information about a workflow instance.
Solution 1: in each workflow definition, add a Receive activity that returns the tracking information.
However, I am looking for another solution that does not define this functionality as an activity.
Note that I don't want to use another service to expose this functionality; I want to expose it in the same workflow service.
Thanks

If you want to expose everything as one service, you have two options:
Add the tracking requests to the workflow service, as you suggest.
Create a wrapper service, a regular WCF .svc file, forward the requests to the workflow as needed, and handle the tracking outside of the workflow.
If you really want to expose just a single external service, I would go for the second option. That said, I normally just expose two services, as they are different things; services are for machine consumption, not human consumption, so the two URLs are no problem.
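A minimal sketch of the second option. The contract, the TrackingInfo shape, and the TrackingStore helper are illustrative assumptions (only the WCF attributes themselves are standard); the idea is that tracking data is recorded outside the workflow, e.g. by a custom TrackingParticipant, and served by a plain .svc service hosted next to the WorkflowServiceHost:

using System;
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class TrackingInfo
{
    [DataMember] public Guid InstanceId { get; set; }
    [DataMember] public string State { get; set; }
    [DataMember] public DateTime LastUpdated { get; set; }
}

[ServiceContract]
public interface ITrackingService
{
    [OperationContract]
    TrackingInfo GetTrackingInfo(Guid instanceId);
}

// A regular .svc service hosted alongside the WorkflowServiceHost; it reads
// data recorded outside the workflow instead of relying on a Receive activity.
public class TrackingService : ITrackingService
{
    public TrackingInfo GetTrackingInfo(Guid instanceId)
    {
        return TrackingStore.Load(instanceId);
    }
}

internal static class TrackingStore
{
    // Stub: a real implementation would query wherever your tracking
    // participant persists its records (database, cache, etc.).
    public static TrackingInfo Load(Guid instanceId)
    {
        return new TrackingInfo { InstanceId = instanceId, State = "Unknown", LastUpdated = DateTime.UtcNow };
    }
}

The wrapper can also forward business operations to the workflow's own endpoints, so consumers still see a single external URL.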

Related

How to configure dynamic routing of gRPC requests with envoy, nomad and consul

We use nomad to deploy our applications, which provide gRPC endpoints, as tasks. The tasks are then registered in Consul using nomad's service stanza.
The routing for our applications is done with envoy proxy. We run central envoy instances load-balanced at IP 10.1.2.2.
The decision about which endpoint/task to route to is currently based on the host header, and every task is registered as a service under <$JOB>.our.cloud. This leads to two problems:
First, when accessing a service, its DNS name must be mapped to the load balancer IP, which leads to /etc/hosts entries like
10.1.2.2 serviceA.our.cloud serviceB.our.cloud serviceC.our.cloud
This is partially mitigated by using dnsmasq, but it is still a bit annoying when we add new services.
Second, it is not possible to have multiple services running at the same time that provide the same gRPC service. If we decide, for example, to test a new implementation of a service, we need to run it in the same job under the same name, and every service defined in a gRPC service file needs to be implemented.
A possible solution we have been discussing is to use the tags of the service stanza to add tags which define the provided gRPC services, e.g.:
service {
  tags = ["grpc-my.company.firstpackage/ServiceA", "grpc-my.company.secondpackage/ServiceB"]
}
But this is discouraged by Consul:
Dots are not supported because Consul internally uses them to delimit service tags.
Now we are thinking about doing it with tags like grpc-my-company-firstpackage__ServiceA instead. That looks really ugly, though :-(
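For illustration only, a hedged sketch (in C#, since the question does not fix a language) of what such a mangling scheme amounts to: encoding a fully qualified gRPC service name into a Consul-safe tag and back. The helper and its naming convention are assumptions, not anything Consul provides:

using System;

// Illustrative only: encode "my.company.firstpackage/ServiceA" into a tag
// Consul accepts, since dots are reserved as tag delimiters.
internal static class GrpcServiceTag
{
    public static string Encode(string fullyQualifiedService)
    {
        return "grpc-" + fullyQualifiedService.Replace(".", "-").Replace("/", "__");
    }

    public static string Decode(string tag)
    {
        // Note: this reverse mapping is ambiguous if package or service names
        // themselves contain '-'; a real scheme needs an unambiguous escape.
        string body = tag.Substring("grpc-".Length);
        string[] parts = body.Split(new[] { "__" }, StringSplitOptions.None);
        return parts[0].Replace("-", ".") + "/" + parts[1];
    }
}

The ambiguity noted in Decode is exactly why this scheme feels ugly.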
So my questions are:
Has anyone ever done something like that?
If so, what are recommendations on how to route to gRPC services which are autodiscovered with Consul?
Does anyone have some other ideas or insights into this?
How is this accomplished in, e.g., Istio?
I think this is a fully supported use case for Istio. Istio will help you with service discovery via Consul, and you can use route rules to specify which deployment provides the service. You can start exploring from https://istio.io/docs/tasks/traffic-management/
We do something similar to this, using our own product, Turbine Labs.
We're on a slightly different stack, but the idea is:
Pull service discovery information into our control plane. (We use Kubernetes but support Consul).
Organize this service discovery information by service and by version. We use the tbn_cluster, stage, and version tags (like here).
Since the version for us is the SHA of the release, we don't have formatting issues with it. The versions also don't have to be unique, because the tbn_cluster tag defines the first level of the hierarchy.
Once we have those, we use the UI / API to define all the routes (e.g. app.turbinelabs.io/stats -> stats_service). These rules include the tags, so when we deploy a new version (deploy != release), no traffic is routed to it. Releases are done by updating the rules.
(There are even some nice UI affordances for updating those rules for the common case of "release 10% of traffic to the new version," like a slider!)
Hopefully that helps! You might check out LearnEnvoy.io -- lots of tutorials and best practices on what works with Envoy. The articles on Service Discovery Integration and Incremental Blue/Green Releases may be helpful.

Should the services in my service layer live in separate projects/DLLs/assemblies?

I have an ASP.NET MVC 3 application that I am currently working on. I am implementing a service layer, which contains the business logic, and which is utilized by the controllers. The services themselves utilize repositories for data access, and the repositories use entity framework to talk to the database.
So, top to bottom, it is: Controller > Service Layer > Repository (each service depends on a single injectable repository) > Entity Framework > Single Database.
I find myself creating classes such as UserService, EventService, PaymentService, etc.
In the service layer, I'll have functions such as:
ChargePaymentCard(int cardId, decimal amount) (part of PaymentService)
ActivateEvent(int eventId) (part of EventService)
SendValidationEmail(int userId) (part of UserService)
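For concreteness, a sketch of those operations as interfaces; the signatures are taken from the list above, and the interface names are assumed:

public interface IPaymentService
{
    void ChargePaymentCard(int cardId, decimal amount);
}

public interface IEventService
{
    void ActivateEvent(int eventId);
}

public interface IUserService
{
    void SendValidationEmail(int userId);
}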
Also, as an example of a second place I am using this, I have a simple console application that runs as a scheduled task and utilizes one of these services. There is also an upcoming second web application that will need to use several of these services.
Further, I would like to keep us open to splitting things up (such as our single database) and moving to a service-oriented architecture down the road, breaking some of these out into web services (conceivably even to be used by non-.NET apps some day). I've been keeping my eyes open for steps that might make the leap to SOA less painful.
I have started down the path of creating a separate assembly (DLL) for each service, but I am wondering if I have started down the wrong path. I'm trying to be flexible and keep things loosely coupled, but is this path helping me at all (towards SOA or in general), or just adding complexity? Should I instead be creating a single assembly/DLL containing my entire service layer, and use that single assembly wherever any services are needed?
I'm not sure the implications of the path(s) I'm starting down, so any help on this would be appreciated!
IMO, the answer depends on a lot of factors in your application.
Assuming that you are building a non-trivial application (i.e. not a college/hobby project to learn SOA):
User Service / Event Service / Payment Service
-- Give each its own DLL & expose it as a WCF service if more than one application uses the service and sharing the DLL across applications is too much risk
-- These services should not have inter-dependencies between each other & should focus on their individual areas
-- Note: these services might share some common services like logging, authentication, data access, etc.
Create a Composition Service
-- This service will do the composition of calls across all the other services
-- For example: if an Order is placed & the business flow is Order Placed > Confirm User Exists (User Service) > Raise an OrderPlaced event (Event Service) > Confirm Payment (Payment Service); see the sketch after the next paragraph
-- All such composition of service calls can be handled in this layer
-- Again, depending on the environment, you might choose to expose this service as its own DLL and/or as a WCF service
-- Note: this is the only service which will hold references to the other services & will be the single point of composition
Now, with this layout, you have the option to call a service directly if you want to interact with that service alone, and you call the composition service when you need a business workflow in which different services are composed to complete the transaction.
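A hedged sketch of that composition layer; the interfaces and the order flow are illustrative assumptions based on the example above:

using System;

// Illustrative contracts for the individual services described above.
public interface IUserService    { bool UserExists(int userId); }
public interface IEventService   { void RaiseOrderPlaced(int orderId); }
public interface IPaymentService { void ConfirmPayment(int orderId); }

// The composition service is the only place that references the other
// services; it stitches them into the "order placed" business flow.
public class OrderCompositionService
{
    private readonly IUserService _users;
    private readonly IEventService _events;
    private readonly IPaymentService _payments;

    public OrderCompositionService(IUserService users,
                                   IEventService events,
                                   IPaymentService payments)
    {
        _users = users;
        _events = events;
        _payments = payments;
    }

    public void PlaceOrder(int userId, int orderId)
    {
        if (!_users.UserExists(userId))        // User Service
            throw new InvalidOperationException("Unknown user.");

        _events.RaiseOrderPlaced(orderId);     // Event Service
        _payments.ConfirmPayment(orderId);     // Payment Service
    }
}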
As a starting point, I would recommend that you go through any of the books on SOA architecture; it will help clear up a lot of concepts.
I tried to be as short as possible to keep this answer meaningful; there are tons of ways of doing the same thing, and this is just one of the possible ways.
HTH.
Having one DLL per service sounds like a bad idea. According to Microsoft, you'd want to prefer one large assembly over multiple smaller ones due to the performance implications (from here via this post).
I would split your base or core services into a separate project and keep most (if not all) services in it. Depending on your needs you may have services that only make sense in the context of a web project or a console app and not anywhere else. Those services should not be part of the "core" service layer and should reside in their appropriate projects.
It is better to separate the services from the consumers. In our projects we have two levels of separation: we group all the service interfaces into one Visual Studio project, and all the service implementations into another.
The consumer of the services needs to reference two DLLs, but it makes the solution more maintainable and scalable. We can have multiple implementations of the services.
For example, the interfaces project can define a contract for WebSearch, and there can be multiple implementations of WebSearch through different search service providers like Google Search, Bing Search, Yahoo Search, etc., as sketched below.
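A minimal sketch of that split, with illustrative names: the interface lives in one project, the provider-specific implementations in another.

using System.Collections.Generic;

// Interfaces project (e.g. MyCompany.Services.Interfaces)
public interface IWebSearch
{
    IList<string> Search(string query);
}

// Implementations project (e.g. MyCompany.Services.Implementations)
public class GoogleWebSearch : IWebSearch
{
    public IList<string> Search(string query)
    {
        // Call Google's search API here; stubbed for the sketch.
        return new List<string>();
    }
}

public class BingWebSearch : IWebSearch
{
    public IList<string> Search(string query)
    {
        // Call Bing's search API here; stubbed for the sketch.
        return new List<string>();
    }
}

Swapping providers then means swapping the registered implementation, without touching any consumer that codes against IWebSearch.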

The Purpose of a Service Layer and ASP.NET MVC 2

In an effort to understand MVC 2 and attempt to get my company to adopt it as a viable platform for future development, I have been doing a lot of reading lately. Having worked with ASP.NET pretty exclusively for the past few years, I had some catching up to do.
Currently, I understand the repository pattern, models, controllers, data annotations, etc. But there is one thing keeping me from understanding enough to start work on a reference application: the service layer pattern. I have read many blog posts and questions here on Stack Overflow, but I still don't completely understand the purpose of this pattern. I watched the entire video series at MVCCentral on the Golf Tracker application and also looked at the demo code posted there, and it looks to me like the service layer is just another wrapper around the repository pattern that doesn't perform any work at all.
I also read this post: http://www.asp.net/Learn/mvc/tutorial-38-cs.aspx and it seemed to somewhat answer my question, however, if you are using data annotations to perform your validation, this seems unnecessary.
I have looked for demonstrations, posts, etc. but I can't seem to find anything that simply explains the pattern and gives me compelling evidence to use it.
Can someone please provide me with a 2nd grade (ok, maybe 5th grade) reason to use this pattern, what I would lose if I don't, and what I gain if I do?
In the MVC pattern you have responsibilities separated between the three players: Model, View, and Controller.
The Model is responsible for doing the business stuff, the View presents the results of the business (also providing input to the business from the user), while the Controller acts like the glue between the Model and the View, separating the inner workings of each from the other.
The Model is usually backed by a database, so you have some DAOs accessing it. Your business does some... well... business, and stores or retrieves data in/from the database.
But who coordinates the DAOs? The Controller? No! The Model should.
Enter the service layer. The service layer provides high-level services to the controller and manages the other (lower-level) players (DAOs, other services, etc.) behind the scenes. It contains the business logic of your app.
What happens if you don't use it?
You will have to put the business logic somewhere and the victim is usually the controller.
If the controller is web-centric, it has to receive its input and provide its responses as HTTP requests and responses. But what if I want to call my app (and get access to the business logic it provides) from a Windows application that communicates via RPC or some other mechanism? What then?
Well, you would have to rewrite the controller and make the logic client agnostic. But with the service layer you already have that; you don't need to rewrite things.
The service layer communicates through DTOs, which are not tied to a specific controller implementation. As long as the controller (no matter what type of controller) provides the appropriate data (no matter the source), your service layer will do its thing, providing a service to the caller and hiding from the caller all the responsibilities of the business logic involved.
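A hedged sketch of that idea, with all names illustrative: the service speaks DTOs, so an MVC controller, a WCF operation, or a console app can call it unchanged.

// Domain entity and repository (the DAO behind the scenes).
public class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

public interface IOrderRepository
{
    Order FindById(int id);
}

// DTO: plain data, not tied to HTTP or any controller type.
public class OrderDto
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

// The service layer owns the business logic and only exposes DTOs.
public class OrderService
{
    private readonly IOrderRepository _repository;

    public OrderService(IOrderRepository repository)
    {
        _repository = repository;
    }

    public OrderDto GetOrder(int id)
    {
        Order order = _repository.FindById(id);
        return new OrderDto { Id = order.Id, Total = order.Total };
    }
}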
I have to say I agree with dpb above: the wrapper, i.e. the service layer, is reusable and mockable. I am currently in the process of including this layer in my app. Here are some of the issues/requirements I am pondering (very quickly :p) that could be of help to you:
1. Multiple portals (e.g. a bloggers portal, a client portal, an internal portal) which will need to be accessed by many different users. They all must be separate ASP.NET MVC applications (an important requirement).
2. Within the apps themselves, some calls to the database will be similar in the methods used and the way the data is handled from the repository layer. Without doubt, some controllers from each module/portal will make exactly the same call, or an overloaded version of it, hence a possible need for a service layer (coded to interfaces) which I will then compile into a separate class project.
3. If I create a separate class project for my service layer, I may need to do the same for the data layer, or combine it with the service layer and keep the model away from the web project itself. At least this way, as my project grows, I can swap out the data access layer (i.e. LinqToSql -> NHibernate), or a team member can, without touching code in any other project. The downside is that they could blow everything up, lol...

Avoiding having to map WCF's generated complex types

I have an ASP.NET MVC web app whose controllers use WCF to call into the domain model on a different server. The domain code needs to talk to a database, and access to the database server isn't always possible from the web servers (it depends on the customer site); hence the use of WCF to get to a place where my code is allowed to connect to the database server.
This is configurable, so if the controllers are able to access the database server directly, I use local instances of the domain objects rather than WCF.
Let's say I have a page asking for person details like age, name, etc. This is a complex type that is a parameter on my WCF operation, like this:
[OperationContract]
string SayHello(Person oPerson);
When I generate the client code (e.g. by adding a service reference in my client), I get a separate Person class that fulfills the WCF contract. The client, an MVC web app, can use this client-side Person class as the view model and all is well. I pass it straight into the WCF client methods and it all works brilliantly.
If my MVC client app is configured NOT to use WCF, I have a problem. If I am calling my domain objects directly from the controller (assume I have a domain access factory/provider set up), then I need the original Person class and not the WCF-generated Person class. This leads to my problem: I will have to map from one object to the other whenever I don't use WCF.
The main issue with this is that there are many domain objects that would need to be mapped, and errors may be introduced, such as new properties being forgotten in future changes.
I'm learning and experimenting with WCF and MVC; can you help me understand what my options are in this scenario? I'm sure there will be an easy way out of this, given the extensibility of WCF and MVC.
Thanks
It appears that you are not actually trying to use a service-oriented architecture. In this case, you can place the domain objects into a single assembly and share it between the WCF service and the clients. When creating the clients, use "Add Service Reference", and on the "Advanced" tab, enable type reuse ("Reuse types in referenced assemblies"). Either reuse types from all referenced assemblies, or choose the list of assemblies whose types you want to reuse.
Sound service-oriented architecture dictates that you use message-based communication regardless of whether your service is on another machine, in another process, in another appdomain, or in your own appdomain. You can use different endpoints with different bindings to take advantage of the speed of the link (HTTP, TCP, named pipes) based on the location of your service, but the code using that service remains the same.
This may not be the easiest or least time-consuming answer, but one thing you can do is avoid the "Add Service Reference" option, copy your contract interfaces into your MVC application, and initiate the connection to WCF manually without generating a service proxy. This lets you use one set of classes for your model objects, and you can control explicitly when to use WCF and when not to.
There's a good series of webcasts on WCF by Michele Leroux Bustamante, and I think in episode 2, she explains how to do exactly this. Check it out here: http://www.dasblonde.net/WCFWebcastSeries.aspx
Hope this helps!
One sound option is that you always use WCF, even if client and server are in the same process, as Aviad points out.
Another option is to define the service contracts on interfaces, and to put these, together with the data contracts, into an assembly that is shared between client and server. In the client, don't use svcutil or a service reference; instead, use ChannelFactory<T>.
This way, your client code uses the same interfaces and classes as the server, as sketched below.
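A short sketch of the ChannelFactory<T> approach; the IPersonService contract, the Person shape, and the "PersonService" endpoint name are illustrative assumptions:

using System.Runtime.Serialization;
using System.ServiceModel;

// Shared assembly: the same contract and data types are used by both sides.
[DataContract]
public class Person
{
    [DataMember] public string Name { get; set; }
    [DataMember] public int Age { get; set; }
}

[ServiceContract]
public interface IPersonService
{
    [OperationContract]
    string SayHello(Person oPerson);
}

// Client side: no svcutil-generated proxy, just the shared interface.
public static class PersonServiceClient
{
    public static string SayHello(Person person)
    {
        // "PersonService" names an endpoint in the client's config file.
        var factory = new ChannelFactory<IPersonService>("PersonService");
        IPersonService channel = factory.CreateChannel();
        try
        {
            return channel.SayHello(person);
        }
        finally
        {
            ((IClientChannel)channel).Close();
            factory.Close();
        }
    }
}

Because both sides compile against the same Person class, no mapping layer is needed when you bypass WCF and call the domain objects directly.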

Creating good interfaces, what should be included and what should be left out

I am in the process of updating a website for the third time in two years. It looks like this is going to happen all the time, and several websites are using the same DB. I want to use the same code for all of them and keep it easy to update in the future, so I plan on writing some interfaces and then placing the business logic in a service, to keep things consistent across the board and add in some unit testing.
So I am looking at my current repositories and I am not sure what should be in my Interface and what should be in my Service.
For example, I have an Add method: a no-brainer, I have an Add in the interface and an Add in the service.
Then I have an AuthenticateAccountManager method that takes three parameters. Should this be in both, or just in the service, with a simple Get method in my interface (say, by username), and the validation against the other two properties done in the service?
I also have a QualifyPartner method that sets a bool to true. Should this just be in the service and, again, have a simple Get method in my interface, trying to keep the interface as small as possible?
Following the Separation of Concerns principle, AuthenticateAccountManager is a service-level operation. It should call into your repository, which returns the raw user data; the service then authenticates, or not, based on what the repository returns.
The general guideline is that the repository is responsible only for retrieving and committing data. Interpreting the data and executing behavior based on it is business logic.
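A hedged sketch of that split; the account shape and the three-parameter signature are assumptions, since the question doesn't spell them out:

// Repository: data retrieval only, no business rules.
public interface IAccountRepository
{
    Account GetByUsername(string username);
}

public class Account
{
    public string Username { get; set; }
    public string PasswordHash { get; set; }
    public bool IsManager { get; set; }
}

// Service: interprets the raw data and owns the authentication rule.
public class AccountService
{
    private readonly IAccountRepository _repository;

    public AccountService(IAccountRepository repository)
    {
        _repository = repository;
    }

    public bool AuthenticateAccountManager(string username,
                                           string passwordHash,
                                           bool mustBeManager)
    {
        Account account = _repository.GetByUsername(username);
        return account != null
            && account.PasswordHash == passwordHash
            && (!mustBeManager || account.IsManager);
    }
}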
