Web Services Model - ASP.NET

I have one site (MySite.com), one web service (WebService.MySite.Com) and one common library (LibCommon).
The common library contains a model, e.g. UserModel = LibCommon.UserModel.
The web service has a method 'void CheckUser(LibCommon.UserModel model)'.
However, when I add the 'WebService' reference to 'MySite.com', the method changes so that it looks like 'void CheckUser(WebService.MySite.Com.UserModel model)'.
Fair enough, I thought, I can just cast one object to the other since they are identical, but .NET says I cannot do this.
Is there a workaround for this?
Cheers,

Note this is for WCF, and not ASMX web services:
You can't directly cast the original data class to the proxied class generated by the WCF service reference wizard. However, you can reuse the original data class in the client:
Add the library reference containing the transfer objects (i.e. LibCommon) as a reference to both the Service (WebService) and the Client (Mysite.com). When adding the service reference on the client, choose the advanced tab and then select Reuse types in referenced assemblies. This will then reuse the common data transfer classes, instead of duplicating the types with proxies.
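For illustration, a minimal sketch of what the shared contract might look like once type reuse is enabled. The names below come from the question; the UserName member and the service interface name are made up, and the attributes assume the usual System.Runtime.Serialization / System.ServiceModel ones:
// In LibCommon - the data transfer class shared by service and client
[DataContract]
public class UserModel
{
    [DataMember]
    public string UserName { get; set; }   // illustrative member
}

// In the web service - the operation keeps referring to LibCommon.UserModel, and
// with "Reuse types in referenced assemblies" the client-side proxy does too
[ServiceContract]
public interface IUserService
{
    [OperationContract]
    void CheckUser(LibCommon.UserModel model);
}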
Note, however, that by eliminating the proxied data class you are introducing direct coupling between client and server; you should only do this if you have control over both client and server with respect to versioning (e.g. you are able to deploy new versions of both client and server simultaneously).
As an aside, it is possible to eliminate the duplicated service interface as well, by moving the server-side service contract interface into a separate, common assembly and then using a technique such as this or this.

Related

How do I get access to Castle DynamicProxy generation options within MOQ?

Does MOQ give access to Castle's DynamicProxy generation? Or are there configurable static methods or something in the Castle namespace that would allow me to tune MOQ's proxy gen behavior?
Some Background
I am mocking a WCF service endpoint (IWhatever). WCF automatically adds async callback options for methods (e.g. IWhatever.DoWork() is also realized as IWhatever.DoWorkAsync()).
I'm looking to self-host the Mock<IWhatever> object as a service, basically spoofing this external web service to my system. However, when the self-hosted WCF host tries to create a DoWorkAsync() method, it already exists... which ultimately throws errors when opening the self-hosted/mocked IWhatever endpoint. (Note: I don't have access to the original contract to use directly.)
So it looks like Castle DynamicProxy allows one to define which methods should be generated (see: http://kozmic.net/2009/01/17/castle-dynamic-proxy-tutorial-part-iii-selecting-which-methods-to/). I was thinking I would use this to avoid intercepting calls on methods ending with "[...]Async". However, I don't see where I would add this customization rule to the proxy generation within MOQ; hence my question.
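For reference, a rough sketch of the kind of Castle DynamicProxy hook the linked tutorial describes, skipping methods whose names end in "Async". The class name is made up, member names on IProxyGenerationHook have varied slightly across Castle versions, and whether Moq exposes a place to plug such a hook in is exactly what is being asked here:
using System;
using System.Reflection;
using Castle.DynamicProxy;

// Illustrative hook: tell the proxy generator not to intercept *Async methods
public class SkipAsyncMethodsHook : IProxyGenerationHook
{
    public bool ShouldInterceptMethod(Type type, MethodInfo methodInfo)
    {
        return !methodInfo.Name.EndsWith("Async");
    }

    public void MethodsInspected() { }

    public void NonProxyableMemberNotification(Type type, MemberInfo memberInfo) { }
}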

WCF service architecture query

I have an application that consists of a web application and multiple Windows services; only one Windows service is installed, depending on which version of the backend software is used.
Currently, data is saved by the web app in a database; the relevant installed service then picks up the data and posts it into the backend system.
I want to change this to use WCF services so the resulting data is returned directly to the web app.
I have not used WCF services before, but I'm assuming I can do something like this:
WebApp.Objects.dll - contains database objects, e.g. a PurchaseOrder object
WebApp.Service.Contracts.dll - here I can describe the service methods; this will reference WebApp.Objects.dll so I can take a PurchaseOrder object as a parameter
WebApp.Service.2011.dll - the actual service for the 2011 version of the backend system; this will reference WebApp.Service.Contracts.dll
WebApp.Service.2012.dll - the actual service for the 2012 version of the backend system; this will reference WebApp.Service.Contracts.dll
So, my question is: does the web app need to know the specifics of which backend WCF service is used? I just want to call a service with the specified interface and not care about how it's implemented or what it does internally, only that it returns the purchase order that was created in the backend system (whether it returns an interface or a concrete class).
Will I be able to create a service client without needing to know whether the 2011 or 2012 WCF service is being used?
As long as you are able to use the exact same contract for all the versions, the web application does not need to know which version of the WCF service it is accessing.
In the configuration of the web application, you specify the URL and the contract. However, besides the contract there might be other differences between the services. In an extreme example, v2011 might use a different binding than v2012 of the backend, which is not very likely from your description. But subtler differences in the configuration or behavior of the services should also be addressed in the configuration files. For example, if v2012 needs longer for an action than v2011 does, the timeouts need to be configured so that the longer time of v2012 does not lead to an expiration.
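As a sketch, assuming a shared contract along these lines lives in WebApp.Service.Contracts.dll, the web app can talk to whichever service is deployed purely through the contract and an endpoint named in its config. The interface name, operation, and endpoint name below are all illustrative:
[ServiceContract]
public interface IBackendService
{
    [OperationContract]
    PurchaseOrder SubmitPurchaseOrder(PurchaseOrder order);
}

// In the web app: only the contract and a named endpoint from web.config are used,
// so it does not matter whether the 2011 or 2012 implementation answers the call
var factory = new ChannelFactory<IBackendService>("BackendEndpoint");
IBackendService service = factory.CreateChannel();
PurchaseOrder confirmed = service.SubmitPurchaseOrder(new PurchaseOrder());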

Creating/Exposing WCF services from an existing ASP.NET application

We need to expose some services (i.e. AddressValidatorService, CustomerFinderService) that currently reside in an ASP.NET application to other applications within our organization. Exposing these services via WCF seems like a natural fit, but I don't see any best-practices for how to pull these common services into a WCF wrapper in such a way that my existing ASP.NET application can continue to use them with minimal code changes and/or awareness that the service they are consuming is no longer in-process.
I'm especially looking for recommendations on how to structure the existing ASP.NET solution and whether to host our new WCF in the same solution or in some new shared WCF solution referenced by both our ASP.NET application and external callers.
Also, is it bad practice to simply promote the DTOs currently only consumed in-process via ASP.NET to full-fledged data contracts, or is it preferable to create duplicate DTOs that are explicitly decorated with [DataContract]? The latter seems like a maintenance nightmare.
To answer your second question:
Also, is it bad practice to simply promote the DTOs currently only consumed in-process via ASP.NET to full-fledged data contracts, or is it preferable to create duplicate DTOs that are explicitly decorated with [DataContract]? The latter seems like a maintenance nightmare.
It is considered bad practice to expose your business model as WCF contracts. So if your DTOs are replicas of your domain model, it would be a strict no-no, because:
1. any change in the model would directly affect the contracts and hence all the clients using them
2. you would be exposing your business "know-how" to the outside world.
The latter approach (duplicate DTOs) can get difficult for any evolving system, but there are various open source tools (like AutoMapper) that ease your mapping nightmares.
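If you do go with separate [DataContract] DTOs, the mapping itself can be kept to a few lines. A minimal sketch using AutoMapper's instance-based API, with type and variable names made up for illustration:
// Configure the domain-to-contract mapping once, then reuse the mapper
var config = new MapperConfiguration(cfg => cfg.CreateMap<Customer, CustomerDto>());
var mapper = config.CreateMapper();

// domainCustomer is an existing domain object; dto is the wire-friendly copy
CustomerDto dto = mapper.Map<CustomerDto>(domainCustomer);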
You can convert an existing project to WCF, then continue to use it in-process by using a project reference. It can then be consumed by an external source using the WCF client. A WCF client converts the class name from ClassName to ClassNameClient when consumed over WCF, but the class will function pretty much the same.
For example:
MyClass obj = new MyClass();
obj.DoSomething(withData);
Would become:
MyClassClient obj = new MyClassClient();
obj.DoSomething(withData);
You would publish the WCF project to some endpoint, like address.example.com, then use a service reference to the endpoint to reference the code, like a project reference, in your other projects.
Note that while the externally referencing projects would not be impacted by the change or know that the data is going over the network, if you have chatty calls to the project in question, it will definitely take a performance hit. You may want to consolidate related methods into single methods to save on round-tripping.
If these are exposed as static page services, there's no magic wrapper -- you're going to need to move code to a standalone service implementation class and put a .svc file in front of it. (Or use WCF4 fileless activation, or a service factory, but that's getting a bit away from the core question here.)
If these are exposed as ASMX, you can actually put an ASMX facade in front of a WCF service class and get basic HTTP/XML/ASMX responses as you would from your legacy ASMX web services. You can expose that same WCF service class through standard WCF configuration for non-legacy consumers.
Finally, you can expose any WCF service as basicHTTP with serviceMetadata + httpGetEnabled, and you'll get a service endpoint usable by legacy consumers of an ASMX service.
http://msdn.microsoft.com/en-us/library/ms751433.aspx
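For example, a rough sketch of hosting such a service over basicHttpBinding with metadata publishing enabled in code rather than config. The service type, contract interface, and addresses are invented for illustration:
var host = new ServiceHost(typeof(AddressValidatorService),
                           new Uri("http://localhost:8000/AddressValidator"));

// Legacy-friendly SOAP endpoint, comparable to what an ASMX consumer expects
host.AddServiceEndpoint(typeof(IAddressValidatorService), new BasicHttpBinding(), "");

// serviceMetadata + httpGetEnabled so consumers can generate proxies from ?wsdl
host.Description.Behaviors.Add(new ServiceMetadataBehavior { HttpGetEnabled = true });

host.Open();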

structuring of web services

Consider 3 modules/classes in an ASP.NET Webforms application.
I need a web service for each of them, where each web service contains only one function.
Should I group them into one web service class, or should I keep the one web service for each class?
If they are related and need to be exposed for consumption by a single client, you could create one web service and call this an API. This means you and your client maintain/consume a single web service.
If they are clearly unrelated, separate them.
Group them in one class (the base web service class). If needed, you can branch out from here and instantiate more complicated classes, or even call external libraries (for example, if you have a data layer to call). See the sketch below.
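For instance, a minimal sketch of grouping the three functions behind a single ASMX-style web service class; the class name, method names, and signatures are placeholders:
[WebService(Namespace = "http://example.com/api")]
public class SiteApi : System.Web.Services.WebService
{
    [WebMethod]
    public string LookupCustomer(int customerId) { return "..."; }

    [WebMethod]
    public bool ValidateAddress(string address) { return true; }

    [WebMethod]
    public decimal GetOrderTotal(int orderId) { return 0m; }
}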

Avoiding having to map WCF's generated complex types

I have an ASP.NET MVC web app whose controllers use WCF to call into the domain model on a different server. The domain code needs to talk to a database and access to the database server isn't always possible from web servers (depends on the customer site) hence the use of WCF to get to a place where my code is allowed to connect to the database server.
This is configurable so if the controllers are able to access the database server directly then I use local instances of the domain objects rather than use WCF.
Let's say I have a page asking for person details like age, name, etc. This is a complex type that is a parameter on my WCF operation, like this:
[OperationContract]
string SayHello( Person oPerson);
When I generate the client code (e.g. by adding a service reference in my client), I get a separate Person class that fulfills the WCF contract. The client, an MVC web app, can use this client Person class as the view model and all is well. I pass that straight into the WCF client methods and it all works brilliantly.
If my MVC client app is configured NOT to use WCF, I have a problem. If I am calling my domain objects directly from the controller (assume I have a domain access factory/provider set up), then I need the original Person class and not the WCF-generated Person class. The result is that I will have to map from one object to the other if I don't use WCF.
The main problem with this is that there are many domain objects that will need to be mapped, and errors may be introduced, such as new properties being forgotten in future changes.
I'm learning and experimenting with WCF and MVC; can you help me understand what my options are in this scenario? I'm sure there is an easy way out of this given the extensibility of WCF and MVC.
Thanks
It appears that you are not actually trying to use a service-oriented architecture. In this case, you can place the domain objects into a single assembly and share it between the WCF service and the clients. When creating the clients, use "Add Service Reference", and on the "Advanced" tab, choose "Reuse types in referenced assemblies". Either choose to reuse types in all referenced assemblies, or choose the list of assemblies whose types you want to reuse.
Sound service-oriented architecture dictates that you use message-based communication regardless of whether your service is on another machine, in another process, in another appdomain, or in your appdomain. You can use different endpoints with different bindings to take advantage of the speed of the link (HTTP, TCP, named pipes) based on the location of your service, but the code using that service would remain the same.
This may not be the easiest or least time-consuming answer, but one thing you can do is avoid using the "add service reference" option, and then copy your contract interfaces to your MVC application and initiate the connection to WCF manually without automatically creating a service proxy. This will allow you to use one set of classes for your model objects and you can control explicitly when to use WCF or not.
There's a good series of webcasts on WCF by Michele Leroux Bustamante, and I think in episode 2, she explains how to do exactly this. Check it out here: http://www.dasblonde.net/WCFWebcastSeries.aspx
Hope this helps!
One sound option is that you always use WCF, even if client and server are in the same process, as Aviad points out.
Another option is to define the service contracts on interfaces, and to put these, together with the data contracts, into an assembly that is shared between client and server. In the client, don't use svcutil or a service reference; instead, use ChannelFactory<T>.
This way, your client code will use the same interfaces and classes as the server.
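A minimal sketch of the ChannelFactory<T> approach, assuming the contract interface and the Person class come from the shared assembly; the interface name, address, and Person members shown here are illustrative:
// Build a channel from the shared contract, no generated proxy involved
var factory = new ChannelFactory<IGreetingService>(
    new BasicHttpBinding(),
    new EndpointAddress("http://server/GreetingService.svc"));

IGreetingService proxy = factory.CreateChannel();
string greeting = proxy.SayHello(new Person { Name = "Alice", Age = 30 });

((IClientChannel)proxy).Close();
factory.Close();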
