I'm about to implement a web service for my database, perhaps using WCF Data Services. Some of the objects I need to make available have child objects that need to be present for the objects to be useful. But because of lazy loading in the Entity Framework, those child objects are not going to be automatically loaded.
I'm going to be calling this service using JSON, and I don't want to have to specify the $expand option in each call. It's also not clear to me where I would use the LoadProperty method, since I'm just writing the InitializeService method and letting the framework do the rest.
Is there a way to configure it to explicitly load some child objects and not others?
WCF Data Services currently doesn't support auto-expand on the server. The client always has to ask for expansions.
You could implement a workaround in front of WCF DS by modifying the incoming request. For example, if the client sends a request for ~/Products, you could modify it before it reaches WCF DS and let it process ~/Products?$expand=Category, effectively achieving auto-expand. But for such a service to be robust, you would have to parse the query URL and only add the expand option if there isn't one already there, and so on.
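One place to hook in such a rewrite, assuming the service is hosted in IIS/ASP.NET, is an HttpModule. A minimal sketch; the module name and the Products/Category names are illustrative, not part of WCF Data Services itself:

using System;
using System.Web;

public class AutoExpandModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        app.BeginRequest += (sender, e) =>
        {
            HttpContext ctx = app.Context;
            string url = ctx.Request.RawUrl;

            // Only touch the Products entity set, and only when the
            // client has not already asked for an expansion.
            if (url.Contains("/Products") &&
                url.IndexOf("$expand", StringComparison.OrdinalIgnoreCase) < 0)
            {
                string separator = url.Contains("?") ? "&" : "?";
                ctx.RewritePath(url + separator + "$expand=Category");
            }
        };
    }

    public void Dispose() { }
}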
The other option: if it's always necessary for the child objects to be present, could you make them complex types instead of entities, so that they always come along with the parent? Is there a strong reason for the child objects to be individual entities?
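A minimal sketch of what that looks like, assuming an EF Code First model (with an EDMX model the same change is made in the designer); Customer and Address are hypothetical types:

using System.ComponentModel.DataAnnotations.Schema;

// As a complex type, Address has no key and no entity set of its
// own; it is serialized inline with Customer in every response,
// so no $expand is needed.
[ComplexType]
public class Address
{
    public string Street { get; set; }
    public string City { get; set; }
}

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public Address HomeAddress { get; set; }
}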
Hope this helps.
Thanks
Pratik
In one of my views, I have a ViewModel which I populate from two tables, and then bind a List<ViewModel> to an editable GridView (ASP.NET Web Forms).
Now I need to send that edited List<ViewModel> back to the Services layer to update it in the database.
My question is: is it okay to send the ViewModel back to the Services layer, or should it stay in the Presentation layer? If not, would I be better off using a DTO? Many thanks.
Nice question!
After several (hard) debates with my teammates, plus my experience with MVC applications, I would not recommend passing view models to your service/domain layer.
The ViewModel belongs to the presentation layer, no matter what.
Because a view model can be a combination of different models (e.g. one view model built from ten models), your service layer should only work with your domain entities.
Otherwise, your service layer will end up unusable, because it is constrained by view models that are specific to one view.
Tools like https://github.com/AutoMapper/AutoMapper were made to do the mapping job.
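For instance, the presentation layer can map the edited view model back to a domain entity before calling the service. A minimal sketch using AutoMapper's MapperConfiguration API; ProductViewModel and Product are hypothetical stand-ins for your own types:

using AutoMapper;

public class ProductViewModel
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class MappingExample
{
    // Built once at startup in a real application, not per call.
    private static readonly IMapper Mapper =
        new MapperConfiguration(cfg =>
            cfg.CreateMap<ProductViewModel, Product>()).CreateMapper();

    public static Product ToEntity(ProductViewModel vm)
    {
        return Mapper.Map<Product>(vm);
    }
}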
I would not do it. My rule is: supply service methods with everything they need to do their job and nothing more.
Why?
Because it reduces coupling. More often than not service methods are addressed from several sources (consumers). It is much easier for a consumer to fulfil a simple method signature than having to build a relatively complex object like a view model that it otherwise may have nothing to do with. It may even need a reference to an assembly it wouldn't need otherwise.
It greatly reduces maintenance effort. I think an average developer spends more than 50% of his time inspecting and tracking existing code (maybe even much more). Now everybody knows that looking for something that is not there takes a disproportionately long time: you must have been everywhere to be sure. If a method receives arguments (or an object with properties) that are not used directly or further down the call stack, you or others will walk this long road time and again.
So if there is anything in the view model that does not play a part in the service method, don't use it to call the method.
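In code, the rule looks something like this; the names are illustrative:

public interface ICustomerService
{
    // Asks for exactly what the operation needs; any consumer can
    // satisfy this signature without referencing presentation types.
    void RenameCustomer(int customerId, string newName);

    // By contrast, this would force every consumer to build (and
    // reference) a view model it may otherwise have nothing to do with:
    // void RenameCustomer(CustomerEditViewModel viewModel);
}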
Yes, I am pretty sure it is okay.
Try using MS Entity Framework; it will help you a lot.
I am using EF4 and StructureMap in an ASP.NET web application. I am using the repository/unit-of-work patterns as detailed in this post. In the code, there is a line that delegates the setup of an ObjectContext in global.asax.
EntityUnitOfWorkFactory.SetObjectContext(() => new MyObjectContext());
On the web page code-behind, you can create a generic repository interface like so ...
IRepository<MyPocoObject> ds = ObjectFactory.GetInstance<IRepository<MyPocoObject>>();
My question is: what is a good approach to refactoring this code so that I can use more than one ObjectContext and differentiate between them in the code-behind? Basically, I have two databases/entity models in my application and need to query them both on the same page.
The Unit of Work is used to manage persistence across multiple repositories, not multiple object contexts.
You're not going to be able to persist changes across multiple contexts using a single unit of work, as the UoW is simply implemented as a wrapper for an ObjectContext. Therefore, you'll need two units of work.
Overall, things are going to get messy. You're going to have two ObjectContexts newed up and disposed on each HTTP request, not to mention transaction management is going to be a nightmare.
Must you have two ObjectContexts? What is the reasoning for this? If it's for scalability, don't bother; it's going to be too painful for other things like your repository, unit of work and http scope management.
It's hard to provide good advice without seeing how you have your repositories set up.
Try creating wrapper classes for each ObjectContext, each implementing IUnitOfWork and a second, unique interface (IEfSqlContext1, etc.) that represents one of your models/contexts.
Then you can inject whichever context you want.
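A sketch of that shape; IUnitOfWork here follows the pattern described above rather than a specific library type, and MyObjectContext1/MyObjectContext2 stand in for your two generated contexts:

using System;

public interface IUnitOfWork : IDisposable
{
    void Commit();
}

// Marker interfaces so the container (and your code-behind) can tell
// the two contexts apart.
public interface IEfSqlContext1 : IUnitOfWork { }
public interface IEfSqlContext2 : IUnitOfWork { }

public class EfSqlContext1 : IEfSqlContext1
{
    private readonly MyObjectContext1 _context = new MyObjectContext1();
    public void Commit() { _context.SaveChanges(); }
    public void Dispose() { _context.Dispose(); }
}

public class EfSqlContext2 : IEfSqlContext2
{
    private readonly MyObjectContext2 _context = new MyObjectContext2();
    public void Commit() { _context.SaveChanges(); }
    public void Dispose() { _context.Dispose(); }
}

// StructureMap registration (sketch):
//   For<IEfSqlContext1>().HttpContextScoped().Use<EfSqlContext1>();
//   For<IEfSqlContext2>().HttpContextScoped().Use<EfSqlContext2>();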
As I said though, try and avoid having two EDMX/Contexts. It's more trouble than it's worth.
I have a Flex application and need to show real-time data in charts and datagrids.
Earlier we used HTTPService to show the real-time and historical data in the charts and datagrids. But now we are going to replace HTTPService with remote objects.
So which places generally need to change? I have only a little bit of an idea about remote objects.
Thanks,
Ravi
If you need to display real-time data (or "near real-time") you should use some kind of push mechanism - take a look at BlazeDS and read about polling and streaming.
If you just need to replace your HTTPService calls with remote objects, you will need to replace the code dealing with the XML response (extracting data etc.) with code dealing with the objects returned by the remote calls. It is not mandatory to use strongly typed objects, but it will help.
If you are going to replace your HTTPService with RemoteObject, there are some questions you need to ask yourself.
Which framework, if any, are you going to use? If you are using one, check whether it has its own RemoteObject invoker tag.
Your result and fault event handling will vary according to the framework you apply.
If you are going with Flex's default RemoteObject, then you need to replace all your HTTPService tags with RemoteObject tags.
Your backend code also requires some changes: the business logic should move into methods, with each method returning an object as its result.
Finally, a suggestion: instead of going with remote objects, why not stay with web services? You can reuse the components somewhere else too.
Updated links about Cairngorm
http://www.adobe.com/devnet/flex/articles/cairngorm_pt5_03.html
http://www.jeffryhouser.com/index.cfm/2007/2/19/Learning-Cairngorm-Part-3
http://www.asfusion.com/blog/entry/hello-world-cairngorm-example
http://justjoshn.com/entry/contact-manager-part-2-cairngorm-example
Thanks
I have a Flex app that needs some custom serialization. I tried to use IExternalizable; if it worked, it would be exactly what I need. The issue is that I need to do this custom serialization on the client only.
It seems that to get an IExternalizable class's readExternal/writeExternal methods called, the Java classes on the server also have to implement the interface. But the server already has all of the customization it can handle, so that is unfortunately not an option.
I tried to dig into the RPC classes, intending to monkey-patch what I needed. But I could only see the classes that handle the (AMF)XML data, whereas I have binary bits flowing. It appears that all of the serialization/deserialization logic is already compiled into the player. At least that's my guess.
What I am attempting is to take the data from the AMF stream and update objects that already exist. Currently I am copying the values from the returned objects in my service handlers into the already existing model objects. I would prefer to skip the step where the NEW items have their values set and instead only set the values on the existing objects.
I am working on a web application that consumes a web service. The web service is written in .NET.
I want to know whether using a reference parameter for a Web method is a good practice or not?
You can use ref and out params with WCF services, but under the hood they're wrapped up.
Anything passed to a WebMethod or service has to be serialised - you can make it behave as if it is a ref or out by wrapping it in something that sets the values back, but this is messy.
You're better off with a record class - a simple serialisable class that is basically just a list of auto-properties - as the return value of the WebMethod.
This results in extra classes, but is much easier to maintain.
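A minimal sketch of that approach for an ASMX-style service; all names are illustrative. Both outputs travel back in one serialisable return type instead of a ref/out parameter:

using System.Web.Services;

public class DivideResult
{
    public int Quotient { get; set; }
    public int Remainder { get; set; }
}

public class MathService : WebService
{
    [WebMethod]
    public DivideResult Divide(int dividend, int divisor)
    {
        // Two logical outputs, one serialisable return value.
        return new DivideResult
        {
            Quotient = dividend / divisor,
            Remainder = dividend % divisor
        };
    }
}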
It is best to make the web service message-based.
You are still implicitly doing so when you use multiple parameters; there is still a message carrying them. Just keep the concerns separated: if you need multiple outputs, return a simple result class for the operation.
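A sketch of the message-based shape in WCF terms, with illustrative names; the response class is where multiple outputs go:

using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class CreateOrderRequest
{
    [DataMember] public int CustomerId { get; set; }
    [DataMember] public string ProductCode { get; set; }
}

[DataContract]
public class CreateOrderResponse
{
    // Multiple outputs, no ref/out parameters needed.
    [DataMember] public int OrderId { get; set; }
    [DataMember] public bool Succeeded { get; set; }
}

[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    CreateOrderResponse CreateOrder(CreateOrderRequest request);
}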