I've just recently learned the PureMVC framework, and am a little confused as to the coupling between Proxy and Mediator objects. The links on this page connect to some documents describing the framework. (Please note, the links on the aforementioned page open PDFs.)
The diagrams and examples of PureMVC I've examined often show a direct coupling between a Mediator and Proxy. When the proxy's state is updated, rather than sending a new Notification, the Mediator (which retrieves a reference to the Proxy from the Facade) has its state updated.
This certainly seems to simplify the logic of the code, but it also directly couples two seemingly disparate components together. To my understanding, a Mediator's purpose is to translate Events from a view into PureMVC Notifications. Proxies are meant to perform some function to gather data and relay it back to the view. These two components seem to exist in different layers of the application, and perhaps shouldn't necessarily be coupled together.
Wouldn't it make more sense to have the Proxy objects send their own Notifications when their state updates, which are forwarded to the interested Mediator by the Facade?
Even if you update the Mediator through Notifications, it is still coupled to the Proxy, but that's OK; it has to be.
As long as you don't couple the Proxy to the Mediator, I would say it's fine.
Juan
Wouldn't it make more sense to have the Proxy objects send their own Notifications when their state updates, which are forwarded to the interested Mediator by the Facade?
Yes, this is exactly what needs to happen. PureMVC is simply a Notifier / Observer pattern implementation with a lightweight MVC structure linked by a Facade. I strictly advise not to couple Proxies to Mediators; only allow Mediators to respond to Notifications that are sent by the Proxy when the data state changes. This allows those classes to be fully decoupled.
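To make that concrete, here is a minimal sketch of the decoupled flow. The class and notification names (UserProxy, UserMediator, USER_DATA_CHANGED) are made up, and the method names follow the general PureMVC pattern (SendNotification, ListNotificationInterests, HandleNotification); exact signatures vary between the ActionScript original and the C# port, so treat this as an illustration rather than a drop-in example.

// Hypothetical Proxy: owns the data and announces state changes via a Notification.
public class UserProxy : Proxy
{
    public const string USER_DATA_CHANGED = "userDataChanged";

    public void UpdateUser(User user)
    {
        Data = user;                                // update the proxy's own state
        SendNotification(USER_DATA_CHANGED, user);  // no reference to any Mediator
    }
}

// Hypothetical Mediator: declares interest in the Notification and updates its view.
public class UserMediator : Mediator
{
    public override string[] ListNotificationInterests()
    {
        return new[] { UserProxy.USER_DATA_CHANGED };
    }

    public override void HandleNotification(INotification note)
    {
        if (note.Name == UserProxy.USER_DATA_CHANGED)
        {
            var user = (User)note.Body;
            // push the new state into the view component; no Proxy lookup needed
        }
    }
}

The Proxy never knows who is listening, and the Mediator never asks the Facade for the Proxy; the Notification is the only contract between them.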
If you are using Flex, I would recommend HydraMVC / HydraFramework, a Flex-specific port of PureMVC MultiCore that mimics the PureMVC API but is much less verbose and includes a way to decouple server interaction from Proxies via a DelegateRegistry. (Full disclosure: I am the principal developer on this project; however, it is fully OSS and free to use / contribute to.) Regardless of which MVC framework you implement, I still strongly recommend fully decoupling Proxies / Mediators via Notifications.
So I have this microservice architecture with an API gateway and two microservices, Configuration.API and API-1. Configuration.API is mainly responsible for parsing the JSON request, accessing the DB, and updating the status tables; it also fetches required data, adds more values to the JSON request, and sends it to API-1. API-1 is responsible for just generating a report based on the JSON passed to it.
Yes, I could merge Configuration.API into API-1 and make it a single service/container, but the requirement is not to merge them and instead to have two different components: one purely for fetching the data and updating the status, and the other just for generating the reports.
So here is my question:
Should I use gRPC for Configuration.API, or is there a better way to achieve this?
Thank you.
gRPC is synchronous communication, so you have to come up with a strong reason to use it for service-to-service communication. It brings fast, performant communication to the table, but it also couples the services. If you insist on using RPC, it is better to use MassTransit to implement the RPC in a less coupled way. However, in most cases asynchronous, event-based communication is recommended to avoid coupling (in that case, look at the CAP theorem, sagas, and circuit breakers).
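As a rough sketch of the event-based option, this is what a MassTransit consumer might look like for the report service. The ReportRequested message type and its fields are made up for illustration; only the IConsumer / Publish shapes come from MassTransit itself.

using System;
using System.Threading.Tasks;
using MassTransit;

// Hypothetical event published by Configuration.API once the enriched JSON is ready.
public record ReportRequested(Guid ConfigurationId, string EnrichedPayload);

// API-1 consumes the event asynchronously instead of being called via RPC.
public class ReportRequestedConsumer : IConsumer<ReportRequested>
{
    public async Task Consume(ConsumeContext<ReportRequested> context)
    {
        var message = context.Message;
        // generate the report from message.EnrichedPayload ...
        await Task.CompletedTask;
    }
}

// In Configuration.API, after updating the status tables:
// await publishEndpoint.Publish(new ReportRequested(configId, enrichedJson));

With this shape, Configuration.API does not need to know whether API-1 is up, and a broker can retry or buffer report requests.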
Since you said
but the requirement is not to merge and create two different components
that is your reason, and also based on the fact that
it also fetches required data, adds more values to the JSON request and sends it to API-1
I think the second option makes more sense. However, I can't understand why you would change the position of the database, since you said the configuration service is responsible for that.
If your report service needs to request huge amounts of data to generate a report, you have to think about the design. There is not enough detail about your domain here for an absolute answer, but consider reducing the data at insertion or request time, doing some sort of pre-calculation if you can, and caching responses.
I have heard a few people make the comment that the default controller/provider for Entity Framework data in the new ASP.NET WebAPI (DbDataController) is not, strictly speaking, a REST based service, but more like an RPC-style service. I understand that the WebAPI framework allows you to make any kind of HTTP service, REST or otherwise, but can someone explain to me specifically what it is about the service exposed by a DbDataController that makes it not a true REST service?
The REST architectural style describes the following six constraints applied to the architecture, while leaving the implementation of the individual components free to design:
Client–server
A uniform interface separates clients from servers. This separation of concerns means that, for example, clients are not concerned with data storage, which remains internal to each server, so that the portability of client code is improved. Servers are not concerned with the user interface or user state, so that servers can be simpler and more scalable. Servers and clients may also be replaced and developed independently, as long as the interface between them is not altered.
Stateless
The client–server communication is further constrained by no client context being stored on the server between requests. Each request from any client contains all of the information necessary to service the request, and any session state is held in the client. The server can be stateful; this constraint merely requires that server-side state be addressable by URL as a resource. This not only makes servers more visible for monitoring, but also makes them more reliable in the face of partial network failures as well as further enhancing their scalability.
Cacheable
As on the World Wide Web, clients can cache responses. Responses must therefore, implicitly or explicitly, define themselves as cacheable, or not, to prevent clients reusing stale or inappropriate data in response to further requests. Well-managed caching partially or completely eliminates some client–server interactions, further improving scalability and performance.
Layered system
A client cannot ordinarily tell whether it is connected directly to the end server, or to an intermediary along the way. Intermediary servers may improve system scalability by enabling load-balancing and by providing shared caches. They may also enforce security policies.
Code on demand (optional)
Servers are able temporarily to extend or customize the functionality of a client by the transfer of executable code. Examples of this may include compiled components such as Java applets and client-side scripts such as JavaScript.
Uniform interface
The uniform interface between clients and servers, discussed below, simplifies and decouples the architecture, which enables each part to evolve independently. The four guiding principles of this interface are detailed below.
http://en.wikipedia.org/wiki/Representational_state_transfer#Constraints
"The only optional constraint of REST architecture is code on demand. If a service violates any other constraint, it cannot strictly be considered RESTful. DbDataController class exposes Entity Framework models as HTTP services. These services have a great feature overlap with WCF Data Services, such as CRUD support, metadata and request batching. They even partially mimic OData's query string format.But these services follow the RPC style they're not RESTful and they don't use OData." A qoute from this site:
WCF Data Services and ASP.NET Web API
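To illustrate the distinction the quote is drawing, here is a generic ASP.NET Web API sketch (the Products names are made up; this is not the DbDataController implementation itself). A resource-style controller exposes a resource addressed by a URI and manipulated through the standard HTTP verbs, whereas an RPC-style service exposes operations named after the methods you want to call.

using System.Collections.Generic;
using System.Web.Http;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Resource-style: the URI identifies a resource, the HTTP verb carries the intent.
//   GET    /api/products     -> the collection
//   GET    /api/products/5   -> one resource
//   DELETE /api/products/5   -> remove that resource
public class ProductsController : ApiController
{
    public IEnumerable<Product> Get()  { return new List<Product>(); }      // placeholder data access
    public Product Get(int id)         { return new Product { Id = id }; }
    public void Delete(int id)         { /* remove the product */ }
}

// RPC-style: the URI names an operation rather than a resource, e.g.
//   GET /api/catalog/GetProductById?id=5
//   GET /api/catalog/FindProductsByCategory?categoryId=3
// The URIs no longer form a uniform interface over resources, which is the
// constraint the quoted article says these services fall short of.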
In an effort to understand MVC 2 and attempt to get my company to adopt it as a viable platform for future development, I have been doing a lot of reading lately. Having worked with ASP.NET pretty exclusively for the past few years, I had some catching up to do.
Currently, I understand the repository pattern, models, controllers, data annotations, etc. But there is one thing that is keeping me from completely understanding enough to start work on a reference application.
That one thing is the Service Layer pattern. I have read many blog posts and questions here on Stack Overflow, but I still don't completely understand the purpose of this pattern. I watched the entire video series at MVCCentral on the Golf Tracker application and also looked at the demo code he posted, and it looks to me like the service layer is just another wrapper around the repository pattern that doesn't perform any work at all.
I also read this post: http://www.asp.net/Learn/mvc/tutorial-38-cs.aspx and it seemed to somewhat answer my question; however, if you are using data annotations to perform your validation, this seems unnecessary.
I have looked for demonstrations, posts, etc. but I can't seem to find anything that simply explains the pattern and gives me compelling evidence to use it.
Can someone please provide me with a 2nd grade (ok, maybe 5th grade) reason to use this pattern, what I would lose if I don't, and what I gain if I do?
In an MVC pattern you have responsibilities separated between the three players: Model, View and Controller.
The Model is responsible for doing the business stuff, the View presents the results of the business (also providing input to the business from the user), while the Controller acts as the glue between the Model and the View, separating the inner workings of each from the other.
The Model is usually backed up by a database so you have some DAOs accessing that. Your business does some...well... business and stores or retrieves data in/from the database.
But who coordinates the DAOs? The Controller? No! The Model should.
Enter the Service layer. The Service layer provides high-level services to the controller and manages other (lower-level) players (DAOs, other services, etc.) behind the scenes. It contains the business logic of your app.
What happens if you don't use it?
You will have to put the business logic somewhere and the victim is usually the controller.
If the controller is web-centric, it will have to receive its input and provide its responses as HTTP requests and responses. But what if I want to call my app (and get access to the business logic it provides) from a Windows application which communicates over RPC or some other mechanism? What then?
Well, you would have to rewrite the controller and make the logic client agnostic. But with the Service layer you already have that; you don't need to rewrite things.
The service layer provides communication with DTOs which are not tied to a specific controller implementation. If the controller (no matter what type of controller) provides the appropriate data (no matter the source), your service layer will do its thing, providing a service to the caller and hiding from the caller all the responsibilities of the business logic involved.
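As a rough sketch of what that separation looks like in code (the OrderService, IOrderRepository, OrderDto and Order names are all made up for illustration): the controller only translates HTTP into a call on the service, while the service owns the business logic, coordinates the repositories, and returns DTOs, so a different front end could call the same service without any rewrite.

using System.Web.Mvc;

// Hypothetical persistence-facing types.
public class Order { public int Id { get; set; } public decimal Total { get; set; } }
public interface IOrderRepository { Order GetById(int id); }

// Hypothetical DTO returned to any caller (web controller, desktop app, etc.).
public class OrderDto { public int Id { get; set; } public decimal Total { get; set; } }

// The service layer: business logic lives here, not in the controller.
public class OrderService
{
    private readonly IOrderRepository _orders;
    public OrderService(IOrderRepository orders) { _orders = orders; }

    public OrderDto GetOrder(int id)
    {
        var order = _orders.GetById(id);
        // ...apply business rules, coordinate other repositories or services...
        return new OrderDto { Id = order.Id, Total = order.Total };
    }
}

// The MVC controller is just glue: HTTP in, DTO out.
public class OrdersController : Controller
{
    private readonly OrderService _service;
    public OrdersController(OrderService service) { _service = service; }

    public ActionResult Details(int id)
    {
        return View(_service.GetOrder(id));
    }
}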
I have to say I agree with dpb on the above; the wrapper, i.e. the Service Layer, is reusable and mockable. I am currently in the process of including this layer inside my app... here are some of the issues/requirements I am pondering over (very quickly :p) that could be of help to yourself...
1. Multiple portals (e.g. bloggers portal, client portal, internal portal) which will need to be accessed by many different users. They all must be separate ASP.NET MVC applications (an important requirement).
2. Within the apps themselves some calls to the database will be similar, in the methods and in the way the data is handled from the repository layer. Without doubt some controllers from each module/portal will make exactly the same call, or an overloaded version of it, hence a possible need for a service layer (code to interfaces), which I will then compile into a separate class project.
3. If I create a separate class project for my service layer, I may need to do the same for the data layer, or combine it with the service layer and keep the model away from the web project itself. At least this way, as my project grows, I can swap out the data access layer (i.e. LinqToSql -> NHibernate), or a team member can, without touching any code in any other project. The downside is that they could blow everything up lol...
I was wondering if any of you had successfully implemented DDD in a Client/Server app and would like to share some experiences.
We are currently working on a smart client in Flex and a backend in Java. On the server we have a service layer exposed to the client that offers CRUD operations amongst some other service methods. I understand that in DDD these services should be repositories and services should be used to handle use cases that do not fit inside a repository. Right now, we mimic these services on the client behind an interface and inject implementations (Webservices, RMI, etc) via an IoC container.
So some questions arise:
Should the server expose repositories to the client, or do we need to have some sort of facade (that is able to handle security, for instance)?
Should the client implement repositories (and DDD in general?), knowing that in the client most of the logic is view related and the real business logic lives on the server? All communication with the server happens asynchronously and we have a single-threaded programming model on the client.
How about mapping client to server objects and vice versa? We tried DTOs but reverted back to exposing the state of our objects and mapping directly to them. (I know this is considered bad practice, but it saves us an incredible amount of time.)
In general I think a new generation of applications is coming with the growth of Flex, Silverlight, JavaFX and I'm curious how DDD fits into this.
I would not expose repositories directly to the client. The first big problem as you mention is security: you can't trust the client, so you cannot expose your data access API to potentially hostile clients.
Wrap your repositories with services on the server and create a thin delegate layer in the client that handles the remote communication.
Exposing your Entities is not necessarily a bad practice; it just becomes problematic when you start to factor in things like lazy loading, sending data over the wire that the client doesn't need, etc. If you write a DTO class which wraps one or more entities and delegates get/set calls, you can actually build up a DTO layer pretty quickly, especially using the code generation available in most IDEs.
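A minimal sketch of that wrapping idea (Customer and CustomerDto are made-up names): the DTO delegates its getters and setters to the entity, so only the fields the client actually needs are exposed and lazy-loaded associations are never touched.

// Hypothetical entity with associations the client doesn't need.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Email { get; set; }
    // Orders collection omitted here; in a real entity it would be lazy loaded.
}

// DTO that wraps the entity and delegates get/set calls for the fields the UI needs.
public class CustomerDto
{
    private readonly Customer _entity;
    public CustomerDto(Customer entity) { _entity = entity; }

    public int Id
    {
        get { return _entity.Id; }
        set { _entity.Id = value; }
    }

    public string Name
    {
        get { return _entity.Name; }
        set { _entity.Name = value; }
    }
    // Email (and anything else the client doesn't need) is deliberately not exposed.
}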
The key to all of this is that a set of patterns should really only apply to a part of your application, not to the whole thing. The fact that you have rich logic in your domain model and use repositories for data access as part of DDD should not influence the client in any way. Conceptually the RIAs that I build have three layers:
Client uses something like MVC, MVP or MVVM to present the UI. The Model layer eventually calls into...
What I might call the "Integration Layer." This is a contract of services and data objects that exist on both the client and server to allow the two to coordinate. Usually the UI design drives this layer so that (A) only the data that the client needs is passed to it and (B) data access can be coarse-grained, i.e. "make one method call for all the state needed for this set of UI" (see the sketch below).
Server using whatever it wants to handle business logic and data access. This might be DDD or something a little more old school, like a data layer built using stored procs in the DB and a lot of "ResultSet" or "DataTable" objects.
The point really is that the client and server are both very different animals and they need to vary independently. In order to do so, you need a layer in between that is a fair compromise between the needs of the UI and the reality of how things might need to be on the server.
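A sketch of what such a coarse-grained integration contract might look like (IDashboardService, DashboardState and OrderSummary are invented names): one call returns everything a particular screen needs, rather than the client navigating the server's domain model piece by piece.

using System.Collections.Generic;

// Data objects shared by client and server, shaped by what the screen needs,
// not by the server-side domain model.
public class OrderSummary
{
    public int OrderId { get; set; }
    public decimal Total { get; set; }
}

public class DashboardState
{
    public string UserDisplayName { get; set; }
    public List<OrderSummary> RecentOrders { get; set; }
    public int UnreadMessageCount { get; set; }
}

// One coarse-grained call per screen: the client makes a single (asynchronous)
// request and gets back all the state the dashboard view requires.
public interface IDashboardService
{
    DashboardState GetDashboardState(int userId);
}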
The one big advantage that Silverlight/WPF and JavaFX have over Flex + anything is that you can use a lot of logic in the first two because you have the same VM on both sides of the app. Flex is the best UI technology hands down but it lacks a server component where code could be shared and re-used more effectively.
I started to use Silverlight/Flex and immediately bumped into asynchronous service calls. I'm used to solving data access problems in an OO way with one server fetch mechanism or another.
I have the following trivial code example:
public double ComputeOrderTotal(Order order)
{
    double total = 0;
    // Orderlines are lazy loaded
    foreach (Orderline line in order.Orderlines)
    {
        // Article and Customer are lazy loaded
        total = total + line.Article.Price - order.Customer.Discount;
    }
    return total;
}
If I understand correctly, this code is impossible in Flex/Silverlight. The lazy loading forces you to work with callbacks. IMO, the simple example above will become a BIG mess.
Can anyone give me a structured way to implement the above?
Edit:
The problem is the same for Flex/Silverlight; pseudo code would do fine.
It's not really ORM related, but most ORMs use lazy loading, so I'll remove that tag.
The problem is lazy loading in the model.
The above example would be very doable if all the data was in memory, but we assume some of it has to be fetched from the server.
Closures don't help, since sometimes the data is already loaded and no asynchronous fetch is needed.
Yes, I must agree that O/R mapping is usually done on the server side of your application.
In Silverlight, the asynchronous way of execution is the desired pattern to use when working with services. Why services? Because, as I said before, there is no O/R mapping tool at the moment that can be used on the client side (Silverlight). The best approach is to have your O/R-mapped data exposed by a service that can be consumed by a Silverlight application. The best way at the moment is to use ADO.NET Data Services to transport the data, and on the client side to manage the data using LINQ to Services. What is really interesting about ADS (the former Astoria project) is that it is designed to be used with Entity Framework, but the good folks also implemented support for IQueryable, so basically you can hook up any data provider that supports LINQ. To name a few, you can consider LINQ to SQL, Telerik OpenAccess, LLBLGen, etc. To push the updates back to the server, the data source is required to support the ADS IUpdatable interface.
You can see exactly how this can be done in the series of blog posts that I have prepared here: Getting Started with ADO.NET Data Services and Telerik Open Access
I can't speak to Silverlight, but Flex is a web-browser client technology and does not have any database driver embedded in the Flash runtime. You can do HTTP protocol interactions with a web server instead. It is there, in the middle-tier web server, that you will do any ORM with respect to a database connection, such as via Java JDBC. Hibernate ORM and iBATIS are two popular choices in the Java middle-tier space.
Also, because of this:
Fallacies of Distributed Computing
You do not do synchronous interactions from a Flex client to its middle-tier services. Synchronous network operations have become verboten these days and are the hallmark of a poorly designed application: for the reasons enumerated at the above link, the app can (and often will) exhibit a very bad user experience.
You instead make async calls to retrieve data, load the data into your client app's model object(s), and proceed to implement operations on the model. With Flex and BlazeDS you can also have the middle-tier push data to the client and update the client's model objects asynchronously. (Data binding is one way to respond to data being updated in an event driven manner.)
All this probably seems very far afield from the nature of inquiry in your posting - but your posting indicates you're off on an entirely incorrect footing as to how to understand client-side technologies that have asynchronous and event-driven programming baked into their fundamental architecture. These RIA client technologies are designed this way completely on purpose. So you will need to learn their mode of thinking if you want to have a good and productive experience using them.
I go into this in more detail, and with a Flex perspective, in this article:
Flex Async I/O vs Java and C# Explicit Threading
In my direct experience with Flex, I think this discussion is getting too complicated.
Your conceptual OO view is no different between sync and async. The only difference is that you use event handlers to deal with the host conversation in the DAL, rather than something returned from a method call. And that often happens entirely on the host side, and has nothing to do with Flex or Silverlight. (If you are using AIR for a workstation app, then it might be in client code, but the same applies; likewise if you are using prolonged AJAX. Silverlight, of course, has no AIR equivalent.)
I've been able to design everything I need without any other changes required to accommodate async.
Flex has a single-threaded model. If you make a synchronous call to the web server, you'd block the entire GUI of the application. The user would have a frozen application until the call completes (or times out on a network error condition, etc.).
Of course real RIA programs aren't written that way. Their GUI remains accessible and responsive to the user via use of async calls. It also makes it possible to have real progress indicators that offer cancel buttons and such if the nature of the interaction warrants such.
Old, bad user experience web 1.0 applications exhibited the synchronous behaviour in their interactions with the web tier.
As my linked article points out, the async single-threaded model coupled with ActionScript3 closures is a good thing because it's a much simpler programming model than the alternative - writing multi-thread apps. Multi-threading was the approach of writing client-server Java Swing or C# .NET WinForm applications in order to achieve a similarly responsive, fluid-at-all-times user experience in the GUI.
Here's another article that delves into this whole subject matter of asynchronous, messaging/event-driven distributed app architecture:
Building Effective Enterprise Distributed Software Systems
data-driven communication vs behavior-driven communication
Silverlight is a client technology and the object-relational mapping happens completely on the server, so you have to forget about ORM in Silverlight.
Following your example, what you have to do is create a web service (SOAP, REST...) that can give your Silverlight client the complete "Order" object.
Once you have the object, you can work with it without any communication with the server, in a normal, synchronous way.
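For example, in a sketch with a hypothetical orderService client proxy: if the service returns the Order with its Orderlines, Articles and Customer already populated, the original ComputeOrderTotal method works unchanged once the single asynchronous fetch has completed.

// Hypothetical completion handler for the one service call that returns the
// fully populated Order graph (nothing is lazy loaded on the client).
void OnGetOrderCompleted(Order order)
{
    // Everything is already in memory, so the synchronous code from the question applies.
    double total = ComputeOrderTotal(order);
    totalLabel.Text = total.ToString();   // hypothetical UI element
}

// Somewhere in the view or view model (hypothetical proxy and callback signature):
// orderService.GetCompleteOrderAsync(orderId, OnGetOrderCompleted);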
Speaking about Silverlight, you should definitely check out RIA Services.
Simply put, it brings the DataContext from the server to the client, from where you can asynchronously query it (there is no need to write WCF services by hand; it's all done by RIA Services).
C# 5
The async / await construct will do almost exactly what I want.
Watch the presentation by Anders Hejlsberg.
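A minimal sketch of how the ComputeOrderTotal example from the question could look with async/await. The IOrderService interface and its methods are hypothetical, and the domain classes mirror the ones in the question; the point is only that each lazy-loaded association becomes an awaited asynchronous call while the code keeps its straight-line shape.

using System.Threading.Tasks;

// Hypothetical async service facade replacing the lazy-loaded navigation properties.
public interface IOrderService
{
    Task<Orderline[]> GetOrderlinesAsync(int orderId);
    Task<Article> GetArticleAsync(int articleId);
    Task<Customer> GetCustomerAsync(int customerId);
}

public async Task<double> ComputeOrderTotalAsync(IOrderService service, Order order)
{
    double total = 0;
    Customer customer = await service.GetCustomerAsync(order.CustomerId);
    foreach (Orderline line in await service.GetOrderlinesAsync(order.Id))
    {
        Article article = await service.GetArticleAsync(line.ArticleId);
        total = total + article.Price - customer.Discount;
    }
    return total;
}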