Asynchronous Database Access Layer in PureMVC - SQLite

I'm trying to refactor an existing project into PureMVC. This is an Adobe AIR desktop app taking advantage of the SQLite library included with AIR and building upon it with a few other libraries:
Paul Robertson's excellent async SQLRunner
promise-as3 implementation of asynchronous promises
websql-js documentation for good measure
I made my current implementation of the database access similar to websql-js's promise-based SQL access layer, and it works pretty well; however, I am struggling to see how it can work in PureMVC.
Currently, I have my VOs that will be paired with DAOs (data access objects) for database access. Where I'm stuck is how to track the dbFile and sqlRunner instances across the entire program. The DAOs will need to know about the sqlRunner, or at the very least, the dbFile. Should the sqlRunner be treated as singleton-esque? Or created for every database query?
Finally, how do I expose the dbFile or sqlRunner to the DAOs? In my head right now I see keeping these in a DatabaseProxy that would be exposed to other proxies and would instantiate DAOs when needed. What about a DAO factory pattern?
I'm very new to PureMVC but I really like the structure and separation of roles. Please don't hesitate to tell me if this implementation simply will not work.

Typically in PureMVC you would use a Proxy to fetch remote data and populate the VOs used by your View, so in that respect your proposed architecture sounds fine.
DAOs are not a pattern I've ever seen used in conjunction with PureMVC (which is not to say that nobody does or should). However, if I were setting out to write a CRUD application in PureMVC, I would probably think in terms of a Proxy (or proxies) to read information from the database, and Commands to write it back.

Related

Flex 4.5 remoting objects

I am very new to remoting in Flex. I am using Flex 4.5 and talking to a web application built by someone else on the team using AMF. They have used Zend_AMF to serialize and unserialize the data.
One of the main issues I am facing at the moment is that I will need to talk to a lot of services (about 60 or so).
From the examples on remoting I have seen online and from Adobe, it seems that I need to define a RemoteObject for EACH service:
<mx:RemoteObject id="testservice" fault="testservice_faultHandler(event)" showBusyCursor="true" destination="account"/>
With so many services, I think I might have to define about 60 of those, which I don't think is very elegant.
At the same time, I have been playing with Pinta to test out the AMF endpoint. Pinta seems to allow one to define an arbitrary number of services, methods and parameters without any of these limitations. Digging through the source, I found that they have actually drilled down deep into the remoting layer and are handling a lot of low-level stuff.
So, the question is: is there a way to approach this problem without having to define loads of RemoteObjects and without having to go too deep and start handling low-level remoting events ourselves?
Cheers
It seems unusual for an application to require that many RemoteObjects. I've worked on extremely large applications, and we typically end up with no more than ~6-10 RemoteObject declarations.
Although you don't give a lot of specifics in your post about the variations of RemoteObjects, I suspect you may be confusing RemoteObject with Operation.
You typically declare a RemoteObject instance for every end-point in your application. However, that endpoint can (and normally does) expose many different methods to be invoked. Each of these server-side methods results in a client-side Operation.
You can explicitly declare these if you wish; however, the RemoteObject builds the Operations for you if you don't declare them:
import mx.rpc.AsyncToken;
import mx.rpc.remoting.RemoteObject;

// one RemoteObject per end-point ("account" is the destination id)
var remoteObject:RemoteObject = new RemoteObject("account");

// calling an undeclared method builds an Operation for the saveAccount
// RPC call, invokes it, and returns the AsyncToken
var token:AsyncToken = remoteObject.saveAccount(account);
token.addResponder(this);
// ... etc
If you're interacting with a single server layer, you can often get away with a single RemoteObject, pointing to a single destination on the API, which exposes many methods. This approach is often referred to as an API Façade, and can be very useful if backed by a solid dependency injection discipline on the API.
Another common approach is to segregate your API methods by logical business area, eg., AccountService, ShoppingCartService, etc. This has the benefit of being able to mix & match protocols between services (eg., AccountService may run over HTTPS).
How you choose to split up these RemoteObjects is up to you. However, 60 in a single application sounds a bit suspect to me.

best practice: general asmx data loading web-method or multi web-methods

I'm wondering which is the better approach from a performance point of view: is it better to use one web-service method to load data by passing the database table name and keys, or to use a separate method for each database table? Note that I'm calling the .NET ASMX services through AJAX requests.
It's obvious that one method is better from an OO perspective, since it has a single function type ('data loading'), but what about performance? Is IIS affected by that or not? Also, is it better to make multiple web services (ASMX files) or just one?
I really don't think that creating separate methods for fetching data from different tables is necessary. The performance gain/loss you are likely to see from passing an additional table-name parameter to your web service call would be too small to even consider, unless your table names are really huge, which I don't think is the case.
The only reason I would even consider doing something like this is if I had nothing else left to do in terms of performance improvement, or if I were forced to ;-).
If you really want to optimize your request size, try:
serializing your input params using JSON (if you are not doing it already)
using a cookieless domain for your web service
Hope this helps.
I don't think the service layer should have any knowledge of database tables, just like you ideally don't want to see data access code in a controller action or an ASPX page's code-behind.
Personally, I prefer to organize my services to match my domain model.
If I have Customer, Order, and Item classes, for example, I would have corresponding Customer.asmx, Order.asmx, and Item.asmx services to expose selected methods within those classes.
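For illustration only, here is a rough sketch of what one of those domain-oriented services might look like; OrderService, OrderDto and the method shown are hypothetical names, not taken from the answer:
// Order.asmx.cs - hypothetical sketch of a domain-oriented ASMX service
using System.Web.Services;
using System.Web.Script.Services;

public class OrderDto
{
    public int Id;
    public string CustomerName;
    public decimal Total;
}

[WebService(Namespace = "http://example.com/services")] // placeholder namespace
[ScriptService] // lets the methods be called from JavaScript/AJAX as JSON
public class OrderService : WebService
{
    // One method per business operation rather than a single generic
    // "load any table by name" method, so table names never leak
    // through the service contract.
    [WebMethod]
    public OrderDto GetOrder(int orderId)
    {
        // A real implementation would delegate to the business/data layer.
        return new OrderDto { Id = orderId, CustomerName = "sample", Total = 0m };
    }
}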
Services are typically responsible for exposing business functionality through a contract. I realize ASMX services really had no concept of "contracts" in the broadest sense; however, you can think of one as the set of operations supported by the service. What is your goal here: do you want to expose tabular data as a service?
Service technology on the Microsoft stack has come a long way from ASMX. Perhaps an obvious question, have you looked at WCF Data Services?
Links:
Exposing Your Data as a Service (WCF Data Services)
Getting Started with WCF Data Services

The Purpose of a Service Layer and ASP.NET MVC 2

In an effort to understand MVC 2 and attempt to get my company to adopt it as a viable platform for future development, I have been doing a lot of reading lately. Having worked with ASP.NET pretty exclusively for the past few years, I had some catching up to do.
Currently, I understand the repository pattern, models, controllers, data annotations, etc. But there is one thing that is keeping me from completely understanding enough to start work on a reference application.
The first is the Service Layer Pattern. I have read many blog posts and questions here on Stack Overflow, but I still don't completely understand the purpose of this pattern. I watched the entire video series at MVCCentral on the Golf Tracker Application and also looked at the demo code he posted and it looks to me like the service layer is just another wrapper around the repository pattern that doesn't perform any work at all.
I also read this post: http://www.asp.net/Learn/mvc/tutorial-38-cs.aspx and it seemed to somewhat answer my question, however, if you are using data annotations to perform your validation, this seems unnecessary.
I have looked for demonstrations, posts, etc. but I can't seem to find anything that simply explains the pattern and gives me compelling evidence to use it.
Can someone please provide me with a 2nd grade (ok, maybe 5th grade) reason to use this pattern, what I would lose if I don't, and what I gain if I do?
In an MVC pattern you have responsibilities separated between the three players: Model, View and Controller.
The Model is responsible for doing the business work, the View presents the results of that work (and also passes user input to the business), while the Controller acts as the glue between the Model and the View, separating the inner workings of each from the other.
The Model is usually backed by a database, so you have some DAOs accessing it. Your business does some... well... business and stores or retrieves data in/from the database.
But who coordinates the DAOs? The Controller? No! The Model should.
Enter the Service layer. The Service layer provides high-level services to the controller and manages the other (lower-level) players (DAOs, other services, etc.) behind the scenes. It contains the business logic of your app.
What happens if you don't use it?
You will have to put the business logic somewhere and the victim is usually the controller.
If the controller is web-centric, it will have to receive its input and provide its response as HTTP requests and responses. But what if I want to call my app (and get access to the business it provides) from a Windows application which communicates over RPC or something else? What then?
Well, you would have to rewrite the controller and make the logic client-agnostic. But with the Service layer you already have that: you don't need to rewrite things.
The service layer provides communication with DTOs which are not tied to a specific controller implementation. If the controller (no matter what type of controller) provides the appropriate data (no matter the source), your service layer will do its thing, providing a service to the caller and hiding from the caller all the responsibilities of the business logic involved.
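To make that concrete, here is a minimal, hypothetical sketch (none of these type names come from the question): the controller depends only on IOrderService and a DTO, so an ASP.NET MVC controller, a Windows client or a unit test can all reuse the same business logic:
// Hypothetical sketch of a thin service layer; every name is illustrative.
public class Order { public int Id; public decimal Total; }     // domain entity
public class OrderDto { public int Id; public decimal Total; }  // what callers see

public interface IOrderRepository
{
    Order FindById(int id);
}

public interface IOrderService
{
    OrderDto GetOrder(int id);
}

// The service layer coordinates repositories and holds the business logic,
// and knows nothing about HTTP, MVC or any particular client.
public class OrderService : IOrderService
{
    private readonly IOrderRepository _repository;

    public OrderService(IOrderRepository repository)
    {
        _repository = repository;
    }

    public OrderDto GetOrder(int id)
    {
        Order order = _repository.FindById(id);
        // Business rules and mapping to a DTO happen here, not in the controller.
        return new OrderDto { Id = order.Id, Total = order.Total };
    }
}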
I have to say I agree with dpb on the above. The wrapper, i.e. the Service Layer, is reusable and mockable. I am currently in the process of including this layer inside my app... here are some of the issues/requirements I am pondering over (very quickly :p) that could be of help to yourself...
1. Multiple portals (e.g. a bloggers' portal, a client portal, an internal portal) which will need to be accessed by many different users. They all must be separate ASP.NET MVC applications (an important requirement).
2. Within the apps themselves some calls to the database will be similar in their methods and in the way the data is handled from the repository layer. Without doubt some controllers from each module/portal will make exactly the same call, or an overloaded version of it, hence a possible need for a service layer (coded to interfaces), which I will then compile into a separate class project.
3. If I create a separate class project for my service layer, I may need to do the same for the data layer, or combine it with the service layer and keep the model away from the web project itself. At least this way, as my project grows, I can swap out the data access layer (i.e. LinqToSql -> NHibernate), or a team member can, without touching code in any other project. The downside could be that they could blow everything up lol...

Can ADO.NET Data Services replace the web services I use for AJAX calls in my websites?

I used to create normal web services in my websites and call these services from JavaScript to make AJAX calls.
Now I am learning about ADO.NET Data Services.
My question is:
Can ADO.NET Data Services replace my normal web services in the new sites I will create?
And if yes,
Can I put these ADO.NET Data Services in a separate project (local on the same server) and just reference them from my website? (To use the same services for my website's internal use and also give the same services to other websites or services, the same way Twitter does, for example.)
It depends on what you want to do. I suggest you read my conversation with Pablo Castro, the architect of ADO.NET Data Services:
Data Services - Lacking
Here are Pablo's words, basically:
I agree that some of these things are quite inconvenient and we're looking at fixing them (e.g. use of custom types in addition to types defined in the input model in order to produce custom result-sets). However, some others are just intrinsic to the nature of Data Services.
The Data Services framework is not a gateway to a database and in general if you need something like that then Data Services will just get in the way. The goal of Data Services is to create a resource model out of an input data model, and expose it with a RESTful interface that exposes the uniform interface, such that every unit of data in the underlying model ("entities") become an addressable resource that can be manipulated with the standard verbs.
Often the actual implementation of a RESTful interface includes more sophisticated behaviors than just doing CRUD over the data under the covers, which need to be defined in a way that doesn't break the uniform interface. That's why the Data Services server runtime has hooks for business logic and validation in the form of query/change interceptors and others. We also acknowledge that it's not always possible or maybe practical to model absolutely everything as resources operated on with standard verbs, so we included service operations as an escape hatch.
Things like joins dilute the abstraction we're trying to create. I'm not saying that they are bad or anything (relational databases without them wouldn't be all that useful), it's just that if what's required for a given application scenario is the full query expressiveness of a relational database to be available at the service boundary, then you can simply exchange queries over the wire (and manage the security implications of that). For joins that can be modeled as association traversals, then data services already has support for them.
I guess this is a long way to say that Data Services is not a solution for every problem that involves exposing data to the web. If you want a RESTful interface over a resource model that matches your underlying data model, then it usually works out well and it will save you a lot of work. If you need a custom interface or direct access to a database, then Data Services is typically not the right tool, and other framework components such as WCF's SOAP and REST support do a great job at that.
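As a small, hedged illustration of the query-interceptor hook mentioned above (assuming an Entity Framework model; MyEntities, Customer and its IsActive property are placeholders for whatever the real model exposes):
using System;
using System.Linq.Expressions;
using System.Data.Services;

public class MyDataService : DataService<MyEntities> // MyEntities is a placeholder EF model
{
    public static void InitializeService(IDataServiceConfiguration config)
    {
        // Expose the Customers entity set as read-only resources.
        config.SetEntitySetAccessRule("Customers", EntitySetRights.AllRead);
    }

    // Business-logic hook: this filter is merged into every query against
    // the Customers set, so no client request can bypass it.
    [QueryInterceptor("Customers")]
    public Expression<Func<Customer, bool>> OnQueryCustomers()
    {
        return c => c.IsActive;
    }
}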

Silverlight, asynchronous, lazy loading: what's the best way?

I started to use Silverlight/Flex and immediately bumped into asynchronous service calling. I'm used to solving data access problems in an OO way with one server fetch mechanism or another.
I have the following trivial code example:
public double ComputeOrderTotal(Order order)
{
    double total = 0;
    // OrderLines are lazy loaded
    foreach (Orderline line in order.Orderlines)
    {
        // Article, Customer are lazy loaded
        total = total + line.Article.Price - order.Customer.Discount;
    }
    return total;
}
If I understand correctly, this code is impossible in Flex/Silverlight. The lazy loading forces you to work with callbacks. IMO the simple example above will be a BIG mess.
Can anyone give me a structured way to implement the above?
Edit:
The problem is the same for Flex/Silverlight; pseudo-code would do fine.
It's not really ORM-related, but most ORMs use lazy loading, so I'll remove that tag.
The problem is lazy loading in the model.
The above example would be very doable if all the data were in memory, but we assume some has to be fetched from the server.
Closures don't help, since sometimes the data is already loaded and no asynchronous fetch is needed.
Yes I must agree that O/R mapping is usually done on the server-side of your application.
In Silverlight, the asynchronous way of execution is the desired pattern to use when working with services. Why services? Because, as I said before, there is no O/R mapping tool at the moment that can be used on the client side (Silverlight). The best approach is to have your O/R-mapped data exposed by a service that can be consumed by a Silverlight application. The best way at the moment is to use ADO.NET Data Services to transport the data, and on the client side to manage the data using LINQ to Services. What is really interesting about ADS (the former Astoria project) is that it is designed to be used with the Entity Framework, but the good folks also implemented support for IQueryable, so basically you can hook up any data provider that supports LINQ. To name a few, you can consider LINQ to SQL, Telerik OpenAccess, LLBLGen, etc. To push updates back to the server, the data source is required to support the ADS IUpdatable interface.
You can see exactly how this could be done in a series of blog posts that I have prepared here: Getting Started with ADO.NET Data Services and Telerik Open Access
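For what it's worth, here is a hedged sketch of the Silverlight client side of that; the service URI, the entity set name and the Order type are placeholders for whatever your ADO.NET Data Services endpoint actually exposes:
using System;
using System.Data.Services.Client;
using System.Data.Services.Common;
using System.Linq;

// Placeholder client-side entity; normally the client proxy types are generated for you.
[DataServiceKey("Id")]
public class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

public class OrderLoader
{
    private readonly DataServiceContext _context =
        new DataServiceContext(new Uri("http://localhost/MyData.svc")); // placeholder URI

    public void LoadLargeOrders(Action<Order[]> onLoaded)
    {
        // LINQ against the service; nothing goes over the wire yet.
        var query = (DataServiceQuery<Order>)
            _context.CreateQuery<Order>("Orders").Where(o => o.Total > 100m);

        // Silverlight only allows asynchronous calls, so use the Begin/End pair.
        query.BeginExecute(result => onLoaded(query.EndExecute(result).ToArray()), null);
    }
}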
I can't speak to Silverlight, but Flex is a web-browser client technology and does not have any database driver embedded in the Flash runtime. You can do HTTP protocol interactions with a web server instead. It is there, in the middle-tier web server, that you will do any ORM with respect to a database connection, such as Java JDBC. Hibernate ORM and iBATIS are two popular choices in the Java middle-tier space.
Also, because of this:
Fallacies of Distributed Computing
You do not do synchronous interactions from a Flex client to its middle-tier services. Synchronous network operations have become verboten these days and are the hallmark of a poorly designed application: for the reasons enumerated at the above link, the app can (and often will) exhibit a very bad user experience.
You instead make async calls to retrieve data, load the data into your client app's model object(s), and proceed to implement operations on the model. With Flex and BlazeDS you can also have the middle-tier push data to the client and update the client's model objects asynchronously. (Data binding is one way to respond to data being updated in an event driven manner.)
All this probably seems very far afield from the nature of inquiry in your posting - but your posting indicates you're off on an entirely incorrect footing as to how to understand client-side technologies that have asynchronous and event-driven programming baked into their fundamental architecture. These RIA client technologies are designed this way completely on purpose. So you will need to learn their mode of thinking if you want to have a good and productive experience using them.
I go into this in more detail, and with a Flex perspective, in this article:
Flex Async I/O vs Java and C# Explicit Threading
In my direct experience with Flex, I think this discussion is getting too complicated.
Your conceptual OO view is no different between sync and async. The only difference is that you use event handlers to deal with the host conversation in the DAL, rather than something returned from a method call. And that often happens entirely on the host side, and has nothing to do with Flex or Silverlight. (If you are using AIR for a workstation app, then it might be in client code, but the same applies, as it does if you are using prolonged AJAX. Silverlight, of course, has no AIR equivalent.)
I've been able to design everything I need without any other changes required to accommodate async.
Flex has a single-threaded model. If you make a synchronous call to the web server, you'd block the entire GUI of the application. The user would have a frozen application until the call completes (or times out on a network error condition, etc.).
Of course real RIA programs aren't written that way. Their GUI remains accessible and responsive to the user via use of async calls. It also makes it possible to have real progress indicators that offer cancel buttons and such if the nature of the interaction warrants such.
Old, bad user experience web 1.0 applications exhibited the synchronous behaviour in their interactions with the web tier.
As my linked article points out, the async single-threaded model coupled with ActionScript3 closures is a good thing because it's a much simpler programming model than the alternative - writing multi-thread apps. Multi-threading was the approach of writing client-server Java Swing or C# .NET WinForm applications in order to achieve a similarly responsive, fluid-at-all-times user experience in the GUI.
Here's another article that delves into this whole subject matter of asynchronous, messaging/event-driven distributed app architecture:
Building Effective Enterprise Distributed Software Systems
data-driven communication vs behavior-driven communication
Silverlight is a client technology, and the object-relational mapping happens completely on the server. So you have to forget about the ORM in Silverlight.
Following your example, what you have to do is create a web service (SOAP, REST...) that can give your Silverlight client the complete "Order" object.
Once you have the object, you can work with it in a normal, synchronous way, with no communication with the server.
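A rough sketch of that idea in C# (all of the types and the service callback below are made up for illustration): one asynchronous call brings back the fully populated Order, and the original computation then runs synchronously on it:
using System;
using System.Collections.Generic;

// Hypothetical types mirroring the example in the question.
public class Article   { public double Price; }
public class Customer  { public double Discount; }
public class Orderline { public Article Article; }
public class Order
{
    public Customer Customer;
    public List<Orderline> Orderlines = new List<Orderline>();
}

public interface IOrderService
{
    // The server assembles Order + Orderlines + Articles + Customer and
    // hands the whole graph back in a single asynchronous call.
    void GetCompleteOrder(int orderId, Action<Order> onLoaded);
}

public class OrderTotalCalculator
{
    private readonly IOrderService _service;

    public OrderTotalCalculator(IOrderService service) { _service = service; }

    public void ComputeOrderTotalAsync(int orderId, Action<double> onComputed)
    {
        // The only asynchronous step: fetch the complete object graph once.
        _service.GetCompleteOrder(orderId, order => onComputed(ComputeOrderTotal(order)));
    }

    // Essentially the code from the question, with no lazy loading involved.
    public double ComputeOrderTotal(Order order)
    {
        double total = 0;
        foreach (Orderline line in order.Orderlines)
        {
            total = total + line.Article.Price - order.Customer.Discount;
        }
        return total;
    }
}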
Speaking about Silverlight, you should definitely check RIA services.
Simply, it brings the DataContext from the server to the client from where you can asynchronously query it (there is no need to write WCF services by hand, it's all done by RIA).
C# 5
The async/await construct will do almost exactly what I want.
Watch the presentation by Anders Hejlsberg.
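For completeness, a hedged sketch of the shape that takes with C# 5 async/await; the service interface and types here are hypothetical:
using System.Collections.Generic;
using System.Threading.Tasks;

// Hypothetical types and service; the point is only that await lets the
// lazy-loading example read sequentially again without blocking the UI.
public class Article   { public double Price; }
public class Customer  { public double Discount; }
public class Orderline { public int ArticleId; }
public class Order     { public int CustomerId; public List<Orderline> Orderlines; }

public interface IOrderService
{
    Task<Order>    GetOrderAsync(int orderId);
    Task<Customer> GetCustomerAsync(int customerId);
    Task<Article>  GetArticleAsync(int articleId);
}

public class OrderTotals
{
    private readonly IOrderService _service;

    public OrderTotals(IOrderService service) { _service = service; }

    public async Task<double> ComputeOrderTotalAsync(int orderId)
    {
        // Each await yields instead of blocking, yet the method reads
        // almost exactly like the original synchronous example.
        Order order = await _service.GetOrderAsync(orderId);
        Customer customer = await _service.GetCustomerAsync(order.CustomerId);

        double total = 0;
        foreach (Orderline line in order.Orderlines)
        {
            Article article = await _service.GetArticleAsync(line.ArticleId);
            total = total + article.Price - customer.Discount;
        }
        return total;
    }
}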
