My question is possibly a subtle one:
Web services - are they extensions of the presentation/web layer? Or are they extensions of the biz/data layer?
That may seem like a dumb question - of course web services are an extension of the web tier. I'm not so sure, though. I'm building a pretty standard web form with some AJAX-y features, and it seems to me I could build the web services in one of two ways:
they could retrieve data for me (biz/data layer extension).
example: GetUserData(userEmail)
where the web form has javascript on it that knows how to consume the user data and make changes to markup
they could return completely rendered user controls (html; extension of web layer)
example: RenderUserProfileControl(userEmail)
where the web form has simple/dumb JS that only copies and pastes the web service HTML into the form (a rough sketch of both approaches follows)
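For concreteness, here's roughly what the two options might look like as ASMX-style script services. This is only a sketch: UserDto, UserService, the namespace, and the hard-coded "lookup" are all invented for illustration - the real methods would call into whatever biz/data layer exists.

using System.Web;
using System.Web.Script.Services;
using System.Web.Services;

public class UserDto            // simple data-transfer object, serialized to JSON for the page
{
    public string Name { get; set; }
    public string Email { get; set; }
}

[WebService(Namespace = "http://example.com/")]
[ScriptService]                 // lets client-side script call these methods
public class UserService : WebService
{
    // Option 1: return data only - the page's JavaScript decides how to render it.
    [WebMethod]
    public UserDto GetUserData(string userEmail)
    {
        // stand-in for a real lookup against the biz/data layer
        return new UserDto { Name = "Example User", Email = userEmail };
    }

    // Option 2: return fully rendered markup - the page's JavaScript just injects the string.
    [WebMethod]
    public string RenderUserProfileControl(string userEmail)
    {
        UserDto user = GetUserData(userEmail);
        return string.Format("<div class=\"profile\"><h2>{0}</h2><p>{1}</p></div>",
                             HttpUtility.HtmlEncode(user.Name),
                             HttpUtility.HtmlEncode(user.Email));
    }
}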
I could see it working in either scenario, but I'm interested in different points of view... Thoughts?
In my mind, a web service has 2 characteristics:
it exposes data to external sources, i.e. sources other than the application it resides within. In this sense I agree with @Pete in that you're not really designing a web service; you're designing a helper class that responds to requests in a web-service-like fashion. A semantic distinction, perhaps, but one that's proved useful to me.
it returns data (and only data) in a format that is reusable by multiple consumers. For me this is the answer to your "why not #2" question - if you return web-control-like structures then you limit the usefulness of the web service to other potential callers. They must present the data the way you're returning it, and can't choose to represent it in another way, which minimises the usefulness (and re-usefulness) of the service as a whole.
All of that said, if what you're really looking at is a helper class that responds like a web service, and you only ever intend to use it in this one use case, then you can do whatever you like, and your case #2 will work. From my perspective, though, it breaks the separation of responsibilities: you're combining data-access and rendering functions in the same class. I suspect that even if you don't care about MVC patterns, option #2 will make your classes harder to maintain, and you're certainly limiting their future usefulness to you; if you ever wanted to access the same data but render it differently, you'd need to refactor.
I would say definitely not #2, but #1 is valid.
I also think (and this is opinion) that web services as a data access layer are not ideal. The service has to have a little bit more value than that (in general - I am sure there are notable exceptions to this).
Even in scenario #1, the service is presenting data that is available in the data layer; it is not part of the data layer itself. It's just presenting that data in a different format than a UI format (i.e. JSON, XML, etc.).
As regards which scenario I would use, I would go for scenario #1, as that service is reusable in other web forms and other scenarios.
While #1 (definitely not #2) is generally the correct approach (expose just the data needed to the view layer and have all markup handled there), be careful with the "web" portion of the service in your design.
Data should only be exposed as a web service (SOAP/WSDL, REST) if it is meant to be consumed remotely (some SOA architects may argue this, but I think that is out of scope for this question), otherwise you are likely doing too much, and over-designing your request and response format. Use what makes sense for your application - an Ajax framework that facilitates client/server communication and abstracts the underlying format of communication can be a big help. The important thing is to nicely encapsulate the code that retrieves the data you want (you can call it a service, but likely it will just be a nicely written helper class) so it can be re-used, and then expose this data in whatever way makes the most sense for the given application.
Can someone help me understand the advantages of Spring Web Flow?
What I have understood so far is:
All the flows can be centrally configured in an XML file.
There's no overhead of carrying data over from one request to another, as it can be kept in flow scope.
It helps especially in cases like Breadcrumbs navigation.
Flows can be further divided into Sub Flows to reduce the complexity.
Are there any other advantages I have not picked up on?
I'm going to play devil's advocate and say don't use it for anything other than simple use cases. By simple use cases I mean no AJAX calls, no modal dialogs, no partial updates - just standard HTML forms/flows for simple persistence (i.e. page A -> page B -> page C, where each 'page' maps to a view-state definition in a one-to-one relationship, all defined in the same flow XML file).
Spring Web Flow cons:
Yes, everything is in XML files, and in theory that is supposed to be simple, but when you have multiple flow XML files, each with multiple state definitions and possibly subflow definitions, it becomes cumbersome to maintain or to easily determine what the sequential logic of a flow is. (It's a bit like the old GOTO operator: any part of the flow logic can jump back to any previously or later defined part, making the flow logic, although seemingly "sequential" in XML, unintuitive to follow.)
Some features of Spring Web Flow are unintuitive or flat-out undocumented, leading to hours of trial and error. For instance: exception handling; usage of the 'output' tag (it only works from a subflow back to the parent caller, which is undocumented); and sending flash view responses back to the user, which is also unintuitive and uses a different container than Spring MVC (often when a flow ends you want to send the user a message that is defined in a controller outside of Web Flow, but since the flow has ended you can't do it within Spring Web Flow using the flashScope container); etc.
Adding subflows sounds good but does not reduce complexity; it actually increases it, due to the way subflows are defined. The definitions are long and complex, and can be confusing when you have many end-states in both the main parent flow and the child subflows.
Initial setup and configuration can be painful if you are integrating with certain 3rd-party view frameworks like Apache Tiles or Thymeleaf... I recall spending a few hours, if not days, on this.
State snapshots (saving the user's input between pages) are a powerful feature between Flow A's view-state_1 <-> Flow A's view-state_2 and vice versa, but this does not work between main Flow A <-> sub Flow B and vice versa, forcing the developer to manually bind (or rather hack) the saving of a user's state between the parent main flow and its subflows.
Debugging application logic placed inside Web Flow can be difficult. For instance, in Web Flow you can assign variables and perform condition checks using SpEL inside the XML, but this tends to be a pitfall. Over time you learn to avoid placing application logic inside the flow XML and to use the XML only to call service class methods and to place the returned values in the various scopes (again, this hard-learned lesson/best practice is undocumented). Also, because you are executing logic via SpEL, refactoring classes, method names, or variables can silently break your application, significantly increasing your development time.
Fragment rendering: a powerful but unintuitive feature of Web Flow. Setting up fragment rendering was one of the most painful things I had to do with Web Flow; the documentation was lacking. I think this feature could easily go on the pros side if it were better documented and easier to set up. I actually documented how to use it on Stack Overflow: How to include a pop-up dialog box in subflow.
Static URLs per flow. If your flow has multiple views defined within one flow, your URL will not change as you navigate from view-state to view-state. This can be limiting if you want to control or match the content of the page with a dynamic URL.
If your flow is defined in "/WEB-INF/flows/doSumTing/sumting-flow.xml" and your "base-path" is set to "WEB-INF/flows", then to navigate to your flow you go to http://<your-host>/<your-webapp-name-if-defined>/doSumTing. The flow file name is completely ignored and not used in the URL at all. Although this is clear to me now, I found it unintuitive when I first started.
Spring Web Flow pros:
The concept of "scope" containers (flowScope, viewScope, flashScope, sessionScope) and having easy access to these containers within a flow gives the developer flexibility, as they are accessible from anywhere and are mutable.
Easily defined states (view-state, action-state, decision-state, end-state) clearly show what each state is doing, but as mentioned in the cons, if your application is complex and has MANY different states and transitions constantly going back and forth, this can clutter your -flow.xml file and make it hard to read or follow the sequential logic. It's only easy if you have simple use cases with a small number of state definitions.
Seldom used but powerful is Web Flow's flow inheritance. Common flow functionality across multiple flows can be defined in a single abstract parent flow and extended by child flows (similar to an abstract class definition in Java). This feature is nice with regard to the DRY principle if you have many flows that share common logic.
Easily defined validation rules and support for JSR-303 (but Spring MVC has this as well)
The output tag can be used to send POJOs back and forth between a main flow <-> subflow. This feature is nice because the parameters don't need to be passed through the URL via GET/POST, and you can pass as many POJOs as you wish.
Clearly defined views: what the view name is and which model variable it is mapped to (e.g. <view-state id="edit" view="#{flowScope.modelPathName}/editView" model="modelObj">). Also, as the example just demonstrated, you can use preprocessing expressions for view names or most other arguments in Web Flow - a nice feature, though not very well documented :/
Conclusion: the Spring Web Flow project was a good idea and sounds great on paper, but the cons make it cumbersome to work with in complex use cases, increasing development time significantly. Since a better solution exists for complex use cases (Spring MVC), to me it is not worth investing heavily in Web Flow, because you can achieve the same results with Spring MVC, with faster development time, for both complex and simple use cases. Moreover, Spring MVC is actively maintained, has better documentation, and has a larger user community. Maybe if some of my cons are addressed it would tip the scales in Web Flow's favor, but until then I will recommend Spring MVC over Web Flow.
Note: I might be missing some things but this is what I came up with off the top of my head.
Additionally, you can use the back button and retain state up to the number of snapshots you choose to store.
You also may find this related question useful.
The majority of application architecture advice seems to strongly recommend that only the Presentation Layer should have access to HttpContext (to promote loose coupling, decrease dependencies, increase testability, etc.).
So, how do people deal with Caching and Session? Very specific data-access and business-logic knowledge is required to determine which items need caching to best benefit application performance. However, in an ASP.NET web application, access to these is provided via the HttpContext.
One option would be to create a CacheFactory and a SessionFactory, plus ICache and ISession interfaces, and then somehow use the DI principle to pass an ISession and ICache to each method in the BLL (and subsequently the DA layer) that needs them.
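To make that concrete, I'm imagining something like the following rough sketch (the ICache shape, HttpContextCache, ProductService, and Product are all just invented for illustration):

using System;
using System.Web;

// The abstraction the BLL/DAL would depend on - no System.Web reference needed there.
public interface ICache
{
    T Get<T>(string key);
    void Set(string key, object value, TimeSpan slidingExpiration);
}

// Web-specific implementation, wired up only in the ASP.NET project.
public class HttpContextCache : ICache
{
    public T Get<T>(string key)
    {
        object value = HttpContext.Current.Cache[key];
        return value == null ? default(T) : (T)value;
    }

    public void Set(string key, object value, TimeSpan slidingExpiration)
    {
        HttpContext.Current.Cache.Insert(key, value, null,
            System.Web.Caching.Cache.NoAbsoluteExpiration, slidingExpiration);
    }
}

// A BLL class takes the abstraction through its constructor, so tests can pass in a fake.
public class ProductService
{
    private readonly ICache _cache;
    public ProductService(ICache cache) { _cache = cache; }

    public Product GetProduct(int id)
    {
        var cached = _cache.Get<Product>("product:" + id);
        if (cached != null) return cached;

        var product = LoadFromDatabase(id);                    // stand-in for real data access
        _cache.Set("product:" + id, product, TimeSpan.FromMinutes(10));
        return product;
    }

    private Product LoadFromDatabase(int id) { return new Product { Id = id }; }
}

public class Product { public int Id { get; set; } }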
Is this really what developers end up doing? Is there another, easier, way to deal with this?
Thanks for any advice.
Personally, if I'm developing for ASP.NET (web) only, and I know the class libraries are only going to target the web, I have no issues referencing System.Web or any other assembly as needed. Sometimes practicality should come first.
On the other hand, if you know (or are unsure whether) the BLL, DAL or other layers may be reused in a client-server or other environment, you may need to look at using a more flexible approach such as a session handler interface, etc.
The main thing is you clearly understand what best practice recommends. At that point you can make rational decisions about what should apply to each project or situation. It won't always fit like a glove.
In my experience caching is done at a certain layer - and what you're caching applies within the scope of that layer, so:
You might cache data in the DAL so that the DAL returns it from memory and not by hitting the DB again; here we're caching "raw" data at the data level (see the sketch just after this list).
The BL might cache similar data (say, POCOs) - in other words "logical" data that has had BL applied to it; here we're caching BL data at the BL level.
Your UI might cache (you guessed it) information that is built at the UI layer - like a presentation of data as a webpage, XML, etc.
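To illustrate the first of those (purely as a sketch - CustomerDal and CustomerRow are invented names), a DAL can cache its "raw" results without touching HttpContext at all, e.g. with System.Runtime.Caching:

using System;
using System.Runtime.Caching;   // MemoryCache lives outside System.Web, so the DAL stays web-agnostic

public class CustomerDal
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    public CustomerRow GetCustomer(int id)
    {
        string key = "customer-row:" + id;
        var cached = Cache.Get(key) as CustomerRow;
        if (cached != null) return cached;      // "raw" data served from memory, DB not hit

        var row = QueryDatabase(id);            // stand-in for the real query
        Cache.Set(key, row, DateTimeOffset.UtcNow.AddMinutes(5));
        return row;
    }

    private CustomerRow QueryDatabase(int id) { return new CustomerRow { Id = id }; }
}

public class CustomerRow { public int Id { get; set; } }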
When you look at caching, as the architect / designer of the system you'll make decisions about what things are worth caching, generally this will be driven by a need to balance:
Performance needs to hit a specific target.
What time (and cost) options you have.
Assuming we're caching to increase performance, you'll want to analyse your system and identify the bottlenecks that need to be removed or avoided - the first pass of this analysis should at least identify which layer to focus on first.
Another thing to consider is that the more you can cache closer to the consumer, the less overall processing needs to occur; in other words, if you can cache at the UI layer then the BL and DAL layers won't even get a look-in (you won't need to add caching there).
At this point I want to ask you what exactly it is that you're trying to achieve.
While everything I've said is true (as far as I am aware), it's all based on the assumption that we're delivering "vertical" slices of the system's functionality (like "creating an invoice", "adding a product", etc.); there's another model you can apply - the cross-cutting concerns model.
Think about something like logging - every part of your system (UI / BL / DAL) should be able to log system events; my favourite tool for this is the MS Enterprise Libraries (MSEntLibs). When I work with them I consider them a "black box" - a self-contained unit. I (by choice) have no idea how they are architected - the important thing is that they are isolated and have dependencies which are relatively easy to manage.
The MSEntLibs can log to a database, but if I call an MSEntLibs logging method (to log to the DB) direct from my UI (and skipping over my BL) I don't care.
So depending on what you're trying to do, the right answer will be one of:
Simply identifying what needs to be cached and letting the appropriate layer handle it, or
You don't need to try using DI to pass stuff to your BL at all - a cross-cutting black-box component might be appropriate.
I am working on maintaining an ASP.NET website, and I've noticed the business layer and other supporting libraries make heavy use of HttpContext.Current.Session. This makes it hard to keep track of session variables, to determine what they're used for and why they even exist.
Is it considered bad practice to use the session in the business layer? And would it be wise to start moving all code that uses the session into the code-behind?
It's almost never a good idea. There's lots of reasons, but here's a couple:
you'll never be able to use business layer code in anything other than ASP.NET
Unit Testing becomes much more of a pain or even impossible.
We ran into huge headaches with this exact same situation when we started to build services that utilized common business layer code.
I follow this rule - no class in the System.Web namespace (or the javax.servlet package in Java) should be present in your business layer.
Yes - the BL should not have any knowledge of the Session. It's a dependency that you don't need.
Make a class that acts as an indirection: on the web it may return values from HttpContext.Current.Session, while in other environments it would resolve them from somewhere else. I.e., have an interface ISessionStore with concrete classes WebSessionStore, WindowsFormsSessionStore, etc.
This will make your code easier to test, and it also gives you expansion paths when, say, you now want business logic X to run in a Windows service where it can execute every Y minutes.
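A minimal sketch of that indirection might look like this (the Get/Set shape is just one possible contract, and the in-memory store stands in for whatever a non-web host would use):

using System.Collections.Generic;
using System.Web;

// The business layer only ever sees this interface.
public interface ISessionStore
{
    object Get(string key);
    void Set(string key, object value);
}

// Used when hosted in ASP.NET.
public class WebSessionStore : ISessionStore
{
    public object Get(string key) { return HttpContext.Current.Session[key]; }
    public void Set(string key, object value) { HttpContext.Current.Session[key] = value; }
}

// Used from a WinForms app, a Windows service, or unit tests.
public class InMemorySessionStore : ISessionStore
{
    private readonly Dictionary<string, object> _items = new Dictionary<string, object>();

    public object Get(string key)
    {
        object value;
        return _items.TryGetValue(key, out value) ? value : null;
    }

    public void Set(string key, object value) { _items[key] = value; }
}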
In my opinion it is bad practice.
It makes it pretty hard to dissociate that business layer from the environment. If you expect to unit test the thing for example, you're out of luck.
One way to take care of that simply would be to insulate this into an abstraction for now, so that you can pass a "state cache" around and not refer to HttpContext. That will take you at least to some degree of abstraction.
Another more interesting question is, why does the business layer need to refer to that?
It's always better to have a centralized cache/session manager which encapsulates the complete interaction with the session/cache or whatever persistence mechanism you use. Having your BL interact with sessions directly is definitely a very bad practice and in a way defeats the purpose of the tiered architecture altogether.
I have started design of a ColdFusion application that is entirely web based. Not much use of Flash forms, or AJAX.
The first version is a strict web app. Version 2 will be a Flex front end.
I want to design and build things so that the Flex layer can use existing logic. It's okay if it means I have to do extra work in version 1. I would like to harden the logic code once and not re-factor.
What are things worth considering / designing / implementing now that would greatly aid in being able to design an app in this way?
One big suggestion, depending on where you're coming from (as it's a rather big question), would be to leverage the ColdFusion component (CFC) as much as possible; the CFC architecture is excellent, versatile and powerful, it integrates quite nicely with Flex (and will do so even better in coming versions of Flex and CF), so to the extent you can design your component tier with that in mind, you'll be glad you did.
It's been a while since I wrote CF code, but on the last big project I did with it, I spent a good deal of time designing a functional tier out of CFCs to be used by the plain ol' Web app, much as it sounds like you're doing -- and then later, when it came time to bolt on an Ajax UI for a subsection of the site (it could've been Flex, but in my case, it happened to be a YUI implementation), I created a facade layer of publicly exposed CFCs whose job it was to wrap and expose a specialized subset of the functionality provided by the first tier. Doing so allowed me to leverage and extend existing code in a way unique to the services that needed it, without having to expose the underlying (first tier) CFCs directly.
I'm sure other folks will have many more (and probably more detailed) suggestions, but that's the one big one I have for you first off -- learn, know and use the CF component. Good luck!
I agree with Christian that the best thing you can do is put everything - database logic and any other application logic - in CFCs; more specifically, I would suggest exposing them as web services. The main reason is that this will eventually allow your CF code (all of your database persistence and logic) to live on a different server than the one you serve the Flex applications from, and it also allows code reuse by other applications. The nice thing about writing your CFCs as web services is that you can use them either as web services or directly as components in Flex using AMF (remote object). Of course, how much those benefits really apply to you depends on your situation, but it's a good plan to follow.
But the main suggestion is to think of your application as having a presentation layer and a logic and persistence layer. If you are making a decision, it goes in the logic layer. If you are showing a screen or doing anything with presentation, it goes in that layer. Keeping those things separate will allow you to more easily switch out your presentation layer to flex later on.
Also, it can be useful to trap any errors thrown and return messages as part of the result (alongside any data, e.g. in a structure) from all methods. Flex has a nasty habit of telling you something went wrong but not passing along the error information. This will make it much easier to debug and handle any errors that get thrown.
Check out Matt Woodward's presentation on the topic, it's very informative:
And a few general things to add to the answers everyone else provided:
Encapsulate your data interaction in CFCs (typically in Services which delegate to Gateways and DAOs)
In most cases, you'll want to create "bean" CFCs to represent your business objects (users, widgets, etc.); these are what will transfer to Flex as ActionScript classes. You'll need to add cfproperty tags to them to make them serializable to ActionScript (case- and order-sensitive!), so pay attention to that when you create them to prevent having to deal with it later, and use one of the code generation tools like the Adobe CF Extensions for Eclipse or Illudium PU36 to do it for you.
Create a remote facade CFC (or set of CFCs depending on how big the app is) that delegates methods to your Services - this is where you set the access for your methods to "remote" - generally the only place you want to do this (it will feel like you're doing a lot of delegating but it pays off to have all your remote services centralized)
As you develop with HTML, treat your remote facade CFCs as your API and make your HTML views as "dumb" as possible. Think of it this way: any logic you write in your CF view will have to be replicated in your Flex view. If you build the project only using your remote API, you'll have a pretty good feel for how Flex will interact with the application.
Check out ColdSpring, it offers a lot of great features for managing all the objects you're going to create!
I don't claim to be an architecture expert and I know I've thrown around a lot of jargon here to keep it short, but some Googling around CF blogs should turn up a lot of info about the design patterns I've mentioned. Good luck!
What are the downsides to using static methods in a web site business layer versus instantiating a class and then calling a method on the class? What are the performance hits either way?
The performance differences will be negligible.
The downside of using a static method is that it becomes less testable. When dependencies are expressed in static method calls, you can't replace those dependencies with mocks/stubs. If all dependencies are expressed as interfaces, where the implementation is passed into the component, then you can use a mock/stub version of the component for unit tests, and then the real implementation (possibly hooked up with an IoC container) for the real deployment.
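As a sketch of that difference (the invoice/exchange-rate classes here are invented purely for the example):

// Hard to test: the business method is welded to a static dependency.
public static class ExchangeRates
{
    public static decimal GetRate(string currency) { return 1.1m; /* imagine an external call */ }
}

public class InvoiceCalculatorUsingStatics
{
    public decimal TotalIn(string currency, decimal amount)
    {
        return amount * ExchangeRates.GetRate(currency);   // cannot be swapped out in a unit test
    }
}

// Testable: the dependency is expressed as an interface and passed in.
public interface IExchangeRateProvider
{
    decimal GetRate(string currency);
}

public class InvoiceCalculator
{
    private readonly IExchangeRateProvider _rates;
    public InvoiceCalculator(IExchangeRateProvider rates) { _rates = rates; }

    public decimal TotalIn(string currency, decimal amount)
    {
        return amount * _rates.GetRate(currency);
    }
}

// In a unit test you can now supply a canned rate:
public class FixedRateProvider : IExchangeRateProvider
{
    public decimal GetRate(string currency) { return 2m; }
}
// var calc = new InvoiceCalculator(new FixedRateProvider());
// calc.TotalIn("EUR", 10m) returns 20m

For the real deployment you pass in the implementation that actually talks to the external source, possibly wired up by an IoC container as mentioned above.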
Jon Skeet is right--the performance difference would be insignificant...
Having said that, if you are building an enterprise application, I would suggest using the traditional tiered approach espoused by Microsoft and a number of other software companies. Let me briefly explain:
I'm going to use ASP.NET because I'm most familiar with it, but this should easily translate into any other technology you may be using.
The presentation layer of your application would be comprised of ASP.NET aspx pages for display and ASP.NET code-behinds for "process control." This is a fancy way of talking about what happens when I click submit. Do I go to another page? Is there validation? Do I need to save information to the database? Where do I go after that?
The process control is the liaison between the presentation layer and the business layer. This layer is broken up into two pieces (and this is where your question comes in). The most flexible way of building this layer is to have a set of business logic classes (e.g., PaymentProcessing, CustomerManagement, etc.) that have methods like ProcessPayment, DeleteCustomer, CreateAccount, etc. These would be static methods.
When the above methods get called from the process control layer, they would handle all the instantiation of business objects (e.g., Customer, Invoice, Payment, etc.) and apply the appropriate business rules.
Your business objects are what would handle all the database interaction with your data layer. That is, they know how to save the data they contain...this is similar to the MVC pattern.
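Roughly sketched (PaymentProcessing is the example name from above; Customer, Payment and their members are invented just to show the shape):

using System;

// Business logic class: static entry points called from the process-control layer (code-behind).
public static class PaymentProcessing
{
    public static void ProcessPayment(int customerId, decimal amount)
    {
        // The static method instantiates the business objects and applies the business rules...
        var customer = new Customer(customerId);
        var payment = new Payment(customer, amount);

        if (!payment.IsValid())
            throw new InvalidOperationException("Payment failed validation.");

        // ...and the business objects know how to persist themselves via the data layer.
        payment.Save();
    }
}

public class Customer
{
    public int Id { get; private set; }
    public Customer(int id) { Id = id; }
}

public class Payment
{
    private readonly Customer _customer;
    private readonly decimal _amount;

    public Payment(Customer customer, decimal amount) { _customer = customer; _amount = amount; }

    public bool IsValid() { return _amount > 0; }   // stand-in for real business rules

    public void Save() { /* persist _customer.Id and _amount via the data layer */ }
}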
So--what's the benefit of this? Well, you still get testability at multiple levels. You can test your UI, you can test the business process (by calling the business logic classes with the appropriate data), and you can test the business objects (by manually instantiating them and testing their methods). You also know that if your data model or objects change, your UI won't be impacted, and only your business logic classes will have to change. Also, if the business logic changes, you can change those classes without impacting the objects.
Hope this helps a bit.
Performance-wise, using static methods avoids the overhead of object creation/destruction. This is usually insignificant.
They should be used only where the action the method takes is not related to state, for instance, for factory methods. It'd make no sense to create an object instance just to instantiate another object instance :-)
String.Format(), and the TryParse() and Parse() methods, are all good examples of when a static method makes sense. They always perform the same operation, do not need state, and are used often enough that instantiation makes less sense.
On the other hand, using them when it does not make sense (for example, having to pass all the state into the method, say, with 10 arguments), makes everything more complicated, less maintainable, less readable and less testable as Jon says. I think it's not relevant if this is about business layer or anywhere else in the code, only use them sparingly and when the situation justifies them.
If the method uses static data, this will actually be shared amongst all users of your web application.
Code-only, no real problems beyond the usual issues with static methods in all systems.
Testability: static dependencies are less testable
Threading: you can have concurrency problems
Design: static variables are like global variables
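A tiny example of the threading/global-variable point (the helper class is invented; the bug is the shared static field):

// A static field in a web app is shared by every request and every user.
public static class ReportHelper
{
    private static int _rowCount;            // effectively a global variable

    public static int CountRows(int[] rows)
    {
        _rowCount = 0;                       // two concurrent requests can interleave here...
        foreach (var row in rows) _rowCount++;
        return _rowCount;                    // ...and return each other's partial results
    }

    // A stateless static method has no such problem: all state lives in locals/parameters.
    public static int CountRowsSafely(int[] rows)
    {
        int count = 0;
        foreach (var row in rows) count++;
        return count;
    }
}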