SqlCacheDependency in n-Tier Architecture - asp.net

I have read some articles about SqlCacheDependency. I think it is a really cool way of keeping caches up to date, but I'm not sure how to use this technology when my application has an n-tier architecture.
Is this only useful if my program is a small web application, or is there also a way to use it in large n-tier architectures?

You can create your own ICacheDependency interface and use a factory class to give you the appropriate object. This way neither your DAL nor your BL needs to reference the System.Web namespace. You can put this factory class in a common tier and reference it in the UI layer.
MS PetShop 4 used something like this; you may want to follow that approach.
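A minimal sketch of that idea (ICacheDependencyFactory and SqlServerCacheDependencyFactory are illustrative names; the actual PetShop code is organized differently):

    // Common tier: the abstraction the DAL/BL can use without touching System.Web.
    public interface ICacheDependencyFactory
    {
        object CreateDependency(string databaseEntryName, string tableName);
    }

    // Web tier: concrete factory that hands back a real SqlCacheDependency.
    public class SqlServerCacheDependencyFactory : ICacheDependencyFactory
    {
        public object CreateDependency(string databaseEntryName, string tableName)
        {
            // databaseEntryName must match an entry under <sqlCacheDependency> in web.config.
            return new System.Web.Caching.SqlCacheDependency(databaseEntryName, tableName);
        }
    }

The UI layer, which already references System.Web, casts the result back to CacheDependency when it inserts items into the cache; the lower tiers only ever see the interface.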

In this case, you would need to have your DAL return an object that derives from the CacheDependency abstract class and does the same thing as SqlCacheDependency, but optimized for your DAL.
This is, of course, a failure of separation of concerns, but if you need the dependency, it's the best way to go.
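A rough sketch of what such a DAL-aware dependency could look like (MyDataSource and its DataChanged event are stand-ins for whatever change notification your DAL actually provides):

    using System;
    using System.Web.Caching;

    // Stand-in for a DAL type that can signal data changes (assumption).
    public class MyDataSource
    {
        public event EventHandler DataChanged;
        public void RaiseDataChanged()
        {
            var handler = DataChanged;
            if (handler != null) handler(this, EventArgs.Empty);
        }
    }

    public class DalCacheDependency : CacheDependency
    {
        public DalCacheDependency(MyDataSource source)
        {
            source.DataChanged += OnDataChanged;  // subscribe to the DAL's notification
            FinishInit();                         // complete setup when using the protected base constructor
        }

        private void OnDataChanged(object sender, EventArgs e)
        {
            // Tells the ASP.NET cache that this dependency has changed,
            // which evicts any cache entries built on it.
            NotifyDependencyChanged(this, e);
        }
    }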

Related

Should the UI reference the Repository?

I have four assemblies: UserInterface, BusinessLogic, DataAccess, and Common.
The UserInterface references the Repository that lives in DataAccess; is that bad practice? Should I create pass-through methods in the BusinessLogic so that the UserInterface is not coupled to the DataAccess assembly?
Even in cases where the BusinessLogic method does nothing but call the relevant Repository method?
Or am I being pedantic?
Rather than think of the UI talking to the repository, think of implementations depending on abstractions. In this instance the UI depends on IRepository; how IRepository is implemented doesn't matter.
Putting this all into separate assemblies is overkill. Just use namespaces to segregate your code; it will be much easier to maintain.
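Put differently, the UI codes against something like this (a minimal sketch; ICustomerRepository and Customer are placeholder names):

    // The abstraction lives with the domain types, not in the DataAccess assembly.
    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public interface ICustomerRepository
    {
        Customer GetById(int id);
        void Save(Customer customer);
    }

    // UI-side consumer: depends only on the abstraction; the concrete
    // DataAccess implementation is supplied at runtime (e.g. by an IoC container).
    public class CustomerScreen
    {
        private readonly ICustomerRepository _repository;

        public CustomerScreen(ICustomerRepository repository)
        {
            _repository = repository;
        }

        public string GetCustomerName(int id)
        {
            return _repository.GetById(id).Name;
        }
    }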
If you are trying to do Domain-Driven Design then please understand the role of a repository before you think of using it in the UI. There is a very nice explanation here: http://devlicio.us/blogs/casey/archive/2009/02/20/ddd-the-repository-pattern.aspx
I think you are missing a layer: an Entities layer or Data Transfer layer.
It's definitely not good practice; the UI should know nothing about your DAL, and that's why you have your business layer.
I think you should do it the classic way, UI - BL - DAL, and the same in reverse, always using Data Transfer Objects between these layers: transfer objects from the UI to the BL, from the BL to the DAL, and back the same way.
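A minimal sketch of that flow (CustomerDto, CustomerService and CustomerDataAccess are placeholder names):

    // Data Transfer Object shared by all layers: data only, no behaviour.
    public class CustomerDto
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    // Business layer: the only thing the UI talks to.
    public class CustomerService
    {
        private readonly CustomerDataAccess _dal = new CustomerDataAccess();

        public CustomerDto GetCustomer(int id)
        {
            // Validation/business rules would go here before or after the DAL call.
            return _dal.LoadCustomer(id);
        }
    }

    // Data access layer: maps database rows into DTOs (stubbed here).
    public class CustomerDataAccess
    {
        public CustomerDto LoadCustomer(int id)
        {
            return new CustomerDto { Id = id, Name = "example" };
        }
    }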
I think the main reason for a layered structure is 'Separation of Concerns'. SoC basically promotes loose coupling, so having the UI reference the DAL is not a good thing.
On the other hand, the UI should take care of user interaction (not direct calls to the DAL), the BL should take care of validation and call DAL methods, and the DAL is the final step: it can validate the data with respect to SQL concerns and then handle the SQL statements.

Asp.Net MVC and Entity Framework Architecture

I'm working on a fairly large project at the moment and am currently in the planning stages. I've done a lot of reading into the various patterns suggested for development. Something that has split the team at the moment is this: when using Entity Framework, should the entity classes be passed through the application layers so that a view accepts an Entity Framework class, or should these classes be mapped to BLL classes, and if so, at which point (controller or library) should that be done?
I'm interested in hearing some positives and negatives for each solution.
This is one of those great "it depends" questions ....
For me it's a matter of pragmatism. I use the raw entity classes wherever I can for expediency. I start using DTOs when either the object graph in question starts becoming too cumbersome or the object in question has sensitive data I don't want sent over the wire.
This is again one of those questions that doesn't really have a right or wrong answer; it's personal taste, really. Personally I would opt for using DTOs or interfaces when passing data to the views. I don't tend to pass entity objects around to different layers of my application; they are strictly confined to the DAL, and if I do need to pass one up a layer I would almost always use an interface, never the concrete type.
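For example, mapping an EF entity to a view-facing DTO inside the controller might look roughly like this (Product, ProductViewModel and the loading stub are placeholders, not a prescribed design):

    using System.Web.Mvc;

    // EF entity (normally generated/mapped by Entity Framework).
    public class Product
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public decimal Cost { get; set; }        // internal field we do not want in the view
    }

    // View model: only the fields the view needs; no EF types leak past the controller.
    public class ProductViewModel
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public class ProductController : Controller
    {
        public ActionResult Details(int id)
        {
            Product entity = LoadProduct(id);     // however you query EF; stubbed below
            var model = new ProductViewModel { Id = entity.Id, Name = entity.Name };
            return View(model);                   // the view binds to the DTO, not the entity
        }

        private Product LoadProduct(int id)
        {
            // Placeholder for the actual EF query (e.g. context.Products.Find(id)).
            return new Product { Id = id, Name = "example", Cost = 9.99m };
        }
    }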

Best Practices: What to use Reflection for?

I was toying with the idea of allowing each module to be declared with a class in a properties file; something like
availableModules.properties
Contact=org.addressbook.ContactMain
Business=org.addressbook.BusinessMain
Notes=org.addressbook.Notes
...
My framework will use reflection to instantiate the relevant modules, and thereafter call methods on the relevant base classes, or pass the objects as parameters as required.
Is the above a good place to use reflection?
Are there any best practices on where to use reflection already posted on SO (I couldn't locate one)? Could we start a list along those lines with any responses posted here?
EDIT
Here's another example of the kind of scenarios I have in mind.
Some core code needed to determine the point of call.
One application I saw achieved this by using reflection, another application used an exception. Would you deem the former to be a recommended scenario where reflection may be applied?
For a great framework supporting your idea, have a look at the IoC container of the Spring framework.
"Is the above a good place to use reflection?"
I'd say no. If you want to do this kind of thing, you should probably be using one of the (many) existing mature frameworks that support Inversion of Control aka Dependency injection. Spring IOC is the most popular one, but there are many others. Google for "ioc framework java".
Underneath the hood, these frameworks most likely use reflection. But that doesn't mean you should reinvent the wheel.
I usually use reflection when I want to dynamically use a class whose information (assembly name, class name, method name, method parameters, etc.) is stored as strings (in text files or a database).
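In .NET terms that usually boils down to something like the following (the assembly, type and method names here are purely illustrative):

    using System;
    using System.Reflection;

    class ReflectionExample
    {
        static void Main()
        {
            // Assume these strings were read from a config file or database.
            string assemblyName = "MyCompany.Modules";
            string typeName     = "MyCompany.Modules.ContactModule";
            string methodName   = "Start";

            // Load the assembly, create the instance, and invoke the method by name.
            Assembly assembly = Assembly.Load(assemblyName);
            object module = Activator.CreateInstance(assembly.GetType(typeName, throwOnError: true));
            MethodInfo method = module.GetType().GetMethod(methodName);
            method.Invoke(module, null);
        }
    }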

Exposing Entities Outside The Assembly

I'm looking for opinions on best practices with regards to passing entities beyond assembly boundaries. I'm using Linq-To-SQL, but the same question would apply to Entity Framework, NHibernate, etc.
I have an assembly that I want to reuse in multiple projects. In it there are several entities which I have so far kept internal; however, I am finding it would be beneficial to return a list of the entities to the caller. Should I create a new class to encapsulate the data, or should I just expose the entity itself?
For example, let's say I have an Address entity. Would it be better to have a method GetAddress(...) that returns the Address entity, or should I create another class with the same properties to expose the Address data?
Thanks!
One vote for just exposing the entities. In practice, the reasons for hiding the entities behind DTOs end up not really being relevant. For example, when was the last time you ripped out your internal data access layer for something entirely different that would have caused you to lose the auto-generated entity classes?
Plus you get to save time by avoiding the painful mapping exercise that ensues when you only expose DTOs. IMO, having an automatic mapping tool that uses reflection or something doesn't count as no pain because now you pay in performance what you would otherwise pay in tedium.
You might want to consider using a Repository to expose the Entities to outside assemblies. Here is a great CodeProject article on a generic Repository that can be used with EF.
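The general shape of such a repository is roughly this (a minimal sketch, not the article's implementation):

    using System.Collections.Generic;

    // The consuming assembly codes against this interface; TEntity can be
    // Address or any other entity you decide to expose.
    public interface IRepository<TEntity> where TEntity : class
    {
        TEntity GetById(int id);
        IEnumerable<TEntity> GetAll();
        void Add(TEntity entity);
        void Remove(TEntity entity);
    }

Whether the repository hands back the mapped entities themselves or DTOs then becomes a single, contained decision rather than one scattered across every query.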

What are the downsides to static methods?

What are the downsides to using static methods in a web site business layer versus instantiating a class and then calling a method on the class? What are the performance hits either way?
The performance differences will be negligible.
The downside of using a static method is that it becomes less testable. When dependencies are expressed in static method calls, you can't replace those dependencies with mocks/stubs. If all dependencies are expressed as interfaces, where the implementation is passed into the component, then you can use a mock/stub version of the component for unit tests, and then the real implementation (possibly hooked up with an IoC container) for the real deployment.
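For example (illustrative types; OrderService and IPaymentGateway are not from any particular framework):

    // Hard to test: the dependency is a static call, so it cannot be swapped out.
    public static class PaymentGateway
    {
        public static void Charge(decimal amount) { /* talks to the real service */ }
    }

    public class OrderServiceStatic
    {
        public void Checkout(decimal amount)
        {
            PaymentGateway.Charge(amount);   // always hits the real gateway
        }
    }

    // Testable: the dependency is an interface passed in, so a test can supply a fake.
    public interface IPaymentGateway
    {
        void Charge(decimal amount);
    }

    public class OrderService
    {
        private readonly IPaymentGateway _gateway;

        public OrderService(IPaymentGateway gateway)
        {
            _gateway = gateway;
        }

        public void Checkout(decimal amount)
        {
            _gateway.Charge(amount);         // real implementation in production, stub in tests
        }
    }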
Jon Skeet is right--the performance difference would be insignificant...
Having said that, if you are building an enterprise application, I would suggest using the traditional tiered approach espoused by Microsoft and a number of other software companies. Let me briefly explain:
I'm going to use ASP.NET because I'm most familiar with it, but this should easily translate into any other technology you may be using.
The presentation layer of your application would be comprised of ASP.NET aspx pages for display and ASP.NET code-behinds for "process control." This is a fancy way of talking about what happens when I click submit. Do I go to another page? Is there validation? Do I need to save information to the database? Where do I go after that?
The process control is the liaison between the presentation layer and the business layer. This layer is broken up into two pieces (and this is where your question comes in). The most flexible way of building this layer is to have a set of business logic classes (e.g., PaymentProcessing, CustomerManagement, etc.) that have methods like ProcessPayment, DeleteCustomer, CreateAccount, etc. These would be static methods.
When the above methods get called from the process control layer, they would handle all the instantiation of business objects (e.g., Customer, Invoice, Payment, etc.) and apply the appropriate business rules.
Your business objects are what would handle all the database interaction with your data layer. That is, they know how to save the data they contain...this is similar to the MVC pattern.
So--what's the benefit of this? Well, you still get testability at multiple levels. You can test your UI, you can test the business process (by calling the business logic classes with the appropriate data), and you can test the business objects (by manually instantiating them and testing their methods). You also know that if your data model or objects change, your UI won't be impacted, and only your business logic classes will have to change. Also, if the business logic changes, you can change those classes without impacting the objects.
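A compressed sketch of the shape being described (the class and method names are only examples):

    using System;

    // Business logic layer: stateless entry points called from the code-behind.
    public static class PaymentProcessing
    {
        public static void ProcessPayment(int customerId, decimal amount)
        {
            // Instantiate the business objects and apply the business rules.
            var customer = new Customer(customerId);
            var payment = new Payment(customer, amount);

            if (!payment.IsValid())
                throw new InvalidOperationException("Payment failed validation.");

            payment.Save();   // the business object handles its own persistence
        }
    }

    // Business object: owns its rules and its database interaction.
    public class Payment
    {
        private readonly Customer _customer;
        private readonly decimal _amount;

        public Payment(Customer customer, decimal amount)
        {
            _customer = customer;
            _amount = amount;
        }

        public bool IsValid() { return _customer != null && _amount > 0; }
        public void Save()    { /* write to the data layer */ }
    }

    public class Customer
    {
        public Customer(int id) { Id = id; }
        public int Id { get; private set; }
    }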
Hope this helps a bit.
Performance-wise, using static methods avoids the overhead of object creation/destruction. This is usually not significant.
They should be used only where the action the method takes is not related to state, for instance, for factory methods. It'd make no sense to create an object instance just to instantiate another object instance :-)
String.Format() and the TryParse() and Parse() methods are all good examples of when a static method makes sense. They always do the same thing, do not need state, and are common enough that creating an instance would make less sense.
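A trivial example of that kind of stateless, static creation method (Money is a hypothetical type, not from the BCL):

    public class Money
    {
        public decimal Amount { get; private set; }
        public string Currency { get; private set; }

        private Money(decimal amount, string currency)
        {
            Amount = amount;
            Currency = currency;
        }

        // Static factory: no Money instance is needed just to create one,
        // and the method depends only on its arguments, not on shared state.
        public static Money FromDecimal(decimal amount, string currency)
        {
            return new Money(amount, currency);
        }
    }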
On the other hand, using them when it does not make sense (for example, having to pass all the state into the method, say, with 10 arguments) makes everything more complicated, less maintainable, less readable and less testable, as Jon says. I don't think it matters whether this is in the business layer or anywhere else in the code; just use them sparingly and only when the situation justifies them.
If the method uses static data, this will actually be shared amongst all users of your web application.
Code-wise, there are no real problems beyond the usual issues with static methods in any system:
Testability: static dependencies are less testable
Threading: you can have concurrency problems
Design: static variables are like global variables
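To make the threading and shared-state points above concrete, here is a deliberately broken sketch (BasketHelper is a made-up example):

    public static class BasketHelper
    {
        // Static field: one copy per application, shared by every request/user.
        private static int _itemCount;

        public static void AddItem()
        {
            // Two concurrent requests can interleave here (read, increment, write),
            // so counts can be lost and one user's data bleeds into another's.
            _itemCount = _itemCount + 1;
        }

        public static int ItemCount
        {
            get { return _itemCount; }
        }
    }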
