I'm looking for the most comprehensive mock replacement and wrapper available for the ASP.NET HttpContext in my applications. A comprehensive mock replacement could substantially increase the testability of my ASP.NET web applications without requiring me to migrate every application to a more testable framework such as MVC.
Some of the features I am most interested in seeing in an HttpContext wrapper and mock framework include:
Serialized session storage (e.g., .Session).
Serialized application-scoped storage (e.g., .Application).
Per-request item storage (e.g., .Items).
HttpRequest data, such as referrers, request Uri, server variables, post data, etc.
HttpResponse data, such as status codes and content.
Local file resolution (e.g., Server.MapPath).
VirtualPathUtility for application-relative URL path resolution, which has a dependency on the ASP.NET runtime.
The identity and principal (e.g., .User) for validating authentication/authorization rules.
The AllErrors collection for testing error resolution in HttpModules and Global.asax.
I considered writing my own interface, wrapper, and mock; however, I believe one must already exist. The variety of mock frameworks is a little overwhelming for a first-timer to absorb.
What is the most comprehensive HttpContext wrapper and mock that you would recommend?
My company has done well with just creating interfaces for all the HTTP objects (IHttpRequest, IHttpResponse, etc.).
It's a bit repetitive, but you basically need all the methods and properties on the interface;
then create a concrete type for each that takes the real type as a constructor parameter and passes all methods and properties through to the real object.
Then you can test everything with ease using RhinoMocks or whatever.
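For illustration, here is a minimal sketch of that pattern for one slice of the context (the interface and wrapper names are hypothetical; the members mirror System.Web.HttpResponse):

using System.Web;

public interface IHttpResponse
{
    int StatusCode { get; set; }
    void Write(string s);
}

// Pass-through wrapper that forwards everything to the real HttpResponse.
public class HttpResponseWrapper : IHttpResponse
{
    private readonly HttpResponse _response;

    public HttpResponseWrapper(HttpResponse response)
    {
        _response = response;
    }

    public int StatusCode
    {
        get { return _response.StatusCode; }
        set { _response.StatusCode = value; }
    }

    public void Write(string s)
    {
        _response.Write(s);
    }
}

Production code depends only on IHttpResponse, so tests can hand it a mock while the real application hands it the wrapper.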
Check out the Moq framework. This is the mocking framework that the MVC team uses, and it is considered by many (including myself) to have the lowest barrier to entry. Also check out the mocking helpers in the MvcContrib project.
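For a taste of what a test looks like, here is a minimal sketch using Moq against the kind of IHttpResponse interface described in the previous answer (ErrorHandler is a hypothetical class under test):

var response = new Mock<IHttpResponse>();
response.SetupProperty(r => r.StatusCode, 200);

var handler = new ErrorHandler(response.Object);
handler.Handle(new InvalidOperationException());

// Verify the handler set a 500 and wrote something to the response.
Assert.AreEqual(500, response.Object.StatusCode);
response.Verify(r => r.Write(It.IsAny<string>()), Times.Once());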
Controllers are created per request in ASP.NET MVC; this is a fact.
My question is:
What is the best practice for creating common objects (Logger, Localizer, Configurator) that are used in the web application? Possible methods, from my knowledge, are:
Create the object per request then dispose it at the end of the request.
Create the object at the beginning of user session then dispose it at the end of the session.
Create only one object for the entire application and dispose it at application shutdown.
It depends entirely on the scope within which these components are used and on their dependencies. To elaborate:
Logger
A logger is usually alive throughout the entire lifespan of the application and can be used all over the application (scope); the logger doesn't care about the request or the controller (dependencies). Best Lifetime Scope: Application.
LogSource / LogContext
Unlike the logger, LogSource / LogContext are contextual objects that are to be used within a given context (scope); these objects are aware of their context (controller / request) (dependencies). Best Lifetime Scope: Controller / Request.
Localizer
The localizer is a bit less clear-cut than the other components. For one, it is used application-wide (controllers, views, etc.) (scope). But as for its dependencies, the matter is implementation-specific: your localizer may need a CultureInfo object for construction, or it may not. When one is not required, the component has no dependencies and an Application scope will fit.
On the other hand, if a culture is required, then it is a dependency of that component. It seems logical to then scope the localizer to a user session, but this is where performance and the specifics of the dependency come into play. If you scope the localizer to a user session, then each user has to wait for the server to read the localization file before being served a response. Additionally, the dependency is not the user session but the user's culture: while the possible user sessions are unbounded, the possible supported locales are anything but infinite. Keeping that in mind, you can create a localization pool that contains a localizer for every supported locale (and possibly a fallback locale for the ones that aren't supported), scoped to the application, which can serve localizers to user sessions (sketched below).
To sum up: you can apply the scope-dependencies logic to any component, but as we've seen with the localizer, it's always best to understand the component and the implications of tying it to a specific scope.
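A minimal sketch of that pool, assuming a hypothetical Localizer class that loads one locale's resources:

using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Globalization;

public class Localizer
{
    public Localizer(string locale)
    {
        // Load the resource file for this locale (omitted).
    }
}

public class LocalizerPool
{
    // One localizer per supported locale, shared application-wide.
    private readonly ConcurrentDictionary<string, Localizer> _pool =
        new ConcurrentDictionary<string, Localizer>();
    private readonly HashSet<string> _supported;
    private readonly string _fallbackLocale;

    public LocalizerPool(IEnumerable<string> supportedLocales, string fallbackLocale)
    {
        _supported = new HashSet<string>(supportedLocales);
        _fallbackLocale = fallbackLocale;
    }

    public Localizer GetLocalizer(CultureInfo culture)
    {
        // Unsupported cultures fall back to the default locale.
        var key = _supported.Contains(culture.Name) ? culture.Name : _fallbackLocale;
        // Each locale's resources are read at most once for the whole application.
        return _pool.GetOrAdd(key, k => new Localizer(k));
    }
}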
It really depends on 2 things:
The implementation of those common objects, thread safety in particular. For example, if your logger has a singleton life-cycle and it has some static property (a level or a name, say), then modifying it in one request will affect others (and this is most probably unwanted behavior).
The purpose (usage) of those objects. Let's assume you have a LoggerManager, a Configurator, or some kind of factory object that should be initialized only once on application start and is then used to create (retrieve) other objects (for example, an IoC container); then it certainly makes sense to make it a singleton (sometimes it is absolutely required).
From my experience, most objects have either a transient or a per-request life cycle. Regarding singletons (one object for the entire application), it is usually very clear when to use them. I think a good example is a LoggerManager that is used to create Logger instances based on some static configuration, or a Configurator (as you called it) that is initialized on application start and parses some configuration files. My rules of thumb for singletons (see the sketch after this list) are:
they have to be thread-safe
they don't change often
they hold some shared data that is applicable to all threads/requests/users, etc.
they are heavy objects; if they are lightweight, you can create a new object each time (transient life-cycle) and most probably you won't notice any difference in performance.
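To make the thread-safety rule concrete, here is one common shape for such a singleton; LoggerManager and Logger are hypothetical names, and Lazy<T> provides thread-safe one-time initialization:

using System;

public class Logger
{
    public Logger(string name) { Name = name; }
    public string Name { get; private set; }
}

public sealed class LoggerManager
{
    // Lazy<T> defaults to thread-safe (ExecutionAndPublication) initialization.
    private static readonly Lazy<LoggerManager> _instance =
        new Lazy<LoggerManager>(() => new LoggerManager());

    public static LoggerManager Instance
    {
        get { return _instance.Value; }
    }

    private LoggerManager()
    {
        // Parse the static configuration here, exactly once, on first use.
    }

    public Logger GetLogger(string name)
    {
        return new Logger(name);
    }
}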
Regarding the session: for me it is more of a cache, so most of the time it holds some user-related data.
I need to make several properties accessible from the application's business layer. These are some IDs and common settings. Most of them are valid only for the request-response lifespan.
This is a web application (ASP.NET Web Forms to be specific) with dependency injection set up.
Currently those properties are passed through method parameters directly to business layer's services. That works but is not very efficient since:
sometimes parameters' values need to be passed deeper, obscuring readability a bit
some properties should be lazy resolved, and this should be done only once per request
retrieving properties that are resolved by touching a database can be confusing for new developers (there is no unified way of doing this)
some services are constructed by a factory which enriches them with some config parameters
I was thinking about introducing an application context interface, with an implementation in the main project, which would be created on every request. It could be injected into the services directly, making them parameterized automatically and independently (the services wouldn't need the factory anymore).
Is it how this problem should be tackled or maybe there are some other options?
One thing I don't like here is that it might bind the main project to the business layer, which is not a perfect example of The Clean Architecture.
I'd say your solution is a very common one: inject an 'application context' into your classes. One thing I would be careful of, though, is making sure you are following the Interface Segregation Principle (from SOLID). Don't just start making all your classes expect an application context instance. Instead, design interfaces that split the application context up, and have your classes expect those as dependencies. Your application context will then need to implement all the interfaces.
This is the correct way to do things, as it decouples your classes from the implementation. Really, your classes don't care if their dependency comes from one giant application context; they just care about the specific methods it implements. This will make your code more robust, as you reduce the risk of breaking something if you change the implementation of the application context later on.
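A sketch of what that split might look like (all names here are invented for illustration):

// Narrow interfaces, one per concern.
public interface ICurrentUserProvider
{
    int CurrentUserId { get; }
}

public interface ICommonSettings
{
    string Theme { get; }
}

// One request-scoped object implements them all.
public class ApplicationContext : ICurrentUserProvider, ICommonSettings
{
    public int CurrentUserId { get; set; }
    public string Theme { get; set; }
}

// A service depends only on the slice it actually needs.
public class InvoiceService
{
    private readonly ICurrentUserProvider _user;

    public InvoiceService(ICurrentUserProvider user)
    {
        _user = user;
    }
}

Register ApplicationContext per-request in your container and map each interface to that same instance; InvoiceService never learns that a bigger context exists.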
Why don't you use some dependency injection container? Your global settings and parameters can be registered to it as pseudo-singletons and then you will be able to neatly request them from any point inside your application.
I have had a read on what's new in .NET 4.6, and one of the things is ASP.NET 5, which I am quite excited about.
One of the new things is the new modular HTTP request pipeline; however, there is no more info on how exactly it is going to change.
The only reference in the article is
ASP.NET 5 introduces a new HTTP request pipeline that is lean and fast. This pipeline is modular so you can add only the components that you need. By reducing the overhead in the pipeline, your app will experience better throughput. The new pipeline also supports OWIN.
What are the major differences between the ASP.NET 4.5 and ASP.NET 5 HTTP pipelines? How will modularity be controlled?
The biggest difference in my opinion is the modularity of the new request pipeline. In the past, the application lifecycle followed a relatively strict path that you could hook into via classes implementing IHttpModule. This would allow you to affect the request, but only at certain points along the way by subscribing to the different events that occur (e.g. BeginRequest, AuthenticateRequest, etc.).
The full descriptions of these can be found on MSDN: IIS 5 & 6 or IIS 7, and a walkthrough of creating such a module can be found here.
In the new ASP.NET 5 world, the request pipeline is decoupled from System.Web and IIS. Instead of a pre-defined path, it uses the concept of middleware. If you are familiar with OWIN, the idea is nearly identical, but the basic idea is that these Middleware Components are registered and then the request passes through them in the order that they are registered.
Each middleware component is provided a RequestDelegate (the next middleware component in the pipeline) and the current HttpContext per-request. On each request, the component is invoked, and then has the opportunity to pass the request along to the next in the chain if applicable. For example, an authentication component might opt not to pass the request along to the next component if authentication fails. Using this system, you can really handle a request any way you choose, and can be as light-weight or as feature-rich as you need it to be.
This example is a little bit dated now (e.g. IBuilder has been renamed to IApplicationBuilder), but it is still a great overview of how building and registering these components looks.
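For a flavor of the shape of such a component, here is a minimal sketch (the types come from the beta-era Microsoft.AspNet.Http packages and may still change; the API-key check is just an invented example):

using System.Threading.Tasks;
using Microsoft.AspNet.Builder;
using Microsoft.AspNet.Http;

public class ApiKeyMiddleware
{
    private readonly RequestDelegate _next;

    public ApiKeyMiddleware(RequestDelegate next)
    {
        _next = next;
    }

    public async Task Invoke(HttpContext context)
    {
        if (!context.Request.Headers.ContainsKey("X-Api-Key"))
        {
            // Short-circuit: the next component is never called.
            context.Response.StatusCode = 401;
            return;
        }
        await _next(context);
    }
}

// Registered in Startup.Configure, in the order it should run:
//   app.UseMiddleware<ApiKeyMiddleware>();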
I've read through this article trying to understand why you would want a session bean in between the client and the entity bean. Is it because, by letting the client access the entity bean directly, you would let the client know everything about the database?
So by having a middleman (the session bean), you would let the client know only part of the database, by implementing the business logic in a certain way. Only the part of the database that is relevant to the client is visible, which possibly also increases security.
Is the above statement true?
Avoiding tight coupling between the client & the business objects, increasing manageability.
Reducing fine-grained method invocations minimizes method calls over the network, providing coarse-grained access to clients.
Can have centralized security & transaction constraints.
Greater flexibility & ability to cope with changes.
Exposing only what is required & providing a simpler interface to the clients, hiding the underlying complexity, inner details, and interdependencies between business components.
The article you cite is COMPLETELY out of date. Check the date, it's from 2002.
There is no such thing anymore as an entity bean in EJB (they are currently retained for backwards compatibility, but are on the verge of being purged completely). Entity beans were awkward things: a model object (e.g. Person) that lives completely in the container, and where access to every property of it (e.g. getName, getAge) required a remote container call.
In this day and age, we have JPA entities that are POJOs and contain only data. Don't confuse a JPA entity with this ancient EJB entity bean. They sound similar but are completely different things. JPA entities can be safely sent to a (remote) client. If you are really concerned that the names used in your entity reveal your DB structure, you could use XML mapping files instead of annotations and use completely different names.
That said, session beans can still perfectly be used to implement the Facade pattern if that's needed. This pattern is indeed used to give clients a simplified and often restricted view of your system. It's just that the idea of using session beans as a Facade for entity beans is completely outdated.
It is to simplify the work of the client. The Facade presents a simple interface and hides the complexity of the model from the client. It also makes it possible for the model to change without affecting the client, as long as the facade does not change its interface.
It decouples the application logic from the business logic.
So the actual data structures and implementation can change without breaking existing code utilizing the APIs.
Of course, it hides the data structure from "unknown" applications if you expose your beans to external networks.
We need to expose some services (i.e. AddressValidatorService, CustomerFinderService) that currently reside in an ASP.NET application to other applications within our organization. Exposing these services via WCF seems like a natural fit, but I don't see any best-practices for how to pull these common services into a WCF wrapper in such a way that my existing ASP.NET application can continue to use them with minimal code changes and/or awareness that the service they are consuming is no longer in-process.
I'm especially looking for recommendations on how to structure the existing ASP.NET solution and whether to host our new WCF in the same solution or in some new shared WCF solution referenced by both our ASP.NET application and external callers.
Also, is it bad practice to simply promote the DTOs currently only consumed in-process via ASP.NET to full fledged data contracts or is it preferable to create duplicate DTOs that are explicitly decorated with [DataContract]? The latter seems like a maintenance nightmare.
To answer your second question:
Also, is it bad practice to simply promote the DTOs currently only consumed in-process via ASP.NET to full fledged data contracts or is it preferable to create duplicate DTOs that are explicitly decorated with [DataContract]? The latter seems like a maintenance nightmare.
It is considered a bad practice to expose your business model as WCF contracts. So if your DTOs are replicas of your domain model then it would be a strict no-no, because
1. any change in the model would directly affect the contracts and hence all the clients using them
2. you would be exposing your business "know-how" to the outside world.
The latter can tend to get difficult for any evolving system, but then you have various open source tools (like AutoMapper) that ease your mapping nightmares.
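Here is a small sketch of the duplicate-DTO approach with AutoMapper smoothing over the copying (Customer and CustomerContract are invented names; Mapper.CreateMap is AutoMapper's classic static configuration API):

using System.Runtime.Serialization;
using AutoMapper;

// Internal domain model: free to evolve.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string InternalTaxCode { get; set; }  // never exposed
}

// Wire contract: exposes only what clients need.
[DataContract]
public class CustomerContract
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
}

// One-time configuration, e.g. at application start:
//   Mapper.CreateMap<Customer, CustomerContract>();
// In the service implementation:
//   CustomerContract contract = Mapper.Map<CustomerContract>(customer);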
You can convert an existing project to WCF, then continue to use it in-process via a project reference. It can then be consumed by an external source using the WCF client. A WCF client changes the class name from ClassName to ClassNameClient when consumed over WCF, but the class will function pretty much the same.
For example:
MyClass obj = new MyClass();
obj.DoSomething(withData);
Would become:
MyClassClient obj = new MyClassClient();
obj.DoSomething(withData);
You would publish the WCF project to some endpoint, like address.example.com, then use a service reference to the endpoint to reference the code, like a project reference, in your other projects.
Note that while the externally referencing projects would not be impacted by the change or know that the data is going over the network, if you have chatty calls to the project in question, it will definitely take a performance hit. You may want to consolidate related methods into single methods to save on round-tripping.
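For example (the method names here are invented), instead of this chatty sequence:

// Three network round trips.
var name = client.GetCustomerName(id);
var address = client.GetCustomerAddress(id);
var terms = client.GetPaymentTerms(id);

expose a single coarse-grained operation:

// One round trip returning a composite data contract.
var summary = client.GetCustomerSummary(id);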
If these are exposed as static page services, there's no magic wrapper -- you're going to need to move code to a standalone service implementation class and put a .svc file in front of it. (Or use WCF4 fileless activation, or a service factory, but that's getting a bit away from the core question here.)
If these are exposed as ASMX, you can actually put an ASMX facade in front of a WCF service class and get basic HTTP/XML/ASMX responses as you would from your legacy ASMX web services. You can expose that same WCF service class through standard WCF configuration for non-legacy consumers.
Finally, you can expose any WCF service over basicHttpBinding with serviceMetadata + httpGetEnabled, and you'll get a service endpoint usable by legacy consumers of an ASMX service.
http://msdn.microsoft.com/en-us/library/ms751433.aspx
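If you prefer doing this in code rather than in config, the equivalent setup looks roughly like this (the service and contract names are invented; AddressValidatorService echoes the service named in the question):

using System;
using System.ServiceModel;
using System.ServiceModel.Description;

var host = new ServiceHost(typeof(AddressValidatorService),
    new Uri("http://localhost:8080/validator"));

// basicHttpBinding endpoint, consumable by legacy ASMX-style clients.
host.AddServiceEndpoint(typeof(IAddressValidatorService),
    new BasicHttpBinding(), "");

// serviceMetadata with httpGetEnabled so clients can fetch the WSDL.
host.Description.Behaviors.Add(
    new ServiceMetadataBehavior { HttpGetEnabled = true });

host.Open();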