In short: we keep stumbling on BDD scenario definitions that more or less require the application to be in different states, which leads to the need for a mock of sorts for ASP.NET/MVC. I know of none, which is why I'm asking here.
Details:
We are developing a project in ASP.NET (MVC3/Razor engine) and are using SpecFlow to drive our development.
We quite often stumble into situations where we need the webpage under test to perform in a certain manner so that we can verify the behavior, for example:
Scenario: Should render alternatively when backend system is down
Given that the backend system is down
And there are no channels for the page to display
When I inspect the webpage under test
Then the page renders alternative HTML indicating that there is a problem
For a unit test, this is less of an issue - mock the controller's dependencies and verify that it delivers the correct results. A SpecFlow test, however, more or less requires alternate configurations.
So is it possible at all - or are there some known software patterns for developing webpages using BDD that I've missed?
Even when using SpecFlow, you can still use a mocking framework. What I would do is use the [BeforeScenario] attribute to set up the mocks for the test, e.g.:
[BeforeScenario]
public void BeforeShouldRenderAlternatively()
{
// Do mock setups.
}
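As a minimal sketch, assuming Moq as the mocking framework - IBackendSystem here is a hypothetical abstraction that your controllers would take as a dependency, not something from SpecFlow itself:

using System;
using Moq;
using TechTalk.SpecFlow;

[Binding]
public class BackendDownSteps
{
    // Hypothetical abstraction over the backend system.
    private Mock<IBackendSystem> _backendMock;

    [BeforeScenario("BackendDown")]
    public void SetupBackendDownMocks()
    {
        _backendMock = new Mock<IBackendSystem>();

        // Simulate the outage: any attempt to fetch channels fails.
        _backendMock.Setup(b => b.GetChannels())
                    .Throws(new InvalidOperationException("backend down"));

        // Make the mock available to later steps / the composition root.
        ScenarioContext.Current.Set(_backendMock.Object);
    }
}

Tagging the scenario with @BackendDown makes this hook run only for the scenarios that need the outage state.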
This SO question might come in handy for you also.
You could use Deleporter
Deleporter is a little .NET library that teleports arbitrary delegates into an ASP.NET application in some other process (e.g., hosted in IIS) and runs them there.
It lets you delve into a remote ASP.NET application’s internals without any special cooperation from the remote app, and then you can do any of the following:
Cross-process mocking, by combining it with any mocking tool. For example, you could inject a temporary mock database or simulate the passing of time (e.g., if your integration tests want to specify what happens after 30 days or whatever)
Test different configurations, by writing to static properties in the remote ASP.NET appdomain or using the ConfigurationManager API to edit its entries.
Run teardown or cleanup logic such as flushing caches. For example, recently I needed to restore a SQL database to a known state after each test in the suite. The trouble was that the ASP.NET connection pool was still holding open connections to the old database, causing connection errors. I resolved this easily by using Deleporter to issue a SqlConnection.ClearAllPools() command in the remote appdomain – the ASP.NET app under test didn't need to know anything about it.
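If I remember the API correctly (treat this as a hedged sketch rather than gospel), the core call is Deleporter.Run, which ships your delegate across to the remote appdomain:

// Executes the lambda inside the remote ASP.NET appdomain (e.g. under IIS).
// Types referenced inside the lambda must be deployed to both processes.
Deleporter.Run(() =>
{
    // The connection-pool example from above: clear pooled connections
    // after restoring the test database to a known state.
    System.Data.SqlClient.SqlConnection.ClearAllPools();
});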
Related
We have an environment with a vendor-deployed application on several front ends. It makes heavy use of ASP.NET storage (Session, Application, and Cache). The problem is that under load this environment quickly brings IIS to its knees with the amount of data it's trying to keep in memory.
The solution we are trying to go with is to override the storage mechanism and implement our own. (Specifically a Redis server to manage the storage)
We have implemented their cache interface and set up Microsoft.Web.Redis.RedisSessionStateProvider in the web.config to manage the session. That part all works fine. The problem is that the caching inside the vendor application does not always use their provided interface. Decompiling the dll and examining dump files show that there are several instances of them directly calling (for example):
HttpContext.Current.Cache.Insert(...) and HttpContext.Current.Application[...] = ...
Is there any way we can override the HttpContext.* calls so that they'll use Redis to cache instead of the ASP.NET application storage?
When it is a "3rd" party which uses HttpContext.Current you probably have no chance to change that behavior.
Is this other application running within your context (do you control the app domain). Or is it a standalone application?
I once tried to change HttpContext.Current.Cache for unit testing and ended up mocking the whole HttpContext because it was so very internal somewhere in the Microsoft stack.
All this is pretty hard to do, not really recommended and can lead to all kinds of other errors.
In short, don't use HttpContext.Current.Cache. Use something you can inject.
In general, libraries should never use that static context.
It is much more flexible to have abstractions + DI for those kinds of things...
For caching, you could use CacheManager for example.
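As a minimal sketch of that "inject an abstraction" idea (ICache and the class names here are illustrative, not from any particular library):

using System;
using System.Web;
using System.Web.Caching;

// The code under test depends only on this interface...
public interface ICache
{
    T Get<T>(string key);
    void Set(string key, object value, TimeSpan slidingExpiration);
}

// ...while the production implementation wraps the ASP.NET cache.
public class AspNetCache : ICache
{
    public T Get<T>(string key)
    {
        return (T)HttpRuntime.Cache.Get(key);
    }

    public void Set(string key, object value, TimeSpan slidingExpiration)
    {
        HttpRuntime.Cache.Insert(key, value, null,
            Cache.NoAbsoluteExpiration, slidingExpiration);
    }
}

In tests you inject a dictionary-backed fake, and in your scenario you could swap in a Redis-backed implementation - but that only helps for code you control, not for the vendor's direct HttpContext.Current.Cache calls.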
I prefer to keep my handlers free from ASP.NET infrastructure that is very hard to test (yes, even in ASP.NET Core). But sometimes it happens and you have a dependency like UserManager (I'd like to know one day why it's not an interface), HttpContext, etc., and unit tests turn into a mocking hell.
I tried using integration testing to deal with it by creating a TestServer and having all the ASP.NET infrastructure initialized for every API call. It works quite well but sometimes seems like overkill if I want to test simple logic in my handler. And while it solves the technical problem of mocking ASP.NET infrastructure, it keeps the architectural problem (if you consider it one) of having ASP.NET infrastructure in your handlers.
What are the recommended approaches to deal with this?
I feel your pain. I stumbled across a fantastic blog post from Jimmy Bogard that handles this problem by using what Martin Fowler calls Subcutaneous Tests. I will leave the deep explanation to those experts, but in a nutshell, subcutaneous tests simply avoid all the difficult-to-test aspects of the UI.
Shameless plug: I am currently in the process of writing up a wiki that demonstrates these patterns in a sample end-to-end project on GitHub. It's not difficult to follow but is probably too much code to post in an SO answer.
To Summarize:
If you are using MediatR correctly your controllers should be very thin which makes testing them pointless.
What you want to test are your handlers.
However, you want to test your handlers as part of your real world pipeline.
To Solve:
Wrap the HTTP request in a transaction.
Build a test fixture that mimics the application's Startup.cs.
Set up a test database server to execute queries and commands against, which is also reset after each test.
That's basically it. Every time you run an integration test against one of your handlers:
The hosting environment is mocked, but your application is started up as it would be in the real world.
Your query or command is wrapped in a transaction mimicking your DbContext.
The handler is executed against a real database, which is then reset.
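To give a flavour, here is a rough sketch only - the fixture wiring and names like SendAsync and AppDbContext are illustrative, not the blog post's exact API:

using System;
using System.Threading.Tasks;
using MediatR;
using Microsoft.Extensions.DependencyInjection;

public static class TestFixture
{
    // Built once, from the same registrations as the app's Startup.cs,
    // pointed at a dedicated test database.
    private static readonly IServiceProvider Provider = BuildProvider();

    private static IServiceProvider BuildProvider()
    {
        var services = new ServiceCollection();
        // Mirror your real Startup registrations here, e.g.:
        // services.AddDbContext<AppDbContext>(...); services.AddMediatR(...);
        return services.BuildServiceProvider();
    }

    public static async Task SendAsync(IRequest request)
    {
        using (var scope = Provider.CreateScope())
        {
            // AppDbContext is your application's EF context.
            var db = scope.ServiceProvider.GetRequiredService<AppDbContext>();
            using (var tx = await db.Database.BeginTransactionAsync())
            {
                // The command runs through the real MediatR pipeline
                // (behaviors, validators, handler) against a real database.
                var mediator = scope.ServiceProvider.GetRequiredService<IMediator>();
                await mediator.Send(request);
                tx.Commit();
            }
        }
    }
}

A test then just awaits TestFixture.SendAsync(...) with a command and asserts against the database afterwards.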
I would add more code examples to my answer but between the blog post and the wiki I provided, it is much easier to follow the code examples there.
Edit 8/2021:
Stick with the source. Jimmy Bogard keeps the Contoso University project current on his GitHub page. Another great and slightly more advanced example is the modular monolith project by Kamil Grzybek. That is also updated regularly on his GitHub page.
MediatR or no, you should always try to have only very basic pass-it-along logic in your controllers, calling injected business logic classes from there to do the actual work. Because you inject those classes through interfaces, your controllers' dependencies are easily mocked in your unit tests, and the tests can focus on whether they use those interfaces properly and do only the basic work of routing input/output. And your actual business logic becomes even easier to test.
For those classes that are static, for instance for reading the web.config settings, one strategy that I like a lot is to make an interfaced wrapper class around them. While ConfigurationManager is static, I can still just write a regular class with an interface, with methods or properties (preferably semantically named) that read a specific setting from the ConfigurationManager. Now I can easily mock any configured setting (or its absence) in my test by just mocking the interface and setting up different return values.
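A sketch of what I mean (the interface, class, and key names are just for illustration):

using System.Configuration;

public interface IAppSettings
{
    // Semantically named: callers ask for the setting, not the key string.
    int SessionTimeoutMinutes { get; }
}

public class AppSettings : IAppSettings
{
    public int SessionTimeoutMinutes
    {
        get
        {
            // Reads <appSettings> from web.config; the key name is illustrative.
            return int.Parse(ConfigurationManager.AppSettings["SessionTimeoutMinutes"]);
        }
    }
}

In a test, mocking IAppSettings to return a particular value (or to throw) takes one line, and nothing ever touches the static ConfigurationManager.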
I'd say it depends on the level of confidence you want to get in the end. If you want to make sure the whole system works as expected, then integration tests using a TestServer are probably the way to go.
One advantage of MediatR, though, is that it allows you to decouple your business logic from the application using it, which is why, at the very top level - let's say in controllers - there's no logic, just a delegation to the mediator.
That being said, you're right that sometimes your logic needs information from the hosting application. An example would be the user making the request, which is accessible in the HTTP context.
In that case, if you want to avoid having to set up a test HTTP server to test your logic works, you could represent that information in an abstraction and your handler would then take a dependency on that abstraction. Your tests could then mock that dependency while using the real system for everything else.
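For the current-user example, the abstraction could be as small as this (hypothetical names; the ASP.NET Core implementation reads from IHttpContextAccessor):

using System.Security.Claims;
using Microsoft.AspNetCore.Http;

// Handlers depend on this instead of HttpContext.
public interface ICurrentUser
{
    string UserId { get; }
}

public class HttpContextCurrentUser : ICurrentUser
{
    private readonly IHttpContextAccessor _accessor;

    public HttpContextCurrentUser(IHttpContextAccessor accessor)
    {
        _accessor = accessor;
    }

    public string UserId =>
        _accessor.HttpContext?.User?.FindFirst(ClaimTypes.NameIdentifier)?.Value;
}

Register the real implementation in DI for the application; in a handler test you just stub ICurrentUser with a fixed id.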
Does that make sense?
I am creating basic .NET webpages as a UI to invoke a WCF service and display the result. I feel many web developers must have come across this situation before.
Is there any existing tool out there which could take a WSDL as input and generate input fields, either in HTML or in .NET webpages?
There is also a tool named Storm that I have read about, but I don't have direct experience with it.
http://storm.codeplex.com/
Testing a service from a web page is definitely a bad practice.
Depending on your service, there are many tools available, like WcfTestClient, soapUI, WCFStorm, ... but that is also bad usage.
From my point of view, you will never find a better tool than your favorite unit test framework. Neither the test client nor soapUI will create a test that can run in a Continuous Integration scenario.
ASP.NET apps that I've developed (on ASP.NET 2.0) have typically been backed by a database; the great majority of the .NET code on the server loads data in the form of a DataSet or SqlDataReader and uses it to databind something like a DataGrid. The meaningful logic is either database dependent or user interface dependent.
In this context, how should I implement unit tests that would be run by a continuous integration server (probably CruiseControl.NET)? Should I set up a test database connection for it to use to test CRUD operations and more complex SPROCs, or should more logic be contained in the .NET code and not in a SPROC? This becomes more complex when there is structure in the database that the application expects to find (such as a Users table in something I'm writing for a CMS).
Also, what are the best ways to do unit-testing of user interfaces? I've found NUnitASP, which is now abandoned but mentions Selenium and Watir.
Take a look at the MVP pattern (Model View Presenter). This should allow you to isolate the behaviour of your system and unit test it properly.
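A bare-bones sketch of MVP in this setting (IUserListView, IUserRepository, and User are placeholder names; the repository interface is sketched further down):

using System.Collections.Generic;
using System.Data;

// The WebForms page implements this thin interface...
public interface IUserListView
{
    void ShowUsers(IList<User> users);
    void ShowError(string message);
}

// ...while the presenter holds the testable behaviour.
public class UserListPresenter
{
    private readonly IUserListView _view;
    private readonly IUserRepository _repository;

    public UserListPresenter(IUserListView view, IUserRepository repository)
    {
        _view = view;
        _repository = repository;
    }

    public void LoadUsers()
    {
        try
        {
            _view.ShowUsers(_repository.GetAllUsers());
        }
        catch (DataException)
        {
            _view.ShowError("Could not load the user list.");
        }
    }
}

A unit test news up the presenter with two mocks, calls LoadUsers(), and verifies that ShowUsers (or ShowError) was called - no web server involved.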
Also, consider switching to MVC (I would go with Fubu over ASP.NET MVC). This will allow you to test controllers and have a more Rails-like experience.
To automate, I use WatiN (like Watir, but for .NET). On top of it I use StoryTeller (Google "StoryTeller Jeremy Miller") to present what is happening in a more human-readable way and to provide templates for QA to use.
I would highly recommend against putting any business logic in SPROCs. Stay away from them. Look at the repository pattern to abstract away getting and setting data.
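The repository interface referenced in the sketch above might be no more than this (again, placeholder names):

using System.Collections.Generic;

public class User
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// All ADO.NET/SPROC plumbing lives behind this interface; the concrete
// implementation is the only class that knows about connection strings.
public interface IUserRepository
{
    IList<User> GetAllUsers();
    void Save(User user);
}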
Hope this gets you started.
I am writing a web application in ASP.NET 3.5 that takes care of some basic data entry scenarios. There is also a component to the application that needs to continuously poll some data and perform actions based on business logic.
What is the best way to implement the "polling" component? It needs to run and check the data every couple of minutes or so.
I have seen a couple of different options in the past:
The web application starts a background thread that will always run while the web application does. (The implementation I saw started the thread in the Application_Start event.)
Create a Windows service that is always running
What are the benefits to either of these options? Are there additional options?
I am leaning toward a Windows service because it is separate and can run on a different server (more scalable), and there is more control over when it is started/stopped, etc. However, I feel like the compactness of having the "background" logic running in the web application's process might make the entire solution more understandable.
I'd go for the separate Windows service primarily for the reasons you give:
You can run it on a different server if necessary.
You can start and stop it independently of the web site.
I'd also add that it could well have some impact on the performance of the web site itself - something you want to avoid.
The buzz-word here is "separation of concerns". The web site is concerned with presenting the data to the user, the service with checking the integrity of the data.
You can also update the web site and service independently of each other should you need to.
I was going to suggest that you look at a scheduled task and let Windows control when the process runs, but I re-read your question and noted that you wanted the checks to run every couple of minutes. The overhead of starting the process might be too great in this case - though some experimentation would probably prove this one way or the other.
If you use a scheduled task there's also the possibility that you could start the next check before the current one has finished - something you can code for if you're in complete control.
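If you do stay with a service and a couple-of-minutes interval, here is a sketch of that guard against overlapping checks (the interval and class names are illustrative):

using System.ServiceProcess;
using System.Threading;

public class PollingService : ServiceBase
{
    // Fires every two minutes; adjust to taste.
    private readonly System.Timers.Timer _timer = new System.Timers.Timer(120000);
    private int _busy; // 0 = idle, 1 = a check is in progress

    protected override void OnStart(string[] args)
    {
        _timer.Elapsed += (sender, e) =>
        {
            // Skip this tick if the previous check hasn't finished yet.
            if (Interlocked.Exchange(ref _busy, 1) == 1) return;
            try { CheckData(); }
            finally { Interlocked.Exchange(ref _busy, 0); }
        };
        _timer.Start();
    }

    protected override void OnStop()
    {
        _timer.Stop();
    }

    private void CheckData()
    {
        // Poll the data and apply the business rules (application-specific).
    }
}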
Why not just use a console app that has no UI? It can do everything the Windows service can and is much easier to debug and maintain. I would not use a Windows service unless you absolutely have to.
You might find the SQL Server job scheduler sufficient for what you want.
A console application does not do well in this case. I wrote a TAPI application which had to stay in the background and intercept incoming calls, but it only did so once, because the TAPI manager got GCed and was never available for the second incoming call.