I was just wondering how others are going about testing controller actions in ASP.NET MVC. Most of my dependencies are injected into my controllers, so there is not a huge amount of logic in the action methods, but there may be some conditional logic, for example, which I think is unavoidable.
In the past I have written tests for these action methods, mocked the dependencies and tested the results. I have found this to be very brittle and a real PITA to maintain. Having 'Expect' and 'Stub' calls everywhere breaks very easily, but I don't see any other way of testing controller actions.
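Just to make that concrete, a typical test of mine ends up looking something like the sketch below (I've used Moq's strict mocks here purely for illustration; IProductService, ProductController and Product are invented names):

// using Moq; using NUnit.Framework; using System.Collections.Generic; using System.Web.Mvc;
[Test]
public void Index_Returns_View_With_Products()
{
    // Strict mock: any call that was not explicitly set up fails the test,
    // which is exactly what makes these tests brittle when the action changes.
    var service = new Mock<IProductService>(MockBehavior.Strict);
    service.Setup(s => s.GetProducts()).Returns(new List<Product>());

    var controller = new ProductController(service.Object);

    var result = controller.Index() as ViewResult;

    Assert.IsNotNull(result);
    Assert.IsInstanceOf<List<Product>>(result.ViewData.Model);
    service.VerifyAll(); // every setup must have been called
}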
I actually think it might be easier to test some of these manually! Anyone have any suggestions? Perhaps I am missing something here?
Thanks
Imran
I don't think I've written a single test for a controller, because all of my logic is elsewhere. Like you say, the controllers have a minimal amount of code in them, and any logic they do contain is so simple that it doesn't really warrant a whole test strategy.
I prefer to give my models a whole lot of tests, as well as any supporting code like DTLs, data layers, etc.
I think I've seen some people mock up their controllers, pass in the expected models and look at the resulting output, but I'm not sure how much that gives you.
I think if I were to test a controller, I'd only really test the POST actions, to ensure that what is being given to my controller is what I'm getting in my model, as well as testing security. But then I have security in a number of places, at varying levels.
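For what it's worth, if I did test a POST action it would be something along these lines (a rough sketch with Moq; IDinnerRepository, DinnerController and Dinner are made-up names):

// using Moq; using NUnit.Framework; using System.Web.Mvc;
[Test]
public void Create_Post_Saves_The_Posted_Dinner_And_Redirects()
{
    var repository = new Mock<IDinnerRepository>();
    var controller = new DinnerController(repository.Object);
    var posted = new Dinner { Title = "Test dinner" };

    var result = controller.Create(posted) as RedirectToRouteResult;

    // What was given to the controller is what reached the model/repository...
    repository.Verify(r => r.Save(posted), Times.Once());
    // ...and the user was sent somewhere sensible afterwards.
    Assert.IsNotNull(result);
    Assert.AreEqual("Index", result.RouteValues["action"]);
}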
But all of that is integration testing more than functional testing; functional testing I do elsewhere, like I said earlier.
Then again, if it's worth writing, it's worth testing, huh? I guess you need to decide what and where the breakable bits are, and how you want them tested.
In my project there are lots of static methods, and all of them in turn hit the database. I am supposed to write unit tests for the project, but I often get stuck because all the methods are static and they hit the DB. Is there any way to overcome this? Sorry for being abstract in the question, but my concern is how to write unit tests for static methods and for methods that hit the DB. Moq is not useful when the methods are static, and in my project one method also calls another method within the same class, so I cannot mock the inner method as both are in the same class.
The project I'm currently on is a lot worse than what you have described. It is a blueprint of an untestable system. There are a couple of options, I think, but it all depends on your situation.
Write integration tests, which hit the database and test multiple components together. I know this is not ideal, but it at least gives some confidence in the work you do. Then try to refactor your code one small step at a time (be sure to take baby steps) and write unit tests around that code. Make sure your integration tests continue to pass. You are still allowed to refactor your integration-style tests if the semantics change.
This might not be easy, as I said, and it takes time. That's why I said it depends on your situation.
Another option would be (I know many people do this with legacy code) to use one of those pricey isolation frameworks, such as Isolator or perhaps MS Fakes, to fake out those untestable dependencies. Once those tests are written, you can look at refactoring the code to make it more testable.
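As a rough sketch of that refactoring route (every name below is invented), you can put an interface in front of the static call, forward to it by default, and have callers take the interface in their constructor; then Moq can stand in for the database:

// Existing legacy code (simplified): a static method that hits the database.
public static class OrderData
{
    public static decimal GetOrderTotal(int orderId)
    {
        // ... ADO.NET code here ...
        return 0m;
    }
}

// Step 1: describe the call with an interface.
public interface IOrderData
{
    decimal GetOrderTotal(int orderId);
}

// Step 2: the default implementation just forwards to the static method,
// so runtime behaviour is unchanged.
public class OrderDataWrapper : IOrderData
{
    public decimal GetOrderTotal(int orderId)
    {
        return OrderData.GetOrderTotal(orderId);
    }
}

// Step 3: callers take IOrderData through their constructor, so a test can
// hand them new Mock<IOrderData>().Object instead of touching the database.
public class InvoiceService
{
    private readonly IOrderData orderData;

    public InvoiceService(IOrderData orderData)
    {
        this.orderData = orderData;
    }

    public decimal TotalWithTax(int orderId)
    {
        return orderData.GetOrderTotal(orderId) * 1.2m;
    }
}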
I have been looking at the topic of unit testing, and honestly I have never yet seen it used in a live application.
I'm a little foggy on the subject...
A simple example: if I am populating a listbox with data, I would know through debugging whether the data is being populated, and if it wasn't, it would probably be easy to figure out why. Furthermore, I couldn't possibly put it into production if it wasn't working, so why would I need a unit test? I don't see the point of it.
What if you were working on an entirely different area of the site, but because of the way your code was constructed, the changes you made broke the code that fills the listbox with data? How long would it take you to discover that? Worse, what if it was someone else on the team who made such a change; someone who had no idea how the listbox-filling code worked at all? Or even someone who didn't know there was code to fill listboxes?
Unit testing gives you a set of tests that ensure you never regress and introduce bugs in areas of your program that have already been proven to work, because you run the unit tests after every change and refactoring. Unit testing lets you program without fear.
Furthermore, by designing your code to be testable, you necessarily create a loosely-coupled architecture that follows a large list of best practices, e.g. dependency injection.
The point is that by using unit tests you are sure that each class is working as intended.
What is the value of that, apart from knowing that it works under certain conditions?
When you refactor your code, change the design, or rework a (supposedly) unrelated piece of code, if your tests still pass you know you have not broken any functionality.
Unit testing is both about ensuring that the code you write conforms to your expectations and that any changes to it still conform to them.
There are many pros and cons to using testing. Take a look at The Art of Unit Testing at some point; that book covers the subject of unit testing thoroughly, and it also explains why you should do unit testing.
In your example, imagine you had to check the populating of listboxes, comboboxes and other data across roughly 15 web pages. How many browser reloads, mouse clicks, breakpoint hits and runs would you have to do to test it all in the debugger? Many. But with unit testing, one of the core rules is that tests should be runnable simply, with a single click. If you design your unit tests correctly, you can test thousands and thousands of lines of code with a single click.
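For example, if the listbox-filling logic lives in a small class of its own, one tiny test (a sketch using NUnit and Moq; ICountryRepository and CountryListModel are invented names) replaces a rebuild, a browser reload and a breakpoint:

// using Moq; using NUnit.Framework;
[Test]
public void CountryList_Contains_Everything_The_Repository_Returns()
{
    var repository = new Mock<ICountryRepository>();
    repository.Setup(r => r.GetAll())
              .Returns(new[] { "Austria", "Belgium", "Croatia" });

    var model = new CountryListModel(repository.Object);

    // One click in the test runner instead of a rebuild and a page refresh.
    CollectionAssert.AreEqual(
        new[] { "Austria", "Belgium", "Croatia" },
        model.Items);
}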
Unit testing gives you the opportunity to test your logic without hitting SQL Server or firing up Cassini or IIS Express. (Of course, you first need to implement dependency injection in your main project and mock those dependencies in your test project.)
Imagine you have written hundreds of test methods and you run them all in bulk. That could take minutes or more, depending on your data access. But if you implement dependency injection in your project and mock the dependencies in your tests, it takes very little time.
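A rough sketch of what I mean (the types are made up): the controller only knows about an interface, so at runtime the DI container supplies the SQL-backed implementation, while a unit test supplies a mock instead:

// using System.Web.Mvc;
public class ProductController : Controller
{
    private readonly IProductRepository repository;

    // At runtime the DI container passes in the SQL-backed repository;
    // in a unit test you pass in new Mock<IProductRepository>().Object,
    // so nothing ever touches the database or needs a web server.
    public ProductController(IProductRepository repository)
    {
        this.repository = repository;
    }

    public ActionResult Index()
    {
        return View(repository.GetAll());
    }
}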
Here is a good article on dependency injection: http://haacked.com/archive/2007/12/07/tdd-and-dependency-injection-with-asp.net-mvc.aspx
That is my reason for using unit testing. If your project is big enough, I think you should also consider test-driven development (TDD).
If we have a defined hierarchy in an application, for example a 3-tier architecture, how do we restrict subsequent developers from violating those norms?
For example, in an MVP (not ASP.NET MVC) architecture, the presenter should always bind the model and the view. This helps in writing proper unit tests. However, we had instances where people imported the model directly into the view and called its functions, violating the norms, and as a result the test cases couldn't be written properly.
Is there a way we can restrict which classes are allowed to inherit from a given set of classes? I am looking at various possibilities, including adopting a different design pattern; however, a new approach should be worth the code change involved.
I'm afraid this is not possible. We tried to achieve this with the help of attributes and we didn't succeed. You may want to refer to my past post on SO.
The best you can do is keep checking your assemblies with NDepend. NDepend shows you a dependency diagram of the assemblies in your project, so you can immediately spot violations and take action.
It's been almost 3 years since I posted this question. I must say that I have kept exploring this myself, despite the brilliant answers here. Some of the lessons I've learned so far:
More code smells come out by looking at a class's consumers (unit tests are the best place to look, if you have them).
The number of parameters in a constructor is a direct indication of the number of dependencies. Too many dependencies => the class is doing too much.
The number of (public) methods in a class is another such indicator.
The setup of the unit tests will almost always give this away.
Code deteriorates over time unless there is a focused effort to clear technical debt and to refactor. This is true irrespective of the language.
Tools can help only to an extent, but a combination of tools and tests often gives enough hints about various smells. It takes a bit of experience to catch them in a timely fashion, and in particular to understand each smell's significance and impact.
You want to solve a people problem with software? Prepare for a world of pain!
The way to solve the problem is to make sure you have ways of working with people so that you don't end up with those kinds of problems: pair programming, code reviews, inducting people properly when they first come onto the project, and so on.
Having said that, you can write tools that analyse the software and look for common problems. But people are pretty creative and can find all sorts of bizarre ways of doing things.
Just as soon as everything gets locked down according to your satisfaction, new requirements will arrive and you'll have to break through the side of it.
Enforcing such stringency at the programming level with .NET is almost impossible considering a programmer can access all private members through reflection.
Do yourself a favour: schedule regular code reviews, provide education and implement proper training. And, as you said, it will quickly become evident when you can't write unit tests against the code.
What about NetArchTest, which is inspired by ArchUnit?
Example:
// Classes in the presentation should not directly reference repositories
var result = Types.InCurrentDomain()
.That()
.ResideInNamespace("NetArchTest.SampleLibrary.Presentation")
.ShouldNot()
.HaveDependencyOn("NetArchTest.SampleLibrary.Data")
.GetResult()
.IsSuccessful;
// Classes that depend on System.Data should reside in the "data" namespace
result = Types.InCurrentDomain()
.That().HaveDependencyOn("System.Data")
.And().ResideInNamespace("ArchTest")
.Should().ResideInNamespace("NetArchTest.SampleLibrary.Data")
.GetResult()
.IsSuccessful;
"This project allows you create tests that enforce conventions for class design, naming and dependency in .Net code bases. These can be used with any unit test framework and incorporated into a build pipeline. "
When I first heard about ASP.NET MVC, I was thinking that would mean applications with three parts: model, view, and controller.
Then I read NerdDinner and learned the ways of repositories and view-models. Next, I read this tutorial and soon became sold on the virtues of a service layer. Finally, I read the Fluent Validation documentation, and I'll be darned if I didn't end up writing a bunch of validators.
Tonight, I took a step back and thought about what had become of my project. It seems to have become the victim of the design pattern equivalent of "feature creep". Somehow I'd gone from Model-View-Controller to Model-Repository-Service-Validator-View-ViewModel-Controller. You want loosely coupled and DRY? We got your loosely coupled and DRY right here! But I'm wondering if this could be a case of too much of a good thing.
Am I right to be concerned? Or is this actually not as crazy as it sounds? On one hand, it seems crazy to have so many layers. On the other hand, every layer has a clearly defined purpose that makes sense to me. Have your MVC applications turned into MRSVVVMC apps too? If not, what do they look like? Where's that right balance?
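To make the question concrete, a single POST action in my project ends up looking roughly like this (the names are just illustrative):

// using System; using System.Web.Mvc;
public class DinnersController : Controller
{
    private readonly IDinnerService service;           // service layer

    public DinnersController(IDinnerService service)   // injected dependency
    {
        this.service = service;
    }

    [HttpPost]
    public ActionResult Create(DinnerInput input)      // view-model coming in
    {
        if (!ModelState.IsValid)                       // validator has already run
            return View(input);

        service.CreateDinner(input.Title, input.Date); // service talks to the repository
        return RedirectToAction("Index");
    }
}

// The service hides the repository and the domain model from the controller.
public interface IDinnerService
{
    void CreateDinner(string title, DateTime date);
}

Each piece is thin and testable on its own; my worry is simply whether the sum of all these thin pieces is still "MVC".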
If you have one form with three attributes, this is overkill.
But if you have a 'real' application, and the responsibilities of each layer are well defined, I'd consider it pretty reasonable.
It sounds to me like you found a pattern and went looking for a problem. You should find a problem, and use the appropriate tool from your toolbox... not all the tools. Unless this is an academic exercise of course.
I am working on some code coverage for my applications. Now, I know that code coverage is an activity linked to the type of tests that you create and the language for which you wish to do the code coverage.
My question is: is there any possible way to do some generic code coverage? That is, can we have a set of features/test cases which can be run (along with more specific tests for the application under test) to get coverage of, say, 10% or more of the code?
Put another way, if I wished to build a framework for code coverage, what would be the best way to go about making it generic? Is it possible to have some of the functionality automated or generalised?
I'm not sure that generic coverage tools are the holy grail, for a couple of reasons:
Coverage is not a goal, it's an instrument. It tells you which parts of the code are not entirely hit by a test. It does not say anything about how good the tests are.
Generated tests cannot guess the semantics of your code. Frameworks that generate tests for you can only deduce meaning from reading your code, which in essence could be wrong, because the whole point of unit testing is to see whether the code actually behaves the way you intended it to.
Because the automated framework will generate artificial coverage, you can never tell whether a piece of code is covered by a proper unit test or only superficially exercised by a framework. I'd rather have untested code show up as uncovered, so that I fix it.
What you could do (and I have done ;-) ) is write a generic test for Java beans. Using reflection, you can test a Java bean against the Sun bean spec: assert that equals and hashCode are both implemented (or neither of them), see that a getter actually returns the value you pushed in with the setter, and check whether all properties have getters and setters.
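The same trick carries over to .NET easily enough. Here is a rough C# sketch of the idea (NUnit; Customer stands in for any simple POCO), round-tripping every writable string property via reflection:

// using System.Linq; using NUnit.Framework;
[Test]
public void Public_String_Properties_Round_Trip()
{
    var subject = new Customer();

    foreach (var property in typeof(Customer).GetProperties()
        .Where(p => p.CanRead && p.CanWrite && p.PropertyType == typeof(string)))
    {
        property.SetValue(subject, "expected", null);

        Assert.AreEqual("expected", property.GetValue(subject, null),
            "Property " + property.Name + " did not return the value it was given.");
    }
}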
You can do the same basic trick for anything that implements Comparable, for instance.
It's easy to do, easy to maintain, and it forces you to have clean beans. As for the rest of the unit tests, I try to focus on getting the important parts tested first and thoroughly.
Coverage can give a false sense of security, and common sense cannot be automated.
This is usually achieved by combining static code analysis (Coverity, Klocwork or their free analogues) with dynamic analysis, i.e. running tests against an instrumented application (profiler + memory checker). Unfortunately, it is hard to automate the test algorithms themselves; most tools are a kind of "recorder", able to record traffic/keys/signals (depending on the domain) and replay them, with minimal changes or substitutions such as session ID, user, etc.