I was asked to work on a product which is developed in EJB 2.x. The code base is very large and the code is quite generic (the same interface has more than 5 implementations). To understand the flow and what the methods in the classes do, I want to unit test the code base. How can I do this? Any help would be appreciated.
Thanks,
Anjali.
I think it's a late answer, but I do my unit tests for a web project using Cactus and JUnit. I use some mocking libraries when necessary: EasyMock, PowerMock, and MockEJB.
In my project there are lots of static methods, and all of them in turn hit the DB. I am supposed to write unit tests for the project, but I am often stuck because all the methods are static and they hit the DB. Is there any way to overcome this? Sorry for being abstract in the question, but my concern is how to write unit tests for static methods and for methods that hit the DB. Moq is not useful when the methods are static, and in my project one method also calls another method within the same class, so I cannot mock the inner method since both are in the same class.
The project I'm currently in is a lot worse than what you have described. It is a blueprint of an un-testable system. There are a couple of options I can think of, but it all depends on your situation.
Write integration tests, which hit the database and test multiple components together. I know this is not ideal, but it at least gives some confidence in the work you do. Then try to refactor your code a small step at a time (be sure to take baby steps) and write unit tests around that code. Make sure your integration tests continue to pass. You are still allowed to refactor your integration-style tests if the semantics have changed.
This might not be easy, as I said, and it takes time. That's why I said it depends on your situation.
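To make the refactoring idea concrete, a common first step is to introduce a seam: wrap the static, database-hitting call behind an interface so its callers can be unit tested while the integration tests keep exercising the real implementation. A minimal sketch, with made-up names (CustomerDb, ICustomerStore and GreetingService are not from your project):

// Hypothetical legacy code: a static method that hits the database directly.
public static class CustomerDb
{
    public static string GetCustomerName(int id)
    {
        // imagine a raw ADO.NET / SQL call here
        return "name from database";
    }
}

// Step 1: introduce a seam - an interface plus a thin adapter over the static call.
public interface ICustomerStore
{
    string GetCustomerName(int id);
}

public class CustomerStoreAdapter : ICustomerStore
{
    public string GetCustomerName(int id)
    {
        return CustomerDb.GetCustomerName(id);
    }
}

// Step 2: let consumers depend on the interface, so a test can substitute a fake.
public class GreetingService
{
    private readonly ICustomerStore _store;

    public GreetingService(ICustomerStore store)
    {
        _store = store;
    }

    public string Greet(int id)
    {
        return "Hello, " + _store.GetCustomerName(id);
    }
}

In a unit test you can now pass in a hand-written fake or a mocked ICustomerStore, while the existing integration tests keep going through CustomerStoreAdapter and the real database.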
Another option (I know many people do this with legacy code) would be to use one of those pricey isolation frameworks, such as Typemock Isolator, or perhaps MS Fakes, to fake out those untestable dependencies. Once those tests are written, you can look at refactoring the code to make it more testable.
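As a rough illustration of the shim approach (Microsoft Fakes shown here; the ShimCustomerDb type would only exist after you generate a Fakes assembly for the assembly containing the made-up CustomerDb class above, and LegacyGreetingService stands for a class that still calls CustomerDb directly, so treat all the names as assumptions):

using Microsoft.QualityTools.Testing.Fakes;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class LegacyGreetingTests
{
    [TestMethod]
    public void Greeting_Uses_Customer_Name_Without_Touching_The_Database()
    {
        using (ShimsContext.Create())
        {
            // Detour the static, database-hitting call for the duration of the test.
            MyApp.Legacy.Fakes.ShimCustomerDb.GetCustomerNameInt32 = id => "Test Customer";

            string greeting = MyApp.Legacy.LegacyGreetingService.Greet(42);

            Assert.AreEqual("Hello, Test Customer", greeting);
        }
    }
}

This gets tests around the code without changing it first, at the cost of tests that are tied to implementation details; the interface-extraction route above usually ages better.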
Over the last 2 months, I've been trying to learn the new MVC framework. After getting my head around all the object-oriented concepts, I created a test site using MVC3, EF4 with DbContext, and the ASP.NET Membership provider. All was going pretty well. Then I decided to dive in and learn testing, starting with unit testing.
After 2 weeks of banging my head against my keyboard, I now feel as frustrated as can be. I've gone through tons of video tutorials (TekPub, Pluralsight), online tutorials (ASP.NET, Microsoft, etc.), and plenty of StackOverflow questions/answers. I now sort-of (ha!) understand loose coupling, dependency injection, repositories, interfaces, stubs, mocks (yes, I read the Fowler article many times), shims, lambdas, refactoring...the list goes on and on (...and on.). I've looked at Ninject, StructureMap, Moq, Typemock, JustMock, NUnit, xUnit, etc...
So I know there are a bunch of ways to skin this cat. Now I see that VS11/MVC4 is coming out and they have this thing called Fakes, which seems to be a good option for static methods like the Membership stuff.
My question:
I want to test my MVC EF4/DbContext/Membership application. Most of my pages require an authenticated user ([Authorize]), and that's before I even get to the actual method to be tested.
If you were just starting out (like me), what is the simplest and easiest route to testing my CRUD application? I don't necessarily like having a DI framework running on the production side (just another thing that might go wrong), and I find the fracking things confusing as all get-out.
I could upgrade to VS11/MVC4 and try the Fakes approach. It appears to be slightly simpler, but it still seems like I need all the Repository/Interface plumbing for the EF stuff.
Or would you just chuck it and use an integration-test tool like Selenium (which is what I had to use before with Forms-based development)?
Any suggestions are greatly appreciated. Sorry if this is a lousy question but I'm hoping for a ray of light here...
For all versions of MVC, and for most programs in general, testing works in much the same way.
You should have most of your logic based on interfaces.
This will allow you to separate concerns and unit test anything you need. It will also give you the possibility to fake (create fake implementations of) or mock (create a class at runtime that represents the logic you want, using Moq or Rhino Mocks) the logic.
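As a minimal sketch of that idea using Moq and NUnit (the IProductRepository, Product and ProductController types below are placeholders standing in for whatever your application defines, not an exact recipe):

using System.Collections.Generic;
using System.Linq;
using System.Web.Mvc;
using Moq;
using NUnit.Framework;

public class Product { public string Name { get; set; } }

public interface IProductRepository
{
    IEnumerable<Product> GetAll();
}

public class ProductController : Controller
{
    private readonly IProductRepository _repository;

    public ProductController(IProductRepository repository)
    {
        _repository = repository;
    }

    public ActionResult Index()
    {
        return View(_repository.GetAll());
    }
}

[TestFixture]
public class ProductControllerTests
{
    [Test]
    public void Index_Returns_All_Products_From_The_Repository()
    {
        // Fake the repository interface instead of touching EF or the database.
        var repository = new Mock<IProductRepository>();
        repository.Setup(r => r.GetAll())
                  .Returns(new List<Product> { new Product { Name = "Widget" } });

        var result = (ViewResult)new ProductController(repository.Object).Index();

        Assert.AreEqual(1, ((IEnumerable<Product>)result.ViewData.Model).Count());
    }
}

Because the controller only sees the interface, the test never needs a database, an authenticated user, or a DI container; in production you can wire the real repository up by hand if you prefer not to run a container.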
You can read more about basic unit testing here:
http://msdn.microsoft.com/en-us/magazine/dd942838.aspx
I would also recommend you have a look at the book where Steven Sanderson builds an example application, including unit testing of most of its parts.
http://www.amazon.co.uk/Pro-ASP-NET-MVC-Framework-ebook/dp/B005PZ07US
Here is an introduction to Moq:
http://www.codeproject.com/Tips/182847/An-Introduction-to-MOQ
You also have alternatives such as Rhino Mocks:
http://ayende.com/blog
and some examples:
http://daysincode.blogspot.com/2012/06/examples-of-mocking-with-rhino-moq.html
Of course, everything here leads to: http://msdn.microsoft.com/en-us/magazine/ekstremalna-przerobka-asp-net--czesc6-podzial-obowiazkow.aspx
If we have a defined hierarchy in an application, for example a 3-tier architecture, how do we restrict subsequent developers from violating the norms?
For example, in the case of the MVP (not ASP.NET MVC) architecture, the presenter should always bind the model and the view. This helps in writing proper unit test programs. However, we had instances where people directly imported the model into the view and called its functions, violating the norms, and hence the test cases couldn't be written properly.
Is there a way we can restrict which classes are allowed to inherit from a set of classes? I am looking at various possibilities, including adopting a different design pattern; however, a new approach should be worth the code change involved.
I'm afraid this is not possible. We tried to achieve this with the help of attributes and we didn't succeed. You may want to refer to my past post on SO.
The best you can do is keep checking your assemblies with NDepend. NDepend shows you a dependency diagram of the assemblies in your project, and you can immediately track violations and take action reactively.
(NDepend dependency diagram; source: ndepend.com)
It's been almost 3 years since I posted this question. I must say that I have kept exploring this, despite the brilliant answers here. Some of the lessons I've learnt so far:
More code smells come out by looking at the consumers (unit tests are the best place to look, if you have them).
The number of parameters in a constructor is a direct indication of the number of dependencies. Too many dependencies => the class is doing too much.
The number of (public) methods in a class is another such indicator.
The setup of unit tests will almost always give this away (see the invented example after this list).
Code deteriorates over time unless there is a focused effort to clear technical debt and refactor. This is true irrespective of the language.
Tools can help only to an extent. But a combination of tools and tests often gives enough hints about the various smells. It takes a bit of experience to catch them in a timely fashion, particularly to understand each smell's significance and impact.
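For illustration, here is a made-up example of the constructor and test-setup smells side by side (every type name is invented):

// An invented class whose constructor takes far too many dependencies.
public interface IOrderRepository { }
public interface ICustomerRepository { }
public interface IEmailSender { }
public interface IPdfGenerator { }
public interface IAuditLog { }
public interface IClock { }

public class OrderService
{
    public OrderService(IOrderRepository orders, ICustomerRepository customers,
        IEmailSender email, IPdfGenerator pdf, IAuditLog audit, IClock clock)
    {
        // the class almost certainly has more than one responsibility
    }
}

[Test]
public void Process_Completes_Order()
{
    // Six mocks just to construct the class under test - the test setup gives it away.
    var service = new OrderService(
        new Mock<IOrderRepository>().Object,
        new Mock<ICustomerRepository>().Object,
        new Mock<IEmailSender>().Object,
        new Mock<IPdfGenerator>().Object,
        new Mock<IAuditLog>().Object,
        new Mock<IClock>().Object);
}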
You want to solve a people problem with software? Prepare for a world of pain!
The way to solve the problem is to make sure that you have ways of working with people so that you don't end up with those kinds of problems: pair programming, code review, induction of people when they first come onto the project, and so on.
Having said that, you can write tools that analyse the software and look for common problems. But people are pretty creative and can find all sorts of bizarre ways of doing things.
Just as soon as everything gets locked down according to your satisfaction, new requirements will arrive and you'll have to break through the side of it.
Enforcing such stringency at the programming level with .NET is almost impossible considering a programmer can access all private members through reflection.
Do yourself a favour and schedule regular code reviews, provide education, and implement proper training. And, as you said, it will quickly become evident when you can't write unit tests against the code.
What about NetArchTest, which is inspired by ArchUnit?
Example:
// Classes in the presentation should not directly reference repositories
var result = Types.InCurrentDomain()
.That()
.ResideInNamespace("NetArchTest.SampleLibrary.Presentation")
.ShouldNot()
.HaveDependencyOn("NetArchTest.SampleLibrary.Data")
.GetResult()
.IsSuccessful;
// Classes that depend on System.Data should reside in the data namespace
result = Types.InCurrentDomain()
.That().HaveDependencyOn("System.Data")
.And().ResideInNamespace("ArchTest")
.Should().ResideInNamespace("NetArchTest.SampleLibrary.Data")
.GetResult()
.IsSuccessful;
"This project allows you create tests that enforce conventions for class design, naming and dependency in .Net code bases. These can be used with any unit test framework and incorporated into a build pipeline. "
I am working on some code coverage for my applications. Now, I know that code coverage is an activity linked to the type of tests that you create and the language for which you wish to do the code coverage.
My question is: is there any possible way to do some generic code coverage? For instance, can we have a set of features/test cases which can be run (along with a lot more specific tests for the application under test) to get code coverage of, say, 10% or more of the code?
More to the point: if I wish to build a framework for code coverage, what is the best possible way to go about making a generic one? Is it possible to have some functionality automated or generalized?
I'm not sure that generic coverage tools are the holy grail, for a couple of reasons:
Coverage is not a goal, it's an instrument. It tells you which parts of the code are not entirely hit by a test. It does not say anything about how good the tests are.
Generated tests cannot guess the semantics of your code. Frameworks that generate tests for you can only deduce meaning from reading your code, which in essence could be wrong, because the whole point of unit testing is to see whether the code actually behaves like you intended it to.
Because the automated framework will generate artificial coverage, you can never tell whether a piece of code is tested with a proper unit test or only superficially tested by a framework. I'd rather have untested code show up as uncovered, so I can fix that.
What you could do (and I've done ;-) ) is write a generic test for testing Java beans. Using reflection, you can test a Java bean against the Sun spec for Java beans: assert that equals and hashCode are either both implemented or neither of them, see that each getter actually returns the value you pushed in with the setter, and check whether all properties have getters and setters.
You can do the same basic trick for anything that implements "Comparable", for instance.
It's easy to do, easy to maintain, and it forces you to have clean beans. As for the rest of the unit tests, I try to focus on getting the important parts tested first and thoroughly.
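To make the generic property-testing idea concrete, here is a rough equivalent in C# (the rest of this thread is mostly .NET; the same reflection trick applies to properties, and this sketch only handles a couple of simple property types rather than the full bean spec):

using System;
using System.Linq;

public static class PropertyRoundTripCheck
{
    // Generic check: every readable and writable string/int property
    // returns exactly the value that was set on it.
    public static void AssertGettersReturnWhatSettersReceive(object instance)
    {
        var properties = instance.GetType().GetProperties()
            .Where(p => p.CanRead && p.CanWrite);

        foreach (var property in properties)
        {
            object sample = null;
            if (property.PropertyType == typeof(string)) sample = "sample";
            if (property.PropertyType == typeof(int)) sample = 42;
            if (sample == null) continue; // only simple types are handled in this sketch

            property.SetValue(instance, sample, null);
            object roundTripped = property.GetValue(instance, null);

            if (!sample.Equals(roundTripped))
                throw new Exception("Property " + property.Name + " does not round-trip.");
        }
    }
}

A single reflective test per DTO-like class can then call this helper, which is cheap to maintain, just like the Java bean version.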
Coverage can give a false sense of security. Common sense can not be automated.
This is usually achieved by combining static code analysis (Coverity, Klocwork, or their free analogues) with dynamic analysis, i.e. running tests against an instrumented application (profiler + memory checker). Unfortunately, it is hard to automate the test algorithms themselves; most tools are a kind of "recorder" able to capture traffic/keys/signals (depending on the domain) and replay them, with minimal changes/substitutions such as session ID or user.
I've joined a team that works on a product. This product has been around for ~5 years or so, and uses ASP.NET WebForms. Its original architecture has faded over time, and things have become relatively disorganized throughout the solution. It's by no means terrible, but definitely can use some work; you all know what I mean.
I've been performing some refactorings since coming onto the project team about 6 months ago. Some of those refactorings are simple: Extract Method, Pull Up Method, etc. Some of the refactorings are more structural. The latter changes make me nervous, as there isn't a comprehensive suite of unit tests to accompany every component.
The whole team is on board with the need to make structural changes through refactoring, but our Project Manager has expressed some concerns that we don't have adequate tests to make those refactorings with confidence that we aren't introducing regression bugs into the system. He would like us to write more tests first (against the existing architecture), then perform the refactorings. My argument is that the system's class structure is too tightly coupled to write adequate tests against, and that taking a more test-driven approach while we perform our refactorings may be better. What I mean by this is not writing tests against the existing components, but writing tests for specific functional requirements and then refactoring the existing code to meet those requirements. This would allow us to write tests that will probably have more longevity in the system, rather than a bunch of 'throw away' tests.
Does anyone have any experience as to what the best course of action is? I have my own thoughts, but would like to hear some input from the community.
Your PM's concerns are valid - make sure you get your system under test before making any major refactorings.
I would strongly recommend getting a copy of Michael Feathers' book Working Effectively with Legacy Code (by "legacy code" Feathers means any system that isn't adequately covered by unit tests). It is chock-full of good ideas for how to break down the couplings and dependencies you speak of, in a safe manner that won't risk introducing regression bugs.
Good luck with the refactoring programme; in my experience it's an enjoyable and cathartic process from which you can learn a lot.
Can you refactor in parallel? What I mean is: rewrite the pieces you want to refactor using TDD, but leave the existing code base in place. Then phase out the existing code once the new, test-covered code meets your PM's requirements.
I would also like to throw in a suggestion to visit the Refactoring website by Martin Fowler. He literally wrote the book on this stuff.
As far as introducing unit tests into the equation, the best method I have found is to find a top-level component, identify all the external dependencies it has on concrete objects, and replace them with interfaces. Once you've done that, it will be a lot easier to write unit tests against your code base, and you can do it one component at a time. Even better, you won't have to throw away any unit tests.
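For example, one low-risk way to introduce such a seam without breaking existing call sites is to keep a default constructor that creates the real dependency while exposing a second constructor for tests. The types below are illustrative, not from the product:

// Hypothetical "before": the component created its SqlOrderGateway itself.
// "After": it depends on an interface, but existing callers keep compiling.
public interface IOrderGateway
{
    Order Load(int orderId);
    void Save(Order order);
}

public class Order
{
    public int Id { get; set; }
    public bool Processed { get; set; }
}

public class SqlOrderGateway : IOrderGateway
{
    public Order Load(int orderId) { /* real database access */ return new Order { Id = orderId }; }
    public void Save(Order order) { /* real database access */ }
}

public class OrderProcessor
{
    private readonly IOrderGateway _gateway;

    // Existing call sites keep working: the default constructor uses the real gateway.
    public OrderProcessor() : this(new SqlOrderGateway()) { }

    // Tests (and, later, a container if you adopt one) supply a fake implementation.
    public OrderProcessor(IOrderGateway gateway)
    {
        _gateway = gateway;
    }

    public void Process(int orderId)
    {
        var order = _gateway.Load(orderId);
        order.Processed = true;   // stand-in for the real business logic
        _gateway.Save(order);
    }
}

A unit test can then construct OrderProcessor with a hand-rolled or mocked IOrderGateway, one component at a time, exactly as described above.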
Unit testing ASP.NET can be tricky, but there are plenty of frameworks that make it easier to do: ASP.NET MVC and WCSF, to name a few.
Just tossing out a second recommendation for Working Effectively with Legacy Code, an excellent book that really opened my eyes to the fact that almost any old / crappy / untestable code can be wrangled!
Totally agree with the answer from Ian Nelson. Additionally, I would start by getting some "high level" tests (functional or component tests) in place to preserve the behaviour from the viewpoint of the user. This point might be the most important concern for your PM.