I've implemented a Radix Trie (aka Patricia Trie) in Java, and would like to thoroughly test it. It implements the Map, SortedMap and NavigableMap interfaces, which add up to a pretty large number of methods to check. =/
I figure the people who wrote library classes like HashMap and TreeMap must have had a suite of JUnit tests (or something similar) to ensure they behave correctly. Does anyone know of a way to get the source code of these tests? I'd love to put this code through the same paces.
In the Google Collections library, there are test harnesses that thoroughly exercise the contracts of various structures, including maps.
Here is a link to the Google Code project page: http://code.google.com/p/google-collections/
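Those harnesses live on today as the guava-testlib artifact, which can generate a full Map-contract test suite against your own implementation (there are similar builders for sorted and navigable maps). A rough sketch, assuming your class is called RadixTrie (the class name and constructor are placeholders; adjust the features to what your trie actually supports):
import java.util.Map;
import java.util.Map.Entry;

import com.google.common.collect.testing.MapTestSuiteBuilder;
import com.google.common.collect.testing.TestStringMapGenerator;
import com.google.common.collect.testing.features.CollectionSize;
import com.google.common.collect.testing.features.MapFeature;

import junit.framework.Test;

// Builds the generic Map-contract test suite from guava-testlib and runs it
// against the (hypothetical) RadixTrie class via a JUnit suite() runner.
public class RadixTrieMapContractTest {
    public static Test suite() {
        return MapTestSuiteBuilder
            .using(new TestStringMapGenerator() {
                @Override
                protected Map<String, String> create(Entry<String, String>[] entries) {
                    Map<String, String> map = new RadixTrie<>(); // adjust to your constructor
                    for (Entry<String, String> entry : entries) {
                        map.put(entry.getKey(), entry.getValue());
                    }
                    return map;
                }
            })
            .named("RadixTrie Map contract")
            .withFeatures(
                CollectionSize.ANY,          // exercise empty, singleton and larger maps
                MapFeature.GENERAL_PURPOSE)  // put/remove/clear are all supported
            .createTestSuite();
    }
}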
Related
I am interested in doing a project relying on automated proofs, in large part as a learning exercise. So far my online search suggests Lean is the way to go, in theory.
However, everything I read about it describes using it as a proof assistant in VS Code or Emacs. That's not what I need: I need a system I can communicate with fully programmatically, i.e. a string of assumptions goes in -> a string specifying deducibility comes out, or something like that.
To be more precise, I need to be able to call parsing functions on strings that do the heavy work of determining whether a set of results is deducible from the input assumptions.
I can't find documentation about Lean being able to do this.
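I can't vouch for an official string-in/string-out API, but one pragmatic route is to generate a Lean source file from the assumption strings, run the lean executable on it, and read the exit code as the verdict. A rough sketch (class and method names are mine; it assumes a lean binary on the PATH):
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch of driving Lean programmatically: write the generated source to a file,
// run the `lean` binary on it, and read the exit code as accepted / rejected.
public class LeanCheck {

    static boolean accepted(String leanSource) throws Exception {
        Path file = Files.createTempFile("deduce", ".lean");
        Files.writeString(file, leanSource);
        Process lean = new ProcessBuilder("lean", file.toString())
                .redirectErrorStream(true)
                .start();
        lean.getInputStream().transferTo(System.out); // surface Lean's messages, if any
        return lean.waitFor() == 0;                   // zero exit code => Lean accepted the file
    }

    public static void main(String[] args) throws Exception {
        // Assumptions p -> q and p, goal q, discharged here by an explicit term proof (h1 h2).
        String source = "theorem generated (p q : Prop) (h1 : p -> q) (h2 : p) : q := h1 h2\n";
        System.out.println(accepted(source) ? "deducible" : "not shown deducible");
    }
}
For a genuine "is this deducible?" query you would emit ":= by <some search tactic>" instead of a hand-written term proof, which is where a tactic library such as Mathlib's tauto would come in.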
In my project there are lots of static methods, and all of them in turn hit the DB. I am supposed to write unit tests for the project, but I am often stuck because all the methods are static and they hit the DB. Is there any way to overcome this? Sorry for being abstract in the question, but my concern is how to write unit tests for static methods and for methods that hit the DB. Moq is not useful when the methods are static, and in my project one method calls another method within the same class, so I cannot mock the inner method since both are in the same class.
The project I'm currently on is a lot worse than what you have described. It is a blueprint for an untestable system. There are a couple of options, I think, but it all depends on your situation.
Write integration tests, which hit the database and test multiple components together. I know this is not ideal, but it at least gives some confidence in the work you do. Then try to refactor your code one small step at a time (be sure to take baby steps) and write unit tests around that code. Make sure your integration tests continue to pass. You are still allowed to refactor the integration-style tests if the semantics change.
This might not be easy, as I said, and it takes time. That's why I said it depends on your situation.
Another option would be (I know many people do this with legacy code) to use one of those pricey isolation frameworks, such as Isolator or perhaps MS Fakes, to fake out those untestable dependencies. Once those tests are written, you can look at refactoring the code to make it more testable.
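Whichever route you take, the end state usually looks like the classic seam refactoring: wrap the static, DB-hitting call behind an interface so a test can substitute a fake or a mock. A minimal sketch in Java (all class and method names here are hypothetical; the same shape works in C# with Moq):
// Stand-in for the existing static method that hits the database.
class OrderDao {
    static double getTotalForCustomer(int customerId) {
        throw new UnsupportedOperationException("talks to the real DB");
    }
}

// The seam: code under test depends on this interface, not on the static call.
interface OrderGateway {
    double totalFor(int customerId);
}

// Class under test, now taking its dependency through the constructor.
class DiscountService {
    private final OrderGateway orders;

    DiscountService(OrderGateway orders) {
        this.orders = orders;
    }

    double discountFor(int customerId) {
        return orders.totalFor(customerId) > 1000 ? 0.1 : 0.0;
    }
}

public class SeamExample {
    public static void main(String[] args) {
        new DiscountService(OrderDao::getTotalForCustomer);            // production wiring (would hit the DB)
        DiscountService underTest = new DiscountService(id -> 1500.0); // test wiring: a fake, no DB
        System.out.println(underTest.discountFor(7));                  // prints 0.1
    }
}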
I seldom see static analyzers for functional programming languages like Racket/Scheme; I even doubt whether there are any. I would like to write a static analyzer for a functional language, say Scheme/Racket. How should I go about it?
Yes, there is some work on static analysis of dynamic languages like Scheme. For instance, see the work of Olin Shivers (http://www.ccs.neu.edu/home/shivers/citations.html) and Manuel Serrano (http://www-sop.inria.fr/members/Manuel.Serrano/index-1.html).
There's already Typed Racket, for example: http://docs.racket-lang.org/ts-guide/index.html
Since valid Racket code is valid Typed Racket, you just have to change the language you're working in. Then, for libraries with typed versions, load those instead of the untyped versions, and certain type errors can already be caught statically. Further type annotations can be added to your own code to get guarantees of type correctness beyond that...
First, read this paper by Shivers explaining why there is no static control-flow graph available in Scheme.
Matt Might has implemented k-CFA in Scheme; his site and blog are a good starting point for exploring static analysis of higher-order languages.
I did some static analysis implementations for Scheme in Java as well:
k-CFA implementation
Interprocedural Dependence Analysis implementation based on a paper by Might and Prabhu
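To give a concrete feel for what such an analysis computes, here is a small, illustrative Java sketch of a monovariant (0-CFA-flavoured) analysis over a toy lambda calculus; it records, for each expression and variable, the set of lambdas that may flow there. The names and structure are mine, not taken from the implementations linked above.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// A toy, monovariant (0-CFA-style) control-flow analysis over a tiny lambda calculus.
public class ZeroCfa {

    sealed interface Term permits Var, Lam, App {}
    record Var(String name) implements Term {}
    record Lam(String param, Term body) implements Term {}
    record App(Term fn, Term arg) implements Term {}

    // Flow sets, keyed by terms and by variable names; filled in by fixpoint iteration.
    private final Map<Object, Set<Lam>> flow = new HashMap<>();
    private boolean changed;

    private Set<Lam> of(Object key) {
        return flow.computeIfAbsent(key, k -> new HashSet<>());
    }

    private void addAll(Object key, Set<Lam> values) {
        if (of(key).addAll(values)) changed = true;
    }

    public Map<Object, Set<Lam>> analyse(Term root) {
        List<Term> terms = new ArrayList<>();
        collect(root, terms);
        do {                                   // iterate the transfer function to a fixpoint
            changed = false;
            for (Term t : terms) step(t);
        } while (changed);
        return flow;
    }

    private void collect(Term t, List<Term> acc) {
        acc.add(t);
        if (t instanceof Lam l) collect(l.body(), acc);
        if (t instanceof App a) { collect(a.fn(), acc); collect(a.arg(), acc); }
    }

    private void step(Term t) {
        if (t instanceof Lam l) {
            addAll(l, Set.of(l));                          // a lambda evaluates to itself
        } else if (t instanceof Var v) {
            addAll(v, of(v.name()));                       // a variable gets whatever its binder got
        } else if (t instanceof App a) {
            for (Lam callee : Set.copyOf(of(a.fn()))) {    // every lambda that may be called here
                addAll(callee.param(), of(a.arg()));       // the argument flows into its parameter
                addAll(a, of(callee.body()));              // its result flows out of the call site
            }
        }
    }

    public static void main(String[] args) {
        // ((lambda (id) (id id)) (lambda (x) x))
        Lam identity = new Lam("x", new Var("x"));
        Term program = new App(new Lam("id", new App(new Var("id"), new Var("id"))), identity);
        new ZeroCfa().analyse(program)
            .forEach((site, lams) -> System.out.println(site + "  may be  " + lams));
    }
}
A real analyzer for Scheme/Racket would add conditionals, primitives and letrec, plus calling contexts (k > 0) and a store, which is exactly where Shivers' and Might's work picks up.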
Suppose we have a defined hierarchy in an application, for example a 3-tier architecture. How do we restrict subsequent developers from violating the norms?
For example, in an MVP (not ASP.NET MVC) architecture, the presenter should always bind the model and the view. This helps in writing proper unit tests. However, we had instances where people imported the model directly into the view and called its functions, violating the norms, and hence the test cases couldn't be written properly.
Is there a way we can restrict which classes are allowed to inherit from a set of classes? I am looking at various possibilities, including adopting a different design pattern; however, a new approach should be worth the code change involved.
I'm afraid this is not possible. We tried to achieve this with the help of attributes and we didn't succeed. You may want to refer to my past post on SO.
The best you can do is keep checking your assemblies with NDepend. NDepend shows you a dependency diagram of the assemblies in your project, so you can immediately spot violations and take action.
It's been almost 3 years since I posted this question. I must say that I have kept exploring this, despite the brilliant answers here. Some of the lessons I've learnt so far:
More code smells come out by looking at the consumers (unit tests are the best place to look, if you have them).
The number of parameters in a constructor is a direct indication of the number of dependencies. Too many dependencies => the class is doing too much.
The number of (public) methods in a class is another such indicator.
The setup of unit tests will almost always give this away.
Code deteriorates over time unless there is a focused effort to clear technical debt and to refactor. This is true irrespective of the language.
Tools can help only to an extent. But a combination of tools and tests often give enough hints on various smells. It takes a bit of experience to catch them in a timely fashion, particularly to understand each smell's significance and impact.
You want to solve a people problem with software? Prepare for a world of pain!
The way to solve the problem is to make sure that you have ways of working with people such that you don't end up with those kinds of problems: pair programming, reviews, induction of people when they first come onto the project, and so on.
Having said that, you can write tools that analyse the software and look for common problems. But people are pretty creative and can find all sorts of bizarre ways of doing things.
Just as soon as everything gets locked down according to your satisfaction, new requirements will arrive and you'll have to break through the side of it.
Enforcing such stringency at the programming level with .NET is almost impossible considering a programmer can access all private members through reflection.
Do yourself a favour and schedule regular code reviews, provide education and implement proper training. And, as you said, it will quickly become evident when you can't write unit tests against it.
What about NetArchTest, which is inspired by ArchUnit?
Example:
// Classes in the presentation should not directly reference repositories
var result = Types.InCurrentDomain()
.That()
.ResideInNamespace("NetArchTest.SampleLibrary.Presentation")
.ShouldNot()
.HaveDependencyOn("NetArchTest.SampleLibrary.Data")
.GetResult()
.IsSuccessful;
// Classes under "ArchTest" that depend on System.Data should reside in the data namespace
result = Types.InCurrentDomain()
.That().HaveDependencyOn("System.Data")
.And().ResideInNamespace("ArchTest")
.Should().ResideInNamespace("NetArchTest.SampleLibrary.Data")
.GetResult()
.IsSuccessful;
"This project allows you create tests that enforce conventions for class design, naming and dependency in .Net code bases. These can be used with any unit test framework and incorporated into a build pipeline. "
I am working on some code coverage for my applications. Now, I know that code coverage is an activity linked to the type of tests that you create and the language for which you wish to do the code coverage.
My question is: is there any possible way to do some generic code coverage? For instance, can we have a set of features/test cases which can be run (along with many more tests specific to the application under test) to get code coverage of, say, 10% or more of the code?
More to the point, if I wish to build a framework for code coverage, what is the best possible way to go about making a generic one? Is it possible to have some functionality automated or generalized?
I'm not sure that generic coverage tools are the holy grail, for a couple of reasons:
Coverage is not a goal, it's an instrument. It tells you which parts of the code are not entirely hit by a test. It does not say anything about how good the tests are.
Generated tests cannot guess the semantics of your code. Frameworks that generate tests for you can only deduce meaning from reading your code, which in essence could be wrong, because the whole point of unit testing is to see whether the code actually behaves as you intended it to.
Because the automated framework will generate artificial coverage, you can never tell whether a piece of code is tested with a proper unit test or only superficially tested by a framework. I'd rather have untested code show up as uncovered, so I can fix that.
What you could do (and I've done ;-) ) is write a generic test for testing Java beans. Using reflection, you can test a Java bean against the Sun spec of a Java bean: assert that equals and hashCode are either both implemented or neither of them, see that the getter actually returns the value you pushed in with the setter, and check whether all properties have getters and setters.
You can do the same basic trick for anything that implements Comparable, for instance.
It's easy to do, easy to maintain, and it forces you to have clean beans. As for the rest of the unit tests, I try to focus on getting the important parts tested first and thoroughly.
Coverage can give a false sense of security. Common sense cannot be automated.
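For what it's worth, here is a minimal sketch of that reflective round-trip test; the Person bean is just a stand-in for your own classes, and the sample-value table would need extending for richer property types.
import java.beans.Introspector;
import java.beans.PropertyDescriptor;
import java.lang.reflect.Method;

// Reflective getter/setter round-trip check in the spirit described above.
public final class BeanRoundTripCheck {

    // A tiny demo bean so the sketch is self-contained.
    public static class Person {
        private String name;
        private int age;
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
        public int getAge() { return age; }
        public void setAge(int age) { this.age = age; }
    }

    public static void check(Object bean) throws Exception {
        for (PropertyDescriptor pd : Introspector
                .getBeanInfo(bean.getClass(), Object.class)
                .getPropertyDescriptors()) {
            Method getter = pd.getReadMethod();
            Method setter = pd.getWriteMethod();
            if (getter == null || setter == null) continue;   // or fail, if you require both
            Object sample = sampleValue(pd.getPropertyType());
            if (sample == null) continue;                      // no sample value for this type
            setter.invoke(bean, sample);
            Object back = getter.invoke(bean);
            if (!sample.equals(back)) {
                throw new AssertionError(pd.getName() + ": set " + sample + " but read back " + back);
            }
        }
    }

    private static Object sampleValue(Class<?> type) {
        if (type == String.class) return "sample";
        if (type == int.class || type == Integer.class) return 42;
        if (type == boolean.class || type == Boolean.class) return Boolean.TRUE;
        return null; // extend with more types as needed
    }

    public static void main(String[] args) throws Exception {
        check(new Person());
        System.out.println("round-trip OK");
    }
}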
This is usually achieved by combining static code analysis (Coverity, Klocwork or their free analogues) with dynamic analysis, by running tests against an instrumented application (profiler + memory checker). Unfortunately, it is hard to automate the test algorithms themselves; most tools are a kind of "recorder", able to record traffic/keys/signals (depending on the domain) and replay them, with minimal changes/substitutions like session ID/user/etc.