JaCoCo integration-tests code coverage - integration-testing

I am not able to get past this issue.
I have three classes: A, B, and C.
A is an integration test class that tests classes B and C together.
B and C are classes in another package (w.r.t. class A).
Now when I run integration test class A, I want the code coverage to show which parts of B and C are covered. I am not getting the required output.
What I am getting as the output is that no classes are instrumented, so there is no test coverage for the two classes. If I write sample code in src/main/java in the same module that A is in, it recognizes the class and instruments it.
Why can't it do the same for classes outside its package?
Kindly help. Thanks.

This can be caused by a number of problems:
1. Classes not being triggered according to the JaCoCo agent
The first thing you need to check is whether your classes B and C have been triggered by the JaCoCo agent. This can be done by generating a JaCoCo report and clicking on the sessions link (top right corner).
If your class B or C is not listed there, it means that there is a problem with your JaCoCo agent: either it wasn't attached to the JVM that triggers class B/C, or no code in class B/C was triggered.
2. Classes triggered according to the JaCoCo agent but no source/class files available
If your class B or C is listed there but is not clickable, it means that your class B/C was triggered and detected by the JaCoCo agent, but the report generator wasn't able to link it to its class and source files.
Keep in mind that during report generation, JaCoCo needs to have the class files and source files available in order to generate a report. (If you're using Maven, it expects the class files in project.build.outputDirectory and the sources in project.build.sourceDirectory.)
3. Classes triggered according to the JaCoCo agent but wrong class files available
If your class B or C is deployed on an app server, it is possible that the app server also instruments the bytecode of those classes during deployment, creating a situation where the class files in your local project are not the same as the class files detected by the JaCoCo agent (see this topic for a discussion of that: https://groups.google.com/forum/?fromgroups=#!topic/jacoco/GjSlBoFTRrc). In that case, JaCoCo offers a classdumpdir parameter that can be set to a folder where JaCoCo will dump the classes it has detected during your test run. You need to use these classes during your report generation; a Maven sketch follows this list.
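As a rough illustration, a jacoco-maven-plugin configuration along these lines attaches the agent to the integration-test JVM, dumps the classes the agent actually sees, and generates the integration-test report. The version number and execution ids are illustrative, not taken from the question:

<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <version>0.8.11</version>
  <executions>
    <execution>
      <!-- attach the agent to the JVM that runs the integration tests -->
      <id>prepare-agent-it</id>
      <goals>
        <goal>prepare-agent-integration</goal>
      </goals>
      <configuration>
        <!-- dump the classes seen by the agent (useful for case 3 above) -->
        <classDumpDir>${project.build.directory}/jacoco-classes</classDumpDir>
      </configuration>
    </execution>
    <execution>
      <!-- generate the integration-test report during verify -->
      <id>report-it</id>
      <phase>verify</phase>
      <goals>
        <goal>report-integration</goal>
      </goals>
    </execution>
  </executions>
</plugin>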
References
http://www.eclemma.org/jacoco/trunk/doc/agent.html
https://groups.google.com/forum/?fromgroups=#!topic/jacoco/GjSlBoFTRrc

Related

Is there a fix for InheritanceManager breaking static type checking?

I have added django-model-utils to an existing (large) project, and the build is now failing, as part of the build runs static type checking with mypy.
It complains that models to which I have added objects = InheritanceManager() don't have attributes for their reverse ForeignKeys, if the reverse FK is accessed in a method on that model. For example, take the following:
class Student(Model):
    school = ForeignKey('School', related_name='students')  # string ref: School is defined below

class School(Model):
    objects = InheritanceManager()  # IRL School is a subclass of some other model

    def something(self):
        return self.students.filter(...)
Then running mypy on it will return:
error: "School" has no attribute "students"
And even if I remove the related_name and use self.student_set (i.e., the default Django relation), it still produces the same type of error. Only removing the InheritanceManager fixes the static type checking. Of course, I'm using it for a reason, as I need select_subclasses elsewhere in the code.
Has anyone come across this, or have a fix?
django-stubs uses a plugin to add all managers. This plugin is triggered only if the added manager is a "subclass" (not just a real subclass, but one recognizable by mypy as such) of models.Manager.
django-model-utils is untyped, so InheritanceManager is in fact Any for mypy, and the plugin does not see it. To solve exactly this issue, I added a py.typed marker to django-model-utils as a CI stage after package installation. You can also use a fork with py.typed or create a stub package for django-model-utils. This can result in other issues and doesn't give good type checking (all unannotated methods have Any as implicit argument and return types), but it is better than nothing. For my needs the marker was sufficient.
The py.typed marker is an empty file located in the package root (venv/lib/.../model_utils/py.typed); it tells mypy that the package does not need separate stubs and contains all necessary types.
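A minimal sketch of such a CI step, written in Python so it does not hard-code the virtualenv path; it assumes django-model-utils is already installed and that its import name is model_utils:

# create_py_typed.py - run after `pip install django-model-utils`, before mypy.
# Drops an empty py.typed marker into the installed package so mypy
# treats it as typed (PEP 561).
import pathlib

import model_utils  # import name of the django-model-utils distribution

marker = pathlib.Path(model_utils.__file__).parent / "py.typed"
marker.touch()
print(f"created {marker}")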

Use dart reflectable on external lib

I need to use reflectable on a third party lib but it is not working.
Consider this scenario:
Library A has the reflector declaration:
class Reflector extends Reflectable {
  const Reflector()
      : super(invokingCapability,
            typeRelationsCapability,
            metadataCapability,
            superclassQuantifyCapability,
            reflectedTypeCapability);
}

const Reflector reflector = const Reflector();
Library B has the classes that are annotated with the reflector:
import 'package:library_a/library_a.dart' show reflector;
@reflector
class whateverz {}
Now application C needs to use reflection on the whateverz class that is within library B.
My problem is that the reflectable lib can't see that the whateverz class is annotated. The build warns "reflector.dart: This reflector does not match anything".
And if I do print(reflector.annotatedClasses); it prints [] in the console.
Is this possible: annotating the classes in a third-party lib that I will end up using with reflection in an application?
If yes, what am I doing wrong?
I suspect that the transformation isn't being performed on the correct main file.
The transformer is capable of looking up any declaration in your program, so if there is a library in your program which is importing library B (and hence also library A) then the transformer should certainly be able to generate a mirror for class whateverz, and you should find that mirror in reflector.annotatedClasses.
But the set of files taken into account during transformation is the transitive closure of the imports from your entry point (that is, the relevant element in the entry_points specified in your pubspec.yaml), so if you specify an entry point which is not the actual main file then the transformer may get to work with a smaller (or just different) set of libraries. For instance, if you use library A as the entry point then the transformer won't know that library B exists (assuming that library A doesn't directly or indirectly import library B), so the transformer won't discover any declarations in library B and you won't get the corresponding mirrors.
If you are working on a library that other developers will import and use, you need to tell them to include the reflectable transformer in their pubspec.yaml and add an element to the entry_points (or check that they are using a wildcard that already matches all the desired entry points).
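For instance, with the old transformer-based reflectable setup, the consuming application's pubspec.yaml would contain something along these lines (the package names and entry-point paths here are illustrative):

name: application_c
dependencies:
  reflectable: any
  library_b: any
transformers:
- reflectable:
    entry_points:
      - web/main.dart  # the actual main file, or a wildcard like test/**_test.dart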
You can check out three_files_test.dart to see a tiny example where a reflector in one file is used to annotate classes in different files, and you can check out meta_reflectors_test.dart to see how you can decouple reflectors, target classes, and other elements even more (e.g., by using GlobalQuantifyCapability to associate a certain reflector with a certain target class without editing the file that contains the target class).

why are some IronPython dlls generated with a DLRCachedCode class inside?

When I compile some .py code files with no class definitions into DLLs, the compiled DLL is created with a "DLRCachedCode" class inside. Why is that?
When you compile IronPython code it doesn't get compiled to normal .NET code where you'd have a class at the IL level for each class you have at the source level. Instead it gets compiled into the same form that we compile to internally using the DLR.
For user code this is just a bunch of executable methods. There's one method for each module, function definition, and class definition. When the module code runs, it executes against a dictionary. Depending on what you do in the module, the .NET method may publish into the dictionary:
a PythonType for new-style classes
an OldClass for old-style classes
a PythonFunction object for function definitions
any values that you assign to (e.g. Foo=42)
any side effects of doing exec without providing a dictionary (e.g. exec "x=42")
etc.
The final piece of the puzzle is where this dictionary is stored and how you get at it. The dictionary is stored in a PythonModule object; we create it when the user imports the pre-compiled module, and then we execute the module against it. Therefore this code is only available via Python's import statement (or the ImportModule extension method on ScriptEngine, which is exposed via the IronPython.Hosting.Python class).
So all of the layout of the code is considered an internal implementation detail which we reserve the right to change at any point in time.
Finally, the name DLRCachedCode exists because the DLR (the outer layer) saves this code for us. Multiple languages could actually be saved into a single DLL if someone really wanted to.
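As a hedged illustration of that hosting route in C# (the assembly, module, and class names here are made up for the example):

// Sketch: load a DLL produced by pyc.py and import a module from it via
// the ImportModule extension method mentioned above.
using System;
using System.Reflection;
using IronPython.Hosting;
using Microsoft.Scripting.Hosting;

class Program
{
    static void Main()
    {
        ScriptEngine engine = Python.CreateEngine();
        // MyModule.dll is assumed to contain the compiled module "MyModule"
        engine.Runtime.LoadAssembly(Assembly.LoadFrom("MyModule.dll"));
        dynamic module = engine.ImportModule("MyModule");
        dynamic instance = module.MyClass();  // hypothetical class defined in MyModule.py
        Console.WriteLine(instance.ToString());
    }
}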
This link answers the question of how to access an IronPython class from C#: http://www.ironpython.info/index.php/Using_Compiled_Python_Classes_from_.NET/CSharp_IP_2.6
Manual compilation (\IronPython 2.7\Tools\Scripts> ipy pyc.py /out:MyClass /target:dll MyClass.py) did not work for me. It only worked, as described in the post, when I used SharpDevelop with IronPython.

Entity Container and Model generation in different assemblies

I'm doing some refactoring and am trying to reuse my generated entity models. My application has a few assemblies, one being my outward-facing public types (API) and one containing implementations of providers (such as the log).
I'd like to split the generation of the entities and models so that the entities will be in the API assembly and the container will be in the implementation assembly. Is this possible?
It is possible. This is how I did it.
Assembly A
  Database.EDMX
  Models.TT
  Models.cs
Assembly B
  Database.EDMX (added as a link to the real file in Assembly A)
  EntityContainer.TT
  EntityContainer.cs
That's how everything is laid out. These are the rough steps:
Right-click on the EDMX in A (the public API assembly) and choose Add Code Generation File.
This adds a TT to the project. I called it Models, as it will contain the models only.
Edited the TT and removed code generation for entity containers.
In assembly B (internal implementations), added Database.EDMX as a link.
Opened it in assembly B, right-clicked, and chose Add Code Generation File.
This adds a TT to project B. I called it EntityContainer, as it will contain that only.
Edited the TT to do the following:
Removed the entity creation steps.
Changed the path to Database.EDMX to a relative path pointing at the original copy in A.
Added a using for my models.
Hopefully this will all compile and work correctly (I'm still far from getting everything compiled and tested). Looks good so far.
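For reference, adding the EDMX as a link is done in the .csproj of assembly B with markup roughly like this (the relative path is illustrative; EntityDeploy is the usual build action for EDMX files):

<ItemGroup>
  <!-- link to the one real EDMX that lives in assembly A -->
  <EntityDeploy Include="..\AssemblyA\Database.edmx">
    <Link>Database.edmx</Link>
  </EntityDeploy>
</ItemGroup>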
Additional change:
In my entity container TT, I had to modify the definition of the EscapeEndTypeName to the following:
string EscapeEndTypeName(AssociationType association, int index,
                         CodeGenerationTools code)
{
    EntityType entity =
        association.AssociationEndMembers[index].GetEntityType();
    return code.CreateFullName(
        code.EscapeNamespace(association.NamespaceName), code.Escape(entity));
}
I'm using association.NamespaceName as it contains the correct namespace from the other assembly.
I don't know the answer, but I think that your question is essentially equivalent to "Is it possible to cause a T4 template in one project to emit code into a different project?" If you can do that, then you can do what you want. Note, though, that this is substantially easier in EF 4.
So I think you might get useful feedback if you asked that question directly.

Eclipse/Flex: update a file every time I launch?

OK, my project uses an XML file called Chart-app.xml. Inside this XML file there is a tag called <version></version>, which I keep in a format like <version>1.2.128</version>. I am wondering if I can make it increment the third number every time I run my project.
So if I ran it now it would be 1.2.129; if I ran it again it would be 1.2.130.
Thanks!!
After reading VonC's answer: I don't know anything about Ant or creating custom builds, but he did give me an idea that seems to be working.
I already have a method to tell whether the app is running in the ADL (within Eclipse), so if it is, I just have my app open the file itself and change the value.
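A minimal ActionScript sketch of that idea (the file name comes from the question; the regular expression, function name, and lack of error handling are illustrative). It assumes the descriptor sits in the application directory, which is writable when running under ADL during development:

import flash.filesystem.File;
import flash.filesystem.FileMode;
import flash.filesystem.FileStream;

// Bump the last component of <version>x.y.z</version> in the descriptor.
function bumpVersion():void {
    var descriptor:File = File.applicationDirectory.resolvePath("Chart-app.xml");
    var stream:FileStream = new FileStream();
    stream.open(descriptor, FileMode.READ);
    var xml:String = stream.readUTFBytes(stream.bytesAvailable);
    stream.close();

    xml = xml.replace(/<version>(\d+)\.(\d+)\.(\d+)<\/version>/,
        function(match:String, major:String, minor:String, patch:String,
                 index:int, whole:String):String {
            return "<version>" + major + "." + minor + "."
                + (int(patch) + 1) + "</version>";
        });

    stream.open(descriptor, FileMode.WRITE);
    stream.writeUTFBytes(xml);
    stream.close();
}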
I am not sure there is a native Eclipse way to do this.
You can increment the number within that XML file either:
programmatically, by launching a special class which does the increment and then calls your primary application class
through a dependency during launch; for instance, you can make a JUnit test suite which first calls a Java class doing the increment and then calls your main method
But in both cases, you would have to somehow code the increment process.
Note: it is easier when you want to increment something each time you build, because you can add a custom builder; a hedged Ant sketch of that approach follows.
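For instance, an Eclipse custom builder can run an Ant target. Ant's buildnumber task keeps an auto-incrementing counter in a build.number file, which a filtered copy can splice into the descriptor. Everything here (template file name, token, target name) is illustrative:

<!-- build.xml: regenerate Chart-app.xml from a template on every build -->
<target name="bump-version">
  <!-- increments the build.number property stored in ./build.number -->
  <buildnumber/>
  <!-- the template contains: <version>1.2.@BUILD@</version> -->
  <copy file="Chart-app.template.xml" tofile="Chart-app.xml" overwrite="true">
    <filterset>
      <filter token="BUILD" value="${build.number}"/>
    </filterset>
  </copy>
</target>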
