I want to run multiple JaCoCo Java agents on the same target, each using its own TCP port. This causes an issue due to a naming conflict.
My thought is to rename jacoco_agent.jar to jacoco2_agent.jar, including all the class names inside it.
I already tried jarjar, but that failed: attaching the renamed jar raises ClassNotFoundException: org.jacoco.agent.rt.internal_3570298.PreMain, indicating that the renaming went wrong at some point.
3 days ago Stefan wrote in a comment:
@kriegaex your solution worked. Thank you very much for your help.
Now that comment is gone, not sure why. Anyway, I am going to convert my comment into an answer:
Are you aware of the option to merge multiple JaCoCo files into one for a complete coverage report? You can do this with the JaCoCo Maven plugin (e.g. for multi-module projects) as well as manually. Just make sure that your fuzzer creates different files for each run, then merge them in the end.
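For the manual route, a minimal sketch using the JaCoCo command-line client (file names and paths below are placeholders):

    # Merge the execution data from the individual agent runs into one file
    java -jar jacococli.jar merge run1.exec run2.exec --destfile merged.exec

    # Generate a single HTML report from the merged data
    java -jar jacococli.jar report merged.exec \
         --classfiles target/classes \
         --sourcefiles src/main/java \
         --html coverage-report

The Maven equivalent is the jacoco:merge goal (plus jacoco:report), which does the same thing as part of the build.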
SonarQube is displaying errors for empty css/scss files in our Angular application. What are the effects of having empty scss files? Do they cause performance issues, bugs, or other problems down the line? These files are generally left over when we run ng generate component.
SonarQube flags: "Remove this empty stylesheet".
The article below states to ignore it and that the compiler will take care of it; however, I am more interested in the effects of leaving the empty files in place, if there are any.
Empty style (.css/.scss) files
Our company would have to go through 1000+ empty scss files in a large application, so I am interested to know whether it is worth the time.
As far as I can tell from looking into this, the best answer is to just leave it. The compiler will indeed handle the empty files appropriately.
SonarQube is just picking it up as a code smell; empty files should probably be removed to keep a project in its least complex state possible. In the example you gave, with a company going through that many files, it is a complete waste of time.
I come from the future and faced the same problem. Our solution was to delete the files that were already empty and to generate new components with the --inline-style option, so that no css/scss files are created by default when creating the components.
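For example, roughly (the component name is just an example; --inline-style keeps the styles inside the component file):

    # Generate a component without a separate stylesheet file
    ng generate component my-widget --inline-style

If I remember correctly, you can also make this the default for the whole project via the schematics section of angular.json, so nobody has to remember the flag.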
Is there a way to make csslint in Atom ignore "ids", so I don't get the warning "Don't use IDs in selectors"?
Edit: My question was identified as a possible duplicate of "Disable warnings (ids selector) in linter-csslint on Atom?", but as I mention in my own answer below, I could not understand how to apply that solution to my problem. I ended up figuring it out.
Well, guys, I ended up discovering how to do that. The other questions similar to mine did solve the problem, but they were not so clear to me on how to make the solution work. If you go to GitHub "https://github.com/ebednarz/csslintrc/blob/master/.csslintrc", there is an example file with the lines to ignore the elements. What wasn't clear to me was: OK, what should I do with it?
So you have two options (I use Windows). You either create a file called ".csslintrc" (in Atom or from the command line) and place it inside the folder of the project you are working on, or you place the file in your user folder in Windows, e.g. "C:\Users\yourUserName". If you put it there, all your projects will use this file.
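For reference, a minimal sketch of what the file's contents can look like; this is the JSON style used in the linked example, with just the rule from this question turned off:

    {
      "ids": false
    }

You may need to restart Atom (or reload the window) before linter-csslint picks up the new file.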
Over the past few weeks I have been getting into Ada, for various reasons, but my personal reasons for using Ada are out of scope for this question.
The other day I started using the gprbuild command that comes with the Windows version of GNAT, in order to get the benefits of managing my applications in a project-related manner. That is, being able to define certain attributes on a per-project basis, rather than manually setting up the compile phase myself.
Currently I name my files based on what seems to be the standard convention for gprbuild, although I could very much be wrong: a period in the package structure becomes a - in the file name, and an underscore stays an _. As such, a package by the name App.Test.File_Utils would have the file name app-test-file_utils, with .ads and .adb extensions accordingly.
In the .gpr project file I have specified:
for Source_Dirs use ("app/src/**");
so that I am allowed to use multiple directories for storing my files, rather than needing to have them all in the same directory.
The Problem
The problem that arises, however, is that file names tend to get very long. As I am already putting the files in a directory based on the package name contained by the file, I was wondering if there is a way to somehow make the compiler understand that the package name can be retrieved from the file's directory name.
That is, rather than having to name the file for App.Test.File_Utils app-test-file_utils, I would like it to reside under the app/test directory with the name file_utils.
Is this doable, or will I be stuck with the horrors of eventually having to name my files along the lines of app-test-some-then-one-has-more_files-another_package-knew-test-more-important_package.ads? That is, assuming I have not missed something about how an Ada application should actually be structured.
What I have tried
I tried looking for answers in the documentation for the Naming package of gpr files, but to no avail. Furthermore, I have been browsing the web for information, but decided it might be better to get help through Stack Overflow, so that other people who might struggle with this problem in the future (granted it is a problem in the first place) might also get help.
Any pointers in the right direction would be very helpful!
In the top-secret GNAT documentation there is a description of how to use non-default file names. It's a great deal of effort. You will probably give up, use the default names, and put them all in a single directory.
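For reference, the per-unit overrides described there look roughly like this in the project file (a sketch; the unit name is taken from the question, and the file names are whatever you choose, as long as they can be found via Source_Dirs):

    package Naming is
       --  Map the unit App.Test.File_Utils onto shorter file names
       for Spec ("App.Test.File_Utils") use "file_utils.ads";
       for Body ("App.Test.File_Utils") use "file_utils.adb";
    end Naming;

You have to write such a pair for every unit that deviates from the default convention, which is where the effort comes from.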
You can also simplify much of the effort by using GPS and letting it build your project file as you add files to your source directories.
I have a rather big library with a significant set of APIs that I need to expose. In fact, I'd like to expose the whole thing. There is a lot of namespacing going on, like:
FooLibrary.Bar
FooLibrary.Qux.Rumps
FooLibrary.Qux.Scrooge
..
Basically, what I would like to do is make sure that the user can access that whole namespace. I have had a whole bunch of trouble with this, and I'm totally new to closure, so I thought I'd ask for some input.
First, I need closurebuilder.py to send the full list of files to the Closure Compiler. This doesn't seem supported: --namespace Foo does not include Foo.Bar, and --input only allows a single file, not a directory. Nor can I simply send my list of files to the Closure Compiler directly, because my code also requires things like "goog.asserts", so I do need the resolver.
In fact, the only solution I can see is having a FooLibrary.ExposeAPI JS file that goog.require()s everything. Surely that can't be right?
This is my main issue.
However, later the Closure Compiler, with ADVANCED_OPTIMIZATIONS on, will optimize all these names away. I can fix that by adding "@export" all over the place, which I am not happy about, but it should work. I suppose it would also be valid to use an extern here. Or I could simply disable advanced optimizations.
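To illustrate what that looks like (a sketch; the function name is made up, and @export only takes effect when the compiler is run with --generate_exports):

    goog.provide('FooLibrary.Bar');

    /**
     * Example function; @export keeps the name reachable
     * under ADVANCED_OPTIMIZATIONS.
     * @export
     */
    FooLibrary.Bar.frobnicate = function() {
      return 42;
    };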
What I can't do, apparently, is say "export FooLibrary.*". Wouldn't that make sense?
Finally, for working in source mode, I need to call goog.require() for every namespace I am using. This is merely an inconvenience, though I am mentioning it because it is somewhat related to my trouble above. I would prefer to be able to do:
goog.requireRecursively('FooLibrary')
in order to pull in all the child namespaces as well, thus recreating with a single command the environment that I have when I am using the compiled version of my library.
I feel like I am possibly misunderstanding some things, or how Closure is supposed to be used. I'd be interested in looking at other Closure-based libraries to see how they solve this.
You are discovering that Closure-compiler is built more for the end consumer and not as much for the library author.
If you are exporting basically everything, then you would be better off with SIMPLE_OPTIMIZATIONS. I would still highly encourage you to maintain compatibility of your library with ADVANCED_OPTIMIZATIONS so that users can compile the library source with their project.
First, I need closurebuilder.py to send the full list of files to the closure compiler. ...
In fact, the only solution I can see is having a FooLibrary.ExposeAPI JS file that goog.require()s everything. Surely that can't be right?
You would need to specify a --root flag pointing at your source folder and specify the namespaces of the leaf nodes of your file dependency tree. You may have better luck with the now deprecated CalcDeps.py script. I still use it for some projects.
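A rough sketch of such a closurebuilder.py invocation (paths and namespaces below are made up; --root is repeated for each source tree, and each --namespace names a leaf entry point whose dependencies get pulled in):

    python closure/bin/build/closurebuilder.py \
      --root=closure-library/ \
      --root=foolibrary/src/ \
      --namespace="FooLibrary.Qux.Rumps" \
      --namespace="FooLibrary.Qux.Scrooge" \
      --output_mode=compiled \
      --compiler_jar=compiler.jar \
      --compiler_flags="--compilation_level=ADVANCED_OPTIMIZATIONS" \
      > foolibrary.min.js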
What I can't do, apparently, is say "export FooLibrary.*". Wouldn't that make sense?
You can't do that because it only makes sense based on the final usage. You as the library writer wish to export everything, but perhaps a consumer of your library wishes to include the source (uncompiled) version and have more dead code elimination. Library authors are stuck in a kind of middle ground between SIMPLE and ADVANCED optimization levels.
What I have done for this case is maintain a separate exports file for my namespace that exports everything. When compiling a standalone version of my library for distribution, the exports file is included in the compilation. However I can still include the library source (without the exports) into a project and get full dead code elimination. The work/payoff balance of this though must be weighed against just using SIMPLE_OPTIMIZATIONS for the standalone library.
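As a rough illustration of such an exports file (the names are made up; goog.exportSymbol is the runtime counterpart of the @export annotation):

    // exports.js - only included when building the standalone distribution.
    goog.require('FooLibrary.Bar');
    goog.require('FooLibrary.Qux.Rumps');

    // Keep the public API reachable under its original names
    // after ADVANCED_OPTIMIZATIONS renaming.
    goog.exportSymbol('FooLibrary.Bar.frobnicate', FooLibrary.Bar.frobnicate);
    goog.exportSymbol('FooLibrary.Qux.Rumps', FooLibrary.Qux.Rumps);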
My GeolocationMarker library has an example of this strategy.
My team now has an SVN + Ankh setup in ASP.NET with trunk, branches, and tags. We switch branches and work on code, but many times there will be inexplicable conflicts in files after simple changes. Why is this? I suspect we simply don't understand enough of how this works. Are there any do's and don'ts, or how should we be approaching our daily changes and commits, without causing conflicts? Is there a basic pecking order of operations to perform to achieve SVN zen? Are we updating before committing or something? Any help is greatly appreciated.
Always update before you commit. If you really work with branches, don't use switch, or only use it if you really understand the switch command and how it works; otherwise check out the branch into a fresh working copy, in other words create a new one.
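In plain svn commands the daily routine looks roughly like this (URLs and branch names are placeholders):

    # bring the working copy up to date, resolve anything svn reports, then commit
    svn update
    svn commit -m "describe the change"

    # instead of 'svn switch', give each branch its own working copy
    svn checkout https://server/repo/branches/feature-x feature-x-wc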
Always branch and merge on the solution element. Make sure you are fully up to date before merging (AnkhSVN will warn about this), and also make sure you have no modified files before merging.
Read up in the svnbook on when to use normal merging and when to use reintegrate.
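For reference, the two cases look roughly like this on the command line (working-copy paths and branch names are placeholders):

    # sync merge: pull the latest trunk changes into the feature branch
    cd feature-x-wc
    svn merge ^/trunk
    svn commit -m "Merged trunk into feature-x"

    # reintegrate: merge the finished branch back into trunk
    cd ../trunk-wc
    svn update
    svn merge --reintegrate ^/branches/feature-x
    svn commit -m "Reintegrated feature-x into trunk"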
Finally, if a conflict does occur, make sure you have a good 3-way merge tool to resolve it. AnkhSVN recognizes a lot of them automatically, but I really like SourceGear DiffMerge.