I have spent a day migrating all our PMD and Checkstyle rules to the new Squid rules, since the PMD/Checkstyle ones are marked as deprecated.
However, some fine-tuning options that I am used to from PMD/Checkstyle are not present in Squid.
As a result, Sonar is cluttered with thousands of issues which report nothing of real value.
Example 1
Rule:
BadConstantName_S00115_Check / S00115
All our enums are implemented with camelCase instead of CONSTANT_NAME, e.g.:
public enum Classification {
    PoorMinus(1),
    Poor(2),
    PoorPlus(3),
    OrdinaryMinus(4),
    Ordinary(5),
is easier to read than:
public enum Classification {
    POOR_MINUS(1),
    POOR(2),
    POOR_PLUS(3),
This is done to improve readability when the constants are referenced elsewhere in the code (using static imports).
So what I am looking for is a way to suppress this rule for enums, as we want to keep the rule for "real" constants.
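One partial workaround, if your version of the Java plugin honours @SuppressWarnings with rule keys (newer versions do), is to suppress the rule at the declaration itself rather than configuring it; //NOSONAR works similarly for a single line. A hedged sketch:

@SuppressWarnings("squid:S00115") // skip the constant-name rule for this enum only
public enum Classification {
    PoorMinus(1),
    Poor(2);

    private final int value;

    Classification(int value) {
        this.value = value;
    }
}

The same pattern would apply to Example 2 (e.g. @SuppressWarnings("squid:MethodCyclomaticComplexity") on equals/hashCode), but it has to be repeated on every affected declaration, which is exactly the clutter that proper rule configuration would avoid.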
Example 2
Rule:
MethodCyclomaticComplexity
After migrating, this rule reports complexity issues for all equals and hashCode methods.
(The deprecated Checkstyle rule was easy to suppress for these methods.)
It makes no sense (at least for us) to measure the complexity of methods which are auto-generated by Eclipse/IntelliJ. What is important is to measure the complexity of the "business logic" part of the code.
Here we would really like to suppress the rule for these specific methods.
Example 3
Rule: UndocumentedApi
I want to require javadoc for interfaces (both the type itself and its methods), but for classes at class level only (not methods/fields).
As it is now, this is not possible.
Again, I would like to disable checking for methods and fields.
CHALLENGE
Does anybody know how to achieve this kind of suppression?
I have looked at Settings / Exclusions in SonarQube, but it seems very hard or even impossible to achieve this with the exclusion settings.
So what we really need is the ability to tweak rules so that checking of certain types and methods can be suppressed, making the rules more flexible.
Ideally this should be implemented as a general feature which can easily be applied to rules where it makes sense.
For now I have to disable these rules (and most likely others as well), because their configuration is not fine-grained enough.
Where can I submit this as a feature request?
I have checked out the source from Github, so currently looking into programmatically fixing these issues. This is however not a viable long-term solution.
Dag is correct, these rules have no configuration options. After internal discussions at SonarSource, configuration options are not going to be added.
The best option for your situation would be to implement custom rules in Java, as discussed here.
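For what it's worth, a custom rule along those lines is usually a subscription visitor in the sonar-java custom rules API; the skeleton below is only a sketch (the rule key, class name and the actual complexity computation are placeholders you would fill in):

import java.util.Collections;
import java.util.List;

import org.sonar.check.Rule;
import org.sonar.plugins.java.api.IssuableSubscriptionVisitor;
import org.sonar.plugins.java.api.tree.MethodTree;
import org.sonar.plugins.java.api.tree.Tree;

// Sketch: a complexity check that ignores equals()/hashCode().
@Rule(key = "MyMethodComplexity")
public class MethodComplexityIgnoringGeneratedCheck extends IssuableSubscriptionVisitor {

    @Override
    public List<Tree.Kind> nodesToVisit() {
        return Collections.singletonList(Tree.Kind.METHOD);
    }

    @Override
    public void visitNode(Tree tree) {
        MethodTree method = (MethodTree) tree;
        String name = method.simpleName().name();
        if ("equals".equals(name) || "hashCode".equals(name)) {
            return; // skip IDE-generated methods
        }
        // ... compute the method's complexity here and call
        // reportIssue(method.simpleName(), "Method too complex") when it exceeds your threshold
    }
}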
I have seen a similar need for other rules too, but I guess the new philosophy is something else:
http://www.sonarqube.org/already-158-checkstyle-and-pmd-rules-deprecated-by-sonarqube-java-rules/
Too many configuration options: in a perfect world, a good quality rule is a rule WITHOUT any configuration options.
PS: Somehow I don't feel comfortable with the very long-term ambition of making PMD, Checkstyle and FindBugs obsolete via SonarQube's own rule engine, but this is just my opinion.
Especially for PMD, that goal isn't far off: support for PMD 5 doesn't seem to be coming, and SonarQube 4.2 doesn't even ship the PMD plugin by default.
Related
I'm writing a desktop app using Gnome technologies, and I have reached the stage where I've started planning Semantic Desktop support. After a lot of brainstorming, sketching ideas and models, writing notes and reading a lot about RDF and related topics, I finally came up with a draft plan. The first thing I decided to do is define the way I give URIs to resources, and this is where I'd like to hear your advice.
My program consists of two parts:
1) On the lower level, an RDF schema is defined. It's a standard set of classes and properties, possibly extended by users who want more options (using a definition language translated to RDF).
2) On the higher level, the user defines resources using those classes and properties.
There's no problem with the lower level, because the data model is public: even if a user decides to add new content, she's very welcome to share it and make other people's apps have more features. The problem is with the second part. On the higher level, the user defines tasks, meetings, appointments, plans and schedules. These may be private, and the user may prefer not to have any info in the URI revealing the source of the information.
So here are the questions I have on my mind:
1) Which URI scheme should I use? I don't have a website or any web pages, so using http doesn't make sense. It also doesn't seem to make sense to use any other standard IANA-registered URI scheme. I've been considering two options: use a custom URI scheme name of my own for public resources, and a bare URN for private ones, something like this:
urn:random_name_i_made_up:some_private_resource_uuid
But I was wondering whether a custom URI scheme is a good decision, so I'm open to hearing ideas from you :)
2) How to hide the private resources? On one hand, it may be very useful for the URI to tell where a task came from, especially when tasks are shared and delegated between people. On the other hand, it doesn't consider privacy. Then I was thinking, can I/should I use two different URI styles depending on user settings? This would create some inconsistency. I'm not sure what to do here, since I don't have any experience with URIs. Hopefully you have some advice for me.
1) Which URI scheme should I use?
I would advise the standard urn:uuid: followed by your resource UUID. Using standards is generally to be preferred over home-grown solutions!
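For illustration (Java here, but any language with a UUID library will do), minting such an identifier is trivial:

import java.util.UUID;

public class UriMinter {
    // Mint a standard urn:uuid identifier (RFC 4122) for a new resource.
    public static String newResourceUri() {
        return "urn:uuid:" + UUID.randomUUID();
    }
}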
2) How to hide the private resources?
Don't use different identifier schemes. Trying to bake authorization and access control into the identity scheme is mixing the layers in a way that's bound to cause you pain in the future. For example, what happens if a user makes some currently private content (e.g. a draft) public (it's now in its publishable form)?
Have a single, uniform identifier solution, then provide one or more services that may or may not resolve a given identifier to a document, depending on context (user identity, metadata about the content itself, etc.). Yes, this is much like what an HTTP server would do, so you may want to reconsider whether to have an embedded HTTP service in your architecture. If not, the service you need will have many similarities to HTTP; you just need to be clear about the circumstances in which an identifier may be resolved to a document, what happens when that is either not possible or not permitted, and so on.
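A rough sketch of that separation (all names invented for illustration): the identifier stays uniform and opaque, and a resolver decides per request whether the caller gets the document.

import java.util.Optional;

// Hypothetical resolver: identity is uniform; access is decided at resolution time.
public interface ResourceResolver {

    // Placeholder types, just to keep the sketch self-contained.
    interface Document { }
    interface RequestContext { }

    // Returns empty when the resource is unknown, or not permitted in this context.
    Optional<Document> resolve(String uri, RequestContext context);
}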
You may also want to consider where you're going to add the most value. Re-inventing the basic service access protocols may be a fun exercise, but your users may get more value if you re-use standard components at the basic service level, and concentrate instead on innovating and adding features once the user actually has access to the content objects.
I have a system where 'dynamic logic' is implemented as Drools rules, using a rules engine.
For each client implementation, custom pricing and tax calculation logic is implemented in the DRL files for that specific implementation.
rule "abc"
when
    // match a fact whose name is "X" (Product is an illustrative fact type)
    $p : Product( name == "X" )
then
    $p.setPrice(12);
end
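For context, a rule like the one above matches against plain Java facts inserted into the session; a minimal version of such a fact class (the Product name is purely illustrative) might look like this:

// Minimal fact class matched by rules such as the one above (illustrative only).
public class Product {

    private String name;
    private int price;

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public int getPrice() { return price; }
    public void setPrice(int price) { this.price = price; }
}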
And one rule's conditions depend on what was set by previous rules, so there are essentially rule transitions.
This is really painful, as Drools rules are not sequential programming and are not developer friendly. Lots of bugs get introduced due to misinterpretation of how Drools evaluates the rules.
Is there a better Java/Groovy alternative that could easily replace it?
I think the answer is going to depend on what you ultimately want the end solution to be. If you want to pull your business rules out of a rules engine and put them into Java/Groovy, that is very different from wanting to pull them out of one rules engine and into another.
Your question seems to lean towards the former, so I'll address that. Be very careful with this approach. The people who implemented this appear to have done it the proper way with respect to using the Rete algorithm, as it sounds like the firing of one rule can trigger other rules. That is what good business rules look like - they aren't sequential, they are declarative. Remember that imperative software is written mostly for engineers; it doesn't map back to the real world 100% of the time :)
If you want to move this into Java/Groovy, you are moving into an imperative language, which could put you into an if/then/else hell. I would suggest the following:
Isolate this code away from the rest of your codebase - you will be doing a lot of maintenance on this code in the future when the business changes their rules. Good interface design and encapsulation here will pay off big time down the road.
Develop some type of DSL with your business customer so when they say something like "Credit Policy", you know exactly what they are referring to and can change the related rules appropriately.
Unit Test, Unit Test, Unit Test. This applies to your current configuration too. If you are finding bugs now, why aren't your tests finding them? It doesn't take long to set up JUnit to create an object, invoke your Drools engine and test the response. If you add some loops to test ranges of variables that expect the same response, you can be in the hundreds of thousands of tests in no time.
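As a sketch of how small such a test can be with the Drools 6 KIE API (the session name, fact class and expected price are assumptions about your setup):

import static org.junit.Assert.assertEquals;

import org.junit.Test;
import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;

public class PricingRulesTest {

    @Test
    public void priceIsSetForProductX() {
        // "ksession-rules" must match a session defined in your kmodule.xml
        KieContainer container = KieServices.Factory.get().getKieClasspathContainer();
        KieSession session = container.newKieSession("ksession-rules");
        try {
            Product product = new Product();
            product.setName("X");

            session.insert(product);
            session.fireAllRules();

            // the "abc" rule above should have priced the product
            assertEquals(12, product.getPrice());
        } finally {
            session.dispose();
        }
    }
}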
As an aside: If you don't want to go down this route, then I highly suggest getting some training on Drools so that you understand the engine and Rete if you don't already. There are some big wins you can make with your customers if you are able to quickly translate their rules into implementable software.
OK, so I was just thinking to myself: why do programmers stress so much about access modifiers in OOP?
Let's take this code as an example (PHP):
class StackOverflow
{
    private $web_address;
    public function setWebAddress(){/*...*/}
}
Because web_address is private, it cannot be changed via $object->web_address = 'w.e.';. But the only way that variable will ever change is if your program executes $object->web_address = 'w.e.'; in the first place.
If within my application I wanted a variable not to be changed, then I would build my application so that my code simply does not contain any code to change it, and therefore it would never be changed?
So my question is: what are the major rules and reasons for using private / protected / non-public entities?
Because (ideally) a class should have two parts:
an interface exposed to the rest of the world, a manifest of how others can talk to it. Example in a filehandle class: String read(int bytes). Of course this has to be public; (one of) the main purposes of our class is to provide this functionality.
internal state, which no one but the instance itself should (have to) care about. Example in a filehandle class: private String buffer. This can and should be hidden from the rest of the world: they have no business with it, it's an implementation detail (a small sketch follows below).
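A rough Java rendering of that filehandle example (names are illustrative, not from any particular library):

// The public method is the contract; the buffer is an implementation detail.
public class FileHandle {

    private final StringBuilder buffer = new StringBuilder(); // internal state, hidden

    public String read(int bytes) {
        // Exposed interface: how the buffer is filled and consumed can change freely
        // without affecting callers.
        int end = Math.min(bytes, buffer.length());
        String chunk = buffer.substring(0, end);
        buffer.delete(0, end);
        return chunk;
    }
}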
This is even done in languages without access modifiers, e.g. Python - except that we don't force people to respect privacy (and remember, they can always use reflection anyway - encapsulation can never be 100% enforced); instead we prefix private members with _ to indicate "you shouldn't touch this; if you want to mess with it, do so at your own risk".
Because you might not be the only developer in your project and the other developers might not know that they shouldn't change it. Or you might forget etc.
It makes it easy to spot (even the compiler can spot it) when you're doing something that someone has said would be a bad idea.
So my question is: what are the major rules and reasons for using private / protected / non-public entities?
In Python, there are no access modifiers.
So the reasons are actually language-specific. You might want to update your question slightly to reflect this.
It's a fairly common question about Python. Many programmers from Java or C++ (or other) backgrounds like to think deeply about this. When they learn Python, there's really no deep thinking. The operating principle is
We're all adults here
It's not clear who -- precisely -- the access modifiers help. In Lakos' book, Large-Scale C++ Software Design, there's a long discussion of "protected", since the semantics of protected make subclass and client interfaces a bit murky.
http://www.amazon.com/Large-Scale-Software-Design-John-Lakos/dp/0201633620
Access modifiers are a tool for a defensive programming strategy. You consciously protect your code against your own stupid errors (when you forget something after a while, didn't understand something correctly or just haven't had enough coffee).
You keep yourself from accidentally executing $object->web_address = 'w.e.';. This might seem unnecessary at the moment, but it won't be unnecessary if
two months later you want to change something in the project (and have forgotten all about the fact that web_address should not be changed directly), or
your project has many thousands of lines of code and you simply cannot remember which fields you are "allowed" to set directly and which ones require a setter method.
Just because a class has "something" doesn't mean it should expose that something. The class should implement its contract/interface/whatever you want to call it, but in doing so it could easily have all kinds of internal members/methods that don't need to be (and by all rights shouldn't be) known outside of that class.
Sure, you could write the rest of your application to just deal with it anyway, but that's not really considered good design.
If we have a defined hierarchy in an application, for example a 3-tier architecture, how do we restrict subsequent developers from violating the norms?
For example, in the case of an MVP (not ASP.NET MVC) architecture, the presenter should always bind the model and view. This helps in writing proper unit test programs. However, we have had instances where people directly imported the model in the view and called its functions, violating the norms, and hence the test cases couldn't be written properly.
Is there a way we can restrict which classes are allowed to inherit from a set of classes? I am looking at various possibilities, including adopting a different design pattern; however, a new approach should be worth the code change involved.
I'm afraid this is not possible. We tried to achieve this with the help of attributes and we didn't succeed. You may want to refer to my past post on SO.
The best you can do is keep checking your assemblies with NDepend. NDepend shows you a dependency diagram of the assemblies in your project, so you can immediately track violations and take action reactively.
(source: ndepend.com)
It's been almost 3 years since I posted this question. I must say that I have tried exploring this despite the brilliant answers here. Some of the lessons I've learnt so far -
More code smells come out by looking at the consumers (unit tests are the best place to look, if you have them).
The number of parameters in a constructor is a direct indication of the number of dependencies. Too many dependencies => the class is doing too much.
Number of (public) methods in a class
Setup of unit tests will almost always give this away
Code deteriorates over time, unless there is a focused effort to clear technical debt, and refactoring. This is true irrespective of the language.
Tools can help only to an extent. But a combination of tools and tests often give enough hints on various smells. It takes a bit of experience to catch them in a timely fashion, particularly to understand each smell's significance and impact.
You are wanting to solve a people problem with software? Prepare for a world of pain!
The way to solve the problem is to make sure that you have ways of working with people such that you don't end up with those kinds of problems: pair programming / reviews, induction of people when they first come onto the project, etc.
Having said that, you can write tools that analyse the software and look for common problems. But people are pretty creative and can find all sorts of bizarre ways of doing things.
Just as soon as everything gets locked down according to your satisfaction, new requirements will arrive and you'll have to break through the side of it.
Enforcing such stringency at the programming level with .NET is almost impossible considering a programmer can access all private members through reflection.
Do yourself a favour and schedule regular code reviews, provide education and implement proper training. And, as you said, it will quickly become evident when you can't write unit tests against it.
What about NetArchTest, which is inspired by ArchUnit?
Example:
// Classes in the presentation layer should not directly reference repositories
var result = Types.InCurrentDomain()
    .That()
    .ResideInNamespace("NetArchTest.SampleLibrary.Presentation")
    .ShouldNot()
    .HaveDependencyOn("NetArchTest.SampleLibrary.Data")
    .GetResult()
    .IsSuccessful;

// Classes that depend on System.Data should reside in the "data" namespace
result = Types.InCurrentDomain()
    .That().HaveDependencyOn("System.Data")
    .And().ResideInNamespace("ArchTest")
    .Should().ResideInNamespace("NetArchTest.SampleLibrary.Data")
    .GetResult()
    .IsSuccessful;
"This project allows you create tests that enforce conventions for class design, naming and dependency in .Net code bases. These can be used with any unit test framework and incorporated into a build pipeline. "
Say you've got an old codebase that you need to maintain and it's clearly not compliant with current standards. How would you distribute your efforts between keeping the codebase backwards compatible and gaining standards compliance? What is important to you?
At my workplace we don't get any time to refactor things just because it makes the code better. There has to be a bug or a feature request from the customer.
Given that rule I do the following:
When I refactor:
If I have to make changes to an old project, I clean up, refactor or rewrite the part that I'm changing.
Because I always have to understand the code to make the change in the first place, that's the best time to make other changes.
Also, this is a good time to add missing unit tests, both for your change and for the existing code.
I always do the refactoring first and then make my changes, so that way I'm much more confident my changes didn't break anything.
Priorities:
The code parts which need refactoring the most often contain the most bugs. Also, we don't get any bug reports or feature requests for parts which are of no interest to the customer. Hence, with this method, prioritization comes automatically.
Circumventing bad designs:
I guess there are tons of books about this, but here's what helped me the most:
I build facades:
Facades with a new, good design which "quarantine" the existing bad code. I use these if I have to write new code and have to reuse badly structured existing code which I can't change due to time constraints.
Facades with the original bad design, which hide the new, good code. I use these if I have to write new code which is used by the existing code.
The main purpose of those facades is dividing good from bad code and thus limiting dependencies and cascading effects.
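A rough Java illustration of the first kind of facade (all names invented): new code depends on a clean interface, and only the facade knows about the messy legacy class.

// New, clean contract that the rest of the new code depends on.
interface PriceCalculator {
    int priceFor(String productCode);
}

// Stand-in for the badly structured existing code (illustrative only).
class LegacyPricingEngine {
    int calc(String code, boolean legacyFlag) {
        return 12; // pretend legacy logic
    }
}

// Facade that quarantines the legacy code behind the new interface.
class LegacyPricingFacade implements PriceCalculator {

    private final LegacyPricingEngine legacy = new LegacyPricingEngine();

    @Override
    public int priceFor(String productCode) {
        // All the awkward legacy calls and conversions stay hidden in here.
        return legacy.calc(productCode, true);
    }
}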
I'd factor in time to fix up modules as I needed to work with them. An up-front rewrite/refactor isn't going to fly in any reasonable company, but it'll almost definitely be a nightmare to maintain a bodgy codebase without cleaning it up somewhat.