final methods in JavaFX source

Problem
I need to override the method
@Override protected final void layoutChartChildren(double top, double left, double width, double height)
of the XYChart class. Obviously I'm not allowed to.
Question
Why do people declare methods as "final"? Is there any benefit in that?

This answer is just a verbatim quote of text by Richard Bair, one of the JavaFX API designers, which was posted on a mailing list in response to the question: "Why is almost everything in the [JavaFX] API final?"
Subclassing breaks encapsulation. That's the fundamental reason why
you must design with care to allow for subclassing, or prohibit it.
Making all the fields of a class public would give developers
increased power -- but of course this breaks encapsulation, so we
avoid it.
We broke people all the time in Swing. It was very difficult to make
even modest bug fixes in Swing without breaking somebody. Changing the
order of calls in a method broke people. When your framework or API
is being used by millions of programs and the program authors have no
way of knowing which version of your framework they might be running
on (the curse of a shared install of the JRE!), then you find an awful
lot of wisdom in making everything final you possibly can. It isn't
just to protect your own freedom, it actually creates a better product
for everybody. You think you want to subclass and override, but this
comes with a significant downside. The framework author isn't going to
be able to make things better for you in the future.
There's more to it though. When you design an API, you have to think
about the combinations of all things allowed by a developer. When you
allow subclassing, you open up a tremendous number of additional
possible failure modes, so you need to do so with care. Allowing a
subclass but limiting what a superclass allows for redefinition
reduces failure modes. One of my ideals in API design is to create an
API with as much power as possible while reducing the number of
failure modes. It is challenging to do so while also providing enough
flexibility for developers to do what they need to do, and if I have
to choose, I will always err on the side of giving less API in a
release, because you can always add more API later, but once you've
released an API you're stuck with it, or you will break people. And in
this case, API doesn't just mean the method signature, it means the
behavior when certain methods are invoked (as Josh points out in
Effective Java).
The getter / setter method problem Jonathan described is a perfect
example. If we make those methods non-final, then indeed it allows a
subclass to override and log calls. But that's about all it is good
for. If the subclass were to never call super, then we will be broken
(and their app as well!). They think they're disallowing a certain
input value, but they're not. Or the getter returns a value other than
what the property object holds. Or listener notification doesn't
happen right or at the right time. Or the wrong instance of the
property object is returned.
Two things I really like: final, and immutability. GUIs, however, tend
to favor big class hierarchies and mutable state :-). But we use final
and immutability as much as we can.
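To make the setter failure mode described above concrete, here is a minimal sketch (hypothetical Java classes, not actual JavaFX code) of a subclass that overrides a non-final setter and never calls super:

class Gauge {
    private double value;

    // Invariant: value is always clamped to the range [0, 100].
    public void setValue(double v) {
        value = Math.min(100, Math.max(0, v));
    }

    public double getValue() {
        return value;
    }
}

class LoggingGauge extends Gauge {
    @Override
    public void setValue(double v) {
        System.out.println("setValue(" + v + ")");
        // Forgot to call super.setValue(v): the value is never stored,
        // so getValue() keeps returning the old value and the clamping
        // invariant never runs.
    }
}

The subclass author thinks they are merely logging calls, but they have silently broken both the stored state and the invariant; declaring setValue final rules this failure mode out entirely.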
Some information:
Best practice since JavaFX setters/getters are final?

Related

Are Singletons EVIL in GUI Programming with Qt?

I'm just starting my first fairly large Qt project which will be mostly a bunch of screens with buttons, tab widgets, and Qwt Plots. The panel stack pattern described in Qt Quarterly 27 seems pretty nice for my application. Each of my screens is a QWidget encapsulated in a Panel which is shown/hidden by a QStackedWidget. However it uses a singleton pattern for each Panel so that they aren't all created immediately as the app starts and so that more than one of each screen isn't ever created.
So I started coding. Got the panel stack working. Added some code so that dynamically updating widgets aren't dynamically updating all the time. Got my history stack/back button working for the panels. Everything seems just fine, but I've got one nagging worry:
My code smells.
I am in no place to argue with any of the hate posted here and on blogs about the singleton pattern. I think I get it and the code I've written does make me feel a bit dirty with all the boilerplate lines and global objects. But I do like not having to worry about whether or not I already instantiated a screen before switching to it and adding it to my history stack. I just say switch to that screen, it's added to my history stack, and the magic works.
From what I've read there are also some cases where singletons can be worthwhile. Is this one of those special cases? The magic screen switching / history stack makes me think 'yes' but the sheer number of different singleton classes I'm going to have to create makes me think 'NO no NO NO NO'.
I want to just man up and figure out how to get the singleton pattern out of my code now so that I don't have to do it later. But I don't want to get rid of all my singleton classes just to get rid of my singleton classes because they're EVIL [citation needed].
Any input is much appreciated!
I don't really hate singletons, but this sounds like a case with no use for them. I don't understand why there are so many singletons in that article.
First, the PanelStack is a singleton by itself. Why? If that's your main widget, then just create it on the stack in the main(), which is both cleaner and faster. If it is a part of a more complicated UI, then put it there as a member of that UI. A regular class is just fine here, making it singleton only limits its possible uses.
Then, each panel is also a singleton? At this point even singleton lovers should begin to feel that there are too many of them already. Which is probably why you are asking this question in the first place. Let's see what real advantages singletons give here. Well, about the only advantage I can figure out from that article is the ability of lazily creating panels on the fly as they are needed. This is actually a good thing, but in fact, lazy creation and singletons are different patterns, although one often uses the other.
Why not just put all those panels in some common container instead? In this case, the PanelStack looks like a perfect candidate for it. It is the very place where panels are stored after all. Instead of a bunch of singletons, let's create a bunch of methods in the PanelStack:
class PanelStack : public QWidget
{
    Q_OBJECT
public:
    int addPanel(AbstractPanel *);
    void showPanel(int);
    RecordingsPanel *getRecordingsPanel();
    RecordingDetailsPanel *getRecordingDetailsPanel();
private:
    ...
};
And so on. These get*Panel() methods can still create panels lazily as needed. Now, it's essentially the same thing as having a bunch of singletons, with some advantages added:
If we make panels children of the stack, they are automatically deleted when the stack is deleted. No need to worry about memory management which is always a pain with singletons.
You could even implement some sort of "garbage collector" in the PanelStack that deletes panels that haven't been used for some time. Or when some sort of "max active panels" limit is reached.
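To illustrate the lazy creation mentioned above, here's a rough sketch (written in Java rather than Qt/C++, with hypothetical panel types, since the idea itself is not Qt-specific):

class RecordingsPanel {
    RecordingsPanel(PanelStack parent) {
        // build the widgets, parented to the stack
    }
}

class PanelStack {
    private RecordingsPanel recordingsPanel; // created on first use

    RecordingsPanel getRecordingsPanel() {
        if (recordingsPanel == null) {
            // lazily constructed, and owned by the stack rather than
            // by a global singleton
            recordingsPanel = new RecordingsPanel(this);
        }
        return recordingsPanel;
    }
}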
Now, the only disadvantage I can think of is that we have a dependency between the stack and the panels now. But what's worse, to store instances in one class, introducing a dependency, or to store them globally? If you think that the stack should be independent from panels, which does sound reasonable, then we probably just need another class to put all those things in. It could be a subclass of QApplication, or just some random "UI manager" class, but it is still better to store everything in one place than to store everything globally.
Using singletons here only breaks encapsulation and limits the possible uses of the whole UI. What if we want to have two windows with those panels? Or multiple tabs (think web browser)? Singletons will bite hard. And they are only really useful when the instance is accessed widely across many unrelated classes (think DB connections, loggers, pools and other typical singleton uses). They are mostly useless in a UI, because with a UI it is almost always obvious that "this thing belongs there, and probably nowhere else".

Access Modifiers ... Why?

Ok, so I was just thinking to myself: why do programmers stress so much when it comes down to access modifiers within OOP?
Let's take this code for example (PHP):
class StackOverflow
{
    private $web_address;
    public function setWebAddress(){/*...*/}
}
Because web_address is private, it cannot be changed via $object->web_address = 'w.e.'; but the only way that variable would ever change is if my program contained the code $object->web_address = 'w.e.'; in the first place.
If within my application I wanted a variable never to be changed, then I would simply write my application so that it does not contain the code to change it, and therefore it would never be changed, right?
So my question is: what are the major rules and reasons for using private / protected / non-public entities?
Because (ideally), a class should have two parts:
an interface exposed to the rest of the world, a manifest of how others can talk to it. Example in a filehandle class: String read(int bytes). Of course this has to be public; providing this functionality is the main purpose of our class.
internal state, which no one but the instance itself should (have to) care about. Example in a filehandle class: private String buffer. This can and should be hidden from the rest of the world: they have no business with it, it's an implementation detail.
This is even done in languages without access modifiers, e.g. Python - except that we don't force people to respect privacy (and remember, they can always use reflection anyway - encapsulation can never be 100% enforced) but prefix private members with _ to indicate "you shouldn't touch this; if you want to mess with it, do so at your own risk".
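A minimal sketch of that two-part split, using the filehandle example (toy Java code, not a real I/O class):

class FileHandle {
    // Internal state: an implementation detail the outside world
    // has no business with.
    private String buffer = "file contents go here";
    private int position = 0;

    // Public interface: the one documented way to talk to this class.
    public String read(int bytes) {
        int end = Math.min(position + bytes, buffer.length());
        String chunk = buffer.substring(position, end);
        position = end;
        return chunk;
    }
}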
Because you might not be the only developer in your project and the other developers might not know that they shouldn't change it. Or you might forget etc.
It makes it easy to spot (even the compiler can spot it) when you're doing something that someone has said would be a bad idea.
So my question is: what are the major rules and reasons for using private / protected / non-public entities?
In Python, there are no access modifiers.
So the reasons are actually language-specific. You might want to update your question slightly to reflect this.
It's a fairly common question about Python. Many programmers from Java or C++ (or other) backgrounds like to think deeply about this. When they learn Python, there's really no deep thinking. The operating principle is
We're all adults here
It's not clear who -- precisely -- the access modifiers help. In Lakos' book, Large-Scale C++ Software Design, there's a long discussion of "protected", since the semantics of protected make subclasses and client interfaces a bit murky.
http://www.amazon.com/Large-Scale-Software-Design-John-Lakos/dp/0201633620
Access modifiers are a tool for a defensive programming strategy. You consciously protect your code against your own stupid errors (when you forget something after a while, didn't understand something correctly, or just haven't had enough coffee).
You keep yourself from accidentally executing $object->web_address = 'w.e.';. This might seem unnecessary at the moment, but it won't be unnecessary if
two months later you want to change something in the project (and have forgotten all about the fact that web_address should not be changed directly) or
your project has many thousand lines of code and you simply cannot remember which field you are "allowed" to set directly and which ones require a setter method.
Just because a class has "something" doesn't mean it should expose that something. The class should implement its contract/interface/whatever you want to call it, but in doing so it could easily have all kinds of internal members/methods that don't need to be (and by all rights shouldn't be) known outside of that class.
Sure, you could write the rest of your application to just deal with it anyway, but that's not really considered good design.

API design: is "fault tolerance" a good thing?

I've consolidated many of the useful answers and come up with my own answer below.
For example, I am writing an API Foo which needs explicit initialization and termination. (Should be language agnostic, but I'm using C++ here.)
class Foo
{
public:
    static void InitLibrary(int someMagicInputRequiredAtRuntime);
    static void TermLibrary(int someOtherInput);
};
Apparently, our library doesn't care about multi-threading, reentrancy or whatnot. Let's suppose our Init function should only be called once; calling it again with any other input would wreak havoc.
What's the best way to communicate this to my caller? I can think of two ways:
Inside InitLibrary, I assert some static variable which will blame my caller for init'ing twice.
Inside InitLibrary, I check some static variable and silently abort if my lib has already been initialized.
Method #1 obviously is explicit, while method #2 makes it more user friendly. I am thinking that method #2 probably has the disadvantage that my caller wouldn't be aware of the fact that InitLibrary shouldn't be called twice.
What would be the pros/cons of each approach? Is there a cleverer way to subvert all these?
Edit
I know that the example here is very contrived. As #daemon pointed out, I should initialize myself and not bother the caller. Practically, however, there are places where I need more information to properly initialize myself (note the use of my variable name someMagicInputRequiredAtRuntime). This is not restricted to initialization/termination; the same dilemma exists in other instances where I must choose whether to be quote-unquote "fault tolerant" or to fail loudly.
I would definitely go for approach 1, along with an easy-to-understand exception and good documentation that explains why this fails. This will force the caller to be aware that this can happen, and the calling class can easily wrap the call in a try-catch statement if needed.
Failing silently, on the other hand, will lead your users to believe that the second call was successful (no error message, no exception) and thus they will expect that the new values are set. So when they try to do something else with Foo, they don't get the expected results. And it's darn near impossible to figure out why if they don't have access to your source code.
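A sketch of approach #1 (Java is used for illustration since the question is language agnostic; the names are hypothetical):

class Foo {
    private static boolean initialized = false;

    public static void initLibrary(int someMagicInputRequiredAtRuntime) {
        if (initialized) {
            // Fail loudly: a second call is a programming error the
            // caller should hear about immediately.
            throw new IllegalStateException(
                "Foo.initLibrary must be called exactly once");
        }
        initialized = true;
        // ... actual initialization using someMagicInputRequiredAtRuntime ...
    }
}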
Serenity Prayer (modified for interfaces)
SA, grant me the assertions
to accept the things devs cannot change
the code to except the things they can,
and the conditionals to detect the difference
If the fault is in the environment, then you should try and make your code deal with it. If it is something that the developer can prevent by fixing their code, it should generate an exception.
A good approach would be to have a factory that creates an initialized library object (this would require you to wrap your library in a class). Multiple create-calls to the factory would create different objects. This way, the initialize-method would not be a part of the public interface of the library, and the factory would manage initialization.
If there can be only one instance of the library active, make the factory check for existing instances. This would effectively make your library-object a singleton.
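Sketched in Java (hypothetical names), such a factory with the single-instance check could look like this:

class Library {
    // The constructor performs initialization, so a Library instance
    // can never be observed in an uninitialized state.
    private Library(int someMagicInputRequiredAtRuntime) {
        // ... initialization logic ...
    }

    private static Library instance;

    // The factory is the only way to obtain a Library; with the
    // instance check it doubles as a singleton guard.
    public static synchronized Library create(int someMagicInputRequiredAtRuntime) {
        if (instance != null) {
            throw new IllegalStateException("Library already created");
        }
        instance = new Library(someMagicInputRequiredAtRuntime);
        return instance;
    }
}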
I would suggest that you should throw an exception if your routine cannot achieve the expected post-condition. If someone calls your init routine twice, and the system state after the second call would be the same as if it had been called just once, then it is probably not necessary to throw an exception. If the system state after the second call would not match the caller's expectation, then an exception should be thrown.
In general, I think it's more helpful to think in terms of state than in terms of action. To use an analogy, an attempt to open as "write new" a file that is already open should either fail or result in a close-erase-reopen. It should not simply perform a no-op, since the program will be expecting to be writing into an empty file whose creation time matches the current time. On the other hand, trying to close a file that's already closed should generally not be considered an error, because the desire is that the file be closed.
BTW, it's often helpful to have available a "Try" version of a method that might throw an exception. It would be nice, for example, to have a Control.TryBeginInvoke available for things like update routines (if a thread-safe control property changes, the property handler would like the control to be updated if it still exists, but won't really mind if the control gets disposed; it's a little irksome not being able to avoid a first-chance exception if a control gets closed when its property is being updated).
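In that spirit, a "Try" companion method reports failure without throwing. Here is a hypothetical Java sketch (note that the Control.TryBeginInvoke mentioned above is a wished-for API, not an existing one):

class Document {
    private boolean closed = false;

    public void close() {
        closed = true;
    }

    public void write(String data) {
        if (closed) {
            throw new IllegalStateException("document is closed");
        }
        // ... write the data ...
    }

    // The "Try" variant: callers that don't mind the operation being
    // skipped get a boolean instead of a first-chance exception.
    public boolean tryWrite(String data) {
        if (closed) {
            return false;
        }
        write(data);
        return true;
    }
}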
Have a private static counter variable in your class. If it is 0, do the logic in Init and increment the counter; if it is more than 0, simply increment the counter. In Term, do the opposite: decrement until it is 0, then do the logic.
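That reference-counting idea, sketched in Java with hypothetical names:

class Foo {
    private static int refCount = 0;

    public static synchronized void initLibrary(int someMagicInputRequiredAtRuntime) {
        if (refCount == 0) {
            // ... real initialization runs only on the first call ...
        }
        refCount++;
    }

    public static synchronized void termLibrary() {
        refCount--;
        if (refCount == 0) {
            // ... real teardown runs only when the last user is done ...
        }
    }
}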
Another way is to use a Singleton pattern; here is a sample in C++.
I guess one way to subvert this dilemma is to satisfy both camps. Ruby has the -w warning switch, it is customary for gcc users to pass -Wall or even -Weffc++, and Perl has taint mode. By default, these "just work," but the more careful programmer can turn on these strict settings themselves.
One example against the "always complain the slightest error" approach is HTML. Imagine how frustrated the world would be if all browsers would bark at any CSS hacks (such as drawing elements at negative coordinates).
After considering many excellent answers, I've come to this conclusion for myself: when someone sits down, my API should ideally "just work." Of course, for anyone to be involved in any domain, he needs to work at one or two levels of abstraction lower than the problem he is trying to solve, which means my user must learn about my internals sooner or later. If he uses my API for long enough, he will begin to stretch the limits, and too much effort to "hide" or "encapsulate" the inner workings will only become a nuisance.
I guess fault tolerance is most of the time a good thing, it's just that it's difficult to get right when the API user is stretching corner cases. I could say the best of both worlds is to provide some kind of "strict mode" so that when things don't "just work," the user can easily dissect the problem.
Of course, doing this is a lot of extra work, so I may be just talking ideals here. Practically it all comes down to the specific case and the programmer's decision.
If your language doesn't allow this error to surface statically, chances are good the error will surface only at runtime. Depending on the use of your library, this means the error won't surface until much later in development. Possibly only when shipped (again, it depends on a lot).
If there's no danger in silently eating an error (which isn't a real error anyway, since you catch it before anything dangerous happens), then I'd say you should silently eat it. This makes it more user friendly.
If however someMagicInputRequiredAtRuntime varies from call to call, I'd raise the error whenever possible, or presumably the library will not function as expected ("I init'ed the lib with value 42, but it's behaving as if I init'ed with 11!?").
If this Library is a static class (a library type with no state), why not put the call to Init in the type initializer? If it is an instantiatable type, then put the call in the constructor, or in the factory method that handles instantiation.
Don't allow public access to the Init function at all.
I think your interface is a bit too technical. No programmer wants to learn what concepts you used while designing the API. Programmers want solutions for their actual problems and don't want to learn how to use an API. Nobody wants to init your API; that is something the API should handle in the background as far as possible. Find a good abstraction that shields the developer from as much low-level technical stuff as possible. That implies that the API should be fault tolerant.

Why are getters prefixed with the word "get"?

Generally speaking, creating a fluid API is something that makes all programmers happy, both for the creators who write the interface and the consumers who program against it. Looking beyond conventions, why is it that we prefix all our getters with the word "get"? Omitting it usually results in a more fluid, easy-to-read set of instructions, which ultimately leads to happiness (however small or passive). Consider this very simple example. (pseudo code)
Conventional:
person = new Person("Joey")
person.getName().toLower().print()
Alternative:
person = new Person("Joey")
person.name().toLower().print()
Of course this only applies to languages where getters/setters are the norm, but it is not directed at any specific language. Were these conventions developed around technical limitations (disambiguation), or simply through the pursuit of a more explicit, intentional-feeling type of interface? Or is this perhaps just a case of a trickle-down norm? What are your thoughts? And how would simple changes to these conventions impact your happiness / daily attitude towards your craft (however minimally)?
Thanks.
Because, in languages without Properties, name() is a function. Without some more information though, it's not necessarily specific about what it's doing (or what it's going to return).
Functions/Methods are also supposed to be Verbs because they are performing some action. name() obviously doesn't fit the bill because it tells you nothing about what action it is performing.
getName() lets you know without a doubt that the method is going to return a name.
In languages with Properties, the fact that something is a Property expresses the same meaning as having get or set attached to it. It merely makes things look a little neater.
The best answer I have ever heard for using the get/set prefixes is as such:
If you didn't use them, both the accessor and mutator (getter and setter) would have the same name; thus, they would be overloaded. Generally, you should only overload a method when each implementation of the method performs a similar function (but with different inputs).
In this case, you would have two methods with the same name that performed very different functions, and that could be confusing to users of the API.
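For example (a hypothetical Java sketch), dropping the prefixes turns the accessor and the mutator into overloads of one name:

class Person {
    private String name;

    // Without prefixes: two very different operations share one name.
    public String name() { return name; }
    public void name(String newName) { this.name = newName; }

    // With prefixes: each signature states what it does.
    public String getName() { return name; }
    public void setName(String newName) { this.name = newName; }
}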
I always appreciate consistent get/set prefixing when working with a new API and its documentation. The automatic grouping of getters and setters when all functions are listed in their alphabetical order greatly helps to distinguish between simple data access and advanced functionality.
The same is true when using intellisense/auto completion within the IDE.
What about the case where a property is named after a verb?
object.action()
Does this get the type of action to be performed, or execute the action... Adding get/set/do removes the ambiguity which is always a good thing...
object.getAction()
object.setAction(action)
object.doAction()
In school we were taught to use get to distinguish methods from data structures. I never understood why the parens wouldn't be a tipoff. I'm of the personal opinion that overuse of get/set methods can be a horrendous time waster, and it's a phase I see a lot of object oriented programmers go through soon after they start.
I may not write much Objective-C, but since I learned it I've really come to love its conventions. The very thing you are asking about is addressed by the language.
Here's a Smalltalk answer, which I like most. One has to know a few rules about Smalltalk, by the way:
fields are only accessible in the class in which they are defined. If you don't write "accessors", you won't be able to do anything with them.
The convention there is to have a variable (let's name it instVar1),
then write a method instVar1 which just returns instVar1, and a method instVar1: which sets
the value.
I like this convention much more than anything else. If you see a : somewhere, you can bet it's some kind of "setter" in one way or another.
Custom.
Plus, in C++, if you return a reference, that provides potential information leakage into the class itself.

Dispose & Finalize for collections of properties?

I'm looking at some vb.net code I just inherited, and cannot fathom why the original developer would do this.
Basically, each "Domain" class is a collection of properties. And each one implements IDisposable.Dispose, and overrides Finalize(). There is no base class, so each just extends Object.
Dispose sets each private var to Nothing, or calls _private.Dispose when the property is another domain object. There's a private var that tracks the disposed state, and the final thing in Dispose is GC.SuppressFinalize(Me).
Finalize just calls Me.Dispose and MyBase.Finalize.
Is there any benefit to this? Any harm? There are no un-managed resources, no db connections, nothing that would seem to need this.
That strikes me as being a VB6 pattern.
I would bet the guy was coming straight from VB6, maybe in the earlier days of .NET when these patterns were not widely understood.
There also is one case where setting an internal reference to Nothing is useful in a call to Dispose: when the member is marked as WithEvents.
Without that, you risk having an uncollected object handling events when it really should not be doing that anymore.
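The same pitfall exists outside VB.NET as the "lapsed listener" problem. A rough Java sketch (hypothetical types) of why unhooking during dispose matters:

import java.util.ArrayList;
import java.util.List;

class Button {
    private final List<Runnable> clickHandlers = new ArrayList<>();
    void addClickHandler(Runnable h) { clickHandlers.add(h); }
    void removeClickHandler(Runnable h) { clickHandlers.remove(h); }
    void click() { for (Runnable h : clickHandlers) h.run(); }
}

class Panel {
    private final Button button;
    private final Runnable handler = () -> System.out.println("panel reacted");

    Panel(Button button) {
        this.button = button;
        button.addClickHandler(handler);
    }

    void dispose() {
        // Without this, the "disposed" panel keeps reacting to clicks
        // and is kept reachable (hence uncollectable) by the button.
        button.removeClickHandler(handler);
    }
}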
It would seem to me that this is something that is NOT needed at all, especially without un-managed resources and data connections.
If you happen to be able to sanitize and post the code we might be able to get a bit more insight, but realistically I can't see a need for it.
Depending on the size of the objects, and how often they are created/destroyed, it could be to ensure GC can happen as early as possible.
It may be that this pattern was used in other projects and continues on without anyone understanding why it was used in the first place. (Monkey Gardeners)
The only reason that I could see for this -- and this is dubious at best -- is if these things are being created and disposed of higher in the "food chain" and there is a potential for some of these domain classes to have either a limited or unmanaged resource at some point.
Even that is sketchy...it sounds like someone came from an unmanaged background and was looking for the .NET equivalent to managing your memory and came across the IDisposable interface.
