Access Modifiers ... Why? - access-modifiers

OK, so I was just wondering why programmers stress so much about access modifiers in OOP.
Let's take this PHP code as an example:
class StackOverflow
{
private $web_address;
public function setWebAddress(){/*...*/}
}
Because web_address is private, it cannot be changed via $object->web_address = 'w.e.'; but the only way that variable would ever change is if my program contained the line $object->web_address = 'w.e.'; in the first place.
If I wanted a variable in my application never to be changed, I would simply write the application so that no code changes it, and therefore it would never be changed, right?
So my question is: what are the major rules and reasons for using private / protected / non-public entities?

Because (ideally), a class should have two parts:
an interface exposed to the rest of the world: a manifest of how others can talk to it. Example in a file-handle class: String read(int bytes). Of course this has to be public; one of the main purposes of our class is to provide this functionality.
internal state, which no one but the instance itself should (have to) care about. Example in a file-handle class: private String buffer. This can and should be hidden from the rest of the world: they have no business with it; it's an implementation detail.
This is even done in languages without access modifiers, e.g. Python, except that we don't force people to respect privacy (and remember, they can always use reflection anyway; encapsulation can never be 100% enforced). Instead, we prefix private members with _ to indicate "you shouldn't touch this; if you want to mess with it, do so at your own risk".
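A minimal sketch of that split, in Java syntax (the FileHandle class and its members are invented for illustration, not taken from any real library):
import java.util.Arrays;

class FileHandle {
    // internal state: hidden from the outside world, free to change in later versions
    private byte[] buffer = "hello world".getBytes();
    private int position = 0;

    // public interface: the only contract callers can rely on
    public String read(int bytes) {
        int end = Math.min(position + bytes, buffer.length);
        String chunk = new String(Arrays.copyOfRange(buffer, position, end));
        position = end;
        return chunk;
    }
}
Callers only ever go through read(); they never see or depend on buffer, so the class stays free to change how it buffers data later.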

Because you might not be the only developer on your project, and the other developers might not know that they shouldn't change it. Or you might forget, etc.
It makes it easy to spot (even the compiler can spot it) when you're doing something that someone has said would be a bad idea.

So my question is: what are the major rules and reasons for using private / protected / non-public entities?
In Python, there are no access modifiers.
So the reasons are actually language-specific. You might want to update your question slightly to reflect this.
It's a fairly common question about Python. Many programmers from Java or C++ (or other) backgrounds like to think deeply about this. When they learn Python, there's really no deep thinking. The operating principle is
We're all adults here
It's not clear who -- precisely -- the access modifiers help. In Lakos' book, Large-Scale C++ Software Design, there's a long discussion of "protected", since the semantics of protected make subclasses and client interfaces a bit murky.
http://www.amazon.com/Large-Scale-Software-Design-John-Lakos/dp/0201633620

Access modifiers are a tool for a defensive programming strategy. You consciously protect your code against your own silly errors (when you forget something after a while, didn't understand something correctly, or just haven't had enough coffee).

You keep yourself from accidentally executing $object->web_address = 'w.e.'; (see the sketch after this list). This might seem unnecessary at the moment, but it won't be unnecessary if
two months later you want to change something in the project (and have forgotten all about the fact that web_address should not be changed directly), or
your project has many thousands of lines of code and you simply cannot remember which fields you are "allowed" to set directly and which ones require a setter method.
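A small sketch of the idea in Java syntax (the Site class and its validation rule are invented; the same reasoning applies to the PHP example above):
class Site {
    private String webAddress;   // cannot be assigned directly from outside the class

    public void setWebAddress(String address) {
        // the one sanctioned way in; the rule is enforced every single time
        if (address == null || !address.startsWith("http")) {
            throw new IllegalArgumentException("not a valid web address: " + address);
        }
        this.webAddress = address;
    }
}
Two months later, site.webAddress = "w.e." simply will not compile; you are forced through setWebAddress(), which still remembers the rule even when you don't.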

Just because a class has "something" doesn't mean it should expose that something. The class should implement its contract/interface/whatever you want to call it, but in doing so it could easily have all kinds of internal members/methods that don't need to be (and by all rights shouldn't be) known outside of that class.
Sure, you could write the rest of your application to just deal with it anyway, but that's not really considered good design.
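A rough Java illustration of that split between contract and internals (all names here are invented):
class ReportGenerator {
    // the contract: this is all a caller ever needs to know about
    public String generate() {
        return header() + body();
    }

    // internal helpers: useful to the class, meaningless (and invisible) to everyone else
    private String header() { return "== Report ==\n"; }
    private String body()   { return "(report body goes here)"; }
}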

Related

How can we mock private methods without using PowerMockito?

Can we mock private methods without using PowerMockito? I know it is possible through PowerMockito, but I just wanted to check with everyone: is it possible by some other means?
Thanks
-Sam
By design, this isn't possible without PowerMockito or a similar tool.
See Mockito's Wiki where they give the following reasons:
1. It requires hacking of classloaders that is never bullet proof and it changes the API (you must use custom test runner, annotate the class, etc.).
2. It is very easy to work around - just change the visibility of method from private to package-protected (or protected).
3. It requires the team to spend time implementing & maintaining it. And it does not make sense given point (2) and a fact that it is already implemented in different tool (powermock).
4. Finally... Mocking private methods is a hint that there is something wrong with Object Oriented understanding. In OO you want objects (or roles) to collaborate, not methods. Forget about pascal & procedural code. Think in objects.
There are of course cases where it is not possible to work around, but just take a step back and:
Make sure that you are testing the correct things (should you really be testing private methods rather than the public ones?).
Consider just changing these methods to be package-private instead of relying on PowerMock (see the sketch below).
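If the method is relaxed from private to package-private, a plain Mockito spy can stub it from a test in the same package. A sketch under that assumption (PriceService and its methods are made up for illustration):
// PriceService.java
class PriceService {
    public int quote(int units) {
        return units * basePrice();    // the part we would like to control in a test
    }

    int basePrice() {                  // was private; relaxed to package-private for testability
        return 100;
    }
}

// PriceServiceTest.java (same package as PriceService)
import static org.mockito.Mockito.*;
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class PriceServiceTest {
    @Test
    void quoteUsesBasePrice() {
        PriceService service = spy(new PriceService());
        doReturn(10).when(service).basePrice();   // stub the package-private helper on the spy
        assertEquals(30, service.quote(3));
    }
}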
Yes, we can also use the Reflection API for this, which is part of the standard Java platform.
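For example, something along these lines (the Calculator class is invented; note that setAccessible may be restricted under strong module encapsulation):
import java.lang.reflect.Method;

class Calculator {
    private int add(int a, int b) { return a + b; }
}

class PrivateMethodViaReflection {
    public static void main(String[] args) throws Exception {
        Calculator calc = new Calculator();
        Method add = Calculator.class.getDeclaredMethod("add", int.class, int.class);
        add.setAccessible(true);                      // bypass the private modifier
        int result = (int) add.invoke(calc, 2, 3);    // invokes the private method, returns 5
        System.out.println(result);
    }
}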

final methods in JavaFX source

Problem
I need to override the method
@Override protected final void layoutChartChildren(double top, double left, double width, double height)
of the XYChart class. Obviously I'm not allowed to.
Question
Why do people declare methods as "final"? Is there any benefit in that?
This answer is just a verbatim quote of text by Richard Bair, one of the JavaFX API designers, which was posted on a mailing list in response to the question: "Why is almost everything in the [JavaFX] API final?"
Subclassing breaks encapsulation. That's the fundamental reason why you must design with care to allow for subclassing, or prohibit it. Making all the fields of a class public would give developers increased power -- but of course this breaks encapsulation, so we avoid it.
We broke people all the time in Swing. It was very difficult to make even modest bug fixes in Swing without breaking somebody. Changing the order of calls in a method, broke people. When your framework or API is being used by millions of programs and the program authors have no way of knowing which version of your framework they might be running on (the curse of a shared install of the JRE!), then you find an awful lot of wisdom in making everything final you possibly can. It isn't just to protect your own freedom, it actually creates a better product for everybody. You think you want to subclass and override, but this comes with a significant downside. The framework author isn't going to be able to make things better for you in the future.
There's more to it though. When you design an API, you have to think about the combinations of all things allowed by a developer. When you allow subclassing, you open up a tremendous number of additional possible failure modes, so you need to do so with care. Allowing a subclass but limiting what a superclass allows for redefinition reduces failure modes. One of my ideals in API design is to create an API with as much power as possible while reducing the number of failure modes. It is challenging to do so while also providing enough flexibility for developers to do what they need to do, and if I have to choose, I will always err on the side of giving less API in a release, because you can always add more API later, but once you've released an API you're stuck with it, or you will break people. And in this case, API doesn't just mean the method signature, it means the behavior when certain methods are invoked (as Josh points out in Effective Java).
The getter / setter method problem Jonathan described is a perfect example. If we make those methods non-final, then indeed it allows a subclass to override and log calls. But that's about all it is good for. If the subclass were to never call super, then we will be broken (and their app as well!). They think they're disallowing a certain input value, but they're not. Or the getter returns a value other than what the property object holds. Or listener notification doesn't happen right or at the right time. Or the wrong instance of the property object is returned.
Two things I really like: final, and immutability. GUI's however tend to favor big class hierarchies and mutable state :-). But we use final and immutability as much as we can.
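To make the quoted getter/setter problem concrete, here is a rough sketch (plain Java, not actual JavaFX code; the Widget classes are invented):
class Widget {
    private String title = "";

    public void setTitle(String title) {      // imagine this were non-final
        this.title = title;
        fireTitleChanged();                    // listeners rely on this always happening
    }

    public String getTitle() { return title; }

    protected void fireTitleChanged() { /* notify listeners */ }
}

class LoggingWidget extends Widget {
    @Override
    public void setTitle(String title) {
        System.out.println("setTitle(" + title + ")");
        // forgot to call super.setTitle(title):
        // the field never changes and no listener is ever notified
    }
}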
Some information:
Best practice since JavaFX setters/getters are final?

Why are getters prefixed with the word "get"?

Generally speaking, creating a fluid API is something that makes all programmers happy, both the creators who write the interface and the consumers who program against it. Looking beyond convention, why is it that we prefix all our getters with the word "get"? Omitting it usually results in a more fluid, easy-to-read set of instructions, which ultimately leads to happiness (however small or passive). Consider this very simple example (pseudocode):
Conventional:
person = new Person("Joey")
person.getName().toLower().print()
Alternative:
person = new Person("Joey")
person.name().toLower().print()
Of course this only applies to languages where getters/setters are the norm, and it is not directed at any specific language. Were these conventions developed around technical limitations (disambiguation), or simply through the pursuit of a more explicit, intentional-feeling interface? Or is this just a case of a trickle-down norm? What are your thoughts? And how would simple changes to these conventions impact your happiness / daily attitude towards your craft (however minimally)?
Thanks.
Because, in languages without Properties, name() is a function. Without some more information though, it's not necessarily specific about what it's doing (or what it's going to return).
Functions/Methods are also supposed to be Verbs because they are performing some action. name() obviously doesn't fit the bill because it tells you nothing about what action it is performing.
getName() lets you know without a doubt that the method is going to return a name.
In languages with Properties, the fact that something is a Property expresses the same meaning as having get or set attached to it. It merely makes things look a little neater.
The best answer I have ever heard for using the get/set prefixes is as such:
If you didn't use them, both the accessor and mutator (getter and setter) would have the same name; thus, they would be overloaded. Generally, you should only overload a method when each implementation of the method performs a similar function (but with different inputs).
In this case, you would have two methods with the same name that performed very different functions, which could be confusing to users of the API.
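In Java-like syntax the point looks roughly like this (illustrative only):
class Person {
    private String name;

    // without prefixes, the accessor and mutator collide on one overloaded name
    public String name()            { return name; }          // reads
    public void   name(String name) { this.name = name; }     // writes

    // with prefixes, each method's intent is unmistakable
    public String getName()            { return name; }
    public void   setName(String name) { this.name = name; }
}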
I always appreciate consistent get/set prefixing when working with a new API and its documentation. The automatic grouping of getters and setters when all functions are listed in alphabetical order greatly helps to distinguish between simple data access and more advanced functionality.
The same is true when using IntelliSense/auto-completion within the IDE.
What about the case where a property is named after a verb?
object.action()
Does this get the type of action to be performed, or execute the action? Adding get/set/do removes the ambiguity, which is always a good thing:
object.getAction()
object.setAction(action)
object.doAction()
In school we were taught to use get to distinguish methods from data structures. I never understood why the parens wouldn't be a tipoff. I'm of the personal opinion that overuse of get/set methods can be a horrendous time waster, and it's a phase I see a lot of object oriented programmers go through soon after they start.
I may not write much Objective-C, but since I learned it I've really come to love its conventions. The very thing you are asking about is addressed by the language.
Here's a Smalltalk answer, which I like most. One has to know a few rules about Smalltalk, by the way.
Fields are only accessible in the class where they are defined. If you don't write "accessors", you won't be able to do anything with them.
The convention there is to have an instance variable (let's name it instVar1), and then to write a method instVar1 which just returns instVar1, and a method instVar1: which sets the value.
I like this convention much more than anything else. If you see a : somewhere, you can bet it's some kind of "setter", one way or the other.
Custom.
Plus, in C++, if you return a reference, that provides potential information leakage into the class itself.

Dispose & Finalize for collections of properties?

I'm looking at some VB.NET code I just inherited, and cannot fathom why the original developer would do this.
Basically, each "Domain" class is a collection of properties. And each one implements IDisposable.Dispose and overrides Finalize(). There is no base class, so each just extends Object.
Dispose sets each private var to Nothing, or calls _private.Dispose when the property is another domain object. There's a private var that tracks the disposed state, and the final thing in Dispose is GC.SuppressFinalize(Me).
Finalize just calls Me.Dispose and MyBase.Finalize.
Is there any benefit to this? Any harm? There are no unmanaged resources, no DB connections, nothing that would seem to need this.
That strikes me as being a VB6 pattern.
I would bet the guy was coming straight from VB6, maybe in the earlier days of .NET when these patterns were not widely understood.
There is also one case where setting an internal reference to Nothing is useful in a call to Dispose: when the member is declared WithEvents.
Without that, you risk having an uncollected object handling events when it really should not be doing that anymore.
It would seem to me that this is something that is NOT needed at all, especially without un-managed resources and data connections.
If you happen to be able to sanitize and post the code we might be able to get a bit more insight, but realistically I can't see a need for it.
Depending on the size of the objects, and how often they are created/destroyed, it could be to ensure GC can happen as early as possible.
It may be that this pattern was used in other projects and it continues on without anyone understanding why it was used in the first place. Monkey gardeners.
The only reason that I could see for this -- and this is dubious at best -- is if these things are being created and disposed of higher in the "food chain" and there is a potential for some of these domain classes to have either a limited or unmanaged resource at some point.
Even that is sketchy... it sounds like someone came from an unmanaged background, was looking for the .NET equivalent of managing your own memory, and came across the IDisposable interface.

Would you consider this a singleton/singleton pattern?

Imagine in the Global.asax.cs file I had an instance class as a private field. Let's say like this:
private MyClass _myClass = new MyClass();
And I had a static method on Global called GetMyClass() that gets the current HttpApplication and returns that instance.
public static MyClass GetMyClass()
{
return ((Global)HttpContext.Current.ApplicationInstance)._myClass;
}
So I could get the instance on the current request's HttpApplication by calling Global.GetMyClass().
Keep in mind that there is more than one (Global) HttpApplication. There is an HttpApplication for each request and they are pooled/shared, so in the truest sense it is not a real singleton. But it does follow the pattern to a degree.
So, as the question asks: would you consider this, at the very least, the singleton pattern?
Would you say it should not be used? Would you discourage its use? Would you say it's possibly a bad practice, like a true singleton?
Could you see any problems that may arise from this type of usage scenario?
Or would you say it's not a true singleton, so it's OK and not bad practice? Would you recommend this as a semi-quasi singleton where an instance per request is required? If not, what other pattern/suggestion would you use/give?
Have you ever used anything such as this?
I have used this on past projects, but I am unsure if it's a practice I should stay away from. I have never had any issues in the past though.
Please give me your thoughts and opinions on this.
I am not asking what a singleton is. And I consider a singleton bad practice when used improperly, which is the case in many, many instances. That is me. However, that is not what I am trying to discuss. I am trying to discuss THIS scenario I gave.
Whether or not this fits the cookie-cutter pattern of a Singleton, it still suffers from the same problems as Singleton:
It is a static, concrete reference and cannot be substituted for separate behavior or stubbed/mocked during a test
You cannot subclass this and preserve this behavior, so it's quite easy to circumvent the singleton nature of this example
I'm not a .NET person so I'll refrain from commenting on this, except for this part:
Would you say it's bad practice, like a true singleton?
True singletons aren't "bad practice". They're HORRIBLY OVERUSED, but that's not the same thing. I read something recently (can't remember where, alas) where someone pointed out the difference between "want or need" vs. "can".
"We only want one of these", or "we'll only need one": not a singleton.
"We CAN only have one of these": singleton
That is, if the very idea of having two of that object will break something in horrible and subtle ways, yes, use a singleton. This is true a lot more rarely than people think, hence the proliferation of singletons.
A Singleton is an object, of which, there CAN BE only one.
Objects of which there just happens to be one right now are not singleton.
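For contrast, a textbook singleton looks roughly like this (Java syntax here, but the shape is the same in C#; the Configuration class is invented):
// "There CAN only be one": the class itself makes a second instance impossible.
final class Configuration {
    private static final Configuration INSTANCE = new Configuration();

    private Configuration() { }                  // nobody outside can construct one

    public static Configuration getInstance() {
        return INSTANCE;
    }
}
The Global field in the question offers no such guarantee: there just happens to be one instance per pooled HttpApplication, which is exactly the distinction being drawn above.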
Since you're talking about a web application, you need to be very careful about assuming anything with static classes or this type of pseudo-singleton because, as David B said, they are only shared across that thread. Where you will get in trouble is if IIS is configured to use more than one worker process (configured with the ill-named "Web Garden" mode; the number of worker processes can also be set in machine.config). Assuming the box has more than one processor, whoever is trying to tweak its performance is bound to turn this on.
A better pattern for this sort of thing is to use the HttpCache object. It is already thread-safe by nature, but what still catches most people out is that your object also needs to be thread-safe (since you're probably only going to create the instance once and then read/write a lot of its properties over time). Here's some skeleton code to give you an idea of what I'm talking about:
public SomeClassType SomeProperty
{
    get
    {
        if (HttpContext.Current.Cache["SomeKey"] == null)
        {
            HttpContext.Current.Cache.Add("SomeKey", new SomeClassType(), null,
                System.Web.Caching.Cache.NoAbsoluteExpiration, TimeSpan.FromDays(1),
                CacheItemPriority.NotRemovable, null);
        }
        return (SomeClassType) HttpContext.Current.Cache["SomeKey"];
    }
}
Now if you think you might need a web farm (multi-server) scale path, then the above won't work as the application cache isn't shared across machines.
Forget singleton for a moment.
You have static methods that return application state. You better watch out.
If two threads access this shared state... boom. If you live on the webserver, your code will eventually be run in a multi-threaded context.
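A bare-bones illustration of why (Java syntax, but the hazard is identical on .NET; the counter is invented for the demo):
class SharedState {
    static int counter = 0;                      // application-wide state behind a static member
}

class RaceDemo {
    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                SharedState.counter++;           // read-modify-write: not atomic
            }
        };
        Thread a = new Thread(work);
        Thread b = new Thread(work);
        a.start(); b.start();
        a.join();  b.join();
        // expected 200000; with two unsynchronized threads the printed value is usually lower
        System.out.println(SharedState.counter);
    }
}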
I would say that it is definitely NOT a singleton. Design patterns are most useful as definitions of common coding practices. When you talk about singletons, you are talking about an object where there is only one instance.
As you yourself have noted, there are multiple HttpApplications, so your code does not follow the design of a Singleton and does not have the same side-effects.
For example, one might use a singleton to update currency exchange rates. If this person unknowingly used your example, they would fire up seven instances to do the job that 'only one object' was meant to do.
