What is the difference between the approaches below in collections?

Approach one: creating an object of the subclass through a reference of the base type.
Approach two: creating an object of the subclass through a reference of the same class.
List<Point> objOne = new ArrayList<Point>();
ArrayList<Point> objTwo = new ArrayList<Point>();

List is an interface. It is more abstract: it defines exactly the API, and nothing more. The implementation behind it can differ: ArrayList for bulk work, LinkedList for just a couple of elements. This implementation decision is best kept hidden.
Functions should operate on List rather than on a specific implementation such as ArrayList. It is also more general to talk about Lists instead of ArrayLists, so for variables too I would not over-specify the type.
In many (scripting) languages like VB, PHP and others this distinction does not exist; there is one type with one implementation. That simplifies the language and might appeal to some, but Java's separation of interface and implementation has a real technical benefit.
You can play with different implementations, select one dynamically with a factory method, or mock the implementation in a unit test.
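As a small illustration, here is a hedged Java sketch of that factory idea (the factory name and the bulkWork flag are invented for the example; java.awt.Point simply stands in for the question's element type):
import java.awt.Point;
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class PointListFactory {
    // Callers only ever see List<Point>, so the concrete class chosen here
    // can change without touching any calling code.
    public static List<Point> createPointList(boolean bulkWork) {
        return bulkWork ? new ArrayList<>() : new LinkedList<>();
    }
}
// Usage: List<Point> points = PointListFactory.createPointList(true);
// In a unit test, the same List<Point> variable could instead receive a mock or stub implementation.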

List is an interface while ArrayList is a class.
So let's suppose your co-worker wrote a function that takes in a list of points and does some work on it.
He used abstraction like this so that his code can work with an ArrayList as well as a LinkedList.
public void myfunction(List<Point> pointlist) {
    // do something with the points
}
Both objOne and objTwo will work with this function, because ArrayList implements List, so either can be passed where a List<Point> is expected. The real difference is in your own code: declaring the variable as List<Point> (objOne) keeps it easy to switch to another implementation later, say a LinkedList, without touching anything else, while declaring it as ArrayList<Point> (objTwo) ties the surrounding code to that one implementation and exposes ArrayList-specific methods that callers may come to rely on. That is why the first style is how good code is usually written, and the second should generally be avoided.
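A quick check of that, reusing myfunction from the snippet above (a LinkedList import is assumed):
List<Point> objOne = new ArrayList<Point>();
ArrayList<Point> objTwo = new ArrayList<Point>();
myfunction(objOne);  // compiles: the declared type already is List<Point>
myfunction(objTwo);  // also compiles: ArrayList is-a List, so it widens implicitly
objOne = new LinkedList<Point>();   // allowed: code written against List stays flexible
// objTwo = new LinkedList<Point>(); // would not compile: the variable is tied to ArrayList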

Related

What does `new(...)` do in Julia?

What is the function of new() in Julia? Is this question even specific enough?
I am looking through the module Mocha where new(...) is used quite commonly, but I don't see any definition of new(), only uses of it, nor do I find reference to it in the Julia documentation.
I thought it might then be defined in a module that is being used by Mocha, but then I would think I could learn about new() with Mocha.new from the REPL, but that comes back with ERROR: UndefVarError: new not defined.
For the life of me I can't figure out what new(...) is doing. If it doesn't sound like something common to Julia, what can I do to try to track down where it's defined?
From http://docs.julialang.org/en/release-0.4/manual/constructors/
Inner Constructor Methods
While outer constructor methods succeed in addressing the problem of providing additional convenience methods for constructing objects, they fail to address the other two use cases mentioned in the introduction of this chapter: enforcing invariants, and allowing construction of self-referential objects. For these problems, one needs inner constructor methods. An inner constructor method is much like an outer constructor method, with two differences:
It is declared inside the block of a type declaration, rather than outside of it like normal methods.
It has access to a special locally existent function called new that creates objects of the block’s type.

Rationale behind Ada encapsulation of dynamically dispatching operations (primitives)

In Ada, primitive operations of a type T can only be defined in the package where T is defined. For example, if a Vehicles package defines Car and Bike tagged records, both inheriting from a common Vehicle abstract tagged type, then all operations that can dispatch on the class-wide Vehicle'Class type must be defined in this Vehicles package.
Let's say that you do not want to add primitive operations: you do not have the permission to edit the source file, or you do not want to clutter the package with unrelated features.
Then, you cannot define operations in other packages that implicitly dispatch on type Vehicle'Class.
For example, you may want to serialize vehicles (define a Vehicles_XML package with a To_Xml dispatching function) or display them as UI elements (define a Vehicles_GTK package with Get_Label, Get_Icon, ... dispatching functions), etc.
The only way to perform dynamic dispatch is to write the code explicitly; for example, inside Vehicle_XML:
if V in Car'Class then
   return Car_XML (Car (V));
elsif V in Bike'Class then
   return Bike_XML (Bike (V));
else
   raise Constraint_Error
     with "Vehicle_XML is only defined for Car and Bike.";
end if;
(And a Visitor pattern defined in Vehicles and used elsewhere would work, of course, but that still requires the same kind of explicit dispatching code. Edit: in fact, no, but there is still some boilerplate code to write.)
My question is then:
is there a reason why operations dynamically dispatching on T are restricted to be defined in the defining package of T?
Is this intentional? Are there historical reasons behind this?
Thanks
EDIT:
Thanks for the current answers: basically, it seems that it is a matter of language implementation (freezing rules/virtual tables).
I agree that compilers are developed incrementally over time and that not all features fit nicely into an existing tool.
As such, isolating dispatching operations in a unique package seems to be a decision guided more by existing implementations than by language design. Other languages outside of the C++/Java family provide dynamic dispatch without such a requirement (e.g. OCaml, Lisp (CLOS)); if that matters, those are also compiled languages, or more precisely, languages for which compilers exist.
When I asked this question, I wanted to know if there were more fundamental reasons, at the language specification level, behind this part of the Ada specification (otherwise, does it really mean that the specification assumes/enforces a particular implementation of dynamic dispatch?)
Ideally, I am looking for an authoritative source, like a rationale or guideline section in Reference Manuals, or any kind of archived discussion about this specific part of the language.
I can think of several reasons:
(1) Your example has Car and Bike defined in the same package, both derived from Vehicles. However, that's not the "normal" use case, in my experience; it's more common to define each derived type in its own package. (Which I think is close to how "classes" are used in other compiled languages.) And note also that it's not uncommon to define new derived types afterwards. That's one of the whole points of object-oriented programming, to facilitate reuse; and it's a good thing if, when designing a new feature, you can find some existing type that you can derive from, and reuse its features.
So suppose you have your Vehicles package that defines Vehicle, Car, and Bike. Now in some other package V2, you want to define a new dispatching operation on a Vehicle. For this to work, you have to provide the overriding operations for Car and Bike, with their bodies; and assuming you are not allowed to modify Vehicles, then the language designers have to decide where the bodies of the new operation have to be. Presumably, you'd have to write them in V2. (One consequence is that the body that you write in V2 would not have access to the private part of Vehicles, and therefore it couldn't access implementation details of Car or Bike; so you could only write the body of that operation in terms of already-defined operations.)
So then the question is: does V2 need to provide operations for all types that are derived from Vehicle? What about types derived from Vehicle that don't become part of the final program (maybe they're derived to be used in someone else's project)? What about types derived from Vehicle that haven't yet been defined (see the preceding paragraph)? In theory, I suppose this could be made to work by checking everything at link time. However, that would be a major paradigm change for the language. It's not something that could be done easily. (It's pretty common, by the way, for programmers to think "it would be nice to add feature X to a language, and it shouldn't be too hard because X is simple to talk about", without realizing just what a vast impact such a "simple" feature would have.)
(2) A practical reason has to do with how dispatching is implemented. Typically, it's done with a vector of procedure/function pointers. (I don't know for sure what the exact implementation is in all cases, but I think this is basically the case for every Ada compiler as well as for C++ and Java compilers, and probably C#.) What this means is that when you define a tagged type (or a class, in other languages), the compiler will set up a vector of pointers, and based on how many operations are defined for the type, say N, it will reserve slots 1..N in the vector for the addresses of the subprograms. If a type is derived from that type and defines overriding subprograms, the derived type gets its own vector, where slots 1..N will be pointers to the actual overriding subprograms. Then, when calling a dispatching subprogram, a program can look up the address in some known slot index assigned to that subprogram, and it will jump to the correct address depending on the object's actual type. If a derived type defines new primitive subprograms, new slots are assigned N+1..N2, and types derived from that could define new subprograms that get slots N2+1..N3, and so on.
Adding new dispatching subprograms to Vehicle would interfere with this. Since new types have been derived from Vehicle, you can't insert a new area into the vector after N, because code has already been generated that assumes the slots starting at N+1 have been assigned to new operations defined for derived types. And since we may not know all the types that have been derived from Vehicle, and we don't know what other types will be derived from Vehicle in the future and how many new operations will be defined for them, it's hard to pick some other location in the vector that could be used for the new operations. Again, this could be done if all of the slot assignment were deferred until link time, but that would be a major paradigm change.
To be honest, I can think of other ways to make this work, by adding new operations not in the "main" dispatch vector but in an auxiliary one; dispatching would probably require a search for the correct vector (perhaps using an ID assigned to the package that defines the new operations). Also, adding interface types to Ada 2005 has already complicated the simple vector implementation somewhat. But I do think this (i.e. it doesn't fit into the model) is one reason why the ability to add new dispatching operations like you suggest isn't present in Ada (or in any other compiled language that I know of).
Without having checked the rationale for Ada 95 (where tagged types were introduced), I am pretty sure the freezing rules for tagged types are derived from the simple requirement that all objects in T'Class should have all the dispatching operations of type T.
To fulfill that requirement, you have to freeze the type and say that no more dispatching operations can be added to type T once you:
Derive a type from T, or
Are at the end of the package specification where T was declared.
If you didn't do that, you could have a type derived from type T (i.e. in T'Class), which hadn't inherited all the dispatching operations of type T. If you passed an object of that type as a T'Class parameter to a subprogram, which knew of one more dispatching operation on type T, a call to that operation would have to fail. - We wouldn't want that to happen.
Answering your extended question:
Ada comes with a Reference Manual (the ISO standard), a Rationale, and an Annotated Reference Manual. And a large part of the discussions behind these documents are public as well.
For Ada 2012 see http://www.adaic.org/ada-resources/standards/ada12/
Tagged types (dynamic dispatching) were introduced in Ada 95. The documents related to that version of the standard can be found at http://www.adaic.org/ada-resources/standards/ada-95-documents/

Any way to automatically generate QSharedData-based structures?

Qt has built-in support for creating objects with integrated reference counting via QSharedData and QSharedDataPointer. It all works great, but for each such object I need to write a lot of code: a QSharedData-based implementation class with a constructor and copy constructor, and the object class itself with accessor methods for each field.
For simple structures with 5-10 fields this requires a lot of nearly identical code. Is there some way to automate the generation of such classes? Maybe there are generators that take a short description and automatically generate the implementation class and the object class with all accessors?
You usually don't have to implement copy ctor or operator= when using QSharedData/Pointer. The default impls copy/assign the QSharedData-derived member, which usually does the Right Thing (TM).
For the public class, you need to implement the ctor creating the private object, and, if the private class is not declared in the header but in the implementation (which is better), a dtor (doing nothing; the only point is that it is not inlined and is defined in the .cpp, after the private declaration).
For the private class, no method/ctor/dtor implementations are necessary.
For simple value-based classes, writing setters is of course tedious, but the same is true if you use plain private member variables. The overhead in LOC compared to a plain class doesn't grow with the number of members.
And no, there is no standard generator solution for that I know of, although writing a script or emacs macro etc. doing it is not that hard. Probably would make sense to add such things to a publicly available toolbox, or QtCreator...
I don't think generators exist for these things, but I suggest two things:
(ab)use existing shared containers (QVector, QList...)
read the documentation (with examples) on QSharedData, QSharedDataPointer, and QExplicitlySharedDataPointer
The two subclasses have simple examples that show how to implement the shared-ness it seems. I can't help you further though, because I've never had the need to create my own.
On second thought, why not make all data fields public, and use the QSharedData derivative as a struct-like class with reference counting? Maybe not nice on encapsulation, but if you're careful, nothing wrong should happen.

The standard map/associative-array structure to use in flash actionscript 3?

I'm relatively new to Flash, and am confused about what I should use to store and retrieve key-value pairs. After some googling I've found various map-like things to choose from:
1) Use a Object:
var map:Object = new Object();
map["key"] = "value";
The problem is that it seems to lack some very basic features. For example, to even get the size of the map I'd have to write a util method.
2) Use a Dictionary
What does this standard library class provide over the simple object? It seems silly for it to exist if it's functionally identical to Object.
3) Go download some custom HashMap/HashTable implementation from the web.
I've used a lot of modern languages, and this is the first time I haven't been able to find a library implementation of an associative array within 5 minutes. So I'd like to get some best-practice advice from an experienced flash developer.
Thanks!
Maybe your google foo is a bit weak today?
But you're right, the built-in Object doesn't provide many extra features.
Dictionaries have at least two important differences with Objects:
They can use any object as a key. For Objects, the key has to be a string (if you pass any other object, the toString() method will be implicitly called).
You can optionally set the keys to be weakly referenced (this doesn't make much sense for Objects).
Anyway, there are a number of opensource libraries that implement various data structures and collection types.
Just from the top of my head:
http://lab.polygonal.de/ds/
http://sibirjak.com/blog/index.php/collections/as3commons-collections/
You should use whichever option is needed for a particular situation. There is no correct answer to say "always use 'x'".
I've found that the vast majority of times Object based dictionaries are all that's needed. They're extremely fast and easy to use and I almost never need the extra features. It converts any key to a string which works well for most situations.
Dictionary provides some extra features but I've never needed object-based keys (and I've been programming in Flex since 1.0 alpha 1). The only time I've used a Dictionary is as a hack to get access to a weak reference since Flex doesn't provide a simple weak reference class.
More complex dictionaries are available which will provide more functionality. If you really need this functionality, then they will be useful, but I wouldn't recommend jumping in to use them when a plain old Object will work fine for your needs. That said, if you do turn out to need them in your application, it might be best to use the same 3rd-party dictionary everywhere in that app for consistency and ease of maintenance.

Use-cases for reflection

Recently I was talking to a co-worker about C++ and lamented that there was no way to take a string with the name of a class field and extract the field with that name; in other words, it lacks reflection. He gave me a baffled look and asked when anyone would ever need to do such a thing.
Off the top of my head I didn't have a good answer for him, other than "hey, I need to do it right now". So I sat down and came up with a list of some of the things I've actually done with reflection in various languages. Unfortunately, most of my examples come from my web programming in Python, and I was hoping that the people here would have more examples. Here's the list I came up with:
Given a config file with lines like
x = "Hello World!"
y = 5.0
dynamically set the fields of some config object equal to the values in that file. (This was what I wished I could do in C++, but actually couldn't do; a rough sketch of the idea follows this list.)
When sorting a list of objects, sort based on an arbitrary attribute given that attribute's name from a config file or web request.
When writing software that uses a network protocol, reflection lets you call methods based on string values from that protocol. For example, I wrote an IRC bot that would translate
!some_command arg1 arg2
into a method call actions.some_command(arg1, arg2) and print whatever that function returned back to the IRC channel.
When using Python's __getattr__ function (which is sort of like method_missing in Ruby/Smalltalk) I was working with a class with a whole lot of statistics, such as late_total. For every statistic, I wanted to be able to add _percent to get that statistic as a percentage of the total things I was counting (for example, stats.late_total_percent). Reflection made this very easy.
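To make the first item above concrete, here is a rough Java sketch of binding config values to fields by name via reflection (the ConfigBinder and Config classes and the field names are invented for the example; a real version would need broader type handling and error checks):
import java.lang.reflect.Field;
import java.util.Map;

public class ConfigBinder {
    // Hypothetical config object for the sketch.
    public static class Config {
        public String x;
        public double y;
    }

    // Sets each field of the target whose name matches a key from the parsed file.
    public static void bind(Object target, Map<String, String> entries) throws Exception {
        for (Map.Entry<String, String> e : entries.entrySet()) {
            Field f = target.getClass().getField(e.getKey()); // look the field up by its string name
            if (f.getType() == double.class) {
                f.set(target, Double.parseDouble(e.getValue()));
            } else {
                f.set(target, e.getValue());
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Config c = new Config();
        bind(c, Map.of("x", "Hello World!", "y", "5.0"));
        System.out.println(c.x + " / " + c.y);
    }
}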
So can anyone here give any examples from their own programming experiences of times when reflection has been helpful? The next time a co-worker asks me why I'd "ever want to do something like that" I'd like to be more prepared.
I can list the following usages for reflection:
Late binding
Security (introspect code for security reasons)
Code analysis
Dynamic typing (duck typing is not possible without reflection)
Metaprogramming
Some real-world usages of reflection from my personal experience:
Developed a plugin system based on reflection (see the sketch below)
Used aspect-oriented programming model
Performed static code analysis
Used various Dependency Injection frameworks
...
Reflection is a good thing :)
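A bare-bones Java sketch of the plugin-system item mentioned above (the Plugin interface and the example class name are invented for the sketch):
// A minimal plugin contract; real systems would add metadata, versioning, etc.
public interface Plugin {
    void start();
}

class PluginLoader {
    // Instantiates a plugin from a class name read at runtime
    // (e.g. from a config file or a scanned directory).
    static Plugin load(String className) throws Exception {
        Class<?> cls = Class.forName(className);
        return (Plugin) cls.getDeclaredConstructor().newInstance();
    }
}
// Usage (hypothetical class name): Plugin p = PluginLoader.load("com.example.plugins.AuditPlugin"); p.start();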
I've used reflection to get current method information for exceptions, logging, etc.
string src = MethodInfo.GetCurrentMethod().ToString();
string msg = "Big Mistake";
Exception newEx = new Exception(msg, ex);
newEx.Source = src;
instead of
string src = "MyMethod";
string msg = "Big MistakeA";
Exception newEx = new Exception(msg, ex);
newEx.Source = src;
It's just easier for copy/paste inheritance and code generation.
I'm in a situation now where I have a stream of XML coming in over the wire and I need to instantiate an Entity object that will populate itself from elements in the stream. It's easier to use reflection to figure out which Entity object can handle which XML element than to write a gigantic, maintenance-nightmare conditional statement. There's clearly a dependency between the XML schema and how I structure and name my objects, but I control both so it's not a big problem.
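For what it's worth, a hedged Java sketch of that XML-to-entity idea (the entities package, the naming convention, and the populateFrom method are assumptions for the example, not the poster's actual code):
import java.lang.reflect.Method;

// Maps an XML element name such as "customer" to a class such as
// entities.CustomerEntity, then lets that entity populate itself.
class EntityFactory {
    static Object forElement(String elementName, String rawXml) throws Exception {
        String className = "entities."
                + Character.toUpperCase(elementName.charAt(0))
                + elementName.substring(1) + "Entity";          // assumed naming convention
        Class<?> cls = Class.forName(className);
        Object entity = cls.getDeclaredConstructor().newInstance();
        Method populate = cls.getMethod("populateFrom", String.class); // assumed method on every entity
        populate.invoke(entity, rawXml);
        return entity;
    }
}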
There are lots of times you want to dynamically instantiate and work with objects where the type isn't known until runtime, for example with OR-mappers or in a plugin architecture. Mocking frameworks use it, and so would a logging library that wants to dynamically examine the type and properties of exceptions.
If I think a bit longer I can probably come up with more examples.
I find reflection very useful if the input data (like XML) has a complex structure which is easily mapped to object instances, or if I need some kind of "is a" relationship between the instances.
As reflection is relatively easy in Java, I sometimes use it for simple data (key-value maps) where I have a small fixed set of keys. On the one hand it's simple to determine if a key is valid (if the class has a setter setKey(String data)); on the other hand I can change the type of the (textual) input data and hide the transformation (e.g. a simple cast to int in getKey()), so the rest of the application can rely on correctly typed data.
If the type of some key-value pair changes for one object (e.g. from int to float), I only have to change it in the data object and its users, but don't have to remember to check the parser too. This might not be a sensible approach if performance is an issue...
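Roughly, that pattern could look like this in Java (the Record class and the count key are invented for the sketch):
import java.lang.reflect.Method;

// Simple data holder: a key is "valid" exactly when a matching setter exists,
// and the typed getter hides the text-to-number conversion from callers.
class Record {
    private int count;

    public void setCount(String data) { this.count = Integer.parseInt(data); }
    public int getCount() { return count; }
}

class RecordParser {
    static boolean applyIfValid(Record target, String key, String value) {
        String setter = "set" + Character.toUpperCase(key.charAt(0)) + key.substring(1);
        try {
            Method m = target.getClass().getMethod(setter, String.class);
            m.invoke(target, value);   // the parser never needs to know the real type
            return true;
        } catch (ReflectiveOperationException e) {
            return false;              // no such setter: the key is not valid for this object
        }
    }
}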
Writing dispatchers. Twisted uses Python's reflective capabilities to dispatch XML-RPC and SOAP calls. RMI uses Java's reflection API for dispatch.
Command line parsing. Building up a config object based on the command line parameters that are passed in.
When writing unit tests, it can be helpful to use reflection, though mostly I've used this to bypass access modifiers (Java).
I've used reflection in C# when there was some internal or private method in the framework or a third party library that I wanted to access.
(Disclaimer: It's not necessarily a best-practice because private and internal methods may be changed in later versions. But it worked for what I needed.)
Well, in statically-typed languages, you'd want to use reflection any time you need to do something "dynamic". It comes in handy for tooling purposes (scanning the members of an object). In Java it's used in JMX and dynamic proxies quite a bit. And there are tons of one-off cases where it's really the only way to go (pretty much anytime you need to do something the compiler won't let you do).
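For the dynamic-proxy case mentioned above, java.lang.reflect.Proxy is the JDK entry point; a minimal logging proxy might look like this (the Service interface is invented for the sketch):
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

interface Service {
    String fetch(String id);
}

class ProxyDemo {
    // Wraps any Service so every call is logged before being forwarded.
    static Service withLogging(Service target) {
        InvocationHandler handler = (proxy, method, args) -> {
            System.out.println("calling " + method.getName());
            return method.invoke(target, args);
        };
        return (Service) Proxy.newProxyInstance(
                Service.class.getClassLoader(),
                new Class<?>[] { Service.class },
                handler);
    }

    public static void main(String[] args) {
        Service real = id -> "data for " + id;   // lambda implements the single-method interface
        Service logged = withLogging(real);
        System.out.println(logged.fetch("42"));
    }
}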
I generally use reflection for debugging. Reflection can more easily and more accurately display the objects within the system than an assortment of print statements. In many languages that have first-class functions, you can even invoke the functions of the object without writing special code.
There is, however, a way to do what you want(ed). Use a hashtable. Store the fields keyed against the field name.
If you really wanted to, you could then create standard Get/Set functions, or create macros that do it on the fly. #define GetX() Get("X") sort of thing.
You could even implement your own imperfect reflection that way.
For the advanced user, if you can compile the code, it may be possible to enable debug output generation and use that to perform reflection.

Resources