Associative array with different key lengths

I have a question about associative array.
Is it possible to have an associative array in which strings of different lengths are used as keys?

In all the languages I'm aware of, it's allowed; IMHO, a language that didn't allow it would be very poorly designed.
Are you having a problem with your program that you think is caused by different key lengths?
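For what it's worth, here is a quick illustration (in Python, purely as an example; any associative array behaves the same way): keys of different lengths coexist without any problem, because lookup is driven by each key's hash, not its length.

table = {"a": 1, "abc": 2, "a much longer key": 3}
print(table["abc"])                 # -> 2
print(len(table))                   # -> 3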


An array- or vector-like type with values stored on disk in Julia

I am looking for an Array-like type with the following properties:
stores elements on disk
elements can have composite type
elements are read into memory, not the whole array
it is possible to write individual elements without writing the whole array
supports setindex!, getindex, push!, pop!, shift!, unshift! and maybe vcat
is reasonably efficient
So far I have found the following leads:
https://docs.julialang.org/en/latest/stdlib/SharedArrays/
http://juliadb.org
https://github.com/JuliaIO/JLD.jl
The first one seems promising, but the element type apparently has to be isbits (i.e., a simple number or certain plain structs, but not, e.g., an Array{Float64,1}). And it's not clear whether the whole array contents are loaded into memory.
If it does not exist yet, I will of course try to construct it myself.
NCDatasets.jl addresses part of the requirements:
stores elements on disk: yes
elements can have composite type: no (although some support for composite type is in NetCDF4, but not yet in NCDatasets.jl). Currently you can have only Arrays of basic types and Arrays of Vectors (of basic types).
elements are read into memory, not the whole array: yes
it is possible to write individual elements without writing the whole array: yes
supports setindex!, getindex, push!, pop!, shift!, unshift! and maybe vcat: just setindex! and getindex
is reasonably efficient: the efficiency is reasonable for me :-)
The project of making it yourself sounds very interesting. I think it would certainly fill a gap in the current ecosystem.
Some storage technologies that might be good to have a look at are:
HDF5 (for storage, cross-platform and cross-language)
JLD2 (successor of JLD) https://github.com/simonster/JLD2.jl
rasdaman (a "database" for arrays) http://www.rasdaman.org/
possibly also BSON http://bsonspec.org/
Maybe you can also reach out to the JuliaIO group.
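Not a Julia answer, but to make the target interface concrete, here is a rough sketch of the shape such a container can take, written in Python on top of shelve (a pickle-backed, on-disk key/value store); the class and its methods are invented for illustration. Individual elements are read and written one at a time, and append/pop stand in for push!/pop!.

import shelve

class DiskBackedList:
    # Toy sketch of a list-like container whose elements live on disk.
    # Composite element types are fine (values are pickled), and only the
    # elements you actually index are read into memory.
    def __init__(self, path):
        self._db = shelve.open(path)       # on-disk key/value store
        self._len = len(self._db)

    def __len__(self):
        return self._len

    def __getitem__(self, i):              # read a single element from disk
        return self._db[str(i)]

    def __setitem__(self, i, value):       # overwrite a single element on disk
        if not 0 <= i < self._len:
            raise IndexError(i)
        self._db[str(i)] = value

    def append(self, value):               # analogue of push!
        self._db[str(self._len)] = value
        self._len += 1

    def pop(self):                         # analogue of pop!
        self._len -= 1
        return self._db.pop(str(self._len))

    def close(self):
        self._db.close()

A Julia version would presumably put getindex/setindex!/push!/pop! methods on top of one of the storage layers listed above (e.g. JLD2 or HDF5) in much the same way.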

Using a hashmap instead of a table for memoization

In some dynamic programming problems, I notice that my cache table is very sparse. In other words, if I define a table as DP[i][j], i<=10^6, j<=10^2, only a fraction of the table is used and the rest is -1.
So my question is, is it common practice to use a hashmap instead to store (i, j) pairs with their DP value and access them in average O(1) time rather than storing them in the sparse table to save memory?
First of all, yes, you can use a hashmap instead of an array for dynamic programming problems. But there are limitations as well as benefits to using a hashmap.
When you use a hashmap in this particular case (dynamic programming), it reduces memory use, but it also increases the constant factor of your code. That means that if you can perform about 10^8 operations/second with an array, you will only manage around 10^7 operations/second with a hashmap, even though the asymptotic complexity of the algorithm is the same.
So if it is possible to declare an array of the required size, use the array; otherwise use the hashmap.
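As a concrete illustration, here is the dict-based version in Python; the recurrence itself is made up purely to show the pattern of keying the memo on the (i, j) pair instead of allocating a dense 10^6 x 10^2 table.

# Toy recurrence, for illustration only: only the states actually visited
# are stored in the hashmap, not the full 10^8-entry table.
memo = {}

def dp(i, j):
    if i == 0 or j == 0:                    # base case
        return 0
    if (i, j) not in memo:
        memo[(i, j)] = 1 + min(dp(i // 2, j), dp(i - 1, j - 1))
    return memo[(i, j)]

print(dp(10**6, 10**2))
print(len(memo))        # far fewer entries than 10**8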
Yes, it is definitely common practice to use hashmaps, particularly when the table is sparse.
It is even possible to go beyond that... For even larger problems, approximate dynamic programming draws from tools such as function approximation.

comparing CLOS objects

I am wondering why there is no built-in equality operator in Common Lisp for comparing CLOS objects (standard-classes). For instance, "equalp" can be applied on arrays, structures, hash-tables, however not on objects.
I assume a new test which descends an object and checks whether slot values are equalp could be written by the programmer, but I wonder if there is a reason for this not being part of the standard, since I imagine it's a fairly common thing to do. For instance, it seems the test function for "make-hash-table" must be one of the built-in ones*; thus, I don't really see how to use objects as keys or values in a hash table.
*I've noticed there is a related question (Using Common Lisp CLOS objects as keys in a hashtable?), however it does not really answer my question.
Thanks and cheers!
M
I cannot offer a definitive answer, but I suspect that a part of the reason is historical:
CLOS was added to ANSI CL after the spec for equalp was finalized.
Note that equalp works on structure-objects as you would expect. Note also that structures have a readable print syntax, while CLOS objects lack one.
However, this might not be such a big oversight as one might think at first. CLOS objects can be relatively heavy-weight; given that slot accessors are generic functions, figuring out object equality can quickly devolve to comparing functions.
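To illustrate the kind of test the question has in mind (an equality that descends an object and compares its fields), here is the general idea sketched in Python; the class and helper are invented for the example, and a Common Lisp version would walk the slot list of the class instead.

def fields_equal(a, b):
    # Structural equality: same class and the same field values,
    # compared by the ordinary equality of each field.
    return type(a) is type(b) and vars(a) == vars(b)

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

print(fields_equal(Point(1, 2), Point(1, 2)))   # True
print(fields_equal(Point(1, 2), Point(1, 3)))   # False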

Dictionary vs. hashtable

Can someone explain the difference between dictionaries and hashtables? In Java, I've read that dictionaries are a superset of hashtables, but I always thought it was the other way around. Other languages seem to treat the two as the same. When should one be used over the other, and what's the difference?
The Oxford Dictionary of Computing defines a dictionary as...
Any data structure representing a set of elements that can support the insertion and deletion of elements as well as a test for membership.
As such, dictionaries are an abstract idea that can be implemented reasonably efficiently as, e.g., binary trees, hash tables, tries, or even direct array indexing if the keys are numeric and not too sparse. That said, Python uses a closed-hashing hash table for its dict implementation, and C# seems to use some kind of hash table too (hence the need for a separate SortedDictionary type).
A hash table is a much more specific and concrete data structure: there are several implementation options (closed vs. open hashing being perhaps the most fundamental), but they are all characterised by O(1) amortised insertion, lookup and deletion, and there is no excuse for begin-to-end iteration worse than O(n + #buckets), while implementations may achieve better (e.g. GCC's C++ library has O(n) container iteration). The implementations necessarily depend on a hash function leading to an indexed probe in an array.
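To make the abstract/concrete distinction tangible, here is a small sketch in Python (the class name is invented for the example): the same dictionary contract, satisfied once by the built-in hash-table dict and once by a sorted-list implementation with O(log n) lookup and no hashing at all.

from collections.abc import MutableMapping
import bisect

class SortedListDict(MutableMapping):
    # A "dictionary" in the abstract sense, implemented without hashing:
    # keys are kept in a sorted list and found by binary search.
    def __init__(self):
        self._keys = []
        self._values = []

    def __getitem__(self, key):
        i = bisect.bisect_left(self._keys, key)
        if i < len(self._keys) and self._keys[i] == key:
            return self._values[i]
        raise KeyError(key)

    def __setitem__(self, key, value):
        i = bisect.bisect_left(self._keys, key)
        if i < len(self._keys) and self._keys[i] == key:
            self._values[i] = value
        else:
            self._keys.insert(i, key)
            self._values.insert(i, value)

    def __delitem__(self, key):
        i = bisect.bisect_left(self._keys, key)
        if i == len(self._keys) or self._keys[i] != key:
            raise KeyError(key)
        del self._keys[i]
        del self._values[i]

    def __iter__(self):
        return iter(self._keys)

    def __len__(self):
        return len(self._keys)

# Both satisfy the same "dictionary" contract; only the implementation differs.
hash_backed = {"b": 2, "a": 1}          # Python's built-in dict: a hash table
tree_like = SortedListDict()
tree_like["b"] = 2
tree_like["a"] = 1
assert hash_backed["a"] == tree_like["a"] == 1

This is exactly the sense in which "dictionary" names the interface and "hash table" names one particular implementation of it.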
The way I see it, a hashtable is one way of implementing a dictionary, specifying that the key is hashfunction(x) and the value is any Object. The Java Dictionary can use any key as long as .equals(y) has been implemented for that object.
The 'answer' will also change depending on the language (C#? Java? JS?) you're using. In JS the 'dictionary' is implemented as a hashtable, so there is no difference. In another language (I believe it's C#), the Dictionary must be strongly typed, with a fixed key type and a fixed value type, while a Hashtable's value can be of any type, and the two are not derived from one another.

What is the disadvantage of list as a universal data type representation?

Lisp programmers tend to use lists to represent all other data types.
However, I have heard that lists are not a good universal representation for data types.
What are the disadvantages of lists being used in this manner, in contrast to using records?
You mention "record". By this I take it that you're referring to fixed-element structs/objects/compound data. For instance, in HtDP syntax:
;; a packet is (make-packet destination source text) where destination is a number,
;; source is a number, and text is a string.
... and you're asking about the pros and cons of representing a packet as a list of length three, rather than as a piece of compound data (or "record").
In instances where compound data is appropriate--the values have specific roles and names, and there are a fixed number of them--compound data is generally preferable; they help you to catch errors in your programs, which is the sine qua non of programming.
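The same packet example, sketched in Python for concreteness: with a bare list the roles of the three positions live only in the programmer's head, whereas a record-like type names them and gives a type checker (or a reader) something to catch mistakes with.

from dataclasses import dataclass

# Packet as a plain list: nothing flags the fact that the text has been
# put in the destination slot; the error surfaces later, far from its cause.
packet_as_list = ["hello", 42, 17]

@dataclass
class Packet:
    destination: int
    source: int
    text: str

# Packet as a record: fields are accessed by name, so the intent is explicit
# and a mixed-up constructor call is easy to spot (or to reject with a check).
packet = Packet(destination=42, source=17, text="hello")
print(packet.text)        # "hello"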
The disadvantage is that it isn't universal. Sometimes this is performance-related: you want constant-time lookups (array, hash table). Sometimes this is organization-related: you want to name your data locations (hash table, record... although you could use name/value pairs in the list). It takes a little diligence on the part of the author to make the code self-documenting (more diligence than a record requires). Sometimes you want the type system to catch mistakes made by putting things in the wrong spot (record, typed tuples).
However, most issues can be addressed with OptimizeLater. The list is a versatile little data structure.
You're talking about what Peter Seibel addresses in Chapter 11 of Practical Common Lisp:
[Starting] discussion of Lisp's collections with lists ... often leads readers to the mistaken conclusion that lists are Lisp's only collection type. To make matters worse, because Lisp's lists are such a flexible data structure, it is possible to use them for many of the things arrays and hash tables are used for in other languages. But it's a mistake to focus too much on lists; while they're a crucial data structure for representing Lisp code as Lisp data, in many situations other data structures are more appropriate.
Once you're familiar with all the data types Common Lisp offers, you'll also see that lists can be useful for prototyping data structures that will later be replaced with something more efficient once it becomes clear how exactly the data is to be used.
Some reasons I see are:
A large hashtable, for example, has faster access than the equivalent alist
A vector of a single datatype is more compact and, again, faster to access
Vectors are more efficiently and easily accessed by index
Objects and structures allow you to access data by name, not position
It boils down to using the right datatype for the task at hand. When it's not obvious, you have two options: guess and fix it later, or figure it out now; either of those is sometimes the right approach.

Resources