Why are _.map's iteratee arguments (value, key) and not (key, value)? - functional-programming

According to the doc,
If list is a JavaScript object, iteratee's arguments will be (value, key, list).
I constantly have to check the doc to verify the order. Why is it (value, key) and not (key, value)?
[EDIT]
I guess I'm (always) confused because the for loop in CoffeeScript iterates on key, value:
yearsOld = max: 10, ida: 9, tim: 11
ages = for child, age of yearsOld
"#{child} is #{age}"

Because the value is the more important and more generic part of mapping over structures. Maybe not so much with _.map over objects, but when you map over arrays you typically use a unary function (one that takes only the value). The index (or key) is hardly ever used, so it became the second argument, which is usually omitted from the parameter list.
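
For illustration, a minimal sketch in plain JavaScript with Underscore (assuming _ is loaded), showing the usual unary callback over an array and the (value, key) order over an object:

// Over an array the iteratee is usually unary: only the value is used.
_.map([1, 2, 3], function (n) { return n * 2; }); // => [2, 4, 6]

// Over an object the value still comes first; the key is the optional second argument.
_.map({max: 10, ida: 9, tim: 11}, function (age, name) {
    return name + " is " + age;
}); // => ["max is 10", "ida is 9", "tim is 11"]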

Related

Swiftui: how do you assign the value in a "String?" object to a "String" object?

Swift dictionaries have the feature that the value returned by key access is always of type "optional". For example, a dictionary with String keys and String values is tricky to access because each returned value is optional.
An obvious need is to assign x = myDictionary[key], where you are trying to get the dictionary's String value into the String variable x.
Well this is tricky because the String value is always returned as an Optional String, usually identified as type String?.
So how is it possible to convert the String?-type value returned by the dictionary access into a plain String-type that can be assigned to a plain String-type variable?
I guess the problem is that there is no way to know for sure that there exists a dictionary value for the key. The key used to access the dictionary could be anything so somehow you have to deal with that.
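To make the situation concrete, a minimal sketch with a hypothetical dictionary:

let myDictionary: [String: String] = ["name": "Ada"]
let x: String? = myDictionary["name"]    // subscripting a dictionary returns an optional
// let y: String = myDictionary["name"]  // compile error: the String? must be unwrapped first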
As described in @jnpdx's answer to this SO question (How do you assign a String?-type object to a String-type variable?), there are at least three ways to convert a String? to a String:
import SwiftUI
var x: Double? = 6.0
var a = 2.0
// 1. Explicit nil check, then force-unwrap
if x != nil {
    a = x!
}
// 2. Optional binding: b is the unwrapped value inside the braces
if let b = x {
    a = b
}
// 3. Nil-coalescing: use 0.0 if x is nil
a = x ?? 0.0
Two key concepts:
Check the optional to see if it is nil
if the optional is not equal to nil, then go ahead
In the first method above, "if x != nil" explicitly checks that x is not nil before the body is executed.
In the second method above, "if let b = x" executes the body, with b bound to the unwrapped value, as long as x is not nil.
In the third method above, the "nil-coalescing" operator ?? is employed. If x is nil, then the default value after ?? is assigned to a.
The above code will run in a playground.
Besides the three methods above, there is at least one other method using "guard let", but I am uncertain of the syntax (a sketch is included below).
I believe that the three above methods also apply to variables other than String? and String.
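For reference, the guard let form looks roughly like this (a minimal sketch, not from the original answer; the function and variable names are just for illustration, and guard must live inside a function or closure so it can exit early):

func unwrap(_ x: Double?) -> Double {
    // Leave early with a default when x is nil; otherwise 'value' is the unwrapped Double
    guard let value = x else { return 0.0 }
    return value
}
let a = unwrap(6.0) // 6.0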

Java 8 Map merge VS compute, essential difference?

It seems both the merge and compute Map methods were created to reduce the if ("key exists here") boilerplate around put.
My problem is: add a [key, value] pair to the map when I know nothing in advance: whether the key already exists in the map, whether it exists but has a value, whether value == null, or whether key == null.
words.forEach(word ->
map.compute(word, (w, prev) -> prev != null ? prev + 1 : 1)
);
words.forEach(word ->
map.merge(word, 1, (prev, one) -> prev + one)
);
Is the only difference that the 1 is moved from the BiFunction into a parameter?
Which is better to use? Does either merge or compute assume that the key/value already exists?
And what is the essential difference between their use cases?
The documentation of Map#compute(K, BiFunction) says:
Attempts to compute a mapping for the specified key and its current mapped value (or null if there is no current mapping). For example, to either create or append a String msg to a value mapping:
map.compute(key, (k, v) -> (v == null) ? msg : v.concat(msg))
(Method merge() is often simpler to use for such purposes.)
If the remapping function returns null, the mapping is removed (or remains absent if initially absent). If the remapping function itself throws an (unchecked) exception, the exception is rethrown, and the current mapping is left unchanged.
The remapping function should not modify this map during computation.
And the documentation of Map#merge(K, V, BiFunction) says:
If the specified key is not already associated with a value or is associated with null, associates it with the given non-null value. Otherwise, replaces the associated value with the results of the given remapping function, or removes if the result is null. This method may be of use when combining multiple mapped values for a key. For example, to either create or append a String msg to a value mapping:
map.merge(key, msg, String::concat)
If the remapping function returns null, the mapping is removed. If the remapping function itself throws an (unchecked) exception, the exception is rethrown, and the current mapping is left unchanged.
The remapping function should not modify this map during computation.
The important differences are:
For compute(K, BiFunction<? super K, ? super V, ? extends V>):
The BiFunction is always invoked.
The BiFunction accepts the given key and the current value, if any, as arguments and returns a new value.
Meant for taking the key and current value (if any), performing an arbitrary computation, and returning the result. The computation may be a reduction operation (i.e. merge) but it doesn't have to be.
For merge(K, V, BiFunction<? super V, ? super V, ? extends V>):
The BiFunction is invoked only if the given key is already associated with a non-null value.
The BiFunction accepts the current value and the given value as arguments and returns a new value. Unlike with compute, the BiFunction is not given the key.
Meant for taking two values and reducing them into a single value.
If the mapping function, as in your case, only depends on the current mapped value, then you can use both. But I would prefer:
compute if you can guarantee that a value for the given key exists. In this case the extra value parameter taken by the merge method is not needed.
merge if it is possible that no value for the given key exists. In this case merge has the advantage that null does NOT have to be handled by the mapping function.
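
To make the last point concrete, here is a small sketch of the word-count case from the question (the class and variable names are just for illustration), showing that with compute the function must handle the absent/null case itself, while merge never passes null to it:

import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class WordCount {
    public static void main(String[] args) {
        List<String> words = Arrays.asList("a", "b", "a");

        // compute: the BiFunction also runs for absent keys, so prev may be null
        Map<String, Integer> byCompute = new HashMap<>();
        words.forEach(w -> byCompute.compute(w, (key, prev) -> prev == null ? 1 : prev + 1));

        // merge: the BiFunction only runs when a non-null value already exists
        Map<String, Integer> byMerge = new HashMap<>();
        words.forEach(w -> byMerge.merge(w, 1, Integer::sum));

        System.out.println(byCompute); // {a=2, b=1}
        System.out.println(byMerge);   // {a=2, b=1}
    }
}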

Does Go (deep) copy keys when inserting into a map?

I have a map with complex keys - for example, 2D arrays:
m := make(map[[2][3]int]int)
When I insert a new key into the map, does Go make a deep copy of the key?
a := [2][3]int{{1, 2, 3}, {4, 5, 6}}
m[a] = 1
In other words, if I change the array a after using it as a map key, does the map still contain the old value of a?
Short answer, it is copied.
By specification, Arrays are value types.
Go's arrays are values. An array variable denotes the entire array; it is not a pointer to the first array element (as would be the case in C). This means that when you assign or pass around an array value you will make a copy of its contents. (To avoid the copy you could pass a pointer to the array, but then that's a pointer to an array, not an array.)
https://blog.golang.org/go-slices-usage-and-internals
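As a small illustration of those value semantics (a sketch, separate from the map example that follows):

package main

import "fmt"

func main() {
    a := [3]int{1, 2, 3}
    b := a // assignment copies the whole array, not a reference to it
    b[0] = 99
    fmt.Println(a[0], b[0]) // prints: 1 99
}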
See for yourself:
https://play.golang.org/p/fEUYWwN-pm
package main

import (
    "fmt"
)

func main() {
    m := make(map[[2][3]int]int)
    a := [2][3]int{{1, 2, 3}, {4, 5, 6}}
    fmt.Printf("Pointer to a: %p\n", &a)
    m[a] = 1
    for k := range m {
        fmt.Printf("Pointer to k: %p\n", &k)
    }
}
The pointers do not match.
EDIT: The real reason is that when inserting into a map, the key value is copied. Or you can just keep remembering the rule above: arrays are value types, and assigning or passing them around makes a copy. Either works here. :)
Arrays are always passed by value, so, yes, in this case Go will make a deep copy of the key.
From the language spec
The comparison operators == and != must be fully defined for operands of the key type; thus the key type must not be a function, map, or slice. If the key type is an interface type, these comparison operators must be defined for the dynamic key values; failure will cause a run-time panic.
The keys are copied into the map. Excluding maps and slices as valid key types means that the keys can't change. Note that Go doesn't follow pointers: if you define a map type with a pointer as the key (e.g. map[*int]int), it compares the pointers themselves, not the values they point to.
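A minimal sketch of that pointer-key behaviour (a hypothetical example, not from the original answer):

package main

import "fmt"

func main() {
    a, b := 1, 1
    m := map[*int]int{}
    m[&a] = 10
    // &b is a different pointer even though the pointed-to values are equal,
    // so it is a distinct key: the map compares the pointers, not the values.
    fmt.Println(m[&a], m[&b]) // prints: 10 0
}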

Can I insert into a map by key in F#?

I'm messing around a bit with F# and I'm not quite sure if I'm doing this correctly. In C# this could be done with an IDictionary or something similar.
type School() =
    member val Roster = Map.empty with get, set
    member this.add(grade: int, studentName: string) =
        match this.Roster.ContainsKey(grade) with
        | true -> // Can I do something like this.Roster.[grade].Insert([studentName])?
        | false -> this.Roster <- this.Roster.Add(grade, [studentName])
Is there a way to insert into the map if it contains a specified key or am I just using the wrong collection in this case?
The F# Map type is a mapping from keys to values, just like an ordinary .NET Dictionary, except that it is immutable.
If I understand your aim correctly, you're trying to keep a list of students for each grade. The type in that case is a map from integers to lists of names, i.e. Map<int, string list>.
The Add operation on the map actually either adds or replaces an element, so I think that's the operation you want in the false case. In the true case, you need to get the current list, append the new student and then replace the existing record. One way to do this is to write something like:
type School() =
    member val Roster = Map.empty with get, set
    member this.Add(grade: int, studentName: string) =
        // Try to get the current list of students for a given 'grade'
        let studentsOpt = this.Roster.TryFind(grade)
        // If the result was 'None', then use empty list as the default
        let students = defaultArg studentsOpt []
        // Create a new list with the new student at the front
        let newStudents = studentName::students
        // Create & save map with new/replaced mapping for 'grade'
        this.Roster <- this.Roster.Add(grade, newStudents)
This is not thread-safe (calling Add concurrently might not update the map properly), but you can access school.Roster at any time and iterate over it (or share references to it) safely, because it is an immutable structure. If you do not care about that, then using a standard Dictionary would be perfectly fine too - it depends on your actual use case.
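A small usage sketch of the type above (the student names are just for illustration; it relies only on the members defined there):

let school = School()
school.Add(5, "Ann")
school.Add(5, "Bob")
// TryFind returns an option; here it is Some ["Bob"; "Ann"] because new students are prepended
printfn "%A" (school.Roster.TryFind(5))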

how to design/create key for key/value storage?

I want to store serialized objects (or whatever) in a key/value cache.
Now I do something like this :
public string getValue(int param1, string param2, etc)
{
    string key = param1 + "_" + param2 + "_" + etc;
    string tmp = getFromCache(key);
    if (tmp == null)
    {
        tmp = getFromAnotherPlace();
        addToCache(key, tmp);
    }
    return tmp;
}
I think this can get awkward. How should I design the key?
If I understood the question, I think the simplest and smartest way to make a key is to use a one-way hash function such as MD5, SHA-1, etc.
At least two reasons for doing this:
The resulting key is unique for practical purposes (although both MD5 and SHA-1 have known collision attacks).
The resulting key has a fixed length.
You give your object as the argument of the function and you get your unique key.
I don't know C# very much, but I am quite sure you can find a one-way hash function built in.
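A minimal sketch of that idea in C# (the class, method, and parameter names are just for illustration, assuming the parameters can be concatenated into a string first):

using System;
using System.Security.Cryptography;
using System.Text;

public static class CacheKeys
{
    // Hash the raw parameter string so every key has the same, fixed length
    public static string BuildKey(int param1, string param2)
    {
        string raw = param1 + "_" + param2;
        using (var sha = SHA256.Create())
        {
            byte[] hash = sha.ComputeHash(Encoding.UTF8.GetBytes(raw));
            return BitConverter.ToString(hash).Replace("-", "");
        }
    }
}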
First of all, your key seems to be composed of a lot of characters. Keep in mind that the key name also occupies memory (1 byte per character), so try to keep it as short as possible. I've seen situations where the key name was larger than the value, which can happen when you store an empty array or an empty value.
The key structure: I guess from your example that the object you want to store is identified by the params (one perhaps being the item id, or maybe filters for a search [...]). Start with a prefix. The prefix should be the name of the object's class (or a simplified name describing the object in general).
Most of the time, keys will have a prefix + identifier. In your example you have multiple identifiers. If one of them is a unique id, go with only prefix + id and it should be enough.
If the object is large and you don't always use all of it, then switch to a multiple-key strategy. Use one main key for storing the most common values, or for storing the list of the object's components, whose values are stored under separate keys. Make use of pipelining and get the whole object over one connection using one "multiple" query:
mainKey = prefix + objectId;
object = getFromCache(mainKey);
startCachePipeline();
foreach (object[properties] as property) {
    object->property = getFromCache(prefix + objectId + property);
}
endCachePipeline();
The structure for an example "Person" object would then be something like :
person_33 = array(
properties => array(age, height, weight)
);
person_33_age = 28;
person_33_height = 6;
person_33_weight = 150;
Memcached uses memory most efficiently when the objects stored in it are of similar sizes. The bigger the size differences between objects (not counting the occasional lone big object or other singular cases, although memory gets wasted there as well), the more memory is wasted.
Hope it helps!
