I want to append to a slice that is a value of a map, e.g. given m map[string][]string:
if values, exists := m[key]; exists {
    values = append(values, v)
    // I don't want to call: m[key] = values
} else {
    m[key] = []string{ v }
}
That obviously doesn't work, so instead of appending the value as is, I tried something like:
valuesPtr := &values
*valuesPtr = append(*valuesPtr, v)
But that doesn't work either. How can I do that?
You cannot do that.
append returns a new slice value, since the underlying array may have to be reallocated to complete the append. You must store the returned slice back in the map, which can only be done by assigning to the key.
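For reference, here is a minimal runnable sketch of the usual idiom, reusing the question's m, key, and v names with illustrative values:

package main

import "fmt"

func main() {
    m := map[string][]string{}
    key, v := "fruits", "apple"

    // Indexing a map with a missing key yields the zero value (a nil slice),
    // and append on a nil slice allocates a new one, so no existence check is needed.
    m[key] = append(m[key], v)
    m[key] = append(m[key], "banana")

    fmt.Println(m) // map[fruits:[apple banana]]
}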
Currently trying to learn Go.
I have the following function, but it only works when the team doesn't already exist in the map, in which case it creates a new record. It will not update the values if the team already has a struct in the map.
func AddLoss(teamMap map[string]TeamRow, teamName string) {
    if val, ok := teamMap[teamName]; ok {
        val.Losses++
        val.GamesPlayed++
    } else {
        newTeamRow := TeamRow{Losses: 1}
        teamMap[teamName] = newTeamRow
    }
}
I have updated the function to just replace the existing record with a brand-new struct containing the values I want, but it seems odd that I can't update the values in a map.
Can someone explain this to me, or point me in the right direction?
You have a map from string to TeamRow values, so when you get val out of the map you receive a copy of the team's struct, not a reference to the one stored in the map; changes to that copy are lost when the function returns. If you make the map a map from string to *TeamRow, then val will point to the TeamRow the map entry refers to, so values will persist beyond the scope of your AddLoss function. To do this, simply add a * to the map declaration (teamMap map[string]*TeamRow), though when you populate it you will then also need to store pointers in the map.
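Here is a minimal runnable sketch of the pointer-map approach, assuming TeamRow has the Wins, Losses, and GamesPlayed fields implied by the question:

package main

import "fmt"

type TeamRow struct {
    Wins        int
    Losses      int
    GamesPlayed int
}

// AddLoss mutates the TeamRow in place through the pointer stored in the map.
func AddLoss(teamMap map[string]*TeamRow, teamName string) {
    if val, ok := teamMap[teamName]; ok {
        val.Losses++
        val.GamesPlayed++
    } else {
        teamMap[teamName] = &TeamRow{Losses: 1, GamesPlayed: 1}
    }
}

func main() {
    teams := map[string]*TeamRow{}
    AddLoss(teams, "Sharks")
    AddLoss(teams, "Sharks")
    fmt.Println(*teams["Sharks"]) // {0 2 2}
}

If you would rather keep plain TeamRow values in the map, the alternative is to write the modified copy back, e.g. val.Losses++ followed by teamMap[teamName] = val.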
I have a struct GameLog that holds a Vec<Steps>. Steps holds a Vec<FieldData> and FieldData consists only of basic data types.
I create instances of these types using serde_json. After the deserialisation is done, I need to iterate over all the steps in the GameLog and add to each step the past versions of the fields that change in the next step but that the step doesn't already contain.
I have a working Java implementation that I'm trying to port to Rust, but I just can't figure this out, probably because I don't know enough about Rust's internals.
The current code looks like this:
let data = load_data("Path/to/json");
let mut gamelog: Vec<Vec<FieldData>> = Vec::with_capacity(data.steps.len() + 1);
gamelog.push(Vec::with_capacity(data.init.fields.len()));
gamelog[0] = data.init.fields;
for i in 1..(data.steps.len() - 1) {
    gamelog.push(Vec::with_capacity(data.steps[i - 1].fields.len()));
    gamelog[i] = data.steps[i - 1].fields.clone();
    for future_field in &data.steps[i + 1].fields {
        let mut inside = false;
        for current_field in &data.steps[i].fields {
            if current_field.x == future_field.x && current_field.y == future_field.y {
                inside = true;
            }
        }
        if !inside {
            // walk backwards through the earlier steps, from i down to 0
            for j in (0..=i).rev() {
                let mut insideb = false;
                for past_field in &data.steps[j].fields {
                    if future_field.x == past_field.x && future_field.y == past_field.y {
                        gamelog[i].push(past_field.clone());
                        insideb = true;
                        break;
                    }
                }
                if insideb {
                    break;
                }
            }
        }
    }
}
However, this only works by creating copies of the vectors and fields and building a new Vec.
When I try to manipulate the Vecs directly, I most often get a "can't move out of borrow" error on the for .. in data.steps[?].fields lines.
What would be a proper (and ideally much more idiomatic) way of directly manipulating the Vecs in the struct?
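One way to sidestep the borrow errors, shown here as a minimal sketch with hypothetical stand-in types (the real structs come from the serde_json deserialisation), is to collect the fields to carry over using only shared borrows, and mutate the step only afterwards:

// Hypothetical stand-in types for illustration only.
#[derive(Clone)]
struct FieldData { x: i32, y: i32 }

struct Step { fields: Vec<FieldData> }

// Collect, with only shared borrows, the fields that appear in step i + 1 but are
// missing from step i, taking each one's most recent past version from steps 0..i.
fn carry_over_missing(steps: &[Step], i: usize) -> Vec<FieldData> {
    steps[i + 1].fields.iter()
        .filter(|f| !steps[i].fields.iter().any(|c| c.x == f.x && c.y == f.y))
        .filter_map(|f| {
            steps[..i].iter().rev()
                .flat_map(|s| s.fields.iter())
                .find(|p| p.x == f.x && p.y == f.y)
                .cloned()
        })
        .collect()
}

The shared borrows end when carry_over_missing returns, so the caller is then free to take a mutable borrow of the same data, for example let missing = carry_over_missing(&data.steps, i); and then data.steps[i].fields.extend(missing);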
Starting with the complex reduce example, I have trimmed it down to a single chart, and I am trying to understand how the reduce works.
I have added comments to the code that were not in the example, describing what I think is happening based on how I read the docs.
function groupArrayAdd(keyfn) {
    var bisect = d3.bisector(keyfn); // set the bisector value function
    // elements is the group that we are reducing, item is the current item
    // this is the reduce function being supplied to the reduce call on the group runAvgGroup for add below
    return function(elements, item) {
        // get the position of the key value for this element in the sorted array and put it there
        var pos = bisect.right(elements, keyfn(item));
        elements.splice(pos, 0, item);
        return elements;
    };
}

function groupArrayRemove(keyfn) {
    var bisect = d3.bisector(keyfn); // set the bisector value function
    // elements is the group that we are reducing, item is the current item
    // this is the reduce function being supplied to the reduce call on the group runAvgGroup for remove below
    return function(elements, item) {
        // get the position of the key value for this element in the sorted array and splice it out
        var pos = bisect.left(elements, keyfn(item));
        if (keyfn(elements[pos]) === keyfn(item))
            elements.splice(pos, 1);
        return elements;
    };
}

function groupArrayInit() {
    // for each key found by the key function return this array?
    return []; // the result array where the data is being inserted in sorted order?
}
I am not quite sure my perception of how this works is right; some of the magic isn't showing itself. Am I correct that elements is the group the reduce function is being called on? And how is the array returned by groupArrayInit() being indirectly populated?
Part of me feels that the functions supplied to the reduce call are really Array.map functions rather than Array.reduce functions, but I just can't quite put my finger on why. Having read the docs, I am just not making the connection.
Any help would be appreciated.
Also, have I missed Pens/Fiddles created for all these examples, like this one: http://dc-js.github.io/dc.js/examples/complex-reduce.html? That is where I started, but I had to download the CSV and manually convert it to JSON.
Update: I added some print statements to try to clarify how the add function is working:
function groupArrayAdd(keyfn) {
    var bisect = d3.bisector(keyfn); // set the bisector value function
    // elements is the group that we are reducing, item is the current item
    // this is the reduce function being supplied to the reduce call on the group runAvgGroup for add below
    return function(elements, item) {
        console.log("---Start Elements and Item and keyfn(item)----");
        console.log(elements); // elements grouped by run?
        console.log(item);     // not seeing the pattern on what this is on each run
        console.log(keyfn(item));
        console.log("---End----");
        // get the position of the key value for this element in the sorted array and put it there
        var pos = bisect.right(elements, keyfn(item));
        elements.splice(pos, 0, item);
        return elements;
    };
}
and to print out the group's contents:
console.log("RunAvgGroup");
console.log(runAvgGroup.top(Infinity));
The resulting output appears to be incorrect, because the values are not sorted by key (the run number), and looking at the results of the print statements doesn't seem to help either.
This looks basically right to me. The issues are just conceptual.
Crossfilter's group.reduce is not exactly like either Array.reduce or Array.map. group.reduce defines functions for handling the addition of new records to a group and the removal of records from a group, so it is conceptually similar to an incremental Array.reduce that also supports a reversal operation. This allows filters to be applied and removed.
group.top returns your list of groups. The value property of each group should be the elements array that your reduce functions return. The key of the group is the value returned by your group accessor (defined in the dimension.group call that creates the group), or by your dimension accessor if you didn't define a group accessor. Reduce functions work only on the group values and have no direct access to the group key.
So check those values in the group.top output and hopefully you’ll see the lists of elements you expect.
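For orientation, here is a rough sketch of how the three functions above are typically wired up with crossfilter. The crossfilter instance ndx, the dimension, and the accessors are illustrative names loosely following the complex-reduce example:

// Group records by run; each group's value is the sorted array of records
// maintained by groupArrayAdd/groupArrayRemove.
var runDimension = ndx.dimension(function(d) { return d.run; });
var runAvgGroup = runDimension.group().reduce(
    groupArrayAdd(function(d) { return d.speed; }),    // called for each record entering the current filters
    groupArrayRemove(function(d) { return d.speed; }), // called for each record leaving the current filters
    groupArrayInit                                      // produces the initial (empty) value for each group
);

// Each entry is {key: <run>, value: <sorted array of records for that run>}.
console.log(runAvgGroup.all());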
I am trying to create an empty map that will then be populated within a for loop. I am not sure how to proceed in Rascal. For testing purposes, I tried:
rascal>map[int, list[int]] x;
ok
Though, when I try to populate "x" using:
rascal>x += (1, [1,2,3])
>>>>>>>;
>>>>>>>;
^ Parse error here
I got a parse error.
To start, it would be best to assign it an initial value. You don't have to do this at the console, but this is required if you declare the variable inside a script. Also, if you are going to use +=, it has to already have an assigned value.
rascal>map[int,list[int]] x = ( );
map[int, list[int]]: ()
Then, when you are adding items into the map, the key and the value are separated by a :, not by a ,, so you want something like this instead:
rascal>x += ( 1 : [1,2,3]);
map[int, list[int]]: (1:[1,2,3])
rascal>x[1];
list[int]: [1,2,3]
An easier way to do this is to use similar notation to the lookup shown just above:
rascal>x[1] = [1,2,3];
map[int, list[int]]: (1:[1,2,3])
Generally, if you are just setting the value for one key, or are assigning keys inside a loop, x[key] = value is better; += is better if you are adding two existing maps together and saving the result into one of them.
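For instance, += merges a whole map into x in one step; something along these lines (output approximate):
rascal>x += ( 2 : [4,5] );
map[int, list[int]]: (1:[1,2,3],2:[4,5])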
I also like this solution sometimes, where instead of joining maps you just update the value of a certain key:
m = ();
for (...whatever...) {
    m[key]?[] += [1,2,3];
}
In this code, when the key is not yet present in the map, it starts with the empty list [] and concatenates [1,2,3] to it. If the key is already present, say with the value [1,2,3], then this will create [1,2,3,1,2,3] at that key in the map.
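A small sketch of that behaviour in isolation (the key and values are illustrative):
m = ();
m["key"]?[] += [1,2,3]; // key absent: starts from [] and becomes [1,2,3]
m["key"]?[] += [1,2,3]; // key present: becomes [1,2,3,1,2,3]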
It appears to me that Crossfilter never excludes a group from the results of a reduction, even if the applied filters have excluded all the rows in that group. Groups that have had all of their rows filtered out simply return an aggregate value of 0 (or whatever reduceInitial returns).
The problem with this is that it makes it impossible to distinguish between groups that contain no rows and groups that do contain rows but just legitimately aggregate to a value of 0. Basically, there's no way (that I can see) to distinguish between a null value and a 0 aggregation.
Does anybody know of a built-in Crossfilter technique for achieving this? I did come up with a way to do it with my own custom reduceInitial/reduceAdd/reduceRemove functions, but it wasn't totally straightforward, and it seems to me that this is behavior that might/should be more native to Crossfilter's filtering semantics. So I'm wondering if there's a canonical way to achieve this.
I'll post my technique as an answer if it turns out that there is no built-in way to do this.
A simple way to accomplish this is to have both count and total be reduce attributes:
var dimGroup = dim.group().reduce(reduceAdd, reduceRemove, reduceInitial);
function reduceAdd(p, v) {
    ++p.count;
    p.total += v.value;
    return p;
}

function reduceRemove(p, v) {
    --p.count;
    p.total -= v.value;
    return p;
}

function reduceInitial() {
    return {count: 0, total: 0};
}
Empty groups will have zero counts, so retrieving only non-empty groups is easy:
dimGroup.top(Infinity).filter(function(d) { return d.value.count > 0; });
OK, there doesn't seem to be any obvious answer jumping out, so I'll answer my own question and post the technique I used to solve this.
This example assumes that I've already created a dimension and grouping, which is passed in as groupDim. Because I want to be able to sum up any arbitrary numeric field, I also pass in fieldName so that it will be available in the closure scope of my reduction functions.
One important characteristic of this technique is that it relies on there being a way to uniquely identify which group each row belongs to. Thinking in terms of OLAP, this is essentially the "tuple" that defines a particular aggregation context. But it can be anything you want as long as it deterministically returns the same value for all data rows belonging to a given group.
The end result is that empty groups will have an aggregate value of null, which can easily be detected and filtered out after the fact. Any group with at least one row will have a numeric value (even if it happens to be zero).
Refinements or suggestions to this are more than welcome. Here's the code with comments inline:
function configureAggregateSum(groupDim, fieldName) {
    function getGroupKey(datum) {
        // Given datum return key corresponding to the group to which the datum belongs
    }

    // This object will keep track of the number of times each group had reduceAdd
    // versus reduceRemove called. It is used to revert the running aggregate value
    // back to "null" if the count hits zero. This is unfortunately necessary because
    // Crossfilter filters as it is aggregating, so reduceAdd can be called even if, in
    // the end, all records in a group end up being filtered out.
    var groupCount = {};

    function reduceAdd(p, v) {
        // Here's the code that keeps track of the invocation count per group
        var groupKey = getGroupKey(v);
        if (groupCount[groupKey] === undefined) { groupCount[groupKey] = 0; }
        groupCount[groupKey]++;

        // And here's the implementation of the add reduction (sum in my case)
        // Note the check for null (our initial value)
        var value = +v[fieldName];
        return p === null ? value : p + value;
    }

    function reduceRemove(p, v) {
        // This code keeps track of the invocation count per group and, importantly,
        // reverts the value back to "null" if it hits 0 for the group. Essentially, if we
        // detect that the group has no records again, we revert to the initial value.
        var groupKey = getGroupKey(v);
        groupCount[groupKey]--;
        if (groupCount[groupKey] === 0) {
            return null;
        }

        // And here's the code for the remove reduction (sum in my case)
        var value = +v[fieldName];
        return p - value;
    }

    function reduceInitial() {
        return null;
    }

    // Once returned, you can invoke all() or top() to get the values, which can then be
    // filtered using a native Array.filter to remove the groups with a null value.
    return groupDim.reduce(reduceAdd, reduceRemove, reduceInitial);
}
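For completeness, a hypothetical usage sketch; the crossfilter instance ndx, the dimension, and the field name are made-up, illustrative names:

// Build a group with the custom reduction, then drop the empty (null-valued) groups.
var regionDim = ndx.dimension(function(d) { return d.region; });
var regionGroup = configureAggregateSum(regionDim.group(), "amount");
var nonEmpty = regionGroup.all().filter(function(d) { return d.value !== null; });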