Adding values to the keys in Map - dictionary

I have a record
record = [ [ name1:'value1', name2:'value2', name3:'value3' ],
[ name1:'value6', name2:'value7', name3:'value8' ] ]
I would like to add two more key/value pairs, with boolean values (true/false), to each map, as below:
record = [ [ name1:'value1', name2:'value2', name3:'value3', name4:false, name5:true ],
[ name1:'value6', name2:'value7', name3:'value8', name4:false, name5:true ] ]
When I tried to use the add or put methods, it didn't seem to work (it either replaced the existing values or did nothing).

Just do:
record = record.collect { it + [ name4:false, name5:true ] }
Or you can also do:
record = record*.plus( name4:false, name5:true )

To add to Patrick's answer above (+1): a Map's keys form a set, not a list, so all keys must be unique. So you cannot assign multiple values to a single key directly.
Among many solutions, you can alternatively store an object as the value:
Map<String, myObject>
where that object holds many different values; this still maintains the uniqueness of the key set, since there is only one key per entry.
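To illustrate that idea with a minimal sketch (in Python rather than Groovy, purely as an illustration): each key maps to one value object that bundles several fields, so the key set stays unique.

from dataclasses import dataclass

# Hypothetical value object bundling several values under a single key.
@dataclass
class Details:
    value: str
    name4: bool
    name5: bool

record = {
    "name1": Details("value1", name4=False, name5=True),
    "name2": Details("value2", name4=False, name5=True),
}

print(record["name1"].name5)   # True; "name1" remains a single, unique key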

Related

Move an item within a DynamoDB list

I have a list in a DynamoDB table and would like to move items to different positions in the same list. Is there a way to do this in a single update?
At the moment I'm looking at having to read the list, modify it, then write it back again, but I would prefer to do it all in a single update. Is there a way to do that?
Edit to add example
So here's some noddy data that shows what I'd like to do:
If the data started like this:
Item: { COLUMN: [ "Element_0", "Element_1", "Element_2", "Element_3" ] }
Then I'd give it 'from' and 'to' indices and it would move the element. So, for example, if I gave it a from index of 0 and a to index of 2, the data should end up like this:
Item: { COLUMN: [ "Element_1", "Element_2", "Element_0", "Element_3" ] }
You can do this with an Update Expression, but it's a little tricky, since you don't have the data.
Basically, you have to create a dynamic update statement that sets every value you want to move. Because every right-hand side in a single update expression is evaluated against the item's values before the update, you can safely reference the elements you are overwriting. Something like this works:
aws dynamodb update-item --table-name test --key '{"pk":{"S":"1"}}' --update-expression "SET #list[1] = #list[2], #list[2] = #list[1]" --region us-west-2 --profile jw-test --expression-attribute-names '{"#list": "list"}'
I created a table with a key of pk, with a value of 1. The list before the update was like this:
[
'one',
'two',
'three',
'four'
]
After the update it looks like this:
[
'one',
'three',
'two',
'four'
]
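If you are calling DynamoDB from Python rather than the AWS CLI, a hedged boto3 sketch of the same trick, generalised to a from/to move, could look like this; the table name, key, and attribute name mirror the CLI example but are otherwise placeholders:

import boto3

# Every right-hand side in a single UpdateExpression is evaluated against the
# item's pre-update values, which is why one expression can rotate the elements.
def move_list_element(table, key, attr, src, dst):
    if src == dst:
        return
    if src < dst:
        # shift the elements between src and dst one slot towards the front
        actions = [f"#l[{i}] = #l[{i + 1}]" for i in range(src, dst)]
    else:
        # shift the elements between dst and src one slot towards the back
        actions = [f"#l[{i}] = #l[{i - 1}]" for i in range(src, dst, -1)]
    actions.append(f"#l[{dst}] = #l[{src}]")
    table.update_item(
        Key=key,
        UpdateExpression="SET " + ", ".join(actions),
        ExpressionAttributeNames={"#l": attr},
    )

dynamodb = boto3.resource("dynamodb")
# Move the element at index 0 to index 2, as in the question's example.
move_list_element(dynamodb.Table("test"), {"pk": "1"}, "list", 0, 2)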
Default answer, if this isn't possible in a single update:
Read out the list, modify it, then write it back. It's not elegant, but it works and isn't that ugly either.
It's not atomic though, so any answer that can do it in a single update will get the check mark.
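If the read-modify-write route is what you end up with, a minimal boto3 sketch (table name, key, and attribute are hypothetical, matching the example data above) could be:

import boto3

# Read the list, move one element locally, then write the whole list back.
# Two calls, so not atomic; a ConditionExpression on the old list could guard
# against concurrent updates.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("test")
key = {"pk": "1"}

item = table.get_item(Key=key)["Item"]
lst = item["COLUMN"]
lst.insert(2, lst.pop(0))  # move the element at index 0 to index 2

table.update_item(
    Key=key,
    UpdateExpression="SET #c = :new",
    ExpressionAttributeNames={"#c": "COLUMN"},
    ExpressionAttributeValues={":new": lst},
)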

How to save a global variable with table format from NetLogo BehaviorSpace

I have written fairly complicated code for my ABM (634 agents having interactions, each having different variables, some of which are lists with multiple values that are updated each tick). As I need to save the updated values for all agents, I have defined a global variable using table:make. This table has 634 keys (one key per agent), and each key holds the list of values (from that agent's agents-own list variable) for the corresponding agent. But when I use the name of this table as one of my reporters in BehaviorSpace, the result in the csv file is not a table with keys; it contains only a number: {{table: 1296}}. So, I was wondering how I could change this variable to be able to get all the values.
If you're happy to do some post-processing with R or something after the fact, then table:to-list might be all you need. For example, with a simple setup example like:
extensions [ table ]
globals [ example-table ]
turtles-own [ turtle-list ]
to setup
  ca
  crt 3 [
    set turtle-list ( list random 10 one-of [ "A" "B" "C" ] random 100 )
  ]
  set example-table table:make
  foreach sort turtles [ t ->
    table:put example-table ( word "turtle_" [who] of t ) [turtle-list] of t
  ]
  reset-ticks
end
And a to-report to clean each table item such that the first item is the key and all other items are the items in the list:
to-report easier-read-table [ table_ ]
  let out []
  foreach table:to-list table_ [ i ->
    set out lput ( reduce sentence i ) out
  ]
  report out
end
You can set up your BehaviorSpace experiment such that one of your reporters is that reporter, for example easier-read-table example-table. In the resulting .csv file, the reporter column outputs a list of lists that you can process however you like.
However, I probably wouldn't use the basic BehaviorSpace output for this, but would instead have the experiment call a manual table-output procedure. For example, using the csv extension to make this output-table procedure:
to output-table [ filename_ table_ ]
  let out [["key" "col1" "col2" "col3"]]
  foreach table:to-list table_ [ i ->
    set out lput ( reduce sentence i ) out
  ]
  csv:to-file filename_ out
end
This outputs a much more analysis-ready table if you're less comfortable cleaning the list-of-lists output that, as far as I know, is what you would get from BehaviorSpace. You can call it at the end of your experiment (for example, in the experiment's final commands, passing example-table and a file name) to get a table that is a little nicer to deal with. You can obviously modify this to report more often if needed, which would output a table at each tick of the experiment (you can also do this in your code to make it a little easier).
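If you do the post-processing in Python rather than R, a minimal pandas sketch could read the file written by output-table; the file name here is hypothetical and should match whatever you pass in:

import pandas as pd

# Read the table written by csv:to-file; "turtle-table.csv" is a hypothetical
# file name matching whatever was passed to output-table.
df = pd.read_csv("turtle-table.csv")

print(df.head())
# Example aggregation: mean of the numeric col1 grouped by the categorical col2.
print(df.groupby("col2")["col1"].mean())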

Using ProjectionExpression on a List that contains maps

Say that this is the table struct:
[{ name:"test", age:99,
Info: [
{ location:"A", num:11 },
{ location:"B", num:99 }
]
}]
What I want to get is something like this:
{ name: "test",
Info:[
{location:"A"},
{location:"B"}
]}
Would that be possible? I can't seem to make it work unless I specify the index.
ProjectionExpression="name, #mp[0].location",
Select='SPECIFIC_ATTRIBUTES',
ExpressionAttributeNames={"#mp": "Info"}
How do I do this?
Based on the documentation, you can either project the whole object or specify individual elements by index.
http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.Attributes.html#Expressions.Attributes.NestedAttributes
Working as documented.
Hope it helps.
It's a little tricky if you do not know the length of the list. You could try using a max length for the list and use something like this (assuming 3 is the max length):
ProjectionExpression="#mp[0].location,#mp[1].location,#mp[2].location"

CosmosDB Graph: How to update one value of a vertex property that has multiple values using Gremlin?

Suppose my query is :
g.addV('employee').property('id', 'john').property('country', 'USA').property('country', 'India')
which adds the property country with two values, i.e. USA and India.
[
  {
    "id": "john",
    "label": "employee",
    "type": "vertex",
    "properties": {
      "country": [
        {
          "id": "5dc2aaf6-cb11-4d4a-a2ce-e5fe79d28c80",
          "value": "USA"
        },
        {
          "id": "fcf4baf6-b4d5-45a3-a4ba-83a859806aef",
          "value": "India"
        }
      ]
    }
  }
]
Now I want to change one of the existing values. For example 'India' to 'China'.
What will be query for that?
In a single query it's just that:
g.V().has('id', 'john').
sideEffect(properties('country').hasValue('India').drop()).
property(list, 'country', 'China')
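If you are submitting this from Python, a rough sketch using the gremlinpython driver might look like the following; the endpoint, database, collection, and key are placeholders for your Cosmos DB account:

from gremlin_python.driver import client, serializer

# Placeholders: account endpoint, database/collection path, and primary key.
gremlin_client = client.Client(
    "wss://<account>.gremlin.cosmos.azure.com:443/",
    "g",
    username="/dbs/<database>/colls/<collection>",
    password="<primary-key>",
    message_serializer=serializer.GraphSONSerializersV2d0(),
)

query = (
    "g.V().has('id', 'john')."
    "sideEffect(properties('country').hasValue('India').drop())."
    "property(list, 'country', 'China')"
)

result = gremlin_client.submit(query).all().result()
print(result)
gremlin_client.close()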
We could drop the 'India' value first and then add 'China'. I tested it with the following queries on my side, and it works correctly.
g.V().has('id', 'john').properties('country').hasValue('India').drop()
g.V().has('id', 'john').property(list, 'country', 'China')
g.V().has('employee','id', 'john').property('country', 'China')

Understanding the output of a Cypher query using constants.DATA_GRAPH

When I make a call to the Neo4j REST API, of the format results = gdb.query(query, data_contents=constants.DATA_GRAPH), I get back many more results, and results that are more complex, than I had expected.
Cypher version: CYPHER 2.2
For example, in a graph that has this arrangement of nodes...
(Bob) --> (Amy) --> (Cal)
... and the query ...
MATCH (n)
OPTIONAL MATCH (a)-[r]-(b)
RETURN DISTINCT n, r
... one of the results returned is as follows:
{ "relationships": [
{ "id":"270"
, "type":"LIKES"
, "startNode":"134"
, "endNode":"136"
, "properties":{}
}
]
, "nodes": [
{ "id":"134"
, "labels":["Person"]
, "properties":{"name":"Amy"}
}
, { "id":"135"
, "labels":["Person"]
, "properties":{"name":"Bob"}
}
, { "id":"136"
, "labels":["Person"]
, "properties":{"name":"Cal"}
}
]
}
If I understand correctly, this indicates a direct relationship between Amy (134) and Cal (136). As far as I can see, Bob has no place in the path between Amy and Cal. So why is Bob appearing in this entry at all?
I also get duplicate entries. For example, this entry appears twice:
{ "relationships": [
{ "id":"264"
, "type":"LIKES"
, "startNode": "134"
, "endNode":"136"
,"properties":{}
}
]
, "nodes": [
{ "id":"134"
, "labels":["Person"]
, "properties":{"name":"Amy"}
}
, { "id":"136"
, "labels":["Person"]
, "properties":{"name":"Cal"}
}
]
}
In my tests, I see rows with 2 or 3 nodes. Is it ever possible to see more nodes in one row? Is it safe to assume that, if a relationship entry includes a startNode and an endNode, there is a direct link from one to the other, and that any additional nodes that appear in the nodes section for that row can be ignored?
Is there somewhere where I can find a complete explanation of how the graph output is calculated?
Because you have a typo: you don't refer to n at all in your OPTIONAL MATCH.
Also, you should use labels!
MATCH (n)
OPTIONAL MATCH (n)-[r]-()
RETURN DISTINCT n, r
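For reference, a minimal sketch of running the corrected query with the neo4jrestclient setup the question implies; the connection URL is hypothetical, and the shape of the returned graph entries is as shown in the question:

from neo4jrestclient.client import GraphDatabase
from neo4jrestclient import constants

gdb = GraphDatabase("http://localhost:7474/db/data/")

query = """
MATCH (n)
OPTIONAL MATCH (n)-[r]-()
RETURN DISTINCT n, r
"""

results = gdb.query(query, data_contents=constants.DATA_GRAPH)

# Each graph entry should now describe only the nodes and relationships that
# its row actually returned.
for entry in results.graph:
    print(entry)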
