Unique IDs for label & input pairs - Meteor

I'm trying to generate unique IDs for label & input pairs.
After googling, I now know that, unlike Handlebars, Spacebars doesn't have an #index syntax extension for arrays yet (also, does anybody know why Blaze development has been inactive at version 0.1 for the past 5 months?).
So I ended up using a JS Array .map() solution inspired by this blog post and other posts. However, this solution returns label & input pairs of objects that the DOM appears to keep rendering the same when 'paginating' through Session.
Live example: http://meteorpad.com/pad/NXLtGXXD4yhYr9LHC
When clicking on the first set of "Non-Indexed IDs" checkboxes and then next/previous, the DOM displays the new set of checkboxes correctly.
However, when clicking on the second set of "Indexed IDs" checkboxes below and then next/previous, the DOM seems to retain the same checkboxes: a box selected on the previous page remains checked on the next page.
What am I doing wrong or missing?
I also put the code on GitHub for quick testing & refinement:

The solution, which I found by looking at the ObserveSequence source, appears to be to give your generated objects a unique field called _id (generated like {{questionId}}:{{questionIndex}}:{{choiceIndex}}). See this meteorpad: http://meteorpad.com/pad/2EaLh8ZJncnqyejSr
I don't know enough about Meteor internals to say why, but this comment seems relevant:
// 'lastSeqArray' contains the previous value of the sequence
// we're observing. It is an array of objects with '_id' and
// 'item' fields. 'item' is the element in the array, or the
// document in the cursor.
//
// '_id' is whichever of the following is relevant, unless it has
// already appeared -- in which case it's randomly generated.
//
// * if 'item' is an object:
//     * an '_id' field, if present
//     * otherwise, the index in the array
//
// * if 'item' is a number or string, use that value
//
// XXX this can be generalized by allowing {{#each}} to accept a
// general 'key' argument which could be a function, a dotted
// field name, or the special #index value.
When the _id is absent, it uses the index in the array, so I guess ObserveSequence assumes it's the same object with changed fields, rather than a different object, so it re-uses the old elements rather than destroying them and recreating them. I suppose the name _id is chosen so that it works well with arrays generated by .fetch() on a Minimongo cursor.
I don't know if this is documented behaviour, or if it might change in the future.
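For reference, a minimal sketch of a helper along those lines (the template, helper, and field names here are placeholders, not from the original code):
// Map each choice to an object carrying a stable, unique _id so that
// {{#each}} / ObserveSequence treats it as a distinct item per question.
Template.question.helpers({
  indexedChoices: function () {
    var question = this; // assumes the data context is a question document
    return (question.choices || []).map(function (choice, choiceIndex) {
      return {
        _id: question._id + ':' + choiceIndex, // unique across pages
        label: choice,
        choiceIndex: choiceIndex
      };
    });
  }
});
Because each generated object now carries a distinct _id per question, ObserveSequence no longer matches items across pages by array index, so the old checkboxes are destroyed and recreated instead of re-used.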

Related

GA4 with GTM - sending the items array as an event parameter without using datalayer?

Using GA4 with GTM. I'm wondering how to send an array for an event, for example the add_to_cart event. In my situation I am triggering the tag on my purchase links. On those links I added data parameters for the id, name, and value, such as:
Buy Now
There are multiple links, and the id, name, and value are the only things that change for each one.
Google requires an items array to be sent with the add_to_cart event. Can I enter the items array as shown in this picture using dot notation? I can't supply this information in the dataLayer, which is why I am grabbing the values that differ from the link itself (data parameters)... the rest are static and won't change. I can't find any way to create an array variable in GTM, so dot notation is the only thing I could think of.
Is there another way to do this I am missing or not thinking of?
Unfortunately you can't.
Your solution sends every value from the items object as an individual event parameter.
GA4 requires you to send an array of objects, with one object for every sold item.
The good news is, you can use GTM to create the items array in the correct format using some JavaScript.
#Ramon put me in the right direction. I set this up as a custom JS variable. Since I trigger the tag on a link click, {{Click Element}} lets me get the data-parameter values from the clicked element to build the array values that are dynamic. I suppose I could also have used the GTM variables I already created for those here too. Anyway, I use this variable as the items event parameter value, which returns the array the way I want. It seems to be working fine.
function() {
  var e = {{Click Element}};
  var items = [{
    item_id: e.dataset.id,
    name: e.dataset.name,
    affiliation: 'some name',
    currency: 'USD',
    item_brand: 'some name',
    item_category: 'Software',
    price: e.dataset.value,
    quantity: 1
  }];
  return items;
}
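For completeness, a hedged sketch of the alternative mentioned above, reading from existing GTM variables instead of the click element (the {{DLV - ...}} names are placeholders for whatever variables you already created):
function() {
  // Placeholder variable names; substitute your own GTM variables.
  var items = [{
    item_id: {{DLV - Product ID}},
    name: {{DLV - Product Name}},
    affiliation: 'some name',
    currency: 'USD',
    item_brand: 'some name',
    item_category: 'Software',
    price: {{DLV - Product Value}},
    quantity: 1
  }];
  return items;
}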

Store a manipulated collection in Power Apps

(1) In my Canvas App I create a collection like this:
Collect(colShoppingBasket; {Category: varCategoryTitle ; Supplier: lblSupplier.Text ; ProductNumber: lblProductNumber.Text });;
It works - I get a collection. And whenever I push my "Add to shopping basket" button, an item is added to my collection.
(2) Now I want to sort the collection and then use the sorted output for other things.
This function sorts it by supplier. No problems here:
Sort(colShoppingBasket; Supplier)
(3) Then I want to display the SORTED version of the collection in various scenarios. And this is where the issue is, because all I can do is manipulate a DISPLAY of the (unsorted) collection "colShoppingBasket".
(4) What would be really nice would be the option to create and store a manipulated copy of the original collection, and then display that wherever I needed it. Sort of:
Collect(colShoppingBasketSORTED; { Sort(colShoppingBasket; supplier) });; <--- p.s. I know this is not a working function
You could use the following:
ClearCollect(colShoppingBasketSorted, Sort(colShoppingBasket, Supplier))
Note that it is without the { }
This will Clear and Collect the whole colShoppingBasket sorted.
If you want to store the table in a single row in a collection, you can use
ClearCollect(colShoppingBasketSortedAlternative, {SingleRow: Sort(colShoppingBasket, Supplier)})
I wouldn't recommend this though because if you want to use the Sorted values you'd have to do something like this:
First(colShoppingBasketSortedAlternative).SingleRow -> this returns the first record of colShoppingBasketSortedAlternative, then gets the content of the SingleRow column, which in this case is a collection
Note: You will need to replace the , with ; for this to work in your case
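For example, with the separators adjusted for that locale, the first formula would look like this (a sketch, using the same names as above):
ClearCollect(colShoppingBasketSorted; Sort(colShoppingBasket; Supplier))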

Is it possible to upsert nested fields in DynamoDB?

I would like to 'upsert' a document in DynamoDB. That is, I would like to specify a key, and a set of field/value pairs. If no document exists with that key, I want one created with that key and the key/value pairs I specified. If a document exists with that key, I want the fields I specified to be set to the values specified (if those fields did not exist before, then they should be added). Any other, unspecified fields on the existing document should be left alone.
It seems I can do this pretty well with the UpdateItem call, when the field/value pairs I am setting are all top-level fields. If I have nested structures, UpdateItem will work to set the nested fields, as long as the structure exists. In other words, if my existing document has "foo": {}, then I can set "foo.bar": 42 successfully.
However, I don't seem to be able to set "foo.bar": 42 if there is no foo object already (like in the case where there is no document with the specified key at all, and my 'upsert' is behaving as an 'insert').
I found a discussion on the AWS forums from a few years ago which seems to imply that what I want to do cannot be done, but I'm hoping this has changed recently, or maybe someone knows of a way to do it?
UpdateItem behaves like an "upsert" operation: The item is updated if it exists in the table, but if not, a new item is added (inserted).
http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/SQLtoNoSQL.UpdateData.html
That ("foo.bar": 42) can be achieved using the below query:
table.update_item(
    Key={'Id': id},
    UpdateExpression='SET foo = :value1',
    ExpressionAttributeValues={':value1': {'bar': 42}}
)
Hope this helps :)
I found this UpdateItem limitation (top level vs nested attributes) frustrating as well. Eventually I came across this answer and was able to work around the problem: https://stackoverflow.com/a/43136029/431296
It requires two UpdateItem calls (possibly more depending on the level of nesting?). I only needed a single level, so this is how I did it:
Update the item using an attribute_not_exists condition to create the top-level attribute as an empty map if it doesn't already exist. This works whether the entire item is missing or it exists with other pre-existing attributes you don't want to lose.
Then do the second UpdateItem call to update the nested value. As long as the parent exists (e.g. an empty map in my case), it works great.
I got the impression you weren't using Python, but here's the Python code to accomplish the upsert of a nested attribute in an item like this:
{
    "partition_key": "key",
    "top_level_attribute": {
        "nested_attribute": "value"
    }
}
Python boto3 code:
def upsert_nested_item(self, partition_key, top_level_attribute_name, nested_attribute_name, nested_item_value):
    # First call: create the top-level attribute as an empty map, but only
    # if it doesn't already exist (otherwise the condition fails and we move on).
    try:
        self.table.update_item(
            Key={'partition_key': partition_key},
            ExpressionAttributeNames={f'#{top_level_attribute_name}': top_level_attribute_name},
            ExpressionAttributeValues={':empty': {}},
            ConditionExpression=f'attribute_not_exists(#{top_level_attribute_name})',
            UpdateExpression=f'SET #{top_level_attribute_name} = :empty',
        )
    except self.DYNAMODB.meta.client.exceptions.ConditionalCheckFailedException:
        pass
    # Second call: set the nested attribute now that the parent map exists.
    self.table.update_item(
        Key={'partition_key': partition_key},
        ExpressionAttributeNames={
            f'#{top_level_attribute_name}': top_level_attribute_name,
            f'#{nested_attribute_name}': nested_attribute_name
        },
        ExpressionAttributeValues={f':{top_level_attribute_name}': nested_item_value},
        UpdateExpression=f'SET #{top_level_attribute_name}.#{nested_attribute_name} = :{top_level_attribute_name}',
    )
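A quick usage sketch for the sample item above (`store` is a placeholder for an instance of whatever class this method lives on):
# Creates top_level_attribute as {} if needed, then sets the nested value.
store.upsert_nested_item('key', 'top_level_attribute', 'nested_attribute', 'value')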

modify field value in a crossfilter after insertion

I need to modify a field value for all records in a crossfilter before inserting new records.
The API doesn't say anything about it. Is there a way to do that?
Even if it's a hack, it would be really useful to me.
Looking at the code, the data array is held as a private local variable inside the crossfilter function so there's no way to get at it directly.
With that said, it looks like Crossfilter really tries to minimize the number of copies of the data it makes. Callback functions like the ones passed into crossfilter.dimension or dimension.filter are handed the actual records themselves from the data array (via the native Array.map), so any changes you make to those records will be made to the main records.
That said, you obviously need to be very careful that you're not changing anything that is relied on by the existing dimensions, filters or groups. Otherwise you'll end up with data that doesn't agree with the internal Crossfilter structures, and chaos will ensue.
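For example, a minimal sketch of that idea (assuming an existing dimension someDimension, and a field note that no dimension, filter or group depends on):
// Grab the actual record objects through an existing dimension and mutate
// a field that nothing in the crossfilter indexes on.
someDimension.filter(null); // clear this dimension's own filter (other dimensions' filters still apply)
someDimension.top(Infinity).forEach(function (record) {
  record.note = 'updated'; // safe only because 'note' isn't used by any dimension or group
});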
The cool thing about .remove is that it only removes entries that match the currently applied filters. So if you create a 'unique dimension' that returns a unique value for every entry in your dataset (like an ID column), you can have a function like this:
function editEntry(id, changes) {
  uniqueDimension.filter(id); // filter down to the item you want to change
  var selectedEntry = uniqueDimension.top(1)[0]; // get the item
  _.extend(selectedEntry, changes); // apply the changes to it
  ndx.remove(); // remove all items that pass the current filter (which is just the item we are changing)
  ndx.add([selectedEntry]); // re-add the modified item
  uniqueDimension.filter(null); // clear the filter
  dc.redrawAll(); // redraw the UI
}
At the least, you could do a cf.remove() and then re-add the adjusted data with cf.add().
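A rough sketch of that approach (assuming cf is the crossfilter instance, allRecords is your own copy of the data array, and all filters have been cleared first):
// With every filter cleared, remove() drops all records; then re-add the
// adjusted copies.
allRecords.forEach(function (d) {
  d.someField = d.someField + 1; // whatever adjustment you need (example only)
});
cf.remove();        // removes everything because nothing is filtered out
cf.add(allRecords); // insert the modified records back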

Programmatically get and set field values

I have two fields I want to fill with exactly the same values; users should fill in only one.
I also have a function which checks if the second field is empty. Is there any difference in how field values are obtained and set between Drupal 6 and Drupal 7?
EDIT:
I am trying to edit the module right now.
Yes, I am talking about node fields.
The $node array has only the IDs of the terms I added to the node. How do I get a term's name, knowing its ID?
Since you tagged this question with cck, I'm going to assume you are working with node fields.
To copy the value of one field (x) to another (y), you can either install the Computed Field module and set it up so that the value of y is computed from the value of x, or you can create a custom module with something similar to the following hooks:
This hook copies all of the data from field x to field y:
function mymodule_node_presave($node) {
  $node->field_y = $node->field_x;
}
This hook only copies the value of the first instance of field x to field y:
function mymodule_node_presave($node) {
  $node->field_y[$node->language][0]['value'] = $node->field_x[$node->language][0]['value'];
}
You might want to do a print_r on $node->field_x and $node->field_y, as the structure of your data may differ based on the type of field you are using. If you want to check whether either of the fields is empty, you can wrap the assignment statement in a conditional that calls your custom function, as in the sketch below.
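For example, a rough sketch of that conditional wrapper (mymodule_second_field_is_empty() is a placeholder for your own emptiness check):
function mymodule_node_presave($node) {
  // Only copy field x into field y when your own check says field y is empty.
  if (mymodule_second_field_is_empty($node->field_y, $node->language)) {
    $node->field_y[$node->language][0]['value'] = $node->field_x[$node->language][0]['value'];
  }
}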
One good way of finding out a field's value is using field_get_items(), which is provided by the Field API.
field_get_items($entity_type, $entity, $field_name, $langcode = NULL);
Where:
$entity_type: Something like 'node' or 'user'.
$entity: The entity whose field value is needed.
$field_name: The machine name of the field.
$langcode: The language the entity is stored in. It is optional; if not provided, field_get_items() will figure it out automatically.
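For example (a quick sketch; 'field_x' stands in for your field's machine name):
// Returns an array of item arrays, or FALSE if the field has no values.
$items = field_get_items('node', $node, 'field_x');
if ($items) {
  $first_value = $items[0]['value'];
}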
