Google Tag Manager has a pre-defined variable type, "Data Layer Variable", with an input for the variable name. With a standard single level of key/value pairs this is easy:
var dataLayer = [{"mykey":"myvalue"}];
Given that data layer you'd just use mykey as your variable to input into GTM. However, if you use the CEDDL spec (http://www.w3.org/2013/12/ceddl-201312.pdf) structure you end up with a deeply nested array:
dataLayer = [
  {
    "product": [
      {
        "category": {
          "primaryCategory": "Auto Loans"
        },
        "productInfo": {
          "productID": "1",
          "productName": "PurchaseLoan",
          "description": "Auto finance loan"
        },
        "security": [
          "Analytics",
          "Personalization",
          "Recommendations"
        ]
      }
    ]
  }
]
So the real question is: how do I access the value of "productName" in the above example?
In standard JavaScript you would access it like so:
dataLayer[0].product[0].productInfo.productName
or, in dot notation:
dataLayer.0.product.0.productInfo.productName
... but neither of these works in GTM's variable name field (with or without dataLayer as the first node).
(Screenshot: the GTM UI field for entering the variable name.)
When you define your Data Layer Variable in GTM, you don't need to specify "dataLayer" in the variable name, i.e. it should just be:
product.0.productInfo.productName
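For reference, here is a runnable plain-JavaScript sketch of the same lookup, contrasting bracket indexing with the dot notation GTM expects (note the zero-based indices):

```javascript
// CEDDL-style data layer from the question (first array element is index 0)
var dataLayer = [{
  product: [{
    category: { primaryCategory: 'Auto Loans' },
    productInfo: {
      productID: '1',
      productName: 'PurchaseLoan',
      description: 'Auto finance loan'
    },
    security: ['Analytics', 'Personalization', 'Recommendations']
  }]
}];

// Plain JavaScript uses bracket indices for arrays...
var viaJs = dataLayer[0].product[0].productInfo.productName;

// ...while GTM's Data Layer Variable input uses dots for array indices
// and omits the "dataLayer." prefix: product.0.productInfo.productName
console.log(viaJs); // "PurchaseLoan"
```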
I believe this question applies to TinkerPop generally, not specifically to the CosmosDB implementation; some CosmosDB semantics may just be baked into my query examples.
I've developed a data layer that creates queries based on metadata. Currently, my data layer only persists non-null values to the graph vertex, and this is causing trouble with my retrieval mechanism.
Consider the following data model, where the field "HomeRoute" may or may not exist on the actual vertex (depending on whether it was populated):
{
  "ApplicationModule": string,
  "Title": string,
  "HomeRoute": string?
}
My initial query structure is as follows, which does not support the optional properties (discussed later).
g.V()
  .has('ApplicationsTest', 'partitionId', '')
  .project('ApplicationModule', 'Title', 'HomeRoute')
    .by('ApplicationModule')
    .by('Title')
    .by('HomeRoute');
To simulate, we can insert a vertex:
g.addV('ApplicationsTest')
  .property('partitionId', '')
  .property('ApplicationModule', 'TestApp')
  .property('Title', 'Test App')
  .property('HomeRoute', 'testapphome');
And we can successfully query it using my base query noted above, which returns it in my desired JSON format.
[
  {
    "ApplicationModule": "TestApp",
    "Title": "Test App",
    "HomeRoute": "testapphome"
  }
]
If we now insert a vertex without the HomeRoute property (since it was null within the application layer), my base query will fail.
g.addV('ApplicationsTest')
  .property('partitionId', '')
  .property('ApplicationModule', 'TestApp')
  .property('Title', 'Test App');
Executing my base query now results in error:
Gremlin Query Execution Error: Project By: Next: The provided
traverser of key "HomeRoute" maps to nothing.
I can apply a coalesce operation against "optional" fields; my current understanding has allowed me to return a constant value in the case of undefined properties. Updating my base query as follows will return "!dbnull" when a property does not exist on the vertex:
g.V()
  .has('ApplicationsTest', 'partitionId', '')
  .project('ApplicationModule', 'Title', 'HomeRoute')
    .by('ApplicationModule')
    .by('Title')
    .by(values('HomeRoute')
      .fold()
      .coalesce(unfold(), constant('!dbnull')));
This query when executed returns the values as expected, again in JSON format.
[
  {
    "ApplicationModule": "TestApp",
    "Title": "Test App",
    "HomeRoute": "testapphome"
  },
  {
    "ApplicationModule": "TestApp",
    "Title": "Test App",
    "HomeRoute": "!dbnull"
  }
]
My question (still new to Gremlin / Tinkerpop queries) - is there any way that I can get this result with only the properties which are present on the respective vertices?
My desired output from this example is below, which would allow my data layer to only unbundle the values present on the graph vertex and not have to consider string "!dbnull" values.
[
  {
    "ApplicationModule": "TestApp",
    "Title": "Test App",
    "HomeRoute": "testapphome"
  },
  {
    "ApplicationModule": "TestApp",
    "Title": "Test App"
  }
]
I've found a way to achieve what I'm looking for. Would still love input from the community though, if there's optimizations or other considerations.
g.V()
  .has('ApplicationsTest', 'partitionId', '')
  .project('ApplicationModule', 'Title', 'HomeRoute')
    .by('ApplicationModule')
    .by('Title')
    .by(values('HomeRoute')
      .fold()
      .coalesce(unfold(), constant('!dbnull')))
  .local(unfold()
    .where(select(values).is(without('!dbnull')))
    .group().by(select(keys)).by(select(values)))
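To make the intent of that last local(...) step concrete, here is a plain-JavaScript analogue of what it does to each projected map: drop every entry whose value is the "!dbnull" sentinel from the query above. This is a sketch for illustration only, not Gremlin API.

```javascript
// Sentinel value used by the coalesce() step in the Gremlin query above
const SENTINEL = '!dbnull';

// Remove every key whose value is the sentinel, mirroring the
// local(unfold().where(...).group()...) post-processing step
function stripSentinels(rows) {
  return rows.map(row =>
    Object.fromEntries(
      Object.entries(row).filter(([, value]) => value !== SENTINEL)
    )
  );
}

const rows = [
  { ApplicationModule: 'TestApp', Title: 'Test App', HomeRoute: 'testapphome' },
  { ApplicationModule: 'TestApp', Title: 'Test App', HomeRoute: '!dbnull' },
];

console.log(stripSentinels(rows));
```

The second row comes back without a HomeRoute key at all, matching the desired output shown earlier.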
If you only need specific keys that already exist on the vertex, you can use valueMap instead; there is no need for project:
g.V()
  .has('ApplicationsTest', 'partitionId', '')
  .valueMap('ApplicationModule', 'Title', 'HomeRoute').by(unfold())
example: https://gremlify.com/9fua9jsu0dh
Referring to the default sample schema mentioned in https://hasura.io/hub/project/hasura/hello-world/data-apis, i.e. the following two tables:
1) author: id, name
2) article: id, title, content, rating, author_id
where author has an array relationship (articles) to article via article.author_id.
How do I make a query to select authors who have written at least one article? Basically, something like select author where len(author.articles) > 0
TL;DR:
There's no length function in the Hasura data API syntax right now. Workarounds: 1) filter on a property that is guaranteed to be true for every row, like id > 0, or 2) build a view and expose data APIs on the view.
Option 1:
Use an 'always true' attribute as a filter.
{
  "type": "select",
  "args": {
    "table": "author",
    "columns": [
      "*"
    ],
    "where": {
      "articles": {
        "id": {
          "$gt": "0"
        }
      }
    }
  }
}
This reads as: select all authors where ANY article has id > 0.
This works because id is an auto-incrementing integer, so the condition holds for every article; an author matches exactly when they have at least one article.
Option 2:
Create a view and then expose data APIs on them.
Head to the Run SQL window in the API console and run a migration:
CREATE VIEW author_article_count AS (
  SELECT au.*, ar.no_articles
  FROM
    author au,
    (SELECT author_id, COUNT(*) no_articles FROM article GROUP BY author_id) ar
  WHERE
    au.id = ar.author_id
);
Make sure you mark this as a migration (a checkbox below the RunSQL window) so that this gets added to your migrations folder.
Now add data APIs to the view, by hitting "Track table" on the API console's schema page.
Now you can make select queries using no_articles as the length attribute:
{
  "type": "select",
  "args": {
    "table": "author_article_count",
    "columns": [
      "*"
    ],
    "where": {
      "no_articles": {
        "$gt": "0"
      }
    }
  }
}
Using the following JSON (from http://jsonpath.com):
{
  "firstName": "John",
  "lastName": "doe",
  "age": 26,
  "address": {
    "streetAddress": "naist street",
    "city": "Nara",
    "postalCode": "630-0192"
  },
  "phoneNumbers": [
    {
      "type": "iPhone",
      "number": "0123-4567-8888"
    },
    {
      "type": "home",
      "number": "0123-4567-8910"
    }
  ]
}
I would like to get the root object only if firstName is John.
I have tried these inputs and many other similar ones:
$.[?($.firstName == 'John')]
$.[?($.'firstName' == 'John')]
$.[?(#.firstName == 'John')]
$[?($.firstName == "John")]
It seems as though filtering is only intended for arrays, so this is an unsupported operation. Can someone confirm whether this is possible in Json.NET, or point me to a library which supports the above?
I'm using F# but that's not important because F# is compatible with C#, .NET and NuGet packages.
JSONPath is intended to locate data in a JSON object, not to perform processing or testing on that data. The filter notation exists to identify an item in an array and return that data, or some part of it: because an array of objects may contain many properties with the same name, they have to be filtered by some other means to select a subset of them.
Using filter notation on an object property is not the same thing. There can only be one property with a particular name in an object, so stating that name is sufficient to identify it uniquely. You can easily achieve the effect you require by getting $.firstName and then testing separately for the value "John".
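A minimal sketch of that two-step approach, shown in plain JavaScript for brevity (the same pattern applies in F#/C# with Json.NET's SelectToken; the function name here is made up for illustration):

```javascript
// Two-step workaround: read the property, then test it separately,
// returning the root object only when the test passes.
function rootIfFirstNameIs(obj, expected) {
  return obj && obj.firstName === expected ? obj : null;
}

const doc = { firstName: 'John', lastName: 'doe', age: 26 };

console.log(rootIfFirstNameIs(doc, 'John') !== null); // true
console.log(rootIfFirstNameIs(doc, 'Jane'));          // null
```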
I created a flattened data structure. When pushing duplicated data, what's the accepted pattern for keeping that data up to date? Here the group info is duplicated into both the users-groups tree and the groups tree.
{
  "users": ..
  "users-groups": ..
  "groups": ..
}
When creating a group for a user, two updates take place:
First: push to /groups/group_key
{
  "name": "Test Group",
  "image": "/testimage.jpg"
}
Second: push to /users-groups/user_uid/group_key
{
  "orderNum": 0,
  "info": {
    "name": "Test Group",
    "image": "/testimage.jpg"
  }
}
Should keeping this data in users-groups up to date be a job for the client, or should a server handle it?
The data in the groups tree will always be the newest and the changes should propagate down to all the users that are members of that group.
Is there any tutorials or reference material for this problem?
Note: I'm using this structure because a user can be a member of multiple groups, and I don't think it would be a good idea to make possibly several ref.once() calls to fetch the data from /groups/ directly.
You can use a multi-path update: observe the reference and, when it changes, update all the other locations that hold a copy.
db.ref("").update({
  '/users/dedd': info,
  '/users/cdcd': info2
})
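Building on the multi-path update above, here is a sketch of the fan-out pattern: construct one update object that writes the new group info to /groups and to every member's denormalized copy in a single atomic call. The paths, the /info sub-key, and the memberUids list are hypothetical stand-ins for your own structure.

```javascript
// Hypothetical fan-out: one update object covering the canonical
// /groups entry plus each member's /users-groups copy.
function buildGroupFanOut(groupKey, info, memberUids) {
  const updates = {};
  updates['/groups/' + groupKey] = info;
  memberUids.forEach(function (uid) {
    updates['/users-groups/' + uid + '/' + groupKey + '/info'] = info;
  });
  return updates;
}

const updates = buildGroupFanOut(
  'group_key',
  { name: 'Test Group', image: '/testimage.jpg' },
  ['user_a', 'user_b']
);

// Then apply it in one call: db.ref('/').update(updates);
console.log(Object.keys(updates).length); // 3
```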
You should not save duplicated data. Instead, save a reference to the group.
Your data should look like this.
{
  "users": {
    "userkey1": {
      "data": {
        "name": "",
        "firstname": ""
      },
      "groups": {
        "groupkey1": true // true or orderNum value
      }
    }
  },
  "groups": {
    "groupkey1": {
      "data": {
        "name": "Test Group",
        "image": "/testimage.jpg",
        "other": "data"
      },
      "users": {
        "userkey1": true
      }
    }
  }
}
You can easily check whether a user is in a group by checking whether the value at either of these positions is true:
users/userkey1/groups/groupkey1 or groups/groupkey1/users/userkey1.
When you create a new group, you save it under groups/newgroupkey and update the groups map under the user's node by setting newgroupkey to true.
This way you do not duplicate your data.
For more information about structuring your data check the following link.
https://firebase.google.com/docs/database/android/structure-data
I have a Players collection and a Games collection. I want to construct a data structure that looks as follows:
{
  "_id": "1234",
  "accounts": {
    "battlenet": "blah#1234"
  },
  "games": {
    "overwatch": {
      "class": "Hanzo",
      "timePlayed": ISODate
    },
    "world-of-warcraft": {
      "class": "Shaman",
      "timePlayed": ISODate
    }
  }
}
games is an object, where every key matches the slug attribute of a specific document in the Games collection. Every value is a sub-schema definition with autoValues.
I can't find any good way to create validation in such a way that it updates an autoform correctly without weird coercion of data. Is there any way to accomplish this validation with SimpleSchema?
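SimpleSchema specifics aside, the core check a custom validator would need is roughly the following: every key of the games object must match a known Games slug. This is a plain-JavaScript sketch; the knownSlugs array is a hypothetical stand-in for a lookup against the Games collection, and in Meteor this logic would live inside a schema's custom validation function.

```javascript
// Hypothetical check: each key of player.games must be a known game slug.
function validateGameKeys(games, knownSlugs) {
  return Object.keys(games).every(slug => knownSlugs.includes(slug));
}

const knownSlugs = ['overwatch', 'world-of-warcraft'];

console.log(validateGameKeys(
  { overwatch: { class: 'Hanzo' } }, knownSlugs)); // true
console.log(validateGameKeys(
  { 'unknown-game': {} }, knownSlugs));            // false
```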