Kibana: filtering data based on aggregation

I need to show the count of running and completed jobs in a pie chart. I receive job statuses in real time. To show the count of jobs currently in the running state, I have to filter out jobs that have already completed (e.g. job a below). Please suggest a way to do this.
Job      Timestamp     Status
job a    1639381300    Running
job a    1639381301    Running
job a    1639381302    Completed
job b    1639381301    Running
Expected output (pie chart):
Count of completed jobs = 1
Count of running jobs = 1

Filter your jobs first, then aggregate. Below is a sample you can adapt:
GET logs/_search
{
  "size": 0,
  // top-level aggregation
  "aggs" : {
    "messages" : {
      // a filters aggregation creates one bucket per named filter
      "filters" : {
        "other_bucket_key": "other_messages",
        "filters" : {
          // each named filter below becomes a bucket; adapt these to your data
          "errors" : { "match" : { "body" : "error" }},
          "warnings" : { "match" : { "body" : "warning" }}
        }
      }
    }
  }
}
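
Note that the sample above counts documents, not jobs: a job like job a has both Running and Completed documents, so it would land in both buckets. One way to count each job once by its latest status is a terms aggregation per job with a top_hits sub-aggregation sorted by timestamp. A sketch, assuming the fields are named job, timestamp, and status (guesses based on the table above):

GET logs/_search
{
  "size": 0,
  "aggs": {
    "per_job": {
      // one bucket per job, so each job is counted once
      "terms": { "field": "job.keyword" },
      "aggs": {
        "latest": {
          // the newest document per job carries its current status
          "top_hits": {
            "size": 1,
            "sort": [ { "timestamp": { "order": "desc" } } ],
            "_source": [ "status" ]
          }
        }
      }
    }
  }
}

The pie chart then distinguishes buckets whose latest hit is Completed from those whose latest hit is Running.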

Related

How can I filter a subscription using a custom resolver

I am working on a messaging app using AWS AppSync.
I have the following message type...
type Message
  @model
  @auth(
    rules: [
      { allow: groups, groups: ["externalUser"], operations: [] }
    ]
  ) {
  id: ID!
  channelId: ID!
  senderId: ID!
  channel: Channel @connection(fields: ["channelId"])
  createdAt: AWSDateTime!
  text: String
}
And I have a subscription onCreatemessage. I need to filter the results to only channels that the user is in. So I get a list of channels from a permissions table and add the following to my response mapping template.
$extensions.setSubscriptionFilter({
  "filterGroup": [
    {
      "filters" : [
        {
          "fieldName" : "channelId",
          "operator" : "in",
          "value" : $context.result.channelIds
        }
      ]
    }
  ]
})
$util.toJson($messageResult)
And it works great. But if a user is in more than 5 channels, I get the following error.
{
  "message": "Connection failed: {"errors":[{"message":"subscription exceeds maximum value limit 5 for operator `in`.","errorCode":400}]}"
}
I am new to VTL. So my question is: how can I break that filter up into multiple OR'd filters?
According to Creating enhanced subscription filters, "multiple rules in a filter are evaluated using AND logic, while multiple filters in a filter group are evaluated using OR logic".
Therefore, as I understand it, you just need to split $context.result.channelIds into groups of 5 and add an object to the filters array for each group.
Here is a VTL template that will do this for you:
#set($filters = [])
#foreach($channelId in $context.result.channelIds)
  #set($group = $foreach.index / 5)
  #if($filters.size() < $group + 1)
    $util.qr($filters.add({
      "fieldName" : "channelId",
      "operator" : "in",
      "value" : []
    }))
  #end
  $util.qr($filters.get($group).value.add($channelId))
#end
$extensions.setSubscriptionFilter({
  "filterGroup": [
    {
      "filters" : $filters
    }
  ]
})
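
For illustration (c1 through c7 are placeholder channel IDs, not part of the original answer): with seven channels, the template produces a single filter group whose filters array has two entries, each staying under the five-value limit for the in operator:

$extensions.setSubscriptionFilter({
  "filterGroup": [
    {
      "filters" : [
        { "fieldName" : "channelId", "operator" : "in", "value" : ["c1", "c2", "c3", "c4", "c5"] },
        { "fieldName" : "channelId", "operator" : "in", "value" : ["c6", "c7"] }
      ]
    }
  ]
})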
You can see this template running here: https://mappingtool.dev/app/appsync/042769cd78b0e928db31212f5ee6aa17
(Note: The Mapping Tool errors on line 15 are a result of the $filters array being dynamically populated. You can safely ignore them.)
Do you want to add server-side filters for GraphQL subscriptions? Amplify now supports server-side filtering for subscriptions; the blog post below explains it:
https://aws.amazon.com/blogs/mobile/announcing-server-side-filters-for-real-time-graphql-subscriptions-with-aws-amplify/

Google Analytics V4 API - Right Syntax for dynamicSegment and metricFilter

I'm trying (using Python) to create a dynamic segment to get all sessions that completed a specific goal.
This is the current syntax I'm using for the metricFilter:
"metricFilter":
{
"metricName":"ga:goal3Completions",
"operator":"NUMERIC_GREATER_THAN",
"comparisonValue":[0]
}
I've also tried other options like ['0'], 0, '0' but with no success.
Here is the response error I'm getting:
"Invalid value at 'report_requests[0].segments[0].dynamic_segment.session_segment.segment_filters[0].simple_segment.or_filters_for_segment.segment_filter_clauses[0].metric_filter.operator' (TYPE_ENUM), "NUMERIC_GREATER_THAN""
Any suggestions on how to fix it?
The operator NUMERIC_GREATER_THAN is only valid for filtering dimensions, not metrics. For a metric filter, use GREATER_THAN:
"metricFilterClauses" : [
{
"filters" : [
{
"metricName" : "ga:goal3Completions",
"operator":"GREATER_THAN",
"comparisonValue": "0"
}
]
}
]
You can find a list of operators in the API docs.
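
Since the question uses Python, here is a minimal sketch (not from the original answer) of where such a clause goes in a v4 batchGet request; the credentials, view ID, and date range are placeholder assumptions:

from googleapiclient.discovery import build

# Assumes OAuth credentials have already been obtained elsewhere.
analytics = build('analyticsreporting', 'v4', credentials=credentials)

body = {
    'reportRequests': [{
        'viewId': 'ga:XXXXXX',  # placeholder view ID
        'dateRanges': [{'startDate': '7daysAgo', 'endDate': 'today'}],
        'metrics': [{'expression': 'ga:sessions'}],
        'metricFilterClauses': [{
            'filters': [{
                'metricName': 'ga:goal3Completions',
                'operator': 'GREATER_THAN',
                'comparisonValue': '0'
            }]
        }]
    }]
}
response = analytics.reports().batchGet(body=body).execute()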

How to make consistent delete in Firebase database when the data lies in multiple paths in a fan out way? [duplicate]

This question already has an answer here:
Firebase -- Bulk delete child nodes
(1 answer)
Closed 6 years ago.
With Firebase, fanning data out to different nodes and paths is the recommended approach, as in the example below from a Firebase sample:
{
  "post-comments" : {
    "PostId1" : {
      "CommentID1" : {
        "author" : "User1",
        "text" : "Comment1!",
        "uid" : "UserId1"
      }
    }
  },
  "posts" : {
    "PostId1" : {
      "author" : "user1",
      "body" : "Firebase Mobile platform",
      "starCount" : 1,
      "stars" : {
        "UserId1" : true
      },
      "title" : "About firebase",
      "uid" : "UserId1"
    }
  },
  "user-posts" : {
    "UserId1" : {
      "PostId1" : {
        "author" : "user1",
        "body" : "Firebase Mobile platform",
        "starCount" : 1,
        "stars" : {
          "UserId1" : true
        },
        "title" : "About firebase",
        "uid" : "UserId1"
      }
    }
  },
  "users" : {
    "UserId1" : {
      "email" : "user1@gmail.com",
      "username" : "user1"
    }
  }
}
With multi-path updates we can atomically update all the paths for a post. But if we want to delete a blog post in the above kind of schema, how can we do it atomically? There is no multi-path delete, I guess. If the client loses its network connection mid-delete, only some of the paths would be deleted!
Also, suppose that when a user is deleted we must remove that user's stars and unstar every post they had starred. This becomes difficult because there is no direct record of which posts a user has starred. Do we need to fan out the starring of posts as well, e.g. with a user-stars node like the one below, so that when deleting a user we know what they have done and can act on it? Is there a better way of handling this?
"user-stars":{
"UserId1":{
"PostID1":true
}
}
In both cases, a way to atomically or consistently delete data from multiple paths (all or nothing) seems to be unavailable.
In that case the only option appears to be putting the delete command in a Firebase queue, which resolves the task only once everything is deleted. That would be eventually consistent, which should be fine, but it is an expensive option because it requires a server. Is there a better way?
You can implement a multi-path delete by writing a value of null to the paths.
So:
var updates = {
  "user-posts/UserId1/PostId1": null,
  "post-comments/PostId1": null,
  "posts/PostId1": null
};
ref.update(updates);
I had already answered this before: Firebase -- Bulk delete child nodes
It's also quite explicitly mentioned in the documentation on deleting data:
You can also delete by specifying null as the value for another write operation such as set() or update(). You can use this technique with update() to delete multiple children in a single API call.
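
To connect this with the user-deletion scenario in the question: assuming the user-stars index sketched there is maintained, a client can read it and build a single multi-path null update. A rough sketch (the uid value and paths are illustrative, using the web SDK):

// Delete a user and unstar their starred posts in one atomic update.
var uid = "UserId1"; // the user being deleted
ref.child("user-stars/" + uid).once("value", function(snapshot) {
  var updates = {};
  snapshot.forEach(function(child) {
    // child.key is a post the user starred; remove the star entry
    updates["posts/" + child.key + "/stars/" + uid] = null;
    // (decrementing starCount safely would still need a transaction)
  });
  updates["user-stars/" + uid] = null;
  updates["users/" + uid] = null;
  ref.update(updates); // all paths succeed or fail together
});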

Firebase: structuring data via per-user copies? Risk of data corruption?

Implementing an Android+Web(Angular)+Firebase app, which has a many-to-many relationship: User <-> Widget (Widgets can be shared to multiple users).
Considerations:
List all the Widgets that a User has.
A User can only see the Widgets which are shared to him/her.
Be able to see all Users to whom a given Widget is shared.
A single Widget can be owned/administered by multiple Users with equal rights (modify Widget and change to whom it is shared). Similar to how Google Drive does sharing to specific users.
One of the approaches to implement fetching (join-style) would be to follow this advice: https://www.firebase.com/docs/android/guide/structuring-data.html ("Joining Flattened Data") via multiple listeners.
However, I have doubts about this approach, because I have discovered that data loading would be worryingly slow (at least on Android). I asked about that in another question: Firebase Android: slow "join" using many listeners, seems to contradict documentation.
So, this question is about another approach: per-user copies of all Widgets that a user has. As used in the Firebase+Udacity tutorial "ShoppingList++" ( https://www.firebase.com/blog/2015-12-07-udacity-course-firebase-essentials.html ).
Their structure looks like this:
In particular this part - userLists:
"userLists" : {
"abc#gmail,com" : {
"-KBt0MDWbvXFwNvZJXTj" : {
"listName" : "Test List 1 Rename 2",
"owner" : "xyz#gmail,com",
"timestampCreated" : {
"timestamp" : 1456950573084
},
"timestampLastChanged" : {
"timestamp" : 1457044229747
},
"timestampLastChangedReverse" : {
"timestamp" : -1457044229747
}
}
},
"xyz#gmail,com" : {
"-KBt0MDWbvXFwNvZJXTj" : {
"listName" : "Test List 1 Rename 2",
"owner" : "xyz#gmail,com",
"timestampCreated" : {
"timestamp" : 1456950573084
},
"timestampLastChanged" : {
"timestamp" : 1457044229747
},
"timestampLastChangedReverse" : {
"timestamp" : -1457044229747
}
},
"-KByb0imU7hFzWTK4eoM" : {
"listName" : "List2",
"owner" : "xyz#gmail,com",
"timestampCreated" : {
"timestamp" : 1457044332539
},
"timestampLastChanged" : {
"timestamp" : 1457044332539
},
"timestampLastChangedReverse" : {
"timestamp" : -1457044332539
}
}
}
},
As you can see, the info for the shopping list "Test List 1 Rename 2" appears in two places (for two users).
And here is the rest for completeness:
{
  "ownerMappings" : {
    "-KBt0MDWbvXFwNvZJXTj" : "xyz@gmail,com",
    "-KByb0imU7hFzWTK4eoM" : "xyz@gmail,com"
  },
  "sharedWith" : {
    "-KBt0MDWbvXFwNvZJXTj" : {
      "abc@gmail,com" : {
        "email" : "abc@gmail,com",
        "hasLoggedInWithPassword" : false,
        "name" : "Agenda TEST",
        "timestampJoined" : {
          "timestamp" : 1456950523145
        }
      }
    }
  },
  "shoppingListItems" : {
    "-KBt0MDWbvXFwNvZJXTj" : {
      "-KBt0heZh-YDWIZNV7xs" : {
        "bought" : false,
        "itemName" : "item",
        "owner" : "xyz@gmail,com"
      }
    }
  },
  "uidMappings" : {
    "google:112894577549422030859" : "abc@gmail,com",
    "google:117151367009479509658" : "xyz@gmail,com"
  },
  "userFriends" : {
    "xyz@gmail,com" : {
      "abc@gmail,com" : {
        "email" : "abc@gmail,com",
        "hasLoggedInWithPassword" : false,
        "name" : "Agenda TEST",
        "timestampJoined" : {
          "timestamp" : 1456950523145
        }
      }
    }
  },
  "users" : {
    "abc@gmail,com" : {
      "email" : "abc@gmail,com",
      "hasLoggedInWithPassword" : false,
      "name" : "Agenda TEST",
      "timestampJoined" : {
        "timestamp" : 1456950523145
      }
    },
    "xyz@gmail,com" : {
      "email" : "xyz@gmail,com",
      "hasLoggedInWithPassword" : false,
      "name" : "Karol Depka",
      "timestampJoined" : {
        "timestamp" : 1456952940258
      }
    }
  }
}
However, before I jump into implementing a similar structure in my app, I would like to clarify a few doubts.
Here are my interrelated questions:
1. In their ShoppingList++ app, they only permit a single "owner", assigned in the ownerMappings node, so no-one else can rename the shopping list. I would like to have multiple "owners"/admins with equal rights. Would such a keep-copies-per-user structure still work for multiple owner/admin users, without risking data corruption/"desynchronization" or "pranks"?
2. Could data corruption arise in a scenario like this: User1 goes offline and renames Widget1 to Widget1Prim. While User1 is offline, User2 shares Widget1 to User3 (User3's copy would not yet be aware of the rename). User1 goes online and sends the info about the rename of Widget1, but only to his own and User2's copies, the ones the client code knew about at rename time, so User3's copy is not updated. Now, in a naive implementation, User3 would have the old name while the others would have the new name. This would probably be rare, but it is still worrying.
3. Could/should the data corruption scenario in point 2 be resolved by having some process (e.g. on App Engine) listen to changes and ensure proper propagation to all user copies?
4. And/or could/should the data corruption scenario in point 2 be resolved by redundantly listening to both sharing changes and renames, and propagating the changes to per-user copies, to handle the special case? Most of the time this would not be necessary, so it could cost performance/bandwidth and complicate the code. Is it worth it?
5. Going forward, once we have multiple versions deployed "in the wild", wouldn't it become unwieldy to evolve the schema, given how much of the data-handling responsibility lies with the client code? For example, if we add a new relationship that older client versions don't yet know about, doesn't that seem fragile? Does that bring us back to the server-side syncer-ensurer process (e.g. on App Engine) described in question 3?
6. Would it be a good idea to also keep a "master reference copy" of every Widget/shopping list, to provide a source of truth for any syncer-ensurer type of operation that updates per-user copies?
7. Any special considerations/traps/blockers regarding rules.json/rules.bolt permissions for data structured in such a (redundant) way?
PS: I know about atomic multi-path updates via updateChildren() - would definitely use them.
Any other hints/observations welcome. TIA.
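
To make the PS concrete: for scenario 2 above, the multi-path rename the client sends would look something like the sketch below (using the userLists structure and keys from the sample; "New name" is a placeholder). The corruption risk is exactly that this update map is built from the set of sharers the client knew at rename time, so User3's newly added copy would be missed.

var updates = {
  "userLists/abc@gmail,com/-KBt0MDWbvXFwNvZJXTj/listName": "New name",
  "userLists/xyz@gmail,com/-KBt0MDWbvXFwNvZJXTj/listName": "New name"
};
ref.update(updates); // updateChildren() on Android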
I suggest having only one copy of a widget for the entire system. It would have an origin user ID and a set of users that have access to it. The widget tree can hold user permissions and change history. Any time a change is made, a branch is added to the tree. Branches can then be "promoted" to the "master", kind of like Git. This would guarantee data integrity because past versions are never changed or deleted. It would also simplify your fetches... I think :)
{
  users: {
    bob: {
      widgets: {
        xxx: {
          widgetKey: xyz,
          permissions: *,
          lastEdit: ...
        }
      }
    },
    ...
  },
  widgets: {
    xyz: {
      masterKey: abc,
      data: {...},
      owner: bob
    },
    ...
  },
  widgetHistory: {
    xyz: {
      v1: {
        data: {...}
      },
      v2: ...,
      v3: ...
    },
    123: {
      ...
    },
    ...
  }
}

Drupal 7 Rules - on cron, check date field and if past set field [Status] from “active” to “ended”

OK... let me start by saying I know there is a similar post here (How to create a Drupal rule to check (on cron) a date field and if passed set field "status" to "ended"?), but the answer on that post does not work. Step 4 ("In the component add the condition 'Data comparison' and select node:type") does not work; that option does not even exist.
What I need to do is this:
On cron: if the content type is event and the end date has passed the current date, change the status field (a select list) from Active to Ended.
I was able to do this using the event "Content is viewed", but I really need it to work when cron runs.
Side note: with the current version I have ("Content is viewed"), it does change Active to Ended, but for some reason it also deletes the title of the node, which is strange because the title field is required by Drupal... any idea why that is happening?
Not sure if it helps but here is an export of what I have done myself:
{ "rules_event_status" : {
"LABEL" : "Event Status",
"PLUGIN" : "reaction rule",
"ACTIVE" : false,
"REQUIRES" : [ "rules", "php" ],
"ON" : [ "node_view" ],
"IF" : [
{ "node_is_of_type" : { "node" : [ "node" ], "type" : { "value" : { "event" : "event" } } } },
{ "AND" : [] },
{ "php_eval" : { "code" : "\/\/dpm(strtotime($node-\u003Efield_event_date_time[LANGUAGE_NONE][0][\u0027value2\u0027]));\r\nif (time() \u003E strtotime($node-\u003Efield_event_date_time[LANGUAGE_NONE][0][\u0027value2\u0027]))\r\n{\r\n return true;\r\n}" } }
],
"DO" : [
{ "data_set" : { "data" : [ "node:field-event-status" ], "value" : "Ended" } }
]
}
}
Any help is very much appreciated.
Thanks
C
To use any custom fields, or fields created by modules other than node, you have to add the condition "Entity has field" to your rule; that makes the field "visible" and accessible for later work.
Side note: I think you can do the date comparison without php_eval. Add another "Entity has field" condition and then create a "Data comparison" condition; there should be tokens available for your needs. A rough sketch of what that might look like follows.
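
For illustration only (the machine names and the value2 data selector below are guesses, not a tested export), the two conditions described above might look roughly like this in the rule's "IF" section:

{ "entity_has_field" : { "entity" : [ "node" ], "field" : "field_event_date_time" } },
{ "data_is" : {
    "data" : [ "node:field-event-date-time:value2" ],
    "op" : "<",
    "value" : { "select" : "site:current-date" }
} }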
Not sure I fully understand the question: rules can be triggered by cron.
You should be able to get it to run when cron executes by setting the rule's "React on event" attribute to "System > Cron maintenance tasks are executed".
Am I missing something?
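
As a sketch (not a full working export): in the export above, that amounts to replacing "ON" : [ "node_view" ] with the cron event, shown below. Note that the cron event provides no node in context, so the rule would also need to fetch the event nodes first (e.g. with a "Fetch entity by property" action or by looping over a component).

"ON" : [ "cron" ]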
