How can I filter a subscription using a custom resolver - aws-amplify

I am working on a messaging app using AWS AppSync.
I have the following message type...
type Message
  @model
  @auth(
    rules: [
      { allow: groups, groups: ["externalUser"], operations: [] }
    ]
  ) {
  id: ID!
  channelId: ID!
  senderId: ID!
  channel: Channel @connection(fields: ["channelId"])
  createdAt: AWSDateTime!
  text: String
}
And I have a subscription onCreateMessage. I need to filter the results to only the channels the user is in, so I get a list of channels from a permissions table and add the following to my response mapping template:
$extensions.setSubscriptionFilter({
  "filterGroup": [
    {
      "filters" : [
        {
          "fieldName" : "channelId",
          "operator" : "in",
          "value" : $context.result.channelIds
        }
      ]
    }
  ]
})
$util.toJson($messageResult)
And it works great. But if a user is in more than 5 channels, I get the following error.
{
  "message": "Connection failed: {"errors":[{"message":"subscription exceeds maximum value limit 5 for operator `in`.","errorCode":400}]}"
}
I am new to VTL. So my question is: how can I break that filter up into multiple OR'd filters?

According to Creating enhanced subscription filters, "multiple rules in a filter are evaluated using AND logic, while multiple filters in a filter group are evaluated using OR logic".
Therefore, as I understand it, you just need to split $context.result.channelIds into groups of 5 and add an object to the filters array for each group.
Here is a VTL template that will do this for you:
#set($filters = [])
#foreach($channelId in $context.result.channelIds)
  #set($group = $foreach.index / 5)
  #if($filters.size() < $group + 1)
    $util.qr($filters.add({
      "fieldName" : "channelId",
      "operator" : "in",
      "value" : []
    }))
  #end
  $util.qr($filters.get($group).value.add($channelId))
#end
$extensions.setSubscriptionFilter({
  "filterGroup": [
    {
      "filters" : $filters
    }
  ]
})
You can see this template running here: https://mappingtool.dev/app/appsync/042769cd78b0e928db31212f5ee6aa17
(Note: The Mapping Tool errors on line 15 are a result of the $filters array being dynamically populated. You can safely ignore them.)
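To make the grouping concrete: with seven channel IDs, the evaluated template would produce a filter group along these lines (the channel IDs are placeholders):

{
  "filterGroup": [
    {
      "filters": [
        { "fieldName": "channelId", "operator": "in", "value": ["c1", "c2", "c3", "c4", "c5"] },
        { "fieldName": "channelId", "operator": "in", "value": ["c6", "c7"] }
      ]
    }
  ]
}

Each filter stays within the five-value limit for in, and the filters are OR'd together, so a message matches if its channelId appears in any group.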

Do you want to add server-side filtering for GraphQL subscriptions? If so: Amplify now supports server-side filters for subscriptions. The following blog post explains the feature:
https://aws.amazon.com/blogs/mobile/announcing-server-side-filters-for-real-time-graphql-subscriptions-with-aws-amplify/
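With the enhanced filtering described there, a client can pass a filter argument directly in the subscription. A minimal sketch, assuming the Amplify-generated filter input for the Message model from the question (the channel ID is a placeholder):

subscription OnCreateMessage {
  onCreateMessage(filter: { channelId: { eq: "channel-123" } }) {
    id
    channelId
    text
  }
}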

Related

on_conflict returns unknown argument
I'm new to Hasura and have tried looking at multiple how-tos for on_conflict. I ran the mutation from the API explorer and from the frontend, and tried upsert_users (the error suggested changing it to insert):
mutation upsert_users {
  insert_users(
    objects: [{
      auth0_id: "iexistindb",
      name: "somename"
    }],
    on_conflict: {
      constraint: users_pkey,
      update_columns: [last_seen, name]
    }
  ) {
    affected_rows
  }
}
I expected it to update the users table if the auth0_id already exists.
So I just encountered this now: I had the on_conflict / update_columns, but hadn't given update permissions to the role, only insert. Hasura only exposes on_conflict to roles that have update permission on the table, which is why it shows up as an unknown argument.
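For reference, the fix expressed as a Hasura metadata sketch (this is an assumption about your setup: the role name user, columns, and metadata format may differ, and the same change can be made in the console under the table's Permissions tab):

# Hypothetical sketch; adjust role and columns to your setup
- table:
    schema: public
    name: users
  update_permissions:
    - role: user
      permission:
        columns: [last_seen, name]
        filter: {}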

AppSync BatchDeleteItem does not execute properly

I'm working on a React Native application with AppSync, and the following is my schema for the problem:
type JoineeDeletedConnection {
  items: [Joinee]
  nextToken: String
}
type Mutation {
  deleteJoinee(ids: [ID!]): [Joinee]
}
In the request mapping template of the deleteJoinee resolver, I have the following (per the tutorial at https://docs.aws.amazon.com/appsync/latest/devguide/tutorial-dynamodb-batch.html):
#set($ids = [])
#foreach($id in ${ctx.args.ids})
  #set($map = {})
  $util.qr($map.put("id", $util.dynamodb.toString($id)))
  $util.qr($ids.add($map))
#end
{
  "version" : "2018-05-29",
  "operation" : "BatchDeleteItem",
  "tables" : {
    "JoineesTable": $util.toJson($ids)
  }
}
...and in the response mapping template of the resolver:
$util.toJson($ctx.result.data.JoineesTable)
The problem is, when I run the mutation, I get an empty result and nothing is deleted from the database either:
// calling the mutation
mutation DeleteJoinee {
  deleteJoinee(ids: ["xxxx", "xxxx"]) {
    id
  }
}
// returns
{
  "data": {
    "deleteJoinee": [
      null
    ]
  }
}
I was finally able to solve this puzzle, thanks to the answer mentioned here, which pointed me in the right direction.
Although JoineesTable's role did appear as a trusted entity in the IAM 'Roles' section, it wasn't working for some reason. Looking into it more, I noticed that the existing policy had the following actions by default:
"Action": [
"dynamodb:DeleteItem",
"dynamodb:GetItem",
"dynamodb:PutItem",
"dynamodb:Query",
"dynamodb:Scan",
"dynamodb:UpdateItem"
]
Once I added the following two actions to the list, things started working:
"dynamodb:BatchWriteItem",
"dynamodb:BatchGetItem"
Thanks to @Vasileios Lekakis and @Ionut Trestian on this AppSync quest!

Publishing role names into a table with Meteor alanning:roles and aldeed:tabular

I'm having trouble displaying the created roles in a table. I'm using alanning:roles and aldeed:tabular. To create the table I have:
TabularTables.RolesAdmin = new Tabular.Table({
  name: "Roles",
  collection: Meteor.roles,
  pub: "rolesadmin",
  allow: function (userId) {
    return Roles.userIsInRole(userId, 'admin');
  },
  columns: [
    {
      data: "name",
      title: "Role Name",
    },
  ],
});
And the publication looks like this:
Meteor.publish('rolesadmin', function() {
  return Meteor.roles.find({}, { fields: { "name": 1 } });
});
When running the app, the table only displays "Processing...", so there must be an error and it is not able to access/find the data.
I'm getting the following exception in the server terminal:
Exception from sub rolesadmin id 6c6x3mDzweP8MbB9A
Error: Did not check() all arguments during publisher 'rolesadmin'
If I check db.roles.find() in mongo, there is no role with the id 6c6x3mDzweP8MbB9A. What does this error refer to?
From the meteor-tabular docs:
To tell Tabular to use your custom publish function, pass the
publication name as the pub option. Your function:
MUST accept and check three arguments: tableName, ids, and fields
MUST publish all the documents where _id is in the ids array.
MUST do any necessary security checks
SHOULD publish only the fields listed in the fields object, if one is provided.
MAY also publish other data necessary for your table
So it looks like you'll need to account for those three arguments mentioned in the documentation appropriately. I'm not sure you actually need a custom pub for this, though, based on what you are publishing.
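For illustration, here is a minimal sketch of a publication that satisfies those requirements, based on the collection and role check from the question (check and Match come from Meteor's check package):

// Sketch of a tabular-compatible publication, per the
// meteor-tabular requirements quoted above.
Meteor.publish('rolesadmin', function (tableName, ids, fields) {
  // MUST accept and check all three arguments
  check(tableName, String);
  check(ids, Array);
  check(fields, Match.Optional(Object));

  // MUST do any necessary security checks
  if (!Roles.userIsInRole(this.userId, 'admin')) {
    return this.ready();
  }

  // MUST publish the documents whose _id is in the ids array,
  // SHOULD limit the published fields to those requested
  return Meteor.roles.find({ _id: { $in: ids } }, { fields: fields });
});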

Proper way to apply custom analyzers to fields with Elasticsearch: apply multiple analyzers to one field, or multiple fields with single analyzers?

EDIT: Added my current query to the end
I have a large database of human names and am using Elasticsearch (via Symfony2's FOSElasticaBundle and Elastica) to do smarter searching of the names.
I have a full name field, and I want to index people's names with standard, ngram, and phonetic analyzers.
I've got the analyzers set up in Elasticsearch, and I can begin dumping data into the index. I'm wondering if the way I'm doing it here is the best way, or if I can apply the analyzers to a single field. The reason I ask is that when I do a GET /website/person/:id, I see all three fields in plain text. I was expecting to see the analyzed data here, although I guess it must only exist in an inverted index rather than on the document. The examples I've seen use multiple fields, but is it possible to add multiple analyzers to a single field?
My config.yml:
fos_elastica:
    clients:
        default: { host: %elastica_host%, port: %elastica_port% }
    indexes:
        website:
            settings:
                index:
                    analysis:
                        analyzer:
                            phonetic_analyzer:
                                type: "custom"
                                tokenizer: "lowercase"
                                filter: ["name_metaphone", "lowercase", "standard"]
                            ngram_analyzer:
                                type: "custom"
                                tokenizer: "lowercase"
                                filter: ["name_ngram"]
                        filter:
                            name_metaphone:
                                encoder: "metaphone"
                                replace: false
                                type: "phonetic"
                            name_ngram:
                                type: "nGram"
                                min_gram: 2
                                max_gram: 4
            client: default
            finder: ~
            types:
                person:
                    mappings:
                        name: ~
                        nameNGram:
                            analyzer: ngram_analyzer
                        namePhonetic:
                            analyzer: phonetic_analyzer
When I check the mapping it looks good:
{
  "website" : {
    "mappings" : {
      "person" : {
        "_meta" : {
          "model" : "acme\\websiteBundle\\Entity\\Person"
        },
        "properties" : {
          "name" : {
            "type" : "string",
            "store" : true
          },
          "nameNGram" : {
            "type" : "string",
            "store" : true,
            "analyzer" : "ngram_analyzer"
          },
          "namePhonetic" : {
            "type" : "string",
            "store" : true,
            "analyzer" : "phonetic_analyzer"
          }
        }
      }
    }
  }
}
When I GET the document, I see that all three fields are stored in plain text... maybe I need to set "store": false for these extra fields, or is it not being analyzed properly?
{
  "_index" : "website",
  "_type" : "person",
  "_id" : "1",
  "_version" : 1,
  "found" : true,
  "_source" : {
    "name" : "John Doe",
    "namePhonetic" : "John Doe",
    "nameNGram" : "John Doe"
  }
}
EDIT: The solution I'm currently using, which still requires some refinement but tests well for most names:
// Create the query object
$boolQuery = new \Elastica\Query\Bool();

// Boost exact name matches
$exactMatchQuery = new \Elastica\Query\Match();
$exactMatchQuery->setFieldParam('name', 'query', $name);
$exactMatchQuery->setFieldParam('name', 'boost', 10);
$boolQuery->addShould($exactMatchQuery);

// Create a basic Levenshtein distance query
$levenshteinMatchQuery = new \Elastica\Query\Match();
$levenshteinMatchQuery->setFieldParam('name', 'query', $name);
$levenshteinMatchQuery->setFieldParam('name', 'fuzziness', 1);
$boolQuery->addShould($levenshteinMatchQuery);

// Create a phonetic query, seeing if the name SOUNDS LIKE the name that was searched
$phoneticMatchQuery = new \Elastica\Query\Match();
$phoneticMatchQuery->setFieldParam('namePhonetic', 'query', $name);
$boolQuery->addShould($phoneticMatchQuery);

// Create an NGRAM query
$nGramMatchQuery = new \Elastica\Query\Match();
$nGramMatchQuery->setFieldParam('nameNGram', 'query', $name);
$nGramMatchQuery->setFieldParam('nameNGram', 'boost', 2);
$boolQuery->addMust($nGramMatchQuery);

return $boolQuery;
No, you can't have multiple analyzers on a single field. The way you are doing it, applying multiple analyzers by having a different field name for the same data, is the correct approach.
The reason you are also getting namePhonetic and nameNGram back in the _source field is the use of
"store" : true
It tells Elasticsearch that you want those extra fields in the response as well. Use
"store" : false
and that will solve your problem.
If you want to see the analyzed data for a field, you can use the _analyze API of Elasticsearch:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/indices-analyze.html
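For example, something along these lines (1.x-style API to match the linked docs; index and analyzer names taken from your mapping):

curl -XGET 'localhost:9200/website/_analyze?analyzer=phonetic_analyzer' -d 'John Doe'

This returns the tokens the analyzer actually produces, which is an easy way to verify the metaphone and ngram output.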
And yes, these fields are stored in the inverted index after analysis.
I hope I have answered all your doubts. Please let me know if you need more help on this.
Thanks

Drupal 7 Rules - on cron, check date field and if past set field [Status] from “active” to “ended”

OK... let me start by saying I know there is a similar post here (How to create a Drupal rule to check (on cron) a date field and if passed set field "status" to "ended"?) but the answer on that post does not work. Step 4 (in the component, add the condition 'Data comparison' and select node:type) does not work or even exist as an option.
What I need to do is this:
On Cron > If content type is event and the end date has passed the current date then change the status field from Active to Ended. (select list)
I was able to do this by using the event "Content is viewed", but I really need it to work when cron is run.
Side note: with the current version I have (content is viewed), it does change Active to Ended, but for some reason it also deletes the title of the node, which is strange because the title field is required by Drupal... any idea why that is happening?
Not sure if it helps but here is an export of what I have done myself:
{ "rules_event_status" : {
"LABEL" : "Event Status",
"PLUGIN" : "reaction rule",
"ACTIVE" : false,
"REQUIRES" : [ "rules", "php" ],
"ON" : [ "node_view" ],
"IF" : [
{ "node_is_of_type" : { "node" : [ "node" ], "type" : { "value" : { "event" : "event" } } } },
{ "AND" : [] },
{ "php_eval" : { "code" : "\/\/dpm(strtotime($node-\u003Efield_event_date_time[LANGUAGE_NONE][0][\u0027value2\u0027]));\r\nif (time() \u003E strtotime($node-\u003Efield_event_date_time[LANGUAGE_NONE][0][\u0027value2\u0027]))\r\n{\r\n return true;\r\n}" } }
],
"DO" : [
{ "data_set" : { "data" : [ "node:field-event-status" ], "value" : "Ended" } }
]
}
}
Any help is very much appreciated.
Thanks
C
To use any custom fields or fields created by modules other than node, you have to add the condition "entity has field" to your rule, which will make that field "visible" and accessible for later work.
Side note: I think you can do the date comparison without php_eval; just add another "entity has field" condition and then create a "data comparison" condition. There should be tokens available for your needs.
Not sure I fully understand the question: rules can be triggered by cron.
You should be able to get the rule to run when cron executes by setting its "React on event" attribute to "System > Cron maintenance tasks are executed".
Am I missing something?
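In the export from the question, that would mean replacing the "ON" event, roughly like this (assuming "cron" is the machine name of that event in your Rules version):

"ON" : [ "cron" ]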
