Serverless SNS filterPolicy does not put filter in place when subscription created - amazon-sns

I'm using the following function definition:

missing:
  handler: functions/eeegMissing.handler
  events:
    - sns: arn:aws:sns:us-west-2:xxx
      filterPolicy:
        type:
          - EPILOG_PAGE_DATA_RECEIVED

The SNS topic already exists. When I deploy it, a subscription is created with the endpoint
arn:aws:lambda:us-west-2:xxx:function:eeeg-dev-missing
but the filter policy is blank. I would expect the filter to show as:

{ "type": ["EPILOG_PAGE_DATA_RECEIVED"] }
What am I missing?

You just have a minor syntax error. I have a Serverless function with the following event and it works:

events:
  - sns:
      arn: ${self:custom.devicesTopicArn}
      filterPolicy:
        operation:
          - INSERT
So in your case, it should be:

missing:
  handler: functions/eeegMissing.handler
  events:
    - sns:
        arn: arn:aws:sns:us-west-2:xxx
        filterPolicy:
          type:
            - EPILOG_PAGE_DATA_RECEIVED

To clarify the previous answer: per the Serverless docs, when specifying a topic by ARN (rather than by name), the value must be nested under an explicit arn: key (in addition to the arn: prefix that is part of the AWS resource name itself).

Google Cloud Datastore "out of bounds of 'Number.MAX_SAFE_INTEGER'"

One of the values in Datastore is 7766277975020011920, and there are similar ones.
The error shown in Node.js is:

Error: We attempted to return all of the numeric values, but chain value 7766277975129421920 is out of bounds of 'Number.MAX_SAFE_INTEGER'.

It suggested setting "options.wrapNumbers=true" in the file
"node_modules/@google-cloud/datastore/build/src/entity.js:412:19".
But I am using Google Cloud Run and am not able to edit the files. How can I pass it?
Moving my comment into an answer: according to the Node.js client reference for Datastore, when you run queries or calls for entities, you can pass options as an additional argument, which supports the wrapNumbers: true option:
const [values] = await datastore.runQuery(query, { wrapNumbers: true });
You can use that in your calls to avoid receiving out of bounds errors for large integers. The rest of the supported options are documented in this code snippet from the official repository:
const options = {
  consistency: 'string',
  gaxOptions: {},
  wrapNumbers: true,
};
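To see why the client refuses to return these IDs as plain numbers, here is a standalone illustration in plain JavaScript (not part of the Datastore client itself), using the value from the error message:

```javascript
// JavaScript numbers are IEEE 754 doubles: integers above
// Number.MAX_SAFE_INTEGER (2^53 - 1) can no longer be represented exactly.
const id = '7766277975129421920'; // the value from the error message

const asNumber = Number(id); // silently loses precision
const asBigInt = BigInt(id); // preserves the exact value

console.log(Number.isSafeInteger(asNumber)); // false
console.log(asNumber.toString() === id);     // false – precision was lost
console.log(asBigInt.toString() === id);     // true
```

This is why wrapNumbers hands you the value as a wrapped object (or lets you supply your own cast function) instead of a lossy double.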

Converting SSM Value To Number

I am trying to deploy an SNS subscription through Serverless Framework to AWS using a range filter policy. My resource definition looks something like this:

MySubscription:
  Type: AWS::SNS::Subscription
  Properties:
    TopicArn:
      Ref: MyTopic
    Endpoint: !GetAtt MyQueue.Arn
    Protocol: sqs
    RawMessageDelivery: true
    FilterPolicy:
      percentage:
        - numeric:
            - '>='
            - ${ssm:/path/to/ssm/key}
    RedrivePolicy:
      deadLetterTargetArn: !GetAtt MyQueueDlq.Arn
The problem is that Serverless pulls all SSM values as strings, so the compiled version of the deployment config will be:

- numeric:
    - '>='
    - '0.25'
This will fail to deploy as SNS requires 0.25 to be a number instead of a string. Serverless has a strToBool function, but I haven't seen documentation to indicate there is an equivalent function for converting to a number/float.
I'm sure I can get around this by using env variables, but we store all configurations in SSM and I'm hoping to not have to do a one-off just to get past this issue.
The first solution is not the cleanest, but it will definitely work.
In your project directory, define a custom JS file ssmToNumber.js with the following script:
const AWS = require('aws-sdk');

module.exports = async (serverless) => {
  // Set up the AWS SDK with region and profile
  const { region } = serverless.processedInput.options;
  AWS.config.update({ region });
  if (serverless.processedInput.options['aws-profile']) {
    process.env.AWS_PROFILE = serverless.processedInput.options['aws-profile'];
  }
  const ssm = new AWS.SSM({ apiVersion: '2014-11-06' });

  // Get the SSM parameter details from serverless.yml
  const ssmKey = serverless.service.custom.ssmKeys.percentageNumericThreshold.path;
  const decryption = serverless.service.custom.ssmKeys.percentageNumericThreshold.decryption;

  // Get the SSM parameter value and convert it to a number
  // (parseFloat, not parseInt, so fractional thresholds like 0.25 survive)
  const result = await ssm.getParameter({
    Name: ssmKey,
    WithDecryption: decryption,
  }).promise();

  if (result.Parameter) {
    return parseFloat(result.Parameter.Value);
  }
  throw new Error(`Failed to read SSM parameter for key ${ssmKey}`);
};
Now in serverless.yml define the following values:

custom:
  ssmKeys:
    percentageNumericThreshold:
      path: '/path/to/ssm/key'
      decryption: false
And in the place where you want the numeric value from SSM, simply invoke it this way:

FilterPolicy:
  percentage:
    - numeric:
        - '>='
        - ${file(./ssmToNumber.js)}
How does it work?
Serverless Framework can run any JavaScript/TypeScript file during the build and put its output into the serverless.yml file.
And this is exactly what we do here. We define our ssmToNumber.js script, which simply reads the parameter from SSM, converts it to a number, and returns the value.
It knows which SSM path to use thanks to the custom.ssmKeys section in the serverless.yml file.
Of course, if you want to customise ssmToNumber.js to make it more verbose and fault tolerant, you simply need to edit the JavaScript file.
Another, more elegant way
It requires more work. Check out the official example for Serverless with TypeScript.
As you can see, it's possible to use serverless.ts or serverless.js instead of a YAML file.
It would require some work to refactor the existing YAML file, but writing a helper function that converts the value to a number is a very easy and elegant way to achieve your use case.
It has some downsides, such as problems with directly including other YAML templates, but you can still define your CloudFormation YAML templates in separate files and simply import them from TS/JS code.
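As a minimal sketch of what such a helper could look like inside a serverless.js config (the function name and validation logic here are my own assumptions, not part of the official example), the conversion itself is just strict string-to-number parsing:

```javascript
// Hypothetical helper for a serverless.js config: strictly convert an
// SSM string value to a number, failing loudly on anything non-numeric
// instead of silently producing NaN or 0.
function ssmValueToNumber(value) {
  const n = Number(value);
  if (typeof value !== 'string' || value.trim() === '' || Number.isNaN(n)) {
    throw new Error(`SSM value is not a valid number: ${value}`);
  }
  return n;
}

console.log(ssmValueToNumber('0.25')); // 0.25
```

Because serverless.js is plain code, the returned number lands in the compiled FilterPolicy as a number, not a string, which is exactly what SNS requires.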

How to check permissions of an entity on create in appsync

Sorry for the unspecific title; I am having a hard time describing the problem.
I am using aws-appsync with AWS Cognito for authentication.
I've followed the Amplify docs about the @auth annotation to handle permissions for mutations and queries.
Here is an example of my schema.
A user can create an entry and share it with others. However, those users should only be able to read the entry and should not have permission to edit it.
An entry also has multiple notes. (And some more fields.)
type Entry @model @versioned @auth(rules: [
  { allow: owner },
  { allow: owner, ownerField: "shared", queries: [get, list], mutations: [] }
]) @searchable {
  id: ID!
  date: AWSDate
  updated_at: AWSDateTime
  text: String
  notes: [Note] @connection(name: "EntryNotes")
  shared: [String]!
}
And here is the note
type Note @model @versioned @auth(rules: [{ allow: owner }]) {
  id: ID!
  text: String
  track: Track!
  diary: DiaryEntry @connection(name: "EntryNotes")
}
This works fine so far. But the problem is the Note connection.
Because if you create a note you would create it like this:
mutation makeNote {
createNote (input: {
text: "Hello there!"
noteEntryId: "444c80ee-6fd9-4267-b371-c2ed4a3ccda4"
}) {
id
text
}
}
The problem now is that you can create notes for entries that you do not have access to, if you somehow find out which id they have.
Is there a way to check if you have permissions to the entry before creating the note?
Currently, the best way to do this is via custom resolvers within the Amplify CLI. Specifically, you are able to use AppSync pipeline resolvers to perform the authorization check before creating the note. Your pipeline resolver would contain two functions. The first would look up the entry and compare the owner to the $ctx.identity. The second function would handle writing the record to DynamoDB. You can use the same logic found in build/resolvers/Mutation.createNote.re(q|s).vtl to implement the second function by copying it into the top level resolvers/ directory and then referencing it from your custom resource. After copying the logic, you will want to disable the default createNote mutation by changing @model to @model(mutations: { update: "updateNote", delete: "deleteNote" }).
For more information on how to setup custom resolvers see https://aws-amplify.github.io/docs/cli/graphql#add-a-custom-resolver-that-targets-a-dynamodb-table-from-model. For more information on pipeline resolvers (slightly different than the example in the amplify docs) see https://docs.aws.amazon.com/appsync/latest/devguide/pipeline-resolvers.html. Also see the CloudFormation reference docs for AppSync https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-reference-appsync.html.
Looking towards the future, we are working on a design that would allow you to define auth rules that span #connections. When this is done, it will automatically configure this pattern but there is not yet a set release date.

When are writeFields specified in Firestore requests and what replaces them?

The simulator now displays an error message when trying to access request.writeFields.
Before that writeFields in Firestore Security Rules did just not work in real requests.
The message states the following:
The simulator only simulates client SDK calls; request.writeFields is always null for these simulations
Does this mean that writeFields are only specified in HTTP requests?
The documentation only states this:
writeFields: List of fields being written in a write request.
A problem that arises from this
I am searching for something that replaces this property because it is "always null".
To my knowledge, request.resource.data in update also contains fields that are not in the request but are already in the document.
Example
// Existing document:
document:
  - name: "Peter"
  - age: 52
  - profession: "Baker"

// Update call:
document:
  - age: 53

// request.resource.data in allow update contains the following:
document:
  - name: "Peter"
  - age: 53
  - profession: "Baker"
But I only want age.
EDIT Mar 4, 2020: Map.diff() replaces writeFields functionality
The Map.diff() function gives the difference between two maps:
https://firebase.google.com/docs/reference/rules/rules.Map#diff
To use it in rules:
// Returns a MapDiff object
map1.diff(map2)
A MapDiff object has the following methods:

addedKeys()     // a set of strings of keys that are in after but not before
removedKeys()   // a set of strings of keys that are in before but not after
changedKeys()   // a set of strings of keys that are in both maps but have different values
affectedKeys()  // a set of strings that's the union of addedKeys() + removedKeys() + changedKeys()
unchangedKeys() // a set of strings of keys that are in both maps and have the same value in both
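To make the semantics concrete, the behaviour of affectedKeys() can be mimicked in plain JavaScript (this is only an illustration of what the rules runtime computes, not the actual implementation):

```javascript
// Compute the keys that were added, removed, or changed between two
// flat objects – mirroring what MapDiff.affectedKeys() returns.
function affectedKeys(before, after) {
  const keys = new Set([...Object.keys(before), ...Object.keys(after)]);
  return [...keys].filter((k) => before[k] !== after[k]).sort();
}

// The example from the question: only "age" changes in the update.
const before = { name: 'Peter', age: 52, profession: 'Baker' };
const after  = { name: 'Peter', age: 53, profession: 'Baker' };

console.log(affectedKeys(before, after)); // [ 'age' ]
```

This is exactly why the diff-based rule below answers the original question: it isolates the fields actually touched by the write, which writeFields was meant to expose.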
For example:
// This rule only allows updates where "a" is the only field affected
request.resource.data.diff(resource.data).affectedKeys().hasOnly(["a"])
EDIT Oct 4, 2018: writeFields is no longer supported by Firestore and its functionality will eventually be removed.
writeFields is still valid, as you can see from the linked documentation. What the error message in the simulator is telling you is that it's unable to simulate writeFields, as it only works with requests coming from client SDKs. The simulator itself seems to be incapable of simulating requests exactly as required in order for writeFields to be tested. So, if you write rules that use writeFields, you'll have to test them by using a client SDK to perform the read or write that would trigger the rule.

Iron router data findOne returning undefined

I have a streams publication and subscription setup but for some reason if I do the following in my route the view gets no data:
Router.route '/seasons/:season/episodes/:episode',
  name: 'episode'
  action: ->
    @render(
      'episode',
      data: ->
        Streams.findOne({season: @params.season, episode: @params.episode})
    )
If I log the params they are there as expected, and doing a findOne manually either via the db or the browser console returns the data as expected.
If I remove the params so it just does Streams.findOne() the data returns the first stream from the database and the view has access to it as expected. I'm really not sure what's going on here.
You probably need to wait on your streams publication before trying to access the data: the Pub/Sub mechanism in Meteor is asynchronous, so when you subscribe to some data, you don't instantly get it back in the browser because of the underlying client/server latency.
Try reorganizing your code as follows:
Router.route '/seasons/:season/episodes/:episode',
  name: 'episode'
  template: 'episode'
  data: ->
    Streams.findOne({season: @params.season, episode: @params.episode})
  waitOn: ->
    Meteor.subscribe 'streams', @params.season, @params.episode
