I am trying to deploy an SNS subscription through Serverless Framework to AWS using a range filter policy. My resource definition looks something like this:
MySubscription:
  Type: AWS::SNS::Subscription
  Properties:
    TopicArn:
      Ref: MyTopic
    Endpoint: !GetAtt MyQueue.Arn
    Protocol: sqs
    RawMessageDelivery: true
    FilterPolicy:
      percentage:
        - numeric:
            - '>='
            - ${ssm:/path/to/ssm/key}
    RedrivePolicy:
      deadLetterTargetArn: !GetAtt MyQueueDlq.Arn
The problem is that Serverless pulls all SSM values as strings, so the compiled version of the deployment config will be:
- numeric:
    - '>='
    - '0.25'
This will fail to deploy as SNS requires 0.25 to be a number instead of a string. Serverless has a strToBool function, but I haven't seen documentation to indicate there is an equivalent function for converting to a number/float.
I'm sure I can get around this by using env variables, but we store all configurations in SSM and I'm hoping to not have to do a one-off just to get past this issue.
The first solution is not the cleanest, but it will definitely work.
In your project directory, create a custom JS file ssmToNumber.js with the following script:
const AWS = require('aws-sdk');

module.exports = async (serverless) => {
  // Set up the AWS SDK with region and profile
  const { region } = serverless.processedInput.options;
  AWS.config.update({ region });
  if (serverless.processedInput.options['aws-profile']) {
    process.env.AWS_PROFILE = serverless.processedInput.options['aws-profile'];
  }
  const ssm = new AWS.SSM({ apiVersion: '2014-11-06' });

  // Get the SSM parameter details from serverless.yml
  const ssmKey = serverless.service.custom.ssmKeys.percentageNumericThreshold.path;
  const decryption = serverless.service.custom.ssmKeys.percentageNumericThreshold.decryption;

  // Get the SSM parameter value and convert it to a number
  // (parseFloat, since the filter policy threshold can be a fraction like 0.25)
  const result = await ssm.getParameter({
    Name: ssmKey,
    WithDecryption: decryption,
  }).promise();
  if (result.Parameter) {
    return parseFloat(result.Parameter.Value);
  }
  throw new Error(`Failed to read SSM parameter for key ${ssmKey}`);
};
Now define the following values in serverless.yml:
custom:
  ssmKeys:
    percentageNumericThreshold:
      path: '/path/to/ssm/key'
      decryption: false
And in the place where you want the number value from SSM, simply invoke it this way:
FilterPolicy:
  percentage:
    - numeric:
        - '>='
        - ${file(./ssmToNumber.js)}
How does it work?
Serverless Framework can execute any JavaScript/TypeScript file during the build and put its output into the serverless.yml file.
And this is exactly what we do here. We define our ssmToNumber.js script, which simply reads the parameter from SSM, converts it to a number, and returns the value.
It knows which SSM path to use thanks to the custom.ssmKeys section in the serverless.yml file.
Of course, if you want to make ssmToNumber.js more verbose and fault tolerant, you simply need to edit the JavaScript file.
Another, more elegant way
It requires more work. Check out the official example for Serverless with TypeScript.
As you can see, it's possible to use serverless.ts or serverless.js instead of a YAML file.
It would require some work to refactor your existing YAML file, but writing a helper function that converts the value to a number is an easy and elegant way to achieve your use case (a minimal sketch follows below).
It has some downsides, such as difficulties with directly including other YAML templates, but you can still define your CloudFormation YAML templates in separate files and then import them from the TS/JS code.
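For illustration, a minimal sketch of what that could look like, assuming aws-sdk v2 is available at packaging time and a Serverless version that accepts an async config export; the service name, runtime, and queue/topic names are placeholders, not a drop-in replacement for your full config:

// serverless.ts -- minimal sketch, assuming aws-sdk v2 and async config support
import { SSM } from 'aws-sdk';

// Hypothetical helper: read an SSM parameter and return it as a real number
async function ssmNumber(name: string): Promise<number> {
  const ssm = new SSM({ region: process.env.AWS_REGION || 'us-east-1' });
  const result = await ssm.getParameter({ Name: name }).promise();
  return parseFloat(result.Parameter!.Value!);
}

module.exports = (async () => ({
  service: 'my-service',
  provider: { name: 'aws', runtime: 'nodejs14.x' },
  resources: {
    Resources: {
      MySubscription: {
        Type: 'AWS::SNS::Subscription',
        Properties: {
          TopicArn: { Ref: 'MyTopic' },
          Endpoint: { 'Fn::GetAtt': ['MyQueue', 'Arn'] },
          Protocol: 'sqs',
          RawMessageDelivery: true,
          FilterPolicy: {
            // The SSM value arrives here as a number, not a string
            percentage: [{ numeric: ['>=', await ssmNumber('/path/to/ssm/key')] }],
          },
          RedrivePolicy: {
            deadLetterTargetArn: { 'Fn::GetAtt': ['MyQueueDlq', 'Arn'] },
          },
        },
      },
    },
  },
}))();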
One of the values in Datastore is 7766277975020011920, and there are other similar ones.
The error shown in Node.js is:
Error: We attempted to return all of the numeric values, but chain value 7766277975129421920 is out of bounds of 'Number.MAX_SAFE_INTEGER'.
It suggested setting "options.wrapNumbers=true" in the file
"node_modules/@google-cloud/datastore/build/src/entity.js:412:19".
But I am using Cloud Run and am not able to edit the files. How can I pass this option?
Moving my comment into an answer: according to the Node.js client reference for Datastore, when you run queries or fetch entities, you can pass options as an additional argument, which supports the wrapNumbers: true option:
const [values] = await datastore.runQuery(query, { wrapNumbers: true });
You can use that in your calls to avoid receiving out of bounds errors for large integers. The rest of the supported options are documented in this code snippet from the official repository:
const options = {
  consistency: 'string',
  gaxOptions: {},
  wrapNumbers: true,
};
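The same options object also works for entity lookups; below is a minimal sketch with a hypothetical kind and key name. With wrapNumbers set, large integers come back wrapped instead of being silently coerced into lossy JavaScript numbers.

const { Datastore } = require('@google-cloud/datastore');
const datastore = new Datastore();

async function readBigValue() {
  // Hypothetical kind and name, purely for illustration
  const key = datastore.key(['MyKind', 'some-entity-id']);

  // Passing wrapNumbers avoids the MAX_SAFE_INTEGER error
  const [entity] = await datastore.get(key, { wrapNumbers: true });
  console.log(entity);
}

readBigValue();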
I need to trigger a Data Fusion pipeline located on a GCP project called myDataFusionProject through a Data Fusion operator (CloudDataFusionStartPipelineOperator) inside a DAG whose Cloud Composer instance is located on another project called myCloudComposerProject.
I have used the official documentation as well as the source code to write the code that roughly resembles the below snippet:
LOCATION = "someLocation"
PIPELINE_NAME = "myDataFusionPipeline"
INSTANCE_NAME = "myDataFusionInstance"
RUNTIME_ARGS = {"output.instance":"someOutputInstance", "input.dataset":"someInputDataset", "input.project":"someInputProject"}
start_pipeline = CloudDataFusionStartPipelineOperator(
    location=LOCATION,
    pipeline_name=PIPELINE_NAME,
    instance_name=INSTANCE_NAME,
    runtime_args=RUNTIME_ARGS,
    task_id="start_pipeline",
)
My issue is that, every time I trigger the DAG, Cloud Composer looks for myDataFusionInstance inside myCloudComposerProject instead of myDataFusionProject, which gives an error like this one:
googleapiclient.errors.HttpError: <HttpError 404 when requesting https://datafusion.googleapis.com/v1beta1/projects/myCloudComposerProject/locations/someLocation/instances/myDataFusionInstance?alt=json returned "Resource 'projects/myCloudComposerProject/locations/someLocation/instances/myDataFusionInstance' was not found". Details: "[{'#type': 'type.googleapis.com/google.rpc.ResourceInfo', 'resourceName': 'projects/myCloudComposerProject/locations/someLocation/instances/myDataFusionInstance'}]"
So the question is: how can I force my operator to use the Data Fusion project instead of the Cloud Composer project? I suspect I may do that by adding a new runtime argument but I'm not sure how to do that.
Last piece of information: the Data Fusion pipeline simply extracts data from a BigQuery source and sends everything to a BigTable sink.
As a recommendation when working with operators in Airflow, check the classes that implement the operators, since the documentation may lack some information due to versioning.
As commented, if you check CloudDataFusionStartPipelineOperator you will find that it makes use of a hook that gets the instance based on a project id. This project id is optional, so you can pass your own.
class CloudDataFusionStartPipelineOperator(BaseOperator):
    ...
    def __init__(
        ...
        project_id: Optional[str] = None,  # NOT MENTIONED IN THE DOCUMENTATION
        ...
    ) -> None:
        ...
        self.project_id = project_id
        ...

    def execute(self, context: dict) -> str:
        ...
        instance = hook.get_instance(
            instance_name=self.instance_name,
            location=self.location,
            project_id=self.project_id,  # defaults to your (Composer) project if not set
        )
        api_url = instance["apiEndpoint"]
        ...
Adding the parameter to your operator call should fix your issue.
start_pipeline = CloudDataFusionStartPipelineOperator(
    location=LOCATION,
    pipeline_name=PIPELINE_NAME,
    instance_name=INSTANCE_NAME,
    runtime_args=RUNTIME_ARGS,
    project_id=PROJECT_ID,
    task_id="start_pipeline",
)
As a final note, besides the official documentation site, you can also explore the Apache Airflow source files on GitHub.
Sorry for the unspecific title; I am having a hard time describing the problem.
I am using aws-appsync with AWS Cognito for authentication.
I've followed the Amplify docs about the @auth directive to handle permissions for mutations and queries.
Here is an example of my schema.
A user can create an entry and share it with others. However, the people it is shared with should only be able to read the entry and should not have permission to edit it.
An entry also has multiple notes. (And some more fields)
type Entry @model @versioned @auth(rules: [
  { allow: owner },
  { allow: owner, ownerField: "shared", queries: [get, list], mutations: [] }
]) @searchable {
  id: ID!
  date: AWSDate
  updated_at: AWSDateTime
  text: String
  notes: [Note] @connection(name: "EntryNotes")
  shared: [String]!
}
And here is the Note type:
type Note @model @versioned @auth(rules: [{ allow: owner }]) {
  id: ID!
  text: String
  track: Track!
  diary: DiaryEntry @connection(name: "EntryNotes")
}
This works fine so far. But the problem is the Note connection.
Because if you create a note you would create it like this:
mutation makeNote {
  createNote(input: {
    text: "Hello there!"
    noteEntryId: "444c80ee-6fd9-4267-b371-c2ed4a3ccda4"
  }) {
    id
    text
  }
}
The problem now is that you can create notes for entries that you do not have access to, if you somehow find out which id they have.
Is there a way to check if you have permissions to the entry before creating the note?
Currently, the best way to do this is via custom resolvers within the Amplify CLI. Specifically, you can use AppSync pipeline resolvers to perform the authorization check before creating the note. Your pipeline resolver would contain two functions. The first would look up the entry and compare its owner to $ctx.identity; a sketch of that function is included below. The second function would handle writing the record to DynamoDB. You can reuse the logic found in build/resolvers/Mutation.createNote.req.vtl and Mutation.createNote.res.vtl to implement the second function by copying those files into the top-level resolvers/ directory and then referencing them from your custom resource. After copying the logic, you will want to disable the default createNote mutation by changing @model to @model(mutations: { update: "updateNote", delete: "deleteNote" }).
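For illustration only, the first (authorization) function could look roughly like the sketch below. It assumes the Entry DynamoDB table is configured as that function's data source, that the input field noteEntryId carries the entry's id (as in the mutation above), and that owner auth is backed by Cognito User Pools; the resolvers Amplify actually generates may differ, so treat this as a starting point rather than exact output.

## Pipeline function 1, request mapping template (sketch): load the Entry the note points at
{
  "version": "2017-02-28",
  "operation": "GetItem",
  "key": {
    "id": $util.dynamodb.toDynamoDBJson($ctx.args.input.noteEntryId)
  }
}

## Pipeline function 1, response mapping template (sketch): allow only the owner or users listed in "shared"
#if($util.isNull($ctx.result))
  $util.error("Entry not found", "NotFound")
#end
#set($username = $ctx.identity.username)
#if($ctx.result.owner != $username && !$ctx.result.shared.contains($username))
  $util.unauthorized()
#end
$util.toJson($ctx.result)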
For more information on how to set up custom resolvers see https://aws-amplify.github.io/docs/cli/graphql#add-a-custom-resolver-that-targets-a-dynamodb-table-from-model. For more information on pipeline resolvers (slightly different than the example in the Amplify docs) see https://docs.aws.amazon.com/appsync/latest/devguide/pipeline-resolvers.html. Also see the CloudFormation reference docs for AppSync: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-reference-appsync.html.
Looking towards the future, we are working on a design that would allow you to define auth rules that span @connection fields. When this is done, it will automatically configure this pattern, but there is not yet a set release date.
I have a template to create a key vault and a secret within it. I also have a service fabric template, that requires 3 things from the key vault: the Vault URI, the certificate URL, and the certificate thumbprint.
If I create the key vault and secret with PowerShell, it is easy to manually copy these 3 things from the output and paste them into the parameters of the service fabric template. However, since this cert has the same life cycle as the service fabric cluster, what I am hoping to do is link from the key vault template to the service fabric template, so that when I deploy the key vault and secret (which, by the way, is a key that has been base64-encoded to a string; I could store this as a secret in yet another key vault...), I can pass the 3 values on as parameters.
So I have two questions.
How do I retrieve the 3 values in the ARM template? PowerShell outputs them as the 'ResourceId' of the key vault, the 'Id' of the secret, and the 'Version' of the secret. My attempt:
"sourceVaultValue": {
"value": "resourceId('Microsoft.KeyVault/vaults/', parameters('keyVaultName')"
},
"certificateThumbprint": {
"value": "[listKeys(resourceId('secrets', parameters('secretName')), '2015-06-01')"
},
"certificateUrlValue": { "value": "[concat('https://', parameters('keyVaultName'), '.vault.azure.net:443/secrets/', parameters('secretName'), resourceId('secrets', parameters('secretName')))]"
But the certificateUrlValue is incorrect. You can see I tried with and without listKeys, but neither seemed to work... (The thumbprint is within the certUrl itself)
If I were to get the correct values, I would like to try to pass them as parameters to the next template. The template in question has quite a few more parameters than the 3 I want to pass, however. So is it possible to have a parametersLink element to link to the parameter file, as well as a parameters element for just those 3? Or is there an intended way of doing this?
Cheers
Ok, try this when you get back to the keyboard...
1) For the URI, you can use an output like:
"secretUri": {
"type": "string",
"value": "[reference(resourceId('Microsoft.KeyVault/vaults/secrets', parameters('keyVaultName'), parameters('secretName'))).secretUri]"
}
For #2, you cannot mix and match the link and some values, it's one or the other.
A couple of thoughts on how you could do this (it depends a bit on how you want to structure the rest of your deployment)...
One option is, instead of nesting the SF template, to deploy both in the same template, since they have the same lifecycle.
Another is, instead of nesting the SF template, to nest the KV template and reference the outputs of that deployment in the SF template (a rough sketch of this follows below)...
Aside from that I can't think of anything elegant: since you want to pass "dynamic" params to a nested deployment, really the only way to do that is to dynamically write the param file behind the link or pass all the params into the deployment resource.
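A rough sketch of that second option, purely for illustration: nest the KV template, then feed its outputs into the SF deployment. The template URIs, parameter names, and output names (vaultResourceId, secretUri, certThumbprint) are placeholders you would have to define in your own KV template, not values taken from your existing files.

{
  "resources": [
    {
      "type": "Microsoft.Resources/deployments",
      "apiVersion": "2019-10-01",
      "name": "keyVaultDeployment",
      "properties": {
        "mode": "Incremental",
        "templateLink": { "uri": "https://example.com/templates/keyvault.json" },
        "parameters": { "keyVaultName": { "value": "[parameters('keyVaultName')]" } }
      }
    },
    {
      "type": "Microsoft.Resources/deployments",
      "apiVersion": "2019-10-01",
      "name": "serviceFabricDeployment",
      "dependsOn": [ "keyVaultDeployment" ],
      "properties": {
        "mode": "Incremental",
        "templateLink": { "uri": "https://example.com/templates/servicefabric.json" },
        "parameters": {
          "sourceVaultValue": { "value": "[reference('keyVaultDeployment').outputs.vaultResourceId.value]" },
          "certificateUrlValue": { "value": "[reference('keyVaultDeployment').outputs.secretUri.value]" },
          "certificateThumbprint": { "value": "[reference('keyVaultDeployment').outputs.certThumbprint.value]" }
        }
      }
    }
  ]
}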
HTH - LMK if it doesn't...
You can't reference a secret with a dynamic id!
The obvious problems with this way of doing things are:
Someone needs to type the cleartext password, which means it needs to be known to anyone who provisions the environment, and how do I feed it into an automated environment deployment? If I store the password in a parameter...?
"variables": {
"tenantPassword": {
"reference": {
"keyVault": {
"ID": "[concat(subscription().id,'/resourceGroups/',parameters('keyVaultResourceGroup'),'/providers/Microsoft.KeyVault/vaults/', parameters('VaultName'))]"
},
"secretName": "tenantPassword"
}
}
},
I'm trying to use Riak's MapReduce via HTTP. This is what I'm sending:
{
  "inputs": {
    "bucket": "test",
    "key_filters": [["matches", ".*"]]
  },
  "query": [
    {
      "map": {
        "language": "erlang",
        "source": "value(RiakObject, _KeyData, _Arg) -> Key = riak_object:key(RiakObject), Count = riak_kv_crdt:value(RiakObject, <<\"riak_kv_pncounter\">>), [ {Key, Count} ]."
      }
    }
  ]
}
Riak fails with "[worker_startup_failed]", which isn't very informative. Could anyone please help me get this to actually execute the function?
WARNING
Allowing arbitrary Erlang functions via map-reduce is a security risk. Any valid Erlang can be executed, including sending your entire data set offsite or formatting the hard drive.
You have been warned.
However, if you implicitly trust any client that may connect to your cluster, you can allow Erlang source to be passed in a map-reduce request by setting {allow_strfun, true} in the riak_kv section of app.config (or in advanced.config if you are using riak.conf).
Once you have allowed passing an Erlang function in a map-reduce phase, you need to pass in a function of the form fun(RiakObject,KeyData,Arg) -> [result] end. Note that this must be an anonymous fun, so fun is a keyword, not a name, and it must end with end.
Your function should handle the case where {error,notfound} is passed as the first argument instead of an object. Simply adding a catch-all clause to the function could accomplish that.
Perhaps something like:
{
  "inputs": {
    "bucket": "test",
    "key_filters": [["matches", ".*"]]
  },
  "query": [
    {
      "map": {
        "language": "erlang",
        "source": "fun(RiakObject, _KeyData, _Arg) ->
                      Key = riak_object:key(RiakObject),
                      Count = riak_kv_crdt:value(
                                RiakObject,
                                <<\"riak_kv_pncounter\">>),
                      [ {Key, Count} ];
                    (_,_,_) -> [{error,0}]
                    end."
      }
    }
  ]
}
Allowing the source to be passed in the request is very useful while developing and debugging. For production, you really should put the functions in a dedicated pre-compiled module that you copy to the code path of each node so that the phase spec can specify the module and function by name instead of providing arbitrary code.
{"map":{
"language":"erlang",
"module":"yourprecompiledmodule",
"function":"functionname"}}
You need to enable allow_strfun on all nodes in your cluster. To do so in Riak 2, you will need to use the advanced.config file to add this to the riak_kv configuration:
[
  {riak_kv, [
    {allow_strfun, true}
  ]}
].
The other option is to create your own Erlang module by using the compiler shipped with Riak and placing the *.beam file in a well-known location for Riak to find. The basho-patches directory is one such place.
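For illustration, a minimal version of such a module might look like the sketch below; the module and function names are hypothetical, and you would compile it with the Erlang compiler shipped with Riak and drop the resulting .beam into basho-patches (or elsewhere on the code path).

%% my_mapreduce.erl -- minimal sketch of a precompiled map function module.
%% Module and function names here are hypothetical examples.
-module(my_mapreduce).
-export([counter_value/3]).

%% Map phase function: return [{Key, CounterValue}] for each object,
%% and a placeholder result for not-found inputs.
counter_value({error, notfound}, _KeyData, _Arg) ->
    [{error, 0}];
counter_value(RiakObject, _KeyData, _Arg) ->
    Key = riak_object:key(RiakObject),
    Count = riak_kv_crdt:value(RiakObject, <<"riak_kv_pncounter">>),
    [{Key, Count}].

The phase spec shown earlier would then use "module": "my_mapreduce" and "function": "counter_value".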
Please see the documentation as well:
advanced.config
Installing custom Erlang code
HTTP MapReduce
Using MapReduce
Advanced MapReduce
MapReduce / curl example