Sorry for the unspecific title; I am having a hard time describing the problem.
I am using AWS AppSync with AWS Cognito for authentication.
I've followed the Amplify docs on the @auth directive to handle permissions for mutations and queries.
Here is an example of my schema.
A user can create an entry and share it with others. However, the other users should only be able to read the entry; they should not have permission to edit it.
An entry also has multiple notes (and some more fields).
type Entry @model @versioned @auth(rules: [
  { allow: owner },
  { allow: owner, ownerField: "shared", queries: [get, list], mutations: [] }
]) @searchable {
  id: ID!
  date: AWSDate
  updated_at: AWSDateTime
  text: String
  notes: [Note] @connection(name: "EntryNotes")
  shared: [String]!
}
And here is the Note type:
type Note @model @versioned @auth(rules: [{ allow: owner }]) {
  id: ID!
  text: String
  track: Track!
  entry: Entry @connection(name: "EntryNotes")
}
This works fine so far. But the problem is the Note connection, because you would create a note like this:
mutation makeNote {
  createNote(input: {
    text: "Hello there!"
    noteEntryId: "444c80ee-6fd9-4267-b371-c2ed4a3ccda4"
  }) {
    id
    text
  }
}
The problem now is that you can create notes for entries that you do not have access to, if you somehow find out their id.
Is there a way to check whether you have permission on the entry before the note is created?
Currently, the best way to do this is via custom resolvers within the Amplify CLI. Specifically, you can use AppSync pipeline resolvers to perform the authorization check before creating the note. Your pipeline resolver would contain two functions: the first would look up the entry and compare its owner to $ctx.identity; the second would handle writing the record to DynamoDB. You can reuse the logic found in build/resolvers/Mutation.createNote.re(q|s).vtl to implement the second function by copying it into the top-level resolvers/ directory and then referencing it from your custom resource. After copying the logic, you will want to disable the default createNote mutation by changing @model to @model(mutations: { update: "updateNote", delete: "deleteNote" }).
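To make the first function concrete, here is a minimal sketch of the entry lookup and ownership check. Note this is written in AppSync's newer JavaScript resolver runtime rather than the VTL files the answer describes, and the data source and field names are assumptions:

// Pipeline function 1 (sketch): fetch the Entry and verify the caller may see it.
// Assumes a DynamoDB data source for the Entry table and a Cognito caller.
import { util } from '@aws-appsync/utils';

export function request(ctx) {
  // Look up the entry the new note points at
  return {
    operation: 'GetItem',
    key: util.dynamodb.toMapValues({ id: ctx.args.input.noteEntryId }),
  };
}

export function response(ctx) {
  const entry = ctx.result;
  const username = ctx.identity.username;
  // Reject unless the caller owns the entry or appears in its "shared" list
  if (!entry || (entry.owner !== username && !(entry.shared || []).includes(username))) {
    util.unauthorized();
  }
  return entry;
}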
For more information on how to set up custom resolvers, see https://aws-amplify.github.io/docs/cli/graphql#add-a-custom-resolver-that-targets-a-dynamodb-table-from-model. For more information on pipeline resolvers (slightly different from the example in the Amplify docs), see https://docs.aws.amazon.com/appsync/latest/devguide/pipeline-resolvers.html. Also see the CloudFormation reference docs for AppSync: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-reference-appsync.html.
Looking towards the future, we are working on a design that would allow you to define auth rules that span @connections. When this is done, it will configure this pattern automatically, but there is not yet a set release date.
Related
I learned about Firebase and Cloud Functions recently and have been able to develop simple applications with them.
I now want to expand my knowledge and am really struggling with the Trigger Email extension.
On a specific event in my Firebase project, I want to send an email to the user in a custom format, but I am unable to even activate the extension for now.
Can someone please explain, with an example, the fields marked in the picture?
I had this question too, but got it resolved. Here's your answer:
"Email documents collection" is the collection that will be read to trigger the emails. I recommend leaving named "mail" unless you already have a collection named mail.
"Users collection (Optional)" refers to a collection (if any) that you want to use in tandem with a user auth system. I haven't had this specific use case yet, but I imagine once you understand how Trigger Email operates, it should be somewhat self-explanatory.
"Templates collection (Optional)" is helpful for templates in which you can use handlebar.js is automatically input specific information per user. (eg. <p>Hello, {{first_name}}</p> etc.) Similar to the previously mentioned collections, you can name it whatever you want.
How to create a template (I have yet to actually implement this, so take this with a grain of salt):
In your templates collection, you want to name each document with a memorable ID. Firebase gives the example:
{
  subject: "@{{username}} is now following you!",
  html: "Just writing to let you know that <code>@{{username}}</code> ({{name}}) is now following you.",
  attachments: [
    {
      filename: "{{username}}.jpg",
      path: "{{imagePath}}"
    }
  ]
}
...and a good document ID here would be following. As you can see, the documents are structured just like any other email you would send out.
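For example, a minimal sketch of writing that template document from client code (assuming you named the templates collection "templates"):

firestore()
  .collection("templates")
  .doc("following") // the template name referenced when sending mail
  .set({
    subject: "@{{username}} is now following you!",
    html: "Just writing to let you know that <code>@{{username}}</code> ({{name}}) is now following you.",
  });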
Here is an example of using the above template in JavaScript:
firestore()
  .collection("mail")
  .add({
    toUids: ["abc123"], // This relates to the Users Collection
    template: {
      name: "following", // Specify the template
      // Specify the information for the Handlebars placeholders,
      // which can also be pulled from your users (toUids)
      // if you have the data stored in a user collection.
      // Of course that gets more into the world of a user auth system.
      data: {
        username: "ada",
        name: "Ada Lovelace",
        imagePath: "https://path-to-file/image-name.jpg"
      },
    },
  })
I hope this helps. Let me know if you have any issues getting this set up.
I have a template to create a key vault and a secret within it. I also have a service fabric template that requires three things from the key vault: the vault URI, the certificate URL, and the certificate thumbprint.
If I create the key vault and secret with PowerShell, it is easy to manually copy these three values from the output and paste them into the parameters of the service fabric template. However, because this cert has the same life cycle as the service fabric cluster, I am hoping to link from the key vault template to the service fabric template, so that when I deploy the key vault and secret (which, by the way, is a key that has been base64-encoded to a string; I could store it as a secret in yet another key vault...), I can pass the three values on as parameters.
So I have two questions.
How do I retrieve the three values in the ARM template? PowerShell outputs them as the 'ResourceId' of the key vault, the 'Id' of the secret, and the 'Version' of the secret. My attempt:
"sourceVaultValue": {
"value": "resourceId('Microsoft.KeyVault/vaults/', parameters('keyVaultName')"
},
"certificateThumbprint": {
"value": "[listKeys(resourceId('secrets', parameters('secretName')), '2015-06-01')"
},
"certificateUrlValue": { "value": "[concat('https://', parameters('keyVaultName'), '.vault.azure.net:443/secrets/', parameters('secretName'), resourceId('secrets', parameters('secretName')))]"
But the certificateUrlValue is incorrect. You can see I tried with and without listKeys, but neither seemed to work... (the thumbprint is within the cert URL itself).
If I do get the correct values, I would like to pass them as parameters to the next template. However, the template in question has quite a few more parameters than the three I want to pass. So is it possible to have a parametersLink element pointing to the parameter file, as well as a parameters element for just those three? Or is there an intended way of doing this?
Cheers
Ok, try this when you get back to the keyboard...
1) For the URI, you can use an output like:
"secretUri": {
"type": "string",
"value": "[reference(resourceId('Microsoft.KeyVault/vaults/secrets', parameters('keyVaultName'), parameters('secretName'))).secretUri]"
}
2) You cannot mix and match parametersLink and inline parameters; it's one or the other.
A couple of thoughts on how you could do this (it depends a bit on how you want to structure the rest of your deployment)...
One option: instead of nesting the SF template, deploy both in the same template, since they have the same lifecycle.
Another option: instead of nesting the SF template, nest the KV template and reference the outputs of that deployment in the SF template...
Aside from that, I can't think of anything elegant. Since you want to pass "dynamic" params to a nested deployment, really the only way to do that is to dynamically write the param file behind the link or pass all the params into the deployment resource.
HTH - LMK if it doesn't...
Note that you can't reference a secret with a dynamic id. The obvious problems with this way of doing things are that someone needs to type the cleartext password, which means it needs to be known to anyone who provisions the environment, and it is unclear how to feed it into an automated environment deployment. So what if I store the password like this?
"variables": {
"tenantPassword": {
"reference": {
"keyVault": {
"ID": "[concat(subscription().id,'/resourceGroups/',parameters('keyVaultResourceGroup'),'/providers/Microsoft.KeyVault/vaults/', parameters('VaultName'))]"
},
"secretName": "tenantPassword"
}
}
},
I have a list of records in Firebase, each of which has a groups property with zero or more groups on it. I also have the Firebase auth object, which has zero or more groups on it as well. I would like to set up a .read Firebase rule for my records that checks whether the two have at least one group in common.
Put another way: a user has an array of groups that have been assigned to it, and each record has a list of groups specifying which groups a user must have to access it. If the logged-in user tries to access a record, I want to make sure that the user has at least one group that the record requires.
On the client I would do something like _.intersection(userGroups, recordGroups).length > 0.
I'm not sure how I would do this in a Firebase rule expression. It would be cool if it worked something like this.
Record:
{
  someData: "test",
  groups: ['foo', 'bar']
}
Firebase Auth Object:
{
  userName: "Bob",
  groups: ['foo', 'bar']
}
Rule Data:
{
  "rules": {
    "records": {
      "$recordId": {
        ".read": "data.child('groups').intersectsWith(auth.groups)"
      }
    }
  }
}
Thanks.
Update:
I think that if hasChildren() used || instead of && I could put the group names in the key position and check for their existence that way, with something like "data.child('groups').hasChildren(auth.groups, 'or')".
Where Record:
{
  someData: "test",
  groups: {
    'foo': '',
    'bar': ''
  }
}
Update 2:
Based on Kato's comment and link, I realize that even if hasChildren could do OR, it still wouldn't work quite right. Requests for individual records would work, but a request for all records would error if the current user didn't have access to every record.
It is still not clear how you would structure data to make this work. If a record could belong to many groups, how would that work? This is a very common scenario (it is basically how Linux group permissions work), so I can't be the only one trying to do this. Does anyone have ideas or examples of how to accomplish this in Firebase?
At the moment, I believe it's impossible. There's a limited set of variables, methods, and operators allowed, listed here:
Firebase Security Rules API
Since function definitions are not allowed in the rules, you can't do anything fancy like call array.some(callback) on an array to do the matching yourself.
You have three options that I know of:
1) Copy data so you don't need to do the check. This is what I did in my project: I wanted some user data (names) available to users that shared a network in their network lists. Originally I wanted to check both members' network lists to see if there was at least one match. Eventually I realized it would be easier to just save each user's name as part of the network data, so there wouldn't have to be a user lookup requiring these odd permissions. I don't know enough about your data to suggest what you would need to copy.
2) Use strings instead of arrays. You can turn one string into a regex (or just save it in regex format) and use it to search the other string for a match. See the Firebase DB Regex Docs.
3) If you have enough weird cases like this, actually run a server that validates the request in a custom fashion, and in the DB just allow permissions to your server. You could use Firebase Cloud Functions or roll your own server that uses the Firebase Admin SDK. A sketch of this option follows below.
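As a rough sketch of option 3 (the function name and data layout here are assumptions, not taken from the question), a callable Cloud Function can do the group-intersection check with the Admin SDK and return the record only if it passes:

const functions = require("firebase-functions");
const admin = require("firebase-admin");
admin.initializeApp();

exports.getRecord = functions.https.onCall(async (data, context) => {
  if (!context.auth) {
    throw new functions.https.HttpsError("unauthenticated", "Sign in first.");
  }
  // Hypothetical layout: user groups at /userGroups/$uid, records at /records/$recordId,
  // with groups stored as { groupName: true } maps
  const [userSnap, recordSnap] = await Promise.all([
    admin.database().ref(`userGroups/${context.auth.uid}`).once("value"),
    admin.database().ref(`records/${data.recordId}`).once("value"),
  ]);
  const userGroups = Object.keys(userSnap.val() || {});
  const record = recordSnap.val();
  const recordGroups = Object.keys((record && record.groups) || {});
  // The intersection check that the rules language cannot express
  if (!recordGroups.some((g) => userGroups.includes(g))) {
    throw new functions.https.HttpsError("permission-denied", "No shared group.");
  }
  return record;
});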
Nowadays, there's another possibility: to use Firestore to deliver your content, possibly in sync with the Realtime Database.
In Firestore, you can create rules like this:
function hasAccessTo(permissionList) {
  return get(/databases/$(database)/documents/permissions/$(request.auth.uid))
    .data.userPermissions.keys().hasAny(permissionList)
}

match /content/{itemId} {
  allow read: if hasAccessTo(resource.data.permissions.keys());
}
The following data would allow $UID to read $CONTENTID, because the user's permission set intersects the permissions that can unlock the content (via access123). My scenario is that a piece of content can be unlocked by multiple In-App Purchases.
{
  permissions: {
    $UID: { userPermissions: { access123: true, access456: true } },
    ...
  },
  content: {
    $CONTENTID: { ..., permissions: { access123: true, access789: true } },
    ...
  }
}
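For context, granting such a permission after a purchase might look roughly like this from a trusted environment (a sketch only; it assumes the Admin SDK and the permissions layout shown above):

// e.g. inside a Cloud Function that has just verified an In-App Purchase
admin
  .firestore()
  .collection("permissions")
  .doc(uid) // the buyer's auth UID
  .set({ userPermissions: { access123: true } }, { merge: true });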
For a progressive migration, you can keep data in sync between the Realtime Database and Firestore by using a one-way Cloud Function, for example:
const functions = require("firebase-functions");
const admin = require("firebase-admin");
admin.initializeApp();

exports.fsyncContent = functions.database
  .ref("/content/{itemId}")
  .onWrite((change, context) => {
    const item = change.after.val();
    const doc = admin.firestore().collection("content").doc(context.params.itemId);
    // onWrite also fires on deletes; mirror those too instead of calling set(null)
    return item === null ? doc.delete() : doc.set(item);
  });
I am using Meteor 0.8.2 with accounts-facebook. I set up a limited publication for the users this way:
Meteor.publish('users', function () {
return Meteor.users.find({}, {fields: {'profile.picture': 1, 'profile.gender':1, 'profile.type':1}, sort: {'profile.likes': -1}});
});
Now this works great: when I request a user list from the client, I get a list of all users, with all of the current user's fields shown and only the 3 published fields for the others. Except right after login.
When I login and type Meteor.user(), here is what I get:
_id: "uACx6sTiHSc4j4khk"
profile: Object { gender="male", type="1", picture="http://....jpg"}
It stays like that until I refresh the page with the browser button. After refreshing, Meteor.user() gives all the fields available, while Meteor.users.find() still applies the correct restrictions (except for the current user, of course).
Why doesn't my current user get all its fields right away? I read about a Meteor.userLoaded() method used to wait for the user to be loaded, but it seems to be obsolete in the latest version.
You're running into an interaction between the restriction on merging fields across publications and the default user publication, which sends the profile field.
First, note that there is a built-in publication that always sends the currently logged in user's entire profile field to that user:
https://github.com/meteor/meteor/blob/devel/packages/accounts-base/accounts_server.js#L1172
Second, merging of fields at more than one level deep is currently not supported:
https://github.com/meteor/meteor/issues/998
What you currently have is an issue where the default publication is sending something like the following:
{
  username: ...,
  emails: [ ... ],
  profile: {
    ... all fields ...
  }
}
whereas the publication you have set up is sending:
{
  profile: {
    picture: ...,
    gender: ...,
    type: ...
  }
}
These get merged on the client according to the rules for how subscriptions are resolved (http://docs.meteor.com/#meteor_subscribe); in particular, see the last paragraph. Meteor knows to merge the username and emails fields with the profile field. However, it doesn't do this merging at the inner level, so one of the two profile values will be chosen arbitrarily to show up in the client's collection. If the first one wins, you will see profile.likes; if the second one wins, you won't.
It's likely that this behavior is somewhat deterministic, changing depending on whether a normal login handler or a resume handler is called (i.e. when reloading the browser). That is why it looks as if the fields haven't loaded.
As Andrew explained, and as I suspected, what happens is that there is another "hidden" publication for the current user, which conflicts with mine. All I had to do to fix this was exclude the current user from my publication, since that user is already fully published by default:
Meteor.publish('users', function () {
return Meteor.users.find({_id:{$ne: this.userId}}, {fields: {'profile.picture': 1, 'profile.gender':1, 'profile.type':1}, sort: {'profile.likes': -1}});
});
This simple $ne does it for me.
In the Discover Meteor book, the deny statement is used as follows:
https://github.com/DiscoverMeteor/Microscope/commit/chapter8-3
Posts.deny({
  update: function(userId, post, fieldNames) {
    .....
  }
});
I don't understand how the update function is getting userId, post, or even fieldNames, since the edit form is doing the following:
var postProperties = {
  url: $(e.target).find('[name=url]').val(),
  title: $(e.target).find('[name=title]').val()
}

Posts.update(currentPostId, {$set: postProperties}, function(error) {
Those parameters are supplied by Meteor. The signatures for update functions on the client and in the deny object are different.
http://docs.meteor.com/#allow:
update(userId, doc, fieldNames, modifier):
The user userId wants to update a document doc. (doc is the current version of the document from the database, without the proposed update.) Return true to permit the change.
fieldNames is an array of the (top-level) fields in doc that the client wants to modify, for example ['name', 'score'].
modifier is the raw Mongo modifier that the client wants to execute; for example, {$set: {'name.first': "Alice"}, $inc: {score: 1}}.
Only Mongo modifiers are supported (operations like $set and $push). If the user tries to replace the entire document rather than use $-modifiers, the request will be denied without checking the allow functions.
The short answer is that these values are filled in for you by Meteor: it understands who is making what modification to which document and tells the server about it.
The client calls Posts.update, which sends a message to the server that userId is attempting to update a document (the contents of which are post), and that the fields being updated are fieldNames. The server can then choose to accept or deny the update based on those inputs.
This is documented here and here.
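For illustration, a complete deny callback using these parameters might look like this (a sketch, not necessarily the exact code from the book; it rejects any update that touches fields other than url and title):

Posts.deny({
  update: function(userId, post, fieldNames) {
    // Deny the update if it modifies anything besides 'url' and 'title'
    return (_.without(fieldNames, 'url', 'title').length > 0);
  }
});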