How to trigger onCreate in Firestore cloud functions shell without using an existing document - firebase

I am using the firebase-tools shell CLI to test Firestore cloud functions.
My functions respond to the onCreate trigger for all documents in a certain collection, by using a wildcard, and then mutate that document with an update call.
firestore
  .document('myCollection/{documentId}')
  .onCreate(event => {
    const ref = event.data.ref
    return ref.update({ some: "mutation" })
  })
In the shell I run something like this (passing some fake auth data required by my database permissions):
myFunction({some: "data"}, { auth: { variable: { uid: "jj5BpbX2PxU7fQn87z10d4Ks6oA3" } } } )
However this results in an error, because the update tries to mutate a document that is not in the database.
Error: no entity to update
In the documentation about unit testing it is explained how you would create mocks for event.data in order to execute the function without touching the actual database.
However I am trying to invoke a real function which should operate on the database. A mock would not make sense; otherwise this is nothing more than a unit test.
I'm wondering what the strategy should be for invoking a function like this?
By using the id of an existing document the function can execute successfully, but this seems cumbersome because you need to look one up in the database for every test, and it might not be there anymore at some point.
I think it would be very helpful if the shell would somehow create a new document from the data you pass in, and run the trigger from that. Would this be possible maybe, or is there another way?

The Cloud Functions emulator can only emulate events that could happen within your project. It doesn't emulate the actual change to the database that would have triggered it.
As you're discovering, when your function depends on that actual change previously occurring, you can run into problems. The fact of the matter is that it's entirely possible that the created document may have already been deleted by the time you're handling the event in the function (imagine a user acts quickly to delete, but the event is delayed for whatever reason).
All that said, perhaps you want to use set() with SetOptions that indicate you want to merge instead of overwrite. Bear in mind that if the document was previously deleted (with good reason) before the event triggered, you'll unconditionally recreate the document, which may not be what the user wanted.
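A minimal sketch of that approach, assuming the same legacy event-based trigger API as the question's code:
exports.myFunction = functions.firestore
  .document('myCollection/{documentId}')
  .onCreate(event => {
    // set() with { merge: true } succeeds even when the document no longer
    // exists; note that it will then (re)create the document.
    return event.data.ref.set({ some: "mutation" }, { merge: true })
  })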

Related

Firebase cloud function use current value and not value when function was initially called

Not sure if this is even possible with firebase cloud functions.
Let's assume I want to trigger a cloud function onCreate on all documents in a specific collection.
After creation, the cloud function should add another document in a different collection, passing a value from the manually created document.
Sure, that works:
export const createAutomaticInvoice = functions.firestore
  .document('users/{userId}/lessons/{lesson}')
  .onCreate((snap, context) => {
    const db = admin.firestore()
    const info = snap.data() // read from the snapshot itself, not snap.ref.data()
    return db.collection('toAdd').add({
      info: info
    })
  })
But if I create a document within users/{userId}/lessons/ and change the value of info directly afterwards, before the cloud function is triggered, the cloud function takes the old value of info as opposed to the one it was changed to.
Is this expected behaviour? For me it is definitely not, as I would assume that it takes the values at runtime.
How can I make my example work as expected?
This is the expected behavior - the function is going to execute as soon as possible after that document is created. The snapshot is always going to contain the contents of the document as it was originally created. It's not going to wait around to see if that document changes at some point in the future, and it's not going to try to query that document in case it might have changed.
If you want to handle updates to a document, you should also be using an onUpdate trigger to know if that happens.
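A sketch of such a companion trigger, reusing the path from the question; what you do inside it depends on how the earlier write should be reconciled:
export const updateAutomaticInvoice = functions.firestore
  .document('users/{userId}/lessons/{lesson}')
  .onUpdate((change, context) => {
    // change.after holds the document contents after this particular update
    const info = change.after.data()
    return admin.firestore().collection('toAdd').add({
      info: info
    })
  })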

How to update the same document with a read from the same collection in an onUpdate function

I'm trying to update the same document that triggered an onUpdate cloud function, with a value read from the same collection.
This is in a kind of chat app made in Flutter, where the previous response to an inquiry is replicated to the document now being updated, for easier display in the app.
The code does work; however, when a user quickly responds to two separate inquiries, both invocations read the same latest response and thus set the same previousResponse. This must be down to the asynchronous nature of Flutter and/or the cloud function, but I can't figure out where to await, or whether there's a better way to structure the function so it never triggers onUpdate for the same user until a previous trigger has finished. That last part also sounds a bit like a bad idea.
So far I tried sticking the read/update in a transaction, however that only seems to work for a single function call, not when multiple calls run concurrently.
I also figured I could fix it by reading the previous response in a transaction on the client, however Firebase doesn't allow reading from a collection in a transaction when not using the server API.
async function setPreviousResponseToInquiry(
    senderUid: string,
    recipientUid: string,
    inquiryId: string) {
  return admin.firestore().collection('/inquiries')
    .where('recipientUid', '==', recipientUid)
    .where('senderUid', '==', senderUid)
    .where('responded', '==', true)
    .orderBy('modified', 'desc')
    .limit(2)
    .get().then(snapshot => {
      if (!snapshot.empty && snapshot.docs.length >= 2) {
        return admin.firestore()
          .doc(`/inquiries/${inquiryId}`)
          .get().then(snap => {
            return snap.ref.update({
              previousResponse: snapshot.docs[1].data().response
            })
          })
      }
    })
}
I see three possible solutions:
1. Use a transaction on the server, which ensures that the update you write must be based on the version of the data you read. If the value you write depends on the data that triggered the Cloud Function, you may need to re-read that data as part of the transaction.
2. Don't use Cloud Functions, but run all updates from the client. This allows you to use transactions to prevent the race condition.
3. If it's not possible to use a transaction, you may have to include a custom version number in both the upstream data (the data that triggers the write) and the fanned-out data that you're updating. You can then use security rules to ensure that the downstream data can only be written if its version matches the current upstream data.
I'd consider/try them in the above order, as they gradually get more involved. A sketch of the first option follows.
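A sketch of the first option, using the same query as the code above. Note that running a query inside a transaction only works with the server (Admin SDK) API, and Firestore retries the transaction if the data it read changes before the write commits:
function setPreviousResponseToInquiry(senderUid, recipientUid, inquiryId) {
  const db = admin.firestore()
  return db.runTransaction(async tx => {
    // Read inside the transaction, so the update below is based on this data.
    const snapshot = await tx.get(
      db.collection('inquiries')
        .where('recipientUid', '==', recipientUid)
        .where('senderUid', '==', senderUid)
        .where('responded', '==', true)
        .orderBy('modified', 'desc')
        .limit(2))
    if (snapshot.docs.length >= 2) {
      tx.update(db.doc(`inquiries/${inquiryId}`), {
        previousResponse: snapshot.docs[1].data().response
      })
    }
  })
}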

How to avoid loops when writing cloud functions?

When writing event-based cloud functions for Firebase Firestore, it's common to update fields in the affected document. For example:
When a document in the users collection is updated, a function triggers. Say we want to determine the user's info state and we have a completeInfo: boolean property; the function then has to perform another update, so the trigger fires again. If we don't use a flag like needsUpdate: boolean to decide whether to execute the function's logic, we end up with an infinite loop.
Is there any other way to approach this behavior? Or is the situation a consequence of how the database is designed? How can we avoid ending up in such a scenario?
I have a few common approaches to Cloud Functions that transform the data:
1. Write the transformed data to a different document than the one that triggers the Cloud Function. This is by far the easiest approach, since there is no additional code needed, and thus I can't make any mistakes in it. It also means there is no additional trigger, so you're not paying for that extra invocation.
2. Use granular triggers to ensure my Cloud Function only gets called when it needs to actually do some work. For example, many of my functions only need to run when the document gets created, so by using an onCreate trigger I ensure my code only runs once, even if it then ends up updating the newly created document.
3. Write the transformed data into the existing document. In that case I make sure to have the checks for whether the transformation is needed in place before I write the actual code for the transformation. I prefer not to add flag fields, but to use the existing data for this check.
A recent example is where I update an amount in a document, which then needs to be fanned out to all users:
exports.fanoutAmount = functions.firestore.document('users/{uid}').onWrite((change, context) => {
  let old_amount = change.before && change.before.data() && change.before.data().amount
    ? change.before.data().amount : 0;
  let new_amount = change.after.data().amount;
  if (old_amount !== new_amount) {
    // TODO: fan out to all documents in the collection
  }
});
You need to take care to avoid writing a function that triggers itself infinitely. This is not something that Cloud Functions can do for you. Typically you do this by checking within your function if the work was previously done for the document that was modified in a previous invocation. There are several ways to do this, and you will have to implement something that meets your specific use case.
I would approach this from an execution-time perspective, which means the function for each document will run twice. Each time the document triggers the function, a lastUpdate field carries a timestamp, and the function only updates the document if that timestamp is older than some threshold, e.g. 10 seconds.
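A sketch of that guard; the lastUpdate field, the completeInfo computation, and the 10-second threshold are illustrative assumptions, not from the original post:
exports.updateUserInfo = functions.firestore
  .document('users/{uid}')
  .onWrite((change, context) => {
    const data = change.after.exists ? change.after.data() : null
    if (!data) return null // document was deleted, nothing to do
    const last = data.lastUpdate ? data.lastUpdate.toMillis() : 0
    if (Date.now() - last < 10 * 1000) {
      return null // our own recent write re-triggered us: break the loop
    }
    return change.after.ref.update({
      completeInfo: Boolean(data.name && data.email),
      lastUpdate: admin.firestore.FieldValue.serverTimestamp()
    })
  })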

firebase functions manage multiple operations on a single database trigger

We are using Firebase Realtime Database and Firebase Functions. We have written a DB trigger for whenever a user is updated. In this case we have a referral system, so whenever a user adds a referrer to their account, the trigger gives some reward to the referrer. Hence the user_update trigger does the job.
This works well. Now we need to do one more, unrelated activity whenever a user is updated. To be specific, we want to keep track of the total reward given so far to all users, for analytics purposes.
So, what is the best way to implement two independent operations on a single update trigger?
Technically we could embed one operation call inside the other, but that gets messy quickly, especially if we need more operations like that in the future.
You have two options. Either use one Realtime Database trigger, as you do now, and put all the logic in that function; you can keep it clean and tidy by moving the logic into separate functions that the trigger simply calls.
Or you can create another trigger exactly as you did this time and just change its export name, e.g. as below. With this method you have two functions being invoked, doubling the cost.
exports.userUpdate = functions.database.ref('/users/{uid}').onUpdate(async (change, context) => { /* LOGIC */ });
exports.userUpdateSecond = functions.database.ref('/users/{uid}').onUpdate(async (change, context) => { /* LOGIC */ });
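A sketch of the first option; rewardReferrer and trackTotalRewards are hypothetical helper names standing in for the two independent operations:
// Hypothetical helpers; each returns a promise for its own piece of work.
async function rewardReferrer(change, context) { /* referral reward logic */ }
async function trackTotalRewards(change, context) { /* analytics logic */ }

exports.userUpdate = functions.database.ref('/users/{uid}')
  .onUpdate((change, context) => Promise.all([
    rewardReferrer(change, context),
    trackTotalRewards(change, context),
  ]));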

Update document in Meteor mini-mongo without updating server collections

In Meteor, I have a collection that the client subscribes to. In some cases, instead of publishing the documents that exist in the collection on the server, I want to send down some bogus data. Now, that's fine using the this.added function in the publish.
My problem is that I want to treat the bogus doc as if it were a real document, which gets troublesome when I want to update it. For the real docs I run RealDocs.update, but doing that on the bogus doc fails since there is no representation of it on the server (and I'd like to keep it that way).
A collection API that allowed me to pass something like local = true would be fantastic, but I have no idea how difficult that would be to implement, and I'm not too fond of modifying the core code.
Right now I'm stuck with creating a BogusDocs = new Meteor.Collection(null), but that makes populating the collection more difficult since I have to either hard-code fixtures in the client code or use a method to get the data from the server, and I have to make sure I call BogusDocs.update instead of RealDocs.update as soon as I'm dealing with bogus data.
Maybe I could actually insert the data on the server and make sure it's removed later, but the data really has nothing to do with the server side collection so I'd rather avoid that.
Any thoughts on how to approach this problem?
After some further investigation (the Evented Mind site) it turns out that one can modify the local collection without making calls to the server. This is done by running the same methods as usual, but on MyCollection._collection instead of on MyCollection directly: MyCollection.update() thus becomes MyCollection._collection.update(). So, using a simple wrapper, one can pass the usual arguments to an update call to update the collection as usual (which will call the server and in turn trigger your allow/deny rules), or add 'local' as the last argument to only perform the update in the client collection. Something like this should do it.
DocsUpdateWrapper = function() {
  // arguments is only array-like, so copy it into a real array first
  var args = Array.prototype.slice.call(arguments);
  var lastIndex = args.length - 1;
  if (args[lastIndex] === 'local') {
    // Update only the client-side minimongo collection; the server is never called.
    Docs._collection.update.apply(Docs._collection, args.slice(0, lastIndex));
  } else {
    // Normal update: goes through the server and your allow/deny rules.
    Docs.update.apply(Docs, args);
  }
};
(This could of course be extended to a DocsWrapper that allows for insertions and removals too. I didn't try this function yet, but it should serve well as an example.)
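Hypothetical usage of the wrapper (the selector and modifier are made up): the first call goes through the server and your allow/deny rules, the second only touches the client-side collection.
DocsUpdateWrapper({ _id: someId }, { $set: { title: 'Synced' } });
DocsUpdateWrapper({ _id: bogusId }, { $set: { title: 'Local' } }, 'local');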
The biggest benefit of this is, imo, that we can use the exact same calls to retrieve documents from the local collection, regardless of whether they are only local or also live on the server. By adding a simple boolean to the doc we can keep track of which documents are local-only and which are not (an improved DocsWrapper could check for that bool, so we could even omit passing the 'local' argument), so we know how to update them.
There are some people working on local storage in the browser
https://github.com/awwx/meteor-browser-store
You might be able to adapt some of their ideas to provide "fake" documents.
I would use the transform feature on the collection to make an object that knows what to do with itself (on the client). Give it the correct update method (real/bogus), then call .update rather than a general one.
You can put the code from this.added into the transform process.
You can also set up a local minimongo collection and insert on the callback:
FoundAgents = new Meteor.Collection(null, transform: Agent.transformData)
FoundAgents.remove({})
Meteor.call 'Get_agentsCloseToOffer', me, ping, (err, data) ->
  if err
    console.log JSON.stringify(err, null, 2)
  else
    _.each data, (item) ->
      FoundAgents.insert(item)
Maybe this is interesting for you as well: I created two examples with native Meteor local collections at meteorpad. The first pad shows an example with a plain reactive recordset: Sample_Publish_to_Local-Collection. The second uses the collection's .observe method to listen to data: Collection.observe().
