How to properly upload files to Firebase Storage from client? - firebase

Situation
I want my client to be able to set a new profile picture ("pfp") for their profile. The original pfp gets used to generate thumbnails which are also stored in Firebase Storage.
Current solution
I simply upload the pfp from the client
File pfp = File("original.jpg");
await storageRef.putFile(pfp);
After that, a cloud function is triggered which downloads the image, resizes it, and uploads the thumbnails. It is very similar to this solution: https://fireship.io/lessons/image-thumbnail-resizer-cloud-function/
Problem
The issue is that if the thumbnail generation function fails, meaning the thumbnails aren't generated and uploaded, the client gets no response indicating that. The user will think they set their new pfp successfully because the original image was uploaded, yet there are no thumbnails. I want a solution that ensures setting a pfp either fully succeeds or notifies the user that something went wrong so they can retry (no in-between). Maybe I am approaching this incorrectly...

The Cloud Function used in the fireship.io tutorial is triggered by onFinalize(), meaning it runs when the original image finishes uploading to Cloud Storage. In your case, that is when the call to the putFile() method completes.
You could change the Cloud Function to a Callable one that you invoke from the front-end right after calling await storageRef.putFile(pfp);. With a Callable Cloud Function, your front-end can receive the result of the thumbnail generation. Note that you'll need to pass the path of the original image when calling the Callable Cloud Function.
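A minimal sketch of the functions side of that approach, assuming the resize logic is factored into a helper (resizeAndUploadThumbnails is a hypothetical placeholder for your existing code):

const functions = require('firebase-functions');

// Hypothetical callable wrapper around the existing resize logic.
exports.generateThumbnails = functions.https.onCall(async (data, context) => {
  const filePath = data.filePath; // path of the original image, passed by the client
  try {
    await resizeAndUploadThumbnails(filePath); // placeholder for your resize code
    return { success: true };
  } catch (err) {
    // Throwing an HttpsError propagates a typed failure to the client,
    // which can then show an error message and offer a retry.
    throw new functions.https.HttpsError('internal', 'Thumbnail generation failed.');
  }
});

Awaiting this callable right after putFile() gives the all-or-nothing behavior you're after: if either the upload or the callable throws, you can surface the error and let the user retry.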

Related

Is there a way to determine once a cloud function has finished running - either through looping getDownloadURL or messaging?

I am still getting the hang of Firebase and Cloud functions, but here is what I'm trying to figure out.
The current setup
My app has a cloud function that will take a PDF that has been uploaded into a storage bucket and convert it into PNG. It doesn't destroy the original PDF, so I am left with both files.
The URL for the newly created PNG is then attached to a property on one of our documents in Firestore.
What I am trying to accomplish
I want to be able to upload a new PDF to use as a replacement image. I think I am running into a race condition where the cloud function hasn't finished executing by the time I am trying to call updateDoc() with the new PNG.
On the client side, I have the storageRef returned from the upload method:
uploadFunction(...).then((snapshot) => {
  return snapshot.ref;
});
I'm saving the result of this function to a variable, and I am trying to pass that into the update method that will adjust the property on my document in Firestore:
const storageRef = await functionThatUploadsPDF(file);
updateDocumentInFirestore(storageRef);
Within updateDocumentInFirestore, I'm trying to navigate to the new reference that should exist once the cloud function has finished, get a download URL, and update that property on my document:
const newImageRef = ref(storageRef.parent, "generatedImage.png");
const newDownloadURL = getDownloadURL(newImageRef).then((url) => {
  updateDoc(documentRef, { backgroundImage: url });
});
However, I am getting the following error - I believe due to the cloud function having not finished yet:
Firebase Storage: Object 'storage-bucket/generatedImage.png' does not exist. (storage/object-not-found)
My thoughts on potential solutions
1. I could try to poll storage for the existence of generatedImage.png until the getDownloadURL call returns an actual URL, but I worry about the number of calls this would yield.
2. If there is a way for the cloud function to send a message letting me know the conversion is finished, I could make a single call for the download URL after receiving that message. However, I can't figure out how to accomplish this.
Efforts so far
I have been pursuing course 1 so far, but have not met any success yet. Scouring the Firebase documentation, I haven't been able to find any supporting resources on how to accomplish 1 or 2. Does anyone have any suggestions, either on my planned courses of action or a new option that I haven't considered?
You can use the onFinalize trigger to send a message or update a document in Firestore to indicate that the function has finished running. This trigger fires whenever a file is created or updated.
onFinalize: Sent when a new object (or a new generation of an existing object) is successfully created in the bucket. This includes copying or rewriting an existing object. A failed upload does not trigger this event.
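For example (a hedged sketch, not the exact function from the tutorial; the conversions/myDoc status document and its fields are made up for illustration):

const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

// Sketch: when the generated PNG lands in the bucket, record completion in Firestore.
exports.onPngCreated = functions.storage.object().onFinalize(async (object) => {
  // Only react to the generated PNG, not the original PDF upload.
  if (!object.name.endsWith('generatedImage.png')) return null;
  // 'conversions/myDoc' is a hypothetical status document the client listens to.
  return admin.firestore().doc('conversions/myDoc').set(
    { status: 'done', path: object.name },
    { merge: true }
  );
});

The client can then attach an onSnapshot listener to that status document and call getDownloadURL only once status is 'done', instead of polling storage.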
You can also create a promise that resolves once the download URL is available, and use that promise in your updateDocumentInFirestore function. This way, updateDoc will only be called once the download URL exists.
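A sketch of such a promise with the modular Web SDK, retrying until the object exists (the attempt count and delay here are arbitrary; waitForDownloadURL is a hypothetical helper):

import { getDownloadURL } from "firebase/storage";

// Resolves with the URL once the object exists; rejects after maxAttempts polls.
async function waitForDownloadURL(storageRef, maxAttempts = 10, delayMs = 1000) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await getDownloadURL(storageRef);
    } catch (err) {
      // Anything other than "not found yet" is a real error.
      if (err.code !== "storage/object-not-found") throw err;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw new Error("generatedImage.png did not appear in time");
}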
Additionally, as was mentioned in the comments, you can consider Cloud Workflows. The exact implementation will depend on your specific use case.
You can also check these similar cases:
Firebase Storage: Object does not exist
Error: storage/object-not-found when trying to upload large image file
Firebase Storage Put could not get object

Firebase cloud function use current value and not value when function was initially called

Not sure if this is even possible with firebase cloud functions.
Let's assume I want to trigger a cloud function onCreate on all documents in a specific collection.
After creation, the cloud function should add another document in a different collection, passing a value from the manually created document.
Sure, that works:
export const createAutomaticInvoice = functions.firestore
  .document('users/{userId}/lessons/{lesson}')
  .onCreate((snap, context) => {
    let db = admin.firestore();
    let info = snap.data(); // contents of the newly created document
    return db.collection('toAdd').add({
      info: info
    });
  });
But if I create a document within users/{userId}/lessons/ and change the value of info directly afterwards, before the cloud function is triggered, the cloud function uses the old value of info as opposed to the one it was changed to.
Is this expected behaviour? For me it is definitely not, as I would assume that it takes the values at runtime.
How can I make my example work as expected?
This is the expected behavior - the function is going to execute as soon as possible after that document is created. The snapshot is always going to contain the contents of the document as it was originally created. It's not going to wait around to see if that document changes at some point in the future, and it's not going to try to query that document in case it might have changed.
If you want to handle updates to a document, you should also be using an onUpdate trigger to know if that happens.
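A minimal sketch of such a pairing, reusing the functions and admin imports from the snippet above (the before/after comparison on info is illustrative):

export const onLessonUpdated = functions.firestore
  .document('users/{userId}/lessons/{lesson}')
  .onUpdate((change, context) => {
    const before = change.before.data(); // document as it was
    const after = change.after.data();   // document as it is now
    if (before.info === after.info) return null; // nothing relevant changed
    return admin.firestore().collection('toAdd').add({ info: after.info });
  });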

How to trigger onCreate in Firestore cloud functions shell without using an existing document

I am using the firebase-tools shell CLI to test Firestore cloud functions.
My functions respond to the onCreate trigger for all documents in a certain collection, by using a wildcard, and then mutate that document with an update call.
firestore
  .document(`myCollection/{documentId}`)
  .onCreate(event => {
    const ref = event.data.ref
    return ref.update({ some: "mutation" })
  })
In the shell I run something like this, (passing some fake auth data required by my database permissions):
myFunction({some: "data"}, { auth: { variable: { uid: "jj5BpbX2PxU7fQn87z10d4Ks6oA3" } } } )
However, this results in an error, because the update tries to mutate a document that is not in the database.
Error: no entity to update
In the documentation about unit testing it is explained how you would create mocks for event.data in order to execute the function without touching the actual database.
However, I am trying to invoke a real function which should operate on the database. A mock would not make sense; otherwise this is nothing more than a unit test.
I'm wondering what the strategy should be for invoking a function like this?
By using the id of an existing document the function can execute successfully, but this seems cumbersome because you need to look it up in the database for every test, and it might not be there anymore at some point.
I think it would be very helpful if the shell would somehow create a new document from the data you pass in, and run the trigger from that. Would this be possible maybe, or is there another way?
The Cloud Functions emulator can only emulate events that could happen within your project. It doesn't emulate the actual change to the database that would have triggered it.
As you're discovering, when your function depends on that actual change previously occurring, you can run into problems. The fact of the matter is that it's entirely possible that the created document may have already been deleted by the time you're handling the event in the function (imagine a user acts quickly to delete, but the event is delayed for whatever reason).
All that said, perhaps you want to use set() with SetOptions that indicate you want to merge instead of overwrite. Bear in mind that if the document was previously deleted (with good reason) before the event triggered, you'll unconditionally recreate the document, which may not be what the user wanted.
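Applied to the snippet above, that would look something like this (a sketch, keeping the same event-based API):

firestore
  .document(`myCollection/{documentId}`)
  .onCreate(event => {
    const ref = event.data.ref
    // Unlike update(), set() with merge succeeds even if the document
    // no longer exists, recreating it with just the merged fields.
    return ref.set({ some: "mutation" }, { merge: true })
  })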

Does removing all observers also disable keepSynced()

I want to keep a part of my database synced, but only need an actual callback when a certain view is loaded. When the view loads I'm calling:
FIRDatabase.database().reference().child("data").observe(.childAdded...
then when the view exits I want to call
FIRDatabase.database().reference().child("data").removeAllObservers()
Elsewhere in my app I'm calling:
FIRDatabase.database().reference().child("data").keepSynced(true)
I know that keepSynced() simply adds an observer to the ref, so when I call removeAllObservers(), will it cancel out the keepSynced(true)?
No, calling keepSynced(true) means that the actual data on the device will be kept in sync with Firebase servers.
This means that when you do eventually add a listener to that location, you will be able to very quickly retrieve the data, because the device has been keeping it in sync for you for a while.
The only way to disable the syncing is to call keepSynced(false).

Meteor Client-side Collection Document Appears and Disappears

After lots of reading, I'm starting to get a better handle on Meteor's publish/subscribe model. I've removed the autopublish training wheels from my first app and while I have most everything working, I am seeing one issue.
When the app first loads, my publish and subscribe hooks work great. I have a block of code that runs in a Tracker.autorun() block which makes the subscribe calls, I am able to sequentially wait for data from the server using ready() on my subscribe handles, etc.
One feature of my app is that it allows the user to insert new documents into a collection. More specifically, when the user performs a certain action, this triggers an insert. At that point, the client-side JS runs and the insert into MiniMongo completes. The reactive autorun block runs and the client can see the inserted document. The client updates the DOM with the new inserted data and all is well.
Furthermore, when I peek into the server-side MongoDB, I see the inserted document which means the server-side JS is running fine as well.
Here's where it gets weird. The client-side autorun block runs a second time (I'm not sure why) and this time, the client no longer has the inserted item. When the DOM renders, the newly inserted item is now gone. If I reload the page, all is well again.
Has anyone seen this behavior before? I'm also noticing that the server-side publish call runs once on page load but then it doesn't run again after the insert. This seems wrong because how else will the client get the reconciled data from the server after the insertion (i.e. after Meteor's client-side latency compensation)?
The important functions (ComponentInstances is the collection that is bugging out):
Publish block:
Meteor.publish('allComponentInstances', function (documentId, screenIndex) {
  console.log(`documentId: ${documentId} screenIndex: ${screenIndex}`)
  const screens = Screens.find({ownerDocumentId: documentId})
  const selectedScreen = screens.fetch()[screenIndex]
  return ComponentInstances.find({_id: {$in: selectedScreen.allComponentInstanceIds}})
})
Subscription block in autorun:
// ... a bunch of irrelevant code above
const allComponentInstancesHandle = Meteor.subscribe('allComponentInstances', document._id, 0)
if (allComponentInstancesHandle.ready()) {
  isReady = true
  screens = Screens.find({ownerDocumentId: document._id}).fetch()
  const componentInstanceObjects = ComponentInstances.find().fetch()
  allComponentInstances = {}
  componentInstanceObjects.map((componentInstance) => {
    allComponentInstances[componentInstance._id] = componentInstance
  })
}
This is most probably because you're inserting documents from the client side and have not set up your permission rules properly. When you remove autopublish and insecure from your app, you are not allowed to insert/update/remove documents in a collection unless you have allow/deny rules set up on the server side.
Meteor has a great feature called latency compensation which tries to emulate your db operations before the actual write happens in the db. When the server then tries to write to the db, it checks the allow/deny rules. If the operation is not actually written to the db, whatever the reason (either allow/deny or authentication), the client-side db gets re-synchronized with the server.
This is why I assume you are seeing your document being inserted for a moment and then disappearing within a second.
Check this section of the Meteor docs:
http://docs.meteor.com/#/full/allow
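For reference, a minimal sketch of server-side allow rules for the collection in question (the exact checks depend on your app's auth model):

// Server side: permit authenticated clients to write ComponentInstances.
ComponentInstances.allow({
  insert(userId, doc) {
    return !!userId; // only logged-in users may insert
  },
  update(userId, doc, fieldNames, modifier) {
    return !!userId;
  },
  remove(userId, doc) {
    return false; // no client-side removal
  }
});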
I ended up solving this a different way. The core issue, I believe, has nothing to do with allow/deny rules. In fact, their role is still hazy to me.
I realize now what I've been reading all along in the Meteor docs: publish functions return cursors. If the cursor itself doesn't change (e.g. if you're passing the specific keys you want to fetch), then it won't really work as a reactive data source, in the sense that new documents in the collection will not cause the data to be published again. You are, after all, still requesting the same keys.
The way forward is to come up with a publish cursor that accurately reflects the reactive data you want to retrieve. This sounds abstract, but in practice it means keeping the cursor's selector general (e.g. matching on a field) rather than pinned to the specific keys you happened to fetch at subscribe time.
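As a hedged sketch, assuming ComponentInstances documents carry an ownerDocumentId field (they may not in your schema), the publish function could select on that field instead of a frozen id list:

Meteor.publish('allComponentInstances', function (documentId) {
  // The selector re-evaluates as documents change, so newly inserted
  // instances reach the client without re-subscribing.
  return ComponentInstances.find({ ownerDocumentId: documentId })
})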
