I followed this guide to set up automated imports and exports for my Firestore database:
https://firebase.google.com/docs/firestore/solutions/schedule-export
However, the docs specify that
An export may include changes made while the operation was running.
Are batched transactions safe, or should I disable write access while an export is taking place?
There's nothing "unsafe" about the exports. You just have to realize that you don't get a guarantee about the contents of the export, given that the export doesn't represent a snapshot in time of the entire database. The database could be changing over time while the export happens, and the contents of all of the documents don't necessarily come from the point in time when you initiated the export. It's not possible to change this behavior. Your best bet is to simply lock down access to the database while the export is happening so you can guarantee for yourself some sort of consistency.
As such, exports are not suitable for what many people would consider a "backup". They're merely a convenience for saving and loading the contents of a database without having to write code.
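For reference, here is a minimal sketch of the scheduled export function described in the linked guide, written as a Node.js Cloud Function in TypeScript; the bucket name is a placeholder you would replace with your own Cloud Storage bucket:

import * as functions from 'firebase-functions';
import * as firestore from '@google-cloud/firestore';

const client = new firestore.v1.FirestoreAdminClient();
// Placeholder bucket; replace with your own Cloud Storage bucket.
const bucket = 'gs://BUCKET_NAME';

export const scheduledFirestoreExport = functions.pubsub
  .schedule('every 24 hours')
  .onRun(async () => {
    const projectId = process.env.GCP_PROJECT || process.env.GCLOUD_PROJECT || '';
    const databaseName = client.databasePath(projectId, '(default)');

    // Kicks off an export of all collections. The export is not a
    // point-in-time snapshot, so writes made during the run may be included.
    const [operation] = await client.exportDocuments({
      name: databaseName,
      outputUriPrefix: bucket,
      collectionIds: [], // empty array = export everything
    });
    console.log(`Export operation started: ${operation.name}`);
  });

If you need a consistent result, lock down writes (for example via security rules) before this function runs and restore them afterwards.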
I'm trying to link a document to another document in my database. As far as I understand, I can save either the ID or a reference to it.
What's the difference between those methods and which one suits me better?
A plain string document ID carries no context, while a Reference contains the full path to a document. For ease of use in custom objects, it can be useful to store the entire Reference.
When writing queries, you can also filter on a Reference field in the database by comparing it to another Reference.
That said, it's largely a personal preference. Anything a string can do, a Reference can also do. Storing a Reference saves you the trouble of building a new Reference object in your code, and you can also use it in security rules when it comes time to call get() to fetch the document referenced by a field.
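If it helps, here's a rough sketch of both approaches using the Node.js Admin SDK; the collection and document names are made up for illustration:

import * as admin from 'firebase-admin';

admin.initializeApp();
const db = admin.firestore();

async function linkExamples() {
  // Option 1: store just the document ID as a string.
  await db.collection('posts').add({
    title: 'Hello',
    authorId: 'alice', // you have to rebuild the path yourself later
  });

  // Option 2: store a DocumentReference, which carries the full path.
  const authorRef = db.doc('users/alice');
  await db.collection('posts').add({
    title: 'World',
    author: authorRef,
  });

  // Reference fields can be compared directly in queries.
  const postsByAlice = await db
    .collection('posts')
    .where('author', '==', authorRef)
    .get();
  console.log(postsByAlice.size);
}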
I am developing an app where users can upload and vote on TagImages. When someone checks into a TagTopic, I need to get the most popular images and the newest ones, i.e. perform a ranking operation.
How should I approach this?
You can use either Firestore or the Realtime Database, but in your case the Realtime Database may be the better fit because of the number of reads and writes. Create an object for each image that contains its metadata: the number of upvotes, possibly downvotes, who uploaded it, the upload time, tags, or anything else you want. Then in your app or website you make a query that reads, say, the top 5, and use the image names to fetch the files from Cloud Storage. For example:
images: {
  image1Name: {
    upvote: 10,
    downvote: 2,
    totalVote: 8,
    uploader: 'Remoo',
    uploadTime: '10:00:00AM 23/11/2021' // whatever structure you prefer
  }
}
Then you query them (this is a Flutter example):
_firebaseDatabase
    .reference()
    .child("images")
    // order ascending by vote total and take the last 5, i.e. the highest
    .orderByChild('totalVote')
    .limitToLast(5)
    .once()
    .then((snapshot) {
      // snapshot.value holds the top 5 image entries
    });
Then fetch image1Name from Cloud Storage.
Note that you can only use one orderByChild per query in the Realtime Database; Firestore, on the other hand, lets you combine multiple where clauses, but the reads and writes will cost more. Ultimately it's up to you and how you structure your data. Hope this works for you.
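For comparison, a rough sketch of what the Firestore version of that query could look like with the Node.js Admin SDK; the tag filter, field names, and collection name are illustrative, and a composite index may be required:

import * as admin from 'firebase-admin';

admin.initializeApp();
const db = admin.firestore();

async function topImagesForTag(tag: string) {
  // Multiple filters can be combined in Firestore: restrict to a tag
  // and still order by the vote total.
  const snap = await db
    .collection('images')
    .where('tag', '==', tag)
    .orderBy('totalVote', 'desc')
    .limit(5)
    .get();

  return snap.docs.map((doc) => ({ name: doc.id, ...doc.data() }));
}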
I'm currently stuck on my website with a CSV product import. I want the CSV to sit on the server so that I can simply update it and have all the data transferred over to my products instantly, without having to repeatedly upload the CSV.
Does anyone know a way to make this happen automatically?
Thanks for your help!
A simple way to do this is a function that runs on every page load. The function checks whether a CSV file is present and, if so, starts the import. But there are weaknesses to this approach: the import process will block the page load, and when the function checks for the CSV file, the file may not be completely uploaded yet because the FTP transfer is still running at that moment. (see here)
A better way is to use a webhook to invoke the import function. The webhook should ideally be triggered by the process that transfers the CSV file to the server, after the transfer completes.
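As a rough sketch of such a webhook, assuming a Node.js/Express server; the endpoint path, CSV location, and the importProducts helper are hypothetical:

import express from 'express';
import { readFileSync } from 'fs';

const app = express();

// Hypothetical import routine: read the CSV and update the products.
function importProducts(csvPath: string): number {
  const rows = readFileSync(csvPath, 'utf8')
    .split('\n')
    .filter((line) => line.trim().length > 0)
    .map((line) => line.split(','));
  // ... map rows onto your product records here ...
  return rows.length;
}

// The transfer process calls this endpoint once the upload is complete,
// so the import never runs against a half-uploaded file.
app.post('/hooks/product-csv-uploaded', (req, res) => {
  const imported = importProducts('/var/data/products.csv');
  res.status(200).send(`Imported ${imported} rows`);
});

app.listen(3000);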
I want to migrate the data that already exists in my Firebase database: add a new field to all children and manipulate an existing field (i.e. a -> a + 3).
Since there is no real frontend available to do that, I wonder how it could be done?
If there is no real front end then:
If the database is small and you are using the Realtime Database, download the JSON and edit it
If the database is large, since you have no front end you should do it with Functions
How to do it with Functions
You have to create a Functions project with an HTTP request trigger; once you access that URL, the function queries the data and writes the new data for each result.
The simplest way to start is to follow this video. Do the same, but instead of returning something to the browser with send, just end the function with a 200 status code (if it worked).
I would recommend creating an extra node for verification, something like migration_march: false, and setting it to true once the migration is completed. That way you can avoid unintentional re-migrations; the function should check this flag as soon as the trigger starts.
Doing a query in Functions is much the same as doing it with any other SDK; see the Functions docs.
You will probably need to know how to work with promises, since your algorithm is going to be: run a query, and for each value found, set the new value and then move on to the next one; here is an illustrative video (couldn't find the original video).
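A minimal sketch of such an HTTP-triggered migration, assuming the Realtime Database, firebase-functions, and the Node.js Admin SDK; the items path, the newField value, and the migration_march flag are illustrative:

import * as functions from 'firebase-functions';
import * as admin from 'firebase-admin';

admin.initializeApp();

export const migrateMarch = functions.https.onRequest(async (req, res) => {
  const db = admin.database();

  // Guard node so the migration cannot run twice by accident.
  const flagSnap = await db.ref('migration_march').once('value');
  if (flagSnap.val() === true) {
    res.status(200).send('Migration already done, nothing to do.');
    return;
  }

  // Read every child under /items (illustrative path) and build one update.
  const itemsSnap = await db.ref('items').once('value');
  const updates: { [path: string]: unknown } = {};
  itemsSnap.forEach((child) => {
    const value = child.val();
    updates[`${child.key}/newField`] = 'default';   // add a new field
    updates[`${child.key}/a`] = (value.a ?? 0) + 3; // a -> a + 3
  });

  await db.ref('items').update(updates);
  await db.ref('migration_march').set(true);

  res.status(200).send(`Migrated ${itemsSnap.numChildren()} children.`);
});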
For example, I want to automate the process of producing a daily report for selected events and the time that unique users spend on a specific event. Even better if I can customize the reporting information and have it generated as an Excel sheet automatically. Any suggestions or ideas would be much appreciated.
You could use a Cloud Function to generate your Excel report, for example with excel4node (https://www.npmjs.com/package/excel4node).
To call this Cloud Function regularly, trigger it via HTTP from a cron job.
Have a look at:
https://firebase.google.com/docs/functions/http-events
and
https://www.youtube.com/watch?v=CbE2PzvAMxA
Note: generating PDFs from the Cloud Function with pdfmake also works quite well.
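A minimal sketch of such a function, assuming firebase-functions and excel4node; the report rows are hard-coded placeholders where you would query your Analytics/BigQuery export:

import * as functions from 'firebase-functions';
import * as xl from 'excel4node';

// HTTP-triggered function; invoke it from your cron job or scheduler.
export const dailyEventReport = functions.https.onRequest(async (req, res) => {
  // Placeholder data; in a real report you would query your analytics
  // export (e.g. BigQuery) here instead.
  const rows = [
    { event: 'checkout', users: 42, avgSeconds: 87 },
    { event: 'search', users: 130, avgSeconds: 23 },
  ];

  const wb = new xl.Workbook();
  const ws = wb.addWorksheet('Daily report');

  ws.cell(1, 1).string('Event');
  ws.cell(1, 2).string('Unique users');
  ws.cell(1, 3).string('Avg duration (s)');

  rows.forEach((row, i) => {
    ws.cell(i + 2, 1).string(row.event);
    ws.cell(i + 2, 2).number(row.users);
    ws.cell(i + 2, 3).number(row.avgSeconds);
  });

  // Send the workbook back as the HTTP response (or upload it to Storage).
  const buffer = await wb.writeToBuffer();
  res
    .set('Content-Type', 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet')
    .send(buffer);
});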