Store Firebase Realtime Database data in Firestore? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 2 years ago.
I'm trying to figure out whether it's possible to automatically store Firebase Realtime Database data as separate documents in Firestore. I've thought of using Cloud Functions to achieve this, but I'm not familiar with JavaScript at all.
I'm an engineering student working on an automated garden project. I decided it would be cool to have an app that shows the real-time data (updated every 5 minutes) as well as historical data with all of the previous update values. I've got an ESP8266, programmed from the Arduino IDE, updating my Realtime Database with the sensor values, but I can't figure out the best way to store these values so I can view the historical data. If there is a better way of doing this, I'm all ears.

Yes, it's possible, and yes, you can use Cloud Functions for this. You don't have to know JavaScript: Cloud Functions on Google Cloud Platform supports several languages. The Firebase tools only support JavaScript, but you don't have to go through them if you don't want to.
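Whatever language you pick, the function's job is just to transform each Realtime Database write into a Firestore document. A minimal, language-agnostic sketch of that transformation in Python follows; the field names (`temperature`, `soilMoisture`, `recordedAt`), the helper name, and the ISO-timestamp document-ID scheme are illustrative assumptions, and the actual Firestore write call is omitted so the sketch stays self-contained:

```python
from datetime import datetime, timezone

def reading_to_document(reading: dict, ts: datetime) -> tuple[str, dict]:
    """Turn one Realtime Database sensor snapshot into a
    (document_id, document_body) pair for a hypothetical Firestore
    'history' collection. Using the timestamp as the document ID keeps
    the historical readings naturally ordered and collision-free
    (one reading every 5 minutes)."""
    doc_id = ts.strftime("%Y-%m-%dT%H:%M:%SZ")
    body = dict(reading)                  # copy the sensor fields as-is
    body["recordedAt"] = ts.isoformat()   # explicit field for range queries
    return doc_id, body

# Example: the kind of payload the ESP8266 might write to the database
reading = {"temperature": 23.5, "soilMoisture": 0.41}
doc_id, body = reading_to_document(
    reading, datetime(2021, 11, 23, 10, 0, tzinfo=timezone.utc))
```

Inside a real Cloud Function you would run this on every database write trigger and pass the resulting pair to the Firestore client library.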

Related

Where do I put the ranking functions for Firebase Storage? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 1 year ago.
I am developing an app in which users can upload and vote on TagImages. When someone checks into a TagTopic, the app should show the most popular images as well as the newest ones, so I need to perform this ranking operation.
How should I approach this?
I think you need either Firestore or the Realtime Database; in your case the Realtime Database would be better because of the number of reads and writes. What you could do is create an object for each image containing its metadata: the number of upvotes, maybe downvotes too, who uploaded it, the upload time, tags, or anything else you want. Then, in your app or website, you make a query that reads, say, the top 5 entries, and use the image names to fetch the files from Cloud Storage. For example:
images: {
  image1Name: {
    upvote: 10,
    downvote: 2,
    totalvote: 8,
    uploader: 'Remoo',
    uploadTime: '10:00:00AM 23/11/2021' // whatever structure you prefer
  }
}
Then you query them (this is a Flutter example; note limitToLast rather than limitToFirst, since orderByChild sorts ascending, so the highest vote counts come last):
_firebaseDatabase
    .reference()
    .child("images")
    .orderByChild('totalvote')
    .limitToLast(5)
    .once()
    .then((snapshot) {...});
Then fetch image1Name from Cloud Storage.
Note that you can only use one orderByChild per query; Firestore, on the other hand, lets you chain multiple where clauses, but at a higher cost in reads and writes. Ultimately it's up to you and how you structure the data. Hope this works for you.
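The ranking step itself is plain data manipulation. Here is a small sketch in Python mirroring the vote fields in the metadata structure above; the helper name `top_images` is made up for illustration:

```python
def top_images(images: dict, n: int = 5) -> list[str]:
    """Given the metadata map keyed by image name, return the n image
    names with the highest net vote count. Computing the net count
    client-side avoids storing a derived field, though storing it (as
    the 'totalvote' field above does) is what lets the database perform
    the ordering server-side instead."""
    ranked = sorted(
        images,
        key=lambda name: images[name]["upvote"] - images[name]["downvote"],
        reverse=True,
    )
    return ranked[:n]

# Hypothetical metadata map, shaped like the structure above
images = {
    "image1Name": {"upvote": 10, "downvote": 2},
    "image2Name": {"upvote": 7, "downvote": 0},
    "image3Name": {"upvote": 3, "downvote": 5},
}
```

The returned names are then the keys you use to download the actual files from Cloud Storage.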

Is Firestore a good choice for time series data? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 3 years ago.
I have a small project involving some simple financial time-series data with some real-time components on the front end. I was hoping to use the Firebase infrastructure since it offers a lot of things without having to set up much infrastructure, but upon investigating it doesn't seem to be a good choice for storing time series data.
Admittedly, I have more experience with relational databases so it's possible I am asking an extremely basic question. If I were to use Firestore to store time-series data, could someone provide an example of how one might structure it for efficient querying?
Am I better served using something like Postgres?
Your best bet would probably be a dedicated time-series database. Warp 10 (https://www.warp10.io) has already been mentioned.
The benefit of something like Warp 10 is the ability to query on the time component of your data. I believe Firebase only has simple greater-than/less-than queries available for time.
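If you do stay on Firestore, one common workaround for efficient time-range reads in a document store is bucketing: one document per fixed interval holding an array of samples, so a date-range query touches one document per interval rather than one per sample. A sketch of the bucketing logic in Python; the per-day granularity and the `t`/`v` field names are illustrative assumptions, not a Firestore API:

```python
from collections import defaultdict
from datetime import datetime

def bucket_by_day(samples: list) -> dict:
    """Group (timestamp, value) samples into per-day buckets. Each key
    would become one document ID (e.g. prices/2022-03-01), so fetching
    a month of data costs at most ~31 document reads regardless of how
    many samples each day holds."""
    buckets = defaultdict(list)
    for ts, value in samples:
        buckets[ts.strftime("%Y-%m-%d")].append(
            {"t": ts.isoformat(), "v": value})
    return dict(buckets)

# Hypothetical intraday price samples
samples = [
    (datetime(2022, 3, 1, 9, 30), 101.2),
    (datetime(2022, 3, 1, 16, 0), 103.7),
    (datetime(2022, 3, 2, 9, 30), 99.8),
]
buckets = bucket_by_day(samples)
```

The trade-off is that Firestore documents are capped in size, so very dense series need finer buckets (hourly rather than daily), which is the kind of tuning a purpose-built time-series database does for you.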

How scalable is Firebase dashboard? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question appears to be off-topic because it lacks sufficient information to diagnose the problem. Describe your problem in more detail or include a minimal example in the question itself.
Closed 8 years ago.
In the case of a rather large dataset with a million+ objects, how scalable is the Firebase dashboard (web interface)?
On a test project, all changes to the dataset are immediately propagated to the browser. In the case of a large project, though, will the browser still be able to handle it?
If I have the index structure:
update_index: {
  object_00000001: {},
  object_00000002: {},
  ...
  object_99999999: {}
}
and there are constant changes to various elements, is there a way to only indicate a change in the dataset, without passing the data to a snapshot, and then propagate the changes on user request?
How is this handled in the Firebase dashboard?
It's difficult to understand what you're asking. Assuming you mean Forge for the dashboard: Forge will load all the data in your Firebase, which can be an expensive operation and can definitely be slow. Additionally, opening an object with more than a couple hundred keys becomes quite slow.
As far as I'm aware, every read operation in Firebase is done with a .on or a .once, and while you can listen for the 'child_changed' event type, the only way to receive the read data is through a snapshot.
If you're referring to Forge in your question, this may help: Performance of Firebase with large data sets

Advice on structuring an social science experiment in Meteor [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
I am playing around with Meteor as a framework for building web-deployed economics experiments. I am seeking advice on how to structure the app.
Here is the logic:
Users create an account and are put in a queue.
Groups of size N are formed from users in the queue.
Each group proceeds together through a series of stages.
Each stage consists of:
a. The display of information to and collection of simple input from group members.
b. Computing new information based on the inputs of all group members.
Groups move from stage to stage together, because the group data from one stage provides information needed in the next stage.
When the groups complete the last stage, data relating to their session is saved for analysis.
If a user loses their connection, they can rejoin by logging in and resume the state they were in before.
It is much like a multiplayer game, and I know there are many examples of those, so perhaps the answer to my question is just a pointer to a similarly structured and well-developed Meteor game.
I'm writing a framework for deploying multi-user experiments on Meteor. It basically does everything you asked for and a lot more.
https://github.com/HarvardEconCS/turkserver-meteor
We should talk! I'm currently using this for a couple of experiments but it is not well documented yet. However, it is a redesign of https://github.com/HarvardEconCS/TurkServer so all of the important features are there. Also, I'm looking to broaden out from using Amazon Mechanical Turk for subject recruitment, and Meteor makes that easy.
Even if you're not interested in using this, you should check out the source to see how people are split into groups and kept segregated. The original idea came from here, but I've expanded on it significantly so that it is automagically managed by the smart package instead of you having to do it yourself.

Modularization of PL/SQL packages [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
Currently I am working on a restructuring project focused on the Oracle PL/SQL packages in our company. It involves many of the company's core packages. We have never had documentation for the back-end work done so far, and the intention of this project is to create a new, structured set of APIs based on the current logic, while dropping all the unwanted logic that currently exists in the system.
We are also building a new module for the main business of the organization that will work on top of these newly created back-end APIs.
As I started this project, I found that most of the wrapper APIs ran to more than 8,000 lines of code. I managed to convert this code into many small, single-purpose APIs invoked from the wrapper API.
This has been a time-consuming process, but by calling independent APIs for each piece of business functionality I was able to cut the wrapper API down to just 900 lines.
I would like to know from you experts whether this way of modularizing the code is good and worth the time invested, as I am not sure it has many performance benefits.
From a readability perspective it is definitely helping: I can now understand the 8,000 lines of code much better after the restructuring, and I am sure the other developers in my organization will too.
Please let me know if I am doing the right thing, and if it has advantages beyond readability, please mention them.
Also, is it okay to have more than 1,000 lines of code in a wrapper API?
Modularized code like this is generally worth the effort. Beyond readability, it is:
- Easy to debug
- Easy to update
- Easy to modify/maintain
- Less prone to ripple effects from changes, thanks to low coupling
- More reusable, if the modules are made generic
- Easier to scan for unused code
