Firebase, is there anything better than a cloud scheduler? [closed]

When information is stored in Firestore, each document stores a specific time in the future, and at that time an event should occur in the user's app.
The first approach I found was the Cloud Functions Pub/Sub scheduler. However, I could not use it because its schedule is fixed, while each document carries its own trigger time.
The second method was Cloud Functions + Cloud Tasks. I referenced this article: https://medium.com/firebase-developers/how-to-schedule-a-cloud-function-to-run-in-the-future-in-order-to-build-a-firestore-document-ttl-754f9bf3214a
This did exactly what I wanted, but Cloud Tasks has a fatal drawback: a task can only be scheduled up to 30 days in advance. In other words, times more than 30 days in the future cannot be handled this way.
I want these events to be stored long term, and I want the solution to stay reasonably smooth under heavy traffic.
I'm using Flutter/Firebase. How can I implement the requirements above?
Thank you for reading, and happy new year.

I know this is not a satisfying answer, but my previous company had similar challenges, and we ended up using the Cloud Functions Pub/Sub scheduler to invoke a function every minute. Basically, we invoked the Cloud Function every minute and checked whether there were unprocessed items in the queue. I wish there were a better way of scheduling a function in the future, but I believe this is as good as it gets right now.
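A minimal sketch of that polling approach, assuming the Node.js Admin SDK and hypothetical collection and field names (scheduledEvents, executeAt, processed) that you would adapt to your own schema:

import * as functions from "firebase-functions";
import * as admin from "firebase-admin";

admin.initializeApp();

// Runs every minute and drains events whose scheduled time has passed.
export const processDueEvents = functions.pubsub
  .schedule("every 1 minutes")
  .onRun(async () => {
    const now = admin.firestore.Timestamp.now();

    // Events due now or earlier that have not been handled yet.
    const due = await admin
      .firestore()
      .collection("scheduledEvents")
      .where("processed", "==", false)
      .where("executeAt", "<=", now)
      .limit(500) // cap each run so a large backlog drains gradually
      .get();

    const batch = admin.firestore().batch();
    due.docs.forEach((doc) => {
      // ...trigger whatever the event is supposed to do here...
      batch.update(doc.ref, { processed: true });
    });
    await batch.commit();
  });

Note that combining an equality filter with a range filter like this requires a composite index, and the one-minute schedule bounds how precise the event timing can be; both seem like acceptable trade-offs for events scheduled months out.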

Related

How can I use data filtering in my app? [closed]

I have been studying Java since August 2020, and I'm making a recipe app for my school project.
I'm using Android Studio and Java; I have finished the front-end UI and every layout of my app, and connected it to Firebase.
In my app, I want users to add their food and health data and get recipes that fit that data.
At this point, I want to know what tool or program I should use to build my recipe recommendation algorithm and apply it to my app and Firebase.
I have searched many questions and posts on Google and couldn't find what to use.
This is my first complicated app, and I don't know much about programming.
I have been stuck at this point for five days. Can somebody help me?
This question can have a vast number of answers; it depends on your database structure, and it is up to you to decide which approach to pick.
You need to search the data on two bases: ingredients and health data.
One approach could be the following.
For the ingredients part, this would do the job:
Include an array of tags in each recipe document listing the recipe's ingredients, which you can then query with array-contains.
As for the health data part, I'm not quite sure what you mean by filtering based on health, but let's say you are going to search for recipes based on the calories the user needs in a day. After calculating the required daily calories, it would be:
Include a totalCalories field in each recipe document giving the total calories of the recipe, which you can query with whereEqualTo or whereLessThan.
I suggest you read the Firebase documentation on executing queries to learn about all the possible ways.
Perform simple and compound queries in Cloud Firestore
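To make the two queries concrete, here is a sketch using the Firebase web SDK; the field names ingredients and totalCalories are assumptions matching the approach above, not a fixed schema:

import { initializeApp } from "firebase/app";
import { getFirestore, collection, query, where, getDocs } from "firebase/firestore";

const app = initializeApp({ /* your Firebase config */ });
const db = getFirestore(app);

// Recipes containing a given ingredient (hypothetical "ingredients" array field).
async function recipesWithIngredient(ingredient: string) {
  const q = query(
    collection(db, "recipes"),
    where("ingredients", "array-contains", ingredient)
  );
  return (await getDocs(q)).docs.map((d) => d.data());
}

// Recipes at or under a calorie budget (hypothetical "totalCalories" field).
async function recipesUnderCalories(maxCalories: number) {
  const q = query(
    collection(db, "recipes"),
    where("totalCalories", "<=", maxCalories)
  );
  return (await getDocs(q)).docs.map((d) => d.data());
}

On Android in Java the equivalents are whereArrayContains and whereLessThanOrEqualTo. Keep in mind that Firestore allows only one array-contains clause per query, so combining several ingredient filters takes array-contains-any or multiple queries merged client-side.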

How to make a choice between OpenTSDB and InfluxDB or other TSDS? [closed]

Both are open-source distributed time series databases: OpenTSDB for metrics, InfluxDB for metrics and events. InfluxDB has no external dependencies, whereas OpenTSDB is built on HBase.
Are there any other comparisons between them?
And if I want to store and query/analyze time series metrics in real time with no deterioration or loss, which would be better?
At one conference I heard of people running something like Graphite/OpenTSDB for collecting metrics centrally, and InfluxDB locally on each server to collect metrics only for that server (InfluxDB was chosen for local storage because it is easy to deploy and light on memory).
This is not directly related to your question, but the idea appealed to me, so I wanted to share it.
Warp 10 is another option worth considering (I'm part of the team building it), check it out at http://www.warp10.io/.
It is based on HBase, but it also has a standalone version that works fine for volumes in the low hundreds of billions of data points, so it should fit most use cases out there.
Among the strengths of Warp 10 is the WarpScript language which is built from the ground up for manipulating (Geo) Time Series.
Yet another open-source option is blueflood: http://blueflood.io.
Disclaimer: like Paul Dix, I'm biased by the fact that I work on Blueflood.
Based on your short list of requirements, I'd say Blueflood is a good fit. If you can specify the size of your dataset, the type of analysis you need to run, or any other requirements that make your project unique, we could steer you towards a more precise answer. Without knowing more about what you want to do, it's hard to answer more meaningfully.

How scalable is Firebase dashboard? [closed]

In the case of a rather large dataset with a million-plus objects, how scalable is the Firebase dashboard (web interface)?
On a test project, all changes in the dataset are immediately propagated to the browser. But in the case of a large project, will the browser be able to handle it?
If I have the index structure:
update_index: {
  object_00000001: {},
  object_00000002: {},
  ...
  object_99999999: {}
}
and there are constant changes to various elements, is there a way to only indicate a change in the dataset, without passing the data to the snapshot, and to propagate the changes on user request?
How is this handled in the Firebase dashboard?
It's difficult to understand what you're asking. Assuming you mean Forge for the dashboard: Forge will load all the data in your Firebase, which can be an expensive operation and can definitely be slow. Additionally, opening an object with more than a couple hundred keys becomes quite slow.
Every read operation in Firebase is done with .on or .once as far as I'm aware, and you can listen for the 'child_changed' event type, but the only way to handle the read data is with a snapshot.
If you're referring to Forge in your question, this may help: Performance of Firebase with large data sets
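Since the database always delivers a snapshot, the closest workaround is to discard the payload client-side and remember only which keys changed, deferring the real read until the user asks for it. A sketch with the current modular JS SDK, using the update_index path from the question:

import { initializeApp } from "firebase/app";
import { getDatabase, ref, child, onChildChanged, get } from "firebase/database";

const app = initializeApp({ /* your Firebase config */ });
const db = getDatabase(app);

// Track only WHICH children changed; drop their payloads immediately.
const dirtyKeys = new Set<string>();

const indexRef = ref(db, "update_index");
onChildChanged(indexRef, (snapshot) => {
  // The snapshot still crosses the wire, but we keep only the key.
  if (snapshot.key) dirtyKeys.add(snapshot.key);
});

// On user request, fetch the current data for the changed keys only.
async function refreshOnDemand() {
  for (const key of dirtyKeys) {
    const snap = await get(child(indexRef, key));
    console.log(key, snap.val());
  }
  dirtyKeys.clear();
}

This does not reduce the data transferred for the index itself, so it helps most when the update_index children are small markers (say, version counters) and the heavy objects live elsewhere in the tree.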

Advice on structuring a social science experiment in Meteor [closed]

I am playing around with Meteor as a framework for building web-deployed economics experiments. I am seeking advice on how to structure the app.
Here is the logic:
Users create an account and are put in a queue.
Groups of size N are formed from users in the queue.
Each group proceeds together through a series of stages.
Each stage consists of:
a. The display of information to and collection of simple input from group members.
b. Computing new information based on the inputs of all group members.
Groups move from stage to stage together, because the group data from one stage provides information needed in the next stage.
When the groups complete the last stage, data relating to their session is saved for analysis.
If a user loses their connection, they can rejoin by logging in and resume the state they were in before.
It is much like a multiplayer game, and I know there are many examples of those, so perhaps the answer to my question is just a pointer to a similarly structured and well-developed Meteor game.
I'm writing a framework for deploying multi-user experiments on Meteor. It basically does everything you asked for and a lot more.
https://github.com/HarvardEconCS/turkserver-meteor
We should talk! I'm currently using this for a couple of experiments but it is not well documented yet. However, it is a redesign of https://github.com/HarvardEconCS/TurkServer so all of the important features are there. Also, I'm looking to broaden out from using Amazon Mechanical Turk for subject recruitment, and Meteor makes that easy.
Even if you're not interested in using this, you should check out the source for how people are split into groups and kept segregated. The original idea came from here, but I've expanded on it significantly so that it is automagically managed by the smart package instead of having to do it yourself.
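For the queue-to-group step specifically, here is a minimal Meteor-style sketch; the collections, the GROUP_SIZE constant, and the method name are all invented for illustration and are not how turkserver-meteor actually structures it:

import { Meteor } from "meteor/meteor";
import { Mongo } from "meteor/mongo";

// Hypothetical collections for waiting users and formed groups.
const Queue = new Mongo.Collection<{ _id?: string; userId: string }>("queue");
const Groups = new Mongo.Collection<{
  _id?: string;
  members: string[];
  stage: number;                     // index of the stage the group is on
  inputs: Record<string, unknown>[]; // collected input, one entry per stage
}>("groups");

const GROUP_SIZE = 3; // the "N" from the question

Meteor.methods({
  // Called by a user after account creation to enter the queue.
  joinQueue() {
    const userId = Meteor.userId();
    if (!userId) throw new Meteor.Error("not-logged-in");
    Queue.insert({ userId });

    // Form a group as soon as N users are waiting.
    const waiting = Queue.find({}, { limit: GROUP_SIZE }).fetch();
    if (waiting.length === GROUP_SIZE) {
      Groups.insert({
        members: waiting.map((w) => w.userId),
        stage: 0,
        inputs: [],
      });
      waiting.forEach((w) => Queue.remove(w._id!));
    }
  },
});

Because all group state lives in the database rather than in the session, a user who drops and logs back in can resubscribe to their group document and resume exactly where they left off, which covers the reconnection requirement. In production you would also need to guard group formation against two method calls interleaving.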

Modularization of PL/SQL packages [closed]

I am currently doing a restructuring project, mainly on the Oracle PL/SQL packages in our company. It involves working on many of the company's core packages. We have never had documentation for the back-end work done so far, and the intention of this project is to create a new set of APIs, based on the current logic, in a structured way, while removing all the unwanted logic that currently exists in the system.
We are also building a new module for the main business of the organization that will work on top of these newly created back-end APIs.
As I started this project, I found that most of the wrapper APIs had more than 8,000 lines of code. I managed to convert this code into many single-purpose APIs and invoked them from the wrapper API.
This has been a time-consuming activity, but by calling independent APIs for each piece of business functionality I was able to cut the wrapper API down to just 900 lines.
I would like to know from you experts whether this way of modularizing the code is good and worth the time invested, as I am not sure it has many performance benefits.
From a code readability perspective, however, it is definitely helping: after the restructuring I understand the former 8,000 lines much better, and I am sure the other developers in my organization will too.
Please let me know whether I am doing the right thing, and if it has advantages apart from readability, please mention them. Sorry for the long explanation.
Also, is it okay to have more than 1,000 lines of code in a wrapper API?
Yes, this kind of modularization is worth the effort. Among its advantages:
Easy to debug
Easy to update
Easy to modify/maintain
Less change proneness due to low coupling
Increased reuse if the modules are made generic
Unused code can be identified easily
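To make the wrapper-plus-modules pattern concrete, here is a sketch in TypeScript rather than PL/SQL, with invented names (validateOrder, priceOrder, persistOrder); the shape is the same regardless of language: the wrapper shrinks to orchestration while each module owns one piece of business functionality.

interface Order {
  id: string;
  items: { sku: string; qty: number; unitPrice: number }[];
  total?: number;
}

// Each module does one job and can be tested and reused independently.
function validateOrder(order: Order): void {
  if (order.items.length === 0) throw new Error("order has no items");
}

function priceOrder(order: Order): Order {
  const total = order.items.reduce((sum, i) => sum + i.qty * i.unitPrice, 0);
  return { ...order, total };
}

function persistOrder(order: Order): void {
  // the database write would go here
  console.log(`saving order ${order.id} (total ${order.total})`);
}

// The wrapper now reads like a table of contents for the business process,
// which is the 8,000-to-900-line effect described above.
export function processOrder(order: Order): void {
  validateOrder(order);
  const priced = priceOrder(order);
  persistOrder(priced);
}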
