In the case of a rather large dataset with a million+ objects, how scalable is the Firebase dashboard (web interface)?
On a test project, all changes in the dataset are immediately propagated to the browser. But in the case of a large project, will the browser still be able to handle that?
If I have the index structure:
update_index: {
  object_00000001: { },
  object_00000002: { },
  ...
  object_99999999: { }
}
and there are constant changes to various elements, is there a way to indicate only that the dataset has changed, without passing the data into a snapshot, and then propagate the changes on user request?
How is this handled in the Firebase dashboard?
It's difficult to understand what you're asking. Assuming you mean Forge for the dashboard: Forge will load all the data in your Firebase, which can be an expensive operation and can definitely be slow. Additionally, opening an object with any more than a couple hundred keys becomes quite slow.
Every read operation in Firebase is done with a .on() or a .once(), as far as I'm aware. You can listen for the 'child_changed' event type, but the only way to handle the read data is through a snapshot.
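For illustration, here is a minimal sketch of that approach, assuming the Firebase JS SDK (v8-style API) and the update_index path from the question; with child_changed, each snapshot carries only the changed child, not the whole million-object index:

    // A minimal sketch, assuming the Firebase JS SDK (v8-style API) and the
    // update_index path from the question above.
    import firebase from "firebase/app";
    import "firebase/database";

    const ref = firebase.database().ref("update_index");

    // Fires once per changed child; the snapshot holds only that child's
    // data, not the entire index.
    ref.on("child_changed", (snapshot) => {
      console.log("changed:", snapshot.key); // e.g. "object_00000042"
      // Defer fetching anything heavier until the user actually asks:
      // ref.child(String(snapshot.key)).once("value").then(/* render */);
    });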
If you're referring to Forge in your question, this may help: Performance of Firebase with large data sets
I'm trying to figure out whether it's possible to automatically have Firebase Realtime Database data stored as separate documents in Firestore. I've thought of using Cloud Functions to achieve this, but I'm not familiar with JavaScript at all.
I'm an engineering student working on an automated garden project. I decided it would be cool to have an app that shows the real-time data (updated every 5 minutes) as well as historical data listing all of the previous update values. I've got an ESP8266, programmed in the Arduino IDE, updating my Realtime Database with the sensor values, but I can't figure out the best way to store these values so I can view the historical data. If there is a better way of doing this, I'm all ears.
Yes, it's possible, and yes, you can use Cloud Functions for this. You don't have to be familiar with JavaScript, as Cloud Functions supports several languages on Google Cloud Platform. The Firebase tools only support JavaScript, but you don't have to use them if you don't want to.
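As a sketch of what that can look like in TypeScript (the /readings/{readingId} path and the field names here are hypothetical, invented for illustration), a Cloud Function can mirror every new Realtime Database value into its own Firestore document:

    // A minimal sketch, assuming firebase-functions (v1 API) and
    // firebase-admin. The /readings/{readingId} path is hypothetical.
    import * as functions from "firebase-functions";
    import * as admin from "firebase-admin";

    admin.initializeApp();

    // Copies every new Realtime Database reading into its own Firestore
    // document, building up the historical record automatically.
    export const mirrorReading = functions.database
      .ref("/readings/{readingId}")
      .onCreate(async (snapshot, context) => {
        await admin
          .firestore()
          .collection("readings")
          .doc(context.params.readingId)
          .set({
            ...snapshot.val(),
            recordedAt: admin.firestore.FieldValue.serverTimestamp(),
          });
      });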
I am playing around with Meteor as a framework for building web-deployed economics experiments. I am seeking advice on how to structure the app.
Here is the logic:
1. Users create an account and are put in a queue.
2. Groups of size N are formed from users in the queue.
3. Each group proceeds together through a series of stages.
4. Each stage consists of:
   a. the display of information to, and the collection of simple input from, group members;
   b. computing new information based on the inputs of all group members.
5. Groups move from stage to stage together, because the group data from one stage provides information needed in the next stage.
6. When a group completes the last stage, the data relating to its session is saved for analysis.
7. If a user loses their connection, they can rejoin by logging in and resume the state they were in before.
It is much like a multiplayer game, and I know there are many examples of those, so perhaps the answer to my question is just a pointer to a similarly structured and well-developed Meteor game.
I'm writing a framework for deploying multi-user experiments on Meteor. It basically does everything you asked for and a lot more.
https://github.com/HarvardEconCS/turkserver-meteor
We should talk! I'm currently using this for a couple of experiments but it is not well documented yet. However, it is a redesign of https://github.com/HarvardEconCS/TurkServer so all of the important features are there. Also, I'm looking to broaden out from using Amazon Mechanical Turk for subject recruitment, and Meteor makes that easy.
Even if you're not interested in using this, you should check out the source for how people are split into groups and kept segregated. The original idea came from here, but I've expanded on it significantly so that it is automagically managed by the smart package instead of having to do it yourself.
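For a flavor of the queue-to-group step, here is a hypothetical sketch using plain Meteor collections and a fixed group size; all collection and field names are invented for illustration, and turkserver-meteor handles this, plus reconnection, for you:

    // A hypothetical sketch: users join a queue, and a group is formed as
    // soon as GROUP_SIZE of them are waiting.
    import { Meteor } from "meteor/meteor";
    import { Mongo } from "meteor/mongo";

    interface QueueEntry { _id?: string; userId: string; joinedAt: Date; }

    const GROUP_SIZE = 4; // "N" from the question
    const Queue = new Mongo.Collection<QueueEntry>("queue");
    const Groups = new Mongo.Collection<any>("groups");

    Meteor.methods({
      joinQueue() {
        if (!this.userId) throw new Meteor.Error("not-logged-in");
        Queue.insert({ userId: this.userId, joinedAt: new Date() });

        if (Queue.find().count() >= GROUP_SIZE) {
          const members = Queue.find({}, { limit: GROUP_SIZE }).fetch();
          Groups.insert({
            memberIds: members.map((m) => m.userId),
            stage: 0, // the whole group advances through stages together
          });
          members.forEach((m) => Queue.remove(m._id!));
        }
      },
    });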
I am planning to create a login form for my system. Is it better to use ASP.NET's built-in authentication and role management, or to create my own? Which is better and more convenient? I want the administrators (a group of people) to be allowed to create users and assign roles to each specific user. Is that possible? Maybe the question is silly, but I'd appreciate your help.
Do NOT create your own authentication system!
Authentication is one of those things where it's easy to build something that seems to work — even passes a rigorous set of unit tests — but is actually flawed in subtle ways that you won't find out about until six months after you get hacked.
The best thing to do is lean as much as possible on the authentication features provided by your platform of choice. If the platform doesn't already provide something suitable, find an existing third-party option that is suitable. What you want is something that is battle-tested; that when a flaw is discovered (there always are some) it's likely because of a break on someone else's system, not your own, and you can just apply the vendor patch to fix it, before your site is really compromised.
I am currently doing a restructuring project, mainly of the Oracle PL/SQL packages in our company; it involves many of our core packages. We have never had documentation for the back-end work done so far, and the intention of this project is to create a new, structured set of APIs based on the current logic, while dropping all the unwanted logic that currently exists in the system.
We are also building a new module for the organization's main line of business that will work on top of these newly created back-end APIs.
As I started this project, I found that most of the wrapper APIs ran to more than 8,000 lines of code. I managed to convert this code into many single-purpose APIs and invoked them from the wrapper API.
This has been a time-consuming activity in itself, but by calling an independent API for each piece of business functionality I was able to cut the wrapper API down to just 900 lines of code.
I would like to know from you experts whether this way of modularizing the code is good and worth the time invested, as I am not sure it has many performance benefits. From a code-readability perspective it is definitely helping: after the restructuring I understand the 8,000 lines of code much better, and I am sure the other developers in my organization will too. Please let me know whether I am doing the right thing, and if it has advantages beyond readability, please mention them. Sorry for the long explanation. Also, is it okay to have more than 1,000 lines of code in a wrapper API?
Yes, this kind of modularization is worth the time. Modular code is:
- easy to debug
- easy to update
- easy to modify and maintain
- less change-prone, thanks to low coupling
- more reusable, if the modules are made generic
- easier to scan for unused code
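To illustrate the shape of that refactor (a hypothetical sketch, written in TypeScript for brevity; the same structure applies to PL/SQL with packaged procedures in place of functions): the wrapper keeps only the orchestration, and each business step lives in its own small, testable unit.

    // A hypothetical sketch of the wrapper pattern; all names are invented.
    // Each step is a small, focused, independently testable unit.
    function validateOrder(orderId: number): void { /* ~100 focused lines */ }
    function reserveStock(orderId: number): void { /* ... */ }
    function computePricing(orderId: number): void { /* ... */ }
    function postToLedger(orderId: number): void { /* ... */ }

    // The former 8,000-line body reduces to a short, readable sequence of
    // calls; performance is essentially unchanged, since the same work runs.
    export function processOrder(orderId: number): void {
      validateOrder(orderId);
      reserveStock(orderId);
      computePricing(orderId);
      postToLedger(orderId);
    }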
I have a project idea for which I want to mine publicly available data on another website, data that it received through crowd-sourcing; this would give me initial data for my own project. To reiterate: I want to write a robot to grab data that is displayed on another website and use it for my own website. Does anyone know the legality of this sort of thing? Does the original website own the data that the crowd gave it? Even if so, can I use it?
Web scraping is a legally complicated issue.
The hassles of legal action and enforceability often keep scrapers from getting in trouble.
Outright duplication is considered actionable, although courts have ruled that "duplication of facts" is permitted (US).
I advise you read up here: http://en.wikipedia.org/wiki/Web_scraping#Legal_issues
Legally, you should be fine as long as the data is made available and the people have consented: you aren't hacking, and the other site has permission to share. Check for a license on the other site; if there isn't one, inquire, or be prepared for your access to be cut off at some point. And even though the data is publicly available, that doesn't mean the other site wants it to be.
Also, double-check that you don't inadvertently publish private data.