I am working with the Firebase database in a Unity project. I read that you want to structure your database as flat as possible for performance, so for a leaderboard I am structuring it like this:
users
  uid
    name
  uid
    name
leaderboard
  uid
    score
  uid
    score
I want a class that can get this data and trigger a callback when it is done. I run this in my constructor.
root.Child("leaderboard").OrderByChild("score").LimitToLast(_leaderboardCount).ValueChanged += onValueChanged;
To trigger the callback with the data I wrote this.
void onValueChanged(object sender, ValueChangedEventArgs args)
{
    int index = 0;
    List<string> names = new List<string>();
    foreach (DataSnapshot snapshot in args.Snapshot.Children)
    {
        root.Child("players").Child(snapshot.Key).GetValueAsync().ContinueWith(task =>
        {
            if (task.Result.Exists)
            {
                names.Add(task.Result.Child("name").Value.ToString());
                index++;
                if (index == args.Snapshot.ChildrenCount)
                {
                    _callback(names);
                }
            }
        });
    }
}
I am wondering if there is a better way to do this that I am missing? I'm worried if the tasks finish out of order my leaderboard will be jumbled. Can I get the names and scores in one snapshot?
Thanks!
You can only load:
a single complete node
a subset of all child nodes matching a certain condition under a single node
It seems that what you are trying to do is get a subset of the child nodes from multiple roots, which isn't possible. It actually looks like you're trying to do a join, which on Firebase is something you have to do with client-side code.
Note that loading the subsequent nodes is often a lot more efficient than developers expect, since Firebase can usually pipeline the requests (everything goes over a single web socket connection). For more on this, see http://stackoverflow.com/questions/35931526/speed-up-fetching-posts-for-my-social-network-app-by-using-query-instead-of-obse/35932786#35932786.
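If keeping the order is the concern, one option is to start the name lookups in leaderboard order and wait for all of them before invoking the callback. A minimal sketch of that client-side join (assuming the Firebase Unity SDK and a users/{uid}/name node as in your posted structure; adapt the node names to your project):

using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Firebase.Database;

public class LeaderboardLoader
{
    private readonly DatabaseReference root = FirebaseDatabase.DefaultInstance.RootReference;

    // Returns the top names, ordered the same way the leaderboard query returned them.
    public async Task<List<string>> GetTopNamesAsync(int count)
    {
        // One query for the ordered scores...
        DataSnapshot scores = await root.Child("leaderboard")
                                        .OrderByChild("score")
                                        .LimitToLast(count)
                                        .GetValueAsync();

        // ...then one name lookup per entry, started in leaderboard order.
        var nameTasks = scores.Children
            .Select(entry => root.Child("users").Child(entry.Key).Child("name").GetValueAsync())
            .ToList();

        // Task.WhenAll preserves the order of the input tasks in its result array,
        // so the names cannot come back jumbled even if the requests finish out of order.
        DataSnapshot[] names = await Task.WhenAll(nameTasks);
        return names.Select(s => s.Exists ? s.Value.ToString() : "(unknown)").ToList();
    }
}

You could call this from your class and invoke _callback with the result; if the callback touches Unity objects, marshal it back to the main thread (for example with ContinueWithOnMainThread from the Firebase extensions package).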
I have been running into this problem a few times now when I'm coding, and I think I just don't understand the order in which SwiftUI executes the code.
I have a method in my context model that gets data from Firebase, which I call in .onAppear. But the method doesn't execute its last line after running the whole for loop.
When I set breakpoints in different places, it seems that the code first runs straight through without entering the for loop, then returns to the method and does one pass of the for loop, then jumps to some other strange place and then back to the method again...
I guess I just don't get it?
Does it have something to do with the main/background thread? Can you help me?
Here is my code.
Part of my UI-view that calls the method getTeachersAndCoursesInSchool
VStack {
    //Title
    Text("Settings")
        .font(.title)
    Spacer()
    NavigationView {
        VStack {
            NavigationLink {
                ManageCourses()
                    .onAppear {
                        model.getTeachersAndCoursesInSchool()
                    }
            } label: {
                ZStack {
                    // ...
                }
            }
        }
    }
}
Here is the for-loop of my method:
//Get a reference to the teacher list of the school
let teachersInSchool = schoolColl.document("TeacherList")

//Get teacherlist document data
teachersInSchool.getDocument { docSnapshot, docError in
    if docError == nil && docSnapshot != nil {
        //Create temporary modelArr to append teachermodel to
        var tempTeacherAndCoursesInSchoolArr = [TeacherModel]()
        //Loop through all FB teachers collections in local array and get their teacherData
        for name in teachersInSchoolArr {
            //Get reference to each teachers data document and get the document data
            schoolColl.document("Teachers").collection(name).document("Teacher data").getDocument { teacherDataSnapshot, teacherDataError in
                //check for error in getting snapshot
                if teacherDataError == nil {
                    //Load teacher data from FB
                    //check for snapshot is not nil
                    if let teacherDataSnapshot = teacherDataSnapshot {
                        do {
                            //Set local variable to teacher data
                            let teacherData: TeacherModel = try teacherDataSnapshot.data(as: TeacherModel.self)
                            //Append teacher to total contentmodel array of teacherdata
                            tempTeacherAndCoursesInSchoolArr.append(teacherData)
                        } catch {
                            //Handle error
                        }
                    }
                } else {
                    //TODO: Error in loading data, handle error
                }
            }
        }
        //Assign all teacher and their courses to contentmodel data
        self.teacherAndCoursesInSchool = tempTeacherAndCoursesInSchoolArr
    } else {
        //TODO: handle error in fetching teacher Data
    }
}
The method assigns data correctly to the tempTeacherAndCoursesInSchoolArr but the method doesn't assign the tempTeacherAndCoursesInSchoolArr to self.teacherAndCoursesInSchool in the last line. Why doesn't it do that?
Most of Firebase's API calls are asynchronous: when you ask Firestore to fetch a document for you, it needs to communicate with the backend, and - even on a fast connection - that will take some time.
To deal with this, you can use two approaches: callbacks and async/await. Both work fine, but you might find that async/await is easier to read. If you're interested in the details, check out my blog post Calling asynchronous Firebase APIs from Swift - Callbacks, Combine, and async/await | Peter Friese.
In your code snippet, you use a completion handler to handle the document that getDocument returns once the asynchronous call completes:
schoolColl.document("Teachers").collection(name).document("Teacher data").getDocument { teacherDataSnapshot, teacherDataError in
// ...
}
However, the code that assigns tempTeacherAndCoursesInSchoolArr to self.teacherAndCoursesInSchool is outside of these inner completion handlers, so it will run before they have even been called.
You can fix this in a couple of ways:
Use Swift's async/await for fetching the data, and then use a task group (see Paul's excellent article about how they work) to fetch all the teachers' data in parallel and aggregate it once all of the data has been received; a minimal sketch follows below.
You might also want to consider using a collection group query - it seems like your data is structured in a way that should make this possible.
Generally, iterating over the elements of a collection and performing a Firestore query for each element is considered bad practice, as it drags down the performance of your app: it sends N+1 network requests when it could instead send one single network request (using a collection group query).
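For the first option, a rough sketch could look like the following (it assumes the async Firestore APIs available in recent SDK versions and reuses the names from your question; error handling is reduced to a TODO):

func getTeachersAndCoursesInSchool() {
    Task {
        do {
            // Fetch all teachers in parallel; the group only returns once every child task is done.
            let teachers: [TeacherModel] = try await withThrowingTaskGroup(of: TeacherModel.self) { group in
                for name in teachersInSchoolArr {
                    group.addTask {
                        let snapshot = try await schoolColl
                            .document("Teachers")
                            .collection(name)
                            .document("Teacher data")
                            .getDocument()
                        return try snapshot.data(as: TeacherModel.self)
                    }
                }
                var results = [TeacherModel]()
                for try await teacher in group {
                    results.append(teacher)
                }
                return results
            }
            // Published/UI state should be updated on the main actor.
            await MainActor.run {
                self.teacherAndCoursesInSchool = teachers
            }
        } catch {
            // TODO: handle errors
        }
    }
}

Note that the results arrive in completion order, not in the order of teachersInSchoolArr; sort them afterwards if the order matters.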
To optimize usage, I have a Firestore collection with only one document, consisting of a single field, which is an array of strings.
On the client side, the app simply retrieves the entire status document, picks one string at random, and then sends the entire array back minus the one it picked:
var all = await metaRef.doc("status").get();
List tokens = all['all'];
var r = Random();
int numar = r.nextInt(tokens.length);
var ales = tokens[numar];
tokens.removeAt(numar);
metaRef.doc("status").set({"all": tokens});
Then it tries to do some stuff with the string, which may fail or succeed. If it succeeds, then no more writing to the database, but if it fails it fetches that array again, adds the string back and pushes it:
var all = await metaRef.doc("status").get();
List tokens = all['all'];
List<String> toate = tokens.map((element) => element as String).toList();
toate.add(ales.toString());
metaRef.doc("status").set({"all": toate});
You can use the methods associated with the Set object in Firestore security rules.
Here is an example to check that only 1 item was removed:
allow update: if checkremoveonlyoneitem()

function checkremoveonlyoneitem() {
  // "array" stands for the name of your field ("all" in your document)
  let set = resource.data.array.toSet();
  let setafter = request.resource.data.array.toSet();
  // exactly one element removed: the set shrinks by one and exactly one element is missing afterwards
  return set.size() == setafter.size() + 1
    && set.difference(setafter).size() == 1;
}
Then you can check that only one item was added. And you should also add additional checks in case the array does not exist on your doc.
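The check for the add-back write can be mirrored in the same way (a sketch with the same assumptions; you would also combine it with the missing-field checks mentioned above):

function checkaddonlyoneitem() {
  let set = resource.data.array.toSet();
  let setafter = request.resource.data.array.toSet();
  // exactly one element added: the new set is one element larger and exactly one element is new
  return setafter.size() == set.size() + 1
    && setafter.difference(set).size() == 1;
}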
If you are not sure whether the app's task will succeed or not, then I think it is a good idea to implement this logic in the client code. You can add a simple conditional block that deletes the field from the document only when the operation succeeds, and handles the failure case (an offline condition or any other issue) separately. The following sample from the documentation shows how to do it. This way, with just one write, you can delete the field the user picks without updating the whole document.
city_ref = db.collection(u'cities').document(u'BJ')
city_ref.update({
    u'capital': firestore.DELETE_FIELD
})
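In the Flutter SDK you are using, the equivalent would be along these lines (a sketch, assuming the cloud_firestore package and your existing metaRef):

// Removes only the "all" field from the status document in a single write.
await metaRef.doc("status").update({"all": FieldValue.delete()});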
Is it possible to insert a battery_cost_change field multiple times, with a counter value, in Firebase under the same user message (LM2JKCacawaW7tlK4yK), like battery_cost_change1, battery_cost_change2, battery_cost_change3 and so on? And how can these fields be retrieved in Android programmatically?
You may wish to consider a db structure where a battery_cost_changes node holds multiple instances of the battery_cost_change object.
Every time you add a new battery_cost_change, you push the object onto the path .../LM2JKCacawaW7tlK4yK/battery_cost_changes. <pushId> is the random ID Firebase generates when an object is pushed/added to the node.
{
  "LM2JKCacawaW7tlK4yK": {
    ....
    "battery_cost_changes": {
      "<pushId>": {
        "battery_cost_change": 6
      },
      "<pushId>": {
        "battery_cost_change": 3
      }
    }
    ....
  }
}
While retrieving the values, you can read the path .../LM2JKCacawaW7tlK4yK/battery_cost_changes and observe it to get the list of battery_cost_change objects; a sketch of both the write and the read follows below. Please share more details to explore a better or more suitable answer. However, I feel this is the generic idea for storing multiple values under a node.
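A small sketch of both sides with the Android SDK (the parent path below is assumed; adjust it to wherever LM2JKCacawaW7tlK4yK actually lives in your tree):

DatabaseReference changesRef = FirebaseDatabase.getInstance()
        .getReference("messages/LM2JKCacawaW7tlK4yK/battery_cost_changes");

// Write: push() generates the random <pushId> for each new entry.
Map<String, Object> change = new HashMap<>();
change.put("battery_cost_change", 6);
changesRef.push().setValue(change);

// Read: iterate the children once and collect every stored value.
changesRef.addListenerForSingleValueEvent(new ValueEventListener() {
    @Override
    public void onDataChange(DataSnapshot snapshot) {
        List<Long> costs = new ArrayList<>();
        for (DataSnapshot child : snapshot.getChildren()) {
            Long value = child.child("battery_cost_change").getValue(Long.class);
            if (value != null) {
                costs.add(value);
            }
        }
        // use costs here
    }

    @Override
    public void onCancelled(DatabaseError error) {
        // handle error
    }
});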
I am building a recommender system where I use Firebase to store and retrieve data about movies and user preferences.
Each movie can have several attributes, and the data looks as follows:
{
"titanic":
{"1997": 1, "english": 1, "dicaprio": 1, "romance": 1, "drama": 1 },
"inception":
{ "2010": 1, "english": 1, "dicaprio": 1, "adventure": 1, "scifi": 1}
...
}
To make the recommendations, my algorithm requires as input all the data (movies), which is matched against a user profile.
However, in production I need to retrieve over 10,000 movies. While the algorithm can handle this relatively fast, it takes a lot of time to load this data from Firebase.
I retrieve the data as follows:
firebase.database().ref(moviesRef).on('value', function(snapshot) {
// snapshot.val();
}, function(error){
console.log(error)
});
I am therefore wondering if you have any thoughts on how to speed things up? Are there any plugins or techniques known to solve this?
I am aware that denormalization could help split the data up, but the problem is really that I need ALL movies and ALL the corresponding attributes.
My suggestion would be to use Cloud Functions to handle this.
Solution 1 (Ideally)
If you can calculate suggestions every hour / day / week
You can use a Cloud Functions Cron to fire up daily / weekly and calculate recommendations per users every week / day. This way you can achieve a result more or less similar to what Spotify does with their weekly playlists / recommendations.
The main advantage of this is that your users wouldn't have to wait for all 10,000 movies to be downloaded; that would happen in a Cloud Function, say every Sunday night, which would compile a list of 25 recommendations and save them into each user's data node, which you can download when the user accesses their profile.
Your cloud functions code would look like this :
var movies, allUsers;

exports.weekly_job = functions.pubsub.topic('weekly-tick').onPublish((event) => {
    getMoviesAndUsers();
});

function getMoviesAndUsers () {
    firebase.database().ref(moviesRef).on('value', function(snapshot) {
        movies = snapshot.val();
        firebase.database().ref(allUsersRef).on('value', function(snapshot) {
            allUsers = snapshot.val();
            createRecommendations();
        });
    });
}

function createRecommendations () {
    // do something magical with movies and allUsers here.
    // then write the recommendations to each user's profile, kind of like
    userRef.update({"userRecommendations" : {"reco1" : "Her", "reco2" : "Black Mirror"}});
    // etc.
}
Forgive the pseudo-code. I hope this gives an idea though.
Then on your frontend you would have to get only the userRecommendations for each user. This way you can shift the bandwidth and computing from the user's device to a Cloud Function. In terms of efficiency, without knowing how you calculate recommendations, I can't make any suggestions.
Solution 2
If you can't calculate suggestions every hour / day / week, and you have to do it each time user accesses their recommendations panel
Then you can trigger a Cloud Function every time the user visits their recommendations page. A quick cheat solution I use for this is to write a value into the user's profile, like {getRecommendations: true}, once on page load, and then in Cloud Functions listen for changes to getRecommendations. As long as you have a structure like this:
userID > getRecommendations : true
And if you have proper security rules so that each user can only write to their own path, this method also gets you the correct userID making the request, so you will know which user to calculate recommendations for. A Cloud Function can most likely pull 10,000 records faster and save the user bandwidth, and it would finally write only the recommendations to the user's profile (similar to Solution 1 above). Your setup would look like this:
[Frontend Code]
//on pageload
userProfileRef.update({"getRecommendations" : true});
userRecommendationsRef.on('value', function(snapshot) { gotUserRecos(snapshot.val()); });
[Cloud Functions (Backend Code)]
exports.userRequestedRecommendations = functions.database.ref('/users/{uid}/getRecommendations').onWrite(event => {
const uid = event.params.uid;
firebase.database().ref(moviesRef).on('value', function(snapshot) {
movies = snapshot.val();
firebase.database().ref(userRefFromUID).on('value', function(snapshot) {
usersMovieTasteInformation = snapshot.val();
// do something magical with movies and user's preferences here.
// then
return userRecommendationsRef.update({"getRecommendations" : {"reco1" : "Her", "reco2", "Black Mirror"}});
});
});
});
Since your frontend will be listening for changes at userRecommendationsRef, as soon as your cloud function is done, your user will see the results. This might take a few seconds, so consider using a loading indicator.
P.S 1: I ended up using more pseudo-code than originally intended, and removed error handling etc. hoping that this generally gets the point across. If there's anything unclear, comment and I'll be happy to clarify.
P.S. 2: I'm using a very similar flow for a mini-internal-service I built for one of my clients, and it's been happily operating for longer than a month now.
Firebase's NoSQL JSON structure best practice is to "avoid nesting data", but you said you don't want to change your data. So, for your situation, you can make a REST call to any particular node (the node of each movie) in Firebase.
Solution 1) You can create a fixed number of threads via a ThreadPoolExecutor. From each worker thread, you can make an HTTP (REST) request as below. Based on your device's performance and memory, you can decide how many worker threads you want to run via the ThreadPoolExecutor. The code snippet could look something like this:
/* Creates a thread pool that reuses a fixed number of worker threads */
ExecutorService threadPoolExecutor = Executors.newFixedThreadPool(10); /* 10 different worker threads */
OkHttpClient client = new OkHttpClient(); /* OkHttpClient is the HTTP client used for the REST requests */

for (int i = 0; i < 100; i++) { /* e.g. load the first 100 movies */
    /* the loop variable must be effectively final to be used inside the lambda */
    final int index = i;
    threadPoolExecutor.execute(() -> {
        /* urlStr can be something like "https://earthquakesenotifications.firebaseio.com/movies" */
        /* Note: Firebase, by default, stores an index for every array element.
           Since all your movies live in the movies JSON array, the first worker thread
           can read element 0, the second worker thread element 1, and so on.
           The Realtime Database REST API expects the .json suffix on the path. */
        Request request = new Request.Builder().url(urlStr + "/" + index + ".json").build();
        try {
            Response response = client.newCall(request).execute();
            String str = response.body().string(); /* the JSON for one movie */
            /* parse str and hand the movie to your algorithm here */
        } catch (IOException e) {
            e.printStackTrace();
        }
    });
}
threadPoolExecutor.shutdown();
Solution 2) Solution 1 is not based on the listener/observer pattern. Firebase actually has push technology: whenever a particular node changes in the Firebase NoSQL JSON tree, every client that has a listener attached to that node gets the new data via onDataChange(DataSnapshot dataSnapshot) { }. For this you can attach a listener to the movies node like below:
FirebaseDatabase.getInstance().getReference().child("movies")
        .addValueEventListener(new ValueEventListener() {
            @Override
            public void onDataChange(DataSnapshot dataSnapshot) {
                /* Showing each movie one by one in a ListView/RecyclerView item would be slow,
                   so store every movie in an ArrayList first and update the RecyclerView
                   (or run the algorithm) once everything has been loaded. */
                List<Object> movies = new ArrayList<>();
                for (DataSnapshot movieSnapshot : dataSnapshot.getChildren()) {
                    movies.add(movieSnapshot.getValue());
                }
                /* update the RecyclerView / run the recommendation algorithm here */
            }

            @Override
            public void onCancelled(DatabaseError databaseError) {
                /* handle error */
            }
        });
Although you stated your algorithm needs all the movies and all attributes, it does not mean that it processes them all at once. Any computation unit has its limits, and within your algorithm, you probably chunk the data into smaller parts that your computation unit can handle.
Having said that, if you want to speed things up, you can modify your algorithm to parallelize fetching and processing of the data/movies:
| fetch    | -> | process  | -> | fetch    | -> ...
| chunk(1) |    | chunk(1) |    | chunk(3) |

(in parallel)   | fetch    | -> | process  | -> ...
                | chunk(2) |    | chunk(2) |
With this approach, you can save almost the whole processing time (except for the last chunk) if processing is really faster than fetching (but you have not said how "relatively fast" your algorithm runs compared to fetching all the movies).
This "high level" approach to your problem is probably your best chance if fetching the movies is really slow, although it requires more work than simply activating a hypothetical "speed up" button of a library. It is nonetheless a sound approach when dealing with large chunks of data.
{
  "message": {
    "id1": {
      "name": "name1"
    },
    "id2": {
      "name": "name2"
    },
    "id3": {
      "name": "name3"
    }
    ... and so on
  }
}
Let's say I have a Firebase data structure like the above, and I am trying to get the first id's value (e.g. its name) by using child("id1")...value(). Will I potentially access and query the data of "id2" or the rest of the nodes on the same level?
Excluding the cost of storing data in the database, will it cost more to query a single node (getting all its inner values) when the number of nodes in "message" is starting to grow?
You have to do it like this if you want all the child values on the same level:
firebase.child("message").addListenerForSingleValueEvent(new ValueEventListener() {
#Override
public void onDataChange(DataSnapshot dataSnapshot) {
for (DataSnapshot innerDataSanpShot : dataSnapshot.getChildren()) {
//DataSnapshot of inner Childerns mean id1, id2.... so on
String name = innerDataSnapShot.child("name").getValue().toString();
}
}
#Override
public void onCancelled(FirebaseError firebaseError) {
}
});
The other way is to get the whole data of message and after that get every child's data like this:
dataSnapshot.child("id1").getValue();
Without knowing your target platform, I will assume it's a web-based application. In order to execute a query for the data at the node in question, you may do something like the following:
var data = snapshot.val();
var userName = data.id1.name;
This simple example will show the name stored at that particular location; you will need to substitute 'id1' with the correct key. You will not pull data from other levels, as you use the 'id1' value to pinpoint the node you wish to query data from.
In terms of your second question, I don't believe it will cost you more by querying a single node as the database grows, providing you make a query at that particular location. Of course, if you add a loop then this will change things depending on the size of your database.
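For completeness, reading just that one node with the web SDK looks something like this (a sketch; the path and logging are illustrative):

// Only the data under message/id1 is downloaded; id2, id3, ... are never transferred.
firebase.database().ref('message/id1').once('value').then(function(snapshot) {
  var name = snapshot.val().name;
  console.log(name); // "name1"
});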