Limit reading children from a location through client side - firebase

var commentsRef = new Firebase('https://test.firebaseio.com/comments');
var last10Comments = commentsRef.limit(10);
// Render the last 10 comments
last10Comments.on('child_added', function (snapshot) {
});
From the client side, a user can change the limit number and render all comments from the comments reference.
Is there any way to restrict the read limit to some fixed number for a location, at all times?

No, there isn't currently a way to enforce that kind of limit with Firebase security rules. An approach that would work is to keep a separate, denormalized section of the tree that contains just the last 10 comments and nothing more.
Thanks for bringing this up. I've added this to our internal tracker to keep it in mind when we design V2 of our security API.
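For example, here is a minimal sketch of that denormalization, assuming a trusted server process and a hypothetical /recentComments node that clients may read, while /comments itself is locked down by security rules:
var commentsRef = new Firebase('https://test.firebaseio.com/comments');
var recentRef = new Firebase('https://test.firebaseio.com/recentComments');
// Mirror the rolling window of the 10 newest comments.
commentsRef.limit(10).on('child_added', function (snapshot) {
  recentRef.child(snapshot.name()).set(snapshot.val());
});
// child_removed fires when a comment falls out of the 10-item window.
commentsRef.limit(10).on('child_removed', function (snapshot) {
  recentRef.child(snapshot.name()).remove();
});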

Related

Firebase storage url, new file keep same access token

Duplicate of: Firebase storage URL keeps changing with new token
When a user uploads a profile pic I store it in Firebase Storage, with the file name set to the uid.
Let's say the user then makes, say, 100 posts and 500 comments, and then updates their profile image.
Currently I have a trigger which goes and updates the profile image URL in all of the post and comment documents. I have to do this because when the image is changed in Storage, the access token changes, and since the token is part of the URL, the old URL no longer works.
What I want is for the access token not to change. If I can do this, I can avoid the mass updates that massively increase my Firestore writes.
Is there any way to do this? Or an alternative?
Edit:
Another solution, if you don't mind making the file public:
Add this Storage rule and you won't have to use a token to access the file.
It allows read access to any file under a folder named "mydir", at any depth:
match /{path=**}/mydir/{doc} {
  allow read: if true;
}
There are only two options here:
You store the profile image URL only once, probably in the user's profile document, and look it up every time it is needed. In return you only have to write it once.
You store the profile image URL in every post, in which case you only have to load the post documents and not the profile document for each. In return, you have to write the profile URL into each post document, and update them all whenever it changes.
For smaller networks the former is more common, since you're more likely to see multiple posts from the same user, so you amortize the cost of the extra lookup over multiple posts.
The bigger the network of users, the more attractive the second approach becomes, as you'll care about read performance and simplicity more than the writes you're focusing on right now.
In the end, there's no single right answer here. You'll have to decide for yourself what performance and cost profile you want your app to have.
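To illustrate the first option, here is a minimal client-side sketch (the 'users' collection and 'profileImageUrl' field names are assumptions) that looks the URL up once per author and reuses it across that author's posts:
// Memoized profile-URL lookup: one read per author, reused across posts.
var profileUrlCache = {};
function getProfileUrl(userId) {
  if (profileUrlCache[userId]) {
    return Promise.resolve(profileUrlCache[userId]);
  }
  return firebase.firestore().collection('users').doc(userId).get()
    .then(function (doc) {
      profileUrlCache[userId] = doc.data().profileImageUrl;
      return profileUrlCache[userId];
    });
}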
Answer provided by #Prodigy here: https://stackoverflow.com/a/64129850/10222449
I tried this and it works well.
This will save millions of writes.
var storage = firebase.storage();
var pathReference = storage.ref('users/' + userId + '/avatar.jpg');
pathReference.getDownloadURL().then(function (url) {
  $("#large-avatar").attr('src', url);
}).catch(function (error) {
  // Handle any errors
});

How to store keywords in firebase firestore

My application uses keywords extensively; everything is tagged with keywords, so whenever a user wants to search or add data I have to show keywords in an autocomplete box.
As of now I am storing keywords in another collection, as below:
export interface IKeyword {
  Id: string;
  Name: string;
  CreatedBy: IUserMin;
  CreatedOn: firestore.Timestamp;
}
export interface IUserMin {
  UserId: string;
  DisplayName: string;
}
export interface IKeywordMin {
  Id: string;
  Name: string;
}
My main document holds an array of keywords:
export interface MainDocument {
  Field1: string;
  Field2: string;
  // ... other fields ...
  Keywords: IKeywordMin[];
}
The problem is that autocomplete reads data frequently, so my document-read quota is consumed very fast.
Is there a way to implement this without increasing reads for keywords? After all, the keywords are not the real data we need to fetch.
Below is my query to get the main documents:
query = query.where("Keywords", "array-contains-any", keywords)
I use the query below to get keywords for the autocomplete text box:
query = query.orderBy("Name").startAt(searchTerm).endAt(searchTerm+ '\uf8ff').limit(20)
This query runs many times as the user types in the autocomplete box, which causes a lot of document reads.
Does this answer your question?
https://fireship.io/lessons/typeahead-autocomplete-with-firestore/
Though the recommended solution is to use a third-party tool:
https://firebase.google.com/docs/firestore/solutions/search
To reduce document reads:
A solution that comes to mind (though I'm not sure it suits your use case) is Firestore's caching feature. By default, the Firestore client always tries to reach the server to get the latest changes to your documents, and only falls back to the cached data on the device when it cannot reach the server. You can take advantage of this feature by reading from the cache first and reaching the server only when you want to. For web applications this feature is disabled by default; you can enable it as described here:
https://firebase.google.com/docs/firestore/manage-data/enable-offline
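For instance, here is a minimal sketch with the v8 web SDK (the 'keywords' collection name is an assumption) that serves the autocomplete from the cache and only falls back to the server when the cache is empty:
// Enable offline persistence once at startup.
firebase.firestore().enablePersistence().catch(function (err) {
  console.log('Persistence unavailable: ' + err.code);
});

var query = firebase.firestore().collection('keywords')
  .orderBy('Name').startAt(searchTerm).endAt(searchTerm + '\uf8ff').limit(20);

// Try the local cache first; hit the server only on a cache miss.
query.get({ source: 'cache' })
  .then(function (snap) {
    return snap.empty ? query.get({ source: 'server' }) : snap;
  })
  .then(function (snap) {
    snap.forEach(function (doc) {
      // render doc.data().Name in the autocomplete box
    });
  });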
I found a solution; thought I would share it here.
Create a new collection named typeaheads in the format below:
export interface ITypeAHead {
  Prefix: string;
  CollectionName: string;
  FieldName: string;
  MatchingValues: ILookupItem[];
}
export interface ILookupItem {
  Key: string;
  Value: string;
}
Depending on the minimum number of letters, put the first 2 or 3 letters of the value in Prefix, and search based on the prefix, collection and field. That way you will most probably end up with 2 or 3 document reads per search, as sketched below.
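A rough sketch of that lookup (the query shape and field values are assumptions):
var prefix = searchTerm.substring(0, 3).toLowerCase();
firebase.firestore().collection('typeaheads')
  .where('Prefix', '==', prefix)
  .where('CollectionName', '==', 'MainDocument')
  .where('FieldName', '==', 'Keywords')
  .get()
  .then(function (snap) {
    snap.forEach(function (doc) {
      // Narrow the pre-aggregated values to the full search term.
      var matches = doc.data().MatchingValues.filter(function (item) {
        return item.Value.toLowerCase().startsWith(searchTerm.toLowerCase());
      });
      // render matches in the autocomplete box
    });
  });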
Hope this helps someone else.

How to read documents from Change Feed in Azure Cosmos DB since last checkpoint after restart?

I am using the Change Feed processor library to read the Change Feed on a partitioned collection, and below is how I have configured it. I am using mostly the default options.
ChangeFeedProcessorOptions feedProcessorOptions = new ChangeFeedProcessorOptions
{
    LeaseRenewInterval = TimeSpan.FromSeconds(15),
};
var docObserverFactory = DocumentFeedObserverFactory.Create(this.destinationCollectionInfo, this.dbRepository);
this.builder
    .WithHostName(hostName)
    .WithFeedCollection(this.monitoredCollectionInfo)
    .WithLeaseCollection(this.leaseCollectionInfo)
    .WithProcessorOptions(feedProcessorOptions)
    .WithObserverFactory(docObserverFactory);
This runs fine as long as the Change Feed application is running and documents are being inserted/updated in the collection and the Change Feed app picks them up as expected.
The problem happens when I stop the Change Feed app for some time and insert/update a few documents in the collection. When I then start the Change Feed app again, it doesn't pick up changes from where it left off; the changes made while the app was stopped are lost. But when I set the flag StartFromBeginning to true, it picks everything up from the start, including the changes that were made while the Change Feed app was stopped.
My understanding of reading from the current point (StartFromBeginning set to false) is that the Change Feed reads documents from where it last left off. But that doesn't seem to happen. Please help.
There are two ways to continue from exactly where you left it.
The first, and more accurate one, is to store the Continuation token of the last thing you read. That way you can specify it when you start again and it will win over both the StartTime and the StartFromBeginning flags.
The second one is to provide the StartTime property, which will try to find the continuation token for a given time automatically. It has an approximate 5-second precision, so there is a chance that you might miss some documents though.

Meteor realtime game - match two players according to their score?

I want to build a realtime quiz game which randomly matches two players (according to their winning rate if they are logged in). I've read through the book Discover Meteor and have a basic understanding of the framework, but I just have no idea of how to implement the matching part. Anyone know how to do that?
if you want to match users who have scores close to each other, you can do something like this : mongodb - Find document with closest integer value
The Meteor code for those Mongo queries is very similar, but there are some subtle differences that are kind of tricky. In Meteor, it would look something like this :
// SP is the "selected player", the user you want to match someone up with.
var score = SP.score; // selected player's score
var queryLow = {score: {$lte: score}, _id: {$ne: SP._id}};
var queryHigh = {score: {$gte: score}, _id: {$ne: SP._id}};
// "L" is the player with the closest lower (or equal) score.
var L = Players.findOne(queryLow, {sort: {score: -1}});
// "H" is the player with the closest higher (or equal) score.
var H = Players.findOne(queryHigh, {sort: {score: 1}});
So now you have references to the players with scores right above and right below the selected player. In terms of making it random, perhaps start with a simple algorithm like "match me with the next available player whose score is closest", then if it's too predictable and boring you can throw some randomness into the algorithm.
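For instance, a small follow-up sketch (using L, H and score from above) that picks whichever neighbor is closer, falling back when one side has no players:
var opponent;
if (L && H) {
  // Tie goes to the lower-scored neighbor; adjust as you like.
  opponent = (score - L.score <= H.score - score) ? L : H;
} else {
  opponent = L || H; // only one side exists (or neither, if no players)
}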
you can view the above Meteor code working live here http://meteorpad.com/pad/4umMP4iY8AkB9ct2d/ClosestScore
and you can Fork it and mess about with the queries to see how it works.
good luck! Meteor is great, I really like it.
If you add the package peppelg:random-opponent-matcher to your application, you can match opponents together like this:
On the server, you need to have an instance of RandomOpponentMatcher like this:
new RandomOpponentMatcher('my-matcher', {name: 'fifo'}, function(user1, user2){
  // Create the match/game they should play.
})
The function you pass to RandomOpponentMatcher will get called when two users have been matched to play against each other. In it, you'll probably want to create the match the users should play (this package only matches opponents together; it does not contain any functionality for playing the games/matches themselves).
On the client, you need to create an instance of RandomOpponentMatcher as well, but you only pass the name to it (the same name as you used on the server):
myMatcher = new RandomOpponentMatcher('my-matcher')
Then, when a user is logged in and wishes to be matched with a random opponent, all you need to do is call the add method. For example:
<template name="myTemplate">
  <button class="clickMatchesWithOpponent">Match me with someone!</button>
</template>

Template.myTemplate.events({
  'click .clickMatchesWithOpponent': function(event, template){
    myMatcher.add()
  }
})
When two different logged-in users have clicked the button, the function you passed to RandomOpponentMatcher on the server will get called.
One implementation might be as follows:
A user somehow triggers a 'looking for game' event that sets user.profile.lookingForGame to true. The event then calls a server-side Meteor method which queries for all other online users looking for games.
From there, it really depends on how you want to handle users once they 'match'.
To determine all online users, try using the User Status package:
https://github.com/mizzao/meteor-user-status
Once added, any online user will have an 'online' attribute in their profile object. You can use this to query for all online users, as in the sketch below.
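A minimal server-side sketch along those lines (the profile field names follow this answer and are assumptions):
Meteor.methods({
  findOpponent: function () {
    // Pick any other online user who is also looking for a game.
    return Meteor.users.findOne({
      _id: {$ne: this.userId},
      'profile.online': true,
      'profile.lookingForGame': true
    });
  }
});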

Firebase data structure - is the Firefeed structure relevant?

Firefeed is a very nice example of what can be achieved with Firebase: a fully client-side Twitter clone. There is this page, https://firefeed.io/about.html, where the logic behind the adopted data structure is explained. It helps a lot in understanding Firebase security rules.
By the end of the demo, there is this snippet of code :
var userid = info.id; // info is from the login() call earlier.
var sparkRef = firebase.child("sparks").push();
var sparkRefId = sparkRef.name();

// Add spark to global list.
sparkRef.set(spark);

// Add spark ID to user's list of posted sparks.
var currentUser = firebase.child("users").child(userid);
currentUser.child("sparks").child(sparkRefId).set(true);

// Add spark ID to the feed of everyone following this user.
currentUser.child("followers").once("value", function(list) {
  list.forEach(function(follower) {
    var childRef = firebase.child("users").child(follower.name());
    childRef.child("feed").child(sparkRefId).set(true);
  });
});
It shows how the writing is done in order to keep the reads simple, as stated:
When we need to display the feed for a particular user, we only need to look in a single place
So I do understand that. But if we look at Twitter, some accounts have several million followers (the most followed, Katy Perry, has over 61 million!). What would happen with this structure and this approach? Whenever Katy posted a new tweet, it would trigger 61 million write operations. Wouldn't this simply kill the app? And isn't it also consuming a lot of unnecessary space?
With denormalized data, the only way to connect data is to write to every location it's read from. So yes, publishing a tweet to 61 million followers would require 61 million writes.
You wouldn't do this in the browser. The server would listen for child_added events for new tweets, and a cluster of workers would split up the load, each paginating through a subset of the followers. You could potentially prioritize online users so they get the writes first.
With normalized data, you write the tweet once, but pay for the join on reads. If you cache the tweets in feeds to avoid hitting the database for each request, you're back to 61 million writes to Redis for every Katy Perry tweet. And to push the tweet out in real time, you need to write it to a socket for every online follower anyway.
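A rough sketch of that server-side fan-out (legacy API to match the snippet above; the spark's author field, and how followers are sharded across workers, are assumptions left elided):
var root = new Firebase('https://firefeed.firebaseio.com');

// In production each worker would claim a shard of the followers list;
// here a single listener fans out every new spark.
root.child('sparks').on('child_added', function (sparkSnap) {
  var sparkId = sparkSnap.name();
  var authorId = sparkSnap.val().author;
  root.child('users').child(authorId).child('followers')
      .once('value', function (list) {
        list.forEach(function (follower) {
          root.child('users').child(follower.name())
              .child('feed').child(sparkId).set(true);
        });
      });
});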
