My code is shown below:
updateUserCount(){
console.log("started user update")
this.db.object('/analytics').valueChanges().map(
(snapshot) => {return snapshot}
).subscribe(
(res:any) => {
console.log("Vik:::total users in the system are:" + res.userCount)
this.db.object('/analytics').update({"userCount": Number(res.userCount) + 1}).then(
(r) => console.log("count updated")
).catch(
err => console.log("error updating count:" + err)
)
})
}
It is simply trying to add one to the userCount property, but the update statement puts this into an infinite loop (each update re-triggers the valueChanges() subscription, which then updates again).
What I understand from the code above is that you want to update "userCount" every time the collection is updated.
Although I do not know the exact solution here, may I suggest an alternative approach:
1) Add a cloud function (https://firebase.google.com/docs/functions/) which triggers every time a value is added to the collection.
2) The code would look something like this in TypeScript
import * as functions from 'firebase-functions';
import { DocumentSnapshot } from 'firebase-functions/lib/providers/firestore';

export const valueAddFunction = functions.firestore.document('/analytics/{id}')
  .onCreate((snap: DocumentSnapshot, context: any) => {
    const original = snap.data().original;
    console.log("original " + original);
    return snap.ref.set({ key: "WhateverValueYouWant" }, {});
  });
This function would be triggered whenever a new document is added to the collection. You can similarly add functions listening to updates.
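If the end goal is simply to bump a userCount counter whenever something new is created, a rough sketch along these lines might also work (this is not from the original code; the /users/{userId} trigger path and the analytics/stats document are placeholders):
import * as functions from 'firebase-functions';
import * as admin from 'firebase-admin';

admin.initializeApp();

// Placeholder trigger: fires whenever a new user document is created.
export const incrementUserCount = functions.firestore.document('/users/{userId}')
  .onCreate(() => {
    // FieldValue.increment applies the +1 atomically on the server,
    // so there is no client-side read/update cycle (and no subscription loop).
    return admin.firestore().doc('analytics/stats').update({
      userCount: admin.firestore.FieldValue.increment(1)
    });
  });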
Hope this helps.
I have a collection in firebase cloud firestore called 'posts' and I want to show the most liked posts in the last 24h on my web app.
The post documents have a field called 'like_count' (number) and another field called 'time_posted' (timestamp).
I also want to be able to limit the results to apply pagination.
I tried to apply a filter to only get the posts from the last 24 hours, and then order them by 'like_count' and then by 'time_posted', since I want the posts with the most likes to appear first.
postsRef.where("time_posted", ">", twentyFourHoursAgo)
.orderBy("like_count", "desc")
.orderBy("time_posted", "desc")
.limit(10)
However, I quickly found out that it is not possible to filter and then sort by two different fields.
(See the Limitations part of the documentation for Order and limit data with Cloud Firestore)
It states:
Invalid: Range filter and first orderBy on different fields
I thought about sorting the results by 'like_count' in the frontend, but this won't work properly because I don't have all the documents. And getting all the documents is infeasible for a large number of daily posts.
Is there an easy work-around I am missing or how can I go about this?
When performing a query, Firestore must be able to traverse an index in a continuous fashion.
This introduction video is a little outdated (because "OR" queries are now possible using the "in" operator) but it does give a good visualization of what Firestore is doing as it runs a query.
If your query was just postsRef.orderBy("like_count", "desc").limit(10), Firestore would load up the index it has for a descending "like_count", pluck the first 10 entries and return them.
To handle your query, it would have to pluck an entry off the descending "like_count" index, compare it to your "time_posted" requirement, and either discard it or add it to a list of valid entries. Once it has all of the recent posts, it then needs to sort the results as you specified. As these steps don't make use of a continuous read of an index, it is disallowed.
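For contrast, a form of the query that Firestore does accept keeps the range-filtered field as the first orderBy, but that returns the results ordered by recency rather than by like count, which is not what the question needs:
// Accepted by Firestore: the range filter field ("time_posted") is also the first orderBy.
// Trade-off: results come back ordered by recency, not by like_count.
postsRef.where("time_posted", ">", twentyFourHoursAgo)
  .orderBy("time_posted", "desc")
  .orderBy("like_count", "desc")
  .limit(10);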
The solution would be to build your own index from the recent posts and then pluck the results off of that. Because doing this on the client is ill-advised, you should instead make use of a Cloud Function to do the work for you. The following code makes use of a Callable Cloud Function.
import * as functions from 'firebase-functions';
import * as admin from 'firebase-admin';

admin.initializeApp();

const MS_TWENTY_FOUR_HOURS = 24 * 60 * 60 * 1000;

export const getRecentTopPosts = functions.https.onCall(async (data, context) => {
  // unless otherwise stated, return only 10 entries
  const limit = Number(data.limit) || 10;

  const postsRef = admin.firestore().collection("posts");

  // OPTIONAL CODE SEGMENT: Check Cached Index

  const twentyFourHoursAgo = Date.now() - MS_TWENTY_FOUR_HOURS;

  const recentPostsSnapshot = await postsRef
    .where("time_posted", ">", twentyFourHoursAgo)
    .get();

  const orderedPosts = recentPostsSnapshot.docs
    .map(postDoc => ({
      snapshot: postDoc,
      like_count: postDoc.get("like_count"),
      time_posted: postDoc.get("time_posted")
    }))
    .sort((p1, p2) => {
      const deltaLikes = p2.like_count - p1.like_count; // descending sort based on like_count
      if (deltaLikes !== 0) {
        return deltaLikes;
      }
      return p2.time_posted - p1.time_posted; // descending sort based on time_posted
    });

  // OPTIONAL CODE SEGMENT: Save Cached Index

  return orderedPosts
    .slice(0, limit)
    .map(post => ({
      _id: post.snapshot.id,
      ...post.snapshot.data()
    }));
});
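On the client, the Callable Function above could then be invoked with something along these lines (a sketch assuming the v8-style Firebase JS SDK, an already-initialized app, and that the function is deployed under the name getRecentTopPosts):
import firebase from 'firebase/app';
import 'firebase/functions';

const getRecentTopPosts = firebase.functions().httpsCallable('getRecentTopPosts');

getRecentTopPosts({ limit: 10 })
  .then(result => {
    // result.data is the array of { _id, ...post fields } returned by the function
    console.log(result.data);
  });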
If this code is expected to be called by many clients, you may wish to cache the index to save it from being constantly rebuilt, by inserting the following segments into the function above.
// OPTIONAL CODE SEGMENT: Check Cached Index
if (!data.skipCache) { // allow option to bypass cache
  const cachedIndexSnapshot = await admin.firestore()
    .doc("_serverCache/topRecentPosts")
    .get();

  const oneMinuteAgo = Date.now() - 60000;

  // if the index was created in the past minute, reuse it
  if (cachedIndexSnapshot.get("timestamp") > oneMinuteAgo) {
    const recentPostMetadataArray = cachedIndexSnapshot.get("posts");
    const recentPostIdArray = recentPostMetadataArray
      .slice(0, limit)
      .map((postMeta) => postMeta.id);

    const postDocs = await fetchDocumentsWithId(postsRef, recentPostIdArray); // see https://gist.github.com/samthecodingman/aea3bc9481bbab0a7fbc72069940e527

    // postDocs is not ordered, so we need to be able to find each entry by its ID
    const postDocsById = {};
    for (const doc of postDocs) {
      postDocsById[doc.id] = doc;
    }

    return recentPostIdArray
      .map(id => {
        // may be undefined if not found (i.e. recently deleted)
        const postDoc = postDocsById[id];
        if (!postDoc) {
          return null; // deleted post, up to you how to handle
        } else {
          return {
            _id: postDoc.id,
            ...postDoc.data()
          };
        }
      });
  }
}
// OPTIONAL CODE SEGMENT: Save Cached Index
if (!data.skipCache) { // allow option to bypass cache
  await admin.firestore()
    .doc("_serverCache/topRecentPosts")
    .set({
      timestamp: Date.now(),
      posts: orderedPosts
        .slice(0, 25) // cache the maximum expected amount
        .map(post => ({
          id: post.snapshot.id,
          like_count: post.like_count,
          time_posted: post.time_posted,
        }))
    });
}
Other improvements you could add to this function include (a rough sketch of some of these follows the list):
A field mask - i.e. instead of returning every part of the post documents, return just the title, like count, time posted and the author.
Variable post age (instead of 24 hours)
Variable minimum likes count
Filter by author
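As a rough illustration of the variable post age, minimum like count and author filter ideas, the fragment below could slot into the function above, in the same spirit as the optional segments. The parameter names maxAgeMs, minLikes and author are made up for this sketch, and the author filter assumes the post documents actually have an author field (Firestore may also ask you to create a composite index for it).
// Sketch: read optional parameters from the callable's data payload.
const maxAgeMs = Number(data.maxAgeMs) || MS_TWENTY_FOUR_HOURS; // variable post age
const minLikes = Number(data.minLikes) || 0;                    // variable minimum like count

let recentQuery = postsRef.where("time_posted", ">", Date.now() - maxAgeMs);
if (data.author) {
  // An equality filter can be combined with the range filter above.
  recentQuery = recentQuery.where("author", "==", data.author);
}

const recentPostsSnapshot = await recentQuery.get();

// minLikes would then be applied while building orderedPosts, e.g.
// .filter(post => post.like_count >= minLikes)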
I am new to Vue.js and Firebase. My goal is to get all 'eventos' that have the same category. For example, say I have two eventos, one with category "SMIX" and another with "DAM". Now I want to get the eventos that have the category 'SMIX'.
My data structure is here: [My data Structure][1]
My code is:
created() {
var datos = []
firebase.database().ref('usuarios').on("value", data => {
data.forEach(function(user){
user.child("eventos").orderByChild("categoria").equalTo("SMIX")
.forEach(function(evento){
datos.push(evento.val())
});
});
this.eventos = datos;
});
}
There are several errors and points to be noted in your code:
Firstly, if you receive the error "user.child(...).orderByChild is not a function",
it is because with data.forEach(function(user) {...}), user is a DataSnapshot (see the forEach() doc), and by calling the child() method on this DataSnapshot you get another DataSnapshot... which does not have an orderByChild() method.
The orderByChild() method is a method of a Reference, so you need to do
user.child(...).ref.orderByChild()
using the ref property of the DataSnapshot.
Secondly, you cannot do
user.ref.child("eventos").orderByChild("categoria").equalTo("SMIX")
.forEach()
because you need to use the once() or on() methods to get the data at a database location represented by a Reference.
Thirdly, since you are going to execute several queries within a loop, you need to use the once() method instead of the on() method. The on() method sets a listener that continuously "listens for data changes at a particular location."
Finally, note that you need to use Promise.all() to manage the parallel asynchronous queries to the database.
So, having noted all the points above, the following code will do the trick (to put in created()):
var datos = []
firebase.database().ref('usuarios').once('value')
.then(dataSnapshot => {
var promises = [];
dataSnapshot.forEach(user => {
promises.push(user.ref.child("eventos").orderByChild("categoria").equalTo("SMIX").once('value'));
});
return Promise.all(promises);
})
.then(results => {
//console.log(results);
results.forEach(dataSnapshot => {
dataSnapshot.forEach(evento => {
datos.push(evento.val());
});
});
this.eventos = datos;
});
I'm using the Node.js library for talking to Jira called jira-connector. I can get all of the boards on my Jira instance by calling:
jira.board.getAllBoards({ type: "scrum"})
.then(boards => { /* ...not important stuff... */ })
the return set looks something like the following:
{
maxResults: 50,
startAt: 0,
isLast: false,
values:
[ { id: ... } ]
}
Then, while isLast === false, I keep calling like so:
jira.board.getAllBoards({ type: "scrum", startAt: XXX })
until isLast is true. Then I can organize all of my returns from the promises and be done with it.
I'm trying to reason out how I can get all of the paged data with Ramda. I have a feeling it's possible, I just can't seem to sort out the how of it.
Any help? Is this possible using Ramda?
Here's my Rx attempt to make this better:
const allBoards = []; // collected results
const pagedCalls = new Subject();
pagedCalls.subscribe(value => {
jira.board.getAllBoards({ type:"scrum", startAt: value })
.then(boards => {
console.log('calling: ' + value);
allBoards.push(boards.values);
if (boards.isLast) {
pagedCalls.complete()
} else {
pagedCalls.next(boards.startAt + 50);
}
});
})
pagedCalls.next(0);
Seems pretty terrible. Here's the simplest solution I have so far with a do/while loop:
let returnResult = [];
let result;
let startAt = -50;
do {
result = await jira.board.getAllBoards( { type: "scrum", startAt: startAt += 50 })
returnResult.push(result.values); // there's an array of results under the values prop.
} while (!result.isLast)
Many of the interactions with Jira use this model and I am trying to avoid writing this kind of loop every time I make a call.
I had to do something similar today, calling the Gitlab API repeatedly until I had retrieved the entire folder/file structure of the project. I did it with a recursive call inside a .then, and it seems to work all right. I have not tried to convert the code to handle your case.
Here's what I wrote, if it will help:
const getAll = (project, perPage = 10, page = 1, res = []) =>
fetch(`https://gitlab.com/api/v4/projects/${encodeURIComponent(project)}/repository/tree?recursive=true&per_page=${perPage}&page=${page}`)
.then(resp => resp.json())
.then(xs => xs.length < perPage
? res.concat(xs)
: getAll(project, perPage, page + 1, res.concat(xs))
)
getAll('gitlab-examples/nodejs')
.then(console.log)
.catch(console.warn)
The technique is pretty simple: Our function accepts whatever parameters are necessary to be able to fetch a particular page and an additional one to hold the results, defaulting it to an empty array. We make the asynchronous call to fetch the page, and in the then, we use the result to see if we need to make another call. If we do, we call the function again, passing in the other parameters needed, the incremented page number, and the merge of the current results and the ones just received. If we don't need to make another call, then we just return that merged list.
Here, the repository contains 21 files and folders. Calling for ten at a time, we make three fetches and when the third one is complete, we resolve our returned Promise with that list of 21 items.
This recursive method definitely feels more functional than your versions above. There is no assignment except for the parameter defaulting, and nothing is mutated along the way.
I think it should be relatively easy to adapt this to your needs.
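For example, adapting that recursion to the jira-connector call and the response shape shown earlier ({ maxResults, startAt, isLast, values }) might look roughly like this untested sketch:
// Sketch: recursively page through getAllBoards until isLast is true.
const getAllScrumBoards = (startAt = 0, res = []) =>
  jira.board.getAllBoards({ type: "scrum", startAt })
    .then(page => page.isLast
      ? res.concat(page.values)
      : getAllScrumBoards(page.startAt + page.values.length, res.concat(page.values)));

getAllScrumBoards()
  .then(allBoards => console.log(allBoards.length))
  .catch(console.warn);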
Here is a way to get all the boards using rubico:
import { pipe, fork, switchCase, get } from 'rubico'
const getAllBoards = boards => pipe([
fork({
type: () => 'scrum',
startAt: get('startAt'),
}),
jira.board.getAllBoards,
switchCase([
get('isLast'),
response => boards.concat(response.values),
response => getAllBoards(boards.concat(response.values))({
startAt: response.startAt + response.values.length,
})
]),
])
getAllBoards([])({ startAt: 0 }) // => [...boards]
getAllBoards will recursively get more boards and append to boards until isLast is true, then it will return the aggregated boards.
I have refactoring problems because my code doesn't work with the new versions of Angular and AngularFire.
Error
The line upload.url = uploadTask.snapshot.downloadURL; sets upload.url to undefined.
Code
uploadTask.on(firebase.storage.TaskEvent.STATE_CHANGED,
// three observers
// 1.) state_changed observer
(snapshot) => {
// upload in progress
upload.progress = (uploadTask.snapshot.bytesTransferred / uploadTask.snapshot.totalBytes) * 100;
console.log(upload.progress);
},
// 2.) error observer
(error) => {
// upload failed
console.log(error);
},
// 3.) success observer
(): any => {
upload.url = uploadTask.snapshot.downloadURL; //?!?!UNDEFINED
upload.name = upload.file.name;
this.saveFileData(upload);
}
);
Questions
I have tried different solutions from Stack Overflow, but they don't really work. Most of the examples are also about how to retrieve the image, but I want to set the variable upload.url to a value.
Another question:
I'm new to Angular and web development. Will it take a long time to change it to Firestore? The code is based on the Firebase Realtime Database.
To get the downloadURL, you have to call the getDownloadURL() method of the Storage Reference Object.
Try this:
uploadTask.snapshot.ref.getDownloadURL()
  .then(url => console.log(url));
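In the success observer of the code above, that would look roughly like this (a sketch assuming the plain Firebase JS SDK, where getDownloadURL() on the snapshot's ref returns a Promise):
// 3.) success observer (sketch)
() => {
  uploadTask.snapshot.ref.getDownloadURL().then((url: string) => {
    upload.url = url;
    upload.name = upload.file.name;
    this.saveFileData(upload);
  });
}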
I have the following function that validates that rangeFrom is not greater than rangeTo and that rangeFrom does not already exist in the list of ranges.
How can I rewrite this using RxJS?
const isTagAlreadyExist = (tags, currentTag) => _(tags)
.filter(x => x.id !== currentTag.id)
.some(x => _.inRange(currentTag.rangeTo, x.rangeFrom, x.rangeTo))
.value();
const validateRangeFrom = (tags, currentTag) => {
const errors = {};
if (isNumeric(currentTag.rangeFrom)) {
if (!_.inRange(currentTag.rangeFrom, 0, currentTag.rangeTo)) {
errors.rangeFrom = 'FROM_TAG_CANNOT_BE_GREATER_THAN_TO_TAG';
} else if (isTagAlreadyExist(tags, currentTag)) {
errors.rangeFrom ='TAG_ALREADY_EXISTS';
}
}
return {
errors
};
};
The question is: what parts do you want to rewrite to RxJS? Those are two pure functions that run synchronously from what I can see, so I do not really see much of a use case for RxJS here - of course, you could always utilize your functions within an RxJS stream:
const validateRangeFrom$ = (tags, currentTag) => {
return Observable.of(currentTag)
.map(tag => validateRangeFrom(tags, tag));
}
validateRangeFrom$(myTags, currentTag)
.subscribe(errors => console.log(errors));
But as you can see, this does not make much sense if you simply wrap it inside a stream; the essence of useful reactive programming is that everything is reactive, not just some small parts. So for your example, you should start by having tags$ and currentTag$ as observables - let's assume that you have that; then you could do something like:
const tags$: Observable<ITag[]>... // is set somewhere, and emits a new array whenever it is changed
const currentTag$: Observable<ITag>... // is set somewhere and emits the tag whenever a new currentTag is set
const validateRangeFrom$ = Observable
.combineLatest(tags$, currentTag$, (tags, tag) => ({tags, tag}))
.map(({tags, tag}) => validateRangeFrom(tags, tag));
validateRangeFrom$.subscribe(errors => console.log(errors));
This will automatically trigger the validation for you whenever a new tags-array is emitted or a new currentTag is selected/set - but again: your validation-method is kept the same - as even in reactive programming you have to do validation and logic-operations at some point, the reactive part usually just concerns the flow of the data (see: tags$ and currentTag$)