I'm trying to get a collection's count from the server to the client. I want to use it for paging, and also so users know how many documents are available. It's important that the count updates when documents are added or removed.
One problem is paging, where I'm limiting the number of documents sent to the client with publish/subscribe. But in the case below, the client will not know whether the MyPix collection contains more than 4 documents:
Meteor.publish('MyPix', function(cursor) {
  return MyPix.find({}, {limit: 4, skip: cursor});
});
This is a bit tricky. A quick solution is to use the publish-counts package.
Server
Meteor.publish('publication', function() {
  Counts.publish(this, 'numberOfPosts', Posts.find());
  Counts.publish(this, 'numberOfUsers', Users.find());
});
Client
Meteor.subscribe('publication');
Then, to get numberOfUsers or numberOfPosts:
Counts.get('numberOfUsers') // returns the number of users
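For example, a minimal sketch (assuming Blaze and a template named postList, which are not in the original) of using the count reactively for paging:

Template.postList.helpers({
  totalPosts: function() {
    return Counts.get('numberOfPosts');   // reactive: updates when documents are added or removed
  },
  pageCount: function() {
    var perPage = 4;                      // same limit used in the publication
    return Math.ceil(Counts.get('numberOfPosts') / perPage);
  }
});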
My application uses keywords extensively; everything is tagged with keywords, so whenever a user wants to search or add data I have to show keywords in an autocomplete box.
As of now I am storing keywords in a separate collection, as below:
export interface IKeyword {
  Id: string;
  Name: string;
  CreatedBy: IUserMin;
  CreatedOn: firestore.Timestamp;
}
export interface IUserMin {
  UserId: string;
  DisplayName: string;
}
export interface IKeywordMin {
  Id: string;
  Name: string;
}
My main document holds an array of keywords:
export interface MainDocument {
  Field1: string;
  Field2: string;
  // ... other fields ...
  Keywords: IKeywordMin[];
}
But the problem is that autocomplete reads data frequently, so my document read quota increases very fast.
Is there a way to implement this without increasing reads for keywords? The keywords are not the real data we need to fetch.
Below is my query to get the main documents:
query = query.where("Keywords", "array-contains-any", keywords)
I use the below query to get keywords in the autocomplete text box:
query = query.orderBy("Name").startAt(searchTerm).endAt(searchTerm+ '\uf8ff').limit(20)
This query runs many times as the user types into the autocomplete search box, which causes more document reads.
Does this answer your question?
https://fireship.io/lessons/typeahead-autocomplete-with-firestore/
Though the recommended solution is to use a third-party tool:
https://firebase.google.com/docs/firestore/solutions/search
To reduce document reads:
A solution that comes to mind, though I'm not sure it suits your use case, is Firestore's caching feature. By default, the Firestore client always tries to reach the server to get the latest changes to your documents, and only falls back to the cached data on the client device when it cannot reach the server. You can take advantage of this feature by using the cache first and reaching the server only when you want to. For web applications this feature is disabled by default; you can enable it as described here:
https://firebase.google.com/docs/firestore/manage-data/enable-offline
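For illustration, a rough sketch (Firebase JS SDK v8-style API; the keywords collection name is an assumption, the Name field comes from the interfaces above) of enabling persistence and serving autocomplete from the cache first:

// enable offline persistence once at startup (disabled by default on the web)
firebase.firestore().enablePersistence().catch(function(err) {
  // e.g. 'failed-precondition' (multiple tabs open) or 'unimplemented' (unsupported browser)
  console.warn('persistence not enabled:', err.code);
});

// try the local cache first; only hit the server when nothing is cached yet
function getKeywordSuggestions(searchTerm) {
  var query = firebase.firestore().collection('keywords')
    .orderBy('Name')
    .startAt(searchTerm)
    .endAt(searchTerm + '\uf8ff')
    .limit(20);
  return query.get({ source: 'cache' }).then(function(snap) {
    return snap.empty ? query.get() : snap;   // cache hits are not billed as document reads
  });
}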
I found a solution and thought I would share it here.
Create a new collection named typeaheads in the below format:
export interface ITypeAHead {
  Prefix: string;
  CollectionName: string;
  FieldName: string;
  MatchingValues: ILookupItem[];
}
export interface ILookupItem {
  Key: string;
  Value: string;
}
Depending on the minimum number of letters, add either 2 or 3 letters to Prefix, then search based on the prefix, collection and field; a hypothetical lookup is sketched below. So most probably you will end up with 2 or 3 document reads for one search.
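For example, a lookup against that collection might look like this (the collection name typeaheads comes from above, while the 2-letter prefix and the target names MainDocument and Keywords are illustrative assumptions):

async function suggestKeywords(db, searchTerm) {
  var prefix = searchTerm.substring(0, 2).toLowerCase();
  var snap = await db.collection('typeaheads')
    .where('Prefix', '==', prefix)
    .where('CollectionName', '==', 'MainDocument')
    .where('FieldName', '==', 'Keywords')
    .limit(1)
    .get();                                    // usually a single document read
  if (snap.empty) return [];
  // MatchingValues is pre-aggregated, so narrowing it down costs no extra reads
  return snap.docs[0].data().MatchingValues.filter(function(item) {
    return item.Value.toLowerCase().startsWith(searchTerm.toLowerCase());
  });
}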
Hope this helps someone else.
I am using Firebase for data storage. The data structure is like this:
products: {
  product1: {
    name: "chocolate"
  },
  product2: {
    name: "chochocho"
  }
}
I want to perform an autocomplete operation on this data, and normally I would write the query like this:
"select name from PRODUCTS where productname LIKE '%" + keyword + "%'";
So, for my situation, if the user types "cho", I need to bring back both "chocolate" and "chochocho" as results. I thought about bringing all the data under the "products" block and then doing the query on the client, but this may need a lot of memory for a big database. So, how can I perform an SQL LIKE operation?
Thanks
Update: With the release of Cloud Functions for Firebase, there's another elegant way to do this as well: link Firebase to Algolia via Functions. The tradeoff here is that Functions/Algolia is pretty much zero maintenance, but probably at increased cost over roll-your-own in Node.
There are no content searches in Firebase at present. Many of the more common search scenarios, such as searching by attribute will be baked into Firebase as the API continues to expand.
In the meantime, it's certainly possible to grow your own. However, searching is a vast topic (think creating a real-time data store vast), greatly underestimated, and a critical feature of your application--not one you want to ad hoc or even depend on someone like Firebase to provide on your behalf. So it's typically simpler to employ a scalable third party tool to handle indexing, searching, tag/pattern matching, fuzzy logic, weighted rankings, et al.
The Firebase blog features a blog post on indexing with ElasticSearch which outlines a straightforward approach to integrating a quick, but extremely powerful, search engine into your Firebase backend.
Essentially, it's done in two steps. Monitor the data and index it:
var Firebase = require('firebase');
var ElasticClient = require('elasticsearchclient');

// initialize our ElasticSearch API
var client = new ElasticClient({ host: 'localhost', port: 9200 });

// the ElasticSearch index/type the Firebase data is mirrored into
var index = 'firebase';
var type = 'widget';

// listen for changes to Firebase data
var fb = new Firebase('<INSTANCE>.firebaseio.com/widgets');
fb.on('child_added', createOrUpdateIndex);
fb.on('child_changed', createOrUpdateIndex);
fb.on('child_removed', removeIndex);

function createOrUpdateIndex(snap) {
  client.index(index, type, snap.val(), snap.name())
    .on('data', function(data) { console.log('indexed ', snap.name()); })
    .on('error', function(err) { /* handle errors */ });
}

function removeIndex(snap) {
  client.deleteDocument(index, type, snap.name(), function(error, data) {
    if (error) console.error('failed to delete', snap.name(), error);
    else console.log('deleted', snap.name());
  });
}
Query the index when you want to do a search:
<script src="elastic.min.js"></script>
<script src="elastic-jquery-client.min.js"></script>
<script>
  // point elastic.js at the ElasticSearch server and keep a reference to the client
  var client = ejs.jQueryClient('http://localhost:9200');
  client.search({
    index: 'firebase',
    type: 'widget',
    body: ejs.Request().query(ejs.MatchQuery('title', 'foo'))
  }, function (error, response) {
    // handle response
  });
</script>
There's an example, and a third party lib to simplify integration, here.
I believe you can do:
admin
  .database()
  .ref('/vals')
  .orderByChild('name')
  .startAt('cho')
  .endAt('cho\uf8ff')
  .once('value')
  .then(c => res.send(c.val()));
This will find vals whose names start with "cho".
source
The ElasticSearch solution basically binds to add, set and del, and offers a get by which you can accomplish text searches.
It then saves the contents in MongoDB.
While I love and recommend ElasticSearch for the maturity of the project, the same can be done without another server, using only the Firebase database.
That's what I mean:
(https://github.com/metaschema/oxyzen)
For the indexing part, the function basically:
- JSON-stringifies the document;
- removes all the property names and JSON syntax to leave only the data (regex);
- removes all XML tags (therefore also HTML) and attributes (remember the old guidance: "data should not be in XML attributes") to leave only the pure text, if XML or HTML was present;
- removes all special chars and substitutes them with spaces (regex);
- substitutes all instances of multiple spaces with one space (regex);
- splits on spaces and cycles: for each word it adds a ref to the document in an index structure in your db that basically contains children named after the words, each with children named after an escaped version of "ref/inthedatabase/dockey";
- then inserts the document as a normal Firebase application would do.
In the oxyzen implementation, subsequent updates of the document actually read the index and update it, removing the words that no longer match and adding the new ones.
Subsequent searches for a word can then directly find documents under that word's child; multiple-word searches are implemented using hits. A rough sketch of the idea follows.
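To make the idea concrete, here is a minimal sketch (not the actual oxyzen code; the index/ and documents/ paths are assumptions) of such a word index in the Firebase database:

// index a document: /index/<word>/<docKey> = true, plus the document itself
function indexDocument(db, docKey, doc) {
  var text = JSON.stringify(doc)
    .replace(/"[^"]*"\s*:/g, ' ')   // drop property names
    .replace(/<[^>]*>/g, ' ')       // drop xml/html tags
    .replace(/[^\w\s]/g, ' ')       // drop special chars
    .replace(/\s+/g, ' ')           // collapse multiple spaces
    .trim()
    .toLowerCase();
  var updates = {};
  text.split(' ').forEach(function(word) {
    if (word) updates['index/' + word + '/' + docKey] = true;
  });
  updates['documents/' + docKey] = doc;   // then store the document as usual
  return db.ref().update(updates);
}

// searching one word is then a single direct lookup under /index/<word>
function searchWord(db, word) {
  return db.ref('index/' + word.toLowerCase()).once('value').then(function(snap) {
    return Object.keys(snap.val() || {});   // keys of the matching documents
  });
}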
SQL"LIKE" operation on firebase is possible
let node = await db.ref('yourPath').orderByChild('yourKey').startAt('!').endAt('SUBSTRING\uf8ff').once('value');
This query works for me; it behaves like the below statement in MySQL (a prefix match):
select * from StoreAds where University LIKE 'ps%';
query = database.getReference().child("StoreAds").orderByChild("University").startAt("ps").endAt("ps\uf8ff");
I am trying to pass an object { key: value } to a Meteor publication so I can query the database.
My MongoDB database has (relevant data only) for products:
products: {
  categs: ['Ladies Top', 'Gents'],
  name: 'Apple'
}
In the Meteor publication I have the following:
Meteor.publish('product', (query) => {
  return Clothings.find(query);
});
On the client I use the following to subscribe:
let query = {categs: '/ladies top/i'}; // please notice the case is lower
let subscribe = Meteor.subscribe('product', query);
if (subscribe.ready()) {
  clothings = Products.find(query).fetch().reverse();
  let count = Products.find(query).fetch().reverse().length; // just for test
}
The issue is that when I send the query from the client to the server, it is automatically encoded, e.g.:
{categs: '/ladies%top/i'}
This query does not seem to work at all. There are more than 20,000 products in total and fetching them all is not an option, so I am trying to fetch based on the category (roughly around 100 products each).
I am new to Meteor and MongoDB and was trying to follow existing code, but this does not seem to be correct. Is there a better way to improve the code and achieve the same result?
Any suggestion or idea is highly appreciated.
I did go through the Meteor docs, but they don't seem to have examples for my scenario, so I hope someone out there can help me :) Cheers!
Firstly, you are trying to send a regex as a parameter. That's why it's being encoded. Meteor doesn't know how to pass functions or regexes as parameters afaict.
For this specific publication, I recommend sending over the string you want to search for and building the regex on the server:
client:
let categorySearch = 'ladies top';
let obj = { categorySearch }; // and any other things you want to query on.
Meteor.subscribe('productCategory',obj);
server:
Meteor.publish('productCategory', function(obj) {
  check(obj, Object);
  let query = {};
  // build the regex on the server instead of passing one over the wire
  if (obj.categorySearch) query.category = { $regex: obj.categorySearch, $options: 'i' };
  // add any other search parameters to the query object here
  return Products.find(query);
});
Secondly, sending an entire query object to a publication (or Method) is not at all secure, since an attacker can then send any query. Perhaps it doesn't matter with your Products collection.
Trying to get a simple result using a "where"-style query in Firebase, but I get null all the time. Can anyone help with that?
http://jsfiddle.net/vQEmt/68/
new Firebase("https://examples-sql-queries.firebaseio.com/messages")
.startAt('Inigo Montoya')
.endAt('Inigo Montoya')
.once('value', show);
function show(snap) {
$('pre').text(JSON.stringify(snap.val(), null, 2));
}
Looking at the applicable records, I see that the .priority is set to the timestamp, not the username.
Thus, you can't startAt/endAt the user's name as you've attempted here. Those are only applicable to the .priority field. These capabilities will be expanding significantly over the next year, as enhancements to the Firebase API continue to roll out.
For now, your best option for arbitrary search of fields is use a search engine. It's wicked-easy to spin one up and have the full power of a search engine at your fingertips, rather than mucking with glacial SQL-esque queries. It looks like you've already stumbled on the appropriate blog posts for that topic.
You can, of course, use an index which lists users by name and stores the keys of all their post ids. And, considering this is a very small data set (less than 100k), you could even just grab the whole thing and search it on the client (larger data sets could use endAt/startAt/limit to grab a recent subset of messages):
new Firebase("https://examples-sql-queries.firebaseio.com/messages").once('value', function(snapshot) {
var messages = [];
snapshot.forEach(function(ss) {
if( ss.val().name === "Inigo Montoya" ) {
messages.push(ss.val());
}
});
console.log(messages);
});
Also see: Database-style queries with Firebase
In the Discover Meteor book, the deny statement is used as follows:
https://github.com/DiscoverMeteor/Microscope/commit/chapter8-3
Posts.deny({
  update: function(userId, post, fieldNames) {
    // .....
  }
});
I don't understand how the update function is getting userId, post, or even fieldNames, since the edit form is doing the following:
var postProperties = {
  url: $(e.target).find('[name=url]').val(),
  title: $(e.target).find('[name=title]').val()
};

Posts.update(currentPostId, {$set: postProperties}, function(error) {
  // ...
});
Those parameters are given by Meteor. The signatures of update functions on the client and in the deny object are different.
From http://docs.meteor.com/#allow:
update(userId, doc, fieldNames, modifier)
The user userId wants to update a document doc. (doc is the current version of the document from the database, without the proposed update.) Return true to permit the change.
fieldNames is an array of the (top-level) fields in doc that the client wants to modify, for example ['name', 'score'].
modifier is the raw Mongo modifier that the client wants to execute; for example, {$set: {'name.first': "Alice"}, $inc: {score: 1}}.
Only Mongo modifiers are supported (operations like $set and $push). If the user tries to replace the entire document rather than use $-modifiers, the request will be denied without checking the allow functions.
The short answer is that these values are filled out for you by Meteor. It understands who is making what modifications to which document and tells the server about it.
The client is calling Posts.update, which sends a message to the server that userId is attempting to update a document (the contents of which are post), and the fields being updated are fieldNames. The server can then choose to accept the update based on those inputs.
This is documented here and here.
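For illustration, a deny rule along these lines (close in spirit to the linked Microscope commit, though this sketch is not copied from it) uses those parameters like so:

Posts.deny({
  update: function(userId, post, fieldNames) {
    // deny the update if it touches anything other than the url and title fields
    return (_.without(fieldNames, 'url', 'title').length > 0);
  }
});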