In RxDB, to list all documents in a collection on a remote database that already contains documents, I've tried:
myCollection.dump()
.then(json => console.dir(json));
and
myCollection.find().exec() // <- find all documents
.then(documents => console.dir(documents));
from the documentation: https://rxdb.info/rx-collection.html#dump
https://rxdb.info/rx-document.html#find
but both do a _find POST with the body:
{"selector":{"_id":{}}}
which returns an empty docs [] array. The same _find selector executed outside of RxDB also returns an empty docs array.
If I add documents to the collection with myCollection.upsert(), the doc is added to the remote server and then appears in the responses of the two calls above. But perhaps only from what's stored in memory, as there's still a remote _find POST with an empty docs: [] response. So on a page refresh those list calls are empty again.
I'm using:
"pouchdb-adapter-http": "7.0.0",
"rxdb": "8.0.4",
"rxjs": "6.3.3"
At this point in time, RxDB has no support for remote collections.
You can sync the remote database into a local collection and then run your queries there. But it is not possible to send queries to the remote server and get the results back, the way pouchdb-adapter-http does.
You can also keep an eye on rxdb.info, since the 9.0.0 major release has announced several improvements to querying document fields.
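For reference, a minimal replication sketch along those lines for RxDB 8.x (the remote URL and the live/retry options are illustrative, not from the question):

// Replicate the remote CouchDB-compatible database into the local collection.
myCollection.sync({
    remote: 'http://localhost:5984/mydb/', // placeholder remote endpoint
    options: {
        live: true,  // keep replicating continuously
        retry: true  // retry when the connection drops
    }
});

// Queries now run against the locally replicated data,
// so they survive a page refresh once replication has caught up:
myCollection.find().exec()
    .then(documents => console.dir(documents));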
My application uses keywords extensively; everything is tagged with keywords, so whenever the user wants to search or add data I have to show keywords in an autocomplete box.
As of now I am storing keywords in a separate collection, as below:
export interface IKeyword {
    Id: string;
    Name: string;
    CreatedBy: IUserMin;
    CreatedOn: firestore.Timestamp;
}
export interface IUserMin {
    UserId: string;
    DisplayName: string;
}
export interface IKeywordMin {
    Id: string;
    Name: string;
}
My main document holds an array of keywords:
export interface MainDocument {
    Field1: string;
    Field2: string;
    // ... other fields ...
    Keywords: IKeywordMin[];
}
The problem is that the autocomplete reads data frequently, so my document-read quota increases very fast.
Is there a way to implement this without increasing reads for keywords? The keywords are not the real data we need to fetch.
Below is my query to get the main documents:
query = query.where("Keywords", "array-contains-any", keywords)
I use the query below to get keywords in the autocomplete text box:
query = query.orderBy("Name").startAt(searchTerm).endAt(searchTerm+ '\uf8ff').limit(20)
This query runs many times while the user types in the autocomplete box, which causes many document reads.
Does this answer your question?
https://fireship.io/lessons/typeahead-autocomplete-with-firestore/
Though the recommended solution is to use a third-party tool:
https://firebase.google.com/docs/firestore/solutions/search
To reduce document reads:
A solution that comes to mind, though I'm not sure it suits your use case, is Firestore's caching feature. By default, the Firestore client always tries to reach the server to get the latest changes to your documents, and falls back to the cached data on the client device only when the server cannot be reached. You can take advantage of this feature by using the cache first and reaching the server only when you want to. For web applications this feature is disabled by default; to understand it better and enable it, see:
https://firebase.google.com/docs/firestore/manage-data/enable-offline
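A minimal sketch of that cache-first approach with the Firebase web SDK (v8-style API; the empty-cache fallback logic is illustrative):

// Enable offline persistence once at startup (disabled by default on the web).
firebase.firestore().enablePersistence()
    .catch(err => console.warn('Persistence unavailable:', err.code));

// Try the local cache first; only hit the server when the cache has nothing.
async function getCacheFirst(query) {
    const cached = await query.get({ source: 'cache' });
    if (!cached.empty) return cached;
    return query.get({ source: 'server' });
}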
I found a solution; thought I would share it here.
Create a new collection named typeaheads in the format below:
export interface ITypeAHead {
    Prefix: string;
    CollectionName: string;
    FieldName: string;
    MatchingValues: ILookupItem[];
}
export interface ILookupItem {
    Key: string;
    Value: string;
}
Depending on the minimum number of letters, add either 2 or 3 letters to Prefix, and search based on the prefix, collection and field. So most probably you will end up with 2 or 3 document reads per search. For example:
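Here is a minimal sketch of that lookup (the collection and field names follow the interfaces above; db, the prefix length and the client-side filtering are illustrative):

// Build the short prefix key and fetch the matching lookup document(s).
const prefix = searchTerm.substring(0, 3).toLowerCase(); // 2 or 3 letters
const snapshot = await db.collection('typeaheads')
    .where('Prefix', '==', prefix)
    .where('CollectionName', '==', 'MainDocument')
    .where('FieldName', '==', 'Keywords')
    .get();

// Filter the embedded values client-side: this costs no extra document reads.
const matches = snapshot.docs
    .flatMap(doc => doc.data().MatchingValues)
    .filter(item => item.Value.toLowerCase().startsWith(searchTerm.toLowerCase()))
    .slice(0, 20);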
Hope this helps someone else.
I read that Firestore can now query across subcollections. Is the firestoreConnect HOC from react-redux-firebase capable of utilizing this feature?
Collection Group Queries were released at Google I/O last week (May 7, 2019). A quick scan of the react-redux-firebase release notes shows no mention of them at this time, so it seems like they're not supported yet. You might want to file an issue/feature request for it and monitor said release notes for updates.
I read about that too. There is info about how to perform subcollection queries here: Link. I am not sure about react-redux, however; what are your intentions?
// To query all subcollections with the react-redux-firebase useFirestoreConnect hook, use:
useFirestoreConnect([
    {
        collectionGroup: "COLLECTION_GROUP_NAME",
        storeAs: "ANY_NAME",
    },
]);

// To read the result and save it to a variable:
let YOUR_VAR = useSelector(
    (state) => state.firestore.ordered.ANY_NAME
);
We are attempting to mock CosmosDB invocations with WireMock.Net (https://github.com/WireMock-Net/WireMock.Net) so we can build integration tests for our .NET Core 2.1 microservice.
By looking at the WireMock instance Request/Response entries, we can observe the following:
1) GET towards "/"
We mock the returning metadata of databases
THIS IS OK
2) GET towards collection (in our case: "/dbs/Chunker/colls/RHTMLChunks")
Returns metadata about the collections
THIS IS OK
3) POST a Query that results in one document being returned towards the documents endpoint on the collection (in our case: "/dbs/Chunker/colls/RHTMLChunks/docs")
I have tried to emulate what we get when we do the exact same query towards the CosmosDb instance in Postman, including headers and response.
However, I observe that the library runs the query again, and again, and again...
(I can see this by pausing in Visual Studio and then looking at the RequestLog in WireMock.)
Does anyone know what should be returned? I have set up WireMock to return the following JSON payload:
{
    "_rid": "q0dcAOelSAI=",
    "Documents": [
        {
            "id": "gL20020621z2D34-1",
            "ChunkSize": 658212,
            "TotalChunks": 2,
            "Metadata": {
                "Active": true,
                "PublishedDate": ""
            },
            "ChunkId": 1,
            "Markup": "<h1>hello</h1>",
            "MainDestination": "gL20020621z2D34",
            "_rid": "q0dcAOelSAIHAAAAAAAAAA==",
            "_self": "dbs/q0dcAA==/colls/q0dcAOelSAI=/docs/q0dcAOelSAIHAAAAAAAAAA==/",
            "_etag": "\"0100e92a-0000-0000-0000-5ba96cf70000\"",
            "_attachments": "attachments/",
            "_ts": 1537830135
        }
    ],
    "_count": 0
}
Problems:
1) Cannot find the .pdb belonging to Microsoft.Azure.DocumentDB.Core v2.1.0
2) What payload/headers should be returned so that the library will NOT blow up and retry when we invoke:
var response = await documentQuery.ExecuteNextAsync<DocumentDto>(); // this hangs forever
Please help :)
We're working on open-sourcing the C# code base and some other fun improvements to make this easier. In the meantime, I'd advocate using the emulator for local testing etc., although I understand mocking is still a lot faster and nicer - it'll just be hard :)
My best pointer is actually our Node.js code base, since that's public already. The query code is relatively hard to follow, but basically: you create a query, we look up all the partitions we need to talk to, then we send a request for each partition and keep querying until we no longer get back a continuation token (or maxBufferedItemCount etc. goes over the limit, and we pause until it goes back down, and so on).
Effectively, we send out N requests for each partition, where N is the number of pages of results, and N can vary per partition and query. You'd likely be able to mock a single-partition, single-page response relatively easily, but a full multi-partition response isn't gonna be fun.
As I mentioned in the beginning, we've got some cool stuff coming, hopefully before the end of the year, which will make offline mocking easier, as well as finally open-sourcing it. You might be better off with the emulator until then.
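Building on that answer: the SDK keeps paging as long as the response carries a continuation token, so a mocked single-partition, single-page response presumably needs to omit the x-ms-continuation header entirely (its absence is what signals the last page). A sketch of the response shape under that assumption, with illustrative header values:

HTTP/1.1 200 OK
Content-Type: application/json
x-ms-item-count: 1

{ "_rid": "...", "Documents": [ ... ], "_count": 1 }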
I'm trying to get a collection's count from the server to the client. I want to use it for paging, and also so users will know how many documents are available. It's important that the count updates when documents are added or removed.
One problem is paging, where I limit the number of documents sent to the client with publish/subscribe. But in the case below, the client will not know whether the MyPix collection contains more than 4 documents:
Meteor.publish('MyPix', function(cursor) {
return MyPix.find({}, {limit:4, skip:cursor});
})
This is a bit tricky; a quick solution is to use the publish-counts package.
Server
Meteor.publish('publication', function() {
Counts.publish(this, 'numberOfPosts', Posts.find());
Counts.publish(this, 'numberOfUsers', Users.find());
});
Client
Meteor.subscribe('publication')
then to get numberOfUsers or numberOfPosts:
Counts.get('numberOfUsers') // returns the number of users
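For the paging case in the question, a minimal sketch combining the published count with the limited subscription (the page size of 4 mirrors the 'MyPix' publication above; the other names are illustrative):

// Server: publish the total count alongside the paged publication.
Meteor.publish('MyPixCount', function() {
    Counts.publish(this, 'numberOfPix', MyPix.find());
});

// Client: subscribe to both, then derive the page count reactively.
Meteor.subscribe('MyPix', cursor);
Meteor.subscribe('MyPixCount');
var totalDocs = Counts.get('numberOfPix'); // reactive; updates on add/remove
var totalPages = Math.ceil(totalDocs / 4); // 4 = limit used in the 'MyPix' publication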
I am trying to create a two-column unique index on the underlying MongoDB in a Meteor app and am having trouble. I can't find anything in the Meteor docs. I have tried from the Chrome console. I have tried from a terminal, and even tried pointing mongod at the /db/ dir inside .meteor. I have tried variations of:
Collection.ensureIndex({first_id: 1, another_id: 1}, {unique: true});
I want to be able to prevent duplicate entries on a meteor app mongo collection.
Wondering if anyone has figured this out?
I answered my own question, what a noob.
I figured it out.
Start the Meteor server.
Open a second terminal and type meteor mongo.
Then create your indexes. For example, I did these for the records of a thumbs-up/thumbs-down type system:
db.thumbsup.ensureIndex({item_id: 1, user_id: 1}, {unique: true})
db.thumbsdown.ensureIndex({item_id: 1, user_id: 1}, {unique: true})
Now I just gotta figure out a bootstrap/install setup that creates these when pushed to prod, instead of creating them manually.
Collection._ensureIndex(index, options)
Searching inside Meteor source code, I found a bind to ensureIndex called _ensureIndex.
For single-key basic indexes you can follow the example of packages/accounts-base/accounts_server.js that forces unique usernames on Meteor:
Meteor.users._ensureIndex('username', {unique: 1, sparse: 1});
For multi-key "compound" indexes:
Collection._ensureIndex({first_id:1, another_id:1}, {unique: 1});
The previous code, when placed on the server side, ensures that indexes are set.
Warning
Notice the _ensureIndex implementation warning:
We'll actually design an index API later. For now, we just pass through to Mongo's, but make it synchronous.
According to the docs, "Minimongo currently doesn't have indexes. This will come soon." And looking at the methods available on a Collection, there's no ensureIndex.
You can run meteor mongo for a Mongo shell and enable the indexes server-side, but the Collection object still won't know about them. So the app will let you add multiple instances to the Collection cache, while on the server side the additional inserts will fail silently (the errors get written to the output). When you do a hard page refresh, the app will re-sync with the server.
So your best bet for now is probably to do something like:
var count = MyCollection.find({first_id: 'foo', another_id: 'bar'}).count();
if (count === 0)
    MyCollection.insert({first_id: 'foo', another_id: 'bar'});
Which is obviously not ideal, but works OK. You could also enable indexing in MongoDB on the server, so even in the case of a race condition you won't actually get duplicate records.
The smart package aldeed:collection2 supports unique indexes as well as schema validation. Validation occurs on both the server and the client (reactively), so you can react to errors on the client.
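A minimal sketch of the collection2 approach (assuming aldeed:collection2 with SimpleSchema; note that the schema-level unique option applies to a single field, so the compound two-column index from the question would still need _ensureIndex):

// Attach a schema: `index` plus `unique` makes collection2 create a
// unique index on that one field, and inserts/updates are validated
// on both client and server.
MyCollection.attachSchema(new SimpleSchema({
    first_id: { type: String },
    another_id: { type: String, index: 1, unique: true } // per-field unique only
}));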
Actually, why not use an upsert on the server with a Meteor.method? You could also track it with a ts (timestamp):
// Server only
Meteor.methods({
    add_only_once: function(id1, id2) {
        SomeCollection.update(
            {first_id: id1, another_id: id2},
            {$set: {ts: Date.now()}},
            {upsert: true});
    }
});

// Client
Meteor.call('add_only_once', doc1._id, doc2._id);
// Actual code running on the server
if (Meteor.isServer) {
    Meteor.methods({
        register_code: function(key, monitor) {
            Codes.update({key: key}, {$set: {ts: Date.now()}}, {upsert: true});
        }
        // ...
    });
}