I'd like to clone a minimongo collection so I can do some calculations, get a result, then push those results back to the server.
Assuming this is a suitable pattern, how best can I clone a minimongo collection?
It appears that the object no longer has a ._deepcopy (as of 1.0.4), and attempting an EJSON.clone exceeds the call stack size for even tiny collections. Underscore's _.clone() only copies by reference.
Alternatively, I could just edit the local collection via collection._collection.update. But in that case, what would happen if, on the off chance, the server updated or removed a doc while it was processing? I watched this video, but am still unclear on that scenario: https://www.eventedmind.com/feed/meteor-how-does-the-client-synchronize-writes-with-the-server
The why behind your pattern escapes me, but one solution could be to define a null collection (docs), copy the records you need into it, do your work, and then copy the results back into the original collection for automatic sync back to the server.
myLocalCollection = new Mongo.Collection(null);
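A rough sketch of that round trip (the Docs collection, the processed flag, and the result field are placeholders for your own logic):

// copy the docs you need into the null collection
Docs.find({processed: false}).forEach(function (doc) {
  myLocalCollection.insert(doc);
});

// ... run your calculations against myLocalCollection ...

// copy the results back for automatic sync to the server
myLocalCollection.find().forEach(function (doc) {
  Docs.update(doc._id, {$set: {result: doc.result, processed: true}});
});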
After lots of reading, I'm starting to get a better handle on Meteor's publish/subscribe model. I've removed the autopublish training wheels from my first app and while I have most everything working, I am seeing one issue.
When the app first loads, my publish and subscribe hooks work great. I have a block of code that runs in a Tracker.autorun() block which makes the subscribe calls, I am able to sequentially wait for data from the server using ready() on my subscribe handles, etc.
One feature of my app is that it allows the user to insert new documents into a collection. More specifically, when the user performs a certain action, this triggers an insert. At that point, the client-side JS runs and the insert into MiniMongo completes. The reactive autorun block runs and the client can see the inserted document. The client updates the DOM with the newly inserted data and all is well.
Furthermore, when I peek into the server-side MongoDB, I see the inserted document which means the server-side JS is running fine as well.
Here's where it gets weird. The client-side autorun block runs a second time (I'm not sure why) and this time, the client no longer has the inserted item. When the DOM renders, the newly inserted item is now gone. If I reload the page, all is well again.
Has anyone seen this behavior before? I'm also noticing that the server-side publish call runs once on page load but then it doesn't run again after the insert. This seems wrong because how else will the client get the reconciled data from the server after the insertion (i.e. after Meteor's client-side latency compensation)?
The important functions (ComponentInstances is the collection that is bugging out):
Publish block:
Meteor.publish('allComponentInstances', function (documentId, screenIndex) {
  console.log(`documentId: ${documentId} screenIndex: ${screenIndex}`)
  const screens = Screens.find({ownerDocumentId: documentId})
  const selectedScreen = screens.fetch()[screenIndex]
  return ComponentInstances.find({_id: {$in: selectedScreen.allComponentInstanceIds}})
})
Subscription block in autorun:
// ... a bunch of irrelevant code above
const allComponentInstancesHandle = Meteor.subscribe('allComponentInstances', document._id, 0)
if (allComponentInstancesHandle.ready()) {
  isReady = true
  screens = Screens.find({ownerDocumentId: document._id}).fetch()
  const componentInstanceObjects = ComponentInstances.find().fetch()
  allComponentInstances = {}
  componentInstanceObjects.map((componentInstance) => {
    allComponentInstances[componentInstance._id] = componentInstance
  })
}
This is most probably because you're inserting documents from the client side and you have not set up your permission rules properly. When you remove autopublish and insecure from your app, you are not allowed to insert/update/remove documents in a collection unless you have allow/deny rules set up on the server side.
Meteor has a great feature called latency compensation, which tries to emulate your db operations before the actual write happens in the db. When the server then tries to write to the db, it checks the allow/deny rules. If the permission rules don't allow the db operation, or it fails for whatever other reason (allow/deny or authentication), then the server's data gets synchronized back to your client-side db.
This is why, I assume, you are seeing your document inserted at first and then disappearing within a second.
Check this section of the Meteor docs:
http://docs.meteor.com/#/full/allow
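For completeness, a minimal sketch of such rules on the server (the logged-in check is just a placeholder for whatever your app's rules should be):

// server side: permit writes from logged-in users only
ComponentInstances.allow({
  insert: function (userId, doc) { return !!userId; },
  update: function (userId, doc, fieldNames, modifier) { return !!userId; },
  remove: function (userId, doc) { return !!userId; }
});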
I ended up solving this a different way. The core issue, I believe, has nothing to do with allow/deny rules. In fact, their role is still hazy to me.
I realize now what I've been reading all along in the Meteor docs: publish functions return cursors. If the cursor itself doesn't change (e.g. if you're passing the specific keys you want to fetch), then it won't really work as a reactive data source in the sense that new documents in the collection will not make the data publish again. You are, after all, still requesting the same keys.
The way forward is to come up with a publish cursor that accurately reflects the reactive data you want to retrieve. This sounds abstract, but in practice it means making sure the cursor is general, not tied to the specific keys you are retrieving.
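As a sketch of the difference (assuming the instances carry an ownerDocumentId field, which is an assumption about the schema):

// too specific: a fixed _id list never picks up newly inserted docs
Meteor.publish('byIds', function (ids) {
  return ComponentInstances.find({_id: {$in: ids}});
});

// general: any new doc matching the query is pushed to the client
Meteor.publish('byDocument', function (documentId) {
  return ComponentInstances.find({ownerDocumentId: documentId});
});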
I just need a suggestion in this case. There is a PIN code field in my project, in an ASP.NET environment. I have stored around 50,000 PIN codes in a SQL Server database. When I run the project on localhost, it slows down, since I have a drop-down that pulls its values from the database. I think this is because a huge amount of data is being rendered into the HTML: when I click View Source at run time, I can see all the PIN codes inside it.
Moreover, I have done the same thing for selecting CITY and STATE from the database.
I would really appreciate any logic or technique to lessen this slowdown.
If you are using all the PIN codes on a single page, then you have multiple options to optimize this slowdown. If you are still in the initial phase, try MongoDB or another NoSQL DB; otherwise go for Solr or Redis, which give fast access to the data. If you are not able to use these, then you can optimize with eager loading and caching the data.
If it's not all on a single page, then break it into batches by paginating the PIN codes.
This is a common problem with any website that deals with a large amount of data. To be frank, there is no code-level solution for this; you need to select one of the following approaches.
You can try multiple options for faster retrieval.
Caching -
Use Redis or memcache - in simpler words, on the first request the cache manager will read and store your data from SQL Server. For subsequent requests, data will be served from the cache.
Also, don't forget to make a provision to invalidate the cache when new PIN codes are added.
Edit: You can also use the object caching provided by the .NET framework. Refer: object caching
The code will be something like:

// read from the cache, falling back to the database on a miss
DataTable pinCodeObject;

if (Cache["key_pincodes"] == null)
{
    // no object in the cache: read the data into a DataTable (or any object)
    // and cache it with a sliding expiry of 10 minutes
    pinCodeObject = GetPinCodesFromdatabase();
    Cache.Insert("key_pincodes", pinCodeObject, null,
        Cache.NoAbsoluteExpiration, TimeSpan.FromMinutes(10));
}
else // pin codes are cached: skip the database call and read from the cache
{
    pinCodeObject = (DataTable)Cache["key_pincodes"];
}

// bind it to your dropdown
NoSQL storage -
MongoDB, or even XML or text files, could be used to read the data. It will take much less time than a database hit.
I have a situation in which I need to subscribe to the same collection twice. The two publish methods in my server-side code are as follows:
Meteor.publish("selected_full_mycollection", function (important_id_list) {
check(important_id_list, Match.Any); // should do better check
// this will return the full doc, including a very long array it contains
return MyCollection.find({
important_id: {$in: important_id_list}
});
});
Meteor.publish("all_brief_mycollection", function() {
// this will return all documents, but only the id and first item in the array
return MyCollection.find({}, {fields: {
important_id: 1,
very_long_array: {$slice: 1}
}});
});
My problem is that I am not seeing the full documents on the client end after I subscribe to them. I think this is because they are being overwritten by the publication that sends only the brief versions.
I don't want to clog up my client memory with long arrays when I don't need them, but I do want them available when I do need them.
The brief version is subscribed to on startup. The full version is subscribed to when the user visits a template that drills down for more insight.
How can I properly manage this situation?
TL/DR - skip to the third paragraph.
I'd speculate that this is because the publish function thinks the very_long_array field has already been sent to the client, so it doesn't send it again. You'd have to fiddle around a bit to confirm this, but sending different data on the same field is bound to cause some problems.
As for subscribing to the same data as two collections: you're not supposed to be able to do this, as the unique Mongo collection name needs to be provided to both the client- and server-side collection objects. In practice, you might be able to do something really hacky by making one client subscription a fake remote subscription via DDP and having it populate a totally separate JavaScript object. However, this cannot be the best option.
This situation would be resolved by publishing your summary on something other than the same field. Unfortunately, you can't use transforms when returning cursors from a publish function (which would be the easiest way), but you have two options:
Use the low-level publications API as detailed in this answer.
Use collection hooks to populate another field (like very_long_array_summary) with the first item in the array whenever very_long_array changes, and publish just the summary field in the former publication (a rough sketch follows below).
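A rough sketch of that second option, assuming the matb33:collection-hooks package and that the long array is always replaced wholesale via $set:

// keep a one-item summary field in sync with the long array
MyCollection.before.insert(function (userId, doc) {
  doc.very_long_array_summary = (doc.very_long_array || []).slice(0, 1);
});

MyCollection.before.update(function (userId, doc, fieldNames, modifier) {
  if (modifier.$set && modifier.$set.very_long_array) {
    modifier.$set.very_long_array_summary = modifier.$set.very_long_array.slice(0, 1);
  }
});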
A third option might be publishing the long version to a different collection that exists for this purpose on the client only. You might want to check the "Advanced Pub/Sub" Chapter of Discover Meteor (last sub chapter).
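A hedged sketch of that third option, publishing the full docs into a client-only collection under a different name (myCollectionFull is a placeholder):

Meteor.publish('selected_full_mycollection', function (importantIdList) {
  var sub = this;
  var handle = MyCollection.find({important_id: {$in: importantIdList}}).observeChanges({
    added: function (id, fields) { sub.added('myCollectionFull', id, fields); },
    changed: function (id, fields) { sub.changed('myCollectionFull', id, fields); },
    removed: function (id) { sub.removed('myCollectionFull', id); }
  });
  sub.ready();
  sub.onStop(function () { handle.stop(); });
});

// client side: MyCollectionFull = new Mongo.Collection('myCollectionFull');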
In Meteor, I've got a collection that the client subscribes to. In some cases, instead of publishing the documents that exist in the collection on the server, I want to send down some bogus data. Now that's fine using the this.added function in the publish.
My problem is that I want to treat the bogus doc as if it were a real document, and this gets troublesome specifically when I want to update it. For the real docs I run RealDocs.update, but doing that on a bogus doc fails since there is no representation of it on the server (and I'd like to keep it that way).
A collection API that allowed me to pass something like local = true would be fantastic, but I have no idea how difficult that would be to implement and I'm not too fond of modifying the core code.
Right now I'm stuck on creating a BogusDocs = new Meteor.Collection(null), but that makes populating the collection more difficult since I have to either hard-code fixtures in the client code or use a method to get the data from the server, and I have to make sure I call BogusDocs.update instead of RealDocs.update as soon as I'm dealing with bogus data.
Maybe I could actually insert the data on the server and make sure it's removed later, but the data really has nothing to do with the server side collection so I'd rather avoid that.
Any thoughts on how to approach this problem?
After some further investigation (the Evented Mind site), it turns out that one can modify the local collection without making calls to the server. This is done by running the same methods as you usually would, but on MyCollection._collection instead of just on MyCollection. MyCollection.update() would thus become MyCollection._collection.update(). So, using a simple wrapper, one can pass in the usual arguments to an update call to update the collection as usual (which will call the server, which in turn will trigger your allow/deny rules), or we can add 'local' as the last argument to only perform the update in the client collection. Something like this should do it:
DocsUpdateWrapper = function() {
  // 'arguments' is not a real array, so convert it first
  var args = Array.prototype.slice.call(arguments);
  var lastIndex = args.length - 1;
  if (args[lastIndex] === 'local') {
    // strip the 'local' flag and update only the client-side collection
    Docs._collection.update.apply(Docs._collection, args.slice(0, lastIndex));
  } else {
    // a normal update: goes to the server and through your allow/deny rules
    Docs.update.apply(Docs, args);
  }
};
(This could of course be extended to a DocsWrapper that allows for insertions and removals too.) (I haven't tried this function yet, but it should serve well as an example.)
The biggest benefit of this, imo, is that we can use the exact same calls to retrieve documents from the local collection, regardless of whether they are only local or also live on the server. By adding a simple boolean to the doc we can keep track of which documents are only local and which are not (an improved DocsWrapper could check for that bool, so we could even omit passing the 'local' argument), so we know how to update them.
There are some people working on local storage in the browser
https://github.com/awwx/meteor-browser-store
You might be able to adapt some of their ideas to provide "fake" documents.
I would use the transform feature on the collection to make an object that knows what to do with itself (on the client). Give it the correct update method (real/bogus), then call .update on it rather than a general one.
You can put the code from this.added into the transform process.
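A hedged sketch of that idea, assuming the bogus docs get flagged with an isBogus field when you this.added them:

RealDocs = new Meteor.Collection('realdocs', {
  transform: function (doc) {
    // bogus docs exist only on the client, so write to minimongo directly;
    // real docs go through the normal server round trip
    doc.update = doc.isBogus
      ? function (modifier) { return RealDocs._collection.update(doc._id, modifier); }
      : function (modifier) { return RealDocs.update(doc._id, modifier); };
    return doc;
  }
});

// usage: RealDocs.findOne(docId).update({$set: {checked: true}});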
You can also set up a local minimongo collection and insert into it in a callback:
FoundAgents = new Meteor.Collection(null, transform: Agent.transformData)
FoundAgents.remove({})

Meteor.call 'Get_agentsCloseToOffer', me, ping, (err, data) ->
  if err
    console.log JSON.stringify(err, null, 2)
  else
    _.each data, (item) ->
      FoundAgents.insert item
Maybe this is interesting for you as well: I created two examples with native Meteor local collections at meteorpad. The first pad shows an example with a plain reactive recordset: Sample_Publish_to_Local-Collection. The second uses the collection's .observe method to listen to data: Collection.observe().
On my Meteor project, users can post events and they have to choose (via an autocomplete) the city in which it will take place. I have a full list of French cities and it will never be updated.
I want to use a collection and publish/subscribe based on the input of the autocomplete, because I don't want the client to download the full database (5MB). Is there a way, for performance, to tell Meteor that this collection is "static"? Or does it make no difference?
Could anyone suggest a different approach?
When you "want to tell the server that a collection is static", I am aware of two potential optimizations:
Don't observe the database using a live query because the data will never change
Don't store the results of this query in the merge box because it doesn't need to be tracked and compared with other data (saving memory and CPU)
(1) is something you can do rather easily by constructing your own publish cursor. However, if any client is observing the same query, I believe Meteor will (at least in the future) optimize for that so it's still just one live query for any number of clients. As for (2), I am not aware of any straightforward way to do this because it could potentially mess up the data merging over multiple publications and subscriptions.
To avoid using a live query, you can manually add data in the publish function instead of returning a cursor (returning a cursor is what causes the observe machinery to hook data up to the subscription). Here's a simple example:
Meteor.publish('name-of-publication', function() {
  var sub = this;
  var args = {}; // what you're find()ing
  Foo.find(args).forEach(function(document) {
    sub.added("client_collection_name", document._id, document);
  });
  sub.ready();
});
This will cause the data to be added to client_collection_name on the client side, which could have the same name as the collection referenced by Foo, or something different. Be aware that you can do many other things with publications (also, see the link above.)
UPDATE: To resolve issues from (2), which can be potentially very problematic depending on the size of the collection, it's necessary to bypass Meteor altogether. See https://stackoverflow.com/a/21835534/586086 for one way to do it. Another way is to just return the collection fetch()ed as a method call, although this doesn't have the benefits of compression.
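A sketch of that method-call variant (the names are placeholders):

// server: hand over the static data once; no live query, no merge box
Meteor.methods({
  getAllCities: function () {
    return Cities.find({}, {fields: {name: 1}}).fetch();
  }
});

// client: fetch once and keep it in a plain variable or a local collection
Meteor.call('getAllCities', function (err, cities) {
  if (!err) allCities = cities;
});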
From the Meteor docs:
"Any change to the collection that changes the documents in a cursor will trigger a recomputation. To disable this behavior, pass {reactive: false} as an option to find."
I think this simple option is the best answer.
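For example (the Cities collection and the query are placeholders):

// this query runs once and never triggers a recomputation
var matches = Cities.find({name: /^Par/}, {reactive: false}).fetch();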
You don't need to publish your whole collection.
1. Show autocomplete options only after the user has typed the first 3 letters - this will narrow your search significantly.
2. Provide no more than 5-10 cities as options - this will keep your recordset really small, so there is no need to push 5MB of data to each user.
Your publication should look like this:
Meteor.publish('pub-name', function(userInput){
  var firstLetters = new RegExp('^' + userInput);
  return Cities.find({name: firstLetters}, {limit: 10, sort: {name: 1}});
});
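On the client you would then resubscribe as the user types, along these lines (the Session key and the 3-letter threshold are assumptions):

Tracker.autorun(function () {
  var input = Session.get('cityInput') || '';
  if (input.length >= 3) {
    // each rerun stops the old subscription and starts a fresh one
    Meteor.subscribe('pub-name', input);
  }
});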