How to test a custom Meteor method with Velocity + Jasmine

I have a Collection 'workouts' as follows:
Workouts = new Mongo.Collection('workouts');

Meteor.methods({
  workoutInsert: function () {
    var user = Meteor.user();
    check(user._id, String);
    var workout = {
      completed: false,
      createdAt: new Date(),
      userId: user._id
    };
    var workoutId = Workouts.insert(workout);
    return {
      _id: workoutId
    };
  }
});
I am wondering:
1) What would a Velocity + Jasmine test look like for this method? I'm unsure where to start and would really appreciate an example!
2) Is this the best practice - defining this method and calling it client-side? Or should I create a Workout class and call this method from an instance method of that class? Or should I perhaps extend Workouts into its own class and add instance methods to it?

In Meteor there are several types of testing: Client Integration, Client Unit, Server Integration and Server Unit.
Integration tests mirror your site and will load your Meteor methods for you (i.e. workoutInsert).
If I were testing this, I might have something such as:
//File Location: app/tests/server/integration/workoutsSpec.js
Jasmine.onTest(function () {
  describe('workouts', function () {
    it("should call Workouts.insert", function () {
      // Make Meteor.user() return a user with a truthy _id for testing
      spyOn(Meteor, 'user').and.returnValue({_id: "1"});
      // Set up a spy to watch for calls to Workouts.insert
      spyOn(Workouts, 'insert');
      // Call the workoutInsert Meteor method
      Meteor.call('workoutInsert');
      // Verify that Workouts.insert was called
      expect(Workouts.insert).toHaveBeenCalled();
    });
  });
});
Lastly, MeteorJS gives you a lot of freedom in how you implement things, and there's no single best way that works for every scenario. That said, I'd advise against placing any code that interacts with your database on the client. Anything located in your client folder is publicly accessible/readable by your users (do they need to see low-level validation details?).

To answer your second question, the best practice is to keep your Meteor methods isolated in the server directory. Meteor uses these reserved directory names to give you control over which resources are served to the client, the server, or both. The methods don't need to live in the same file or directory as your Mongo collections, since your collections can be available on both client and server. That is usually considered best practice, especially if you're using frameworks like angular-meteor, which rely on collection definitions being available on the client so that filters can be passed to them. You can secure those collections and modify their permissions on the server using collection.allow()/deny().
So if you kept all your collections in the collections/ directory they could be defined like so:
Workouts = new Mongo.Collection('workouts');
would be the contents of collections/workouts.js
Then, in your server/ directory (at the same level as collections/), you can put all your methods in a file at that level or deeper in the tree, for example in a server/methods/ directory. You can then put the workout methods in a workouts.js file there, if you like:
Meteor.methods({
  workoutInsert: function () {
    var user = Meteor.user();
    check(user._id, String);
    var workout = {
      completed: false,
      createdAt: new Date(),
      userId: user._id
    };
    var workoutId = Workouts.insert(workout);
    return {
      _id: workoutId
    };
  }
});
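As noted above, it's best to keep database-touching code off the client. If you also want to block client-initiated writes explicitly, here is a minimal sketch using deny rules (with the insecure package removed this is already the default behaviour, so this is only making the intent explicit):
// Sketch: block all client-initiated writes to Workouts so that only
// trusted server code (e.g. the workoutInsert method) can modify it.
// allow/deny rules only apply to writes coming from the client.
Workouts.deny({
  insert: function () { return true; },
  update: function () { return true; },
  remove: function () { return true; }
});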

Related

Meteor GroundDB granularity for offline/online syncing

Let's say two users make changes to the same document while offline, but in different sections of the document. If user 2 goes back online after user 1, will the changes made by user 1 be lost?
In my database, each row contains a JS object, and one property of this object is an array. This array is bound to a series of check-boxes on the interface. What I would like is that if two users make changes to those check-boxes, the latest change is kept for each check-box individually, based on the time when the change was made, not the time when the syncing occurred. Is GroundDB the appropriate tool to achieve this? Is there any means to add an event handler in which I can add some logic that would be triggered when syncing occurs, and that would take care of the merging?
The short answer is "yes" - none of the Ground DB versions have conflict resolution, since the logic is custom and depends on the desired conflict-resolution behaviour, e.g. whether you want to automate it or involve the user.
The old Ground DB simply relied on Meteor's conflict resolution (latest data to the server wins); I'm guessing you can see some issues with that, depending on the order in which clients come back online.
Ground DB II doesn't have method resume; it's more or less just a way to cache data offline. It observes an observable source.
I guess you could create a middleware observer for GDB II - one that checks the local data before doing the update, and updates the client and/or calls the server to update the server data. This way you would have a way to handle conflicts.
I seem to remember writing some code that supported "deletedAt"/"updatedAt" for some types of conflict handling, but again a conflict handler should be custom for the most part (opening the door for reusable conflict handlers might be useful).
Knowing when data is removed can be especially tricky if you don't "soft" delete via something like a "deletedAt" field.
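To illustrate the "soft delete" idea, here is a minimal sketch (the Items collection and softRemoveItem method names are invented for illustration): instead of removing documents, stamp them with deletedAt so offline clients can detect removals when they sync.
// Hypothetical sketch - Items and softRemoveItem are illustrative names
Meteor.methods({
  softRemoveItem: function (itemId) {
    check(itemId, String);
    // Mark the document as deleted instead of calling Items.remove(itemId)
    return Items.update(itemId, { $set: { deletedAt: new Date() } });
  }
});

// Publications can then exclude soft-deleted documents:
Meteor.publish('visibleItems', function () {
  return Items.find({ deletedAt: { $exists: false } });
});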
The "rc" branch is currently grounddb-caching-2016 version "2.0.0-rc.4",
I was thinking about something like:
(mind it's not tested, written directly in SO)
// Create the grounded collection
foo = new Ground.Collection('test');
// Make it observe a source (it's aware of createdAt/updatedAt and
// removedAt entities)
foo.observeSource(bar.find());
bar.find() returns a cursor with an observe function; our middleware should do the same. Let's create a createMiddleware helper for it:
function createMiddleware(source, middleware) {
  const cursor = (typeof (source || {}).observe === 'function') ? source : source.find();
  return {
    observe: function (observerHandle) {
      const sourceObserverHandle = cursor.observe({
        added: doc => {
          middleware.added.call(observerHandle, doc);
        },
        // Note: Meteor's cursor.observe calls this callback "changed"
        changed: (doc, oldDoc) => {
          middleware.updated.call(observerHandle, doc, oldDoc);
        },
        removed: doc => {
          middleware.removed.call(observerHandle, doc);
        },
      });
      // Return stop handle
      return sourceObserverHandle;
    }
  };
}
Usage:
foo = new Ground.Collection('test');

foo.observeSource(createMiddleware(bar.find(), {
  added: function (doc) {
    // just pass it through
    this.added(doc);
  },
  updated: function (doc, oldDoc) {
    const fooDoc = foo.findOne(doc._id);
    // Example of a simple conflict handler:
    if (fooDoc && doc.updatedAt < fooDoc.updatedAt) {
      // Seems like the foo doc is newer? Let's update the server...
      // (we'll just use the regular bar, since that's the Meteor
      // collection and foo is the grounded data)
      bar.update(doc._id, fooDoc);
    } else {
      // pass through
      this.updated(doc, oldDoc);
    }
  },
  removed: function (doc) {
    // again just pass through for now
    this.removed(doc);
  }
}));

Why does my Meteor.method return undefined?

There is a simple field to input a YouTube ID. I am using renaldo's youtube api (https://atmospherejs.com/renaldo/youtube-api) to retrieve the title of the YouTube clip.
The client-side event passes the track ID (var tid) to the 'addTrack' method successfully - it outputs the track's title to the console on the server. I am having a really bad time returning anything at all back to the client.
If I call the 'getVideoData' method from the 'addTrack' method, it still returns 'undefined' every time. I am no expert in Meteor or JavaScript; this is just something I am learning for the hell of it.
I understand the concept of callbacks and the asynchronous nature of JavaScript (I think!).
Thanks.
[EDIT: The code below solves my problem, using Future]
if (Meteor.isServer) {
  YoutubeApi.authenticate({
    type: 'key',
    key: API_KEY
  });

  Meteor.methods({
    addTrack: function (tid) {
      Meteor.call("getVideoData", tid, function (err, res) {
        console.log(res);
      });
    },
    getVideoData: function (tid) {
      var future = new Future();
      YoutubeApi.videos.list({
        part: "snippet",
        id: tid,
      }, function (err, data) {
        var _data = {
          "title": data.items[0].snippet.title,
          "desc": data.items[0].snippet.description
        };
        future["return"](_data);
      });
      return future.wait();
    }
  });

  Meteor.startup(function () {
    Future = Npm.require('fibers/future');
  });
}
Meteor methods are tricky, as they use Fibers to be synchronous (well, they appear synchronous to the developer). So you need to use Meteor.wrapAsync() to wrap the call to YoutubeApi. I haven't tested the following code, but it should look something like:
Meteor.methods({
  getVideoData: function (tid) {
    // Pass YoutubeApi.videos as the context so `this` is preserved inside list()
    var syncYT = Meteor.wrapAsync(YoutubeApi.videos.list, YoutubeApi.videos);
    var data = syncYT({part: "snippet", id: tid});
    var transformed = {
      "title": data.items[0].snippet.title,
      "desc": data.items[0].snippet.description
    };
    console.log(transformed.title);
    return transformed.title;
  }
});
You'll want to read more about error handling in this case, but this should get you going. Just remember that client-side Meteor is always asynchronous and server-side is optionally asynchronous. Use wrapAsync or Futures to handle async needs in methods.
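As a side note, here is a rough sketch of the client-side call, since a method invoked from the client never returns its result synchronously - it only arrives in the callback (tid and the returned shape follow the code above):
// Client-side sketch: read the method result in the callback
Meteor.call('getVideoData', tid, function (err, data) {
  if (err) {
    console.error(err);
  } else {
    // data is whatever the server method returned
    // (a title string here, or {title, desc} in the Future version)
    console.log(data);
  }
});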
Lastly, the Meteor guide is great, use it!

meteor - get all subscriber session handles for a publisher method

I want to broadcast NON-MONGO-DB data from a server publisher to client collections. Currently I save all registered subscriber handles and use them for posting the data.
client.js:
col = new Meteor.Collection("data")
Meteor.subscribe("stream")
On server side it looks like
server.js
all_handles = [];

Meteor.publish("stream", function () {
  // save a reference to this session
  var self = this;
  // save a reference to this subscriber
  all_handles.push(self);
  // signal ready
  self.ready();
  // on stop of the subscription, remove this handle from the list
  self.onStop(function () {
    all_handles = _.without(all_handles, self);
  });
});
Then I can use the all_handles somewhere in my app to send data to those clients, like:
function broadcast(msg) {
  all_handles.forEach(function (handle) {
    handle.added("data", Random.id(), msg);
  });
}
This is already in use and running.
Q: What I am looking for is: can I get all the handles from already existing Meteor (internal) objects like _sessions or something else?
It would be great if I didn't have to keep track of the subscriber handles myself all the time.
Please do not answer with links to other broadcast packages like streamy or else. I want to continue with standard collections but with as less code as possible.
Thanks for help and feedback
Tom
This might work for you: https://stackoverflow.com/a/30814101/2005564
You could get the connections via var connections = Meteor.server.stream_server.open_sockets; but as looshi said this might break with a future meteor update as it is not part of the public API...
As suggested by @laberning, I am using the "undocumented" Meteor connections for now.
You can post to all subscribers of a publishing method like:
// publish updated values to all subscribers
function publish_to_all_subscribers(subscription_name, id, data) {
  _.each(Meteor.server.stream_server.open_sockets, function (connection) {
    _.each(connection._meteorSession._namedSubs, function (sub) {
      if (sub._name == subscription_name) {
        // use the subscription handle's added() to push the document
        sub.added(subscription_name, id, data);
      }
    });
  });
}
// create stream publisher
Meteor.publish('stream', function () {
  // set ready
  this.ready();
});
...
// use publishing somewhere in your app
publish_to_all_subscribers('stream', Random.id(), {msg: "Hello to all"});
...
updated: See an example MeteorPad for Publish and Subscribe and Broadcast messages

Meteor: Publish a subset of another publication

I have a custom publication on my server (which in some way joins 2 collections).
The resulting set of this publication is exactly what I need, but for performance reasons I would like to avoid sending it entirely to the client.
If I did not care about performance, I would simply subscribe to the publication and do something like
theCollection.find({"my":"filter"})
I am therefore trying to find a way to publish a subset of the custom publication so that the filter would be applied on the custom publication on the server side.
Is there a way to chain or filter publications (server side)?
For the question we can assume the custom publication to look like this and cannot be modified:
Meteor.publish('customPublication', function () {
  var sub = this;
  var aCursor = Resources.find({type: 'someFilter'});
  Mongo.Collection._publishCursor(aCursor, sub, 'customPublication');
  sub.ready();
});
If I understand the question right, you are looking for https://atmospherejs.com/reywood/publish-composite
It lets you "publish a set of related documents from various collections using a reactive join. This makes it easy to publish a whole tree of documents at once. The published collections are reactive and will update when additions/changes/deletions are made."
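For example, a rough sketch of what that could look like with this question's Resources collection (the join on ownerId below is invented for illustration; see the package docs for the full API):
// Sketch only: assumes the reywood:publish-composite package is installed
Meteor.publishComposite('resourcesWithOwners', {
  find: function () {
    // top-level cursor, filtered on the server
    return Resources.find({ type: 'someFilter' });
  },
  children: [
    {
      find: function (resource) {
        // hypothetical join: also publish the owner of each resource
        return Meteor.users.find({ _id: resource.ownerId }, { fields: { profile: 1 } });
      }
    }
  ]
});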
OK, I came up with the following workaround. Instead of working on the publication, I simply added a new collection that I update according to the other collections. In order to do so I am using the Meteor collection hooks package.
function transformDocument(doc) {
  doc.aField = "aValue"; // do what you want here
  return doc;
}

ACollection.after.insert(function (userId, doc) {
  var transformedDocument = transformDocument(doc);
  AnotherCollection.insert(transformedDocument);
});

ACollection.after.update(function (userId, doc, fieldNames, modifier, options) {
  var transformedDocument = transformDocument(doc);
  delete transformedDocument._id;
  AnotherCollection.update(doc._id, {$set: transformedDocument});
});

ACollection.after.remove(function (userId, doc) {
  AnotherCollection.remove(doc._id);
});
Then I have the new collection, and I can publish subsets of it the regular way.
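For instance, a small sketch of such a publication (the publication name and the filter field are placeholders):
// Server: publish only a filtered subset of the derived collection,
// so the filter runs server-side rather than on the client
Meteor.publish('anotherCollectionSubset', function (myFilter) {
  check(myFilter, String);
  return AnotherCollection.find({ my: myFilter });
});

// Client:
// Meteor.subscribe('anotherCollectionSubset', 'filter');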
Benefits:
You can filter on whatever you want in this collection, no need to worry whether the field is virtual or real
Only one operation every time the source collection changes. This avoids having several publications merging the same data
Caveats:
This requires one more collection = more space
The two collections might not always be synchronised; there are a few reasons for this:
The client manually changed the data of "AnotherCollection"
You had documents in "ACollection" before you added "AnotherCollection"
The transform function or source collection schema changed at some point
To fix this:
AnotherCollection.allow({
  // allow/deny rules only apply to client-initiated writes, so returning
  // false here blocks clients while trusted server code can still write
  insert: function () {
    return false;
  },
  update: function () {
    return false;
  },
  remove: function () {
    return false;
  }
});
And to synchronise at Meteor startup (i.e. build the collection from scratch) - do this only once, for maintenance or after adding this new collection:
Meteor.startup(function () {
  AnotherCollection.remove({});
  var documents = ACollection.find({}).fetch();
  _.each(documents, function (doc) {
    var transformedDocument = transformDocument(doc);
    AnotherCollection.insert(transformedDocument);
  });
});

How to prevent a client race condition between Meteor.userId() and subscription updates that depend on userId?

I am seeing a repeatable issue where a user authenticates ("logs in") with a Meteor server, and then a client subscription that depends on userId is updated (and dependent UI templates reactively update) before Meteor.userId() registers the successful login.
For example, in this code snippet, the assert will throw:
var coll = new Meteor.Collection("test");

if (Meteor.isServer) {
  Meteor.publish('mineOrPublic', function () {
    // Publish public records and those owned by subscribing user
    return coll.find({owner: { $in: [ this.userId, null ]}});
  });
}

if (Meteor.isClient) {
  var sub = Meteor.subscribe('mineOrPublic');
  var cursor = coll.find({});
  cursor.observe({
    added: function (doc) {
      if (doc.owner) {
        // This should always be true?!
        assert(doc.owner === Meteor.userId());
      }
    }
  });
}
Analogous to the added function above, if I write a template helper that checks Meteor.userId(), it will see a value of null, even when it is invoked with a data context of a document with an owner.
There is apparently a race condition between Meteor collection Pub/Sub and the Account userId update mechanisms. It seems to me that Meteor.userId() should always be updated before any subscriptions update based on a change in this.userId in a server publish function, but for some reason the opposite usually seems to be true (that is, the assert in the code above will usually throw).
The reason I care is because I have packages that depend on obtaining a valid Meteor Authentication token (using Accounts._storedLoginToken()) on the client for use in securing HTTP requests for files stored on the Meteor server. And the authentication token isn't correct until Meteor.userId() is. So the flow of events usually goes something like this:
1) User logs in.
2) Publish function on server reruns based on the change in this.userId.
3) Client begins receiving new documents corresponding to the change in userId.
4) UI template reactively updates to add DOM elements driven by the new documents.
5) Some of the DOM elements are <img> tags with src= values that depend on the data context.
6) HTTP requests are triggered and ultimately fail with 403 (forbidden) errors because the required authentication cookie hasn't been set yet.
7) Meteor.userId() finally updates on the client, and code reactively runs to set the authentication cookie.
8) Helpers in the template that depend on a session variable set in the cookie update code are rerun, but the DOM doesn't change, because the URLs in the <img> tags don't change.
9) Because the DOM doesn't change, the <img> tags don't retry their failed attempts to load the images.
10) Everything settles down, and the user has to manually reload the page to get their images to appear.
I've come up with two possible approaches to work around this issue:
1) In the template helper that generates the URL for the <img> tag, always append a dummy query string such as: "?time=" + new Date().getTime(). This causes the DOM to change every time the helper is called and fixes the problem, but it screws up browser caching and, if not coordinated, will cause some assets to load multiple times unnecessarily, etc.
2) In every template helper that touches document data, add a test of:
if (this.owner && this.owner !== Meteor.userId()) {
  // Perhaps Meteor.loggingIn() could be used above?
  // Invalid state, output placeholder
} else {
  // Valid state, output proper value for template
}
I really hope someone knows of a less kludgy way to work around this. Alternatively, if consensus arises that this is a bug and Meteor's behavior is incorrect in this respect, I will happily file an issue on GitHub. I mostly really enjoy working with Meteor, but this is the kind of gritty annoyance that grinds in the gears of "it just works".
Thanks for any and all insights.
After trying lots of things, this variation on the example code in the OP seems to consistently solve the race condition, and I find this an acceptable resolution, unlike my initial attempted workarounds.
I still feel that this kind of logic should be unnecessary and welcome other approaches or opinions on whether Meteor's behavior in the OP sample code is correct or erroneous. If consensus emerges in the comments that Meteor's behavior is wrong, I will create an issue on Github for this.
Thanks for any additional feedback or alternative solutions.
var coll = new Meteor.Collection("test");

if (Meteor.isServer) {
  Meteor.publish('mineOrPublic', function (clientUserId) {
    if (this.userId === clientUserId) {
      // Publish public records and those owned by subscribing user
      return coll.find({owner: { $in: [ this.userId, null ]}});
    } else {
      // Don't return user owned docs unless client sub matches
      return coll.find({owner: null});
    }
  });
}

if (Meteor.isClient) {
  Deps.autorun(function () {
    // Resubscribe anytime userId changes
    var sub = Meteor.subscribe('mineOrPublic', Meteor.userId());
  });

  var cursor = coll.find({});
  cursor.observe({
    added: function (doc) {
      if (doc.owner) {
        // This should always be true?!
        assert(doc.owner === Meteor.userId());
      }
    }
  });
}
This code works by giving the server publish function the information it needs to recognize when it is running ahead of the client's own login state, thereby breaking the race condition.
I think this is something that Meteor should do automatically: clients should not see documents based on changes to this.userId in a publish function until after the client Meteor.userId() has been updated.
Do others agree?
I got this working on the server as well, in combination with the FileCollection package:
if (Meteor.isServer) {
  CurrentUserId = null;
  Meteor.publish(null, function () {
    CurrentUserId = this.userId;
  });
}
....
OrgFiles.allow({
  read: function (userId, file) {
    if (CurrentUserId !== file.metadata.owner) {
      return false;
    } else {
      return true;
    }
  }
...
