I need a little help understanding QUnit internals.
I read its source from time to time, but I'm still writing weird tests when it comes to asynchronous testing.
I understand the concept of asynchronous tests and the stop() and start() methods (and why they are needed), but when I combine them with setup and teardown I get a lot of weird situations.
Here is my test code:
use(['Psc.Exception', 'Psc.Code'], function () {
  module("async", {
    setup: function () {
      console.log('setup');
    },
    teardown: function () {
      console.log('teardown');
    }
  });

  asyncTest("test1", function () {
    expect(0);
    console.log('test1');
    start();
  });

  asyncTest("test2", function () {
    expect(0);
    console.log('test2');
    start();
  });

  asyncTest("test3", function () {
    expect(0);
    console.log('test3');
    start();
  });

  asyncTest("test4", function () {
    expect(0);
    console.log('test4');
    start();
  });

  asyncTest("test5", function () {
    expect(0);
    console.log('test5');
    start();
  });
});
Although these are asynchronous tests, I thought I would get something like this in the console:
setup
test1
teardown
setup
test2
teardown
setup
test3
teardown
...
because I thought QUnit would call setup and teardown around the test bodies.
But I get everything mixed up, shuffled differently from request to request:
setup
test1
teardown
setup
setup
setup
setup
test5
teardown
test4
teardown
test3
teardown
test2
teardown
Is someone able to explain this step by step?
It turned out to be a documented issue:
http://api.qunitjs.com/QUnit.config/
It is recommended to set QUnit.config.autostart to false when loading tests asynchronously. This is my case, because "use" loads them asynchronously.
The head looks like this:
QUnit.config.autostart = false;

use(['Psc.Exception', 'Psc.Code'], function () {
  QUnit.start();

  module("async", {
So it's basically like doing stop() and start(), but for loading the tests themselves. I tested it, and setup / test / teardown now get executed in the correct order.
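For completeness, a minimal sketch of how the whole file then looks, assuming the same use() loader and tests as above; only the autostart flag and the explicit QUnit.start() call are new:

// Tests are loaded asynchronously via use(), so don't let QUnit start on its own.
QUnit.config.autostart = false;

use(['Psc.Exception', 'Psc.Code'], function () {
  // Resume QUnit now that the test file has been loaded; the modules and tests
  // below are still registered synchronously before the first test runs.
  QUnit.start();

  module("async", {
    setup: function () {
      console.log('setup');
    },
    teardown: function () {
      console.log('teardown');
    }
  });

  asyncTest("test1", function () {
    expect(0);
    console.log('test1');
    start(); // marks this async test as finished
  });

  // test2 ... test5 follow the same pattern
});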
Related
I'm using Meteor with check() and the audit-argument-checks package.
When I call a Meteor method that uses async/await and pass a parameter, even though I use check() to validate the function parameters, the audit package still throws an exception indicating that not all input parameters have been checked. If I remove the async/await implementation, the package does not complain. What am I missing?
Example:
Meteor.methods({
  test: async function (param1) {
    check(param1, String);
    ...
    await ....
  }
});
Throws an exception:
=> Client modified -- refreshing
I20200513-10:43:27.978(5.5)? Exception while invoking method 'test' Error: Did not check() all arguments during call to 'test'
I20200513-10:43:27.979(5.5)? at ArgumentChecker.throwUnlessAllArgumentsHaveBeenChecked (packages/check/match.js:515:13)
Whereas this traditional Meteor method does not throw any exception:
Meteor.methods({
  test: function (param1) {
    check(param1, String);
    ...
  }
});
I know for sure that I am passing exactly one parameter.
It looks like audit-argument-checks only works for synchronous functions.
I don't have this issue because we use mdg:validated-method, which requires you to specify an argument validator for each method.
It shuts up the argument checker by wrapping the method function with this:
// Silence audit-argument-checks since arguments are always checked when using this package
check(args, Match.Any);
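For reference, a minimal sketch of what a method looks like with mdg:validated-method; the method name 'example.run' and doSomethingAsync() are made-up placeholders:

import { Meteor } from 'meteor/meteor';
import { check } from 'meteor/check';
import { ValidatedMethod } from 'meteor/mdg:validated-method';

// Hypothetical example method.
export const exampleRun = new ValidatedMethod({
  name: 'example.run',
  // validate() runs synchronously; the package itself silences audit-argument-checks.
  validate(args) {
    check(args, { exampleID: String });
  },
  // run() can be async, since the arguments were already validated above.
  async run({ exampleID }) {
    const result = await doSomethingAsync(exampleID); // placeholder async work
    return result.status;
  },
});

You then call it with exampleRun.call({ exampleID }, callback) instead of Meteor.call('example.run', ...).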
The simplest solution I can think of is to separate the check from the async function. You could use a wrapper function to do this:
function checkAndRun(check, run) {
  return function (...args) {
    check.apply(this, args);
    return run.apply(this, args);
  };
}
Meteor.methods({
  'example': checkAndRun(
    function (exampleID) {
      check(exampleID, String);
    },
    async function (exampleID) {
      const result = await doSomethingAsync(exampleID);
      SomeDB.update({ _id: exampleID }, { $set: { someKey: result.value } });
      return result.status;
    }
  )
});
Or you could even do it inline with an async IIFE:
Meteor.methods({
  example(exampleID) {
    check(exampleID, String);
    return (async () => {
      const result = await doSomethingAsync(exampleID);
      SomeDB.update({ _id: exampleID }, { $set: { someKey: result.value } });
      return result.status;
    })();
  }
});
Which, come to think of it, is much simpler than the simplest solution I could think of at first.
You just want to separate the synchronous check from the async method body somehow.
In case you're curious, let's dive through the source to see where it's called. When the method is called (in ddp-server/livedata-server), we end up here, at a synchronous call for the first reference to audit-argument-checks:
https://github.com/meteor/meteor/blob/master/packages/ddp-server/livedata_server.js#L1767-L1770
Which takes us into check/Match for another sync call here: https://github.com/meteor/meteor/blob/71f67d9dbac34aba34ceef733b2b51c2bd44b665/packages/check/match.js#L114-L123
Which uses the strange Meteor.EnvironmentVariable construct, which under the hood has another sync call: https://github.com/meteor/meteor/blob/master/packages/meteor/dynamics_nodejs.js#L57
I am using mocha and selenium-webdriver for E2E tests. Most of the tests are async and I am using async/await functions to handle this. Unfortunately right now I can't get a single one done. Here is what my code looks like:
describe('Some test', function () {
  before(function () {
    driver.navigate().to('http://localhost:3000')
  })

  after(function () {
    driver.quit()
  })

  it('should display element', async function () {
    let elementFound = false
    try {
      await driver.wait(until.elementIsVisible(driver.findElement(By.className('element'))), 1000)
      elementFound = await driver.findElement(By.className('element')).isDisplayed()
      assert.ok(elementFound)
      console.log('elementFound', elementFound)
    } catch (err) {
      console.log(err)
      assert.fail(err)
    }
  })
})
The problem seems to be that the after() function is being called before the test can finish. Here are the error logs:
Error: Timeout of 2000ms exceeded. For async tests and hooks, ensure
"done()" is called; if returning a Promise, ensure it resolves.
{ NoSuchSessionError: no such session (Driver info:
chromedriver=2.36.540469
(1881fd7f8641508feb5166b7cae561d87723cfa8),platform=Mac OS X 10.13.3
x86_64)
at Object.checkLegacyResponse (/Users/me./myproject/node_modules/selenium-webdriver/lib/error.js:585:15)
at parseHttpResponse (/Users/me./myproject/node_modules/selenium-webdriver/lib/http.js:533:13)
at Executor.execute (/Users/me./myproject/node_modules/selenium-webdriver/lib/http.js:468:26)
at
at process._tickCallback (internal/process/next_tick.js:188:7) name: 'NoSuchSessionError', remoteStacktrace: '' }
If I remove my after() function, I still get
Error: Timeout of 2000ms exceeded. For async tests and hooks, ensure
"done()" is called; if returning a Promise, ensure it resolves.
but my console.log shows that my element has been found.
If I then try making after() async, like this:
after(async function () {
await driver.quit()
})
I get the same error as the first one.
It is also important to note that I have read that I don't have to use done() when I am using async/await. So what in the world is that all about? And even when I do use it, I keep getting the same error.
How do I solve this? It seems like everything is in order, but I can't seem to get the tests to run through without running into each other.
Instead of using:
await driver.wait(until.elementIsVisible(driver.findElement(By.className('element'))), 1000)
try:
await driver.wait(until.elementLocated(By.className('element'))).isDisplayed()
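For context, a minimal sketch of how the spec might look with that change; the class name and the 1000 ms wait come from the question, while raising Mocha's timeout via this.timeout() is an extra assumption on my part:

it('should display element', async function () {
  // Assumption: raise Mocha's default 2000 ms timeout, since browser waits can exceed it.
  this.timeout(10000)

  // Wait until the element exists in the DOM, then check that it is displayed.
  const element = await driver.wait(until.elementLocated(By.className('element')), 1000)
  const elementFound = await element.isDisplayed()
  assert.ok(elementFound)
})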
I am trying to implement a Jasmine test on a simple promise implementation (asynchronous code) with the done() function, and my test fails although the code being tested works perfectly fine.
Can anyone please help me to figure out what is missing in my test?
var Test = (function () {
  function Test(fn) {
    this.tool = null;
    fn(this.resolve.bind(this));
  }

  Test.prototype.then = function (cb) {
    this.callback = cb;
  };

  Test.prototype.resolve = function (value) {
    var me = this;
    setTimeout(function () {
      me.callback(value);
    }, 5000);
  };

  return Test;
})();
describe("setTimeout", function () {
var test, newValue = false,
originalTimeout;
beforeEach(function (done) {
originalTimeout = jasmine.DEFAULT_TIMEOUT_INTERVAL;
jasmine.DEFAULT_TIMEOUT_INTERVAL = 10000;
test = new Test(function (cb) {
setTimeout(function () {
cb();
}, 5000);
});
test.then(function () {
newValue = true;
console.log(1, newValue);
done();
});
});
it("Should be true", function (done) {
expect(1).toBe(1);
expect(newValue).toBeTruthy();
});
afterEach(function () {
jasmine.DEFAULT_TIMEOUT_INTERVAL = originalTimeout;
});
});
The same test in JSFiddle: http://jsfiddle.net/ravitb/zsachqpg/
This code is testing a simple promise-like object, so I'll call the Test object a promise for convenience.
There are two different async events after the promise is created:
1. The call to the .then() method.
2. The resolving of the promise by calling the cb() function in the beforeEach() function.
In the real world these two can be called in any order and at any time.
For the test, the .then() call must be moved to the it() section's callback, and all spec methods (e.g. expect()) need to be called in its callback, or they'll run before the promise is resolved. The beforeEach() is part of the test setup, while the it() function is the spec, the test itself.
The done() callback needs to be called twice:
1. When the beforeEach() async action is finished (i.e. after cb() is called); that starts running the spec. So it should look something like this:
beforeEach(function (done) {
  test = new Test(function (cb) {
    setTimeout(function () {
      console.log("in beforeEach() | setTimeout()");
      cb(resolvedValue);
      done();
    }, 500);
  });
});
2. When the spec's (the it() section's) async action is finished, inside the .then() callback and after all calls to Jasmine's test methods; this tells Jasmine that the spec finished running (so the timeout won't be reached). So:
it("Should be " + resolvedValue, function (done) {
test.then(function (value) {
console.log("in then()");
expect(value).toBe(resolvedValue);
done();
});
});
Also, as you can see, instead of testing that a variable's value has changed, I'm testing that the value passed to the .then() callback is the same as the one passed to the promise's resolve cb() function, since that is the behaviour you actually expect.
Here's an updated version of your fiddle.
You can check in the browser's console to see that all the callbacks are being called.
Note: Changing Jasmine's DEFAULT_TIMEOUT_INTERVAL just makes it more convoluted for no reason, so I removed it and some extraneous code.
In order to run my integration tests, the callback from Meteor.loginWithPassword(...) has to have been executed.
If I postpone the test until the user exists, or some Session variable is defined, the tests are ignored.
This is my login function:
Meteor.loginWithPassword(username, forge.util.encode64(aesKey), function (error) {
  if (!error) {
    Log.info("user logged in");
    Session.set("loggedIn", true);
    ...
  }
});
and then in the Mocha test:
Meteor.startup(function () {
  Tracker.autorun(function (c) {
    if (Session.get("loggedIn")) {
      c.stop();
      MochaWeb.testOnly(function () {
        Log.info("executing tests");
        describe("the KeyPair has been created", function () {
          ...
My question is: is there a way to make Mocha wait for some state, or do I have to mock the environment (which would defeat the purpose of an integration test)?
Also, on code changes, I get the following error:
stream error Network error: ws://localhost:51366/websocket: connect ECONNREFUSED
Thank you for your support.
The feature request that #stubailo pointed out has now been implemented. See https://github.com/meteor/meteor/issues/3572 for more details.
I want to run a command after a task finishes in Grunt.
uglify: {
  compile: {
    options: {...},
    files: {...}
  },
  ?onFinish?: {
    cmd: 'echo done!',
    // or even just a console.log
    run: function () {
      console.log('done!');
    }
  }
},
Either run a shell command, or even just be able to console.log something. Is this possible?
Grunt does not support before and after callbacks, but a future version could implement events that would work the same way, as discussed in issue #542.
For now, you should go the task-composition way, that is, create tasks for those before and after actions and group them under a new name:
grunt.registerTask('newuglify', ['before:uglify', 'uglify', 'after:uglify']);
Then remember to run newuglify instead of uglify.
Another option is not to group them but remember to add the before and after tasks individually to a queue containing uglify:
grunt.registerTask('default', ['randomtask1', 'before:uglify', 'uglify', 'after:uglify', 'randomtask2']);
For running commands you can use plugins like grunt-exec or grunt-shell.
If you only want to print something, try grunt.log.
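A minimal sketch of how those pieces could fit together in a Gruntfile; the 'after:uglify' task name, the grunt-shell target, and the log messages are hypothetical:

module.exports = function (grunt) {
  grunt.loadNpmTasks('grunt-contrib-uglify');
  grunt.loadNpmTasks('grunt-shell'); // only needed for the shell example below

  grunt.initConfig({
    uglify: {
      compile: { /* options and files as before */ }
    },
    // Hypothetical grunt-shell target that runs a command after uglify.
    shell: {
      onFinish: {
        command: 'echo done!'
      }
    }
  });

  // Plain "after" task that only logs something.
  grunt.registerTask('after:uglify', function () {
    grunt.log.writeln('uglify is done!');
  });

  // Group the tasks under a new name and run "newuglify" instead of "uglify".
  grunt.registerTask('newuglify', ['uglify', 'after:uglify', 'shell:onFinish']);
};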
Grunt has some of the most horrible code I've ever seen. I don't know why it is popular; I would never use it, even as a joke. This is not a "legacy code" problem. It was defective by design from the beginning.
// Monkey-patch Grunt's internal runTaskFn so that every task run emits an event
// carrying a promise that settles when the task finishes.
var old_runTaskFn = grunt.task.runTaskFn;
grunt.task.runTaskFn = function (context, fn, done, asyncDone) {
  var callback;
  var promise = new Promise(function (resolve, reject) {
    // Wrap the original done() so the promise resolves on success and rejects on failure.
    callback = function (err, success) {
      if (success) {
        resolve();
      } else {
        reject(err);
      }
      return done.apply(this, arguments);
    };
  });
  // "something" is whatever event bus you use to notify listeners about the new task.
  something.trigger("new task", context.name, context.nameArgs, promise);
  return old_runTaskFn.call(this, context, fn, callback, asyncDone);
};
You can use a callback + function instead of a promise + trigger; either way, the wrapper creates a new callback for each task and fires when that task finishes.
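For example, a rough sketch of how that "new task" event could be consumed, assuming "something" is a Node EventEmitter (which uses emit/on rather than trigger):

const EventEmitter = require('events');

// Placeholder for the "something" bus used above; with Node's EventEmitter
// the patch would call something.emit(...) instead of something.trigger(...).
const something = new EventEmitter();

something.on('new task', function (name, nameArgs, promise) {
  promise
    .then(function () {
      console.log(name + ' finished'); // runs after the task completes
    })
    .catch(function (err) {
      console.error(name + ' failed', err);
    });
});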