Data-driven testing is an important aspect of writing automated test cases for any tool. I have been experimenting with TestCafe lately and haven't been able to find a convincing way of doing data-driven tests, i.e. executing a test with different inputs.
I came across this example: https://testcafe-discuss.devexpress.com/t/multiple-execution-of-one-test-with-different-data/219
but in the above example, we are dealing with different login usernames as inputs. If I imagine a scenario where I have to check whether a list of elements appears on the page, I would surely have some steps leading up to the validation; in that case I may not want to execute those leading steps each time a new input is passed. In the example above, the input is applied at the test-case level, not at the test-step level, because the test case sits inside the for loop, so all of the navigational/validation steps will be executed whether I want to repeat them or not.
Since I am new to TestCafe and have been going over scattered documentation, my question is: for data-driven testing, is that the only approach we have in TestCafe? Or is there a more convincing, less verbose approach? If yes, could someone point me to the documentation for it?
The main concept of data-driven testing is that you pass parameters containing the data into the test and assert against the expected values for each input.
The example provided in the comment on the Multiple execution of one test with different data topic is a good starting point:
const users = [
    { login: 'System', password: 'System' },
    { login: 'Admin', password: 'Admin' }
];

for (let i = 0; i < users.length; i++) {
    let user = users[i];

    test(`Login with user '${user.login}'`, async t => {
        await t.typeText(page.login.userEdit, user.login);
        // ...
    });
}
Next, you may need to load your test data from a database, a CSV file, or some other source. In that case, you can use an appropriate standard Node.js module (see the FAQ).
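For example, here is a minimal sketch that generates one test per record loaded from a JSON file (the testdata.json file name, its shape, and the page object are assumptions for illustration):

const fs = require('fs');

// Assumed data file: an array of { login, password } records.
const users = JSON.parse(fs.readFileSync('testdata.json', 'utf8'));

for (const user of users) {
    test(`Login with user '${user.login}'`, async t => {
        await t.typeText(page.login.userEdit, user.login);
        // ...
    });
}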
To give any further recommendations, please clarify your requirements and the task you are trying to accomplish in greater detail. Also, I've created an issue in the TestCafe repository to extend its documentation with an example of data-driven testing.
The concept of a step level does not exist in TestCafe. You have only two levels: the fixture level and the test level.
If you want to do data-driven testing at the step level, you should have a look at the BDD frameworks that integrate with TestCafe.
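That said, within plain TestCafe you can approximate step-level data-driven checks by running the leading steps once and then looping over the inputs inside a single test. A sketch (the fixture URL, selectors, and input data are all assumptions):

import { Selector } from 'testcafe';

fixture `Data-driven steps`
    .page `https://example.com`; // placeholder URL

const expectedItems = ['Home', 'Products', 'Contact']; // assumed inputs

test('navigation shows all expected items', async t => {
    // Leading steps (log in, navigate, etc.) run once, not once per input.
    // ...

    // The data-driven "step": one assertion per input.
    for (const item of expectedItems)
        await t.expect(Selector('nav a').withText(item).exists).ok(`missing item: ${item}`);
});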
I use console.groupCollapsed() to hide functions I don't generally need to review, but may occasionally want to dig into. One downside of this is that if I use console.warn or console.error inside that collapsed group, I may not notice it or it may be very hard to find. So when I encounter an error, I would like to force the collapsed group open to make it easy to spot the warning/error.
Is there any way to use JS to force the current console group (or just all blindly) to open?
Some way to jump directly to warnings/errors in Chrome debugger? Filtering just to warnings/errors does not work, as they remain hidden inside collapsed groups.
Or perhaps some way to force Chrome debugger to open all groups at once? <alt/option>-clicking an object shows all levels inside it, but there does not appear to be a similar command to open all groups in the console. This would be a simple and probably ideal solution.
There is currently no way to do this, nor am I aware of any plans to introduce such functionality, mainly because I don't think enough developers are actively using the feature to create demand for it.
You can achieve what you're trying to do, but you need to write your own logging library. The first thing you'll need to do is override the console API. Here is an example of what I do:
const consoleInterceptorKeysStack: string[][] = [];

export function getCurrentlyInterceptedConsoleKeys () {
    // lastElement is a helper from my framework (presumably returns the last array item)
    return lastElement(consoleInterceptorKeysStack);
}

export function interceptConsole (keys: string[] = ['trace', 'debug', 'log', 'info', 'warn', 'error']) {
    consoleInterceptorKeysStack.push(keys);
    const backup: any = {};
    for (let i = 0; i < keys.length; ++i) {
        const key = keys[i];
        const _log = console[key];
        backup[key] = _log;
        // Replace each console method with a wrapper that records the call
        // against the current log frame instead of only printing immediately.
        console[key] = (...args: any[]) => {
            const frame = getCurrentLogFrame();
            if (isUndefined(frame)) return _log(...args);
            frame.children.push({ type: 'console', key, args });
            frame.hasLogs = true;
            frame.expand = true;
            _log(...args);
        };
    }
    // The caller invokes the returned function to undo the interception.
    return function restoreConsole () {
        consoleInterceptorKeysStack.pop();
        for (const key in backup) {
            console[key] = backup[key];
        }
    };
}
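Usage might look something like this (doWork is a stand-in for whatever code's logs you want captured):

// Capture warnings and errors while some work runs, then restore the console.
const restore = interceptConsole(['warn', 'error']);
try {
    doWork(); // hypothetical function; its console output is recorded in the current frame
}
finally {
    restore();
}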
You'll notice a reference to a function getCurrentLogFrame(). Your logging framework will require the use of a global array that represents an execution stack. When you make a call, push details of the call onto the stack. When you leave the call, pop the stack. As you can see, when logging to the console, I'm not immediately writing the logs to the console. Instead, I'm storing them in the stack I'm maintaining. Elsewhere in the framework, when I enter and leave calls, I'm augmenting the existing stack frames with references to stack frames for child calls that were made before I pop the child frame from the stack.
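The answer doesn't include that part, but a minimal sketch of such a frame stack might look like this (the LogFrame shape is inferred from the interceptor above):

interface LogFrame {
    name: string;
    children: any[];   // child frames plus recorded console calls
    hasLogs: boolean;
    expand: boolean;   // true if this group should be printed expanded
}

const frameStack: LogFrame[] = [];

export function getCurrentLogFrame (): LogFrame | undefined {
    return frameStack[frameStack.length - 1];
}

export function enterFrame (name: string): LogFrame {
    const frame: LogFrame = { name, children: [], hasLogs: false, expand: false };
    const parent = getCurrentLogFrame();
    if (parent) parent.children.push(frame); // link the child into its caller's frame
    frameStack.push(frame);
    return frame;
}

export function exitFrame (): LogFrame {
    const frame = frameStack.pop()!;
    const parent = getCurrentLogFrame();
    // Propagate expansion upwards so every ancestor of an interesting frame opens too.
    if (parent && frame.expand) parent.expand = true;
    return frame;
}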
By the time the entire execution stack finishes, I've captured a complete log of everything that was called, who called it, what the return value was (if any), and so on. At that point, I can pass the root stack frame to a function that prints the entire stack out to the console, now with the full benefit of hindsight on every call that was made, allowing me to decide what the logs should actually look like. If deeper in the stack there was (for example) a console.debug statement or an error thrown, I can choose to use console.group instead of console.groupCollapsed. If there was a return value, I could print that as a tail argument of the console.group statement. The possibilities are fairly extensive.
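A sketch of that final printing pass, under the same assumed LogFrame shape:

function printFrame (frame: LogFrame): void {
    // Expand groups that contain something noteworthy; collapse the rest.
    if (frame.expand) console.group(frame.name);
    else console.groupCollapsed(frame.name);

    for (const child of frame.children) {
        if (child.type === 'console') (console as any)[child.key](...child.args);
        else printFrame(child); // a nested call frame
    }

    console.groupEnd();
}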
Note that you will have to architect your application in a way that allows logging to be deeply integrated into your code, otherwise your code will get very messy. I use a visitor pattern for this. I have a suite of standard interface types that do almost everything of significance in my system's architecture. Each interface method includes a visitor object, which has properties and methods for every interface type in use in my system. Rather than calling interface methods directly, I use the visitor to do it.

I have a standard visitor implementation that simply forwards calls to interface methods directly (i.e. the visitor doesn't do much on its own), but I then have a subclassed visitor type that references my logging framework internally. For every call, it tells the logging framework that we're entering a new execution frame. It then calls the default visitor internally to make the actual call, and when the call returns, the visitor tells the logging framework to exit the current call (i.e. to pop the stack and finalize any references to child calls, etc.). By having different visitor types, you can use your slow, expensive logging visitor in development and your fast, forwarding-only default visitor in production.
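A compact illustration of that visitor arrangement, building on the frame-stack sketch above (all names are invented):

interface Visitor {
    call<T>(name: string, fn: () => T): T;
}

// Default visitor: just forwards the call, no logging overhead (production).
const forwardingVisitor: Visitor = {
    call: (_name, fn) => fn()
};

// Logging visitor: wraps every significant call in a log frame (development).
const loggingVisitor: Visitor = {
    call (name, fn) {
        const frame = enterFrame(name);
        try {
            return fn();
        }
        catch (err) {
            frame.expand = true; // force the group open when a call throws
            throw err;
        }
        finally {
            exitFrame();
        }
    }
};

// Callers route significant work through whichever visitor is active.
declare const visitor: Visitor;
const parsed = visitor.call('parseConfig', () => JSON.parse('{"retries": 3}'));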
I am writing unit tests for a complex application which has many rules to be checked in a single flow, using NUnit and Playwright in .NET 5. To save time writing the Playwright (front-end testing tool) test scripts, we have used a library named Bogus to create dummy data dynamically based on the rules (the test cases have numerous rules to check, and it was much too difficult to write fresh data for every case). I am using the Playwright script inside the NUnit test and providing the data source with [TestCaseSource("MethodName")] to supply a dynamic data object for the different cases.
Now we are facing a problem: some of the test cases pass and some fail, and we are unable to identify which test case is causing the problem, because the test-case data comes from the dynamic source, where it is generated by the Bogus library based on the rules we defined. Also, we cannot watch the tests run for long stretches; that is why we automated the process in the first place.
[Test]
[TestCaseSource("GetDataToSubmit")]
public async Task Test_SubmitAssignmentDynamicFlow(Assignment assignment)
{
    using var playwright = await Playwright.CreateAsync();
    await using var browser = await playwright.Chromium.LaunchAsync(new BrowserTypeLaunchOptions
    {
        Headless = false,
        ...
    });
    ....
}

private static IEnumerable<TestCaseData> GetDataToSubmit()
{
    // creating data for a simple job
    var simpleAssignment = new DummyAssigmentGenerator()
        ....
        .Generate();

    yield return new TestCaseData(simpleAssignment);
    ....
}
Now, my question is: is there any way to see what the actual values in the object were for a failed case when we view the whole test report? That way we can identify which values are causing problems and fix them.
Two approaches...
Assuming that Assignment (the class you pass into TestCaseData) is your own class, override its ToString() method to display whatever you would like to see. That string becomes part of the name of the generated test case, like...
Test_SubmitAssignmentDynamicFlow(YOUR_STRING)
Alternatively, apply a name to each TestCaseData item you yield, using the SetName() fluent method. In that case, you are supplying the full display name of the test case, not just the part in parentheses. Use {m}(YOUR_STRING) to make it appear the same as in the first approach.
If you can use it, the first approach is clearly the simpler of the two.
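A short sketch of both approaches (the Title property on Assignment is invented for illustration):

// Approach 1: override ToString() on the class you pass to TestCaseData.
public class Assignment
{
    public string Title { get; set; } // assumed property

    public override string ToString() => $"Title={Title}";
}

// Approach 2: name each case explicitly as you yield it.
private static IEnumerable<TestCaseData> GetDataToSubmit()
{
    var simpleAssignment = new DummyAssigmentGenerator().Generate();

    // {m} expands to the method name, so the display name matches approach 1.
    yield return new TestCaseData(simpleAssignment)
        .SetName($"{{m}}(Title={simpleAssignment.Title})");
}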
I'm working towards a TestRail integration; I want to update TestRail after each test passes/fails. Let's say I have a test like the one below:
it('rightpanel should exist', () => {
    // some logic or preparatory work
    expect(rightpanel.isLoaded()).to.be.true;
    // here I want to know whether the above expect statement failed or passed.
    // based on that, I want to update TestRail by making a web service call
});
We are using WDIO; is there a better way to integrate with TestRail? No one replies on their community forum, so I'm asking here.
With Mocha you can use a reporter to manage the results of your tests.
The default reporter is Spec, but if you use json-stream you can attach another process to this stream and send test results to TestRail while the suite is executing.
Otherwise, if you don't need to send them in real time, you can use the json reporter and parse the results in a single call afterwards (see the sketch after the links below).
You can also check on GitHub for existing reporters that connect directly to TestRail:
https://github.com/CommodoreBeard/mocha-testrail-advanced-reporter
https://github.com/awaragi/mocha-testrail-reporter
complete list
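If you go the non-real-time route, a minimal sketch might parse the json reporter output and push each result to TestRail's add_result_for_case endpoint (the run ID, the C123-style case IDs embedded in test titles, the host, and the credentials are all placeholders; assumes Node 18+ for the built-in fetch):

// Run first: mocha --reporter json > results.json
const fs = require('fs');

const results = JSON.parse(fs.readFileSync('results.json', 'utf8'));
const runId = 123; // placeholder TestRail run ID

async function report () {
    for (const test of results.passes.concat(results.failures)) {
        // Assumes case IDs are embedded in titles, e.g. "C456 rightpanel should exist".
        const match = test.title.match(/C(\d+)/);
        if (!match) continue;

        await fetch(`https://example.testrail.io/index.php?/api/v2/add_result_for_case/${runId}/${match[1]}`, {
            method: 'POST',
            headers: {
                'Content-Type': 'application/json',
                Authorization: 'Basic ' + Buffer.from('user@example.com:api_key').toString('base64')
            },
            body: JSON.stringify({
                status_id: results.failures.includes(test) ? 5 : 1 // TestRail: 1 = passed, 5 = failed
            })
        });
    }
}

report();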
I have added a cache layer to my project. Now I wonder: can I unit test the methods that manipulate the cache? Or is there a better way to test the layer's logic?
I just want to check the process, for example:
1- when the item is not in the cache, the method should hit the database
2- the next time, the method should use the cache
3- when a change is made to the database, the cache should be cleared
4- if the data retrieved from the database is null, it shouldn't be added to the cache
I want to ensure that the logic I have placed into the methods is working as expected.
I'm presuming the cache is a third-party cache? If so, I would not test it: you would be testing someone else's code otherwise.
If this caching is so important you need to test it, I'd go with an integration or acceptance test. In other words, hit the page(s)/service(s) in question and check the content that way. By the very definition of what you wish to test, this is not a unit test.
On the flip side, if the cache is one you've rolled yourself, you'll easily be able to unit test the functionality. You might want to check out verification-based testing in order to test the behaviour of the cache, as opposed to actually checking that stuff is added to/removed from the cache. Check out mocking for ways to achieve this.
To test for behaviour via mock objects (or something similar), I'd do the following, although your code will vary.
class Cacher
{
    public void Add(Thing thing)
    {
        // Complex logic here...
    }

    public Thing Get(int id)
    {
        // More complex logic here...
    }
}

void DoStuff()
{
    var cacher = new Cacher();
    var thing = cacher.Get(50);

    thing.Blah();
}
To test the above method, I'd have a test which uses a mock Cacher. You'd need to pass this into the method at runtime or inject the dependency into the constructor. From there, the test would simply check that cacher.Get(50) is invoked, not that the item is actually retrieved from the cache. This tests the behaviour of how the cacher should be used, not that it is actually caching/retrieving anything.
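As a sketch of that behaviour test (Moq syntax here purely for illustration; it assumes Cacher is extracted behind an ICacher interface and injected into a hypothetical StuffDoer class):

public interface ICacher
{
    void Add(Thing thing);
    Thing Get(int id);
}

[Test]
public void DoStuff_RequestsThingFromCache()
{
    var cacher = new Mock<ICacher>();
    cacher.Setup(c => c.Get(50)).Returns(new Thing());

    var sut = new StuffDoer(cacher.Object); // constructor injection, as suggested above
    sut.DoStuff();

    // Behaviour verification: the cache was asked for the item exactly once.
    cacher.Verify(c => c.Get(50), Times.Once());
}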
You could then fall back to state-based testing for the Cacher in isolation, e.g. checking that items are actually added/removed.
Like I said previously, this may be overkill depending on what you wish to do. However you seem pretty confident that the caching is important enough to warrant this sort of testing. In my code I try to limit mock objects as much as possible, though this sounds like a valid use case.
I am trying to get better code coverage with my unit tests, and recently I switched to RhinoMocks for my mocking needs.
But I have a question about how to write one specific unit test: the one for the Save() function.
I have an IView interface with several functions to retrieve values from the view (aspx page), examples are GetUsername(), GetPassword(), GetAddress() and GetCountry().
When the user clicks the submit button, I want tests that check whether all these functions are actually called. So I wrote this test:
[TestMethod]
public void MainController_Save_ShouldRetrieveUsername()
{
    // Initialize the IView and Controller
    InitViewAndController();

    // Trigger the Save function, causing the controller
    // to collect information for storage
    _controller.Save();

    _view.AssertWasCalled(s => s.GetUsername(), o => o.Repeat.Once());
}
Now, finally, comes the question: considering the aspx page contains 15 input fields that need to be saved, is there a better way to test this behaviour than writing and maintaining 15 of these tests?
On the one hand, tests should be simple, optimally with only one assert each, but 15 of these functions feels like a waste.
Instead of testing whether these functions are called (they look more like properties than functions, by the way), you should check the results of the Save function. Treat the code under test more like a black box and try not to bake too much knowledge of its internals into your tests. This way your tests will be less brittle when the Save code changes.
Google for "state-based testing".
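For instance, a state-based version could assert on what actually got saved; here is a sketch (the InMemoryUserStore test double and the MainController constructor signature are hypothetical):

[TestMethod]
public void MainController_Save_StoresAllViewValues()
{
    // Stub the view to return known values for every field (Rhino Mocks stub syntax).
    var view = MockRepository.GenerateStub<IView>();
    view.Stub(v => v.GetUsername()).Return("alice");
    view.Stub(v => v.GetCountry()).Return("NL");
    // ...stub the remaining fields...

    var store = new InMemoryUserStore();               // hypothetical in-memory test double
    var controller = new MainController(view, store);  // assumed constructor injection

    controller.Save();

    // State-based check: one test covers all 15 fields in a single pass.
    Assert.AreEqual("alice", store.LastSaved.Username);
    Assert.AreEqual("NL", store.LastSaved.Country);
    // ...assert the remaining fields...
}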