Is there any way to get WebdriverIO E2E testing code coverage? - automated-tests

I am using WebdriverIO for e2e automation testing. I would like to get code coverage for my e2e tests. Is there any way I can get the code coverage for e2e tests?
Thanks in advance :)

Of course that's possible, see https://webdriver.io/docs/devtools-service.html#chrome-devtools-access.
Basically, instrument your code and grab the coverage counts after each test, or use the browser's built-in feature.
I created an npm module for this: https://www.npmjs.com/package/wdio-coverage-service
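To illustrate the instrumentation approach, here is a minimal sketch of a wdio.conf.js hook. It assumes the app under test has been instrumented with Istanbul beforehand (e.g. via nyc), so the page exposes a window.__coverage__ object, and that an .nyc_output directory already exists; the per-test file naming is just one possible choice:

// wdio.conf.js (sketch; assumes an Istanbul-instrumented app)
const fs = require('fs');

exports.config = {
    // ...specs, capabilities, etc.
    afterTest: async function () {
        // Istanbul-instrumented code keeps its counters on window.__coverage__
        const coverage = await browser.execute(() => window.__coverage__);
        if (coverage) {
            // one JSON dump per test; merge afterwards, e.g. with `npx nyc report`
            fs.writeFileSync(`.nyc_output/${Date.now()}.json`, JSON.stringify(coverage));
        }
    },
};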

It is possible. Most automation tools nowadays have a plugin that can talk to external code coverage tools and collect the stats after a test run is complete.
The steps are:
Instrument your dev source code using Istanbul, a popular instrumentation tool (https://istanbul.js.org/)
Serve the instrumented source code on your local machine
Run your E2E tests against the instrumented app
After the run, the code coverage report is generated
More details: https://webdriver.io/docs/devtools-service/#capture-code-coverage
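For the devtools-service route, the configuration looks roughly like the following sketch, based on the page linked above; option names may differ between wdio versions, so treat this as an outline rather than a definitive config:

// wdio.conf.js (sketch; coverageReporter options per the devtools-service docs)
exports.config = {
    // ...
    services: [
        ['devtools', {
            coverageReporter: {
                enable: true,
                type: 'html',
                logDir: __dirname + '/coverage'
            }
        }]
    ],
};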
Also, if you need complete details with video, Cypress has some live tutorial videos. This won't help you with wdio exactly, but you will get an overall idea of how coverage works with e2e: https://docs.cypress.io/guides/tooling/code-coverage#Introduction

No, it is not possible. E2E is black-box testing: your tests can't reach into your application's code to check which functions are called and measure coverage.
See black-box testing for background, because E2E is a kind of black-box testing.

Related

How to run all my test cases on every developer build automatically?

I just want to know something about Katalon Studio. I have not worked in automation testing before, but now I have an assignment about testing in Katalon.
My client wants to test in Katalon, but his requirement is that test cases run on every build automatically. He also doesn't want to install the Katalon IDE or any library; he just wants a reference he can add to every build so that all the test cases run automatically on every dev build.
Is this possible using Katalon? Kindly help me, please. Thanks.
You have to establish a full CI pipeline for your requirements. My advice is to use Katalon with Jenkins and your developers' code repository (perhaps Git or SVN). Then you are able to implement a master/slave pipeline, where you can execute your Katalon scripts on a slave every time the dev team builds.
See:
Katalon/Jenkins Tutorial
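As a rough sketch, the Jenkins build step would then launch Katalon in console mode. The flags below follow Katalon's console-mode conventions, but the project path, suite name, browser, and profile are hypothetical placeholders:

katalon -noSplash -runMode=console -projectPath="C:\work\MyProject.prj" -testSuitePath="Test Suites/RegressionSuite" -browserType="Chrome" -executionProfile="default"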

TestCafe - How can I run multiple fixtures (one after the other, not concurrent)?

I have multiple fixtures (interaction between a website and a hybrid app) and I would like to run them as test suites (smoke test, regression test, ...), but I'm not sure how to do it.
This is what my tests look like:
What I want to do is run all tests (e.g. CreatePollAndCloseIt.js, CreatePollPinAndDelete.js, CreateStreamAssertions.js and so on, until the last one, VotePollWhitelabel) and get a report afterwards.
Thank you!
You can run all your tests by specifying the parent folder of your test files. For example, you can run all your mobile tests with the command:
testcafe chrome <path>/MobileTests
Also, it's possible to reorganize your folder structure, e.g.:
- Mobile
  - Smoke
  - Regression
In this case you can run your test suites via the following commands:
testcafe chrome <path>/Mobile - for all tests
testcafe chrome <path>/Mobile/Smoke - only smoke
testcafe chrome <path>/Mobile/Regression - only regression
Another way to organize the test suites is to use testing metadata. Please refer to the following articles for details:
Specifying Testing Metadata
Filtering by Testing Metadata
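A brief sketch of what that metadata looks like; the fixture name, test name, and URL below are made up, while the .meta() API and the --fixture-meta CLI filter come from the linked docs:

// tests/Mobile/CreatePollAndCloseIt.js (sketch)
fixture('Create poll and close it')
    .meta({ suite: 'smoke' })           // tag the whole fixture
    .page('https://example.com/polls'); // hypothetical URL

test.meta({ suite: 'regression' })('close a poll', async t => {
    // ...test steps
});

Then only the tagged tests are run:
testcafe chrome <path>/Mobile --fixture-meta suite=smoke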

Integration test code coverage for remote application

I have written my tests in such a way that they interact with the application under test, which is deployed somewhere in the cloud (remote). I need to measure the code coverage for these tests.
I am using Maven, and I intend to use JaCoCo + Sonar for the code coverage.
What do I need to add to my POM file to get the coverage?
Do I need to change anything on the JVM where my application under test is running?
I have tried multiple solutions on the internet but none of them have given me any clear solution. I could be missing the point.
Can someone please guide me through step by step? Help would be much appreciated.
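For what it's worth, the usual JaCoCo pattern for a remote application is sketched below; the agent path, host, port, and file locations are placeholders, and the goals come from the jacoco-maven-plugin. First, the remote JVM is started with the JaCoCo agent in TCP-server mode:

java -javaagent:/path/to/jacocoagent.jar=output=tcpserver,address=*,port=6300 -jar app.jar

Then, after the tests have run, the execution data is pulled over and turned into a report:

mvn jacoco:dump -Djacoco.address=<remote-host> -Djacoco.port=6300 -Djacoco.destFile=target/remote.exec
mvn jacoco:report -Djacoco.dataFile=target/remote.exec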

Protractor implicit waiting not working when using grunt-protractor-runner

I am writing e2e tests for a JS application at the moment. Since I am not a JS developer, I investigated this topic for a while and ended up with the following setup:
Jasmine2 as testing framework
grunt as "build-tool"
protractor as test runner
jenkins as CI server (already in use for plenty java projects)
Although the application under test is not written in Angular, I decided to go for Protractor, following a nice guide on how to make Protractor run nicely even without Angular.
Writing some simple tests and running them locally worked like a charm. In order to implicitly wait for some elements to show up in the DOM, I used the following code in my conf.js:
onPrepare: function() {
    browser.driver.manage().timeouts().implicitlyWait(5000);
}
All my tests were running as expected and so I decided to go to the next step, i.e. installation in the CI server.
The development team of the application I want to test was already using grunt to build their application, so I decided to just hook into that. The goal of my new grunt task is to:
assemble the application
start a local webserver running the application
run my protractor test
write some test reports
Finally I accomplished all of the above steps, but now I am dealing with a problem I cannot solve and did not find any help googling it. In order to run the Protractor tests from grunt, I installed the grunt-protractor-runner.
The tests are running, BUT the implicit wait is not working, causing some tests to fail. When I added some explicit waits (browser.sleep(...)) everything is OK again, but that is not what I want.
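For reference, a condition-based explicit wait (a sketch using Protractor's ExpectedConditions; '#result' is a made-up selector) would be less brittle than browser.sleep, but I'd still prefer the implicit wait to simply work:

// explicit, condition-based wait (sketch; '#result' is a made-up selector)
var EC = protractor.ExpectedConditions;
browser.wait(EC.presenceOf($('#result')), 5000);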
Is there any chance to get implicit waiting to work when using the grunt-protractor-runner?
UPDATE:
The problem does not have anything to do with the grunt-protractor-runner. When I use a different webserver that I start up during my task, it works again. To be more precise: using the plugin "grunt-contrib-connect", the test works; using the plugin "grunt-php", the test fails. So I am looking for another PHP server for grunt now. I will keep updating this question.
UPDATE 2:
While looking at some alternatives, I finally decided to mock the PHP part of the app.

MsTest noob - how to set up testing infrastructure the right way

We are a MSFT shop with a far-reaching MSDN license.
After many years of doing things wrong, we finally have to start doing automated testing.
My group is the guinea pig at this. We need to create what was not there before. We looked at the multitude of options out there. Some people get by just fine with open-source alternatives such as CC.Net, Bamboo, MbUnit, etc. We want to give MsTest, CodedUI, and Team Build a good try ... might as well, because of the MSDN licensing and MSFT focus.
The plus and minus of doing things the MSFT way is that MSFT makes monolithic things. You have got to install various tools that play with each other nicely, but with outsiders - not necessarily. The plus is that when things are done correctly, it should all function rather smoothly. There is the option of gated check-ins, of using TFS to store the reports, etc.
Frankly, I am confused by all of the options. Our traditional build system was hacked together with a bunch of perl, batch scripts, executables, but now the build team switched to Team Build, which ought to be cleaner, but for the most part it is just a wrapper to the same old perl crap.
I am inclined to hack things together for testing too, because I can at least see what the pieces are. So, I envision the poor man's version as:
* A dedicated fast computer to run tests
* Some script to copy build files (test code as well as product code) over to that computer.
* A batch/perl script which would run mstest.exe from the command line and execute a few test batches with a by-category filter within some test DLLs (the product is so huge that we do want to organize tests by various categories); a sketch follows this list.
* Some script which will invoke the latter script remotely from the build server using psexec.exe (http://technet.microsoft.com/en-us/sysinternals/bb897553), as well as grab the XML output from a shared drive, and then send out an email with results to those who are interested.
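The mstest/psexec pair would look roughly like this sketch; the machine name, credentials, paths, and category are all placeholders:

rem run-tests.bat on the test machine
mstest.exe /testcontainer:C:\tests\Product.Tests.dll /category:"Smoke" /resultsfile:\\share\results\smoke.trx

rem invoked remotely from the build server
psexec \\TESTBOX -u DOMAIN\testuser -p ****** cmd /c C:\tests\run-tests.bat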
This can probably work, but then I have to worry about how well error handling can work with so many potential points of failure. It would be nice to configure things the "right way", taking advantage of whatever MSFT has cooked up. I am just not sure where to turn for a good guide. Have you done something like this?
Eventually we will want a farm of test computers, in case we run out of the allotted time. Something else of concern: for coded UI tests to succeed, I think a user has to be logged in, so I am not sure if psexec will be of much help here.
Can you share your positive/negative experience, point me to a good guide perhaps? Thanks!
Here are some tips off the top of my head if you want to get started with testing using the MS tools:
If you have an MSDN subscription, install a test rig by installing the Test Controller on your network and the Test Agent service on each of the machines that will be collecting diagnostic data. See the following link for reference: http://msdn.microsoft.com/en-us/library/dd293551.aspx.
Add a Test Project to your solution. See the first part of the following blog post: http://blogs.microsoft.co.il/blogs/eranruso/archive/2010/03/27/visual-studio-2010-coded-ui-test-user-guide-create-a-simple-coded-ui-test.aspx.
Automated test options can be configured through the .testsettings file(s) that are added automatically when you add a Test project (you can also manually add these files to your solution).
Install Team Foundation Server (2010 recommended) in order to take advantage of automating your tests with a daily build. You will also need TFS 2010 if you want to use the VS2010 Test Manager tool to define test environments and plan manual tests (these can be fully automated with CodedUI). Customize your new automated build to set up / deploy your application after the build, and set the build to run tests. Deployment will likely not be necessary for unit tests, but it will be for Web Performance and CodedUI test types.
If you have VS Ultimate or Test Professional licenses, you can also go further and set up virtual test labs using "Lab Management" features.
