Combining Istanbul coverage reports for integration tests

I have a Node app skeleton (https://github.com/jedwards1211/crater) with integration tests that start the server and run tests using webdriver.io. I'm not sure how to generate full code coverage because there are three pieces that need to be covered, all running separately:
The code that builds and spawns the server process
The code that runs on the server
The code that runs on the client (i.e. PhantomJS)
I know that I can build all that code using babel-plugin-istanbul to instrument it. And I know it would be easy to run the server with nyc and get coverage for only the server code. But is there any way to get a combined coverage report for all three pieces after running the integration test?
What I've found out so far
It's possible to combine coverage info using: http://gotwarlost.github.io/istanbul/public/apidocs/classes/Collector.html
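For reference, here is a minimal sketch (not from the question; the three input file names are just assumptions) of how that Collector API could merge coverage objects from the three pieces into a single report, using the istanbul 0.x module:

const fs = require('fs');
const istanbul = require('istanbul');

const collector = new istanbul.Collector();

// hypothetical per-piece coverage dumps: launcher, server, and browser
['coverage-launcher.json', 'coverage-server.json', 'coverage-client.json']
  .filter(fs.existsSync)
  .forEach(file => collector.add(JSON.parse(fs.readFileSync(file, 'utf8'))));

// writes the combined report into the default ./coverage directory
const reporter = new istanbul.Reporter();
reporter.addAll(['lcov', 'text-summary']);
reporter.write(collector, true, () => console.log('combined coverage written'));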
I should be able to transport coverage from instrumented code in the browser to the test scripts by running (via webdriver.io):
(await browser.execute(() => window.__coverage__)).value
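As a sketch of that transport step (hedged; the hook placement and output file name are assumptions, not something from the question), the returned value can be written to disk at the end of the run so the merge script above can pick it up alongside the server-side dump:

// e.g. in a mocha after() hook run by webdriver.io
const fs = require('fs');

after(async () => {
  // coverage accumulated by babel-plugin-istanbul inside the browser (PhantomJS)
  const clientCoverage = (await browser.execute(() => window.__coverage__)).value;
  if (clientCoverage) {
    fs.writeFileSync('coverage-client.json', JSON.stringify(clientCoverage));
  }
});

The server process could do the same thing on shutdown by serializing its own global.__coverage__ (or by being run under nyc, which writes that file for you).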

Related

Robot Framework test case scripts work fine locally but fail when triggered in an Azure pipeline

As a team, we are having some difficulty implementing an Azure DevOps pipeline. The tests run in the background on our agent-hosted Windows machine. A couple of our test scripts were successful, while others failed for unknown reasons.
However, the identical test scripts all pass 100% when run locally from VS Code. We are unable to determine why they fail on the agent-hosted Windows machine.
All of the test scripts need to pass in the Azure pipeline; any suggestions would be helpful.

GitLab CI/runner with embedded testing

We're writing software for an embedded platform for which we also need automated on-device testing.
When a push or tag is made to the repository, a GitLab pipeline is started. The first step is to build the software.
The next step is to execute a custom-written Python script (doing the testing) on a machine running locally in our office. This machine is physically connected to the embedded device via USB, hence the testing part of the CI pipeline needs to be executed on this machine exclusively. The Python script needs to be executed multiple times with different parameters, but may (and should) run concurrently if possible.
Is this setup possible with GitLab runners, and if so, how do I configure it to only run this exact part of the pipeline locally?
Thanks in advance
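Not part of the original question, but a hedged sketch of the usual pattern: register a gitlab-runner on the office machine and give it a dedicated tag, then pin the test jobs to that tag so only that runner picks them up (the tag name, script name, and parameters below are made up for illustration). How many of those jobs run at once on that machine is governed by the runner's concurrent setting.

# .gitlab-ci.yml sketch; job names, tag, and parameters are assumptions
stages:
  - build
  - test

build:
  stage: build
  script:
    - make all          # placeholder for the real build step
  artifacts:
    paths:
      - build/

test-profile-a:
  stage: test
  tags:
    - embedded-device   # only the locally registered runner carries this tag
  script:
    - python3 device_tests.py --profile a

test-profile-b:
  stage: test
  tags:
    - embedded-device
  script:
    - python3 device_tests.py --profile b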

TFS release management test result view fails with JSON error

I have a TFS server (on-premises version 15.105.25910.0) with build and release management definitions. One of the definitions deploys a web site and the test assemblies, and then runs my MSTest-based Selenium tests. Most pass, some are not run, and a few fail.
When I attempt to view the test results in the TFS web portal the view of "failed" test results fails and it shows the following error message:
can't run your query: bad json escape sequence: \p. path
'build.branchname', line 1, position 182.
Can anyone explain how this fault arises? Or, more to the point, what steps might I take to either diagnose this further or correct the fault?
The troublesome environment and its "Run Functional Tests" task are shown below.
Attempted diagnostics
As suggested by Patrick-MSFT, I added the requisite three steps to a build (the one that builds the Selenium tests):
Windows machine file copy (copy the MSTest assembly containing the Selenium tests to c:\tests on a test machine)
Visual Studio test agent deploy (to the same machine)
Run functional tests (the assembly shipped in step 1)
The tests run (and have the same mix of pass, fail, and skipped results), but the test results can be browsed just fine with the web page's test links.
Results after hammering the same test into a different environment to see how that behaves...
Well, the same three steps (targeting the same test machine) in a different environment work as expected: same mix of results, but the view shows the results without errors.
To be clear, this is a different (pre-existing) environment in the same release definition, targeting the same test PC. It would seem the issue is somehow tied to that specific environment. So how do I fix that?
So the next step is to clone the failing environment and see what happens. Back later with the results.
Try to run the tests with the same settings in a build definition instead of a release. This could narrow down whether the issue is related to your tests or to the task configuration.
Double-check that you have used the right settings for the related tasks. You could refer to the related tutorial for Selenium testing on MSDN: Get started with Selenium testing in a continuous integration pipeline.
Try to run the same release in another environment.
Also go through your log files to see if there is any related information for troubleshooting.

Generating a coverage report for Protractor E2E tests running on a different machine

I want to get a coverage report for my Protractor E2E UI tests, which run against a running Node app.
I have tried the following steps:
Using Istanbul, I instrumented the code on one of my app servers managed through Nginx:
istanbul instrument . --complete-copy -o instrumented
Stopped the actual Node code and started the instrumented code on the same port (port 3000), without changing the Nginx config, so that any traffic hitting that app server is directed to the instrumented code running on the same server.
Ran the Protractor end-to-end tests from another machine. This is a local machine that I run the tests from, while the instrumented app is on another server.
At the end of the run, I stopped the instrumented code.
Now:
- There is no coverage variable available
- There is no Coverage Folder
- No report generated
I thought the coverage report would be generated if the instrumented code was hit through the protractor script.
I also googled around and found the plugin "protractor-istanbul-plugin", but I'm not sure if this is what I should use.
My questions:
Is it even possible to generate a coverage report if the instrumented code is on a different server and the Protractor script is run from a different machine?
If it is possible, is my assumption wrong that a report would be generated when the instrumented code is hit?
Should I use the istanbul cover command here, and if yes, how?
My goal is to instrument the code after deploying to the QA environment and then trigger the Protractor script, which sits on another machine and points to the QA environment running the instrumented code.
Thanks in Advance.
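Not from the thread itself, but a hedged sketch of the usual approach: istanbul instrument only rewrites the source, and the counters it maintains live in an in-process global.__coverage__ object, so nothing reaches disk unless something dumps that object. One option (the route path and port below are assumptions) is to expose it from the instrumented server and fetch it after the Protractor run:

// small addition to the instrumented server, assuming an Express app
const express = require('express');
const app = express();

// ... the instrumented application routes go here ...

// expose the counters filled in by the instrumented code
app.get('/__coverage__', (req, res) => {
  res.json(global.__coverage__ || {});
});

app.listen(3000);

After the tests finish, save the response as coverage/coverage.json wherever the original sources are available and run istanbul report lcov (pointing --root at the folder containing the JSON) to produce the report. Alternatively, running the original, un-instrumented app under istanbul cover server.js (entry point name is an assumption) lets Istanbul instrument it on the fly and write the coverage file itself on a clean exit.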

Protractor implicit waiting not working when using grunt-protractor-runner

I am writing e2e tests for a JS application at the moment. Since I am not a JS developer, I investigated this topic for a while and ended up with the following setup:
Jasmine2 as testing framework
grunt as "build-tool"
protractor as test runner
jenkins as CI server (already in use for plenty of Java projects)
Although the application under test is not written in Angular, I decided to go for Protractor, following a nice guide on how to make Protractor run nicely even without Angular.
Writing some simple tests and running them locally worked like a charm. In order to implicitly wait for some elements to show up in the DOM, I used the following code in my conf.js:
onPrepare: function() {
  browser.driver.manage().timeouts().implicitlyWait(5000);
}
All my tests were running as expected, so I decided to go to the next step, i.e. installation on the CI server.
The development team of the application I want to test was already using grunt to build their application, so I decided to just hook myself into that. The goal of my new grunt task is to:
assemble the application
start a local webserver running the application
run my protractor test
write some test reports
Finally I accomplished all of the above steps, but I am now dealing with a problem that I cannot solve and for which I did not find any help by googling. In order to run the Protractor tests from grunt, I installed the grunt-protractor-runner.
The tests are running, BUT the implicit wait is not working, causing some tests to fail. When I add some explicit waits (browser.sleep(...)) everything is OK again, but that is not what I want.
Is there any chance to get implicitly waiting to work when using the grunt-protractor-runner?
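For context, a minimal sketch (not the asker's actual Gruntfile; the task target and file names are assumptions) of how grunt-protractor-runner is typically pointed at the same conf.js, so the onPrepare implicit wait above is still the one in effect:

// Gruntfile.js
module.exports = function (grunt) {
  grunt.initConfig({
    protractor: {
      options: {
        configFile: 'conf.js', // the config containing the onPrepare implicit wait
        keepAlive: false,      // fail the grunt task when tests fail
        noColor: false
      },
      e2e: {}                  // single target using the shared options
    }
  });

  grunt.loadNpmTasks('grunt-protractor-runner');
  grunt.registerTask('test-e2e', ['protractor:e2e']);
};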
UPDATE:
The problem does not have anything to do with the grunt-protractor-runner. When using a different webserver that I start up during my task, it works again. To be more precise: using the plugin "grunt-contrib-connect" the tests work; using the plugin "grunt-php" the tests fail. So I am now looking for another PHP server for grunt. I will be updating this question.
UPDATE 2:
While looking for some alternatives, I considered mocking the PHP part of the app and finally decided to do so.
