I have some Xamarin UI tests that have the attributes
[TestFixture(Platform.Android)]
[TestFixture(Platform.iOS)]
These let me run the tests for both platforms in the cloud (App Center Test). However, when I get the NUnit XML test results, I have problems because of inconclusive tests.
For example, say I have 15 Android tests passing: if I publish the XML test results to an Azure DevOps pipeline, it reports that only 50% of my tests passed, with 15 inconclusive (presumably the 15 iOS fixture copies, which never ran in the Android run).
What I want to know is what I can do to separate these tests entirely, so that I end up with one test result set for Android and one for iOS.
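For reference, the fixture class looks roughly like this (a trimmed sketch; AppInitializer.StartApp is the helper generated by the standard Xamarin.UITest template, and the test body is a placeholder):

using NUnit.Framework;
using Xamarin.UITest;

[TestFixture(Platform.Android)]
[TestFixture(Platform.iOS)]
public class Tests
{
    IApp app;
    readonly Platform platform;

    // NUnit constructs one fixture instance per [TestFixture] attribute,
    // so every test exists once for Android and once for iOS.
    public Tests(Platform platform)
    {
        this.platform = platform;
    }

    [SetUp]
    public void BeforeEachTest()
    {
        app = AppInitializer.StartApp(platform);
    }

    [Test]
    public void AppLaunches() // placeholder test
    {
        app.Screenshot("First screen");
    }
}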
Thanks in advance.
We are having some difficulty implementing an Azure DevOps pipeline as a team. We run the tests in the background on an agent-hosted Windows machine. A couple of our test scripts pass, while others fail for unknown reasons.
However, the identical test scripts all pass 100% when run locally in VS Code. We are unable to determine why they fail on the agent-hosted Windows machine.
All test scripts that execute need to pass in the Azure pipeline; any constructive suggestions would be helpful.
I am able to run automation tests on a sample Flutter application using flutter_driver on an Android emulator. I am looking for options for executing them on device clouds. There are a few threads that talk about execution on AWS Device Farm; however, I am interested in Firebase Test Lab. Similar to how we can execute automated scripts in Sauce Labs, is there an option to run automated tests in Firebase Test Lab using flutter_driver?
It's currently not possible. Test Lab only supports testing Android apps with Espresso or UI Automator, and iOS apps with XCTest. There is currently no other framework support. As stated in the documentation:
Test Lab runs Espresso and UI Automator 2.0 tests on Android apps, and XCTest tests on iOS apps. Write tests using one of those frameworks, then run them through the Firebase console or the gcloud command line interface.
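For instance, an Espresso instrumentation run can be started from the command line along these lines (a sketch; the APK paths are placeholders for your own build outputs):

gcloud firebase test android run \
  --type instrumentation \
  --app app/build/outputs/apk/debug/app-debug.apk \
  --test app/build/outputs/apk/androidTest/debug/app-debug-androidTest.apk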
Feel free to file a feature request with Firebase support.
Instead of flutter_driver, use the integration_test package; a short sketch follows the list below.
Tests written with the integration_test package can:
1- Run directly on the target device, allowing you to test on multiple Android or iOS devices using Firebase Test Lab.
2- Run using flutter_driver.
3- Use flutter_test APIs, making integration tests more like writing widget tests.
You can learn more about migrating from flutter_driver here.
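A minimal sketch of what such a test looks like (MyApp and the package import are placeholders for your own app):

// integration_test/app_test.dart
import 'package:flutter/material.dart';
import 'package:flutter_test/flutter_test.dart';
import 'package:integration_test/integration_test.dart';
import 'package:example/main.dart'; // placeholder: your app's entry point

void main() {
  // Hooks the test into the native instrumentation so results are
  // reported back to the device runner (and hence to Firebase Test Lab).
  IntegrationTestWidgetsFlutterBinding.ensureInitialized();

  testWidgets('counter increments', (WidgetTester tester) async {
    await tester.pumpWidget(const MyApp());
    await tester.tap(find.byIcon(Icons.add));
    await tester.pumpAndSettle();
    expect(find.text('1'), findsOneWidget);
  });
}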
I started using the Firebase Test Lab web page to run instrumented tests of my new apps. One of the advanced settings is "Test Timeout", which is the point at which Firebase will kill a long-running test.
I have now started to launch tests directly from Android Studio (3.1.1). In setting up tests with the run configuration editor, I can't seem to find the setting for Test Timeout. Am I missing something, or is this feature not available when launching tests from Android Studio?
There is currently no way to set a Test Lab timeout using the Android Studio UI. Please feel free to file a feature request for missing functionality like this.
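As a workaround, the timeout can still be set when the run is launched from the gcloud CLI instead (a sketch; the APK names are placeholders, and --timeout is the documented flag):

gcloud firebase test android run \
  --type instrumentation \
  --app app-debug.apk \
  --test app-debug-androidTest.apk \
  --timeout 10m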
I have a TFS server (on-premises, version 15.105.25910.0) with build and release management definitions. One of the definitions deploys a web site and the test assemblies, and then runs my MSTest-based Selenium tests. Most pass, some are not run, and a few fail.
When I attempt to view the test results in the TFS web portal, the "failed" test results view fails and shows the following error message:
can't run your query: bad json escape sequence: \p. path 'build.branchname', line 1, position 182.
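For what it's worth, \p is not a legal JSON escape sequence (JSON only allows \", \\, \/, \b, \f, \n, \r, \t and \uXXXX), so a literal backslash that reaches the query unescaped, e.g. from a branch path, would produce exactly this error. With an invented branch value:

{"build.branchname": "release\previous"}     <- rejected: \p is not a JSON escape
{"build.branchname": "release\\previous"}    <- accepted: the backslash is doubled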
Can anyone explain how this fault arises? Or, more to the point, what steps might I take to either diagnose this further or correct the fault?
The troublesome environment and its "Run Functional Tests" task are shown below
Attempted diagnostics
As suggested by Patrick-MSFT, I added the requisite three steps to a build (the one that builds the Selenium tests):
Windows machine file copy (copy the MSTest assembly containing the Selenium tests to c:\tests on a test machine)
Visual Studio test agent deploy (to the same machine)
Run functional tests (the assembly shipped in step 1)
The tests run (and show the same mix of pass, fail, and skipped), but the test results can be browsed just fine with the web portal's test links.
Results after hammering the same tests into a different environment to see how that behaves...
Well, the same three steps (targeting the same test machine) in a different environment work as expected: the same mix of results, but the view shows the results without errors.
To be clear, this is a different (pre-existing) environment in the same release definition, targeting the same test PC. It would seem the issue is somehow tied to that specific environment. So how do I fix that, then?
So next step, clone the failing environment and see what happens. Back later with the results.
Try running the tests with the same settings in a build definition instead of a release. This could narrow down whether the issue is related to your tests or to the task configuration.
Double-check that you have used the right settings in the related tasks. You could refer to the related tutorial for Selenium testing on MSDN: Get started with Selenium testing in a continuous integration pipeline.
Try running the same release in another environment.
Also, go through your log files to see if there is any related info for troubleshooting.