I want to run the same test method on multiple devices at the same time.
I have already managed to run two different test scripts on multiple devices simultaneously, but I am having difficulty running the same script on several devices at once.
Can one test script use multiple Appium instances at the same time?
Is this scenario possible, or can we only run different test cases in parallel?
Right now, when my team deploys to our QA environment, I run one Robot Framework test suite. Several of its tests fail at first because the environment is not "warmed up". So, in the same pipeline, I use the "--rerun" option if some tests have failed. Usually, on the second run they pass just fine. Then, I merge the outputs with rebot:
rebot --merge output.xml output2.xml
And the log.html shows the information correctly (at both the test and the suite level).
Now comes the fun part. Even though the output.xml now contains two runs of the tests (the first with some failures and the retries all passing), when I upload it to Xray, it creates a test execution with the results of the first run only.
So, my question is: why? The output.xml clearly contains all the results from the last run. Failing that, I would understand if it created one test execution and put all the results inside (first and second run), but that is not the case either.
It seems to me that Xray is not importing the data correctly.
First, I have never used rebot in that way.
However, I think what you obtained in Xray is a consequence of the "merged" output.xml you have and of how Xray works.
Whenever you upload test automation results to Xray, a Test Execution is usually created, containing Test Runs (one per Test).
Test issues will be autoprovisioned unless they already exist; if they exist, only Test Runs will be created for those Tests.
A Test Execution cannot contain more than one Test Run for each Test. In other words, a test execution is like a task for running (or that contains the results of) a list of tests; this list cannot have duplicates.
I would advise uploading all the reports individually. Xray will consider the latest results when showing coverage status.
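For reference, uploading a Robot Framework output.xml is typically just one REST call per report. Below is a rough sketch for Xray server/DC; the host, credentials, and project key are placeholders, and the exact endpoint and authentication differ for Xray cloud, so check the Xray import documentation for your deployment:
curl -u some.user:password -F "file=@output.xml" "https://jira.example.com/rest/raven/1.0/import/execution/robot?projectKey=PROJ"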
I wrote a couple of Cypress end-to-end tests for our website. These tests run against a staging environment. Since we have a couple of those staging environments for our dev teams, I would like to make sure these tests are stable on all of them.
For some tests I need certain test data, so I wrote a Cypress test that creates that test data. Normally this test data generation test is not executed on our CI system. It is located in a separate file within the integration directory, so that Cypress is able to find and execute it. It is only executed once per staging environment; the test data simply remains there and does not have to be generated again and again.
When opening the Cypress GUI (cypress open), I would like this test data generation test to be ignored by Cypress so that I can simply run all suites at once. But when I add it to ignoredTestFiles, I can't run the test data generation test at all anymore.
Do you have an idea how I can make the Cypress GUI ignore my test data generation test, while keeping it executable by Cypress when I explicitly want it to run?
You could make the test data generation test depend on an environment variable, and only set that variable when you explicitly want it to execute. Something like:
if (Cypress.env('GENERATE_TEST_DATA')) {
// Generate your test data here
// You could even put this if around the entire it block
// so the test doesn't execute at all when the environment variable isn't set
}
Then when you want the test data to be generated, set the environment variable like this:
CYPRESS_GENERATE_TEST_DATA=true npx cypress run
Also see the documentation on Cypress.env()
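Putting that together, a full spec could look roughly like the sketch below (the file name and data-creation steps are placeholders, not from your project); with the guard around the it block, cypress open simply shows an empty suite unless the variable is set:
// cypress/integration/generate-test-data.spec.js (hypothetical file name)
describe('staging test data generation', () => {
  // Only register the test when GENERATE_TEST_DATA is set, so normal
  // `cypress open` / `cypress run` sessions ignore it.
  if (Cypress.env('GENERATE_TEST_DATA')) {
    it('creates the required test data', () => {
      // e.g. cy.request() calls or UI steps that create the data
    });
  }
});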
Is there a way to run all my test cases simultaneously? If so, could you please post an example of how to do it?
Regards,
Meir
There is nothing built into Robot Framework to run tests in parallel. If your tests are spread across multiple suites, you can write a script that runs them all at the same time and then use rebot to combine all of the output.xml files into a single report.
There is an open source program that might do what you want. The project is named "pabot" and can be found here: https://github.com/mkorpela/pabot
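If I remember correctly, basic pabot usage is just a couple of commands (the suite path and process count below are placeholders); pabot splits execution per suite and merges the results for you:
pip install robotframework-pabot
pabot --processes 4 path/to/tests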
We have a complete Lab Management environment running Coded UI tests in nightly builds. What we are trying to achieve is to run our integration tests (regular TestMethod() with SQL connections) just before all the Coded UI tests to verify that our db scripts are executed correctly and that there are no new changes causing any problems.
So far I have found a way to execute tests remotely through .testrunconfig. The problem with that approach is that it is not possible to choose a test controller connected to a team project, so I guess that is only useful for running tests on physical machines outside of Lab Management?
One option seems to be to create a Test Case for each integration test so that they run together with the UI tests, but it feels like too much maintenance to manage hundreds of test cases just to run the integration tests. Also, it would be better to completely separate the test runs for the different kinds of tests.
Is there any easy way to achieve this that I have totally missed? Or do I have to modify the lab build template to deploy and run the tests?
I guess that would be only useful for running tests on physical machines outside of Lab Management?
If you run your tests remotely through .testrunconfig you have to connect the Test Agent to another Test Controller which is NOT connected to the team project.
Unfortunately, to my knowledge, that is not possible for environments running under Lab Management.
What about this approach:
Create an Ordered Test containing all your integration tests.
Create a new Test Case "Integration Tests" and automate it with the Ordered Test.
That way you do not have to maintain hundreds of Test Cases.
You could also create several Ordered Tests if you want to group the integration tests, and then
create a "main" Ordered Test containing them.
This way it will be easier to analyze test results especially if you have a lot of tests.
Let the integration tests run as a part of your existing nightly build.
Create a new Build Definition which does not start a build but uses the last successful nightly build, and let your Coded UI tests run using the Lab Build Template.
This way you will have different test runs for the different kinds of tests.
The only drawback is that you have to "synchronize" these two builds...
You could just schedule the second build later so you can be sure the first build is done.
It's not really perfect, I know... but this way you could easily achieve your goal.
I am not sure if there is an alternative solution, but on the project I am currently working on we have both our Unit and Integration Test Assemblies set under the Process options (Process>Basic>AutomatedTest>TestAssembly) in our Nightly Build. This was achieved through altering the Default Build Process Template (not the Lab Default) a bit, as you suggested (I thought this was standard, but it's been a while).
Our team has hundreds of integration tests that hit a database and verify results. I've got two base classes for all the integration tests, one for retrieve-only tests and one for create/update/delete tests. The retrieve-only base class regenerates the database during the TestFixtureSetup so it only executes once per test class. The CUD base class regenerates the database before each test. Each repository class has its own corresponding test class.
As you can imagine, this whole thing takes quite some time (approaching 7-8 minutes to run and growing quickly). Having this run as part of our CI (CruiseControl.Net) is not a problem, but running locally takes a long time and really prohibits running them before committing code.
My question is are there any best practices to help speed up the execution of these types of integration tests?
I'm unable to execute them in-memory (à la SQLite) because we use some database-specific functionality (computed columns, etc.) that isn't supported in SQLite.
Also, the whole team has to be able to execute them, so running them on a local instance of SQL Server Express or something could be error prone unless the connection strings are all the same for those instances.
How are you accomplishing this in your shop and what works well?
Thanks!
Keep your fast (unit) and slow (integration) tests separate, so that you can run them separately. Use whatever method for grouping/categorizing the tests is provided by your testing framework. If the testing framework does not support grouping the tests, move the integration tests into a separate module that has only integration tests.
The fast tests should take only some seconds to run all of them and should have high code coverage. These kind of tests allow the developers to refactor ruthlessly, because they can do a small change and run all the tests and be very confident that the change did not break anything.
The slow tests can take many minutes to run and they will make sure that the individual components work together right. When the developers do changes that might possibly break something which is tested by the integration tests but not the unit tests, they should run those integration tests before committing. Otherwise, the slow tests are run by the CI server.
In NUnit you can decorate your test classes (or methods) with a category attribute, e.g.:
[Category("Integration")]
public class SomeTestFixture{
...
}
[Category("Unit")]
public class SomeOtherTestFixture{
...
}
You can then stipulate in the build process on the server that all categories get run and just require that your developers run a subset of the available test categories. What categories they are required to run would depend on things you will understand better than I will. But the gist is that they are able to test at the unit level and the server handles the integration tests.
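For example (the assembly name is a placeholder), the console runners can filter on those categories; with the NUnit 3 runner it is the --where option, and with NUnit 2.x it is /include and /exclude:
nunit3-console YourTests.dll --where "cat == Unit"
nunit-console YourTests.dll /include:Unit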
I'm a Java developer but have dealt with a similar problem. I found that running a local database instance works well because of the speed (no data to send over the network) and because this way you don't have contention on your integration test database.
The general approach we use to solving this problem is to set up the build scripts to read the database connection strings from a configuration file, and then set up one file per environment. For example, one file for WORKSTATION, another for CI. Then you set up the build scripts to read the config file based on the specified environment. So builds running on a developer workstation run using the WORKSTATION configuration, and builds running in the CI environment use the CI settings.
It also helps tremendously if the entire database schema can be created from a single script, so each developer can quickly set up a local database for testing. You can even extend this concept to the next level and add the database setup script to the build process, so the entire database setup can be scripted to keep up with changes in the database schema.
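As a rough illustration of the configuration-file idea (the variable and key names below are made up, not from the original answer), the test code itself can stay environment-agnostic and just ask for the right connection string:
using System;
using System.Configuration;

public static class TestDatabase
{
    public static string GetConnectionString()
    {
        // Defaults to WORKSTATION when the variable is not set on a dev machine.
        var environment = Environment.GetEnvironmentVariable("TEST_ENVIRONMENT") ?? "WORKSTATION";

        // One appSettings entry (or config file) per environment,
        // e.g. "ConnectionString.WORKSTATION" and "ConnectionString.CI".
        return ConfigurationManager.AppSettings["ConnectionString." + environment];
    }
}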
We have an SQL Server Express instance with the same DB definition running for every dev machine as part of the dev environment. With Windows authentication the connection strings are stable - no username/password in the string.
What we would really like to do, but haven't yet, is see if we can get our system to run on SQL Server Compact Edition, which is like SQLite with SQL Server's engine. Then we could run them in-memory, and possibly in parallel as well (with multiple processes).
Have you done any measurements (using timers or similar) to determine where the tests spend most of their time?
If you already know that the database recreation is why they're time-consuming, a different approach would be to regenerate the database once and use transactions to preserve the state between tests: each CUD-type test starts a transaction in setup and performs a rollback in teardown. This can significantly reduce the time spent on database setup for each test, since a transaction rollback is cheaper than a full database recreation.
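A minimal sketch of that rollback approach with NUnit and System.Transactions (the base-class name is hypothetical):
using System.Transactions;
using NUnit.Framework;

public abstract class TransactionalTestBase
{
    private TransactionScope _scope;

    [SetUp]
    public void BeginTransaction()
    {
        // Every write the test performs happens inside this ambient transaction.
        _scope = new TransactionScope();
    }

    [TearDown]
    public void RollbackTransaction()
    {
        // Disposing without calling Complete() rolls everything back,
        // leaving the database exactly as it was before the test.
        _scope.Dispose();
    }
}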