Cypress test data generation scripts not as part of the test suites

I wrote a couple of Cypress end-to-end tests for our website. These tests run against a staging environment. Since we have several of those staging environments for our dev teams, I would like to make sure these tests are stable on all of them.
For some tests I need certain test data, so I wrote a Cypress test that creates that test data. Normally this test data generation test is not executed on our CI system. It is located in a separate file within the integration directory, so that Cypress is able to find and execute it. It is executed just once per staging environment; the test data then remains there and does not have to be generated again and again.
When opening the Cypress GUI (cypress open), I would like Cypress to ignore this test data generation test so that I can simply run all suites at once. But when I add it to ignoreTestFiles, I can't run the test data generation test at all anymore.
Do you have an idea how I can make the Cypress GUI ignore my test data generation test on the one hand, and on the other hand keep it executable by Cypress when I explicitly want to run it?

You could make the test data generation test depend on an environment variable, and only set that variable when you explicitly want it to execute. Something like:
if (Cypress.env('GENERATE_TEST_DATA')) {
  // Generate your test data here.
  // You could even put this if around the entire it block
  // so the test doesn't execute at all when the environment variable isn't set.
}
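For example, a minimal sketch of what that could look like as a separate spec file, guarding the whole block (the file name and the seeding request are just placeholders for whatever your data setup actually does):
// cypress/integration/generate-test-data.spec.js (hypothetical name)
if (Cypress.env('GENERATE_TEST_DATA')) {
  describe('test data generation', () => {
    it('creates the test data on the staging environment', () => {
      // hypothetical seeding endpoint -- replace with your real setup steps
      cy.request('POST', '/api/test-data', { fixtures: 'default' });
    });
  });
}
Because the if wraps the describe block, the spec contains no tests when the variable is not set, so running all suites simply skips over it.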
Then when you want the test data to be generated, set the environment variable like this:
CYPRESS_GENERATE_TEST_DATA=true npx cypress run
Also see the documentation on Cypress.env()
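The same variable also works when starting the GUI, since Cypress picks up any environment variable prefixed with CYPRESS_:
CYPRESS_GENERATE_TEST_DATA=true npx cypress open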

Related

How to mark a test as passed when it is retested in Robot Framework

Right now, when my team deploys to our QA environment, I run one Robot Framework test suite. It has several tests that fail at first because the environment is not "warmed up". So, in the same pipeline, I have the "--rerun" option in case some tests fail. Usually, in the second run they work just fine. Then I merge the outputs with rebot:
rebot --merge output.xml output2.xml
Even the log.html shows the information correctly (at the test and the suite level).
Now comes the fun part. Even though the output.xml now contains two runs of the tests (the first with some failed, and the retries all passing), when I upload it to Xray, it creates a test execution with the results of the first run only.
So, my question is: why? The output.xml clearly contains all the results, including the last run. If it did not, I would understand it creating one test execution and putting all the results inside (first and second run), which is not the case.
It seems to me that Xray is not importing the data correctly.
First, I have never used rebot in that way.
However, I think what you obtained in Xray is the consequence of the "merged" output.xml you have and how Xray works.
Whenever you upload test automation results to Xray, usually a Test Execution is going to be created, containing Test Runs (one per Test).
Test issues will be autoprovisioned unless they already exist; if they exist, only Test Runs will be created for those Tests.
A Test Execution cannot contain more than one Test Run for each Test. In other words, a test execution is like a task for running (or that contains the results of) a list of tests; this list cannot have duplicates.
I would advise uploading all the reports individually. Xray will consider the latest results whenever showing coverage status.
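As a rough sketch, uploading each output.xml on its own could look like this against the Robot Framework import endpoint of Xray Server/DC (the URL, credentials and project key are placeholders, and Xray Cloud uses a different base URL and authentication, so check the Xray REST API docs for your deployment):
curl -u jira_user:jira_password -F "file=@output.xml" "https://yourjira.example.com/rest/raven/1.0/import/execution/robot?projectKey=PROJ"
curl -u jira_user:jira_password -F "file=@output2.xml" "https://yourjira.example.com/rest/raven/1.0/import/execution/robot?projectKey=PROJ"
Each upload then produces its own Test Execution, so the retry results appear as the latest Test Runs instead of being collapsed by the merged file.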

Retest a block of tests in Robot Framework

I have a list of tests to execute in Robot Framework, containing a block of tests that should be executed again if a specific test fails, as the following flow explains. I want to know if this is doable with Robot Framework.
No, it is not doable. At least, not in a single test run, and not without a lot of work. Robot has no ability to re-run a test within a single test run. You would have to exec a second instance of robot where the output is sent to a separate output file, and then you would have to somehow merge the output files of the original test run and the second exec.
However, robot does support being able to give it the output.xml from a previous run so that it will re-run only the failed test cases. You can do that with the --rerunfailed command line option. See Re-executing failed test cases in the robot framework user guide.
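For example, a minimal sketch of that two-step flow (file and directory names are placeholders):
robot --output original.xml tests/
robot --rerunfailed original.xml --output rerun.xml tests/
rebot --merge --output final.xml original.xml rerun.xml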

Execution Level Setup & Teardown with Robot Framework

I understand there's Test Setup which will get executed before every test case and Suite Setup which will get executed before every suite (i.e. each .robot file).
However, I'm trying to do Setup and Teardown at the command level: run a Setup once when I run the robot command, and run a Teardown once all test suites have finished.
I tried having an __init__.robot file in my scenario directory, but it didn't get called at all:
*** Settings ***
Resource          ../_common/keywords.robot
Suite Setup       Prepare Browser
Suite Teardown    Close Browser
I want to be able to launch the browser before any test is started and then close browser only after all tests are completed.
For example, robot 1.robot 2.robot should:
Open browser
Run 1.robot test suite
Run 2.robot test suite
Close browser
You could do that by having "special" suites just for that, and calling them first and last in a run. Since SeleniumLibrary has a global scope, the browser initialized in the first one should be accessible to all follow-up suites in the same run.
E.g. the suite "Startup.robot" will open the browser, and "Closing.robot" will close it, and any in between will use it:
robot Startup.robot 1.robot 2.robot Closing.robot
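A minimal sketch of those two suites, assuming SeleniumLibrary and a placeholder staging URL (adapt the keywords to however your existing suites open the browser):
# Startup.robot
*** Settings ***
Library    SeleniumLibrary

*** Test Cases ***
Open The Browser
    Open Browser    https://staging.example.com    chrome

# Closing.robot
*** Settings ***
Library    SeleniumLibrary

*** Test Cases ***
Close The Browser
    Close All Browsers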
When you pass a directory for execution, the framework picks up the .robot files in it in alphabetical order, so you can name those special suites "0000_Startup.robot" and "zzzz_Closing.robot" for them to be run in the corresponding order (that is, if you use only ASCII/Latin file names).
And note that an __init__.robot file only takes effect when you execute the directory it lives in (e.g. robot my_scenarios/), not when you pass individual .robot files on the command line; its Suite Setup and Suite Teardown then run once around all suites in that directory, and its settings act as defaults for those suites, which can be overridden downstream. See their description in the documentation.

Using Test Configuration when running on-demand automated tests from VSTS Test Hub

My organization has moved from executing automated tests within MTM to executing automated tests within the VSTS Test Hub, via Release Definitions. We're using MSTest to run tests with Selenium / C#.
It is a requirement for us to be able to run any individual test on-demand, post-build, and to use the Test Configuration to control the browser. This was working without an issue when running tests through MTM.
Previously, when running tests through MTM, my MSTest TestContext's "Properties" were populated with runtime parameters such as the Test Configuration, e.g. TestContext.Properties["__Tfs_TestConfigurationName__"]. The runtime properties could also be seen in the resulting .trx file.
Now, when running tests through the VSTS Test Hub, those values are no longer set in TestContext automatically, nor do I see them in the .trx file, or in my Release deployment logs.
Are there any suggestions as to how the Test Configuration values of each test can be propagated into TestContext (or accessible via code otherwise) to be able to control the browser at test run-time?
I have considered creating a .runsettings file, but am unaware as to how Test Configuration variables would be dynamically populated into that file.
I have considered attempting to query the Test Run / Test Results in my Selenium code to determine the Test Points / Configuration, but am unaware of how to determine the Test Run ID (I see the Test Run ID in the release logs, but don't know how to programmatically access it)
I have seen a VSTS Extension called 'VSTS Test Extensions' which is purported to inject the VSTS variables into a .runsettings file, but there are no details as to how to configure the .runsettings file in order to accomplish this, and the old MTM-style parameter names do not seem to work.
Are Test Configuration variables accessible globally in VSTS somehow?
I.e. could I create a PowerShell Release task to somehow access test run / test point data? Microsoft does not provide any indication about how to use Test Configurations other than using them for manual tests.
Any help is greatly appreciated.
After some more review, it seems that my only option is a giant hack:
I noticed that in the Release logs, the [TEST_RUNID] is output, which points me to the $(test.runid) variable, which I didn't know existed.
I can output the Run ID to an external file via a Powershell Release task.
I can then have my test code read that file, perform an API call to VSTS querying the Test Results for that Run ID, and then use that response to output the Test Configuration Name for each Test Case ID.
I can then match up my Test Case ID within each of my test methods with the Test Case IDs from the service call response, and set the browser according to the Test Configuration Name.
The only issue then is if the same test exists multiple times in the same run with different configurations, which can be hacked around by querying the Test Run every time a test begins and checking which configuration has already been run based on the State ("Pending", etc.) or by some other means I haven't thought of yet.
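As a rough sketch of the file-drop and query steps above, assuming a PowerShell Release task with "Allow scripts to access the OAuth token" enabled (the file path, api-version and property names are assumptions to verify against your VSTS instance):
# Hypothetical inline Release task script: persist the run id for the test code to read
$runId = "$(test.runid)"
Set-Content -Path "$(System.DefaultWorkingDirectory)\TestRunId.txt" -Value $runId

# Could also live in the test code itself: map Test Case IDs to Configuration names
$uri = "$(System.TeamFoundationCollectionUri)$(System.TeamProject)/_apis/test/runs/$runId/results?api-version=5.0"
$results = Invoke-RestMethod -Uri $uri -Headers @{ Authorization = "Bearer $(System.AccessToken)" }
$results.value | ForEach-Object { "{0} -> {1}" -f $_.testCase.id, $_.configuration.name }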

Run unit/integration tests with Lab Management

We have a complete Lab Management environment running Coded UI tests in nightly builds. What we are trying to achieve is to run our integration tests (regular TestMethod() with SQL connections) just before all the Coded UI tests to verify that our db scripts are executed correctly and that there are no new changes causing any problems.
So far I have found a way to execute tests remotely through .testrunconfig. The problem with that approach is that it's not possible to choose a test controller connected to a team project, so I guess it would only be useful for running tests on physical machines outside of Lab Management?
One option seems to be to create a Test Case for each integration test, which should run them together with the UI tests, but it feels like too much maintenance to manage hundreds of test cases just to run the integration tests. Also, it would be better to completely separate the test runs for the different kinds of tests.
Is there any easy way to achieve this that I have totally missed? Or do I have to modify the lab build template to deploy and run the tests?
I guess that would be only useful for running tests on physical machines outside of Lab Management?
If you run your tests remotely through .testrunconfig you have to connect the Test Agent to another Test Controller which is NOT connected to the team project.
Unfortunately, that is impossible for environments running under Lab Management, to my knowledge.
What about this approach:
Create an Ordered Test containing all your integration tests.
Create a new Test Case "Integration Tests" and automate it with the Ordered Test.
That way you do not have to maintain hundreds of Test Cases.
You could also create several Ordered Tests if you want to group the integration tests, and then create a "main" Ordered Test containing them.
This way it will be easier to analyze test results especially if you have a lot of tests.
Let the integration tests run as a part of your existing nightly build.
Create a new Build Definition which does not start a build but uses the last successful nightly build, and let your Coded UI tests run using the Lab Build Template.
This way you will have different test runs for the different kinds of tests.
The only drawback is that you have to "synchronize" these two builds...
You could just schedule the second build later, so you can be sure the first build is done.
It's not really perfect, I know... but this way you could easily achieve your goal.
I am not sure if there is an alternative solution, but on the project I am currently working on we have both our Unit and Integration Test Assemblies set under the Process options (Process>Basic>AutomatedTest>TestAssembly) in our Nightly Build. This was achieved through altering the Default Build Process Template (not the Lab Default) a bit, as you suggested (I thought this was standard, but it's been a while).
