how to run "only" single test in specflow while running it normally (ignore all tests except one) - integration-testing

is there "only" (like in mocha) in specflow to run only this tests and ignore all the others?
I have thousands of tests so don't want to ignore 1 by 1.
I know that I can run only 1 manually, but I mean while running all the tests, to use some API like "only" to run only single test

You could implement a BeforeScenario hook that fails the tests other than the selected one. The selected test could be marked with a tag, e.g. 'OnlyThis'; the logic for failing the other test cases would be to verify whether they carry the required tag.
I believe there is no built-in option in SpecFlow.
It also depends on the test runner you use. You could filter tests, e.g. using the test name.
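A minimal sketch of such a hook, assuming NUnit as the test runner; the OnlyThis tag name follows the answer above, and using Assert.Ignore rather than a hard failure is just one illustrative choice:

using System.Linq;
using NUnit.Framework;
using TechTalk.SpecFlow;

[Binding]
public class OnlyThisHooks
{
    // Runs before every scenario; anything not tagged @OnlyThis is skipped.
    [BeforeScenario]
    public void SkipUnlessSelected(ScenarioContext scenarioContext)
    {
        if (!scenarioContext.ScenarioInfo.Tags.Contains("OnlyThis"))
        {
            // Assert.Ignore reports the scenario as ignored instead of failed.
            Assert.Ignore("Skipped: only scenarios tagged @OnlyThis are run.");
        }
    }
}

With this in place, running the full suite executes only the tagged scenarios and reports the rest as ignored; the tag and the hook would need to be removed again afterwards.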

Related

Dynamic test setup/teardown in Robot Framework?

Our team is transitioning to Robot Framework and we have some conceptual questions. Our biggest at the moment is how to set up a database insert/delete dynamically, depending on the test we are trying to run. For example, we may have a test suite where we want to test an endpoint like so:
Record exists, endpoint is valid
Record exists with data flaw, error is handled correctly
Record does not exist, error is handled correctly
...etc
Each of these requires a different kind of document to be inserted into our database, then deleted, in setup/teardown.
Currently, we're setting up each test as a separate suite and explicitly setting the insert statements (in this case, the JSON, since we're using MongoDB) in each Suite Setup keyword. We have some factory-method resources to help reduce redundancy but it's still very redundant. We are copying and pasting entire .robot files and changing a value or two in each one.
We have tried the data-driver library, but we haven't been able to get it working with the variable scoping. And we can't make these setup/teardown steps simple test steps, since we need to be sure that each document is created/destroyed before and after each test.

What are Tags in Robot Framework

In Robot Framework, I have seen the term TAG. What is the use of it?
When and where can we use this TAG?
Can we create our own tags, and how?
From the User Guide:
Tags are shown in test reports, logs and, of course, in the test data, so they provide metadata to test cases.
Statistics about test cases (total, passed, failed) are automatically collected based on tags.
With tags, you can include or exclude test cases to be executed.
With tags, you can specify which test cases are considered critical.
And my own points on how I use them:
Mark test cases that are not allowed to be re-run at the end
Mark test cases that are allowed to be run in parallel
Add the defect ID as a tag so I will know which test cases should pass after the fix

Dynamic test cases in NUnit3

I have integer values as test cases (IDs of different users), and I don't want to hardcode them; I have a method that gets the users from an API. The specs say that the dynamic test cases spec is not implemented yet. Is it possible to load test cases before the tests are executed?
We have used the term "dynamic test cases" to mean that the tests are not created before the run but during it. Specifically, the test cases can change while the test is running.
It doesn't sound like this is what you need. If I understand correctly, you want to get the user IDs programmatically at the time the tests are created. You can easily do this by using the TestCaseSourceAttribute on a method that uses your API to get the user IDs.
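A minimal sketch of that approach; the hard-coded IDs below stand in for whatever your API call returns:

using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class UserTests
{
    // Evaluated when the test cases are created, before any test runs.
    // Replace the body with your real API call.
    private static IEnumerable<int> UserIds()
    {
        yield return 101;
        yield return 102;
        yield return 103;
    }

    [TestCaseSource(nameof(UserIds))]
    public void User_endpoint_is_valid(int userId)
    {
        // ... exercise the system under test with userId ...
        Assert.That(userId, Is.GreaterThan(0));
    }
}

Each value returned by the source method becomes its own test case in the runner.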

How do I keep Robot test suites DRY?

I'm halfway through a set of automated tests using the Robot Framework, and I'm starting to notice a lot of repetition. At the moment, my tests are organized by the page being tested (e.g. homepage, login page).
The uncertainty I'm feeling is that some tests are repeated word-for-word in two different test suites, with only the setup differing; but on the other hand, with the refactoring I've done, it feels like the keywords themselves are the test cases. I just want to know if there's a more standard-practice way of doing this.
I've listed a trivial example below:
common.robot
...
*** Keywords ***
User logs in
    # login logic here
...
home_page.robot
...
*** Test Cases ***
Verify user login
    User logs in
...
other_page.robot
...
*** Test Cases ***
Verify user login
    User logs in
...
If you want to share test keywords, you can do that on many levels.
You could define a resource.txt file, put all your common keywords in there, and then call them from different tests.
You could have a single parent test where you simply reuse keywords with differing parameters.
You could also feed the parameters through a list and call the same keyword in a FOR loop.
That being said, regarding your bigger concern of how to organize the structure of your test suite, that is a much-discussed topic and no single answer would suffice. You could look at Pekka's writings on this topic (Link).
Test framework design is an 'art form', similar to code design.

Integration Test Best Practice

When creating integration tests, what is the best approach to introducing data?
Should SQL scripts be used to create the data in the setup of the test, or would it be better to use the actual business objects to generate data which can then be used by the tests?
Any help would be greatly appreciated.
When creating test data for automated tests, there are a few rules I try to stick to, and I find these rules help me achieve reliable tests with a lower maintenance overhead:
Avoid making the output of one test the input of another test, i.e. don't use test A to create the test data for test B.
Avoid using the objects under test to create test data, i.e. if you're testing module A, don't use module A to create test data for any test.
Create test data in a way that is reliably repeatable at low cost, e.g. use SQL scripts to set up the data.
When deciding how the test data is to be created, also consider how it will be removed, so that your tests can be run from a clean base state.
In my environment I create test data using SQL at the test-fixture or test set-up point, and then I clean out the test data using SQL at the test-fixture or test tear-down point.
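A sketch of that shape, assuming NUnit and the Microsoft.Data.SqlClient package; the connection string, table, and statements are placeholders:

using Microsoft.Data.SqlClient;
using NUnit.Framework;

[TestFixture]
public class OrderQueryTests
{
    private const string ConnectionString = "...";  // your test database

    [SetUp]
    public void InsertTestData()
    {
        // Put the database into a known state before each test.
        Execute("INSERT INTO Orders (Id, Status) VALUES (9001, 'Open');");
    }

    [TearDown]
    public void RemoveTestData()
    {
        // Return to a clean base state so tests stay independent.
        Execute("DELETE FROM Orders WHERE Id = 9001;");
    }

    [Test]
    public void Open_orders_are_returned()
    {
        // ... call the code under test and assert against the known data ...
    }

    private static void Execute(string sql)
    {
        using var connection = new SqlConnection(ConnectionString);
        connection.Open();
        using var command = new SqlCommand(sql, connection);
        command.ExecuteNonQuery();
    }
}

The same SQL could live in script files run at the fixture level instead; the point is that the data is created and removed by the test infrastructure, not by the code under test.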
