How do I keep Robot test suites DRY?

I'm halfway through a set of automated tests using the Robot Framework, and I'm starting to notice a lot of repetition. At the moment, my tests are organized by the page being tested (e.g. homepage, login page).
What makes me uncertain is that some tests are repeated word for word in two different test suites, with only the setup differing; on the other hand, with the refactoring I've done, it feels like the keywords themselves have become the test cases. I just want to know whether there is a more standard way of doing this.
I've listed a trivial example below:
common.robot
...
*** Keywords ***
User logs in
    # login logic here
...

home_page.robot
*** Settings ***
Resource    common.robot
...
*** Test Cases ***
Verify user login
    User logs in
...

other_page.robot
*** Settings ***
Resource    common.robot
...
*** Test Cases ***
Verify user login
    User logs in
...

If you want to share test keywords, you can do that on several levels.
You could define a resource.txt file, put all your common keywords in there, and then call them from the different tests.
You could have a single parent test where you simply reuse the keywords with differing parameters.
You could also feed the parameters through a list and call the same keyword in a FOR loop, as in the sketch below.
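A minimal sketch of those ideas; the file names, the ${USERNAMES} list and the keyword arguments are invented here for illustration:

resource.robot
*** Keywords ***
User logs in
    [Arguments]    ${username}    ${password}
    # login logic here
    Log    Logging in as ${username}

login_tests.robot
*** Settings ***
Resource    resource.robot

*** Variables ***
@{USERNAMES}    alice    bob    carol

*** Test Cases ***
Verify login for several users
    FOR    ${name}    IN    @{USERNAMES}
        User logs in    ${name}    secret
    END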
That being said, regarding your bigger concern of how to organize the structure of your test suite: that is a much-discussed topic, and no single answer will suffice. You could look at Pekka's writings on this topic (Link).
Test framework design is an 'art form', similar to code design.

Related

How to run "only" a single test in SpecFlow while running normally (ignore all tests except one)

is there "only" (like in mocha) in specflow to run only this tests and ignore all the others?
I have thousands of tests so don't want to ignore 1 by 1.
I know that I can run only 1 manually, but I mean while running all the tests, to use some API like "only" to run only single test
You could implement a BeforeScenario hook that fails every test other than the selected one. The selected test could be marked with a tag, e.g. 'OnlyThis'; the hook's logic would then be to fail any test case that is not marked with the required tag.
I believe there is no built-in option in SpecFlow.
It also depends on the test runner you use. You could filter tests, e.g. by test name.

Dynamic test setup/teardown in Robot Framework?

Our team is transitioning to Robot Framework and we have some conceptual questions. Our biggest at the moment is how to set up a database insert/delete dynamically, depending on the test we are trying to run. For example, we may have a test suite where we want to test an endpoint like so:
Record exists, endpoint is valid
Record exists with data flaw, error is handled correctly
Record does not exist, error is handled correctly
...etc
Each of these requires a different kind of document to be inserted into our database, then deleted, in setup/teardown.
Currently, we're setting up each test as a separate suite and explicitly setting the insert statements (in this case the JSON, since we're using MongoDB) in each Suite Setup keyword. We have some factory-method resources to help reduce redundancy, but it's still very repetitive: we are copying and pasting entire .robot files and changing a value or two in each one.
We have tried the DataDriver library, but we haven't been able to get it working with the variable scoping. And we can't make these setup/teardown steps ordinary test steps, since we need to be sure that each document is created and destroyed before and after each test.
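For reference, Robot Framework also allows setup and teardown to be attached to individual test cases and to take arguments, which keeps the data next to the test; a test-level [Teardown] runs even if the test body fails. A minimal sketch, in which the resource file, the Insert Test Document / Remove Test Document keywords, the endpoint keywords and the JSON variables are all hypothetical:

endpoint_tests.robot
*** Settings ***
Resource    mongo_keywords.robot    # assumed to provide the insert/remove keywords below

*** Test Cases ***
Record exists, endpoint is valid
    [Setup]    Insert Test Document    ${VALID_RECORD_JSON}
    Call Endpoint And Expect Success
    [Teardown]    Remove Test Document    ${VALID_RECORD_JSON}

Record does not exist, error is handled correctly
    Call Endpoint And Expect Not Found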

Firebase/Firestore Testing Pattern: mock fake data without code duplication (DRY)

This is a question about programming style and patterns when it comes to writing tests for complex systems written in Firebase/Firestore. I'm writing a web app using Firebase, Typescript, Angular, Firestore, etc...
Objective:
I have written basic security rules tests to test my users collection. I'll also be testing the function that writes the document, and e2e testing that the document is written when a new user signs up. So far so good.
The tests are clean so far - I manually define a few user objects, write them to the database before the tests with beforeEach(), and run the tests. The tests depend on me knowing what data I wrote in the first place - what the document ids were, what the field values were, etc... - and then I check that certain operations pass and certain operations fail, depending on the custom claims provided. Similarly, with the functions tests and e2e tests, I'll be checking whether the correct data was written, generated, etc...
The next step would be to test the chat functionality. Here's where I run into code duplication issues.
Issue:
So let's say I want to test the chat functionality or the transaction functionality. For these modules to be tested, the database needs to have fake user data, transaction data, test data etc... and, furthermore, within the tests, I need to have access to the fields, document Ids, etc.. that were written to the database, so I can access the documents for tests, etc...
For example, whether or not a chat message can be written to firestore depends on whether certain fields exist on the user document.
This would require me to manually define all the objects I plan to write to the database in the same file as the tests themselves, so I have access to what I wrote. As I test more complex and dependent modules of the system, since each test for each module is in a separate file, I either have to manually write out each object, or require it from another test file and write it. Each document has its payload and its Id that I need to keep track of, and even the fake user token objects I have to pass to firestore (or actual auth user records I have to import for online testing). This would mean a whole bunch of boilerplate code and duplication simply writing objects as such in all of my files:
const fakeUserPayload: User = {
  handle: 'username',
  email: faker.internet.email(),
  // ...etc
}
// And so on and on for all test users, chat docs, transaction docs, etc...
Potential Solution: so there are a few potential solutions I came up with, but none seem to solve the problem.
For example, I thought I would write a module that simply populates the firestore database at the start of each test. The module would have a userPayloadFactory and other loops (using faker module) to automatically populate the Db with fake data.
Problem: If I did this, I wouldn't have access to the document Ids and field data in my tests, since it's automatically generated. I thought, maybe I could populate firestore with fake data, and then use an administrative db connection to read the fake user documents and their Ids back into the tests, and then use this data to conduct the tests. For example, I would find the user id and then generate a chat document and test for correct data / success. Except it seems incredibly messy to write data in one module and then read it back in another, especially since most tests require a specific document to be written to test for certain cases/scenarios. Which makes auto-populated mock data useless - so we're back to square one, where I have to manually define and write out a large number of fake objects in order to test rules, functions and functionality.
Potential Solution: I could maybe keep auth and firestore data in a JSON backup file, (so they remain static) and import them into the database with a shell command before each test suite. However, this has its own issues as it's not easy to dynamically generate new test cases or edge cases, and also difficult to continually re-export and update the JSON backup files as the project grows.
What is a better way to structure and write my tests so that I can automatically generate the documents and payloads I need, while having control and access over what gets written?
I'm hoping there's some kind of factory or pattern that can make this easier, scalable and more consistent and robust.
You're asking a really big question; writing tests for large environments is a complex task, even more so when it's coupled to database state. I will try to answer to the best of my knowledge, so take my words with a grain of salt.
I believe you're dealing with two similar yet different concerns: automatic creation of mock data, and edge cases that require a very specific document setup. These two are tightly coupled within the tests themselves, since you need both kinds of data to run them, but their requirements differ from one another and their creation should take that into account. Let's look at your potential solutions from that perspective.
The JSON backup provides a static and consistent dataset that lets you repeat the tests over time while being sure the environment hasn't changed, and it's a good candidate for the edge-case problem. Its downsides are that it is hard to maintain: any object modification made to accommodate TestA may break the expectations of TestB that also relies on it, and it's almost certain you will lose track of these nuances at some point. You can add new objects to accommodate code and test changes, but this can lead to a combinatorial explosion of objects to take care of as your project grows. Finally, JSON files are not the way to go if you are going to need dynamically generated data.
The factory method is a great option for creating arbitrary mock data, since there are fewer restrictions on it, so writing a generator seems like a good idea. You disliked this on the grounds that you need to know your data while running the tests, but I think that's solvable: your test can load the factory module, create the data, and keep it in memory (or on disk) in addition to committing the changes to Firestore; there's no need to read the data back from the database.
Your other concern is the corner-case documents, which are trickier because you need very specifically shaped data. You could handcraft those documents yourself, but then you have a poorly scalable solution. The alternative is to look for constraints/invariants in the shape of the edge-case documents that you can abstract into factory methods. The worst case is that some edge cases share no similarity with the rest and you need to write a dedicated method for each of them; I wouldn't consider that a downside, as it improves the modularity and maintainability of the factory.
Overall, I would stick with the Factory pattern, because it already offers techniques for following the DRY principle: it isolates the creation of distinct objects, decouples data creation from test execution, and helps avoid disruptive breakage as the tests evolve with the project.
With that being said, a little research led me to this page about the Builder pattern that you may find interesting. This thread about code duplication in tests might also be of interest. And finally, just to note that Firebase has some testing functionality that can be found here.
Hope this helps.

What are Tags in Robot Framework

In Robot Framework, I have seen the term TAG. What is it used for?
When and where can we use tags?
Can we create our own tags, and how?
From the User Guide:
Tags are shown in test reports, logs and, of course, in the test data, so they provide metadata to test cases.
Statistics about test cases (total, passed, failed) are automatically collected based on tags.
With tags, you can include or exclude test cases to be executed.
With tags, you can specify which test cases are considered critical.
And my own points on how I use them:
Mark test cases that are not allowed to be re-run at the end
Mark test cases that are allowed to run in parallel
Add a defect ID as a tag so I know which test cases should pass after the fix
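For illustration, a minimal sketch; the tag names and the Login To Application keyword are made up:

*** Test Cases ***
Valid Login
    [Tags]    smoke    DEFECT-1234
    Login To Application

# Run only the tests tagged 'smoke', or everything except a known defect:
# robot --include smoke tests/
# robot --exclude DEFECT-1234 tests/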

Dynamic test cases in NUnit 3

I have integer values as test cases (IDs of different users), and I don't want to hardcode them; I have a method that gets the users from an API. The specs say that the dynamic test cases feature is not implemented yet. Is it possible to load test cases before the test is executed?
We have used the term "dynamic test cases" to mean that the tests are not created before the run but during it. Specifically, the test cases can change while the test is running.
It doesn't sound like this is what you need. If I understand correctly, you want to get the user ids programmatically at the time the tests are created. You can easily do this by using the TestCaseSourceAttribute on a method that uses your API to get the user ids.
