I have a question about Robot Framework:
For example, I have two cases in a suite and I start a run covering both. After case 1 finishes, I pause the run, modify case 2, and then resume. However, Robot still runs case 2 with the old version.
Is there any way to have Robot run case 2 with the updated version when resuming? I'm asking because this matters for writing and debugging my automation cases: in some situations the suite setup takes about 20 minutes, and I would really like to stay in the suite environment to write new cases and make updates based on the results of the previous run.
We have an application we inherited that's on version 500+; we are not sure why the version is so high, but so be it. We have been using Flyway on it for a few years now, across multiple releases. For example, it started at 500.10.4 and we are now on 500.10.20, so 16 releases, many of which (but not all) contain Flyway scripts.
Anyway, it's been determined that, for simplification, we are to re-version the application to 6.0.0 in the next release. Is there an easy way to let Flyway know of this change, so that if we stand up another instance it runs the 500 scripts first and then moves on to the 6 scripts?
Currently our Flyway script files are named like this:
V500.10.20_2022.05.12.0000.1__xxxx.sql and so on. So in theory our next would be:
V6.0.0_2022.05.13.0000.1__xxxx.sql
I know that Flyway would see version 6 as lower than 500 and ignore it. We currently have Flyway's out-of-order setting set to false. Are there any options to solve this other than setting out-of-order processing to true?
In our situation we do not have any Flyway scripts pre version 500. So what we are going to attempt is a manual script that updates all the data in our xxx_db_version table to version 5.00.xxxx instead of 500.xxxx. That way, when we move to 6.0, all of the new scripts will be seen as next in the sequence. The versions in this table will then no longer match previous actual versions of the application, but the table is not used to display the version of the system or anything like that, and once we move to version 6, the 500-versus-5 difference won't really matter; the order/sequence will still be maintained.
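For illustration, the kind of one-off update we have in mind looks roughly like the following. This is only a sketch: it assumes the history table really is xxx_db_version with a version column holding values like 500.10.20..., and the exact string functions vary by database, so back up the table first.

    -- Rewrite the leading "500" component to "5.00" so that future 6.x
    -- versions sort after the already-applied rows.
    UPDATE xxx_db_version
    SET version = '5.00' || SUBSTRING(version FROM 4)
    WHERE version LIKE '500.%';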
If this does not work, I will post a follow up.
I am looking for a way to make Robot Framework exit the execution of a test suite if a specific test passes. It is the exact opposite of what --exitonfailure does, so I want to know if there is a way to do this with Robot Framework.
Up to and including Robot Framework 3.1 there is no good way to skip remaining tests once a run has started, except to call the Fatal Error keyword. Being able to skip tests is a feature people have wanted for many years now.
At the time of writing, it does not appear that this feature will be added in version 3.2.
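As a rough sketch of that workaround (everything except the BuiltIn keywords Run Keyword If Test Passed and Fatal Error is made up for illustration), you can abort the whole run from the teardown of the test whose pass should stop execution:

    *** Test Cases ***
    Critical Check
        [Documentation]    If this test passes, abort the rest of the run.
        Do The Critical Check    # hypothetical user keyword
        [Teardown]    Run Keyword If Test Passed    Fatal Error    Critical check passed, stopping remaining tests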
On-prem TFS 2015 Update 3.
I have multiple machines (different operating systems) that I want to run my tests on. I'm having trouble getting this simple flow to work. Here's what I've tried:
The Deploy Test Agent task succeeds on multiple machines.
If I put multiple machines in one "Run Functional Tests" task, it executes the test on only ONE of the machines from step 1 (and completes successfully if this is the first task). Logs here: One Task
If I set up 2 separate tasks, one for each machine, the 1st task executes successfully, but as in bullet 2, the test runs on ANY ONE of the machines from step 1 (NOT the specific machine specified for the task). In the example attached, the 1st task is set up to run on Win7, but the test was actually executed on the Win8 machine.
Then the 2nd task (which is set up to run against the Win10 machine) will not complete, no matter what machine or test I put in it. Logs for this scenario attached: Two Tasks
It seems that the PS script(s) for this task are broken in our environment.
Thanks!
The solution is to configure the test agents separately: configure one agent, then run the tests, then configure the other agent and run the tests again.
P.S.: No, I do not want to debug my script. It is pretty awesome.
The problem is the application under test. If I place a few orders, it crashes. So what I want to do is: mid-execution, when I see that the application has crashed, pause the test script, bring the application back up, and then resume the test.
I know that this is not the right time to be running the test scripts, as the application is not stable enough, but the developers are working on it and will hopefully fix it soon. I am just curious whether there is a solution, because I couldn't find one. Of course I could have built bringing the application back up into my tests when it crashes, but that is not what I want to do.
My system:
OS: Linux Mint
Tests: Watir (Ruby) + Cucumber on Chrome
I run the tests from the Linux terminal using Cucumber tags.
I just want to know, in general, if there is any way to pause and resume execution. For example, when I want to stop all the tests, I send the command-line interrupt Ctrl + C. Is there any such command to pause and resume?
Okay, since you want a "general" answer, here goes...
Based on your context, you are looking for a "crashed" condition in your project.
My own approach to solving this problem would involve writing a helper method that looks for this condition and, if it is met, "pauses". For example...

    # Assumes @browser is the Watir browser (or a page object wrapping it) and
    # that product_price is an element that disappears when the app crashes:
    # if it cannot be found, wait 30 seconds so the app can be brought back up.
    def pause_if_crashed
      sleep 30 if @browser.product_price.nil?
    end
Then I would sprinkle this helper method in likely "crash" spots in my other functional methods.
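Something like the following, where the page, locators, and method names are all made up for illustration:

    # Hypothetical functional method: pause right after the step that tends
    # to crash the application, then carry on once it is back up.
    def place_order(item)
      @browser.text_field(id: 'item-search').set(item)
      @browser.button(id: 'add-to-cart').click
      pause_if_crashed
      @browser.button(id: 'checkout').click
    end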
Without more specifics about your needs, this is about as helpful as I can get, I think.
We use the grunt protractor runner and have 49 specs to run.
When I run them in Sauce Labs, there are times it only runs some of the tests rather than all of them. Any idea why? Are there any Sauce settings to be passed apart from the user and key in my protractor conf.js?
Using SauceLabs selenium server at http://ondemand.saucelabs.com:80/wd/hub
[launcher] Running 1 instances of WebDriver
Started
.....
Ran 5 of 49 specs
5 specs, 0 failures
This kind of output is usually produced when there are "focused" tests present in the codebase. Check whether there are fdescribe or fit blocks in your tests.
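For example, a focused suite like this (the spec names are hypothetical) will produce exactly that kind of "Ran 5 of 49 specs" output, because everything outside it is skipped:

    // Focused: only this suite and spec run; drop the "f" prefix
    // (fdescribe -> describe, fit -> it) to run all 49 specs again.
    fdescribe('checkout page', function () {
      fit('places an order', function () {
        // ...
      });
    });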
As a side note, to avoid focused tests being committed to the repository, we've used static code analysis: eslint with the eslint-plugin-jasmine plugin. Then we've added a "pre-commit" git hook with the help of the pre-git package that runs the eslint task before every commit, preventing any such violations from being committed to the repository.
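A minimal sketch of the eslint side, assuming an .eslintrc.js config file (the plugin and rule names come from eslint-plugin-jasmine; adjust severity and file layout to your setup):

    // .eslintrc.js - fail the lint run whenever a focused test sneaks in.
    module.exports = {
      env: { jasmine: true },
      plugins: ['jasmine'],
      rules: {
        'jasmine/no-focused-tests': 'error'
      }
    };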