How to stop Robot Framework suite execution on console error? - robotframework

Robot Framework may print some errors to the console, e.g. when it cannot find a file at a given path. These are not test or suite failures; the error is only logged to the console, and Robot Framework continues executing suites and tests after it. So my question is: how can I change this default behavior and tell Robot Framework to stop execution when it encounters this kind of error?
Maybe there is some command line option or special tag for this. I searched the docs here https://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#errors-and-warnings and did not find any information about that. Am I missing something?
Example of such error:
[ ERROR ] Error in file 'D:\Projects\project\suites\02__app\05__d\editing.robot': Resource file '..\..\..\resources\keywords\attribute.resource' does not exist.
Suites.App.d.Editing

I believe the option --exitonerror is the command line option you are looking for.
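For example (the suites directory is just an illustration of whatever you normally pass to robot), running with this option makes execution stop as soon as such an error is logged:
robot --exitonerror suites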

Related

How to verify the syntax of Robot Framework code

I have Robot Framework tests written on Linux. I often get syntax issues in my Robot Framework code.
Is there any online tool, or something in Python, to trace where the syntax error is occurring (which line)?
You can use the --dryrun command line argument to check test data validity and syntax (an example invocation follows the quoted list below).
From the user guide, which I strongly suggest browsing in general:
Robot Framework supports so called dry run mode where the tests are run normally otherwise, but the keywords coming from the test libraries are not executed at all. The dry run mode can be used to validate the test data; if the dry run passes, the data should be syntactically correct. This mode is triggered using option --dryrun.
The dry run execution may fail for following reasons:
Using keywords that are not found.
Using keywords with wrong number of arguments.
Using user keywords that have invalid syntax.
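For example (the tests directory is only an illustration), a dry run over your suites can be started like this; it reports the problems listed above without executing any library keywords:
robot --dryrun tests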

Command line option for Robot Framework to get console output instead of output logs

How can I get Robot Framework's run output in the console instead of in the log files by using a command line option?
There is no such option.
You might be able to create something with the listener interface, hooking in at the suite/case/keyword level, getting the current output, and printing filtered info to the console.
Keep in mind that the logging is extensive - look at a sample output.xml; having that much information in the console may be overwhelming.
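As a rough sketch of that idea (not a built-in option; the file name ConsoleListener.py and the filtering choices are my own assumptions), a minimal listener using listener API version 2 could print a filtered view to the console:

# ConsoleListener.py - illustrative listener, attached with:
#   robot --listener path/to/ConsoleListener.py tests
ROBOT_LISTENER_API_VERSION = 2

def start_suite(name, attributes):
    print("SUITE START: %s" % attributes['longname'])

def end_test(name, attributes):
    # One short line per test instead of the full log contents.
    print("TEST %s: %s (%s ms)" % (attributes['status'], name, attributes['elapsedtime']))

def log_message(message):
    # Forward only warnings and errors logged by keywords.
    if message['level'] in ('WARN', 'ERROR'):
        print("%s: %s" % (message['level'], message['message']))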

How can we redirect the 'RobotTempDir' folder to save at a different location on Windows 10?

I am working with Robot Framework using the RED editor on the Eclipse IDE. When I ran a Robot test case, an error as shown in the screenshot occurred.
Upon tracing back my actions, I noticed that RobotTempDir... got deleted from the Temp folder. I restored that folder and ran the test case again, and it executed successfully.
In the future there is a chance that, while cleaning the Temp folder contents, RobotTempDir... may get deleted unknowingly. Is there a way to redirect the RobotTempDir... contents so they are saved in a different location?
I looked into the C:\Python36\Lib\site-packages\robot path and didn't find any files where I can change/update the Robot temp folder details.
TestRunnerAgent.py is not part of the Robot Framework application but instead comes with the RED plugin. It is part of RED's Robot Run functionality, which allows RED to retrieve information from Robot Framework while it is running.
This information is then displayed in the Eclipse Message Log panel or used when using the RED debugger functionality.
In my view this file is generated every time Eclipse is started and I think the only time this error would occur is when that file/folder is deleted while Eclipse is running. Restarting Eclipse should fix this.
TestRunnerAgent.py is a custom listener which is attached to the Robot process to report back to RED what is happening during test execution. For normal test runs, this is the Execution View information and also the Message Log output printed there. For a Debug run, TestRunnerAgent.py allows controlling the execution process (breakpoint stops, stepping) and changing Robot's internals (the state of variables).
It is embedded in the RED package and, as you said, it is temporarily placed in the Temp dir for execution. If you would like to check the source, either look inside the jar file org.robotframework.ide.core-functions-0.0.1-SNAPSHOT.jar or on GitHub: https://github.com/nokia/RED/tree/master/src/RobotFrameworkCore/org.robotframework.ide.core-functions/src/main/python/scripts
Back to your issue:
RED starts Robot execution with the following command:
<selected python interpreter> -m robot.run --listener <path to TestRunnerAgent.py> <details what to run and other miscs>
There is no indication of such an error in TestRunnerAgent.py, although there is in RobotLaunchConfigurationDelegate.java, which tries to start Robot using the interpreter from the current project configuration. I assume that there is something wrong in your environment setup (either in RED or in the OS).
I would suggest checking the following:
check if you selected the proper Python interpreter with Robot installed (from Windows -> Preferences -> RobotFramework -> Interpreters)
check if your project looks similar to the one shown here: http://nokia.github.io/RED/help/user_guide/quick_start.html
you can try to use a custom script to catch the Robot execution command and remove the --listener part to validate whether this is the culprit (a sketch of such a script follows this list): http://nokia.github.io/RED/help/user_guide/launching/local_launch_scripting.html
the full command should be shown in the Console View - try to run it by yourself
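As an illustration of the custom-script suggestion above (the file name strip_listener.py and the assumption that RED passes the generated Robot command line to the script as arguments are mine; RED's local launch scripting page describes the actual hand-over), a wrapper could drop the --listener pair before delegating to Robot:

# strip_listener.py - hypothetical wrapper configured as RED's executable file.
import subprocess
import sys

args = sys.argv[1:]          # assumed to be the full Robot command generated by RED
cleaned = []
skip_next = False
for arg in args:
    if skip_next:            # drop the TestRunnerAgent.py path that follows --listener
        skip_next = False
        continue
    if arg == "--listener":
        skip_next = True
        continue
    cleaned.append(arg)

# Run the original command without the RED listener attached.
sys.exit(subprocess.call(cleaned))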

Is there a way to monitor a test log for an error string using Robot Framework?

I am a newbie to Robot Framework. I just wanted to know whether I can monitor my application log for a certain keyword, i.e. FAIL, and if I find the message, stop the test and report a failure.
It seems to me that the standard keyword Grep File might be appropriate. You can find more details in the documentation of the built-in OperatingSystem library. An example is given in this SO answer.
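A minimal sketch of that idea, assuming the application writes to a log file whose path is stored in ${APP_LOG} and that a problematic entry contains the word FAIL (both assumptions of mine):

*** Settings ***
Library    OperatingSystem

*** Test Cases ***
Application Log Should Not Contain FAIL
    ${lines} =    Grep File    ${APP_LOG}    *FAIL*
    Should Be Empty    ${lines}    Found FAIL entries in the application log: ${lines}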

After importing failed test results, the build status is 'Succeeded' in TeamBuild

Using VSTS (formerly known as VSO), I'm importing test results from a 3rd-party testing tool.
This is working fine; however, when the imported results contain a failure, I would expect the build to fail, but it doesn't, as seen below.
Any advice? This seems like a bug.
The vNext build passes or fails based on the execution status of each step in the build definition. It does not check the published test results. You can submit a feature request on VSTS User Voice.
