I am new to Robot Framework. I want to know how to capture screenshots on failure.
Doesn't Robot Framework automatically take screenshots if a script fails?
An example will be of great help!
This is actually a feature of the Selenium2Library, which you would be using with Robot Framework if you were doing Selenium-based tests.
More information can be found here: http://robotframework.org/Selenium2Library/doc/Selenium2Library.html
As it says in the documentation, setting up screenshots on failure is very easy; for example, here is the library import from a test suite I'm working with:
Library    Selenium2Library    timeout=10    implicit_wait=1.5    run_on_failure=Capture Page Screenshot
You can use the keyword below to capture a screenshot after any step you want:
Capture Page Screenshot
Hope this was helpful!
In this case the teardown is executed whether the test case passes or fails, and if the test case fails, it will take a screenshot:
[Teardown]    Run Keyword If Test Failed    Capture Page Screenshot
Or you can do it even better at the suite level, if you don't need different teardowns for particular tests (this goes in the Settings table):
Test Teardown    Run Keyword If Test Failed    Capture Page Screenshot
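Putting the pieces together, a minimal suite might look roughly like this (the URL and expected title are placeholder values; with run_on_failure set on the library, the Test Teardown line is technically redundant, but it shows where the suite-level setting goes):
*** Settings ***
Library           Selenium2Library    timeout=10    implicit_wait=1.5    run_on_failure=Capture Page Screenshot
Test Teardown     Run Keyword If Test Failed    Capture Page Screenshot
Suite Teardown    Close All Browsers

*** Test Cases ***
Page Should Load
    Open Browser    https://example.com    firefox
    Title Should Be    Example Domain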
So far, all of the other answers assume that you are using Selenium.
If you are not, there is a "Screenshot" library that has the keyword "Take Screenshot". To include this library, all you need to do is put "Library    Screenshot" in your Settings table.
In my Robot Framework code, my teardowns all just reference a keyword I made called "Default Teardown", which is defined as:
Default Teardown
    Run Keyword If Test Failed    Take Screenshot
    Close All
(I think Close All might be one of my custom keywords.)
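For reference, a minimal non-Selenium suite wiring this together might look like the following (the example test is a placeholder, and Close All is left out since it is a custom keyword):
*** Settings ***
Library          Screenshot
Test Teardown    Default Teardown

*** Test Cases ***
Example Test
    Should Be True    ${True}    # replace with your real steps

*** Keywords ***
Default Teardown
    Run Keyword If Test Failed    Take Screenshot
    # your own cleanup keywords (e.g. Close All) would go here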
I have noticed a few issues with the Take Screenshot keyword. Some of this may be configurable, but I don't know. First off, it takes a screenshot of your whole screen, not necessarily just the application you are interested in. So if you're using this and allowing other people to view the resulting screenshots, make sure you don't have anything else on your screen that you wouldn't want to share.
Also, if you kick off your tests and then lock your screen so you can take a quick break while it runs, all of your screenshots will just be pictures of your lock screen.
I'm using this on my Jenkins server as well, which uses the xvfb-run command to create a sort of fake GUI to run the Robot Framework tests. If you're doing this too, make sure your xvfb-run command includes something along the lines of
xvfb-run --server-args="-screen 0 1024x768x24" <rest of your command>
You'll have to decide what screen resolution works the best for you, but I found that with the default screen resolution, only a small portion of my app was captured.
Long story short, I think you're better off using Capture Page Screenshot if you're using Selenium. However, if you're not, this may be your best (or only) solution.
I want to have an option on the Cucumber report to mute/hide scenarios with a given tag from the results and numbers.
We have a Bamboo build that runs our Karate repository of features and scenarios. At the end it produces nice Cucumber HTML reports. On "overview-features.html" I would like an option added to the top right, alongside "Features", "Tags", "Steps" and "Failures", that says "Excluded Fails" or something like that. When clicked, it would show exactly the same information that overview-features.html does, except that any scenario tagged with a special tag, for example #bug=abc-12345, is removed from the report and excluded from the numbers.
Why I need this: we have some existing scenarios that fail. They fail due to defects in our own software that might not get fixed for six months to a year. We've tagged them with a specific tag, "#bug=abc-12345". I want them muted/excluded from the Cucumber report produced at the end of the Bamboo build so I can quickly check whether the number of passed features/scenarios is 100%. If it is, great, that build is good. If not, I need to look into it further, as we appear to have some regression. With these expected failures included, it is very tedious and time-consuming to go through all the individual feature file reports, look at the failing scenarios, and then investigate why each one failed. I don't want them removed completely, because when they start to pass I need to know, so I can go back and remove the tag from the scenario.
Any ideas on how to accomplish this?
Karate 1.0 has overhauled the reporting system, with the following key changes:
after the Runner completes, you can massage the results and even re-try some tests
you can inject a custom HTML report renderer
This will require you to get into the details (some of this is not documented yet) and write some Java code. If that is not an option, you have to consider that what you are asking for is not supported by Karate.
If you are willing to go down that path, here are the links you need to get started.
a) Example of how to "post process" result-data before rendering a report: RetryTest.java and also see https://stackoverflow.com/a/67971681/143475
b) The code responsible for "pluggable" reports, where you can implement a new SuiteReports in theory. And in the Runner, there is a suiteReports() method you can call to provide your implementation.
Also note that there is an experimental "doc" keyword, by which you can inject custom HTML into a test-report: https://twitter.com/getkarate/status/1338892932691070976
Also see: https://twitter.com/KarateDSL/status/1427638609578967047
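To show the shape of that approach, here is a rough, hedged sketch (the class name is a placeholder, the exclusion logic is only indicated in comments, and you should verify the exact Runner/SuiteReports method names against RetryTest.java and the Karate source):
import com.intuit.karate.Results;
import com.intuit.karate.Runner;

class BugTagReportRunner { // placeholder name, not from the Karate docs

    public static void main(String[] args) {
        Results results = Runner.path("classpath:features")
                .outputCucumberJson(true) // keep the cucumber JSON that feeds the existing HTML report
                // .suiteReports(new MySuiteReports()) // custom renderer that could skip @bug=... scenarios
                .parallel(5);
        // post-process "results" here (see RetryTest.java) before deciding whether the build is green
        System.out.println("failed count: " + results.getFailCount());
    }
}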
I am trying to show progress in the taskbar of the Plasma desktop using the KDE Frameworks. In short, I want to do the same thing as Dolphin when it copies files.
I'm kinda stuck, because I don't even know where to get started. The only thing I found that could be useful is KStatusBarJobTracker, but I don't know how to use it. I could not find any tutorials or examples of how to do this.
So, after digging around, and thanks to the help of #leinir, I was able to find out the following:
Since Plasma 5.6, KDE supports the Unity DBus Launcher API, which can be used, for example, to show progress.
I found a post on AskUbuntu that explains how to use the API with Qt.
The real catch is that this only works if you have a valid desktop file in one of the standard locations! You need to pass the file as a parameter of the DBus message to make it work.
Based on this information, I figured out how to use it and created a GitHub repository that supports cross-platform taskbar progress and uses this API for the Linux implementation.
However, here is how to do it anyway. It should work for KDE Plasma and the Unity desktop, maybe more (I haven't tried any others):
Create a .desktop file for your application. For testing purposes, this can be a "dummy" file that could look like this:
[Desktop Entry]
Type=Application
Version=1.1
Name=MyApp
Exec=<path_to>/MyApp
Copy that file to ~/.local/share/applications/ (or wherever user-specific desktop files go on your system).
In your code, all you need to do is execute the following to update the taskbar state:
auto message = QDBusMessage::createSignal(QStringLiteral("/com/example/MyApp"),
                                          QStringLiteral("com.canonical.Unity.LauncherEntry"),
                                          QStringLiteral("Update"));

// you don't always have to specify all parameters, just the ones you want to update
QVariantMap properties;
properties.insert(QStringLiteral("progress-visible"), true); // enable the progress
properties.insert(QStringLiteral("progress"), 0.5);          // set the progress value (from 0.0 to 1.0)
properties.insert(QStringLiteral("count-visible"), true);    // display the "counter badge"
properties.insert(QStringLiteral("count"), 42);              // set the counter value

message << QStringLiteral("application://myapp.desktop") // assuming you named the desktop file "myapp.desktop"
        << properties;
QDBusConnection::sessionBus().send(message);
Compile and run your application. You don't have to start it via the desktop file, at least I did not need to. If you want to be sure your application is "connected" to that desktop file, just set a custom icon for the file. Your application should show that icon in the taskbar.
And that's basically it. Note: the system remembers the last state when the application is restarted, so you should reset all those parameters once when starting the application.
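For example, a minimal reset right after startup could reuse the exact same call with the visibility flags switched off (same object path and desktop-file name as in the snippet above):
// clear any stale progress/badge left over from the previous run
QVariantMap reset;
reset.insert(QStringLiteral("progress-visible"), false);
reset.insert(QStringLiteral("count-visible"), false);

auto resetMessage = QDBusMessage::createSignal(QStringLiteral("/com/example/MyApp"),
                                               QStringLiteral("com.canonical.Unity.LauncherEntry"),
                                               QStringLiteral("Update"));
resetMessage << QStringLiteral("application://myapp.desktop") << reset;
QDBusConnection::sessionBus().send(resetMessage);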
Right, so as it turns out you are correct, there is not currently a tutorial for this. This Review Board request, however, shows how it was implemented in KDevelop, and it should be possible for you to work it out from that :) https://git.reviewboard.kde.org/r/127050/
PS: the fact that there is no tutorial right now might be a nice opportunity for you to hop in and help out by writing a small, self-contained tutorial for it... something I'm sure would be very much welcomed :)
I use Squish 4.2.2 for testing the GUI of our tools and purecov.i386_linux2.7.3 for coverage. Our tools are based on the qt-4.7.4_qsci version of Qt. After building our tools in PureCov mode, our tests fail whenever they contain an operation on a pop-up menu, and PureCov cannot generate the resulting *.pcv file. I would also like to note that our tools do not fail when they are run without Squish; however, the pop-up menu then opens only after 30-60 seconds (in normal mode it opens within 1-2 seconds).
So I have two issues:
1. When tests are run with Squish, they fail if they contain an operation on a menu item.
2. PureCov does not generate the *.pcv file when the tests fail.
I tried to find some interesting things on your site for resolving those problems, but I couldn't find anything related to my issues.
In my opinion, Squish fails because when I try to open a menu item, the GUI runs faster than its logic part; after opening the menu item, Squish considers the operation done and kills my tool.
Could you please tell me what I can do with my tests or tools for resolving those problems?
Thanks.
I had a similar issue in the past, with clicking menus from my app.
I hope this helps you also!
Example:
I wanted to open the "File" menu, followed by the (pop-up) sub-menu "New". In Squish's record mode, Squish records the following code in Python:
activateItem(waitForObjectItem(":MainWindowForm.m_poMainMenu_QMenuBar", "File"))
activateItem(waitForObjectItem(":MainWindowForm.menuFile_QMenu", "New..."))
Now, this didn't work all the time; honestly, I didn't manage to understand why :).
But I found a possible solution on their site. So I replaced the symbolic names from the code above and created a function that looks up the objects by their real name properties:
def callMenu(menu_name, submenu_name):
    # open a top-level menu by its text, then activate an entry in the pop-up menu,
    # using real name lookups instead of the recorded symbolic names
    activateItem(waitForObjectItem("{type='QMenuBar' visible='true'}", menu_name))
    activateItem(waitForObjectItem("{type='QMenu' title='%s'}" % menu_name, submenu_name))
After I made this change, the tests ran smoothly, without any more problems (at least on the menu side).
I need to add a couple of assertions on the screen.
Let's say I am on Page 1. I need to verify whether some text "xxx" is displayed, whether a button is displayed, and also verify the label of the button.
Please help me with how to add assertions in a MonkeyRunner script.
Thanks
AFAIK MonkeyRunner doesn't have its own assertion mechanism that would suit your need.
You can take a snapshot of your device and use some external image-processing mechanism to verify interesting parts - but I know that wouldn't be ideal for text comparison.
You can use the Python Imaging Library: http://www.pythonware.com/products/pil/
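For example, a rough sketch of such an external check (run as a plain Python script after saving the snapshot to a file; the file names and crop box are made-up placeholders) could be:
from PIL import Image

screen = Image.open('current_screen.png')       # snapshot saved via MonkeyImage.writeToFile()
button = screen.crop((0, 400, 480, 500))        # (left, upper, right, lower) region of interest
reference = Image.open('reference_button.png')  # previously approved image of the same region

# naive pixel-for-pixel comparison; a real test would usually allow some tolerance
assert list(button.getdata()) == list(reference.getdata()), "button region changed"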
Take a look at http://developer.android.com/guide/developing/tools/MonkeyImage.html. If you already have a MonkeyImage object that looks correct, you can use MonkeyImage.sameAs() to compare it to the current MonkeyImage.
http://docs.python.org/library/pickle.html might be helpful for saving MonkeyImage objects (I'd like to stress the might, though).
The next version of the SDK should have a method for loading MonkeyImage objects from image files, so you can compare them with less work. See https://review.source.android.com//#change,21478 for more info about this change.
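Putting those pieces together, a rough sketch of such a check could look like the following (the region coordinates and similarity threshold are illustrative assumptions, not values from the question; the MonkeyRunner/MonkeyDevice/MonkeyImage calls are the ones documented at the links above):
# run with the monkeyrunner tool (Jython)
from com.android.monkeyrunner import MonkeyRunner, MonkeyDevice

device = MonkeyRunner.waitForConnection()

reference = device.takeSnapshot()      # capture a known-good state of Page 1 once
# ... drive the app through the steps you want to verify ...
current = device.takeSnapshot()

# compare only the region where the button and its label should be: (x, y, width, height)
region = (0, 400, 480, 100)
if not current.getSubImage(region).sameAs(reference.getSubImage(region), 0.9):
    current.writeToFile('page1_mismatch.png', 'png')   # keep evidence for debugging
    raise AssertionError('Page 1 button/label region does not match the reference')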