How to add assertions in monkeyrunner - monkeyrunner

I need to add a couple of assertions on the screen.
Let's say I am on Page 1. I need to verify whether some xxx text is displayed, whether a button is displayed, and also verify the label of the button.
Please help me with how to add assertions in the monkeyrunner script.
Thanks

AFAIK monkeyrunner doesn't have its own assertion mechanism that would suit your need.
You can take a snapshot of your device and use some external image-processing mechanism to verify the interesting parts - but I know that wouldn't be ideal for text comparison.
You can use the Python Imaging Library: http://www.pythonware.com/products/pil/

Take a look at http://developer.android.com/guide/developing/tools/MonkeyImage.html. If you already have a MonkeyImage object that looks correct, you can use MonkeyImage.sameAs() to compare it to the current MonkeyImage.
http://docs.python.org/library/pickle.html might be helpful for saving MonkeyImage objects (I'd like to stress the might, though).
The next version of the SDK should have a method for loading MonkeyImage objects from image files so you can compare them with less work. See https://review.source.android.com//#change,21478 for more info about this change.
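As a rough illustration, here is a minimal sketch of the snapshot-comparison approach in a monkeyrunner (Jython) script. The file names and the 0.9 similarity threshold are placeholders, not anything prescribed by the API:

```python
# Minimal sketch: compare a fresh snapshot of Page 1 against a reference
# snapshot captured earlier in the same run.
from com.android.monkeyrunner import MonkeyRunner, MonkeyDevice

device = MonkeyRunner.waitForConnection()

# Capture the screen in its known-good state and keep a copy on disk.
reference = device.takeSnapshot()
reference.writeToFile('page1_reference.png', 'png')

# ... drive the app, then navigate back to Page 1 ...

current = device.takeSnapshot()
# sameAs() returns True when the two images match within the given percentage.
if not current.sameAs(reference, 0.9):
    current.writeToFile('page1_failure.png', 'png')
    raise AssertionError('Page 1 does not look as expected')
```

Because this compares pixels rather than text, checking the button label this way means comparing the region of the screen where the label is drawn (e.g. via MonkeyImage.getSubImage()) rather than asserting on the string itself.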

Related

Customized JSON report for the Karate framework [duplicate]

I want to have an option on the cucumber report to mute/hide scenarios with a given tag from the results and numbers.
We have a Bamboo build that runs our Karate repository of features and scenarios. At the end it produces nice Cucumber HTML reports. On the "overview-features.html" page I would like an option added to the top right, alongside "Features", "Tags", "Steps" and "Failures", that says "Excluded Fails" or something like that. When clicked, it would provide exactly the same information that overview-features.html does, except that any scenario tagged with a special tag, for example #bug=abc-12345, is removed from the report and excluded from the numbers.
Why do I need this? We have some existing scenarios that fail. They fail due to defects in our own software that might not get fixed for 6 months to a year. We've tagged them with a specific tag, "#bug=abc-12345". I want them muted/excluded from the Cucumber report produced at the end of the Bamboo build for Karate, so I can quickly look at the number of passed features/scenarios and see whether it's 100% or not. If it is, great, that build is good. If not, I need to look into it further, as we appear to have some regression. Without excluding these scenarios, which are expected to fail and will continue to fail until they're resolved, it is very tedious and time-consuming to go through all the individual feature file reports, look at the failing scenarios, and then look into why. I don't want them removed completely, because when they start to pass I need to know, so I can go back and remove the tag from the scenario.
Any ideas on how to accomplish this?
Karate 1.0 has overhauled the reporting system with the following key changes:
after the Runner completes you can massage the results and even re-try some tests
you can inject a custom HTML report renderer
This will require you to get into the details (some of this is not documented yet) and write some Java code. If that is not an option, you have to consider that what you are asking for is not supported by Karate.
If you are willing to go down that path, here are the links you need to get started.
a) Example of how to "post process" result-data before rendering a report: RetryTest.java and also see https://stackoverflow.com/a/67971681/143475
b) The code responsible for "pluggable" reports, where in theory you can implement a new SuiteReports. Also, the Runner has a suiteReports() method you can call to provide your implementation.
Also note that there is an experimental "doc" keyword, by which you can inject custom HTML into a test-report: https://twitter.com/getkarate/status/1338892932691070976
Also see: https://twitter.com/KarateDSL/status/1427638609578967047
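As a very rough sketch of option (b), a runner class might look something like the following. MyReports here is a hypothetical class implementing Karate's SuiteReports interface, and the classpath and thread count are placeholders:

```java
import com.intuit.karate.Results;
import com.intuit.karate.Runner;

class ExcludedFailsRunner {
    public static void main(String[] args) {
        Results results = Runner.path("classpath:features")
                // plug in the hypothetical custom report renderer
                .suiteReports(new MyReports())
                .parallel(5);
        // post-process `results` here (in the spirit of RetryTest.java) before
        // deciding the build status, e.g. ignore failures coming from scenarios
        // that carry the bug tag so they don't break the build.
        System.exit(results.getFailCount() == 0 ? 0 : 1);
    }
}
```

Again, this is only a starting point; the details of SuiteReports and of post-processing the Results object are the parts you would have to dig out of the Karate source and the links above.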

How to create a black-box functionality testing form

I know a little bit about black-box functionality testing, but my supervisor asked me to create the form for my system's evaluation, and I don't know how to start. I need guidance to build the form. Which topics should I include in the form?
A black-box testing document mainly covers:
Action on a particular field
Steps to be followed
Input
Expected Output
Result
Please confirm with your supervisor whether he/she expects the same from you.

Frama-C's extensible printer and projects

I am trying to make changes to the behavior of a function and print the results to a file. The ViewCfg plug-in described in the Plug-in Development Guide does something similar, but I am trying to avoid having to use Ast.get, which ViewCfg uses. I am thinking of extending Printer.extensible_printer which, according to the Frama-C API Documentation, is something I can do if I want to obtain a custom pretty-printer.
However, if I extend the pretty-printer as described in the API docs, unless I'm doing something wrong, I notice that the changes I make take place regardless of which project is set as the current project. I'm using File.create_project_from_visitor to create a new project and Project.set_current to set the new project as the current project before I use the custom pretty-printer.
Is it true that any change made by a class that extends Printer.extensible_printer applies to all projects? I happen to be using frama-c-Aluminium-20160502, which I know is not the latest version.
EDIT: Sorry, I should have made this clearer in the beginning, but I'm not actually making changes to the behavior of a function. I'm reading in the behavior of a function, then based on that, I'm trying to generate as output valid C code that's meant to be read as input by another program.
Regarding Ast.get, the only reason I was avoiding it was that it sounds like it gets the entire AST, while I'm only interested in part of it, i.e. the behaviors. But if I'm just making things harder for myself by avoiding it, then I'll go ahead and use it.

Looking for a simple explanation on using trace logging

I have seen several projects that use the Trace functionality to capture events and stream them out to a log file. I have been unsuccessful in finding a simple-to-follow guide that shows how to configure Trace to capture events and write said log file. Does anyone have any link recommendations, or can you provide some simple steps to follow?
The Trace object writes the statements to any attached TraceListeners. You can build your own, but there are a number already defined in the System.Diagnostics namespace, including:
ConsoleTraceListener (Console)
DefaultTraceListener (Visual Studio / Debugger)
DelimitedListTraceListener (TextWriter, special formatting)
EventLogTraceListener (EventLog - anything that inherits from System.Diagnostics.EventLog)
TextWriterTraceListener (TextWriter - think file)
You can, of course, inherit your own from the TraceListener class and write to wherever you want. For example, you could log to a database, have it send e-mails or pages in certain situations, or write the statements to a logging platform like log4net.
The big thing is that you need to create an instance of whatever listeners you want and then add them to the Trace class's Listeners collection. You can add as many as you need, and Trace will write to all of them. This way, you write your logging code once using a well-supported and well-understood object that's part of the framework, and you can attach anything you need to it.
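A minimal sketch of that pattern, assuming the standard System.Diagnostics types listed above (the "app.log" file name is just a placeholder):

```csharp
using System.Diagnostics;

class Program
{
    static void Main()
    {
        // Attach listeners once at startup; Trace then writes to all of them.
        Trace.Listeners.Add(new TextWriterTraceListener("app.log")); // file-backed
        Trace.Listeners.Add(new ConsoleTraceListener());             // console output
        Trace.AutoFlush = true; // flush after each write so the file stays current

        Trace.WriteLine("Application started");
        Trace.TraceWarning("Something looks odd");
        Trace.TraceError("Something went wrong");
    }
}
```

Keep in mind that Trace calls are only compiled in when the TRACE conditional compilation symbol is defined, which it is by default in Visual Studio project templates. Listeners can also be attached declaratively in the system.diagnostics section of the application's config file instead of in code.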
I stumbled onto an MSDN article that really helps. Sorry I didn't find it before posting the question, but perhaps others may have the same question and haven't found it either.
Take a look at logging frameworks. We rolled our own, but are now migrating over to log4net, available for free at http://logging.apache.org/log4net/
I'm looking for a way to set the Category of the EventLog that the FormattedEventLogTraceListener writes into (not the category of the message).
But I can't find an appropriate property on this class.
Is it possible to set this?
