How to get the right rank when searching a Google page - robotframework

I'm using Robot Framework (RF) to search for a keyword related to my website so I can find the ranking position of my site on the Google search page (rank: 1st, 2nd, ... and on which page? 1st page, 2nd page?).
Here is my code:
*** Test Cases ***
Rank
    Open Browser    http://www.google.com.vn    gc
    Input Text    name=q    atdd framework
    Submit Form    xpath=//form[@action='/search']
    Wait Until Element Is Visible    xpath=//div[@class='srg']/li[@class='g']
    ${xpa-count}=    Get Matching Xpath Count    xpath=//div[@class='srg']/li[@class='g']
    ${lis}=    Create List
    : FOR    ${i}    IN RANGE    1    ${xpa-count} + 1    # XPath indexes are 1-based, but Python is 0-based
    \    ${li}=    Get Text    xpath=//div[@class='srg']/li[@class='g'][${i}]
    \    Append To List    ${lis}    ${li}
    \    Log    ${li}
    Log List    ${lis}
    List Should Contain Value    ${lis}    robotframework.org/
    ${rank}=    Get Index From List    ${lis}    robotframework.org/
    ${rank}=    Evaluate    unicode(${rank} + 1)    # increment to get back to 1-based index
    Log    ${rank}
    Log    robotframework.org has rank ${rank}
    [Teardown]    Close All Browsers
But the ranking position in RF's log doesn't match what the Google screen shows:
Documentation:
Logs the length and contents of the `list` using given `level`.
Start / End / Elapsed: 20140509 10:25:51.025 / 20140509 10:25:51.026 / 00:00:00.001
10:25:51.026 INFO List length is 10 and it contains following items:
0: atdd-with-robot-framework - Google Code
code.google.com/p/atdd-with-robot-framework/ - Translate this page
This project is a demonstration on how to use Acceptance Test Driven Development (ATDD, a.k.a. Specification by Example) process with Robot Framework.
1: ATDDWithRobotFrameworkArticle - Google Code
code.google.com/.../robotframework/.../ATDDWithRobot...
Translate this page
21-11-2010 - Acceptance Test-Driven Development with Robot Framework article by Craig Larman ... See also ATDD With Robot Framework demo project.
2: tdd - ATDD versus BDD and the proper use of a framework ...
stackoverflow.com/.../atdd-versus-bdd-and-the-proper-us...
Translate this page
29-07-2010 - The Quick Answer. One very important point to bring up is that there are two flavors of Behavior Driven Development. The two flavors are xBehave ...
3: ATDD Using Robot Framework - SlideShare
www.slideshare.net/.../atdd-using-robot-framework
Translate this page
23-11-2011 - A brief introduction to Acceptance Test Driven Development and Robot Framework.
4: [PDF]
acceptance test-driven development with robot framework
wiki.robotframework.googlecode.com/.../ATDD_with_Ro...
Translate this page
WITH ROBOT FRAMEWORK by Craig Larman and Bas Vodde. Version 1.1. Acceptance test-driven development is an essential practice applied by suc-.
5: Robot Framework
robotframework.org/
Translate this page
Robot Framework is a generic test automation framework for acceptance testing and acceptance test-driven development (ATDD). It has easy-to-use tabular test ...
6: Robot Framework - Google Code
https://code.google.com/p/robotframework/
Translate this page
Robot Framework is a generic test automation framework for acceptance testing and acceptance test-driven development (ATDD). It has easy-to-use tabular test ...
7: Selenium 2 and Thucydides for ATDD | JavaWorld
www.javaworld.com/.../111018-thucydides-for-atdd.html
Translate this page
18-10-2011 - Find out how Thucydides extends and rethinks ATDD. ... In this article I introduce Thucydides, an open source ATDD framework built on top of ...
8: ATDD | Assert Selenium
assertselenium.com/category/atdd/
Translate this page
24-01-2013 - Thucydides In this article I introduce Thucydides, an open source ATDD framework built on top of Selenium 2. Introducing Thucydides ...
9: (ATDD) with Robot Framework - XP2011
xp2011.org/content.apthisId=180&contentId=179
Translate this page
Acceptance Test Driven Development (ATDD) with Robot Framework. Executable requirements neatly combine two important XP practices: user stories and ...
Please take a look: in the log the 6th position is Stack Overflow, but on Google it is 5th.
And one more question: how do I extend my test case so that, if my website isn't on the 1st page, it searches the next page and then gets the page ID?
Thanks.

The results on the page could be different between when your test runs and when you visually inspect. The first step would be to use the Selenium2Library Get Source keyword to dump the HTML of the page. You can then use that to verify whether your code is working properly or not.
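For example, right after submitting the search you could capture and log the page source (a minimal sketch; Get Source comes from Selenium2Library and Log from BuiltIn):
${html}=    Get Source
Log    ${html}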
To extend your test case, I would start by writing a keyword named "Get Result" that returns whether the result was found and its ranking. I would write another keyword named "Next Page" that goes to the next page of results. Then I would write a keyword called "Find Page With Result" that calls "Get Result" in a loop; if the result isn't found, it calls "Next Page" and loops again. Eventually this keyword will land on the page that contains the result, as in the sketch below.
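Here is a minimal sketch of that structure, assuming Selenium2Library, the locators from the original test, and Google's old id=pnnext locator for the "Next" link (all of which may need updating for the current markup):
*** Keywords ***
Find Page With Result
    [Arguments]    ${url}    ${max_pages}=5
    # Scan up to ${max_pages} result pages; return the page number and rank of ${url}.
    : FOR    ${page}    IN RANGE    1    ${max_pages} + 1
    \    ${found}    ${rank}=    Get Result    ${url}
    \    Return From Keyword If    ${found}    ${page}    ${rank}
    \    Next Page
    Fail    ${url} was not found on the first ${max_pages} result pages

Get Result
    [Arguments]    ${url}
    # Look for ${url} in the text of each result block on the current page.
    ${count}=    Get Matching Xpath Count    xpath=//div[@class='srg']/li[@class='g']
    : FOR    ${i}    IN RANGE    1    ${count} + 1
    \    ${text}=    Get Text    xpath=//div[@class='srg']/li[@class='g'][${i}]
    \    Return From Keyword If    '''${url}''' in '''${text}'''    ${True}    ${i}
    [Return]    ${False}    ${0}

Next Page
    # id=pnnext is an assumption about Google's "Next" link.
    Click Element    id=pnnext
    Wait Until Element Is Visible    xpath=//div[@class='srg']/li[@class='g']
You would then call it as ${page}    ${rank}=    Find Page With Result    robotframework.org/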

Related

Is there a way to generate separate TestExecution files when using multiple threads?

I am attempting to write a tool that will automate the generation of a Visual Studio test playlist based on the failed tests from the SpecFlow report. We recently increased our testThreadCount to 4, and when using the LivingDocumentation plugin to generate the TestExecution.json file it only generates a result for 1 in 4 tests; I think this is due to the thread count, so the 4 tests are being seen as a single execution.
My aim is to generate a fully qualified test name for each of the failed tests using the TestExecution file, but this will not work if I am only generating 25% of the results. Could I ask if anyone has an idea of a workaround for this?
<Execution stopAfterFailures="0" testThreadCount="4" testSchedulingMode="Sequential" retryFor="Failing" retryCount="0" />
These are our current execution settings in the .srprofile.
We made this possible with the latest version of SpecFlow and the SpecFlow+ LivingDoc Plugin.
You can configure the filename for the TestExecution.json via specflow.json.
Here is an example:
{
  "livingDocGenerator": {
    "enabled": true,
    "filePath": "TestExecution_{ProcessId}_{ThreadId}.json"
  }
}
ProcessId and ThreadId will be replaced with their actual values, so you get a separate TestExecution.json for every thread.
You can then give the livingdoc CLI tool or the Azure DevOps task a list of TestExecution.jsons.
Example:
livingdoc test-assembly BookShop.AcceptanceTests.dll -t TestExecution*.json
This generates one LivingDoc with all the test execution results combined.
Documentation links:
https://docs.specflow.org/projects/specflow-livingdoc/en/latest/LivingDocGenerator/Setup-the-LivingDocPlugin.html
https://docs.specflow.org/projects/specflow-livingdoc/en/latest/Guides/Merging-Multiple-test-results.html

Can I map 2 test cases in Jira-Xray to 1 test case in an automation script and upload JUnit XML test results?

We are using the NightwatchJS automation tool for testing. We have 2 test cases in Jira-Xray and 1 test case in automation. When we run automation, the JUnit XML test results contain only 1 test case. If the JUnit XML test results are uploaded, will it mark both test cases mapped in Jira as pass/fail?
It should only map to one test case, following the rules described in the integration specifics page.
Xray will try to find an existing "Generic" Test issue with a Generic Definition field composed of the classname and name attributes of the testcase element; these are used as a unique identifier for the test.
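For example, given a hypothetical JUnit result like this:
<testsuite name="checkout" tests="1" failures="0">
  <testcase classname="checkout.smoke" name="guest checkout completes" time="12.3"/>
</testsuite>
Xray would look for the single Generic Test whose Generic Definition is built from those two attributes (roughly checkout.smoke.guest checkout completes), so one result in the report maps to one test issue, not two.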

how to test Asyncronous services using robot framework

I have been using Robot Framework for testing my REST APIs.
But I have no clue whether we can test asynchronous services. I started to look at the robotframework-async library but still could not figure it out.
Robot Framework is able to test asynchronous services. The trick is using wait conditions before your verification.
Try:
*** Test Cases ***
Load And Verify Table Data
    Click Button To Load Table
    Verify First Row Details

*** Keywords ***
Click Button To Load Table
    Click Element    ${SOME_BUTTON}
    Wait Until Element Is Enabled    ${SOME_ELEMENT_TO_WAIT_FOR}

Verify First Row Details
    Table Row Should Contain    ${TABLE_LOCATOR}    1    ${SOME TEXT}
Or something like that. Just whatever you do, don't use Sleep keywords. You will regret it later.
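If the asynchronous service is a REST API rather than a UI, the same idea applies with the BuiltIn keyword Wait Until Keyword Succeeds. Here is a minimal sketch assuming RequestsLibrary and a hypothetical /jobs/42/status endpoint that eventually reports "done":
*** Settings ***
Library    RequestsLibrary

*** Test Cases ***
Async Job Eventually Completes
    Create Session    api    https://api.example.com
    # Retry the check every 2 seconds for at most 30 seconds instead of sleeping.
    Wait Until Keyword Succeeds    30s    2s    Job Status Should Be Done

*** Keywords ***
Job Status Should Be Done
    # Hypothetical endpoint and JSON shape; replace with your service's own.
    ${resp}=    GET On Session    api    /jobs/42/status
    Should Be Equal As Strings    ${resp.json()['status']}    done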

Robot Framework HTML Report Customisation

I need to customize the HTML report generated at the end of test execution.
A few things I require are:
Remove the "Statistics by Tag" table, as I am not using any tags
Add the version number for the SUT in the summary section of the report.
What solutions are there for this? I tried to change the robot code and also tried to work on the output.xml. But nothing worked.
There is no facility provided by robot to customize the report and log files, as far as adding or removing sections is concerned. You have two options:
write your own report generator that converts output.xml into a format you like, or
create a fork of the robot framework source code and make the modifications there.
For the case of putting the version number for the SUT in the summary section of the report, you can add that with the --metadata command line option:
pybot --metadata "SUT version: 1.2.3" ...
That will add the version to the summary section. You can also use the Documentation setting or the --doc command line option to put information that will appear in the report summary.
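If you prefer to keep this in the test data instead of on the command line, the equivalent suite settings look like this (the values here are just placeholders):
*** Settings ***
Documentation    Acceptance run against the system under test.
Metadata         SUT Version    1.2.3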
If you aren't using tags, you should! Those are one of the best features of the framework. You can create tags during a test run, so you could have your tests define a "sut version" tag and set it to the version of the system being tested.
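For example, a suite setup could tag every test in the run with the version (a sketch; ${SUT_VERSION} is a hypothetical variable you would pass in with --variable SUT_VERSION:1.2.3 or read from the system under test):
*** Settings ***
Suite Setup    Tag Run With SUT Version

*** Keywords ***
Tag Run With SUT Version
    # Set Tags in a suite setup adds the tag to all tests in the suite.
    Set Tags    sut-version-${SUT_VERSION}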

How do I tell robot-server to reset the test fixture when I use ride with Plone?

I'm trying to write my first robot test; I'd like to use RIDE as advertised in http://developer.plone.org/reference_manuals/external/plone.app.robotframework/happy.html#install-robot-tools
I added
initialization =
    import os
    os.environ['PATH'] = os.environ['PATH'] + os.pathsep + '${buildout:directory}/bin'
to my [robot] section to make it possible to run the tests by clicking "Start" in RIDE.
It works, but the second time I run the tests I still see the content created by the first test run.
How do I tell robot-server to go back to a just-initialized state?
Easily (and you should throw me into a pool for not documenting this yet in plone.app.robotframework's documentation; I thought RIDE was too difficult to get running until it works on wxPython 2.9).
In RIDE:
select the Run tab
change Execution Profile to "custom script"
click Browse and select bin/robot from your buildout as the script to run the tests
click Start.
Technically, bin/robot is a shortcut for bin/pybot --listener plone.app.robotframework.RobotListener (I keep repeating bin/ because it's important that plone.app.robotframework is available in sys.path). The Robot Framework listener interface is specified in the Robot Framework User Guide.
Our listener calls bin/robot-server (using XML-RPC) before every test to run the testSetUp methods for the current test layer, and after every test to run the testTearDown methods. This resets the fixture and isolates the functional tests.
