I ran into an issue with the WebDriver Sampler in JMeter. I would like to define some functions in a JSR223 Sampler and then call them from other Thread Groups, but I don't know how to use WDS the way the WebDriver Sampler does.
The Test Plan looks like:
Test Plan
setUp Thread Group
JSR223 Sampler (defines the function)
Thread Group 1
Webdriver Sampler A (calls the function defined in the JSR223 Sampler)
Webdriver Sampler B
The function I defined looks like this:
var WDS = com.googlecode.jmeter.plugins.webdriver.sampler.WebDriverScriptable;
var getBroswer = WDS.browser.get('http://www.google.com.vn');
but I got this error:
2016/07/06 16:19:06 WARN - jmeter.protocol.java.sampler.BSFSampler: BSF error org.apache.bsf.BSFException: JavaScript Error: Java class "com.googlecode.jmeter.plugins.webdriver.sampler.WebDriverScriptable" has no public instance field or method named "browser".
at org.apache.jmeter.util.BSFJavaScriptEngine.handleError(BSFJavaScriptEngine.java:202)
at org.apache.jmeter.util.BSFJavaScriptEngine.eval(BSFJavaScriptEngine.java:152)
at org.apache.jmeter.protocol.java.sampler.BSFSampler.sample(BSFSampler.java:98)
at org.apache.jmeter.threads.JMeterThread.executeSamplePackage(JMeterThread.java:465)
at org.apache.jmeter.threads.JMeterThread.processSampler(JMeterThread.java:410)
at org.apache.jmeter.threads.JMeterThread.run(JMeterThread.java:241)
at java.lang.Thread.run(Thread.java:745)
So my question is: how can I use WDS, and in particular WDS.browser, when I define a function in a BSF/JSR223 Sampler?
I think the approach should be similar to the one in these questions:
How to use JMeter Property props.get props.put from WebDriver Sampler (JMeter)
How to pass variable in Webdriver-Sampler | Jmeter Webdriver
How to set JMeter Vars from within WebDriver Sampler?
Can anyone please help me? Thanks in advance.
I don't think you'll be able to share BSF/JSR223 functions across different Thread Groups; consider using Beanshell test elements and the bsh.shared namespace instead.
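A minimal sketch of that approach (the variable names are just placeholders): store the shared value in bsh.shared from the setUp Thread Group, then copy it into a JMeter variable where the WebDriver Sampler can reach it.

// setUp Thread Group -> Beanshell Sampler
// bsh.shared is visible to every Beanshell test element in the test plan
bsh.shared.baseUrl = "http://www.google.com.vn";

// Thread Group 1 -> Beanshell PreProcessor attached to the WebDriver Sampler
// Copy the shared value into a JMeter variable so the WebDriver script
// can read it with WDS.vars.get("baseUrl")
vars.put("baseUrl", (String) bsh.shared.baseUrl);

Note that bsh.shared only shares data (or small helper objects), not WDS itself; the WebDriver Sampler still has to drive the browser from its own script.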
I have a SomeBigFlow that calls multiple subflows inside it, i.e. ValidateFlowA and ValidateFlowB. Assume it is mandatory for A and B to be initiating flows, not plain functions.
How do I mock a return value for ValidateFlowA when I run SomeBigFlow in JUnit?
I've seen some references to using registerAnswer to mock a flow's return value here. (I am also curious why this function is only available on InternalMockNetwork.MockNode but not on MockNetwork.StartedMockNode, which is typically used in JUnit testing.)
I thought I could replicate it with node[1].registerAnswer(ValidateFlowA.class, 20). But when I ran node[1].startFlow(SomeBigFlow).resultFuture.getOrThrow(), ValidateFlowA still used its default call implementation instead of returning the mocked integer value 20. Maybe I'm using it wrong.
Any pointers on how to make this work, or is there another way to mock the return values of inlined subflows? The only alternative I can think of is a rule of thumb that every call to an inlined subflow goes through an open fun that can be overridden during MockNetwork testing - but that makes inlined subflows tedious, so I'm hoping for a neater way.
For now, you'd have to use a similar approach to the one outlined here: Corda with mockito_kotlin for unit test.
In summary:
Make the FlowLogic class you are testing open, and move the call to the subflow into an open method
For testing, create a subclass of the open FlowLogic class where you override the open method to return a dummy result
Use this subclass in testing
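A rough Kotlin sketch of those three steps (constructor parameters, flow annotations and sessions are omitted, names are taken from the question, and it assumes ValidateFlowA returns an Int):

import co.paralleluniverse.fibers.Suspendable
import net.corda.core.flows.FlowLogic

// Production flow: the subflow call is isolated behind an open method
open class SomeBigFlow : FlowLogic<Int>() {
    @Suspendable
    override fun call(): Int = runValidateFlowA()

    // open so a test subclass can replace the real initiating subflow
    @Suspendable
    protected open fun runValidateFlowA(): Int = subFlow(ValidateFlowA())
}

// Test-only subclass that short-circuits the subflow
class StubbedSomeBigFlow : SomeBigFlow() {
    @Suspendable
    override fun runValidateFlowA(): Int = 20
}

// In the JUnit test, start the stub instead of the real flow:
// val future = node.startFlow(StubbedSomeBigFlow())
// mockNetwork.runNetwork()
// assertEquals(20, future.getOrThrow())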
I am trying to understand whether it is possible with PHPUnit to create a spy on a method while still calling the original method.
I have done this in Java, but I don't see a way to do it in PHPUnit. All I can find is that if I want to spy on the invocations of a method, I also have to stub it out.
Example code:
$this->spy = $this->getMockBuilder('\ClassUnderTest')
->setMethods(['methodToSpy'])
->getMock();
$this->spy->expects($this->any())
->method('methodToSpy')
->will($this->returnCallback(array($this, 'stubMethodToSpy')));
So in the test I want to "spy" on the call to the real method methodToSpy(), so I can do on-the-fly analysis of the parameters passed to it (I need to use them later in the test).
Any idea if this is possible? (Or maybe it is not possible in PHPUnit because it is not multithreaded like Java?)
You are looking for test proxies.
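In PHPUnit these are created with the mock builder's enableProxyingToOriginalMethods(); a rough sketch along the lines of your snippet (the callback-based capture is just one way to grab the parameters, and capturedParam is an invented property):

$this->spy = $this->getMockBuilder(\ClassUnderTest::class)
    ->enableProxyingToOriginalMethods()    // record invocations, then forward to the real method
    ->getMock();

$this->spy->expects($this->atLeastOnce())
    ->method('methodToSpy')
    ->with($this->callback(function ($param) {
        $this->capturedParam = $param;     // keep the argument for later assertions
        return true;                       // always accept the call; we only want to observe it
    }));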
Background:
I have created a basic playground project that contains:
A testLogin.java file that contains:
a. testng package imports (org.testng.*)
b. selenium webdriver imports (org.openqa.selenium.*)
c. 5 test-methods with testng annotations:
@Test(groups={"init"})
public void openURL()
Contains WebDriver code to initialize the WebDriver and open a Chrome instance with a given URL.
@Test(dependsOnGroups={"init"})
public void testLogin()
Contains webdriver code to:
1. Locate the username and password text inputs and enter the username and password read from a properties file.
2. Locate the "log in" button and click it to log in.
3. Handle a force-login scenario if someone else has already logged in with the same credentials.
@Test(dependsOnMethods={"testLogin"})
public void testPatientsScheduleList()
Contains webdriver code to check if any patients have been scheduled. If yes, then fetch the names and display in console.
@Test()
public void testLogout()
Contains webdriver code to locate the logout button and click on the button to logout of the app.
@AfterTest()
public void closeConnection()
Contains webdriver code to dispose the webdriver object and close the chrome instance.
Currently I simply run the test script, wrapped as TestNG methods, from Ant, and a testng-xslt report gets generated.
Issues:
1. Performing validations against every line of code of webdriver script in a test method:
I know:
1. The Selenium WebDriver API methods (findElement() and others) throw exceptions as a result of the default validation they perform. These exceptions show up in the generated report when a test method fails.
2. TestNG provides an Assert class with many assertion methods, but I have not yet figured out how to use them to validate every line of WebDriver code. I tried adding an assertion after every line of WebDriver code; all that appeared in the output was an AssertionError for the test method.
2. Failing a test method that gets marked as passed because of a try..catch block.
If I wrap two or more WebDriver steps in a try..catch block and any of those steps fails, the catch block handles the exception, so the execution report shows the test method as "passed" even though it actually failed.
3. Creating a custom report which will show desired test execution results and not stack-traces!
When I execute the above script, a testng-xslt report gets generated that contains pass/fail status of each test method in a test-suite (configured in testng.xml).
The test results only tell me whether a test method passed or failed and give me an exception stack trace, which really isn't helpful information.
I don't want such abstract level of test execution results but something like:
Name | Started | Duration | What-really-went-wrong (Failure)
Can anyone please suggest/ give some pointers regarding:
1. How can I perform validation/assertion against every line of code of webdriver script in a test-method without writing asserts after every script line?
2. How can I fail a certain test method which gets passed due to try catch block?
3. How can I customize the failure reporting so that I can send a failure result like "Expected element "button" with id "bnt12" but did not find the element at step 3 of test-method" to testng's reporting utility?
4. In the testng-xslt report I want to display where exactly in the test method a failure occurred. For example, if my test method fails because of a webelement = driver.findElement() at line 3 of the test method, I want that issue to appear in the report's "What-really-went-wrong" column. I have read about the TestNG listeners TestListenerAdapter / ITestListener / IReporter, but even after checking TestNG's Javadocs I don't understand how to use them.
5. Also, I have to implement PageObject pattern once I am done with customizing the test report. Where would be the right place to perform assertions in a page-object pattern? Should assertions be written in the page object test methods or in the higher level test methods that will use the PageObject classes?
P.S: I am completely new to testng framework and webdriver scripting. Please bear with any technical mistakes or observation errors if any in the post.
How can I perform validation/assertion against every line of code of webdriver script in a test-method without writing asserts after every script line?
I don't think so. It is the assertions that do the comparison, so you need them.
How can I fail a certain test method which gets passed due to try catch block?
A try..catch block will mask the failure: WebDriver calls throw runtime exceptions when a step fails (and TestNG assertions throw an AssertionError), so if your catch block is broad enough to catch them (catch (Exception e) for the WebDriver exceptions, catch (Throwable t) for everything including assertion errors), the failure never escapes the catch block and TestNG reports the method as passed. Either don't catch at all, or rethrow / fail explicitly from the catch block.
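For example, a sketch of turning a caught WebDriver exception into an explicit failure (the locator and the step wording are just illustrations based on your report example):

// needs org.openqa.selenium.* and org.testng.Assert
try {
    WebElement loginButton = driver.findElement(By.id("bnt12"));
    loginButton.click();
} catch (NoSuchElementException e) {
    // Re-raise as a TestNG failure with a readable message instead of swallowing it
    Assert.fail("Expected element 'button' with id 'bnt12' but did not find it at step 3 of testLogin", e);
}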
How can I customize the failure reporting so that I can send a failure result like "Expected element "button" with id "bnt12" but did not find the element at step 3 of test-method" to testng's reporting utility?
You need to use test listeners. TestNG's TestListenerAdapter is a good start.
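A minimal listener sketch (the message format is just an example):

// imports: org.testng.ITestResult, org.testng.Reporter, org.testng.TestListenerAdapter
public class FailureListener extends TestListenerAdapter {
    @Override
    public void onTestFailure(ITestResult result) {
        // Log a readable failure line instead of the raw stack trace
        Reporter.log(result.getName() + " FAILED: " + result.getThrowable().getMessage());
    }
}

Register it via <listeners> in testng.xml or with @Listeners(FailureListener.class) on the test class.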
Also, I have to implement PageObject pattern once I am done with customizing the test report. Where would be the right place to perform assertions in a page-object pattern? Should assertions be written in the page object test methods or in the higher level test methods that will use the PageObject classes?
My personal choice is to put assertions in the test methods, since that is where the actual testing happens. Page objects contain the code for navigating and interacting with the web page.
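For illustration, a hypothetical split (class and locator names are invented): the page object only exposes state, and the test method asserts on it.

// Page object: navigation and state queries, no assertions
public class LoginPage {
    private final WebDriver driver;

    public LoginPage(WebDriver driver) { this.driver = driver; }

    public boolean isLoggedIn() {
        return !driver.findElements(By.id("logoutBtn")).isEmpty();
    }
}

// Test method: the assertion lives here
@Test
public void testLogin() {
    Assert.assertTrue(new LoginPage(driver).isLoggedIn(),
            "User should be logged in after submitting valid credentials");
}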
How can I customize the failure reporting so that I can send a failure result like "Expected element "button" with id "bnt12" but did not find the element at step 3 of test-method" to testng's reporting utility?
You can use ExtentReports together with a TestNG listener class (in that class, use the onTestFailure method to customize your failure report).
Let's say I have:
an object to be tested (it uses Rx internally);
a number of dependencies of the object under test, which also use Rx, created with Moq.
The question is:
is it an erroneous approach to use the same TestScheduler instance to control timing both inside the object under test and for the external observables?
Thanks in advance
For any specific test, you should (and must!) use the same TestScheduler for every object / method that requires / can take an IScheduler, or else It Doesn't Work.™ ReactiveUI handles this by having a global "MainThreadScheduler" object that can be overridden at test time, so you can do things like:
var oldSched = RxApp.MainThreadScheduler;
RxApp.MainThreadScheduler = new TestScheduler();
// Do a Test, and make sure all your test and runtime code use RxApp schedulers
RxApp.MainThreadScheduler = oldSched;
Or, the more elegant RxUI way is via .With()
(new TestScheduler()).With(sched => {
// Do a test here.
});
I tried to call a service in a for loop and only the first service call seems to work. My guess is that once a service is called, it needs to wait for the result event before it can be called again. How can I work around this?
Waiting for each service to complete before querying for another is too slow.
Example:
A CallResponder with id="test"
SomeService, properly imported through Flash Builder 4
for (var i:int=0;i< pool.length;i++)
{
test.token = SomeService.getSomething(pool[i].someValue);
}
Only one call succeeds. Help! I don't want to have to issue the next call from the result event handler!
Problem: a single CallResponder cannot be shared by multiple in-flight service calls.
Solution: create a new CallResponder for each call.
for (var i:int = 0; i < pool.length; i++)
{
    // Create a fresh CallResponder per call so responses don't overwrite each other
    var c:CallResponder = new CallResponder();
    c.addEventListener(ResultEvent.RESULT, resultHandler);
    c.token = SomeService.getSomething(pool[i].someValue);
}