I've managed to build some fairly simple tests that do not utilise a Page Object Model structure. The SpecFlow steps just call the WebDriver methods directly (such as finding an element on the page and asserting its text is correct).
The tests use NUnit as the runner, and I have managed to add parallel execution by adding [Parallelizable(ParallelScope.Fixtures)] as an assembly-level attribute for the solution. This works well, but the reports that come out of NUnit are a bit messy and I'd like more useful information on them (such as screenshots).
I have since added Extent Reports to the solution. While this works fine when the tests run sequentially, an error message appears when running them in parallel:
The FeatureContext.Current static accessor cannot be used in multi-
threaded execution. Try injecting the feature context to the binding
class.
These Context.Current accessors are used in the creation of the Extent reports. I've been reading the multithreading documentation on the SpecFlow site, but I'm having trouble understanding the concept and figuring out how I can inject FeatureContext into the binding class. I'm trying to follow this example from the site:
[Binding]
public class StepsWithScenarioContext : Steps
{
    [Given(@"I put something into the context")]
    public void GivenIPutSomethingIntoTheContext()
    {
        this.ScenarioContext.Set("test-value", "test-key");
    }
}
I've also been trying to follow other examples, but I've yet to see any documentation showing how to use ScenarioContext together with something like driver.findElement(By.Id("blah")).
Any help would be appreciated; I am fairly new to test automation.
You need to have a field in your steps class:
ScenarioContext _scenarioContext;
In the constructor you add ScenarioContext scenarioContext as a parameter and initialize it with:
_scenarioContext = scenarioContext;
Simple example:
[Binding]
public class Steps
{
    private readonly ScenarioContext _scenarioContext;

    public Steps(ScenarioContext scenarioContext)
    {
        _scenarioContext = scenarioContext;
    }
}
I just don't know how it will work with inheritance.
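For illustration, here is roughly how the injected contexts could then be combined with WebDriver calls and used for reporting data. This is only a sketch: the driver setup, the step text, the element id and the key names are assumptions, not part of the original question or answer.

using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using TechTalk.SpecFlow;

[Binding]
public class SearchSteps
{
    private readonly ScenarioContext _scenarioContext;
    private readonly FeatureContext _featureContext;
    private readonly IWebDriver _driver;

    // SpecFlow's built-in dependency injection supplies both contexts;
    // the driver is created per scenario here purely to keep the sketch self-contained.
    public SearchSteps(ScenarioContext scenarioContext, FeatureContext featureContext)
    {
        _scenarioContext = scenarioContext;
        _featureContext = featureContext;
        _driver = new ChromeDriver();
    }

    [Then(@"the heading should read ""(.*)""")]
    public void ThenTheHeadingShouldRead(string expected)
    {
        var heading = _driver.FindElement(By.Id("heading"));
        Assert.AreEqual(expected, heading.Text);

        // Instead of FeatureContext.Current, use the injected instances,
        // for example to stash values that report hooks can read later.
        _scenarioContext.Set(heading.Text, "last-heading-text");
    }
}

The same constructor-injection pattern works for FeatureContext, so the Extent report code can drop the *.Current accessors entirely.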
I am trying to find an easy way to load my fixtures in Symfony 2.6 to run functional tests. This is quite a common question, and it has been asked a few times, but the answers I have found so far do not quite meet my expectations:
Some rely on running the command line from inside the functional test.
Others manually run each of the defined fixtures, and then take care of creating and deleting the database.
There is a lot of overhead in both cases (use statements and manual code) for a task that I believe is very standard.
On the other hand, these same posts recommend the LiipFunctionalTestBundle. Going for it, here is what I read in the installation instructions:
write fixture classes and call loadFixtures() method from the bundled
Test\WebTestCase class. Please note that loadFixtures() will delete the contents from the database before loading the fixtures.
So I tried...
namespace AppBundle\Tests\Controller;

use Symfony\Bundle\FrameworkBundle\Test\WebTestCase;

class MyControllerTest extends WebTestCase
{
    public function setUp()
    {
        $classes = array(
            'AppBundle\DataFixtures\LoadUserData',
        );
        $this->loadFixtures($classes);
    }

    // ...
}
With no luck:
Call to undefined method AppBundle\Tests\Controller\MyControllerTest::loadFixtures() in /gitrepos/myproject/src/AppBundle/Tests/Controller/MyControllerTest.php on line 15
A static call gives the same error...
self::loadFixtures($classes);
I really think I am missing something pretty obvious. Can anyone get me back on track?
I can see you're using
Symfony\Bundle\FrameworkBundle\Test\WebTestCase
as the base class, while I think you should use
Liip\FunctionalTestBundle\Test\WebTestCase
to be able to call this method.
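A minimal sketch of the corrected test class, assuming the LiipFunctionalTestBundle is installed and registered and that the fixture class is named as in the question (the example test method is an illustrative assumption):

<?php

namespace AppBundle\Tests\Controller;

// Extend the Liip base class instead of the framework one so that loadFixtures() exists.
use Liip\FunctionalTestBundle\Test\WebTestCase;

class MyControllerTest extends WebTestCase
{
    public function setUp()
    {
        $classes = array(
            'AppBundle\DataFixtures\LoadUserData',
        );

        // Provided by the Liip WebTestCase: purges the database and loads the listed fixtures.
        $this->loadFixtures($classes);
    }

    public function testIndexPageLoads()
    {
        // Illustrative only; the real assertions depend on the application.
        $client = static::createClient();
        $client->request('GET', '/');
        $this->assertTrue($client->getResponse()->isSuccessful());
    }
}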
I'm trying to use Groovy's @Mixin transformation on a Spring MVC controller class, but Spring does not pick up the request mapping from the mixed-in class.
class Reporter {
    @RequestMapping("report")
    public String doReport() {
        "report"
    }
}

@Mixin(Reporter)
@Controller
@RequestMapping("/a")
class AController {
    @RequestMapping("b")
    public String doB() {
        "b"
    }
}
When this code is run, the .../a/b URL is mapped and works, but .../a/report is not mapped and returns HTTP 404. In debug mode, I can access the doReport method on AController by duck typing.
This type of request-mapping inheritance actually works with Java classes when extends is used, so why does it not work with Groovy's mixin? I'm guessing either that the mixin transformation does not transfer annotations on the method, or that Spring's component scanner runs before the mixin is processed. Either way, is there a groovier way to achieve this functionality? (I don't want AController to extend Reporter for other reasons, so that's not an option.)
Below are the responses I got from Guillaume Laforge (Groovy project manager) on the Groovy users mailing list.
Hi,
I haven't looked at Spring MVC's implementation, but I suspect that
it's using reflection to find the available methods. And since "mixin"
adds methods dynamically, it's not something that's visible through
reflection.
We've had problems with @Mixin over the years, and its implementation
is far from ideal and bug-ridden despite our efforts to fix it. It's
likely we're going to deprecate it soon, and introduce something like
static mixins or traits, which would then add methods "for real" in
the class, which means such methods like doReport() would be seen by a
framework like Spring MVC.
There are a couple initiatives in that area already, like a prototype
branch from Cédric and also something in Grails which does essentially
that (ie. adding "real" methods through an AST transformation).
Although no firm decision has been made there, it's something we'd
like to investigate and provide soon.
Now back to your question, perhaps you could investigate using
@Delegate? You'd add a @Delegate Reporter reporter property in your
controller class. I don't remember if @Delegate carries the
annotation, I haven't double checked, but if it does, that might be a
good solution for you in the short term.
Guillaume
Using the @Delegate transformation did not work on its own, so I needed another suggestion.
One more try... I recalled us speaking about carrying annotations for
delegated methods... and we actually did implement that already. It's
not on by default, so you have to activate it with a parameter for the
@Delegate annotation:
http://groovy.codehaus.org/gapi/groovy/lang/Delegate.html#methodAnnotations
Could you please try with @Delegate(methodAnnotations = true)?
And the actual solution is:
class Reporter {
    @RequestMapping("report")
    public String doReport() {
        "report"
    }
}

@Controller
@RequestMapping("/a")
class AController {
    @Delegate(methodAnnotations = true) private Reporter reporter = new Reporter()

    @RequestMapping("b")
    public String doB() {
        "b"
    }
}
When you map requests with annotations, what happens is that once the container is started, it scans the classpath, looks for annotated classes and methods, and builds the map internally, instead of you manually writing the deployment descriptor.
The scanner reads methods and annotations from the compiled .class files. Maybe Groovy mixins are implemented in such a way that they are resolved at runtime, so the scanner software can't find them in the compiled bytecode.
To solve this problem, you have to find a way to statically mix the code in at compile time, so that the annotated method is actually written to the class file.
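A small sketch of that point, assuming Groovy's (since deprecated) runtime @Mixin behaviour: the mixed-in method is callable dynamically but invisible to reflection, which is all an annotation scanner ever looks at.

class Reporter {
    String doReport() { "report" }
}

@Mixin(Reporter)
class AController { }

// The mixed-in method works at runtime...
assert new AController().doReport() == "report"

// ...but it is not present in the compiled class, so reflection
// (and therefore Spring's annotation scanner) cannot see it.
assert !AController.methods.any { it.name == "doReport" }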
I would like my test to fail if I mock an interface using Mockery and use shouldReceive with a non-existent method. Looking around didn't help.
For instance:
With an interface:
interface AInterface {
    public function foo();
    public function bar();
}
And a test case:
public function testWhatever()
{
    Mockery::mock('AInterface')->shouldReceive('bar');
    $this->assertTrue(true);
}
The test will pass.
Now, come refactoring time, the bar method is no longer needed in one place (let's say it's needed in several places) and is removed from the interface definition, but the test will still pass. I would like it to fail.
Is it possible to do such a thing using Mockery (and to do the same thing with a class instead of an interface)?
Or does a workaround exist with some other tool or testing methodology?
Not sure if this can be understood as is; I'll try to write a clearer description of the issue if needed.
To ensure that Mockery doesn't allow you to mock methods that don't exist, put the following code in your PHPUnit bootstrap file (if you want this behavior for all tests):
\Mockery::getConfiguration()->allowMockingNonExistentMethods(false);
If you just want this behavior for a specific test case, put the code in the setUp() method for that test case.
Check this section of the Mockery manual on Github for more information.
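A short sketch of how that might look for the interface from the question (the PHPUnit base class, the method signatures and the test name are assumptions about the surrounding project):

<?php

use PHPUnit\Framework\TestCase;

class AInterfaceTest extends TestCase
{
    protected function setUp(): void
    {
        // Disallow mocking methods that do not exist on the mocked type.
        \Mockery::getConfiguration()->allowMockingNonExistentMethods(false);
    }

    protected function tearDown(): void
    {
        \Mockery::close();
    }

    public function testMockingARemovedMethodNowFails()
    {
        // Once bar() is removed from AInterface, this expectation
        // throws a \Mockery\Exception instead of silently passing.
        $mock = \Mockery::mock('AInterface');
        $mock->shouldReceive('bar');
    }
}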
If you want to make sure that the method is called exactly one time, you can use once().
Suppose that the class AImplementation implements the interface AInterface and you want to test that the method is called; an example could be:
Mockery::mock('AImplementation')->shouldReceive('bar')->once();
You can also use zeroOrMoreTimes(), twice() or times(n); check out the repo on GitHub. I also recommend this tutorial by Jeffrey W.
I'm setting up a functional test suite for an application that loads an external configuration file. Right now, I'm using FlexUnit's addAsync function to load it, and then again to test whether the contents point to services that exist and can be accessed.
The trouble is that this kind of two-stage (or more) approach means I'm running all of my tests in the context of one test with dozens of asserts, which seems like a degenerate way to use the framework and makes bugs harder to find. Is there a way to have something like an asynchronous setup? Is there another testing framework that handles this better?
It is pretty easy, but took me 2 days to figure it out.
The solution:
First you need to create a static var somewhere.
public static var stage:Stage
There is a FlexUnitApplication.as created by the FlexUnit framework, and in its onCreationComplete() function you can assign the stage to the static reference created previously:
private function onCreationComplete():void
{
    var testRunner:FlexUnitTestRunnerUIAS = new FlexUnitTestRunnerUIAS();
    testRunner.portNumber = 8765;
    this.addChild(testRunner);
    testStageRef.stage = stage; // ***this is what I've added
    testRunner.runWithFlexUnit4Runner(currentRunTestSuite(), "testsuitename");
}
Then, wherever you would access the stage in the program, replace it with:
if (stage == null) stage = testStageRef.stage;
Assuming you're using FlexUnit 4, addAsync can be called from a [BeforeClass] method:
public class TestFixture
{
    [BeforeClass]
    public static function fixtureSetup() : void
    {
        // This static method will be called once for all the tests
        // You can also use addAsync in here if your setup is asynchronous
        // Any shared state should be stored in static members
    }

    [Test]
    public function particular_value_is_configured() : void
    {
        // Shared state can be accessed from any test
        Assert.assertEquals(staticMember.particularValue, "value");
    }
}
Having said that, testing code that accesses a file is really an integration test. I'm also hardly in a position to argue against using ASMock :)
Sounds like you need to remove the dependency on loading that external file. Pretty much all asynchronous tests can be removed through the use of a mocking framework. ASMock is an awesome choice for Flex. It will allow you to fake the URLLoader object and return faked configurations to run your tests against. Mocking will help you write much better unit tests, as you can mock all dependencies, synchronous or asynchronous.
I am trying to create my own EasyBinderDropDown that currently looks like this:
public class EasyBinderDropDown : DropDownList, ICanBindToObjectsKeyValuePair
{
    public void BindToProperties<TYPE_TO_BIND_TO>(
        IEnumerable<TYPE_TO_BIND_TO> bindableEnumerable,
        Expression<Func<TYPE_TO_BIND_TO, object>> textProperty,
        Expression<Func<TYPE_TO_BIND_TO, object>> valueProperty) {...}

    public bool ShowSelectionPrompt { get; set; }
    public string SelectionPromptText { get; set; }
    public string SelectionPromptValue { get; set; }

    //...
}
Basically it is very helpful for easy binding to objects from code, since you just do something like _dropDown.BindToProperties(myCustomers, c => c.Name, c => c.Id) and it works for you. Also, by setting ShowSelectionPrompt and SelectionPromptText I can easily have a "Select Customer" line. I don't want to ask so much about my specific implementation; rather, I am confused about how to write unit tests for some scenarios.
For example, my current tests cover the control being created properly during load and having its output render properly, but I am lost as to how to test what happens when the control gets posted back. Can anyone give me some advice on how to test that? I would prefer to do this without having to mock an HttpContext or anything like that. Is there a way I can simulate the control being rebuilt?
"I would prefer to do this without having to mock an HTTPContext or anything like that, Is there a way I can simulate the control being rebuilt."
By definition, you are not asking to "unit test"; you are looking for an "integration test". If you are not mocking the major dependencies, in this case the ASP.NET runtime components, then what you are testing is the integration between your control and ASP.NET.
If you do not want to mock out the HttpContext and friends, then I would suggest an automated web testing framework such as Selenium or NUnitAsp.
Update, based on the comment: don't have the code access IsPostBack or other ASP.NET pieces directly. Wrap them in simple classes/interfaces. Once you have done that, pass in mocks that implement those interfaces. This way you don't have to mock the whole HttpContext, just the pieces that matter to the code (which become really clear based on the interfaces involved).
Also, given that it is an ASP.NET custom control, you don't want to force requirements on external things like a dependency injection container. Have a default (parameterless) constructor that sets up the control to use the real ASP.NET pieces, and a constructor with more parameters for passing in the mocked versions.
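A rough sketch of that idea (the interface, class and member names below are illustrative assumptions, not part of the original control):

using System;
using System.Web.UI;
using System.Web.UI.WebControls;

// A small seam around the piece of ASP.NET state the control cares about.
public interface IPostbackDetector
{
    bool IsPostback { get; }
}

// Production implementation backed by the control's own Page.
internal class PagePostbackDetector : IPostbackDetector
{
    private readonly Control _control;

    public PagePostbackDetector(Control control)
    {
        _control = control;
    }

    public bool IsPostback
    {
        get { return _control.Page != null && _control.Page.IsPostBack; }
    }
}

public class EasyBinderDropDown : DropDownList
{
    private readonly IPostbackDetector _postbackDetector;

    // Default constructor wires up the real ASP.NET behaviour.
    public EasyBinderDropDown()
    {
        _postbackDetector = new PagePostbackDetector(this);
    }

    // Test constructor lets a unit test inject a fake or mock detector.
    public EasyBinderDropDown(IPostbackDetector postbackDetector)
    {
        _postbackDetector = postbackDetector;
    }

    protected override void OnLoad(EventArgs e)
    {
        base.OnLoad(e);
        if (!_postbackDetector.IsPostback)
        {
            // e.g. populate items or render the selection prompt on first load only.
        }
    }
}

In a test, the control can then be constructed with a stub IPostbackDetector, so no HttpContext is needed to exercise the postback-dependent logic.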
Initial answer:
It seems to me you are looking for a happy middle ground between unit tests and integration tests. You are working with a custom control, which can go wrong at various points in the ASP.NET page lifecycle.
I would:
Check whether you can move parts of the code out of the custom control into separate classes, which you can unit test more easily.
For simple scenarios, rely on the functional tests of the rest of the project to catch any further issues with the control (using WatiN / Selenium RC).
For more complex scenarios, as when the control will be used in several parallel projects or will be delivered to the public, set up some test pages and automate against them (again with WatiN / Selenium RC).
You write the WatiN / Selenium RC tests in C# and run them in your "unit" test framework. Make sure to keep them separate from the unit tests, since they will clearly run slower.
P.S. I haven't used MSTest's support for ASP.NET; it might have some support for what you are looking for.