I'm setting up a functional test suite for an application that loads an external configuration file. Right now, I'm using FlexUnit's addAsync function to load it and then again to test whether the contents point to services that exist and can be accessed.
The trouble is that this two- (or more-) stage approach means I'm running all of my tests in the context of one test with dozens of asserts, which seems like a degenerate way to use the framework and makes bugs harder to find. Is there a way to have something like an asynchronous setup? Is there another testing framework that handles this better?
It is pretty easy, but took me 2 days to figure it out.
The solution:
First you need to create a static var somewhere, for example in a testStageRef class:
public static var stage:Stage;
There is a FlexUnitApplication.as created by the FlexUnit framework, and in its onCreationComplete() function you can set the stage on the static reference created previously:
private function onCreationComplete():void
{
    var testRunner:FlexUnitTestRunnerUIAS = new FlexUnitTestRunnerUIAS();
    testRunner.portNumber = 8765;
    this.addChild(testRunner);
    testStageRef.stage = stage; // ***this is what I've added
    testRunner.runWithFlexUnit4Runner(currentRunTestSuite(), "testsuitename");
}
and wherever you would access the stage in the program, replace it with:
if (stage == null) stage = testStageRef.stage;
Assuming you're using FlexUnit 4, addAsync can be called from a [BeforeClass] method:
public class TestFixture
{
[BeforeClass]
public static function fixtureSetup() : void
{
// This static method will be called once for all the tests
// You can also use addAsync in here if your setup is asynchronous
// Any shared state should be stored in static members
}
[Test]
public function particular_value_is_configured() : void
{
// Shared state can be accessed from any test
Assert.assertEquals(staticMember.particularValue, "value");
}
}
Having said that, testing code that accesses a file is really an integration test. I'm also hardly in a position to argue against using ASMock :)
Sounds like you need to remove the dependency on loading that external file. Pretty much all asynchronous tests can be removed through the use of a mocking framework. ASMock is an awesome choice for Flex. It will allow you to fake the URLLoader object and return faked configurations to run your tests against. Mocking will help you write much better unit tests, as you can mock any dependency, synchronous or asynchronous.
I've managed to build some fairly simple tests that do not utilise a Page Object Model structure. The Specflow steps will just call the driver methods (such as finding an element on the page and asserting the text is correct).
The tests use NUnit as the runner, and I have managed to add parallel execution by adding [Parallelizable(ParallelScope.Fixtures)] at the assembly level for the solution. This works well, but the reports that come out of NUnit are a bit messy and I'd like more useful information on them (such as screenshots).
I have since added Extent reports to the solution. While this works fine when the tests run sequentially, an error message appears when running them in parallel:
The FeatureContext.Current static accessor cannot be used in multi-threaded execution. Try injecting the feature context to the binding class.
Calls to the static Current accessors are used in the creation of the Extent reports. I've been reading the documentation on multithreading from the SpecFlow site, but I'm having trouble understanding the concept and figuring out how I can inject FeatureContext into the binding class. I'm trying to follow this example from the site:
[Binding]
public class StepsWithScenarioContext : Steps
{
    [Given(@"I put something into the context")]
    public void GivenIPutSomethingIntoTheContext()
    {
        this.ScenarioContext.Set("test-value", "test-key");
    }
}
I've also been trying to follow other examples, but I've yet to see any documentation on how to use ScenarioContext with something like driver.FindElement(By.Id("blah")).
Any help would be appreciated, I am fairly new to test automation.
You need to have a ScenarioContext field in your Steps class:
ScenarioContext _scenarioContext;
In the constructor, you add ScenarioContext scenarioContext as a parameter and initialize the field using:
_scenarioContext = scenarioContext;
Simple example:
[Binding]
public class Steps
{
    private readonly ScenarioContext _scenarioContext;

    public Steps(ScenarioContext scenarioContext)
    {
        _scenarioContext = scenarioContext;
    }
}
The only thing I don't know is how it will work with inheritance.
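For the FeatureContext.Current error from the question, the same constructor injection works for FeatureContext. Here is a minimal sketch, assuming SpecFlow's built-in context injection and that an IWebDriver instance has been registered with SpecFlow's container; the step text, URL, element ID and context keys are just placeholders:
using NUnit.Framework;
using OpenQA.Selenium;
using TechTalk.SpecFlow;

[Binding]
public class SearchSteps
{
    private readonly ScenarioContext _scenarioContext;
    private readonly FeatureContext _featureContext;
    private readonly IWebDriver _driver;

    // SpecFlow resolves these parameters per scenario, so no static
    // Current accessors are needed and parallel execution is safe.
    public SearchSteps(ScenarioContext scenarioContext, FeatureContext featureContext, IWebDriver driver)
    {
        _scenarioContext = scenarioContext;
        _featureContext = featureContext;
        _driver = driver;
    }

    [Then(@"the heading text is ""(.*)""")]
    public void ThenTheHeadingTextIs(string expected)
    {
        var heading = _driver.FindElement(By.Id("heading"));
        Assert.AreEqual(expected, heading.Text);

        // Anything the reporting code needs (e.g. for Extent reports) can be
        // stored on the injected contexts instead of read from FeatureContext.Current.
        _scenarioContext.Set(_driver.Title, "page-title");
    }
}
The injected FeatureContext can then be handed to whatever builds the Extent report nodes, instead of reading FeatureContext.Current inside the hooks.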
I'm a newbie with Symfony, and I don't understand the advantage of using a service instead of writing the code in the controller.
For example, I have a service that creates a log, with code like this:
$path = $root.'/../web';
$fs->touch($path.'/log.txt');
$this->file = $path.'/log.txt';
file_put_contents($this->file, $msg, FILE_APPEND | LOCK_EX);
I can put this logic in a service with the DIC ($fs is the Filesystem service), or I can put this logic in my controller.
Of course, if I need to log often, I have to write the same code every time. Is the main advantage decoupling?
Thanks a lot
Suppose you have a Bar class which uses BasicLogger.
You have a few ways to get access to this logger; let's start with the simplest option:
<?php
class Bar
{
public function bar()
{
$logger = new BasicLogger();
$logger->log("foo");
}
}
This is bad practice because we are mixing construction logic with application logic. It still works, but it has the following drawbacks:
It mixes responsibilities.
Bar becomes hard to test and cannot be tested without side effects.
We cannot dynamically change loggers (code is less reusable).
To solve these drawbacks, we can instead require our Logger class through the constructor.
Our code now looks like this:
class Bar
{
private $logger;
public function __construct(Logger $logger)
{
$this->logger = $logger;
}
public function bar()
{
$this->logger->log("foo");
}
}
Great: our class is no longer responsible for creating the logger, we can test our code without side effects (and make assertions about how the logger was used), and we can now use any logger we like.
So now we use our new class all over the application.
$logger = new Logger();
$bar = new Bar($logger);
Look familiar?
Again we are mixing construction logic with application logic, which we already know is bad.
Not only that, but something even worse is happening here: code duplication.
That's right, and every time we want to use our Bar class, the duplication gets worse.
The solution? Use the Service container
Registering your logger as a service would mean that all of your code that needs logging functionality is no longer dependent on your specific logger, responsibilities will not be mixed, code duplication will be reduced and your design will become more flexible.
The main goal and advantage of services is to keep code reusable and follow a DRY approach.
Of course, there are a lot of other advantages that you discover progressively as you use them.
Services are accessible from any context of your application that can access the service container, not only controllers.
If, without the service, the few lines of code you show would be duplicated in several methods/contexts, you should keep your service.
Otherwise, delete it and do your logic in the specific method.
I think the best approach is to use them according to your own judgment.
Don't create services preemptively; use them to solve a real need.
When you have a block of code that is duplicated, you should naturally avoid that by creating a service (or an abstract controller that your controllers can extend to inherit the code block).
The goal is to always keep the code light and avoid duplication as much as possible.
For that, you can use the powerful services of Symfony, or just use class inheritance and other OOP principles.
I'm new at this TDD thing but making a serious effort, so I'm hoping to get some feedback here.
I created a little web service to minify JavaScript, and everything was nice, with all my tests passing. Then I noticed a bug: if I tried to minify alert('<script>');, it would throw a HttpRequestValidationException.
So that's easy enough to fix. I'll just add [AllowHtml] to my controller. But what would be a good way to unit test that this doesn't happen in the future?
The following was my first thought:
[TestMethod]
public void Minify_DoesntChokeOnHtml()
{
    try
    {
        using (var controller = ServiceLocator.Current.GetInstance<MinifyController>())
        {
            controller.Minify("alert('<script></script>');");
        }
    }
    catch (HttpRequestValidationException)
    {
        Assert.Fail("Request validation prevented HTML from existing inside the JavaScript.");
    }
}
However, this doesn't work since I am just getting a controller instance and running methods on it, instead of firing up the whole ASP.NET pipeline.
What would be a good unit test for this? Maybe use reflection on the controller method to see if the [AllowHtml] attribute is present? That seems very structural and unlikely to survive a refactoring; something functional might make more sense. Any ideas?
You have only two options:
First
Write an integration test that hosts MVC in-process or drives a browser (using WatiN, for instance) and covers your scenario.
Second
Write a unit test that checks that the method is marked with the needed attribute.
I would go with the first option.
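If you do go with the second option, the reflection-based check can be kept short. A rough sketch, assuming MSTest and that request validation was disabled on the action with [ValidateInput(false)]; adjust the attribute type if you used something else (for example [AllowHtml] on a model property):
using System.Linq;
using System.Web.Mvc;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class MinifyControllerAttributeTests
{
    [TestMethod]
    public void Minify_action_disables_request_validation()
    {
        // Look up the action and read its ValidateInput attribute via reflection.
        var method = typeof(MinifyController).GetMethod("Minify");
        var attribute = method.GetCustomAttributes(typeof(ValidateInputAttribute), true)
                              .Cast<ValidateInputAttribute>()
                              .FirstOrDefault();

        Assert.IsNotNull(attribute, "Minify has no ValidateInput attribute.");
        Assert.IsFalse(attribute.EnableValidation, "Request validation is still enabled for Minify.");
    }
}
It is structural, as the question notes, but it will fail loudly if someone removes the attribute during a refactoring.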
I need to test private methods in FlexUnit. Is there any way to do this via reflection, using describeType, or does FlexUnit have some built-in facility? I dislike the artificial limitation that I cannot test private functions; it greatly reduces flexibility. Yes, it is good design for me to test private functions, so please do not advise me to refactor my code. I do not want to break the encapsulation for the sake of unit testing.
I'm 99% certain this isn't possible and I'm intrigued to know why you would want to do this.
You should be unit testing the output of a given class, based on given inputs, regardless of what happens inside the class. You really want to allow someone to be able to change the implementation details so long as it doesn't change the expected outputs (defined by the unit test).
If you test private methods, any changes to the class are going to be tightly coupled to the unit tests. If someone wants to reshuffle the code to improve readability, or make some updates to improve performance, they are going to have to update the unit tests even though the class is still functioning as it was originally designed.
I'm sure there are edge cases where testing private methods might be beneficial but I'd expect in the majority of cases it's just not needed. You don't have to break the encapsulation, just test that your method calls give correct outputs... no matter what the code does internally.
Just create a public method called "unitTest" and call all your unit tests within that method. Throw an error when one of them fails and call it from your test framework:
try {
    myobject.unitTest();
} catch (e:Error) {
    // etc.
}
You cannot use describeType for that.
From the Livedocs - flash.utils package:
[...]
Note: describeType() only shows public properties and methods, and will not show
properties and methods that are private, package internal or in custom namespaces.
[...]
When the urge to test a private method is irresistible I just create a testable namespace for the method.
Declare a namespace in a file like this:
package be.xeno.namespaces
{
public namespace testable = "http://www.xeno.be/2015/testable";
}
Then you can use testable as a custom access modifier for the method you want to test, like this:
public class Thing1
{
use namespace testable;
public function Thing1()
{
}
testable function testMe() : void
{
}
}
You can then call the method by using the namespace in your tests:
public class Thing2
{
use namespace testable;
public function Thing2()
{
var otherThing : Thing1 = new Thing1();
otherThing.testMe();
}
}
Really though I think this is a hint that you should be splitting your functionality into a separate class.
I am trying to create my own EasyBinderDropDown that currently looks like this:
public class EasyBinderDropDown : DropDownList, ICanBindToObjectsKeyValuePair {
public void BindToProperties<TYPE_TO_BIND_TO>(
        IEnumerable<TYPE_TO_BIND_TO> bindableEnumerable,
        Expression<Func<TYPE_TO_BIND_TO, object>> textProperty,
        Expression<Func<TYPE_TO_BIND_TO, object>> valueProperty) { ... }
public bool ShowSelectionPrompt { get; set; }
public string SelectionPromptText { get; set; }
public string SelectionPromptValue { get; set; }
//...
}
Basically it is very helpful for binding to objects from code, since you just do something like _dropDown.BindToProperties(myCustomers, c => c.Name, c => c.Id) and it works for you; also, by setting ShowSelectionPrompt and SelectionPromptText I can easily have a "Select Customer" line. I don't want to ask so much about my specific implementation; rather, I am confused about how to write unit tests for some scenarios.
For example, my current tests cover the control being created properly during load and having its output render properly, but I am lost as to how to test what happens when the control gets posted back. Can anyone give me some advice on how to test that? I would prefer to do this without having to mock an HttpContext or anything like that. Is there a way I can simulate the control being rebuilt?
"I would prefer to do this without having to mock an HTTPContext or anything like that, Is there a way I can simulate the control being rebuilt."
By definition, you are not asking to "unit test"; you are looking for an "integration test". If you are not mocking the major dependencies, in this case the ASP.NET runtime components, then what you are testing is the integration between your control and ASP.NET.
If you do not want to mock out the HttpContext and friends, then I would suggest an automated web testing framework such as Selenium or NUnitAsp.
Update, based on the comment: don't have the code access IsPostBack or other ASP.NET stuff directly. Wrap them with simple classes/interfaces. Once you have done that, pass in mocks that implement those interfaces. This way you don't have to mock the whole HttpContext, just the pieces that matter to the code (which are really clear based on the interfaces involved).
Also, given it is an ASP.NET custom control, you don't want to force requirements on external things like dependency injection. Have a default (no-parameter) constructor that sets up the control to use the real ASP.NET stuff, and a constructor with more parameters that accepts the mocked versions.
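For example, here is a minimal sketch of wrapping the postback check behind an interface. IPostbackDetector and PagePostbackDetector are made-up names for illustration, not part of ASP.NET, and the control is shown simplified, without the binding members:
using System.Web.UI;
using System.Web.UI.WebControls;

public interface IPostbackDetector
{
    bool IsPostback { get; }
}

// Default implementation reads the real page; Page is only available once the
// control is in the control tree, so it is read lazily at call time.
public class PagePostbackDetector : IPostbackDetector
{
    private readonly Control _owner;
    public PagePostbackDetector(Control owner) { _owner = owner; }
    public bool IsPostback
    {
        get { return _owner.Page != null && _owner.Page.IsPostBack; }
    }
}

public class EasyBinderDropDown : DropDownList
{
    private readonly IPostbackDetector _postbackDetector;

    // Default constructor keeps the control usable from markup with no DI.
    public EasyBinderDropDown() { _postbackDetector = new PagePostbackDetector(this); }

    // Tests pass a fake detector (or other fakes) through this constructor.
    public EasyBinderDropDown(IPostbackDetector postbackDetector)
    {
        _postbackDetector = postbackDetector;
    }

    protected override void OnLoad(System.EventArgs e)
    {
        base.OnLoad(e);
        if (!_postbackDetector.IsPostback)
        {
            // first-load binding logic goes here
        }
    }
}
A test can then construct the control with a stub IPostbackDetector that returns true and exercise whatever the control does on postback, without any HttpContext.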
Initial answer:
It seems to me you are looking for a happy middle ground between unit tests and integration tests. You are working with a custom control, which can go wrong in different parts of the ASP.NET page lifecycle.
I would:
Check if you can move parts of the code out of the custom control into separate classes that you can more easily unit test.
For simple scenarios, rely on the functional tests of the rest of the project to catch any further issues with the control (use WatiN / Selenium RC).
For more complex scenarios, e.g. if the control will be used in different parallel projects or will be delivered to the public, set up some test pages and automate against them (again WatiN / Selenium RC); see the sketch below.
You write the tests with WatiN / Selenium RC in C# and run them in your "unit" test framework. Make sure to keep them separate from the unit tests, since they will clearly run slower.
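As a rough sketch of that test-page approach, here is what such a browser-driven test could look like using Selenium WebDriver (the successor to Selenium RC) with NUnit; the URL, element IDs and item text are placeholders for whatever test page hosts the control:
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using OpenQA.Selenium.Support.UI;

[TestFixture]
public class EasyBinderDropDownPostbackTests
{
    private IWebDriver _driver;

    [SetUp]
    public void SetUp()
    {
        _driver = new ChromeDriver();
    }

    [TearDown]
    public void TearDown()
    {
        _driver.Quit();
    }

    [Test]
    public void Selection_survives_postback()
    {
        _driver.Navigate().GoToUrl("http://localhost:1234/TestPages/EasyBinderDropDown.aspx");

        // Pick an item and trigger a postback via a submit button on the test page.
        var dropDown = new SelectElement(_driver.FindElement(By.Id("customersDropDown")));
        dropDown.SelectByText("Some Customer");
        _driver.FindElement(By.Id("submitButton")).Click();

        // After the postback the control should have rebuilt itself with the same selection.
        var reloaded = new SelectElement(_driver.FindElement(By.Id("customersDropDown")));
        Assert.AreEqual("Some Customer", reloaded.SelectedOption.Text);
    }
}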
PS: I haven't used MSTest's support for ASP.NET; it might have some support for what you are looking for.