Selenium WebDriver - tests that use other tests

As an example, I have:
@Test
public void login() {
    // code omitted
}
That tests logging into a site. Then I want to do other tests that start with logging in, so I've got:
@Test
public void sendUserInvite() {
    login();
    // other code omitted
}
Something intuitively tells me this is really bad practice, but at the same time, if the login test does what I need it to, why not re-use it this way? Can anyone clarify this? After a while I end up running several tests at the start of another test, because they are the preconditions for carrying out that particular test.

If you're using TestNG, you can use @BeforeClass, @BeforeSuite, @BeforeTest, @BeforeMethod, etc. to run your preconditions at the appropriate stage before your @Test methods.
For example, say you have two tests in your suite XML:
<suite name="Suite0" verbose="1" >
<test name="name0" >
<classes>
<class name="Test0" />
</classes>
</test>
<test name="name1">
<classes>
<class name="Test1"/>
</classes>
</test>
</suite>
Let's assume Test0 and Test1 both extend class BaseTest
Then in BaseTest:
public class BaseTest {

    @BeforeTest
    public void login() {
        // do the login here
    }
}
So, when the suite is launched, the login method will be invoked. Just note that @BeforeTest runs once for every <test> in the suite XML, not before every method annotated with @Test; this sometimes causes confusion.
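If you want the login to run before every test method instead, @BeforeMethod is the annotation to reach for. A minimal sketch (the class name, hook names, and driver setup are illustrative, not from the original answer):

import org.openqa.selenium.WebDriver;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;

public class BasePerMethodTest {

    protected WebDriver driver; // assumed to be created in another setup hook

    @BeforeMethod
    public void loginBeforeEachTest() {
        // perform the login steps here, once per @Test method
    }

    @AfterMethod
    public void logoutAfterEachTest() {
        // log out so the next @Test method starts from a clean state
    }
}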
Update:
If you're using JUnit, you can use @Before; a method annotated with it will run before every @Test in the class. In other words, it is the same as @BeforeMethod in TestNG:
@Before
public void pre() {
    // your login here
}

@Test
public void testA() {
    // prints A
}

@Test
public void testB() {
    // prints B
}

@After
public void end() {
    // logout
}
Order of execution:
login
A
logout
login
B
logout

According to the following links, JUnit test cases are designed to run in isolation, and each test should be independent of the others. I believe you have to reconsider your design and go for a test framework like TestNG, which suits your requirements (see the sketch after these links).
Choose order to execute JUnit tests
running a subset of JUnit @Test methods
How to run test methods in specific order in JUnit4?
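For instance, TestNG's dependsOnMethods attribute expresses the "this test needs login first" relationship directly. A minimal sketch (the class name and method bodies are illustrative, not taken from the question):

import org.testng.annotations.Test;

public class InviteTests {

    @Test
    public void login() {
        // log into the site
    }

    // Runs only after login() has passed; it is skipped if login() fails.
    @Test(dependsOnMethods = "login")
    public void sendUserInvite() {
        // send the invite, reusing the session established by login()
    }
}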

If you feel the need to call one test method from another test method, that's a good sign that you need to extract a class.
I suggest moving login to a PageObject class:
public class HomePage {

    private final WebDriver driver;

    public HomePage(WebDriver driver) {
        this.driver = driver;
    }

    public WelcomePage login(String userName, String password) {
        signIn(userName, password);
        Assert.assertEquals("sign in failed", userName, getSignedInUser());
        return new WelcomePage(driver, userName);
    }

    private void signIn(String userName, String password) {
        // Add code to perform the sign in
    }

    public String getSignedInUser() {
        // Add code to check the current page to see who is reported
        // as the signed in user
        return null; // placeholder until the real check is implemented
    }
}
Then your tests look like this:
@Test
public void login() {
    HomePage page = new HomePage(driver);
    page.login(TEST_USER_NAME, TEST_PASSWORD);
}

@Test
public void sendUserInvite() {
    WelcomePage page = new HomePage(driver)
            .login(TEST_USER_NAME, TEST_PASSWORD);
    page.sendUserInvite(NON_USER_EMAIL_ADDRESS);
}
Of course, your page objects may also end up with some code duplication (for instance, getting the signed-in user may be a common concern). When this happens, you can either extract a base class for all of your page objects or a helper class for the common logic; a rough sketch follows.
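As one possible shape for that base class (the name BasePage and the placeholder body are illustrative only):

import org.openqa.selenium.WebDriver;

public abstract class BasePage {

    protected final WebDriver driver;

    protected BasePage(WebDriver driver) {
        this.driver = driver;
    }

    // A concern shared by every page object, such as reading the signed-in user,
    // lives here once; HomePage, WelcomePage, etc. simply extend BasePage.
    public String getSignedInUser() {
        // add code to read the signed-in user from the page header
        return null; // placeholder
    }
}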

Related

How to check markup after I get page data in OnInitializedAsync method?

I'm new to Blazor and bUnit. I have a component that renders an edit form, and I get the values for the form in my OnInitializedAsync method.
I'm having trouble working out how to use cut.WaitForState() or cut.WaitForAssertion().
Here's my razor code:
@page "/{AppId:guid}/app-settings-edit"

<section class="app-settings-edit">
    <h1 class="page-title">Application Settings</h1>

    @if (InitializedComplete)
    {
        <p>Hello World</p>
...
And my code behind:
public partial class AppSettingsEdit
{
protected bool InitializedComplete;
[Parameter]
public Guid AppId { get; set; }
[ValidateComplexType]
public AppSettings AppSettings { get; set; } = new AppSettings();
[Inject]
public IAppSettingsDataService AppSettingsDataService { get; set; }
protected override async Task OnInitializedAsync()
{
AppSettings = await AppSettingsDataService.Get(AppId);
InitializedComplete = true;
}
...
And here's my Test:
[Fact]
public void MyFact()
{
Services.AddSingleton<IAppSettingsDataService, MockAppSettingsDataService>(x => new MockAppSettingsDataService(x.GetRequiredService<HttpClient>()));
var cut = RenderComponent<AppSettingsEdit>(parameters => parameters
.Add(p => p.AppId, Guid.Parse("55E5097B-B56A-40D7-8A02-A5B94AAAD6E1"))
);
Assert.NotNull(cut.Instance.AppSettingsDataService);
cut.WaitForState(() => cut.Find("p").TextContent == "Hello World", new TimeSpan(0, 0, 5));
cut.MarkupMatches("<p>Hello World</p>");
}
When I debug the test, I can see OnInitializedAsync firing; however, my markup never changes to include 'Hello World' and the WaitForState() call fails.
Are you certain that the task returned from your AppSettingsDataService.Get() method ever completes?
I would make sure that the task returned from AppSettingsDataService.Get() is already completed; otherwise you need a way to complete the task after the component is rendered. There are many ways to do this, and it all depends on how your mock is implemented.
As for your WaitFor, you can just use the WaitForAssertion method in this case, i.e.: cut.WaitForAssertion(() => cut.MarkupMatches("<p>Hello World</p>"));
A little background:
The WaitFor* methods are used when the component under test renders asynchronously, since the test, running on a different thread, doesn't know when that will happen.
In general, you should never need to set a custom timeout: the default is 1 second, and the WaitFor* methods retry the assertion/predicate every time a render happens. A custom timeout is only needed when whatever triggers the rendering takes more than one second, e.g. if you are using bUnit for end-to-end testing and pulling data from a real web service.

Allure report logs only the first failure, and the test ends without running the remaining steps

I'm using Java + TestNG + Allure. I need to get all test failures into the Allure report, not only the first failure of the test, and the test should run from beginning to end despite the failed steps.
To report every failure in the Allure report, we have to make a few modifications around the Allure lifecycle API. The idea is to report any failing sub-step as a failure, keep executing the remaining steps, and then mark the main test step as failed at the end; for this we can use soft assertions. I created a class called AllureLogger with five methods:
startTest(), endTest(), markStepAsPassed(...), markStepAsFailed(...), and logStep().
// Imports assume the Allure 2 Java API (io.qameta.allure) and log4j.
import java.util.UUID;

import org.apache.log4j.Logger;
import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;
import org.testng.asserts.SoftAssert;

import io.qameta.allure.Allure;
import io.qameta.allure.model.Status;
import io.qameta.allure.model.StepResult;

public class AllureLogger {

    public static Logger log = Logger.getLogger("devpinoylog");

    private static StepResult result_fail;
    private static StepResult result_pass;
    private static String uuid;
    private static SoftAssert softAssertion;

    public static void startTest() {
        softAssertion = new SoftAssert();
    }

    public static void logStep(String description) {
        log.info(description);
        uuid = UUID.randomUUID().toString();
        result_fail = new StepResult().withName(description).withStatus(Status.FAILED);
        result_pass = new StepResult().withName(description).withStatus(Status.PASSED);
    }

    public static void markStepAsFailed(WebDriver driver, String errorMessage) {
        log.fatal(errorMessage);
        Allure.getLifecycle().startStep(uuid, result_fail);
        Allure.getLifecycle().addAttachment(errorMessage, "image", "JPEG",
                ((TakesScreenshot) driver).getScreenshotAs(OutputType.BYTES));
        Allure.getLifecycle().stopStep(uuid);
        softAssertion.fail(errorMessage);
    }

    public static void markStepAsPassed(WebDriver driver, String message) {
        log.info(message);
        Allure.getLifecycle().startStep(uuid, result_pass);
        Allure.getLifecycle().stopStep(uuid);
    }

    public static void endTest() {
        // assertAll() fails the test if any step was marked as failed,
        // after all steps have already been executed and reported.
        softAssertion.assertAll();
        softAssertion = new SoftAssert();
    }
}
In the class above we call methods on the Allure lifecycle and add soft assertions on top of them.
Every time we start a test method in the test class we call startTest(), and at the end of the test method we call endTest(). Inside the test method, each sub-step uses a try/catch block to mark itself as passed or failed. Please check the test method below as an example:
@Test(description = "Login to application and navigate to Applications tab")
public void testLogin() {
    AllureLogger.startTest();
    userLogin();
    navigatetoapplicationsTab();
    AllureLogger.endTest();
}
The test method above logs into an application and then navigates to the Applications tab. Inside it there are two methods that will be reported as sub-steps: userLogin(), which logs into the application, and navigatetoapplicationsTab(), which navigates to the Applications tab. If any sub-step fails, both the sub-step and the main step are marked as failed, and the remaining steps are still executed.
The bodies of the two methods called from the test method are defined as follows:
private void userLogin() {
    AllureLogger.logStep("Login to the application");
    try {
        /*
         * Write the logic here
         */
        AllureLogger.markStepAsPassed(driver, "Login successful");
    } catch (Exception e) {
        AllureLogger.markStepAsFailed(driver, "Login not successful");
    }
}

private void navigatetoapplicationsTab() {
    AllureLogger.logStep("Navigate to application Tab");
    try {
        /*
         * Write the logic here
         */
        AllureLogger.markStepAsPassed(driver, "Navigate to application Tab successful");
    } catch (Exception e) {
        e.printStackTrace();
        AllureLogger.markStepAsFailed(driver, "Navigate to application Tab failed");
    }
}
Every time an exception is thrown, it is caught in the catch block and reported in the Allure report, and the soft assertions let all of the remaining steps execute. In a report generated with this technique, the main step is marked as failed while the remaining steps have still been executed.

How to handle waits during a data-driven test using TestNG in a WebDriver script

This is my scenario: I use WebDriver with TestNG to do data-driven testing. I am observing that the data I 'see' in the web app, which is provided by the @DataProvider, is missing some values. For example, if I have an array such as {"1","2","3","4","5"} and receive these values in the WebDriver script via the TestNG @DataProvider, I observe in the web GUI that initially 2 might be displayed, then in the next iteration 5 is displayed, and then the test stops.
I am assuming that TestNG is not waiting for WebDriver to complete the function or process.
Here is my sample code:
@Test(dataProviderClass = MyDataProviders.class)
public class MyWebDriverClass {

    @Test(dataProvider = "theProviderName")
    public void providerHomeCreateuser(String arg1, String arg2) {
        // ..input arg1, arg2 to text fields..
    }
}
I understand that somewhere I need to put a Thread.wait(); could anybody guide me on this?
The data provider method is as follows:
public class MyDataProviders {
    ...
    ...
    @DataProvider(name = "theProviderName")
    public static Object[][] getData() throws Exception {
        Object[][] retObject = getTableArray("src\\com\\abcd\\resource\\TestData.xls", 5, "MyTestData");
        return retObject;
    }
}
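As a side note (this sketch is not part of the original question): rather than Thread.sleep(), WebDriver scripts usually handle this with an explicit wait before each interaction, so that every @DataProvider iteration finishes its input before the next one starts. A minimal sketch, assuming Selenium 4's Duration-based constructor and an illustrative helper name and locator:

import java.time.Duration;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class WaitHelper {

    // Waits until the target field is visible, then types into it.
    public static void typeWhenVisible(WebDriver driver, By locator, String value) {
        WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(10));
        wait.until(ExpectedConditions.visibilityOfElementLocated(locator)).sendKeys(value);
    }
}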

NUnit and web applications

I have begun studying unit testing with NUnit. I know that this type of testing is used to test classes, functions, and the interaction between those functions.
In my case I develop ASP.NET web applications.
How can I use this kind of testing to test my pages (since each page is considered a class, along with the methods it uses), and in which sequence? I have these layers:
1. Interface layer (the .cs of each page).
2. Data access layer (a class for each entity) (DAL).
3. Database layer (which contains the connection to the database: opening the connection, closing the connection, etc.).
4. Business layer (sometimes for calculations or some separate logic).
How do I test the methods that make a connection to the database?
How do I make sure that my testing is not a waste of time?
There are unit tests and integration tests. Unit testing is testing single components/classes/methods/functions and the interaction between them, but with only one real object (the system under test, or SUT) and test doubles. Test doubles can be divided into stubs and mocks. Stubs provide prepared test data to the SUT; that way you isolate the SUT from its environment, so you don't have to hit a database, web or WCF services, and so on, and you get the same input data every time. Mocks are used to verify that the SUT works as expected: the SUT calls methods on the mock object without even knowing it is not a real object, and then you verify that the SUT works by asserting on the mock object. You can write stubs and mocks by hand or use one of many mocking frameworks, one of which is http://code.google.com/p/moq/
If you want to test interaction with the database, that's integration testing, and it is generally a lot harder: for integration testing you have to set up external resources in a well-known state.
Let's take your layers:
1. Interface layer: you won't be able to unit test it. The page is too tightly coupled to the ASP.NET runtime. You should try not to have much code in the code-behind; just call some objects from your code-behind and test those objects. You can look at the MVC design pattern. If you must test your pages, look at http://watin.org/: it automates your browser, clicks buttons on the page, and verifies that the page displays the expected results.
2. Data access layer: this is integration testing. You put data in the database, then read it back and compare the results. Before or after each test you have to bring the test database to a well-known state so that the tests are repeatable. My advice is to set up the database before the test runs rather than after; that way you will be able to check what's in the database when a test fails.
3. Database layer: I don't really know how that differs from point 2.
4. Business layer: this is unit testing. Create the object in the test, call its methods, and verify the results.
How to test methods that make connections to the database is addressed in point 2.
How not to waste time? That will come with experience. I don't have general advice other than: don't test properties that don't have any logic in them.
For great info about unit testing look here:
http://artofunittesting.com/
http://www.amazon.com/Test-Driven-Development-Kent-Beck/dp/0321146530
http://www.amazon.com/Growing-Object-Oriented-Software-Guided-Tests/dp/0321503627/ref=sr_1_2?ie=UTF8&s=books&qid=1306787051&sr=1-2
http://www.amazon.com/xUnit-Test-Patterns-Refactoring-Code/dp/0131495054/ref=sr_1_1?ie=UTF8&s=books&qid=1306787051&sr=1-1
Edit:
SUT, CUT - system or class under test. That's what you test.
Test doubles - the name comes from stunt doubles: they do the dangerous scenes in movies so that the real actors don't have to. Same here: test doubles replace real objects in tests so that you can isolate the SUT/CUT from its environment.
Let's look at this class:
public class NotTestableParty
{
public bool ShouldStartPreparing()
{
if (DateTime.Now.Date == new DateTime(2011, 12, 31))
{
Console.WriteLine("Prepare for party!");
return true;
}
Console.WriteLine("Party is not today");
return false;
}
}
How will you test that this class does what it should on New Year's Eve? You would have to run it on New Year's Eve :)
Now look at the modified Party class.
Example of a stub:
public class Party
{
private IClock clock;
public Party(IClock clock)
{
this.clock = clock;
}
public bool ShouldStartPreparing()
{
if (clock.IsNewYearsEve())
{
Console.WriteLine("Prepare for party!");
return true;
}
Console.WriteLine("Party is not today");
return false;
}
}
public interface IClock
{
bool IsNewYearsEve();
}
public class AlwaysNewYearsEveClock : IClock
{
public bool IsNewYearsEve()
{
return true;
}
}
Now in the test you can pass the fake clock to the Party class:
var party = new Party(new AlwaysNewYearsEveClock());
Assert.That(party.ShouldStartPreparing(), Is.True);
And now you know whether your Party class works on New Year's Eve. AlwaysNewYearsEveClock is a stub.
Now look at this class:
public class UntestableCalculator
{
private Logger log = new Logger();
public decimal Divide(decimal x, decimal y)
{
if (y == 0m)
{
log.Log("Don't divide by 0");
}
return x / y;
}
}
public class Logger
{
public void Log(string message)
{
// .. do some logging
}
}
How will you test that your class logs the message? Depending on where you log it, you would have to check a file, a database, or some other place; that wouldn't be a unit test but an integration test. To unit test it, you do this:
public class TestableCalculator
{
private ILogger log;
public TestableCalculator(ILogger logger)
{
log = logger;
}
public decimal Divide(decimal x, decimal y)
{
if (y == 0m)
{
log.Log("Don't divide by 0");
}
return x / y;
}
}
public interface ILogger
{
void Log(string message);
}
public class FakeLogger : ILogger
{
public string LastLoggedMessage;
public void Log(string message)
{
LastLoggedMessage = message;
}
}
And in the test you can:
var logger = new FakeLogger();
var calculator = new TestableCalculator(logger);
try
{
calculator.Divide(10, 0);
}
catch (DivideByZeroException ex)
{
Assert.That(logger.LastLoggedMessage, Is.EqualTo("Don't divide by 0"));
}
Here you assert on the fake logger. The fake logger is a mock object.

RhinoMocks Event Subscription

Being new to Rhino Mocks and unit testing, I have come across an issue that I cannot seem to resolve (no matter how much documentation I read).
The issue is this: I have created an interface that exposes 5 events (to be used for a view in ASP.NET and the MVP Supervisory Controller pattern..... I know, I should be using MVC, but that's a whole other issue). Anyway, I want to test that when a certain event fires on the view, we'll call it "IsLoaded", a method inside my Presenter is called and, using dependency injection, a value is returned from the dependency and set on the view. Here is where the problem starts: when I use Expect.Call(Dependency.GetInfo()).Return(SomeList), the call never executes (without the mock.ReplayAll() method being invoked). When I do invoke the ReplayAll method, I get ExpectationExceptions because of the subscriptions the Presenter object makes to the other events exposed by the view interface.
So, for me to test that IView.IsLoaded has fired, I want to verify that IView.ListOfSomething has been updated to match the list I passed in via Expect.Call(). However, when I set that expectation, the other event subscriptions (which occur straight out of the Presenter's constructor) fail the #0 expectations of the test. What I get is that view.Save += this.SaveNewList throws a Rhino Mocks ExpectationViolationException.
My million dollar question is this: is it necessary to set expectations for ALL of my events (via [SetUp]), or is there something I'm missing/not understanding about how unit testing or Rhino Mocks works?
Please bear in mind that I am extremely new to unit testing, and therefore Rhino Mocks. If it appears I don't know what I'm talking about, please feel free to point that out.
I'm working on a project where we used MVP and Rhino Mocks as well. What we did was simply expect all event subscriptions in every test.
private void SetupDefaultExpectations()
{
_mockView.Initializing += null; LastCall.IgnoreArguments();
_mockView.SavingChanges += null; LastCall.IgnoreArguments();
}
Then we built an extension method on IMockedObject (from Rhino Mocks) to trigger events in the unit tests and unwrap exceptions so that they can be asserted in the standard NUnit way.
static class IMockedObjectExtension
{
public static void RaiseEvent(this IMockedObject mockView, string eventName, EventArgs args)
{
EventRaiser eventraiser = new EventRaiser(mockView, eventName);
try
{
eventraiser.Raise(mockView, args);
}
catch (TargetInvocationException ex)
{
throw ex.InnerException;
}
}
public static void RaiseEvent(this IMockedObject mockView, string eventName)
{
RaiseEvent(mockView, eventName, EventArgs.Empty);
}
}
This could then be used from the unit test like this
using(_mocks.Record())
{
Expect.Call(dependency.GetInfo()).Return(someList);
}
using(_mocks.Playback())
{
Presenter presenter = new Presenter(_mockView, dependency);
(_mockView as IMockedObject).RaiseEvent("SavingChanges");
}
To eliminate duplication between presenter tests, we refactored this into a BasePresenterTest base class which sets up this basic structure for all presenter tests and exposes helper methods to its subclasses.
public abstract class BasePresenterTest<VIEW> where VIEW : IBaseView
{
protected MockRepository _mocks;
protected VIEW View { get; private set; }
protected abstract void SetUp();
protected abstract void TearDown();
protected abstract void SetupDefaultExpectations();
[SetUp]
public virtual void BaseSetUp()
{
_mocks = new MockRepository();
View = _mocks.CreateMock<VIEW>();
SetUp();
}
[TearDown]
public virtual void BaseTearDown()
{
TearDown();
View = null;
_mocks = null;
}
protected virtual void BaseSetupDefaultExpectations()
{
//Setup default expectations that are general for all views
SetupDefaultExpectations();
}
protected virtual IDisposable Record()
{
IDisposable mocksRecordState = _mocks.Record();
BaseSetupDefaultExpectations();
return mocksRecordState;
}
protected virtual IDisposable Playback()
{
return _mocks.Playback();
}
protected void RaiseEventOnView(string eventName)
{
(View as IMockedObject).RaiseEvent(eventName);
}
}
This eliminates a lot of code from the tests in our project.
We still use an old version of Rhino Mocks, but I will try to update this once we move to a later version.
