I created some new test cases for a couple of new features I would like to request for addition to Flyway. These features are mainly changes to the Flyway class. However, there seem to be two unit test categories, SmallTests and MediumTests. Is there some sort of criterion for determining which one my test cases should fall under?
Yes. These are based on the test categories described by the Google Testing Blog: http://googletesting.blogspot.de/2010/12/test-sizes.html
But make sure to raise issues for your changes and discuss them first, to avoid doing work for nothing.
I want to have an option on the cucumber report to mute/hide scenarios with a given tag from the results and numbers.
We have a Bamboo build that runs our Karate repository of features and scenarios. At the end it produces nice cucumber HTML reports. On the "overview-features.html" page I would like an option added at the top right (alongside "Features", "Tags", "Steps" and "Failures") that says "Excluded Fails" or something like that. When clicked, it would provide the exact same information that overview-features.html does, except that any scenario tagged with a special tag, for example #bug=abc-12345, is removed from the report and excluded from the numbers.
Why I need this: we have some existing scenarios that fail. They fail due to defects in our own software that might not get fixed for 6 months to a year. We've tagged them with a specific tag, "#bug=abc-12345". I want them muted/excluded from the cucumber report that's produced at the end of the Bamboo build, so I can quickly look at the number of passed features/scenarios and see whether it's 100% or not. If it is, great, that build is good. If not, I need to look into it further, as we appear to have some regression. With these scenarios that are expected to fail (and will continue to fail until they're resolved) included in the report, it is very tedious and time consuming to go through all the individual feature file reports, look at the failing scenarios and then look into why. I don't want them removed completely, because when they start to pass I need to know, so I can go back and remove the tag from the scenario.
Any ideas on how to accomplish this?
Karate 1.0 has overhauled the reporting system with the following key changes.
- after the Runner completes you can massage the results and even re-try some tests
- you can inject a custom HTML report renderer
This will require you to get into the details (some of this is not documented yet) and write some Java code. If that is not an option, you have to consider that what you are asking for is not supported by Karate.
If you are willing to go down that path, here are the links you need to get started.
a) Example of how to "post process" result-data before rendering a report: RetryTest.java and also see https://stackoverflow.com/a/67971681/143475
b) The code responsible for "pluggable" reports, where in theory you can implement a new SuiteReports. The Runner also has a suiteReports() method you can call to provide your implementation.
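As a rough orientation only (not the pluggable SuiteReports approach above), the sketch below shows what driving the Karate 1.0 Runner from Java can look like, using tag negation so that scenarios carrying a known-bug tag are left out of the run and therefore out of the pass/fail numbers. The feature path, thread count and the "@bug" tag name are placeholder assumptions, and excluding a tag this way skips those scenarios entirely, which is not quite the report-level muting asked for.

// Minimal sketch, assuming the documented Karate 1.0 Runner/Results API
// (Runner.path / tags / parallel / outputCucumberJson / getFailCount).
// "@bug" is an example tag name; a value-carrying tag like "bug=abc-12345"
// may need the tag-value selector syntax described in the Karate docs.
import com.intuit.karate.Results;
import com.intuit.karate.Runner;

public class KnownBugsExcludedRunner {

    public static void main(String[] args) {
        Results results = Runner.path("classpath:features") // placeholder feature root
                .tags("~@bug")            // "~" negates: skip scenarios tagged @bug
                .outputCucumberJson(true) // keep the JSON the cucumber HTML report is built from
                .parallel(5);             // placeholder thread count

        // Non-zero here means a failure that is NOT one of the known, tagged bugs.
        System.out.println("Failures excluding known bugs: " + results.getFailCount());
    }
}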
Also note that there is an experimental "doc" keyword, by which you can inject custom HTML into a test-report: https://twitter.com/getkarate/status/1338892932691070976
Also see: https://twitter.com/KarateDSL/status/1427638609578967047
I have two files, test_utils.r and test_core.r, which contain tests for various utilities and some core functions, separated into different contexts. I can control the flow of tests within each file by moving my test_that() statements around.
But I am looking for a way to create different workflows, for example ensuring that at run time tests from context A_utils run first, followed by tests from context B_Core, followed by context B_Utils.
Any ideas on how this can be achieved?
BrajeshS,
I have an idea. Have you tried the skip() function, available in version 0.9 or later? The testthat documentation describes it on page 7:
This function allows you to skip a test if it's not currently available. This will produce an informative message, but will not cause the test suite to fail.
It was introduced to skip tests if an internet connection or an API is not available. Depending on your workflow, you could then jump over tests.
To see example code using skip_on_cran, look at wibeasley's answer where he provides test code in Rappster's reply - https://stackoverflow.com/a/26068397/4606130
I am still getting to grips with testthat. Hope this helps you.
PHPUnit has a skeleton generator that works from an existing class.
But it only works once.
If new methods are added later (because a dev doesn't work with TDD), the test file becomes incomplete.
Is there a tool to generate a skeleton for the uncovered methods?
I don't know any, and I also don't see the need. That skeleton generator generates one test method per function it finds, but you cannot test all use cases of a slightly advanced function within only one test function.
Also, the name of the test function is generated - but better names can and should be created to describe the intended test case or behavior of the tested function. Like "testGetQuoteFromStockMarket" vs. "testGettingMicrosoftQuoteFromStockMarketShouldReturnQuoteObject" and "testGettingUmbrellaCorporationFromStockMarketShouldFailWithException".
Note that you cannot test the throwing of an exception together with cases that do not throw an exception inside the same test method.
So all in all there simply is no use case to create "one test method per method" at all, and if you add new methods, it is your task to manually add the appropriate number of new tests for that - the generated code coverage statistics will tell you how well you did, or which functions are untested.
AFAIK there is no built-in PHPUnit functionality to update the auto-generated test code; that is typical of most code generators.
The good news is that each of the functions is added quite cleanly and independently. So what I would do is rename your existing unit test file to *.old, regenerate a fresh test file, then use meld (or the visual diff tool of your choice) to merge in the new functions.
Aside: automatic test generation is really only needed at the start of a new class anyway. The idea of exactly one unit test per function is more about generating nice coverage stats to please your boss. From the point of view of building good software, some functions will need multiple tests, some functions (getters and setters come to mind) do not really need any, and sometimes multiple functions are best covered by a single unit test (getters and setters again come to mind).
Having toyed with the concept in the past, I am interested in using multivariate testing on my company's Sitecore website. There are a number of places where I feel we can definitely improve sales through the use of A/B testing:
- Running two entirely different templates to see what layouts work better for users
- Running a number of different Sublayouts (forms) on the site to see which ones people are more likely to fill out
- Trialling different content: running two different sets of copy to see if users are more likely to stay on the page
I want to use the Marketing Suite within Sitecore, and I want to be able to measure who visits pages more and count, out of two or more sublayout forms, which form is used the most. Sadly, I have no experience with the OMS and am struggling to see how one actually implements these things.
Let's say I have a content item with a bunch of sublayouts attached to it within its template. Can someone help guide me towards a way of achieving the three things I want to run multivariate testing on?
EDIT: On the subject of the two sublayouts I want to test on a template: I have two sublayouts, which are both simple ASP.NET email forms. Once a user fills in the form, the contents of the form are written to a database and an email is sent (using Sitecore.Context.Item to get an "Email From" field from the content item that runs the form).
This is where I get stuck. A number of the sublayouts I have don't seem to have any "content" that needs pulling from a data source. The only content I can see in the case of the two forms I want to test is the "Email To" fields. So, if I were to abstract those away into their own data templates and then add those as data sources, I assume that I would then have to change my code for these to stop using Sitecore.Context.Item?
The point where I get stuck is with the data sources for the Multivariate Test Variables and the data sources for the Sublayouts. If I have two data templates containing the Email fields for each, two sublayouts that contain the forms that need testing and two multivariate variables, what goes where?
I believe you can read about it in the Analytics Configuration Reference (PDF link) under section 2.2.
You essentially create a MV test that wraps over potential data sources of a sublayout. The test then randomly assigns a DataSource, so your sublayouts need to be written to work with a DataSource.
With Sitecore 8 released, Multivariate Testing is now supported out of the box, as well as A/B Testing.
You can run two entirely different templates to see which Layout works best for the user by creating a Page Test in Sitecore's Optimization Tool on the Launch Pad. When creating a Page Test you can select the current version of the Item and then create a new version of the Item with the different Layout. This can also be done for Content on the Page.
After that you need to decide how a winner can be chosen, e.g. most goals completed by users, registrations etc. Sitecore will then automatically run the test for you, showing A and B to various users, and ultimately choose a winner based on the Test Objective. You can choose a winner manually or let Sitecore choose automatically after a set Duration.
Creating a Multivariate Test on a number of different Sublayouts, as well as imagery, personalisation, content etc., is a little more interesting. Creating a Multivariate Test is done via Workflow Actions; I've recently posted a blog on how to add Multivariate Testing to workflow.
Approving with a Test will prompt Sitecore to create a Multivariate Test for all variables (Sublayouts, Content, Personalization etc). It creates an 'Experience' for every possible combination of these variables and tests them against each other.
For a more in-depth explanation and guide, I have recently posted a tutorial on creating a Multivariate Test in Sitecore.
There are two trainings that you (and a developer on your team) should really consider attending: OMS Certified Marketer and OMS .NET Developer.
Working with a Sitecore Certified OMS .NET Developer, you will be able to accomplish your marketing objectives. This is what Sitecore Training is for!
Please see the following and register for the next available trainings:
http://www.sitecore.net/Training/Course-Overview/OMS-11-Certified-Marketer.aspx
http://www.sitecore.net/Training/Course-Overview/OMS-11-NET-Developer.aspx
I've just started working in a continuous integration environment (TeamCity). I understand the basic idea of not getting so abstracted out in your code that you are never able to build it to test functionality, etc. However, when there is deep coding going on, occasionally it will take me several days to get buildable code--but in the interim other team members may need to see my code.
If I check the code in, it breaks the build. However, if I don't check it in, my team members are unable to see the most recent work. I'm wondering how this situation is best dealt with.
A tool like Code Collaborator (Google it; smartbear.com is down at the moment) would allow your peers to see your code without you committing it. Instead, you just submit it for review.
It's a little extra trouble for them to run it though.
Alternatively, set up a second branch/fork of your codebase for you to work in; your peers can sync to that, and it won't break the build server. When you're done working in your own branch, you can merge it back with mainline/trunk/whatever.
In a team environment, it is usually highly undesirable for anybody to be in an unbuildable state for days. I try to break large code deliveries into as many buildable check-ins as I can. At minimum, create and check in your interfaces even if you do not have the implementation ready, so others can start to code against them.
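To make that last point concrete, here is a small, hypothetical Java sketch (all names invented for illustration): the interface plus a temporary throwing stub can be checked in early, the shared branch keeps building, and teammates can already compile and mock against the contract while the real implementation is still days away.

// Hypothetical example of checking in an interface before the implementation is ready.
// The contract compiles on the shared branch and others can code against it,
// while the placeholder makes it obvious the real work is still pending.
public interface ReportExporter {

    // Exports the given report and returns the location it was written to.
    String export(Report report);
}

// Minimal stand-in type, only here to keep the sketch self-contained.
class Report {
}

// Temporary placeholder so callers compile (and can be mocked in tests);
// replaced once the real exporter lands.
class NotYetImplementedReportExporter implements ReportExporter {

    @Override
    public String export(Report report) {
        throw new UnsupportedOperationException("ReportExporter is not implemented yet");
    }
}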
One of the primary benefits of Continuous Integration is that it shows you when things break and when things are fixed. If you commit those pieces of code that break the system, other developers will be forced to get it into a working state before continuing the development. This is a good thing because it doesn't allow code changes to be made on top of broken things (which could cause issues where a co-workers code worked on the broken system, but doesn't work once the initial break is fixed).
This is also a prime example of a good time to use branches/forks, and simply merge to the trunk when all the broken things are fixed.
I am in exactly the same situation here. As a build engineer, I have this working beautifully.
First of all, let me break down the branches/projects. @Dolph Mathews has already mentioned branching and, tbh, that is an essential part of getting your setup to work.
Take the main code base and integrate it into several personal or "smaller" team branches, e.g. branch_team_a, branch_team_b, branch_team_c.
Then set up TeamCity to build against these branches under different project headings, so you will eventually have the following: Project Main, Project Team A, Project Team B, Project Team C.
Thirdly, set up developer check-ins so that they run pre-commit builds for the broken-down branches. You can find the TeamCity plugin for this under Tools and Settings; they have it for IntelliJ and Visual Studio.
You now have your 3-tier setup:
- A developer kick-starts a remote-run pre-commit build from their desktop against their project. If it passes, it gets checked into the repository, i.e. branch_team_a.
- Project Team A passes after several check-ins, at which point you integrate your changes from branch_team_a to the main branch.
- Project Main builds!
If all is successful then you have a candidate release. If one part fails (Project Team A, B or C), it doesn't get checked into main. This has been my tried and tested method and works every time. It also vastly improves team communication.