In my workplace we use scenario-based testing. However, whenever something is fixed or a new patch is added, new scenarios are added; as a result, the list keeps getting longer and longer, and it now takes more than 3 days to test the application.
Is there a way to do thorough testing without it taking such a long time?
What do you use?
Thanks
Only 3 days to test your application! We've got test jobs that run for maybe 15 days. And I guess other lurkers around here can tell you that they have even bigger test jobs; you know the drill -- when I were a lad we didn't even have a hole in't' road to live in.
But seriously, 3 days to fully test a release candidate with a benefit stream worth O(USD10^7) doesn't seem outrageous to me. On the other hand, if it's taking you 3 days to test changing one field on a GUI from 12 characters to 24 characters, then that does seem a bit too much. I think your question might be better phrased as 'How much of our development time should be spent on testing?' and the answer might be anything from 10% -- 50% (possibly higher for safety-critical systems). If you are spending 2 days developing a patch, then testing should probably take no more than half a day.
And yes, the scenario where your test suite expands as your application expands is very familiar. When we add a new bit of functionality we tend to add new tests; a better approach -- one we never have time for, though we always have time to deal with the consequences of not taking it -- is to modify existing tests where that is what the change really calls for. Modify code -> modify tests; add new code -> add new tests.
Yes, we use automated testing as much as we can; we use a lash-up of bash scripts, python programs and make to drive our automated tests. The processors we use never complain that testing is boring and repetitive, so we have no ethical qualms about working the poor dawgs close to heat-death. Sadly local labour laws prevent the same robust management principles being applied to the carbon-based life forms in our offices.
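For the curious, a driver like that can be quite small. Below is a minimal sketch of one way it might look -- a Python program that shells out to per-scenario scripts and reports failures. The directory layout and script names are invented for illustration, not taken from the setup described above:

    #!/usr/bin/env python3
    # Minimal sketch of a scenario-test driver: a Python program that shells
    # out to per-scenario scripts. The tests/scenarios layout is hypothetical.
    import subprocess
    import sys
    from pathlib import Path

    SCENARIO_DIR = Path("tests/scenarios")  # one executable script per scenario

    def run_scenarios() -> int:
        failures = 0
        for script in sorted(SCENARIO_DIR.glob("*.sh")):
            print(f"RUN  {script.name}")
            result = subprocess.run(["bash", str(script)],
                                    capture_output=True, text=True)
            if result.returncode != 0:
                failures += 1
                print(f"FAIL {script.name}\n{result.stdout}{result.stderr}")
        return failures

    if __name__ == "__main__":
        sys.exit(1 if run_scenarios() else 0)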
CI can help you achieve that; automation is the key word. For the testing process, you need automation at several levels: unit testing, interface testing, UI-based testing, and performance testing. But there is a root concept that needs to be accepted first: quality is not the same thing as testing. Unit tests can be created by the developers before coding is finished; UI-based tests and interface tests can be developed by QA throughout the coding process. Then, when the new feature is finished, a test suite already exists to ensure its quality. The only thing left to do by hand is the functional testing that automation cannot cover.
I believe you should go for an Agile methodology. This will help you create small releases, so the scenario lists won't grow as long as they are getting now. You can also automate the scenarios that are used repeatedly for regression testing.
I also believe you should go for Agile. Agile is a combination of iterative and incremental processes, so you can sort the story points shared by the client (i.e. the requirements and updates) in order of priority: order all requirements from high to low as a product backlog, and prepare sprints from that backlog. While development is in progress for sprint 1, you can prepare the test scenarios for sprint 1 in the same span. After the sprint delivery, any change request can be managed the same way, and with the help of scrum and sprint retrospective meetings the process can be improved in upcoming projects. The project can thus be delivered in sprints easily and in a short span of time.
Why don't you automate your application's test suite? Whenever there is a gap between the current and the next release, you can automate the existing test cases in the meantime. This will not only save testing-cycle time; the regression testing will also be more accurate, without skipping or missing any test scenario.
You can automate at least 60-70% of your total test cases, which will cut test execution time by a good margin, and the automated suite can be run overnight.
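To illustrate the "run overnight" suggestion, here is a minimal sketch of a nightly regression runner. The paths, the cron schedule, and the choice of pytest are all assumptions for illustration:

    #!/usr/bin/env python3
    # Hypothetical overnight regression runner: invoke the automated suite
    # and save a timestamped report. Schedule it with cron, e.g.:
    #     0 1 * * *  /usr/bin/python3 /opt/qa/nightly_regression.py
    import subprocess
    from datetime import datetime
    from pathlib import Path

    REPORT_DIR = Path("/var/log/regression")  # hypothetical location

    def main() -> None:
        REPORT_DIR.mkdir(parents=True, exist_ok=True)
        stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
        report = REPORT_DIR / f"regression-{stamp}.txt"
        # Run the automated regression suite; pytest is one common choice.
        result = subprocess.run(["pytest", "tests/regression", "-q"],
                                capture_output=True, text=True)
        report.write_text(result.stdout + result.stderr)
        print(f"exit={result.returncode}, report written to {report}")

    if __name__ == "__main__":
        main()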
I'm not against Scrum. I love it; it's my second preference, right after RAD. However, my current team has made me hate it. We're possibly doing it in the worst possible way.
We have the usual sprint planning, which takes roughly 30 minutes, during which we write the user stories ourselves, and that's all. In those 30 minutes we answer questions like the following:
What should the user do?
What is needed for this? (Subtasks)
How much time will it take?
Okay we're done, see you tomorrow morning in the daily stand-up meeting.
This really frustrates me and they won't listen to me. There is no planning, like, at all. At point (2), all 4 developers talk about different ways of solving a particular problem. That would be fine, but we also don't have any clarified vision, so everyone has a different understanding of where the whole project is headed, and our ideas differ completely. This usually ends in chaos. For example, the most recent story in our newest shiny project's first sprint:
Vision: We need an application to perform unit testing on X application.
User stories:
User logs in
- Create DB table (no schema has been clarified)
- Create Login View
- Authenticate user to Y server
User sees the available unit tests
- Create a view to display unit tests
- Read DB table
- Implement CRUD operations
User executes unit tests
- Implement selection in the view above
- Add an execute operation
- Display the result in a new page
What my worries were:
Vision doesn't say anything about where this whole project is headed, thus we will end up re-implementing the majority of our functions when going to the next spring, or after that, or after that... (Checked -- this happened right away. I can't help it; I just hate working on something that will be erased right at the start of the next spring. I don't think Scrum is about that; if it were, it would be really useless.)
No actual planning. We haven't clarified anything about what the DB should look like, so how can we create it? I could create a DB for such a system with 1 to N tables depending on what the project should achieve in the future, but this is not so serious, as a DB can easily be extended.
Based on (2) we started working on different parts. I created the DB while others created views and still others implemented operations. All of us had a different understanding, and within just a day we ended up with incompatible models that simply couldn't be integrated.
What have we done wrong:
No planning. My team just hates planning; they act first and ask later. I'm like: I.DO.NOT.DO.SOMETHING.TWICE.BECAUSE.YOU.ARE.TOO.LAZY.TO.DO.PROPER.PLANNING.
No communication between team members -- though even I didn't expect that in just under a day we would end up like that.
What is going wrong here? Is it just me with the wrong understanding of Scrum, or are my worries justified? This is giving me so much stress at work that I can barely handle it anymore.
I'm intrigued as to who "they" are in this line: "This really frustrates me and they won't listen to me."
It reads as if you're referring to the rest of the scrum team. If so, I suggest you need to get to a "we" footing as soon as possible and work on communication.
With regard to some of the items in your post, a few things come to mind immediately:
If you don't have one, you need a product owner to own the product, its vision and its backlog. If you do have one, they may benefit from good training or coaching.
You are absolutely right about needing a product vision. You seem to have one, but you imply that it describes some functionality rather than a complete product vision. If so, have you tried to discuss this within your team?
If you don't have one, you need a scrum master to help the product owner and development team play by the rules of Scrum and, in your case, encourage communication within the team. If you do have one, they may benefit from good training or coaching.
Concerning your worries, I would add:
I think you mean 'sprint' where you write 'spring'
It is common in scrum that product backlog items are changed to reflect better understanding
You shouldn't need to describe the database in depth when you start a project. Scrum works best with emergent architecture based on implemented functionality
If multiple developers work in the same area without communicating, it's highly likely that you will step on each other's toes and get the outcomes you describe
We use Scrum in our development, and we often create tasks/tickets for developers. I want to find a way to record them, but I have an unresolved question about how: one option is to write them on whiteboards, the other is to write them in an Agile project-management tool (Pivotal Tracker). Doing both seems like duplication, so which is better?
It depends who cares about the tasks.
In teams very new to Scrum, devs can split stories in to tasks to get a better idea of estimates, collaborate on work, etc. For this reason, whatever the devs prefer should be the way forward. Usually a dev will prefer to put tasks on a card, or a whiteboard, or something close to the workspace, but some devs do prefer electronic systems. I find the act of moving a card or writing on a board gives a sense of commitment to a task or story, so I prefer this.
Sometimes the PM prefers to have the tasks so that he can see if a story is 65% done, etc.
Every single time I've seen this it ends up with the PM telling the devs off for not finishing their stories when they said they would, or saying, "It was 85% done yesterday! How can you not have finished it?" This happens a lot with new teams, where devs often prefer to do the easy bits first, or they don't know how to integrate their work with others' yet.
The thing is, there is no value whatsoever in the tasks! It's only possible to get useful feedback by delivering the stories, even if they don't represent completed features but just slices through the system. The tasks themselves are only valuable for the iteration until the stories are completed, so no historic record is needed. PMs who value the tasks often end up with part-done stories and nothing to release or showcase.
For this reason, I would try not to duplicate the tasks for my recording efforts, but just to let the devs make the tasks themselves and put them wherever they want to. It's easy enough to count tasks manually for a burn-down.
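To make the "count tasks manually" point concrete, the burn-down arithmetic really is trivial. A sketch, with entirely made-up daily counts:

    # Minimal sketch of the arithmetic behind a task burn-down.
    # The counts below are invented for illustration, not real data.
    remaining_per_day = [24, 21, 19, 19, 14, 10, 7, 4, 2, 0]  # tasks left each day

    sprint_days = len(remaining_per_day)
    ideal = [remaining_per_day[0] * (1 - d / (sprint_days - 1))
             for d in range(sprint_days)]

    for day, (actual, target) in enumerate(zip(remaining_per_day, ideal), start=1):
        flag = "behind" if actual > target else "on track"
        print(f"day {day:2d}: {actual:2d} tasks left (ideal {target:4.1f}) -- {flag}")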
I'd have to disagree with the previous answer's claim that there is no value in the tasks. I myself prefer electronic methods such as:
- Calendars: not only do they say what needs to be done, but also when and how long it might take.
- Task lists: just like the traditional to-do list.
- Scope items: turning the items in the scope spreadsheet into deliverables.
Having physical tasks on cards (tried that) or on the whiteboard in the LLP (did that for a while) is technically better, because you can always get to the information quickly. However, if your development team is distributed, especially when the PM is in another part of the world, you're going to end up having to duplicate the data electronically. The tasks themselves add value to the development house in that they provide good historical data about how long certain tasks take. This information is extremely valuable in building the scope matrix of future projects, and as such affects the costing and delivery time. As a side benefit, you'll be able to see by historical trend which asset (i.e. developer) is able to perform, and at what efficiency. E.g. if you give a developer a database task and they were inefficient, then you'll know next time that database tasks should either be given to someone else, or that during the downtime between projects the developer should spend time upgrading their database skills.
Historical task recording is so important that sometimes clients will ask to see the tasks and how long they took, as verification of "the bill". When clients are charged at the development house's hourly rate for work, they want accountability for every hour (or part thereof) spent. We used to fill out these sheets with the tasks and the durations to send along with the invoice to the client; and sometimes they would question it.
I work at a small service-based company where we are starting to implement Scrum practices, and we are also starting to use JIRA with GreenHopper for issue tracking. Our team has defined "done" as:
coded
unit tested
integration tested
peer reviewed
qa tested
documentation updated
I'm trying to figure out whether this should be done using a separate issue for each item in the above list for each "task", or if some of these items should be implemented in the ticket workflow, or if simply lumping them together in one issue is the best approach.
I'm disinclined to make these subtasks of a task, as there is only one-level nesting of issues and I fear there is a better use for that capability.
I also am not too excited about modifying the workflow, as this approach has proved to be a burden for us in other systems.
If all of these items are part of the same ticket then that seems weird to me because the work is likely spread between multiple team members, and it'll be hard to make tasks that are under 16 hours that include all of those things.
I feel like I understand all of the issues, but as of yet I don't know what the best solution is.
Is there a best practice? Or some strong opinions?
Done is done -- it has to be all those things you defined. However, treating them as explicit steps in a bug tracker can have the undesired side effect of encouraging divisions within the team and throwing stuff over the wall: coders would claim they are done once the ticket is marked "coded" and "unit tested", testers once it is marked tested, and so on.
This is exactly the opposite of what Scrum intends to do - the whole team commits to doing the stories so that they meet the definition of done in the end. So even though some of the elements of achieving done are indeed steps one should be very careful with solidifying these steps in any kind of defined workflow.
(This btw shows nicely why using a bug tracker as a scrum tool is a bad idea. Those are different tools that should be optimized for different things - even if linked together through some APIs.)
I certainly wouldn't nest them, since they are steps common to each task. Making them subtasks would just increase the complexity and boilerplate of the system. These seem like perfect workflow stages to me.
Something like Submitted->Assigned->Coding->Review->Testing->Finished.
Where Coding requires "coded", "unit tested", and "integration tested" before moving to Review, Review requires Peer Review before moving to Testing, Testing requires QA Testing before moving to Finished.
The only reason this would be tricky is if you're allowing Peer Review and Testing to be done in parallel. I see problems with allowing that, since if the code fails peer review and is subsequently changed it invalidates the testing work done by QA.
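To make the gating explicit, here is a small sketch of that workflow as a state machine. The stage names come from the answer above; the checklist field names and the code itself are purely illustrative:

    # Sketch of the workflow above as a simple state machine, showing how the
    # "done" checklist gates each transition.
    from dataclasses import dataclass, field

    TRANSITIONS = {
        "Submitted": "Assigned",
        "Assigned": "Coding",
        "Coding": "Review",     # leaving Coding needs coded + unit + integration tested
        "Review": "Testing",    # leaving Review needs peer review
        "Testing": "Finished",  # leaving Testing needs QA testing
    }

    GATES = {
        "Coding": {"coded", "unit_tested", "integration_tested"},
        "Review": {"peer_reviewed"},
        "Testing": {"qa_tested"},
    }

    @dataclass
    class Ticket:
        stage: str = "Submitted"
        checks: set = field(default_factory=set)

        def advance(self) -> None:
            if self.stage not in TRANSITIONS:
                raise ValueError("ticket is already Finished")
            missing = GATES.get(self.stage, set()) - self.checks
            if missing:
                raise ValueError(f"cannot leave {self.stage}: missing {sorted(missing)}")
            self.stage = TRANSITIONS[self.stage]

    t = Ticket()
    t.advance()  # Submitted -> Assigned
    t.advance()  # Assigned -> Coding
    t.checks |= {"coded", "unit_tested", "integration_tested"}
    t.advance()  # Coding -> Review (would raise if any check were missing)
    print(t.stage)  # Review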
coded
unit tested
IMHO these belong together, as both should be handled by the same person (preferably TDD, which really makes it impossible to separate these).
integration tested
In our team, this is usually done by the same developer, so we typically do it as part of the above task. Other teams may do it differently.
commented
Do you mean code comments? Then, to me, this does not deserve a separate task. Otherwise, please clarify.
peer reviewed
A separate task for a separate developer (or more).
qa tested
A separate task for testers / QA personnel.
I would add documentation - it may not always be needed, but often is. Again, it should be a separate task, typically for the same guy who did the implementation (but not always).
One prime concern to practically all the Scrum teams I have worked with so far is to make sure that nothing important from the above is forgotten. Partitioning the work into distinct tasks can help with this: you can then clearly see in your backlog what's left to do. Lumping all of these into one task makes it easy to forget about this or that little detail. For us, it was most typical to forget about code review and documentation, and that was the main reason we turned these into independent tasks.
Done defines what the Team means when it commits to “doing” a Product Backlog item in a Sprint. Some products do not contain documentation, so the definition of “done” does not include documentation. A completely “done” increment includes all of the analysis, design, refactoring, programming, documentation and testing for the increment and all Product Backlog items in the increment. Testing includes unit, system, user, and regression testing, as well as non-functional tests such as performance, stability, security, and integration.
Reference: Scrum Guide - Written by Ken Schwaber and Jeff Sutherland (Inventors of Scrum)
You state that you are following "Scrum practices". It sounds to me like you are just using a few parts of the Scrum framework and not others; is that true? First of all, Scrum is not really a practice, it is a framework: you either use the framework or you don't. It works on the basis of inspect and adapt, so apart from the basic Scrum framework rules nothing is set in stone, and you won't get an exact answer to your question. The best way to find the answer is to hire experienced Scrum professionals, and experienced developers and testers, and try the above "done" plan in your Scrum team.
Always remember to inspect and adapt. There are three points for inspection and adaptation in Scrum. The Daily Scrum meeting is used to inspect progress toward the Sprint goal and to make adaptations that optimize the value of the next work day. In addition, the Sprint Review and Planning meetings are used to inspect progress toward the release goal and to make adaptations that optimize the value of the next Sprint. Finally, the Sprint Retrospective is used to review the past Sprint and determine what adaptations will make the next Sprint more productive, fulfilling, and enjoyable.
Do not spend loads of time documenting or looking for a solution to a given process problem, because most of the time the problems change faster than you realize. It is better just to inspect and adapt, provided you have at least basic knowledge of Scrum and you are using the Scrum framework, not just a few Scrum-like practices.
We use a pretty similar system in JIRA, and I have an open question here and on the Atlassian boards asking a very similar question. We have a similar definition of done. We create the main story in descriptive form, i.e. "The legend text on the profit and loss graph overlaps". We then define sub-tasks which are either of type 'technical' or 'process'. Technical tasks are the actual work of implementing the story: "Research possible causes on vendor site", "Implement fix in the infographic class". Process items include 'Peer Review', 'Make Build', 'QA Testing', 'Merge'.

As one comment noted, you may have QA going on before/during peer review. As part of the Scrum process we have QA going on nearly all of the time (they are part of the team): sometimes they sit with the developer, sometimes they get 'bootleg builds' to run in a test environment. This is exploratory testing and is considered part of the coding process to us. The sub-task for 'QA Testing' is for integration and regression testing, and is a final validation of the whole story after peer review is completed. By that time the QA team already has a complete test plan they worked up during exploratory testing, and it's typically just a matter of running through the plan and 'checking it off'.
We've gotten to this point after running sprints for a year and making changes during the retrospective. I'm open to suggestions as I think one of the downsides to the retrospective is that you can group-think yourself in one direction with little hope of ever backing all the way out and considering a different path.
We use two boards for this purpose. We have one board for the Development Sprint where "Done" is Ready for Testing. You can't enter a sprint unless you're well and truly ready to start development (all analysis done, estimates done, people know what they are supposed to be doing - all the conversations have been had, shall we say, though our conversations tend to take place in JIRA Comments given the distributed team) ... and you exit when you finish development. That's the best way to track whether our development team is meeting their own goals without being impacted by QA. Meanwhile, QA uses a Kanban style board and they go from "Ready for Testing" (this is their "to-do"), through In Testing to Ready for Release.
We switched to this because we previously had all these steps on a single board, and we weren't "meeting our commitments" within any sprint: there was no way to both develop and test everything in a single sprint, since we have to do a code migration to the QA environment for final testing to occur, even though testing is happening all along the way. We are still trying to figure out how to do things correctly, so this may not be the right answer; but it sounds like it's not something you've thought of, so maybe it would work for you.
and it'll be hard to make tasks that are under 16 hours that include all of those things.
This is your real issue: the ability to break stories down into small, useful vertical slices of functionality. Working on this will make your team more agile and give the PO more flexibility.
To the contrary, breaking down the work by process/mechanical step will only make you less agile and really serves no useful purpose. Either you are done or you aren't; no one cares if you are dev-complete but not tested, so don't bother tracking it by the hour -- it's waste.
Refocus on your stories, not on tasks.
We use subtasks.
Given that the story is a shared item (the whole scrum team works on it), we use the subtasks as 'the post-it notes' that let us track tasks which individuals need to tackle.
We don't require that every little piece of work is represented as a subtask.
We are not bookkeepers, but developers.
The team agreement is that if you can't take up a task immediately, you just jot it down as a subtask of the story. (Using the agile plugin, this is really easy.) I.e. we will never systematically have a subtask 'create unit test', but on some occasions, when someone is struggling to get that dynamock up and running, you will see this subtask pop up in the story. Having it there allows the team to discuss it during the scrum.
If you want to generate the checklist automatically, look at the create subtask on transition plugin.
https://studio.plugins.atlassian.com/wiki/display/CSOT/Jira+Create+Subtask+for+transition
It allows you to automatically add the subtasks when the story has been committed.
BTW -- JIRA is more than a bug tracker. We are using it in a wide variety of applications, including the management of our sprint activity (as an Atlassian partner, I'm biased :-).
Francis
The important thing is that you use sub-tasks as real tasks, not as activities of the main task. An issue tracker is primarily meant for what you are doing, not for how you are doing it and in what order.
Our scrum team of three developers has a dedicated tester. At the moment the tester is ostensibly waiting for something to test for most of the first week of our 2-week sprint. We typically do our first release of the sprint deliverable around the Thurs or Fri of sprint week 1. At this point our tester can "test" the embryonic software.
This raises the question in my mind: how much value is functional testing like this adding, so early in the development of the deliverable?
At this stage (end of sprint week 1) in development there are usually significant bugs / functional omissions which would be rectified if testing was postponed by only a couple of days (say to week 2 of the sprint).
What is best practice in this case?
While you mention Scrum, a good management practice, you don't describe which testing practice you're using.
If you're using best practices, you should be using Test-Driven Development.
Test-Driven Development means that the testing must be done from the very beginning. The programmers must write tests and fill in classes that pass those tests.
The tester should be writing functional tests on day 1 which the application absolutely fails to pass on day 1. Eventually the application starts to pass those tests and you can call your sprint done.
If you're not doing test-driven development, you should be, and your tester should be writing integration test cases.
If your tester can't code, teach them to code. You must have a tester who can code. And make them start coding functional tests on day 1.
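As a minimal illustration of that red-green rhythm, here is a sketch using pytest and a made-up pricing feature (none of this comes from the original posts): the functional test is written first and fails until the implementation catches up.

    # --- test_pricing.py (written on day 1; fails at first: "red") ---
    import pytest
    from pricing import discounted_price  # import fails until pricing.py exists

    def test_ten_percent_discount_applied_over_100():
        assert discounted_price(200.0) == pytest.approx(180.0)

    def test_no_discount_at_or_below_100():
        assert discounted_price(100.0) == pytest.approx(100.0)

    # --- pricing.py (written later in the sprint; turns the suite "green") ---
    def discounted_price(amount: float) -> float:
        """Apply a 10% discount to amounts over 100."""
        return amount * 0.9 if amount > 100 else amount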
A tester could be going through the spec and writing test scripts / acceptance criteria steps.
As a dev is coming up to completing a task, but before check-in, the tester can also do mini test reviews; a 5-minute eyeball with the developer as they are completing the work can often turn up a few bugs.
There is always testing of the existing application (assuming this isn't the very first sprint of a new product); there are always bugs to find.
Then there is triage of existing bugs: are they high or low priority?
Then there is the testing and closing of bugs that developers have fixed.
Of course the most important is making coffee and wiping the fevered brow of any developer who puts his hand up.
You have uncovered a problem with your product backlog. If you have 3 devs coding for 3 days with no testable/releasable code, then your stories are too big. You should see this fact reflected on your burndown: a flatline, then a big drop at the end of the sprint. Integration should be a daily routine, with new functionality always available for testing.
I agree with the above. When you choose your user stories for the sprint, you should start defining how they will be tested.
How about:
Automate some of the story tests from the last sprint that were tested by hand
Automate the setting up of the test data (and/or machines), so that it is quicker to do the next round of regression tests (a sketch of this follows below).
Write the test specs for some of the stories, so that the developers have better information when they get to do those stories.
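As a sketch of the second suggestion, automated test-data setup can be as simple as a pytest fixture that rebuilds a known database state for every test. The schema and data here are invented for illustration:

    # Hypothetical pytest fixture that gives each test a fresh, seeded database.
    import sqlite3
    import pytest

    @pytest.fixture
    def seeded_db():
        """Fresh in-memory database seeded with known test data."""
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
        conn.executemany("INSERT INTO users (name) VALUES (?)",
                         [("alice",), ("bob",)])
        conn.commit()
        yield conn
        conn.close()

    def test_user_count(seeded_db):
        count = seeded_db.execute("SELECT COUNT(*) FROM users").fetchone()[0]
        assert count == 2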
I just finished listening to a very eye-opening podcast on Hanselminutes about the definition of "done". So my question to everyone is: when do you consider a piece of software to be "done"? Is it when it's fully unit tested? Is it when it's completely documented? What measurement do you use in your development process to determine the done-ness of your software?
When the check clears?
Seriously, every time you write a piece of software, you should have defined what "done" means. First. If you have a customer, then there should be a contract -- specific, measurable, agreed, and testable -- that defines done.
If you don't know where you're going, how will you know when you get there?
Surely dependent on context and purpose of the software?
Lunar Lander (the real thing) would have a very different definition of Done to Lunar Lander the Flash game.
Where I work, DONE is defined by a committee of non-technical managers. You can imagine the fun and games.
Test, unit test, integration test, webtest, peer QA and end user review in the sprint review. Peer QA decides if anything else is necessary, all tests must pass in CI environment. This is in a scrum web-project.
When the client(1) considers it done, it's checked in, backed up, and documented.
Also: "done" rarely exists in web dev.
(1) where client may be an internal PM or such
A good measurement is code churn. Using your source-code control software, measure the rate of change: how many lines of code are being removed/added/changed per day. Graph this over time. As you approach being ready to release, this should trend downwards, giving an indication of stability and readiness to ship. This assumes that you are actually testing well and making changes to fix bugs or respond to change requests. If your user-acceptance testers and your integration/unit testing activity continue to regress and test, and you aren't having to make code changes (because they aren't finding anything that necessitates a change), then you are probably ready to ship.
If big chunks of code are churning a few days before an arbitrary or externally driven ship date, look out!
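The answer doesn't name a tool, but if your history is in git, churn per day can be computed with a short script -- a sketch:

    #!/usr/bin/env python3
    # Sketch: measure code churn (lines added + removed per day) from git.
    # Run inside a git repository; output is suitable for graphing.
    import subprocess
    from collections import defaultdict

    # --numstat prints "added<TAB>removed<TAB>path" per file; %ad is the date.
    log = subprocess.run(
        ["git", "log", "--numstat", "--date=short", "--pretty=format:DATE %ad"],
        capture_output=True, text=True, check=True,
    ).stdout

    churn = defaultdict(int)
    current_day = None
    for line in log.splitlines():
        if line.startswith("DATE "):
            current_day = line[5:]
        elif line.strip() and current_day:
            parts = line.split("\t")
            if len(parts) == 3 and parts[0].isdigit() and parts[1].isdigit():
                churn[current_day] += int(parts[0]) + int(parts[1])

    for day in sorted(churn):
        print(f"{day} {churn[day]}")  # graph this; it should trend down pre-release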
When the software can be used to satisfy the requirements that define the system.
But I've always thought, "software is never done, it just reaches an acceptable level of incompleteness."
From a development viewpoint 'done' is described quite well by my friend and mentor Simon Baker, here
Alistair Cockburn, Jeff Patton and Mike Cohn also have the following collected views
Shippable quality, which has to be exercised in a go-live, forces teams to really focus on ensuring that incremental work is more carefully thought through.
'Done' is something that all of the above-quoted would be the first to agree is always different per team and project. However, to know that a given piece of work is done, the team must conduct an exercise at the start to flesh out the measure of done-ness and list those criteria.
In so doing, everyone has agreed by consensus what an acceptable completion point is; whether that includes noting the task in Excel, or writing documentation (or not), becomes an implementation detail for that team/project. The overriding thing is that everyone's understanding of done is uniform.
Equally, assuming you reach that definition by consensus, it can also be changed as required by consensus.
When all of the requirements are met and all the tests pass.
It's never done, simply versioned and released.
Each project will have its own definition of done. Ours is: code complete (compiles successfully, etc.), unit tested (or some kind of local testing if that's not possible), and released within one of our packages (so it's available to the other teams).
But the MOST important thing about the DoD is that every party should agree on what it is (team, product owner, manager, etc.) and that it should be some kind of public contract; publishing it in a team portal is a good idea.
Any piece of software at any time is always 80% done. At least, that's what my experience teaches ...
When the customer thinks it is.