We are new to Scrum and part way through the first sprint we have realised that one of the team members (a developer) needs to do some investigation into how navigation should work (from a user perspective) in the application.
So at the end of this investigation we should have a proposal or prototype of how something should work, but it won't have been actually coded in the application.
So my question is: how should we deal with something like this in terms of sprint planning? I don't really see it as being a user story, but what is it, and how is it treated in Scrum? Does something need to be added to the planning board for the investigation?
Thanks
Paul.
Try to treat prototyping like any other requirement as much as possible. Think about what you want to achieve, create a user story, define one or several tasks and estimate them during sprint planning. Think of the development team as the user in this case. Definitely have it on the planning board and track progress in the daily Scrum meetings. If you have problems estimating the tasks, define them as "time-boxed", i.e. with a fixed time budget, to prevent "endless" work without results.
Although you already got a solution, I just wanted to add something here.
Such prototyping/research work is termed a spike in the Agile world.
The team dedicates some members to such a spike only as far as is needed to understand the feasibility of the user story and to be in a position to help the entire team estimate it.
Scrum is an organizational process rather than a development model like prototype-driven development. That means different X-driven development models can easily be incorporated, like TDD or even prototype-driven development (PDD).
To incorporate PDD into Scrum, you can set several milestones, each one being a prototype version. Scrum can then be used normally, treating each prototype as a whole new project. This works well for a complex prototype.
However, if creating a prototype is easy enough that a single person can do it in one or two sprints' worth of time, it might be useful to retain a prototype specialist who, much like the application specialist, monitors the work of the rest of the team to check consistency with the ultimate goal. Unlike the application specialist, however, the prototype specialist can iteratively provide new prototypes, guiding the work of the rest of the team in a practical manner.
I am currently facing a situation where I as an advocate of test driven development have to compete with an advocate of model driven software development (MDSD) / model driven architecture (MDA).
In my opinion, code generation is a valuable tool in my toolbox, and I make heavy use of templates and automation when needed. I also create UML diagrams when I think they help to understand the inner workings or to discuss architecture at the whiteboard. However, I strongly doubt that creating software via UML (creating statecharts and sequence diagrams to produce working code, not just skeletons) is more efficient for multi-tier applications (database layer, business/domain layer and a GUI, maybe even distributed). It seems to me that with MDSD the CASE tooling suddenly isn't just a tool anymore but the thing to satisfy: as I see it, MDSD developers profit from the higher abstraction UML gives them, but at the same time they struggle with modifying the code generator/template/engine to fulfill needs that could easily be implemented (and tested) with another tool out of their toolbox (Visual Studio, Eclipse, ...).
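For context, by "templates and automation" I mean nothing more sophisticated than this kind of thing: a minimal, hypothetical sketch in Python (the model contents and output file name are made up for illustration) that expands a tiny hand-written model into boilerplate classes.

```python
# Minimal sketch of template-based code generation, the kind of "tool in the
# toolbox" use I mean. Model and output names are invented for illustration.
from string import Template

# A tiny hand-written "model": entity names and their fields.
MODEL = {
    "Customer": ["id", "name", "email"],
    "Order": ["id", "customer_id", "total"],
}

CLASS_TEMPLATE = Template(
    "class ${name}:\n"
    "    def __init__(self, ${args}):\n"
    "${assignments}\n"
)


def generate(model):
    """Expand the template once per entity and return the generated source."""
    chunks = []
    for name, fields in model.items():
        args = ", ".join(fields)
        assignments = "\n".join(f"        self.{f} = {f}" for f in fields)
        chunks.append(CLASS_TEMPLATE.substitute(
            name=name, args=args, assignments=assignments))
    return "\n".join(chunks)


if __name__ == "__main__":
    # Write the generated code to a file that is never edited by hand.
    with open("generated_entities.py", "w") as out:
        out.write(generate(MODEL))
```

The point is that the template stays small and under my control; when it stops fitting, I change it or drop it instead of fighting a full MDA tool chain.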
All this makes me wonder if there has been a success story (success meaning that the product was rolled out on time, within budget and with only a few bugs, and that parts of the software were reused later on) for a real-world application which fulfills these criteria and has been developed using a strict model-driven approach:
it has nothing to do with the Object Management Group (OMG) or with consultants related to MDSD/MDA/SOA
the application is not related to Business Process Modelling and is not a CASE tool itself
the application is actively used by end users
it has at least three tiers, including a user interface which goes beyond displaying raw table values and is not one of the common MDA/MDSD examples ("how to model a coffee machine, traffic light, dishwasher").
A tiny, but nevertheless useful testimonial on the use of MDSD has been posted on the Model Driven Software Network:
http://www.modeldrivensoftware.net/profiles/blogs/viva-mdd-follow-up-building-a?xg_source=activity
It is a relatively small app being developed, but still a good example of MDSD in action.
More success stories are listed at Metacase's site (http://www.metacase.com/cases/index.html). Metacase sells MetaEdit+, which implements DSM (Domain-Specific Modeling). DSM is just a form of MDSD.
I am also developing ABSE (Atom-Based Software Engineering), another form of MDSD, very close to DSM. ABSE is outlined at http://www.abse.info.
I used MDA and code generation on an embedded system project using 4 processors connected via CAN. We had over 20 axes of motion and many, many sensors. The system was highly robust and maintainable as the mechanical components were evaluated and modified.
We worked in the models and generated code so the models were always up-to-date. We did a careful domain analysis to achieve subject matter isolation. The motor control required very high performance and so was not modeled or generated. Our network drivers were also hand-coded, and we wrote interfaces that allowed bridge services to send events to any service anywhere in the system as needed (although this was tightly controlled so as to minimize interprocessor dependencies).
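To give a flavour of what such a bridge interface can look like (this is only an illustrative sketch with invented names, not the actual generated or hand-written code from that system), the idea is a thin routing layer so that a sender never needs to know which processor a target service runs on:

```python
# Rough sketch of an event "bridge" that routes events to named services.
# Names are invented for illustration; a real CAN-based system would marshal
# remote events onto the bus instead of calling a function directly.
class EventBridge:
    def __init__(self):
        self._local = {}    # service name -> handler on this processor
        self._remote = {}   # service name -> sender that forwards over CAN

    def register_local(self, service, handler):
        self._local[service] = handler

    def register_remote(self, service, sender):
        self._remote[service] = sender

    def send(self, service, event, payload):
        """Deliver an event to a named service, locally or across the bus."""
        if service in self._local:
            self._local[service](event, payload)
        elif service in self._remote:
            self._remote[service](event, payload)
        else:
            raise KeyError(f"unknown service: {service}")


# Usage sketch: a motion service lives locally, a sensor service is remote.
bridge = EventBridge()
bridge.register_local("motion", lambda ev, data: print("motion got", ev, data))
bridge.register_remote("sensors", lambda ev, data: print("would send on CAN:", ev))
bridge.send("motion", "axis_home_complete", {"axis": 3})
```

The payoff is that services only ever talk to the bridge, which keeps interprocessor dependencies in one controlled place.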
Using the method took a bit of discipline, but having working models was great because they could be reviewed by non-software types.
Version control and differencing of the models was a bit of a challenge but we had a small, localized team so we were able to avoid merge issues.
The good people at Pathfinder Solutions (our tool vendor) can help mentor you through the project.
You could also take a look at the slides from previous Code Generation conferences. Several of these talks were from successful case studies e.g. http://www.codegeneration.net/cg2009/slides.php
I am working on a legacy modernization project that uses an MDA tool named Bluage. It is for a big healthcare organization and it is in production, so I would say it is successful. MDA is a good fit for legacy modernization, as it can generate a KDM model from technologies such as Pacbase which are going out of support.
I worked on an MDSD system that generated admin-style web apps in Google Closure. I believe that your question is compelling. Too much complexity and your MDSD system is too hard to use; too simple and you won't generate apps that are useful in the real world. Where MDSD really shines is in saving developers the time spent typing lots of plumbing-style code, but how can MDSD remain effective over multiple releases? Requirements can go in many directions. That is the real challenge. I recently blogged about my MDSD lessons learned on that project.
If everything in Scrum is all about functional things that a user can see, is there really any place for refactoring code unrelated to any new functional requirements?
I don't think that this has as much to do with Scrum as it does with project management philosophy.
Regardless of whether a project uses Scrum or not, many project managers do not like developers spending time on "unnecessary" things like code refactoring or restructuring that doesn't directly advance one of the outstanding functional requirements. It's not "work that yields results" like normal development, it's "work that prevents a delay of results later". Given the typically short time-lines used for Sprints, the benefit is often hard to see and nearly impossible to quantify.
Keeping code maintainable needs to be an item on your burn-down list (if you use a Scrum). It is just as important as new development. While it may not seem like something that is "visible to the user", ignoring it increases your technical debt. Down the road when the technical debt piles up enough that your code's lack of maintainability slows down development, the delays in new feature development will be visible to customers.
It's all a matter of management/philosophy. Instead of looking at refactoring and maintainability enhancements as "extra" work that doesn't impact customers, it should be viewed as a time investment to prevent customer-visible delays (and potentially bugs as well) down the road. Developers can sometimes see these benefits more clearly than managers can; if your manager doesn't understand the disadvantages of neglecting maintainability, you might want to grab several other developers and have a chat with your manager.
I think there is a fair case to be made for technical-debt refactoring where the effort/cost of maintaining the code is as high as, or even higher than, the cost of refactoring it to improve quality or make it work better / properly - specifically, to lend it a higher degree of maintainability.
E.g.: if the software is so problematic that you are losing customers, or money, you would act fast to fix it. Some might argue this is a business requirement of its own, but it is often not placed front and centre on small to mid-sized development projects, which instead focus on the technicalities of creating apps rather than on the impact of the app's quality on the bottom line.
I think you are probably talking about large-scale refactoring rather than the continuous refactoring you would do as part of the red-green-refactor cycle.
My approach would be something like this: if refactoring an old feature makes it easier to add a new feature, then go ahead and do it. But in some ways you are right: if there is no pressure on a particular unit to change (i.e. it is completely finished, will never change again and will never impact other modules), then there is no practical need to refactor. However, I rarely find a module that is quite so finalised.
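As a toy illustration of that rule of thumb (a hypothetical Python sketch, not tied to any real code base): the refactoring is worth doing precisely because it turns the next feature into a one-line change.

```python
# Toy example of "refactor when it helps the next feature".
# Before: the discount rule is duplicated for two customer tiers.
def price_regular(amount):
    return amount * 0.98          # 2% discount

def price_gold(amount):
    return amount * 0.95          # 5% discount

# A new "platinum" tier is requested. Rather than paste a third copy,
# refactor first (tests stay green), then add the new tier as data.
DISCOUNT_PERCENT = {"regular": 2, "gold": 5, "platinum": 8}

def price(amount, tier):
    """Single pricing rule; new tiers become one-line additions to the table."""
    return amount * (100 - DISCOUNT_PERCENT[tier]) / 100

def test_price():
    assert price(100, "regular") == 98.0
    assert price(100, "gold") == 95.0
    assert price(100, "platinum") == 92.0

if __name__ == "__main__":
    test_price()
    print("all pricing tests pass")
```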
If everything in Scrum is all about functional things that a user can see (...)
Any project and methodology should be about generating business value; you rarely do things just for the fun of it in a business environment. Having said that, I see quality in Scrum (and other Agile methods) as a way to not kill your velocity in the long run and, ultimately, to achieve hyper-productivity. I thus believe that a typical "Definition of Done" should include something like "no increase of technical debt" (put your quality standards in there). If you think a new feature will impact existing code that should be refactored, include this cost in the estimate (or create a refactoring item in your Product Backlog) and explain things to your Product Owner. Because in the end, it's up to the Product Owner to prioritize items and to decide if quality can be sacrificed temporarily (if your business dies because you don't release a feature, what is the point of refactoring existing code?). But he must be aware that this can't be a long-term strategy or he will kill the team's velocity.
bta: Regardless of whether a project uses Scrum or not, many project managers do not like developers spending time on "unnecessary" things like code refactoring or restructuring that doesn't directly advance one of the outstanding functional requirements.
Definitely a noteworthy observation; my solution to this would be as follows:
Perform regular code reviews. Every code review should recommend actions to improve on deficiencies in the code.
There is now a requirement for jobs which improve code quality. Build these into the sprint and track them in the same way as any other job.
If your manager needs any more convincing, cast 'the maintainer' as a user and describe some user stories for them - then 'features' are things like 'the code is fully commented with XML doc comments' and 'the code does not produce any warnings from ReSharper'.
If you can justify it as part of the process of completing other tasks by identifying issues/risks with current sets of code, and it is a better end result, go for it. But don't get overzealous and screw the timelines/budget.
Who is winning in the "Low vs High fidelity prototyping" debate?
Should prototype zero (P0) be the first version of the final product? Or should P0 always be a throwaway? What approach is the industry favoring?
Excellent article from Wikipedia: Software prototyping
A prototype should always be a throwaway - a prototype is used to quickly prove a concept and influence the design of the real product. As such, a lot of things which are important for a real product (a thought-out architecture and design, reliability, security, maintainability, etc.) fall by the wayside. If you do take these things into account when building your prototype, you're not really building a prototype anymore.
My experience with prototypes whose code directly evolved into an actual product shows that the end result suffers because of it - the lack of a real architecture resulted in a lot of cobbled-together code that had to be constantly hacked to add new features. I've even seen a case where the original technology chosen for rapid development of the prototype was not the best choice for the actual product, and a complete rewrite was necessary for V2.
I think we, the pedants, have lost this particular battle -- alleged "prototypes" (which by definition should be rewritten from scratch!!!-) are in fact being "evolved" into (often half-baked) "betas", etc.
Even today, I applauded the smart attempt by a colleague of mine to recapture the concept, even if the term is a lost battle: he's setting up a way for small proof-of-concept projects to be developed (and, if the concept does get proven, transferred to software engineers for real prototyping, then development).
The idea is that, in our department, we have many people who aren't (and aren't in fact supposed to be!-) software developers, but are very smart, computer savvy, and in daily contact with the reality "in the trenches" -- they are the ones who are most likely to smell an opportunity for some potential innovation which could have real impact once implemented as a "production-ready" software project. Salespeople, account managers, business analysts, technology managers -- at our company, they all often fit this description.
But they're NOT going to program in C++, hardly at all in Java, maybe in Python but miles away from "productionized" -- indeed they're far more likely to whip up a smart proof of concept in php, javascript, perl, bash, Excel+VBA, and sundry other "quick and dirty" technologies we don't even want to dream about productionizing and supporting forevermore!-)
So by calling their prototypes "proofs of concept", we hope to encourage them to embody their daring concepts in concrete form (vague natural-language blabberings and much waving of hands being least useful, and alien to the company's culture anyway;-) and yet sharply indicate that such projects, if promoted to exist among the software engineers' goals and priorities, DO have to be programmed from scratch -- the proof-of-concept serves, at best, as a good draft/sketch spec for what the engineers are aiming for, definitely NOT to be incrementally enriched, but redone from the root up!-).
It's early to say how well this idea works -- ask me in three months, when we evaluate the quarter's endeavors (right now, we're just providing a blueprint for them, hot on the heels of evaluating last quarter's department- and company-wise undertakings!-).
Write the prototype, then keep refactoring it until it becomes the product.
The key is to not hesitate to refactor when necessary.
It helps to have few people working on it initially. With too many people working on something, refactoring becomes more difficult.
Response from BUNDALLAH, HAMISI
A prototype typically simulates only a few aspects of the features of the eventual program, and may be completely different from the eventual implementation.
Contrary to what my other colleagues have suggested above, I would NOT advise my boss to opt for the throwaway prototype model. I am with Anita on this. Given the two prototype models and the circumstances provided, I would strongly advise the management (my boss) to opt for the evolutionary prototype model. The company being large, with all the other variables given such as the complexity of the code and the newness of the programming language to be used, I would not use the throwaway prototype model.

The throwaway prototype becomes the starting point from which users can re-examine their expectations and clarify their requirements. When this has been achieved, the prototype is 'thrown away', and the system is formally developed based on the identified requirements (Crinnion, 1991). But in this situation the users may not know all the requirements at once, due to the complexity of the factors given.

Evolutionary prototyping is the process of developing a computer system by gradual refinement. Each refinement of the system contains a system specification and a software development phase. In contrast to both the traditional waterfall approach and incremental prototyping, which required everyone to get everything right the first time, this approach allows participants to reflect on lessons learned from the previous cycle(s). It is usual to go through three such cycles of gradual refinement, although there is nothing stopping a process of continual evolution, which is often the case in many systems. According to Davis (1992), evolutionary prototyping acknowledges that we do not understand all the requirements (as we have been told above that the system is complex, the company is large, the code will be complex, and the language is fairly new to the programming team).

The main goal when using evolutionary prototyping is to build a very robust prototype in a structured manner and constantly refine it. The reason for this is that the evolutionary prototype, when built, forms the heart of the new system, on which the improvements and further requirements will be built. This technique allows the development team to add features, or make changes, that couldn't be conceived during the requirements and design phase. For a system to be useful, it must evolve through use in its intended operational environment. A product is never "done"; it is always maturing as the usage environment changes. Developers often try to define a system using their most familiar frame of reference - where they are currently (or rather, the current system status). They make assumptions about the way business will be conducted and the technology base on which the business will be implemented. A plan is enacted to develop the capability, and, sooner or later, something resembling the envisioned system is delivered (SPC, 1997).
Evolutionary Prototypes have an advantage over Throwaway Prototypes in that they are functional systems. Although they may not have all the features the users have planned, they may be used on an interim basis until the final system is delivered.
In Evolutionary Prototyping, developers can focus themselves to develop parts of the system that they understand instead of working on developing a whole system. To minimize risk, the developer does not implement poorly understood features. The partial system is sent to customer sites. As users work with the system, they detect opportunities for new features and give requests for these features to developers. Developers then take these enhancement requests along with their own and use sound configuration-management practices to change the software-requirements specification, update the design, recode and retest. (Bersoff and Davis, 1991).
However, the main problems with evolutionary prototyping are due to poor management: Lack of defined milestones, lack of achievement - always putting off what would be in the present prototype until the next one, lack of proper evaluation, lack of clarity between a prototype and an implemented system, lack of continued commitment from users. This process requires a greater degree of sustained commitment from users for a longer time span than traditionally required. Users must be constantly informed as to what is going on and be completely aware of the expectations of the 'prototypes'.
References
Bersoff, E., Davis, A. (1991). Impacts of Life Cycle Models on Software Configuration Management. Comm. ACM.
Crinnion, J. (1991). Evolutionary Systems Development: A Practical Guide to the Use of Prototyping within a Structured Systems Methodology. Plenum Press, New York.
Davis, A. (1992). Operational Prototyping: A New Development Approach. IEEE Software.
Software Productivity Consortium (SPC). (1997). Evolutionary Rapid Development. SPC document SPC-97057-CMC, version 01.00.04.
.NET 3.5, .NET 4.0, WPF, Silverlight, ASP.NET MVC - there's really a lot of new Microsoft technology released / on the horizon to try out these days.
(The examples I gave are all Microsoft technologies, but this can apply to any language or platform.) I am curious how this is handled in the company you work for. A few examples:
Do you have a CTO that determines what technology the company uses?
Are development teams free to choose what technology they use? For example: framework version, classic ASP.NET vs ASP.NET MVC, ADO.NET Entity Framework vs Linq2Sql or NHibernate? Or a mix of these?
What new technologies does the company you work for try out and why?
Does your company have dedicated resources (time) to try out WPF or whatever technology, just for research, or do you try things out in your spare time and try to introduce them to your company?
These are just examples to make my question clearer. To summarize, I'd like to know what this process looks like, who is responsible, and who makes the decisions. Does your company jump on the bandwagon, or is it reluctant to try new technologies? And are you comfortable with this situation?
At the company I work for, we still use .NET 2.0 (although we are now slowly switching to .NET 3.5), haven't seriously looked into ASP.NET MVC, haven't tried out WPF at all, etcetera. And some of us find it pretty hard to convince people to do so. Is it fair to expect otherwise?
At my company, we have an architecture group that determines which technologies are used. People are welcome to read up on alternative technologies and make suggestions, but at the end of the day, it's the architecture group that makes the decisions.
While this may seem restrictive, it does ensure that all of the development groups are using the same or similar technologies, and moving from one group to the next is fairly easy. As well, by having one group do all the research, you ensure that you don't waste time by having multiple groups duplicate the research effort.
Since I work in such a small company and am typically either the only developer or the lead developer in a very small group, I can usually convince my boss to use whatever I think would be best for a given project/situation.
We stick to what we know for our major and key projects within the company.
For any new "mini" projects that come along, we take the hit on the learning curve to try and build them in the latest technologies if at all possible.
This enables us to get up to speed on these things to then comfortably and safely use these technologies in our major projects as we see fit.
Where I work there is an architect team which looks at technologies from a high level and makes recommendations to the various actual teams. A subset of the architect team takes the technologies, experiments with them, and out of that produces:
Internal 1 hour overview sessions
Week long boot camps
Whitepapers/Posters
The more important the technology is, the more of that list is produced. All of that feeds into the teams, which, combined with customer requirements for technology, actually make the decision about what that team should use.
I have a mixed answer to this question. Where I work, lower-level technical managers are usually the ones that choose a certain technology, and sometimes even the developers have the freedom to try something new. For example, I really wanted to learn about JavaScript's Prototype library while working on a web site. I made the case to my boss; he was reluctant at first because nobody else knew it or had used it before, but he gave me the go-ahead. It was great for me to be able to learn Prototype and take advantage of its many built-in features. Other, bigger projects come down from higher management and we don't really have much of a choice. Right now, my company is adopting SAP, so everything is moving in that direction. I don't necessarily want to become an SAP expert, but if I want to stay here, I'll need to at least learn how to work with it.
Every company has its own pace for innovation, and it's dependent first on the comfort level of the managers, and second on whether anybody actually does the work to research and propose using new things. When the managers start getting uncomfortable, innovation slows or stops until they get comfortable again. Some innovations they will never be comfortable with.
Keeping this in mind, I'm not sure how to answer your question about whether or not it's fair to expect more innovation than is happening. Certainly it's reasonable for you to want more; equally, once you've hit your organization's speed limit on innovation, it's not likely to change and, if it does change, it will probably take a long, long time.
I've been given rather large amounts of freedom to change things by various managers in my past, and I took advantage of it. I also ran into the limits on a regular basis, and finally dealt with my frustration by starting my own company. (This may be considered a somewhat drastic measure; certainly, by doing so you reduce the time you have to research and develop the very things for which you started your company.)
These days I'm developing rather significant applications in Haskell, and I'm pleased as punch. After a year, I'm starting to get the hang of it, and I certainly have several more years ahead of me just learning what I can do with the tools I have now.
I suppose the summary of my response is: if you want to innovate more than those around you, you need to change your peer group.
I think any company that tries new technology for the sake of it, because it's bleeding edge and 'innovative', is crazy. To have a formal 'let's play with new technology to try it out' department is just nuts... unless they're in the business of providing technology consulting to other businesses.
For everyone else, technology is there to help the business get things done, not to help developers line their CVs with cool-sounding TLAs.
The company I'm working for at the moment is quite large and has a CTO who chooses 'strategic platforms'. But I have to say, if you can pick a technology, they're probably using it. They're too big to beat everyone down with the corporate stick, but they try. If the technology will work in the project and bring it in on time, then it gets used.
We need solid and proven platforms for our stuff, and we don't need anything fancy. Therefore we might go for .NET in 5-10 years or so; hopefully it's ready by then. On the other hand, Java is already mature enough, so we're using it alongside C++ and some Jython scripting. These decisions are pretty much autonomous (we're a small shop).
I don't mean to mock bleeding edge developers, but whether you need solidity or newest features obviously depends on what you're working on. Many scientists are still happily using Fortran 77.