Evolutionary vs throwaway prototyping [closed] - prototyping

Who is winning in the "Low vs High fidelity prototyping" debate?
Should prototype-zero (P0) be the first version of the final product? Or should P0 always be a throwaway? What approach is the industry favoring?
Excellent article from Wikipedia: Software prototyping

A prototype should always be a throwaway - a prototype is used to quickly prove a concept and influence the design of the real product. As such, a lot of things which are important for a real product (a thought-out architecture and design, reliability, security, maintainability, etc.) fall by the wayside. If you do take these things into account when building your prototype, you're not really building a prototype anymore.
My experience with prototypes where the code directly evolved into an actual product shows that the end result suffers because of it - the lack of a real architecture resulted in a lot of cobbled-together code that had to be constantly hacked to add new features. I've even seen a case where the original technology chosen for rapid development of the prototype was not the best choice for the actual product, and a complete re-write was necessary for V2.

I think we, the pedants, have lost this particular battle -- alleged "prototypes" (which by definition should be rewritten from scratch!!!-) are in fact being "evolved" into (often half-baked) "betas", etc.
Just today, I applauded the smart attempt by a colleague of mine to recapture the concept, even if the term is a lost battle: he's setting up a way for small proof-of-concept projects to be developed (and, if the concept does get proven, transferred to software engineers for real prototyping, then development).
The idea is that, in our department, we have many people who aren't (and aren't in fact supposed to be!-) software developers, but are very smart, computer savvy, and in daily contact with the reality "in the trenches" -- they are the ones who are most likely to smell an opportunity for some potential innovation which could have real impact once implemented as a "production-ready" software project. Salespeople, account managers, business analysts, technology managers -- at our company, they all often fit this description.
But they're NOT going to program in C++, hardly at all in Java, maybe in Python but miles away from "productionized" -- indeed they're far more likely to whip up a smart proof of concept in php, javascript, perl, bash, Excel+VBA, and sundry other "quick and dirty" technologies we don't even want to dream about productionizing and supporting forevermore!-)
So by calling their prototypes "proofs of concept", we hope to encourage them to embody their daring concepts in concrete form (vague natural-language blabberings and much waving of hands being least useful, and alien to the company's culture anyway;-) and yet sharply indicate that such projects, if promoted to exist among the software engineers' goals and priorities, DO have to be programmed from scratch -- the proof-of-concept serves, at best, as a good draft/sketch spec for what the engineers are aiming for, definitely NOT to be incrementally enriched, but redone from the root up!-).
It's early to say how well this idea works -- ask me in three months, when we evaluate the quarter's endeavors (right now, we're just providing a blueprint for them, hot on the heels of evaluating last quarter's department- and company-wise undertakings!-).

Write the prototype, then keep refactoring it until it becomes the product.
The key is to not hesitate to refactor when necessary.
It helps to have only a few people working on it initially; with too many people working on something, refactoring becomes more difficult.
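To make that concrete, here is a minimal sketch (in Python, using an invented toy reporting function, not anyone's real product code) of what one such refactoring pass looks like: the prototype version is hacked together, and the refactored version keeps the same behaviour but is split into small, testable pieces.

    # Prototype version: everything inlined, just enough to prove the idea.
    def report(rows):
        out = ""
        for r in rows:
            if r["qty"] > 0:
                out += r["name"] + ": " + str(r["qty"] * r["price"]) + "\n"
        return out

    # After one refactoring pass: same behaviour, split into testable pieces.
    def line_total(row):
        return row["qty"] * row["price"]

    def format_line(row):
        return f"{row['name']}: {line_total(row)}"

    def report_refactored(rows):
        lines = [format_line(r) for r in rows if r["qty"] > 0]
        return "".join(line + "\n" for line in lines)

    sample = [{"name": "widget", "qty": 2, "price": 3},
              {"name": "gadget", "qty": 0, "price": 9}]
    assert report(sample) == report_refactored(sample)  # behaviour preserved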

Response from BUNDALLAH, HAMISI
A prototype typically simulates only a few aspects of the features of the eventual program, and may be completely different from the eventual implementation.
Contrary to what my other colleagues have suggested above, I would NOT advise my boss to opt for the throwaway prototype model. I am with Anita on this. Given the two prototype models and the circumstances provided, I would strongly advise the management (my boss) to opt for the evolutionary prototype model. The company being large, with all the other variables given such as the complexity of the code and the newness of the programming language to be used, I would not use the throwaway prototype model.
The throwaway prototype becomes the starting point from which users can re-examine their expectations and clarify their requirements. When this has been achieved, the prototype is 'thrown away', and the system is formally developed based on the identified requirements (Crinnion, 1991). But in this situation, the users may not know all the requirements at once, due to the complexity of the factors given here.
Evolutionary prototyping is the process of developing a computer system by gradual refinement. Each refinement of the system contains a system specification and a software development phase. In contrast to both the traditional waterfall approach and incremental prototyping, which require everyone to get everything right the first time, this approach allows participants to reflect on lessons learned from the previous cycle(s). It is usual to go through three such cycles of gradual refinement, but there is nothing stopping a process of continual evolution, which is often the case in many systems.
According to Davis (1992), evolutionary prototyping acknowledges that we do not understand all the requirements (as we have been told above that the system is complex, the company is large, the code will be complex, and the language is fairly new to the programming team). The main goal when using evolutionary prototyping is to build a very robust prototype in a structured manner and constantly refine it. The reason for this is that the evolutionary prototype, when built, forms the heart of the new system, and the improvements and further requirements will be built on it. This technique allows the development team to add features, or make changes, that couldn't be conceived during the requirements and design phase.
For a system to be useful, it must evolve through use in its intended operational environment. A product is never "done"; it is always maturing as the usage environment changes. Developers often try to define a system using their most familiar frame of reference -- where they are currently (or rather, the current system status). They make assumptions about the way business will be conducted and the technology base on which the business will be implemented. A plan is enacted to develop the capability, and, sooner or later, something resembling the envisioned system is delivered (SPC, 1997).
Evolutionary Prototypes have an advantage over Throwaway Prototypes in that they are functional systems. Although they may not have all the features the users have planned, they may be used on an interim basis until the final system is delivered.
In evolutionary prototyping, developers can focus on developing the parts of the system that they understand instead of working on the whole system at once. To minimize risk, the developer does not implement poorly understood features. The partial system is sent to customer sites. As users work with the system, they detect opportunities for new features and send requests for these features to developers. Developers then take these enhancement requests along with their own and use sound configuration-management practices to change the software-requirements specification, update the design, recode and retest (Bersoff and Davis, 1991).
However, the main problems with evolutionary prototyping are due to poor management: Lack of defined milestones, lack of achievement - always putting off what would be in the present prototype until the next one, lack of proper evaluation, lack of clarity between a prototype and an implemented system, lack of continued commitment from users. This process requires a greater degree of sustained commitment from users for a longer time span than traditionally required. Users must be constantly informed as to what is going on and be completely aware of the expectations of the 'prototypes'.
References
Bersoff, E., & Davis, A. (1991). Impacts of Life Cycle Models on Software Configuration Management. Communications of the ACM.
Crinnion, J. (1991). Evolutionary Systems Development: A Practical Guide to the Use of Prototyping within a Structured Systems Methodology. Plenum Press, New York.
Davis, A. (1992). Operational Prototyping: A new Development Approach. IEEE Software.
Software Productivity Consortium (SPC). (1997). Evolutionary Rapid Development. SPC document SPC-97057-CMC, version 01.00.04.

Related

Difference of safety-critical SW development [closed]

When developing safety-critical software under quality standards (like e.g. IEC 61508 or DO-178C), developers have to take care of many things. I know that the verification in each development step is quite time consuming and expensive. Moreover, I know that restricted subsets of programming languages are used.
But I am interested in the concrete difference from a "normal" SW development process. I mean, in the standard V-Model, verification and testing should also be part of each development step. What do I have to consider when finding requirements? What do I have to consider in SW design?
It isn't so much a change in the "V-Model" that helps verify a critical system; it's what you do at each step of the way.
For example you may prefer to plan your development using waterfall in order to have verification steps and controlled transition periods. This has the benefit of staying in line with any government regulations that may be in place.
While developing, it is common to use a limited subset of assemblies (APIs) in order to prevent developers from performing dangerous operations. This type of restriction can also ensure that developers use the APIs correctly, such as making object cleanup a requirement.
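As a rough sketch of that kind of restriction (assuming a hypothetical in-house coding standard; all the names here are made up), a wrapper can expose only the permitted operations and make cleanup automatic:

    from contextlib import contextmanager

    class _RestrictedFile:
        # Exposes only the operations the (hypothetical) standard allows.
        def __init__(self, handle):
            self._handle = handle

        def read_line(self) -> str:
            return self._handle.readline()

    @contextmanager
    def open_log(path: str):
        # The only sanctioned way to touch log files: cleanup is guaranteed.
        handle = open(path, "r", encoding="utf-8")
        try:
            yield _RestrictedFile(handle)
        finally:
            handle.close()

    # Developers never see the raw file object, so they cannot forget to
    # close it or call operations the standard forbids (write, seek, ...).
    # with open_log("system.log") as log:
    #     first_line = log.read_line()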
Once the product has been developed you'll likely have gone through all of the testing phases. It is common in industry to develop test fixtures in order to verify the system and generate data that proves to the government or customers that your system does what it says it does.
In general, this topic is very deep. You did mention standards; one more is the ISO 2008 standard. I think what you should keep in mind is that the process doesn't change much (the life-cycle model stays generally the same), but what you do at each step of the model will change depending on the project. You can take classes on Project Management... in fact it is a track and sometimes a full degree program, so there's tons to learn about process and how to manage different projects.
Googling system critical projects and project management will likely generate a trove of knowledge.
Hope that helps shed some light on the subject.
EDIT: Finding requirements, like in a waterfall process, is very time consuming. It will involve understanding the customer's needs and goals, of course. In general you have to spend lots of time in this area for government (regulatory) reasons and for the software architecture. It's not really a different technique... Be explicit; understanding the requirements is most critical. "The system shall recover from 90 second timeouts within 5 seconds of resetting." <- it's like all other requirements in SW engineering: explicit and testable, objective not subjective. Think grammar-Nazi level of consideration.
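As a hedged illustration of "explicit and testable", that requirement could be backed by an automated check along these lines (the FakeSystem class is a purely illustrative stand-in for the real system under test, just to show the shape of the test):

    import time

    class FakeSystem:
        # Stand-in for the real system under test (purely illustrative).
        def simulate_timeout(self, seconds):
            pass
        def reset(self):
            self._reset_at = time.monotonic()
        def wait_until_operational(self):
            time.sleep(0.1)  # the real system would block until it is back up

    def test_recovery_within_five_seconds():
        # Requirement: after a 90 s timeout, the system shall recover
        # within 5 s of being reset.
        system = FakeSystem()
        system.simulate_timeout(seconds=90)
        start = time.monotonic()
        system.reset()
        system.wait_until_operational()
        assert time.monotonic() - start <= 5.0

    test_recovery_within_five_seconds()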
One example of a safety-critical system is Lockheed's F-35... The system requirements manuals are huge and the process to make a change requires meetings and quite a bit of paperwork.

Is there a MDSD/MDA success story for a real world application? [closed]

I am currently facing a situation where I as an advocate of test driven development have to compete with an advocate of model driven software development (MDSD) / model driven architecture (MDA).
In my opinion, code generation is a valuable tool in my toolbox and I make heavy use of templates and automation when needed. I also create diagrams in UML when I think this helps to understand the inner workings or to discuss architecture at the whiteboard. However, I strongly doubt that creating software via UML (creating statecharts and sequence diagrams to create working code, not only skeletons of code) is more efficient for multi-tier applications (database layer, business/domain layer and a GUI, maybe even distributed). It seems to me that when it comes to MDSD, the CASE tooling suddenly isn't just a tool anymore but is the thing to satisfy: as I see it, on the one hand, MDSD developers profit from the higher abstraction UML gives them, but at the same time they are struggling with modifying the code generator/template/engine to fulfill needs which might be easily implemented (and tested) if they used another tool out of their toolbox (Visual Studio, Eclipse, ...).
All this makes me wonder if there has been a success story (success being that the product was rolled out on time, within budget and with only a few bugs, and parts of the software have been reused later on) for a real-world application which fulfills these criteria and has been developed using a strict model-driven approach:
it has nothing to do with the Object Management Group (OMG) or with consultants related to MDSD/MDA/SOA
the application is not related to Business Process Modelling and is not a CASE tool itself
the application is actively used by end users
it has at least three tiers, including a user interface which goes beyond displaying raw table values and is not one of the common MDA/MDSD examples ("how to model a coffee machine, traffic light, dishwasher").
A tiny, but nevertheless useful testimonial on the use of MDSD has been posted on the Model Driven Software Network:
http://www.modeldrivensoftware.net/profiles/blogs/viva-mdd-follow-up-building-a?xg_source=activity
It is a relatively small app being developed, but still a good example of MDSD in action.
More success stories are listed at Metacase's site (http://www.metacase.com/cases/index.html). Metacase sells MetaEdit+, which implements DSM (Domain-Specific Modeling). DSM is just a form of MDSD.
I am also developing ABSE (Atom-Based Software Engineering), another form of MDSD, very close to DSM. ABSE is outlined at http://www.abse.info.
I used MDA and code generation on an embedded system project using 4 processors connected via CAN. We had over 20 axes of motion and many, many sensors. The system was highly robust and maintainable as the mechanical components were evaluated and modified.
We worked in the models and generated code so the models were always up-to-date. We did a careful domain analysis to achieve subject matter isolation. The motor control required very high performance and so was not modeled or generated. Our network drivers were also hand-coded, and we wrote interfaces that allowed bridge services to send events to any service anywhere in the system as needed (although this was tightly controlled so as to minimize interprocessor dependencies).
Using the method took a bit of discipline, but having working models was great because they can be reviewed by non-software types.
Version control and differencing of the models was a bit of a challenge but we had a small, localized team so we were able to avoid merge issues.
The good people at Pathfinder Solutions (our tool vendor) can help mentor you through the project.
You could also take a look at the slides from previous Code Generation conferences. Several of these talks were from successful case studies e.g. http://www.codegeneration.net/cg2009/slides.php
I am working on a legacy modernization project that uses an MDA tool named Bluage. It's for a big healthcare organization and it's in production, so I could say that it's successful. MDA is better in the case of legacy modernization, as it can generate a KDM model from technologies like Pacbase which are going out of support.
I worked on a MDSD system that generated admin style web apps in Google Closure. I believe that your question is compelling. Too much complexity and your MDSD system is too hard to use. Too simple and you won't generate apps that are useful in the real world. Where MDSD really shines is in saving developer time typing lots of plumbing style code but how can MDSD remain effective over multiple releases? Requirements can go in many directions. That is the real challenge. I recently blogged about my MDSD lessons learned on that project.

Feasibility of Scrum with certain modifications to the philosophy [closed]

I would prefer if those who answer this question state whether or not they have experience developing in an Agile Environment or if they are speaking from a theoretical standpoint.
Backstory:
Let's say there is an opportunistic company that develops technologically innovative products (multi-touch interfaces, speech recognition devices, etc, etc), all of which are fundamentally unrelated. However, as one may see, the key advantage of working on products like these is that libraries can be created / extracted from the product and sold to other companies, developers, etc. Thus, working in an incremental fashion is advantageous as it allows the milestones to be separated from the final product.
Question1 : Is this advantageous from a business standpoint? Have any of you encountered the separating of libraries into individual products within your company?
Question2 : If products are indeed created in such an incremental manner, does Scrum seem like a valid methodology to apply?
Let's assume that this incremental process of creating components to piece together into a final application is set in place. The development team is initially very small, 6 or 7 people. For the fun of it, let's call this team a Guild. The company is just starting out, and they need to make something profitable. For argument's sake, let's say the Guild developed the FaceAPI Library. All of this was done within the Scrum methodology, let's say in one sprint. Now, the company has enough funding to employ 7 more people. These new 7 people are put into their own Guild, and their skills mirror the skills of the original Guild.
So now, this company has 2 Guilds, and 1 library off which to develop. Let's say that the one Guild is tasked with creating Product1 using the original library, and the other Guild is tasked with extending the library with more features. These two "sprints" would be carried out concurrently, and at the end the updated library would be merged into the application. As you can see, it is possible that some modifications might need to be made to the library by the team working on Product1, in which case the merge will be non-trivial.
In any case, this is the general idea. The company would have individual Guilds, or teams of people (Question 3: What do you think of this idea? Since teams are smaller, they would want to hire members that have good synergy. Is this likely to increase overall morale and productivity?), which would carry out sprints concurrently. Because of the nature of the service the company offers, the teams would work with more or less the same components, and parts of the applications, however their sprints could be created so that the teams could always carry out work without impediments. Each Guild would be a self-enclosed unit, having testers, designers, and QA's.
Final Questions:
As developers or testers, what are your opinions on a company that functions in this manner? Does it foster leadership skills in developers? Does it sound appealing? Does it sound destined to fail?
Anyone with knowledge or experience with Scrum, does it seem to apply naturally in this kind of environment?
Has anyone worked for a company that functions similarly to the above description? If you don't mind answering, what was it called? Was it successful?
To start with, I have been working on 3 more or less Scrum projects so far.
There are a couple of unclear things in your story. What is the company aiming for - developing libraries or final products? To me the two seem fairly conflicting, especially for a small company.
Another thing is, starting development with a library itself, without any real users, doesn't sound very agile to me. IMO an agile setup would start the other way around: develop a concrete product first, refactoring the design as dictated by the concrete situation, to possibly arrive at some sort of layered architecture in which the lower layer(s) could be extracted into a reusable library. Then start developing more concrete products, looking for possibilities to reuse code between the projects, and evolving the design of the common library - again, as dictated by the concrete usage and needs of its clients (the product development teams).
At some point, library development would probably require its own team - in the beginning, it might suffice to have its design and its backlog coordinated between the different teams.
Regarding your question about teams treading on each other's code - this is what source control is for. Fork for the new stuff, then in the next sprint reintegrate and stabilise.
Regarding q2, scrum is an incremental approach so if the design lends itself to incremental segments of work then of course it's appropriate.
Regarding q3, how could it ever be a bad thing to hire "people that would work well within them and that they would want to work with"?
Team organization and system structure are highly interdependent. See Conway's Law.
This means that for you to have two separate teams working on two separate code modules (the Library team and the product team) you will need to have a clearly defined communication channel between the teams and thus, the code developed will reflect those channels in the design. Traditionally what this means is you end up defining an API or interface for the library which acts like a contract to which each team can develop. Agile practices normally adopt a more emergent design philosophy so it can be difficult to create an API that makes sense.
The way most agile teams get around this is by time boxing development to manageable increments. So while it might be unrealistic to design the entire API, the product team and library team could probably agree on an API design enough for 2 weeks of work. Write the code, deploy, design for the next iteration, and repeat. This way communication paths between the teams and code modules are established so the two teams can work independently without stepping on one another's toes.
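A minimal sketch of such a per-iteration contract, assuming a hypothetical FaceDetector interface (the names are illustrative, not a real API): the library team commits to the interface for the time-box, and the product team can integrate, and even ship, against a stub until the real implementation lands.

    from abc import ABC, abstractmethod
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Face:
        x: int
        y: int
        width: int
        height: int

    class FaceDetector(ABC):
        # Contract agreed for this iteration; renegotiated next time-box.
        @abstractmethod
        def detect(self, image_bytes: bytes) -> List[Face]:
            """Return a bounding box for every face found in the image."""

    class StubFaceDetector(FaceDetector):
        # Placeholder the product team can integrate against immediately.
        def detect(self, image_bytes: bytes) -> List[Face]:
            return []

    detector: FaceDetector = StubFaceDetector()
    print(detector.detect(b""))  # [] until the real detector ships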
Another option I've seen used recently is to have larger teams managed with a Kanban/Limited WIP process. Having everyone on the same team managed by a Kanban allows for more organic and flexible self-organization which means your system will be able to evolve more easily. By keeping work-in-progress highly visible it increases communication and by limiting work-in-progress you constrain developers from clobbering each other by keeping the system from evolving too far in any one direction. Combined with a solid VCS you should be good to go.
Finally, another option is that you take some time to really think about your architecture before diving into development. Using a software architecture design process such as the Architecture Centric Design Methodology (ACDM) in a limited "spike 0" kind of role could help you resolve many of the issues commonly encountered when allowing emergent design. By the end of the design sprint, you'll be able to lay out a plan that makes much more sense for what you need to do. And remember, just because it's a design phase doesn't mean you don't write code - quite the opposite. ACDM advocates strongly for experimentation.

exploring mathematics of/in computer science [closed]

I have been working for two years in the software industry. Some things that have puzzled me are as follows:
There is a lack of application of mathematics in the current software industry.
e.g.: When a mechanical engineer designs an electricity pole, he computes the stress on the foundation by using stress analysis techniques (read: mathematical equations) to determine exactly what kind and what grade of steel should be used, but when a software developer deploys a web server application he just guesses at the estimated load on his server and leaves the rest to luck and God; there is nothing that he can use to simulate mathematically to answer his problem (my observation).
Great software (wind tunnel simulators, etc.) and computing programs (like MATLAB) exist to simulate real-world problems (because they have their mathematical equations), but in the software industry we are still clueless about how many actual resources, in terms of memory, computing power, clock speed, RAM, etc., will be needed when our server-side application is actually deployed. We just keep guessing at the solution and solve such problems by more or less 'hit and trial' (my observation).
Programming is done against APIs, whether in C, C#, or Java. We are never able to exactly check the complexity of our code, and hence its efficiency, because somewhere we are using an abstraction written by someone else whose source code we either don't have or didn't have the time to check.
e.g. If I write a simple client-server app in C# or Java, I am never able to calculate beforehand how efficient or complex this code is going to be, or what the minimum resources are that this whole client-server app will require (my observation).
Load balancing and scalability analysis are just too vague and are merely solved by adding more nodes if requests on the server are increasing (my observation).
Please post answers to any of my above puzzling observations.
Please post relevant references also.
I would be happy if someone proves me wrong and shows the right way.
Thanks in advance
Ashish
I think there are a few reasons for this. One is that in many cases, simply getting the job done is more important than making it perform as well as possible. A lot of software that I write is stuff that will only be run on occasion on small data sets, or stuff where the performance implications are pretty trivial (it's a loop that does a fixed computation on each element, so it's trivially O(n)). For most of this software, it would be silly to spend time analyzing the running time in detail.
Another reason is that software is very easy to change later on. Once you've built a bridge, any fixes can be incredibly expensive, so it's good to be very sure of your design before you do it. In software, unless you've made a horrible architectural choice early on, you can generally find and optimize performance hot spots once you have some more real-world data about how it performs. In order to avoid those horrible architectural choices, you can generally do approximate, back-of-the-envelope calculations (make sure you're not using an O(2^n) algorithm on a large data set, and estimate within a factor of 10 or so how many resources you'll need for the heaviest load you expect). These do require some analysis, but usually it can be pretty quick and off the cuff.
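For instance, a back-of-the-envelope capacity estimate is only a few lines of arithmetic (every number below is an assumption you would replace with your own measurements):

    # Rough capacity estimate; all figures are illustrative assumptions.
    peak_requests_per_sec = 500        # heaviest load you expect
    cpu_ms_per_request = 40            # measured (or guessed) per-request cost
    cores_per_server = 8
    headroom = 10                      # stay a factor of ~10 under the ceiling

    requests_per_core = 1000 / cpu_ms_per_request                # 25 req/s
    capacity_per_server = requests_per_core * cores_per_server   # 200 req/s
    servers_needed = peak_requests_per_sec * headroom / capacity_per_server

    print(f"~{servers_needed:.0f} servers for a ~10x safety margin")  # ~25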
And then there are cases in which you really, really do need to squeeze the ultimate performance out of a system. In these case, people frequently do actually sit down, work out the performance characteristics of the systems they are working with, and do very detailed analyses. See, for instance, Ulrich Drepper's very impressive paper What Every Programmer Should Know About Memory (pdf).
Think about the engineering sciences: they all have very well-defined laws that are applicable to the design and building of physical items, things like gravity, strength of materials, etc. Whereas in computer science, there are not many well-defined laws to build an application against.
I can think of many different ways to write a simple hello world program that would satisfy the requirement. However, if I have to build an electricity pole, I am severely constrained by the physical world and the requirements of the pole.
Point by point
An electricity pole has to withstand the weather, a load, corrosion etc and these can be quantified and modelled. I can't quantify my website launch success, or how my database will grow.
Premature optimisation? Good enough is exactly that, fix it when needed. If you're a vendor, you've no idea what will be running your code in real life or how it's configured. Again you can't quantify it.
Premature optimisation
See point 1. I can add as needed.
Carrying on... even engineers bollix up. Collapsing bridges, blackouts, car safety recalls, the "wrong kind of snow", etc., etc. Shall we change the question to "why don't engineers use more empirical observations?"
The answer to most of these is that, in order to have the meaningful measurements (and accepted equations, limits, tolerances, etc.) that you have in real-world engineering, you first need a way of measuring what it is that you are looking at.
Most of these things simply can't be measured easily - Software complexity is a classic, what is "complex"? How do you look at source code and decide if it is complex or not? McCabe's Cyclomatic Complexity is the closest standard we have for this but it's still basically just counting branch instructions in methods.
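To make that concrete, a McCabe-style count really can be sketched in a handful of lines; this is a rough approximation (branch points plus one), not a faithful implementation of the metric:

    import ast
    import textwrap

    BRANCH_NODES = (ast.If, ast.For, ast.While, ast.IfExp, ast.ExceptHandler)

    def cyclomatic_complexity(source: str) -> int:
        # Rough McCabe-style count: one plus the number of branch points.
        tree = ast.parse(source)
        branches = sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))
        # 'and' / 'or' add extra paths too; count each extra operand.
        branches += sum(len(node.values) - 1
                        for node in ast.walk(tree) if isinstance(node, ast.BoolOp))
        return branches + 1

    sample = textwrap.dedent("""
        def classify(x):
            if x < 0:
                return "negative"
            elif x == 0:
                return "zero"
            return "positive"
    """)
    print(cyclomatic_complexity(sample))  # 3: two decision points, plus one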
There is little math in software programs because the programs themselves are the equation. It is not possible to figure out the equation before it is actually run. Engineers use simple (and very complex) programs to simulate what happens in the real world. It is very difficult to simulate a simulator. Additionally, many problems in computer science are computationally intractable even though they are well defined mathematically: see the traveling salesman problem.
Much of the mathematics is also built into languages and libraries. If you use a hash table to store data, you know that finding any element takes expected constant time, O(1), no matter how many elements are in the hash table. If you store it in a binary tree, lookups take longer: O(log n) for a balanced tree, O(n) in the worst case.
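A quick, informal way to see that difference (using Python's dict as the hash table and a sorted list with bisect as a stand-in for a balanced tree; a toy benchmark, not a rigorous one):

    import bisect
    import timeit

    n = 100_000
    keys = list(range(n))
    table = {k: True for k in keys}   # hash table: O(1) average lookup
    sorted_keys = keys                # already sorted; stands in for a balanced tree

    target = n - 1                    # worst case for the linear scan

    print("dict  :", timeit.timeit(lambda: target in table, number=10_000))
    print("bisect:", timeit.timeit(
        lambda: sorted_keys[bisect.bisect_left(sorted_keys, target)] == target,
        number=10_000))               # O(log n), like a balanced tree
    print("list  :", timeit.timeit(lambda: target in keys, number=10_000))  # O(n)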
The problem is that software talks with other software, written by humans. The engineering examples you describe deal with physical phenomena, which are constant. If I develop an electrical simulator, everyone in the world can use it. If I develop a protocol X simulator for my server, it will help me, but probably won't be worth the work.
No one can design a system from scratch and people that write semi-common libraries generally have plenty of enhancements and extensions to work on rather than writing a simulator for their library.
If you want a network traffic simulator you can find one, but it will tell you little about your server load because the traffic won't be using the protocol your server understands. Every server is going to see completely different sets of traffic.
There is a lack of application of mathematics in the current software industry.
e.g.: When a mechanical engineer designs an electricity pole, he computes the stress on the foundation by using stress analysis techniques (read: mathematical equations) to determine exactly what kind and what grade of steel should be used, but when a software developer deploys a web server application he just guesses at the estimated load on his server and leaves the rest to luck and God; there is nothing that he can use to simulate mathematically to answer his problem (my observation).
I wouldn't say that luck or god are always the basis for load estimation. Often realistic data can be had.
It's also not true that there are no mathematical techniques to answer the question. Operations research and queuing theory can be applied to good advantage.
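As a small illustration of that point, even the textbook M/M/1 queueing formulas give a first-order estimate of how a server behaves under load (the arrival and service rates below are made-up numbers):

    # M/M/1 queue: one server, Poisson arrivals, exponential service times.
    arrival_rate = 80.0    # lambda: requests arriving per second (assumed)
    service_rate = 100.0   # mu: requests the server can finish per second (assumed)

    rho = arrival_rate / service_rate                        # utilisation = 0.8
    avg_in_system = rho / (1 - rho)                          # L = 4 requests
    avg_time_in_system = 1 / (service_rate - arrival_rate)   # W = 0.05 s

    print(f"utilisation       : {rho:.0%}")
    print(f"avg requests held : {avg_in_system:.1f}")
    print(f"avg latency       : {avg_time_in_system * 1000:.0f} ms")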
The real problem is that mechanical engineering is based on laws of physics and a foundation of thousands of years worth of empirical and scientific investigation. Computer science is only as old as me. Computer science will be much further along by the time your children and grandchildren apply the best practices of their day.
An MIT EE grad would not have this problem ;)
My thoughts:
Some people do actually apply math to estimate server load. The equations are very complex for many applications and many people resort to rules of thumb, guess and adjust or similar strategies. Some applications (real time applications with a high penalty for failure... weapons systems, powerplant control applications, avionics) carefully compute the required resources and ensure that they will be available at runtime.
Same as 1.
Engineers also use components provided by others, with a published interface. Think of electrical engineering. You don't usually care about the internals of a transistor, just its interface and operating specifications. If you wanted to examine every component you use in all of its complexity, you would be limited to what one single person can accomplish.
I have written fairly complex algorithms that determine what to scale, and when, based on various factors such as memory consumption, CPU load, and IO. However, the most efficient solution is sometimes to measure and adjust. This is especially true if the application is complex and evolves over time. The effort invested in modeling the application mathematically (and updating that model over time) may be more than the cost of the efficiency lost to try-and-correct approaches. Eventually, I could envision a better understanding of the correlation between code and the environment it executes in leading to systems that predict resource usage ahead of time. Since we don't have that today, many organizations load test code under a wide range of conditions to empirically gather that information.
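A toy sketch of that kind of rule-based scaling decision (the thresholds and metrics are invented for illustration; a real system would tune them empirically):

    def scaling_action(cpu_pct: float, mem_pct: float, io_wait_pct: float) -> str:
        # Illustrative thresholds only; not taken from any real deployment.
        if io_wait_pct > 40:
            return "add faster storage or more disks"   # IO-bound
        if mem_pct > 80:
            return "add memory or shard the data set"   # memory-bound
        if cpu_pct > 75:
            return "add compute nodes"                  # CPU-bound
        return "no change; keep measuring"

    print(scaling_action(cpu_pct=82, mem_pct=55, io_wait_pct=10))  # add compute nodes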
Software engineering is very different from the typical fields of engineering. Where "normal" engineering is bound to the context of our physical universe and the laws in it we've identified, there's no such boundary in the software world.
Producing software is usually an attempt to mirror a subset of the real-life world in a virtual reality. Here we define the laws ourselves, by picking only the ones we need and by making them just as complex as we need. Because of this fundamental difference, you need to look at problem-solving from a different perspective. We try to make abstractions to make complex parts less complex, just like we teach kids that yellow + blue = green, when it's really the wavelength of the light that bounces off the paper that changes.
Once in a while we are bound by different laws, though. Stuff like Big-O, test coverage, complexity measurements, UI measurements and the like are all models of mathematical laws. If you look into digital signal processing, real-time programming and functional programming, you'll often find that the programmers use equations to figure out a way to do what they want - but these techniques aren't really (to some extent) useful for creating a virtual domain that can solve complex logic and branching and interact with a user.
The reason why wind tunnels, simulations, etc. are needed in the engineering world is that it's much cheaper to build a scaled-down prototype than to build the full thing and then test it. Also, a failed test on a full-scale bridge is destructive - you have to build a new one for each test.
In software, once you have a prototype that passes the requirements, you have the full-blown solution. There is no need to build the full-scale version. You should be running load simulations against your server apps before going live with them, but since loads are variable and often unpredictable, you're better off building the app to be able to scale to any size by adding more hardware than targeting a certain load. Bridge builders have a given target load they need to handle. If they had a predicted usage of 10 cars at any given time, and then a year later the bridge's popularity soared to 1,000,000 cars per day, nobody would be surprised if it failed. But with web applications, that's the kind of scaling that has to happen.
1) Most business logic is usually broken down into decision trees. This is the "equation" that should be proved with unit tests: if you put in x then you should get y. I don't see any issue there (see the sketch after this list).
2, 3) Profiling can provide some insight as to where performance issues lie. For the most part you can't say that software will take x cycles, because that will change over time (i.e. the database becomes larger, the OS starts going funky, etc.). Bridges, for instance, require constant maintenance; you can't slap one up and expect it to last 50 years without spending time and money on it. Using libraries is like not trying to figure out pi every time you want to find the circumference of a circle. It has already been proven (and is cost-effective), so there is no need to reinvent the wheel.
4) For the most part web applications scale well horizontally (multiple machines). Vertical (multithreading/multiprocess) scaling tends to be much more complex. Adding machines is usually relatively easy and cost-effective, and avoids some bottlenecks that become limiting rather easily (disk I/O). Also, load balancing can eliminate the possibility of one machine being a central point of failure.
It isn't exactly rocket science, as you never know how many consumers will come to the serving line. Generally it is better to have too much capacity than to have errors, pissed-off customers and someone (generally your boss) chewing your hide out.
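Referring back to point 1, here is a minimal sketch of treating a business rule as an "equation" proved by unit tests (the shipping rule itself is invented purely for illustration):

    import unittest

    def shipping_cost(weight_kg: float, express: bool) -> float:
        # Toy decision-tree style business rule (hypothetical).
        if weight_kg <= 0:
            raise ValueError("weight must be positive")
        base = 5.0 if weight_kg < 2 else 9.0
        return base * (2 if express else 1)

    class ShippingCostTest(unittest.TestCase):
        def test_light_standard(self):
            self.assertEqual(shipping_cost(1.0, express=False), 5.0)

        def test_heavy_express(self):
            self.assertEqual(shipping_cost(5.0, express=True), 18.0)

        def test_invalid_weight(self):
            with self.assertRaises(ValueError):
                shipping_cost(0, express=False)

    if __name__ == "__main__":
        unittest.main()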

Type of Team Lead: More Programmer || More !Programmer [closed]

Yesterday I had a team leader of another team say that they took a while to figure out something I wrote on a wiki page, because I referred to obtaining code from source control as "checking out", which apparently confused them. They said that they were used to ClearCase and had only heard of the term "joining a project", and that they hadn't really programmed much for a long time.
While this is fine, what it then made me think of is the different types of team leaders I've had over the years. I've had some that have been almost purely managerial and I've had those that are programmers that do managerial things at the same time.
Do people have a preference as to what kind of team leader they have? Do you care whether your team lead is active in the development of your product? I find team leaders who actually sit and code like the rest of the team more likely to understand things like (from my experience):
things aren't always as simple as they sound. Team leaders I've had who don't code or rarely code at all believe everything is a piece of cake and shouldn't take much time at all (which perhaps might be the case if you want to hack it together)
they are more understanding that developers don't always like sitting in long meetings and do their best to avoid getting their team into as many pointless meetings as possible
they understand what you say from a technical point of view. Those that might not have coded for a while might not be up to speed with a lot of the new technologies, techniques or lingo
I find it much more satisfying to have a team leader who has the mind of a developer and likes to get their hands dirty in the code as well. Perhaps there are some people out there that like team leads who distance themselves from the actual coding side of things and simply doles out the work, or perhaps another type of team leader that I haven't mentioned?
A team leader has to be a coder -- they can't lead the team unless the team respects them and where they're taking everyone.
A team manager, on the other hand, can either be a coder or someone who is just well organised and knows when to ask questions and interface to other management.
It is possible to find both a manager and a leader in the same person, but more often the roles (should be) separate and distinct.
You should read the book Managing Humans. I am of the opinion that managers should keep their hands out of the code. They have more important responsibilities, like keeping people away from developers so they can do their job. Having them jump into development creates confusion: they aren't in it enough to know what's going on, and their time is divided between that and other things, so it is difficult to count on them for major pieces of functionality. Plus, it really sucks when you have to tell your manager that something they just wrote needs to be changed, and you have to go back and redo it. Managers are really there to jump on the grenades for the rest of the team, so the team can focus on accomplishing the task at hand.
That being said, should managers know about software engineering? Yes, of course they should; that's the field they are in. Should they know how to code in the latest and greatest whiz-bang technology? That shouldn't really matter as long as they get how software development works.
I have no preference, I can't, I have to work with all of them, even though too many cooks spoil the broth. On a multi-developer typical project I have a technical lead, project manager and a non-technical customer. Of course, divisional and programme management will each stick their head in.
There are a number of types of leader, each have their own traits:
Non-technical customer: "The customer is always right." Often wants a moon-on-a-stick. Will call both the management and the technical bods and take the best answer as gospel.
Team manager/line manager: Somewhat pastoral role. Not particularly interested in the project I'm working on right now. Steps in when there is a decision to be made between project priorities. Probably really wants to be a coder, and delegates all the rest of his work that he can to his subordinates.
Project manager: Varying degrees of technical know-how. Is concerned only with timescales and costs. Does not understand, "I don't know how long it's going to take; I need to play with it for a couple of days first to get a feel."
Team leader/technical lead: Just another developer, but with more experience. Responsible for technical decision making that will affect the whole project. Often fighting with the project manager to carry out good engineering practice, even though it will take longer in the short term.
Team leader/glorified secretary: Someone who is supposed to lead the team, but acts as more of a secretary. (Usually a grade above the team). Answers the phones, insulates customers from the technical bods. This works fine until they ask a technical question, where the glorified secretary tries to blag his/her way out of it, and eventually they work around the secretary and talk direct to the team.
We typically have a PM (non technical) who manages the project from an admin. viewpoint and a Tech Lead who manages the technical aspects and provides technical leadership to the team.
The Tech Lead will code parts of the project and will probably be the main (only) developer for the "Proof of Concept" stage.
On some smaller projects, they are the same person but it's a rare combination.
The absolute worst Software Leads/Chief Software Engineers that I've worked with were the ones that wanted to be intimately involved in the technical details. Too many important tasks were either missed or just not done. Managing a team is a full-time job. If the lead wants to get involved in the technical aspects it will certainly come at the expense of the managerial aspects.
I’ve only had 2 Software Leads/Chief Software Engineers out of dozens that I thought were worthwhile. While both were previously software engineers, those days were long gone for both of them. They knew it. They didn’t even try to pretend. Their job was now to manage. Their job was to make sure the developers had every chance to succeed. They did their best to remove all obstacles and make sure everyone was making progress.
I have a theory, but have never seen it in action, that the best software lead would be someone who is not, nor ever has been a software developer. They specialize in the true spirit of management, specifically that of being a facilitator. Unfortunately, most managers are more politically motivated or are just in the job because they've reached their pinnacle technically.
