C10K for the modern world - networking

Answering this question made me wonder: is there a nice, modern version of the famous C10K page? Five years have passed since it was last updated, and I'm sure there have been advances in the field.
Perhaps the current version would be called something like the C1M problem? ;)

In the modern world of distributed load balancers and content distribution networks, I'm not sure that a single system being able to handle such a large number of concurrent connections is as big a deal. These days scalability is more about scaling out to distribute load than scaling up to beef up a single machine's capabilities.

There's a writeup of a million-user comet application, though it investigates Erlang (plus OS tuning).

Related

Real life experience developing with Meteor

I'm working on a project where we have to decide soon whether to invest in our current technology stack (LAMP-based) to improve it and make it flexible enough to support our time to market, or whether to change to a different stack in the hope that it would make our development faster, more efficient, and possibly more fun.
One framework we're looking at is Meteor. So I'm wondering: does anyone have real-life experience with starting or shifting a medium-sized project to Meteor (3 developers, a couple of hundred active users, mostly short-lived small pieces of user-generated content that are viewed by all users and need to be updated instantly)? Do you have metrics on productivity, code quality, or code efficiency that you could share? Or just an overall feeling for how it went? How happy are you with Meteor when working with it for more than just a week or two? How is maintainability over a longer period? How well does it scale up?
Would appreciate any insight!
I'll try to be as fact-based as possible to keep this objective:
I switched from Django to Meteor, and from PostgreSQL to MongoDB.
Switching stacks has a huge cost: a new language, syntax, patterns, and maybe even a new IDE. Online courses to be taken, a solid Node.js foundation to build, curiosity about io.js, ES6, and Mongo 3.0 to satisfy. A refresher on how JavaScript treats Dates and numbers, and on how to query Mongo from JavaScript.
On top of that, you'll want your developers to peek under the hood to see the Meteor magic so they understand fibers, reactivity, DDP, and minimongo. All these things will cost each developer at LEAST 160 hours, yet they are necessary to be a competent developer. Skip these steps, and you've got a team of monkeys pulling levers.
To answer your questions:
Productivity? It will hit rock bottom along with code quality, then slowly climb, and possibly exceed the previous mark (IF it's something the developers enjoy). This is because client and server are in the same language and just a file away. Debugging messages and stack traces are pretty good, and hot code reloads, although still not great, are useful.
Code quality has absolutely nothing to do with the framework.
Code efficiency is good because reactivity is handled behind the scenes most of the time, and fibers make it possible to write server code in a synchronous fashion. This increases code readability.
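To make that concrete, here's a minimal sketch of what that synchronous style looks like in a Meteor method (the Tasks collection and field names are invented for the example):

    // A hypothetical collection; the name is illustrative only.
    Tasks = new Mongo.Collection('tasks');

    Meteor.methods({
      addTask(text) {
        // Thanks to fibers, this insert blocks without callbacks or
        // promise chains -- the server code reads straight down.
        const id = Tasks.insert({ text, createdAt: new Date() });
        return id; // the return value travels back to the client over DDP
      }
    });

No callback pyramid, no explicit promise handling: that's the readability win fibers buy you.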
Maintainability is another word for code quality.
Scalability is more a question about node.js, but it will work for the VAST majority of projects. An honest critique of node's shortcomings is here: https://medium.com/code-adventures/farewell-node-js-4ba9e7f3e52b

Corp IT Systems direction. Invest in A or B?

This is more of a general question about which direction would be a better investment for the company.
Our company's core business application is written in Visual FoxPro and is 9+ years old. The database is huge (15+ GB), the core logic is complex, and to make matters worse the data model is terrible. The two guys who built it and have maintained it all these years are at least in their 50s, so needless to say retirement or possibly death could come within the next decade or so.
This VFP app drives all our core business functions and requires Terminal Services and Citrix to access it from the outside world. Our web apps have to interface with it via ODBC, and we are always having performance issues with it. The servers that run this system are also very old (Windows 2000 Server) and are falling apart.
Recently we have been having meetings about upgrading the systems that run this core app, as well as other services like email and file storage. The biggest expense, however, is buying new server hardware, OS licensing, Terminal Services licensing, Citrix licensing, etc. to solve some performance and outside-access issues we are currently having, as well as just generally bringing our systems up to date.
The price tag is going to be in the $55K to $65K range. So as a web developer, my point of view is that this is a huge waste of money! My solution would be to invest that money in rewriting the core system to run on a web-based .NET platform. This would eliminate the need for Terminal Services and Citrix licensing, along with the pricey hardware and configuration management needed to run them. I don't see the point in investing this kind of money in an antiquated system that should be on its way out anyway.
I am looking for some convincing arguments to support my case that the hardware upgrade is a waste of money. Hopefully there is someone here who has faced this type of situation before and can give me some points of view. The hardware upgrade seems to be the easiest road to take, because they will just have a consultant come in and do it all. A software development project would take longer, require more resources, and possibly cost a little more money.
The short-term rewrite vs. re-hardware argument cannot be won. Hardware and licenses are always cheaper than a rewrite. And hardware plus license seems to involve no risk.
You can't win the ROI argument. Unless the system is trivial and you are a genius, it will always cost $100K or more to rewrite an application that actually does something. Think multiple person-years.
You might win the "technical debt" argument. Change is getting more and more complex, risky and expensive. The longer this code is perpetuated, the more risk and cost accumulates.
The real question is "start to fix now?" or "wait until it breaks and suffer later?" And that has no definite $-valued answer.
You can't compete on money, so you have to compete on risk, features, growth, maintainability, adaptability, standards compliance, security, creating unique value for each customer, etc., etc.
"We are now looking at a larger base of customers and more data". That's an argument you might be able to win.
(I'm over 50, I'm not planning on dying any time soon. That argument doesn't win hearts and minds. Unless they're over 80, you can't really use age except as way to get your argument ignored.)
Focus on the cost (and risk) of making changes.
Prove that you have a web-based solution that makes changes less costly and less risky.
Further, dig into what's there and find parts that can be replaced by a web framework. Code you don't write is cheaper to maintain than code you write.
Every project needs a cost-benefit analysis. If a $60,000 one-time investment will resolve all issues for the next 10 years, then it is (probably) far more economical than hiring a team of developers for even one year to build a newer, better system.
On the other hand, if the system is already costing $50,000/year in maintenance, this capital cost is just to keep it alive, and you'll need to spend another $60K a few years from now, then it warrants serious consideration of a redesign.
Or you could take the middle road and start wrapping it up in something opaque like a web service, then gradually swap out components with better (more efficient, more maintainable, etc.) internal ones. Lots of companies go this route because it defers the up-front costs of a rewrite, and if necessary you can divert IT resources elsewhere.
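As a rough sketch of that wrapping idea (the route and the legacy adapter below are hypothetical, and nothing here is specific to VFP), the facade can be a thin service whose callers never see the legacy store:

    // Hypothetical Node/Express facade over the legacy system.
    const express = require('express');
    const legacy = require('./legacy-adapter'); // assumed thin wrapper over the old app
    const app = express();

    app.get('/customers/:id', async (req, res) => {
      // Today this delegates to the legacy database; later the same
      // route can be backed by a replacement component, and no caller
      // has to change.
      const customer = await legacy.findCustomer(req.params.id);
      res.json(customer);
    });

    app.listen(3000);

Once everything talks to the facade, each internal piece can be swapped on its own schedule.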
S.Lott is right, though - it's likely that you won't be able to compete on cost alone. You have to try to quantify the risks associated with these ancient systems - for example, how much it will cost the company to find and train qualified FoxPro developers if the original programmers decide to quit (or, to use the parlance of so many managers I've met, get "run over by a bus")...
Just to add some further perspective to this: Before .NET (and for a few years after) I conducted most of my projects exclusively in Delphi. At the time, it really was a great choice for enterprise development. I was actually the person who didn't want to "upgrade." After a while, however, it became apparent to both myself and my higher-ups that this scared people outside the company.
Investors, auditors, everyone - they didn't like the idea that our core IT asset was done in some "obscure" language. Of course, Delphi wasn't/isn't really that obscure; there's a "delphi" tag here on SO with a count of 3340. But let's use SO as our example - here are the current counts:
c# - 57293
.net - 30577
asp.net - 26600
java - 31023
vb.net - 5996
delphi - 3340
foxpro - 69
vfp - 27
Let those numbers sink in for a while. Delphi, my tool of choice at the time, now has less than 10% of the representation of C#, and this made non-techies nervous. FoxPro/VFP is not even at 1%. I can't even remember how many times I had to answer questions like:
What happens if the lead developer (me) quits or gets run over by a bus?
How difficult/costly will it be to hire programmers in that field?
What if the vendor stops supporting it? (This almost happened)
What if we want to get outside help? Consultants? Security audits?
How easy will it be to get it to work with outside products?
Blah blah blah, worry worry worry, was how I felt at the time - and that was a product that wasn't really that obscure. In your case, we're talking about FoxPro. FoxPro has become almost like COBOL; sure, it's still around, and there are people out there who know it, but who starts a new project in FoxPro today? It's boring; it's downright ghetto. VB6 is starting to become ghetto, and VB/Access effectively replaced FoxPro many years ago.
I'm obviously being slightly melodramatic here, but if I were you, this is the angle I would be taking. Forget about the short-term economics, forget about the age, and focus on the obscurity of the product. How many genuine, qualified responses do they think they'll get if they put a want-ad out for a FoxPro developer? What kind of pay would they have to offer for a position like that? What would the turnover be like? This may all seem remote if these two developers have been there for 20-odd years, but when you're running a multimillion-dollar business, you ought to know that it's never a good idea to stake your very survival on one or two employees - not if you can help it.
In general, supplementing a poor system with tons of hardware is a bad plan. I would probably say that it's better to rewrite, but it's hard to say without knowing the details.
Bear in mind that a decent rewrite should improve performance, reliability, and maintainability, so the potential savings are large and will only increase year on year, even if the initial investment is a little more.
In order to figure out whether it is worthwhile, you have to calculate, in addition to the costs of a rewrite:
Documenting everything the system currently does, and reverse-engineering the requirements.
Writing unit and integration tests for everything that currently exists. This probably doesn't exist already, but should.
Cost of maintaining the new system. The new system isn't going to eliminate maintenance costs, merely reduce them. How much will you save?
Cost of hardware for the new system. The new system is going to have to run on something.
Licensing costs for any software/etc. that are needed for the new system. Is everything going to be open source? Or are you going to need several Visual Studio Test Editions for your developers and testers?
Cost of hiring new personnel to do the development. In addition to the straight salary costs, there are office costs. The total might be $300,000, for say 3 developers, counting salary, office space, equipment, licenses, health care benefits.
Time horizon for the saving. The saving isn't going to occur immediately. It is going to occur in the future. In the meantime, they have to still pay for the licensing for the current system, because something has to do the job until the new system is put in place.
Cash flow issues. Because of the above, in the short term they are going to need more money to fund the development. The actual costs are higher, because they essentially have to get a loan, raise equity, or bear an opportunity cost (they are going to have to forego some other investment opportunity to pursue the rewrite).
Business risk. There is a danger that the rewrite might cost more, work worse, or both.
Two important numbers:
Number of "FoxPro" jobs listed in San Francisco's craigslist right now: 2.
Number of ".NET" jobs listed in San Francisco's craigslist right now: 252.
A lot of other points that have been mentioned are valid. However, you can spend as much as you want on hardware, but the fact is that if something breaks and you need help, you are going to have a heck of a time finding more people to help.
Sounds like a good time to start talking about a migration¹ to newer, better-supported technologies. (And in 10 years when .NET is old hat, you can do it all over again :)
[1] And evolve the system, don't rewrite it. I would guess your current system grew very organically based on needs at the time. There's no way that you'll be able to completely replace all of that (at least, not without a couple of years and a few million bucks).
As a long-time VFP developer (20+ years with FoxPro/VFP, and I STILL have people asking me to write or update their systems in VFP, for a variety of reasons), I can say it's still very powerful. That said, bringing much of my OOP and development experience over to .NET, I do find some things in .NET much easier, especially the strong typing. However, doing a basic report REQUIRES strong typing against the database tables/structures/objects, which in many cases so far has been a PITA.
The price tag for a rewrite is always a significant consideration, but so too is the collapse of ANY system, regardless of whether it's VFP, VB, Access, or something else. I would strongly suggest getting a consulting company in to help with the re-modeling of your system and perhaps act as a project manager/mentor to your in-house programmers, who may be able to offer their talents even though they may require some training in the new development environment. This way, you get a good basis of strong talent in the language, yet keep some costs down by using your own programming staff, though you may still need to hire supplemental programmers. The learning curve from VFP to .NET is real, and can still be a head-scratcher.
There are a variety of companies out there that were VFP specialists and have since migrated their services to the .NET world; they may offer a perfect match for your organization, having historic knowledge and professional experience of BOTH worlds. I know they can act as mentors for the development of such work, too.
You can only say it is a waste of money after you've analyzed the ROI - it will depend heavily on how much it costs to rewrite the system.
Classic mistake on JOS - "system is a mess, let's rewrite it".
It's like looking at an old building, seeing a toothpick, and wondering why it's there. You figure it isn't needed, and pull it out.
Suddenly the building collapses around your head :)
It might be a better idea to:
Rewrite parts of the system for better maintainability.
Optimize the system for better performance.
Abstract the FoxPro-specific parts, so they can be more easily converted to some other technology.
This incremental approach would reduce risk, and provide some short-term improvements.
There is no magic bullet here for the company. The only way to be sure is to take the hit on a new server to get the stability and speed benefits that brings to the existing business-critical software. Then, once that is parked for a few years, start re-engineering the thing on a different platform like .NET, if that's what you want to do - bearing in mind that you will have to migrate the VFP data into the new database structure at some point.

exploring mathematics of/in computer science [closed]

It's difficult to tell what is being asked here. This question is ambiguous, vague, incomplete, overly broad, or rhetorical and cannot be reasonably answered in its current form.
Closed 13 years ago.
I have been working for two years in software industry. Some things that have puzzled me are as follows:
There is a lack of application of mathematics in the current software industry.
e.g.: When a mechanical engineer designs an electricity pole, he computes the stress on the foundation using stress analysis techniques (read: mathematical equations) to determine exactly what kind and grade of steel should be used; but when a software developer deploys a web server application, he just guesses at the estimated load on his server and leaves the rest to luck and God - there is nothing he can use to simulate mathematically to answer his problem (my observation).
Great software (wind tunnel simulators, etc.) and computing programs (like MATLAB) exist to simulate real-world problems (because they have their mathematical equations), but we in the software industry are still clueless about how many actual resources - memory, computing power, clock speed, RAM, etc. - will be needed when our server-side application is actually deployed. We just keep guessing at the solution and solve such problems more or less by trial and error (my observation).
Programming is done against APIs, whether in C, C#, or Java. We are never able to check the exact complexity of our code, and hence its efficiency, because somewhere we are using an abstraction written by someone else, whose source code we either don't have or didn't have the time to check.
e.g.: If I write a simple client-server app in C# or Java, I am never able to calculate beforehand what the efficiency and complexity of this code is going to be, or the minimum resources the whole client-server app will require (my observation).
Load balancing and scalability analysis are just too vague and are merely solved by adding more nodes if requests on the server are increasing (my observation).
Please post answers to any of my above puzzling observations.
Please post relevant references also.
I would be happy if someone proves me wrong and shows the right way.
Thanks in advance
Ashish
I think there are a few reasons for this. One is that in many cases, simply getting the job done is more important than making it perform as well as possible. A lot of software that I write is stuff that will only be run on occasion on small data sets, or stuff where the performance implications are pretty trivial (it's a loop that does a fixed computation on each element, so it's trivially O(n)). For most of this software, it would be silly to spend time analyzing the running time in detail.
Another reason is that software is very easy to change later on. Once you've built a bridge, any fixes can be incredibly expensive, so it's good to be very sure of your design before you do it. In software, unless you've made a horrible architectural choice early on, you can generally find and optimize performance hot spots once you have some more real-world data about how it performs. In order to avoid those horrible architectural choices, you can generally do approximate, back-of-the-envelope calculations (make sure you're not using an O(2^n) algorithm on a large data set, and estimate within a factor of 10 or so how many resources you'll need for the heaviest load you expect). These do require some analysis, but usually it can be pretty quick and off the cuff.
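Such a back-of-the-envelope check can be a few lines of arithmetic; all the numbers below are invented for illustration:

    // Rough capacity estimate with made-up inputs.
    const requestsPerDay = 1e6;                                 // expected traffic (assumed)
    const avgPerSecond   = requestsPerDay / 86400;              // ~11.6 req/s
    const peakPerSecond  = avgPerSecond * 10;                   // assume peak is 10x average
    const msPerRequest   = 50;                                  // measured or guessed handler time
    const coresNeeded    = peakPerSecond * msPerRequest / 1000; // ~5.8 cores
    console.log({ avgPerSecond, peakPerSecond, coresNeeded });

If the answer comes out around 6 cores, a straightforward design on one box is probably fine; if it comes out at 600, you rethink the architecture before writing code.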
And then there are cases in which you really, really do need to squeeze the ultimate performance out of a system. In those cases, people frequently do sit down, work out the performance characteristics of the systems they are working with, and do very detailed analyses. See, for instance, Ulrich Drepper's very impressive paper What Every Programmer Should Know About Memory (pdf).
Think about the engineering sciences: they all have very well-defined laws that are applicable to the design and building of physical items - things like gravity, strength of materials, etc. In computer science, by contrast, there are not many well-defined laws to build an application against.
I can think of many different ways to write a simple hello-world program that would satisfy the requirement. However, if I have to build an electricity pole, I am severely constrained by the physical world and the requirements of the pole.
Point by point
An electricity pole has to withstand the weather, a load, corrosion etc and these can be quantified and modelled. I can't quantify my website launch success, or how my database will grow.
Premature optimisation? Good enough is exactly that, fix it when needed. If you're a vendor, you've no idea what will be running your code in real life or how it's configured. Again you can't quantify it.
Premature optimisation
See point 1. I can add as needed.
Carrying on... even engineers bollix up. Collapsing bridges, blackouts, car safety recalls, "wrong kind of snow", etc. Shall we change the question to "why don't engineers use more empirical observations?"
The answer to most of these is that in order to have the meaningful measurements (and accepted equations, limits, tolerances, etc.) that you have in real-world engineering, you first need a way of measuring what it is you are looking at.
Most of these things simply can't be measured easily. Software complexity is a classic: what is "complex"? How do you look at source code and decide whether it is complex or not? McCabe's cyclomatic complexity is the closest standard we have for this, but it's still basically just counting branch instructions in methods.
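To see how mechanical the metric is, here is the count done by hand on a toy function (the metric is real; the function is invented):

    // McCabe's cyclomatic complexity = decision points + 1.
    function sumPositives(values) {
      let total = 0;
      for (const v of values) {   // decision point 1
        if (v > 0) {              // decision point 2
          total += v;
        }
      }
      return total;               // 2 decisions + 1 = complexity 3
    }

A score of 3 tells you something, but whether the function feels "complex" to a human reader is a judgment the number can't make.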
There is little math in software programs because the programs themselves are the equation, and it is not possible to figure out the equation before it is actually run. Engineers use simple (and very complex) programs to simulate what happens in the real world, but it is very difficult to simulate a simulator. Additionally, many problems in computer science don't even have a tractable mathematical answer: see the traveling salesman problem.
Much of the mathematics is also built into languages and libraries. If you use a hash table to store data, you know that finding any element takes constant time, O(1), no matter how many elements are in the hash table. If you store it in a binary tree, lookups take O(log n) for a balanced tree, growing with the number of elements.
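Put differently, the complexity guarantee ships with the library; a small illustration:

    // Map lookups are (amortized) constant time no matter how large
    // the map grows -- the math was done once, by the library authors.
    const index = new Map();
    for (let i = 0; i < 1000000; i++) index.set('key' + i, i);
    console.log(index.get('key999999')); // one lookup, not a million comparisons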
The problem is that software talks with other software, written by humans. The engineering examples you describe deal with physical phenomena, which are constant. If I develop an electrical simulator, everyone in the world can use it. If I develop a protocol X simulator for my server, it will help me, but probably won't be worth the work.
No one can design a system from scratch and people that write semi-common libraries generally have plenty of enhancements and extensions to work on rather than writing a simulator for their library.
If you want a network traffic simulator you can find one, but it will tell you little about your server load because the traffic won't be using the protocol your server understands. Every server is going to see completely different sets of traffic.
There is a lack of application of mathematics in the current software industry.
e.g.: When a mechanical engineer designs an electricity pole, he computes the stress on the foundation using stress analysis techniques (read: mathematical equations) to determine exactly what kind and grade of steel should be used; but when a software developer deploys a web server application, he just guesses at the estimated load on his server and leaves the rest to luck and God - there is nothing he can use to simulate mathematically to answer his problem (my observation).
I wouldn't say that luck or God are always the basis for load estimation. Often realistic data can be had.
It's also not true that there are no mathematical techniques to answer the question. Operations research and queuing theory can be applied to good advantage.
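For instance, a textbook M/M/1 queue model turns two assumed rates into concrete load estimates:

    // M/M/1 queue: one server, Poisson arrivals (both rates are assumptions).
    const lambda = 8;                       // arrivals per second
    const mu     = 10;                      // service completions per second
    const rho    = lambda / mu;             // utilization: 0.8
    const avgInSystem = rho / (1 - rho);    // L = 4 requests in the system
    const avgResponse = 1 / (mu - lambda);  // W = 0.5 s average response time
    console.log({ rho, avgInSystem, avgResponse });

Note how response time explodes as utilization approaches 1: push lambda to 9.5 and W jumps from 0.5 s to 2 s, which is exactly the kind of non-obvious result the math buys you.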
The real problem is that mechanical engineering is based on laws of physics and a foundation of thousands of years worth of empirical and scientific investigation. Computer science is only as old as me. Computer science will be much further along by the time your children and grandchildren apply the best practices of their day.
An MIT EE grad would not have this problem ;)
My thoughts:
Some people do actually apply math to estimate server load. The equations are very complex for many applications, so many people resort to rules of thumb, guess-and-adjust, or similar strategies. Some applications (real-time applications with a high penalty for failure: weapons systems, power plant control applications, avionics) carefully compute the required resources and ensure that they will be available at runtime.
Same as 1.
Engineers also use components provided by others, with a published interface. Think of electrical engineering: you don't usually care about the internals of a transistor, just its interface and operating specifications. If you wanted to examine every component you use in all of its complexity, you would be limited to what one single person can accomplish.
I have written fairly complex algorithms that determine what to scale, and when, based on factors such as memory consumption, CPU load, and IO. However, the most efficient solution is sometimes to measure and adjust. This is especially true if the application is complex and evolves over time. The effort invested in modeling the application mathematically (and updating that model over time) may exceed the cost of the efficiency lost to try-and-correct approaches. Eventually, I could envision a better understanding of the correlation between code and the environment it executes in leading to systems that predict resource usage ahead of time. Since we don't have that today, many organizations load-test code under a wide range of conditions to gather that information empirically.
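A "measure and adjust" rule of that kind can be as simple as thresholds over a window of recent samples (a hypothetical sketch, not any real autoscaling API):

    // Scale out when any resource stays hot across all recent samples,
    // where samples look like [{ cpu: 0..1, memory: 0..1, ioWaitMs }, ...].
    function shouldScaleOut(samples) {
      const sustained = (pred) => samples.length > 0 && samples.every(pred);
      return (
        sustained(s => s.cpu > 0.75) ||
        sustained(s => s.memory > 0.85) ||
        sustained(s => s.ioWaitMs > 100)
      );
    }

Requiring the condition to hold across the whole window is the cheap way to avoid flapping on a single noisy sample.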
Software engineering is very different from the typical fields of engineering. Where "normal" engineering is bound to the context of our physical universe and the laws we've identified in it, there's no such boundary in the software world.
Producing software is usually an attempt to mirror a subset of the real-life world into a virtual reality. Here we define the laws ourselves, picking only the ones we need and making them just as complex as we need. Because of this fundamental difference, you need to look at problem-solving from a different perspective. We make abstractions to render complex parts less complex, just as we teach kids that yellow + blue = green, when it's really the wavelength of the light bouncing off the paper that changes.
Once in a while we are bound by different laws, though. Stuff like Big-O, test coverage, complexity measurements, and UI metrics are all models of mathematical laws. If you look into digital signal processing, real-time programming, and functional programming, you'll often find that the programmers use equations to figure out a way to do what they want - but these techniques aren't really (to some extent) useful for creating a virtual domain that can handle complex logic, branching, and interaction with a user.
The reason wind tunnels, simulations, etc. are needed in the engineering world is that it's much cheaper to build a scaled-down prototype than to build the full thing and then test it. Also, a failed test on a full-scale bridge is destructive: you would have to build a new one for each test.
In software, once you have a prototype that passes the requirements, you have the full-blown solution; there is no need to build the full-scale version. You should be running load simulations against your server apps before going live with them, but since loads are variable and often unpredictable, you're better off building the app to scale to any size by adding more hardware than targeting a certain load. Bridge builders have a given target load they need to handle. If they had a predicted usage of 10 cars at any given time, and a year later the bridge's popularity soared to 1,000,000 cars per day, nobody would be surprised if it failed. But with web applications, that's exactly the kind of scaling that has to happen.
1) Most business logic is usually broken down into decision trees. This is the "equation" that should be proven with unit tests. If you put in x, then you should get y; I don't see any issue there.
2, 3) Profiling can provide some insight as to where performance issues lie. For the most part you can't say that software will take x cycles, because that will change over time (i.e., the database becomes larger, the OS starts going funky, etc.). Bridges, for instance, require constant maintenance; you can't slap one up and expect it to last 50 years without spending time and money on it. Using libraries is like not trying to figure out pi every time you want to find the circumference of a circle: it has already been proven (and is cost-effective), so there is no need to reinvent the wheel.
4) For the most part, web applications scale well horizontally (multiple machines); vertical scaling (multithreading/multiprocessing) tends to be much more complex. Adding machines is usually relatively easy and cost-effective, and it avoids bottlenecks that are hit rather easily (disk I/O). Load balancing can also eliminate the possibility of one machine being a central point of failure.
It isn't exactly rocket science, as you never know how many consumers will come to the serving line. Generally it is better to have too much capacity than to have errors, pissed-off customers, and someone (generally your boss) chewing your hide out.

Can software developing in a large team be interesting and fun? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 2 years ago.
I've been in the business of developing hardware and software for 19 years now. In the earlier days the projects and teams I worked on were smaller, much more effective and more fun.
The effect of the input of one single developer to the final product and to its success was evident to everybody. We had direct contact to and feedback from the customers. This was rewarding for our work and a very effective way to improve the product.
Over the years, the complexity of hardware and software has increased, and more and more people were needed to get things done on time. The downside of the trend toward bigger teams, for me, is that the contribution of a single developer to the project's success gets smaller and smaller. And because of ever-growing QA departments, we lose contact with the real world of users and customers more and more.
I always enjoyed my work and kept in touch with the latest technologies like OOP, UML, .NET, and whatever. I worked for a few years as a team leader, but I didn't like it very much because I missed developing and coding.
I'm just frustrated by the fact that my piece of the whole "thing" we're working on gets smaller and smaller, and I lose the overview of it and the contact to the ground. Please don't get me wrong, I don't want to cry for the good old days, but for me the work on ever more specialized sub-modules of a giant system simply gets more and more boring.
I'm wondering if I'm alone in feeling like this, and whether you have some advice on how to bring the fun back to my work. And sorry, no, I'm not interested in working on an open source project in my free time. Nine hours a day in front of a computer screen are enough; life is more than coding...
I also require interaction with and feedback from the customer. However, a customer can be many things. As long as I'm satisfying someone (end user, team leader, big boss, etc.) then that's enough for me. The interaction itself is the key factor.
As for the feeling of pride and ownership from having a large impact on the system, again it's a matter of focus. You are still creating something, even if it's a smaller piece of the whole.
I long ago realized that I'm a small fish in a big pond. Learning to feel happy about my place in that pond was the only solution.
IOW, it's all relative!
I guess it all depends; there is a degree of camaraderie that comes with smaller teams and a lesser chance of egos colliding. I have experienced both, and they both have their upsides and downsides. To be honest, while working on a larger team I learned so much from other programmers - you think you know a lot, but someone always knows more.
It all depends on the team and the egos of the individuals.
When working on a team with ego problems, it doesn't matter how cool the technology is or how much interaction you get with the customers. One bad apple can drain all of the fun out of working on an otherwise cool project.
On the other hand, if the team has gelled, it matters very little if the technology is out of date or the business problem is boring. Working on a back-office accounting system using vi and 10-year-old beta C++ compilers can still be invigorating when you feel like your peers are in the same fight and have your back. When you learn from others and are listened to when you have some new approach to try. When the developers control the build/test/deploy process so that it's sane and improves the lives (and sleep patterns) of the support team. When your peers (and you them) are always willing to help with an obscure language issue or work through a maddening bug. That's what makes programming fun and interesting, regardless of everything else.
You may want to consider changing companies, back to a smaller company where you'd have a broader set of responsibilities, for one idea. Also, what changes in the process would help with the points you don't like?
I do have to ask what you mean by large here. Would a team of 50 people on a project be large? Or is it more like 1,000? On one level I'm asking about scale: there are teams beyond large if one looks at all the developers who work on Microsoft's big products like Office and Windows, while at the other end of the spectrum are the one-person development teams that do it all.
I'd second Kelly's answer that it depends on the team and the egos, as another big factor. What do you consider fun? Is it finding more efficient ways to solve problems that have poor solutions? Is it conquering a Millennium puzzle? Or is seeing someone smile while using your software what makes it fun? Lots of different possible answers, and while I can make suggestions, how good or bad they are is totally for you to interpret.
I don't think you're alone in disliking how, as a company matures, the process can change: new people in various roles are added, bureaucracy increases, and agility is lost as it takes more signatures to get a change approved, or developers lose that touch with the customers of their product. There is a spectrum of ways to produce software; some places have less process in place and focus on "just make it work", while other places want the process to be much more formal and organized, with 1,001 policies for every little thing. At which end do you want to be working?
To answer the question as it's asked in the title: No!
I feel very similar and have talked to many others who think the same. In my experience, small teams are much more fun to work with, and because of that (and some other reasons) they're much more effective.
Thank you all for your interesting and valuable answers (and for correcting grammar and spelling :-)
You gave me some big points to think about:
The missing interaction with customers (whatever "customer" means)
The interaction and feedback inside the developer team
What fun means for me. I think it's more the smile on the face of the user than the use of cutting-edge technology.
How to deal with the sometimes overwhelming processes.
Last but not least to find my comfortable place in the big pond. It may be not the one where I'm staying at the moment...

What is your company's stance regarding (technological) 'innovation'? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 3 months ago.
.NET 3.5, .NET 4.0, WPF, Silverlight, ASP.NET MVC - there's really a lot of new Microsoft technology released / on the horizon to try out these days.
(The examples I gave are all Microsoft technology, but this can apply to any language or platform.) I am curious how this is handled in the company you work for. A few examples:
Do you have a CTO that determines what technology the company uses?
Are development teams free to choose what technology they use? For example: framework version, classic ASP.NET vs ASP.NET MVC, ADO.NET Entity Framework vs Linq2Sql or NHibernate? Or a mix of these?
What new technologies does the company you work for try out and why?
Does your company have dedicated resources (time) to try out WPF or whatever technology, just for research, or do you try things out in your spare time and try to introduce them to your company?
These are just examples to make my question clearer. To summarize, I'd like to know what this process looks like, who is responsible, and who makes the decisions. Does your company jump on the bandwagon, or is it reluctant to try new technologies? And are you comfortable with this situation?
At the company I work for, we still use .NET 2.0 (although we are now slowly switching to .NET 3.5), haven't seriously looked into ASP.NET MVC, haven't tried out WPF at all, etcetera. And some of us find it pretty hard to convince people to change. Is it fair to expect otherwise?
At my company, we have an architecture group that determines which technologies are used. People are welcome to read up on alternative technologies and make suggestions, but at the end of the day, it's the architecture group that makes the decisions.
While this may seem restrictive, it does ensure that all of the development groups are using the same or similar technologies, and moving from one group to the next is fairly easy. As well, by having one group do all the research, you ensure that you don't waste time by having multiple groups duplicate the research effort.
Since I work at such a small company and am typically either the only developer or the lead developer of a very small group, I can usually convince my boss to use whatever I think would be best for a given project or situation.
We stick to what we know for our major and key projects within the company.
For any new "mini" projects that come along, we take the hit on the learning curve to try and build them in the latest technologies if at all possible.
This enables us to get up to speed on these things to then comfortably and safely use these technologies in our major projects as we see fit.
Where I work there is an architect team that looks at technologies from a high level and makes recommendations to the various actual teams. A subset of the architect team takes the technologies, experiments with them, and out of that produces:
Internal 1 hour overview sessions
Week long boot camps
Whitepapers/Posters
The more important the technology, the more of that list is produced. All of that feeds into the teams, which, combined with customer requirements, actually make the decision about what each team should use.
I have a mixed answer to this question. Where I work, lower-level technical managers are usually the ones who choose a certain technology, and sometimes even the developers have the freedom to try something new. For example, I really wanted to learn the JavaScript library Prototype while working on a web site. I made the case to my boss; he was reluctant at first, because nobody else knew it or had used it before, but he gave me the go-ahead. It was great to be able to learn Prototype and take advantage of its many built-in functions. Other, bigger projects come down from higher management, and we don't really have much of a choice. Right now my company is adopting SAP, so everything is moving in that direction. I don't necessarily want to become an SAP expert, but if I want to stay here, I'll need to at least learn how to work with it.
Every company has its own pace for innovation, and it's dependent first on the comfort level of the managers, and second on whether anybody actually does the work to research and propose using new things. When the managers start getting uncomfortable, innovation slows or stops until they get comfortable again. Some innovations they will never be comfortable with.
Keeping this in mind, I'm not sure how to answer your question about whether or not it's fair to expect more innovation than is happening. Certainly it's reasonable for you to want more; equally, once you've hit your organization's speed limit on innovation, it's not likely to change and, if it does change, it will probably take a long, long time.
I've been given rather large amounts of freedom to change things by various managers in my past, and I took advantage of it. I also ran into the limits on a regular basis, and finally dealt with my frustration by starting my own company. (This may be considered a somewhat drastic measure; certainly, by doing so you reduce the time you have to research and develop the very things for which you started your company.)
These days I'm developing rather significant applications in Haskell, and I'm pleased as punch. After a year, I'm starting to get the hang of it, and I certainly have several more years ahead of me just learning what I can do with the tools I have now.
I suppose the summary of my response is: if you want to innovate more than those around you, you need to change your peer group.
I think any company that tries new technology for the sake of it, because it's bleeding edge and "innovative", is crazy. To have a formal "let's play with new technology to try it out" department is just nuts... unless they're in the business of providing technology consulting to other businesses.
For everyone else, technology is there to help the business get things done, not to help developers line their CVs with cool-sounding TLAs.
The company I'm working at right now is quite large and has a CTO who chooses "strategic platforms". But I have to say, if you can pick a technology, they're probably using it somewhere. They're too big to beat everyone down with the corporate stick, though they try. If a technology will work in the project and bring it in on time, it gets used.
We need solid and proven platforms for our stuff, and we don't need anything fancy. Therefore we might go for .NET in 5-10 years or so; hope it's ready by then. On the other hand, Java is already mature enough, so we're using it alongside C++ and some Jython scripting. These decisions are pretty much autonomous (we're a small shop).
I don't mean to mock bleeding-edge developers, but whether you need solidity or the newest features obviously depends on what you're working on. Many scientists are still happily using Fortran 77.

Resources