I am fairly new to product development and am working on a product. The problem I have noticed is that people draw diagrams and charts showing different modules and layers.
But since I am working alone (I am my own team), I am confused about how these interactions play out in actual development, and I am wondering whether developing a product in modules is realistic at all.
Maybe I am not a great programmer, but I see no boundaries when data starts to travel from frontend to backend.
I've written a lot of layered applications. Layering can be a useful pattern, but it can also lead you astray, and thinking in modules is rather more useful.
One problem with layers is that they're often used as a justification for repackaging data as it flows through the system, even though the data was packaged perfectly well when it entered the system, such as from a database.
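To make that concrete, here is a small sketch of the repackaging smell; every type and field name here is hypothetical:

```
// The same fact gets copied into a new shape at every layer, adding code
// but no information (all names are hypothetical).
const row       = { id: 1, name: 'Ada' };                        // database result
const entity    = { userId: row.id, userName: row.name };        // domain layer
const dto       = { id: entity.userId, name: entity.userName };  // service layer
const viewModel = { id: dto.id, displayName: dto.name };         // UI layer
// Four shapes, one fact: often a single well-defined type could travel
// the whole way through.
```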
Another issue is that layering by its very nature stacks modules on top of one another, which is just too naive a structure for most systems.
I suggest you get a good book on design patterns and spend some time studying and understanding the trade-offs with different architectural approaches. Developing modular applications is not easy but it's worth taking the time to do it well.
I'm working on a project where we soon have to decide whether to invest in our current technology stack (LAMP-based) to make it more flexible and better support our time to market, or whether to change to a different stack in the hope that it would make our development faster, more efficient, and possibly more fun.
One framework we're looking at is Meteor. So I'm wondering: does anyone have real-life experience with starting or shifting a medium-sized project to Meteor (3 developers, a couple of hundred active users, mostly short-lived small pieces of user-generated content that are viewed by all users and need to be updated instantly)? Do you have metrics on productivity, code quality, or code efficiency that you could share? Or just an overall feeling for how it went? How happy are you with Meteor after working with it for more than just a week or two? How is maintainability over a longer period? How well does it scale up?
Would appreciate any insight!
I'll try to be as fact-based as possible to keep this objective:
I switched from Django to Meteor, and from PostgreSQL to MongoDB.
Switching stacks has a huge cost: a new language, syntax, patterns, and maybe even IDE. There are online courses to take, a solid node.js foundation to build, and curiosity about io.js, ES6, and Mongo 3.0 to satisfy. You'll also need a refresher on how JavaScript treats Dates and numbers, and on how to query Mongo from JavaScript.
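A couple of the quirks that refresher needs to cover, for instance (the posts collection below is hypothetical):

```
// All JavaScript numbers are IEEE-754 doubles:
console.log(0.1 + 0.2);             // 0.30000000000000004, not 0.3

// Date months are zero-indexed:
console.log(new Date(2015, 0, 15)); // January 15th, 2015

// And Mongo queries are plain JavaScript objects (shell/minimongo style):
// db.posts.find({ createdAt: { $gte: new Date(2015, 0, 1) } })
```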
On top of that, you'll want your developers to peek under the hood at the Meteor magic so they understand fibers, reactivity, DDP, and minimongo. All of this will cost each developer at LEAST 160 hours, yet it is necessary to become a competent developer. Skip these steps, and you've got a team of monkeys pulling levers.
To answer your questions:
Productivity? It will hit rock bottom, along with code quality. Then both slowly climb, and possibly exceed the previous mark (IF it's something the developers enjoy). This is because client and server share the same language and sit just a file apart. Debugging messages and stack traces are pretty good, and hot code reloads, although still not perfect, work well.
Code quality has absolutely nothing to do with the framework.
Code efficiency is good because reactivity is handled behind the scenes most of the time, and fibers make it possible to write server code in a synchronous fashion. This increases code readability.
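For example, a Meteor server method reads top to bottom with no callbacks; a minimal sketch, assuming a hypothetical Posts collection and post limit:

```
import { Meteor } from 'meteor/meteor';
import { Mongo } from 'meteor/mongo';

const Posts = new Mongo.Collection('posts');

Meteor.methods({
  'posts.insert'(text) {
    // Fibers make these calls look synchronous without blocking the
    // event loop while they wait on Mongo.
    const count = Posts.find({ owner: this.userId }).count();
    if (count >= 100) {
      throw new Meteor.Error('limit-reached', 'Too many posts');
    }
    return Posts.insert({ text, owner: this.userId, createdAt: new Date() });
  },
});
```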
Maintainability is another word for code quality.
Scalability is more a question about node.js than about Meteor, but node will work for the VAST majority of projects. An honest critique of node's shortcomings is here: https://medium.com/code-adventures/farewell-node-js-4ba9e7f3e52b
I have been tasked with making several Flex-driven visualizations of immense Excel spreadsheets. The client wants the finished product to be as self-contained as possible. The problem is that Flex doesn't offer as much computing horsepower as is needed. Is there an easy (or not so easy) way to accomplish this? I am just trolling for pointers. Thanks in advance!
If you don't mind doing it the hard way, I have two options for you:
Pixel Bender: a tool originally designed for creating complex, CPU-intensive graphic filters and offloading those calculations to the hardware, but it can be used for number crunching too. Here's an article that covers that topic: Using Pixel Bender with Flash Builder 4 as a number crunching engine. The language may not be like anything you're used to; I had a hard time wrapping my head around it.
Alchemy: a tool that compiles C or C++ code so it can be executed in the Flash VM. I am not certain how much performance can be gained for simple number crunching, but if you know C, this might be a path worth investigating.
The first thing that comes to mind is building a web service to do the hard work. But then it would not be a self-contained product.
Apart from that, take a look at Apparat - http://code.google.com/p/apparat - which allows various optimizations, access to low-level AVM2 code - http://code.google.com/p/apparat/wiki/AsmExpansion - and more. That said, I do not think the AS3/Flex compiler is that bad at math. Try writing a sample math function and timing it in different languages.
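Something like this micro-benchmark sketch, ported to each language under test, gives rough numbers; the loop body is just an arbitrary stand-in for your real math:

```
// Time the same math-heavy loop in each candidate language/VM and compare.
function crunch(n) {
  let acc = 0;
  for (let i = 1; i <= n; i++) {
    acc += Math.sqrt(i) * Math.sin(i);
  }
  return acc;
}

const start = Date.now();
crunch(10000000);
console.log('took ' + (Date.now() - start) + ' ms');
```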
Our company builds several (Java) applications that loosely communicate with each other via web services, remote EJB, and occasionally via shared data in a DB.
Each of those applications is built and maintained by its own team: one or two people for the smaller apps, and almost 10 for the largest one. The total number of developers is approximately 25 FTE.
One problem we're facing is that there are some big egos among the teams. Historically, the team behind the largest app has set up code conventions and general guidelines. For instance, our IDE is NetBeans, we use Hg for SCM, we build with Ant, and we emphasize first using as much of Java EE as possible; if that doesn't suffice, use an external library, and only resort to writing something yourself as a last resort. Writing yet another logging framework, ORM, CMS, or web framework is pretty much not allowed under these guidelines.
Now some of the smaller teams go against this: they are starting to use Eclipse, Git, and Maven, and take the approach of writing as much as possible themselves, only looking at existing solutions if time is short or they 'just don't feel like writing it themselves'. Where the main team uses log4j, one of the smaller teams has just started writing its own logging framework.
There have been talks going on about all teams adhering to the same standards, but these have been 'troublesome' at best.
Now the big question I'd like to ask: does it actually matter that different teams do things differently? As long as each separate app implements its requirements and provides the agreed-upon interfaces, should we really force everyone to use Hg, Ant, the same code conventions, etc.?
There is not much harm in letting each team use the technologies that work best for them. In fact, if you restrict teams to the "standard" way of doing things, you'll stifle innovation and hurt morale.
But you don't want things to diverge too much either. There are a few things you can do to keep libraries and tools from getting out of hand. The first is to rotate members through the teams regularly to cross-pollinate ideas. That way the best ideas spread through the teams.
You can also enforce a "rule of 3", which simply says it is OK to introduce a second library, tool, logging approach, or whatever, but as soon as you want to introduce a third, you have to remove one of the first two. In other words, it is OK to have two competing logging frameworks, but if there are three, choose one to kill.
A third idea is to have developers give regular presentations to the entire developer group demonstrating the pros and cons of each idea or approach. Encourage lots of discussion and constructive criticism. The purpose is to try many things and let everyone find the best way as a group.
Finally, the book Management 3.0 covers in much more depth how teams make decisions. Well worth the read.
I have been tasked with automating some of the paper forms in HR. This might turn into "automate all forms" eventually, so I want to approach this in a way which will be best for the long term and will be a good framework as this project grows.
The first things that came to mind were:
-InfoPath/SharePoint (We don't use SharePoint now, and it won't be an option for the next two years.)
-Workflow Foundation (I've looked into this and it does not seem too attractive or appropriate.)
Options I'm considering at this point:
-Custom ASP.NET (VB.NET) & SQL Server, which is what my team mostly writes its apps with.
-Leveraging InfoPath for creating the forms electronically. I'm wondering if there is a good approach to integrating this with a custom-built ASP.NET app.
-Creating the app as an MVC web app.
My questions are:
-Are there other options I might want to consider?
-Are there any starter kits or VB.NET-based open source projects out there which would be a starting point or could serve as a good reference? Here I'm mostly concerned with the workflow processing.
-Any comments from those who have gone down this path?
This is going to sound really dumb, but the biggest lesson from my many years of helping companies automate paper-based form processes is: understand the process first. You will most likely find that no single person understands the whole thing. You will need to role-play the many paths through the process to get your head around it. And once you present your findings, everyone will be shocked, because they had no idea it was that complex. Use that as an opportunity to streamline.
Automating a broken process only makes it screw up faster and tell more people.
As far as tools go, my experience dates me, but try to go with something that has these properties:
EASY to change. You WILL be changing it, so don't hard-code anything.
Revision control if possible - changes to a process may or may not need to affect documents already in route (see the sketch after this list).
Visual workflow editing. Everyone wants this but they'll all ask you to drive it. Still, nice tools.
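On the revision-control point, one pattern that has worked for me is to keep each process definition as versioned data and pin every in-flight document to the version it started under; a minimal sketch, with all state and action names hypothetical:

```
// Each workflow version is pure data; changing the process means adding a
// new version rather than mutating an old one, so in-route documents are safe.
const workflowVersions = {
  1: { draft: { submit: 'hrReview' },
       hrReview: { approve: 'done', reject: 'draft' } },
  2: { draft: { submit: 'managerApproval' },
       managerApproval: { approve: 'hrReview', reject: 'draft' },
       hrReview: { approve: 'done', reject: 'draft' } },
};

function advance(doc, action) {
  const transitions = workflowVersions[doc.workflowVersion]; // pinned at creation
  const next = (transitions[doc.state] || {})[action];
  if (!next) {
    throw new Error('No "' + action + '" from "' + doc.state +
                    '" in workflow v' + doc.workflowVersion);
  }
  return Object.assign({}, doc, { state: next });
}

// A document created under v1 keeps following v1 rules even after v2 ships:
let doc = { id: 42, workflowVersion: 1, state: 'draft' };
doc = advance(doc, 'submit');  // -> hrReview (v1 has no manager-approval step)
doc = advance(doc, 'approve'); // -> done
```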
Not sure if this helps or not - but 80% of success in automating processes is not technology.
This is slightly off topic, but related: defect-tracking systems generally have workflow engines/state. (In fact, I think Joel or some other Fog Creek employee posted something about using FogBugz to manage the initial emails and résumé process.)
I second the other advice about modeling the workflow before doing any coding or making any technology choices. You will also want this to be flexible.
As n8owl reminded us, automating a mess yields an automated mess - which is not an improvement. Many paper-form systems have evolved over decades and can be quite redundant and unruly. Some may view "messing with the forms" as a violation of their personal fiefdoms, so watch your back ;-)
model the workflow in terms of the forms used by whom in what roles for what purposes; this documents the current process as a baseline. Get estimates of how long each step takes, both in terms of man-hours and calendar time
understand the workflow in terms of the information gathered, generated, and transmitted
consolidate the information on the forms into a new set of forms for minimal workflow
be prepared to be told "This is the way we've always done it and we're not going to change", and to gently (a) validate their feelings, (b) explain how less work is more efficient, and (c) show concrete benefits [vs. the baseline from step 1]
soft-code when possible; use processing rules when possible; web services and html forms (esp. w/jQuery) will go a long way if you have an intranet - a small sketch follows this list
beware of canned packages (including sharepoint) unless you are absolutely certain they encompass your organization's current and future needs
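For the web-services-plus-html-forms route, the per-step plumbing can stay very small; a sketch using jQuery's standard $.post, with a hypothetical form id and endpoint:

```
// Each form step is a plain HTML form; jQuery posts it to an intranet
// web service that applies the processing rules and routes it onward.
$('#leave-request-form').on('submit', function (e) {
  e.preventDefault();
  $.post('/api/forms/leave-request', $(this).serialize())
    .done(function () { alert('Submitted - routed to the next approver.'); })
    .fail(function (xhr) { alert('Submit failed: ' + xhr.status); });
});
```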
good luck!
--S
I detect here a general tone of caution with regard to a workflow-based approach, and I must agree. Be aware of the caveats of most workflow technologies, which sacrifice usability for flexibility.
I was trying to find some online exercises for practicing scaling techniques (memcached, SQL optimization, sharding databases), but I could only find descriptions of these techniques, not any project on which to try them.
This link with slides on scaling techniques is an interesting one, as it sums up some of the tools for achieving scalability quite well.
Is there a Project Euler-style site for this kind of activity? Or at least some exercises (such as a downloadable ASP.NET/PHP site with obvious slowdowns, concurrency issues, and subtle bugs) for people to try and learn how to tackle these issues?
I find that the site High Scalability has some nice insights.
It might be interesting to hack at Wordpress. Their caching plugins take care of a lot of scaling issues but it would be cool to write your own plugin or hack at the source to cut down on SQL queries or to cache static pages. If you come up with something, make sure to let the rest of the community know!
George's slides are definitely a good basis to work from. Note that he is not talking about a specific technique or technology; rather he's discussing more general architectural and design decisions that will help your application scale as a whole.
I personally think this sort of high-level thinking would be much more valuable than individual optimisation techniques. Perhaps you could take a well known web application and hack it until it scales well across multiple machines? A cluster of lots of cheap, low-power EC2 machines could be really useful here. Getting an existing or new application to run properly across a number of machines would be a fantastic exercise.
Counter-intuitively, rather than getting as much as possible to run on a single machine, I'd say it would be much more educational to get the same application running on several machines.
Once you have that, it makes sense to move onto more specific improvements like a separate static content tier, memcached, DB sharding, batch operations and so on.
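As a concrete first exercise, the cache-aside pattern that memcached is usually used for fits in a few lines; here is a sketch with an in-memory Map standing in for the memcached client, and loadUserFromDb as a hypothetical slow query:

```
// Cache-aside: check the cache first, fall back to the database on a miss,
// and populate the cache on the way back out.
const cache = new Map();
const TTL_MS = 60 * 1000; // entries expire after one minute

async function getUser(id, loadUserFromDb) {
  const hit = cache.get(id);
  if (hit && hit.expiresAt > Date.now()) {
    return hit.value;                        // cache hit: no DB round-trip
  }
  const value = await loadUserFromDb(id);    // cache miss: query the database
  cache.set(id, { value, expiresAt: Date.now() + TTL_MS });
  return value;
}
```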
In terms of specific projects to work on, how about cloning Twitter, Flickr, or The Pirate Bay? They've all had performance and scaling challenges in the past.