I would like to get some tips from peer developers about how you go about testing an application you developed, prior to release to QA. Keep in mind, this is a small-scale application (requirements are verbal), so formal testing processes won't work, especially since your boss told you to develop this app quickly and push it out the door.
Despite the time constraints, I would like to make sure it is bug-free. However, numerous times in the past I have had the app sent back to me because clicking the "Reset" button messes up the other controls' alignment, etc.
I know there are people out there who develop small-scale apps fast and send them out with minimal bugs. How can I achieve that?
I researched this post, but it didn't quite answer my question.
Testing your code before releasing to QA
Unit testing and something like Selenium for simple UI verification might be enough. You've been given impossible constraints - thorough testing and pushing it out the door conflict. You can only do the amount of testing that doesn't compromise speed.
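For instance, here is a minimal smoke test of the "Reset" scenario you describe - just a sketch, assuming NUnit plus Selenium WebDriver, and with a hypothetical URL and element IDs ("name-input", "reset", "submit"):

```csharp
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

[TestFixture]
public class ResetButtonSmokeTest
{
    private IWebDriver _driver;

    [SetUp]
    public void SetUp()
    {
        _driver = new ChromeDriver();
        _driver.Navigate().GoToUrl("http://localhost:8080/app"); // hypothetical URL
    }

    [Test]
    public void Reset_ClearsInput_AndOtherControlsStayIntact()
    {
        _driver.FindElement(By.Id("name-input")).SendKeys("some text");
        _driver.FindElement(By.Id("reset")).Click();

        // After reset: the input is cleared and the other controls are still displayed.
        Assert.That(_driver.FindElement(By.Id("name-input")).GetAttribute("value"), Is.Empty);
        Assert.That(_driver.FindElement(By.Id("submit")).Displayed, Is.True);
    }

    [TearDown]
    public void TearDown() => _driver.Quit();
}
```

A handful of tests like this, covering the first few clicks a user will make, catches exactly the kind of regression that keeps bouncing back from QA.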
Let's face it - all software has bugs, and the second law of thermodynamics applies. Even code that works perfectly on the day you ship will evolve and need changes, updates, and face lifts as time goes by.
Making an application bug-free is quite hard, even if it's tiny. The advantage is that you can test almost any aspect of such a tiny application by hand. There will, however, be features you cannot test at the end of your development process, so it is always good to test small features as you go. It also pays to have someone else test your application and (if possible) to have another developer review your code.
But then, it is probably impossible to fully test a week's work in a few hours.
The best advice is to have at least one other person test your application as it is commonly used. Nothing frustrates a user more than an error on the first few clicks; errors in a deeply hidden, hardly used feature are more easily forgiven. So test as a user, not (only) as a programmer.
One important thing is to either develop multiple personalities or get someone else to run through the app at least once -- you are too close to the problem so there could be a design flaw that makes things unusable. Which is not technically a defect, but users do not understand the difference between "working as designed but the design was wrong" and "horribly broken."
Keep in mind, this is a small-scale application (requirements are verbal), so formal testing processes won't work, especially since your boss told you to develop this app quickly and push it out the door.
Did your boss also tell you not to test?
Developing quickly without tests usually does not in fact get the thing done more quickly.
There are of course no guarantees, but you might consider doing test-driven development, which gives you the tests and may even get working code done more quickly.
See Scott Bellware's blog entry: Does Test-Driven Development Speed Up Development?
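If TDD is new to you, the rhythm is: write a failing test, then just enough code to make it pass. A minimal, hedged illustration (the DiscountCalculator class and its rule are made up for the example):

```csharp
using NUnit.Framework;

// Step 1: write the test first - it fails until the class below exists and behaves as specified.
[TestFixture]
public class DiscountCalculatorTests
{
    [Test]
    public void OrdersOver100_GetTenPercentOff()
    {
        var calc = new DiscountCalculator();
        Assert.That(calc.Apply(200m), Is.EqualTo(180m));
    }
}

// Step 2: write just enough production code to make the test pass, then refactor.
public class DiscountCalculator
{
    public decimal Apply(decimal total) =>
        total > 100m ? total * 0.9m : total;
}
```

The test suite you accumulate this way is what lets you keep moving quickly later, because every change is re-verified in seconds.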
Related
I am supposed to automate testing of an application developed in PowerBuilder, using Rational Robot as the functional testing tool. We expect at least 40-50% of the application to change with each release, and releases are scheduled at least three times a year.
The product has a different setup for each client, and test scenarios have been derived accordingly. If any change occurs, it affects both functional features and the interface, and the automation needs to account for that. I have identified a few areas that are stable (i.e., where no major changes occur) as candidates for automation. Is it feasible to proceed with automation on that basis?
Could you please suggest how to go about this?
I've seen the answer to your last question involve a consultant spending two days interviewing the development team, then three days developing a report. And, in some cases, I'd say the report was preliminary, introductory, and rushed. However, let me throw out a few ideas that may help manage expectations on your team.
Testing automation is great for checking for regressions in functionality that allegedly isn't being touched. Things like framework changes or database changes can cause untouched code to crash. For risk-averse environments (e.g. banking, pharmaceutical prescriptions), the investment in automation is well worth the effort.
However, what I've seen often is an underestimation of the effort. To really test all the functionality in a unit (let's say a window, for example), you need to review the specifications, design tests that test each functional point, plan your data (what is the data going into the window, how will you make sure this data is as expected when you start the test each time, what is the data when you've finished your tests, how will you ensure the data, including non-visible data, is correct), then script and debug all your tests. I'm not sure what professional testers say (I'm a developer by trade, but have taken a course in an automated testing tool), but if you aren't planning for the same amount of effort to be expended on developing the automation tests as you're spending on application development for the same functionality, I think you'll quickly become frustrated. Add to that the changing functionality means changing testing scripts, and automated testing can become a significant cost. (So, tell your manager that automated testing doesn't mean you push a button and things get tested. < grin > )
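To make the "plan your data" point concrete, here is a hedged sketch of what even one window's test involves - the Order/OrderRepository types are hypothetical stand-ins for the application's data layer, with an in-memory store in place of the real database:

```csharp
using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;

public class Order { public int Id; public string Status = "NEW"; public string AuditUser; }

// Hypothetical stand-in for the app's data layer.
public class OrderRepository
{
    private readonly List<Order> _orders = new List<Order>();
    public void Add(Order o) => _orders.Add(o);
    public Order Find(int id) => _orders.Single(o => o.Id == id);
    public void Approve(int id, string user)
    {
        var o = Find(id);
        o.Status = "APPROVED";
        o.AuditUser = user; // non-visible bookkeeping the window never shows
    }
}

[TestFixture]
public class ApproveOrderWindowTests
{
    private OrderRepository _repo;

    [SetUp]
    public void SeedKnownData()
    {
        // Every run must start from the same known state - the "plan your data" step.
        _repo = new OrderRepository();
        _repo.Add(new Order { Id = 1 });
    }

    [Test]
    public void Approving_SetsStatus_AndAuditTrail()
    {
        _repo.Approve(1, "tester");

        var o = _repo.Find(1);
        Assert.That(o.Status, Is.EqualTo("APPROVED"));  // what the window displays
        Assert.That(o.AuditUser, Is.EqualTo("tester")); // non-visible data must be checked too
    }
}
```

Multiply the seeding, the visible checks, and the hidden-data checks across every functional point in the window, and the effort estimate above starts to look realistic.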
That's not to say that you can't expend less effort on testing and get some results, but you get what you pay for. Having a script that opens and closes all the windows in the app provides some value, but it won't tell you that a new behaviour implemented in the framework is being overridden on window X, that a database change has screwed up the sequencing of items in a drop-down DataWindow, or that a report's completion time has gone from five seconds to five hours. However, again, don't underestimate the effort: this is a new tool, with a new language and idiosyncrasies that need to be figured out and mastered.
Automated testing can be a great investment. If the cost of failure is significant, like a badly prescribed drug causing a death, then the investment is worth it. However, for cases where there is a high turnover of functionality (like the one I think you're describing) and the consequences of a failure are less critical, you might want to compare the cost/benefit of test automation against additional manual testing resources.
Good luck,
Terry
Adding on to what Terry said: it sounds like you are both new to automation in general and to Rational Robot in particular.
One thing to keep in mind is that test automation is software development and needs to be treated as such. That means you need personnel dedicated to the automation effort who are solid programmers and have expertise in the tool being used (Robot in this case).
If your team does not have the general programming/automation skills and the specific Robot skills, you are going to need to hire that personnel or get existing staff trained in those skill sets.
I bet many of you have been in such situations in the past.
I'm currently working on a huge ASP.NET web project, an ad management system of some kind. My boss doesn't want to get more professionals to help me, but instead gives me inexperienced staff who don't even know how to program in ASP.NET and think it is an easy task. I deal with both the programming and the design.
What advice do you have for handling the boss?
What tools can help ease this task (apart from this very website)?
Thanks
I would hope good source control is something you already have on your list, but I think it's always the best thing for any big project. It keeps your code safe and has the added advantage of allowing easy review of what your team is checking in, if you feel the need for oversight.
Other than that, just make sure you give your boss a realistic understanding of the time various tasks take, and if he complains, make it clear that your team needs more training if he wants stuff done faster.
You could ask him for a raise funded by getting rid of the people who are not helpful. That might actually save him money and make your time more worthwhile.
What advice do you have for handling the boss?
First, make sure you have a good analysis document and that you have a SPOC (single point of contact) for every dependency. Make sure the people you're making this application for are integrated into the process. I suggest using something like Scrum, but certainly daily stand-up meetings.
Use a good system to follow up on everyone's work, for example TFS 2010, which also has testing capabilities built in, so your testing team can be better integrated.
Have a bug-tracking tool and source control handy. Continuous integration is also an asset.
but gives me inexperienced staff who don't even know how to program in ASP.NET
It's your boss's intention to bring his people up to a level where they are capable of programming ASP.NET applications. What better way to learn than hands-on experience alongside a dedicated professional like you?
Be aware: you're dealing with people now, not just code. They get sick and have their strengths and weaknesses, just like you. And believe me, it can be a challenge sometimes to deal with the human part of a project, especially when there's pressure due to release dates.
Perhaps you can convince him to distribute some (technical...) parts of the project via RentAcoder.com or getAfreelancer.com? It will be cheaper than getting more manpower.
Use a decent work-item/bug-tracking system. This won't turn your 'inexperienced devs' into experts, but at least you'll be able to see what progress they are making (or not making, as the case may be).
I religiously go through all of my code before I check it in: I diff the before and after code, read through it, and make sure I understand the changes. Usually I end up adding comments, amending variable names, algorithms, and code, retesting things, discussing other developers' code with them, and filing new bugs/issues - I very rarely end up checking in immediately.
I do notice, however, that a lot of developers these days seem to check their code in and treat that as enough; only when the build breaks do they go back and look at their changes. This is one of the things I definitely do not like about continuous build systems: sometimes I think they stop developers from thinking about their code enough.
What best practices are there for ensuring only quality code goes into continuous build systems?
I do notice, however, that a lot of developers these days seem to check their code in and treat that as enough; only when the build breaks do they go back and look at their changes.
In my opinion, using the continuous build for "verification" purposes is indeed a bad practice. Developers should always try not to commit bad code that can affect the team and interrupt its work (the why is so obvious that if you don't get it, just look for another job). So, if your CI engine doesn't offer pre-tested commit (like TeamCity, Team Foundation Server as I just saw, maybe Hudson one day, etc.), you should always sync/build/sync (and rebuild if necessary) locally before committing. Not doing this is lazy and disrespectful to the team.
What best practices are there for ensuring only quality code goes into continuous build systems?
Just in case, a reminder of why breaking the build is bad: bad code potentially affects the whole team and interrupts its work (sigh).
If you can get tool support (see the mentioned solutions above), get it.
If not, document the right workflow:
1. sync
2. build locally
3. sync
4. back to 2 if required
5. commit
Make this visible so that there are no excuses.
If it is an option, use something like (or similar to) Hudson's Continuous Integration Game plugin. This can make things more fun.
I've seen people use a light financial "penalty" for a broken build, but I don't really like this idea. First, we should be able to behave as responsible adults. Second, the actual result was that people started to delay commits (which in the end was the opposite of the expected benefit).
Team Foundation Server has something called pre-check-in validation.
Although I have been using Drupal since the D4 series, I only started developing for it professionally with D6, so - although I have done various site upgrades - I have never faced the task of porting my own code to a new version.
I know the Drupal community will come up with a lot of technical support about changed APIs and architectural changes (see the deadwood module for D5-D6 or even these stubs of D6-D7 how-to's for modules and themes).
However, what I am looking for with my question is more along the lines of strategic thinking; in other words, I am looking for input and advice on how to plan / implement / review the process of porting my own code, in light of what fellow developers have learned from previous experience. Some examples:
Would you advise beginning to port my modules as soon as I have time to do it, maintaining a concurrent D7 version for some time (so I am "prepared" for D-day), or would you advise waiting until the port is actually imminent and then upgrading the modules to D7 and dropping the D6 version?
Only some of my modules have full test coverage. Would you advise completing test coverage for the D6 version, so that all tests are in place to check the D7 port, or would you advise writing the tests at porting time, against the D7 version?
Did you find that being an early adopter gives you an edge in terms of new features and better APIs, or did you rather find it more convenient to delay the conversion so as to leverage the larger amount of readily available contrib modules?
Did you set quality standards / evaluation criteria for yourself, or did you just set the bar at "if it works, I'm happy"? Why? If you set certain standards or goals, what were they / what will they be? How did they help you?
Are there common pitfalls that you experienced in the past and that you think are applicable to the D6-D7 porting process?
Is porting a good moment to do some refactoring, or is that just going to make everything harder to put back together?
...
These questions are not an exhaustive list, but I hope they give an idea of what kind of information I am looking for. I would rather say: whatever you think is relevant and I did not list above gets a "plus"! :)
If I did not manage to express myself clearly enough, please post a comment with the info you think I should add in the question. Thank you in advance for your time!
PS: Yes I know... D7 is not yet out and it will take months before important contrib modules are upgraded... but it's never too early to start thinking! :)
Good questions, so let's see:
(when to start porting)
This certainly depends on the complexity of the modules to port. If there are really complex/large ones, it might be useful to start early in order to find tricky spots while not being under pressure. For smaller/standard ones, I'd try to find a bigger time slot later on where I can port many of them in a row in order to get the routine stuff memorized quickly (and benefit from the probably improved documentation).
(test coverage)
Normally I'd say that having a good test coverage before starting refactoring/porting would certainly be advisable. But given that Drupal-7 introduces a major change concerning the testing framework by moving it to core, I'd expect the need to rewrite a significant amount of tests anyway. So if there is no need to maintain the Drupal-6 versions after the migration, I'd save the time/trouble and aim for increased coverage after the porting.
(early adopter vs. wait and see)
Using Drupal since the 4.7 version, we have always waited for at least the first official release of a new major version before even thinking about porting. With Drupal 6, we waited for the views module before porting our first site, and we still have some smaller projects on Drupal-5, as they are working just fine and it would be hard to justify the extra bill for our clients as long as there are still maintenance/security fixes for it. There is just so much time in a day and there is always this backlog of bugs to fix, features to add, etc., so no use playing with unfinished technology while there are more imminent things to do that would immediately benefit our clients. Now this would certainly be different if we'd have to maintain one or more 'official' contributed modules, as offering an early port would be a good thing.
I'm a bit in a bind here - being an early adopter certainly benefits the community, as someone has to find those bugs before they can get fixed, but on the other hand, it makes little business sense to fight hour after hour with bugs others might have found/fixed if you'd just waited a bit longer. As long as I have to do this for a living, I need to watch my resources, trying to strike an acceptable balance between serving the community and benefiting from it :-/
(quality standards)
"If it works, I'm happy" just doesn't cut it, as I don't want to be happy momentarily only, but tomorrow as well. So one of my quality standards is that I need to be (somewhat) certain that I 'grokked' new concepts well enough in order to not just makes things work, but make them work like they should. Now this is hard to define more precisely, as it is obviously impossible to know if one 'got it' before 'getting it', so it boils down to a gut feeling/distinction of 'yeah, it kinda works' vs. 'yup, that looks right', and one has to accept that he will quite regularly be wrong about this.
That said, one particular point I'm looking out for is 'intervene as early as possible'. As a beginner, I often tweaked stuff 'after the fact' during the theming stage, while it would have been much easier to apply the 'fix' earlier in the processing chain by means of one hook or the other. So right now, whenever I'm about to 'adjust' something in the theme layer, I deliberately take a small time out to check if this can not be done more cleanly/compatible within a hook earlier on. As I expect Drupal-7 to add even more hooking options, this is something I will pay extra attention to, as it usually reduces conflicts and sudden 'breaking of stuff' when adding new modules.
(common pitfalls)
Well - mainly porting too early, finding out afterwards/in between that one or more needed modules were not available for the new version at all, or only in a dev/alpha/early-beta state. So I'd make sure to compile a complete list of used/needed modules first, listing their porting state, along with a quick inspection of their issue queues.
Besides that, I have so far always been very pleased with the new versions and their improvements, and I'm looking forward to Drupal-7 again.
(refactoring while porting)
One could say that porting is a rather large refactoring in itself, so there is no need to add to the complexity by restructuring non porting related stuff. On the other hand, if you already have to shred your modules to pieces anyway, why not use the opportunity to make it a major overhaul? I'd try to draw a line based on complexity - for big/complex modules, I'd do the port as straight as possible, and refactor more later on, if need be. For smaller modules, it shouldn't really matter, as the likelihood of introducing subtle bugs should be rather small.
(other stuff)
Resource needs - given some of the Drupal-7 threads, it looks like they are likely to go up, so this should be evaluated before porting smaller sites that sit on a shared/restricted hosting account.
Latest versions first - This one is rather obvious and always stressed in the upgrade guides, but nevertheless worth mentioning: upgrade core and all modules to their latest current versions before doing a major upgrade, as the upgrade code is highly likely to depend on the latest table/data structures to work correctly. Given Drupal's 'piecemeal', one-step-at-a-time update strategy, it would be very hard to implement upgrade code that detected different pre-upgrade states and acted accordingly.
I was using a CASE tool called MAGIC for a system I'm developing. I had never used this kind of tool before, and at first sight I liked it. A month later I had a lot of the application generated; I felt very productive and ... I would say ... satisfied.
In some ways I felt uncomfortable, because there was no code or anything else I was used to; but on the other hand I could speed up my development. The fact is that I eventually returned to C# because I find it more flexible for development: I can do unit testing, use CVS, I have access to more resources, and basically I have "all the control". I felt that this tool didn't give me confidence, and I thought that later in the project I would not be able to manage it due to its rigidly established rules of development. Also, a lot of things like sending emails and using my own controls had their complications; it seemed that at some point it was not going to be as easy as I initially thought and as the product initially claims. This reminds me of a very nice article called "No Silver Bullet".
This CASE tool had its advantages, but on the other hand it doesn't have resources you can consult, and the license and certification are actually very expensive. Another disappointing thing for me was its simplistic approach to development: I felt scared, first because of my inexperience with this kind of tool, and second because I thought that if I continued using it, it might turn into a complex monster that I could not manage later in the project.
I think it's good to use these kinds of solutions to speed things up, but I wonder: why aren't these programs as popular as VS.NET, J2EE, Ruby, Python, etc., if they claim to enhance productivity better than the tools I've mentioned?
We use a CASE tool at my current company for code generation and we are trying to move away from it.
The benefits that it brings - a graphical representation of the code, making components 'easier' to pick up for new developers - are outweighed by the disadvantages, in my opinion.
Those main disadvantages are:
We cannot do automatic merges, making it close to impossible for parallel development on one component.
Developers get dependent on the tool and 'forget' how to handcode.
Just a couple questions for you:
How much productivity do you gain compared to the control that you give up?
How testable and reliable is the code you create?
How well can you implement a new pattern into your design?
I can't imagine that there is a CASE tool out there where I could write a test first and then generate the code I need. I'd rather stick to ReSharper, which can easily do my mundane tasks while I retain full control of my code.
The project I'm on originally went with the Oracle Development Suite to put together a web application.
Over time (5+ years), customer requirements became more complex than originally anticipated, and the screens were not easily maintainable. So, the team informally decided to start doing custom (hand coded) screens in web PL/SQL, instead of generating them using the Oracle Development Suite CASE tools (Oracle Designer).
The Oracle Report Builder component of the Development Suite is still being used by the team, as it seems to "get the job done" in a timely fashion. In general, the developers using the Report Builder tool are not very comfortable coding.
In this case, it seems that the productivity aspect of such CASE tools is heavily dependent on customer requirements and developer skill sets/training/background.
Unfortunately, the Magic tool doesn't generate code, and it can't implement a design pattern either. I don't have control over the code because, as I stated before, there is no code to modify. The bottom line is that it can speed up productivity in some ways, but it makes it impossible to use CVS or design patterns, and I can't control all the details.
I agree with gary when he says "it seems that the productivity aspect of such CASE tools is heavily dependent on customer requirements and developer skill sets/training/background", but I also can't agree more with Klelky:
Those main disadvantages are:
1. We cannot do automatic merges, making it close to impossible for parallel development on one component.
2. Developers get dependent on the tool and 'forget' how to handcode.
Thanks