- Which functionality is most important to the project's intended purpose?
- Which functionality is most visible to the user?
- What functionality has the largest safety impact?
- Which functionality has the largest financial impact on users?
- Which aspects of the application are most important to the customer?
- Which aspects of the application can be tested early in the development cycle?
- Which parts of the code are most complex, and thus most subject to errors?
- Which parts of the application were developed in rush or panic mode?
- Which aspects of similar/related previous projects caused problems?
- Which aspects of similar/related previous projects had large maintenance expenses?
- Which parts of the requirements and design are unclear or poorly thought out?
- What do the developers think are the highest-risk aspects of the application?
- What kinds of problems would cause the worst publicity?
- What kinds of problems would cause the most customer service complaints?
- What kinds of tests could easily cover multiple functionalities?
- Which tests will have the best high-risk-coverage to time-required ratio?
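The last two questions lend themselves to a rough quantitative treatment. The sketch below is one hypothetical way to rank candidate tests by a risk-coverage-to-time ratio; the test names, 1-5 scoring scale, and effort estimates are illustrative assumptions, not a prescribed formula.

```python
# Hypothetical sketch: rank candidate tests by (risk covered x breadth) / time.
# The names, 1-5 scales, and estimates below are made-up illustrations.
from dataclasses import dataclass

@dataclass
class CandidateTest:
    name: str
    risk_covered: int      # 1 (low-risk area) .. 5 (high-risk area)
    features_covered: int  # how many functionalities one run exercises
    hours_required: float  # estimated effort to design and execute

    def priority(self) -> float:
        # Higher risk and broader coverage raise priority; effort lowers it.
        return (self.risk_covered * self.features_covered) / self.hours_required

candidates = [
    CandidateTest("checkout payment flow", risk_covered=5, features_covered=3, hours_required=6),
    CandidateTest("profile page layout",   risk_covered=2, features_covered=1, hours_required=2),
    CandidateTest("order history export",  risk_covered=3, features_covered=2, hours_required=4),
]

for test in sorted(candidates, key=lambda t: t.priority(), reverse=True):
    print(f"{test.priority():5.2f}  {test.name}")
```

With these sample numbers, the checkout flow ranks first (2.50), the export second (1.50), and the layout check last (1.00), which is exactly the ordering the last question asks for.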
Monday, December 21, 2009
Is there enough time to do thorough testing?
Sunday, November 29, 2009
What makes a good Software tester?
The qualities that, from my point of view, define good software testers are:
1. Know programming.
This can be debated, as most would assume that test teams can be staffed with people who have no technical or programming knowledge. Even though that is a common approach, it is not recommended. Here are my reasons.
Firstly, software testers are testing software. Without a technical background, they can't have real insight into the kinds of bugs an application is likely to contain or the likeliest places to look for them. As we all agree, testers never have enough time to test an application completely; they always have to compromise between available resources and thoroughness. Therefore testers must optimize scarce resources, which means focusing on where bugs are most likely to be found.
Secondly, testing methods are tool and technology intensive. Testing tools, like the products under test, are built on technical and programming knowledge. Without that knowledge, testers will be incapable of using most test techniques and will be restricted to ad-hoc approaches.
This doesn't mean that testers must have had formal programming training or have worked as programmers, but an understanding of programming logic and techniques, gained through training or experience, is the easiest way to meet the "know programming" requirement.
2. Know the software application.
This is the other side of the knowledge coin. Ideal testers should have insight into how users will interact with and exploit the application's features, and into the kinds of errors users are most likely to make. In reality, it is rarely possible for testers to know both the application under test and the programming side completely, so there has to be a compromise between knowledge of the application and knowledge of its logical architecture.
For example, when testing demand-generation software, testers should know how marketers would use the product to automate lead generation and calculate ROI; for online retail order software, testers should know how users could exploit weaknesses in its security.
3. Practice Intelligence.
Many studies have been conducted to determine what makes an ideal tester, and the common conclusion is that there is no single benchmark that predicts one. Good testers are smart people; the single most important quality of an ideal tester is raw intelligence.
4. Be Hyper-Sensitive to little things.
Good testers notice little things that others (including programmers) miss or ignore. Testers see symptoms, not just bugs. We know that a given bug can have many different symptoms, ranging from trivial to catastrophic, and that the apparent severity of a symptom is not a reliable guide to the severity of the underlying cause. Consequently, there is no such thing as a minor symptom, because a symptom isn't a bug. Only after the symptom is fully explained do we have the right to say whether the bug that caused it is minor or major. Therefore, anything at all out of the ordinary is worth pursuing.
For example, the screen flickered this time but not last time: that's a bug. The generated report is off by 0.01%: great bug. Good testers notice such little things and use them as an entrée to finding a closely related set of inputs that will cause a catastrophic failure and thereby get the programmers' attention.
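To make the 0.01% example concrete, here is a minimal hypothetical check (the report totals and tolerances are made-up values): a generous tolerance in an automated comparison quietly absorbs the discrepancy, while a strict one surfaces the symptom so it can be explained.

```python
# Hypothetical report check; the figures and tolerances are made-up values.
import math

expected_total = 125_000.00
actual_total = 125_012.50  # off by 0.01%

# A loose tolerance (0.1%) hides the symptom entirely: the check "passes".
looks_fine = math.isclose(actual_total, expected_total, rel_tol=1e-3)   # True

# A strict tolerance surfaces it as something worth pursuing.
worth_pursuing = not math.isclose(actual_total, expected_total, rel_tol=1e-6)  # True

print(looks_fine, worth_pursuing)
```

Whether the 0.01% turns out to be a harmless rounding quirk or the visible edge of a corrupted calculation is exactly the question the tester should chase down.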
5. Be Tolerant of Chaos.
People react to chaos and uncertainty in different ways. Some cave in and give up, while others try to create order out of the chaos. If a tester waits for all issues to be fully resolved before starting test design or testing, he or she won't get started until after the software has shipped. Testers have to be flexible, able to drop things when blocked and move on to something that isn't. Testers always have many unfinished tasks throughout the SDLC. In this respect, good testers differ from programmers: the testers' world is inherently more chaotic than the programmers'.
6. Practice People Skills.
Here's another area in which testers and programmers differ. You can be an effective programmer even if you are hostile and antisocial; that won't work for a tester. Testers can take a lot of abuse from outraged programmers, and a sense of humor and a thick skin will help them survive. Testers may have to be diplomatic when confronting a senior programmer with a fundamental goof. Diplomacy, tact, and a ready smile all work to the independent tester's advantage.
7. Tenacity.
An ability to reach compromise and consensus can come at the expense of tenacity. That's the other side of people skills. Being socially smart and diplomatic doesn't mean being indecisive or a limp rag that anyone can walk all over. The best testers are both: socially adept, and tenacious where it matters. Good testers can't be intimidated, even by someone pulling rank. They'll need high-level backing, of course, before they sign off on the software's quality.
8. Be Organized.
I can't imagine a scatter-brained tester. There's just too much to keep track of to trust to memory. Good testers use files, databases, and all the other tools of an organized mind. They make up checklists to keep themselves on track. They recognize that they too can make mistakes, so they double-check their findings. They have the facts and figures to support their position. When they claim there's a bug, believe it, because if the developers don't, the tester will flood them with well-organized, overwhelming evidence.
9. Be Skeptical.
That doesn't mean hostile, though. I mean skepticism in the sense that nothing is taken for granted and everything is fit to be questioned. Only tangible evidence in documents, specifications, code, and test results matters. Good testers may patiently listen to reassuring, comfortable words from the programmers ("Trust me. I know where the bugs are."), and do it with a smile, but they will ignore all such insubstantial assurances.
10. Be Self-Sufficient and Tough.
If testers need love, they don't expect to get it on the job. They can't look to their interactions with programmers as a source of ego gratification or nurturing. Their egos are gratified by finding bugs, with few misgivings about the pain (for the programmers) that such findings might engender. In this respect, they must practice very tough love.
11. Be Cunning.
Systematic test techniques such as syntax testing, along with automatic test generators, have reduced the need for cunning, but the need is still with us and undoubtedly always will be, because it will never be possible to systematize every aspect of testing. There will always be room for the offbeat kind of thinking that leads to a test case that exposes a really bad bug. But cunning can be taken to extremes, and it is certainly not a substitute for systematic test techniques. The cunning comes into play after all the automatically generated "sadistic" tests have been executed.
12. Be Technology Hungry.
Good testers hate dull, repetitive work. They'll do it for a while if they have to, but not for long. The silliest thing for a human to do, in their mind, is to pound on a keyboard when they're surrounded by computers. They have a clear notion of how error-prone manual testing is, and in order to improve the quality of their own work, they'll find ways to eliminate all such error-prone procedures (a small sketch of what such automation might look like follows at the end of this item).
I've yet to meet a tester who wasn't hungry for applicable technology. When asked why they didn't automate such and such, the answer was never "I like to do it by hand." It was always one of the following:
(1) "I didn't know that it could be automated"
(2) "I didn't know that such tools existed"
(3) or worst of all, "Management wouldn't give me the time to learn how to use the tool."
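As a minimal sketch of what replacing one such repetitive manual check might look like (the discount rule, the values, and the function under test are hypothetical, not taken from any real product), a small table-driven unittest removes both the keyboard pounding and the transcription errors that come with it:

```python
# Minimal sketch of automating a repetitive manual pricing check.
# The discount rule and expected values are hypothetical examples.
import unittest

def discounted_price(list_price: float, quantity: int) -> float:
    """Hypothetical rule: 10% off for orders of 10 or more items."""
    total = list_price * quantity
    return total * 0.9 if quantity >= 10 else total

class DiscountTests(unittest.TestCase):
    def test_discount_table(self):
        # Each row replaces one manual calculation and one manual data entry.
        cases = [
            (5.00, 1, 5.00),
            (5.00, 9, 45.00),
            (5.00, 10, 45.00),   # boundary: discount kicks in
            (19.99, 10, 179.91),
        ]
        for price, qty, expected in cases:
            with self.subTest(price=price, qty=qty):
                self.assertAlmostEqual(discounted_price(price, qty), expected, places=2)

if __name__ == "__main__":
    unittest.main()
```

Once the table exists, adding a new case is a single line, and the whole suite reruns identically every time, which is precisely the error-prone repetition good testers want to eliminate.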
13. Be Honest.
Testers are fundamentally honest and incorruptible. They'll compromise if they have to, but they'll righteously agonize over it. This fundamental honesty extends to a brutally realistic understanding of their own limitations as human beings. They accept that they are no better and no worse, and therefore no less error-prone, than their programming counterparts. So they apply the same kinds of self-assessment procedures that good programmers do: they'll do test inspections just as programmers do code inspections. The greatest possible crime in a tester's eyes is to fake test results.
Reference: onestoptesting.com
Tuesday, November 24, 2009
CAPABILITY MATURITY MODEL (CMM)
Tuesday, November 17, 2009
“Testing” and “Quality Assurance”
Thursday, November 12, 2009
Firebug as a debugging tool
QA testers and developers need tools to help them locate bugs, and one such tool is Firebug. In a very short time it has become popular among testers/QA and the JavaScript development community. Firebug is a free, open-source project that works as a plug-in for the Firefox browser. The tool was created by Joe Hewitt, a co-creator of the Firefox browser.
Firebug is a debugger in the traditional sense. It lets you pause program execution, step through code line by line, and inspect the state of variables at any time. It can also examine the entire DOM (HTML and CSS), view built-in browser details, and easily inspect anything on the page simply by clicking on it. It also includes a powerful tool for monitoring network traffic. All these goodies are packed into a compact interface.
One piece of functionality that amazes me is that it can identify every single object on the webpage, with useful information such as the size of each object and the time it took to load. It also displays the URLs and previews of the objects.
Firebug not only lets you find and fix bugs in your code, it is also a suitable tool for exploring web applications. It can help you discover how another development team's JavaScript works, and exploring others' applications can be a powerful educational experience.
Source: getfirebug.com
Tuesday, November 10, 2009
Why should entry-level programmers not be placed in a test organization?
(1) Loser Image.
Few universities offer undergraduate training in testing beyond "Be sure to test thoroughly." Entry-level people expect to get a job as a programmer, and if they're offered a job in a test group, they'll often look upon it as a failure on their part: they believe they didn't have what it takes to be a programmer in that organization. This unfortunate perception exists even in organizations that value testers highly.
(2) Credibility With Programmers.
Independent testers often have to deal with programmers far more senior than themselves. Unless they've been through a co-op program as undergraduates, all their programming experience is with academic toys: the novice often has no real idea of what programming in a professional, cooperative environment is all about. As such, they have no credibility with their programming counterparts, who can brush off their concerns with "Look, kid. You just don't understand how programming is done here, or anywhere else, for that matter." This sets the novice tester up for failure.
Monday, November 2, 2009
Top 10 strategic Technologies for 2010
Research firm Gartner has listed the top 10 strategic technologies that will help organisations transform and grow.
However, according to David Cearley, vice president and distinguished analyst at Gartner, "This does not necessarily mean adoption and investment in all of the technologies. They should determine which technologies will help and transform their individual business initiatives.”
Here are the top 10 strategic technologies for 2010:
Cloud computing
Cloud computing is a style of computing that characterizes a model in which providers deliver a variety of IT-enabled capabilities to consumers. Cloud-based services can be exploited in a variety of ways to develop an application or a solution. Using cloud resources does not eliminate the costs of IT solutions, but it does rearrange some and reduce others. In addition to consuming cloud services, enterprises will increasingly act as cloud providers and deliver application, information, or business process services to customers and business partners.
Advanced analytics
Optimization and simulation use analytical tools and models to maximize business process and decision effectiveness by examining alternative outcomes and scenarios before, during, and after process implementation and execution. This can be viewed as a third step in supporting operational business decisions. Fixed rules and prepared policies gave way to more informed decisions powered by the right information delivered at the right time, whether through customer relationship management (CRM), enterprise resource planning (ERP), or other applications.
The new step is to provide simulation, prediction, optimization and other analytics, not simply information, to empower even more decision flexibility at the time and place of every business process action. The new step looks into the future, predicting what can or will happen.
Client computing
Virtualization is bringing new ways of packaging client computing applications and capabilities. As a result, the choice of a particular PC hardware platform, and eventually the OS platform, becomes less critical. Enterprises should proactively build a five to eight year strategic client computing road-map outlining an approach to device standards, ownership and support; operating system and application selection, deployment and update; and management and security plans to manage diversity.
IT for green
IT can enable many green initiatives. The use of IT, particularly among the white collar staff, can greatly enhance an enterprise’s green credentials.
Common green initiatives include the use of e-documents, reducing travel and teleworking. IT can also provide the analytic tools that others in the enterprise may use to reduce energy consumption in the transportation of goods or other carbon management activities.
Reshaping the data center
In the past, design principles for data centers were simple: figure out what you have, estimate growth for 15 to 20 years, then build to suit. Newly built data centers often opened with huge areas of white floor space, fully powered and backed by an uninterruptible power supply (UPS), water- and air-cooled, and mostly empty.
However, costs are actually lower if enterprises adopt a pod-based approach to data center construction and expansion. If 9,000 square feet is expected to be needed during the life of a data center, then design the site to support it, but only build what’s needed for five to seven years. Cutting operating expenses, which are a nontrivial part of the overall IT spend for most clients, frees up money to apply to other projects or investments either in IT or in the business itself.
Flash memory
Flash memory is not new, but it is moving up to a new tier in the storage echelon. Flash memory is a semiconductor memory device, familiar from its use in USB memory sticks and digital camera cards. It is much faster than rotating disk but considerably more expensive; however, this differential is shrinking. At the current rate of price decline, the technology will enjoy more than a 100 percent compound annual growth rate during the next few years and become strategic in many IT areas, including consumer devices, entertainment equipment, and other embedded IT systems.
In addition, it offers a new layer of the storage hierarchy in servers and client computers that has key advantages including space, heat, performance and ruggedness.
Mobile applications
By year-end 2010, 1.2 billion people will carry handsets capable of rich mobile commerce, providing a fertile environment for the convergence of mobility and the Web. There are already many thousands of applications for platforms such as the Apple iPhone, in spite of the limited market and the need for unique coding. It may take a newer version that is designed to flexibly operate on both full PC and miniature systems, but if the operating-system interface and processor architecture were identical, that enabling factor would create a huge turn upwards in mobile application availability.
Courtesy: OnestopTesting.com
Wednesday, October 28, 2009
Test Data Management
A well-documented Test Data Management (TDM) strategy can help increase efficiency and provide greater value. Test data can then be made available within the organization in a secure, organized, consistent, and controlled manner.
By documenting the right test data and a test data management strategy, QA teams get one step closer to reliable testing.
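As one hypothetical illustration of "secure, organized, consistent and controlled," test data sets could be described in a small version-controlled catalog and loaded through a single helper instead of being copied ad hoc between testers. The file names, fields, and the "masked" flag below are assumptions made for the sake of the sketch, not part of any standard TDM tool.

```python
# Hypothetical sketch of a tiny test-data catalog; the file names, fields,
# and the "masked" flag are illustrative assumptions, not a standard.
import json
from pathlib import Path

CATALOG = Path("testdata/catalog.json")   # version-controlled index of data sets

def load_dataset(name: str) -> dict:
    """Return a named test data set, refusing data that has not been masked."""
    catalog = json.loads(CATALOG.read_text())
    entry = catalog[name]                  # e.g. {"file": "orders_small.json", "masked": true}
    if not entry.get("masked", False):
        raise ValueError(f"Data set '{name}' contains unmasked data; not allowed in test runs")
    return json.loads((CATALOG.parent / entry["file"]).read_text())

# Usage: every tester loads the same controlled data in the same way, e.g.
#   orders = load_dataset("orders_small")
```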
Release Life Cycle
Microsoft Corporation often uses the term release candidate. During the 1990s, Apple Inc. used the term "golden master" for its release candidates, and the final golden master was the general availability release. Other terms include gamma (and occasionally also delta, and perhaps even more Greek letters) for versions that are substantially complete, but still under test, and omega for final testing of versions that are believed to be bug-free, and may go into production at any time. (Gamma, delta, and omega are, respectively, the third, fourth, and last letters of the Greek alphabet.) Some users disparagingly refer to release candidates and even final "point oh" releases as "gamma test" software, suggesting that the developer has chosen to use its customers to test software that is not truly ready for general release. Often, beta testers, if privately selected, will be billed for using the release candidate as though it were a finished product.
A release is called code complete when the development team agrees that no entirely new source code will be added to this release. There may still be source code changes to fix defects. There may still be changes to documentation and data files, and to the code for test cases or utilities. New code may be added in a future release.