Thursday, August 26, 2010

Why am I a tester? Wrong Reasons

We have seen many discussions on why people came into software testing and why they still love to work as testers. People have interesting reasons: for some it's the creativity, for some the challenges of automation, for others the connection to systems thinking, domain expertise and so on. All of these are good reasons to be in the testing field, and if you are in testing for similar reasons, you are probably enjoying your work and maybe even exciting the people around you about testing.
Unfortunately, over the years I have also seen many people staying on as testers for the wrong reasons. The following list is a collection of wrong reasons and motivations for being in the software testing field and working as a tester. If you are in testing because of one of these reasons, you probably need to find a good mentor, understand testing properly or change your field.


One strange but common reason first: as a tester you can always blame best practices and the process. Many testers memorize two responses for every failed project - 'best practices were not followed' and 'the process was not followed correctly'. In every failed project it is always possible to find at least one process which did not work and one best practice which was not followed, so you can give the same two responses to most failures in a testing project.

Good testers do not believe in best practices; they find the best way to work in the given context. Even if the project they are working on fails, they retrospect and find opportunities to improve next time rather than opening a process book.

I hope you are not working as a tester for that reason, or for any of the reasons below - and if you are, this list will probably get you thinking.

1. Software Testers can be Lazy

Software testing could be the perfect job for lazy people. There are so many things you need before you can start testing the product. You can happily wait for documentation, changed documentation, final documentation and documents describing some more changes after the final documentation. You can also wait for the software to be delivered to the test environment, for the defect tracking system to be set up and configured properly, for the process to be finalized, for a testable system - and the list can just go on. As a tester, you can find thousands of things which can prevent you from testing the system.
 
Some of these things are important, really important. What you need to do is think about them and ask: are they really blocking you, or have they become an excuse? Good testers will always find ways to start testing as soon as they can. They'll work with analysts, developers, infrastructure teams and others to make sure they can test the system. Even when they are not testing, they will be thinking about data, users, automation or anything else that will make them more efficient, rather than just waiting.

2. Software Testers can preserve grey cells

Some testers feel that grey cells are precious and make every effort to ensure that the mind is never exercised or challenged. Following test scripts manually is a perfect way to preserve these precious grey cells. I remember someone telling me, 'I'll happily follow written scripts and not apply my brain as long as I am getting paid.' Surely you are getting paid, but are you enjoying the process? Does it not become boring? Do you feel good and proud about the work you are doing? And even if you are following the script, are you not observing things which are not part of the script?
 
For real testers, the constant exercise and challenge to the mind is one of the main reasons to be in the testing field. They continuously ask questions, make observations, take notes and talk to people about the product. They don't preserve their minds; they enhance them by continuously exercising and challenging them.

3. Software Testers can blame anyone

As a tester, there is always an opportunity to blame someone for your failures. You can blame BAs for changing requirements, infrastructure for not providing a test environment, developers for not writing unit tests, introducing regression defects and delivering late, and management for not giving enough importance to testing and cutting testing time.
 
Now, I am not saying that these are not problems. Some of them could be real problems, but good testers will highlight them and find ways to work rather than pointing fingers and finding ways to avoid work.

4. Software Testers can fake

It is very easy for testers to get away without actually working on anything. In most cases, management does not have the right appreciation or tools to check your progress as a tester. It is extremely easy to say that you have tested a feature without testing it. In many organizations progress is checked with yes/no questions along with some numbers, and it is extremely difficult for anyone to make sense of these answers.
 
Good testers, on the other hand, make sure that progress is traceable. They do not answer in yes/no but explain what has been tested, how it was tested and what has not been tested. They provide information rather than data and maintain the integrity of their profession.

5. Software Testers do not need to learn

Developers need to learn new things constantly: new languages, frameworks, platforms, algorithms and so on. Testing, on the other hand, looks relatively static. You can argue that you do not need to be technical, so you won't learn new technologies. The definitions and techniques of testing are very old and hardly used, so you do not need to learn those either. You can also leave domain knowledge to the business analysts and not learn about the domain - so, effectively, you do not need to learn anything to survive in the testing field.
 
Testers who love their job, though, have an appreciation for technologies and platforms. Even if they are not technical, they will find out how the program was built and what made the team choose a particular platform or language. Similarly, they will try to understand the domain to find out how a typical user would use the system. They will make themselves familiar with many tools to increase their efficiency. Constantly learning new things is one of their main motivations to work as a tester.

6. Software Testers can become experts easily

Becoming an expert is extremely easy in testing. There are so many certifications which claim to make you an expert, probably in a month. You can always claim supremacy by flaunting the various certifications you have acquired by memorizing definitions. In many organizations, management will promote you for becoming an expert (by acquiring certifications) without your actually doing the work.
 
Good testers usually do not consider themselves experts. They do not rely on certification agencies to certify their excellence. They are simply good learners who pick up a few new things every day and are on the journey to becoming experts. They are probably involved in, or at least appreciate, movements like weekend testing rather than the syllabus of a certification program.

7. Software Testers can confuse people

If you love to confuse people, you can do so very easily as a tester. There are different definitions and interpretations of almost every term we use in testing. You can find completely different definitions for ad-hoc testing, exploratory testing, the V-model and so on, and probably most of them are wrong anyway. You can have endless discussions on why priority and severity are different and why both are needed. You can argue endlessly about the defect life cycle and the best processes for version control, environment control and so on. In most cases, irrespective of what people are doing and how they are doing it, a tester can find at least one thing which should be changed in order to improve quality.
 
Some testers prefer to work, though, and it doesn't matter what names you give to the techniques they are following. Their focus is on improving quality - not by talking about it but by working for it. They do suggest change, but by showing the real value of the change rather than by pointing out whether a specific process is being followed.

8. Software Testers get paid without adding real value

As a tester, it is extremely easy to do whatever you are instructed to do. Now, there is nothing wrong with that, but often the person instructing you to perform testing does not understand testing. If you do not think hard and continuously, it is very easy to test as instructed without testing like a good tester. In a situation like this, you are testing only as well as the person who is instructing you or who has written the scripts for you.
 
Real testers, even under instruction, will not stop thinking about the problem and the ways in which the product can be tested. There will always be questions which need investigation, new ideas which the analysts have not covered or a missing data set you need to test. Real testers find ways to add value in projects of every size and at every stage.

9. Software Testers can play with numbers

Playing with numbers could be another favourite activity for many testers: the number of test cases, the number of automated test cases, the number of defects, the number of defects in each status, the developer-to-tester ratio, unit test coverage - the list can go on and on. It is always possible to answer most testing questions with numbers without giving any additional information. 'Testing is 50% complete' or '70% of the test cases are automated' can mean different things to different people. Numbers don't give any useful information by themselves.
 
Good testers, on the other hand, give sensible and useful information rather than random numbers. I am not saying that good testers do not prepare metrics - they do - but they also explain what those numbers are telling us.

Re-published from a reliable source.

Thursday, August 12, 2010

What are Web-Enabled Application Measurement Tests?

  1. Mean time between failures, in seconds
  2. Amount of time in seconds for each user session, sometimes known as a transaction
  3. Application availability and peak usage periods
  4. Which media elements are used most (for example, HTML vs. Flash, JavaScript vs. HTML forms, Real vs. Media Player vs. QuickTime)
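
As a rough illustration of the first measurement, here is a minimal Python sketch that computes mean time between failures from a handful of failure timestamps. The timestamps and their log source are assumptions made up for the example.

```python
from datetime import datetime

# Hypothetical failure timestamps, as they might be pulled from server logs.
failures = [
    datetime(2010, 8, 1, 9, 15, 0),
    datetime(2010, 8, 3, 14, 40, 0),
    datetime(2010, 8, 7, 2, 5, 0),
]

# Mean time between failures, expressed in seconds (item 1 above).
gaps = [(later - earlier).total_seconds()
        for earlier, later in zip(failures, failures[1:])]
mtbf_seconds = sum(gaps) / len(gaps)
print("MTBF: %.0f seconds" % mtbf_seconds)
```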

Wednesday, May 19, 2010

What can testers do if there is not enough information to create a requirements traceability matrix?

If testers are not getting requirements in a form that is usable for creating a requirements traceability matrix, and assuming they have the time, I recommend taking the knowledge of the system that they have and writing their own test requirements matrix. This kind of matrix should be used internally for traceability. This method has worked for me in the past.
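
If it helps to picture what 'writing your own matrix' can look like, here is a minimal sketch in Python of an internal traceability matrix kept as a plain dictionary. The requirement IDs and test names are invented for illustration only.

```python
# A minimal internal requirements traceability matrix; requirement IDs
# and test names below are purely illustrative.
traceability = {
    "REQ-001 User can log in": ["test_login_valid", "test_login_bad_password"],
    "REQ-002 Password reset email": ["test_reset_email_sent"],
    "REQ-003 Session timeout": [],  # known gap - no test traced yet
}

# Flag requirements that have no tests traced to them.
for requirement, tests in traceability.items():
    status = "%d test(s)" % len(tests) if tests else "NOT COVERED"
    print("%s: %s" % (requirement, status))
```

A spreadsheet works just as well; the point is that the mapping exists and coverage gaps become visible.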


The best outcome was that, as word of these requirements matrices got out to other groups in the company, they began to see the value in them and wanted to see what the approach could do for them. That made our testing efforts more efficient and resulted in wider adoption of requirements.

Sunday, May 9, 2010

What would it take to make software testing effective?

I think the following qualities are essential to make software testing effective:
  •  Successful testing efforts are determined by the quality of the testing process.
  •  Deploy early-life-cycle testing techniques to prevent defects from migrating downstream.
  •  To improve the testing process, a real person must own each responsibility.
  •  Testing is a professional discipline. It requires continuous training and skills improvement.

Sunday, April 11, 2010

What are requirements stability metrics?

Metrics can be very helpful, but only if the value they provide is worth the time spent tracking them. And keep in mind, metrics are designed to be used for analysis and improvement, not necessarily for managing current projects 'on the fly'.

So, while I see the need to track the kind of information you are looking for, I think it would be difficult to get useful metrics if the requirements are not stable.

More importantly, what is it you are trying to analyze about the requirements? For what purpose?
If you want to be able to analyze the requirements better and decide how to go about testing them, try to implement some sort of weighting practice. Have the business side prioritize requirements based on how critical they are to a particular (or the first) release of the application. Have the technical people estimate the amount of effort required to implement each one, and it should be easier to plan development and testing based on that sort of matrix.
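
Here is a tiny sketch of that weighting idea, assuming made-up requirement IDs, a 1-5 business criticality score and an effort estimate in days; the value-for-effort scoring rule is just one possible choice, not a prescription.

```python
# Illustrative weighting of requirements: criticality comes from the business
# side, effort from the technical people. All values below are invented.
requirements = [
    {"id": "REQ-001", "criticality": 5, "effort_days": 3},
    {"id": "REQ-002", "criticality": 2, "effort_days": 8},
    {"id": "REQ-003", "criticality": 4, "effort_days": 2},
]

# Simple value-for-effort score; higher scores get planned and tested first.
for req in requirements:
    req["score"] = req["criticality"] / float(req["effort_days"])

for req in sorted(requirements, key=lambda r: r["score"], reverse=True):
    print("%(id)s: criticality %(criticality)d, effort %(effort_days)d days, "
          "score %(score).2f" % req)
```
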
If you're concerned about change, consider using a requirements management tool - there are commercial applications for this - or do something as simple as keeping a high-level spreadsheet of the current requirements and tracking changes to each one. Try to implement some level of change management or change control through your defect tracking system - defects can be written against documentation as well, and can track change requests as easily as actual errors.

Monday, March 22, 2010

What is the purpose of a good test case?


Test cases are the bread and butter of software testers. Test cases, whether manual or automated, serve many objectives. Writing effective test cases is an art acquired through skill and experience. What makes a test case effective depends on the type of project you are handling and the depth of the functional requirements to be captured.

Test cases fall into many categories, and each category provides value to the organization in its own way. For a test case to provide real value, it should be essentially functional, reduce risk, focus the testing effort and provide some measurable value to the organization.

There are various categories of effective test cases. The following are some of the categories most test cases would fall under:
  • To verify expected results versus actual results during the testing cycle.
  • To verify that the application under test conforms to the standards and guidelines requested by the stakeholders.
  • To increase functional test matrix coverage.
  • To increase data flow coverage.
  • To increase logical flow coverage.
  • To verify and execute end-user scenarios.
  • To report any errors or defects to developers before they impact the end users.
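
To make the first category concrete, here is a minimal expected-versus-actual test case written with Python's unittest module. The apply_discount function is a made-up stand-in for real application code.

```python
import unittest

def apply_discount(price, percent):
    """Toy function under test - a stand-in for real application code."""
    return round(price * (1 - percent / 100.0), 2)

class DiscountTestCase(unittest.TestCase):
    def test_ten_percent_discount(self):
        expected = 90.00                     # expected result from the requirement
        actual = apply_discount(100.00, 10)  # actual result from the code
        self.assertEqual(expected, actual)

if __name__ == "__main__":
    unittest.main()
```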

Friday, March 12, 2010

Have you heard about Monkey Testing?

Monkey Testing - interesting terminology, but valuable testing in the software industry.
Monkey testing is nothing but random, on-the-fly testing of a software application or system, without working knowledge of the application. The goal of monkey testing is to identify any show stoppers or critical defects in the application under test.

Monkey testing is best performed with automated testing tools. These tools are called 'monkeys' when they work randomly, hunting for crashes or breaks in the software.

Monkey testing is an inexpensive way to perform some basic random testing, which will typically find only a few bugs; it can, however, become expensive when used for load or performance testing, where it can turn up a larger number of bugs.

Monkey testing can be valuable for identifying rather embarrassing show stoppers, but it should not be the only testing performed.
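
For a sense of what a 'monkey' can look like in practice, here is a minimal sketch in Python: it throws random, meaningless input at a function with no knowledge of what that function expects, and records only unexpected crashes. The parse_command function is invented for the example and hides a deliberate bug (it assumes the input is never empty).

```python
import random
import string

def parse_command(text):
    """Stand-in for application code with a deliberate bug: it assumes
    the input is never empty."""
    if text[0] == "!":        # IndexError on empty input - the show stopper
        return "special"
    return "normal"

def monkey_test(runs=1000):
    """Feed random junk to the code and collect unexpected crashes."""
    crashes = []
    for _ in range(runs):
        junk = "".join(random.choice(string.printable)
                       for _ in range(random.randint(0, 20)))
        try:
            parse_command(junk)
        except Exception as error:
            crashes.append((junk, error))
    return crashes

if __name__ == "__main__":
    for junk, error in monkey_test():
        print("Crashed on %r: %r" % (junk, error))
```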


References: Some of the content may refer to various sources available on the web.
Logos, images and trademarks are the property of their respective organizations.