Wednesday, November 24, 2010

Shout out by Joe Payne, CEO and President, Eloqua

On March 24, 2010, I reluctantly woke up as I normally do and got ready for work. I took transit to the office, reached my desk a little late, and made my breakfast: Quaker Instant Oatmeal and office coffee. So far, this was my daily routine. No change and no surprises.

A card enclosed in an envelope had been placed on my desk by the CEO's Executive Assistant. I curiously opened the card. I was expecting a New Year's greeting card. To my surprise, it was a handwritten note from the CEO and President of Eloqua, Joe Payne. This is what Joe had to write about me.

"March 24, 2010


Sammy- Just a quick note to say, thanks for all the efforts you have been putting in on the testing of Eloqua 10. Abe and Andre tell me that you have gone out of your way to help the team get up to speed.

WELL DONE- JOE"

I was extremely happy and grateful for the recognition.

Functional vs non-functional testing

Functional Testing:
Testing the application against the business requirements. Functional testing is done using the functional specifications provided by the client, or using the design specifications, such as use cases, provided by the design team.


Functional Testing covers:
  • Unit Testing
  • Smoke testing / Sanity testing
  • Integration Testing 
  • Interface & Usability Testing
  • System Testing
  • Regression Testing
  • Pre User Acceptance Testing (Alpha & Beta)
  • User Acceptance Testing
  • White Box & Black Box Testing
  • Globalization & Localization Testing
Non-Functional Testing:
Testing the application against the client's performance and operational requirements. Non-functional testing is done based on the requirements and test scenarios defined by the client. (A small sketch contrasting functional and non-functional checks follows the list below.)

Non-Functional Testing covers:
  • Load and Performance Testing
  • Stress & Volume Testing
  • Compatibility & Migration Testing
  • Data Conversion Testing
  • Security / Penetration Testing (application, network and system security)
  • Operational Readiness Testing
  • Installation Testing
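To make the distinction concrete, here is a minimal sketch in Python (my own illustration, not part of the original lists). The login() function and the 200 ms budget are hypothetical; the point is only that the first test checks a business rule while the second checks a performance requirement.

    import time

    def login(username, password):
        # Stand-in for the application code under test (hypothetical).
        return username == "demo" and password == "secret"

    def test_login_functional():
        # Functional: does the feature satisfy the business requirement?
        assert login("demo", "secret") is True
        assert login("demo", "wrong") is False

    def test_login_response_time():
        # Non-functional: does the feature meet a performance requirement,
        # e.g. "login must complete within 200 ms"?
        start = time.perf_counter()
        login("demo", "secret")
        elapsed = time.perf_counter() - start
        assert elapsed < 0.2

Both tests can be run with a runner such as pytest; only the second one would need to be repeated under realistic load to say anything meaningful about performance.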

Thursday, August 26, 2010

Why am I a tester? The wrong reasons

We have seen many discussions on why people came into software testing and why they still love to work as testers. People have interesting reasons: for some it is creativity, for some it is the challenge of automation, for others it is systems thinking, domain expertise and so on. All of these are good reasons to be in the testing field, and if you are in testing for similar reasons, you are probably enjoying your work and maybe even getting the people around you excited about testing.
Unfortunately, over the years I have seen many people stay in testing for the wrong reasons as well. The following list is a collection of wrong reasons and motivations to be in the software testing field and work as a tester. If you are in testing because of one of these reasons, you probably need to find a good mentor, understand testing properly or change your field.


One strange, but true, reason deserves a mention up front: blaming best practices and processes. Many testers memorize two responses for every failed project - best practices were not followed, and the process was not followed correctly. It is always possible to find at least one process which did not work and one best practice which was not followed in every failed project, so you can always give the same response to most failures in a testing project.

Good testers do not believe in best practices; they find the best way to work in a given context. Even if the project on which they are working fails, they retrospect and find opportunities to improve next time rather than opening a process book.
 
I hope you are not working as a tester for any of the reasons below, and if you are, hopefully this list will make you start thinking about it.

1. Software Testers can be Lazy

Software testing could be the perfect job for lazy people. There are so many things you need before you can start testing the product. You can happily wait for documentation, changed documentation, final documentation and documents describing some more changes after the final documentation. You can also wait for the software to be delivered to the test environment, for the defect tracking system to be set up and configured properly, for the process to be finalized, for a testable system, and the list can just go on. As a tester, you can find thousands of things which can prevent you from testing the system.
 
Some of these things are important, really important. What you need to do is think about them and ask whether they are really blocking you or just becoming an excuse. Good testers will always find ways to start testing as soon as they can. They'll work with analysts, developers, infrastructure teams and others to make sure they can test the system. Even when they are not testing, they will probably be thinking about data, users, automation or anything else which will make them more efficient, rather than waiting.

2. Software Testers can preserve grey cells

Some testers feel that grey cells are precious, and they make every effort to ensure that the mind is not exercised or challenged. Following test scripts manually is the perfect way to preserve these precious grey cells. I remember someone telling me: I'll happily follow written scripts and not apply my brain as long as I am getting paid. Surely you are getting paid, but are you enjoying the process? Does it not become boring for you? Do you feel good and proud about the work you are doing? And even if you are following the script, are you not observing things which are not part of the script?
 
For real testers, constant exercise of and challenges to the mind are among the main reasons to be in the testing field. They continuously ask questions, make observations, write notes and talk to people about the product. They don't preserve their minds but enhance them by continuously exercising and challenging them.

3. Software Testers can blame anyone

As a tester, there is always an opportunity to blame someone for your failures. You can blame BAs for changing requirements, infrastructure for not providing a test environment, developers for not writing unit tests, introducing regression defects and delivering late, and management for not giving enough importance to testing and cutting testing time.
 
Now, I am not saying that these are not problems. Some of these could be real problems, but good testers will highlight them and find ways to work rather than pointing fingers and finding ways to avoid work.

4. Software Testers can fake

It is very easy for testers to get away without actually working on anything. In most cases, management does not have the right appreciation or tools to check your progress as a tester. It is extremely easy to say that you have tested a feature without testing it. In many organizations progress is checked with yes / no questions along with some numbers, and it is extremely difficult for anyone to make sense of these answers.
 
Good testers, on the other hand, make sure that their progress is traceable. They do not answer in yes / no but explain which parts have been tested, how they were tested and what has not been tested. They provide information rather than data and maintain the integrity of their profession.

5. Software Testers do not need to learn

Developers need to learn new things constantly: new languages, frameworks, platforms, algorithms and so on. Testing, on the other hand, is relatively static. You can argue that you do not need to be technical, so you won't learn new technologies. The definitions and techniques of testing are very old and hardly change, so you do not need to learn those either. You can also leave domain knowledge to the business analysts and not learn about the domain, and so, effectively, you do not need to learn anything to survive in the testing field.
 
Testers who love their job, though, have an appreciation for technologies and platforms. Even if they are not technical, they will find out how the program was built, what made the team choose this platform or language, and so on. Similarly, they will try to understand the domain to find out how a typical user would use the system. They will make themselves familiar with many tools to increase their efficiency. Constantly learning new things is one of their main motivations to work as a tester.

6. Software Testers can become expert easily

Becoming an expert is extremely easy in testing. There are so many certifications which claim to make you an expert, probably in a month. You can always claim your supremacy by flaunting the various certifications you have acquired by memorizing all the definitions. In many organizations, management will promote you for becoming an expert (by acquiring certifications) without you actually working.
 
Good testers usually do not consider themselves experts. They do not rely on certification agencies to certify their excellence. They are just good learners who learn a few new things every day and are on a journey to become experts. They are probably involved in, or have an appreciation for, movements like weekend testing rather than the syllabus of any certification program.

7. Software Testers can confuse people

If you love to confuse people, you can do so very easily as a tester. There are different definitions and interpretations of almost every term we use in testing. You can find completely different definitions for ad hoc testing, exploratory testing, the V-model and so on, and probably most of them are wrong anyway. You can have endless discussions on why priority and severity are different and why both are needed. You can argue endlessly about the defect life cycle and the best processes for version control, environment control and so on. In most cases, irrespective of what people are doing and how they are doing it, testers can find at least one thing which should be changed in order to improve quality.
 
Some testers prefer to work, though, and it doesn't matter what names you give to the techniques they are following. Their focus is on improving quality, not by talking about it but by working for it. They do suggest changes, but by showing the real value of why something should be changed rather than merely pointing out where a specific process is not being followed.

8. Software Testers get paid without adding real value

As a tester, it is extremely easy to do whatever you are instructed to do. Now, there is nothing wrong with that, but often the person instructing you to perform testing does not understand testing. If you do not think hard and continuously, it is very easy to test as instructed without testing as a good tester would. In situations like this, you are only testing as well as the person who is instructing you or has written the scripts for you.
 
Real testers, even under instruction, will not stop thinking about the problems and the ways in which the product can be tested. There will always be questions which need investigation, new ideas which the analysts have not covered, or missing data sets you need to test. They always find ways to add value in projects of every size and at every stage.

9. Software Testers can Play with numbers

Playing with numbers could be another favourite activity for many testers. Number of test cases, number of automated test cases, number of defects, number of defects in every status, developer-to-tester ratio, unit test coverage, and the list can go on and on. It is always possible to answer most testing questions with numbers without giving any additional information. "Testing is 50% complete" or "70% of the test cases are automated" can mean different things to different people. Numbers don't give any useful information by themselves.
 
Good testers, on the other hand, give sensible and useful information rather than random numbers. I am not saying that good testers do not prepare any metrics; they do, but they also explain what these numbers are telling us.


 


Re-published from a reliable source.

Thursday, August 12, 2010

What are Web-Enabled Application Measurement Tests?

  1. Mean time between failures, in seconds
  2. Amount of time in seconds for each user session, sometimes known as a transaction
  3. Application availability and peak usage periods
  4. Which media elements are most used (for example, HTML vs. Flash, JavaScript vs. HTML forms, Real vs. Media Player vs. QuickTime)
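As a rough illustration of the first two measurements, the sketch below computes the mean time between failures and the average session duration from hypothetical timestamped log data; the numbers and structure are made up for the example.

    # Hypothetical monitoring data, all timestamps in seconds
    failure_times = [1000, 4600, 9400]               # moments at which failures occurred
    sessions = [(100, 160), (200, 290), (300, 420)]  # (start, end) of user sessions

    # 1. Mean time between failures, in seconds
    gaps = [later - earlier for earlier, later in zip(failure_times, failure_times[1:])]
    mtbf = sum(gaps) / len(gaps)

    # 2. Average time per user session (transaction), in seconds
    avg_session = sum(end - start for start, end in sessions) / len(sessions)

    print(f"MTBF: {mtbf:.0f} s, average session: {avg_session:.0f} s")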

Wednesday, May 19, 2010

What can testers do if there is not enough information to create a requirements matrix?

If testers are not getting requirements in a form usable for creating a requirements traceability matrix, and assuming they have the time, I recommend taking the knowledge of the system that they have and writing their own test requirements matrix. These kinds of matrices should be used internally for traceability. This method has worked for me in the past.
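A tester-owned matrix can be as simple as a spreadsheet generated by a small script. The sketch below is my own illustration with made-up requirement IDs and test case names; it maps inferred requirements to the tests that cover them and flags gaps.

    import csv

    # Requirements inferred by the test team from their knowledge of the system,
    # mapped to the test cases that cover them (all IDs are illustrative).
    traceability = {
        "REQ-001 User can log in": ["TC-101", "TC-102"],
        "REQ-002 Password reset by email": ["TC-110"],
        "REQ-003 Session expires after 30 minutes": [],  # gap: no coverage yet
    }

    with open("requirement_traceability_matrix.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Requirement", "Covering test cases", "Covered?"])
        for requirement, tests in traceability.items():
            writer.writerow([requirement, ", ".join(tests), "yes" if tests else "NO"])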


The best outcome was that, as word of these requirements matrices got out to other groups in the company, they began to see the value of them. The matrices made our testing efforts more efficient, other groups wanted to see what they could do for them, and the result was wider adoption of documented requirements.

Sunday, May 9, 2010

What would it take to make software testing effective?

I think that the following qualities are essential to make software testing effective:
  • Successful testing efforts are determined by the quality of the testing process.
  • Deploy testing techniques early in the life cycle to prevent defect migration.
  • To improve the testing process, a real person must own each responsibility.
  • Testing is a professional discipline; it requires continuous training and skills improvement.

Sunday, April 11, 2010

What are requirements stability metrics?

Metrics can be very helpful, but only if the value they provide is worth the time put in to track them. And keep in mind, metrics are designed to be used for analysis and improvement, not necessarily for 'on the fly' management of current projects.
                                          
So, while I see the need to track the kind of information one is looking for, I think it would be difficult to get useful metrics if the requirements are not stabilized.

More importantly, what is it you are trying to analyze about the requirements? For what purpose?
If you want to be able to better analyze the requirements and how to go about testing them, try to implement some sort of weighting practice. Have the business side prioritize requirements based on how critical they are to have in a particular (or first) release of the application. Have the technical people estimate the amount of effort required to implement each one, and it should be easier to plan development and testing based on that sort of matrix.
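One simple way to turn that weighting practice into a planning aid (an illustration of the idea, not something prescribed here) is to score each requirement from the business-side criticality and the technical effort estimate, then sort by the score.

    # Each requirement carries a business criticality (1 = low, 5 = critical)
    # and a technical effort estimate in ideal days; the values are made up.
    requirements = [
        {"id": "R1", "criticality": 5, "effort_days": 3},
        {"id": "R2", "criticality": 2, "effort_days": 8},
        {"id": "R3", "criticality": 4, "effort_days": 2},
    ]

    # Higher criticality and lower effort float to the top of the plan.
    for req in requirements:
        req["score"] = req["criticality"] / req["effort_days"]

    for req in sorted(requirements, key=lambda r: r["score"], reverse=True):
        print(f"{req['id']}: criticality {req['criticality']}, "
              f"effort {req['effort_days']}d, score {req['score']:.2f}")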
If you're concerned about change, consider using a requirements management tool - there are commercial apps - or do something as simple as keeping a spreadsheet of the current requirements at a high level and tracking changes to each requirement. Try to implement some level of change management or change control through your defect tracking system - defects can be written against documentation as well, and the system can track change requests as easily as actual errors.

Monday, March 22, 2010

What is the purpose of a good test case?


Test cases are the bread and butter of software testers. Test cases, whether created manually or automated, have many objectives. Writing effective test cases is an art acquired through skill and experience. The scope of effective test cases depends on the type of project you are handling and the depth of the functional requirements to be captured.

Test cases fall into many categories, and each category provides its own value to the organization in different ways. In order for a test case to provide better value to the organization, it should essentially be functional, reduce risk, focus the testing effort and provide some measurable value to the organization.

There are various categories of effective test cases. The following are some of the categories into which most test cases would fall (a minimal example follows the list):
  • To verify the expected results versus actual results during testing cycle.
  • To verify if the application under test is in conformance with the standards and guidelines as requested by the stakeholders.
  • To increase the functional test matrix coverage.
  • To increase the data flow coverage.
  • To increase the logical flow coverage.
  • To verify and execute end user scenarios.
  • To report any errors or defects to the developers before they impact the end users.
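As a minimal example of the first category (verifying expected versus actual results), here is a sketch using Python's unittest module. The calculate_discount() function and its 10% rule are hypothetical stand-ins for real application logic.

    import unittest

    def calculate_discount(order_total):
        # Stand-in for application logic: 10% discount on orders of 100 or more.
        return order_total * 0.10 if order_total >= 100 else 0.0

    class TestDiscount(unittest.TestCase):
        def test_discount_applied_at_threshold(self):
            # Expected result (10.0) compared against the actual result.
            self.assertEqual(calculate_discount(100), 10.0)

        def test_no_discount_below_threshold(self):
            self.assertEqual(calculate_discount(99), 0.0)

    if __name__ == "__main__":
        unittest.main()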

Friday, March 12, 2010

Have you heard about Monkey Testing?

Monkey Testing - interesting terminology, but valuable testing in the software industry.
Monkey testing is nothing but random testing of a software application or system on the fly, without working knowledge of the application. The goal of monkey testing is to identify any show-stoppers or critical defects in the software application under test.

Monkey testing is best performed by automated testing tools. These automated tools are called 'monkeys' when they work randomly, looking for crashes or breaks in the software.
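A very small 'monkey' can be written in a few lines. The sketch below is my own illustration: it throws random strings at a hypothetical parse_order() function and reports anything that crashes unexpectedly, which is the kind of show-stopper hunting described above. A real monkey would drive the application's UI or API rather than a single function.

    import random
    import string
    import traceback

    def parse_order(raw):
        # Hypothetical stand-in for the code under test.
        quantity, item = raw.split(",", 1)
        return int(quantity), item.strip()

    def monkey(iterations=1000):
        for _ in range(iterations):
            # Feed random garbage with no knowledge of the application.
            junk = "".join(random.choices(string.printable, k=random.randint(0, 20)))
            try:
                parse_order(junk)
            except ValueError:
                pass  # expected validation error, not a show-stopper
            except Exception:
                print(f"Possible show-stopper on input {junk!r}")
                traceback.print_exc()

    if __name__ == "__main__":
        monkey()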

Monkey testing is inexpensive when used for basic random testing, which will typically find only a few bugs, whereas using it for load or performance testing can be very expensive but can surface a larger number of bugs.

Monkey testing can be valuable for identifying rather embarrassing show-stoppers, but it should not be the only testing performed.

Thursday, March 4, 2010

What can testers do if the application has functionality that wasn't in the requirements?

I believe that if the application under test has functionality that was not described in the requirement documents, it indicates deeper problems in the software development process. All testers agree that it takes tremendous effort to determine the unexpected behaviors of an application under test. It takes serious effort by testers to identify any hidden functionality, and that costs precious time and resources.

In my experience, if the functionality isn't necessary for the purpose of the application, it should be removed, as it may have unknown impacts or dependencies on the application which were not taken into consideration by the stakeholders.

If the alien functionality (as I call it, because it was not described in the requirement documents) is not removed, for whatever reason, it should be documented in the test plan so that the added testing needs, resources and additional regression testing can be determined. Stakeholders and management should be made aware of any significant risks that might arise as a result of this alien functionality.

On the other hand, if this alien functionality has to be included for minor improvements in the user interface, and its effects on the overall application's functionality range from trivial to minor and do not pose a significant risk, it may be accommodated in the testing cycle with or without minor changes to the test plan.

Monday, January 25, 2010

My top 5 common problems in the software / web development process

Software or web development has many risks associated with it. Some of them may be planning risks, which become the responsibility of all stakeholders involved in the project; some may be software risks, which are more closely connected to the day-to-day development process. Other risks may be inherent in general software development or may be specific to the software environment.

Here are the top 5 common problems in the web development process.
  1. Poor Requirements: If the functional requirements of the features are unclear, incomplete, too general or not testable, there will be problems in the development process.
  2. Unrealistic Schedule: If too many features are to be developed and tested in too little time, problems are inevitable.
  3. Inadequate Testing: If the software is released or published without adequate testing, no one will know whether or not the software is any good until the customer complains or the system crashes, which will guarantee a bad reputation for the organization and its process.
  4. Additional Feature Requests: Requests to add new features while development is underway, or after it, are very common. This adds tremendous pressure on developers and testers, and the software quality will be compromised if it is not taken into account during project scheduling.
  5. Miscommunication: If customers have erroneous expectations, project managers don't know what is required, developers don't know what is needed or testers don't have sufficient functional requirements, problems are guaranteed.

Sunday, January 10, 2010

Why is Usability testing so important?


Usability testing is carried out in order to find out whether any change needs to be made to the developed system (be it a design change or a specific procedural or programmatic change) in order to make it more user friendly, so that the intended end user who is ultimately going to buy and use the system receives something he can understand and use with the utmost ease.

Any changes suggested by the tester at the time of usability testing are the most crucial points that can change the standing of the system in the end user's view. The developers and designers of the system need to incorporate the feedback from usability testing (the feedback may be a very simple change in look and feel or a complex change in the logic and functionality of the system) into the design and code of the system (where the system may be a single object or an entire package consisting of more than one object) in order to make the system more presentable to the intended end user.

Developers often try to make the system as good looking as possible while also trying to fit in the required functionality; in this endeavor they may overlook some error-prone conditions which are uncovered only when the end user is using the system in real time.
Usability testing helps the developer study the practical situations in which the system will be used in real time. The developer also gets to know the areas that are error prone and the areas for improvement.

In simple words, usability testing is an in-house dummy release of the system before the actual release to the end users, where testers can find loopholes and developers can fix them.


References: Some of the content may reference various sources available on the web.
Logos, images and trademarks are the property of their respective organizations.