Tuesday, February 14, 2012

Requirements-Based Testing including Cause-Effect Graphing

In many organizations, testing is viewed as slowing down the delivery of the system, partly because it is the last step in the development process. Ironically, when testing is properly deployed, with heavy emphasis on Requirements-Based Testing, it can have a major impact on overall project productivity as well as product quality.                                           

Many organizations also have discovered that capture/playback tools require a significant investment in building and maintaining test scripts. They also discover that the scripts cannot keep up with rapidly changing specifications.

This presentation will address how a Requirements-Based Testing (RBT) process provides major productivity gains. The RBT process stabilizes the completeness of the specifications early in the development process. RBT is made up of two components that support Verification and Validation.  RBT uses a verification technique called Ambiguity Reviews to drive out ambiguities from requirements.  It also uses a test case design technique called Cause-Effect Graphing that derives an optimized set of test cases that covers 100% of the problem description. The results are fewer tests with greater functional coverage, shortened time to delivery, reduced costs of development, and significantly improved quality.
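Cause-Effect Graphing can be illustrated with a small sketch. The example below is hypothetical (the causes, effect and function names are invented for illustration and are not part of the RBT toolset): two causes feed one effect through an AND relation, and enumerating the cause combinations yields the decision table from which logical test cases are selected.

```python
from itertools import product

# Hypothetical cause-effect graph with two causes and one effect:
#   C1: account is active
#   C2: password is correct
#   E1: login succeeds  (E1 = C1 AND C2)
def effect_login(c1, c2):
    return c1 and c2

# Enumerate all cause combinations to build the decision table;
# each row is a candidate logical test case.
decision_table = [
    {"C1": c1, "C2": c2, "E1": effect_login(c1, c2)}
    for c1, c2 in product([True, False], repeat=2)
]

for row in decision_table:
    print(row)
```

In practice the technique then prunes this table to a minimal set of cases that still covers every cause-effect relation, which is where the "fewer tests with greater functional coverage" claim comes from.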

This is an open event; however, space is limited, so register now.

Register now to discover:
  • The 12-point Requirements-Based Testing (RBT) process
  • How the RBT process stabilizes the completeness of the specifications early in the development process
  • The two components of RBT, Verification and Validation, including a verification technique called Ambiguity Reviews to drive out ambiguities from requirements
  • Tools to translate Cause-Effect Graphs into logical test cases and evaluate previously existing test libraries to cover the problem description
  • Discussion with an expert panel
Gaining perspective of this calibre will help you become an indispensable resource in the software testing process.

Hosted by SELA Canada and Microsoft Canada

Where:
Microsoft Office
1950 Meadowvale Blvd., Mississauga,
ON, L5N 8L9
Thursday, Mar 29, 2012,
from 6:00PM to 8:30PM


Monday, December 5, 2011

Training by ISTQB Leaders

2012 Course Calendar
Training by the President of the CSTB

Gary Mogyorodi, President of CSTB


SELA Canada is excited to have Mr. Gary Mogyorodi, President of the CSTB, teach the following classes:

1) ISTQB Certified Tester Foundation Level
Dates: January 11-13, 2012 | 9am - 5pm
Course Overview:
This course provides test engineers and test team leaders with the main ideas, processes, tools and skills they need in order to set themselves on a path for true testing professionalism. This hands-on course covers the major test design techniques with lecture and exercises.

2) ISTQB Certified Tester Advanced Level Test Manager
Dates: January 23 - 27, 2012 | 9am - 5pm
Course Overview:
Being a technical manager is hard enough, but managing the testing process is a unique challenge, requiring judgment, agility, and organization. The course covers the essential tools, critical processes, significant considerations, and fundamental management skills for people who lead or manage development and maintenance test efforts.

3) ISTQB Certified Tester Advanced Level Test Analyst
Dates: February 27 - March 2, 2012 | 9am - 5pm
Course Overview:
This training course gives detailed information on the specifics of different testing techniques: specification-based techniques, and defect- and experience-based techniques, their characteristics and their boundaries, all while extending the range of their usage.



Wednesday, September 21, 2011

Best Practices For Regression Testing

The SDLC prescribes that when a defect is fixed, two forms of testing are to be done on the fixed code. The first is confirmation testing, to verify that the fix has actually resolved the defect; the second is regression testing, to ensure that the fix itself hasn't broken any existing functionality. It is important to note that the same principle applies when a new feature or function is added to an existing application. In that case, tests can verify that the new features work as per the requirement and design specifications, while regression testing can show that the new code hasn't broken any existing functionality.
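As a minimal sketch of the two checks just described, consider a hypothetical fix (the `discount` function and its defect are invented for illustration): the confirmation test re-runs the scenario from the defect report, while the regression test re-checks behaviour that was already correct before the fix.

```python
def discount(price, is_member):
    # The "fixed" code: members now get 10% off. (Hypothetical defect:
    # previously the member discount was never applied.)
    return price * 0.9 if is_member else price

# Confirmation test: the reported defect is actually fixed.
assert discount(100, is_member=True) == 90

# Regression test: the fix hasn't broken existing behaviour
# for non-members.
assert discount(100, is_member=False) == 100

print("confirmation and regression checks passed")
```

The same pattern scales up: the confirmation check stays tied to the specific defect report, while the regression checks accumulate into the regression pack discussed below.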


It is possible that a new version of the application will have fixed previously reported defects as well as introduced new functionality. For the fixes we would normally have a set of test scripts (test cases) which are run to confirm the fixes, while for the new functionality we would have a set of functionality test cases.

Over time, as the software application grows in terms of new functionality and more components are added, a regression pack, which is a bank of test cases, is developed to be run against each new version of the application that is to be released.

Selecting tests for regression packs

As explained earlier, for each new release of a software application, three sets of test suites are executed: regression tests, release-specific tests and defect test scripts. Choosing test cases for regression packs is not a trivial exercise. Careful thought and attention need to be paid to choosing the sets of tests to include in the regression packs.

One would expect that as each new test case is written for the release-specific tests, it will become part of the regression pack to be executed when the next version of the code arrives. In other words, the regression pack becomes bigger and bigger as more and more new versions of the code are developed. If we automate regression testing, this should not be a problem, but with manual execution of large regression packs this can cause time constraints, and the new functionality may not be tested due to lack of time.

These regression packs often contain tests that cover the core functionality that will stay the same throughout the evolution of the application. Having said that, some of the old test cases may not be applicable anymore as those functionalities may have been removed and replaced by new functionality. Therefore, the regression test packs need to be updated regularly to reflect changes to the application.

The regression packs are a combination of scripted tests derived from the requirement specifications for previous versions of the software, as well as random or ad-hoc tests. A regression test pack should, at a minimum, cover the basic workflow of typical use case scenarios. "Most important tests", i.e. tests which are important to the application domain, should always be included in the regression packs. Successful test cases, i.e. tests which have revealed defects in previous versions of the application, are also good candidates for inclusion in the regression packs.
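The selection criteria above can be sketched in code. This is a hypothetical illustration (the test library, field names and scoring scheme are invented): rank candidate tests by core-workflow coverage, domain importance and defect-finding history, then keep as many as the manual-execution budget allows.

```python
def select_regression_pack(tests, budget):
    """Rank tests by the selection criteria described above and keep
    the top `budget` tests for the regression pack.

    tests: list of dicts with keys 'name', 'core' (covers core
    workflow), 'important' (important to the domain), and
    'defects_found' (defects revealed in previous versions).
    """
    def score(t):
        # Booleans sort True before False under reverse=True,
        # so core-workflow tests rank highest, then important ones,
        # then those with the best defect-finding history.
        return (t["core"], t["important"], t["defects_found"])

    ranked = sorted(tests, key=score, reverse=True)
    return [t["name"] for t in ranked[:budget]]

# Invented test library purely for illustration.
library = [
    {"name": "login_basic",    "core": True,  "important": True,  "defects_found": 3},
    {"name": "report_export",  "core": False, "important": True,  "defects_found": 1},
    {"name": "theme_switcher", "core": False, "important": False, "defects_found": 0},
]

print(select_regression_pack(library, budget=2))  # ['login_basic', 'report_export']
```

A real regression pack would also need the pruning step described earlier: dropping tests for removed functionality before ranking what remains.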

Wednesday, November 24, 2010

Shout out by Joe Payne, CEO and President, Eloqua

On March 24, 2010, I reluctantly woke up as I normally do and got ready for work. I took transit to work and reached my desk a little late. I made my breakfast, Quaker Instant Oatmeal and office coffee. So far, this was my daily routine. No change and no surprises.

A card enclosed in an envelope had been placed on my desk by the CEO's Executive Assistant. I curiously opened the card, expecting a New Year's greeting card. To my surprise, it was a handwritten note from the CEO and President of Eloqua, Joe Payne. This is what Joe wrote about me.

"March 24, 2010


Sammy- Just a quick note to say, thanks for all the efforts you have been putting in on the testing of Eloqua 10. Abe and Andre tell me that you have gone out of your way to help the team get up to speed.

WELL DONE- JOE"

I was extremely happy and grateful for the recognition.

Functional vs non-functional testing

Functional Testing:
Testing the application against business requirements. Functional testing is done using the functional specifications provided by the client, or using design specifications such as use cases provided by the design team.


Functional Testing covers:
  • Unit Testing
  • Smoke Testing / Sanity Testing
  • Integration Testing
  • Interface & Usability Testing
  • System Testing
  • Regression Testing
  • Pre-User Acceptance Testing (Alpha & Beta)
  • User Acceptance Testing
  • White Box & Black Box Testing
  • Globalization & Localization Testing
Non-Functional Testing:
Testing the application against the client's performance and other non-functional requirements. Non-functional testing is done based on the requirements and test scenarios defined by the client.

Non-Functional Testing covers:
  • Load and Performance Testing
  • Stress & Volume Testing
  • Compatibility & Migration Testing
  • Data Conversion Testing
  • Security / Penetration Testing
  • Operational Readiness Testing
  • Installation Testing
  • Security Testing (Application, Network and System security)

Thursday, August 26, 2010

Why I am a tester? Wrong Reasons

We have seen many discussions on why people came into software testing and why they still love to work as testers. People have interesting reasons: for some it's creativity, for some it's the challenges of automation, for some it's the relation to systems thinking, domain expertise, etc. All of these are good reasons to be in the testing field, and if you are in testing for similar reasons, you are probably enjoying your work and maybe getting the people around you excited about testing.
Unfortunately, over the years I have also seen many people staying on as testers for the wrong reasons. The following list is a collection of wrong reasons and motivations to be in the software testing field and work as a tester. If you are in testing because of one of these reasons, you probably need to find a good mentor, understand testing properly or change your field.


Well, that is a strange reason, but true nevertheless. Many testers memorize two responses for every failed project: best practices were not followed, and the process was not followed correctly. It is always possible to find at least one process which did not work, and one best practice which was not followed, in every failed project. You can always give the same response to most of the failures in a testing project.

Good testers do not believe in best practices; they find the best way to work in a given context. Even if the project on which they are working fails, they will retrospect and find opportunities to improve next time rather than opening a process book.
 
I hope you are not working as a tester because of these, and if you are, hopefully you'll start thinking about it.

1. Software Testers can be Lazy

Software testing could be the perfect job for lazy people. There are so many things you need before you start testing the product. You can happily wait for documentation, changed documentation, final documentation and documents describing some more changes after the final documentation. You can also wait for the software to be delivered to the test environment, for the defect tracking system to be set up and configured properly, for the process to be finalized, for a testable system, and the list can just go on. As a tester, you can find thousands of things which can prevent you from testing the system.
 
Some of these things are important, really important. What you need to do is think about them and ask: are they really blocking you, or becoming an excuse for you? Good testers will always find ways to start testing as soon as they can. They'll work with analysts, developers, infrastructure, etc. to make sure they can test the system. Even when they are not testing, they will probably think about data, users, automation or anything else which will make them more efficient, rather than waiting.

2. Software Testers can preserve grey cells

Some testers feel that grey cells are precious, and they make every effort to ensure that the mind is not exercised or challenged. Following test scripts manually is a perfect way to preserve these precious grey cells. I remember someone telling me: "I'll happily follow written scripts and not apply my brain as long as I am getting paid." Surely you are getting paid, but are you enjoying the process? Does it not become boring for you? Do you feel good and proud about the work you are doing? Also, even if you are following the script, are you not observing things which are not part of the script?
 
For real testers, constant exercise of and challenges to the mind are among the main reasons to be in the testing field. They continuously ask questions, make observations, write notes and talk to people about the product. They don't preserve their mind; they enhance it by continuously exercising and challenging it.

3. Software Testers can blame anyone

As a tester, there is always an opportunity to blame someone for your failures. You can blame BAs for changing requirements, infrastructure for not providing a test environment, developers for not writing unit tests, introducing regression defects and delivering late, and management for not giving enough importance to testing and cutting testing time.
 
Now, I am not saying that these are not problems. Some of them could be real problems, but good testers will highlight these problems and find ways to work, rather than pointing fingers and finding ways to avoid work.

4. Software Testers can fake

It is very easy for testers to get away without actually working on anything. In most cases, management does not have the right appreciation or tools to check your progress as a tester. It is extremely easy to say that you have tested a feature without testing it. In many organizations, progress is checked with yes/no questions along with some numbers, and it is extremely difficult for anyone to make sense of these answers.
 
Good testers, on the other hand, make sure that their progress is traceable. They do not answer in yes/no but explain what part was tested, how it was tested and what was not tested. They provide information rather than data, and maintain the integrity of their profession.

5. Software Testers do not need to learn

Developers need to learn new things constantly: new languages, frameworks, platforms, algorithms and so on. Testing, on the other hand, is relatively static. You can argue that you do not need to be technical, so you won't learn new technologies. Definitions and techniques of testing are very old and hardly change, so you do not need to learn those. You can also leave domain knowledge to the business analysts and not learn about the domain, and so effectively you do not need to learn anything to survive in the testing field.
 
Testers who love their job, though, have an appreciation for technologies and platforms. Even if they are not technical, they will find out how the program was built and what made the team choose a particular platform, language, etc. Similarly, they will try to understand the domain to find out how a typical user would use the system. They will make themselves familiar with many tools to increase their efficiency. Constantly learning new things is one of their main motivations to work as a tester.

6. Software Testers can become expert easily

Becoming an expert is extremely easy in testing. There are so many certifications which claim to make you an expert, probably in a month. You can always claim supremacy by flaunting the various certifications you have acquired by memorizing all the definitions. In many organizations, management will promote you for becoming an expert (by acquiring certifications) without actually working.
 
Good testers usually do not consider themselves experts. They do not rely on certification agencies to certify their excellence. They are simply good learners, learning a few new things every day, on the journey to becoming experts. They are probably involved in, or have an appreciation for, movements like Weekend Testing rather than the syllabus of any certification program.

7. Software Testers can confuse people

If you love to confuse people, you can do so very easily as a tester. There are different definitions and interpretations of almost every term we use in testing. You can find completely different definitions for ad-hoc testing, exploratory testing, the V-model, etc., and probably most of them are wrong anyway. You can have endless discussions on why priority and severity are different and why both are needed. You can argue endlessly about the defect life cycle and the best processes for version control, environment control and so on. In most cases, irrespective of what people are doing and how they are doing it, testers can find at least one thing which should be changed in order to improve quality.
 
Some testers prefer to work, though, and it doesn't matter what names you give to the techniques they are following. Their focus is on improving quality, not by talking about it but by working for it. They do suggest changes, but by showing the real value of why something should be changed, rather than insisting that a specific process be followed.

8. Software Testers get paid without adding real value

As a tester, it is extremely easy to do whatever you are instructed to do. There is nothing wrong with that, but often the person who is instructing you to perform testing does not understand testing. If you do not think hard and continuously, it is very easy to test as instructed without testing like a good tester. In situations like this, you are testing only as well as the person who is instructing you or has written the scripts for you.
 
Real testers, even under instruction, will not stop thinking about the problems and the ways in which the product can be tested. There will always be questions which need investigation, new ideas which analysts have not covered, or missing data sets you need to test. They always find ways to add value in projects of every size and at every stage.

9. Software Testers can Play with numbers

Playing with numbers could be another favourite activity for many testers: number of test cases, number of automated test cases, number of defects, number of defects in each status, developer-to-tester ratio, unit test coverage, and the list can go on and on. It is always possible to answer most testing questions with numbers, without giving any additional information. "Testing is 50% complete" or "70% of the test cases are automated" can have different meanings for different people. Numbers don't give any useful information by themselves.
 
Good testers, on the other hand, give sensible and useful information rather than random numbers. I am not saying that good testers do not prepare metrics; they do, but they also explain what these numbers are telling us.


 


Re-published from reliable source.

Thursday, August 12, 2010

What Are Web-Enabled Application Measurement Tests?

  1. Mean time between failures, in seconds
  2. Amount of time in seconds for each user session, sometimes known as a transaction
  3. Application availability and peak usage periods
  4. Which media elements are most used (for example, HTML vs. Flash, JavaScript vs. HTML forms, Real vs. Media Player vs. QuickTime)
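The first two measurements can be computed from simple timestamp data. The sketch below uses invented numbers purely for illustration: mean time between failures is the average gap between consecutive failure timestamps, and a session's duration is its end time minus its start time.

```python
# Hypothetical monitoring data (all times in seconds since the start
# of the monitoring window).
failure_times = [1000, 4000, 10000]            # when each failure occurred
sessions = [(0, 120), (200, 260), (300, 480)]  # (start, end) per user session

# Mean time between failures: average gap between consecutive failures.
gaps = [later - earlier for earlier, later in zip(failure_times, failure_times[1:])]
mtbf = sum(gaps) / len(gaps)

# Average user-session (transaction) duration.
avg_session = sum(end - start for start, end in sessions) / len(sessions)

print(f"MTBF: {mtbf:.0f}s, average session: {avg_session:.0f}s")
```

Availability and media-element usage (items 3 and 4) would come from aggregating server logs over the same window rather than from per-event arithmetic like this.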


References: Some of the contents may have reference to various sources available on the web.
Logos, images and trademarks are the properties of their respective organizations.