The by-product of this defect-finding activity is a very good measure of the quality of the application. For example, the number of defects found per release indicates the stability of the product, the number of defects found for a particular feature indicates that feature's stability, and the number of defects still open at test completion provides crucial information for release decisions. The tests themselves give an idea of test coverage through their mapping to documented specs and perceived user scenarios.
The “process of executing a program” can be very complicated, though. Today’s complex applications, varying in technology, domain, user base, and interface, make it quite a challenge to exercise a program fully. Of course, it is well known that we can never cover all possibilities in program execution, so testing can never be complete. But we can still aim for effective testing and efficient program execution with the intent of capturing defects and information (metrics). Accordingly, a “test case”, which can be considered the smallest unit of a test, should be written to satisfy this intent.
What is a “test case”?
The IEEE definition of a test case is “Documentation specifying inputs, predicted results, and a set of execution conditions for a test item.” The aim is to divide the software into small units of function that are testable with input and that produce measurable results. So, basically, a test case is a feature/function description that should be executed with a range of inputs, given certain preconditions, with the outcome measured against an expected result.
By the way, there is a common misconception relating test cases to test scripts and test suites. Many people use the terms interchangeably, and that is a mistake. In short, a test script (or test suite) is a compilation of multiple test cases.
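To make the distinction concrete, here is a minimal sketch using Python’s unittest, where each method is one test case and the class compiles them into a suite. The validators and their rules are purely illustrative, not from any real application:

```python
import re
import unittest

# Illustrative validators; a real application would have its own.
def is_valid_email(value):
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", value) is not None

def is_valid_phone(value):
    return value.isdigit() and 7 <= len(value) <= 15

class RegistrationSuite(unittest.TestCase):
    """One test suite compiled from several test cases.

    Each method is a single test case with its own input and
    expected result, sharing the suite's precondition (setUp).
    """

    def setUp(self):
        # Precondition: every case starts from a blank registration form.
        self.form = {"email": "", "phone": ""}

    def test_rejects_invalid_email(self):   # test case 1
        self.form["email"] = "not-an-email"
        self.assertFalse(is_valid_email(self.form["email"]))

    def test_accepts_valid_phone(self):     # test case 2
        self.form["phone"] = "5551234567"
        self.assertTrue(is_valid_phone(self.form["phone"]))
```

Running the class executes every case in the suite, which is exactly the relationship the terminology describes.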
What information would the test manager want from the test case documents?
The test cases provide important information to the client regarding the quality of their product. The approach to test case writing should facilitate the collection of this information.
1. Which features have been tested / will be tested eventually?
2. How many user scenarios/use cases have been executed?
3. How many features are stable?
4. Which features need more work?
5. Are sufficient input combinations exercised?
6. Does the application give correct error messages when the user does not use it the way it was intended to be used?
7. Does the UI conform to the specifications?
8. Are the features traceable to the requirement spec? Have all of them been covered?
9. Are the user scenarios traceable to the use case document? Have all of them been covered?
10. Can these tests be used as an input to automation?
11. Are the tests good enough? Are they finding defects?
12. Is the software ready to ship? Is the testing enough?
13. What is the quality of the application?
Approach to test case writing
The way the test cases are organized determines the extent to which they are effective in finding defects and providing the information required from them.
- Function: Test each function/feature in isolation.
- Domain: Test by partitioning different sets of values.
- Specification based: Test against published specifications.
- Risk based: Imagine a way in which the program could fail, then design tests to check whether it actually fails that way.
- User: Tests done by users.
- Scenario/use case based: Based on actors/users and a set of actions they are likely to perform in real life.
- Exploratory: The tester actively controls the design of tests as those tests are performed, and uses information gained while testing to design new and better tests.
Since the goal should be to maximize the extent to which the
application is exercised, a combination of two or more of these works well.
Exploratory testing in combination with any of these approaches will give the
focus needed to find defects creatively.
Pure exploratory testing offers a creative alternative to traditional test case writing, but that is a topic for a separate discussion.
Test case writing procedure
- Study the application
· Get as much information about the application as possible through available documentation (requirement specs, use cases, user guides), tutorials, or by exercising the software itself (when available).
· Determine the list of features and the different user roles.
· If it is a specialized domain, obtain as much information as possible about how users might interact with the application.
- Standardize an approach for test case writing
Further, as a check, make sure the entire application flow has been covered. For example, for an e-commerce application the flow begins at registration and ends when the user receives an order confirmation or successfully cancels an order. Trace this flow through the set of test cases.
- Identify sets of test cases
· Identify logical sets of test cases based on individual features/user roles, or on the integration between them.
· Create separate test cases for special functional tests, e.g. browser-specific tests (using browser functions such as the back button, closing the browser session, etc.), UI tests, usability tests, security tests, and cookie/state verification, to ensure that all tests under these categories are covered.
· Effective test cases verify functions in isolation. If a particular feature has many input combinations, split the test into sub-tests. For example, to verify how the registration feature handles invalid input, write sub-tests for the different values:
Main test case: Register_01 - Verify that the user cannot register with invalid inputs in the registration form.
Sub test cases:
Register_01a - Verify with an invalid email id.
Register_01b - Verify with an invalid phone number.
Register_01c - Verify with a large number of characters in the password field.
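Splitting one test case into data-driven sub-cases maps naturally onto a parametrized test. A minimal sketch using Python’s unittest subTest, with a hypothetical register() validator standing in for the real feature (the names and validation rules are illustrative):

```python
import unittest

# Hypothetical validator standing in for the registration feature;
# the rules below are illustrative, not a real application's.
def register(email, phone, password):
    """Return True only if every field passes basic validation."""
    return "@" in email and phone.isdigit() and len(password) <= 64

class TestRegister01(unittest.TestCase):
    """Register_01: user cannot register with invalid inputs."""

    def test_rejects_invalid_input(self):
        sub_cases = [
            ("Register_01a", "invalid-email", "5551234567", "secret"),
            ("Register_01b", "user@example.com", "not-a-number", "secret"),
            ("Register_01c", "user@example.com", "5551234567", "x" * 500),
        ]
        for case_id, email, phone, password in sub_cases:
            # subTest reports each sub-case separately on failure.
            with self.subTest(case_id=case_id):
                self.assertFalse(register(email, phone, password))
```

Each tuple corresponds to one sub test case from the list above, so a failure report names exactly which invalid input slipped through.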
- Decide on a structure
The test case format given below serves well for functional test case writing. Some of the information may be redundant if written for each test case (for example, references), in which case it can be mentioned just once at the beginning of the test case document. In some cases, when the use cases or requirement specs are well written, there may be a perfect mapping of each test case to a particular section of the document.
For example, consider an auction site like eBay, where buyer and seller are distinct user roles. The most effective approach is to write test cases separately for buyer and seller, addressing the different features each role exercises. “Buyer_Register_01 - Verify that inserting valid values for all fields on the registration page registers the buyer successfully” could be one of the test cases for a buyer registering with eBay. Similarly, “Buyer_bids_01 - Verify that the buyer can bid for items whose bid period has not yet expired” could be a test case for the bid feature. Another set of test cases will address features for a seller, and yet another should address the interaction between buyer and seller and will be scenario based.
· Description - Explain the function under test. State clearly exactly what attribute is under test and under what conditions.
· Prerequisites - Every test follows a sequence of actions that leads to the function under test. It could be a certain page the user needs to be on, certain data that should already be in the system (such as registration data in order to log in), or a certain prior action. State this precondition clearly in the test case. It helps define specific steps for manual testing, and even more so for automated testing, where the system needs to be in a particular base state before the function can be tested.
· Steps - The sequence of steps to execute the specific function.
· Input - Specify the data used for the particular test, or, if there is a lot of data, point to the file where it is stored.
· Expected result - State clearly the expected outcome in terms of the page/screen that should appear after the test, changes that should happen to other pages, and, if possible, changes that should happen to the database.
· Actual result - State the actual result of the function execution. Especially when the test case fails, the information under “actual result” helps the developer analyse the cause of the defect.
· Status - Record the status separately for tests run in different environments, e.g. various OS/browser combinations. Test case status could be:
o “Passed” - The expected and actual results match.
o “Failed” - The actual result does not match the expected result.
o “Not tested” - The test case has not been executed in this test run, perhaps because it is lower priority.
o “Not applicable” - The test case no longer applies to the feature because the requirement changed.
o “Cannot be tested” - The prerequisite/precondition is not met; there may be a defect in one of the steps leading up to the function under test.
· Comments - Record additional information here, e.g. that the actual result occurs only under a particular condition, or that a defect is reproducible only intermittently. This gives the developer/client extra context about the feature’s behavior, which can be very useful in determining the root cause of a problem. It is especially useful for “failed” cases, but also serves as feedback when an additional observation is noted for “passed” cases.
· References - Map the test case to the corresponding requirement spec, use case, or any other reference material you used. This information helps gauge test coverage against the documented requirements.
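As a sketch, the fields above can be captured in a small record type so that every test case is documented uniformly. The field names mirror the list above, but the structure itself is an assumption for illustration, not a prescribed format:

```python
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    """The five status values described above."""
    PASSED = "Passed"
    FAILED = "Failed"
    NOT_TESTED = "Not tested"
    NOT_APPLICABLE = "Not applicable"
    CANNOT_BE_TESTED = "Cannot be tested"

@dataclass
class TestCase:
    """One functional test case, with the fields described above."""
    case_id: str
    description: str
    prerequisites: str
    steps: list
    input_data: str
    expected_result: str
    actual_result: str = ""
    # Status is recorded per environment, e.g. {"Win/Chrome": Status.PASSED}
    status: dict = field(default_factory=dict)
    comments: str = ""
    references: str = ""

# Example: the buyer-registration case from earlier in the article.
tc = TestCase(
    case_id="Buyer_Register_01",
    description="Valid values on the registration page register the buyer",
    prerequisites="User is on the registration page and is not logged in",
    steps=["Fill all fields with valid values", "Submit the form"],
    input_data="See registration_data.csv (hypothetical file)",
    expected_result="Confirmation page appears; buyer record is created",
)
tc.status["Win/Chrome"] = Status.PASSED
```

Keeping status as a per-environment mapping reflects the advice above to record results separately for each OS/browser combination.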
Conclusion:
It’s a huge task to write effective test cases with all the appropriate details. Once the test case documents are ready to be executed against, that is typically only the beginning of the test effort. As you become more familiar with the application (exercising creativity while doing so) and more “in tune” with the end users’ perspective (not relying only on the documented features/use cases), you will likely add more relevant test cases. The same happens with each progression towards the final release, as features are added, deleted, or modified while building towards the final product. Thus, the usefulness of the test cases ultimately depends on how current and relevant they are.