Tuesday, May 30, 2006

Test Director Notes

What are folders?
Folders are structures used to arrange your test cases. Just as you separate files into logical folder structures on your file system, folders in TestDirector are the logical equivalent.


CREATING A NEW FOLDER

Note: This must be done in the 'Plan Test' tab

1. Select the folder in which you want the new folder to be placed, or select the root folder if you want to create a root-level folder.
2. Click on the Folder->New button at the bottom left corner of the page (5.0 or 6.0 version).
3. Type the name of the folder in the dialog box that appears, then click the OK button.


DELETING A FOLDER

Note: This must be done in the 'Plan Test' tab

1. Select the folder you want to delete by clicking on it once.
2. Click on the Folder->Delete button at the bottom left corner of the page.
3. Confirm your action in the confirmation dialog presented.

You can also delete folders by using the context menu that is activated by right-clicking on the folder name.



TEST CASES

The folders that we created exist simply to structure our test cases in a logical manner. The test case is the document that actually contains our test. The test case contains several pieces of information, including:

* Details
- Meta-information about the test that can be used to locate the test case, track its creation, provide a description, etc.
* Design Steps
- Step-by-step information that describes each step in a test and the expected result that should occur after each step.
* Test Script
- An automated test script in one of the three technologies supported by TestDirector:
WinRunner
QTP
LoadRunner
* Attachments
- Attached files or URLs that hold additional information useful to anyone who will use this test case.
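The pieces above can be sketched as a simple record. A hypothetical Python model (the field and class names here are illustrative, not TestDirector's actual schema):

```python
from dataclasses import dataclass, field

@dataclass
class DesignStep:
    description: str   # what the tester does in this step
    expected: str      # expected result after the step

@dataclass
class TestCase:
    name: str
    details: dict = field(default_factory=dict)       # meta-information (author, description, ...)
    design_steps: list = field(default_factory=list)  # ordered DesignStep entries
    test_script: str = ""                             # path to a WinRunner/QTP/LoadRunner script
    attachments: list = field(default_factory=list)   # file paths or URLs

# An invented example test case with two design steps.
login_test = TestCase(
    name="Login - valid credentials",
    details={"author": "tester1", "description": "Verify a valid user can log in"},
    design_steps=[
        DesignStep("Enter a valid username and password", "Fields accept input"),
        DesignStep("Click the Login button", "Home page is displayed"),
    ],
)
```

A real test case carries far more metadata, but the four sections above map directly onto these fields.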


CREATING A NEW TEST CASE

Note: This must be done in the 'Plan Test' tab

1. Select the folder in which you want the new test to be placed.
2. Click on the Test->New button at the bottom left corner of the page.
3. Choose the type of test you plan on creating from the dialog provided.
4. Type the name of the test in the appropriate space.


DELETING A TEST CASE

Note: This must be done in the 'Plan Test' tab

1. Select the test you want to delete by clicking on it once.
2. Click on the Test->Delete button at the bottom left corner of the page.
3. Confirm your action in the confirmation dialog presented.


CAUTION

1. Test vs. Test Case vs. Test Script
Although the terms test, test case and test script are used interchangeably, they are quite different.

* A test script is a part of a test case.
* In TestDirector, the term 'Test' refers to a TEST CASE
* In WinRunner, QTP & LoadRunner, the term 'Test' refers to a TEST SCRIPT

2. Validations
You can put any information you want in the fields of a test case. Be careful when you do this, though, as it may make it very difficult to search for and organize your test cases later.

WINRUNNER vs. QTP [Comparison]

Environments (common)

Environments that are supported by both QTP and WinRunner. This means that for these environments, Mercury has provided add-ins for both QTP & WinRunner.

Web:
* Internet Explorer
* Netscape
* AOL

Desktop:
* ActiveX Controls
* Visual Basic
* C/C++
* AWT & JFC

Environments (different)

Environments that are supported by only one of the two tools.

WinRunner:
* PowerBuilder
* Forte
* Delphi
* Centura
* SmallTalk

QTP:
* .NET
* Flash
* XML Web Services
* Stingray

User Model

How users interact with the application

WinRunner:
* Focus is on the test script
* Requires familiarity with programming
* Very powerful

QTP:
* Synchronized test script and Active Screen
* Has an expert mode for programmers
* Easy, yet powerful

Test Creation Process

  1. Create the GUI Map (WinRunner) or Object Repository (QTP)
  2. Create the test
     - Record the script
     - Edit the script, adding one or more of the following:
       * Verification
       * Synchronization
       * Checkpoints
       * Data parameterization
  3. Debug the test
  4. Run the test
  5. View the results
  6. Track defects
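The edit step is where most of the work happens. A hypothetical Python sketch of those additions in miniature: a synchronization wait, a checkpoint-style verification, and data parameterization over several input rows (all names are illustrative; `fake_login` stands in for the application under test):

```python
import time

def wait_for(condition, timeout=2.0, interval=0.05):
    """Synchronization point: poll until the condition holds or time runs out."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

def checkpoint(name, actual, expected):
    """Verification point: compare actual vs. expected and record the outcome."""
    status = "PASS" if actual == expected else "FAIL"
    return {"checkpoint": name, "status": status}

def fake_login(user, password):
    # Stand-in for driving the real application under test.
    return password.startswith("pass")

# Data parameterization: one recorded script, many data rows.
data_rows = [("user1", "pass1", True), ("user2", "wrong", False)]

results = []
for user, password, expected_ok in data_rows:
    wait_for(lambda: True)  # e.g. wait until the login page has loaded
    ok = fake_login(user, password)
    results.append(checkpoint(f"login:{user}", ok, expected_ok))
```

Each row drives the same script once, so adding coverage is a matter of adding data, not code.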

Script Recording Modes

WinRunner:
* Context Sensitive - uses a flat object hierarchy
* Analog - captures keyboard input, mouse clicks, and the mouse path

QTP:
* Context Sensitive - uses a multi-level object hierarchy
* Low-level - uses mouse co-ordinates

Scripts

How scripts are created and stored.

WinRunner:
* Programmatic representation only
* TSL, similar to C
* Procedural language
* Uses objects from the GUI Map

QTP:
* Two modes: icon-based and programmatic representation
* VBScript, similar to VB
* Object-oriented language
* Uses objects from the Object Repository

Object Storage and Operations

How QTP/WinRunner recognize the objects in an AUT (application under test) and how they store the information about these objects.

WinRunner

QTP

Stored in a flat hierarchy

Multi level object hierarchy

Viewed using GUI Spy

Viewed using Object Spy

Stored in GUI Map

Stored in Object Repository

Creates temporary GUI Map file to hold new objects

Automatically saves object repository
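The flat vs. multi-level distinction can be illustrated with plain data structures. A hypothetical Python sketch (the object names are invented, and this is not how either tool actually stores its maps):

```python
# WinRunner-style GUI Map: a flat lookup from logical name to properties.
gui_map = {
    "OK_button":  {"class": "push_button", "label": "OK"},
    "Login_edit": {"class": "edit", "attached_text": "Login:"},
}

# QTP-style Object Repository: objects nested under their parent window.
object_repository = {
    "LoginWindow": {
        "properties": {"class": "Window", "title": "Login"},
        "children": {
            "OK_button":  {"class": "WinButton", "text": "OK"},
            "Login_edit": {"class": "WinEdit", "attached_text": "Login:"},
        },
    },
}

def find_flat(name):
    # One-step lookup: every object lives at the top level.
    return gui_map[name]

def find_nested(window, name):
    # Walk the hierarchy: parent window first, then the child object.
    return object_repository[window]["children"][name]
```

The multi-level layout lets two windows each have their own "OK_button" without a naming clash, which is harder to express in a flat map.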

Additional Items

Miscellaneous details

WinRunner:
* Transaction measurement - through TSL programming
* Data-driven operations - create iterations programmatically
* Create code using the Function Generator
* Exception handling - uses the Exception Editor

QTP:
* Transaction measurement - through the tree view and VBScript programming
* Data-driven operations - create iterations automatically and programmatically
* Create code using the Method Wizard
* Exception handling - uses the Recovery Scenario Manager

Tuesday, May 23, 2006

Testing Types [Updated 5/30]

The most important thing to note about this section is that many of the testing types are not mutually exclusive. You could conceivably be doing black-box testing at the same time as system testing.



Accessibility Testing
Testing to ensure that a software interface meets accessibility standards for differently abled individuals. For certain systems, it is required by law that the system interface meet certain federal accessibility requirements.

Acceptance Testing (aka End User Testing):
The testing phase typically carried out by the paying client before they accept delivery of the software product. ('aka' = also known as.)

Automated Testing:
Software testing using automated testing tools. This involves using tools to create an automated test script which can later be executed in an unattended state.

Ad-hoc Testing:
An unstructured form of testing where functionality is tested based on the biases of the tester. It is often used to quickly test a specific piece of functionality.

Black Box Testing:
Testing a system by providing input and examining output, without knowledge of, or regard for, the internal code of the system.

Functional Testing:
Testing to verify the functionality of an application as dictated by the requirements.

Integration Testing:
Testing that occurs during the phase of the application development lifecycle where the different modular elements that make up a software system are being joined (integrated) together.

Load Testing:
Testing an application to measure its performance under the desired load in which the application is expected to operate.
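A toy load-test harness might look like the following hypothetical Python sketch, which drives a stand-in transaction with several concurrent virtual users and collects response times (`handle_request` is an invented placeholder for the system under test):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(i):
    """Stand-in for one transaction against the system under test."""
    start = time.perf_counter()
    time.sleep(0.01)  # simulated work in place of a real request
    return time.perf_counter() - start

def run_load(users=5, requests_per_user=4):
    # Each worker thread plays the role of one virtual user.
    with ThreadPoolExecutor(max_workers=users) as pool:
        timings = list(pool.map(handle_request, range(users * requests_per_user)))
    return timings

timings = run_load()
avg_response = sum(timings) / len(timings)
```

Real load tools additionally ramp users up gradually and report percentiles, but the shape is the same: concurrent workers, timed transactions, aggregated results.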

Manual Testing:
Software testing process where a tester verifies functionality of an application/system by physically interacting with the application.

Penetration Testing:
A form of testing used to test the security of software systems. Penetration testing often involves the use of specialized tools to attack the system in order to find out where security vulnerabilities lie.

Performance Testing:
Testing an application to measure its performance during use. This is similar to load testing (for multi-user applications) and Benchmark Testing (for single user applications).

Regression Testing:
This involves running test scripts created for a previous version of the application against a later version. Regression tests are run to verify that functionality which worked in the previous version has not broken (regressed) in the new one.
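In miniature, a regression run re-executes the same checks against two versions of the code. A hypothetical Python sketch (`discount_v1`/`discount_v2` are invented examples):

```python
def discount_v1(price):
    # Previous version: flat 10% discount.
    return price * 0.9

def discount_v2(price):
    # Later version: large orders now get a 20% discount instead.
    return price * 0.9 if price < 100 else price * 0.8

def regression_suite(fn):
    """Re-run the checks that were originally written against v1."""
    failures = []
    for price, expected in [(50, 45.0), (200, 180.0)]:
        if fn(price) != expected:
            failures.append(price)
    return failures

old_failures = regression_suite(discount_v1)  # the suite passes on v1
new_failures = regression_suite(discount_v2)  # the 200 case flags the changed behavior
```

Whether the flagged case is a bug or an intentional change is then a question for the requirements, not the test.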

User Acceptance Test:
This is often carried out by the end user, primarily to ensure that the software that has been created meets with their approval before checks are signed.

Sanity Testing
This type of testing is done in order to check whether the application is ready for further major testing and is working properly at the minimum expected level, without failing.

Security testing:
Testing the application from a security perspective to ensure that users can do everything, and only everything, that they have been given the access rights to do.

Smoke Testing:
aka Sanity Testing
Testing the application in order to ensure that the current build of the application/system is sufficiently stable for further, more comprehensive, testing to be performed.

Stress Testing:
This involves placing the system at maximum load for an extended period of time to monitor its performance under stress.

System Testing:
Testing that takes place when all the modular units that make up a software system have been fully integrated. System testing therefore happens after integration testing.

User Interface Testing
Testing the application's user interface to help ensure that it is user friendly and (optionally) meets standard software accessibility guidelines.

Unit Testing:
This is done by developers, as it involves testing the individual components of a software system. 'Unit' is defined loosely enough to be interpreted as anything from a function to a class.

Usability Testing:
Testing to see how easy it is to use a web site or web application. The usability of an application is often referred to as its 'user-friendliness'.

White Box Testing:
A testing technique in which an explicit knowledge of the internal workings of a system is used in testing the system.