Glossary

Get familiar with TestGear system terms.

Test management system is a system designed to manage testing processes. It contains test documentation, test plans, test result history, data on autotests and their runs, and reports on test activities.

Work item is a unit of test documentation contained in the test management system. Work items can be organized in the test library by dividing them into sections. There are three types of work items in the TestGear system:

  • Test Case
  • Checklist
  • Shared step

Test case is a manual test that describes a test scenario: the pre- and postconditions of the test and the steps to be taken by the tester. If an autotest is written for a test case, you can link it to that test case. A test case with an autotest bound to it is considered automated, and the results of running the autotest are automatically transferred to the test case.

Checklist is a list of checks that specifies the steps to be performed by the tester. A checklist has no preconditions or postconditions and does not specify an expected result for each step. A checklist can be transformed into a test case by adding preconditions and postconditions and expanding the step descriptions.

Shared step is a step (or set of steps) stored in the test management system that can be reused in different test cases. Shared steps are created for actions that are repeated in several tests to avoid duplicating step descriptions in test documentation.

Test preconditions (entry criteria) are a set of general and specific conditions for permitting a process to go forward with a defined task, for example, a test phase. The purpose of preconditions is to prevent a task from starting when it would require more effort than fixing the entry criteria that have not been met.

Test postconditions (exit criteria) are a set of general and specific conditions, agreed upon in advance with the stakeholders, for permitting a process to be officially considered completed. The purpose of exit criteria is to prevent a situation where a task is considered completed while parts of it are still outstanding. Exit criteria are used for reporting as well as for planning when to stop testing.

Test plan is a document that describes the goals, approaches, resources, and schedule of planned test activities. It defines the test objects, the properties to be tested, the tasks, those responsible for the tasks, the degree of independence of each tester, the test environment, the test design method, the entry and exit criteria used and the reasons for their selection, and any risks that require contingency planning.

Test point is a unit of a test plan, which consists of a test scenario, the configuration on which the test is to be run, and the values of the input parameters used in the test. There may be several test points for the same test case, depending on the number of configurations and input parameters.

Test plan report is a document summarizing test tasks and results, compiled at regular intervals in order to compare testing progress against the baseline and to notify management of risks and alternatives that require a decision.

Autotest is a unit, API, end-to-end (e2e), or integration test that executes a specific test scenario.

Test metadata is information about an autotest that is stored in the code and transferred to TestGear. It includes the autotest ID, data about its steps, bound links, files, and labels, as well as additional data explicitly specified in the code using programming language facilities (annotations, attributes, decorators).

Test run is a run of autotests in TestGear. It is generated when autotests are selected and launched in the system, or it can be created via the API. The results of the autotests executed within a run reference that run, which groups them together.

Test result is a result assigned to a single test point, that is, a particular iteration of a test on a particular configuration with particular input parameters. Any number of results can be assigned to the same test point; they are tracked in the run history.

Adapter is an extension of a testing framework that collects autotest metadata, processes it, and passes it to TestGear through the API client. An adapter cannot work without the API client.