Technical glossary
This glossary is intended to help you understand some of the technical terms used by the Pensions Dashboards Programme.
Term | Definition |
---|---|
acceptance testing | the final stage in the testing lifecycle conducted by end users with the purpose of accepting or rejecting the system before release |
actual result | the system status or outcome after a test has been executed. An anomaly or deviation is when the actual results differ from the expected result |
ad hoc testing | unstructured testing: that is testing carried out informally without test cases or other written test instructions |
alpha testing | operational testing conducted by potential users, customers, or an independent test team at the vendor’s site. Alpha testers should not be from the group involved in the development of the system, in order to maintain their objectivity. Alpha testing is sometimes used as acceptance testing by the vendor |
anomaly | any irregular software behaviour that deviates from expectations based on requirements specifications, design documents, standards etc. A good way to find anomalies is by testing the software |
application programming interface (API) | a set of programming code, which allows two applications to talk to each other |
authorisation server | the software that manages the authorisation process within the ecosystem |
beta testing | test that comes after alpha tests and is performed by people outside of the organisation that built the system. Beta testing is especially valuable for finding usability flaws and configuration problems |
big-bang integration | integration testing strategy in which every component of a system is assembled and tested together; contrast with other integration testing strategies in which system components are integrated one at a time |
black box testing | testing in which the test object is seen as a “black box” and the tester has no knowledge of its internal structure. The opposite of white box testing |
bottom-up integration | an integration testing strategy in which you start integrating components from the lowest level of the system architecture. Other techniques are the big-bang integration and top-down integration |
boundary value analysis | a black box test design technique that tests input or output values that are on the edge of what is allowed or at the smallest incremental distance on either side of an edge. For example, an input field that accepts text between one and 10 characters has six boundary values: 0, 1, 2, 9, 10 and 11 characters |
BS 7925-1 | a testing standards document containing a glossary of testing terms. BS stands for British Standard |
BS 7925-2 | a testing standards document that describes the testing process, primarily focusing on component testing. BS stands for British Standard |
bug | this represents a fault or a defect. The International Software Testing Qualifications Board (ISTQB) glossary explains that “…a human being can make an error (mistake), which produces a defect (fault, bug) in the program code, or in a document. If a defect in code is executed, the system may fail to do what it should do (or do something it shouldn’t), causing a failure. Defects in software, systems or documents may result in failures, but not all defects do so.” |
computer-aided software testing (CAST) | a general term for automated testing tools |
central digital architecture | we use the term digital architecture to refer to the group of elements that make dashboards work. These include the ecosystem components that PDP is responsible for: the pension finder service, the consent and authorisation service, the identity service and the governance register |
change control board (CCB) | a group responsible for evaluating, prioritising, and approving/rejecting requested changes to an IT system |
change request | a type of document describing a needed or desired change to the system |
checklist | a simpler form of a test case, often merely a document with short test instructions (“one-liners”). An advantage of checklists is that they are easy to develop. A disadvantage is that they are less structured than test cases |
client | the part of an organisation that orders an IT system from the internal IT department or from an external supplier/vendor |
capability maturity model integration (CMMI) | a framework for improving process efficiency in systems development and maintenance |
code coverage | a generic term for analysis methods that measure the proportion of code in a system that is executed by testing. Expressed as a percentage, for example, 90% code coverage |
code standard | description of how a programming language should be used within an organisation |
compilation | the activity of translating lines of code written in a human-readable programming language into machine code that can be executed by the computer |
compliance | formal testing against the central digital architecture platform to demonstrate that a data provider or dashboard supplier has successfully implemented API messaging standards including error and retry behaviour |
component | the smallest element of the system, such as a class or a DLL |
component integration testing | another term for integration test |
component testing | test level that evaluates the smallest elements of the system. Also known as unit test, program test and module test |
compulsory onboarding | beginning in programme phase four, data providers will be compelled to connect to the ecosystem and make individuals’ pensions information available via dashboards, in the order determined by DWP regulations and consistent with PDP standards |
configuration management | routines for version control of documents and software/program code, as well as managing multiple system release versions |
configuration testing | a test to confirm that the system works under different configurations of hardware and software, such as testing a website using different browsers |
conformance | the internal testing phase a dashboard or data provider supplier performs, connected to the reference environments and ensuring they can evidence conformance to the API standards and are fit to proceed to formal testing |
consent and authorisation service | part of the central digital architecture. This component acts as the ecosystem trust anchor, operating the authorisation protocol and managing registration of software entities. It steps up authentication by handing off the user to the identity service when necessary and provides a user interface (UI), allowing the user to provide and manage their consents, define and manage a policy against each of their found pension identifiers and provide any self-asserted claims as part of the find process |
consent user interface (consent UI) | what individuals use to interact with the ecosystem, when they provide their consent and authorisation to locate their pensions |
context-driven testing | testing that makes use of debugging techniques inspired by real-world usage conditions. It is a method of testing that encourages testers to develop testing opportunities based on the specific details of any given situation |
commercial off the shelf (COTS) | software that can be bought on the open market. Also called “packaged” software |
dashboard provider (DB) | the organisations that will develop front-end dashboards to connect to the PDP ecosystem. The Money and Pensions Service (MAPS) is required by the government to create a front-end dashboard and we anticipate that other organisations (authorised by the FCA) will also create dashboards |
data provider (DP) | data providers are the organisations that provide data to dashboards. This includes pension providers, schemes, trusts, third-party administrators and integrated service providers (ISPs) |
debugging | the process in which developers identify, diagnose, and fix errors found. See also bug and defect |
decision table | a test design and requirements specification technique. A decision table describes the logical conditions and rules for a system. Testers use the table as the basis for creating test cases |
defect | a flaw in a component or system that can cause the component or system to fail to perform its required function. A defect, if encountered during execution, may cause a failure of the component or system |
defect report | a document used to report a defect in a component, system, or document. Also known as an incident report |
deliverable | any product that must be delivered to someone other than the author of the product. Examples of deliverables are documentation, code and the system |
ecosystem / pensions dashboards ecosystem | multiple parties, technical services and governance need to be connected in what we are referring to as an ecosystem. This is made up of the supporting digital architecture, which allows dashboards to work, the dashboards themselves which individuals interact with, pension providers’ find and view interfaces, and the governance system which monitors the whole ecosystem |
dynamic testing | testing performed while the system is running |
end-to-end testing | testing used to check whether an application performs as expected from start to finish. This technique can be used to identify system dependencies and confirm that the integrity of data transfer is maintained across different system components |
entry criteria | criteria that must be met before you can initiate testing. An example is ensuring that the test cases and test plans are complete before testing can start |
equivalence partitioning | a test design technique based on the fact that data in a system is managed in classes, such as intervals. Because of this, you only need to test a single value in every equivalence class. For example, you can assume that a calculator performs all addition operations in the same way; so, if you test one addition operation, you have tested the entire equivalence class |
error | a human action that produces an incorrect result |
error description | the section of a defect report where the tester describes the test steps he/she performed, what the outcome was, what result he/she expected, and any additional information that will assist in troubleshooting |
error guessing | experience-based test design technique where the tester develops test cases based on his/her skill and intuition, and experience with similar systems and technologies |
execute | when a program is executing, it means that the program is running. When you execute or conduct a test case, you can also say that you are running the test case |
exhaustive testing | a test approach in which you test all possible inputs and outputs |
exit criteria | criteria that must be fulfilled for testing to be considered complete, such as that all high-priority test cases are executed, and that no open high-priority defect remains. Also known as completion criteria |
expected result | a description of the test object’s expected status or behaviour after the test steps are completed. Part of the test case |
exploratory testing | a test design technique based on the tester’s experience; the tester creates the tests while he/she gets to know the system and executes the tests |
external supplier | a supplier/vendor that doesn’t belong to the same organisation as the client/buyer |
extreme programming | an agile development methodology that emphasises the importance of pair programming, where two developers write program code together. The methodology also implies frequent deliveries and automated testing |
factory acceptance test | acceptance testing carried out at the supplier’s facility, as opposed to a site acceptance test, which is conducted at the client’s site |
failure | deviation of the component or system under test from its expected result |
formal review | a review that proceeds according to a documented review process that may include, for example, review meetings, formal roles, required preparation steps, and goals. Inspection is an example of a formal review |
functional integration | an integration testing strategy in which the system is integrated one function at a time. For example, all the components needed for the “search customer” function are put together and tested one by one |
functional testing | testing of the system’s functionality and behaviour; the opposite of non-functional testing |
grey-box testing | testing that uses a combination of white box and black box testing techniques to test a system when the tester has limited knowledge of its internal code |
governance register | part of the central digital architecture. This component acts as the ecosystem trust root, managing and operating a private PKI, which provisions static trust by issuing cryptographic certificates to ecosystem participants allowing them to establish a connection with central digital architecture. It contains various registers of entities that have been onboarded onto the ecosystem and provides ecosystem monitoring and auditing capabilities that feed into the Operational Management Centre and Security Operations Centre |
IEEE 829 | an international standard for test documentation published by the IEEE organisation. The full name of the standard is IEEE Standard for Software Test Documentation. It includes templates for the test plan, various test reports, and handover documents |
incident | a condition that is different from what is expected, such as a deviation from requirements or test cases |
independent testing | a type of testing in which testers’ responsibilities are divided up, in order to maintain their objectivity. One way to do this is by giving different roles the responsibility for various tests. You can use different sets of test cases to test the system from different points of view |
identity service (IDS) | part of the central digital architecture. This component assures a user’s identity to the confidence and assurance level specified by PDP and provides the users verified data attributes needed to find their pensions |
informal review | a review that isn’t based on a formal procedure |
inspection | an example of a formal review technique |
installation test | a type of test meant to assess whether the system meets the requirements for installation and uninstallation. This could include verifying that the correct files are copied to the machine and that a shortcut is created in the application menu |
integration | in the context of the PDP, this is the end to end integration of a data provider or dashboard provider, in compliance with the message standards |
integration testing | a test level meant to show that the system’s components work with one another. The goal is to find problems in interfaces and communication between components |
internal supplier | developer that belongs to the same organisation as the client. The IT department is usually the internal supplier |
international software testing qualifications board (ISTQB) | responsible for international programs for testing certification |
iteration | a development cycle consisting of a number of phases, from formulation of requirements to delivery of part of an IT system. Common phases are analysis, design, development, and testing. The practice of working in iterations is called iterative development |
ITWG | integration test working group |
JUnit | a framework for testing Java applications, specifically designed for automated testing of Java components |
load testing | a type of performance testing conducted to evaluate the behaviour of a component or system with increasing load, eg numbers of concurrent users and/or numbers of transactions. Used to determine what load can be handled by the component or system |
maintainability | a measure of how easy a given piece of software code is to modify in order to correct defects, improve or add functionality |
maintenance | activities for managing a system after it has been released in order to correct defects or to improve or add functionality. Maintenance activities include requirements management, testing, development amongst others |
naming convention | the standard for creating names for variables, functions, and other parts of a program. For example, strName, sName and Name are all technically valid names for a variable, but if you don’t adhere to one structure as the standard, maintenance will be very difficult |
negative testing | a type of testing intended to show that the system works well even if it is not used correctly. For example, if a user enters text in a numeric field, the system should not crash |
non-functional testing | testing of non-functional aspects of the system, such as usability, reliability, maintainability, and performance |
NUnit | an open-source framework for automated testing of components in Microsoft .Net applications |
onboarding | connecting a supplier end point to the central digital architecture platform for the purposes of test or live running as appropriate |
OpenID Connect (OIDC) | an identity layer, which sits on top of the OAuth 2.0 protocol. It allows users to securely sign in to an application |
open source | a form of licensing in which software is offered free of charge. Open-source software is frequently available via download from the internet |
operational testing | tests carried out when the system has been installed in the operational environment (or simulated operational environment) and is otherwise ready to go live. Intended to test operational aspects of the system eg recoverability, co-existence with other systems and resource consumption |
outcome | the result after a test case has been executed |
pair programming | a software development approach where two developers sit together at one computer, while programming a new system. While one developer codes, the other makes comments and observations, and acts as a sounding board. The technique has been shown to lead to higher quality thanks to the continuous code review – bugs and errors are avoided because the team catches them as the code is written |
pair testing | test approach where two people, eg two testers, a developer and a tester, or an end-user and a tester, work together to find defects. Typically, they share one computer and trade control of it while testing. One tester can act as observer when the other performs tests |
protection API token (PAT) | a long-lived authorisation token, representing a user’s consent at the consent and authorisation service. It is part of the UMA authorisation process and identifies the correct authorisation server to pension providers’ resource server |
persisted claims token (PCT) | this is part of the UMA authorisation process. A persisted claims token holds on to permissions collected during one authorisation process, so that users can access the system easily in future, without having to provide the same permissions again |
public key infrastructure (PKI) | a public key infrastructure allows the secure exchange of online data. It uses public and private cryptographic key pairs to unlock the information to authorised individuals |
permission ticket / token (PMT) | issuing permission tokens is an important part of the UMA authorisation process. Within the PDP ecosystem, the consent and authorisation service will issue a permission token to the data provider to release the pension information to the user’s dashboard, provided the user has given their consent to do so |
pension provider find interface (PPFI) | this is the means by which pension providers interact with the ecosystem, when they are receiving find data, ie the instruction to look for a particular individual’s pension(s) |
pension finder service (PFS) | part of the central digital architecture. This component is orchestration middleware; it has no user interface. It is responsible for distributing find requests across the data provider endpoints and managing the low-level interactions to achieve message delivery to providers. It also manages traffic volumes and handles data provider endpoint failures, operating a cache of find requests, a time out process for each endpoint for requests and a back-off retry process to throttle traffic |
pension identifier (PeI) | term used to cover all separately identifiable pensions, in which some individual(s) may have an interest. It is the identifier of a pension, not in itself a statement of ownership. Its format is a text string in the form of a uniform resource name (URN) and provides a pointer to the pension asset. It is capable of being dereferenced by a pension dashboard and resolved into a URL, which provides the view endpoint that can serve the pension details associated with the PeI |
pension providers | pension providers are organisations that provide pensions to individuals in the UK. This includes pension providers and schemes and DWP (State Pension) |
performance testing | a test to evaluate whether the system meets performance requirements such as response time or transaction frequency |
positive testing | a test aimed to show that the test object works correctly in normal situations. For example, a test to show that the process of registering a new customer functions correctly when using valid test data |
postconditions | environmental and state conditions that must be fulfilled after a test case or test run has been executed |
pension providers’ view interface (PPVI) | the means by which pension providers receive view requests from users at dashboards, check their authorisation at the consent and authorisation service, and if authorised return view data to dashboards |
preconditions | environmental and state conditions that must be fulfilled before the component or system can be tested. May relate to the technical environment or the status of the test object. Also known as prerequisites or preparations |
priority | the level of importance assigned to defect, requirement, change, etc |
professional tester | a person whose sole job is testing |
quality | the degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations |
quality assurance (QA) | systematic monitoring and evaluation of various aspects of a component or system to maximise the probability that minimum standards of quality are being attained |
reference environment | the test environment provided by PDP to which dashboard and data provider suppliers connect during conformance testing, in order to evidence conformance to the API standards before proceeding to formal testing |
record and playback tool | test execution tool for recording and playback of test cases often used to support automation of regression testing. Also known as capture/playback |
regression testing | a test activity generally conducted in conjunction with each new release of the system, in order to detect defects that were introduced (or discovered) when prior defects were fixed |
release | a new version of the system under test. The release can be either an internal release from developers to testers, or release of the system to the client |
release management | a set of activities geared to create new versions of the complete system. Each release is identified by a distinct version number |
release testing | a type of non-exhaustive test performed when the system is installed in a new target environment, using a small set of test cases to validate critical functions without going into depth on any one of them |
requirements management | a set of activities covering gathering, elicitation, documentation, prioritisation, quality assurance and management of requirements for an IT system |
re-testing | a test to verify that a previously reported defect has been corrected |
retrospective meeting | a meeting at the end of a project/a sprint during which the team members evaluate the work and learn lessons that can be applied to the next project or sprint |
review | a static test technique in which the reviewer reads a text in a structured way in order to find defects and suggest improvements. Reviews may cover requirements documents, test documents, code, and other materials, and can range from informal to formal |
reviewer | a person involved in the review process that identifies and documents discrepancies in the item being reviewed. Reviewers are selected in order to represent different areas of expertise, stakeholder groups and types of analysis |
risk | a factor that could result in future negative consequences. Is usually expressed in terms of impact and likelihood |
risk-based testing | a structured approach in which test cases are chosen based on risks. Test design techniques like boundary value analysis and equivalence partitioning are risk-based. All testing ought to be risk-based |
requesting party token (RPT) | these are short-lived authorisation tokens or required access tokens. Within the PDP ecosystem, an RPT is a token that the pension finder service (the requesting party in this instance) will send to data providers, when an individual is trying to find their pensions via a pensions dashboard. RPTs are also used to request view data from providers and represent consent permissions for a specific PeI |
software as a service (SaaS) | a method of software delivery and licensing where users access software via a subscription, rather than buying and installing it on individual devices |
scalability testing | a component of non-functional testing, used to measure the capability of software to scale up or down in terms of its non-functional characteristics |
scenario | a sequence of activities performed in a system, such as logging in, signing up a customer, ordering products, and printing an invoice. You can combine test cases to form a scenario especially at higher test levels |
scrum | an iterative, incremental framework for project management commonly used with agile software development |
session-based testing | an approach to testing in which test activities are planned as uninterrupted, quite short, sessions of test design and execution, often used in conjunction with exploratory testing |
severity | the degree of impact that a defect has on the development or operation of a component or system |
site acceptance testing (SAT) | acceptance testing carried out onsite at the client’s location, as opposed to the developer’s location. Testing at the developer’s site is called factory acceptance testing (FAT) |
state transition testing | a test design technique in which a system is viewed as a series of states, valid and invalid transitions between those states, and inputs and events that cause changes in state |
static testing | testing performed without running the system. Document review is an example of a static test |
stress testing | testing meant to assess how the system reacts to workloads (network, processing, data volume) that exceed the system’s specified requirements. Stress testing shows which system resource (eg memory or bandwidth) is first to fail |
system integration testing | a test level designed to evaluate whether a system can be successfully integrated with other systems (eg that the tested system works well with the HR system). May be included as part of system-level testing, or be conducted as its own test level in between system testing and acceptance testing |
system testing | test level aimed at testing the complete integrated system. Both functional and non-functional tests are conducted |
test case | a structured test script that describes how a function or feature should be tested, including test steps, expected results, preconditions and postconditions |
test data | information that completes the test steps in a test case with, for example, what values to input. In a test case where you add a customer to the system the test data might be customer name and address. Test data might exist in a separate test data file or in a database |
test driven development | a development approach in which developers write test cases before writing any code |
test driver | a software component (driver) used during integration testing in order to emulate (ie to stand in for) higher-level components of the architecture. For example, a test driver can emulate the user interface during tests |
test environment | the technical environment in which the tests are conducted, including hardware, software, and test tools. Documented in the test plan and/or test strategy |
test execution | the process of running test cases on the test object |
test level | a group of test activities organised and carried out together in order to meet stated goals. Examples of levels of testing are component, integration, system, and acceptance test |
test log | a document that describes testing activities in chronological order |
test object | the part or aspects of the system to be tested. Might be a component, subsystem, or the system as a whole |
test plan | a document describing what should be tested by whom, when, how, and why. The test plan is bounded in time, describing system testing for a particular version of a system, for example. The test plan is to the test manager what the project plan is to the project manager |
test policy | a document that describes how an organisation runs its testing processes at a high level. It may contain a description of test levels according to the chosen life cycle model, roles and responsibilities, required/expected documents, etc |
test process | the complete set of testing activities, from planning through to completion. The test process is usually described in the test policy |
test report | a document that summarises the process and outcome of testing activities at the conclusion of a test period. Contains the test manager’s recommendations, which in turn are based on the degree to which the test activities attained its objectives. Also called test summary report |
test run | a group of test cases eg all the test cases for system testing with owner and end-date. Tests on one test level are often grouped into a series of tests, ie two-week cycles consisting of testing, re-testing, and regression testing. Each series can be a test run |
test script | automated test case that the team creates with the help of a test automation tool. Sometimes also used to refer to a manual test case, or to a series of interlinked test cases |
test specification | a document containing a number of test cases that include steps for preparing and resetting the system. In a larger system you might have one test specification for each subsystem |
test strategy | document describing how a system is usually tested |
test stub | a test program used during integration testing in order to emulate lower-level components. For example, you can replace a database with a test stub that provides a hard-coded answer when it is called |
test suite | a group of test cases eg all the test cases for system testing |
testing | a set of activities intended to evaluate software and other deliverables to determine whether they meet requirements, to demonstrate that they are fit for purpose and to find defects |
third-party component | a part of an IT system that is purchased as a packaged/complete product instead of being developed by the supplier/vendor |
top-down integration | an integration test strategy, in which the team starts to integrate components at the top level of the system architecture |
token | a token is a digital way to access something that is protected |
test process improvement (TPI) | a method of measuring and improving the organisation’s maturity with regard to testing |
traceability | analysis of a prior chain of events, as well as the ability to follow an object such as a document or a program through various versions. Traceability enables you to determine the impact of a change in requirements, assuming you also develop a traceability matrix |
traceability matrix | a table showing the relationship between two or more baselined documents, such as requirements and test cases, or test cases and defect reports. Used to assess what impact a change will have across the documentation and software, for example, which test cases will need to be run when given requirements change |
unit testing | test level that evaluates the smallest elements of the system. Also known as component test, program test and module test |
unit test framework | software or class libraries that enable developers to write test code in their regular programming language. Used to automate component and integration testing |
usability | the capability of the software to be understood, learned, used and attractive to the user |
usability testing | a test technique for evaluating a system’s usability. Frequently conducted by users performing tasks in the system while they describe their thought process out loud |
use case | a type of requirements document in which the requirements are written in the form of sequences that describe how various actors in the system interact with the system |
user-managed access 2.0 (UMA) | an open standard authorisation protocol that extends the widely adopted OAuth 2.0 protocol and gives resource owners (pension owners) the ability to manage access to their resources (pension data) by defining an access policy at a centralised authorisation server, which enforces it. The access policy for the pension owner’s pension data can be defined on a party-to-party basis (ie giving access to another person such as an IFA or MaPS guider) instead of giving access to an application |
user-managed access resource server (UMA RS) | within the UMA protocol, the resource servers hold the data that needs to be unlocked via the appropriate permissions. So within the PDP ecosystem, these are the pension providers’ servers, which hold the information about individuals’ pensions |
V-model | a software development lifecycle model that describes requirements management, development, and testing on a number of different levels |
validation | tests designed to demonstrate that the developers have built the correct system. Contrast with verification, which means testing that the system has been built correctly. A large number of validation activities take place during acceptance testing |
verification | tests designed to demonstrate that the developers have built the system correctly. Contrast with validation, which means testing that the correct system has been built. A large number of verification activities take place during component testing |
versioning | various methods for uniquely identifying documents and source files, eg with a unique version number. Each time the object changes, it should receive a new version number. See also release management |
waterfall model | a sequential development approach consisting of a series of phases carried out one by one. This approach is not recommended due to a number of inherent problems |
white box testing | a type of testing in which the tester has knowledge of the internal structure of the test object. White box testers may familiarise themselves with the system by reading the program code, studying the database model, or going through the technical specifications. Contrast with black box testing |
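Some of the test design techniques defined above can be illustrated with short sketches. The boundary value analysis entry gives the example of a field accepting between one and 10 characters; a minimal sketch of that example follows. The `is_valid_length` validator is hypothetical, invented here for illustration only.

```python
# Hypothetical validator used to illustrate boundary value analysis
# (the function name and rule are assumptions, not part of any PDP standard).
def is_valid_length(text):
    """Accepts text between 1 and 10 characters, per the glossary example."""
    return 1 <= len(text) <= 10

# The six boundary values for a 1-10 character field: 0, 1, 2, 9, 10 and 11.
boundary_cases = {0: False, 1: True, 2: True, 9: True, 10: True, 11: False}

for length, expected in boundary_cases.items():
    assert is_valid_length("x" * length) == expected, f"failed at length {length}"
```

Note that testing these six values also exercises both equivalence classes (valid and invalid lengths), so the sketch doubles as an example of equivalence partitioning.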
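The test stub entry describes replacing a database with a component that gives a hard-coded answer. A minimal sketch of that idea, using a hypothetical customer service and database (all names here are illustrative assumptions):

```python
# Sketch of a test stub: a hard-coded stand-in for a lower-level component
# (here, a hypothetical customer database) during integration testing.
class CustomerDatabaseStub:
    """Returns a fixed answer instead of querying a real database."""
    def find_customer(self, customer_id):
        return {"id": customer_id, "name": "Test Customer"}

class CustomerService:
    """The higher-level component under test; depends on a database component."""
    def __init__(self, database):
        self.database = database

    def greeting_for(self, customer_id):
        customer = self.database.find_customer(customer_id)
        return f"Hello, {customer['name']}"

# The stub lets us test CustomerService before the real database exists.
service = CustomerService(CustomerDatabaseStub())
assert service.greeting_for(42) == "Hello, Test Customer"
```

A test driver is the mirror image: it would call `CustomerService` from above, standing in for a user interface that has not yet been built.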
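The state transition testing entry views a system as states with valid and invalid transitions between them. A minimal sketch, using a hypothetical defect-report workflow (the states and transitions are illustrative assumptions, not a PDP process):

```python
# A small state transition model for a hypothetical defect workflow.
VALID_TRANSITIONS = {
    "open": {"assigned"},
    "assigned": {"fixed", "open"},
    "fixed": {"closed", "open"},  # re-opened if re-testing fails
    "closed": set(),
}

def transition(state, new_state):
    """Return the new state if the transition is valid, else raise ValueError."""
    if new_state not in VALID_TRANSITIONS[state]:
        raise ValueError(f"invalid transition: {state} -> {new_state}")
    return new_state

# Test a valid path: open -> assigned -> fixed -> closed.
state = "open"
for nxt in ("assigned", "fixed", "closed"):
    state = transition(state, nxt)
assert state == "closed"

# An invalid transition (open -> closed) must be rejected.
try:
    transition("open", "closed")
    assert False, "expected ValueError"
except ValueError:
    pass
```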
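The decision table entry describes tabulating logical conditions and rules, then deriving test cases from the table. A minimal sketch, assuming a hypothetical login rule (the `login` function and its conditions are invented for illustration):

```python
# A decision table for a hypothetical login rule, turned into test cases.
# Each row: conditions -> expected outcome.
decision_table = [
    ({"account_exists": True,  "password_correct": True},  "logged_in"),
    ({"account_exists": True,  "password_correct": False}, "rejected"),
    ({"account_exists": False, "password_correct": False}, "rejected"),
]

def login(account_exists, password_correct):
    """Illustrative system under test implementing the rules above."""
    if account_exists and password_correct:
        return "logged_in"
    return "rejected"

# Each row of the decision table becomes one test case.
for conditions, expected in decision_table:
    assert login(**conditions) == expected, f"rule failed for {conditions}"
```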