Indicators for Measuring Patient Experience

Overview

As other articles in this journal have explained, understanding what matters to patients is integral to the delivery of high quality health care, and this has now been adopted as policy within the NHS. However, if we are to move beyond rhetoric, it is important to assess what is being achieved and where services are still found wanting, before acting to improve patients’ experience of healthcare. Given the volumes of patient activity at all levels within health and social care, demonstrating achievement will require some quantitative measurement, and the Department of Health has already made a start by introducing various national accountability frameworks. However, whether undertaken at national or local level, it is important to ensure that measurement is sound and the results truly helpful in policy making, service delivery and management. This article discusses the use of indicators in the measurement and reporting of patient experience.

Assessing achievement in relation to what matters to patients

A key feature of an indicator is that it is a measurement made in the context of service objectives or standards. Any discussion of indicators must therefore first consider their context.

There has been a renewed focus recently on what patients can expect from the NHS. In its White Paper, Equity and Excellence: Liberating the NHS, the government said ‘We will put patients at the heart of the NHS, through an information revolution and greater choice and control’ (Department of Health, 2010).

Research by the King’s Fund and King’s College London (Robert and Cornwell, What Matters To Patients? Developing the Evidence Base for Measuring and Improving Patient Experience) showed that there is a substantial evidence base for what matters to patients and identified current shortfalls from expectations and what improvements are needed.

NHS Patient Experience Framework, Department of Health, February 2012

This Framework is based on a modified version of the Picker Institute Principles of Patient-Centred Care and provides a working definition of patient experience, agreed by the National Quality Board, to guide measurement across the NHS. It covers the following aspects:

  • Respect for patient-centred values, preferences, and expressed needs
  • Coordination and integration of care
  • Information, communication, and education
  • Physical comfort
  • Emotional support
  • Welcoming the involvement of family and friends
  • Transition and continuity
  • Access to care

There is broad consensus on these common factors in patient experience (Department of Health, NHS Outcomes Framework 2012-13). But how do we effectively assess what is being achieved, while acknowledging that different aspects of patient experience will have differing levels of priority in different service settings and patient environments? One approach is to develop high quality quantitative indicators alongside local audits and qualitative reviews.

The Department of Health (DH) has recently announced a shift in accountability from what is done to what is achieved with available resources, demonstrating continuing improvement. This is to be made operational via a range of outcomes frameworks. Domain Four of the NHS Outcomes Framework focuses on ensuring that patients have a positive experience of care. Within each domain, including Domain Four, there are overarching indicators and defined improvement areas that are likely to contribute to the overarching indicators. Patient experience indicators are thus firmly and formally embedded within NHS accountability frameworks for the future, at ministerial, policy, commissioning and management levels. Commissioners will want to build some of these into their contracts with service providers, but alongside these formal contractual requirements, patient experience needs to be assessed at all levels where there is direct contact between patients and service providers. Clinicians and other front line staff need to know, rather than assume, the extent to which expectations are being met.

The National Institute for Health and Clinical Excellence (NICE) has been asked to develop a set of quality standards for the NHS using evidence-based guidelines and other sources of information (http://pathways.nice.org.uk/). Each quality standard contains a number of quality statements reflecting particular aspects of care within a care pathway for a particular topic, such as diabetes or stroke. Each statement is accompanied by proposed sets of measures which could be used to assess the extent to which the expected quality is being achieved. NICE aims to develop a library of some 150 quality standards over the next few years. It has published two patient experience quality standards and will cross reference these within each clinical topic-based quality standard in the future.

Effective use of indicators should lead to organisations establishing appropriate local investigations to follow up indicator findings that have raised questions. Indicators show patterns and help identify departure from expectation but do not always provide answers.

There may also be tensions between the perspectives of patients and health care delivery staff, for example, a patient may make a fully informed choice, based on a personal trade-off between quantity and quality of life, which a clinician may respect but find difficult to accept. Conversely, a patient may not fully understand the clinical issues and may make choices based on experience of contact with service providers. These and other such aspects may ultimately be reflected in patient experience indicators.

Why use specific patient experience indicators?

NICE quality statements covering patient experience have to date been a part of each topic-based quality standard, for example diabetes or breast cancer. NICE recognises, however, that patient experience is a generic issue and has, more recently, published two patient experience quality standards for people who use adult NHS services: one for general adult services (http://publications.nice.org.uk/patient-experience-in-adult-nhs-services-improving-the-experience-of-care-for-people-using-adult-cg138) and another for mental health service users (http://publications.nice.org.uk/service-user-experience-in-adult-mental-health-improving-the-experience-of-care-for-people-using-cg136). The general Quality Standard contains quality statements covering the following aspects:

  1. Respect for the patient.
  2. Demonstrated competency in communication skills.
  3. Patient awareness of names, roles and responsibilities of healthcare professionals.
  4. Giving patients opportunities to discuss their health beliefs, concerns and preferences.
  5. Understanding treatment options.
  6. Shared decision making.
  7. Supporting patient choice.
  8. Asking for a second opinion.
  9. Tailoring healthcare services to the individual.
  10. Physical and psychological needs.
  11. Continuity of care.
  12. Coordinated care through the exchange of patient information.
  13. Sharing information with partners, family members and carers.
  14. Information about contacting healthcare professionals.

This shows the range of perspectives considered. Alongside each quality statement, a set of measures has been proposed to help assess whether patients are experiencing the level of service they expect to receive. Some of these measures will be developed as national indicators to become part of the various Outcomes Frameworks. Other aspects will not be suitable for quantitative measurement at national level but could be assessed locally through special local data collection, surveys, audits and service reviews.

Patient experience indicators do not operate in isolation and are likely to be used alongside other types of indicators, for example as in the five domains of the NHS Outcomes Framework. Organisations need to decide on the appropriate number and balance of indicators in a proposed set. Too many indicators may become resource intensive, unwieldy and overwhelming; too few may be too selective, unrepresentative of all services and service aspects, and may distort priorities. Given these trade-offs, it is necessary to ensure that indicators meet agreed criteria for good quality indicators, as described later in this article.

One way of handling large numbers of indicators is to create hierarchies, with summary indicators for use at a higher level of management, underpinned by more detailed indicators that are useful at a lower level of service delivery and that may explain the high level indicator finding. The indicators within the patient experience domain of the NHS Outcomes Framework, listed below, illustrate this.

Ensuring that people have a positive experience of care

Overarching indicators

  • 4a Patient experience of primary care:
    1. GP services.
    2. GP Out of Hours services.
    3. NHS Dental Services.
  • 4b Patient experience of hospital care.

Improvement areas

  • Improving people’s experience of outpatient care: 4.1 Patient experience of outpatient services.
  • Improving hospitals’ responsiveness to personal needs: 4.2 Responsiveness to in-patients’ personal needs.
  • Improving people’s experience of accident and emergency services: 4.3 Patient experience of A&E services.
  • Improving access to primary care services: 4.4 Access to (i) GP services and (iii) NHS dental services.
  • Improving women and their families’ experience of maternity services: 4.5 Women’s experience of maternity services.
  • Improving the experience of care for people at the end of their lives: 4.6 An indicator to be derived from the survey of bereaved carers.
  • Improving experience of healthcare for people with mental illness: 4.7 Patient experience of community mental health services.
  • Improving children and young people’s experience of healthcare: 4.8 An indicator to be derived from a Children’s Patient Experience Questionnaire.

Concepts, frameworks and definitions of terms

What is the difference between an indicator and a metric?

A metric is a precise measure of a known attribute, for example a speedometer measuring the speed of a car. Examples within clinical care include generic or condition/procedure specific scales (e.g. the Oxford Hip Score). Metrics, whether based on physical instruments or questionnaires, need rigorous testing and calibration plus precision in use. Metrics are used widely to assess the quality of service delivery as experienced by patients.

An indicator is an aggregate statistic based on measurements or assessments made on people in a defined group or population, in the context of service objectives or standards. An example from a car dashboard would be the oil light, which is illuminated when the oil level drops below a certain point. An example within clinical care is the proportion of patients who experience improved mobility after hip replacement surgery.

A speedometer just states the fact of speed. A marker may indicate whether the speed is within or outside legal limits. However, these aspects may not be informative if looked at in isolation. A context may, for example, concern reaching a certain destination on time without breaching speed limits, but may be subject to other factors such as the state of the road, traffic, road works etc. In the same way, indicators used to assess health care and patient experience must also be used in context.
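To make the distinction concrete, here is a minimal sketch in Python, using hypothetical patient-level data: a metric is measured per patient, while an indicator aggregates those measurements for a defined group against a service objective.

```python
# Illustrative sketch (hypothetical data): a metric is measured per patient;
# an indicator aggregates those measurements for a defined group.

# Post-operative change in a patient-level metric score (e.g. a mobility
# scale), one value per patient; positive means improvement.
score_changes = [12, -3, 8, 0, 15, 7, -1, 9]

# Indicator: proportion of patients whose mobility improved after surgery.
improved = sum(1 for change in score_changes if change > 0)
indicator = improved / len(score_changes)

print(f"Proportion improved: {indicator:.1%}")  # 62.5% for these values
```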

Types of indicators and purposes

Some indicators lack precision but may alert managers that something is amiss and requires investigation, for example emergency readmission to hospital within a few days of discharge. Other indicators are sufficiently precise and attributable that they can be used to hold personnel accountable, for example improvement in mobility after hip replacement surgery.

Some indicators may have a defined reference point against which the performance of an organisation is judged. The reference point may be based on an evidence-based standard. In the absence of such standards, the average performance of a group of organisations (or the national value) is often used as a reference point. However, an average is made up of a mix of good and poor performance, so may not be an appropriate reflection of good practice. An alternative is to look at the distribution of indicator values across a number of organisations and pick a point in the distribution that is deemed to reflect best practice, on the basis that it has been achieved and hence is realistic.
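As an illustration of this alternative, the following sketch (with hypothetical organisational values) contrasts the national average with an achieved-in-practice benchmark taken from the distribution:

```python
# Sketch: choosing a reference point from the distribution of organisational
# indicator values rather than the average (hypothetical values).
import statistics

org_values = [61.2, 64.8, 58.9, 70.1, 66.4, 59.7, 72.3, 63.0, 68.5, 65.1]

mean_reference = statistics.mean(org_values)  # mixes good and poor performance

# Alternative: a benchmark that has actually been achieved, e.g. the value
# attained by the top quarter of organisations (75th percentile).
benchmark = statistics.quantiles(org_values, n=4)[2]

print(f"National average: {mean_reference:.1f}")
print(f"Achieved benchmark (75th percentile): {benchmark:.1f}")
```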

The NICE Quality Standard on Patient Experience contains five quality statements concerned with enabling patients to actively participate in their care. Each could have a separate indicator to monitor achievement. However, as they share the same concept, it may be possible to develop a single indicator that summarises achievement on all five fronts. Such an indicator is called a composite indicator and there are several different technical ways of producing them.
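The sketch below illustrates two of the possible construction methods, using hypothetical pass/fail results for five quality statements; it is not the method used for any particular national indicator.

```python
# Sketch of two common ways to build a composite indicator from component
# results (hypothetical patient-level pass/fail data for five statements).

patients = [
    # Each tuple: whether the patient's care met each of five statements.
    (True, True, True, True, True),
    (True, False, True, True, True),
    (True, True, True, False, False),
]

n = len(patients)

# Method 1: opportunity-based scoring - the proportion of all statement
# "opportunities" (patients x statements) that were met.
opportunity_score = sum(sum(p) for p in patients) / (n * 5)

# Method 2: all-or-none scoring - the proportion of patients for whom
# every one of the five statements was met.
all_or_none_score = sum(1 for p in patients if all(p)) / n

print(f"Opportunity-based composite: {opportunity_score:.1%}")  # 80.0%
print(f"All-or-none composite:       {all_or_none_score:.1%}")  # 33.3%
```

The two methods answer different questions: opportunity-based scoring credits partial achievement, whereas all-or-none scoring credits only those patients whose care met every statement, so the choice of construction materially affects the indicator value.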

Implications of the recent move to focus on outcomes

Outcome has been defined in the research literature as either a state at a point in time, a change over a period, or a result.

  • Outcome as a state at a point in time: for example, a patient’s quality of life, such as restricted mobility, at a point in time.
  • Outcome as a change in a state over a defined period, in the context of an expectation of change: change may or may not occur, and may or may not have been expected. For example, a disabled person’s activities of daily living may not deteriorate, when no deterioration was expected.
  • Outcome as a result of intervention: change may occur either by design or through a natural process. For example, hip joint pain may change for the better as the result of an intervention (e.g. pain relieving medication) or may change for the worse due to the lack of an effective intervention (e.g. hip replacement surgery). The relationship of a measurement to outcome (attribution) may be direct or indirect. For the latter, if there is good research evidence of the effectiveness of a treatment, e.g. hip surgery reducing pain, a measure of successful completion of surgery may be used as a proxy for a good result that may be expected in due course. It should be noted, when using service delivery as a proxy for outcome, that the outcome may manifest itself years, or even decades, in the future.

Figure 1: Health overview for breast cancer

An outcome is often a cumulative result of a variety of influences by a variety of organisations. In addition to a direct role using its own resources, the NHS also has a role in acting as an advocate for health and as a partner with other health-related organisations. Any attempt to monitor achievement of stated goals and to hold organisations to account should reflect this more complex reality. Health overviews (Figure 1) developed by the former National Centre for Health Outcomes Development (commissioned by the HSCIC), are an illustrative way of drawing a variety of linked health states and services together to provide a summary outcome overview.

They categorise improvements to reflect more specific aspects of outcome, such as success in reducing level of risk to health. Similarly, the action needed to achieve such success is divided into more precise categories, such as proactive interventions to avoid risk to health.

Figure 1 demonstrates the complexity of interactions between patients and healthcare systems, which has important implications for the interpretation and use of patient experience indicators. The various Outcomes Frameworks are based on three aspects of quality: effectiveness, patient experience and patient safety. Patient experience indicators relate in general to the quality of delivery of services to patients, as perceived by patients themselves, their families and carers. There is not always, however, a clear distinction between the three aspects of quality; there is some overlap and interaction between them. Patient Reported Outcome Measures (impact on the patient’s quality of life from a patient’s perspective) are categorised as ‘effectiveness’ indicators in the NHS Outcomes Framework but overlap somewhat with patient experience. There may also be a tension between the domains in that a patient may exercise choice and be very satisfied with a service but may have chosen a less effective intervention that leads to a poorer health outcome over time. Patient experience indicators are thus only one part of a whole complex picture of healthcare quality.

What makes a good indicator – aspects to consider

Indicators are only useful if they are robust and credible. Criteria for judging how robust an indicator is have been quoted in the international literature. The Health and Social Care Information Centre (HSCIC) has drawn on these to establish a new process, known as the Indicator Assurance Pipeline Process (https://indicators.ic.nhs.uk/webview/), to ensure that nationally approved indicator methods meet the highest standards. The process is governed by the HSCIC, under the auspices of the Quality Information Committee of the National Quality Board. The process includes peer review during development and appraisal, as well as wider consultation. The HSCIC has a structured approach for collecting information from organisations wishing to take an indicator method through the process.

The criteria used by the HSCIC to judge the quality of indicators may be used as a checklist by others when considering suitability of patient experience indicators for their own context. The criteria do not operate in isolation from each other and there may even be some overlap between them. Incomplete or poor quality data may compromise the scientific validity of an indicator and its usefulness.

Purpose

To be effective, an indicator should have a defined purpose. The following questions may help to clarify the purpose of an indicator:

  • Does the indicator already exist?
  • Are there any indicators that overlap or are similar?
  • What is the benefit in using this particular indicator?
  • Who will use the indicator?

Rationale

An indicator should have a rationale that is clearly set out, plausible, and capable of being understood by a diverse audience, including the public. Considerations include:

  • Is there a clear statement of the critical question that the indicator is seeking to capture?
  • Is there a clear statement about the scientific basis for the indicator; and the evidence base to support the assertions in the policy statement or service objective?

Definition

An indicator should have a name that identifies it and a definition that describes it. Questions for consideration include:

  • Is there a unique name for the indicator which differentiates it from, or specifically associates it with, other indicators, and which is sufficiently descriptive to convey meaning when referenced or quoted without supporting information? For example ‘Patient experience of hospital care’ is a clear brief indicator title.
  • Is there a clear and unambiguous description of the indicator, which is expressed both in plain English and the relevant clinical and/or statistical terminology of the particular subject in question, and which is suitable for publishing in its native form, to a diverse audience? For example, a more detailed description of the indicator will show that this is based on an average (mean) of five domain scores, each domain score being an average (mean) of scores out of 100 from a number of questions in each of the following domains: access and waiting times; safe, high quality co-ordinated care; information and choice; relationships with care personnel; and the care environment.
  • There are a variety of measurement units used by indicators. Is there a clear statement understandable by lay people, about the measurement unit and reasons why that unit has been chosen, for example why per 1,000 population?

Type of indicator

The indicators in the NHS Outcomes Framework are to be used in the context of accountability and thus have precise definitions, based on scores and responses to specific questions within validated questionnaires. Indicators can be general, e.g. covering all hospital inpatient care, or specific, e.g. covering A&E services. Unless there is a defined standard or target against which performance will be judged, variation between providers may be used to infer that achievement is less than optimal. Consider the following questions:

  • Does the indicator assess the performance question directly, or is it some form of proxy? In the context of indicators, a proxy is where measurement of one form of service or service perspective is used to represent another, for example being in a mixed sex ward as a proxy for loss of dignity, rather than its other meaning of a carer answering questions on behalf of a patient.
  • Is the indicator one that is specifically designed for judgement purposes, and which would therefore imply a degree of precision in the specification, or is it intended for improvement activity and therefore less precise?
  • Is the indicator a general one, or specific to a particular condition or scope?
  • Is the indicator relative (for comparison in relation to the performance of others) or absolute (for comparison against a specified benchmark)?

Scope and limitations

Is there a clear statement about the scope of the indicator? For example, all England, all males over 16 years of age, all diagnoses?

Is there an explicit definition of any exclusions from the scope? These might be specific instances, for example the exclusion of elective patients, or be based on calculated or derived rules, for example the exclusion of organisations where data quality is below an agreed threshold of 80%.

What is measured

Once the rationale, purpose and scope of an indicator have been agreed, it is important to define precisely how and on what basis the population for the indicator will be counted (denominator) and how the measure will be applied to this group (numerator). The following need to be considered:

  • Definition of numerator / denominator: Is the construction of the indicator, its component parts (numerator, denominator, inclusions, exclusions), and/or relevant derivations from it explicitly defined? Is it possible to reconstruct the indicator and/or derivations using the same base data, and achieve the same results? Is there a match between the numerator and denominator i.e. is the former a subset of the latter? Has the construction been independently verified in any way?
  • Source of data: What methods will be used to gather the data for the denominator and the numerator e.g. counts of patients from hospital episodes statistics, scores based on responses to questionnaires? How will the denominator and numerator be derived from the data source i.e. from existing or pre-calculated data; existing raw data that need further calculations to answer the indicator question; a new data source; a change to an existing data source etc.?
  • Data availability: Are the data for the indicator available, in an appropriately accessible form, consistently over time, and are they available with sufficient frequency and timeliness to enable the desired improvement actions to be visible? Is the source of the data clearly identified, including the extent of any intermediate processing steps which might predispose the data to errors or bias?
  • Data completeness: How complete are the counts for the numerator and denominator, with respect to the indicator concept and to what extent does completeness vary by organisation? How are missing data handled?
  • Data accuracy: Are the data used in the indicator robust enough to support the indicator and its derivations? Is the accuracy of the data above the threshold of acceptability, and is this threshold explicitly defined in the method, and accepted by all stakeholders? Is the effect of data accuracy issues upon the indicator explicitly known and declared? How are invalid data handled?
  • Data repeatability and continuity: Is the measurement consistent over the required period of time, between periods and across all applicable organisations? If there are frequent changes (for whatever reason, be they organisational, structural or other), then the utility as an indicator is likely to be compromised. If different methods have been used to capture the data this can introduce bias which needs to be considered before carrying out any aggregation.
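By way of illustration, the sketch below (with hypothetical records and field names) makes the scope rule, exclusions, numerator and denominator explicit, so that the indicator could be reconstructed from the same base data:

```python
# Sketch of explicit numerator/denominator construction with inclusion and
# exclusion rules (hypothetical records and field names).

records = [
    {"org": "A", "elective": False, "responded": True,  "satisfied": True},
    {"org": "A", "elective": True,  "responded": True,  "satisfied": False},
    {"org": "B", "elective": False, "responded": True,  "satisfied": False},
    {"org": "B", "elective": False, "responded": False, "satisfied": None},
]

# Scope and exclusion rule: non-elective patients who responded to the survey.
in_scope = [r for r in records if not r["elective"] and r["responded"]]

denominator = len(in_scope)                             # defined population
numerator = sum(1 for r in in_scope if r["satisfied"])  # subset of denominator

if denominator:
    print(f"Indicator: {numerator}/{denominator} = {numerator/denominator:.1%}")
```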

How data are aggregated

The numerator and denominator are usually aggregated into a statistic, e.g. a per cent, rate, ratio or average score, using a variety of methods. Relevant questions (which may require the support of experts in statistics) include:

  • Statistical methods: Have appropriate methods been used? Have alternative methods been tested?
  • Statistical criteria: Predictive capability: where the indicator or derivations are based upon statistical models, how well do the models reflect reality? Do the models work in all circumstances? Has this been independently tested and verified?
  • Bias: to what extent is the outcome unduly influenced by selections in scope, sample sizes, or data collection factors? Is this assessed, and has it been independently verified?
  • Statistical process control: Has the element of chance been considered in the design of the indicator, and in any associated derivations or statistical models? Has this been tested and independently verified?
  • Deconstruction: If the indicator is a composite, is the contribution that each component makes to the overall indicator clear, and if they change do they change the indicator value in a plausible way?

Risk adjustment

When comparing organisations it is important to compare like with like; risk adjustment is designed to allow for aspects outside the direct control of the organisation. For example, patient experience scores may vary by the age and gender of patients. Since the age and gender mix of patients referred to a hospital is outside the control of the hospital, comparing hospitals with different age and gender mixes of patients would not be a valid comparison. Risk adjustment uses various statistical methods to allow for such differences in patient mix. For the NHS Outcomes Framework indicator on patient experience of GP services, the indicator values will be weighted based on demographic data to ensure results are representative of the national population. However, care is needed in the selection of aspects to adjust for. There may, for example, be variation in patient experience scores between sub-groups of patients based on social deprivation. To some extent this may be outside the direct control of service providers, but it may also reflect genuine issues of service inequality, such as lack of effort in ensuring patients are able to make informed choices. Adjusting an indicator for social deprivation may thus remove the very aspect of service quality that should be of interest. The following questions may help with judging the appropriate use of variables for risk adjustment (a worked sketch of one standardisation method follows the list):

  • To what extent does the indicator adjust for factors outside the control of the organisation? Are appropriate variables used?
  • Are estimates of the independent contribution of each variable to observed variation in indicator values available, subject to data availability? Is there a statistically significant association between the variable and the indicator value? Does such association vary by organisation (i.e. is there a constant risk)? If there is significant association, does the variable vary by organisation?
  • Is there interaction between variables? Does such interaction vary between organisations?
  • Are there clear explanations for decisions on variables not used for risk adjustment, and assessments of consequence of not including adjustment variables? Is there a risk of over-adjustment? Has the proposed risk-adjustment been independently verified or tested in any way?
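As a concrete illustration of adjusting for age and gender mix, the sketch below applies indirect standardisation, one common risk-adjustment method; the strata, counts and rates are hypothetical.

```python
# Sketch of indirect standardisation: compare a hospital's observed number of
# satisfied patients with the number expected given its own age/sex mix and
# reference (e.g. national) rates. All figures are hypothetical.

# Reference satisfaction rates by age/sex stratum.
reference_rates = {("16-64", "F"): 0.82, ("16-64", "M"): 0.85,
                   ("65+",   "F"): 0.90, ("65+",   "M"): 0.88}

# One hospital's (patients surveyed, patients satisfied) by the same strata.
hospital = {("16-64", "F"): (120, 92),  ("16-64", "M"): (80, 66),
            ("65+",   "F"): (200, 176), ("65+",   "M"): (100, 85)}

observed = sum(sat for _, sat in hospital.values())
expected = sum(n * reference_rates[s] for s, (n, _) in hospital.items())

# A ratio above 1 suggests better-than-expected experience given the case mix.
print(f"Observed/expected ratio: {observed / expected:.2f}")
```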

Scientific validity

Scientific validity is the extent to which an indicator measures what is intended. For example, if individual patients score certain questions in ways that do not accurately reflect their satisfaction with services, then the aggregate indicator will not be a true measure of satisfaction. Scientific validity is usually tested when a survey questionnaire is designed, but should also be checked as part of the definition of an indicator. In addition, response rates to surveys may vary between hospitals. If those who respond to surveys have a different experience of care from those who do not (e.g. less satisfied), then comparing hospitals with different response rates would not be comparing like with like. In the NHS Outcomes Framework indicator on patient experience of GP services, the indicator is weighted for factors from the area where the respondent lives, such as level of deprivation, ethnicity profile, classification of residential neighbourhoods etc., which have been shown to impact on non-response bias. However, there is a potential conflict here with the risk adjustment issues described above. It may be necessary to get technical advice from measurement experts when considering the following aspects of validity (a sketch of non-response weighting follows the list):

  • Face validity: Does the indicator do what it claims to do?
  • Content validity: Are the components of the indicator plausibly related to or determinants of the concept?
  • Construct validity: Are the components of the indicator combined correctly?
  • Criterion validity and predictive value: How well does the indicator value compare to a ‘gold standard’? If the indicator is based on a model, how well does the indicator predict actual events?
  • Validity for the public: Is the indicator valid in the context of use for public accountability and patients choosing a hospital?
  • Validity for clinicians: Is the indicator valid in the context of use to assess the quality of clinical care?
  • Validity for performance: Is the indicator valid in the context of use for service governance and performance management?
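The following sketch illustrates the idea behind weighting for non-response, using a simple weighting-class adjustment with hypothetical figures; the national indicators use considerably more sophisticated models.

```python
# Sketch of a weighting-class adjustment for survey non-response:
# respondents in low-responding groups are up-weighted so that results
# better represent the eligible population. Hypothetical figures.

# Per group: eligible patients, respondents, and mean score among respondents.
groups = {
    "least deprived": {"eligible": 500, "respondents": 300, "mean_score": 78.0},
    "most deprived":  {"eligible": 500, "respondents": 150, "mean_score": 65.0},
}

# The unweighted mean over-represents the high-responding group.
total_resp = sum(g["respondents"] for g in groups.values())
unweighted = sum(g["respondents"] * g["mean_score"]
                 for g in groups.values()) / total_resp

# Weight each group's score by its share of the eligible population.
total_elig = sum(g["eligible"] for g in groups.values())
weighted = sum(g["eligible"] * g["mean_score"]
               for g in groups.values()) / total_elig

print(f"Unweighted mean: {unweighted:.1f}")  # 73.7, biased towards responders
print(f"Weighted mean:   {weighted:.1f}")    # 71.5, closer to the population
```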

Presentation of data

There are numerous different ways in which indicator values may be presented: as raw data, tables or graphs, with or without additional information on how reliable the information is. There is often a tension between the need to keep the presentation simple and the need to capture the complexity of analyses (e.g. a risk adjusted composite with confidence intervals) so that the data can be accurately interpreted. It is important to check that the presentation is appropriate in the context of intended use and not misleading:

  • Presentation: Has consideration been given to the forms of presentation of the indicator for the different stakeholder audiences? Are the forms of presentation appropriate for each of those audiences? How has the presentation been tested or verified for those audiences? Have common, industry standard conventions for presentation been adopted, e.g. standard error bars, labelling, scale, limitations, exclusions etc.? What supporting information is provided? Are other data provided alongside the indicator to support its use?
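As a simple illustration of presenting indicator values with their uncertainty visible, the sketch below uses matplotlib with hypothetical scores and confidence interval half-widths; it is one possible form, not a recommended house style.

```python
# Sketch: presenting indicator values with error bars so that uncertainty
# is visible alongside the point estimates (hypothetical values).
import matplotlib.pyplot as plt

orgs = ["Trust A", "Trust B", "Trust C", "Trust D"]
scores = [76.2, 81.5, 69.8, 74.0]    # indicator values out of 100
half_widths = [3.1, 2.4, 4.8, 2.9]   # 95% confidence interval half-widths

fig, ax = plt.subplots()
ax.errorbar(orgs, scores, yerr=half_widths, fmt="o", capsize=4)
ax.axhline(75.0, linestyle="--", label="National average (illustrative)")
ax.set_ylabel("Patient experience score (0-100)")
ax.set_title("Indicator values with 95% confidence intervals")
ax.legend()
plt.show()
```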

Interpretation

If there is no explicit evidence-based standard or target, an indicator value is often compared with a reference point, such as the national average or the previous year’s value. Indicator values are estimates which are subject to margins of error, especially after risk adjustment to ensure comparability. A judgement on whether a value for a particular organisation is truly different from a standard, target or reference value, and not just due to chance (error), requires the use of special statistical techniques and rules. The basis on which an organisation’s performance will be judged better or worse than expected, and the level of certainty of this judgement, must therefore be considered. This requires expertise, otherwise an organisation may be labelled as performing poorly when that conclusion is not valid, leading to unnecessary anxiety and reputation risk; or conversely as performing well when it is not, causing issues to remain unidentified and unresolved.

Also, as mentioned above, the quality of data has implications for this judgement. Those who respond to a patient experience questionnaire may systematically have a different experience from those who do not (non-response bias). If this is the case and there is a difference in response rates between organisations, then judgements made on performance using such data would not be valid. Even the national average value would not be valid, as it would be made up of data with varying levels of completeness and bias. Single indicators used in isolation often provide an incomplete picture, and interpretation may be dependent on other relevant information, for example the constituent parts of a composite score, or the proportions of patients accessing different types of treatment when there are choices. The following questions deal with aspects related to interpretation (a sketch of a simple outlier check follows the list):

  • Cross sectional: Is a single value comparing organisations, in the absence of a standard, interpretable?
  • Play of chance: Is there a measure of uncertainty associated with the indicator value such that if the value is high or low, the play of chance can be assessed?
  • Threshold: Are there clear thresholds for identifying outliers and when action should be taken? Are different thresholds needed for different audiences / uses?
  • Potential bias and confounding: Are there external factors which would change the outcome, irrespective of any care interventions?
  • Place of event bias: To what extent is the indicator susceptible to place of event bias e.g. potentially avoidable complications following surgery may occur in the community after discharge and may not be reflected in a patient experience survey response prior to discharge?
  • Sensitivity to change, attribution, confounding factors: Does the indicator value change if true events change? Does change reflect bias or true changes e.g. how sensitive is the indicator to changes in organisational behaviours which might have a bearing on the indicator, but which are not directly related to improvement actions? Are changes in the indicator value (positive, negative) attributable to actions taken to improve quality? To what extent is the indicator susceptible to variations in data quality over time? Has sensitivity to change been tested and therefore understood for the different component parts of the particular indicator? Is there instability in indicator values over time between organisations (other than instability due to changes in care quality)?
  • Understandability: Can the indicator be readily explained and interpreted to a wide range of audiences?
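The sketch below illustrates one simple way of allowing for the play of chance before labelling an organisation an outlier, using an approximate 95% confidence interval for a proportion-based indicator with hypothetical counts; formal assurance processes use more sophisticated methods, such as funnel plots.

```python
# Sketch: allow for chance before flagging an organisation against a
# reference point, using an approximate 95% confidence interval for a
# proportion (hypothetical counts and reference value).
import math

national_rate = 0.80  # reference point, e.g. national proportion satisfied

def flag_outlier(satisfied: int, surveyed: int) -> str:
    p = satisfied / surveyed
    se = math.sqrt(p * (1 - p) / surveyed)    # standard error of a proportion
    low, high = p - 1.96 * se, p + 1.96 * se  # approximate 95% interval
    if high < national_rate:
        return "below expected"
    if low > national_rate:
        return "above expected"
    return "consistent with reference (difference may be chance)"

print(flag_outlier(150, 200))    # same rate, small sample: likely chance
print(flag_outlier(1500, 2000))  # same rate, large sample: below expected
```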

Feasibility

Some indicators which have been shown to be sound in theory, as part of a research initiative, may not be feasible in practice due, for example, to complexities and costs of consistent ongoing data collection as part of service delivery. The following questions may help:

  • Feasibility: How feasible is it to produce the indicator on an ongoing basis using this approach?
  • Costs and burden: Are the costs of data collection, construction of the indicator, dissemination and presentation affordable? Will the indicator or the process of measurement place an undue burden upon the NHS? Are these burdens outweighed by the benefit gained?

Use and usefulness

It is important to think about equitable access to indicator data by the various defined stakeholder constituencies and about how the information will be used. The following questions may guide such thinking:

  • Access: Are the same data universally accessible to all who might wish to use the indicator in a consistent and standard way?
  • Credibility: Is the indicator likely to be credible across the NHS?
  • Use as indicator: Is there a clear description of how this is to be used as an indicator of quality, performance or improvement (or other specifically defined purpose)?
  • Usefulness: While the purpose and rationale questions reflect on the intent, is there evidence that the indicator can be or has been adopted, used, interpreted and acted on in practice?
  • Adoption: How widely available is the indicator?
  • Track record: Has the indicator been used in practice?
  • Quality improvement: Has the indicator been used successfully for quality improvement?
  • Patients: Has the indicator been used successfully for patient choice?
  • Governance: Has the indicator been used successfully for corporate accountability?
  • Screening: Has the indicator been used successfully for screening and identifying potential problems?
  • Actionability: To what extent can action be taken to improve a ‘poor’ result? Is it clear what steps can be taken and what is expected? Is it clear how the indicator is linked to the right improvement actions?
  • Timeliness: Is the indicator value sufficiently timely for action?

Investigation and action

It is also necessary to monitor the extent to which the indicator is actually found useful in practice. There are numerous examples of structured case studies showing how outlier values have been investigated locally and followed up, with resultant changes in data quality, service policies and service delivery. Key questions include:

  • Quality: Is there an investigative pathway for identifying potential quality of care issues, e.g. drilling down from summary indicators to detailed component indicators?
  • Artefacts: Is there an investigative framework such that potential data problems to do with quality, collection and other potential explanatory variables can be followed?
  • Risk of perverse incentives and gaming: Does the indicator, or the process of measurement, introduce undesired behaviours by those being measured? Is the extent of this known, or predictable, and if so does it invalidate the utility of the indicator or improvement process? To what extent is the indicator susceptible to the risk of ‘gaming’? Is the indicator capable of being manipulated in some way to influence the outcome without the intended improvement actions taking place, e.g. by reclassifying patients as day cases to remove them from the equation? To what extent can organisations influence the value of the indicator in ways which may not benefit patients, e.g. early discharge, reduced quality of end of life care, coding manipulation?
  • Risk of unintended consequences: What effect could the indicator have on other indicators and, subsequently, on service delivery? For example, in outpatient care a key indicator relates to the maximum 18 week wait. This has led to some patients whose waits are not covered by the indicator waiting longer, and may explain a large rise in appointment cancellations as hospitals attempt to free up slots to see patients who are nearing the maximum 18 week wait.

Conclusions

There is a substantial evidence base on what matters to patients and its relevance to the delivery of high quality health care. This has now been embodied in National Health Service policies and accountability frameworks. However, to make this commitment a reality, it is important to assess the extent to which the expectations of patients are being met, at all levels of service delivery, in all parts of the NHS, all of the time. Such assessment may take the form of local qualitative reviews and audits or quantitative measurement. Given the volume of healthcare and the numbers of patients treated by the NHS, quantitative measurement has an important role. Indicators represent one form of quantitative measurement and already form part of national accountability frameworks. Commissioners and managers working with indicators should understand the following key points:

  1. In order to avoid misleading policy makers, patients, clinicians and managers, it is important to ensure that indicators are clear and sound.
  2. Robust indicators need to meet a number of criteria. The construction and interpretation of indicators may be complex technically and may need technical advice. While there is a desire to make indicators transparent and accessible, inappropriate oversimplification could mislead.
  3. There are multiple components of patient experience and multiple perspectives. This, along with the multiple types of health care services and the multiple levels at which they are delivered, may require many indicators to monitor achievement comprehensively. Trade-offs may be required when defining a set of indicators: too many may overwhelm, while too few may be too selective and may distort priorities.
  4. Indicators may not provide precise answers but may act as alerts to aspects of service quality that require further investigation and follow up action, leading to ongoing improvement.

Sources of existing data

This article is mostly concerned with what to think about and consider when judging and using patient experience indicators. This may involve local data collection. However, there are data collected for national and research purposes, where some of this thinking and testing has already been done. The following may act as guides or sources of such data:

NHS ambulance service users survey http://www.cqc.org.uk/aboutcqc/howwedoit/involvingpeoplewhouseservices/patientsurveys/ambulanceservices.cfm

NHS inpatient survey http://www.cqc.org.uk/aboutcqc/howwedoit/involvingpeoplewhouseservices/patientsurveys/inpatientservices.cfm

NHS emergency department survey http://www.cqc.org.uk/aboutcqc/howwedoit/involvingpeoplewhouseservices/patientsurveys/emergencydepartments.cfm

NHS outpatient survey http://www.cqc.org.uk/aboutcqc/howwedoit/involvingpeoplewhouseservices/patientsurveys/outpatientservices.cfm

NHS GP patient survey http://www.gp-patient.co.uk/

NHS maternity services survey http://www.cqc.org.uk/aboutcqc/howwedoit/involvingpeoplewhouseservices/patientsurveys/maternityservices.cfm

NHS surveys focused on patient experience http://www.nhssurveys.org/survey/1093

Measuring the experience of patients / users – DH reports and data http://webarchive.nationalarchives.gov.uk/+/www.dh.gov.uk/en/Publicationsandstatistics/PublishedSurvey/NationalsurveyofNHspatients/DH_087516

References

NHS Outcomes Framework. http://www.dh.gov.uk/en/Publicationsandstatistics/Publications/PublicationsPolicyAndGuidance/DH_131700

NHS Commissioning Board: Commissioning Outcomes Framework – Engagement Document. 29/11/2011 http://www.commissioningboard.nhs.uk/2011/11/29/cof/

Department of Health: The Adult Social Care Outcomes Framework – Handbook of Definitions, 23/11/2011. http://www.dh.gov.uk/prod_consum_dh/groups/dh_digitalassets/documents/digitalasset/dh_131732.pdf

Department of Health. NHS Outcomes Framework for England, 2013-2016, 23/01/2012. http://www.dh.gov.uk/en/Publicationsandstatistics/Publications/PublicationsPolicyAndGuidance/DH_132358

Health and Social Care Information Centre. The Quality and Outcomes Framework. http://www.ic.nhs.uk/statistics-and-data-collections/audits-and-performance/the-quality-and-outcomes-framework

Health and Social Care Information Centre. www.ic.nhs.uk

Department of Health. Equity and Excellence: Liberating the NHS (White Paper), 12/07/2010. http://www.dh.gov.uk/prod_consum_dh/groups/dh_digitalassets/@dh/@en/@ps/documents/digitalasset/dh_117794.pdf (accessed 05/03/2012)

Department of Health. NHS Patient Experience Framework, 22/2/2012. http://www.dh.gov.uk/prod_consum_dh/groups/dh_digitalassets/@dh/@en/documents/digitalasset/dh_132788.pdf

Health and Social Care Information Centre Indicator portal: https://indicators.ic.nhs.uk/webview/

The King’s Fund and King’s College London. What Matters to Patients? Developing the Evidence Base for Measuring and Improving Patient Experience. March 2010.

Health and Social Care Information Centre Indicator Assurance Pipeline Process https://ic.nhs.uk

NICE. Quality Standards. http://www.nice.org.uk/aboutnice/qualitystandards/qualitystandards.jsp

NICE. Patient Experience in Adult NHS Services Quality Standard. http://www.nice.org.uk/guidance/qualitystandards/patientexperience/home.jsp

NICE. Quality Standard for Service User Experience in Adult Mental Health. http://www.nice.org.uk/guidance/qualitystandards/service-user-experience-in-adult-mental-health/index.jsp (accessed 24/04/2012)

Other national initiatives on patient experience indicators. http://www.dh.gov.uk/en/Publicationsandstatistics/PublishedSurvey/NationalsurveyofNHSpatients/index.htm

OECD Glossary of statistical terms http://stats.oecd.org/glossary/index.htm

Dr Azim Lakhani MA, BMBCh, FFPH

Azim Lakhani is a specialist in Public Health Medicine, with expertise in clinical epidemiology and a special interest in the measurement of health and quality of health care, in particular health outcomes. He is currently Head of Clinical Analysis Research and Development at the Health and Social Care Information Centre. Prior to this he held a number of posts including Director of the National Centre for Health Outcomes Development, based jointly at London and Oxford Universities; Principal Medical Officer and head of the Central Health Outcomes Unit within the Department of Health; and Director of Public Health, West Lambeth Health Authority; amongst others.