Quality and Outcomes Management

Michael B. Garrett
This chapter is a revised version of what was previously published in the first edition of CMSA Core Curriculum for Case Management. The contributor wishes to acknowledge Sherry Aliotta, Nancy Claflin, and Patricia M. Pecqueux, as some of the timeless material was retained from the previous version.
▪ INTRODUCTION
A. Today’s health care consumers are demanding that they receive full value for their health care dollars. Health care executives and other personnel meet customers’ expectations by focusing on improving the quality of the services they provide and by ensuring that the customer experience is desirable and rewarding. At the same time, health care executives recognize that a focus on quality care is the best way to ensure that revenues equal or exceed expenses. Focusing on quality, according to Cesta and Tahan (2003), allows health care organizations to achieve many objectives, some of which are:
  • Efficiently and effectively using scarce health care resources
  • Meeting the needs of the customers
  • Enhancing customer and staff satisfaction
  • Providing compassionate, ethical, and culturally sensitive care
  • Ensuring that patients are safe and that the environment of care is conducive to safety
  • Ensuring professional performance by health care providers
B. For several years, case managers have used outcome information from providers to make decisions about patient care activities, including referrals to specialty services and providers. Case managers are now being called on to measure and report outcomes of case management services to the public, federal and state agencies, accreditation agencies, providers of services, administrators of health care organizations, as well as payers.
  • Although there have been numerous anecdotal descriptions of case management outcomes, objective, scientific evidence is sparse.
  • Several issues have contributed to the lack of valid, reliable outcomes data. These include:
    • Inconsistent definitions of case management and the interventions performed by case managers
    • Inconsistent methods of measurement. For example, one group calculates cost savings using one method, whereas another group calculates cost savings a different way.
    • Organizations maintaining their methods of measuring outcomes as proprietary to their program and process
    • The initial acceptance of case management as a cost-reduction tool left case managers with little need to carefully define, document, and measure the results of their activities beyond cost savings.
▪ KEY DEFINITIONS
A. Administrative and management processes—The activities performed in the governance and management systems of a health care organization.
B. Benchmarks—Gold standards or ideal practices established by the leaders or toughest competitors in the field; used by others to continuously measure their performance against these leaders (Powell, 2000).
C. Care delivery processes—The support activities utilized by practitioners and all suppliers of care and care products to get the product/service to the patient.
D. Clinical practice guidelines—Systematically developed statements to assist practitioner and patient decisions about appropriate health care for specific clinical circumstances (Institute of Medicine, 1990).
E. Clinical processes—The activities of health care practitioners with and for patients/families, and what patients do in response.
F. Direct case management outcomes—The measurement or results of those activities and interventions that are within the scope of the case manager’s practice and control.
G. End health system outcomes—Those performance indicators measured for the health care system overall; include the following: cost of care, quality of care, and health status and clinical outcomes achieved.
H. External validity—The degree to which the results of a study can be generalized to settings or samples other than the one studied (Polit and Hungler, 1989).
I. Information flow—The creation and transport of facts, knowledge, and data that make for informed decisions; the sharing of data among providers and health care team members, with payers, or with patients and their families.
J. Internal validity—The degree to which it can be inferred that the experimental treatment or independent variable, rather than uncontrolled extraneous factors, is responsible for observed effects on the dependent variable (Polit and Hungler, 1989).
K. Materials flow—The movement of equipment and supplies.
L. Outcomes—The end results of care—adverse or beneficial—as well as gradients between; the products of one or more processes. Outcomes used as indicators of quality are states or conditions of individuals and populations attributed or attributable to antecedent health care (Donabedian, 1992). Another way of describing an outcome is as a measurable individual, family, or community state, behavior, or perception that is measured along a continuum and is responsive to nursing interventions (Moorhead et al., 2003). Classifications of outcomes may include clinical, functional, financial/cost, and perceived experience.
M. Outcomes management—A technology of patient experience designed to help patients, payers, and providers make rational medical care-related choices based on their better insight into the effects of these choices on the patient’s life (Ellwood, 1988).
N. Patient flow—The movement of patients from one place to another, from one level of care to another, or from one care setting to another.
O. Process—The procedures, methods, means, or sequence of steps for providing or delivering care and producing outcomes (Brown, 2005). They are sequentially related steps intended to complete a task and produce specific outcomes (Goonan, 1993).
P. Process measures—Used primarily to determine the degree to which the process is being executed as planned. For example, “The number of patients receiving a case management assessment within 24 hours of admission to a hospital setting.”
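The example process measure above can be computed directly from admission and assessment timestamps. The sketch below is illustrative; the records and the 24-hour threshold follow the definition's example, but the data are hypothetical.

```python
# Hypothetical computation of the process measure named above: the
# share of patients receiving a case management assessment within
# 24 hours of admission. Timestamps are illustrative.
from datetime import datetime, timedelta

def assessed_within_24h(admit, assessment):
    """True if the assessment occurred within 24 hours of admission."""
    return assessment - admit <= timedelta(hours=24)

records = [
    (datetime(2024, 1, 1, 8, 0), datetime(2024, 1, 1, 20, 0)),  # 12 h
    (datetime(2024, 1, 2, 9, 0), datetime(2024, 1, 3, 7, 0)),   # 22 h
    (datetime(2024, 1, 3, 6, 0), datetime(2024, 1, 4, 10, 0)),  # 28 h
]
compliant = sum(assessed_within_24h(a, s) for a, s in records)
print(f"{compliant / len(records):.0%}")  # 67%
```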
Q. Quality—The degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge (Institute of Medicine, 1990).
R. Reliability—The degree of consistency or accuracy with which an instrument measures the attribute it is designed to measure (Polit and Hungler, 1989).
S. Risk adjustment—A process for introducing an allowance for factors that would introduce bias; taking into consideration multiple factors that contribute to an end result. For example, in one health plan, Medicare-aged patients may be counted as 1.5 patients when establishing caseload numbers. This “risk adjustment” (counting one person as if he or she were one and a half) is meant to account for the additional time that these more-complex patients require from a case manager or a health care delivery system.
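The caseload example above can be expressed as a small weighting calculation. This is a minimal sketch following the definition's example, in which a Medicare-aged patient counts as 1.5 patients; the age threshold, weights, and patient data are illustrative assumptions, not a prescribed method.

```python
# Illustrative risk-adjusted caseload calculation, following the
# chapter's example in which one Medicare-aged patient is counted
# as 1.5 patients. The threshold and weights are assumptions.
MEDICARE_AGE = 65        # assumed age cutoff for illustration
MEDICARE_WEIGHT = 1.5    # weight from the chapter's example
STANDARD_WEIGHT = 1.0

def risk_adjusted_caseload(patient_ages):
    """Sum per-patient weights to yield an adjusted caseload count."""
    return sum(
        MEDICARE_WEIGHT if age >= MEDICARE_AGE else STANDARD_WEIGHT
        for age in patient_ages
    )

ages = [34, 70, 81, 55, 68]          # 3 Medicare-aged, 2 standard
print(risk_adjusted_caseload(ages))  # 3*1.5 + 2*1.0 = 6.5
```

An unadjusted count would treat this as 5 patients; the adjusted figure of 6.5 reflects the extra time the more complex patients require.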
T. Standards of care—Measures that define the type of care/service and desired outcome that the patient can expect from the health care encounter (Healthcare Quality Certification Board).
U. Standards of practice—Measures that establish an acceptable level of performance that is expected of health care practitioners (Healthcare Quality Certification Board).
V. Structure—The arrangement of a care system, a part of a system, or elements that facilitate care; the care “environment”; evidence of the organization’s ability to provide care to patients (Brown, 2005). Examples include staff qualifications, staffing levels, work environment, technology resources, policies and procedures, equipment, table of organization and reporting structure, and types of services provided.
W. Variation—Deviation, divergence, or difference in results from an assumed or usual standard that is usually selected at the initiation of a quality/outcome management program. Variation can also be defined as the stability (or lack thereof) in a process. If the process has a large amount of variation (instability), it is more difficult to manage than a process with only a slight degree of variation (Powell, 2000).
▪ DEFINING OUTCOMES MANAGEMENT AND MEASUREMENT
A. Avedis Donabedian described a quality paradigm from which information can be drawn for inferences about the quality of care (Donabedian, 1966).
  • His paradigm holds that there are three key factors in determining quality: structure, process, and outcome.
  • Structure leads to process, which leads to outcome.
  • These factors represent complex sets of events and factors.
  • How each relates to the other must be clearly understood before quality measurement and assessment begin.
  • Causal relationships may be understood between these factors, but they are considered as probabilities, not certainties.
B. When selecting outcome measures, we are attempting to determine in advance the potential effects, side effects, or consequences of our actions.
C. Outcomes measurement can assist in the demonstration of value by validating:
  • What is effective;
  • What is not effective;
  • The costs of an intervention; and
  • Whether the cost of the intervention is substantiated by the return on the investment.
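The last bullet, whether an intervention's cost is substantiated by the return on investment, reduces to a simple ratio. The sketch below is a hypothetical illustration; the dollar figures are invented, not from the chapter.

```python
# A hypothetical return-on-investment check for a case management
# intervention: compare avoided costs against the program's cost.
# All dollar figures are illustrative.

def roi(savings, program_cost):
    """Net ROI as a ratio (0.5 means a 50% return on the investment)."""
    return (savings - program_cost) / program_cost

avoided_costs = 180_000  # e.g., prevented readmissions (assumed)
program_cost = 120_000   # e.g., case manager salaries, tooling (assumed)
print(f"{roi(avoided_costs, program_cost):.0%}")  # 50%
```

A positive ratio indicates the intervention's savings exceeded its cost; a negative ratio indicates the cost was not substantiated by the return.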
D. The centerpiece and underlying ingredient of outcomes management is the tracking and measurement of the patient’s clinical condition, functional ability, and well-being or quality of life.
E. Outcomes management is a common language of health outcomes that is understood by patients, practitioners, payers, health care administrators, and other stakeholders.
F. This requires a national reference database containing information and analysis on clinical, financial, and health outcomes, estimating:
  • Relationships between medical interventions and health outcomes
  • Relationships between health outcomes and money spent/cost of care
G. Outcomes management is dependent on four developing technologies:
  • Practitioner reliance on standards of care and evidence-based guidelines in selecting appropriate interventions
  • Routine and systematic measurement of the functioning and well-being of patients along with disease-specific clinical outcomes, at appropriate time intervals
  • Pooling of clinical and outcome data on a massive scale
  • Analysis and dissemination of results (outcomes) from the segment of the database pertinent to the concerns of each decision maker
H. One typical result of analysis is the detection of variation, which is routinely measured through an outcomes management program. Variation is neither good nor bad in itself; further analysis is required to determine its cause. The goal of outcomes management is not to eliminate variation but to reduce it in order to produce and sustain stability in processes.
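One common way to detect variation worth investigating is a control-chart test in the style of statistical process control. The sketch below flags values falling outside three standard deviations of a stable baseline; the length-of-stay figures are hypothetical, and the 3-sigma rule is a standard convention, not a method prescribed by the chapter.

```python
# A minimal sketch of detecting unusual variation, in the spirit of a
# Shewhart control chart: points outside mean +/- 3 standard deviations
# of a stable baseline warrant further analysis of their cause.
# The length-of-stay figures (in days) are hypothetical.
import statistics

def control_limits(baseline):
    """Lower and upper 3-sigma limits computed from a stable baseline."""
    mean = statistics.mean(baseline)
    sd = statistics.pstdev(baseline)
    return mean - 3 * sd, mean + 3 * sd

baseline_los = [4.1, 3.8, 4.0, 4.3, 3.9, 4.2, 4.0, 4.1]  # stable period
lower, upper = control_limits(baseline_los)

new_los = [4.0, 4.4, 9.5]
flagged = [x for x in new_los if not (lower <= x <= upper)]
print(flagged)  # [9.5]
```

Note that the flagged value only marks where to look; as stated above, further analysis is still required to determine the cause of the variation.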
I. There are two types of variation: