Overview of clinical practice guidelines
Authors:
Paul Shekelle, MD
Jennifer S Lin, MD, MCR, FACP
Section Editor:
Mark D Aronson, MD
Deputy Editor:
Jane Givens, MD, MSCE
Literature review current through: Apr 2025. | This topic last updated: Aug 27, 2024.

INTRODUCTION — 

Clinical practice guidelines are recommendations for clinicians about the care of patients with specific conditions. They should be based upon the best available research evidence and practice experience. This topic will provide an overview of clinical practice guidelines. The broader principles of evidence-based medicine are discussed separately. (See "Evidence-based medicine".)

OVERVIEW — 

In 2011, the Institute of Medicine (IOM) defined clinical practice guidelines as "statements that include recommendations, intended to optimize patient care, that are informed by a systematic review of evidence and an assessment of the benefits and harms of alternative care options" [1].

Based on this definition, guidelines have two parts:

The foundation is a systematic review of the research evidence bearing on the clinical question, including an assessment of the strength of the evidence on which clinical decision making for that condition is based.

A set of recommendations, involving both the evidence and value judgments regarding benefits and harms of alternative care options, addressing how patients with that condition should be managed, everything else being equal.

As an example, the US Preventive Services Task Force (USPSTF) recommendations for colorectal cancer screening [2] were published with two separate background papers: a systematic review [3] and accompanying decision analyses [4]. (See "Screening for colorectal cancer: Strategies in patients at average risk", section on 'Society guideline links'.)

Modern clinical practice guidelines build upon the older traditional practice of review articles written by experts or opinion leaders, which had a powerful influence on practice in the past. In contrast with expert opinion review articles, or other types of clinical guidance, modern-day clinical practice guidelines typically:

Are based on systematic reviews of the evidence rather than subjectively selecting the "best" studies for a review article (see 'Evidence-based' below)

Are developed by a panel (including experts from different disciplines, patients, and other key stakeholders) rather than a single person (see 'Expertise in guideline panel membership' below and 'Incorporating patient perspectives' below)

Undergo a rigorous review process (see 'Review process' below)

Use well-defined methods for reaching consensus (eg, Delphi process) and providing transparency about the evidence (eg, GRADE approach) (see 'Grading recommendations' below)

Are often developed or endorsed by national organizations (see 'Sponsoring bodies' below)

Are often widely circulated across specialties and international boundaries

In the past, guidelines largely focused on the effectiveness of interventions. Over time, however, they have paid more attention to the magnitude and certainty of effects, as well as the balance between effectiveness, harms, costs, and the feasibility of recommended interventions.

Other developments in clinical practice guidelines have addressed tailored recommendations based on individual risk or on populations with different risk for underlying disease or poor health outcomes. In tailored recommendations, risk factors specific to the individual patient, rather than population-based risk factors, are incorporated into tools weighing risks and benefits to guide treatment decisions [5]. For example, the USPSTF's 2022 recommendation on the use of aspirin to prevent cardiovascular disease states that the decision to initiate low-dose aspirin for the primary prevention of cardiovascular disease in adults aged 40 to 59 years with an estimated 10 percent or greater 10-year cardiovascular disease risk (based on the Pooled Cohort Equations) should be an individual one [6]. While such individualized guidelines have the potential to improve quality of care and lower health care costs, limitations to their practical application at this time include the availability of patient-specific data, validated disease models and risk calculators, and the potential impact on workflow (eg, risk assessment, shared decision making). More commonly, clinical practice guidelines may issue different recommendations in populations at higher (or lower) risk for a given disease or disease outcome [7]. Methods and processes for clinical practice guidelines to address populations at disproportionate risk of disease or suffering (ie, health disparities) have also been proposed [8-10], although they have been variably implemented.
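For illustration, the aspirin example above can be expressed as the kind of simple decision rule a decision-support tool might implement. The sketch below is a hypothetical Python fragment, not USPSTF software; it assumes the 10-year cardiovascular disease risk has already been estimated (eg, with the Pooled Cohort Equations) and encodes only the age and risk thresholds stated above.

```python
from dataclasses import dataclass

@dataclass
class Patient:
    age: int
    ten_year_cvd_risk: float  # estimated 10-year cardiovascular disease risk, 0.0-1.0

def aspirin_primary_prevention_guidance(patient: Patient) -> str:
    """Illustrative encoding of the individualized-decision scenario described in the
    2022 USPSTF aspirin recommendation; hypothetical, for illustration only."""
    if 40 <= patient.age <= 59 and patient.ten_year_cvd_risk >= 0.10:
        # Net benefit depends on individual values and preferences, so the
        # recommendation is an individualized (shared) decision, not a blanket rule.
        return "Individualize: discuss benefits and harms of low-dose aspirin with the patient."
    # Scenarios outside this age/risk range are not covered by this sketch;
    # consult the full USPSTF recommendation statement.
    return "Outside the scenario modeled here; see the full recommendation."

print(aspirin_primary_prevention_guidance(Patient(age=52, ten_year_cvd_risk=0.12)))
```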

Often, professional societies and groups provide clinical guidance that is distinct from clinical practice guidelines. The methods and processes used to develop these other types of guidance vary widely. Clinical guidance can be in the form of expert recommendations or consensus-based recommendations by a committee of experts. In both scenarios, there is no effort to systematically compile or assess the evidence. For example, the American Gastroenterological Association offers expert guidance in the form of "Clinical Practice Updates" distinct from its guidelines. Other types of guidance may take a robust evidence-based approach but do not necessarily meet the definition of clinical practice guidelines. For example, the American College of Physicians (ACP) has issued several "guidance statements" that are based on reviews of other guidelines rather than a de novo systematic review of evidence. These address conditions such as breast cancer screening in women aged 40 to 49 [11], prostate cancer screening [12], and colorectal cancer screening [13], for which there is general agreement on all or most of the eligible trials but differences in how the evidence from those trials is interpreted. The ACP reviews and critically appraises existing guidelines on the topic and then formulates its own guidance based on a review of the existing high-quality guidelines and their supporting evidence reviews.

Another type of clinical guidance is appropriate use criteria (AUC), which were initially developed by subspecialty societies for their associated imaging procedures. Subsequently, AUC have also been developed for several surgical procedures [14]. AUC are developed from scientific evidence, when possible, and expert opinion and generally offer detailed patient scenarios to rate care as appropriate, may be appropriate, or rarely appropriate.

COMPONENTS OF CREDIBLE GUIDELINES — 

Countless clinical guidelines have been published. They vary in quality.

Best practices — The most trustworthy guidelines can be recognized by adherence to best practices for guideline development previously identified by the Institute of Medicine (IOM) (table 1) [15]. Such guidelines:

Have an explicit description of development and funding processes that is publicly accessible

Follow a transparent process that minimizes bias, distortion, and conflicts of interest

Are developed by a multidisciplinary panel comprising clinicians, methodological experts, and representatives, including a patient or consumer, of populations expected to be affected by the guideline

Use a rigorous systematic evidence review and consider the quality, quantity, and consistency of the aggregate of available evidence

Summarize evidence (and evidence gaps) about potential benefits and harms relevant to each recommendation

Explain the parts that values, opinion, theory, and clinical experience play in deriving recommendations

Provide a rating of the level of confidence in the evidence underpinning each recommendation and a rating of the strength of each recommendation

Undergo extensive external review that includes an open period for public comment

Have a mechanism for revision when new evidence becomes available

Many guideline developers revised their processes on the basis of these IOM recommendations. The IOM noted that all of these criteria should be met for a guideline to be judged trustworthy [16].

The Guidelines International Network (G-I-N), an international network representing over 100 organizations across 50 different countries, works to establish minimal standards for guideline development [17-19]. Although some aspects of how best to implement guideline development differ between the IOM and G-I-N, the two bodies generally agree on the basic elements essential to develop high-quality guidelines, including:

Utilizing a systematic literature review

Establishing transparency and disclosing the methods used for all development steps

A multidisciplinary development group

Disclosure and management of both financial and non-financial conflicts of interests

Clear and unambiguous guideline recommendations

Using a specific grading system to rate the strength of evidence and recommendations

External peer review

Updating (or expiring) guidelines

Reporting standards have been developed to aid in the transparency of individual clinical practice guidelines [20,21]. In addition, critical appraisal tools have been developed to judge the credibility of clinical practice guidelines. One such tool is the Appraisal of Guidelines for REsearch and Evaluation II (AGREE II), which consists of 23 items across six domains (ie, scope and purpose, stakeholder involvement, rigor of development, clarity of presentation, applicability, and editorial independence) [22].

However, the biggest issue with the proposed standards has been the complexity and practicality of implementing them. Specifically, concerns have been raised about the feasibility of implementing all of the IOM's criteria in settings where resources may be constrained.

Expertise in guideline panel membership — Guidelines prepared by panels representing the full range of expertise bearing on the clinical question are more likely to avoid the biases and potential singular perspectives of members who are all from a given specialty [23]. Guidelines prepared by specialty and subspecialty societies usually involve a narrower spectrum of interest limited to members in the society [24].

Guideline panels should represent a wide range of expertise and may include, in addition to primary and subspecialty clinicians, representatives from allied health sciences, public health specialists, decision analysts, economists, consumers, and ethicists. Expertise in interpreting research evidence is a necessary component of the panel. It can be contributed by members who also have other expertise, or a methods expert can be included. Panel membership may also benefit from having diversity in terms of lived experience with representation of different sexes, race/ethnicities, and geographic areas [25].

For example, the US Preventive Services Task Force (USPSTF) makes a special effort to balance the interests, clinical and methodologic expertise, and sociodemographic diversity of its guideline panels [26-28].

Incorporating patient perspectives — Since the focus of clinical guidelines is on patient outcomes, and the importance of achieving or avoiding specific outcomes may differ among patients, including the perspectives of patients in guideline development is important [29]. Almost all interventions influence multiple positive and negative outcomes; thus, specific benefits and specific harms need to be considered to determine the impact of an intervention on improving positive outcomes and generating negative outcomes (eg, side effects, adverse events, costs).

For some health care decisions, the values most patients place on these positive and negative outcomes can be inferred with confidence. These decisions often involve recommendations for which there is strong evidence of benefit for the intervention. An example is the recommendation for beta blocker therapy following a myocardial infarction. Guideline panels assume that nearly all patients would place a higher value on the mortality benefit of beta-blockers in this setting and would place a lesser value on the potential for side effects of beta blockers (eg, lethargy and sexual dysfunction). (See "Acute myocardial infarction: Role of beta blocker therapy", section on 'Society guideline links'.)

However, many health care decisions involve benefits and downsides for which the relative importance is not so clear cut and for which the evidence of benefit is less certain. In such circumstances, patient values related to the importance of different outcomes are likely to vary. For example, consider the decision to treat lower urinary tract symptoms in men with a transurethral resection of the prostate. Individual patients may differ in their priorities to diminish bothersome urinary symptoms or avoid sexual dysfunction. Priorities can vary sufficiently between two patients with identical clinical circumstances such that each might best maximize their individual health outcomes by choosing exactly opposite courses of care. (See "Lower urinary tract symptoms in males", section on 'Society guideline links' and "Surgical treatment of benign prostatic hyperplasia (BPH)", section on 'Society guideline links'.)

Guideline developers should consider patient preferences in formulating recommendations and explicitly identify assumptions made regarding patient values for various outcomes. For some conditions, there is ample literature regarding patient values about competing outcomes. For example, numerous studies have investigated patient values and preferences regarding the trade-offs involved in screening for prostate cancer [30]. (See "Screening for prostate cancer", section on 'Society guideline links'.)

However, for many clinical decisions, the literature related to patient values is less robust and there is no agreement on how best to identify patient preference. Directly soliciting patient input as part of the guideline development process may be a reasonable solution. Different guideline groups have varying degrees of patient or public participation in their guideline processes to solicit their perspectives on, for example, the valuation of outcomes or preferences on interventions and their implementation. The National Institute for Health and Care Excellence (NICE) in the United Kingdom has a robust process in place to involve patients and caregivers in their guideline development and explicitly include patient choice (and cost-effectiveness) as factors in determining recommendations [31].

Evidence-based — Credible guidelines are based on a systematic review that identifies all scientifically sound studies that bear on the question at hand. Systematic reviews should follow standard methodology, including explicit criteria for including or excluding studies. Additional details about the methodology of systematic review and meta-analysis are provided separately (see "Systematic review and meta-analysis"). Reviews in support of clinical practice guidelines often include multiple systematically reviewed questions and may also include nonsystematically reviewed questions to aid in contextualizing the findings of systematically reviewed questions.

High-certainty evidence (ie, a high-quality body of evidence) is often not available. In a review of guidelines from the Infectious Diseases Society of America, for example, only 14 percent of the 4000 recommendations were supported by high-certainty evidence [32]. A study of American Heart Association/American College of Cardiology (AHA/ACC) guidelines found that nearly half of recommendations were based on low-certainty evidence [33]. (See "Evidence-based medicine", section on 'Assessing the certainty of the evidence'.)

Even when data from well-designed and well-implemented randomized controlled trials are available, they often are not directly applicable to the populations, interventions, or outcomes addressed by the guideline being developed [34]. (See "Evidence-based medicine", section on 'External validity'.)

The aim of practice guidelines to optimize patient management may be even more relevant in circumstances where evidence is equivocal and the clinician is less certain about the choice of strategy. In such circumstances, the synthesis of carefully weighed opinions of experts may be particularly helpful, considered in the context of individual patient factors and adapted as needed. However, expert opinion and usual practice should be labeled as such and not take precedence over stronger evidence [35]. The process of developing guidelines can be used as an impetus to encourage clinical research when evidence is not available [36,37]. Transparency is essential so that it is clear to readers what the quality or strength is of the evidence supporting the guideline recommendation.

Grading recommendations — Guidelines should provide an assessment of the strength of each individual recommendation. Guideline groups may use different approaches to do so. A common approach is to grade the strength of the recommendation and the quality (or certainty) of the evidence separately.

The Grading of Recommendations, Assessment, Development, and Evaluation (GRADE) system is widely used [38-42]. In GRADE, grades have two components: a two-level representation of the strength of recommendation (strong or weak) and a four-level representation of the certainty of the evidence (high, moderate, low, and very low).

Strength of the recommendation – The strength of a recommendation refers to the magnitude of net benefit (ie, benefits minus harms). A strong recommendation is appropriate for situations in which the benefits clearly outweigh the risks (or vice versa) for nearly all patients. A weak recommendation is appropriate for situations in which the risks and benefits are either closely balanced or uncertain.

Certainty of evidence – In GRADE, assessment of the certainty of evidence reflects confidence in the estimates of benefits, harms, and burdens. High-certainty evidence typically comes from well-performed randomized controlled trials or other overwhelming evidence (such as well-executed observational studies with very large effects). Moderate-certainty evidence typically comes from randomized trials with important limitations or from other study designs with special strength. Low-certainty evidence typically comes from observational studies or from controlled trials with very serious limitations. Very low-certainty evidence typically comes from nonsystematic observations, biologic reasoning, or observational studies with serious limitations.
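These two components can be represented compactly as a simple data structure. The following is a minimal, hypothetical Python sketch of a GRADE-style recommendation record; the class and field names are illustrative and are not part of any official GRADE tooling, and the example values are for illustration only.

```python
from dataclasses import dataclass
from enum import Enum

class Strength(Enum):
    STRONG = "strong"  # benefits clearly outweigh harms (or vice versa) for nearly all patients
    WEAK = "weak"      # benefits and harms closely balanced or uncertain

class Certainty(Enum):
    HIGH = "high"          # eg, well-performed randomized controlled trials
    MODERATE = "moderate"  # eg, randomized trials with important limitations
    LOW = "low"            # eg, observational studies
    VERY_LOW = "very low"  # eg, unsystematic observations, biologic reasoning

@dataclass
class GradeRecommendation:
    statement: str
    strength: Strength
    certainty: Certainty

    def label(self) -> str:
        # Conventional shorthand such as "strong recommendation, high-certainty evidence"
        return f"{self.strength.value} recommendation, {self.certainty.value}-certainty evidence"

# Example values are illustrative, not an official grading of this intervention
rec = GradeRecommendation(
    statement="Beta blocker therapy after myocardial infarction",
    strength=Strength.STRONG,
    certainty=Certainty.HIGH,
)
print(rec.label())
```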

UpToDate uses the GRADE approach to making recommendations with a slightly simplified schema using only three levels for the certainty of the evidence (high [A], moderate [B], or low/very low [C]) (table 2).

The USPSTF uses a letter grade rating system, which incorporates both the magnitude and certainty of net benefit that is uniform across the multiple conditions it reviews [43]. The system rates the recommendations and provides suggestions for practice. Examples of USPSTF recommendations are included elsewhere. (See "Overview of preventive care in adults", section on 'Overview of USPSTF recommendations'.)

Consider outcomes and implementation — Guidelines should first consider the magnitude and certainty of intervention effects for both benefits and harms. In addition to whether an intervention has a clinically meaningful net benefit, guidelines should also consider other clinically relevant factors such as:

Convenience (feasibility) and side effects

Patient preferences (acceptability)

The clinical skills and other implementation factors necessary to deliver the intervention successfully or implement the recommendation

Patient values of different outcomes

Cost and cost-effectiveness

These factors, and others, are included in the GRADE Evidence to Decision framework [44], which guideline developers may find useful when formulating a recommendation.

The role of, and potential variation in, patient preferences for a given intervention and patient values in weighing different outcomes are now recognized as being of great importance for many, if not most, health care decisions, and guidelines on most topics should explicitly state the assumptions about patient preferences and values used in reaching recommendations. Increasingly, guideline developers may need to consider the impact of recommendations on health equity (ie, whether recommendations are likely to improve, worsen, or have no effect on existing health inequities), although the methods and processes for determining the impact on health equity are still being refined.

Recency — Guidelines may be produced only once or may be updated at several-year intervals [45]. However, in active areas of research, such as the prevention or treatment of human immunodeficiency virus (HIV) infection, these guidelines may be out of date in less than a year. In these instances, guidelines groups may use a more dynamic approach to updating their recommendations based on signals from active surveillance of the evidence rather than updating recommendations at a fixed-year interval.

In a study of guidelines sponsored by the US Agency for Healthcare Research and Quality, more than three-fourths of guidelines needed updating [46]. A study of 100 quantitative systematic reviews found that new findings with an impact on the review outcome occurred within two years of publication for 23 percent of the reviews [47]. The median time for "survival" of an analysis was 5.5 years. When it was active, the National Guideline Clearinghouse (NGC) required evidence that a guideline had been developed, reviewed, or revised within five years for inclusion in its listing [48]. It is reasonable that a guideline that has not undergone review (with updating if needed) within five years of publication, in the absence of strong justification, should not be considered current.

There is discussion in the guideline community about the prospect of future "living" guidelines that are continuously updated, supported by "living" systematic reviews [49]. Some examples have been produced during the COVID pandemic [50,51]. However, a number of challenges exist. For example, how often does the living guideline need to be revisited and possibly revised? (In a fast-moving field like COVID-19, situations have arisen where a newer version of the guideline is already being developed before the prior version has finished going through the editorial review process necessary before release and dissemination.) Further, cost and other resources are even more of a challenge when both systematic review and guideline processes need continual updating.

Sponsoring bodies — Statements formally endorsed by respected national bodies are subject to scrutiny by health professionals and sometimes by the public through coverage in the popular media. These organizations have a strong incentive to safeguard their reputation by having their guidelines stand up to scrutiny. However, the imprimatur of a sponsoring organization does not necessarily guarantee quality [52,53].

The United States has opted for guidelines prepared by a variety of organizations, mostly specialty societies or others in the private sector, and some by governmental organizations, rather than a single version prepared by the federal government. Other countries have a single organization developing guidelines. Examples of such organizations include NICE in the United Kingdom, the Institute for Quality and Efficiency in Health Care in Germany, and the Dutch College of General Practitioners in the Netherlands.

Review process — It is preferable for guidelines to undergo careful review and endorsement by experts outside the organization, or at least by representatives of the sponsoring organization other than panel members. Similarly, it is best practice for supporting systematic reviews to undergo independent expert review.

Many groups issuing clinical practice guidelines also solicit public comment on draft recommendations to gather broader stakeholder input. Many professional society guidelines also need approval by leadership or the board of the sponsoring society. Ultimately, any review or approval process should preserve scientific independence of the guideline development against potential conflicts of interest (COIs).

Conflict of interest — The guideline should report COI, financial or otherwise, bearing on the guideline for each member of the panel [23].

The IOM Committee on Standards for Developing Trustworthy Clinical Practice Guidelines recommends written disclosure of any commercial, noncommercial, intellectual, institutional, patient, or public activity pertinent to the guideline scope [1]. It also recognizes that, for some guidelines, a degree of COI might be unavoidable in panel participants (such as relevant clinical specialists whose income is related to providing services pertinent to the guideline) but that these members should be a minority of the panel and should not be chairs or co-chairs. In addition, the G-I-N recommends that COI should be publicly disclosed, updated regularly, and no one with relevant COIs should decide the direction or strength of a recommendation [54].

Guideline panels must balance having adequate expertise and relevant COI. While it is recognized that transparency is essential when participants in the guidelines preparation process have conflicts of interest, it is less clear how best to adapt the process to avoid bias related to any COI [55]. Clinicians with notable expertise in an area are both more likely to be sought out to participate in developing relevant guidelines and to participate in industry-sponsored activities such as speaker's bureaus or advisory panels. In various reports, the frequency of financial COIs among authors of clinical guidelines ranges from 35 to 87 percent [56-59]. G-I-N guidance recommends distinguishing between direct (eg, direct payments for service, stock options) and indirect financial (eg, scientific interest, academic advancement, clinical revenue streams) COI, as well as taking into account the financial amount and degree of participation in any relevant COI [54]. For example, the American Thoracic Society and the USPSTF have processes for managing relevant COI and allowing for various degrees of participation depending on the degree of COI [54].

DISAGREEMENT AMONG GUIDELINES — 

Guidelines on the same clinical question by different expert groups often provide different recommendations. Usually the differences are minor; with screening guidelines, for example, they might differ on the age at which screening should begin and end or the time interval between screening examinations. Uncommonly, recommendations are very different. For example, in 2008, two major guidelines for colorectal cancer screening were published within several months of each other [60,61]. The two guidelines included different sets of screening test options (three options for one, seven for the other) and preferences (in one, detection of adenomas in addition to cancers was preferred over detecting cancers only).

A review of eight guidelines on screening asymptomatic patients for peripheral arterial disease found conflicting recommendations from different organizations, with differing interpretations of an evidence base that mostly addressed the role of testing in symptomatic rather than asymptomatic people [62].

Disagreement may be a barrier to acceptance of guidelines. However, in one study of this question, the extent to which clinicians agreed with each other regarding intervals for cancer screening was not related to the extent to which guidelines agreed on recommended screening intervals [63].

Differing recommendations may occur for many reasons. First, there may be differences in the evidence base evaluated due to stricter or broader inclusion of studies, as well as differences in tolerance for extrapolation of the included evidence (eg, populations or settings studied) to actual clinical practice. More commonly, there may be differences in the values or evidence thresholds being applied, especially when the magnitude of net benefit is small or the evidence is weak (ie, low certainty of evidence). For example, guideline recommendations based upon subgroup analysis are particularly subject to limited sample size. Guidelines may also disagree because of the value system of the panel that developed them. One study showed that surgeons tended to favor more aggressive cancer screening intervals than family physicians and internists and that gynecologists favored more aggressive screening for cancers occurring in women [63]. It may be difficult to parse out reasons for differences in values, and, sometimes, these may be due to COI, implicit bias, or other types of cognitive biases. Lastly, guideline groups may also differ in whether or how they consider implementation factors (eg, cost, resources). As an example, screening guidelines tend to be less aggressive in Canada and the United Kingdom than they are in the United States.

While the multiplicity of guidelines and their differences can be confusing, the absence of a single standard for clinical practice, especially in the absence of strong evidence, allows for practice innovation.

GUIDELINES IN CLINICAL PRACTICE

Finding guidelines — Guidelines can be found at the websites of the sponsoring organizations (eg, major medical organizations and clinical specialty societies) or by entering "clinical practice guidelines" into a search engine. "Practice guideline" can also be set as a limit term for type of article when using PubMed to search the US National Library of Medicine. Other reputable curated databases of guidelines include the Guidelines International Network (G-I-N) guideline library, the ECRI Guidelines Trust (which replaced the National Guideline Clearinghouse), and Guideline Central. All of these databases are free to search; however, the ECRI Guidelines Trust and Guideline Central have some content that may require payment to access.
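As a concrete example of the PubMed approach mentioned above, the sketch below queries the public NCBI E-utilities search endpoint with the "Practice Guideline" publication type filter. The endpoint and the publication type filter are standard PubMed features; the topic string and parameter choices are illustrative, and results should always be reviewed for relevance and quality.

```python
# Minimal sketch: searching PubMed for practice guidelines on a topic via the
# public NCBI E-utilities API. Returns a list of PubMed IDs (PMIDs).
import json
import urllib.parse
import urllib.request

def search_guidelines(topic: str, max_results: int = 20) -> list[str]:
    # Restrict results to articles indexed with the "Practice Guideline" publication type
    term = f'{topic} AND "practice guideline"[Publication Type]'
    params = urllib.parse.urlencode({
        "db": "pubmed",
        "term": term,
        "retmode": "json",
        "retmax": max_results,
    })
    url = f"https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?{params}"
    with urllib.request.urlopen(url) as response:
        data = json.load(response)
    return data["esearchresult"]["idlist"]

print(search_guidelines("colorectal cancer screening"))
```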

Guidelines do not replace clinical judgement — Guidelines are meant to complement, not replace, clinical judgement. They are suggestions for care, not rules.

Experts in guidelines and evidence-based medicine urge clinicians to use them as a guide rather than as a set of rules or a cookbook and to tailor clinical decisions to individual patients. Guidelines often do not adequately account for severity of illness, patient preferences, or clinical judgment, which limits their use as quality measures [64]. In addition, individual patients have biologic variability (eg, drug metabolism, immune response, genetic endowment). Clinical settings also have different available resources determined by the social and economic environment of medicine at the local level [65]. Nonetheless, most patients do fit the recommendations in most guidelines.

Potential benefits/problems of guidelines — Evidence-based, carefully developed, and updated guidelines provide many potential benefits:

Synthesis of the literature by experts

Clear recommendations for translating the evidence base into clinical application to foster best practice

Opportunity to evaluate the outcomes of implementation in the "real world" setting

However, several aspects of guidelines and their implementation need to be recognized as potential problems:

The challenge of keeping guidelines updated when the literature changes

The potential for inappropriate use of guidelines for other than clinical purposes

Difficulty accessing guidelines at the point of care – Many are lengthy, or specific components relevant to a patient are not readily searchable or retrievable

Lack of coordination among guideline development groups, generating differing recommendations

Potential for conflicts of interest

Application of guidelines developed to address a specific condition to patients with multiple comorbidities [66,67]

Impact of guidelines on practice — Simply providing guidelines seems to improve practice, but the effects are small [68]. It is clear that there can be significant gaps between guidelines' recommendations and practice. As an example, in one study of practice in the United States, it was estimated that 68,000 deaths could be prevented if six recommendations from guidelines for the management of heart failure were closely followed [69]. There are many reasons why clinicians do not adhere to guidelines' recommendations. In one report using focus groups of general practitioners in the Netherlands to explore reasons why recommendations from 12 national guidelines were not followed, the most frequently perceived barriers to adoption were identified as disagreement with the recommendations, environmental factors based on organizational constraints, lack of knowledge of the recommendations, and unclear or ambiguous guidelines [70]. In other studies, recommendations were more likely to be followed when they were supported by clear evidence, were compatible with existing norms and values, did not require new skills or change in practice routine, were less controversial, and were stated in specific actionable terms [71,72].

Thus, the "actionability" of a guideline is an important attribute. While well-formulated guidelines can be an invaluable tool to guide best practice in medicine, they should not alone be considered a complete plan for quality improvement. Rather, they need to be delivered in the context of a program to engage patients and clinicians in appropriate decision making, supported by implementation strategies involving systems enhancements, clinical reminders, other quality improvement and decision support tools, and outcomes measurement and feedback. However, even when such support is provided, it remains challenging to change clinician practice. In a randomized trial related to guidelines for the management of nonvariceal upper gastrointestinal bleeding, 43 hospitals were randomly assigned to an intervention (clinicians received published consensus guidelines, algorithms, and written reminders and participated in multidisciplinary guideline education groups and case-based workshops) or control (received guidelines and algorithms) [73]. At one year, guideline adherence was not significantly different between the intervention and control hospitals, with adherence below 10 percent in both groups.

Implementing practice guidelines — Increasing attention has been focused on how best to disseminate guidelines and foster adoption of their recommendations, once developed [74,75]. A framework for the study of guideline implementation ("implementation science") includes five principal domains: characteristics of the intervention, the outer setting, the inner setting, characteristics of the involved individuals, and the process of implementation [76].

Strategies to facilitate implementation of practice guidelines have been proposed and include [77]:

Guidelines should incorporate a checklist of prioritized specific interventions

Identify barriers to adoption, and design supports to address specific barriers

Integrate guidelines for common coexistent conditions

Identify systems and technological solutions to promote adherence with recommendations

Develop transdisciplinary teams (clinical epidemiology, implementation science, systems engineering) to study ways to foster best practices

The Institute of Medicine (IOM) recommends that guidelines that are based upon strong evidence should be worded so that it is possible to evaluate whether care followed recommendations [1]. The IOM suggests that guidelines be structured in format, vocabulary, and content to foster use of computer-aided decision supports by guidelines users. These decision supports may facilitate uptake of guidelines by aiding in summarizing patient data, providing alerts and reminders, retrieving and filtering information relevant to a specific recommendation/decision, or facilitating shared decision making.

Use by non-clinicians — Most guidelines are directed at clinicians and are intended to help clinicians take better care of patients. However, guidelines may sometimes be used by non-clinicians (eg, insurers, healthcare administrators, lawyers) in different ways. For example, health plans and administrators may use them to measure quality and determine payment for care. Guidelines may also be used in malpractice cases.

Guideline recommendations may be translated into performance measures that are then used to assess the delivery of care. These performance measures may be used in "pay for performance" or similar programs that link clinician payment to quality of care as measured by guideline-derived parameters of clinical care. This use is an effort to deal with rising costs and the need for quality improvement. The concept is popular, but the evidence on whether it controls costs and/or improves health outcomes is mixed [78].

SOCIETY GUIDELINE LINKS — 

Links to society and government-sponsored guidelines from selected countries and regions around the world are provided separately. Specific guideline links topics can be found using the UpToDate search tool.

SUMMARY AND RECOMMENDATIONS

Overview – Clinical practice guidelines are recommendations for clinicians about the care of patients with specific conditions. Guideline development should involve a systematic review of the literature related to decision making for the targeted condition/question. Recommendations are based on the evidence and value judgments that should be explicitly identified as such. (See 'Overview' above.)

Use – Guidelines are suggestions for care, not rules. There will always be individual patients who should be managed differently for reasons including biologic differences (eg, rates of drug metabolism, strength of immune response, or genetics), comorbidities, availability of resources, cultural preferences, and patient preferences and values. For some health care decisions, differences in patient preferences on interventions and difference in patient values for various health outcomes can mean that there is no one course of care that can be strongly recommended. (See 'Guidelines do not replace clinical judgement' above.)

Quality – Guidelines vary widely in quality. Credible guidelines involve the following components (table 1) (see 'Components of credible guidelines' above):

Development by a panel representing a full range of expertise and key stakeholders (see 'Expertise in guideline panel membership' above)

An unbiased systematic review of the evidence (see 'Evidence-based' above)

Grading the strength of the evidence and recommendation (see 'Grading recommendations' above)

Incorporation of multiple relevant factors in addition to the balance of benefits and harms, including acceptability to the patient, feasibility of an intervention, as well as other implementation considerations, and costs (see 'Consider outcomes and implementation' above)

A process for ongoing review and updating (see 'Review process' above and 'Recency' above)

Implementation – While well-formulated guidelines can be an invaluable tool to guide best practice in medicine, they should not alone be considered a complete plan for quality improvement. Guidelines need to be delivered in the context of a program to engage patients and clinicians in appropriate decision making, supported by implementation strategies involving systems enhancements, clinical reminders, other quality improvement and decision support tools, and outcomes measurement and feedback. (See 'Impact of guidelines on practice' above.)

ACKNOWLEDGMENT — 

The UpToDate editorial staff acknowledges Robert H Fletcher, MD, MSc, who contributed to earlier versions of this topic review.

  1. Institute of Medicine (US) Committee on Standards for Developing Trustworthy Clinical Practice Guide. Current Best Practices and Proposed Standards for Development of Trustworthy CPGs. In: Clinical Practice Guidelines We Can Trust, Graham R, Mancher M, Miller Wolman D (Eds), National Academies Press, Washington DC 2011.
  2. US Preventive Services Task Force, Bibbins-Domingo K, Grossman DC, et al. Screening for Colorectal Cancer: US Preventive Services Task Force Recommendation Statement. JAMA 2016; 315:2564.
  3. Lin JS, Perdue LA, Henrikson NB, et al. Screening for Colorectal Cancer: Updated Evidence Report and Systematic Review for the US Preventive Services Task Force. JAMA 2021; 325:1978.
  4. Colorectal Cancer Screening: An Updated Decision Analysis for the U.S. Preventive Services Task Force, Knudsen AB, Rutter CM, Peterse EFP, Lietz AP, Seguin CL, Meester RGS, Perdue LA, Lin JS, Siegel RL, Doria-Rose VP, Feuer EJ, Zauber AG, Kuntz KM, Lansdorp-Vogelaar I. (Eds), Agency for Healthcare Research and Quality (US), Rockville (MD) 2021.
  5. Eddy DM, Adler J, Patterson B, et al. Individualized guidelines: the potential for increasing quality and reducing costs. Ann Intern Med 2011; 154:627.
  6. United States Preventive Services Task Force. Aspirin Use to Prevent Cardiovascular Disease: Preventive Medication. 2022. Available at: https://www.uspreventiveservicestaskforce.org/uspstf/recommendation/aspirin-to-prevent-cardiovascular-disease-preventive-medication (Accessed on July 11, 2024).
  7. Lin JS, Evans CV, Grossman DC, et al. Framework for Using Risk Stratification to Improve Clinical Preventive Service Guidelines. Am J Prev Med 2018; 54:S26.
  8. Welch VA, Akl EA, Guyatt G, et al. GRADE equity guidelines 1: considering health equity in GRADE guideline development: introduction and rationale. J Clin Epidemiol 2017; 90:59.
  9. Pottie K, Welch V, Morton R, et al. GRADE equity guidelines 4: considering health equity in GRADE guideline development: evidence to decision process. J Clin Epidemiol 2017; 90:84.
  10. Lin JS, Webber EM, Bean SI, Evans CV. Development of a Health Equity Framework for the US Preventive Services Task Force. JAMA Netw Open 2024; 7:e241875.
  11. Qaseem A, Lin JS, Mustafa RA, et al. Screening for Breast Cancer in Average-Risk Women: A Guidance Statement From the American College of Physicians. Ann Intern Med 2019; 170:547.
  12. Qaseem A, Barry MJ, Denberg TD, et al. Screening for prostate cancer: a guidance statement from the Clinical Guidelines Committee of the American College of Physicians. Ann Intern Med 2013; 158:761.
  13. Qaseem A, Harrod CS, Crandall CJ, et al. Screening for Colorectal Cancer in Asymptomatic Average-Risk Adults: A Guidance Statement From the American College of Physicians (Version 2). Ann Intern Med 2023; 176:1092.
  14. Lawson EH, Gibbons MM, Ingraham AM, et al. Appropriateness criteria to assess variations in surgical procedure use in the United States. Arch Surg 2011; 146:1433.
  15. Laine C, Taichman DB, Mulrow C. Trustworthy clinical guidelines. Ann Intern Med 2011; 154:774.
  16. Ransohoff DF, Pignone M, Sox HC. How to decide whether a clinical practice guideline is trustworthy. JAMA 2013; 309:139.
  17. Qaseem A, Forland F, Macbeth F, et al. Guidelines International Network: toward international standards for clinical practice guidelines. Ann Intern Med 2012; 156:525.
  18. Van der Wees P, Qaseem A, Kaila M, et al. Prospective systematic review registration: perspective from the Guidelines International Network (G-I-N). Syst Rev 2012; 1:3.
  19. Guidelines International Network. Available at: http://www.g-i-n.net/about-g-i-n/introduction (Accessed on November 21, 2017).
  20. Brouwers MC, Kerkvliet K, Spithoff K, AGREE Next Steps Consortium. The AGREE Reporting Checklist: a tool to improve reporting of clinical practice guidelines. BMJ 2016; 352:i1152.
  21. Chen Y, Yang K, Marušic A, et al. A Reporting Tool for Practice Guidelines in Health Care: The RIGHT Statement. Ann Intern Med 2017; 166:128.
  22. Appraisal of Guidelines for REsearch & Evaluation II: Advancing guideline development, reporting and evaluation in healthcare. 2013. Available at: https://www.agreetrust.org/wp-content/uploads/2013/10/AGREE-II-Users-Manual-and-23-item-Instrument_2009_UPDATE_2013.pdf (Accessed on July 11, 2024).
  23. Sniderman AD, Furberg CD. Why guideline-making requires reform. JAMA 2009; 301:429.
  24. Brook RH. Practice guidelines: to be or not to be. Lancet 1996; 348:1005.
  25. Akl EA, Welch V, Pottie K, et al. GRADE equity guidelines 2: considering health equity in GRADE guideline development: equity extension of the guideline development checklist. J Clin Epidemiol 2017; 90:68.
  26. Agency for Healthcare Research and Quality. The Guide to Clinical Preventive Services 2014. Available at: https://www.ahrq.gov/sites/default/files/wysiwyg/professionals/clinicians-providers/guidelines-recommendations/guide/cpsguide.pdf (Accessed on February 05, 2015).
  27. Agency for Healthcare Research and Quality. Nominate a New U.S. Preventive Services Task Force Member. 2024. Available at: https://www.ahrq.gov/cpi/about/otherwebsites/uspstf/nominate.html (Accessed on July 11, 2024).
  28. US Preventive Services Task Force, Davidson KW, Mangione CM, et al. Actions to Transform US Preventive Services Task Force Methods to Mitigate Systemic Racism in Clinical Preventive Services. JAMA 2021; 326:2405.
  29. Montori VM, Brito JP, Murad MH. The optimal practice of evidence-based medicine: incorporating patient preferences in practice guidelines. JAMA 2013; 310:2503.
  30. Vernooij RWM, Lytvyn L, Pardo-Hernandez H, et al. Values and preferences of men for undergoing prostate-specific antigen screening for prostate cancer: a systematic review. BMJ Open 2018; 8:e025470.
  31. National Institute for Health and Care Excellence. NICE’s approach to public involvement in guidance and standards: a practical guide. 2015. Available at: https://www.nice.org.uk/media/default/About/NICE-Communities/Public-involvement/Public-involvement-programme/PIP-process-guide-apr-2015.pdf (Accessed on July 11, 2024).
  32. Lee DH, Vielemeyer O. Analysis of overall level of evidence behind Infectious Diseases Society of America practice guidelines. Arch Intern Med 2011; 171:18.
  33. Tricoci P, Allen JM, Kramer JM, et al. Scientific evidence underlying the ACC/AHA clinical practice guidelines. JAMA 2009; 301:831.
  34. McAlister FA, van Diepen S, Padwal RS, et al. How evidence-based are the recommendations in evidence-based guidelines? PLoS Med 2007; 4:e250.
  35. Fletcher RH. Building the Evidence. In: Implementing Clinical Practice Guidelines, Margolis CZ, Cretin S (Eds), AHA Press, Chicago 1998.
  36. Deresinski S, File TM Jr. Improving clinical practice guidelines--the answer is more clinical research. Arch Intern Med 2011; 171:1402.
  37. Li T, Vedula SS, Scherer R, Dickersin K. What comparative effectiveness research is needed? A framework for using guidelines and systematic reviews to identify evidence gaps and research priorities. Ann Intern Med 2012; 156:367.
  38. Guyatt G, Vist G, Falck-Ytter Y, et al. An emerging consensus on grading recommendations? ACP J Club 2006; 144:A8.
  39. Guyatt G, Gutterman D, Baumann MH, et al. Grading strength of recommendations and quality of evidence in clinical guidelines: report from an american college of chest physicians task force. Chest 2006; 129:174.
  40. Guyatt GH, Oxman AD, Schünemann HJ, et al. GRADE guidelines: a new series of articles in the Journal of Clinical Epidemiology. J Clin Epidemiol 2011; 64:380.
  41. Guyatt GH, Helfand M, Kunz R. Comparing the USPSTF and GRADE approaches to recommendations. Ann Intern Med 2009; 151:363; author reply 363.
  42. Jaeschke R, Guyatt GH, Dellinger P, et al. Use of GRADE grid to reach decisions on clinical practice guidelines when consensus is elusive. BMJ 2008; 337:a744.
  43. United States Preventive Services Task Force Grade Definitions. Available at: https://www.uspreventiveservicestaskforce.org/Page/Name/grade-definitions (Accessed on May 08, 2018).
  44. Alonso-Coello P, Schünemann HJ, Moberg J, et al. GRADE Evidence to Decision (EtD) frameworks: a systematic and transparent approach to making well informed healthcare choices. 1: Introduction. BMJ 2016; 353:i2016.
  45. Shekelle PG. Updating practice guidelines. JAMA 2014; 311:2072.
  46. Shekelle PG, Ortiz E, Rhodes S, et al. Validity of the Agency for Healthcare Research and Quality clinical practice guidelines: how quickly do guidelines become outdated? JAMA 2001; 286:1461.
  47. Shojania KG, Sampson M, Ansari MT, et al. How quickly do systematic reviews go out of date? A survival analysis. Ann Intern Med 2007; 147:224.
  48. National Guideline Clearinghouse (NGC) Inclusion Criteria. Available at: https://www.ahrq.gov/gam/summaries/inclusion-criteria/index.html (Accessed on August 27, 2018).
  49. Akl EA, Meerpohl JJ, Elliott J, et al. Living systematic reviews: 4. Living guideline recommendations. J Clin Epidemiol 2017; 91:47.
  50. Qaseem A, Yost J, Etxeandia-Ikobaltzeta I, et al. What Is the Antibody Response and Role in Conferring Natural Immunity After SARS-CoV-2 Infection? Rapid, Living Practice Points From the American College of Physicians (Version 1). Ann Intern Med 2021; 174:828.
  51. Qaseem A, Yost J, Etxeandia-Ikobaltzeta I, et al. Should Remdesivir Be Used for the Treatment of Patients With COVID-19? Rapid, Living Practice Points From the American College of Physicians (Version 2). Ann Intern Med 2021; 174:673.
  52. Li T, Michaels M. Living Practice Guidelines Require Robust and Continuous Iteration and Uptake. Ann Intern Med 2022; 175:1193.
  53. El Mikati IK, Khabsa J, Harb T, et al. A Framework for the Development of Living Practice Guidelines in Health Care. Ann Intern Med 2022; 175:1154.
  54. Schünemann HJ, Al-Ansary LA, Forland F, et al. Guidelines International Network: Principles for Disclosure of Interests and Management of Conflicts in Guidelines. Ann Intern Med 2015; 163:548.
  55. Jones DJ, Barkun AN, Lu Y, et al. Conflicts of interest ethics: silencing expertise in the development of international clinical practice guidelines. Ann Intern Med 2012; 156:809.
  56. Taylor R, Giles J. Cash interests taint drug advice. Nature 2005; 437:1070.
  57. Okike K, Kocher MS, Wei EX, et al. Accuracy of conflict-of-interest disclosures reported by physicians. N Engl J Med 2009; 361:1466.
  58. Choudhry NK, Stelfox HT, Detsky AS. Relationships between authors of clinical practice guidelines and the pharmaceutical industry. JAMA 2002; 287:612.
  59. Campsall P, Colizza K, Straus S, Stelfox HT. Financial Relationships between Organizations That Produce Clinical Practice Guidelines and the Biomedical Industry: A Cross-Sectional Study. PLoS Med 2016; 13:e1002029.
  60. Whitlock EP, Lin JS, Liles E, et al. Screening for colorectal cancer: a targeted, updated systematic review for the U.S. Preventive Services Task Force. Ann Intern Med 2008; 149:638.
  61. Levin B, Lieberman DA, McFarland B, et al. Screening and surveillance for the early detection of colorectal cancer and adenomatous polyps, 2008: a joint guideline from the American Cancer Society, the US Multi-Society Task Force on Colorectal Cancer, and the American College of Radiology. CA Cancer J Clin 2008; 58:130.
  62. Ferket BS, Spronk S, Colkesen EB, Hunink MG. Systematic review of guidelines on peripheral artery disease screening. Am J Med 2012; 125:198.
  63. Czaja R, McFall SL, Warnecke RB, et al. Preferences of community physicians for cancer screening guidelines. Ann Intern Med 1994; 120:602.
  64. Sackett DL, Rosenberg WM, Gray JA, et al. Evidence based medicine: what it is and what it isn't. BMJ 1996; 312:71.
  65. Wilson MC, Hayward RS, Tunis SR, et al. Users' guides to the Medical Literature. VIII. How to use clinical practice guidelines. B. what are the recommendations and will they help you in caring for your patients? The Evidence-Based Medicine Working Group. JAMA 1995; 274:1630.
  66. Guthrie B, Payne K, Alderson P, et al. Adapting clinical guidelines to take account of multimorbidity. BMJ 2012; 345:e6341.
  67. Lugtenberg M, Burgers JS, Clancy C, et al. Current guidelines have limited applicability to patients with comorbid conditions: a systematic analysis of evidence-based guidelines. PLoS One 2011; 6:e25987.
  68. Farmer AP, Légaré F, Turcot L, et al. Printed educational materials: effects on professional practice and health care outcomes. Cochrane Database Syst Rev 2008; :CD004398.
  69. Fonarow GC, Yancy CW, Hernandez AF, et al. Potential impact of optimal implementation of evidence-based heart failure therapies on mortality. Am Heart J 2011; 161:1024.
  70. Lugtenberg M, Zegers-van Schaick JM, Westert GP, Burgers JS. Why don't physicians adhere to guideline recommendations in practice? An analysis of barriers among Dutch general practitioners. Implement Sci 2009; 4:54.
  71. Burgers JS, Grol RP, Zaat JO, et al. Characteristics of effective clinical guidelines for general practice. Br J Gen Pract 2003; 53:15.
  72. Grol R, Dalhuijsen J, Thomas S, et al. Attributes of clinical guidelines that influence use of guidelines in general practice: observational study. BMJ 1998; 317:858.
  73. Barkun AN, Bhat M, Armstrong D, et al. Effectiveness of disseminating consensus management recommendations for ulcer bleeding: a cluster randomized trial. CMAJ 2013; 185:E156.
  74. Grol R. Successes and failures in the implementation of evidence-based guidelines for clinical practice. Med Care 2001; 39:II46.
  75. Grimshaw JM, Thomas RE, MacLennan G, et al. Effectiveness and efficiency of guideline dissemination and implementation strategies. Health Technol Assess 2004; 8:iii.
  76. Damschroder LJ, Aron DC, Keith RE, et al. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci 2009; 4:50.
  77. Pronovost PJ. Enhancing physicians' use of clinical guidelines. JAMA 2013; 310:2501.
  78. Mendelson A, Kondo K, Damberg C, et al. The Effects of Pay-for-Performance Programs on Health, Health Care Use, and Processes of Care: A Systematic Review. Ann Intern Med 2017; 166:341.
Topic 2767 Version 49.0
