
Real-world evidence in health care

Literature review current through: Jan 2024.
This topic last updated: Aug 31, 2023.

INTRODUCTION — Real-world evidence (RWE) is evidence derived from data produced during the routine provision of clinical care to patients that can inform causal inferences regarding the effects of health interventions. RWE is thought to complement evidence on interventions gained from randomized controlled trials by providing information on their effectiveness and safety in routine clinical practice. RWE has attracted much attention from regulatory agencies, payers, and clinicians in recent years.

This topic will review the strengths and limitations of RWE and outline the uses of RWE in health care.

Other aspects of evidence-based medicine are discussed separately:

(See "Evidence-based medicine".)

(See "Hypothesis testing in clinical research: Proof, p-values, and confidence intervals".)

(See "Systematic review and meta-analysis".)

DEFINITIONS

Real-world data (RWD) and real-world evidence (RWE) – RWD and RWE are separate but related terms [1]. They are defined as follows (figure 1):

RWD is data relating to patient health status and/or the delivery of health care, routinely collected from a variety of sources. RWD therefore differs from data collected primarily for research purposes. Data sources frequently used for RWD are summarized in the table (table 1) and discussed below. (See 'Sources of RWD' below.)

RWE is evidence about the usage and potential benefits or risks of a medical product derived from analysis of RWD, irrespective of the source of the RWD or study design.

RWE can be generated from different types of studies that rely on RWD to variable degrees. Observational (or noninterventional) studies, which rely entirely on RWD, are the main sources of RWE. Occasionally, randomized clinical trials (RCTs) capture RWD. Additionally, RWD may be used in planning a highly controlled RCT to assess enrollment criteria and trial feasibility. Pragmatic clinical trials can also rely on RWD, for example to capture selected outcomes (eg, electronic health records data, claims data, or data from other digital health technologies). Thus, pragmatic trials can generate RWE. All of this contributes to some of the confusion regarding the terms RWE and RWD.

Terms that can overlap or be confused with RWE

Observational studies – Most RWE studies are observational (or noninterventional) in nature. Observational studies are those in which health outcomes are assessed in patients receiving interventions (eg, medications, procedures, diagnostic tests, medical devices, other medical products) as part of their routine medical care [2]. Participants are not assigned to a specific intervention according to an investigator-specified research protocol (as in a clinical trial), thus the term "noninterventional study" is sometimes used to describe observational studies. The terms RWE and observational (or noninterventional) studies are commonly used interchangeably (as in this topic). Different study designs can be used within the category of observational studies (eg, case-control). (See "Glossary of common biostatistical and epidemiological terms", section on 'Study designs'.)

Clinical trials – A defining feature that distinguishes a clinical trial from an observational study is assignment of the intervention according to an investigator-specified research protocol, usually involving randomization [3]. However, not all clinical trials are randomized. For example, in single-arm trials with an external control group, all selected participants are assigned to the same intervention without randomization. Thus, single-arm trials face similar challenges as observational (or noninterventional) studies to establish causality. (See "Glossary of common biostatistical and epidemiological terms", section on 'Randomized controlled trial'.)

Gray literature – A misconception may exist with respect to the difference between gray (or grey) literature and RWE. The term "gray literature" is variably defined, but generally includes the following [4]:

- Conference abstracts

- Book chapters

- Unpublished data

- Academic papers (eg, theses and dissertations)

- Policy documents

- Personal correspondence

The term gray literature applies to a wide range of formats and scopes. It can apply to both unpublished RWE and unpublished evidence from RCTs. Thus, gray literature can represent an additional source of evidence used in systematic reviews and meta-analysis. (See "Systematic review and meta-analysis", section on 'The literature search'.)

HOW RWD ARE GENERATED AND USED FOR RESEARCH

Sources of RWD — Most RWE studies are conducted with transactional data (ie, data routinely generated during the provision of care) (table 1) [5-7].

Contemporary electronic systems used in health care delivery provide rich individual-level information that can be linked through patient identifiers, creating powerful longitudinal databases that are used in RWE studies [6]. The data contributing to these databases generally come from health insurance claims, electronic health records, and registries. Other sources of information outside of the traditional health care system, such as patient-reported data from wearable technologies, are increasingly used for RWE research.

However, given that the primary purpose of collecting these data is not to address a clinical research question, their use for research purposes has inherent limitations, including incompleteness, inaccuracy, and risk of bias. In addition, the process of linking different electronic databases to address gaps of individual databases is often challenging. (See 'Limitations of RWE' below.)

Several reasons contribute to the popularity of these data in RWE research:

These data may reflect a broader population than the ones included in most randomized controlled trials (RCTs).

The data are generated more rapidly and at much less cost compared with the information generated in the majority of RCTs or other investigations using primary data.

Individuals are not exposed to experimentation.

Such sources can accurately capture exposures and health events, without depending on subjects' recall.

By contrast, most RCTs rely on primary data collection, in which the researchers specify the outcomes and other variables that will be measured, their definitions, and the timing for their measurement. Primary data collection improves the quality and completeness of the data. However, this can be time consuming, cost prohibitive, or ethically unfeasible for some research questions. Prospective observational (or noninterventional) studies that collect primary data are sparse for the same reasons. Examples include the Framingham Heart Study and the Nurses' Health Study.

Generating RWE from RWD — As an example of the process through which longitudinal electronic health care data (RWD) are extracted and used in a RWE study, consider a hypothetical RWE study evaluating the effectiveness of drug A versus drug B. The process involves the following steps [8]:

Health care encounters and associated services produce a longitudinal dynamic data stream for each individual receiving care, using a unique identifier and the exact dates of services provided [9].

The RWE study selects a section of these data streams for a specific time period and sets these data aside.

The selected section includes multiple longitudinal patient data streams or records, with diagnostic and procedural codes identifying specific encounters and services.

Data are extracted from each longitudinal record using the principles of "patient event time scale." This means that the date that each patient qualifies for the study becomes their primary anchor in the study. This is called the cohort entry date, which is similar to the time of randomization in an RCT. In this example, the cohort entry date would be the date the patient filled a prescription for drug A or drug B.

Secondary events are identified relative to the cohort entry date. In the time window prior to the cohort entry date (sometimes called the "washout window"), exclusion criteria are evaluated. For example, patients may be excluded if they received drug A or B during the washout window or if they had an outcome of interest during the washout window. Baseline characteristics (prognostic factors, comorbidities) are also assessed in the time window before the cohort entry date. During the period of follow-up after cohort entry, outcomes of interest are measured.
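The anchoring logic in the steps above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration: the record structure, event codes, study start date, and 180-day washout window are all invented for the example; a real RWE study would operate on claims or electronic health record extracts with validated code lists.

```python
from datetime import date, timedelta

# Hypothetical longitudinal data streams: one event list per patient
# identifier. Event codes, dates, and the 180-day washout are invented.
records = {
    "pt01": [(date(2022, 1, 10), "rx_drug_A"), (date(2022, 6, 1), "outcome")],
    "pt02": [(date(2021, 9, 5), "rx_drug_B"), (date(2022, 2, 14), "rx_drug_B")],
    "pt03": [(date(2022, 3, 3), "rx_drug_A")],
}

WASHOUT = timedelta(days=180)

def build_cohort(records, study_start):
    """Anchor each patient at the first qualifying fill on or after study_start
    (the cohort entry date), exclude prevalent users with a fill during the
    washout window, and flag outcomes during follow-up."""
    cohort = []
    for pid, events in records.items():
        fills = sorted((d, e) for d, e in events if e.startswith("rx_drug_"))
        qualifying = [(d, e) for d, e in fills if d >= study_start]
        if not qualifying:
            continue
        entry_date, drug = qualifying[0]  # analogous to the time of randomization
        # Washout: any fill in the window before cohort entry excludes the patient.
        if any(entry_date - WASHOUT <= d < entry_date for d, _ in fills):
            continue
        had_outcome = any(d > entry_date and e == "outcome" for d, e in events)
        cohort.append({"id": pid, "drug": drug, "entry": entry_date,
                       "outcome": had_outcome})
    return cohort

cohort = build_cohort(records, study_start=date(2022, 1, 1))
# pt02 is excluded as a prevalent user (a prior fill 162 days before cohort entry).
```

In a full study, the same pre-entry window would also be scanned for baseline characteristics and outcome-based exclusions, exactly as described in the text.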

LIMITATIONS OF RWE — Real-world data (RWD) are not usually collected primarily for research purposes. Rather, they are data collected for other purposes (eg, insurance claims, electronic health records) that are then used secondarily for research. As such, there are several limitations to using RWD for research purposes [10]:

Health interventions are not assigned at random in routine care. Patients selected for a certain treatment often have fundamental differences from those who do not receive the treatment. This introduces a risk of confounding. (See 'Comparability of the treatment groups' below.)

Data may be inaccurate and/or critical information may be missing.

Since the data informing the study are often already fully collected by the time the study is planned, there is potential for investigators to mine the data with multiple analyses without a specific hypothesis. This approach deviates from established principles for drawing causal inferences, which require forming a priori hypotheses before performing the research study. Registration of RWE studies allows researchers to specify a priori a specific hypothesis (or hypotheses), the planned analyses, and potential revisions to the plan. This improves transparency and increases confidence in RWE studies' results. (See 'Transparency' below.)

Because of these limitations, observational or RWE studies with findings that contradict those of randomized controlled trials (RCTs) are often met with justified skepticism [6]. Examples include postmenopausal hormone replacement therapy, which was suggested to decrease cardiovascular risk in large observational studies [11] but was subsequently found to increase such risk [12]; vitamin E consumption, which was also suggested to lower cardiovascular disease risk [13] but whose benefit could not be replicated in a large RCT [14]; and the association between statin use and a considerable reduction in the risk of bone fractures and dementia in observational studies [15], which was not reproduced in RCTs [16].

Nevertheless, RWE studies can provide valid results, and there are several examples of RWE findings that were later confirmed by RCTs [17]. However, assessing the validity of RWE findings can be challenging, especially for professionals who are unfamiliar with the underlying methodology [18,19]. This may limit the uptake of RWE studies for informing clinical decision-making.

ASSESSING THE QUALITY OF A RWE STUDY — When considering whether the results of a RWE study should impact clinical decision-making, three major aspects must be considered [20]:

The question of interest must be answerable through the research question addressed by the RWE study (see 'Does the study address the clinical question?' below)

Suitable data must be available to answer the question (see 'Appropriateness of RWD at hand' below)

The RWE study must be designed and analyzed using appropriate methodology to minimize the risk of bias (see 'Assessing risk of bias' below)

Does the study address the clinical question? — To evaluate how relevant the RWE study is to the clinical question of interest, we use the PICO framework [21]:

Population

Intervention(s)

Comparator(s)

Outcome(s)

Other elements that are sometimes included in the PICO framework include Timing (PICOT), Setting (PICOS), and Study Design (PICOD).

The same framework is used when evaluating evidence from clinical trials. For example, the PICO framework can help determine whether the findings from a study addressing a specific age group can be extrapolated to broader age groups. A study may be appropriate to answer some questions but not others. Additional details about the PICO framework are provided separately. (See "Evidence-based medicine", section on 'Formulating a clinical question'.)
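The PICO comparison above can be made concrete with a trivially small sketch that checks a candidate study's PICO elements against the clinical question of interest. All field values below are invented for illustration.

```python
# Hypothetical PICO alignment check: does a candidate study address the
# clinical question of interest? All values are invented.
question = {"P": "adults 65+ with type 2 diabetes", "I": "drug_A",
            "C": "drug_B", "O": "major cardiovascular events"}
study = {"P": "adults 65+ with type 2 diabetes", "I": "drug_A",
         "C": "placebo", "O": "major cardiovascular events"}

# Elements where the study diverges from the question.
mismatches = [k for k in "PICO" if question[k] != study[k]]
```

Here the study matches on population, intervention, and outcome but uses a different comparator, so it may be appropriate for some questions (eg, effect versus placebo) but not for the head-to-head question posed.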

Appropriateness of RWD at hand — Considerations around reliability and relevance can guide the assessment of the appropriateness of the data source(s) being used [22,23].

Data reliability – In the context of a RWE study, reliability refers to whether the data at hand are complete and accurate with respect to the key data elements for the specific research question [24]. This definition is distinct from the more common use of the term reliability (eg, in the setting of statistical or diagnostic testing) which refers to the consistency of a measure over repeated testing.

Important considerations for assessing reliability of the data source include the modality of data collection, data cleansing and maintenance procedures, and quality control measures to prepare and maintain the research database. Data reliability can be formally assessed with validation studies. Assessing whether data elements align with expectation (eg, whether the observed prevalence of a medical condition aligns with the expected prevalence in the population) can also help assess data reliability.
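One of the expectation checks described above can be expressed as a simple comparison of observed against expected prevalence. The function and all figures below are hypothetical; in practice the expected range would come from external epidemiologic sources.

```python
# Sanity check: does the observed prevalence of a condition in the candidate
# database fall within an externally expected range? All numbers are invented.
def prevalence_in_range(n_cases, n_patients, expected_low, expected_high):
    observed = n_cases / n_patients
    return expected_low <= observed <= expected_high, observed

# Eg, ~11% observed prevalence checked against an expected 9-13% range.
ok, observed = prevalence_in_range(11_200, 100_000, 0.09, 0.13)
```

A prevalence far outside the expected range would prompt scrutiny of the capture and coding of that condition before the database is used for the study.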

Relevance – A data source is considered relevant when the data elements captured in it are deemed sufficient to answer the research question of interest [22]. The research database should generally meet all the following criteria:

It should contain information pertaining to the target population during the relevant time period.

It should include a sufficient number of qualifying individuals who meet criteria for study entry.

It should have sufficient follow-up time.

It should capture key parameters necessary for the study (criteria for inclusion, exposure of interest, outcomes, and characteristics of study participants).

Most research databases relying on secondary data do not completely capture all key parameters because investigators do not control which data elements are collected or how or when they are recorded. For this reason, proxy measurements are often used if they are reasonably close to the desired parameters. Studies relying on proxy measurements can yield findings that closely approximate those from studies using primary data [5]. Previous experience with similar data sources can help ensure that measurements of study parameters are used appropriately. In addition, the accuracy of these measurements can be quantified by accepted metrics (eg, sensitivity and specificity). Nevertheless, as proxy measurements are surrogates for the outcomes of interest, the certainty of the findings may be lower (referred to as indirectness in the GRADE framework).
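As a concrete illustration of the accuracy metrics mentioned above, the snippet below computes sensitivity, specificity, and positive predictive value for a hypothetical claims-based proxy validated against chart review; all counts are invented.

```python
# Hypothetical validation of a claims-code proxy against chart review
# (the reference standard). Counts are invented for illustration.
tp, fp = 86, 9     # proxy positive: chart-confirmed cases / non-cases
fn, tn = 14, 891   # proxy negative: missed cases / true non-cases

sensitivity = tp / (tp + fn)   # share of true cases the proxy captures
specificity = tn / (tn + fp)   # share of non-cases correctly excluded
ppv = tp / (tp + fp)           # probability a proxy-positive is a true case
```

A proxy with high specificity and positive predictive value but modest sensitivity, as here, may be acceptable for outcome ascertainment but problematic for defining study eligibility.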

Assessing risk of bias — Many different study designs can be used for RWE studies, each with features that make them more (or less) well-suited to address a specific research question [6,25,26]. When choosing the study design for a RWE study, the main objective is to minimize the potential biases of nonrandom assignment of the health intervention of interest (confounding) and the lack of systematic or protocolized ascertainment of outcomes (ascertainment bias and misclassification bias) [17]. Bias is any systematic error that can produce a misleading estimate of the true effect.

It is critical to assess the potential sources of bias in a RWE study which could undermine its validity. Bias may arise from the specific study design or the analysis implemented [10,25]. The following sections discuss different strategies that researchers use to reduce confounding and other biases in RWE studies and observational (noninterventional) studies in general. It is important to recognize, however, that these techniques are not perfect and investigators can only adjust for confounders that were measured. Residual confounding is an inherent limitation in all observational (or noninterventional) studies.

While these issues are important limitations of RWE studies, it should be noted that there are known limitations of RCTs (eg, highly selected trial populations, highly controlled clinical settings within which RCTs are conducted, long times and high costs for RCT execution, difficult ethical considerations with respect to experimentation, and challenges in identifying and quantifying the risk of rare adverse events). As such, evidence from appropriately designed and well-conducted RWE studies can complement evidence from RCTs.

Emulating a hypothetical randomized trial to decrease bias — One proposed strategy to reduce bias in RWE studies is to emulate a hypothetical target RCT [27,28], although this may not fully address confounding and other biases. (See 'RCT replication projects' below.)

This framework can also be useful for clinicians and other stakeholders when reviewing the results of the RWE study since it can identify major biases in the study design (eg, time-related bias, depletion of outcome-susceptible individuals, reverse causation, confounding) [10,25,29-31].

Many of these biases can be limited by a careful definition of the cohort entry date, which is the date a subject qualifies to enter the study (sometimes called time 0) [32-34]. The cohort entry date is analogous to the time of randomization in a RCT. It is the primary temporal anchor in RWE studies and it is critical in the appraisal of the study's design. All secondary data elements (criteria for inclusion and exclusion, exposure of interest, covariates, follow-up, and study outcomes) are measured relative to this anchor. A schematic of a RWE study design, either provided by the study itself or constructed based on the reported information, can help assess and interpret the validity of a RWE study [8,35].

To further improve clarity about the RWE study's design, its study parameters can be laid out alongside the components of the corresponding hypothetical target trial. Examples of this approach include RWE studies evaluating the effectiveness of cancer screening [36,37] and messenger RNA-based vaccines against coronavirus disease 2019 (COVID-19) [38]. Mapping out how and when the key study parameters were measured with respect to the time of cohort entry and other time points can help identify misalignment of study design choices with respect to the hypothetical target trial [39].

Comparability of the treatment groups — RWE studies are at risk of confounding if treatment groups differ in patient characteristics that may influence the outcomes of interest.

Thus, in a RWE study, the type of comparator or control condition selected can have a large impact on confounding, and thus study validity [40]. Assessing the appropriateness of the chosen comparator is critical. The optimal approach to reduce the risk of confounding is to select an active treatment comparator with similar indications, treatment modality, and availability as the intervention of interest [41-43].

Various methods are used to control for confounding in observational/RWE studies. Commonly used methods include:

Propensity score analysis – Propensity score analysis can be used in RWE studies in an attempt to improve the balance of patient characteristics between treatment groups. This approach allows the investigators to simultaneously account for a large number of potential confounders and confounder proxies, even if the outcome of interest is rare. Several strategies exist for using propensity scores to achieve balance in patient characteristics between treatment groups, including matching, fine-stratification, or weighting by propensity score [44].

Traditional multivariate regression analysis – This is discussed separately. (See "Glossary of common biostatistical and epidemiological terms", section on 'Multivariate analysis'.)

Restricted population – Another strategy for creating comparable treatment groups is to restrict the study population through a specific criterion and evaluate the association between the exposures (the different treatments) and outcome of interest in the restricted population. Examples include restricting the study population to people with a particular condition (eg, diabetes) or age category (eg, patients 65 years or older), or to people with a similar propensity score. Of note, the implications of restriction on the generalizability of findings need to be carefully considered [45].
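To make the propensity score idea concrete, here is a deliberately tiny sketch with a single binary confounder, estimating the score empirically and applying inverse-probability-of-treatment weighting (IPTW). The data are invented; a real analysis would fit a regression model over many covariates rather than tabulate one stratum.

```python
# Toy propensity-score example: one binary confounder ("comorbidity"), with
# invented data in which sicker patients are more likely to be treated.
rows = [  # (comorbidity, treated, outcome)
    (1, 1, 1), (1, 1, 0), (1, 1, 0), (1, 0, 1),
    (0, 1, 0), (0, 0, 0), (0, 0, 0), (0, 0, 1), (0, 0, 0), (0, 0, 0),
]

def propensity(rows):
    """Empirical P(treated | comorbidity level): the propensity score."""
    return {
        level: sum(t for c, t, _ in rows if c == level)
               / sum(1 for c, _, _ in rows if c == level)
        for level in (0, 1)
    }

def iptw_risk(rows, ps, arm):
    """Outcome risk in one arm after inverse-probability-of-treatment weighting."""
    num = den = 0.0
    for c, t, y in rows:
        if t != arm:
            continue
        w = 1 / ps[c] if t == 1 else 1 / (1 - ps[c])  # IPTW weight
        num += w * y
        den += w
    return num / den

ps = propensity(rows)                 # treated more often when comorbidity = 1
risk_treated = iptw_risk(rows, ps, 1)
risk_untreated = iptw_risk(rows, ps, 0)
```

Weighting creates a pseudo-population in which the measured confounder is balanced across arms; as the text notes, unmeasured confounders remain unaddressed by any of these techniques.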

Assessment tools — It can be challenging to assess the validity of RWE studies, particularly for professionals who are unfamiliar with the methodologies used in these studies. Several appraisal tools have been developed to guide the assessment of the validity of RWE and to aid both researchers and other stakeholders involved in decision making [46]. Among the available tools, ROBINS-I is one of the most frequently used for observational studies [47].

However, a systematic review found that most of the existing tools used to appraise the quality of observational studies do not adequately address major sources of bias that can severely undermine the validity of RWE studies on treatments (eg, lack of a new-user, active-comparator design, or time-related bias) [46].

Transparency — Transparent reporting of how RWE studies are conducted is critical to instill confidence in using evidence from such studies to inform decisions [48].

Study registration – One method for improving transparency and increasing confidence in RWE studies' results is to encourage registration of RWE studies that assess treatment effectiveness [49,50]. Registration includes specifying a priori planned analysis and detailing the potential revisions to the plan.

Existing platforms to register noninterventional studies include the EU Register of Post-Authorisation Studies, run by the European Network of Centres for Pharmacoepidemiology and Pharmacovigilance (ENCePP.eu), and ClinicalTrials.gov, hosted by the National Library of Medicine [51]. Registering a RWE study on these platforms enables potential reviewers to examine the study design and the analytical strategy planned a priori and to easily detect whether changes to these were made after the protocol was submitted, possibly in response to early study results.

Transparent reporting and reproducibility – Professional societies have agreed upon a list of important study parameters that should be reported in detail in order for a RWE study to be reproducible and reviewable for validity assessment [52]. Reporting guidelines and structured templates have also been proposed to help clearly report and review RWE studies [8,53-55].

USES OF RWE

RWE as a complement to RCT evidence — RWE is intended to complement randomized controlled trials (RCTs) by providing information on the effectiveness and safety of health interventions in clinical practice. Interest in RWE stemmed from the limitations of RCTs, which often address narrow questions in highly selected populations, potentially limiting generalizability. Patients in RCTs often do better than other patients, even if assigned to placebo, due in part to increased monitoring and adherence while on a clinical trial and to the greater expertise of the treating clinicians and centers conducting the RCTs compared with a typical real-world setting. In addition, RCTs are costly, time-consuming, and pose ethical challenges related to experimentation.

Pragmatic RCTs are designed to assess an intervention in settings that are more representative of routine clinical practice [56,57]. Relative to highly controlled clinical trials, they can increase the generalizability of study findings. As previously discussed, pragmatic RCTs can integrate real-world data (RWD; eg, to identify selected trial outcomes) and thus can produce RWE. However, pragmatic RCTs still entail high costs, long completion times, and exposure of participants to experimentation.

Health care providers have many more questions than existing RCTs are able to answer. For example, existing RCTs may not provide information on a specific population, relevant comparator, dose, drug combination, or outcome. RWE may be able to address some of these unanswered questions, by leveraging the variation of care observed in clinical practice, without the high cost and the long times required by RCTs, and without exposing study participants to experimentation.

RWE can complement evidence from RCTs by providing information in various therapeutic areas. Examples include conditions for which typical parallel-arm RCTs may be unethical or not feasible, such as oncology and rare diseases, and conditions requiring urgent assessment of candidate treatment options (eg, COVID-19).

RWE can contribute to knowledge gaps in oncology to improve clinical decision-making, including but not limited to treatment utilization patterns, effectiveness, and adverse events of oncological treatments [58,59]. The American Society of Clinical Oncology guidance framework acknowledged RWE as an important aspect of clinical oncology research, in order to answer questions not addressed in RCTs [60]. Two recent oncologic clinical trial emulations showed encouraging results with respect to the capacity to replicate RCT findings, and thus produce potentially actionable evidence [61,62].

Similarly, RWE plays an important role as a source of evidence to substantiate clinical and regulatory decision making in the setting of rare diseases [63]. To facilitate the development of safe and effective treatments for rare diseases, the FDA published a guidance document on the design and implementation of RWE studies that can be used in this setting (ie, natural history studies) [64]. According to this document, data derived from such RWE studies may serve as an external control for clinical trials, provided careful planning and assessment has been performed, including demonstration of comparability between treated and control groups [63]. (See 'Comparability of the treatment groups' above.)

Finally, there has been growing interest in whether RWE could supplement available information from RCTs and produce actionable clinical evidence in the setting of the COVID-19 pandemic. During the height of the COVID-19 pandemic, there was an urgent need to rapidly identify effective medical interventions, including vaccines and medications. A plethora of RWE studies responded to this knowledge gap with widely variable results [65]. This raised concerns regarding the role of RWE for clinical decision making [39]. As is the case for other areas, RWE studies assessing the safety and effectiveness of interventions for COVID-19 should abide by well-established principles for ensuring high-quality design and analysis of noninterventional (observational) studies [38,66]. These principles are discussed above. (See 'Assessing the quality of a RWE study' above.)

RWE for regulatory decision making — In the United States, the 21st Century Cures Act in 2016 and the sixth reauthorization of the Prescription Drug User Fee Act in 2017 encouraged the use of RWE to regulate medical products and to support clinical decision making on treatments. Both acts tasked the FDA with preparing guidance regarding which settings and modalities RWE may be used in to inform the safety and the effectiveness of therapeutics in regulatory applications [67,68]. Since then, the FDA has been issuing a series of draft documents providing guidance on the use of RWE to inform regulatory decision making, through which the agency expects that drug sponsors, health care providers, patients, and the general public will have a better understanding of how RWD and RWE can fit into the regulatory process [69].

Similar initiatives are currently ongoing in other countries. The European Medicines Agency (EMA) included RWE research programs as part of the regulatory science research needs [70]. Specific goals for the use of RWE to support regulatory decision making are described in the EMA Network strategy to 2025 [71] and have fueled the creation of a platform to access and analyze RWD from across the EU, the EU Data Analytics and Real-World Interrogation Network (DARWIN EU) [72]. Health Canada and the Canadian Agency for Drugs and Technologies in Health (CADTH) launched an initiative to integrate RWE throughout the life cycle of drugs and announced the intention to co-develop an action plan to optimize the process for the systematic use and integration of RWE into both regulatory and reimbursement decision-making in Canada [73].

Post-approval safety studies — Regulatory agencies have generally relied on RWE for postmarket evaluation of the safety of approved medical products. For example, RWE substantiated the signal of increased risk of cardiovascular events with rofecoxib [74], mitigated the concerns about the risk of bleeding with dabigatran [75,76], and quantified the increased risk of diabetic ketoacidosis with sodium-glucose cotransporter-2 inhibitors [77,78].

In 2008, the FDA launched the Sentinel Initiative, which developed a national system of distributed electronic health care databases, to support the rapid assessment of the safety of medical products [79]. RWE studies assessing the safety of medical products can be part of the risk management strategy at the time of approval or be prompted by an unanticipated safety signal arising after marketing.

A key reason that regulatory agencies have been relying on RWE studies to investigate the safety of medical products is that the risk of confounding is expected to be small. Since most adverse events are not expected by the prescriber, there is little or no expectation that the choice of a treatment is driven by the risk of the potential adverse events associated with it [7]. Regulatory agencies also recognize the value of RWE studies in the setting of adverse events that may be too infrequent for RCTs to provide conclusive findings. Nevertheless, even in this context, confounding remains a potential concern.

Effectiveness claims based on RWE studies — The 21st Century Cures Act and the sixth reauthorization of the Prescription Drug User Fee Act [67,68] have prompted regulatory agencies to rethink the role of RWE in supporting approvals or label expansions of medical products with respect to their effectiveness. The potential use cases for RWE in these settings are as follows [5]:

Use case 1 (primary approval) – Before a new medical product is introduced on the market, RWE might support its primary approval by leveraging RWD from patients who could serve as external controls for participants in clinical trials [80]. This strategy may be useful when a well-powered RCT is difficult to conduct (eg, rare diseases) [81]. It is more likely to be successful if the disease under study would lead to a predictable and quantifiable health deterioration without the new medical product, and if a large treatment effect is expected [82]. Examples of external controls include individuals who have indications for a new medical product and who are receiving either an older alternative or no treatment at all, if treatment is currently unavailable for a specific condition.

Use case 2 (secondary indications) – RWE could also potentially support a secondary indication. In many cases, a primary approval for a medication relies on intermediate or surrogate outcomes (eg, reduction in hemoglobin A1c for glucose-lowering medications). After the primary approval of a medication, RWE supporting potential benefits with respect to clinical outcomes (eg, cardiovascular events), may be used to substantiate a secondary indication to prevent such outcomes. The certainty of the findings would be limited, however, by the typical concerns with observational studies (ie, bias and confounding). Alternatively, RWE could be used to support an expanded indication with respect to the population for whom the medical product is approved. For example, if a medication is initially approved in adults, RWE can support a secondary indication in pediatric populations. RWE used to support a secondary indication for a medication may rely on its off-label use in clinical practice. When evaluating this type of RWE, it is important to carefully consider how patients using the medication off-label may differ from those treated with an approved alternative.

Use case 3 (accelerated pathways) – RWE could also be used to support adaptive or accelerated approval. Through accelerated approvals, medications for serious conditions can receive early approval based on a surrogate or intermediate outcome that is expected to predict clinical benefit, in light of limited or no available alternative treatments [83]. However, confidence in these data is limited by indirectness, as surrogate outcomes do not always correlate with patient-important outcomes. As such, sponsors are subsequently required to conduct confirmatory studies to substantiate the expected clinical benefit and to receive full approval for the medication. Such confirmatory studies are often phase IV clinical trials; however, RWE studies may also serve this role [84]. In adaptive approval, an initial conditional approval is granted on the basis of preliminary data (eg, data from small clinical trials, for a limited population of patients with pressing unmet medical need). Additional data are subsequently collected by the sponsor during the conditional authorization period, using an RWD source, to support effectiveness and safety claims for a treatment under consideration for full marketing authorization [85,86]. Such full authorization could further broaden the initial indication by including additional clinical outcomes or patient populations not considered in the conditional authorization. Alternatively, RWE can be used to further substantiate effectiveness and safety claims after accelerated approval of a medication.

RCT REPLICATION PROJECTS — Regulators are ultimately interested in understanding whether a specific RWE study can produce causal conclusions with respect to a studied effect. It has been shown that RWE studies can come to the same conclusion as RCTs if the minimal components for causal inference (exposures, outcomes, and health status) are measured accurately and completely [87], and if modern study design and analytic approaches are used, as exemplified by the target trial paradigm. (See 'Emulating a hypothetical randomized trial to decrease bias' above.)

Yet, the ability of RWE studies to reach reliable conclusions regarding treatment effects has not been firmly established [48]. Many studies comparing findings from published RWE studies with findings from RCTs do not help address this gap, since the different studies often address slightly different questions [88]. By examining the design of rigorously conducted RWE studies that yielded findings closely replicating those of RCTs, it may be possible to uncover the study designs and statistical approaches that increase the reliability of RWE [89].

There are several ongoing or completed projects comparing findings from RCTs and RWE studies. Each of these relies on emulating the design and analysis of an RCT as closely as possible using RWD and then comparing the results of the RCT-RWE study pairs based on prespecified agreement measures [5,17,90].

RCT DUPLICATE (Randomized Controlled Trials Duplicated Using Prospective Longitudinal Insurance Claims: Applying Techniques of Epidemiology) aims to replicate 30 completed phase III or IV RCTs and predict the results of seven ongoing phase IV RCTs using insurance claims from Medicare beneficiaries and United States commercially insured patients [91].

OPERAND (Observational Patient Evidence for Regulatory Approval and Understanding Disease) aims to replicate two phase III RCTs (the ROCKET-AF trial for atrial fibrillation and the LEAD-2 trial for type 2 diabetes) using insurance claims from commercial and Medicare Advantage plans and EHR information from OptumLabs Data Warehouse [92].

A separate endeavor will predict the results of two ongoing phase III RCTs (the PRONOUNCE trial for prostate cancer and the GRADE trial for type 2 diabetes) using claims data [93].

By reducing emulation differences, these ongoing RCT replication projects will set more realistic expectations of achievable agreement between RCT-RWE study pairs, and will provide insights with respect to when and why RWE studies do or do not calibrate well against RCTs [94,95].
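To make the comparison of RCT-RWE study pairs concrete, the sketch below implements, in Python, three agreement metrics of the kind reported by these replication projects: "regulatory agreement" (same direction and same statistical significance), "estimate agreement" (the RWE point estimate falls within the RCT 95% confidence interval), and the standardized difference between the two log-scale estimates. This is an illustrative sketch only; the function name, the exact metric definitions, and the numbers in the example are assumptions for demonstration, not the prespecified definitions from any project protocol.

```python
import math


def agreement_metrics(hr_rct, ci_rct, hr_rwe, ci_rwe):
    """Compare an RCT-RWE study pair on three common agreement metrics.

    hr_*: hazard (or risk) ratio point estimates.
    ci_*: (lower, upper) bounds of the 95% confidence intervals.
    Metric definitions are illustrative, modeled on those described
    for trial-emulation projects such as RCT DUPLICATE.
    """
    # Approximate standard errors on the log scale from the 95% CIs.
    se_rct = (math.log(ci_rct[1]) - math.log(ci_rct[0])) / (2 * 1.96)
    se_rwe = (math.log(ci_rwe[1]) - math.log(ci_rwe[0])) / (2 * 1.96)

    def significant(ci):
        # Statistically significant if the CI excludes the null (HR = 1).
        return not (ci[0] <= 1.0 <= ci[1])

    # Regulatory agreement: same direction of effect and same significance.
    same_direction = (hr_rct < 1) == (hr_rwe < 1)
    regulatory = same_direction and (significant(ci_rct) == significant(ci_rwe))

    # Estimate agreement: RWE point estimate lies within the RCT 95% CI.
    estimate = ci_rct[0] <= hr_rwe <= ci_rct[1]

    # Standardized difference between the log-scale effect estimates.
    std_diff = (math.log(hr_rwe) - math.log(hr_rct)) / math.sqrt(
        se_rct**2 + se_rwe**2
    )
    return {"regulatory": regulatory, "estimate": estimate, "std_diff": std_diff}


# Hypothetical example: an RWE emulation closely matching its RCT counterpart.
result = agreement_metrics(hr_rct=0.80, ci_rct=(0.70, 0.92),
                           hr_rwe=0.84, ci_rwe=(0.76, 0.93))
print(result)
```

In this hypothetical pair, both estimates point in the same direction and are statistically significant, and the RWE estimate falls inside the RCT confidence interval, so the pair would agree on all three metrics; a standardized difference near zero indicates that any discrepancy is small relative to the combined statistical uncertainty.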

SUMMARY

Definitions and importance – The term real-world data (RWD) refers to data relating to patient health status and/or the delivery of health care, routinely collected from a variety of sources. RWD therefore differs from data collected primarily for research purposes. The term real-world evidence (RWE) refers to evidence about the usage and potential benefits or risks of a medical product derived from analysis of RWD. Most RWE studies are observational (or noninterventional) studies, and, thus, the two terms are commonly used interchangeably. (See 'Definitions' above.)

The role and use of RWE has been expanding rapidly. RWE can be used to complement information gained from randomized controlled trials (RCTs). (See 'Introduction' above and 'Uses of RWE' above.)

Generation of RWE – Most RWE studies are conducted with transactional data (data routinely generated during the provision of care) (table 1). However, data from wearable technology are rapidly gaining popularity in RWE studies. (See 'How RWD are generated and used for research' above.)

Limitations of RWE – Since RWD do not rely on primary data collection, their use for research purposes has several limitations (see 'Limitations of RWE' above):

Bias and confounding, since health interventions are not assigned at random in routine care, and since RWE studies lack systematic ascertainment of outcomes (leading to ascertainment and misclassification bias).

Data may be inaccurate and/or critical information may be missing.

Since the data informing the study are already completely collected when the study is planned, there is potential for investigators to mine the data rather than forming a priori hypotheses.

Assessing the quality of an RWE study – When considering whether the results of an RWE study should impact clinical decision-making, three major aspects must be considered (see 'Assessing the quality of a RWE study' above):

The question of interest must be answerable through the research question addressed by the RWE study (see 'Does the study address the clinical question?' above)

Suitable data must be available to answer the question (see 'Appropriateness of RWD at hand' above)

The RWE study must be designed and analyzed using appropriate methodology to minimize the risk of bias (see 'Assessing risk of bias' above)

Uses of RWE – Examples of uses of RWE include:

RWE can complement evidence available through RCTs. RCTs have several limitations (eg, they often address narrow questions in highly selected populations, they are costly and time-consuming, and they pose ethical challenges related to experimentation). RWE can mitigate many of these challenges. (See 'RWE as a complement to RCT evidence' above.)

RWE plays an important role as a source of evidence to substantiate clinical and regulatory decision making (eg, in the setting of rare diseases or for postapproval safety monitoring). (See 'Post-approval safety studies' above.)

RWE can also be used to substantiate effectiveness and safety claims after accelerated approval of a medication, or to support a secondary indication. (See 'Effectiveness claims based on RWE studies' above.)

RCT duplication projects – Regulators are ultimately interested in understanding whether a specific RWE study can produce reliable conclusions regarding treatment effects. There are several ongoing or completed projects comparing findings from RCTs and RWE studies. (See 'RCT replication projects' above.)

References

  1. Concato J, Corrigan-Curay J. Real-World Evidence - Where Are We Now? N Engl J Med 2022; 386:1680.
  2. Observational study. US Department of Health and Human Services. Available at: https://toolkit.ncats.nih.gov/glossary/observational-study/ (Accessed on April 25, 2023).
  3. Collins R, Bowman L, Landray M, Peto R. The Magic of Randomization versus the Myth of Real-World Evidence. N Engl J Med 2020; 382:674.
  4. Paez A. Gray literature: An important resource in systematic reviews. J Evid Based Med 2017; 10:233.
  5. Franklin JM, Glynn RJ, Martin D, Schneeweiss S. Evaluating the Use of Nonrandomized Real-World Data Analyses for Regulatory Decision Making. Clin Pharmacol Ther 2019; 105:867.
  6. Schneeweiss S, Patorno E. Conducting Real-world Evidence Studies on the Clinical Outcomes of Diabetes Treatments. Endocr Rev 2021; 42:658.
  7. Schneeweiss S, Avorn J. A review of uses of health care utilization databases for epidemiologic research on therapeutics. J Clin Epidemiol 2005; 58:323.
  8. Schneeweiss S, Rassen JA, Brown JS, et al. Graphical Depiction of Longitudinal Study Designs in Health Care Databases. Ann Intern Med 2019; 170:398.
  9. Schneeweiss S. Automated data-adaptive analytics for electronic healthcare data to study causal treatment effects. Clin Epidemiol 2018; 10:771.
  10. Bykov K, Patorno E, D'Andrea E, et al. Prevalence of Avoidable and Bias-Inflicting Methodological Pitfalls in Real-World Studies of Medication Safety and Effectiveness. Clin Pharmacol Ther 2022; 111:209.
  11. Grodstein F, Stampfer MJ, Manson JE, et al. Postmenopausal estrogen and progestin use and the risk of cardiovascular disease. N Engl J Med 1996; 335:453.
  12. Hernán MA, Alonso A, Logan R, et al. Observational studies analyzed like randomized experiments: an application to postmenopausal hormone therapy and coronary heart disease. Epidemiology 2008; 19:766.
  13. Stampfer MJ, Hennekens CH, Manson JE, et al. Vitamin E consumption and the risk of coronary disease in women. N Engl J Med 1993; 328:1444.
  14. Heart Outcomes Prevention Evaluation Study Investigators, Yusuf S, Dagenais G, et al. Vitamin E supplementation and cardiovascular events in high-risk patients. N Engl J Med 2000; 342:154.
  15. Chan KA, Andrade SE, Boles M, et al. Inhibitors of hydroxymethylglutaryl-coenzyme A reductase and risk of fracture among older women. Lancet 2000; 355:2185.
  16. Heart Protection Study Collaborative Group. MRC/BHF Heart Protection Study of cholesterol lowering with simvastatin in 20,536 high-risk individuals: a randomised placebo-controlled trial. Lancet 2002; 360:7.
  17. Franklin JM, Schneeweiss S. When and How Can Real World Data Analyses Substitute for Randomized Controlled Trials? Clin Pharmacol Ther 2017; 102:924.
  18. Malone DC, Brown M, Hurwitz JT, et al. Real-World Evidence: Useful in the Real World of US Payer Decision Making? How? When? And What Studies? Value Health 2018; 21:326.
  19. Husereau D, Nason E, Ahuja T, et al. Use of Real-World Data Sources for Canadian Drug Pricing and Reimbursement Decisions: Stakeholder Views and Lessons for Other Countries. Int J Technol Assess Health Care 2019; 35:181.
  20. Wang SV, Schneeweiss S. Assessing and Interpreting Real-World Evidence Studies: Introductory Points for New Reviewers. Clin Pharmacol Ther 2022; 111:145.
  21. Straus SE, Richardson WS, Glasziou P, Haynes RB. Evidence-Based Medicine How to Practice and Teach EBM, 3rd ed, Elsevier Churchill Livingstone, Edinburgh 2005.
  22. Characterizing RWD quality and relevancy for regulatory purposes. Available at: https://healthpolicy.duke.edu/sites/default/files/2020-03/characterizing_rwd.pdf (Accessed on August 22, 2022).
  23. Real-World Evidence. US Food and Drug Administration. Available at: https://www.fda.gov/science-research/science-and-research-special-topics/real-world-evidence (Accessed on March 22, 2023).
  24. Determining Real-World Data’s Fitness for Use and the Role of Reliability. Available at: https://healthpolicy.duke.edu/publications/determining-real-world-datas-fitness-use-and-role-reliability (Accessed on November 01, 2022).
  25. Lash TL, VanderWeele TJ, Haneuse S, Rothman KJ. Pharmacoepidemiology. In: Modern Epidemiology, 4th ed, Schneeweiss S (Ed), Wolters Kluwer, New York 2021.
  26. Taxonomy for monitoring methods within a medical product safety surveillance system: Year two report of the mini-sentinel Taxonomy Project Workgroup. Available at: https://www.sentinelinitiative.org/sites/default/files/Methods/Mini-Sentinel_Methods_Taxonomy-Year-2-Report.pdf (Accessed on November 01, 2022).
  27. Hernán MA, Robins JM. Using Big Data to Emulate a Target Trial When a Randomized Trial Is Not Available. Am J Epidemiol 2016; 183:758.
  28. Schneeweiss S. A basic study design for expedited safety signal evaluation based on electronic healthcare data. Pharmacoepidemiol Drug Saf 2010; 19:858.
  29. Suissa S. Immortal time bias in pharmaco-epidemiology. Am J Epidemiol 2008; 167:492.
  30. Suissa S. Immeasurable time bias in observational studies of drug effects on mortality. Am J Epidemiol 2008; 168:329.
  31. Suissa S, Dell'Aniello S. Time-related biases in pharmacoepidemiology. Pharmacoepidemiol Drug Saf 2020; 29:1101.
  32. Hernán MA, Sauer BC, Hernández-Díaz S, et al. Specifying a target trial prevents immortal time bias and other self-inflicted injuries in observational analyses. J Clin Epidemiol 2016; 79:70.
  33. Ray WA. Evaluating medication effects outside of clinical trials: new-user designs. Am J Epidemiol 2003; 158:915.
  34. Renoux C, Azoulay L, Suissa S. Biases in Evaluating the Safety and Effectiveness of Drugs for the Treatment of COVID-19: Designing Real-World Evidence Studies. Am J Epidemiol 2021; 190:1452.
  35. Wang SV, Schneeweiss S. A Framework for Visualizing Study Designs and Data Observability in Electronic Health Record Data. Clin Epidemiol 2022; 14:601.
  36. García-Albéniz X, Hsu J, Hernán MA. The value of explicitly emulating a target trial when using real world evidence: an application to colorectal cancer screening. Eur J Epidemiol 2017; 32:495.
  37. García-Albéniz X, Hernán MA, Logan RW, et al. Continuation of Annual Screening Mammography and Breast Cancer Mortality in Women Older Than 70 Years. Ann Intern Med 2020; 172:381.
  38. Dickerman BA, Gerlovin H, Madenci AL, et al. Comparative Effectiveness of BNT162b2 and mRNA-1273 Vaccines in U.S. Veterans. N Engl J Med 2022; 386:105.
  39. Califf RM, Hernandez AF, Landray M. Weighing the Benefits and Risks of Proliferating Observational Treatment Assessments: Observational Cacophony, Randomized Harmony. JAMA 2020; 324:625.
  40. Franklin JM, Platt R, Dreyer NA, et al. When Can Nonrandomized Studies Support Valid Inference Regarding Effectiveness or Safety of New Medical Treatments? Clin Pharmacol Ther 2022; 111:108.
  41. Glynn RJ, Knight EL, Levin R, Avorn J. Paradoxical relations of drug treatment with mortality in older persons. Epidemiology 2001; 12:682.
  42. Matthews KA, Kuller LH, Wing RR, et al. Prior to use of estrogen replacement therapy, are users healthier than nonusers? Am J Epidemiol 1996; 143:971.
  43. Setoguchi S, Warner Stevenson L, Stewart GC, et al. Influence of healthy candidate bias in assessing clinical effectiveness for implantable cardioverter-defibrillators: cohort study of older patients with heart failure. BMJ 2014; 348:g2866.
  44. Webster-Clark M, Stürmer T, Wang T, et al. Using propensity scores to estimate effects of treatment initiation decisions: State of the science. Stat Med 2021; 40:1718.
  45. Schneeweiss S, Patrick AR, Stürmer T, et al. Increasing levels of restriction in pharmacoepidemiologic database studies of elderly and comparison with randomized trial results. Med Care 2007; 45:S131.
  46. D'Andrea E, Vinals L, Patorno E, et al. How well can we assess the validity of non-randomised studies of medications? A systematic review of assessment tools. BMJ Open 2021; 11:e043961.
  47. Sterne JA, Hernán MA, Reeves BC, et al. ROBINS-I: a tool for assessing risk of bias in non-randomised studies of interventions. BMJ 2016; 355:i4919.
  48. Schneeweiss S. Real-World Evidence of Treatment Effects: The Useful and the Misleading. Clin Pharmacol Ther 2019; 106:43.
  49. Michels KB, Rosner BA. Data trawling: to fish or not to fish. Lancet 1996; 348:1152.
  50. Berger ML, Sox H, Willke RJ, et al. Good practices for real-world data studies of treatment and/or comparative effectiveness: Recommendations from the joint ISPOR-ISPE Special Task Force on real-world evidence in health care decision making. Pharmacoepidemiol Drug Saf 2017; 26:1033.
  51. Orsini LS, Monz B, Mullins CD, et al. Improving transparency to build trust in real-world secondary data studies for hypothesis testing-Why, what, and how: recommendations and a road map from the real-world evidence transparency initiative. Pharmacoepidemiol Drug Saf 2020; 29:1504.
  52. Wang SV, Schneeweiss S, Berger ML, et al. Reporting to Improve Reproducibility and Facilitate Validity Assessment for Healthcare Database Studies V1.0. Pharmacoepidemiol Drug Saf 2017; 26:1018.
  53. Langan SM, Schmidt SA, Wing K, et al. The reporting of studies conducted using observational routinely collected health data statement for pharmacoepidemiology (RECORD-PE). BMJ 2018; 363:k3532.
  54. Wang SV, Pinheiro S, Hua W, et al. STaRT-RWE: structured template for planning and reporting on the implementation of real world evidence studies. BMJ 2021; 372:m4856.
  55. Wang SV, Pottegård A, Crown W, et al. HARmonized Protocol Template to Enhance Reproducibility of hypothesis evaluating real-world evidence studies on treatment effects: A good practices report of a joint ISPE/ISPOR task force. Pharmacoepidemiol Drug Saf 2023; 32:44.
  56. Califf RM, Sugarman J. Exploring the ethical and regulatory issues in pragmatic clinical trials. Clin Trials 2015; 12:436.
  57. Thorpe KE, Zwarenstein M, Oxman AD, et al. A pragmatic-explanatory continuum indicator summary (PRECIS): a tool to help trial designers. J Clin Epidemiol 2009; 62:464.
  58. Petracci F, Ghai C, Pangilinan A, et al. Use of real-world evidence for oncology clinical decision making in emerging economies. Future Oncol 2021; 17:2951.
  59. Vasconcelles M, Jordan B. What's next for real-world evidence in oncology? Am J Manag Care 2022; 28:SP142.
  60. Visvanathan K, Levit LA, Raghavan D, et al. Untapped Potential of Observational Research to Inform Clinical Decision Making: American Society of Clinical Oncology Research Statement. J Clin Oncol 2017; 35:1845.
  61. Wallach JD, Deng Y, McCoy RG, et al. Real-world Cardiovascular Outcomes Associated With Degarelix vs Leuprolide for Prostate Cancer Treatment. JAMA Netw Open 2021; 4:e2130587.
  62. Merola D, Schneeweiss S, Sreedhara SK, et al. Real-World Evidence Prediction of a Phase IV Oncology Trial: Comparative Degarelix vs Leuprolide Safety. JNCI Cancer Spectr 2022; 6.
  63. Wu J, Wang C, Toh S, et al. Use of real-world evidence in regulatory decisions for rare diseases in the United States-Current status and future directions. Pharmacoepidemiol Drug Saf 2020; 29:1213.
  64. Rare Diseases: Natural History Studies for Drug Development. Available at: https://www.fda.gov/regulatory-information/search-fda-guidance-documents/rare-diseases-natural-history-studies-drug-development (Accessed on November 01, 2022).
  65. Mehra MR, Desai SS, Ruschitzka F, Patel AN. RETRACTED: Hydroxychloroquine or chloroquine with or without a macrolide for treatment of COVID-19: a multinational registry analysis. Lancet 2020.
  66. Franklin JM, Lin KJ, Gatto NM, et al. Real-World Evidence for Assessing Pharmaceutical Treatments in the Context of COVID-19. Clin Pharmacol Ther 2021; 109:816.
  67. H.R.34 - 21st Century Cures Act. Available at: https://www.congress.gov/bill/114th-congress/house-bill/34 (Accessed on November 01, 2022).
  68. PDUFA VI: Fiscal Years 2018 - 2022. Available at: https://www.fda.gov/industry/prescription-drug-user-fee-amendments/pdufa-vi-fiscal-years-2018-2022 (Accessed on November 01, 2022).
  69. FDA Issues Draft Guidances on Real-World Evidence, Prepares to Publish More in Future. Available at: https://www.fda.gov/drugs/news-events-human-drugs/fda-issues-draft-guidances-real-world-evidence-prepares-publish-more-future (Accessed on November 01, 2022).
  70. Regulatory Science. European Medicines Agency. https://www.ema.europa.eu/en/documents/other/regulatory-science-research-needs_en.pdf.
  71. European medicines agencies network strategy to 2025. European Medicines Agency. https://www.ema.europa.eu/en/documents/report/european-union-medicines-agencies-network-strategy-2025-protecting-public-health-time-rapid-change_en.pdf.
  72. Data Analysis and Real World Interrogation Network (DARWIN EU). European Medicines Agency. https://www.ema.europa.eu/en/about-us/how-we-work/big-data/data-analysis-real-world-interrogation-network-darwin-eu.
  73. Tadrous M, Ahuja T, Ghosh B, Kropp R. Developing a Canadian Real-World Evidence Action Plan across the Drug Life Cycle. Healthc Policy 2020; 15:41.
  74. Ray WA, Stein CM, Daugherty JR, et al. COX-2 selective non-steroidal anti-inflammatory drugs and risk of serious coronary heart disease. Lancet 2002; 360:1071.
  75. Southworth MR, Reichman ME, Unger EF. Dabigatran and postmarketing reports of bleeding. N Engl J Med 2013; 368:1272.
  76. Graham DJ, Reichman ME, Wernecke M, et al. Cardiovascular, bleeding, and mortality risks in elderly Medicare patients treated with dabigatran or warfarin for nonvalvular atrial fibrillation. Circulation 2015; 131:157.
  77. Fralick M, Schneeweiss S, Patorno E. Risk of Diabetic Ketoacidosis after Initiation of an SGLT2 Inhibitor. N Engl J Med 2017; 376:2300.
  78. Douros A, Lix LM, Fralick M, et al. Sodium-Glucose Cotransporter-2 Inhibitors and the Risk for Diabetic Ketoacidosis : A Multicenter Cohort Study. Ann Intern Med 2020; 173:417.
  79. Platt RW, Platt R, Brown JS, et al. How pharmacoepidemiology networks can manage distributed analyses to improve replicability and transparency and minimize bias. Pharmacoepidemiol Drug Saf 2019.
  80. E10 Choice of Control Group and Related Issues in Clinical Trials. Available at: https://www.fda.gov/regulatory-information/search-fda-guidance-documents/e10-choice-control-group-and-related-issues-clinical-trials (Accessed on November 01, 2022).
  81. Sasinowski FJ, Panico EB, Valentine JE. Quantum of Effectiveness Evidence in FDA's Approval of Orphan Drugs: Update, July 2010 to June 2014. Ther Innov Regul Sci 2015; 49:680.
  82. Simon R, Blumenthal GM, Rothenberg ML, et al. The role of nonrandomized trials in the evaluation of oncology drugs. Clin Pharmacol Ther 2015; 97:502.
  83. Expedited Programs for Serious Conditions––Drugs and Biologics. Available at: https://www.fda.gov/regulatory-information/search-fda-guidance-documents/expedited-programs-serious-conditions-drugs-and-biologics (Accessed on November 01, 2022).
  84. Beaver JA, Howie LJ, Pelosof L, et al. A 25-Year Experience of US Food and Drug Administration Accelerated Approval of Malignant Hematology and Oncology Drugs and Biologics: A Review. JAMA Oncol 2018; 4:849.
  85. Eichler HG, Oye K, Baird LG, et al. Adaptive licensing: taking the next step in the evolution of drug approval. Clin Pharmacol Ther 2012; 91:426.
  86. Eichler HG, Baird LG, Barker R, et al. From adaptive licensing to adaptive pathways: delivering a flexible life-span approach to bring new drugs to patients. Clin Pharmacol Ther 2015; 97:234.
  87. Shadish WR, Clark MH, Steiner PM. Can Nonrandomized Experiments Yield Accurate Answers? A Randomized Experiment Comparing Random and Nonrandom Assignments. Journal of the American Statistical Association 2008; 103:1334.
  88. Forbes SP, Dahabreh IJ. Benchmarking Observational Analyses Against Randomized Trials: a Review of Studies Assessing Propensity Score Methods. J Gen Intern Med 2020; 35:1396.
  89. Sheffield KM, Dreyer NA, Murray JF, et al. Replication of randomized clinical trial results using real-world data: paving the way for effectiveness decisions. J Comp Eff Res 2020; 9:1043.
  90. Franklin JM, Pawar A, Martin D, et al. Nonrandomized Real-World Evidence to Support Regulatory Decision Making: Process for a Randomized Trial Replication Project. Clin Pharmacol Ther 2020; 107:817.
  91. Randomized Controlled Trials Duplicated Using Prospective Longitudinal Insurance Claims: Applying Techniques of Epidemiology. Available at: https://www.rctduplicate.org/ (Accessed on August 23, 2022).
  92. OptumLabs. Using RWD in regulatory decision-making. Real-world data can be used to complement clinical trials. Available at: https://www.optumlabs.com/work/data-regulatory-decision.html (Accessed on August 23, 2022).
  93. Yale University-Mayo Clinic Center of Excellence in Regulatory Science and Innovation. Understanding the use of existing real-world data for medical product evaluation. Available at: https://medicine.yale.edu/core/current_projects/cersi/research/ (Accessed on August 23, 2022).
  94. Franklin JM, Patorno E, Desai RJ, et al. Emulating Randomized Clinical Trials With Nonrandomized Real-World Evidence Studies: First Results From the RCT DUPLICATE Initiative. Circulation 2021; 143:1002.
  95. Wang SV, Schneeweiss S, RCT-DUPLICATE Initiative, et al. Emulation of Randomized Clinical Trials With Nonrandomized Database Analyses: Results of 32 Clinical Trials. JAMA 2023; 329:1376.
Topic 140176 Version 4.0

