Comparator Selection in Observational Comparative Effectiveness Research
Presentation Transcript

1. Comparator Selection in Observational Comparative Effectiveness Research
Prepared for: Agency for Healthcare Research and Quality (AHRQ)
www.ahrq.gov

2. Outline of Material
This presentation will:
- Show how to choose concurrent, active comparators from the same source population (or justify the use of no-treatment comparisons, historical comparators, or different data sources)
- Discuss potential bias (and methods to minimize it) associated with comparator choice
- Define time 0 for all comparator groups in describing planned analyses

3. Introduction
- In comparative effectiveness research, the choice of comparator directly affects the clinical implications, interpretation, and validity of study results.
- Treatment decisions are based on factors associated with the underlying disease and its severity, general health status or frailty, quality of life, and patient preferences.
- There is potential for confounding by indication or severity and for selection bias associated with different comparison groups.
- Internal validity relies on defining an appropriate dose, intensity of treatment, and exposure window for comparator groups.

4. Consequences of Comparator Choice (1 of 2)
- Confounding arises when a risk factor for the study outcome of interest directly or indirectly affects exposure (e.g., treatment assignment).
- The magnitude of potential confounding is generally expected to be smaller when the comparator:
  - Has the same indication
  - Has similar contraindications
  - Shares the same treatment modality (e.g., tablet or capsule)
- Conduct sensitivity analyses to quantify the effects of potential unmeasured confounding.
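The slide does not name a specific sensitivity analysis, but one widely used option for quantifying unmeasured confounding is the E-value of VanderWeele and Ding: the minimum strength of association (on the risk-ratio scale) that an unmeasured confounder would need with both treatment and outcome to fully explain the observed estimate. The Python sketch below is purely illustrative, with hypothetical risk-ratio inputs.

```python
import math

def e_value(rr: float) -> float:
    """E-value for a risk ratio: minimum confounder strength (with both
    exposure and outcome) needed to explain away the observed estimate."""
    if rr < 1:          # for protective estimates, invert before applying the formula
        rr = 1 / rr
    return rr + math.sqrt(rr * (rr - 1))

# Hypothetical example: observed RR = 1.8 with a lower 95% CI bound of 1.3.
print(round(e_value(1.8), 2))  # E-value for the point estimate (3.0)
print(round(e_value(1.3), 2))  # E-value for the CI bound closer to the null (~1.92)
```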

5. Consequences of Comparator Choice (2 of 2)
Exposure misclassification:
- Arises when exposure measurement differs between the exposure and comparator groups
- Is often more complex in comparative effectiveness research, since each group represents active treatment (nonuse of the exposure treatment does not imply use of the comparator treatment)
- Can differ in each group, especially if different treatment modalities are used
- Assess separately for the exposure versus comparison groups
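To make group-specific misclassification concrete, the sketch below (an illustration not taken from the slides) back-corrects an observed exposed count using assumed sensitivity and specificity of the exposure measure, applied separately to each arm as the slide recommends. All counts and accuracy values are hypothetical.

```python
def corrected_exposed(n_obs_exposed: float, n_total: float,
                      sensitivity: float, specificity: float) -> float:
    """Estimate the true number exposed from the observed count, given assumed
    sensitivity/specificity of exposure classification."""
    return (n_obs_exposed - (1 - specificity) * n_total) / (sensitivity + specificity - 1)

# Hypothetical: 120 of 400 patients classified as exposed, with drug exposure
# measured less accurately (se = 0.85) than a procedure (se = 0.99).
print(corrected_exposed(120, 400, sensitivity=0.85, specificity=0.98))  # drug arm
print(corrected_exposed(120, 400, sensitivity=0.99, specificity=0.98))  # procedure arm
```

The same observed count implies different true counts in the two arms, which is why the slide advises assessing misclassification separately for the exposure and comparison groups.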

6. Spectrum of Possible Comparisons (1 of 3)
Alternative treatments
- Most common scenario and typically least biased
- More clinically meaningful and methodologically valid
- Could still result in confounding by severity if not adequately controlled through design/analysis
No treatment/testing
- Absence of exposure, or absence of exposure plus use of an unrelated treatment (active comparator)
- Choice of time 0 must be clinically appropriate in order to reduce bias
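Untreated comparators have no prescription date to anchor follow-up, so one commonly used convention (offered here only as an illustration, not as the slides' prescribed method) is to assign each non-user an index date drawn from the initiators' distribution of start dates, so that both groups begin follow-up over the same calendar period. A hypothetical pandas sketch:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical cohorts: initiators have a real treatment start date; non-users do not.
initiators = pd.DataFrame({
    "id": [1, 2, 3],
    "time0": pd.to_datetime(["2020-01-15", "2020-03-02", "2020-06-20"]),
})
nonusers = pd.DataFrame({"id": [10, 11, 12]})

# Assign each non-user a time 0 sampled from the initiators' start-date distribution.
nonusers["time0"] = rng.choice(initiators["time0"].values, size=len(nonusers))
print(nonusers)
```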

7. Spectrum of Possible Comparisons (2 of 3)
Usual or standard care
- Develop a valid operational definition of the care and of the time of initiation (none, a single treatment, or a set of treatment/testing modalities)
- Real-world use must be understood for a proper definition
- Can vary across geographic regions/treatment settings or change over time; avoid a "wastebasket" definition
Historical comparison
- Used when there is a dramatic shift from one treatment to another
- May be the only choice when selection for a new treatment is strong and uncontrollable and randomization is unethical or not realistic
- Vulnerable to confounding by indication/severity when this information is unmeasured (may be overcome by instrumental variable analysis using calendar time)
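Calendar time can act as an instrument when a new treatment rapidly displaces an old one: launch period strongly predicts which treatment a patient receives but, ideally, affects the outcome only through that treatment. The sketch below uses a simple Wald estimator on simulated, hypothetical data; a real analysis would typically use two-stage least squares with covariates.

```python
import numpy as np
import pandas as pd

np.random.seed(0)

# Hypothetical simulation: period (0 = before launch, 1 = after), treatment received
# (1 = new drug), and a binary outcome whose true risk difference for treatment is -0.05.
n = 50_000
period = np.repeat([0, 1], n)
treated = np.where(period == 1,
                   np.random.binomial(1, 0.80, 2 * n),
                   np.random.binomial(1, 0.10, 2 * n))
outcome = np.random.binomial(1, 0.20 - 0.05 * treated)
df = pd.DataFrame({"period": period, "treated": treated, "outcome": outcome})

# Wald (instrumental-variable) estimator with calendar period as the instrument:
# period effect on the outcome divided by period effect on treatment.
num = df.loc[df.period == 1, "outcome"].mean() - df.loc[df.period == 0, "outcome"].mean()
den = df.loc[df.period == 1, "treated"].mean() - df.loc[df.period == 0, "treated"].mean()
print("IV estimate of the risk difference:", round(num / den, 3))  # close to -0.05
```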

8. Spectrum of Possible Comparisons (3 of 3)
Comparison groups from different data sources
- Multiple data sources can be linked to enhance the validity of observational comparative effectiveness studies
- Residual confounding might occur because of:
  - Incomparability of information in the exposure and comparison groups
  - Differences in observed and unobserved domains, because the groups are sampled differently or drawn from different source populations
- Issues with generalizability when the exposure and comparison groups come from different databases

9. Operationalizing the Comparison Group in Comparative Effectiveness Research (1 of 2)
Indication
- Another treatment used for the same indication as the exposure treatment is typically used as the comparison group
- For treatments approved for multiple indications, the appropriate indication will have to be ensured by defining the indication and restricting the study population
Initiation
- A new-user design prevents underascertainment of early events and avoids selection bias arising from prevalent users
- Inclusion of prevalent users may be justified when outcomes are rare or occur only after long periods of use
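In dispensing or claims data, a new-user design is often operationalized by taking each patient's first fill of either the study drug or the comparator as time 0 and requiring a washout period of continuous observability with no earlier fills. The pandas sketch below is a hypothetical illustration; the 365-day washout, column names, and data are assumptions, not part of the presentation.

```python
import pandas as pd

WASHOUT_DAYS = 365  # assumed washout window defining "new use"

# Hypothetical claims data: dispensings of study drug A and comparator B,
# plus each patient's enrollment start date.
fills = pd.DataFrame({
    "patient_id": [1, 1, 2, 3],
    "drug":       ["A", "A", "B", "A"],
    "fill_date":  pd.to_datetime(["2020-02-01", "2020-05-01", "2020-03-15", "2019-11-01"]),
})
enroll = pd.DataFrame({
    "patient_id":   [1, 2, 3],
    "enroll_start": pd.to_datetime(["2018-06-01", "2019-01-01", "2019-08-01"]),
})

# Time 0 = each patient's first fill of either drug; the drug filled defines the arm.
index_fill = fills.sort_values("fill_date").groupby("patient_id", as_index=False).first()

# New-user criterion: observable for the full washout before time 0, with no earlier
# fill of either drug (guaranteed here because the index fill is the earliest fill).
cohort = index_fill.merge(enroll, on="patient_id")
cohort = cohort[cohort["fill_date"] - pd.Timedelta(days=WASHOUT_DAYS) >= cohort["enroll_start"]]
print(cohort[["patient_id", "drug", "fill_date"]])  # patient 3 is excluded
```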

10. Operationalizing the Comparison Group in Comparative Effectiveness Research (2 of 2)
Exposure time window
- The period during which therapeutic benefit and/or risk would plausibly occur
- Use sensitivity analysis to assess whether results are sensitive to different specifications of the exposure window(s)
Nonadherence
- May differ between the treatment and comparator groups
- Treatment effects should be compared at the adherence levels observed in clinical practice, rather than adjusting for the difference in adherence
Dose/intensity of drug comparison
- Assess and report dose in each group
- Make comparisons at clinically equivalent dose levels
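A simple way to carry out the exposure-window sensitivity analysis is to re-tabulate outcomes under several candidate risk-window lengths after time 0. The sketch below uses hypothetical dates and arbitrary window lengths purely to illustrate the mechanics.

```python
import pandas as pd

# Hypothetical data: time 0 (treatment start) and outcome date per patient.
df = pd.DataFrame({
    "patient_id":   [1, 2, 3],
    "time0":        pd.to_datetime(["2020-01-01", "2020-02-01", "2020-03-01"]),
    "outcome_date": pd.to_datetime(["2020-01-20", "2020-04-15", "2020-09-01"]),
})

# Sensitivity analysis: count outcomes falling inside each candidate exposure window
# (days after time 0); the window lengths here are arbitrary examples.
for window_days in (30, 90, 180):
    days_to_outcome = (df["outcome_date"] - df["time0"]).dt.days
    in_window = (days_to_outcome >= 0) & (days_to_outcome <= window_days)
    print(f"{window_days}-day window: {in_window.sum()} outcome(s) attributed to exposure")
```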

11. Considerations for Comparisons Across Different Treatment Modalities (1 of 3)
Confounding by indication or severity:
- Medications may be used for patients with milder disease, and surgery might be reserved for those with more severe disease.
Selection of healthier patients to receive more invasive treatments:
- Sicker patients are less likely to be considered for invasive procedures.
- Selection becomes more problematic in comparisons across different treatment modalities.

12. Considerations for Comparisons Across Different Treatment Modalities (2 of 3)
Time from disease onset to treatment:
- Pay careful attention to the time from initial diagnosis and to the general sequence of treatment modalities; this is needed to prevent immortal person-time bias.
Different magnitude of misclassification in drug versus procedure comparisons:
- Misclassification of exposure might be greater with drugs than with devices/procedures.
- Pharmacy records do not provide information on actual intake.
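The arithmetic below (a made-up example, not from the slides) shows where immortal person-time bias comes from: if follow-up for a surgical group is counted from diagnosis rather than from the procedure, the waiting time before surgery, during which death in that group is impossible by design, is wrongly credited to surgery.

```python
# Hypothetical illustration of immortal person-time bias (all numbers are made up).
# A patient diagnosed on day 0 undergoes surgery on day 120 and is followed to day 500.

diagnosis_to_surgery = 120   # days the patient must survive to ever become "surgical"
total_followup = 500

# Biased accounting: all 500 days attributed to the surgery group, including the
# 120 "immortal" days before the procedure.
biased_surgical_time = total_followup

# Less biased accounting: start the surgical clock at the procedure date
# (or treat the pre-surgery interval as unexposed/time-varying exposure).
corrected_surgical_time = total_followup - diagnosis_to_surgery

print(biased_surgical_time, corrected_surgical_time)  # 500 vs. 380 person-days
```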

13. Considerations for Comparisons Across Different Treatment Modalities (3 of 3)
Provider effects in using devices or surgeries:
- Consider the characteristics of the operating physician and of the institution where the device implantation or surgery was carried out.
- Be aware of the documented direct relationship between the level of physician experience and better patient outcomes for complex procedures.
Adherence to drugs and device failure or removal:
- Requires assumptions in most data sources
- May be appropriate to compare without adjusting, as this reflects real-world use

14. Conclusions
- Understanding the impact of comparator choice on study design is important.
- Selection of the comparator group should be driven primarily by a comparative effectiveness question prioritized by the stakeholder community.
- An overriding consideration is the generation of evidence that can directly inform decisions about treatments, testing, or health care delivery systems.
- Some study questions may not be answerable validly because of intractable bias in observational comparative effectiveness research.

15. Summary Checklist

Guidance: Choose concurrent, active comparators from the same source population (or justify the use of no-treatment comparisons, historical comparators, or different data sources).
Key considerations: Comparator choice should be driven primarily by a comparative effectiveness question prioritized by the informational needs of the stakeholder community, and secondarily as a strategy to minimize bias.

Guidance: Discuss potential bias associated with comparator choice and methods to minimize such bias, when possible.
Key considerations: Be sure to also describe how study design/analytic methods will be used to minimize bias.

Guidance: Define time 0 for all comparator groups in describing planned analyses.
Key considerations: The choice of time 0, particularly with no-treatment or usual-care comparators, should be carefully considered in light of potential immortal time bias and prevalent-user bias. Employ a new-user design as a default, if possible.