
Randomised Controlled Trials (RCTs)

Graeme MacLennan

James Lind

Born Edinburgh 1716

On HMS Salisbury in 1747 he allocated 12 men with scurvy

Cider

Seawater

Horseradish, mustard, garlic

Nutmeg

Elixir of vitriol

Oranges and limes

Think about…

Consider how you would go about evaluating the following interventions

Surgical versus medical termination of pregnancy

Referral guidelines for radiographic examination

Paracetamol and/or ibuprofen for treating children with fever

Nurse counsellors as an alternative to clinical geneticists for genetic counselling

Single dose of chemotherapy versus radiotherapy for treating testicular cancer

http://news.bbc.co.uk/1/hi/health/7647007.stm

Cervical cancer vaccine

http://news.bbc.co.uk/1/hi/health/6223000.stm


The need to evaluate health care

Variations in health care

Unproven treatments

Inadequacies in care

Inaccurate medical models

Limitation of resources

New innovations…

Crombie (1996)

Evaluation process

Define research question

What is already known?

Identify appropriate study design

Define population, intervention and criteria for evaluation

How large a study?

Consider measurement of evaluation criteria (“outcomes”)

How often? Timing? Length of follow-up?

To whom? Who collects the data? What format?

Analysis of data

Dissemination and implementation

Define research question and what is already known

Research question (PICOT)

Population

Intervention

Control/comparator

Outcome

Target

Has the question already been answered?

Conduct a review to assess what is known about the intervention

Definition of population, intervention and “outcomes”

Population

Strict definition (explanatory) or flexible (pragmatic)

Intervention

Dose of drug, timing etc.

“Outcomes”

Health-related quality of life

Biochemical outcomes

Symptoms

Physical assessment

Patient satisfaction

Acceptability

Cost-effectiveness

Measuring “outcome”

Questionnaires, interview, medical notes etc

Timing of questionnaires?

Baseline (prior to treatment)

Short-term outcomes

Long-term outcomes

Who collects the data?

Sources of systematic errors

Selection bias

can be introduced by the way in which comparison groups are assembled

Attrition bias

systematic differences in withdrawal/follow-up

Performance bias

systematic differences in care provided

Observation/detection bias

systematic differences in observation, measurement, assessment

What is a randomised controlled trial?

Simple Definition

A study in which people are allocated at random to receive one of several interventions

(simple but powerful research tool)

Simple RCT model

Trial participants are RANDOMLY allocated to the EXPERIMENTAL or the CONTROL group.

What is a randomised controlled trial?

Random allocation to intervention groups

All participants have an equal chance of being allocated to each intervention group

This is why RCTs are referred to as randomised controlled trials

Terminology

Interventions are the comparative regimens within a trial

Prophylactic, diagnostic or therapeutic, e.g.

preventative strategies

screening programmes

diagnostic tests

drugs

surgical techniques

What is a randomised controlled trial?

One intervention is regarded as the control treatment (the participants who receive it are the control group)

NOTE: controls are contemporaneous, not historical

This is why RCTs are referred to as randomised controlled trials

Terminology

Control Group

can be

conventional practice

no intervention (this may be conventional practice)

placebo

Experimental Group

receive new intervention

(also called the treatment group or the intervention group, interchangeably)

What is a randomised controlled trial?

RCTs are

Experiments: investigators can influence the number, type and regimen of interventions

Quantitative: measure events rather than try to interpret them in their natural settings

Comparative: compare two or more interventions

What is a randomised controlled trial?

More Complex Definition

Quantitative, comparative, controlled experiments in which a group of investigators study two or more interventions in a series of participants who are allocated randomly to each intervention group

Inclusion/exclusion criteria

Decision rules applied to potential trial participants to judge eligibility for inclusion in trial

See CONSORT statement

www.consort-statement.org

Important that they are applied identically to all groups in a trial!

What is randomisation?

Randomisation is the process of random allocation

Allocation is not determined by investigators, clinicians or participants

Equal chance of being assigned to each intervention group

Individual people

patients

caregivers (physicians, nurses etc.)

Groups of people: ‘cluster randomisation’

(Covered in more depth in a later lecture)
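The “equal chance” property of simple randomisation can be sketched in a few lines of code (illustrative only; the `randomise` function and arm names are invented for this sketch, not part of any trial software):

```python
import random

def randomise(participant_ids, arms=("control", "experimental"), seed=None):
    """Allocate each participant to an arm with equal probability.

    Allocation is driven entirely by the random number generator, so it
    is not determined by investigators, clinicians or participants, and
    every participant has the same chance of receiving each intervention.
    """
    rng = random.Random(seed)  # seeding makes the allocation list reproducible
    return {pid: rng.choice(arms) for pid in participant_ids}

# e.g. twelve participants, as in Lind's scurvy trial
allocation = randomise(range(1, 13), seed=42)
```

In practice the allocation list is generated and concealed centrally rather than computed at the bedside; this sketch only illustrates why allocation is unpredictable to everyone involved.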

Pseudo-randomisation

Other allocation methods include

according to date of birth

the number on hospital records

date of invitation etc.

These are NOT regarded as random

They are called pseudo- or quasi-random

Terminology

Controlled clinical trials (CCTs) are not the same as randomised controlled trials

Controlled clinical trials include both non-randomised and randomised controlled trials

Why use randomisation?

Characteristics similar across groups at baseline

can isolate and quantify impact of interventions with effects from other factors minimised

The risk of imbalance is not completely abolished, even with perfect randomisation

To combat selection bias

The unpredictability paradox: review of empirical comparisons of randomised and non-randomised clinical trials. Kunz and Oxman, BMJ 1998. http://www.bmj.com/cgi/content/abstract/317/7167/1185

Why do we need a control group?

Don’t need a control group if completely predictable results

Parachutes when jumping from plane

New drug cures a few rabies cases

But:

No intervention has 100% efficacy

Many diseases recover spontaneously

Regression to the mean

Occurs when an intervention is aimed at a group, or a characteristic, that is very different from the average

For example, if people are selected because they have high blood pressure, their future measurements will tend to be closer to the population mean

Morton and Torgerson. BMJ 2003;326:1083-4

Bland and Altman. BMJ 1994;308:1499 and 309:780

[Figure: distribution of results. Individuals selected because measurement 1 exceeds a threshold tend, on measurement 2, to fall back toward the population mean.]

Hawthorne Effect

Experimental effect in the direction expected but not for the reason expected

Essentially, studying/measuring something can change what you are studying/measuring

Placebo Effect

An effect (usually, but not always, positive) attributed to the expectation that a therapy will have an effect

The effect is due to the power of suggestion

A placebo is an inert medication or procedure

Waber et al. Commercial Features of Placebo and Therapeutic Efficacy. JAMA 2008. http://jama.ama-assn.org/cgi/content/full/299/9/1016

Effect size

[Diagram: the observed change in the experimental group is the sum of the Hawthorne effect, regression to the mean, the placebo effect and the therapeutic effect; the control group experiences all of these except the therapeutic effect. The real difference between the groups is the therapeutic effect (the signal); the non-specific components are noise.]
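The decomposition in the diagram can be written out as toy arithmetic (all component sizes invented): because the control group experiences every non-specific effect, the between-group difference isolates the therapeutic effect.

```python
# Illustrative (invented) component sizes for the observed change
hawthorne = 2.0           # effect of being studied
regression_to_mean = 3.0  # selected because extreme at baseline
placebo = 4.0             # expectation of benefit
therapeutic = 5.0         # the real, specific treatment effect

# Both arms experience the non-specific components ("noise");
# only the experimental arm also gets the therapeutic effect ("signal")
control_change = hawthorne + regression_to_mean + placebo
experimental_change = control_change + therapeutic

real_difference = experimental_change - control_change
print(real_difference)  # 5.0: the comparison recovers the therapeutic effect
```

Note the assumption doing the work: the non-specific components must be the same in both arms, which is exactly what randomisation and blinding aim to ensure.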

Minimising bias in RCTs

Blinding

Single blind – participants are unaware of treatment allocation

Double blind – both participants and investigators are unaware of treatment allocation

Requires use of placebos in drug trials

Schulz and Grimes (2002)

Concealment of random allocation list

“Trials with inadequate allocation concealment have been associated with larger treatment effects compared with trials in which authors reported adequate allocation concealment”

Schulz KF. Subverting randomisation in controlled trials. JAMA 1995;274:1456-8

Blinding, placebos

RCTs should use the maximum degree of blinding that is possible

Placebo is a ‘dummy’ treatment given when there is no obvious standard treatment

needed because the act of taking a treatment may have some effect in itself, which we need to be able to attribute

for double blinding, treatments must be indistinguishable to those involved

Empirical evidence of bias

‘Explanatory’ and ‘Pragmatic’ questions

Explanatory

Can it work in an ideal setting?

Efficacy

Hypothesis testing

Placebo controlled

Double blind

Pragmatic

Does it work in the real world?

Effectiveness

Choice between alternative approaches to health care

Standard care

Open

Key differences between explanatory and pragmatic trials (1)

                Explanatory         Pragmatic
Question        efficacy            effectiveness
Setting         ‘laboratory’        normal practice
Participants    strictly defined    broader, clinically indicated (uncertainty)
Interventions   strictly defined    as in clinical practice

Key differences between explanatory and pragmatic trials (2)

                        Explanatory                     Pragmatic
Outcomes                short-term, surrogates          long-term, patient-centred and resource orientated
Size                    small (usually single centre)   larger (often multi-centre)
Analysis                treatment received              intention to treat
Relevance to practice   indirect                        direct
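The “Analysis” row can be illustrated with a toy dataset (entirely invented): intention to treat analyses participants in the arm they were randomised to, while a “treatment received” (per-protocol) analysis regroups them by what they actually took, which can reintroduce selection bias.

```python
# Hypothetical trial records: the arm each participant was randomised
# to, the treatment actually received, and a binary outcome (1 = success)
participants = [
    {"randomised": "exp",  "received": "exp",  "outcome": 1},
    {"randomised": "exp",  "received": "ctrl", "outcome": 0},  # crossed over
    {"randomised": "ctrl", "received": "ctrl", "outcome": 0},
    {"randomised": "ctrl", "received": "ctrl", "outcome": 1},
]

def success_rate(group):
    return sum(p["outcome"] for p in group) / len(group)

# Intention to treat: analyse everyone in the arm they were randomised
# to, preserving the comparability that randomisation created
itt_exp = [p for p in participants if p["randomised"] == "exp"]

# Treatment received (per protocol): group by treatment actually taken,
# which drops the cross-over and can bias the comparison (e.g. if
# sicker patients are the ones who stop the experimental treatment)
pp_exp = [p for p in participants if p["received"] == "exp"]

print(success_rate(itt_exp), success_rate(pp_exp))  # 0.5 1.0
```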

Example of selection bias for PP in an open trial

[Diagram, after White (2005): participants with a worse prognosis withdraw from the experimental arm but not from the control arm; a per-protocol analysis that excludes them compares groups that are no longer equivalent.]

Terminology: explanatory versus pragmatic

Explanatory trials

estimate efficacy: the benefit the treatment produces under ideal conditions

Pragmatic trials

estimate effectiveness: the benefit the treatment produces under routine clinical practice

Roland M, Torgerson D. What are pragmatic trials? BMJ 1998;316:285

RCT as the Gold Standard

The randomised controlled trial is widely regarded as the gold standard for evaluating health care technologies because it allows us to be confident that a difference in outcome can be directly attributed to a difference in the treatments and not to some other factor.

RCT strengths

Confounding variables minimised

Only research design which can, in principle, yield causal relationships

can clarify the direction of cause and effect

Accepted by the EBM school

Don’t have to know everything about the participants

RCT limitations

Contamination of intervention groups

Comparable controls

Problems with blinding

What to do about attrition?

Are patients/professionals willing to be in a trial different from ‘refusers’? (external validity)

Cost!

Other issues in RCTs (1)

Ethics

Management issues

Interim analysis and ‘stopping rules’

part of ethical concern

mechanisms to avoid patient harm

a Data Monitoring and Safety Committee is required for trials

Clemens F et al. Data monitoring in randomised controlled trials: surveys of recent practice and policies. Clin Trials 2005;2(1):22-33

Other issues in RCTs (2)

A power calculation is essential for the validity of a trial and will always be necessary for grant applications and in publications of the trial

(later lecture)
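As a preview of that lecture, the standard normal-approximation sample-size formula for comparing two means can be sketched in code (a simplification: real power calculations also account for attrition, unequal arms, clustering etc.):

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate n per group for a two-sided comparison of two means.

    effect_size is the standardised difference (difference / SD).
    Uses the normal approximation:
        n = 2 * (z_{1-alpha/2} + z_{power})**2 / effect_size**2
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # e.g. 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # e.g. 0.84 for 80% power
    return ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

# A medium standardised effect of 0.5 needs about 63 participants per group
print(sample_size_per_group(0.5))  # 63
```

Note how sensitive the answer is to the effect size: halving the expected effect quadruples the required sample, which is why the assumed effect must be justified in the grant application.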

The methods of randomisation should always be reported. It is not enough to say that the patients were randomly allocated to the treatments.

(see CONSORT)

Parallel group (simple) RCT design in practice

[Flow diagram: a patient eligible for either treatment is asked for informed consent. If consent is given, the patient is randomised to the experimental treatment or the standard treatment; if not, the patient is excluded from the trial and receives the standard treatment.]

Summary

“Gold standard” of research designs

Individual patients are randomly allocated to receive the experimental treatment (intervention group) or the standard treatment (control group)

Maximises the potential for attribution

Randomisation guards against selection bias between the two treatment groups

Standard statistical analysis

Good internal validity

May lack generalisability due to highly selected participants

Can be costly to set up and conduct; ethical issues

Good study design

General considerations

maximise attribution

Ensure no factor other than the intervention differs between the intervention and control group

Random allocation, if adequately carried out, will in the long run ensure comparable groups with respect to all factors

Minimise all sources of error

systematic error (bias)

random error (chance)

Be practical and ethical

Minimise sources of error

Systematic errors (bias)

“inaccuracy which is different in its size or direction in one of the groups under study than in the others”

Minimise bias by ensuring that the methods used are applied in the same manner to all subjects irrespective of which group they belong to.

Random errors (chance)

“Inaccuracy which is similar in the different groups of subjects being compared”

Adequate sample size, accurate methods of measurement

Elwood (1998)

Study designs

Experimental (Randomised controlled trial)

A new intervention is deliberately introduced and compared with standard care

Quasi-experimental (non-randomised, controlled before and after)

Researchers do not have full control over the implementation of the intervention (“opportunistic research”)

Observational (cohort, case-control, cross-sectional)

describes current practice

observed differences cannot be attributed solely to a “treatment” effect

Evaluation of health care interventions

Randomised controlled trials are considered as the “gold standard”

However, there is some debate over the advantages and disadvantages of different research designs for assessing the effectiveness of healthcare interventions

Polarised views

“observational methods provide no useful means of assessing the value of a therapy” (Doll, 1993)

RCTs may be unnecessary, inappropriate, impossible or inadequate (Black, 1996)

Approaches should be seen as complementary and not as alternatives (Black, 1996)

Interpretation of RCTs in terms of generalisability

Useful/interesting links

www.jameslindlibrary.org

(History)

www.consort-statement.org

(CONSORT)

www-users.york.ac.uk/~mb55/pubs/pbstnote.htm (all the statistics notes from the BMJ)

www.ctu.mrc.ac.uk (MRC CTU)

www.rcrt.ox.ac.uk (under construction)

Doll R. Controlled trials: the 1948 watershed. BMJ 1998;317:1217-1220

Kunz R, Oxman AD. The unpredictability paradox: review of empirical comparisons of randomised and non-randomised clinical trials. BMJ 1998;317:1185-1190

References

Black. Why we need observational studies to evaluate the effectiveness of health care. BMJ 1996;312:1215-8

Crombie. Research in Health Care. 1996

Doll. Doing more good than harm: the evaluation of health care interventions. Ann NY Acad Sci 1993;703:310-13

Elwood M. Critical appraisal of epidemiological studies and clinical trials. 1998. OUP; Oxford

Greenhalgh T. How to read a paper. 2001. BMJ; London

Schulz and Grimes. Lancet Epidemiology series. 2002