Petri Nokelainen, petri.nokelainen@tut.fi - PPT Presentation
Uploaded 2020-06-30



Presentation Transcript

Slide1

Petri Nokelainen, petri.nokelainen@tut.fi

Issues in Study Design

Slide2

[Diagram: the research cycle]
Original idea for the research → Literature review → Research questions / hypotheses → Design → Methodology → Data collection → Data analyses → Writing scientific report (intro/theory, RQ's, method, results, discussion) → Peer review → Publication of the study → Database of scientific knowledge
Design vs. Methodology? Primary / existing data; measurements.

Slide3

Design vs. Methodology

Design focuses on the procedures related to outcomes.
Historical, comparative, interpretive, exploratory research.
What evidence is needed to answer the research question(s)?
Methodology focuses on the research process (instrumentation and analyses).
Primary, secondary data.
How to conduct analyses in a robust and unbiased way.

Slide4

(Nokelainen, 2008, p. 119.)

D = Design (ce = controlled experiment, co = correlational study)
N = Sample size
IO = Independent observations
ML = Measurement level (c = continuous, d = discrete, n = nominal)
MD = Multivariate distribution (n = normal, similar)
O = Outliers
C = Correlations
S = Statistical dependencies (l = linear, nl = non-linear)

Slide5

'Pretest post-test randomized experiment'
Applied in many fields, but needs a random sample ('probability sample') and random assignment (participants are randomly assigned to the experimental and control groups). Research is conducted in a controlled environment (e.g., laboratory) with experimental and control groups (a threat to external validity due to the artificial environment).
Using an experimental design, both reliability and validity are maximized via random sampling and control in the given experiment (de Vaus, 2004).
Experimental design

Slide6

Random sample → random assignment:
Exp.: Pre | I | Post
Contr.: Pre | - | Post
Experimental design

Slide7

Random assignment to groups | Pretest | Intervention | Post-test
Experimental group | Measurement (X) | Treatment | Measurement (Y)
Control group | Measurement (X) | No treatment | Measurement (Y)
Experimental design
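The pretest-posttest logic above can be sketched in code. A minimal simulation, assuming hypothetical normally distributed test scores and an invented treatment effect (none of these numbers come from the slides):

```python
import random
import statistics

# Sketch of a pretest-posttest randomized experiment with simulated data.
random.seed(42)

participants = list(range(40))
random.shuffle(participants)                  # random assignment
experimental, control = participants[:20], participants[20:]

def measure(group, treatment_effect):
    """Simulate pre- and post-test scores; the treatment adds a constant effect."""
    results = []
    for _ in group:
        pre = random.gauss(50, 10)                          # Measurement (X)
        post = pre + treatment_effect + random.gauss(0, 5)  # Measurement (Y)
        results.append((pre, post))
    return results

exp_scores = measure(experimental, treatment_effect=8)   # treatment
ctrl_scores = measure(control, treatment_effect=0)       # no treatment

# Compare gain scores (post minus pre) between the two groups.
exp_gain = [post - pre for pre, post in exp_scores]
ctrl_gain = [post - pre for pre, post in ctrl_scores]
print(f"mean gain, experimental: {statistics.mean(exp_gain):.1f}")
print(f"mean gain, control:      {statistics.mean(ctrl_gain):.1f}")
```

Because assignment is random, a difference in mean gains can be attributed to the treatment rather than to pre-existing group differences.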

Slide8

'Non-equivalent groups design'
Resembles experimental design but lacks random assignment (sometimes also random sampling) and a controlled research environment.
This type of design is sometimes the only way to do research in certain populations, as it minimizes the threats to external validity (natural environments instead of artificial ones).
Random / convenience sample:
Exp.: Pre | I | Post
Contr.: Pre | - | Post
Quasi-experimental design

Slide9

'Descriptive study' or 'observational study'
Allows the use of a non-probability sample (a.k.a. 'convenience sample'). Most correlational designs are missing control, and thus lose some of their scientific power (Jackson, 2006).
Some research journals accept factorial analysis (main and interaction effects, e.g., MANOVA) based on a correlational design.
Convenience sample:
Exp.: Pre | I | Post
Correlational design

Slide10

[Diagram: the three designs compared]
Pretest-posttest randomized experiment: RANDOM SAMPLING (RS) and RANDOM SELECTION to groups; TEST: Pre | I | Post; CONTROL: Pre | - | Post
Non-Equivalent Groups Design: RS; TEST: Pre | I | Post; CONTROL: Pre | - | Post
Correlational design: convenience sample (CS); TEST: Pre | I | Post

Slide11

Observational studies can utilize cross-sectional or longitudinal designs (see Caskie & Willis, 2006).
A longitudinal design includes a series of measurements over time: change over time, age effect.
A cross-sectional study usually involves one measurement and is thus considerably cheaper and faster to conduct (although producing less controllable and less powerful results). If there are several measurements, individual participants' answers are not connected over time (e.g., due to anonymity).
Causal conclusions are usually out of the scope of this research type.
Time and design

Slide12

One sample that remains the same throughout the study.
A longitudinal study produces more convincing results, as it allows the understanding of change in a construct over time, and of the variability and predictors of such change.
However, it takes more time to carry out and suffers from participant drop-out (imputation of missing data, e.g., Molenberghs, Fitzmaurice, Kenward, Tsiatis, & Verbeke, 2014).
Longitudinal design
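Drop-out handling can be made concrete with a toy sketch. The two simple imputation rules below (last observation carried forward, wave-mean substitution) and the data are illustrative assumptions only, not the methods discussed by Molenberghs et al. (2014):

```python
import statistics

# Toy longitudinal dataset: participant id -> scores at waves t1, t2, t3.
# None marks a missing measurement (participant dropped out).
waves = {
    "p1": [4, 5, 6],
    "p2": [3, 4, None],   # dropped out before t3
    "p3": [5, None, None],
}

def locf(scores):
    """Last observation carried forward: repeat the last seen value."""
    filled, last = [], None
    for s in scores:
        last = s if s is not None else last
        filled.append(last)
    return filled

def mean_impute(all_scores, t):
    """Replace a missing value at wave t with that wave's observed mean."""
    observed = [s[t] for s in all_scores if s[t] is not None]
    return statistics.mean(observed)

print(locf(waves["p3"]))                      # → [5, 5, 5]
print(mean_impute(list(waves.values()), 2))   # only p1 observed at t3 → 6
```

Both rules are naive (they can bias estimates of change); they only illustrate why principled missing-data treatment matters in longitudinal designs.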

Slide13

Measurement is conducted once (or several times) and the sample varies throughout the study.
Cross-sectional design

Slide14

Applied in qualitative research. The aim is to collect information from one or more cases and to study, describe, and explain them through 'how' and 'why' questions.
Cases are represented, for example, by individuals, their communication, and their experiences. (For a thorough discussion, see Flyvbjerg, 2004.)
Case study design

Slide15

Controlled experiment designs, when conducted properly, rule out IO violations quite effectively (Martin, 2004), but correlational designs usually lack such control (e.g., to rule out employees' co-operation when they respond to the survey questions).
On the other hand, some qualitative techniques, like focus group analysis (Macnaghten & Myers, 2004), are heavily based on non-independent observations, as informants may (or are asked to) talk to each other during the data collection.
About designs

Slide16

What really matters
Most important questions:
Scientific impact
Societal impact
Answered by design.
So, what drives us: design or method?

Slide17

[Diagram: REALMEE study setup]
Research Design: Regulation of learning and active learning methods in the context of engineering education (REALMEE)
Expert Team: TUT course contents. Education Science Team: course planning, pedagogical intervention. Research Team: pre- and post-tests, event measures.
Intervention group and control group.

Slide18

Research Design: Regulation of learning and active learning methods in the context of engineering education (REALMEE)

Slide19

Lack of design shows up:
Dissertations
Journal manuscripts
Funding applications
Even in published research!

Slide20

Review
The total number of participants in the 18 reviewed articles was 3485, of which 681 participated in qualitative and 2804 in quantitative studies.
Only 11 articles contained both an explanation and justification of the selected methodological approach and a robust description of the data analysis.
Only eight articles had a section about critical examination of the method(s) and limitations of the study.
Two articles based on group-level data did not discuss the rationale for choosing such an approach and the related validity issues (Chioncel et al., 2003).
(Pylväs et al., in press.)

Slide21

What really matters
Scientific impact
Existing research, review (Paré et al., 2015).
Research gap

Slide22

What really matters
Scientific impact
Trends in publication policies: research design and methodology
qual vs. quan; generalizability vs. representativeness
Gobo (2004) defines a concept of generalizability for qualitative research by arguing that the concept of generalizability is based on the idea of social representativeness, which allows the generalizability to become a function of the invariance (regularities) of the phenomenon.

Slide23

What really matters
Thus, "The ethnographer does not generalize one case or event … but its main structural aspects that can be noticed in other cases or events of the same kind or class." (Gobo, 2004, p. 453.)

Slide24

What really matters
Scientific impact
Trends in publication policies: research design and methodology
Data, investigator, theory, and methodological triangulation (Denzin, 1978) are applied to compensate for design limitations, reduce possible researcher bias, and increase the strength of conclusions.
Design research approach (Bannan-Ritland, 2003).

Slide25

What really matters
Scientific impact
Trends in publication policies: research design and methodology
longitudinal studies (qual & quan), latent variable modeling (e.g., R, lavaan)
effect size (Barry et al., 2016), CI for effect sizes (Thompson, 1994, 1996)
critical examination of p-values and NHSTP
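The shift from bare p-values toward effect sizes with confidence intervals can be sketched as follows. The data are simulated, and the percentile bootstrap is just one simple (assumed) way to obtain a CI for Cohen's d:

```python
import random
import statistics

# Simulated scores for two groups (hypothetical means and SDs).
random.seed(1)
group_a = [random.gauss(52, 10) for _ in range(30)]
group_b = [random.gauss(45, 10) for _ in range(30)]

def cohens_d(a, b):
    """Standardized mean difference using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * statistics.variance(a) +
                  (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / pooled_var ** 0.5

# Percentile bootstrap CI: resample each group with replacement, recompute d.
boot = sorted(
    cohens_d([random.choice(group_a) for _ in group_a],
             [random.choice(group_b) for _ in group_b])
    for _ in range(2000)
)
lo, hi = boot[int(0.025 * len(boot))], boot[int(0.975 * len(boot))]
print(f"d = {cohens_d(group_a, group_b):.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

Reporting the effect size with its interval conveys magnitude and precision, which a lone p-value cannot.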

Slide26

… 'null hypothesis significance testing procedure' and its featured product, the p-value.
Gigerenzer, Krauss and Vitouch (2004, p. 392) describe 'the null ritual' as follows:
1) Set up a statistical null hypothesis of "no mean difference" or "zero correlation." Don't specify the predictions of your research or of any alternative substantive hypotheses;
2) Use 5 per cent as a convention for rejecting the null. If significant, accept your research hypothesis;
3) Always perform this procedure.
NHSTP

Slide27

A p-value is the probability of the observed data (or of more extreme data points), given that the null hypothesis H0 is true: P(D|H0) (id.).
The first common misunderstanding is that the p-value of, say, a t-test would describe how probable it is to get the same result if the study is repeated many times (Thompson, 1994). Gerd Gigerenzer and his colleagues (id., p. 393) call this the replication fallacy, as "P(D|H0) is confused with 1 − P(D)."
NHSTP
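The replication fallacy is easy to demonstrate by simulation: when H0 is actually true, p-values do not "replicate" across repeated studies but scatter roughly uniformly, with about 5% of runs coming out "significant" by chance. A minimal sketch with simulated data and a hand-rolled t statistic (an assumption, not the slides' own analysis):

```python
import random
import statistics

random.seed(7)

def two_sample_t(a, b):
    """Welch-style t statistic for two independent samples."""
    se = (statistics.variance(a) / len(a) + statistics.variance(b) / len(b)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

# Repeat the same null experiment many times: both groups drawn from the
# SAME distribution, so H0 ("no mean difference") is true by construction.
significant = 0
runs = 2000
for _ in range(runs):
    a = [random.gauss(0, 1) for _ in range(50)]
    b = [random.gauss(0, 1) for _ in range(50)]
    if abs(two_sample_t(a, b)) > 1.96:   # nominal 5% two-sided criterion
        significant += 1

print(f"'significant' results under a true H0: {significant / runs:.1%}")
```

A single significant p therefore says nothing about how likely the same result is in a repeated study: P(D|H0) is not 1 − P(D).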

Slide28

The second misunderstanding, shared by both applied statistics teachers and their students, is that the p-value would prove or disprove H0. However, a significance test can only provide probabilities, not prove or disprove the null hypothesis.
Gigerenzer (id., p. 393) calls this fallacy an illusion of certainty: "Despite wishful thinking, P(D|H0) is not the same as P(H0|D), and a significance test does not and cannot provide a probability for a hypothesis." NHSTP
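The gap between P(D|H0) and P(H0|D) can be made concrete with Bayes' rule. All numbers below are invented for illustration:

```python
# Hypothetical numbers: a "significant" result D occurs with probability
# 0.05 when H0 is true and 0.50 when H0 is false; the prior P(H0) is 0.8.
p_d_given_h0 = 0.05
p_d_given_h1 = 0.50
p_h0 = 0.80

# Bayes' rule: P(H0|D) = P(D|H0) * P(H0) / P(D)
p_d = p_d_given_h0 * p_h0 + p_d_given_h1 * (1 - p_h0)
p_h0_given_d = p_d_given_h0 * p_h0 / p_d

print(f"P(D|H0) = {p_d_given_h0:.2f}")
print(f"P(H0|D) = {p_h0_given_d:.2f}")   # 0.04 / 0.14 ≈ 0.29, not 0.05
```

Even with these favorable made-up numbers, the probability of the null given a significant result (≈ 0.29) is nowhere near the 5% that the p-value seems to suggest.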

Slide29

What really matters
Scientific impact
Trends in publication policies: research design and methodology
paradigmatic vs. algorithmic modeling (Breiman, 2001)
Seeking or learning structures from data?
Exploratory vs. confirmatory approach …

Slide30

The target population of the study consisted of ATCOs in Finland (N = 300), of which 28, representing four different airports, were interviewed.
The research data also included interviewees' aptitude test scoring, study records, and employee assessments.
(Pylväs, Nokelainen, & Roisko, 2015.)
Learning structures

Slide31

The research questions were examined by using theoretical concept analysis. The qualitative data analysis was conducted with content analysis and Bayesian classification modeling.
What are the differences in characteristics between the air traffic controllers representing vocational expertise and vocational excellence?
Learning structures
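To give a flavor of "Bayesian classification modeling" over coded qualitative data, here is a tiny naive Bayes sketch. It is illustrative only: the binary features and labels are invented and do not reproduce the actual ATCO model of Pylväs, Nokelainen, & Roisko (2015):

```python
import math
from collections import Counter

# Hypothetical coded interview data.
# Features: (strong_self_regulation, high_motivation); label: expertise level.
train = [
    ((1, 1), "excellence"), ((1, 1), "excellence"), ((1, 0), "excellence"),
    ((0, 1), "expertise"), ((0, 0), "expertise"), ((1, 0), "expertise"),
]

labels = Counter(lab for _, lab in train)

def log_posterior(features, label):
    """log P(label) + sum_i log P(feature_i | label), with Laplace smoothing."""
    lp = math.log(labels[label] / len(train))
    for i, value in enumerate(features):
        matches = sum(1 for f, lab in train if lab == label and f[i] == value)
        lp += math.log((matches + 1) / (labels[label] + 2))
    return lp

def classify(features):
    """Pick the label with the highest (smoothed) posterior."""
    return max(labels, key=lambda lab: log_posterior(features, lab))

print(classify((1, 1)))   # → excellence
```

The classifier learns structure from the data rather than testing a pre-specified hypothesis, which is the exploratory spirit the previous slide contrasts with confirmatory modeling.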

Slide32

Learning

structures

Slide33

"…the natural ambition of being good. Air traffic controllers perhaps generally have a strong professional pride."
"Interesting and rewarding work, that is the basis of wanting to stay in this work until retiring."
"I read all the regulations and instructions carefully and precisely, and try to think… the majority wave them aside. It reflects on work."
Learning structures

Slide34

Learning

structures

Slide35

Learning

structures

Slide36

Learning

structures

Slide37

Data analysis should not be pointlessly formal; instead, "... it should make an interesting claim; it should tell a story that an informed audience will care about and it should do so by intelligent interpretation of appropriate evidence from empirical measurements or observations" (Abelson, 1995, p. 2).
Conclusions

Slide38

Conclusions
Reviewers (mostly seasoned scientists) usually accept the intellectual challenge of an innovative methodological approach.
Means to reach an interesting academic end are usually supported … and that builds YOUR scientific credibility over time.

Slide39

References

Abelson, R. P. (1995). Statistics as Principled Argument. Hillsdale, NJ: Lawrence Erlbaum Associates.
Anderson, J. (1995). Cognitive Psychology and Its Implications. New York: Freeman.
Bannan-Ritland, B. (2003). The Role of Design in Research: The Integrative Learning Design Framework. Educational Researcher, 32(1), 21-24.
Barry, A. E., Szucs, L. E., Reyes, J. V., Ji, Q., Wilson, K. L., & Thompson, B. (2016). The Handling of Quantitative Results in Published Health Education and Behavior Research. Health Education & Behavior, 43(5), 518-527.
Brannen, J. (2004). Working qualitatively and quantitatively. In C. Seale, G. Gobo, J. Gubrium, & D. Silverman (Eds.), Qualitative Research Practice (pp. 312-326). London: Sage.
Breiman, L. (2001). Statistical Modeling: The Two Cultures. Statistical Science, 16(3), 199-231.
Chioncel, N. E., Van Der Veen, R. G. W., Wildemeersch, D., & Jarvis, P. (2003). The validity and reliability of focus groups as a research method in adult education. International Journal of Lifelong Education, 22(5), 495-517.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences. Second edition. Hillsdale, NJ: Lawrence Erlbaum Associates.
Fisher, R. (1935). The design of experiments. Edinburgh: Oliver & Boyd.

Slide40

References

Flyvbjerg, B. (2004). Five misunderstandings about case-study research. In C. Seale, J. F. Gubrium, G. Gobo, & D. Silverman (Eds.), Qualitative Research Practice (pp. 420-434). London: Sage.
Gigerenzer, G. (2000). Adaptive thinking. New York: Oxford University Press.
Gigerenzer, G., Krauss, S., & Vitouch, O. (2004). The null ritual: What you always wanted to know about significance testing but were afraid to ask. In D. Kaplan (Ed.), The SAGE handbook of quantitative methodology for the social sciences (pp. 391-408). Thousand Oaks: Sage.
Gobo, G. (2004). Sampling, representativeness and generalizability. In C. Seale, J. F. Gubrium, G. Gobo, & D. Silverman (Eds.), Qualitative Research Practice (pp. 435-456). London: Sage.
Hair, J. F., Anderson, R. E., Tatham, R. L., & Black, W. C. (1998). Multivariate Data Analysis. Fifth edition. Englewood Cliffs, NJ: Prentice Hall.

Slide41

References

Jackson, S. (2006). Research Methods and Statistics. A Critical Thinking Approach. Second edition. Belmont, CA: Thomson.
Lavine, M. L. (1999). What is Bayesian Statistics and Why Everything Else is Wrong. The Journal of Undergraduate Mathematics and Its Applications, 20, 165-174.
Nokelainen, P. (2006). An Empirical Assessment of Pedagogical Usability Criteria for Digital Learning Material with Elementary School Students. Journal of Educational Technology & Society, 9(2), 178-197.
Nokelainen, P. (2008). Modeling of Professional Growth and Learning: Bayesian Approach. Tampere: Tampere University Press.
Nokelainen, P., & Ruohotie, P. (2009). Non-linear Modeling of Growth Prerequisites in a Finnish Polytechnic Institution of Higher Education. Journal of Workplace Learning, 21(1), 36-57.

Slide42

References

Paré, G., Trudel, M. C., Jaana, M., & Kitsiou, S. (2015). Synthesizing information systems knowledge: a typology of literature reviews. Information and Management, 52(2), 183-199.
Pylväs, L., Mikkonen, S., Rintala, H., Nokelainen, P., & Postareff, L. (in press). Guiding the workplace learning in vocational education and training: A literature review. To appear in Empirical Research in Vocational Education and Training.
Thompson, B. (1994). Guidelines for authors. Educational and Psychological Measurement, 54(4), 837-847.
Thompson, B. (1996). AERA editorial policies regarding statistical significance testing: Three suggested reforms. Educational Researcher, 25(2), 26-30.
de Vaus, D. A. (2004). Research Design in Social Research. Third edition. London: Sage.