Presentation Transcript

Using Formative Assessment to Improve Student Achievement
Dan Hyson, Data Management Coordinator
Hiawatha Valley Education District (HVED), Winona, MN

Agenda
- Review agenda
- What other questions were you hoping I would address?
- What are the primary functions of assessment?
- How can teachers use informal instructional techniques to gather formative assessment evidence about student mastery?
- How can teachers use formal common classroom assessments to gather formative assessment evidence about student mastery?

3. What are the primary functions of assessment?
- Summative: Did students meet standards? (e.g., MCA proficiency, NWEA MAP percent meeting growth targets, traditional classroom unit tests)
- Formative: provides evidence of student learning you can use to adjust instruction
  - Benchmark screening: Are all students meeting standards or growing at a rate that will make them more likely to meet standards in the future? If not, which students are not meeting standards or not growing at the necessary rate? (e.g., AIMSweb / NWEA MAP benchmark performance, common classroom assessments and instructional techniques)
  - Progress monitoring/Mastery monitoring: Are those identified students responding to the additional intervention we're providing? (e.g., AIMSweb, common classroom assessments and instructional techniques)
  - Diagnostic: If students are not responding, what specific areas of weakness are getting in the way? (e.g., MCA and NWEA MAP sub-skill strands, common classroom assessments and instructional techniques)

ACTIVITY: What assessments do you use within your school to address these primary functions?
- Summative
- Benchmark screening
- Progress monitoring/Mastery monitoring
- Diagnostic

4. How can teachers use informal instructional techniques to gather formative assessment evidence about student mastery?
Typically created and administered by individual teachers within each classroom:
- Letter-card responses / "clicker" student response systems
- Random student responses
- Whiteboard responses
- Quick student self-assessment of understanding (e.g., thumbs up, traffic signal technique)
- Make student thinking visible (Love, 2013; Popham, 2008)

5. How can teachers use formal common classroom tests to gather formative assessment evidence about student mastery?
Typically administered by individual teachers within each classroom, but can be created and evaluated by grade-level or content-area teams:
- Identify "essential learnings"/Power Standards within grade and/or content area standards ("Unwrapping Template…")
- Ensure that all students have the opportunity to learn by monitoring the fidelity of implementation of curriculum and instruction designed to teach those learnings
- Create common classroom assessments to assess those learnings ("Sample Protocol…" and "Assessment Plan")
- Set per-student and per-class/grade-level "triggers"
- If student performance falls below/above the triggers on an assessment, adjust instruction

Fidelity of implementation of curriculum and instruction
Self-assessment AND peer/administrator observations (NON-EVALUATIVE)
Questions to address:
- Is enough time being allocated to deliver the curriculum as intended?
- What are the non-negotiable components of the curriculum, and are they being implemented as intended?
- If NO to either question, do teachers need refresher training in delivering the curriculum?

Set per-student and per-class/grade-level "triggers"  If student performance falls below/above a trigger, adjust instruction
- "If at least 90 percent of my students don't earn scores of 90 percent or better on Thursday's formative quiz, I'll add a new review lesson on Friday."
- "If at least 95 percent of my students correctly answer at least 8 items on tomorrow's 10-item quiz on Topic X, I will delete next week's planned Topic X review lesson." (Popham, 2008, p. 65)
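Because these triggers are simple threshold rules, they are easy to automate when quiz scores are collected electronically. Below is a minimal Python sketch of the first Popham example; the function names, thresholds, and scores are illustrative assumptions, not part of the presentation.

```python
# Minimal sketch of a Popham-style per-class trigger (all names,
# thresholds, and scores here are illustrative assumptions).

def share_at_or_above(scores, cutoff):
    """Proportion of students scoring at or above `cutoff` (in percent)."""
    return sum(1 for s in scores if s >= cutoff) / len(scores)

def needs_review_lesson(scores, student_share=0.90, score_cutoff=90):
    """True if fewer than 90 percent of students earned 90 percent or
    better, i.e., the trigger for adding Friday's review lesson fires."""
    return share_at_or_above(scores, score_cutoff) < student_share

quiz_scores = [95, 88, 92, 76, 90, 94, 85, 91, 97, 89]  # hypothetical class
print(needs_review_lesson(quiz_scores))  # True: only 6 of 10 scored >= 90
```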

Evaluating the quality of a common classroom test by calculating…
- Correlation with summative outcome
- Item difficulty: proportion of test takers answering the item correctly
  Example: If 13 out of 20 students get an item correct, item difficulty = 13/20 = .65
- Item discrimination: item discrimination index = difference between the proportion of test takers who earned overall test scores in the top 1/3 of the class and the proportion in the bottom 1/3 answering the item correctly
  Example: If 8 out of 8 test takers in the top 1/3 and 2 out of 8 in the bottom 1/3 of a class of 24 students get an item correct, item discrimination index = 8/8 − 2/8 = 1 − .25 = .75
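As a concrete illustration, here is a short Python sketch of both statistics. The function names and sample data are assumptions chosen to reproduce the two worked examples on this slide.

```python
# Sketch of the item difficulty and item discrimination index
# calculations (function names and sample data are illustrative).

def item_difficulty(item_correct):
    """Proportion of test takers answering the item correctly (1 = correct)."""
    return sum(item_correct) / len(item_correct)

def discrimination_index(item_correct, total_scores):
    """Proportion of the top third of the class (by total test score)
    answering correctly, minus the proportion of the bottom third."""
    n = len(total_scores)
    third = n // 3
    ranked = sorted(range(n), key=lambda i: total_scores[i], reverse=True)
    top, bottom = ranked[:third], ranked[-third:]
    p_top = sum(item_correct[i] for i in top) / third
    p_bottom = sum(item_correct[i] for i in bottom) / third
    return p_top - p_bottom

# 13 of 20 students correct -> difficulty .65, as in the example above
print(item_difficulty([1] * 13 + [0] * 7))  # 0.65

# Class of 24: all 8 in the top third correct, 2 of 8 in the bottom third
total_scores = list(range(24, 0, -1))  # hypothetical totals, high to low
item_correct = [1] * 8 + [1, 0, 1, 0, 1, 0, 1, 0] + [1, 1, 0, 0, 0, 0, 0, 0]
print(discrimination_index(item_correct, total_scores))  # 0.75
```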

Evaluating the quality of a common classroom test by calculating…
- Item discrimination (continued): point-biserial correlation = correlation between performance on one item and performance on the entire test

  r_pbis = [(Y1 − Y)/Sy] × √(Px/(1 − Px))

  Y1 = mean of total test scores of those answering the item correctly
  Y = mean of total test scores of all test takers
  Px = item difficulty (proportion of those getting the item correct)
  Sy = standard deviation of total test scores of all test takers
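The formula translates directly into code. The sketch below follows the definitions above; the function name and sample data are illustrative assumptions, and it takes Sy to be the population standard deviation.

```python
# Sketch of the point-biserial correlation as defined above
# (function name and sample data are illustrative assumptions).
from statistics import mean, pstdev

def point_biserial(item_correct, total_scores):
    """r_pbis = [(Y1 - Y)/Sy] * sqrt(Px/(1 - Px))"""
    y1 = mean(t for c, t in zip(item_correct, total_scores) if c)  # Y1
    y = mean(total_scores)                                         # Y
    sy = pstdev(total_scores)                                      # Sy (population SD)
    px = sum(item_correct) / len(item_correct)                     # Px
    return (y1 - y) / sy * (px / (1 - px)) ** 0.5

item_correct = [1, 1, 1, 0, 1, 0, 0, 1]  # hypothetical item responses
total_scores = [9, 8, 8, 5, 7, 4, 6, 9]  # hypothetical total test scores
print(round(point_biserial(item_correct, total_scores), 2))  # ~0.89
```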

Designing new common classroom tests with items… of optimal difficulty
- Chance performance level = proportion of success on an item expected by chance
  For a multiple-choice item, 1.0 divided by the number of response options (e.g., if 4 response choices, 1.0/4 = .25 chance performance level)
- Optimal difficulty level = [(1.0 − chance performance level)/2] + chance performance level
  Example: If chance performance level = .25, optimal difficulty level = [(1.0 − .25)/2] + .25 = .625
- BUT not all items on a test need to be of optimal difficulty. Optimal range = .30 to .70.
- BUT it is OK to have a few items of .90 to 1.0 difficulty, especially at the beginning of the test, to build confidence.
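These two formulas are also simple to compute. A minimal sketch, assuming a standard multiple-choice item (function names are illustrative):

```python
# Sketch of the chance-performance and optimal-difficulty formulas
# above (function names are illustrative assumptions).

def chance_performance_level(num_options):
    """Proportion correct expected from blind guessing on a
    multiple-choice item with `num_options` response choices."""
    return 1.0 / num_options

def optimal_difficulty(num_options):
    """Midpoint between the chance performance level and 1.0."""
    chance = chance_performance_level(num_options)
    return (1.0 - chance) / 2 + chance

print(optimal_difficulty(4))  # 0.625, matching the slide's example
```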

Designing new common classroom tests with items… to test varied levels of understanding
Using Bloom's taxonomy as a guide, design items that call on the test taker to demonstrate:
- Knowledge
- Comprehension
- Application
- Analysis
- Synthesis
- Evaluation

References/Resources
Bailey, K., & Jakicic, C. (2012). Common formative assessment: A toolkit for professional learning communities at work. Bloomington, IN: Solution Tree Press.
Love, N. (2013). Data literacy for teachers. Port Chester, NY: National Professional Resources, Inc.
Popham, W. J. (2008). Transformative assessment. Alexandria, VA: Association for Supervision and Curriculum Development.
Hyson webinar recording and handouts on "Creating or Improving the Effectiveness of Data Teams": http://www.hved.org/index.php/programs-services/data-management/52-trainings-webinar-recordings

Questions?

Contact information
Dan Hyson
Hiawatha Valley Education District
1410 Bundy Boulevard
Winona, MN 55987
507-452-1200, ext. 119 OR 507-474-7196 (direct line)
dhyson@hved.org