One Metric to Rule Them All


danika-pritchard

Uploaded On 2019-06-28




Presentation Transcript

Slide1

One Metric to Rule Them All

Lessons from the World of Sabermetrics

Slide2

Baseball’s Single Metric - VAR

In the world of sports, there is a way to compare every baseball player to every other baseball player throughout almost all of baseball history using just one number. It’s called a VAR stat (VAR = Value Above Replacement), AKA WAR (Wins Above Replacement). And it’s not actually unique to baseball; other sports use it too. It basically represents your contribution to your team in the form of the number of games you win over an average benchwarmer.

Slide3

Harvard vs. Every Community College Ever

What if I told you that we could create the same kind of metric for virtually all colleges and universities?

We could compare Harvard to your local state college.

And Harvard won’t always be the best choice for every individual.

Not only could we compare them, but we could do so using one simple number.

One metric to rule them all

Slide4

What Matters When Comparing Different Colleges?

When you purchase something, anything at all, what is it that you consider before buying it?

Two Primary Things:

1) Cost

2) Quality

Slide5

My Main Argument

My Main Argument is:

Consumers need to be able to compare all of the vastly different higher education options to each other.

We have the tools to make a single metric that allows for this.

The Metric Needs To:

1) Effortlessly convey a cost/value ratio to users, even when they have vastly different worldviews on college affordability.

2) (Probably) correct for incoming student characteristics.

The various WAR stats in baseball are not perfect.

But they allow users a way to narrow the field. It’s a starting place for decision making, rather than an ending place.

Slide6

General Framework

1) Calculate a ‘Cost Metric’ that rewards institutions for low costs.

2) Calculate a ‘Value Metric’ that rewards institutions for positive student outcomes.

2b) Adjust this value metric so that institutions with better incoming student characteristics aren’t unduly rewarded for recruiting the best students.

1 & 2 are combined into a single metric that is (initially) weighted equally between the two.

Users can then apply their own weighting schemes to place more or less weight on cost.
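The user-adjustable weighting can be sketched in a couple of lines; the function name, the linear blend, and the example ranks below are my own illustration, not from the slides:

```python
def combined_score(cost_rank, value_rank, cost_weight=0.5):
    """Blend a cost rank and a value rank into one score.

    cost_weight defaults to 0.5 (the slide's initial equal weighting);
    a user can raise it to care more about cost, or lower it to care
    more about outcomes. Lower ranks are better, so lower scores win.
    """
    return cost_weight * cost_rank + (1 - cost_weight) * value_rank

# Equal weighting (the initial default):
print(combined_score(10, 30))          # 20.0
# A cost-sensitive user puts 75% of the weight on cost:
print(combined_score(10, 30, 0.75))    # 15.0
```

Because the blend is linear, re-weighting never requires recomputing the underlying ranks; only the final combination changes.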

Slide7

Let’s Get Specific

Building the One Metric

Slide8

Building the Metrics

There are so many possible variables that are related to cost and student outcomes. How do you decide which one is the best?

You don’t.

Use a set of variables instead – then condense the information into a single dimension.

This is essentially the same philosophy the Carnegie Classification System uses.

Rank institutions on each variable, from best to worst.

Then sum the ranks into an ‘aggregated measure’.

Then rank each institution on the ‘aggregated measure’.

That’s it. You’re done.

Basically, you’ve created an ‘average rank’ across several different variables.
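The rank-sum recipe above fits in a few lines of Python. A minimal sketch, where the institutions, variables, and dollar values are all made up for illustration (lower is better for both cost variables, so ranks ascend):

```python
# Hypothetical cost data for three institutions.
data = {
    "College A": {"net_price": 12000, "debt_at_grad": 18000},
    "College B": {"net_price": 9000,  "debt_at_grad": 25000},
    "College C": {"net_price": 15000, "debt_at_grad": 20000},
}

def rank(values):
    """Map each name to its 1-based rank (1 = lowest/best value)."""
    order = sorted(values, key=values.get)
    return {name: i + 1 for i, name in enumerate(order)}

variables = ["net_price", "debt_at_grad"]

# Step 1: rank institutions on each variable individually.
per_var_ranks = {v: rank({name: row[v] for name, row in data.items()})
                 for v in variables}

# Step 2: sum the per-variable ranks into an 'aggregated measure'.
aggregated = {name: sum(per_var_ranks[v][name] for v in variables)
              for name in data}

# Step 3: rank each institution on the aggregated measure.
final_rank = rank(aggregated)
print(final_rank)  # {'College A': 1, 'College B': 2, 'College C': 3}
```

College A is never the cheapest on either variable, but it wins overall because it is consistently near the top on both, which is exactly the behavior an ‘average rank’ should have.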

Slide9

A Starting Point: Cost Metrics

Beginning Metrics (IPEDS)

Average Net Price (Group 3)

Average Net Price (Group 4)

Average Net Price (Group 4: $0 - $30,000)

Possible Metrics (College Scorecard)

Student Debt at Graduation

% of Students receiving loans

There are lots of potential variables that could be included. But the combination listed above covers the net price of (in equal weight):

Students awarded scholarships.

Students that are eligible for federal financial aid.

The most economically disadvantaged students.

Slide10

A Starting Point: Student Outcome Metrics

Beginning Metrics

IPEDS Grad Rate (100%)

IPEDS Grad Rate (150%)

Wage Data (College Scorecard/Payscale)

Advanced Metrics (i.e. things I’d like to add)

SAM Success Rates (Total 6-year)

Slide11

Should We Adjust Student Outcomes?

It seems natural that we should adjust student outcome metrics to account for the incoming characteristics of the students that attend the institution.

After all, it doesn’t seem fair to compare open access institutions with highly selective institutions. They have vastly different missions!

But there is an argument for not adjusting at all.

If you can get into Harvard, then you are likely to experience student outcomes that are similar to Harvard’s.

So why penalize Harvard by adjusting its student outcomes down?

Slide12

Why Not Display Both?

Is adjusting really the best course of action?

Honestly, I’m not sure. It’s probably correct to adjust.

Adjusting is going to be a big challenge.

Of course, we can always display both an adjusted rate and an unadjusted rate.

Slide13

Adjusting Student Outcomes, Step 1

Step 1: Create an Incoming Student Characteristic Rank [ISCR] in the same way as the other variables.

Place whatever variables you want in here. Basically, any incoming characteristic that is related to the student but not the institution.

Rank all variables individually (lower numbers mean the value is more strongly correlated with positive student outcomes). Sum the ranks. This is the final ISCR.

Slide14

ASO: Step 2

Step 2: Use the ISCR to predict the Student Outcome Rank at an aggregate level.

May have to do this separately for different types of institutions? Not sure how this might affect comparisons between 4-year and 2-year institutions.

Slide15

ASO: Step 3

Step 3: Generate a predicted Student Outcome Rank for each institution based on Step 2. [Careful: relationships are highly likely to be non-linear.]

I recommend using a series of ensemble predictive models here.

Slide16

ASO: Step 4 & 5

Step 4: Subtract Predicted Student Outcome from Actual Student Outcome for each institution. This is essentially a ‘value-added’ metric.

Step 5: Rank order this value added metric. This replaces the Student Outcome Metric.
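Steps 2 through 5 can be sketched end to end. In this illustration the ISCR and outcome ranks are invented, and a plain least-squares line stands in for the ensemble predictive models the slides recommend:

```python
# Hypothetical data: incoming-student-characteristic rank (ISCR) and
# actual student-outcome rank for eight institutions (lower = better).
iscr    = [1, 2, 3, 4, 5, 6, 7, 8]
outcome = [1, 3, 2, 5, 4, 7, 8, 6]

# Steps 2-3: predict outcome rank from ISCR. A simple least-squares
# line is a stand-in; real relationships are likely non-linear.
n = len(iscr)
mx, my = sum(iscr) / n, sum(outcome) / n
slope = (sum(x * y for x, y in zip(iscr, outcome)) - n * mx * my) / \
        (sum(x * x for x in iscr) - n * mx * mx)
intercept = my - slope * mx
predicted = [intercept + slope * x for x in iscr]

# Step 4: actual minus predicted = a 'value-added' measure.
# Negative values mean a better (lower) outcome rank than predicted.
value_added = [a - p for a, p in zip(outcome, predicted)]

# Step 5: rank the value-added measure (1 = most value added); this
# replaces the raw Student Outcome Metric.
order = sorted(range(n), key=lambda i: value_added[i])
value_added_rank = [0] * n
for r, i in enumerate(order):
    value_added_rank[i] = r + 1
print(value_added_rank)
```

Note that the institution with the worst raw outcome rank can still earn the best value-added rank if its incoming students predicted even worse outcomes, which is the whole point of the adjustment.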

Slide17

A Starting Point: Incoming Student Characteristics

Beginning Metrics (IPEDS)

Percentage of Admitted Applicants

Average Incoming ACT/SAT

Percentage of Pell Students

Possible Metrics

Percentage of Group 4 Students in $0-$30k and $30k-$48k as a percentage of FT-FT IPEDS cohort.

High School GPA – (CDS? – Not in IPEDS?)

Slide18

Challenges and Future Vision

The data would be used mostly as a starting point. As in ‘What institutions should I be looking at when I am planning on going to college’. It’s really used to create a list of potential institutions, rather than to finalize the list.

Obviously, there are several data challenges.

The overall vision would be to try and copy a little bit of what has occurred in the sports world: create a ‘sabermetrics’ for higher ed data.

This would entail dozens of people working on this data, many of them creating their own versions of the metric and possibly aggregating those results in an ensemble fashion.

Slide19

Challenges and Future Vision

But there are lots of data gaps.

A heavy amount of imputation would need to be done.

And comparing community colleges to 4-year institutions would still be difficult, especially since, in an ideal world, the data elements would be the same across the sectors. But this isn’t exactly true.

I actually think that one should create separate rankings for each major Carnegie Classification, and then crosswalk the rankings using a percentile system to adjust for data differences that exist across classifications.

This could be especially helpful in some areas when comparing community colleges to 4-year institutions with respect to things like graduation rates.
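One way to sketch the percentile crosswalk. The formula and the group sizes here are illustrative assumptions, not from the slides:

```python
def percentile_within_group(rank, group_size):
    """Convert a within-group rank (1 = best) to a percentile (100 = best).

    Ranking each Carnegie class separately, then converting to
    percentiles, puts a rank-5 community college and a rank-50
    research university on a common 0-100 scale despite the groups
    having different sizes and data elements.
    """
    return 100.0 * (group_size - rank) / (group_size - 1)

# Best of 120 community colleges vs. best of 800 four-year schools:
print(percentile_within_group(1, 120))   # 100.0
print(percentile_within_group(1, 800))   # 100.0
# The last-place institution in either group lands at 0.0.
print(percentile_within_group(120, 120)) # 0.0
```

Because the crosswalk only compares positions within a sector, a community college is never penalized for reporting a graduation-rate variable that is defined differently than a four-year institution’s.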

Slide20

Proof of Concept Draft – College Scorecard Data

Outcome Metrics

Include 150% IPEDS grad rate

A ‘SAM-like’ total success rate as judged from College Scorecard data

Wage Data from College Scorecard ($25k threshold percentages after 10 years)

Cost Metrics

Group 4 Net Price

Group 4 Net Price for $0-$30k

Incoming Characteristics

ACT/SAT Scores (midpoints calculated by College Scorecard)

Percentage of Pell Students

Admissions Rate

Slide21

Proof of Concept

Try to remember that this data is seriously limited, and it really is just a proof of concept.

It contains only a fraction of the amount of data I think a finalized version should include.

Avoid the temptation to compare your institution with others!

With that said, let’s look at this new comparison tool…