Resource Database Assembly: Resource Database Quality

Presentation Transcript

Slide1

Resource Database Assembly: Resource Database Quality Recommendations & Observations

Part One

Slide2

Today’s Presenters

Steve Eastwood

2-1-1 Arizona, Community Information and Referral Services, Phoenix, Arizona

Dave Erlandson

United Way 211/Ceridian, Minneapolis, Minnesota

Polly McDaniel

Institute for Human Services, 2-1-1 HELPLINE, Bath, New York

Slide3

How We Got Here

2013 AIRS Conference sessions in Portland

Common thread emerged in discussions about best practices, potential metrics, staffing models, etc.

Opened the group to volunteers via the AIRS Networker Open Forum in June, 2013

Put together a new survey for resource work (the last one was in 2008)

Slide4

How We Got Here

2014 AIRS Conference sessions in Atlanta

Discussions of the group this past year + Feedback today (Atlanta conference) = Recommendations for:

Staffing

Metrics

Database update percentage requirements

Published white paper supported by AIRS

Compilation of the field's recommendations and resulting Database Quality Measures

Slide5

Objectives

Database Quality Metrics

Staff Performance Metrics

Program Metrics

Slide6

Database Quality Metrics

Accuracy

Completeness

Consistency

Timeliness

Slide7

Database Quality Metrics: Accuracy

According to the Basic Principles of I&R (also known as the I&R Bill of Rights) and AIRS Standard 10, an I&R service maintains accurate information. Accuracy is measured by reviewing the original request submitted against the entry into the resource database.

We recommend that resource staff maintain a 95% accuracy rating for the entries they have updated. A resource department should have in place a procedure to measure this metric at least annually.
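As a minimal sketch (assuming each audited entry is simply marked as matching or not matching the original update request), an annual accuracy spot-check could be scored like this:

```python
# Hypothetical accuracy spot-check: each audited record is marked True if the
# database entry matches the original update request, False otherwise.
def accuracy_rating(audit_results):
    """Return the accuracy percentage for a list of True/False audit results."""
    if not audit_results:
        return 0.0
    return 100.0 * sum(audit_results) / len(audit_results)

audited = [True, True, False, True, True]   # example spot-check of 5 records
rating = accuracy_rating(audited)
print(f"Accuracy: {rating:.1f}%  Meets 95% target: {rating >= 95.0}")
```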

Slide8

Database Quality Metrics: Completeness

AIRS Standard 8 identifies the data elements that are required and recommended for each organizational record within a resource database. Completeness is measured by checking every required field (for which data exists) in each record updated annually.

We recommend that resource staff maintain a 95% completeness rating for the entries they have updated. A resource department should have in place a procedure to measure this metric at least annually.
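A minimal sketch of how a completeness rating could be calculated for the records updated in a year, assuming a fixed list of required fields per record (the field names below are illustrative, not the actual elements from AIRS Standard 8):

```python
# Illustrative required fields only; the actual required elements come from AIRS Standard 8.
REQUIRED_FIELDS = ["agency_name", "address", "phone", "hours", "eligibility", "description"]

def completeness_rating(records):
    """Percentage of required fields that are populated across the updated records."""
    filled = total = 0
    for record in records:
        for field in REQUIRED_FIELDS:
            total += 1
            if record.get(field):          # counts a field as complete if non-empty
                filled += 1
    return 100.0 * filled / total if total else 0.0

updated = [
    {"agency_name": "Anytown Food Bank", "address": "1 Main St", "phone": "555-0100",
     "hours": "M-F 9-5", "eligibility": "", "description": "Provides food boxes."},
]
print(f"Completeness: {completeness_rating(updated):.1f}%")
```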

Slide9

Database Quality Metrics: Consistency

Create a database style guide and apply it consistently

Develop standard description narratives such as

"Provides food boxes for individuals and families in need."

"Offers one-time electric bill payment assistance."

Review and/or audit records for consistency of style and indexing

Set standards for measuring consistency

Slide10

Database Quality Metrics: Timeliness

Set standards for entering changes or new information within a certain time frame

Average time allowed for adding new info should be at least twice the average time allowed for updating an existing record, to allow for the research and data entry process

Review timeliness through documentation

We recommend maintaining a 95% timeliness rating
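A minimal sketch of checking a timeliness standard, assuming a 10-day window for updates and, per the guideline above, a 20-day window for new records (the specific windows are illustrative, not AIRS requirements):

```python
from datetime import date

# Example time-frame standards (illustrative only): new records get twice the
# window of updates, per the guideline above.
UPDATE_WINDOW_DAYS = 10
NEW_RECORD_WINDOW_DAYS = 2 * UPDATE_WINDOW_DAYS

def meets_timeliness(received, entered, is_new_record):
    """True if the change was entered within the allowed window."""
    window = NEW_RECORD_WINDOW_DAYS if is_new_record else UPDATE_WINDOW_DAYS
    return (entered - received).days <= window

changes = [
    (date(2014, 5, 1), date(2014, 5, 8), False),   # update entered in 7 days
    (date(2014, 5, 1), date(2014, 5, 26), True),   # new record entered in 25 days
]
on_time = sum(meets_timeliness(r, e, n) for r, e, n in changes)
print(f"Timeliness: {100.0 * on_time / len(changes):.1f}%")
```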

Slide11

Database Quality Metrics: Comments

What have you done?

What is working, what isn’t?

Other questions?

Slide12

Staff Performance Metrics

Records per FTE

Time to Reply

Time to Enter Updates

Slide13

Staff Performance Metrics: Records Per FTE

This metric supports AIRS Standard 12

The number often cited was 650-800; however, this was unsatisfactory since it was seemingly taken from thin air

Last year the suggested number was 500, which was based on a limited set of complexity data

Realistic but challenging expectations are key

Keep in mind outside factors: time off, those pesky updates that go beyond any reasonable time expectation, skill of staff, etc.

CRS Task Analysis

Slide14

Staff Performance Metrics: Records Per FTE

Complexity plays an important role for this particular metric; however, aspects of complexity may need some work (we'll talk about that in session 2)

Those issues aside, we may be nearing a point where we can measure this, with complexity taken into account, and come up with a national average

We’ll need to gather some data

Average database records (total)

Average database complexity (per complexity category)

Average time to complete an annual update (per category)
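A rough sketch of how those averages could combine into a records-per-FTE estimate; the category names, hour figures, and database mix below are placeholder assumptions, not survey results:

```python
# Rough, illustrative estimate of records per FTE. The category names, hour
# figures, and database mix below are assumptions for the example, not survey data.
ANNUAL_FTE_HOURS = 1950  # one full-time year, as used later in this presentation

avg_hours_per_update = {"simple": 2.5, "medium": 8.0, "complex": 30.0}
database_mix = {"simple": 0.60, "medium": 0.30, "complex": 0.10}  # share of records

# Weighted average hours for one annual update across the whole database
weighted_hours = sum(avg_hours_per_update[c] * database_mix[c] for c in database_mix)
records_per_fte = ANNUAL_FTE_HOURS / weighted_hours
print(f"Weighted hours per record: {weighted_hours:.1f}")
print(f"Estimated records per FTE: {records_per_fte:.0f}")
```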

Slide15

Staff Performance Metrics: Time to Reply

Set standards for staff to acknowledge requests for database updates or additions

This includes responding to providers, the general public, I&R staff, etc.

Should meet standard at least 90% of the time

Slide16

Staff Performance Metrics: Time to Enter Updates

Set standards for how quickly updates and new information should be entered once received

Should meet standard at least 90% of the time

Slide17

Staff Performance Metrics: Comments

What have you done?

What is working, what isn’t?

Other questions?

Slide18

Putting Metrics to Work

Polly –

Data Quality Procedure, which includes establishing quarterly monitoring

Performance Development Review Policy

Dave –

Created a scoring sheet for monthly reviews

Slide19

Putting Metrics to Work: Comments

What have you done?

What is working, what isn’t?

Other questions?

Slide20

Hope to see you back for the second session

Slide21

Resource Database Assembly: Resource Database Quality Recommendations & Observations

Part Two

Slide22

Today’s Presenters

Steve Eastwood

2-1-1 Arizona, Community Information and Referral Services, Phoenix, Arizona

Dave Erlandson

United Way 211/Ceridian, Minneapolis, Minnesota

Polly McDaniel

Institute for Human Services, 2-1-1 HELPLINE, Bath, New York

Slide23

Objectives

Program Metrics

Policies and Procedures

Inclusion/Exclusion Issues

Defining a Record

Nature of a Resource Database

Resource Department Customer Service Survey

Annual Completion Rate

Record Complexity

Slide24

Program Metrics: Policies and Procedures

Taxonomy Usage Policy (AIRS Standard 9, Quality Indicator 2; AIRS Standard 10, Quality Indicator 11)

The Taxonomy Usage Policy should also discuss Target Term usage or a separate Target Usage Policy should exist (AIRS Standard 10, Quality Indicator 11)

Database Maintenance Procedure (AIRS Standards 10 and 12)

Style Guide/Format Policy (AIRS Standard 10, Quality Indicator 4)

Inclusion/Exclusion Policy (AIRS Standard 7)

Slide25

Program Metrics: Inclusion/Exclusion Issues

Based on Standard 7, but goes beyond it

Sets the amount of maintenance work that an I&R must accomplish year to year

What should go into a review?

Come to the Inclusion/Exclusion presentation "The Most Important Document You'll Create" if the following gets you excited:

Inclusion/Exclusion

Brief literature reviews

A discussion on the nature of information in an I&R setting

Solid practices to implement in your next policy review

Slide26

Program Metrics: Defining a Record

All the data elements that define an organization and its services, programs, and the locations at which the services are delivered

Slide27

Slide28

Program Metrics: Nature of a Resource Database

Are there complex records in the D-List?

If so, can they be re-organized into a simpler structure?

Do A-List agencies receive priority for establishing personal relationships with their agency contacts?

Slide29

Program Metrics: Annual Update Completion Rate

100% annual completion is our Holy Grail

85-95% annual completion is more realistic due to:

Staffing (and funding) issues

Overlap of update cycles

Level of provider cooperation

Slide30

Program Metrics: Resource Department Customer Satisfaction Survey

How satisfied are you with Anytown I&R's listing of your organization's information? (Excellent – Good – Fair – Poor)

If you chose Poor or Fair, what would you like to change about the way your information is listed?

How satisfied are you with the update process?

Are you aware of our online database?

Do either you or your staff use it? (Yes occasionally – Yes often – No)

How would you rate your experience using the online database?

If you have called (our database department), how would you rate the service you received?

Slide31

Program Metrics: Resource Department Customer Satisfaction Survey

We recommend that those agencies that completed an update within the past year be requested to complete an annual update customer service survey. A satisfaction rating of at least 90% should be achieved.

Slide32

Program Metrics: Database Record Complexity

Sue Boes presented on Record Complexity last year

Not all records are created equal

There's a big difference between a one-service agency and a ten-service agency; the I&R world needed a better way of recognizing this difference

The complexity formula itself is a very solid way of measuring the difference between small, medium, large, and very large records in your database, but it may need some tweaks to work well for you

Slide33

Program Metrics: Record Complexity: Basic Method

Assign points to database elements

Consider which records (agency/site/service) hold the most critical elements and which elements are a breeze

Service vs. Service Groups? Questions of consistency

Develop a scale

The scale offered can handle everything from 1 to >41 points

Should the scale be modified to handle "Very Complex" records of >81 points?

Determine average work hours

This is the toughest component to narrow down

Gatto & Kelly note a time tracking sheet is essential
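A purely illustrative sketch of the basic method; the element point values and tier breakpoints below are assumptions for the example, not the presenters' actual scale:

```python
# Illustrative point values per database element; a real implementation would use
# the weights your resource department agrees on.
ELEMENT_POINTS = {"site": 2, "service": 5, "service_group": 8, "taxonomy_term": 1}

# Illustrative tier breakpoints, loosely following a "1 to >41" style scale.
TIERS = [(1, 10, "Simple"), (11, 25, "Medium"), (26, 41, "Large"), (42, 81, "Complex")]

def complexity_score(record):
    """Total the weighted points for the elements attached to one agency record."""
    return sum(ELEMENT_POINTS[element] * count for element, count in record.items())

def complexity_tier(score):
    for low, high, label in TIERS:
        if low <= score <= high:
            return label
    return "Very Complex"  # >81 points, if the scale is extended

agency = {"site": 3, "service": 6, "service_group": 1, "taxonomy_term": 12}
score = complexity_score(agency)
print(score, complexity_tier(score))   # 56 -> "Complex" with these example weights
```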

Slide34

Program Metrics: A bit more on tracking time

Track in a similar way that lawyers would track billable hours: when you’re actively working on a record

Time is listed in tenths of an hour (6-minute increments)

Be liberal when rounding up (if a task took you 14 minutes, mark it as .3)

Create a good sample size to base your average on (this is no small task)
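For illustration, rounding elapsed minutes up to the next tenth of an hour could look like this (the 14-minutes-becomes-0.3 example above serves as the check):

```python
import math

def tenths_of_hour(minutes):
    """Round elapsed minutes up to the next tenth of an hour (6-minute increment)."""
    return math.ceil(minutes / 6) / 10

print(tenths_of_hour(14))  # 0.3, matching the example above
print(tenths_of_hour(6))   # 0.1
print(tenths_of_hour(61))  # 1.1
```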

Slide35

Program Metrics: Basic Method 2: Electric Boogaloo

Consider Variables:

These are some of the things we considered above, but there are also some intangibles that we need to think about as well (staff skill, agency cooperation/rapport with your I&R, local standards, AIRS best practices, etc.)

Create the formula:

More of an implementation step if everything else above is in play

Total the weighted score for each record, then multiply by the average time

Review possible outcomes:

Keep in mind (among other things) that an increase in weight will tip your scale
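A minimal sketch of the "total the weighted score, multiply by average time" step, reusing a complexity score like the one sketched earlier and an assumed average of hours per point:

```python
# Illustrative only: estimated annual maintenance hours for a record, computed as
# its total weighted complexity score multiplied by an assumed average time per point.
AVG_HOURS_PER_POINT = 0.5   # assumption for the example, derived from time tracking

def estimated_hours(weighted_score):
    return weighted_score * AVG_HOURS_PER_POINT

records = {"Anytown Food Bank": 12, "County Human Services": 56}  # score per record
for name, score in records.items():
    print(f"{name}: {estimated_hours(score):.1f} hours per annual update")
```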

Slide36

Program Metrics: Outcome Top Tips: Management Tools

Equitable work assignments amongst resource staff

With current time projections, 1,950 hours of work (one FTE year) can cover 65 complex agencies or 780 simple agencies

Jumping back to Part One for a moment, this is why it's difficult to pin down a Records per FTE number

Time and cost projections for new initiatives

If United Way would like you to add a service to each of your school districts as a new initiative to support its education impact area, you might use complexity to see if records will be bumped to the next tier of complexity

Keep in mind, this wouldn’t necessarily be a one time cost; maintenance is recurring!Slide37
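A quick arithmetic check of the figures above: 1,950 hours / 65 complex agencies ≈ 30 hours per complex record, and 1,950 / 780 = 2.5 hours per simple record. A small sketch that turns this into a caseload estimate (the 30/400 mix in the example is arbitrary):

```python
# Derived from the slide's figures: 1,950 FTE hours = 65 complex or 780 simple agencies.
FTE_YEAR_HOURS = 1950
HOURS_PER_COMPLEX = FTE_YEAR_HOURS / 65    # 30.0 hours per complex agency
HOURS_PER_SIMPLE = FTE_YEAR_HOURS / 780    # 2.5 hours per simple agency

def assignment_hours(n_complex, n_simple):
    """Total annual-update hours implied by a mix of complex and simple agencies."""
    return n_complex * HOURS_PER_COMPLEX + n_simple * HOURS_PER_SIMPLE

# Example: does a mixed caseload of 30 complex and 400 simple agencies fit one FTE?
load = assignment_hours(30, 400)
print(f"{load:.0f} of {FTE_YEAR_HOURS} hours ({load / FTE_YEAR_HOURS:.0%} of one FTE)")
```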

Slide37

Program Metrics: Outcome Top Tips: Individual Tools

Equitable work amongst your personal update cycle

162.5 hours of work (one FTE month) can cover ~6 complex agencies or 65 simple agencies

So balancing out each complexity tier as a percentage each month may help keep a monthly workload relatively even

Use this as a tool to help guide update decisions

Are these three services better as one service group or should they be split up?

Are there agencies with repeated services? Agencies with coalition agreements are a big issue

Slide38

Program Metrics: Home Brewed Complexity

Many I&R software packages actually implemented a complexity score

Many others did not, so implementing a complexity program can be difficult

Slide39

Putting Metrics to Work

Polly –

Allowed my executive director to better see the capacity and time involved in resource work

Switched to dedicated resource time vs contact center responsibilities rather than merging it all together…and “getting the resource work done in between calls”

Steve –

Complexity scoring showed us that some records were over-complicated. We scaled some back to help simplify database maintenance

Standards also gave us a way to measure the effectiveness of our training methods and our new trainee, and gave us ideas for goals for resource staff annual reviews

Dave –

Working on redistribution of work load amongst resource specialists and simplifying more complex records

Cross-trained call staff to help

While using complexity has made an impact, we feel the scoring needs to be adjusted

Slide40

Putting Metrics to Work: Action Items

Do we, collectively, need to refine any of these metrics?

Is there more work to be done?

Complexity Score – refine variables, time?

FTE’s?

Other issues?Slide41

THANK YOU!!

Steve Eastwood, seastwood@cir.org

Dave Erlandson, david.erlandson@Ceridian.com

Polly McDaniel, mcdanielp@ihsnet.org