Outcome Measurement Program Case Studies

Appendix C:
Study Protocol

Case Study Purpose

The RTC/OM was created with federal funds to assist in the development of effective HCBS outcome measurement tools, and this case study makes an important contribution to that goal. Three case studies were undertaken to examine how commonly used HCBS outcome measurement tools are implemented and to explore the factors that influence their implementation. Through cross-case analysis, the research team sought to identify common measures and best practices in implementation to inform stakeholders on the effective measurement of HCBS.

Process

With input from the RTC/OM National Advisory Group (NAG), the research team identified several nationally recognized HCBS outcome measurement tools to study, selecting tools that are frequently used by states and organizations that operate HCBS programs.

In partnership with the organizations that offer the measurement tools, the research team planned and scheduled study activities including: identification of critical program components; collection of materials that describe each tool and its implementation procedures; visits to observe aspects of tool implementation; and interviews with key informants, including the tool developers and site leaders, tool users (service organizations and government agencies), advocates for people receiving HCBS, and people receiving HCBS.

Methods

Research Questions (note that research questions may change in response to project findings):

  1. What are the strengths and challenges of various outcome measurement programs?
  2. How do these strengths and challenges impact measure administration fidelity?
  3. What methodological components need to be in place to ensure measure administration fidelity in the implementation of HCBS outcome measures?
  4. To what extent have known factors that are important to fidelity been attended to in the programs reviewed (e.g., interviewer training, criterion testing, protection against/identification of response bias)?
  5. What are the similarities and differences of implementing various outcome measurement programs?
  6. What are the likely implications of these differences for the validity and reliability of the data gathered?
  7. What factors most facilitate or detract from effective implementation of programs that measure community living and participation outcomes?

Data Collection

Identification of critical program components in partnership with tool developers:

  • Review of written materials, including: procedure manuals; site leader selection process and training materials; marketing and end-user materials; descriptions of technology used to support implementation and measurement; descriptions of technical assistance provided to measurement sites; descriptions of methods for documenting measurement data; descriptions of methods for analyzing, reporting, and providing feedback on measurement data; and any current technical and research reports focused on the measurement tool.
  • In-depth interviews with key informants;
  • On-site observation;
  • Review of draft case study findings with the measurement organization; and,
  • Development of a written case study report.

CASE STUDY PLAN

Estimated activity time is shown in hours per organization (RT = Research Team; PP = Program Partner; "-" = not applicable).

Prior to Site Visits

  • Program Partner (PP) reviews case study goals, methods, and tasks and confers with Research Team (RT) on any study-related questions and schedules. PP assigns a primary contact person to interface with RT. (RT: 6, PP: 3)
  • List the critical components of the measurement program and confer with RT staff to finalize. (RT: 4, PP: 4)
  • Provide RT with links to program procedure manuals, measurement tools, training curricula, surveyor/site leader job descriptions, sample reports, and any reports that aggregate HCBS outcome data for the state. (RT: -, PP: 4)
  • RT reviews the state website for HCBS information and HCBS outcomes measurement/review activities. (RT: 3, PP: -)
  • RT reviews secondary data sources and related research to obtain additional information about the state’s HCBS. (RT: 4, PP: -)
  • Compare state goals and plans for community-based support with HCBS programs and intended outcomes. (RT: 2, PP: -)
  • Examine any supplemental measures or indicators that are used in conjunction with the primary HCBS measurement instrument to provide a comprehensive picture of HCBS outcomes. (RT: 3, PP: -)
  • PP provides contact information for key informants, including state and regional HCBS staff, advocates and self-advocates, and recipients of HCBS, and recommends dates and locations for meetings. (RT: -, PP: 4)
  • PP identifies training dates and other dates when measurement activities can be observed and stakeholders can be interviewed. PP liaison and RT develop a schedule. (RT: -, PP: 2)
  • RT contacts key informants and schedules meetings for group and individual interviews as needed. (RT: -, PP: 8)

During Site Visits

  • RT observes training and interviews trainers as needed.
  • RT completes key informant interviews.

Post Site Visit

  • RT synthesizes case study in a draft report and shares it with PP.
  • PP reviews the draft and provides RT with feedback and any corrections needed.
  • RT provides the corrected draft and the final project report when all case studies are complete.

Stakeholder Group Interview Procedures:
  1. Post meeting guidelines
  2. Post agenda and time frame
  3. Post blank flip chart paper to record “challenges (barriers),” “strengths (facilitators)” and “key implementation components (recommendations)” to improve the process
  4. Work with another person who can assist with writing on a flip chart
  5. Leaders review notes afterward to assure consistent interpretation
  6. Enter flip chart notes into a digital file
  7. Code and analyze using the qualitative software package
Individual Interview Procedures:
  1. Describe the purpose of the interview and project
  2. Take individual notes on paper or electronically whichever is most comfortable
  3. Key interviews may be recorded for reference but will not be transcribed
  4. Enter notes into electronic format
  5. Notes are coded and analyzed using appropriate software
General Introductory Information for Both Individual Interviews and Group Interviews:
  1. Welcome and Introductions.
  2. Discuss the commitment to refrain from using identifying material in reports. Mention that the study is exempt from review procedures required by Institutional Review Boards, which often require consent procedures to safeguard privacy and human rights.
  3. Summarize agenda, guidelines and time frame and ask if anyone needs assistance with the meeting tasks.  If help is needed arrange for the assistance.
  4. Provide project overview and discuss how data will be recorded. 
  5. Answer any questions and proceed with protocol.
Protocol for Interviews with Partner Organization and State Staff:
  1. What led you to choose to use this program for HCBS outcome measurement in your organization or state? (RQ1)
  2. Why did you decide to adopt a personal outcome approach to assessing quality (e.g., consumer pressure, a sense that it is a standard of good practice, a need for outcome data to guide policy decisions)?
  3. How did you choose x as your preferred program (e.g., content, trust of the supporting organization, support for data management and summation, ability to participate in a community of practice)?
  4. What did this program offer to improve upon what was previously being done to measure quality?
  5. Have you used other measurement programs? (RQ1)
    1. Which ones?
    2. How do the others compare with this one in ease of implementation and quality of results?
  6. What do you see as the primary strengths in this program for gathering useful, accurate and comprehensive information about life outcomes of HCBS recipients? (RQ1)
  7. What do you see as the primary weaknesses (if any?) in this program that negatively affect the ability to gather accurate and comprehensive information about life outcomes? (RQ1)
  8. To what extent do you believe the xx outcome measurement tool/program is relevant to learning about the experiences of people (name the population served by the people with whom you are meeting)?
    1. IDD
    2. Seniors
    3. Behavioral Health/Psychiatric Dx
    4. Physical disabilities
    5. Other
  9. Is there anything about this program (its procedures, measures or summation) that can lead to false conclusions or inaccurate information about the experiences of the people you support and their service outcomes? (RQ1)
  10. Do you perceive any conflicts of interest that might bias information?
  11. Are there aspects of this program (its procedures, sampling if any, measures or summation) that really help you get a clear and accurate window into the experiences of the people you support and their service outcomes? (RQ1)
  12. If sampling is used, are the samples sufficient to represent subpopulations (by age, nature, and severity of disability), people living in different types of settings (family, own home, residences of various sizes/types), people in different geographical areas, and people receiving services from different providers?
  13. (In states where multiple quality programs exist) What are the similarities and differences of implementing various outcome measurement programs? (RQ1)
  14. What components most facilitate or detract from effective implementation of HCBS outcomes measurement programs? (RQ1)
  15. As you know, today we are discussing your (organization's or state's) use of the xx program to measure HCBS outcomes. This measurement program has the following major components (survey, interview, etc.; perhaps use a graphic display of the major components on an erasable surface that informants can manipulate to arrive at the correct list). Probe to be sure all components are listed. (RQ2)
  16. Looking across the major components we've identified in this program, can you rank them in order of their usefulness in obtaining the information you need to know about service outcomes? (RQ2)
  17. How well do you think that the instrument yields accurate information about people? Why or why not? (RQ2)
  18. What steps do you take or procedures do you follow to assure that each component and the entire tool are implemented correctly and that you're getting an accurate picture of outcomes? (Describe as best you can the components of the program as you understand them.) (RQ2)
  19. You've mentioned several strategies that help assure correct implementation. Which activity is most important in assuring proper administration? How do you make sure that this strategy is implemented as desired? (RQ2)
  20. Is there any other approach to learning about outcomes that you think would help you know more about the people you are serving? (RQ2)
  21. What methods are specified to track any recommendations for quality improvement as a result of the study? (RQ2)
    1. Are recommended actions tracked by all stakeholders?
    2. Are recommended actions acted upon in a timely manner?
  22. Who is responsible for the program’s administration, including training of data collectors, following administration protocol, making decisions about the reliability and validity of data, etc.?
  23. How are the program data currently being used? For example, are they used in annual summary reports, reports to the legislature, or communities of practice? Are there examples of how the data gathered have affected policy or program decisions?
  24. To what extent have various personal outcomes been integrated into the periodic program reviews of service recipients? If they have been, how are personal outcomes reported and analyzed?
Interview with Advocates:
  1. In your experience what are the strengths and weaknesses of the various quality assessment/quality review processes used in this state? What works well and not so well? Does one work better than another? In what way? (RQ1)
  2. What positive changes in support, if any, do you attribute to quality reviews? (RQ1)
  3. What are the reviews failing to identify, if anything? (RQ1)
  4. Do you think the measurement program(s) are implemented as intended by the developers? In an effective manner? In a way that accurately obtains the true experiences and opinions of individuals? Why/why not? (RQ1)
  5. How well suited is the measurement program to people with (name the population)? Why? (RQ1)
  6. In your view, what are the best methods/approaches that people can use to learn about the quality of support and the impact HCBS support has on people’s lives? (RQ2)
  7. What survey methods or questions do you feel are most helpful or productive when trying to assess the quality of supports? (RQ2)
  8. What methods do you feel are the least helpful or productive when trying to assess the value/ quality of supports? (RQ2)
  9. How would you change the methods of assessing quality to make it more useful, accurate or consistent? (RQ2)
  10. What aspects of the survey process or circumstances surrounding the survey present barriers to getting accurate, comprehensive and reliable information about quality outcomes? (RQ2)
  11. What would you change (if anything) to obtain the most important information about the quality of support? (RQ2)
  12. Do you believe that the xx program that looks at HCBS outcomes treats people with dignity and respect? (RQ2)
  13. What helps to assure that formal systems of quality review are correctly implemented? (RQ2)
  14. How are the data gathered on individual outcomes analyzed, summarized, publicized and used? 
    1. Are there systematic efforts to analyze findings and to make program and policy recommendations from findings? 
    2. Does the process of reviewing findings and making recommendations from them include advocates, service users and family members?
  15. What methods are specified to track any recommendations for quality improvement as a result of the study? (RQ2)
    1. Are recommended actions tracked by all stakeholders?
    2. Are recommended actions acted upon in a timely manner?
  16. As an advocate, you've likely seen various programs of measurement (service quality assessment) through the years and are familiar with the tools used presently. (RQ3)
    1. Which ones seem to work better than others in bringing about positive change?
    2. Why/why not?
  17. How are the measurement programs similar, how are they different? (RQ3)
  18. As a concerned person, how do you receive/obtain information about review results and the steps that the state/providers will take to respond to survey findings? (RQ3)
  19. Are results widely shared in each of the measurement programs? (RQ3)
  20. Do people with disabilities and their families participate fully in the quality process? (RQ3)
    1. What helps or hinders full participation in the process?