Outcome Measurement Program Case Studies
Appendix C:
Case Study Protocol
Case Study Purpose
The RTC/OM was created with federal funds to assist in the development of effective HCBS outcome measurement tools, and this series of case studies will make an important contribution to that goal. Several case studies were undertaken to examine how commonly used HCBS outcome measurement programs are implemented, and to explore factors that influence their implementation. Through cross-case study analysis, the research team will seek to identify standard measures and best practices in implementation to inform stakeholders on the effective measurement of HCBS.
Process
With input from the RTC/OM National Advisory Group (NAG), the research team identified several nationally recognized HCBS outcome measurement programs that are frequently used by states and organizations that operate HCBS.
In partnership with the organizations that implement these measurement programs, the research team planned and scheduled study activities, including identification of critical program components; collection of materials that describe the tool and its implementation procedures; visits to observe aspects of tool implementation; and interviews with key informants, including the tool developers and site leaders, tool users (service organizations and government agencies), advocates for people receiving HCBS, and people receiving HCBS.
Methods
Research Questions:
What are the strengths and challenges of various outcome measurement programs?
How do these strengths and challenges impact measure administration fidelity?
What methodological components need to be in place to ensure measure administration fidelity in the implementation of HCBS outcome measures?
To what extent have known factors that are important to fidelity been attended to in the programs reviewed (e.g., interviewer training, criterion testing, protection against/identification of response bias)?
What are the similarities and differences in implementing various outcome measurement programs?
What are the likely implications of these differences for the validity and reliability of the data gathered?
What factors most facilitate or detract from effective implementation of programs measuring community living and participation outcomes?
Data Collection
Data collection consisted of the following steps:
Identification of critical program components in partnership with tool developers;
Review of written materials, including procedure manuals; site leader selection and training materials; marketing and end-user materials; descriptions of the technology used to support implementation and measurement; descriptions of technical assistance provided to measurement sites; descriptions of methods for documenting, analyzing, and reporting measurement data; and any current technical and research reports focused on the measurement tool;
In-depth interviews with key informants;
On-site observations;
Review of draft case study findings with the measurement organization; and,
Development of a written case study report.
Stakeholder Group Interview Procedures:
- Post meeting guidelines
- Post agenda and time frame
- Post blank flip chart paper to record “challenges (barriers),” “strengths (facilitators),” and “key implementation components (recommendations)” to improve the process
- Work with another person who can assist with writing on a flip chart
- Leaders review notes afterward to assure consistent interpretation
- Enter flip chart notes into a digital file
- Code and analyze using a qualitative software package
Individual Interview Procedures:
- Describe the purpose of the interview and project
- Take own notes on paper or electronically, whichever is most comfortable
- Key interviews may be recorded for reference but will not be transcribed
- Enter notes into electronic format
- Notes are coded and analyzed using appropriate software
General Introductory Information for both individual interviews and group interviews:
Welcome and Introductions
Discuss the commitment to refrain from using identifying material in reports. Mention that the study is exempt from procedures required by Institutional Review Boards, which often require consent procedures to safeguard the privacy and human rights of participants.
Summarize agenda, guidelines, and time frame and ask if anyone needs assistance with the meeting tasks. If help is required, arrange for the support.
Provide a project overview and discuss how data will be recorded.
Answer any questions and proceed with the protocol.
Protocol for Interviews with Partner Organization and State Staff:
- What led you to choose to use this program for HCBS outcome measurement in your organization or state? (RQ1)
- Why did you decide to adopt a personal outcome approach to assessing quality (consumer pressure, sense that it is a standard of good practice, need to have outcome data to guide policy decisions, etc.)?
- How did you choose x as your preferred program (e.g., content, trust in the supporting organization, support for data management and summation, the ability to participate in a community of practice)?
- What did this program offer to improve upon what was previously being done to measure quality?
- Have you used other measurement programs? (RQ1)
- Which ones?
- How do the others compare with this one in ease of implementation and quality of results?
- What do you see as the primary strengths in this program for gathering useful, accurate, and comprehensive information about the life outcomes of HCBS recipients? (RQ1)
- What do you see as the primary weaknesses (if any) in this program that negatively affect the ability to gather accurate and comprehensive information about life outcomes? (RQ1)
- To what extent do you believe the xx outcome measurement tool/program is relevant to learning about the experiences of people (name the population served by the people with whom you are meeting)?
- IDD
- Seniors
- Behavioral Health/Psychiatric Dx
- Physical disabilities
- Other
- Is there anything about this program (its procedures, measures, or summation) that can lead to false conclusions or inaccurate information about the experiences of the people you support and their service outcomes? (RQ1)
- Do you perceive any conflicts of interest that might bias information?
- Are there aspects of this program (its procedures, sampling, if any, measures, or summation) that really help you get a clear and accurate window into the experiences of the people you support and their service outcomes? (RQ1)
- If sampling is used, are the samples sufficient to represent subpopulations (by age and by nature and severity of disability), people living in different types of settings (family home, own home, residences of various sizes and types), people in different geographic areas, and people receiving services from different providers?
(In states where multiple quality programs exist) What are the similarities and differences in implementing various outcome measurement programs? (RQ1)
What components most facilitate or detract from effective implementation of HCBS outcome measurement programs? (RQ1)
As you know, today we are discussing your (organization's or state's) use of the xx program to measure HCBS outcomes. This measurement program has the following major components (survey, interview, etc.; consider using a graphic display of the major components on an erasable surface that participants can manipulate to arrive at the correct list). Probe to be sure all components are listed. (RQ2)
Looking across the major components we've identified in this program, can you rank them in order of their usefulness in obtaining the information you need to know about service outcomes? (RQ2)
How well do you think that the instrument yields accurate information about people? Why or why not? (RQ2)
What steps do you take or procedures do you follow to assure that each component and the entire tool is implemented correctly and that you're getting an accurate picture of outcomes? (describe as best as you can the components of the program as you understand them) (RQ2)
You've mentioned several strategies that help assure correct implementation. Which activity is most important in ensuring proper administration? How do you make sure that this strategy is implemented as desired? (RQ2)
Is there any other approach to learning about outcomes that you think would help you know more about the people you are serving? (RQ2)
What methods are specified to track any recommendations for quality improvement as a result of the study? (RQ2)
- Are the recommended actions tracked by all stakeholders?
- Are the recommended actions acted upon promptly?
Who is responsible for the program’s administration, including training of data collectors, following administration protocol, making decisions about the reliability and validity of data, etc.?
How is the program data currently being used? For example, is it used in annual summary reports, reports to the legislature, in communities of practice, etc.? Are there examples of how the data gathered have affected policy or program decisions?
To what extent have various personal outcomes been integrated into the periodic program reviews of service recipients? If they have been, how are individual outcomes reported and analyzed?
Interview with Advocates:
- In your experience, what are the strengths and weaknesses of the various quality assessment/quality review processes used in this state? What works well and not so well? Does one work better than another? In what way? (RQ1)
- What positive changes in support, if any, do you attribute to quality reviews? (RQ1)
- What are the reviews failing to identify, if anything? (RQ1)
- Do you think the measurement program(s) are implemented as intended by the developers? In an effective manner? In a way that accurately obtains the actual experiences and opinions of individuals? Why/why not? (RQ1)
- How well suited is the measurement program to people with (name the population)? Why? (RQ1)
- In your view, what are the best methods/approaches that people can use to learn about the quality of support and the impact HCBS support has on people’s lives? (RQ2)
- What survey methods or questions do you feel are most helpful or productive when trying to assess the quality of supports? (RQ2)
- What methods do you think are the least useful or productive when trying to assess the value/quality of supports? (RQ2)
- How would you change the methods of assessing quality to make it more useful, accurate, or consistent? (RQ2)
- What aspects of the survey process or circumstances surrounding the survey present barriers to getting accurate, comprehensive, and reliable information about quality outcomes? (RQ2)
- What would you change (if anything) to obtain the most important information about the quality of support? (RQ2)
- Do you believe that the xx program that looks at HCBS outcomes treats people with dignity and respect? (RQ2)
- What helps to ensure that formal systems of quality review are correctly implemented? (RQ2)
- How are the data gathered on individual outcomes analyzed, summarized, publicized, and used?
- Are there systematic efforts to analyze findings and to make program and policy recommendations from findings?
- Does the process of reviewing findings and making recommendations from them include advocates, service users, and family members?
- What methods are specified to track any recommendations for quality improvement as a result of the study? (RQ2)
- Are the recommended actions tracked by all stakeholders?
- Are the recommended actions acted upon promptly?
- As an advocate, you’ve likely seen various measurement programs (service quality assessments) through the years and are familiar with the tools currently in use. (RQ3)
- Which ones seem to work better than others in bringing about positive change?
- Why/why not?
- How are the measurement programs similar, how are they different? (RQ3)
- As a concerned person, how do you receive/obtain information about review results and the steps that the state/providers will take to respond to survey findings? (RQ3)
- Are results widely shared in each of the measurement programs? (RQ3)
- Do people with disabilities and their families participate fully in the quality process? (RQ3)
- What helps or hinders full participation in the process?