Outcome Measurement Program Case Studies


Case Selection

Cases (measurement programs in HCBS) were selected by project staff in consultation with the Project Advisory Committee (PAC) based on key characteristics (Thomas, 2011). Cases were chosen to illustrate variety in:

  1. measure focus (e.g., organization, region, or state);
  2. methods used to obtain data (e.g., administrative dataset, in-person interview by contracted personnel, in-person interview by state or organization personnel, or involvement of individuals with disabilities in the interview process);
  3. type and comprehensiveness of training involved to prepare data collectors;
  4. sample size within the program;
  5. target population;
  6. integration of proxy or non-proxy procedures;
  7. scope and potential for scale-up of the outcome measurement program;
  8. fee-for-service versus managed care LTSS environment; and
  9. depth and alignment of the outcome measurement domain areas with NQF and the results of Phases 1 and 2 of the RTC/OM.

IM4Q was chosen because it is a statewide measurement program that links NCI state-level data with individual quality improvement needs through the Consideration process.  In addition, a unique aspect of IM4Q is that all data are collected by teams of independent interviewers that include a family member of someone with a disability, a person with a disability, or both.  The organizations that collect the data are local and do not provide services through Pennsylvania's DD system; they include a self-advocacy group, Mental Health Associations, local human service agencies, and colleges.

Instruments and Sources of Data

Multiple sources were used to collect data on IM4Q.  Sources of information included key informant interviews of stakeholders with knowledge of the design, administration, and implementation of IM4Q: 4 state agency administrators; 5 IM4Q area managers (2 from state public residential facilities and 3 from local human service agencies); 3 local program coordinators from three different organizations; 2 IM4Q staff at the Institute on Disabilities, Temple University; 1 supports coordinator; 1 family member; 1 monitor/self-advocate; and 1 professor from Chatham University (one of the local IM4Q programs).  The researchers also attended a meeting of the IM4Q steering/management committees and the annual IM4Q conference, where they participated in training sessions related to the implementation and use of IM4Q.

Data probes and questions were designed in an open-ended format to offer the opportunity to identify new implementation constructs and insights not yet described in the existing literature on HCBS outcome measurement implementation.

Other sources of data included:

  1. Examination of peer-reviewed literature concerned with the fidelity of implementation and implementation science as well as current research on quality of life measures and constructs, inclusive of recent publications and endorsements of tools by the NQF and CMS rules regarding HCBS settings and practices;
  2. Program documents, including marketing materials, protocols, and measurement reports, as well as other written materials about the program such as evaluations, surveys, and periodic updates of the program tool and procedures;
  3. Descriptions of policies, regulations, or other system changes resulting from OM program activity; and,
  4. Participation in the IM4Q Annual Statewide Training.

An interview guide and protocols were developed to guide data collection (see Appendix B).  Protocols varied based on the category of persons interviewed.  One standard protocol (Protocol A) was used to interview OM program staff, state administrators, and staff from organizations using the OM tool; slight variations were made in Protocol A to fit the context of each group.  A second protocol (Protocol B) was developed for other content experts in outcome measurement who were familiar with the selected OM and the state or region in which the OM program was studied.  RTC/OM research staff, national advisors, and staff from NIDILRR and the Administration for Community Living reviewed the protocols before implementation.

We defined implementation fidelity as “the extent to which the critical components of an intended program are present when that program is enacted” (Century et al., 2010).  Adherence to this definition required that the study team identify the critical or essential components of each of the HCBS OM programs.  We developed a query form that OM program senior staff and trainers were asked to complete to identify the components they perceived as essential or critical to the program's success.  These forms were then reviewed with program administrators to clarify decisions about whether each identified component was essential or non-essential.

Planning for Site Visits

The project staff planned the site visit with the OM program staff.  Staff at the Institute on Disabilities, Temple University assisted in providing relevant OM program materials and in identifying and scheduling interviews.  Interviewees included interviewers, local survey managers, Institute on Disabilities staff, and Office of Developmental Programs (ODP) staff, including the director.  Researchers from the RTC/OM attended the annual statewide training and conducted interviews with state ODP staff during one visit.  Five phone interviews were conducted with individuals who were unable to attend the annual meeting.