Outcome Measurement Program Case Studies

Methods

Case Selection

Cases were selected in consultation with the Project Advisory Committee (PAC) based upon “key” characteristics (Thomas, 2011). Characteristics considered in selection were chosen to yield a range of cases that vary in:

  1. measure focus (organization, region, or state);
  2. methods used to obtain data (administrative dataset, in-person interview by contracted personnel, in-person interview by state or organization personnel, use of individuals with disabilities in the interview process, and online, tablet, or smartphone methods to obtain outcome data);
  3. type and comprehensiveness of training used to prepare data collectors;
  4. sample size within the program;
  5. target population;
  6. integration of proxy or non-proxy procedures;
  7. scope and potential for scale-up of the outcome measurement program;
  8. fee-for-service versus managed care LTSS environment; and,
  9. depth of the outcome measurement domain areas and their alignment with NQF and with the results of Studies #1 and #2 within the RRTC/OM.

Instruments and Sources of Data

Multiple sources were used to collect data on the characteristics and procedures of the POMs® that are associated with implementation effectiveness, using CFIR constructs as a guide (see Appendix C for the study protocol).  While adhering to CFIR theory, the data probes and questions were also designed in an open-ended format to allow identification of new implementation constructs and insights that have not been described in the existing literature on HCBS outcome measurement implementation.

Sources of information included key informant interviews with stakeholders knowledgeable about the design, administration, and implementation of the POMs®, including administrators, trainers, providers of technical assistance, and surveyors.  We also interviewed stakeholders who use the program to make policy and program decisions based on OM results, including state and regional administrators and providers who implement HCBS programs, as well as advocates and self-advocates who are familiar with the program.

Other sources of data included:

  1. an examination of peer-reviewed literature on “fidelity of implementation” and “implementation science,” as well as current research on “quality of life” measures and constructs, including recent publications, endorsements of OM tools by the NQF, and CMS rules regarding HCBS settings and practices;
  2. OM program documents, including marketing materials, protocols, and measurement reports, as well as other written materials pertaining to the OM program, such as evaluations, surveys, and periodic updates of the program tool and procedures;
  3. existing data collected by the OM program;
  4. descriptions of policies, regulations, or other system changes resulting from OM program activity; and,
  5. observations of surveyor training and simulated implementation. The project schedule and CQL accreditation schedules did not offer an opportunity to observe interviews conducted by certified interviewers for the purpose of program accreditation. Instead, we observed interviews conducted with people receiving HCBS as a component of a four-day training process designed to immerse trainees in the POMs® measures, procedures, and philosophy.

An interview guide and several protocols were developed to guide data collection (see Appendix C).  Protocols varied based on the category of persons interviewed.  One standard protocol (Protocol A) was used to interview OM program staff, state administrators, and staff from organizations using the OM tool, with slight variations to fit the context of each group.  A second protocol (Protocol B) was developed for use with advocates and self-advocates, and a third was developed for use with advocates and other content experts in outcome measurement familiar with the selected OM and the state or region in which the OM program was studied.  RRTC/OM research staff, national advisors, and staff from NIDILRR and ACL reviewed the protocols before implementation.

We defined implementation fidelity as “the extent to which the critical components of an intended program are present when that program is enacted” (Century et al., 2010).  Adherence to this definition required that the study team identify the critical or essential components of each of the HCBS OM programs.  To accomplish this goal, we developed a query form that OM program senior staff and trainers were asked to complete to identify the components they perceived as essential or critical to the success of the program.  These forms were then reviewed with program administrators to clarify decisions about whether each identified component was essential.

Planning for Site Visits

The project staff held a series of planning meetings with the OM program staff.  These meetings were used to identify and request relevant OM program materials and to identify sites with scheduled implementation activities.  Once potential visit sites were identified, the OM program staff made initial requests to obtain permission for relevant organization staff to participate in the case study.  After initial approval was granted and site contacts were identified, the team worked with the site visit contact to set up the interview and observation schedule for the visit.