Outcome Measurement Program Case Studies
Alignment of National Core Indicators – In-Person Survey (NCI-IPS) Components with CFIR
Framework Domain: Intervention Characteristics
External vs. Internal Development: The first factor considered is the degree to which stakeholders “buy into” the tool. CFIR research indicates that programs that are centrally controlled, rather than developed by stakeholders as a “grassroots” solution to a problem or challenge, risk implementation failure. Externally developed tools like the NCI-IPS are more likely to succeed if there is leadership buy-in (for example, by state DD directors) and the value of the data for quality assurance and policy decisions is communicated to staff. Although NCI-IPS is an externally developed tool, two primary factors contribute to state buy-in. First, NCI-IPS was initially developed through a partnership of states seeking to measure quality, and state DD directors (through NASDDDS) continue to be actively involved in improving the instrument. Second, although participation is voluntary, states commit staff time and financial resources to implement the NCI-IPS program, indicating that leadership values the information it provides for monitoring system performance and shaping policy.
Buy-in is more difficult at provider, family, and individual levels. The NCI-IPS focuses on system performance and is not intended to address quality outcomes at the provider or individual level. This may make it difficult for individuals to see the benefit of participating in the NCI-IPS as the benefits are not direct. Providers can be an essential part of recruiting individuals and families to participate. Successful recruitment rests on effectively communicating that information learned from NCI-IPS can ultimately lead to system improvements that may eventually improve services and supports for respondents.
Evidence Strength and Quality: The foundational CFIR literature indicates that a tool’s success is strengthened when stakeholders believe in its credibility. Perceptions of a tool’s relevance and effectiveness may derive from a range of sources, such as the influence and views of colleagues, studies of reliability and validity, pilot results, and feedback from implementation teams.
NCI-IPS has undergone extensive testing since its inception to assure reliability and validity (see the Reflecting and Evaluating section below). State respondents noted that reliability and validity were among the strengths of using the NCI-IPS as a measurement tool.
Relative Advantage: The CFIR research evidence supporting this construct finds that there is stronger implementation when stakeholders view their selected intervention as providing an advantage over similar interventions.
States using the NCI-IPS value the ability to compare their state’s performance to others at both the regional and national levels, and state agency staff reported that comparisons to national data and to other states with similar systems were especially useful. The data portals and reports are also valued by states. State respondents noted that no other available program provides a comparably comprehensive overview of state system performance or the same capacity for monitoring trends.
Adaptability: In every intervention or program proposed for implementation, there will be essential components that must remain unchanged regardless of local culture or requirements. However, measurement programs that can be readily introduced to new environments, with easily accomplished customization of non-essential components, are more successful.
NCI-IPS allows states to add questions of their own to address important local policy issues. HSRI works with states on the development and implementation of these items. The ability to adapt the instrument to meet state needs is a strength of the NCI-IPS program.
Trialability: Research establishes a link between the ability to pilot test a tool and successful implementation. A trial period that can be reversed gives an organization the opportunity to adapt and retool implementation based on the trial experience. One of the benefits of the NCI-IPS, however, is standardized data collection that allows for comparison across states, meaning that fidelity to the process as designed is essential and there is limited room for local adaptation during a trial.
Complexity: There is an inverse relationship between the complexity of a proposed intervention and the likelihood of its successful implementation. The NCI-IPS uses a survey design built on closed-ended rather than open-ended questions, making the survey tool itself less complex. Further, NCI-IPS provides interviewers with thorough training as well as interviewer guides that include prompts and follow-up questions, reducing complexity for interviewers. This training covers interview procedures, disability etiquette, an in-depth look at interview items, and practice interviews. Additional training is available online for interviewers.
NCI-IPS’s complexity arises at the state level, in sampling frame design and participant recruitment. Unlike some measurement programs in which the entity implementing the measurement tool has a direct relationship with potential respondents, states implementing NCI-IPS often rely on providers and other entities to support recruitment. Because states differ in how they field the survey, recruitment practices differ as well. In general, states use outside entities to conduct interviews, including survey research firms such as Vital Research and Qlarent, university-based programs, and local entities (such as social service agencies). In the interviews we conducted with states using outside entities, all state agencies pulled a random sample of their participants and sent the contact information for potential participants to the entities responsible for conducting the IPS. The organizations conducting the survey are responsible for contacting and recruiting the sampled individuals. All of the interviewers reported that this was one of the most challenging parts of the process, as state database information is often dated, and it can be difficult to locate and reach participants.
Design Quality and Packaging: When components of a measurement tool or program are well designed and easily accessible to users, the intervention is more likely to be implemented successfully. Flawed or poorly designed components can cause user dissatisfaction and/or lead to incomplete or inaccurate results. Stakeholders noted that the training and supporting materials (training videos, manuals, and visual interview aids) are useful and provide sufficient detail for interviewers to conduct interviews in a way that assures reliable data collection. State respondents also noted that the state reports, including the user-friendly reports, were helpful for understanding and explaining their state’s system performance.
Cost: The implementation literature finds a negative association between the cost of an intervention and its implementation success. In addition to the cost of the tool and associated implementation costs, this part of the analytic framework also prompts consideration of “opportunity costs”: the items, projects, wages, or other financial obligations that cannot be pursued, completed, or prioritized due to NCI-IPS implementation. States report investing significant financial and other resources in implementing the NCI-IPS program. In addition to internal staff time to coordinate the program at the state level, states sometimes hire outside entities (such as Vital Research, Qlarent, or university-based programs) to conduct the interviews, which accounts for much of the cost. Some states, such as Pennsylvania, have funding approved by their legislature earmarked specifically for NCI. Others (such as Minnesota) build it into their budget allocations and do not feel that it interferes with their other initiatives.
Framework Domain: Outer Setting
Focus on Individual Needs and Resources: In addition to the various components and characteristics of an intervention, the implementation literature identifies significant associations between specific environmental factors and successful implementation. One construct in this domain is the degree of person-centeredness of an organization implementing a measurement tool. The implementation literature indicates that implementation success will increase if implementers are knowledgeable about the individual needs of people receiving support as well as factors that hinder the provision of support.
State-level staff are knowledgeable about the needs of individuals receiving supports and services at a macro level. Although many NCI-IPS interviewers have backgrounds relevant to supporting people with disabilities, this is not a requirement.
Due to the nature of the NCI-IPS program, it is difficult to identify and address needs at the provider or individual level. Respondents noted that this was one of the frustrations some had with the NCI-IPS. One state had formerly used a measurement tool that allowed individuals’ service problems to be addressed immediately. Respondents in other states believed that a tool allowing the identification and remediation of service problems at a provider or individual level would be helpful. NCI training and technical assistance do include developing protocols and guidance for interviewers when they identify areas of unmet need. This may include providing information for ombuds programs or case management entities.
Information Sharing and Shared Vision (Cosmopolitanism): Another factor external to the measurement tool that affects implementation is the extent and nature of organizational support for staff to expand their roles to include keeping up with research, pursuing external training, and participating in professional groups. The implementation literature indicates that organizations that support these kinds of efforts implement new practices faster. One of the critical factors supporting the utility of the NCI-IPS program, from the view of the state staff interviewed, is that it allows for comparisons across states, regions, and the nation. States can compare their performance to other states and learn about policies and practices that are effective in improving HCBS outcomes. For example, NASDDDS publishes information about state policy initiatives in a regular newsletter, holds semi-annual conferences, and holds webinars about NCI with state DD agency staff.
External Policy and Incentives: Motivation to implement an outcomes measurement tool is increased by policies or regulations that call for a focus on outcomes. This is a strength of the NCI-IPS program. The areas assessed in the NCI-IPS survey instrument align with identified HCBS outcomes and quality indicators. Further, states’ DD agency leadership has been involved in designing and refining the HCBS tool.
Within HCBS, earlier forms of monitoring and measurement emphasized examining service inputs and professional practice recommendations, with little attention to the actual circumstances of a person’s life. There has been no federal mandate for a required, standard form of outcomes measurement for the HCBS program; instead, each state was required to submit evidence of compliance with the commitments made in waiver agreements or state plans. The 2014 HCBS settings rule clarifies requirements for states to assure that they are achieving important outcomes in the areas of choice-making, self-determination, human rights, community access and inclusion, and person-centered strategies of support, among other requirements. The rule does not prescribe the method of outcomes measurement, but as states prepare their HCBS transition plans, the NCI-IPS’s use in 47 states, as well as the instrument’s coverage of the key outcomes, means that it is likely to remain a key component of assessing state HCBS system and policy outcomes.
Framework Domain: Inner Setting
Inner setting addresses organizational attributes including structure, networks and communication, culture, implementation climate, tension for change, compatibility with the intervention, incentives and rewards, goals and feedback, learning environment, readiness, available resources, and access to knowledge and information. This study was not designed to examine the capability of one or more specific organizations to implement the OM tool effectively. Instead, it looks at the characteristics of the tool that facilitate or hinder implementation across implementation sites.
Framework Domain: Characteristics of Individuals
The constructs identified in this domain include individual knowledge and beliefs about the intervention, individual stage of change (degree of progress toward skilled implementation), individual identification with the organization, as well as other personal attributes such as motivation and flexibility. As with the previous domain, this study was not focused on how the characteristics of specific opinion makers, leaders, or stakeholders mediate implementation.
The state staff interviewed for this case study report that their leadership places high importance on quality assurance and the use of data to make policy decisions, and, therefore, has invested in the NCI-IPS program. For example, Pennsylvania’s DD agency has recently focused on voting and on access to communication devices based on their state’s quality assurance program. The staff interviewed are invested in making the implementation of the program work and are interested in the improvement of NCI-IPS as a tool. Their belief is that better data gives them the information they need to make better decisions.
Framework Domain: Process
The CFIR developers identify the “process” domain as the most challenging domain in implementation science to evaluate, because the many theories of implementation span topics including total quality improvement and management, integrated support, complexity theory, and organizational learning (Damschroder et al., 2009). The CFIR framework focuses on four constructs that are commonly found, either explicitly or implicitly, across a range of theories and frameworks: 1) Planning, 2) Engaging, 3) Executing, and 4) Evaluating.
Planning: The level of advance planning for an intervention is positively associated with its success. NCI-IPS requires a significant level of advance planning: states implementing it must develop a sampling frame, hire and train interviewers, and plan recruitment so that enough participants are enrolled to meet the projected sample size. NCI-IPS does provide structure and assistance for states in the planning and implementation stages, including help with developing sampling frames.
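To make the sampling step concrete, the sketch below draws a simple random sample from a participant frame. It is purely illustrative: the ID format, frame size, and target sample size are hypothetical, and actual state sampling frames may stratify by region, residence type, or other factors.

```python
import random

def draw_sample(participant_ids, target_n, seed=2024):
    """Draw a simple random sample (without replacement) from a
    hypothetical list of service-recipient IDs. A fixed seed makes
    the draw reproducible for documentation purposes."""
    if target_n > len(participant_ids):
        raise ValueError("target sample size exceeds population size")
    rng = random.Random(seed)
    return rng.sample(participant_ids, target_n)

# Hypothetical frame of 5,000 IDs; states typically oversample
# to allow for non-response during recruitment.
frame = [f"ID{i:05d}" for i in range(5000)]
sample = draw_sample(frame, target_n=400)
print(len(sample))  # 400
```

In practice the target size would be inflated to account for the recruitment difficulties described above (dated contact information, unreachable participants), so that the completed interviews still meet the projected sample size.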
NCI staff provided California’s training in partnership with California’s State DD Council. Other states use private survey research firms (such as Qlarent or Vital Research, in Minnesota), local agencies (Pennsylvania), or university-based programs (Oklahoma) to conduct the interviews. These vendors are responsible for training and monitoring interviewers. Interviewers participate in training and have ongoing training available via video. Training, as discussed previously, is in-person, although this will change in 2020. It includes in-depth information about the instrument and its items, disability etiquette, interview techniques, and opportunities to practice interviewing.
Engaging: States are engaged in the NCI-IPS program. State staff interviewed value the information the NCI-IPS provides about their state and the ability to compare their state to others nationally and regionally. The data are used at the state level by multiple entities, including the DD agency (for example, in monitoring Olmstead plans) and state disability councils. The reports and other data analysis and reporting tools were also identified as valuable for seeing trends in outcomes and individual satisfaction with services and for making policy decisions about services and supports.
The extent to which NCI-IPS engages participation at a local level is more of a challenge. Respondents reported that engaging providers, families, and individuals in the NCI-IPS program is difficult and requires clear communication of its benefits. NCI does produce publicly available reports and user-friendly templates that states can use to share state data with stakeholders.
Executing: To determine the components essential to the fidelity of the NCI-IPS, we asked NCI-IPS leaders as well as trainer/facilitators and state NCI-IPS coordinators to identify the core elements of the NCI-IPS measurement program so that we could focus on these components in our examination of the quality of its implementation. To assist them in this process, we provided definitions and examples of essential components and non-essential components. Essential components were identified as:
- The tool measures constructs identified as necessary to HCBS quality.
- NCI-IPS's two-day training provides materials and hands-on activities. Ongoing training is available via video and technical assistance.
- Interviewer training and ongoing assistance ensure reliability.
- The ODESA online data management system is a critical component as it ensures standardized reporting of data across the states.
- Reports are provided at the state level for HCBS systems quality improvement activities. States also have access to their data for further analysis.
Observations and interviews conducted during an onsite visit validated the presence of the components on the list. Based on observations, interviews, and reviews of documents, reports, and other supporting materials, NCI-IPS appears to be designed with robust components that assure fidelity when they are all implemented as planned. These include the training provided to interviewers and the availability of technical assistance to address interviewer questions and needs while they are in the field.
Reflecting and Evaluating: The NCI-IPS has been tested and revised numerous times since it was developed in 1997. The original survey was pilot-tested and refined, and additional trials that included measures of inter-rater reliability and test-retest reliability were conducted in 1997, 1998, and 1999 (Smith & Ashbaugh, 2001). From 2008 through 2010, the instrument underwent revisions, and subsequent field-testing produced inter-rater reliability of .88 or higher across all trials. Feedback from field-tests was used to improve the fielding of questions and the training program. Interviewers are also asked to provide feedback on every interview regarding the respondent’s understanding of the questions and consistency of responses. NCI uses this information to clarify the intent of items and to inform interviewer training.
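As an illustration of how an inter-rater reliability coefficient like the one reported above can be computed, the sketch below implements Cohen’s kappa for two raters coding the same interviews. The choice of coefficient and the paired ratings are assumptions for illustration only; the published NCI-IPS trials do not specify their exact procedure here.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters coding the same items:
    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed
    agreement and p_e is agreement expected by chance."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    p_e = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Made-up paired codings of ten interview items by two raters
a = ["yes", "yes", "no", "yes", "no", "yes", "yes", "no", "yes", "yes"]
b = ["yes", "yes", "no", "yes", "yes", "yes", "yes", "no", "yes", "yes"]
print(round(cohens_kappa(a, b), 2))  # 0.74
```

Unlike raw percent agreement (0.90 in this example), kappa corrects for the agreement two raters would reach by chance, which is why it is a more conservative and widely reported reliability statistic.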
Several challenges were identified in states’ ability to reflect and evaluate based on the data collected. Respondents believed that a shorter interval between administration of the survey and access to the data would make it more useful.