RTC/OM Briefs
Brief 1: Involving Stakeholders to Address Challenges in HCBS Measure Development: Toward Person-Centered Measurement
Introduction
Stakeholders, particularly people with disabilities and their supporters, should be at the heart of measure development. To ensure that home and community-based services (HCBS) outcome measures are of high quality, the measure development process must include input from stakeholders and, most importantly, from the intended population with which the measures will be used. Furthermore, we contend that the unique challenges associated with measuring outcomes among members of diverse populations of HCBS recipients require strong stakeholder involvement throughout all stages of development. This process has been affirmed by NIDILRR, ACL, and the Centers for Medicare and Medicaid Services (CMS, 2019). Using a sound HCBS outcome measurement framework, putting all measures through multiple expert panel reviews, undertaking cognitive testing, and implementing both pilot studies and large-scale testing with the target audience (for greater detail on this process, please refer to RTC/OM HCBS Outcome Measurement Brief #3) are necessary, but not sufficient, strategies for quality measure development.
Stakeholders, as we define them, include any individual or group with a vested interest in the outcome under study. For HCBS, this includes persons with disabilities, their family members, service providers (including direct support workers, frontline supervisors, and program administrators), experts in the field of HCBS research and measurement, and both state- and national-level policymakers. All have different interests and perspectives on HCBS, which makes it worthwhile to obtain their respective feedback at all phases of measure development. The most important stakeholders, though, are the persons receiving services, because of the direct impact of HCBS on their daily lives. As people with diverse disabilities from all walks of life have been telling us for years, “nothing about us, without us.”
People with disabilities, their behaviors, thoughts, and attitudes/beliefs have been the objects of measurement for some time. Unfortunately, within disability-related fields, measurement has all too often been used in a manner that has not served to protect their rights or improve the supports they receive, but rather to limit them. Public schools have, for years, used the results of intelligence and achievement tests to deny subgroups of students with disabilities access to inclusive classroom environments. Our court system has accepted measures of adaptive behavior, functional skills, and other psychological tests as sufficient evidence to place people with disabilities under guardianship arrangements. Measurement has, all too often, been used as a gatekeeping mechanism to segregate people with disabilities or as a means for states to demonstrate compliance with federal regulations regarding HCBS, as opposed to aiding real efforts at quality improvement within the system and supporting a higher quality of life for people with disabilities.
As a result of this history, many people with disabilities are deeply suspicious of measurement. Frequently, it has benefited the system to a much greater extent than persons with disabilities themselves. As we move toward developing and implementing measures within HCBS that are person-centered in their orientation, the input of people with disabilities, their families, and other stakeholders is critical. We can no longer claim that measures developed by “experts” sitting around a table, exchanging ideas about what they feel is most important for people with disabilities, have content or social validity. Rather, people with disabilities themselves, and to a somewhat lesser extent other stakeholders, need to be involved in all aspects of the process so that what we are measuring is not only important for people with disabilities and their families but also important to them, and can be used to improve the quality and efficacy of the supports they receive.
One approach to decrease this suspicion and increase support for participation in HCBS measurement efforts is to involve key stakeholders in discussions and decisions about each phase of the measure development process. This includes: (1) the conceptual framework that will underlie the development of measures; (2) the selection of specific measures to be developed; (3) the initial evaluation of measures to ensure that they are interpreted in the manner intended; (4) the refinement of measures through pilot testing; and (5) the final approval of measures through field-testing.
Addressing the Challenges with Stakeholder Input
In the U.S., the HCBS system provides a complex array of services, within a variety of settings, to support the needs of the diverse population it serves. Given these characteristics, developing measures useful for all relevant populations and all settings is challenging. Addressing these challenges can only be accomplished through ongoing input from the stakeholders on whom measurement systems are likely to have an impact. This process must start with stakeholder input into the selection of a conceptual framework that will guide measure development.
Until 2011, there was no national blueprint of priorities or goals around which to develop and implement measures for HCBS quality improvement. In 2011, with significant public input from the National Priorities Partnership (NPP), the National Quality Forum (NQF) developed a framework to help address gaps in the measurement of home and community-based services. The NQF’s Conceptual Framework for HCBS Outcome Measurement, as outlined in its Final Report (NQF, 2016), was based on an environmental scan of the literature, an evaluation of current instruments used in the area, and the backgrounds and experiences of its developers. It is a comprehensive model of broad (domain) and more specific (subdomain) service and outcome areas deemed essential to measure in HCBS. Discussions of measurement, however, all too often remain at a high conceptual level, and this was the case with the NQF framework: its content and social validity were, until recently, never established with the persons it was most likely to impact.
The University of Minnesota’s Rehabilitation Research and Training Center on HCBS Outcome Measurement (RTC/OM) was tasked with developing individual HCBS outcome measures for five groups: people with age-related disabilities (ARD); intellectual or developmental disabilities (IDD); mental health challenges (MHC); physical disabilities (PD); and traumatic brain injury (TBI). A critical part of the measure development process was input from HCBS recipients as well as other stakeholder groups, including family members, service providers, and policymakers. RTC/OM staff, therefore, implemented a series of processes that allowed for stakeholder input at all phases of measure development. These approaches are not the only means through which to involve stakeholders in measure development. They do, however, represent examples of how individuals who are most likely to experience an impact when measures are employed can be involved in their development.
Participatory Planning and Decision Making Groups
The first step taken in this process was to establish the content and social validity of the NQF Conceptual Framework for HCBS Outcome Measurement with critical stakeholders. This was accomplished through a process referred to as Participatory Planning and Decision-Making (PPDM; Abery & Stancliffe, 2003; Abery & Anderson, 2015). The PPDM process included meeting with all stakeholder groups and providing them with an opportunity to evaluate the NQF framework, add to it, and stipulate which personal outcomes and service characteristics were most important to measure. In order to obtain a nationally representative sample, PPDM groups were conducted across the country with each stakeholder group (RTC/OM, 2016; see Figure 2). Overall, results from PPDM groups indicated a high degree of stakeholder support for the content of the framework. Stakeholders, however, strongly recommended a number of revisions. These included: (1) adding, within the broad community inclusion domain, a subdomain focused on access to and quality of transportation; (2) adding a stand-alone domain for employment; and (3) placing a greater focus on the self-determination of people with disabilities rather than on the degree of choice and control they experience. In addition to these recommended changes, all stakeholders provided importance weightings of personal outcomes and the characteristics of service delivery. These results were used in conjunction with a gap analysis and input from a national advisory group of HCBS stakeholders to finalize the selection of eight outcome domains and subdomains for measure development.
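To make the weighting step concrete, the sketch below shows one plausible way, not necessarily the procedure RTC/OM used, to combine 0-100 importance weightings from several stakeholder groups into a single ranked list of subdomains. The group names, subdomain labels, weights, and the simple averaging rule are all assumptions made for illustration.

```python
# Hypothetical 0-100 importance weightings from three stakeholder groups for a
# handful of subdomains. These values are illustrative only, not RTC/OM data.
weightings = {
    "Choice and Control: Self-direction": {"recipients": 93, "families": 88, "providers": 85},
    "Community Inclusion: Transportation": {"recipients": 90, "families": 86, "providers": 80},
    "Community Inclusion: Employment": {"recipients": 84, "families": 79, "providers": 82},
}

def rank_subdomains(weightings):
    """Rank subdomains by their unweighted mean importance across stakeholder groups."""
    averaged = {
        subdomain: sum(groups.values()) / len(groups)
        for subdomain, groups in weightings.items()
    }
    return sorted(averaged.items(), key=lambda item: item[1], reverse=True)

for subdomain, mean_weight in rank_subdomains(weightings):
    print(f"{mean_weight:5.1f}  {subdomain}")
```

In practice, such a ranking would be only one input alongside the gap analysis and national advisory group feedback described above.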
Technical Expert Panels
A second approach to involving stakeholders in measure development is to ensure they are adequately represented on the Technical Expert Panels (TEPs) established to review measure concepts after survey items are developed. Participation of people with disabilities, family members, and those associated with the provision of supports is critical at this juncture to ensure that the right questions are being asked in a manner to which respondents can relate. During this phase of measure development, RTC/OM staff recruited panel members who formed independent groups to review each measure concept being developed. Recruitment was based on content expertise and the goal of ensuring that a variety of stakeholders would take part in each review. TEP members individually rated each item composing a measure concept area on the basis of six criteria: the item’s relevance to the construct being measured; importance; understandability by an interviewer; understandability by the person (respondent); accuracy in measuring the concept; and appropriateness of response options (see Figure 3). TEPs provided critical input on the appropriateness of the items of which measure concepts were composed. This input was essential to identify major problems with items that did not adequately measure the construct in question or were likely to be confusing to respondents.
Figure 3. Technical Expert Panel Output

| Measure Concept | Feasible M | Feasible SE | Usable M | Usable SE | Important M | Important SE | Overall M | Overall SE | Original M | Original SE | PPDM Domain | PPDM Sub |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Choice and Control: Personal choices and goals | 4.2 | 0.2 | 4.6 | 0.2 | 4.8 | 0.1 | 4.5 | 0.1 | 4.4 | 0.2 | 0.3 | 91 |
| NEW: Community Inclusion - Transportation | 4.5 | 0.2 | 4.5 | 0.2 | 4.5 | 0.2 | 4.5 | 0.1 | 4.0 | 0.3 | | |
| Choice and Control: Choice of services and supports | 4.0 | 0.3 | 4.7 | 0.2 | 4.7 | 0.2 | 4.5 | 0.2 | 4.3 | 0.1 | 0.3 | 91 |
| Community Inclusion: Meaningful activity | 4.5 | 0.2 | 4.0 | 0.3 | 4.8 | 0.2 | 4.4 | 0.1 | 4.5 | 0.2 | -0.2 | 93 |
| Service Delivery and Effectiveness: Person's needs met and goals realized | 4.2 | 0.3 | 4.6 | 0.2 | 4.5 | 0.2 | 4.4 | 0.2 | 4.0 | 0.3 | 0.3 | 93 |
| Choice and Control: Self-direction | 4.0 | 0.3 | 4.7 | 0.2 | 4.5 | 0.2 | 4.4 | 0.2 | 4.1 | 0.2 | 0.3 | 91 |
| Community Inclusion: Social connectedness and relationships | 4.1 | 0.3 | 4.4 | 0.2 | 4.6 | 0.3 | 4.4 | 0.2 | 4.1 | 0.3 | -0.2 | 93 |
| Human and Legal Rights: Freedom from abuse and neglect | 4.0 | 0.3 | 4.4 | 0.2 | 4.6 | 0.2 | 4.3 | 0.1 | 4.1 | 0.3 | 0.3 | 96 |
| NEW: Community Inclusion - Employment | 4.6 | 0.2 | 4.5 | 0.2 | 3.9 | 0.4 | 4.3 | 0.2 | 4.3 | 0.2 | | |
| NEW: Workforce Staff Turnover | 4.3 | 0.2 | 4.2 | 0.2 | 4.1 | 0.3 | 4.2 | 0.2 | 3.7 | 0.4 | | |

Figure 3 provides the mean (M) and standard error (SE), on a 0-5 scale, from a second round of Technical Expert Panel ratings of the feasibility, usability, importance, and overall scores of a select sample of potential HCBS outcome subdomains considered for measure development based on the recommendations of Participatory Planning and Decision-Making (PPDM) groups with stakeholders. PPDM importance weightings, on a 0-100 point scale at the domain and subdomain level, are provided in the far right-hand columns.
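The “M” and “SE” values in Figure 3 are simple descriptive statistics of the individual panel members’ ratings. As a minimal sketch of the arithmetic involved, assuming the standard error is computed as the sample standard deviation divided by the square root of the number of raters (the RTC/OM analysis itself may have differed), the example below calculates the mean and standard error for one hypothetical set of ratings; the ratings and names are invented for illustration.

```python
import math

def mean_and_se(ratings):
    """Return the mean and standard error (sample SD / sqrt(n)) of a list of ratings."""
    n = len(ratings)
    mean = sum(ratings) / n
    # Sample variance with n - 1 in the denominator (Bessel's correction).
    variance = sum((r - mean) ** 2 for r in ratings) / (n - 1)
    return mean, math.sqrt(variance / n)

# Hypothetical 0-5 ratings from nine TEP members on the "Overall" criterion
# for a single measure concept.
overall_ratings = [5, 4, 5, 4, 4, 5, 4, 5, 4]
m, se = mean_and_se(overall_ratings)
print(f"Overall: M = {m:.1f}, SE = {se:.1f}")
```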
Cognitive Testing Measure Concepts with People with Disabilities
The terminology used to describe services or supports can vary based on disability group membership, the settings in which supports are received, and/or the services and supports themselves. The term “person-centered,” for example, can have positive, neutral, and sometimes even negative meanings when used with members of different disability groups. Thoroughly testing with stakeholders the language and terminology used in measure concepts, and even how the measure is implemented, is therefore essential to ensure that items are universally understood across HCBS recipients.
Cognitive testing (CT) is designed to obtain direct input from respondents to verify that their interpretation of items, and of the words of which they are composed, matches the developer’s intent (Ericsson & Simon, 1980; Willis et al., 1991). It is an essential step for improving item accessibility (Kramer & Schwartz, 2017) as well as contributing to the validity of measures (Castillo-Díaz & Padilla, 2013). This form of stakeholder involvement is especially important when measures are intended for use with diverse populations. RTC/OM staff used a cognitive testing strategy referred to as the “Think Aloud Method” to address the core cognitive components of item responding included in the Cognitive Aspects of Survey Methodology (CASM) model: comprehending the item, retrieving the information needed to answer it, making a judgment, and reporting a response (Tourangeau, 1984, 2018). This approach provided yet another way to involve people with disabilities in the measure development process. Members of each disability group for whom measures are targeted were paid a stipend to tell survey developers how they interpreted each item, how clear its key terms were, and how adequately the response options allowed them to convey their views (Beatty & Willis, 2007; Willis, 2005). This process allowed developers to modify items using input from individuals with disabilities to ensure that items were comprehended and responded to as intended, maximizing the accessibility of items for all groups with which they are intended to be used.
Conclusion
The extent to which stakeholders are involved in measure development should be a key criterion in the selection of HCBS outcome measures. Such involvement both improves the psychometric qualities of measures and respects the rights of people with disabilities to be the primary determiners of their own lives. Stakeholders can provide valuable information as to what is most important to assess, enhancing the person-centeredness of measures. Their involvement, in particular that of persons with disabilities, has the potential to improve reliability and validity by enhancing comprehension of items across diverse groups of HCBS recipients. Finally, including a broad range of stakeholders in the development and testing of measures helps ensure that the measures are relevant and meaningful to people with disabilities, their families, and providers, as well as to policymakers.
Recommendations
Recommendation 1
HCBS measure developers should include a broad range of stakeholders both to better define the domains and subdomains to be measured and to ensure that the constructs of focus are fully covered by the measure. This process will help ensure that the measures developed are valid and relevant to the people on whom they are most likely to have an impact.
Recommendation 2
HCBS measure developers should make efforts to involve stakeholders in all stages of measure development and testing. This is particularly true when person-centered measures are being developed that will be responded to by people with disabilities themselves.
Recommendation 3
When selecting HCBS measures, users need to examine if and how stakeholder input was incorporated in measure development, as well as the manner in which the measures have been used in the past and their basic psychometric properties (e.g., evidence of reliability and validity).
References
Abery, B. H., & Anderson, L. L. (2015). Developing a conceptual framework for effective care coordination for adults with physical disabilities.
Abery, B. H., & Stancliffe, R. H. (2003). A tripartite ecological theory of self-determination. In M. Wehmeyer, B. H. Abery, R. H. Stancliffe, & D. Mithaug (Eds.), Theory in self-determination: Foundations for educational practice. New York, NY: Thomas Publishing.
Beatty, P. C., & Willis, G. B. (2007). The Practice of Cognitive Interviewing. The Public Opinion Quarterly, 71(2), 287–311. https://doi.org/10.1093/poq/nfm006
Castillo-Diaz, M., & Padilla, J.-L. (2013). How Cognitive Interviewing can Provide Validity Evidence of the Response Processes to Scale Items. Social Indicators Research, 114(3), 963–975.
Centers for Medicare and Medicaid Services (CMS). (2019). Blueprint for the CMS Measures Management System, Version 15.0. Retrieved from https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/MMS/Downloads/Blueprint.pdf
Ericsson, K. A., & Simon, H. A. (1980). Verbal reports as data. Psychological Review, 87(3), 215–251. https://doi.org/10.1037/0033-295X.87.3.215
Kramer, J. M., & Schwartz, A. (2017). Reducing Barriers to Patient-Reported Outcome Measures for People With Cognitive Impairments. Archives of Physical Medicine and Rehabilitation, 98, 1705–1720. https://doi.org/10.1016/j.apmr.2017.03.011
National Quality Forum (NQF). (2016). Quality in Home and Community-Based Services to Support Community Living: Addressing Gaps in Performance Measurement. Retrieved from http://www.qualityforum.org/Publications/2016/09/Quality_in_Home_and_CommunityBased_Services_to_Support_Community_Living__Addressing_Gaps_in_Performance_Measurement.aspx
Rehabilitation Research and Training Center on Outcome Measurement. (RTC/OM). (2016). Stakeholder Input: Identifying Critical Domains and Subdomains of HCBS Outcomes. Retrieved from https://rtcom.umn.edu/phases/phase-1-stakeholder-input
Rehabilitation Research and Training Center on Outcome Measurement (RTC/OM). (2018). State HCBS Assessment Tools. Online database. Retrieved from https://rtcom.umn.edu/database/state-hcbs-assessment-tools
Tourangeau, R. (1984). Cognitive science and survey methods. In T. Jabine, M. L. Straf, & R. Tourangeau (Eds.), Cognitive Aspects of Survey Design: Building a Bridge Between Disciplines (pp. 73–100). Washington, DC: National Academy Press.
Tourangeau, R. (2018). The survey response process from a cognitive viewpoint. Quality Assurance in Education, 26(2), 169–181. https://doi.org/10.1108/QAE-06-2017-0034
Willis, G. B. (2005). Cognitive interviewing: a tool for improving questionnaire design. Sage Publications.
Willis, G. B., Royston, P., & Bercini, D. (1991). The use of verbal report methods in the development and testing of survey questionnaires. Applied Cognitive Psychology, 5(3), 251–267. https://doi.org/10.1002/acp.2350050307