2025 Survey of States:
Trends and Issues in Statewide Assessment of Students with Disabilities

Executive Summary

This report summarizes findings of the National Center on Educational Outcomes’ seventeenth survey of states. The survey has two primary purposes. First, it collects information about the participation and performance of students with disabilities in the statewide summative assessments that are part of each state’s comprehensive assessment system. Second, it reports on current assessment trends and emerging issues that states are experiencing. Forty of the 51 regular states (which, for this survey, include the District of Columbia) responded, as did five of the 10 unique states. Key findings for regular states include:

  • States primarily monitored decisions about accessibility features by reviewing Individualized Education Program (IEP) system records and conducting desk audits. They most commonly monitored the actual provision of accessibility features through scheduled visits to local education agencies (LEAs) or schools and through direct observations of test administration. More than half of states collected data to examine the validity of these features, but a sizable number reported no validity-related activities.
  • States focused on professional development and sharing data with LEAs to support appropriate participation in the alternate assessments based on alternate academic achievement standards (AA-AAAS).
  • States are facing challenges in their efforts to meet the federal limit on participation in alternate assessment. States are primarily concerned about the inappropriate identification of students for the assessment. States also reported a lack of appropriately trained staff and the difficulty of meeting the needs of the small numbers of students in this category.
  • States ensured appropriate assignment to alternate English language proficiency (Alt-ELP) assessments by revising guidelines and providing training. States also used quality assurance activities like reviewing sample IEPs.
  • The use of growth models for accountability varied by assessment type. Student growth percentiles were the most common choice for the general assessment. Most states did not use a growth model for the AA-AAAS, and the largest share of states did not use one for the Alt-ELP assessment.
  • States were most likely to disaggregate general assessment results by students' disability category. For all assessments, the most common purpose for disaggregating data was to examine trends.
  • States reported varied methods for scoring and reporting incomplete assessment records. For example, most states counted students who started but did not complete an assessment as participants, while students who did not participate at all generally were not counted.
  • Unique states had similar challenges to regular states, but also some key differences. For example, unique states most often faced staffing and capacity issues as their biggest challenge in meeting the federal limit on participation in alternate assessment.

The survey findings indicate that states are emphasizing training and monitoring as strategies to ensure students with disabilities are properly assessed. While there are significant differences in practices, states are actively working to create systems that align with federal requirements and support appropriate assessment decisions for students with disabilities.

Overview of the 2025 Survey of States

This report highlights the findings of the National Center on Educational Outcomes’ (NCEO’s) seventeenth survey of states. NCEO has conducted biennial surveys of states for over three decades to collect information about the participation and performance of students with disabilities in statewide summative assessments that are part of a comprehensive assessment system. With each survey, NCEO has published a report summarizing the findings. The report gives stakeholders an update on assessment trends and new issues facing states.

In 2025, the survey was sent to state directors of special education and state directors of assessment. Directors in each state coordinated to submit a single response, working with other state staff who were experts on policies for including students with disabilities in statewide assessments. Respondents submitted their responses using an online survey tool.

Topics addressed included: accessibility features, alternate assessments based on alternate academic achievement standards (AA-AAAS), English learners with disabilities, and accountability. The survey consisted of both multiple choice and open-ended items. Multiple choice items allowed for clear analysis of trends across states. Open-ended items allowed each state to highlight the aspects that are most relevant in their state-specific context. States were invited to share links to web-based content or documents in responses to some open-ended items.

Forty of the 51 regular states (78%) responded to the survey. For the purposes of this survey, the District of Columbia was included with the regular states. In addition, five of the 10 unique states (American Samoa, Bureau of Indian Education, Commonwealth of the Northern Mariana Islands, Department of Defense, Federated States of Micronesia, Guam, Marshall Islands, Palau, Puerto Rico, U.S. Virgin Islands) completed the survey.

Throughout the survey and this report, the term “accessibility features” is used to encompass all tiers of accessibility features. This includes universal features available for all students, and designated features available for all students if an adult has identified a need before the assessment. It also includes accommodations available only to students with disabilities or English learners, as identified in Individualized Education Program (IEP), 504, or English learner plans.

All data relate specifically to statewide summative assessments that are part of a comprehensive assessment system. Data in the following figures represent survey results related to state policies and practices for statewide standardized assessments in regular states only. Responses from unique states and entities are not included in the figures. Survey results from unique states are summarized separately in each section.

The figures presented below do not include the full text of each survey item. To review the complete survey and options presented to participants, see Appendix A.

Accessibility Features

Monitoring IEP Team Decision-Making

States were asked how they monitored IEP teams’ decision making about accessibility features on statewide assessments (see Figure 1). The most commonly reported methods were reviewing IEP system records (65%) and conducting desk audits (54%), followed by reviewing assessment process (click) data (38%) and conducting interviews with students, families, educators, or administrators (38%). Some regular states indicated “Other” methods of monitoring decision making (19%). A small percentage indicated that they did not monitor IEP teams’ decision making for assigning accessibility features on statewide assessments (11%).

Figure 1. Methods of Monitoring Decision Making Regarding Accessibility Features, Regular States

A horizontal bar chart showing how states monitor IEP teams' decision making about accessibility features on statewide assessments. The y-axis lists monitoring methods, and the x-axis shows percentages from 0% to 70%. The bars alternate between solid black and crosshatched patterns. "Review of IEP system records" displays the longest solid black bar at 65% of states, followed by "Conduct desk audits" with a crosshatched bar at 54%, "Review of assessment process (click) data" and "Interviews with students" each with bars at 38% (solid black and crosshatched respectively), "Other" with a solid black bar at 19%, and "No activities in this area" with a crosshatched bar at 11%.

Note: Thirty-seven regular states responded to this item. Respondents were able to select multiple response options.

Five unique states responded to this item. All of them reported conducting desk audits to monitor IEP teams’ decision making about accessibility features on statewide assessments. They also reported reviewing assessment process (click) data, reviewing IEP system records, and interviewing students, educators, and administrators.

Monitoring Provision on Statewide Assessments

States were asked how they monitor the provision of accessibility features on statewide assessments (see Figure 2). For regular states, the most frequently reported monitoring strategies were scheduled visits to targeted LEAs or schools (65%) and scheduled visits to randomly selected LEAs or schools (59%). Slightly more than half of responding states conducted direct observations of test administration (54%). About one-third conducted interviews with students, educators, and administrators (32%). Remote review methods included desk audits (43%), online or paper record reviews (35%), and review of assessment process (click) data (30%). Fewer states used unscheduled visits to randomly selected LEAs or schools (16%) and unscheduled visits to targeted LEAs or schools (14%). Some states reported using “Other” methods (11%) or indicated that they did not monitor in this area (3%).

Figure 2. Methods of Monitoring the Provision of Accessibility Features, Regular States

A horizontal bar chart showing how states monitor the provision of accessibility features on statewide assessments. The y-axis lists monitoring strategies, and the x-axis shows percentages from 0% to 70%. The bars alternate between solid black and crosshatched patterns. "Scheduled visits to targeted LEAs/schools" displays the longest solid black bar at 65% of states, followed by "Scheduled visits to randomly selected LEAs/schools" with a crosshatched bar at 59%, "Direct observations of test administration" with a solid black bar at 54%, "Conduct desk audits" with a crosshatched bar at 43%, "Online or paper record reviews" with a solid black bar at 35%, "Interviews with students, educators, and administrators" with a crosshatched bar at 32%, "Review of assessment process (click) data" with a solid black bar at 30%, "Unscheduled visits to randomly selected LEAs/schools" with a crosshatched bar at 16%, "Unscheduled visits to targeted LEAs/schools" with a solid black bar at 14%, "Other" with a crosshatched bar at 11%, and "No activities in this area" with a solid black bar at 3%.

Note: Thirty-seven regular states responded to this item. Respondents were able to select multiple response options.

Five unique states responded to this item. The most frequently reported method for monitoring the provision of accessibility features was direct observations of test administration. This was followed by scheduled visits to randomly selected LEAs or schools and “Other” methods that were not specified.

Examining Validity

States used various approaches to examine the validity of accessibility features in their assessment systems during 2023–24 (see Figure 3). Most commonly, states collected data on the assignment of accessibility features (54%) and the use of accessibility features (46%). More than one-third of states conducted statistical analysis of assessment results (41%). A smaller group engaged in review of research literature on specific accessibility features to determine their validity (35%). Fewer states conducted empirical studies on the effects of accessibility features (14%) or used other methods (11%). Notably, some states reported no activities in this area (22%), suggesting significant variation in states’ efforts to validate accessibility features.

Figure 3. Methods of Examining the Validity of Accessibility Features, Regular States

A horizontal bar chart showing the percentage of states using different methods to examine the validity of accessibility features. The y-axis lists seven categories of validation methods, and the x-axis shows percentages from 0% to 60%. The bars alternate between solid black and crosshatched visual patterns. "Data collection on the assignment of accessibility features" displays the longest solid black bar at 54% of states, followed by "Data collection on the use of accessibility features" with a crosshatched bar at 46%, "Statistical analysis of assessment results" with a solid black bar at 41%, "Review of research literature on specific accessibility features" with a crosshatched bar at 35%, "Empirical studies on the effects of accessibility features" with a solid black bar at 14%, "Other" with a crosshatched bar at 11%, and "No activities in this area" with a solid black bar at 22%.

Note: Thirty-seven regular states responded to this item. Respondents were able to select multiple response options.

Five unique states responded to this item. Most unique states responded with “Other” methods to examine validity. “Other” responses included an end-of-testing survey, conducting data review during state leadership conferences, and conducting IEP reviews. Two unique states reported no activities in this area, and one collected data on the use of accessibility features.

Alternate Assessments Based on Alternate Academic Achievement Standards (AA-AAAS)

Supporting Appropriate Participation

States implemented several strategies to support appropriate participation in the AA-AAAS (see Figure 4). The most common strategy was providing professional development to local education agency (LEA) special education administrators on training IEP team members to use decision-making tools and guidelines (81%). Many states also shared data with LEAs on the percentage of students participating in the alternate assessment in the previous year (78%) and examined disproportionality in participation (76%). Additional frequently reported practices included oversight activities for LEAs with participation rates over 1.0% (73%), reviews of sample IEPs (73%), and revising state-developed decision-making guidelines for IEP teams (62%).

Other strategies were reported by fewer states. For example, slightly less than half of the states reported sharing data on the characteristics of participating students (49%). The least common strategies included providing information sessions for families (8%) and other unspecified activities (11%). Overall, the data suggest a strong focus on training, oversight, and data sharing to support appropriate AA-AAAS participation decisions.

Figure 4. Strategies to Support Appropriate Participation in AA-AAAS, Regular States

A horizontal bar chart showing the percentage of U.S. states that implemented various strategies related to the AA-AAAS (Alternate Assessment based on Alternate Academic Achievement Standards). The y-axis lists strategies, and the x-axis shows percentages from 0% to 90%. The bars alternate between solid black and crosshatched patterns. "Provide professional development to LEA special education administrators on training IEP team members in using decision-making tools and guidelines" displays the longest solid black bar at 81% of states, followed by "Share data with each LEA on the percent of students participating in the AA-AAAS from the previous year" with a crosshatched bar at 78%, "Examine any disproportionality in AA-AAAS participation in each LEA" with a solid black bar at 76%, "Conduct oversight activities for a sample of LEAs with AA-AAAS participation rates over 1.0% in the previous year" and "Review a sample of IEPs for appropriate AA-AAAS decision making" each with crosshatched and solid black bars respectively at 73%, "Revise state-developed participation decision-making guidelines for IEP teams to use" with a crosshatched bar at 62%, "Share data with each LEA on the characteristics of students participating in the AA-AAAS from the previous year" with a solid black bar at 49%, "Other" with a crosshatched bar at 11%, and "Provide information sessions to families of students who in the past participated in the AA-AAAS" with a solid black bar at 8%.

Note: Thirty-seven regular states responded to this item. Respondents were able to select multiple response options.

Of the five unique states that responded to this item, the most common strategies for properly assigning students to the alternate assessment were to review sample IEPs and change IEP team guidelines. These unique states also said they provide professional development for local special education administrators on how to train IEP team members. In addition, they share data with each district on the percentage of students who took the assessment the previous year.

Challenges in Meeting the 1% Participation Cap

Regular states identified a variety of challenges in meeting the federal requirement that no more than 1.0% of all assessed students participate in the alternate assessment based on alternate academic achievement standards (see Figure 5). The most frequently cited challenge was identification and decision making (37%), indicating difficulties in appropriately determining which students should participate in these assessments. Small districts with low enrollment presented challenges for some states (16%); in an LEA with only 200 tested students, for example, even three AA-AAAS participants would exceed 1.0%. Additionally, staffing and capacity issues affected a smaller percentage of states (11%). Other challenges included concerns about the arbitrary nature of the limit, rising rates of students with disabilities, training needs, and assessment opt-out policies (5% each). A substantial portion of states indicated “Other” challenges that were not otherwise categorized (29%). Notably, a small group of states reported that they were already below the 1.0% threshold and therefore did not find the cap a challenge (16%).

Figure 5. Challenges in Meeting the 1% Participation Cap for AA-AAAS, Regular States

A horizontal bar chart showing the percentage of states reporting different challenges in meeting the 1% participation cap. The y-axis lists nine categories of challenges, and the x-axis shows percentages from 0% to 40%. “Identification and decision making” (solid black bar) displays the longest bar at 38% of states; “Small districts/n counts” (crosshatched bar) and “Currently under 1%” (solid black bar) each account for 16% of states; “Staffing and capacity” (crosshatched bar) accounts for 11% of states; “Arbitrary cap” (solid black bar), “Rising rates of students with disabilities” (crosshatched bar), “Training needed” (solid black bar), and “Assessment opt out” (crosshatched bar) each account for 5% of states; and “Other” accounts for 30% of states (solid black bar).

Note: Thirty-eight regular states responded to this item.

Four unique states responded to this item. The most common challenges were related to staffing and capacity issues, followed by identification and decision making. Unique states shared concerns that students taking the alternate assessment may struggle even with appropriate accommodations.

Alternate English Language Proficiency (Alt-ELP) Assessments

States implemented a range of strategies to ensure appropriate assignment of students to alternate English language proficiency (Alt-ELP) assessments, and most states used multiple approaches (see Figure 6). The most commonly reported strategy was revising state-developed participation decision-making guidelines for IEP teams (69%). About two-thirds of states provided professional development to LEA special education administrators and encouraged them to work with IEP team members (64%). Quality assurance activities were also common: 39% of states reviewed sample IEPs for appropriate decision making, and 36% shared the previous year’s Alt-ELP participation data with LEAs. Additional oversight strategies included examining disproportionality in Alt-ELP participation (31%) and sharing data on the characteristics of participating students (25%). The least common strategy was providing information sessions to families of previous Alt-ELP participants (3%). Some states reported using other, unspecified strategies (28%). Only a small percentage of states reported using no strategies to support appropriate Alt-ELP assignment (6%).

Figure 6. Strategies to Ensure Appropriate Assignment to Alt-ELP Assessments, Regular States

A horizontal bar chart showing the percentage of states implementing various strategies related to Alt-ELP assessment. The y-axis lists strategies, and the x-axis shows percentages from 0% to 80%. The bars alternate between solid black and crosshatched patterns. "Revise state-developed participation decision-making guidelines for IEP teams to use" displays the longest solid black bar at 69% of states, followed by "Provide professional development to LEA special education administrators and encourage them to work with IEP team members" with a crosshatched bar at 64%, "Review a sample of IEPs for appropriate Alt-ELP decision making" with a solid black bar at 39%, "Share data with each LEA on the percent of students participating in the Alt-ELP from the previous year" with a crosshatched bar at 36%, "Examine any disproportionality in Alt-ELP participation in each LEA" with a solid black bar at 31%, "Other" with a crosshatched bar at 28%, "Share data with each LEA on the characteristics of students participating in the Alt-ELP from the previous year" with a crosshatched bar at 25%, "No strategies are used to support the appropriate assignment of students to the Alt-ELP assessment at this time" with a solid black bar at 6%, and "Provide information sessions to families of students who in the past participated in the Alt-ELP" with a solid black bar at 3%.

Note: Thirty-six regular states responded to this item. Respondents were able to select multiple response options.

Five unique states responded to this item. The most frequently reported strategies for ensuring appropriate assignment of students to Alt-ELP assessments were reviewing a sample of IEPs for appropriate decision making and “Other” non-specified strategies. One unique state indicated that no strategies were used to support the appropriate assignment of students to the Alt-ELP assessment.

Accountability

Use of Growth Models

Growth models are one approach used in accountability systems to track student progress over time. States were asked to describe the types of growth models included in their assessment and accountability systems.
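
Although the survey did not ask states to define these models, a rough sense of how some of the simpler model families differ may help in reading Figures 7 through 10. The short Python sketch below uses hypothetical scores and deliberately simplified definitions; the function names and values are illustrative assumptions only and do not represent any state’s actual growth calculation.

    # Minimal sketch (hypothetical scores, simplified definitions) contrasting
    # three of the growth model families named in this section.

    def gain_score(prior_score: float, current_score: float) -> float:
        """Gain score: the difference between this year's and last year's scale scores."""
        return current_score - prior_score

    def categorical_gain(prior_level: int, current_level: int) -> int:
        """Categorical gain: change in performance level (e.g., Level 1-4), not scale score."""
        return current_level - prior_level

    def met_growth_target(current_score: float, target_score: float) -> bool:
        """Growth-to-target: whether the student reached an individually set target score."""
        return current_score >= target_score

    # Hypothetical student: scale score 480 last year and 510 this year,
    # moving from performance Level 2 to Level 3, with a growth target of 505.
    print(gain_score(480, 510))         # 30 points of growth
    print(categorical_gain(2, 3))       # gained one performance level
    print(met_growth_target(510, 505))  # True: target met

Student growth percentiles and value-added models, which several states reported using, require comparing each student with academic peers or modeling expected scores, so they are not reduced to a one-line formula here.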

General Assessments. Compared to other assessments, the general assessment included the broadest range of growth models (see Figure 7). Student growth percentiles were the most common choice (46%). Value-added models were used by slightly less than one-fourth of states (23%), the highest usage of this approach across all assessment types. Categorical gains (20%) and growth-to-target models (17%) were also used for general assessments. A smaller percentage used either gain scores or no growth model (11% each), and a few used residual gain scores or other approaches (3% each).

Figure 7. Growth Models Used for General Assessments, Regular States

A horizontal bar chart with growth model types on the y-axis and percentage of states on the x-axis (ranging from 0% to 50%). The bars alternate between solid black and crosshatched visual patterns. "Student growth percentiles" displays the longest solid black bar at 46%, followed by "Value-added model" with a crosshatched bar at 23%, "Categorical gains" with a solid black bar at 20%, "Growth-to-target" with a crosshatched bar at 17%, "Gain scores" with a solid black bar at 11%, "No growth model" with a crosshatched bar at 11%, and both "Residual gain scores" and "Other" with small solid black and crosshatched bars respectively at 3% each.

Note: Thirty-five regular states responded to this item. Respondents were able to select multiple response options.

Five unique states responded to this item. The most frequently reported growth model used for general assessments was categorical gains, followed by gain scores, growth-to-target, and student growth percentiles.

AA-AAAS. As shown in Figure 8, more than half of responding regular states did not apply a growth measure to the AA-AAAS (56%), an assessment designed for students with the most significant cognitive disabilities. Among states that did use a growth model, the largest percentage used a categorical gains model (28%), followed by student growth percentiles (19%). Gain scores (9%), growth-to-target (6%), and value-added models (3%) were used by few states on the AA-AAAS. No states reported using residual gain score models, and no other models were reported.

Figure 8. Growth Models Used for AA-AAAS, Regular States

A horizontal bar chart with growth model types listed on the y-axis and percentage of states on the x-axis (ranging from 0% to 50%). The bars use two visual patterns: solid black bars and crosshatched bars. "No growth model" has the longest solid black bar at 56% of states, followed by "Categorical gains" with a crosshatched bar at 28%, "Student growth percentiles" with a solid black bar at 19%, "Gain scores" with a crosshatched bar at 9%, "Growth-to-target" with a solid black bar at 6%, "Value-added model" with a crosshatched bar at 3%, and both "Residual gain scores" and "Other" showing no bars at 0%.

Note: Thirty-two regular states responded to this item. Respondents were able to select multiple response options.

Three unique states responded to this item. Two of these states reported using categorical gains to measure growth, while one reported using student growth percentiles.

ELP Assessments. Compared to the AA-AAAS, the ELP assessment showed more varied use of growth models, with a more even distribution of states across the top three approaches (see Figure 9). Growth-to-target was the most frequently reported model (31%), followed closely by categorical gains (28%) and no growth model (25%). Student growth percentiles were used by 19% of states, while gain scores (6%) and other models (3%) were reported by relatively few states.

Figure 9. Growth Models Used for ELP Assessments, Regular States

A horizontal bar chart with growth model types on the y-axis and percentage of states on the x-axis (ranging from 0% to 30%). The bars use alternating solid black and crosshatched patterns. "Growth-to-target" has the longest solid black bar at 31%, followed closely by "Categorical gains" with a crosshatched bar at 28%, "No growth model" with a solid black bar at 25%, "Student growth percentiles" with a crosshatched bar at 19%, "Gain scores" with a solid black bar at 6%, "Other" with a crosshatched bar at 3%, and both "Residual gain scores" and "Value-added" showing no visible bars at 0%.

Note: Thirty-two regular states responded to this item. Respondents were able to select multiple response options.

Two unique states responded to this item. Both states reported using categorical gains to measure student growth on ELP assessments.

Alt-ELP Assessments. Figure 10 presents the types of growth models states reported using for alternate English language proficiency (Alt-ELP) assessments. Alt-ELP assessments are designed for students with the most significant cognitive disabilities who are also English learners. Similar to the AA-AAAS, the largest proportion of states used no growth model for the Alt-ELP assessment (47%). Categorical gains were the second most commonly used approach (28%), followed by growth-to-target (16%). Student growth percentiles (9%) and gain scores (3%) were less commonly used. No states reported using residual gain scores or value-added models for this assessment.

Figure 10. Growth Models Used for Alt-ELP Assessments, Regular States

A horizontal bar chart with growth model types on the y-axis and percentage of states on the x-axis (ranging from 0% to 40%). The bars alternate between solid black and crosshatched patterns. "No growth model" shows the longest solid black bar at 47%, followed by "Categorical gains" with a crosshatched bar at 28%, "Growth-to-target" with a solid black bar at 16%, "Student growth percentiles" with a crosshatched bar at 9%, "Gain scores" with a solid black bar at 3%, "Other" with a crosshatched bar at 3%, and both "Residual gain scores" and "Value-added model" showing no visible bars at 0%.

Note: Thirty-two regular states responded to this item. Respondents were able to select multiple response options.

Two unique states responded to this item. Both of them reported using categorical gains to measure student growth on Alt-ELP assessments.

Disaggregation of Data

Disaggregation by Type of Assessment. States differed in whether they made assessment results disaggregated for English learners with disabilities publicly available, and their approaches varied across assessment types (see Figure 11). General assessments had the highest publication rate (50%), followed by both the AA-AAAS and ELP assessments (44% each). Disaggregated Alt-ELP assessment results were the least likely to be published (42%). Overall, most states engaged in public reporting of disaggregated data for at least some assessments. Only 6% of states reported not making disaggregated results publicly available for any assessment. Additionally, nearly one-fifth of states (19%) made disaggregated results available by request, while 17% used other approaches.

Figure 11. Publication of Disaggregated Assessment Results, Regular States

Bar chart showing percent of states with disaggregated data available: General Assessment 50%, AA-AAAS 44%, ELP 44%, Alt-ELP 42%, Available by Request 19%, Other 17%, Not Published 6%.

Note: Thirty-six regular states responded to this item. Respondents were able to select multiple response options.

Five unique states responded to this item. Most often, unique states disaggregated results for the general assessment, followed by the AA-AAAS. One unique state reported that disaggregated results were not published for any assessment.

Disaggregation for General Assessment. Among all assessment types, states were most likely to publicly share general assessment results disaggregated by primary disability category (see Figure 12). Examining trends was the most common purpose for disaggregating results (47%). About one-third of states either disaggregated for public reporting or did not disaggregate data for the general assessment (32% each). A smaller percentage of states disaggregated only by request (21%), and a small proportion selected “Other” reasons (6%). This pattern suggests that states place greater emphasis on analyzing disability-related performance patterns in their general assessments than in the AA-AAAS, ELP, and Alt-ELP assessments (see Figures 13–15).

Figure 12. Reasons for Disaggregating General Assessment Results by Disability Category, Regular States

A horizontal bar chart with reasons for disaggregation on the y-axis and percentage of states on the x-axis (ranging from 0% to 50%). The bars alternate between solid black and crosshatched visual patterns. "To examine trends" displays the longest solid black bar at 47%, followed by "For public reporting" and "No disaggregation" both with bars at 32% (crosshatched and solid black respectively), "Only by request" with a crosshatched bar at 21%, and "Other" with a small solid black bar at 6%.

Note: Thirty-four regular states responded to this item. Respondents were able to select multiple response options.

Five unique states responded to this item. The main reason these states published disaggregated general assessment results by disability category was to track trends. Unique states also indicated that results were disaggregated for public reporting. One state responded that they did not disaggregate general assessment results by primary disability category.

Disaggregation for AA-AAAS. States varied in their purposes for making disaggregated AA-AAAS results public (see Figure 13). Forty percent of states reported disaggregating results to examine trends, representing the most common purpose, while 34% of states indicated they do not disaggregate AA-AAAS results by primary disability category. Equal proportions of states (23% each) disaggregated results for public reporting and only upon request. This pattern suggests that while many states do break down performance data for the AA-AAAS, a sizable number do not conduct a detailed analysis of how students from different disability categories perform. Finally, a small percentage of states reported disaggregating AA-AAAS results for some other purpose (6%).

Figure 13. Purposes for Disaggregating AA-AAAS Results by Disability Category, Regular States

A horizontal bar chart with reasons for disaggregation on the y-axis and percentage of states on the x-axis (ranging from 0% to 40%). The bars alternate between solid black and crosshatched visual patterns. "To examine trends" displays the longest solid black bar at 40%, followed by "No disaggregation" with a crosshatched bar at 34%, "For public reporting" and "Only by request" both with bars at 23% (solid black and crosshatched respectively), and "Other" with a small solid black bar at 6%.

Note: Thirty-five regular states responded to this item. Respondents were able to select multiple response options.

Four unique states responded to this item. The most common reason for disaggregation of AA-AAAS results by primary disability category was to examine trends. States also reported that they disaggregate results for public reporting or only by request. One state indicated that they do not disaggregate results by primary disability category at this point.

Disaggregation for ELP Assessments. Many, but not all, states reported publicly sharing some form of disaggregated ELP assessment data, although the purposes for doing so differed (see Figure 14). The largest group reported no disaggregation (44%). Others disaggregated to examine trends (35%), to support public reporting (24%), or only in response to requests (15%). A small proportion selected “Other” reasons (6%).

Figure 14. Purposes for Disaggregating ELP Assessment Results by Disability Category, Regular States

A horizontal bar chart with reasons for disaggregation on the y-axis and percentage of states on the x-axis (ranging from 0% to 40%). The bars alternate between solid black and crosshatched visual patterns. "No disaggregation" displays the longest solid black bar at 44%, followed by "To examine trends" with a crosshatched bar at 35%, "For public reporting" with a solid black bar at 24%, "Only by request" with a crosshatched bar at 15%, and "Other" with a small solid black bar at 6%.

Note: Thirty-four regular states responded to this item. Respondents were able to select multiple response options.

Three unique states responded to this item. The most common reason that these states disaggregated ELP assessment results by primary disability category was to examine trends, followed by disaggregating results for public reporting. One unique state indicated that results were only disaggregated by request.

Disaggregation for Alt-ELP Assessments. Disaggregation practices for Alt-ELP assessments were more limited than for other assessments (see Figure 15). Nearly half of states reported no disaggregation of Alt-ELP data by primary disability category (47%). About 29% of states reported disaggregating results to examine trends. Equal proportions disaggregated results for public reporting and only by request (21% each). The smallest proportion selected “Other” reasons (6%). This pattern suggests that states may face greater challenges or have fewer incentives to analyze Alt-ELP performance data across primary disability categories.

Figure 15. Purposes for Disaggregating Alt-ELP Assessment Results by Disability Category, Regular States

A horizontal bar chart with reasons for disaggregation on the y-axis and percentage of states on the x-axis (ranging from 0% to 50%). The bars alternate between solid black and crosshatched visual patterns. "No disaggregation" displays the longest solid black bar at 47%, followed by "To examine trends" with a crosshatched bar at 29%, "For public reporting" and "Only by request" both with bars at 21% (solid black and crosshatched respectively), and "Other" with a small solid black bar at 6%.

Note: Thirty-four regular states responded to this item. Respondents were able to select multiple response options.

Two unique states responded to this item. Both unique states disaggregated Alt-ELP assessment results by primary disability category to examine trends. One of the states also indicated that results were only disaggregated by request.

Accountability Reporting Practices for Students with Missing or Incomplete Assessment Records

Students may not participate in state assessments for various reasons, including absence due to illness, family opt-out decisions (formal or informal), or circumstances related to their disabilities and available accommodations. For example, students with significant cognitive disabilities may not complete an assessment, or their scores may be invalidated because they used non-standard accommodations or modifications that affect score comparability. Under federal Elementary and Secondary Education Act (ESEA) reporting requirements, states must account for these students in their accountability systems, making decisions about whether to count them as participants and whether to assign valid scores. These reporting practices are important because they affect participation rates, achievement results, and ultimately how schools and districts are evaluated for accountability purposes.
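
To illustrate why these counting decisions matter, the brief sketch below shows how the same hypothetical school could report two different participation rates depending on whether students with incomplete records are counted as participants. The enrollment counts and the participation_rate helper are illustrative assumptions, not the calculation prescribed by ESEA or used by any particular state.

    # Minimal sketch (hypothetical counts) of how the counting rules summarized in
    # Figures 16-18 change a school's reported participation rate.

    def participation_rate(counted_as_participants: int, total_enrolled: int) -> float:
        """Share of enrolled students who are counted as assessment participants."""
        return counted_as_participants / total_enrolled

    enrolled = 400          # hypothetical tested-grade enrollment
    completed = 380         # completed the assessment with valid scores
    absent = 12             # did not participate in any way
    incomplete = 8          # sat for the test but answered too few items to score

    # Rule A: incomplete records still count as participants (the most common
    # approach reported in Figure 17); only the absent students are excluded.
    rate_a = participation_rate(completed + incomplete, enrolled)   # 0.97

    # Rule B: both absent students and incomplete records are excluded.
    rate_b = participation_rate(completed, enrolled)                # 0.95

    print(f"Rule A: {rate_a:.0%}   Rule B: {rate_b:.0%}")

In this hypothetical, the choice of counting rule alone moves the reported participation rate from 97% to 95%, a difference that can matter for schools near the ESEA expectation that at least 95% of students participate in assessments.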

Practices for Students Who Did Not Participate. States reported different approaches for counting students who did not participate in state assessments (see Figure 16). Most states did not count students who did not sit for the assessment as participants and did not assign them a valid score (74%). A smaller share of states counted absent students as participants but did not assign a valid score (16%), while some counted these students as participants and assigned a valid score (13%). No states assigned valid scores without also counting the students as participants (0%). A small percentage of states used other methods for counting and scoring these students (6%).

Figure 16. Reporting Practices for Students Who Did Not Participate in Statewide Assessments, Regular States

A horizontal bar chart showing the percentage breakdown of how states handle inclusion of students who did not participate in state assessments. The y-axis lists five categories of inclusion manner, and the x-axis shows percentages from 0% to 70%. The bars show: "NOT counted as participants and received no score" accounts for 74% of states (solid black bar); "Counted as participants but received no score" accounts for 16% of states (crosshatched pattern bar); "Counted as participants and earned score counted as valid" accounts for 13% of states (solid black bar); "NOT counted as participants but earned score counted as valid" accounts for 0% of states (no visible bar); and "Other" accounts for 6% of states (solid black bar).

Note: Thirty-one regular states responded to this item. Respondents were able to select multiple response options.

Five unique states responded to this item. All reported that students who did not participate in state assessments were not counted as participants and were not assigned a valid score.

Practices for Students with Incomplete Assessments. States varied in how they counted and scored students who participated in state assessments but did not complete a minimum number of items (see Figure 17). The most common approach was to count these students as participants while not assigning a valid score (57%). About a third of states did not count these students as participants and did not assign a valid score (30%). A smaller group of states counted students with incomplete assessments as participants and assigned valid scores (13%). As with non-participating students (see Figure 16), no states assigned valid scores without also counting students as participants (0%). An additional group of states used other methods to include this population in accountability reporting (7%).

Figure 17. Practices for Counting Students with Incomplete Assessments, Regular States

A horizontal bar chart showing the percentage breakdown of how states handle inclusion of students who sat for assessments but did not complete enough items to score. The y-axis lists five categories of inclusion manner, and the x-axis shows percentages from 0% to 50%. The bars show: "Counted as participants but received no score" accounts for 57% of states (solid black bar); "NOT counted as participants and received no score" accounts for 30% of states (crosshatched pattern bar); "Counted as participants and earned score counted as valid" accounts for 13% of states (solid black bar); "NOT counted as participants but earned score counted as valid" accounts for 0% of states (no visible bar); and "Other" accounts for 7% of states (solid black bar).

Note: Thirty regular states responded to this item. Respondents were able to select multiple response options.

Five unique states responded to this item. The most common ways of including students with incomplete assessments were either counting the students as participants and assigning them valid scores or not counting the students as participants and not assigning valid scores. One unique state reported counting these students as participants but not assigning valid scores.

Practices for Students with Invalid Scores. States varied widely in how they included students who used accommodations that resulted in invalid scores, such as non-standard accommodations or modifications (see Figure 18). The largest share of states counted these students as participants but did not assign them valid scores (41%). More than a quarter of states excluded both the students’ participation and their scores (28%). Some states counted the students as participants and assigned valid scores despite the accommodation-related invalidity (19%). No states excluded students from participant counts while also assigning a valid score (0%). An additional group of states used other methods to include this student population in accountability reporting (10%).

Figure 18. Practices for Counting Students with Invalid Scores, Regular States

A horizontal bar chart showing the percentage breakdown of how states handle inclusion of students who used accommodations resulting in invalid scores. The y-axis lists five categories of inclusion manner, and the x-axis shows percentages from 0% to 40%. The bars show: "Counted as participants but received no score" accounts for 41% of states (solid black bar); "NOT counted as participants and received no score" accounts for 28% of states (crosshatched pattern bar); "Counted as participants and earned score counted as valid" accounts for 19% of states (solid black bar); "NOT counted as participants but earned score counted as valid" accounts for 0% of states (no visible bar); and "Other" accounts for 10% of states (solid black bar).

Note: Thirty-two regular states responded to this item. Respondents were able to select multiple response options.

Five unique states responded to this item. The most common approach was to exclude students who used accommodations that resulted in invalid scores from both participation counts and scoring. Some unique states reported counting these students as participants but assigning no valid scores.

Appendix A: Survey Items Used in the Report

This appendix contains the content of survey items that were addressed in this report. To ensure the best online readability and accessibility for this report, the survey item content has been standardized for the web environment. We have preserved the wording of all question stems and response options, but the original electronic formatting (such as checkboxes or radio buttons) has been omitted to maximize clarity across all devices.

Items 1–19 are not included because they were specifically for NCEO’s internal use.

20. How does your state monitor IEP teams’ decision making for assigning accessibility features on statewide assessments? (Please select all that apply.)

Respondents could select checkboxes with the following options:

  • Desk audits
  • Interviews with students, families, educators, or administrators
  • Review of assessment process (click) data
  • Review of IEP system records
  • Other (Please describe.)
  • No activities in this area.

21. How does your state monitor the provision of accessibility features on statewide assessments? (Please select all that apply.)

Respondents could select checkboxes with the following options:

  • Desk audits
  • Direct observations of test administration
  • Interviews with students, educators, and administrators
  • Online or paper record reviews
  • Review of assessment process (click) data
  • Scheduled visits to randomly selected LEAs/schools
  • Scheduled visits to targeted LEAs/schools
  • Unscheduled visits to randomly selected LEAs/schools
  • Unscheduled visits to targeted LEAs/schools
  • Other (Please describe.)
  • No activities in this area.

22. How does your state monitor the appropriateness of the allowable accessibility features that are offered on statewide assessments? (Please select all that apply.)

Respondents could select checkboxes with the following options:

  • Data collection on the assignment of accessibility features
  • Data collection on the use of accessibility features
  • Empirical studies on the effects of accessibility features
  • Review of research literature on specific accessibility features
  • Statistical analysis of assessment results (e.g., differential item functioning [DIF], item-total correlations)
  • Other (Please describe.)
  • No activities in this area.

23. What strategies does your state use to support the appropriate assignment of students to AA-AAAS? (Please select all that apply.)

Respondents could select checkboxes with the following options:

  • Conduct oversight activities for a sample of LEAs with AA-AAAS participation rates over 1.0% in the previous year.
  • Examine any disproportionality in AA-AAAS participation in each LEA.
  • Provide information sessions to families of students who in the past participated in the AA-AAAS.
  • Provide professional development to LEA special education administrators on training IEP team members in using decision-making tools and guidelines.
  • Review a sample of IEPs for appropriate AA-AAAS decision making.
  • Revise State-developed participation decision-making guidelines for IEP teams to use.
  • Share data with each LEA on the characteristics of students participating in the AA-AAAS from the previous year.
  • Share data with each LEA on the percent of students participating in the AA-AAAS from the previous year.
  • Other (Please describe.)
  • No strategies are used to support the appropriate assignment of students to the AA-AAAS at this time.

24. What are the biggest challenges that your state and LEAs face around meeting the 1.0% cap on the AA-AAAS?

25. For which assessment(s) does your state publish disaggregated results for English learners with disabilities? (Please select all that apply.)

Respondents could select checkboxes with the following options:

  • AA-AAAS
  • Alternate English Language Proficiency (Alt-ELP) assessment
  • English Language Proficiency (ELP) assessment
  • General assessment
  • Not published but available by request
  • Other (Please describe.)
  • Disaggregated results are not published for any assessment at this time

26. What strategies does your state use to support the appropriate assignment of students to the Alt-ELP assessment? (Please select all that apply.)

Respondents could select checkboxes with the following options:

  • Examine any disproportionality in Alt-ELP participation in each LEA
  • Provide information sessions to families of students who in the past participated in the Alt-ELP
  • Provide professional development to LEA special education administrators and encourage them to work with IEP team members
  • Review a sample of IEPs for appropriate Alt-ELP decision making
  • Revise state-developed participation decision-making guidelines for IEP teams to use
  • Share data with each LEA on the characteristics of students participating in the Alt-ELP from the previous year
  • Share data with each LEA on the percent of students participating in the Alt-ELP from the previous year
  • Other (Please describe)
  • No strategies are used to support the appropriate assignment of students to the Alt-ELP assessment at this time

27. For each assessment listed, please select which growth model(s) your state uses.

Respondents were presented with a table listing assessments in the first column and types of growth models as column headers. For each assessment, respondents placed a check mark in each column that applied.

Assessment:

  • General Assessment
  • AA-AAAS
  • ELP Assessment
  • Alt-ELP Assessment

Growth Model:

  • Categorical Gains
  • Gain Scores
  • Growth-to-Target
  • Residual Gain Scores
  • Student Growth Percentiles
  • Value-Added Model
  • Other (If selected, a follow up item will appear on the next page)
  • No Growth Model

28. For each assessment listed, please select the purpose(s) for which your state disaggregates results by primary disability category.

Respondents were presented with a table listing assessments in the first column and purposes as column headers. For each assessment, respondents placed a check mark in each column that applied.

Assessment:

  • General Assessment
  • AA-AAAS
  • ELP Assessment
  • Alt-ELP Assessment

Purposes:

  • To examine trends
  • For public reporting
  • Only by request
  • Other (If selected, a follow-up item will appear on the next page)
  • No disaggregation

29. For each student group listed, please select the manner in which the students were included in your state's 2023–24 assessment accountability reports for the Elementary and Secondary Education Act (ESEA).

Respondents were presented with a table listing student groups in the first column and the manner in which students were included as column headers. For each student group, respondents placed a check mark in each column that applied.

Student groups:

  • Students who did not participate in state assessments in any way (e.g., absent on test day, parent refusal)
  • Students who sat for the assessment but did not complete enough items to score
  • Students who used accommodations resulting in invalid scores (e.g., non-standard, modifications)

Manner in which students were included:

  • Counted as participants and earned score counted as valid
  • Counted as participants but received no score
  • NOT counted as participants but earned score counted as valid
  • NOT counted as participants and received no score
  • Other (If selected, a follow-up item will appear on the next page)

If you answered “Other” to any row in items 27–29, please continue to the following corresponding items (27a–29a).

27a. Please describe the "Other" growth model(s) that your state uses.

28a. Please describe the "Other" purpose(s) for which your state disaggregates results by primary disability category.

29a. Please describe the "Other" way in which the student group was included in your state's accountability reports.

Authors

Mari Quanbeck

Virginia A. Ressa

Kristin K. Liu

Linda Goldstone

Darrell Peterson

Andrew Hinkle

All rights reserved. Any or all portions of this document may be reproduced without prior permission, provided the source is cited as:

Quanbeck, M., Ressa, V., Liu, K. K., Goldstone, L., Peterson, D., & Hinkle, A. (2025). 2025 survey of states: Trends and issues in statewide assessment of students with disabilities. National Center on Educational Outcomes.

NCEO logo

The Center is supported through a Cooperative Agreement (#H326G210002) with the Research to Practice Division, Office of Special Education Programs, U.S. Department of Education. The Center is affiliated with the Institute on Community Integration at the College of Education and Human Development, University of Minnesota. Consistent with EDGAR §75.62, the contents of this report were developed under the Cooperative Agreement from the U.S. Department of Education, but do not necessarily represent the policy or opinions of the U.S. Department of Education or Offices within it. Readers should not assume endorsement by the federal government. Project Officer: David Egnor

IDEAS that Work, U.S. Office of Special Education Programs

In collaboration with:

NCEO partner logos: aem, Center for Parent Information & Resources, CCSSO, NASDSE, WestEd

NCEO Core Staff

Andrew R. Hinkle, Co-Director

Kristi K. Liu, Co-Director

Jessica Bowman

Gail Ghere

Linda Goldstone

Michael L. Moore

Darrell Peterson

Mari Quanbeck

Virginia A. Ressa

Kathy Strunk

Yi-Chen Wu

National Center on Educational Outcomes

University of Minnesota

2025 East River Parkway, Room 1-330

Minneapolis, MN 55414

Phone 612/626-1530

http://www.nceo.info

The University of Minnesota shall provide equal access to and opportunity in its programs, facilities, and employment without regard to race, color, creed, religion, national origin, gender, age, marital status, disability, public assistance status, veteran status, sexual orientation, gender identity, or gender expression.

This document is available in alternative formats upon request. Direct requests to nceo@umn.edu.

Institute on Community Integration and University of Minnesota logos

NCEO is an affiliated center of the Institute on Community Integration