Did test-optional policies since 2020 change graduation or retention rates for students who would have submitted lower SAT scores?
Executive summary
Available research finds no consistent, system-wide hit to retention or graduation after the large wave of test‑optional adoptions in 2020: multiple multi‑school studies report similar retention and graduation rates for students who submitted scores and those who did not, and at least one cross‑cohort analysis found “little evidence” of a significant effect on retention for 2021–22 [1] [2]. But rigorous, institution‑level studies show mixed and sometimes adverse effects for particular groups—NBER finds test‑optional rules reduced admissions probability for high‑achieving disadvantaged applicants at Dartmouth, and some campus reports warn voluntary non‑submission can hide lower‑scoring students [3] [4].
1. Early national picture: broad adoption, limited aggregate harm
The number of test‑optional four‑year institutions nearly doubled after spring 2020 (Urban Institute: from 713 to 1,350), producing a natural experiment many researchers have used. National‑level analyses and multi‑institution datasets repeatedly show little change in retention or graduation attributable to test‑optional adoption for cohorts entering during and after the pandemic [5] [2]. Independent reviews and institutional pilots likewise report that non‑submitters often graduate at rates “equivalent to, or slightly higher than” submitters, and some campuses found no statistically significant difference in retention or graduation between the two groups (Haian Analytics summary; Ithaka S+R) [13] [14].
2. Institution‑level nuance: “averages” mask important variation
University reports and single‑site studies show divergent outcomes. The University of Missouri found non‑submitters had slightly lower first‑semester GPAs but similar retention; other selective colleges reported non‑submitters performed “just as well” or slightly better in graduation rates [6] [7]. These mixed signals suggest aggregate studies average over campus differences in student support, selectivity, mission, and policy design [6] [7].
3. Who may be disadvantaged by voluntary non‑submission
Researchers warn that voluntary test‑optional systems can disadvantage specific subgroups. NBER’s Dartmouth analysis concluded the policy reduced admission chances for high‑achieving, disadvantaged students because such applicants submitted scores at lower rates than their advantaged peers [3]. Independent analysts using IPEDS data and college task‑force reports likewise warn that weaker test takers disproportionately opt out, and that voluntary non‑submission can conceal high‑ability but less‑informed applicants who would have benefited from submitting strong scores [4] [8].
4. Predictive value of test scores versus high‑school grades
Longstanding literature finds that SAT and ACT scores add predictive power for retention and GPA beyond high‑school GPA in many samples; meta‑analyses and national validity studies report that SAT scores combined with high‑school GPA predict first‑year GPA and second‑year retention well (College Board validity studies) [9] [10]. Conversely, advocates of test‑optional policies cite studies showing that high‑school grades are often the stronger predictor and that test scores alone weakly predict graduation in some institutional analyses (FairTest summary citing Purdue and others) [11]. Both perspectives appear in the research record.
5. Policy design matters: universal vs. voluntary optional
Early research and ongoing grants stress that how test‑optional policy is implemented changes outcomes: more “inclusive” designs (applied universally and linked to scholarships) were associated with increased access for Black students in some system analyses, while voluntary opt‑in systems can produce adverse signaling effects and enrollment distortions (Penn State summary of forthcoming work) [14]. Several universities ran multi‑year pilots tracking retention and graduation precisely because outcomes vary with student selection and institutional supports (Higher Ed Dive on Middlebury and Mizzou) [15].
6. What’s missing in current reporting and next steps for clarity
Available sources cover short‑ to medium‑term cohorts and mix cross‑institutional with single‑site analyses, but comprehensive causal evidence linking test‑optional policy to long‑term six‑year graduation for narrowly defined groups (e.g., applicants who would have submitted low SAT scores) remains limited or still in progress. Large system‑level longitudinal work (tracking credits completed, GPA maintenance, and aid interactions) is underway but not yet conclusive (Penn State grant, CAF Co‑Lab follow‑ups) [12]. Available sources do not mention an authoritative national causal estimate that isolates outcomes specifically for the subgroup in question: students who would have submitted lower SAT scores.
Bottom line: at the national and multi‑institution level there is no clear, consistent decline in retention or graduation tied to test‑optional policies overall [1] [2], but significant caveats remain: institution‑level variation is large, voluntary‑submission regimes can harm particular disadvantaged high‑achievers and mask less‑prepared students, and policy design plus campus supports drive outcomes [3] [4] [12].