What internal SSA models or data sources (Medicare/Medicaid) influence CDR selection and diary assignment?
Executive summary
The Social Security Administration (SSA) relies on an internal CDR profiling model that scores administrative records to estimate the likelihood of medical improvement and thereby determine who is directly released for a full medical review, who receives a mailer, and which medical-diary category a case is assigned to [1]. SSA policy manuals and program analyses show that diary categories (medical improvement expected, possible, or not expected, plus a proposed new category) drive review frequency, and that researchers use SSA administrative files—such as the CDR Waterfall and the Disability Analysis File—to study diary assignment and outcomes [2] [3] [4].
1. How the SSA says it decides who gets a CDR: profiling and diaries
SSA explains that the timing of a Continuing Disability Review (CDR) is set by a “diary date” established at the last favorable decision and that cases are assigned to medical-diary categories—typically MIE (medical improvement expected), MIP (medical improvement possible), or MINE (medical improvement not expected)—which determine whether reviews occur in 3, 5, or 7 years [1] [4]. The agency explicitly states it uses a CDR profiling model that scores administrative information to distinguish beneficiaries who are “directly released” for a full medical review from those who instead receive a mailer questionnaire to screen for indications of improvement [1] [5]. SSA field manuals and POMS guidance codify these diary-driven procedures for initiating and processing CDRs [2] [6] [7].
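The diary-driven scheduling described above amounts to a simple mapping from category to review interval. The sketch below is purely illustrative: the MIE/MIP/MINE categories and the 3-, 5-, and 7-year intervals come from the article's summary of SSA policy, but the function name, data structure, and the exact category-to-interval pairing are assumptions for the example; SSA's actual scheduling rules are set out in POMS and are not reproduced here.

```python
from datetime import date

# Illustrative intervals only, using the review cycles cited in the text
# (3, 5, or 7 years); the pairing of each category with a specific interval
# is an assumption for this sketch, not SSA policy.
DIARY_INTERVALS_YEARS = {
    "MIE": 3,   # medical improvement expected
    "MIP": 5,   # medical improvement possible
    "MINE": 7,  # medical improvement not expected
}

def next_cdr_diary_date(last_favorable_decision: date, diary_category: str) -> date:
    """Compute a notional next-review ('diary') date from the last favorable decision."""
    years = DIARY_INTERVALS_YEARS[diary_category]
    return last_favorable_decision.replace(year=last_favorable_decision.year + years)
```

For example, under these assumed intervals a MIP case with a favorable decision dated 2020-06-15 would carry a diary date of 2025-06-15.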
2. What “administrative information” and SSA data researchers actually cite
SSA-authored analyses name the specific administrative datasets that analysts rely on when examining CDRs: the CDR Waterfall file (used to identify review populations and cessation reasons) and the Disability Analysis File (used to obtain demographic, diary-category, diagnosis, and geographic information) are explicitly cited in SSA research reports [4]. Those same studies describe the diary category recorded on award notices and the use of diary-type controls in regression models of CDR outcomes, indicating that internal administrative records on award history, diary type, and prior CDRs are central inputs to both operational selection and research evaluation [4] [8].
3. The role of the profiling model — described but opaque
SSA documentation and research repeatedly reference a “CDR profiling model” that assigns scores predicting medical improvement based on administrative information; that score is used to route cases to mailers or to direct development for a full review [1]. However, the agency’s public materials and the POMS rules describe the model’s role rather than its internal variables or algorithms, leaving its exact inputs and weighting—and whether it draws on external health-claims feeds—largely undisclosed in the publicly cited sources [1] [2].
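The routing step the sources describe can be sketched as a simple threshold rule. Everything below is hypothetical illustration: the sources confirm only that a score predicting medical improvement routes cases either to direct release for a full review or to a mailer; the score scale, the threshold value, and the function itself are invented for this example, since SSA publishes neither the model's variables nor its cut points.

```python
# Hypothetical sketch of score-based routing. The 0-1 score scale and the
# 0.7 cut point are invented; only the two destinations ("directly released"
# for full review vs. mailer questionnaire) come from the cited sources.
def route_case(profile_score: float, direct_release_threshold: float = 0.7) -> str:
    """Route a case by its profiling score: high-likelihood cases go to review."""
    if profile_score >= direct_release_threshold:
        return "full_medical_review"  # directly released for a full medical review
    return "mailer"                   # screening questionnaire sent first
```

Under these assumptions, `route_case(0.85)` yields `"full_medical_review"` and `route_case(0.30)` yields `"mailer"`; in practice the mailer responses themselves can trigger a later full review, a second stage this one-shot sketch does not model.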
4. Medicare/Medicaid data: what the sources do — and do not — show
None of the provided SSA documents or SSA-authored analyses in the available reporting explicitly state that Medicare or Medicaid claims feeds are ingested into the CDR profiling model or otherwise used to assign diary categories; the cited materials refer to “administrative information” and internal SSA files like the CDR Waterfall and Disability Analysis File without naming third‑party claims systems [1] [4]. Proposed regulatory changes focus on diary categories and assignment criteria rather than specifying new external data sources, and publicly available POMS and performance plans likewise document process and volume priorities without listing Medicare/Medicaid as explicit data inputs [3] [9]. Therefore the sources do not provide direct evidence that SSA uses Medicare or Medicaid claims data in CDR selection or diary assignment.
5. Competing narratives, funding pressures, and practical implications
External commentators and practitioner guides reiterate that SSA uses profiling to flag higher‑likelihood cases for review, a claim consistent with SSA’s own descriptions of a scoring model, but these sources add no evidence about claim‑level health data use [10]. SSA research also highlights that CDR volumes and timing are influenced by program integrity funding and administrative priorities—institutional pressures that can shape how aggressively profiling and diary scheduling are applied across beneficiary groups [4] [9]. Where transparency is limited—about the profiling model’s inputs, any use of Medicare/Medicaid claims, or algorithmic thresholds—researchers and advocates face an evidence gap that the agency’s public documents have not closed [1] [4].