What are the differences between MMSE, MoCA, and SLUMS cognitive tests for seniors?
Executive summary
The three common brief screens—MMSE, MoCA and SLUMS—are all 30‑point instruments used to flag cognitive impairment, but they differ in domains tested, sensitivity for mild cognitive impairment (MCI), administration time, availability/cost and the depth of validation across populations (MoCA generally shows higher sensitivity for MCI; MMSE is briefer and more limited; SLUMS emphasizes executive function and is freely available) [1] [2] [3].
1. Origins and basic format: short histories that matter
The MMSE is the oldest and most widely known tool, introduced in 1975 and scored to 30 points; it focuses heavily on memory and language and is brief to administer [4] [5]. The MoCA is a newer 30‑point test developed specifically to detect milder deficits missed by the MMSE and covers additional domains such as executive function, visuospatial skills and abstraction [6] [1]. SLUMS (Saint Louis University Mental Status) is another 30‑point, clinician‑administered screen designed as an alternative to MMSE/MoCA with added tests of executive function [6] [3].
2. What cognitive domains each emphasizes: why scores diverge
MMSE emphasizes memory and language but gives little weight to executive function and visuospatial skills, which produces ceiling effects in people with mild deficits [4] [1]. MoCA deliberately samples broader domains—executive function, attention, visuospatial skills and abstraction—so it spreads MCI and healthy control scores across a wider range and reduces ceiling effects [1]. SLUMS includes animal naming, clock drawing and logical memory tasks that target executive function and verbal fluency, helping it detect early executive decline that MMSE can miss [3].
3. Sensitivity and specificity: who finds mild impairment best
Multiple reports and a meta‑analysis show MoCA generally outperforms MMSE at detecting MCI: MoCA “better meets the criteria” for MCI detection in older adults and shows less of a ceiling effect than MMSE [7] [1]. Reported sensitivity/specificity figures favor MoCA (e.g., ~91% sensitivity for MoCA vs ~81% for MMSE in one pooled report), while SLUMS and MMSE show similar overall diagnostic performance in some studies, though SLUMS may be more sensitive than MMSE for MCI in particular samples [4] [3]. Direct comparisons find all three correlate strongly yet classify some patients differently, with MoCA and SLUMS flagging deficits that MMSE can miss [8] [9].
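To make the headline figures concrete, here is a minimal Python sketch of how sensitivity and specificity are computed from a screen's results against a reference diagnosis. The counts are hypothetical, chosen only to mirror the approximate percentages above, and do not come from any cited study.

```python
# Illustrative only: how sensitivity and specificity figures like those above
# are derived. The counts are hypothetical, not taken from any cited cohort.

def sensitivity(true_pos: int, false_neg: int) -> float:
    """Proportion of truly impaired people that the screen flags."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """Proportion of cognitively healthy people that the screen clears."""
    return true_neg / (true_neg + false_pos)

# Hypothetical sample: 100 people with confirmed MCI, 100 healthy controls.
results = {
    "MoCA": {"tp": 91, "fn": 9, "tn": 81, "fp": 19},   # assumed counts
    "MMSE": {"tp": 81, "fn": 19, "tn": 85, "fp": 15},  # assumed counts
}

for name, r in results.items():
    print(f"{name}: sensitivity={sensitivity(r['tp'], r['fn']):.0%}, "
          f"specificity={specificity(r['tn'], r['fp']):.0%}")
```

The trade-off this makes visible is the usual one for screens: a more sensitive test (MoCA here) misses fewer true cases but may flag more healthy people for follow-up.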
4. Practical differences: time, cost and accessibility
Administration time varies: MMSE is brief, SLUMS takes about 7 minutes, and MoCA typically takes longer (about 10–15 minutes), reflecting its broader content [2]. SLUMS is freely available in the public domain; MMSE is no longer free and is constrained by copyright, which has pushed some clinics toward MoCA or SLUMS [8] [3]. In many settings, official MoCA use requires training and permission, a practical factor in clinics' implementation decisions [2].
5. Reliability and test‑retest performance: similar, but evidence depth differs
Studies comparing test‑retest reliability report that MMSE, MoCA and SLUMS show broadly similar reliability for monitoring cognitive change in dementia, though the volume and quality of validation studies vary by instrument [10]. Systematic reviews find MoCA and MMSE often carry moderate‑quality evidence in TBI and other conditions, while SLUMS sometimes has lower‑quality or fewer studies; one review rated SLUMS evidence as low or very low in some contexts [11].
6. Where each tool is strongest and weakest: a clinician’s selection map
Use MMSE when you need a quick, familiar screening focused on memory/language and when brevity is essential, but accept that MMSE misses subtle executive deficits and has ceiling effects [4] [8]. Use MoCA when you want higher sensitivity for MCI and broader domain coverage, accepting longer administration and the need for training/score conversion considerations [1] [2]. Use SLUMS when you want a free test that adds executive tasks and may outperform MMSE for MCI in some populations, but recognize SLUMS has fewer large‑scale validation studies and mixed quality evidence in some clinical groups [3] [11].
7. Caveats, conversions and population issues: read the fine print
Scores are not interchangeable without conversion: studies report strong MMSE–MoCA correlations but also substantial differences in score distributions for MCI and healthy controls, so applying fixed cutoffs requires caution and sometimes equi‑percentile conversion models [9] [1]. Validation, cutoffs and ROC performance vary by education, language and sample (veteran vs community samples), so instrument choice must reflect the examined population and clinical question [3] [12].
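As a rough illustration of why cutoffs do not transfer directly between tests, the sketch below (Python with NumPy, using simulated scores rather than any published norms) shows the core idea behind equi-percentile conversion: a score on one test is mapped to the score occupying the same percentile rank in the other test's distribution.

```python
# Minimal sketch of equi-percentile score conversion between two 30-point
# screens. Scores are simulated, not real norms; in practice, published
# MMSE-MoCA crosswalk tables derived from large samples should be used.
import numpy as np

rng = np.random.default_rng(0)
# Simulated paired scores for one hypothetical sample (values are assumptions).
mmse_scores = np.clip(np.round(rng.normal(27, 2.0, 500)), 0, 30)
moca_scores = np.clip(np.round(rng.normal(24, 3.0, 500)), 0, 30)

def equivalent_score(score: float, source: np.ndarray, target: np.ndarray) -> float:
    """Map a score to the target test's score at the same percentile rank."""
    pct = (source <= score).mean() * 100       # percentile rank on the source test
    return float(np.percentile(target, pct))   # score at that rank on the target test

# Example: the MoCA score sitting at roughly the same percentile as an MMSE of 26.
print(equivalent_score(26, mmse_scores, moca_scores))
```

In clinical use, such mappings come from published crosswalk studies in the relevant population rather than being re-derived locally; the point here is only that identical raw scores on two tests generally sit at different percentile ranks.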
Limitations: available sources do not mention proprietary details about current licensing fees for MMSE or the latest 2025 instrument changes; all claims above are drawn from the cited studies and reviews [8] [3] [4] [1].