Which trackers consolidate multi‑state lawsuits versus counting each state separately, and how do their methodologies differ?
Executive summary
Two types of “trackers” dominate reporting: litigation databases that treat coordinated attorney‑general coalitions or multistate class suits as single multistate actions (examples: the State Litigation and AG Activity Database and NAAG’s multistate databases), and 50‑state or side‑by‑side survey tools that catalogue each state’s law or individual state filings separately (examples: Georgetown Law’s 50‑state survey guides and the HR360 and VirgilHR comparison tools) [1] [2] [3] [4] [5] [6]. The difference is not merely cosmetic: consolidated trackers encode coalition‑level metadata (number of states, lead plaintiffs, docket) and treat a joint suit as one event, while state‑by‑state trackers index statutes or filings by jurisdiction and leave it to the researcher to aggregate entries before cross‑state coordination becomes visible [1] [3] [4].
1. Which trackers consolidate multistate lawsuits: agencies and litigation databases
The clearest consolidators are databases created to map coordinated state litigation: the State Litigation and AG Activity Database’s “Searchable List of Multistate Lawsuits” presents multistate AG actions as discrete entries that include the number of participating states, links to complaint text, and filters for outcomes and states, treating a coalition suit as one record rather than dozens of separate state entries [1] [2]. The National Association of Attorneys General (NAAG) likewise maintains a Multistate Antitrust Litigation Database and explicitly defines multistate litigation as coordinated actions among two or more AGs; NAAG’s intent is to provide a single authoritative record of such coordinated enforcement rather than per‑state fragments [3]. Major secondary sources such as Ballotpedia rely on the State Litigation and AG Activity Database to enumerate and compare multistate coalitions across presidential administrations, showing how consolidated trackers become the base dataset for other reporting [7].
2. Which trackers count each state separately: 50‑state surveys and comparison tools
By contrast, legal research guides and commercial comparison tools emphasize state‑by‑state treatment. Georgetown Law’s research guides and similar “50‑state survey” approaches catalog statutory language and case law for each jurisdiction and recommend side‑by‑side charts when practitioners need to see differences among states, in effect counting legal rules or filings per state rather than bundling them as one multistate event [4] [8]. Private tools such as HR360’s Multi‑State Laws Comparison Tool and VirgilHR’s Multi‑State Comparison Tool produce customizable side‑by‑side charts of state labor and employment rules, reflecting a methodology that treats each state as a separate data point that users must compile themselves to get a cross‑state picture [5] [6]. Law library and academic guides similarly advise constructing bespoke fifty‑state surveys where a consolidated “multistate lawsuit” label would obscure meaningful local variations [4] [9].
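As a rough illustration of what “counting each state separately” means in practice, the sketch below (Python, with invented states, topics, and rules; it does not reproduce HR360’s or VirgilHR’s actual data or schema) pivots per‑state survey rows into the kind of side‑by‑side chart these tools and 50‑state surveys produce.

```python
# Hypothetical per-state survey rows: one entry per (state, topic), as a
# 50-state survey or comparison tool would store them. All values invented.
rows = [
    {"state": "Alpha", "topic": "paid_sick_leave", "rule": "required"},
    {"state": "Alpha", "topic": "noncompete_ban", "rule": "no"},
    {"state": "Beta",  "topic": "paid_sick_leave", "rule": "not required"},
    {"state": "Beta",  "topic": "noncompete_ban", "rule": "yes"},
]

# Pivot into a side-by-side chart: one row per state, one column per topic.
topics = sorted({r["topic"] for r in rows})
chart = {}
for r in rows:
    chart.setdefault(r["state"], {})[r["topic"]] = r["rule"]

header = ["state"] + topics
print("\t".join(header))
for state in sorted(chart):
    print("\t".join([state] + [chart[state].get(t, "-") for t in topics]))
```

The point of the sketch is that the unit of analysis is the jurisdiction: nothing in the data identifies a coordinated multistate action unless the researcher joins the rows on something else.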
3. How their methodologies differ — aggregation, metadata and user needs
Consolidated litigation trackers aggregate at the lawsuit level: they record coalition membership, lead counsel, case caption, docket and outcomes, and present multistate suits as unitary events useful for trend analysis (how often AGs coordinate, which administrations face more coalitions) [1] [2] [7]. State‑by‑state surveys, however, index legal rules, statutory elements, or individual state filings and require the researcher to assemble cross‑state comparisons; their methodology prioritizes granular variation (what law applies in State X versus State Y) over capturing coordination as a single event [4] [8] [5].
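To make that contrast concrete, here is a minimal sketch of the two record shapes (Python, with entirely hypothetical captions, dockets, and field names, not the schema of the State Litigation and AG Activity Database, NAAG’s databases, or any survey tool): a consolidated tracker stores one lawsuit‑level record whose coalition membership is a field, while a state‑by‑state tracker stores one row per jurisdiction and leaves the aggregation to the researcher.

```python
from collections import defaultdict

# Consolidated tracker: one record per multistate suit, coalition metadata inline.
# (Hypothetical fields and values, for illustration only.)
consolidated = [
    {
        "caption": "State of A v. Example Agency",
        "docket": "1:25-cv-0001",
        "lead_states": ["A"],
        "participating_states": ["A", "B", "C", "D"],
        "outcome": "pending",
    },
]

# State-by-state tracker: one row per jurisdiction; coordination is only implicit.
per_state = [
    {"state": "A", "caption": "State of A v. Example Agency", "docket": "1:25-cv-0001"},
    {"state": "B", "caption": "State of A v. Example Agency", "docket": "1:25-cv-0001"},
    {"state": "C", "caption": "State of A v. Example Agency", "docket": "1:25-cv-0001"},
    {"state": "D", "caption": "State of A v. Example Agency", "docket": "1:25-cv-0001"},
]

# Coalition-level questions are a single lookup in the consolidated shape ...
print(len(consolidated[0]["participating_states"]))  # prints 4: states in the coalition

# ... but require aggregation in the per-state shape.
coalitions = defaultdict(set)
for row in per_state:
    coalitions[row["docket"]].add(row["state"])
print({docket: len(states) for docket, states in coalitions.items()})  # {'1:25-cv-0001': 4}
```

Trend questions such as how many coalitions formed in a given year fall out of the first shape directly; with the second shape they exist only after a join or group‑by like the one above.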
4. Why methodology matters — litigation strategy and choice‑of‑law dynamics
The choice between consolidation and per‑state counting has material legal consequences: multistate or nationwide class actions pose complex choice‑of‑law questions and may be treated as a single case even when state laws diverge. Databases that preserve the coalition frame thus help researchers study coordination strategies and outcomes, while 50‑state surveys are essential when assessing whether substantive variations in state law could defeat class certification or alter remedies [10] [11]. Practitioners and reporters therefore rely on both styles: consolidated trackers to measure collective enforcement, and state surveys to drill into the legal heterogeneity that can make or break a multistate action [3] [10] [11].
5. Limits, conflicts and practical guidance on interpretation
Reporting based on a consolidated tracker can obscure state‑level nuance—what looks like a single “multistate win” may rest on differing legal theories or partial participation—while state‑by‑state trackers can understate the political and strategic force of coordinated suits because they scatter the coalition into many entries [1] [4]. Sources used here document both cataloging approaches but do not prescribe a universal standard; researchers should consult consolidated litigation databases (for coalition scope and docketing) alongside 50‑state surveys (for substantive legal differences) to get a full picture [1] [4] [3].