
The claim attributed to Larry Ellison regarding constant monitoring is: "Citizens will be on their best behavior because we are constantly recording and reporting everything that's going on."

Checked on November 15, 2025
Disclaimer: Factually can make mistakes. Please verify important info or breaking news.

Executive summary

Larry Ellison told investors in September 2024 that AI could power continuous surveillance — “Every police officer is going to be supervised at all times” and “Citizens will be on their best behavior because we are constantly recording and reporting everything that’s going on,” comments carried by Fortune, Ars Technica and others [1] [2]. Reporting since then highlights both Oracle’s pitch that AI surveillance can reduce abuse and the privacy, bias and civil‑liberties concerns raised by critics and commentators [3] [4].

1. What Ellison actually said — plain transcript and context

Ellison made the remarks during an Oracle financial analysts meeting in September 2024, where he described a future in which police body cams, vehicle cameras, drones and other sensors would be continuously monitored by AI that would “report” problems and supervise officers — and, by extension, citizens — in real time [5] [2]. Multiple outlets quoted him saying officers “are going to be supervised at all times” and that “citizens will be on their best behavior because we are constantly recording and reporting everything that’s going on” [1] [2].

2. Oracle’s business pitch: infrastructure for a surveillance backbone

Reporting frames Ellison’s comments as both a philosophical defense of pervasive monitoring and a market pitch: Oracle positions its cloud, AI and data‑management products as the technical backbone for large‑scale, real‑time analytics on camera and sensor feeds — a lucrative government and commercial market [3] [6]. The Register and other outlets note that Oracle has long worked with governments and operates in markets that would use such infrastructure [3] [6].

3. Proponents’ stated benefits: accountability and safety

Ellison and supporters argue continuous AI monitoring could make policing more accountable (AI would flag officer misconduct and report it), secure schools through rapid recognition of intruders, and help public services with population‑level analytics for healthcare and disaster prevention [2] [6]. Fortune and Ars Technica report Ellison framed these capabilities as life‑saving and efficiency gains [1] [2].

4. Critics’ counterarguments: privacy, bias, and efficacy gaps

Critics warn that existing surveillance tools carry privacy harms, risk exacerbating racial or socioeconomic bias, and have mixed evidence on crime reduction. Reporting points to prior programs — such as predictive policing tools that increased surveillance of Black and Latino communities — as cautionary examples [5] [4]. Commentators also liken Ellison’s vision to elements of China’s social‑credit and mass‑camera systems, raising concerns about behavioral control and civil liberties [1] [7].

5. Evidence status: what reporting documents, and what it does not

Contemporary coverage documents Ellison’s public remarks and Oracle’s promotion of AI/cloud services [1] [3]. Reporting also compiles examples of technologies in trial or limited use — automated CCTV trials and drone applications — but does not establish that the nationwide, flawless system Ellison described already exists or is operationally mature at scale in the U.S. [2] [4]. Available sources do not mention a completed, nationwide system that matches Ellison’s full description without caveats [2] [4].

6. Media framing and the political economy of coverage

Different outlets emphasize different angles: business press framed the remarks as a market outlook for Oracle’s AI/cloud opportunity [6], tech publications raised civil‑liberties alarms and compared the idea to Orwellian or Chinese models [1] [2], and opinion sites and investigative pieces catalog real‑world uses that echo parts of Ellison’s pitch [4] [8]. These editorial choices reflect implicit agendas: vendor coverage often focuses on opportunity, while watchdog coverage prioritizes rights and harms [6] [4].

7. What to watch next — policy, contracts, and audits

Follow procurement and contract announcements (which agencies are buying AI analytics and cloud capacity), independent audits of deployed systems, and legal/regulatory moves on facial recognition and ubiquitous recording. Reporting notes Oracle’s continued role as an infrastructure provider and recent high‑profile cloud deals, which make its positioning relevant to future procurement debates [6] [9].

8. Bottom line for citizens and policymakers

Ellison publicly advocated a near‑ubiquitous, AI‑monitored surveillance architecture and positioned Oracle to supply parts of it; reporting shows clear potential benefits claimed by proponents and documented risks flagged by critics, but no proof that a fully integrated, risk‑free system already exists [1] [4] [2]. Policymakers and the public must weigh promised gains in accountability and safety against documented privacy harms, bias in deployed systems, and the political implications of always‑on monitoring [5] [4].

Want to dive deeper?
What exactly did Larry Ellison mean by 'constantly recording and reporting' citizens' behavior?
How would pervasive recording and reporting affect privacy rights under current U.S. law?
Which technologies enable the kind of constant monitoring Ellison described and who controls them?
What are the potential societal benefits and harms of constant surveillance-driven behavior change?
Have similar proposals for pervasive monitoring been implemented elsewhere and what were the outcomes?