Have independent audits verified Duck.ai's metadata‑stripping claims and can those audits be publicly reviewed?

Checked on January 26, 2026
Disclaimer: Factually can make mistakes. Please verify important information or breaking news.

Executive summary

DuckDuckGo prominently states that Duck.ai strips personally identifying metadata, such as IP addresses, before forwarding prompts to model providers, and multiple tech outlets have repeated or summarized that claim based on company documentation [1] [2] [3]. However, reporting and commentary in the available sources show no publicly released, independent third-party audit that verifies the metadata-stripping implementation end-to-end, nor do they point to audit reports that can be publicly reviewed [4] [3].

1. What Duck.ai claims and where the claim is documented

Duck.ai’s help pages state that “all metadata that contains personal information (for example, your IP address) is completely removed before prompting the model provider,” and its privacy policy repeats that “All metadata that contains personal information … is removed before sending Prompts to underlying model providers” [1] [2]. Tech coverage and product explainers cite those claims, describing Duck.ai as routing requests through DuckDuckGo so that model providers see requests coming from DuckDuckGo rather than from the end user [3] [5]. These remain company statements and product descriptions published by DuckDuckGo itself [1] [2].
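To make the claimed architecture concrete, the sketch below shows how a metadata-stripping relay of this general kind can work: the relay accepts a user's prompt, builds a fresh outbound request containing nothing but the prompt and the relay's own credentials, and calls the provider from its own network address. This is a minimal illustration under stated assumptions, not DuckDuckGo's implementation; the endpoint URL, field names, and header policy are all hypothetical.

```python
# Hypothetical metadata-stripping relay, NOT DuckDuckGo's code. It
# illustrates the documented claim: the provider receives the prompt
# over the relay's own connection, with no user-identifying metadata.
import requests  # third-party HTTP client (pip install requests)

PROVIDER_URL = "https://api.example-provider.test/v1/complete"  # assumed endpoint
PROVIDER_KEY = "relay-owned-api-key"  # the relay's credential, not the user's

def relay_prompt(prompt: str) -> str:
    """Forward only the prompt text, building outbound headers from scratch."""
    # Allowlist approach: construct headers fresh rather than copying the
    # user's request and deleting fields, so a forgotten header (say,
    # X-Forwarded-For) cannot slip through by accident.
    headers = {
        "Authorization": f"Bearer {PROVIDER_KEY}",
        "Content-Type": "application/json",
    }
    # The outbound connection originates from the relay, so the provider's
    # logs record the relay's IP address, not the end user's.
    resp = requests.post(PROVIDER_URL, json={"prompt": prompt},
                         headers=headers, timeout=30)
    resp.raise_for_status()
    return resp.json().get("completion", "")
```

Whether a production system builds requests from scratch or copies and scrubs them is exactly the kind of implementation detail that policy text cannot settle and an independent audit could.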

2. Independent verification in reporting: what exists and what is missing

Independent news outlets and reviewers have summarized or tested user-facing behavior: The Verge and ZDNet repeat the metadata-stripping claim and note contractual assurances with providers about deletion and non-training of chats [3] [6]. Some reviews describe manual tests whose results were consistent with limited location leakage or with the documented local-storage behavior, but these are ad hoc user checks rather than formal security audits [7] [8]. Crucially, at least one critical commentary notes that DuckDuckGo “never went through a formal privacy audit” and that the only external check referenced was a complaint investigation verifying the claims were not false advertising, not an exhaustive technical audit; that piece frames the absence of a formal, public audit as the central concern [4]. None of the provided sources point to a named, independent audit firm releasing a technical report that verifies metadata stripping across network, application, and provider boundaries.

3. Can any audits be publicly reviewed?

Based on available reporting and the linked company pages, there is no publicly posted, comprehensive third-party audit report of Duck.ai's metadata handling to review; what exists is primarily DuckDuckGo's own policy text plus reporting that summarizes or informally tests behavior [1] [2] [3]. Where commentary references independent scrutiny (for example, a complaint investigation), sources indicate that the scrutiny confirmed the claims were not false advertising but did not produce a detailed, public technical report open to forensic examination [4]. A reader seeking a formal audit document to review will therefore not find one in the supplied reporting.

4. Reasons the gap matters and alternative perspectives

Privacy experts and critics flag the absence of a formal audit as meaningful because metadata leaks can occur at layers beyond a simple prompt rewrite: DNS lookups, TLS handshake metadata, provider-side logs, or simple misconfigurations can expose identifiers even when policies promise removal [4]. Conversely, DuckDuckGo's approach (calling models on users' behalf, storing recent chats locally, and contractual deletion agreements with providers) offers practical mitigations, and it has persuaded reviewers and writers who tested the product and reported behavior consistent with the claims [3] [9] [6]. Those pragmatic safeguards reduce certain risks, but they are not the same as independent verification of the implementation at scale, which the available sources do not document [3] [9].
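To illustrate why layer-by-layer checks matter, the sketch below shows one informal way a tester could observe what actually reaches the far side of a relay: point the relay under test at a local echo endpoint and inspect the source address and headers that arrive. This is a hypothetical harness written for illustration; the port, handler, and response shape are all assumptions, and no cited source describes this as an audit method.

```python
# Hypothetical test harness: a local stand-in "provider" that prints
# exactly what a relay sends it, so a tester can compare the headers and
# source address it receives against what the user's client sent.
from http.server import BaseHTTPRequestHandler, HTTPServer

class EchoProvider(BaseHTTPRequestHandler):
    def do_POST(self):
        # Record the peer address and every header the "provider" sees.
        print(f"connection from {self.client_address[0]}")
        for name, value in self.headers.items():
            print(f"  {name}: {value}")
        length = int(self.headers.get("Content-Length", 0))
        print(f"  body: {self.rfile.read(length).decode(errors='replace')}")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(b'{"completion": "ok"}')

if __name__ == "__main__":
    # Point the relay under test at http://127.0.0.1:8080 and inspect
    # what actually arrives here.
    HTTPServer(("127.0.0.1", 8080), EchoProvider).serve_forever()
```

A harness like this covers only the application layer in a controlled setup; it cannot observe DNS resolution, transport-layer metadata, or what a real provider logs in production, which is why critics argue that informal tests are no substitute for a formal audit [4].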

5. Bottom line for a reader weighing trust

The factual record in the provided sources shows strong, consistent company assertions and positive product reviews that observe privacy‑forward behavior, but no public, comprehensive independent audit report that verifies Duck.ai’s metadata‑stripping claims end‑to‑end; commentary explicitly flags the lack of a formal audit as a gap [1] [2] [3] [4]. Those seeking assurance beyond company policies and informal tests should look for a named third‑party security/privacy audit made public or request one from DuckDuckGo; as of the cited reporting, such a publicly reviewable audit is not documented [4] [3].

Want to dive deeper?
Has DuckDuckGo commissioned or published a third‑party security/privacy audit of Duck.ai since 2025?
What technical methods can independently verify that an AI intermediary strips metadata before reaching model providers?
How do model providers (OpenAI, Anthropic, Azure OpenAI) document their receipt and deletion of metadata from intermediaries?