Does Discord flag accounts that have been restored and put into a user-initiated deletion state 13 times?
Executive summary
There is no public, official Discord policy in the provided reporting that says an account will be automatically “flagged” simply for being restored and placed into user-initiated deletion 13 times; official documentation describes a 15-day pending-deletion window and a broader warning/account-standing system, but it does not list any repeat-restore threshold [1] [2]. Community reports and technical writeups show that users are sometimes re-flagged as spam or disabled after recovery, and that deleted accounts retain technical identifiers that can block future authentication, so repeated restore/delete cycles could plausibly interact with automated systems even though no rule about “13 times” is published [3] [4] [5].
1. Discord’s published mechanics: restore window and account standing
Discord’s own help center explains that user‑initiated deletions enter a “pending deletion” state for 15 days during which the user can restore the account by logging in or resetting their password, and that most user‑initiated deletions are reversible only in that window [1]. Separately, the company documents a Warning System that tracks violations, surfaces an Account Standing view in‑app, and notes that most violations expire after 90 days and stop impacting standing — but that system is framed around violations, not deletions or restores per se [2].
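To make the two published timers concrete, here is a minimal, purely illustrative sketch of the arithmetic: a 15-day pending-deletion window [1] and a 90-day violation expiry [2]. The function names, field shapes, and the assumption that both windows are simple calendar-day spans measured from the triggering event are illustrative choices, not confirmed details of Discord's implementation.

```python
from datetime import datetime, timedelta, timezone

# Illustrative only: models the two published timers, not Discord's actual code.
PENDING_DELETION_WINDOW = timedelta(days=15)  # restore possible within this window [1]
VIOLATION_EXPIRY = timedelta(days=90)         # most violations stop affecting standing after this [2]

def can_still_restore(deletion_requested_at: datetime, now: datetime) -> bool:
    """True while the account is still inside the pending-deletion window."""
    return now - deletion_requested_at <= PENDING_DELETION_WINDOW

def active_violations(violation_times: list[datetime], now: datetime) -> list[datetime]:
    """Keep only violations recent enough to still affect account standing."""
    return [v for v in violation_times if now - v <= VIOLATION_EXPIRY]

if __name__ == "__main__":
    now = datetime.now(timezone.utc)
    print(can_still_restore(now - timedelta(days=10), now))          # True: day 10 of 15
    print(len(active_violations([now - timedelta(days=120)], now)))  # 0: expired after 90 days
```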
2. What the community reports: false flags and recovery frustration
Multiple Discord community support threads record users whose accounts were disabled or marked “spam” and who remained restricted even after going through recovery steps; posters complain of being disabled for “joining too quickly”, of having to pass repeated human verifications, and of then being unable to rejoin servers despite recovering their accounts [3]. Other threads show users pleading for reinstatement after being disabled or deleted and describe opaque automated responses from Trust & Safety, indicating inconsistent outcomes in appeals and recoveries [6] [7].
3. The technical reality: deleted accounts leave traces that can block access
Independent explainers and community posts note that Discord’s deletion processes include technical flags and revocations: deleted accounts can have API tokens and OAuth grants revoked, and they may retain unique internal identifiers that persist in logs and can be used to link activity, meaning deleted or anonymized accounts are not necessarily anonymous to backend systems [4] [5]. Those technical artifacts make it plausible that repeated deletion/restoration cycles could look anomalous to automated spam or trust-and-safety signals even if no policy explicitly bans repetition [4] [5].
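As a thought experiment only, the sketch below shows how a generic anomaly heuristic could count delete/restore cycles when events stay keyed to a persistent internal identifier, which is the linkage risk the writeups describe [4] [5]. The event shape, the threshold value, and the function names are invented for illustration; nothing here is Discord's actual trust-and-safety logic, and no source supplies a real cycle threshold.

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical heuristic, not Discord policy: counts deletion requests per
# persistent user ID, which remains countable even across restorations.

@dataclass(frozen=True)
class AccountEvent:
    user_id: int   # persistent internal identifier (e.g. a snowflake-style ID)
    kind: str      # "delete_requested" or "restored"

CYCLE_THRESHOLD = 5  # arbitrary example value; no public source names a real number

def suspicious_users(events: list[AccountEvent]) -> set[int]:
    """Return IDs whose delete/restore cycle count meets the example threshold."""
    cycles = Counter(e.user_id for e in events if e.kind == "delete_requested")
    return {uid for uid, count in cycles.items() if count >= CYCLE_THRESHOLD}

if __name__ == "__main__":
    history = [AccountEvent(42, "delete_requested"), AccountEvent(42, "restored")] * 6
    print(suspicious_users(history))  # {42}: repeated cycles stand out in the event log
```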
4. Where the record is silent — and why “13 times” can’t be confirmed
None of the provided Discord documentation or community posts cite a numeric threshold such as “13 restores” that triggers an automatic flag or punitive action; the reporting covers the 15‑day restore window, the Warning System’s violation expiration, and anecdotal disabled‑account complaints, but does not supply a rule tying a specific count of restore/delete actions to a flag [1] [2] [3]. Therefore, it cannot be asserted from these sources that Discord flags accounts simply for being restored and put into deletion exactly 13 times.
5. Reasoned conclusion and practical implications
Given the evidence, the strongest supportable claim is that Discord allows restoration within 15 days, separately tracks account standing, and retains technical identifiers that can cause accounts to be blocked or treated as spam, and that community reports show inconsistent results after recovery; no documented policy or authoritative source in the supplied reporting names “13” as a trigger for flagging [1] [2] [3] [4]. In short: the assertion that Discord flags accounts for being restored and deleted 13 times is not supported by the provided sources. However, repeated unusual behavior (including frequent deletions and restorations) could plausibly be flagged by automated systems or attract Trust & Safety scrutiny, and direct confirmation would require Discord’s internal policy or a support response beyond the supplied material [2] [3].