
What model are you based off?

Checked on November 10, 2025
Disclaimer: Factually can make mistakes. Please verify important info or breaking news.

Executive Summary

The original question “What model are you based off?” is reported inconsistently across the sourced analyses: some identify the assistant as Grok‑1 (314B MoE), others claim Grok‑3 or Grok‑4, while several sources explicitly state the available articles do not confirm which model powers this assistant. The most verifiable public technical disclosure in the dataset identifies Grok‑1 as a 314‑billion‑parameter Mixture‑of‑Experts model released by xAI in March 2024, but competing claims and later references to Grok‑3 and Grok‑4 introduce uncertainty about which variant (or a different model entirely) underlies the assistant in question [1] [2] [3]. This analysis extracts the key claims, compares timing and specificity across sources, and highlights where the evidence is definitive versus where it is ambiguous or missing.

1. Conflicting Model Names: Grok‑1, Grok‑3, Grok‑4 — Who’s Right?

Multiple analyses in the dataset assert different model names as the assistant’s base. One source explicitly identifies Grok‑1, describing it as a 314‑billion‑parameter Mixture‑of‑Experts (MoE) large language model whose architecture details xAI released and open‑sourced; that claim specifies the architecture and parameter count and is anchored to a March 2024 disclosure [1] [2]. Other entries assert the assistant is built on Grok‑3 or Grok‑4, attributing features such as expanded context windows, advanced reasoning, function calling, and optimized Transformer variants to those later iterations [4] [5] [6]. The dataset therefore contains direct, specific technical attribution to Grok‑1 alongside separate, less concrete assertions about Grok‑3/4, a conflict in naming and technical claims that the sources themselves do not reconcile.

2. The Strongest Verifiable Claim: Grok‑1’s Technical Disclosure

Among the provided materials, the most technically detailed and dated disclosure concerns Grok‑1: a 314‑billion‑parameter MoE model trained by xAI, described in an open‑release document and covered in March 2024 reporting. That source details the model’s parameter count and implementation stack, including JAX and Rust, and notes that the open‑sourced base model was the raw pre‑training checkpoint, not yet fine‑tuned for dialogue [1] [2]. That specificity (parameter count, MoE architecture, engineering stack, and release timing) constitutes the strongest verifiable evidence in this set that xAI publicly disclosed Grok‑1’s existence and design, and any counterclaim would need equally concrete sourcing to supersede it.
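
To make the “Mixture‑of‑Experts” term concrete, the sketch below shows the basic routing idea in JAX (the framework the disclosure cites). It is purely illustrative: the expert count, top‑k value, and dimensions are toy assumptions chosen for clarity, and nothing here reproduces Grok‑1’s actual code or configuration.

```python
# Minimal, illustrative Mixture-of-Experts (MoE) routing sketch in JAX.
# Hypothetical toy values throughout; this is NOT Grok-1's implementation.

import jax
import jax.numpy as jnp

NUM_EXPERTS = 8   # illustrative assumption, not Grok-1's actual expert count
TOP_K = 2         # experts activated per token (illustrative assumption)
D_MODEL = 16      # toy hidden size

def init_params(key):
    k_router, k_experts = jax.random.split(key)
    return {
        # Router: a linear map from a token vector to a score per expert.
        "router": jax.random.normal(k_router, (D_MODEL, NUM_EXPERTS)) * 0.02,
        # Experts: a stack of simple feed-forward weight matrices.
        "experts": jax.random.normal(k_experts, (NUM_EXPERTS, D_MODEL, D_MODEL)) * 0.02,
    }

def moe_layer(params, x):
    """Route each token to its top-k experts and mix their outputs.

    x: (num_tokens, D_MODEL)
    """
    logits = x @ params["router"]                 # (tokens, NUM_EXPERTS)
    weights, idx = jax.lax.top_k(logits, TOP_K)   # pick TOP_K experts per token
    weights = jax.nn.softmax(weights, axis=-1)    # normalize their mixing weights

    # Gather only the selected experts' weights and apply them per token.
    selected = params["experts"][idx]             # (tokens, TOP_K, D_MODEL, D_MODEL)
    expert_out = jnp.einsum("td,tkdh->tkh", x, selected)
    return jnp.einsum("tk,tkh->th", weights, expert_out)

if __name__ == "__main__":
    key = jax.random.PRNGKey(0)
    params = init_params(key)
    tokens = jax.random.normal(key, (4, D_MODEL))  # 4 toy "tokens"
    print(moe_layer(params, tokens).shape)         # (4, 16)
```

The point of the pattern is that only the selected experts’ weights participate in each token’s computation, which is why an MoE model with a very large total parameter count activates far fewer parameters per token than a dense model of the same size.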

3. Later References and Feature Claims: Grok‑3 and Grok‑4 Appear, But Details Vary

Separate analyses reference Grok‑3 and Grok‑4, attributing improvements such as a two‑million‑token context window, function calling, structured outputs, and “lightning‑fast” reasoning at lower cost, or describing Grok‑3 as a reasoning model with multimodal learning and optimized parameter sharing [4] [5] [6] [3]. Those accounts are less uniform: some emphasize feature sets and deployment on platforms such as X and Grok.com, while others make broad claims about architectural similarity to enhanced Transformers. The dataset lacks a single, dated technical disclosure for Grok‑3 or Grok‑4 with the level of detail provided for Grok‑1, so these references point to product evolution or marketing claims rather than a verifiable architectural disclosure tied to this assistant.
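
For readers unfamiliar with the feature vocabulary above, the snippet below illustrates in generic terms what “function calling” and “structured outputs” typically mean for chat‑model APIs. Every field name and the model identifier are hypothetical placeholders; this is not xAI’s documented request schema, and it does not confirm which Grok variant, if any, exposes these features.

```python
# Generic, hypothetical sketch of "function calling" and "structured outputs".
# Field names are illustrative placeholders, not a real provider's schema.

import json

# The caller advertises a tool (function) the model may ask to invoke...
weather_tool = {
    "name": "get_weather",                      # hypothetical tool name
    "description": "Look up current weather for a city.",
    "parameters": {                             # JSON-Schema-style parameter spec
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

# ...and a "structured output" contract: the reply must be valid JSON
# matching an agreed shape, rather than free-form text.
request = {
    "model": "<grok-variant>",                  # placeholder, not a confirmed model id
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": [weather_tool],
    "response_format": {"type": "json_object"},
}

print(json.dumps(request, indent=2))
```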

4. Absences Matter: Several Sources Say “No Confirmation”

Crucially, a number of the analyses explicitly state that the referenced articles do not identify the model that powers this specific assistant; they discuss Grok variants generally or xAI product announcements but stop short of confirming the assistant’s underlying model [7] [3]. Those sources function as cautionary notes: they show that public reporting on xAI’s models exists, but linking any particular public article to the claim “this assistant is based on X model” is not always supported. Where a claim attributes an assistant’s identity to Grok‑1, Grok‑3, or Grok‑4 without citing a direct, dated disclosure that names the assistant, the evidence is weaker and may reflect inference or conflation.

5. What This Means: Best Supported Conclusion and Where Uncertainty Remains

The dataset’s best‑supported, dated public technical disclosure is Grok‑1 as a 314B MoE model (March 2024), making Grok‑1 the most defensible identification where source specificity matters [1] [2]. However, later references to Grok‑3 and Grok‑4 introduce plausible successor models or product variants with advanced features, and several analyses expressly decline to confirm the assistant’s model—leaving an evidentiary gap [4] [5] [6] [3]. For a definitive answer about which model powers this assistant, a direct, dated statement from the operator of the assistant linking it to a specific xAI model (Grok‑1, Grok‑3, Grok‑4, or another) is required; absent that, the most robust claim in this dataset is that xAI publicly released Grok‑1, while attribution of this assistant to any specific Grok variant remains unconfirmed [1] [2] [7].

Want to dive deeper?
What is the Grok-1 large language model?
Who founded xAI and developed Grok?
How does Grok AI differ from OpenAI's GPT models?
What are the key features of Grok's training data?
What are the recent updates and versions of Grok AI by xAI?