Author: CJ_Blockchain
On February 3, 2025, a model named DeepSeek-R1 quietly went live on China's National Supercomputing Internet platform.
Within the following month, its performance, rivaling top closed-source models, combined with "cabbage-price" (dirt-cheap) training costs, swept it across the globe.
It triggered a sharp sell-off in US AI stocks and kicked off the "DeepSeek moment" for Chinese AI.

On March 10, 2026, Bittensor's Subnet 3, Templar, announced the completion of the largest decentralized large language model (LLM) pre-training run in history: Covenant-72B.
72 billion parameters, pre-trained on the high-quality DCLM corpus, executed entirely and permissionlessly through the Bittensor Subnet 3 network, with more than 70 independent nodes participating freely.
Bittensor has entered its own DeepSeek moment.
1. Templar (SN3): Paradigm Shift from Data Collection to Core Training
Templar grew out of Omega Labs' SN3, which was originally focused on collecting and mining multimodal data. As the Bittensor mechanism evolved, the subnet made a strategic leap from "data mover" to "model smith."
Today, Templar positions itself as global distributed pre-training infrastructure for large models. It pools heterogeneous compute from around the world through incentive mechanisms, aiming to tackle the extremely high compute costs and centralized gatekeeping of large-model training. The successful delivery of Covenant-72B validates the maturity of this decentralized production model.
2. Covenant-72B: Breaking the Scale Ceiling of Decentralized Training
Covenant-72B is Templar's milestone deliverable and is currently the largest dense-architecture model ever pre-trained on a decentralized network.
Core Parameters: 72 billion parameters, pre-trained on the high-quality DCLM corpus.
Performance Benchmark: In base-model evaluations, its performance is on par with Meta's Llama-2-70B.
Instruction Optimization: After fine-tuning, Covenant-72B-Chat is strongly competitive on IFEval (instruction following) and MATH (mathematical reasoning), even surpassing closed-source models of similar scale on specific metrics.
Inference Efficiency: The model achieves a throughput of 450 tokens/sec, addressing the response-latency pain point of large models in real-world use.
3. SparseLoCo Algorithm: The Underlying Engine of Decentralized Training
The biggest challenge in training a 72B-scale model over ordinary internet links is the communication-bandwidth bottleneck between nodes. Templar achieved a qualitative breakthrough with its core algorithm, SparseLoCo (a minimal sketch follows the list below):
Extreme Compression: The algorithm transmits only the top 1%-3% of gradient components and quantizes them to 2 bits, drastically reducing bandwidth demand.
Low-frequency Synchronization: Unlike traditional clusters, which synchronize at every step, SparseLoCo lets nodes iterate locally for 15-250 steps before each global synchronization.
Error Compensation: A local gradient-accumulation (error-feedback) mechanism ensures that even with over 97% of the information dropped per sync, the model's convergence accuracy remains intact.
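The sketch below makes the three points above concrete in PyTorch. The function names, the 2% keep ratio, and the uniform 2-bit quantizer are illustrative assumptions drawn from the figures above, not Templar's published implementation:

```python
import torch

def compress(grad: torch.Tensor, k_frac: float = 0.02):
    """Keep only the top ~1-3% of gradient components by magnitude,
    then quantize the survivors to 2 bits (4 uniform levels)."""
    flat = grad.flatten()
    k = max(1, int(k_frac * flat.numel()))
    _, idx = torch.topk(flat.abs(), k)            # largest-magnitude entries
    vals = flat[idx]
    scale = vals.abs().max().clamp(min=1e-12)     # per-sync scale factor
    codes = ((vals / scale + 1.0) * 1.5).round().clamp(0, 3).to(torch.uint8)
    return idx, codes, scale                      # the only data transmitted

def decompress(idx, codes, scale, numel):
    """Rebuild a dense (mostly zero) gradient from the transmitted triplet."""
    flat = torch.zeros(numel)
    flat[idx] = (codes.float() / 1.5 - 1.0) * scale
    return flat

def sync_step(pseudo_grad, residual, k_frac=0.02):
    """Error feedback: whatever compression drops is kept in a local
    accumulator and re-added next sync, so information is delayed, not lost."""
    corrected = pseudo_grad + residual
    idx, codes, scale = compress(corrected, k_frac)
    applied = decompress(idx, codes, scale, corrected.numel())
    new_residual = corrected - applied            # local gradient accumulation
    return (idx, codes, scale), new_residual
```

In a full loop, each node would take its 15-250 local optimizer steps, form a pseudo-gradient from the resulting weight delta, and exchange only the (idx, codes, scale) triplet with peers; the residual never leaves the node and is folded into the next sync.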
This technological path proves that top-tier intelligence can be produced over globally distributed commodity networks, without expensive dedicated interconnects such as InfiniBand.
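As a back-of-envelope check on that claim (my own arithmetic from the figures above, not a number Templar has published), the per-sync payload for a 72B model shrinks by more than an order of magnitude before even counting the amortization from infrequent syncs:

```python
# Rough per-sync payload for a 72B-parameter model, assuming the article's
# figures: top 2% of components kept, 2-bit codes, plus 32-bit indices.
PARAMS = 72e9
dense_fp16_gb = PARAMS * 2 / 1e9               # full fp16 gradient: ~144 GB
kept = PARAMS * 0.02                           # ~1.44e9 surviving components
sparse_gb = (kept * 2 / 8 + kept * 4) / 1e9    # 2-bit codes + 32-bit indices
ratio = dense_fp16_gb / sparse_gb              # roughly 24x smaller per sync
print(f"{dense_fp16_gb:.0f} GB -> {sparse_gb:.1f} GB per sync (~{ratio:.0f}x), "
      f"amortized further over 15-250 local steps")
```

Index overhead dominates this naive estimate; real systems typically compress indices as well, and the 15-250 local steps multiply the effective saving again, which is what brings ordinary home-broadband links into contention.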
4. Industry Evaluation and Market Response
Templar's technical achievements have garnered attention from the mainstream AI community and capital markets:
Authoritative Recognition:
Jack Clark, co-founder of Anthropic, classified Templar as the world's largest active decentralized training network in his analysis report, pointing out that its growth exceeded industry expectations.
Jason Calacanis (host of the All-In Podcast and a prominent Silicon Valley investor) recently gave an in-depth introduction to the Bittensor mechanism on his blog and hinted that he has been buying.
Institutional Layout:
Grayscale continues to increase its holdings in TAO, viewing it as a core position in the decentralized AI track.
DCG established Yuma, a subsidiary dedicated to accelerating the development of the Bittensor (TAO) ecosystem, widely seen as DCG's largest and most direct bet on decentralized AI.
Market Performance:
$TAO: Following Templar's announcement that the 72B model's training was complete, TAO rose more than 30%, showing standout strength amid BTC's volatility.
$Templar (SN3): Templar itself rose 75% in 7 days, becoming the top subnet in capturing Bittensor's current emissions. Its market cap is still only about $70M.

5. Subnet Investment Potential and Ecological Ceiling
Templar's success has opened up a new realm of imagination for the Bittensor ecosystem:
Unlocking the Value Ceiling: For a long time, outsiders dismissed Bittensor as mere "incentivized vaporware." Templar has proven that the protocol can produce commercially competitive productivity tools, shifting TAO's valuation logic from "narrative-driven" to "product-driven."
Potential of Heterogeneous Computing Power: With the development of "heterogeneous SparseLoCo," consumer-grade graphics cards (such as the RTX 4090) could one day participate directly in training trillion-parameter models, democratizing access to training compute.
Deterministic Opportunities of Subnets: Under the dTAO mechanism, the tokens of subnets like Templar, which have hard technical moats and can continuously ship high-performance models, carry extremely high long-term allocation value.
Templar currently sits at a market cap of roughly $75M and an FDV of roughly $350M.
For context, mainstream large-model companies are currently valued at about $840 billion for OpenAI, $350 billion for Anthropic, and $45 billion for MiniMax.
This doesn't mean Templar can compete head-on with these companies, but in a narrative-scarce market where attention is fragmented and faith in decentralization has waned, Templar's emergence is undoubtedly a shot of adrenaline for decentralized AI.
Conclusion
Templar proves that a decentralized environment can not only store data but also produce intelligence. Covenant-72B is just the beginning; with the vertical integration of SN3 (pre-training), SN39 (compute power), and SN81 (reinforcement learning), a decentralized OpenAI prototype running on blockchain is already emerging.
From its inception to today, the Crypto industry has seen countless narratives debunked. The once-popular decentralized storage, decentralized computing power, and decentralized computers seem to have been disproven one by one, which makes it all the more heartening to see projects steadfastly advancing down the decentralized path and delivering results.
The success of Templar is not only Bittensor's DeepSeek moment but may also represent the DeepSeek moment for Crypto.