
Adam Cochran (adamscochran.eth) | May 06, 2025 12:29
Being motivated to work on OCR for Sanskrit after working on GPT-5 makes me wonder if AI companies have run out of human-generated content for training. The largest models have trained on every internet post, every page, and every book in history, in every deciphered major language. We have to be hitting not just a plateau but a ceiling soon, and this might be an indicator of that.

What comes after human training data is a massive problem in scaling the current LLM training approach. As we've seen, recursively training on LLM-generated data that is only human-reviewed, not human-produced, yields worse and worse quality results with each iteration, due to subtle differences that we as humans can't really perceive in the screening process at first.

The company that figures out how to keep training after hitting the corpus ceiling is likely to be the winner of AGI.
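The degradation the post describes is often called "model collapse," and the core mechanism can be sketched with a toy experiment that is not from the post itself: repeatedly fit a simple model to samples drawn from the previous generation's model, so each generation trains only on synthetic output. A minimal, hypothetical Python sketch (fitting a Gaussian stands in for "training"; parameter names are illustrative):

```python
import random
import statistics

def fit_gaussian(samples):
    # "Train" a model on the data: estimate its mean and spread.
    return statistics.mean(samples), statistics.pstdev(samples)

random.seed(0)
# Generation 0: the "human-produced" corpus, drawn from a standard normal.
data = [random.gauss(0.0, 1.0) for _ in range(500)]

for gen in range(6):
    mu, sigma = fit_gaussian(data)
    print(f"gen {gen}: mean={mu:+.3f} stddev={sigma:.3f}")
    # Next generation is trained ONLY on the previous model's output,
    # so estimation error compounds instead of being corrected by fresh data.
    data = [random.gauss(mu, sigma) for _ in range(500)]
```

Each refit inherits the previous generation's sampling error, so the estimated parameters drift with no anchor back to the true distribution; with smaller samples or more generations the spread tends to collapse, which mirrors the "worse and worse quality over each iteration" the post describes.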
Timeline

May 22, 01:53【Kled helps AI developers find training data】
May 14, 14:49【Tencent has enough high-end chips to train future models】
May 13, 13:35【Privasea is a technology project for AI and data privacy】
May 03, 15:30【Training large language models requires trillions of words】
Apr 30, 17:09【Autonomous artificial intelligence is inevitable】
Apr 30, 09:57【DeepSeek releases open-source model with 671 billion parameters】
Apr 28, 12:31【Introduction to the new gmFLOCK gameplay in FLock.io】
Apr 19, 04:00【FlashbackLabs trains artificial intelligence through federated learning】
Apr 16, 07:48【Analysis of the effectiveness of push parameters】
Apr 08, 10:00【Rendering network changes GPU access through decentralized rendering】
