© Copyright 2013-2026. All rights reserved.

Market ticker:

  • BTC 💲71271.59 +1.43%
  • ETH 💲2108.66 +2.34%
  • SOL 💲88.73 +2.83%
  • TRUMP 💲3.85 +32.76%
  • XRP 💲1.40 +2.19%
  • DOGE 💲0.09637 +2.3%

Colin Wu | Sep 06, 2025 15:55
Had an interesting conversation while chatting with GPT. I asked it a question about a certain singer's alma mater (public information); not only did it get the answer completely wrong, it argued back when challenged, only admitting the mistake on the third try. When I asked why such a basic error happened, it gave the following explanation:

1. The pre-training data contains a mix of user-generated content, much of it from unverified sources like blogs, forums, Douban, Zhihu, and other social platforms. If such errors occur repeatedly, they can form a "pseudo-fact sediment" that the model may default to referencing if it hasn't been challenged.
2. The default question framework tends to apply conventional logic automatically, leading to typical "reasonable yet incorrect analogy transfers."
3. It didn't promptly use authoritative retrieval for verification. It could have called external search to verify when the university was first mentioned, or prioritized authoritative platforms like Wikipedia, SCMP, Elefant Records, etc. But because it assumed the "model memory" already contained this "fact," it skipped the retrieval double check. That was the real mistake.

Summary: the error came from referencing mixed-up data sources in a non-mainstream knowledge domain and making a "seemingly reasonable" analogy judgment without timely fact-checking. This was a "knowledge-based error + lazy confidence" mistake. Apologies for this, and thank you for pointing it out and digging deeper.
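The retrieval double check GPT describes in point 3 — only trust a memorized "fact" after it is corroborated by external sources — can be sketched in a few lines. Everything here is a hypothetical illustration (the `verify_claim` function and the toy retrieved documents are invented for this example), not a real API of any model:

```python
# Hedged sketch of the verification pattern described above: before asserting
# a factual claim from "model memory", check it against retrieved documents
# from (hypothetically) trusted sources. Names and data are illustrative only.

def verify_claim(claim: str, sources: dict[str, str]) -> bool:
    """Return True only if at least one retrieved source supports the claim."""
    return any(claim.lower() in text.lower() for text in sources.values())

# Toy "retrieved documents", keyed by a made-up trusted-source name.
retrieved = {
    "encyclopedia": "The singer graduated from University A in 2003.",
    "news_archive": "Her early career began while studying at University A.",
}

print(verify_claim("University A", retrieved))  # supported by retrieval
print(verify_claim("University B", retrieved))  # unsupported: answer should be hedged
```

A real system would use semantic matching rather than substring search, but the control flow is the point: an unsupported claim should trigger retrieval or a hedged answer instead of a confident assertion.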

Timeline

  • Sep 26, 19:30 The United Nations Pension Fund introduces blockchain technology
  • Sep 26, 09:37 Ethereum clients support the Fusaka testnet
  • Sep 24, 19:13 Ethereum Layer 2 scaling solution misprices small transactions
  • Sep 16, 13:38 Sentient launches the ROMA open-source framework, achieving high performance in reasoning and search
  • Sep 10, 02:38 GPT-5 has improved significantly in programming
  • Sep 09, 12:45 Bitcoin Core developer mistakenly claims Knots nodes were double-counted
  • Sep 02, 08:49 Geth v1.16.3 has been released
  • Aug 28, 12:11 Tencent Cloud server configuration errors expose sensitive data
  • Aug 22, 09:45 Integration brings together security and usability
  • Aug 21, 23:20 Grok's estimate of the quantum computing threat to Bitcoin is incorrect
