Juan Benet | October 26, 2025 12:51

Hey @hive_echo this is cool. Love the UI for observing the learning process. Can you get learning if you decouple the input clock cycle from the learning and output cycles?

The chain-of-thought LLM hack is a crude way to add recurrence around the whole transformer, but I bet we'll find extremely successful ways to add brain-like recurrence within the networks, mediated by internal processes (logic gates that activate subnets/subprograms). There are whole-brain clock cycles, so you can keep that, but the signals coming in, being processed, learning, thinking, and deciding on actions could all be running on separate loops/programs in the brain, not one big loop. You could test this in your network by running the output sampling in a "separate" cycle, or by adding a structure between your hidden layer and the output layer that adds some asynchrony.

One way to do this in visual classifiers would be to add an "active focusing" intent:

  • Build a structure that "focuses" the input: most of the input layer is low-res and colorless (like the eye's periphery), while part of the input range is focused (higher res, color).
  • Add a structure between the hidden layer and the output layer that "moves" the focus field across the whole input (like moving the eye, focusing vision on an object).
  • Add a structure that "looks around, then decides when to output" once confidence rises above some threshold (this is what adds the separation of programs).
  • The learning process must learn to "use the eye correctly" to extract signal and pass it on.
  • The training set would ideally contain examples that are impossible (or very hard) to distinguish without focusing (text too blurry at low res).

Why this matters: I think the current deep learning approach of single passes across the whole network will turn out to be far less effective than feedback-loop structures within it. The reason we use single passes is backprop. But the brain uses spiking neurons with local learning rules only.
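The active-focusing loop above can be sketched in a few lines. This is a hypothetical toy, not anyone's actual implementation: the names (`peripheral_view`, `foveal_patch`, `classify_with_focus`), the random glance policy, and the confidence threshold are all illustrative, and the scoring function is left abstract. The point is only the control structure: the output step fires on a confidence condition, on a different loop than the input-sampling step.

```python
import random

def peripheral_view(image, factor=4):
    """Block-average downsample: the low-res 'periphery' of the input."""
    n = len(image)
    return [
        [sum(image[i + di][j + dj] for di in range(factor) for dj in range(factor))
         / (factor * factor)
         for j in range(0, n, factor)]
        for i in range(0, n, factor)
    ]

def foveal_patch(image, cx, cy, size=4):
    """Full-resolution crop around (cx, cy): the 'focused' field."""
    n = len(image)
    half = size // 2
    return [
        [image[min(max(cx + di, 0), n - 1)][min(max(cy + dj, 0), n - 1)]
         for dj in range(-half, half)]
        for di in range(-half, half)
    ]

def classify_with_focus(image, score_fn, threshold=0.9, max_glances=8):
    """Look around, moving the fovea each glance, and only emit an output
    once confidence crosses `threshold` -- so the output cycle is
    decoupled from the input cycle."""
    n = len(image)
    cx, cy = n // 2, n // 2                  # start focused at the center
    best_label, best_conf = None, 0.0
    for _ in range(max_glances):
        patch = foveal_patch(image, cx, cy)
        label, conf = score_fn(patch, peripheral_view(image))
        if conf > best_conf:
            best_label, best_conf = label, conf
        if best_conf >= threshold:
            break                            # confident enough: act now
        cx, cy = random.randrange(n), random.randrange(n)  # move the eye
    return best_label, best_conf
```

In a real experiment the random glance policy would be replaced by a learned one (the "use the eye correctly" part), and `score_fn` by the classifier's hidden-to-output pathway.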
One branch of brain-inspired AI research asks whether we can get backprop on top of spiking neurons. Another asks: can we get close enough to backprop, but do better in other ways? To me, the big advantage has always been in decoupling sub-networks and enabling deep recurrent loops, i.e. real thinking. Most higher-order animals show clear thinking and consideration pauses before acting, and active control of sensory and motor systems while learning and deciding how to act, let alone humans pausing to ponder.

Anyway, I think you could do this, and if you get it working well you could maybe kick off a new paradigm of bio-inspired deep learning! Happy hacking!

cc @EscolaSean70058 @countzerozzz @davidad
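The "local learning rules only" point can be made concrete with Oja's rule, a classic Hebbian variant: each weight update uses only that synapse's presynaptic input and the neuron's own postsynaptic activity, with no error signal propagated backward from downstream layers. A minimal sketch (the learning rate and iteration count here are illustrative, not from the post):

```python
def oja_update(w, x, lr=0.1):
    """One step of Oja's rule for a single linear neuron.

    Purely local: the change to w[i] depends only on x[i] (presynaptic)
    and y (postsynaptic activity), never on a global loss gradient.
    """
    y = sum(wi * xi for wi, xi in zip(w, x))          # postsynaptic activity
    return [wi + lr * y * (xi - y * wi) for wi, xi in zip(w, x)]
```

Repeatedly presenting correlated inputs drives the weight vector toward a unit vector along the dominant input direction; nothing in the update references anything outside the neuron, which is what makes rules like this compatible with spiking, asynchronous hardware in a way backprop is not.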
