
Phyrex|Jul 25, 2025 19:27
While Trump pours trillions into AI, who is providing reliable data for AI?
When Trump poured trillions of dollars into AI, it looked like a race between models, chips, and data centers, but it also raises deeper questions. How is the data that AI models rely on verified and traced? Is the black-box reasoning that happens during training auditable? Can models cooperate, or can they only work in isolation?
In plain terms: when we use AI to get information, who can be sure the information it gives us is correct? Data pollution is no longer a joke. One AI application that billed itself as a ChatGPT killer ended up mired in polluted data. When the data sources are all wrong, how can the answers be right?
Is today's AI intelligent? Perhaps, but even the smartest AI has to be trained, and right now we cannot know which data went into training, verify whether a GPU actually completed an inference run, or establish trust between multiple models.
If we want AI to truly reach the next generation, these three problems may need to be solved together:
1. The training data must be reliable and verifiable.
2. The reasoning process must be auditable by third-party models.
3. Models must be able to coordinate compute, exchange tasks, and share results without a platform acting as matchmaker.
No single model, API, or GPU platform can solve this on its own. It requires a system built for AI: one that stores data permanently at low cost, makes that data both auditable and able to grant audit rights, lets models verify each other's inference, and supports models in autonomously discovering compute, coordinating tasks, and auditing the execution of each step under defined conditions.
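To make the first requirement concrete, here is a minimal sketch of what "verifiable training data" could mean in practice: hash the dataset before training and publish a provenance record, so anyone can later re-hash the same bytes and confirm nothing was swapped. This is not any network's real API; the `source` and `collectedAt` fields are assumptions for illustration, and it only uses Node's built-in modules.

```typescript
// Hypothetical sketch: making a training dataset traceable by content hash.
import { createHash } from "crypto";
import { readFileSync } from "fs";

// A minimal provenance record a model (or auditor) could later check
// against whatever permanent store the network provides.
interface DatasetProvenance {
  contentHash: string; // sha256 of the raw bytes: any tampering changes it
  sizeBytes: number;
  source: string;      // where the data came from (assumed field)
  collectedAt: string; // ISO timestamp (assumed field)
}

function describeDataset(path: string, source: string): DatasetProvenance {
  const bytes = readFileSync(path);
  const contentHash = createHash("sha256").update(bytes).digest("hex");
  return {
    contentHash,
    sizeBytes: bytes.length,
    source,
    collectedAt: new Date().toISOString(),
  };
}

// Usage: hash the dataset before training, publish the record somewhere
// immutable, and anyone can re-hash the file later to confirm it matches.
const record = describeDataset("./train.jsonl", "example-crawl-2025-07");
console.log(JSON.stringify(record, null, 2));
```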
This is hard to achieve on centralized platforms. So can it be done on decentralized ones, and why would decentralization be needed in the first place?
I think only a blockchain can truly integrate data storage, data execution, and data verification into the same underlying network. That is one of blockchain's biggest draws: immutability and transparency. The problem is that not every chain is suited to being the base layer for AI.
If storage were all that mattered, IPFS already exists. But storage alone is not enough: smart contracts need to be able to read data directly, audit inference results, and even coordinate GPU resources to complete compute tasks. These capabilities are missing from most L1s and AI applications today, let alone IPFS.
If anything comes close, @irys_xyz may have an opportunity here. Irys is not a traditional storage chain; it is being built as a data execution network for AI that treats data as a programmable asset. Models can read data, validate reasoning, call compute on-chain, and handle pricing, authorization, revenue sharing, and verification through smart contracts.
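As a rough illustration of "data as a programmable asset," here is a hedged sketch, not Irys's actual contract model: an in-memory toy where pricing and revenue sharing are rules attached to the data itself, and every name (`DataAsset`, `DataMarket`, `pricePerRead`) is hypothetical.

```typescript
// Hypothetical sketch of "data as a programmable asset": access, pricing,
// and revenue sharing travel with the data rather than with a platform.
interface DataAsset {
  id: string;
  contentHash: string;                  // links back to a provenance record
  pricePerRead: number;                 // assumed flat fee, arbitrary units
  owner: string;
  revenueShare: Record<string, number>; // address -> share (sums to 1)
}

class DataMarket {
  private balances = new Map<string, number>();

  constructor(private assets: Map<string, DataAsset>) {}

  // A model pays to read; the fee is split according to the asset's rules.
  read(assetId: string, reader: string, payment: number): string {
    const asset = this.assets.get(assetId);
    if (!asset) throw new Error("unknown asset");
    if (payment < asset.pricePerRead) throw new Error("payment too low");
    for (const [addr, share] of Object.entries(asset.revenueShare)) {
      this.balances.set(addr, (this.balances.get(addr) ?? 0) + payment * share);
    }
    // A real network would return the data; here we return the hash so the
    // reader can verify whatever bytes it eventually fetches.
    return asset.contentHash;
  }

  balanceOf(addr: string): number {
    return this.balances.get(addr) ?? 0;
  }
}
```

The point of the sketch is the shape of the idea: the data's terms of use are enforced where the data lives, so a model does not need a centralized platform to negotiate access or split revenue.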
Of course, Irys is still immature in places, but the direction seems right. Centralized or decentralized, if the data sources cannot be trusted, all the compute in the world is a tower built on sand, and even the strongest model is only a moon reflected in water, a flower in a mirror.