The public testnet of the Mira network launched yesterday. Mira aims to build a trust layer for AI. So why does AI need a trust layer, and how does Mira provide one?
When people discuss AI, they tend to focus on its powerful capabilities; its "hallucinations" and biases receive far less attention. What is an AI "hallucination"? Simply put, AI sometimes makes things up and states nonsense with a straight face. Ask an AI why the moon is pink, for example, and it may offer several seemingly reasonable explanations for a premise that is simply false.
The existence of hallucinations and biases in AI is partly a consequence of current AI technology. Generative AI produces output by predicting the "most likely" response, which makes the result coherent and plausible but gives the model no way to verify that it is true. In addition, the training data itself may contain errors, biases, or even fabricated content, all of which can shape the model's output. In other words, AI learns human language patterns rather than the facts themselves.
In summary, probabilistic generation combined with data-driven training makes hallucinations all but inevitable, as the sketch below illustrates.
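To make this concrete, here is a minimal Python sketch of the sampling step at the heart of generative models. The vocabulary, probabilities, and prompt are invented for illustration; the point is that the model simply picks a likely continuation and has no notion of whether that continuation is factually true.

```python
import random

# Hypothetical next-token distribution a language model might assign
# after the prompt "The moon is pink because". None of these
# continuations is true; they are merely statistically plausible.
next_token_probs = {
    "of atmospheric scattering": 0.40,
    "of dust in the air": 0.30,
    "of a lunar eclipse": 0.20,
    "that premise is false": 0.10,  # the truthful reply is just another candidate
}

def sample_next(probs: dict) -> str:
    """Sample a continuation in proportion to its probability."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

print("The moon is pink because", sample_next(next_token_probs))
```

Because the truthful continuation is just one candidate among many, a fluent but false answer is often the most probable one.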
If biased or hallucinated content appears only in general knowledge or entertainment, the consequences may be limited. In highly rigorous fields such as healthcare, law, aviation, or finance, however, it can cause serious harm. Addressing AI hallucinations and biases is therefore one of the core problems in AI's evolution. Some approaches use retrieval-augmented generation (combining the model with real-time databases so that verified facts take priority), while others introduce human feedback, correcting model errors through manual labeling and supervision. The sketch below illustrates the retrieval-augmented approach.
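As a rough, hypothetical illustration of retrieval-augmented generation, the sketch retrieves relevant passages from a small document store and prepends them to the prompt before generation. The document store, the word-overlap retriever, and the `generate` placeholder are all invented for this example; they stand in for a real embedding-based retriever and a real LLM call.

```python
# A minimal RAG pipeline sketch: retrieve supporting passages, then
# condition generation on them so the model can draw on verified facts
# instead of relying only on what it memorized during training.

DOCUMENTS = [
    "The moon appears white or grey; it does not turn pink.",
    "Atmospheric scattering can tint the moon orange or red near the horizon.",
]

def retrieve(query: str, docs: list, k: int = 1) -> list:
    """Rank documents by naive word overlap with the query (a placeholder
    for a real embedding-based retriever) and return the top k."""
    query_words = set(query.lower().split())
    return sorted(docs,
                  key=lambda d: len(query_words & set(d.lower().split())),
                  reverse=True)[:k]

def generate(prompt: str) -> str:
    """Placeholder for a call to an actual language model."""
    return f"[model output conditioned on]\n{prompt}"

def answer(query: str) -> str:
    context = "\n".join(retrieve(query, DOCUMENTS))
    return generate(f"Context:\n{context}\n\nQuestion: {query}\nAnswer:")

print(answer("Why is the moon pink?"))
```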
The Mira project is also attempting to address AI bias and hallucination, specifically by building a trust layer for AI that reduces both and makes AI more reliable. So, from an overall framework perspective, how does Mira reduce bias and hallucination and ultimately achieve trustworthy AI?
The core of Mira's approach is to verify AI outputs through consensus among multiple AI models. In other words, Mira is itself a verification network: it validates the reliability of AI outputs by leveraging the agreement of multiple independent models. Just as important, that consensus is reached in a decentralized way. A toy illustration of the multi-model idea follows.
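As a toy illustration of this core idea, the sketch below asks several independent verifier models the same question about a statement and accepts the statement only if a supermajority agree. The model names, the `ask_model` stand-in, and the two-thirds threshold are all assumptions made for this example, not Mira's actual API or parameters.

```python
# Toy multi-model consensus check: a statement is accepted only when a
# supermajority of independent verifier models judge it valid.

def ask_model(model: str, statement: str) -> bool:
    """Hypothetical stand-in for querying one verifier model."""
    return hash((model, statement)) % 3 != 0  # toy verdict, not a real model

def multi_model_consensus(statement: str, models: list, threshold: float = 2 / 3) -> bool:
    votes = [ask_model(m, statement) for m in models]
    return sum(votes) / len(votes) >= threshold

models = ["model-a", "model-b", "model-c", "model-d", "model-e"]
print(multi_model_consensus("Water boils at 100 C at sea level", models))
```

The intuition is that independent models are unlikely to hallucinate in the same way, so requiring agreement filters out idiosyncratic errors.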
The key to the Mira network, then, is decentralized consensus verification. This borrows a core strength of the crypto field, decentralized trustless consensus, and combines it with multi-model collaboration, reducing bias and hallucination through collective verification.
In terms of verification architecture, the first requirement is statements that can be independently verified. The Mira protocol converts complex content into such independently verifiable statements, which node operators then verify. To keep node operators honest, the protocol applies cryptoeconomic incentives and penalties, and the participation of different AI models and decentralized node operators helps ensure that verification results are reliable.
Mira's network architecture consists of content transformation, distributed verification, and a consensus mechanism, which together make verification reliable. Content transformation is a crucial step: the network first decomposes candidate content (generally submitted by clients) into separate verifiable statements (so that models can evaluate them in the same context). The system distributes these statements to nodes for verification, determines their validity, and aggregates the results into a consensus, which is returned to the client. To protect client privacy, the statements are randomly sharded across different nodes, so no single node sees the full content and information cannot leak during verification. The sketch below walks through this pipeline.
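The following is a highly simplified, hypothetical model of this pipeline, invented for illustration: candidate content is split into claims, each claim is sharded to a random subset of verifier nodes, and a majority vote over their verdicts becomes the consensus. Mira's actual decomposition, sharding, and consensus rules are not publicly specified at this level of detail.

```python
import random
from collections import Counter

def decompose(content: str) -> list:
    """Placeholder decomposition: treat each sentence as one verifiable claim."""
    return [s.strip() for s in content.split(".") if s.strip()]

def verify(node: str, claim: str) -> bool:
    """Placeholder for a node running its verification model on a claim."""
    return hash((node, claim)) % 4 != 0  # toy verdict, not a real model

def consensus_verify(content: str, nodes: list, shard_size: int = 3) -> dict:
    """Shard each claim to a random subset of nodes; take the majority verdict."""
    results = {}
    for claim in decompose(content):
        shard = random.sample(nodes, shard_size)  # each node sees only some claims
        verdicts = Counter(verify(n, claim) for n in shard)
        results[claim] = verdicts[True] > verdicts[False]
    return results

nodes = [f"node-{i}" for i in range(7)]
print(consensus_verify("The moon is pink. Water boils at 100 C at sea level.", nodes))
```

Note how sharding serves the privacy goal described above: each node verifies isolated claims without ever reconstructing the client's full content.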
Node operators are responsible for running verification models, processing statements, and submitting verification results. Why are they willing to participate? Because they earn rewards, and the rewards come from the value created for clients. The purpose of the Mira network is to reduce AI's error rate (hallucinations and biases); once achieved, this generates real value, since lowering error rates in healthcare, law, aviation, and finance is worth paying for, so clients are willing to pay. The sustainability and scale of those payments, of course, depend on whether the Mira network can keep delivering that value to clients. To deter nodes from gaming the system with random responses, nodes that repeatedly deviate from consensus have their staked tokens slashed. In short, an economic game keeps node operators honestly verifying. A sketch of this incentive logic follows.
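Here is a minimal, hypothetical sketch of that incentive logic. The stake amounts, reward size, and slashing fraction are invented parameters, not Mira's actual tokenomics: nodes whose verdict matches consensus earn a reward, while deviating nodes lose part of their stake.

```python
from dataclasses import dataclass

REWARD = 1.0           # paid to nodes matching consensus (illustrative value)
SLASH_FRACTION = 0.05  # share of stake burned on deviation (illustrative value)

@dataclass
class Node:
    name: str
    stake: float

def settle(nodes: list, verdicts: dict, consensus: bool) -> None:
    """Reward nodes whose verdict matched consensus; slash the rest."""
    for node in nodes:
        if verdicts[node.name] == consensus:
            node.stake += REWARD
        else:
            node.stake -= node.stake * SLASH_FRACTION

nodes = [Node("node-0", 100.0), Node("node-1", 100.0), Node("node-2", 100.0)]
verdicts = {"node-0": True, "node-1": True, "node-2": False}  # node-2 deviates
majority = sum(verdicts.values()) > len(verdicts) / 2
settle(nodes, verdicts, majority)
for n in nodes:
    print(n.name, round(n.stake, 2))
```

Under this toy rule, random guessing deviates from consensus often enough that expected slashing outweighs expected rewards, which is the game-theoretic point of staking.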
Overall, Mira offers a new route to AI reliability: a decentralized consensus verification network built on multiple AI models. It brings higher reliability to the AI services clients use, reduces bias and hallucination, and meets clients' demands for greater accuracy and precision. In delivering that value to clients, it also rewards the participants in the Mira network. In one sentence: Mira attempts to build a trust layer for AI, and that trust layer will help push AI into deeper applications.
Currently, the AI agent frameworks Mira collaborates with include ai16z, ARC, and others. The Mira public testnet launched yesterday; users can participate through Klok, a chat application based on Mira's LLM. With Klok, users can experience verified AI outputs (and compare them with unverified ones) and earn Mira points. How those points will be used has not yet been disclosed.