On January 20, 2026, Pundi AI announced a partnership with the security firm Vital Block to build an "on-chain + audit + KYC" data asset framework around the Dataset Tokens (DTOKs) generated by its Data Pump product. According to Pundi AI's official social media accounts, Vital Block will provide optional code auditing and team KYC services for DTOKs-related projects, attempting to fill a critical gap in the credibility and accountability of on-chain AI data as the DeAI narrative heats up. Behind this collaboration, however, lie two long-standing tensions: the bias and fraud bred by the decentralized, opaque sourcing of AI training data, and the friction between the culture of decentralization and anonymity emphasized in the Web3 world and the demands of security audits and compliance identification. This article explores that contradiction, outlining the potential significance of combining DTOKs with audits and KYC in the DeAI space, while emphasizing that the arrangement is best read as an important experiment, still far from being validated by the market.
From Black Box to On-Chain: Why AI Data Needs to Be "Minted as Assets"
In the traditional AI industry, models are treated as the core of value, while the data used to train them is often buried inside closed systems and black-box processes. Data sources typically span public internet scraping, proprietary data from partners, and assorted third-party repositories, with label quality, collection methods, and authorization scopes all opaque. The result is accumulating bias, falsified results that are hard to detect in time, and blurred lines of responsibility: when model outputs trigger real-world risks, it is difficult to trace back to specific data fragments, let alone assign accountability or provide remedies. This asymmetry not only restricts larger-scale industry collaboration but also suppresses the space for data providers to receive fair compensation.
As AI gradually becomes deeply integrated with on-chain finance, governance, and infrastructure, relying solely on "trusting brands" or "trusting model providers" is no longer sufficient to support high-value automated decision-making. DeAI projects aim to construct a traceable, priceable, and verifiable form of data assets, allowing data to no longer be just an input cost behind models, but rather a clearly defined, separable, tradable, and continuously revenue-generating on-chain asset. This requires that the source, processing methods, and authorization information of each piece of data leave a trace on some public or semi-public ledger, forming constraints at both technical and institutional levels.
Thus, the demand for on-chain data assetization naturally emerges: by registering metadata, ownership, and usage records on-chain, AI data can possess a source proof similar to a "property certificate," as well as an auditable usage history. On this basis, economic incentive mechanisms can be layered, allowing those who contribute high-quality data to continuously earn revenue as model usage and downstream applications grow. This design is both a counterattack against the current AI data black box and an attempt to leverage the transparency and programmability of Web3 to build new credit and trading infrastructure for AI data.
Pundi AI's Data Pump and DTOKs: Packaging Data into Tradable On-Chain Credentials
Pundi AI's Data Pump product attempts to materialize the above vision. According to public information, Data Pump allows developers to "mint" their own or legally authorized datasets into Dataset Tokens (DTOKs), registering metadata and ownership information related to the dataset on-chain. Here, "minting" is not just a simple token issuance; it emphasizes solidifying the basic description, usage limitations, and ownership relationships of the data at the contract level, ensuring that each DTOK corresponds to a recognizable and referable data asset unit.
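Public materials do not specify DTOK's actual contract interface, so the following is purely an illustrative sketch of what "solidifying description, usage limitations, and ownership at mint time" might look like; every name, field, and function here is a hypothetical stand-in, not Pundi AI's schema:

```python
from dataclasses import dataclass
from hashlib import sha256

@dataclass(frozen=True)
class DatasetToken:
    """Hypothetical on-chain record for one minted dataset (illustrative only)."""
    owner: str             # address of the data provider
    name: str              # human-readable dataset label
    content_hash: str      # fingerprint committing to the off-chain data
    license_terms: str     # usage limitations fixed at mint time
    audited: bool = False  # would flip once a third-party audit report is attached

def mint(owner: str, name: str, data: bytes, license_terms: str) -> DatasetToken:
    """Freeze the dataset's description and ownership into an immutable record."""
    return DatasetToken(
        owner=owner,
        name=name,
        content_hash=sha256(data).hexdigest(),
        license_terms=license_terms,
    )

token = mint("0xabc...", "labeled-sentiment-v1", b"example rows", "research-only")
print(token.content_hash[:8])
```

The frozen record mirrors the article's point: once minted, the hash, ownership, and license terms form a stable, referable asset unit, while the underlying data itself can stay off-chain.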
Regarding the expected use cases for DTOKs, Pundi AI has targeted three groups of users: AI agents, developers, and Web3 applications. For AI agents, DTOKs can serve as an entry point for selecting, acquiring, and invoking data on-chain, allowing agents to choose datasets that meet specific quality standards or audit statuses based on on-chain records during task execution. For developers, DTOKs represent the "packaging and listing" of data products, enabling datasets to be integrated into a broader DeAI protocol and DApp ecosystem through DTOKs, with opportunities to earn usage fees in various scenarios. For Web3 applications, DTOKs provide a standardized data access interface, allowing applications to clearly bind to specific data assets when invoking AI capabilities, thus reserving space for accountability tracing and revenue distribution.
In terms of revenue distribution logic, although current public details are limited, it can be inferred that the design of DTOKs points to a multi-party incentive alignment structure: data providers hope to continuously share the revenue generated from data usage through DTOKs; AI applications and agents expect to obtain more reliable data sources at relatively controllable costs to improve model output quality and reduce compliance risks; potential investors and liquidity providers may share the value appreciation brought by the growth of the entire data market by participating in DTOKs-related funding pools or protocol mechanisms. If such a framework can be realized, it will help to more directly measure and distribute the value of "high-quality data" on-chain.
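Pundi AI has published no concrete distribution formula, so purely to make the incentive-alignment idea above concrete, here is a naive pro-rata split of an epoch's usage fees among data providers; the weights, numbers, and provider names are all hypothetical:

```python
def split_usage_fee(fee: float, usage_counts: dict[str, int]) -> dict[str, float]:
    """Divide a usage fee among providers in proportion to how often their DTOKs were invoked."""
    total = sum(usage_counts.values())
    return {provider: fee * count / total for provider, count in usage_counts.items()}

# Hypothetical epoch: 100 tokens of fees, three providers with different call volumes.
payouts = split_usage_fee(100.0, {"provider_a": 50, "provider_b": 30, "provider_c": 20})
print(payouts)  # {'provider_a': 50.0, 'provider_b': 30.0, 'provider_c': 20.0}
```

A real design would also have to handle quality weighting, protocol fees, and disputed usage, none of which are specified in public materials.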
However, it is important to emphasize that significant information gaps remain around the specific usage details and transaction flow mechanisms of DTOKs. For example, how price discovery for DTOKs is achieved, how data access permissions are tied to token holdings or other conditions, and what response processes apply in cases of data misuse or default have not been fully explained in public materials. Pundi AI has likewise not disclosed which trading pairs or market-level mechanisms Data Pump supports, which means external observers can currently only understand the intent of this model at a high level of abstraction, without being able to draw definitive conclusions about its practical operability.
Vital Block's Involvement: Raising a "Safety Barrier" Between Anonymity and Compliance
In Pundi AI's setup, Vital Block plays the role of a security collaborator in the DTOKs ecosystem, providing optional code auditing and team KYC services. According to Pundi AI's official disclosures, Vital Block's involvement aims to reduce technical risks at the contract level through professional audits and enhance the traceability of the entities behind the datasets by understanding team identities, thereby alleviating concerns about data falsification and malicious project exits to some extent.
The challenge of this arrangement lies in how to introduce audits and KYC in a Web3 environment where decentralization and anonymous culture are prevalent, without completely stifling the space for anonymous innovation. On one hand, project parties and data providers often prefer to maintain a certain degree of anonymity due to privacy, security, and regulatory uncertainties; on the other hand, as AI data becomes linked to capital flows and real business scenarios, institutional users and more cautious developers increasingly require some form of identity verification and accountability to prevent building critical businesses on black box data. The optional audit and KYC services attempt to find a compromise between these two demands: not turning compliance audits into rigid entry barriers to the ecosystem, but providing a "bonus path" that can be recognized by the market for projects that wish to establish higher trust levels.
It is worth noting that current information about the collaboration between Pundi AI and Vital Block comes mainly from Pundi AI's official announcements; whether Vital Block has independently confirmed the collaboration details through its own official channels is not publicly established. External observers therefore need to be aware that the information source is relatively singular and cannot yet be cross-verified across multiple channels. On the other hand, making the security services optional rather than mandatory may have subtle effects on ecosystem expansion: if audits and KYC became hard entry requirements, participation thresholds could rise sharply in the short term and suppress the influx of innovative projects; under the current voluntary-adoption model, the ecosystem instead relies on the market to spontaneously form a consensus that "passing audits and KYC is a hallmark of quality datasets," a process that takes time and real success and failure cases to shape.
Accelerating the DeAI Narrative: Data Security Modules Become a Part of Infrastructure Competition
Placing Pundi AI's DTOKs and the auditing and KYC services provided by Vital Block within the larger DeAI narrative reveals the route differentiation of different projects in addressing the "trustworthy data" issue. Some teams choose to emphasize verifiable computation and zero-knowledge proofs at the model level, indirectly enhancing trust by proving that the model executes correctly under specific rules; some projects focus on encryption and privacy protection during the data collection process, attempting to ensure that the input side is not tampered with or leaked; while Pundi AI's path leans more towards working on the asset and governance layers, by packaging data into on-chain assets and layering security audits and identity verification, making "credibility" more reflected in the credit identifiers at the asset and project levels.
Within this framework, trustworthy data assets could set off a chain of effects on model performance, accountability tracing, and institutional willingness to participate. For models, continuous access to datasets that have passed some scrutiny and whose sources can be traced helps improve output quality and robustness over the medium to long term. For accountability, when model outputs trigger disputes, it becomes clearer which DTOKs were used and which data providers and project teams they correspond to; supplemented by on-chain records and audit reports, this forms a more actionable mechanism for accountability and adjustment. For institutional participants, clear security processes and identity-verification paths behind data assets make risk assessment easier to quantify, raising their willingness to participate in the DeAI ecosystem.
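To make the tracing idea concrete, resolving a disputed model output back to the teams behind its input data reduces to a join across on-chain records. This is a sketch under assumed data structures (the logs, registry, and identifiers below are invented for illustration, not Pundi AI's design):

```python
# Hypothetical on-chain records: which DTOKs an inference consumed,
# and who stands behind each dataset.
inference_log = {"output_9f3": ["DTOK-17", "DTOK-42"]}
dtok_registry = {
    "DTOK-17": {"provider": "team_alpha", "audit_report": "QmAudit17"},
    "DTOK-42": {"provider": "team_beta", "audit_report": None},
}

def trace(output_id: str) -> list[dict]:
    """Walk from a disputed output to the providers and audit artifacts behind its data."""
    return [dtok_registry[d] | {"dtok": d} for d in inference_log.get(output_id, [])]

for record in trace("output_9f3"):
    print(record["dtok"], record["provider"], record["audit_report"])
```

Note that the second dataset traces to a provider but no audit report, which is exactly the gap the optional audit and KYC services are meant to let higher-trust projects close.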
As for the external expectation of establishing "new data security standards," it is currently more suitable to discuss it as a possibility rather than a foregone conclusion. On one hand, the combination of Pundi AI and Vital Block is indeed attempting to fill the gap in on-chain AI data security modules, proposing a representative practical path; on the other hand, the lack of sufficient landing cases and public data makes it difficult to determine whether this model can be replicated on a large scale, whether it will form industry consensus, or if it is just one of many explorations. Before more empirical support is available, viewing it as an important experimental sample may be more prudent than directly claiming it has become a unified standard.
Risks and Unresolved Issues: From Singular Information Sources to Real Execution Details
There are multiple layers of uncertainty surrounding this collaboration. First, from the perspective of information disclosure, key information about the Pundi AI and Vital Block collaboration comes mainly from Pundi AI's official channels, and Vital Block has not comprehensively confirmed the collaboration content through independent announcements. This single-source pattern means external readers should maintain a degree of skepticism when interpreting related statements, especially when assessing the depth of the collaboration, its service coverage, and long-term commitments, and should await further information from multiple parties.
Secondly, considerable gaps remain in the specific design at the product and market levels. The trading pairs Data Pump supports have not yet been disclosed, which is a reminder that it is premature at this stage to discuss liquidity structures, secondary-market price behavior, and related questions. Extending these elements imaginatively in the absence of key information risks misleading readers by packaging uncertainty as established fact.
Looking ahead, several dimensions are worth watching. First, the actual adoption rate of audit and KYC services by projects within the DTOKs ecosystem is crucial: under the optional premise, how many datasets and teams will accept additional security and identity verification, and to what extent will these "audited" labels translate into real market preference? Second, the true data quality behind DTOKs, including accuracy, representativeness, and update frequency, must withstand verification over long-term use rather than remain confined to white papers and announcements. Third, how the ecosystem handles project defaults, data falsification, and security incidents matters. Although current public materials provide no details on compensation mechanisms, accountability-tracing processes, or compliance frameworks, these real issues will eventually surface and become key questions for assessing the credibility of the entire system.
What to Watch Next: From Announcements to Real Transactions
Overall, the collaboration between Pundi AI and Vital Block around DTOKs attempts to provide a relatively complete link to the question of "turning AI data into trustworthy assets": first, by minting datasets into on-chain assets (DTOKs) through Data Pump, and then layering optional code auditing and team KYC services on the asset level, thereby strengthening the verifiability of data on both technical and credit dimensions. This design at least offers a concrete, observable experimental sample for the DeAI field, helping to shift the industry from abstract discussions to quantifiable, traceable practical paths.
Looking at the broader landscape, the DeAI field will inevitably see more contests and collaborations among security firms, data providers, and AI projects. On one end are innovators who insist on anonymity and full openness, hoping to minimize identity exposure and compliance burdens; on the other are institutional forces inclined to introduce audits, KYC, and compliance processes, seeking to participate in the deep integration of AI and on-chain assets under controllable risk. Balancing these two forces through voluntary mechanisms, tiered services, and differentiated pricing will be one of the main threads of DeAI infrastructure competition in the coming years.
In this process, judgments regarding the collaboration between Pundi AI and Vital Block should remain cautious. The current more reasonable positioning is to view it as an important and representative experiment, rather than a successfully validated commercial and security paradigm. Only when more real DTOKs issuance, transaction, and usage data are disclosed, and when audits and KYC play substantive roles in specific cases, even demonstrating differentiated protective effects during risk events, will there be sufficient basis for the outside world to assess the merits and replicability of this approach. Until then, continuous tracking and calm observation may be the most responsible attitude towards this emerging narrative.
Disclaimer: This article represents only the personal views of the author and does not represent the position or views of this platform. It is provided for information sharing only and does not constitute investment advice to anyone. Any dispute between users and the author is unrelated to this platform. If any article or image on this page involves infringement, please send the relevant proof of rights and proof of identity to support@aicoin.com, and the platform's staff will verify it.