The all-PhD team at Pluralis, interns aside, comes entirely from Amazon. What makes this open AI project special?
AI technology continues to break new ground, and research into new ways of training AI models keeps deepening. Amid this wave, two problems urgently need better answers: the monopoly risk of centralized models and the lack of incentive mechanisms for open-source models.
Against this backdrop, Pluralis has emerged. Its team consists entirely of PhDs, all of whom, interns aside, come from Amazon. This article introduces Pluralis's core technical ideas in decentralized AI training, its team composition and funding, and its novel protocol-learning training paradigm.
What is Pluralis?
Pluralis Research is dedicated to creating a decentralized, open-source AI development model through "protocol learning," which pools computing resources from many parties to train models collaboratively while ensuring that no single participant can ever obtain the complete model weights.
The core innovation of Pluralis's protocol learning is the protocol model, which splits a neural network across participants so that none of them can extract the complete set of weights. This design lets value flow to contributors while protecting model ownership, neatly balancing the openness of AI development against the need to monetize it.
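To make the weight-sharding idea concrete, here is a minimal sketch in PyTorch, assuming a simple pipeline-style split of a sequential model across participants. The names (ShardedParticipant, shard_model) are illustrative assumptions, not Pluralis's actual implementation; the point is that only activations cross node boundaries, so no node ever holds the full weight set.

```python
import torch
import torch.nn as nn

class ShardedParticipant:
    """One participant, holding only its own slice of the model's layers."""
    def __init__(self, layers: nn.Sequential):
        self.layers = layers  # the other shards are never visible here

    def forward(self, activations: torch.Tensor) -> torch.Tensor:
        # Only activations cross node boundaries, never weights.
        return self.layers(activations)

def shard_model(model: nn.Sequential, n_participants: int):
    """Split a sequential model into contiguous per-participant shards."""
    layers = list(model)
    per_shard = -(-len(layers) // n_participants)  # ceiling division
    return [
        ShardedParticipant(nn.Sequential(*layers[i:i + per_shard]))
        for i in range(0, len(layers), per_shard)
    ]

# Toy 6-layer model split across 3 participants.
model = nn.Sequential(*[nn.Linear(16, 16) for _ in range(6)])
participants = shard_model(model, n_participants=3)

x = torch.randn(1, 16)
for node in participants:  # pipeline: activations hop from node to node
    x = node.forward(x)
print(x.shape)  # torch.Size([1, 16]); no single node held all the weights
```

In a real deployment each shard would live on a different machine and exchange activations over the network; the sketch only shows why holding one shard reveals nothing about the rest of the weights.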
In Pluralis, model designers propose their model ideas, while compute and data providers contribute the resources needed for training. These protocol models are open and publicly developed, and they incentivize contribution by granting participants partial ownership of the trained model, moving toward the goal of truly open-source artificial intelligence.
Background of Pluralis
The Pluralis team is strong: eight members are listed on the official website, all holding PhDs, and all but one intern come from Amazon's AI research organization.
Founder Alexander Long: Holds a PhD in Computer Science from the University of New South Wales and previously served as an applied scientist at Amazon from March 2021 to May 2024. His doctoral thesis focused on sample-efficient reinforcement learning and non-parametric memory in deep learning.
Founding Scientist Gil Avraham: Holds a PhD in Machine Learning from Monash University, served as an applied scientist at Amazon from December 2021 to August 2024, and was later promoted to senior applied scientist, joining Pluralis in October 2024.
Founding Scientist Yan Zuo: Holds a PhD in Electrical and Electronic Engineering from Monash University, interested in large-scale optimization, statistical modeling, machine learning, and computer vision, served as an applied scientist at Amazon from August 2021 to October 2024.
Founding Scientist Ajanthan Thalaiyasingam: Holds a PhD in Computer Science from the Australian National University, served as a machine learning scientist at Amazon from December 2020 to March 2024, and was later promoted to senior machine learning scientist, joining Pluralis in October 2024.
Founding Scientist Sameera Ramasinghe: Holds a PhD in Machine Learning and 3D Vision from the Australian National University, co-founder and CTO of AI technology company ConscientAI, served as an applied scientist at Amazon from May 2022 to November 2024.
In short, Pluralis's founder, founding scientists, and research scientists all share Amazon experience, with strengths spanning machine learning, computer vision, and large language models (LLMs); some have also served as postdoctoral researchers.
In terms of funding, Pluralis closed a $7.6 million round in March 2025, led by CoinFund and Union Square Ventures with participation from Topology, Variant, Eden Block, and Bodhi Ventures. The round was structured as equity and included warrants for future tokens.
What is Protocol Learning?
Alexander Long's paper "Protocol Learning, Decentralized Frontier Risk and the No-Off Problem" proposes a new paradigm for training AI models: protocol learning. Its goal is to train models collaboratively over a decentralized incentive network, breaking through the limitations of today's centralized and open-source approaches.
Long argues that centralized models, while efficient, carry monopoly risk and opaque governance, whereas open-source models lack sustainable incentive mechanisms. Protocol learning is a middle path: by incentivizing participants to contribute compute, it builds a decentralized training network that could, in theory, aggregate several orders of magnitude more computing power than centralized training.
From a technical-feasibility standpoint, decentralized training must provide efficient communication, model sharding, elastic training, Byzantine fault tolerance, and support for heterogeneous nodes. Distributed training, pipeline parallelism, and fault-tolerance mechanisms have each made progress, but they have not yet been integrated end to end at large scale (100B+ parameters). On the economic side, distributing ownership in proportion to compute contributed can create the right incentives, but verifying that compute remains an open problem, potentially addressable with techniques such as game-theoretic staking or zero-knowledge proofs.
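As a hedged illustration of that ownership mechanism, the sketch below splits ownership pro rata among contributors whose compute has been verified. Everything here is an assumption for illustration: Contribution, gpu_hours, and ownership_shares are hypothetical names, and the verification step that would in practice rely on game-theoretic staking or zero-knowledge proofs is reduced to a boolean flag.

```python
from dataclasses import dataclass

@dataclass
class Contribution:
    node_id: str
    gpu_hours: float  # compute contributed during training
    verified: bool    # in practice: staking or a zero-knowledge proof

def ownership_shares(contribs):
    """Split model ownership pro rata among verified contributors."""
    accepted = [c for c in contribs if c.verified]
    total = sum(c.gpu_hours for c in accepted)
    if total == 0:
        return {}
    return {c.node_id: c.gpu_hours / total for c in accepted}

contribs = [
    Contribution("node-a", 120.0, verified=True),
    Contribution("node-b", 60.0, verified=True),
    Contribution("node-c", 500.0, verified=False),  # failed verification
]
print(ownership_shares(contribs))
# {'node-a': 0.666..., 'node-b': 0.333...}
```

The design choice to exclude unverified contributions entirely, rather than discount them, is what makes robust compute verification the load-bearing piece of the incentive scheme.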
Of course, protocol learning also brings new risks. A decentralized model is hard to terminate unilaterally: if it goes out of control or is misused, shutting it down requires coordination across the entire network, which is extremely difficult (the "no-off problem" of the paper's title). A balance must also be struck among incentives, security, and controllability to deter malicious behavior.
Pluralis believes that the future of artificial intelligence is not only distributed but also decentralized. The technical barriers to decentralized training are not insurmountable, and the benefits it brings will be immense.
In summary, Pluralis is building decentralized AI training infrastructure that aims to enable the collective creation of frontier models through protocol learning, fundamentally democratizing the production of, and access to, AI foundation models.
Disclaimer: This article represents only the personal views of its author and does not reflect the position or views of this platform. It is shared for informational purposes only and does not constitute investment advice to anyone. Any dispute between users and the author is unrelated to this platform. If any article or image on this page involves infringement, please email the relevant proof of rights and proof of identity to support@aicoin.com, and platform staff will verify it.