Source: Alpha Community
Image source: Generated by Wujie AI
The security of AI models matters both to the companies building AI applications and to the users and clients of those applications. For AI companies, it is equally important to protect their models without adding unnecessary operational cost, and to keep their original data and algorithms from leaking.
One company, HiddenLayer, has developed a comprehensive security platform that provides plug-and-play AI security without adding complexity at the model level or requiring access to the original data and algorithms.
Recently, the company secured a Series A funding of $50 million led by M12 and Moore Strategic Ventures, with participation from Booz Allen Ventures, IBM Ventures, Capital One Ventures, and Ten Eleven Ventures (which led its seed round), making it the highest-funded Series A in the AI security field to date.
HiddenLayer has helped protect AI/ML models used by several Fortune 100 companies in finance and cybersecurity.
It has also established strategic partnerships with Intel and Databricks, and has been named Most Innovative Startup at RSAC and most promising early-stage startup by SC Media. The company has nearly quadrupled its staff in the past year and plans to grow its workforce from 50 to 90 by the end of this year, investing further in research and development.
Encountering attacks on AI, serial entrepreneurs see opportunities
According to Gartner, in 2022 two in five organizations suffered an AI privacy breach or security incident, and one in four of those incidents were malicious attacks.
The UK's National Cyber Security Center also warned that "attackers are targeting large language model chatbots (such as ChatGPT) to obtain confidential information, generate offensive content, and trigger unintended consequences."
In a study commissioned by HiddenLayer and conducted by Forrester, 86% of respondents said they were "concerned or very concerned" about the security of their organization's machine learning models.
Most responding companies indicated that they currently rely on manual processes to address AI model threats, and 80% of respondents planned to invest in a solution to manage the integrity and security of ML models within the next 12 months.
Compared to other fields, cybersecurity is particularly technical and specialized. According to a previous study by Fortune magazine, the global cybersecurity market is expected to reach $403 billion by 2027, with a compound annual growth rate of 12.5% from 2020 to 2027.
HiddenLayer was co-founded by Christopher Sestito (CEO), Tanner Burns (Chief Scientist), and James Ballard (CIO). The idea for this startup came after they encountered network attacks on AI models at their previous company, Cylance (a security startup later acquired by BlackBerry).
Chris Sestito, CEO and co-founder of HiddenLayer, recalled, "After the machine learning models we were protecting in our product came under direct attack, we led the remediation effort and realized this would be a huge problem for any organization deploying machine learning models in their products. We decided to establish HiddenLayer to educate enterprises about this significant threat and help them defend against attacks."
Sestito led threat research at Cylance, Ballard was head of the data planning team at Cylance, and Burns was a threat researcher.
Speaking about the market opportunity, Chris Sestito said, "We know that almost every company is now using AI in various forms, but we also know that no other technology has achieved such widespread adoption without security protection. We are committed to creating the most frictionless security solution for the market to meet this unmet need of customers."
Regarding the technology, Chris Sestito said, "Many data scientists rely on pre-trained, open-source, or proprietary machine learning models to shorten analysis time and simplify testing work, and then gain insights from complex datasets. Using publicly available pre-trained open-source models can expose organizations to transfer learning attacks on tampered publicly available models.
Our platform provides tools to protect AI models from adversarial attacks, vulnerabilities, and malicious code injection. It monitors the input and output of AI systems and tests the integrity of models before deployment. It uses techniques to observe only the vectors (or mathematical representations) of the input to and output from the models, without needing access to their proprietary models."
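As an illustration of this black-box idea, the sketch below flags queries whose output probability vectors are unusually high-entropy, a pattern typical of decision-boundary probing during extraction attacks, while never touching the model's weights or training data. This is a minimal assumption-laden sketch, not HiddenLayer's actual implementation; the `VectorMonitor` class and its threshold are hypothetical.

```python
import math

def entropy(probs):
    """Shannon entropy (in nats) of an output probability vector."""
    return -sum(p * math.log(p) for p in probs if p > 0)

class VectorMonitor:
    """Hypothetical monitor that observes only a model's output vectors:
    queries that land near the decision boundary (high entropy) are
    flagged as possible probing, with no access to the model itself."""

    def __init__(self, entropy_threshold=1.0):
        self.entropy_threshold = entropy_threshold
        self.total = 0
        self.flagged = 0

    def observe(self, output_vector):
        """Record one model response; return True if it looks like probing."""
        self.total += 1
        if entropy(output_vector) > self.entropy_threshold:
            self.flagged += 1
            return True
        return False

monitor = VectorMonitor(entropy_threshold=1.0)
monitor.observe([0.34, 0.33, 0.33])   # near-uniform output: flagged
monitor.observe([0.97, 0.02, 0.01])   # confident output: not flagged
```

A production system would aggregate such signals per client over time rather than judging single responses, but the design point is the same: detection operates on the mathematical representations flowing in and out, not on the proprietary model.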
Todd Graham, Managing Partner at M12, stated, "Inspired by their own experience with adversarial AI attacks, the founders of HiddenLayer have created a platform that is essential for any enterprise using AI and ML technologies.
Their firsthand experience with these attacks, combined with their vision and novel approach, makes the company the preferred solution for protecting these models. From the first meeting with the founders, we knew this was a big idea in the security field and we hope to help them scale."
Building the comprehensive MLSec platform to protect AI security
HiddenLayer's flagship product is the security platform (MLSec) for detecting and preventing network attacks on machine learning-driven systems, which is the industry's first MLDR (Machine Learning Detection and Response) solution to protect enterprises and their customers from emerging attack methods.
This MLSec platform consists of HiddenLayer MLDR, ModelScanner, and Security Audit Reporting.
HiddenLayer's MLSec platform is equipped with a simple yet powerful dashboard that allows security managers to understand at a glance whether their enterprise ML/AI models are secure. It also automatically lists the severity of security issues and alerts based on priority and stores data for compliance, audits, and reporting that enterprises may be required to perform.
The MLDR solution adopts a machine learning-based approach to analyze billions of model interactions per minute to identify malicious activities without accessing or prior knowledge of users' ML models or sensitive training data. It can detect and respond to attacks on ML models, protecting intellectual property and trade secrets from theft or tampering, ensuring users are not vulnerable to attacks.
HiddenLayer also offers consulting services delivered by Adversarial Machine Learning (AML) experts, who can conduct threat assessments, train clients' cybersecurity and development operations teams, and run "red team" exercises to ensure clients' defenses work as expected.
Types of Attacks Prevented by HiddenLayer's MLSec Platform
Inference/Extraction: In an extraction attack, attackers manipulate model inputs, analyze the outputs, and infer decision boundaries in order to reconstruct training data, extract model parameters, or steal the model by training an approximation of it.
Model Theft: Attackers steal the trained model itself, the product of an expensive machine learning development effort.
Training Data Extraction: Attackers can conduct membership inference attacks by observing only the outputs of a machine learning model without accessing its parameters. Membership inference may raise security and privacy concerns when the target model is trained on sensitive information.
Data Poisoning: Poisoning occurs when attackers inject new, specifically modified data into the training set, deceiving or subverting machine learning models to provide inaccurate, biased, or malicious results.
Model Injection: Model injection is a technique that modifies machine learning models by inserting a malicious module that introduces secret harmful or undesired behavior.
Model Hijacking: This attack can inject malicious code into an existing PyTorch model, leaking all files in the current directory to a remote server.
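Model hijacking of this kind typically abuses Python's pickle format, which PyTorch uses for serialization: unpickling can invoke arbitrary callables via `__reduce__`. The sketch below is a simplified illustration, not HiddenLayer's scanner: it builds such a payload with the standard library alone, then statically scans the pickle opcode stream for dangerous imports without ever deserializing the data. The `looks_dangerous` helper and its deny-list are assumptions for demonstration.

```python
import pickle
import pickletools

class Payload:
    # __reduce__ tells pickle to call os.system at load time -- the same
    # mechanism abused to hijack serialized (pickle-based) model files.
    def __reduce__(self):
        import os
        return (os.system, ("echo pwned",))

# Serializing is harmless; only *loading* this blob would run the payload.
malicious_bytes = pickle.dumps(Payload())

def looks_dangerous(data: bytes) -> bool:
    """Statically walk the pickle opcodes and flag imports of
    known-dangerous callables, without ever unpickling the data."""
    deny = {("os", "system"), ("posix", "system"), ("nt", "system"),
            ("subprocess", "Popen"), ("builtins", "eval"), ("builtins", "exec")}
    strings = []  # string constants pushed before a STACK_GLOBAL
    for opcode, arg, _ in pickletools.genops(data):
        if opcode.name == "GLOBAL":            # older protocols: "module name"
            mod, _, name = arg.partition(" ")
            if (mod, name) in deny:
                return True
        elif "UNICODE" in opcode.name:         # string pushes feed STACK_GLOBAL
            strings.append(arg)
        elif opcode.name == "STACK_GLOBAL" and len(strings) >= 2:
            if (strings[-2], strings[-1]) in deny:
                return True
    return False
```

Real scanners cover far more gadget chains than this deny-list; the safe default remains never loading untrusted pickle-based model files at all, and preferring weights-only formats where possible.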
Specific Services Provided by HiddenLayer
Threat Modeling: Assess the overall AI/ML environment and asset risks through discovery interviews and scenario-based discussions.
ML Risk Assessment: Analyze the client's AI operational lifecycle in detail and conduct an in-depth analysis of the client's most critical AI models to determine the current AI/ML investment's risk to the organization and the efforts and/or controls needed for improvement.
Expert Training: Provide full-day training for data science and security teams to help them defend against these AI attacks and threats.
Red Team Assessment: The Adversarial Machine Learning Research (AMLR) team simulates real-world attacks to evaluate existing defenses and help patch vulnerabilities.
AI/ML Model Scanning: Test and verify existing AI/ML models using HiddenLayer's model integrity scanner to ensure they are protected from threats (e.g., malware) and tampering.
ML Detection and Response (MLDR) Implementation Services: Professionally implement and integrate HiddenLayer's MLDR product into the AI/ML environment, providing the necessary functionality and visibility to data science and security teams to prevent attacks, improve response times, and maximize model effectiveness.
Strengthening the Ecosystem through Collaboration with Industry Giants
In addition to product and platform development, HiddenLayer has strong partnerships. It has partnered with Databricks, allowing enterprise users to deploy AI models into Databricks' data lake and use the MLSec platform. This embeds security into AI at the data lake level.
Through a strategic partnership with Intel, the combination of Intel SGX's confidential computing and HiddenLayer's machine learning model scanner provides a comprehensive AI security solution.
These two major strategic partnerships have made HiddenLayer's entire ecosystem more complete and have gained favor with customers. It has already secured several major clients in the finance and government sectors.
As AI enters the practical stage, the entrepreneurial opportunities in AI security are becoming apparent. Protect AI, a company specializing in AI model security, recently secured a $35 million Series A funding led by Evolution Equity Partners and Salesforce Ventures.
According to HiddenLayer co-founder Sestito, as the AI market grows, the AI security market will also grow in sync. In addition to Protect AI and HiddenLayer, other companies such as Robust Intelligence, CalypsoAI, Halcyon, and Troj.ai are also active in the AI security field.
For example, HiddenLayer's early investor, Ten Eleven Ventures, also invested $20 million in seed funding for Halcyon, a company mainly focused on AI ransomware defense tools to help users of AI software prevent attacks and recover quickly from them.
As the wave of AI transitions from the conceptual hype stage to the practical application stage, with a shift from primarily large model entrepreneurship to AI application entrepreneurship, AI security becomes increasingly important. Whether ensuring the security of AI models or protecting the security of AI applications, the development of AI security can further deepen the penetration of AI in both the consumer and enterprise sectors.
There are already many AI security startups overseas, and the same demand exists in the Chinese market. We look forward to seeing outstanding local entrepreneurs venture into this important entrepreneurial field.