Early 2023 has been characterized by an explosion of Artificial Intelligence (AI) breakthroughs. Image generators and large language models (LLMs) have captured global attention and fundamentally changed the Internet and the nature of modern work. But as AI / Machine Learning (ML) systems now support millions of daily users, has our understanding of the relevant security risks kept pace with this wild rate of adoption?

Researchers have alluded to this challenge clearly, pointing out a fundamental misalignment between the "priorities of practitioners and the focus of researchers." That is, there is often a gap between the viability of academic approaches and operationally-realistic attack scenarios targeting ML systems.

In this post, we'll briefly explore the current state of adversarial AI risk and take a deep dive into one of the most pressing near-term concerns: the popularity of inherently risky methods for sharing preserved machine learning models.

Using Splunk with the HuggingFace API and test results from the AI Risk Database, we can provide some quantitative insight into the most popular ML model-sharing hub, HuggingFace. Our analysis shows that more than 80% of the evaluated machine learning models in the ecosystem are distributed in pickle-serialized formats, which are vulnerable to code injection / arbitrary code execution risks. By further evaluating the import artifacts from a sample of files, we estimate that explicitly-flagged, risky pickle files currently account for less than 1% of the global population.
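To make the pickle risk concrete, the minimal sketch below (illustrative, not drawn from our analysis) shows how pickle's `__reduce__` hook lets a serialized object run an arbitrary command at load time. The `MaliciousPayload` class and the harmless `echo` command are hypothetical stand-ins for a real attacker's payload:

```python
import pickle

# Why unpickling untrusted data is dangerous: any object can define
# __reduce__ to return a callable (plus its arguments) that the pickle
# machinery will invoke during deserialization.
class MaliciousPayload:
    def __reduce__(self):
        import os
        # os.system runs the moment this payload is unpickled; a real
        # attacker would execute something far worse than `echo`.
        return (os.system, ("echo 'arbitrary code executed on load'",))

# Serializing the object embeds the callable reference in the byte stream.
payload = pickle.dumps(MaliciousPayload())

# Simply loading the "model file" executes the embedded command.
pickle.loads(payload)
```

Because the code runs before the caller ever sees the deserialized object, no amount of post-load inspection helps; the only safe posture is to never unpickle files from untrusted sources.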
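As a rough illustration of how a survey like this can be driven from the HuggingFace side, here is a sketch using the `huggingface_hub` client to sample model repositories and count those shipping pickle-style artifacts. The extension list, the `limit=200` sample size, and the skip-on-error handling are assumptions for illustration, not the exact methodology behind the numbers above:

```python
from huggingface_hub import HfApi

# Extensions commonly associated with pickle-based serialization;
# this heuristic list is an assumption, not an official taxonomy.
PICKLE_EXTENSIONS = (".bin", ".pkl", ".pickle", ".pt", ".pth", ".joblib")

api = HfApi()
total, pickle_backed = 0, 0

# Sample a small slice of the Hub; a full census would page through
# every repository instead of stopping at `limit`.
for model in api.list_models(limit=200):
    try:
        files = api.list_repo_files(model.id)
    except Exception:
        continue  # gated, private, or moved repos are skipped in this sketch
    total += 1
    if any(f.endswith(PICKLE_EXTENSIONS) for f in files):
        pickle_backed += 1

print(f"{pickle_backed}/{total} sampled models ship pickle-style artifacts")
```

In a pipeline like the one described above, per-repository results such as these would then be forwarded into Splunk for aggregation alongside AI Risk Database test results.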