ClawHub, the official plugin center for the open-source AI Agent project OpenClaw, is facing a significant security crisis as blockchain security firm SlowMist has identified a wave of poisoned plugins. The security breach involves a large-scale supply chain poisoning attack, where malicious actors have infiltrated the platform with harmful "skills" designed to compromise user security.
SlowMist's investigation revealed that attackers are disguising malicious commands as seemingly benign "dependency installation/initialization steps" within the SKILL.md files of the plugins. These commands are often concealed using Base64 encoding to evade initial detection. Upon execution, they initiate a two-stage attack chain that can lead to severe consequences for users.
The security firm's scans have identified a staggering 341 malicious skills out of 2,857 scanned. Koi Security also participated in the investigation and flagged the 341 malicious skills. These malicious programs are designed to steal user passwords, collect sensitive host information and documents, and upload the stolen data to attacker-controlled servers. Some plugins even mimic legitimate crypto tools, YouTube utilities, or automation helpers to further deceive users. The associated malicious infrastructure has been linked to the Poseidon hacker group.
The discovery highlights the insufficient review mechanisms within ClawHub, which allowed the infiltration of numerous malicious skills. This poses significant security risks to both developers and end-users who rely on the platform for extending the functionality of their AI agents. Experts are now urging users to exercise extreme caution when installing plugins from ClawHub.
SlowMist recommends several preventative measures to mitigate the risks. Users are advised to carefully audit the "installation steps" section in all SKILL.md files, scrutinizing any commands before execution. They should also be wary of prompts requesting system passwords or broad system access. It is crucial to obtain dependencies and tools exclusively from official channels to avoid downloading compromised versions. Developers are urged to test plugins in isolated environments before deploying them to production systems.
The incident raises broader concerns about the security of AI agent platforms and the potential for supply chain attacks within these ecosystems. As AI agents become increasingly popular, they present attractive targets for malicious actors seeking to exploit vulnerabilities and compromise user data. Security experts are warning against blindly trusting and running plugin commands, emphasizing the need for independent scans and verification of software sources. Gary Marcus, a security expert, has cautioned against using AI tools, stating that they pose significant privacy and security risks. The incident serves as a stark reminder of the importance of robust security practices and continuous monitoring in the rapidly evolving landscape of AI development.
