Automated security scanner for AI agent skills. The App Store review process for the OpenClaw ecosystem.
AI agents are becoming powerful, and with power comes risk. When you install a skill from OpenClaw, GitHub, or any other third-party source, you're granting an AI agent new capabilities. A malicious skill can exfiltrate secrets, execute arbitrary code, or hijack your agent's behavior through prompt injection.

Panguard Skill Auditor is the security gate between untrusted skills and your AI agents. Think of it as the App Store review process, but for AI skills.
The OpenClaw ecosystem makes it easy to discover and install skills for AI agents like Claude Code, Cursor, and Windsurf. But this openness creates a new attack surface:
| Attack Vector | What It Does | How Common |
| --- | --- | --- |
| Prompt Injection | Overrides agent instructions to change behavior | Very common |
| Tool Poisoning | Embeds reverse shells, `curl \| bash`, or `sudo` commands | Growing |
| Hidden Unicode | Zero-width characters that hide malicious instructions from human review | Emerging |
| Encoded Payloads | Base64-encoded `eval()` or `exec()` calls that bypass text scanning | Emerging |
| Secret Exfiltration | Reads `.env`, `.ssh/`, `.aws/` and sends data to external servers | Common |
| Excessive Permissions | Requests more access than the skill actually needs | Very common |
Manual review catches some of these, but zero-width Unicode is invisible to the human eye, and Base64 payloads must be decoded before they can be inspected.
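To make the last point concrete, here is a minimal sketch of what automated checks for those two invisible-to-humans vectors might look like. This is not Panguard's actual implementation; the character set, regexes, and `scan_skill_text` function are illustrative assumptions, and a real scanner would cover far more of Unicode's format-character range and many more payload patterns.

```python
import base64
import re

# Hypothetical subset of zero-width / invisible characters used to hide
# instructions from human reviewers (a real scanner would cover more).
ZERO_WIDTH = {"\u200b", "\u200c", "\u200d", "\u2060", "\ufeff"}

# Calls that are suspicious once a Base64 blob has been decoded.
SUSPICIOUS = re.compile(r"\b(eval|exec)\b|curl\s.*\|\s*bash")

# Runs of 20+ Base64-alphabet characters are candidate encoded payloads.
B64_BLOB = re.compile(r"[A-Za-z0-9+/=]{20,}")


def scan_skill_text(text: str) -> list[str]:
    """Return human-readable findings for one skill file's text."""
    findings = []

    # 1) Hidden Unicode: count characters a human reviewer cannot see.
    hidden = sum(1 for c in text if c in ZERO_WIDTH)
    if hidden:
        findings.append(f"hidden unicode: {hidden} zero-width char(s)")

    # 2) Encoded payloads: decode candidate blobs and re-scan the plaintext.
    for blob in B64_BLOB.findall(text):
        try:
            decoded = base64.b64decode(blob, validate=True).decode("utf-8", "replace")
        except Exception:
            continue  # not valid Base64 after all
        if SUSPICIOUS.search(decoded):
            findings.append(f"encoded payload decodes to: {decoded[:40]!r}")

    return findings
```

A skill containing `"do\u200bthis"` plus a Base64 encoding of `eval(input())` would produce two findings, while plain benign text produces none; the key design point is that the scanner normalizes (decodes) content before pattern matching, which is exactly what plain text scanning skips.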