Agentic LLM Vulnerability Scanner / AI red teaming kit 🧪 (Python, updated Feb 3, 2026)
Kernel-enforced agent sandbox with a security CLI and SDKs. Capability-based isolation with secure key management, atomic rollback, and a cryptographically immutable audit chain for provenance. Run your agents in a zero-trust environment.
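Immutable audit chains like the one described above are typically built by hash-linking log entries, so altering any earlier record invalidates every hash after it. A minimal sketch of the idea (not this project's actual implementation; the function names here are illustrative):

```python
import hashlib
import json

def append_event(chain, event):
    """Append an event to a hash-chained audit log.

    Each entry commits to the previous entry's hash, so tampering with
    any earlier record breaks verification of everything that follows.
    """
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    chain.append({"event": event, "prev": prev_hash,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})
    return chain

def verify_chain(chain):
    """Recompute every hash and link; return False on any mismatch."""
    prev_hash = "0" * 64
    for entry in chain:
        body = json.dumps({"event": entry["event"], "prev": entry["prev"]},
                          sort_keys=True)
        if entry["prev"] != prev_hash:
            return False
        if hashlib.sha256(body.encode()).hexdigest() != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True
```

A production system would additionally sign entries and anchor the head hash externally; the chaining shown here only makes tampering detectable, not impossible.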
Intervention layer with audit logs for OpenClaw agents. Browser-aware. Trajectory-aware. Human-routable.
This repository contains Cursor Security Rules designed to improve the security of both development workflows and AI agent usage within the Cursor environment. These rules aim to enforce safe coding practices, control sensitive operations, and reduce risk in AI-assisted development.
A plugin-based gateway that orchestrates other MCP servers and lets developers build enterprise-grade agents on top of it.
Runtime security enforcement and threat hunting engine for autonomous AI fleets. Build Swarm Detection & Response (SDR) platforms with Clawdstrike.
A native policy enforcement layer for AI coding agents. Built on OPA/Rego.
AI-first security scanner with 76 analyzers, 7,300+ detection rules, and repo poisoning detection for AI/ML, LLM agents, and MCP servers. Scan any GitHub repo with: medusa scan --git user/repo
Build Secure and Compliant AI agents and MCP Servers. YC W23
See what your AI agents can access. Scan MCP configs for exposed secrets, shadow APIs, and AI models. Generate AI-BOMs for compliance.
Security toolkit for AI agents. Scan your machine for dangerous skills and MCP configs, monitor for supply chain attacks, test prompt injection resistance, and audit live MCP servers for tool poisoning.
Scan A2A agents for potential threats and security issues
Agent orchestration & security template featuring MCP tool building, agent2agent workflows, mechanistic interpretability on sleeper agents, and agent integration via CLI wrappers
The antivirus for OpenClaw — approve dangerous actions, scan skills, block secret leaks, and keep humans in control.
Security scanner MCP server for AI coding agents. Prompt injection firewall, package hallucination detection (4.3M+ packages), 1000+ vulnerability rules with AST & taint analysis, auto-fix.
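Package hallucination detection, at its core, compares dependency names against an index of packages known to exist; LLMs sometimes invent plausible-sounding names, and installing one is a supply-chain risk if an attacker registers it. A toy sketch with a hypothetical allowlist (a real scanner, like the one above, checks against millions of indexed names):

```python
import re

# Hypothetical known-package set; a real scanner queries a package
# index snapshot covering PyPI, npm, and similar registries.
KNOWN_PACKAGES = {"requests", "numpy", "flask", "pydantic"}

def find_suspect_packages(requirements_text, known=KNOWN_PACKAGES):
    """Flag requirement names absent from the known-package set."""
    suspects = []
    for line in requirements_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # Strip version specifiers, extras, and markers to get the bare name.
        name = re.split(r"[<>=!~\[; ]", line, maxsplit=1)[0].lower()
        if name and name not in known:
            suspects.append(name)
    return suspects
```

For example, `find_suspect_packages("requests==2.31\nsuper-llm-utils>=1.0")` flags only `super-llm-utils`.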
[DEPRECATED] Moved to microsoft/agent-governance-toolkit
Open-source firewall for AI agents. Policy engine that audits and controls what OpenClaw, Claude Code, Cursor, Codex, and any AI tool can do on your machine.
AI Agent Security Middleware — 8-layer defense, DLP data flow, prompt injection detection, zero dependencies. SDK + OpenClaw plugin.
Fine-grained authorization for AI agents using OpenFGA.
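Relationship-based authorization of the kind OpenFGA provides answers checks of the form "does subject S have relation R on object O" against stored tuples. An in-memory illustration of that check (this is a conceptual sketch, not the OpenFGA API, and the identifiers are made up):

```python
# Relationship tuples in the (subject, relation, object) style used by
# ReBAC systems; a real deployment stores these in the OpenFGA server
# and supports relation rewrites, not just direct lookups.
TUPLES = {
    ("agent:billing-bot", "can_read", "db:invoices"),
    ("agent:billing-bot", "can_call", "tool:send-email"),
}

def check(subject, relation, obj, tuples=TUPLES):
    """Return True only if an explicit tuple grants the permission."""
    return (subject, relation, obj) in tuples
```

The deny-by-default shape matters for agents: any tool call or data access not explicitly granted fails closed.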
AIM - The open-source NHI platform for AI agents. Cryptographic identity, governance, and access control.