## Overview
This plugin provides static security analysis for GitHub Actions workflows that invoke AI coding agents. It identifies attack vectors where attacker-controlled input (pull request titles, branch names, issue bodies, comments, commit messages) can reach an AI agent running with elevated permissions in CI.

Author: Emilio López & Will Vandevanter
Version: 1.2.0
## Attack Vectors Detected
The plugin checks for nine categories of security issues:

- **Env Var Intermediary**: attacker data flows through `env:` blocks to AI prompt fields with no visible `${{ }}` expressions
- **Direct Expression Injection**: `${{ github.event.* }}` expressions embedded directly in AI prompt fields
- **CLI Data Fetch**: `gh` CLI commands in prompts fetch attacker-controlled content at runtime
- **PR Target + Checkout**: `pull_request_target` trigger combined with checkout of PR head code
- **Error Log Injection**: CI error output or build logs fed to AI prompts carry attacker payloads
- **Subshell Expansion**: restricted tools like `echo` allow a subshell-expansion bypass (`echo $(env)`)
- **Eval of AI Output**: AI response flows to `eval`, `exec`, or unquoted `$()` in subsequent steps
- **Dangerous Sandbox Configs**: `danger-full-access`, `Bash(*)`, or `--yolo` disable safety protections
- **Wildcard Allowlists**: `allowed_non_write_users: "*"` or `allow-users: "*"` permit any user to trigger the agent

## Supported AI Actions
| Action | Repository | Status |
|---|---|---|
| Claude Code Action | anthropics/claude-code-action | Supported |
| Gemini CLI | google-github-actions/run-gemini-cli | Primary |
| Gemini CLI (legacy) | google-gemini/gemini-cli-action | Archived |
| OpenAI Codex | openai/codex-action | Supported |
| GitHub AI Inference | actions/ai-inference | Supported |
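For reference, a workflow step the auditor would match on its `uses:` field looks roughly like the following. This is a hypothetical minimal sketch; the version tag and input names are illustrative assumptions, not the plugin's own example:

```yaml
# Hypothetical step referencing a supported AI action
- uses: anthropics/claude-code-action@v1   # matched against the table above
  with:
    prompt: "Triage this issue and suggest a fix"
    anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
```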
## Installation
Install the plugin from a project with the Trail of Bits internal marketplace configured.

## Usage
### Local Repository Analysis
The skill activates automatically when Claude detects GitHub Actions workflow files containing AI agent action references.

### Remote Repository Analysis
Analyze remote repositories by providing a GitHub URL or an `owner/repo` identifier.
Remote analysis requires GitHub authentication; run `gh auth login` if you encounter auth errors.

## Example Vulnerability: Env Var Intermediary
This is the most commonly missed attack vector because it contains no visible `${{ }}` expressions in the prompt.

1. `github.event.issue.body` flows into an `env:` block (e.g. `ISSUE_BODY`), evaluated before the step runs
2. A prompt instruction references `"${ISSUE_BODY}"`
3. Gemini reads the env var at runtime
4. Attacker content reaches the AI context
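A minimal sketch of this pattern. The workflow is hypothetical: the trigger, job name, version tag, and input names are illustrative assumptions.

```yaml
on:
  issues:
    types: [opened]

jobs:
  triage:
    runs-on: ubuntu-latest
    steps:
      - uses: google-github-actions/run-gemini-cli@v0
        env:
          # Tainted data enters here; the expression is evaluated
          # before the step runs, so the prompt below stays "clean"
          ISSUE_BODY: ${{ github.event.issue.body }}
        with:
          # No ${{ }} visible, but the agent reads ISSUE_BODY at runtime
          prompt: 'Summarize the issue in "${ISSUE_BODY}" and apply labels'
```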
## Example Vulnerability: Direct Expression Injection
1. Attacker opens a PR with title: `"; rm -rf / #`
2. The malicious content flows directly into the AI prompt
3. Claude executes with the tainted prompt, potentially running attacker commands
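A minimal sketch of the vulnerable shape (hypothetical workflow; the step inputs are illustrative assumptions):

```yaml
on: pull_request_target

jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: anthropics/claude-code-action@v1
        with:
          # ${{ }} is expanded before the agent runs, so the
          # attacker-controlled PR title lands verbatim in the prompt
          prompt: "Review the PR titled: ${{ github.event.pull_request.title }}"
```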
## Example Vulnerability: Dangerous Sandbox Config
- No filesystem restrictions
- No command filtering
- Full system access for AI-generated code
- Combined with any injection vector = critical severity
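A sketch of the kind of configuration flagged here. The step is hypothetical and the input name is an assumption (exact input names vary by action), but `--yolo` is one of the real dangerous flags listed above:

```yaml
- uses: google-github-actions/run-gemini-cli@v0
  with:
    prompt: "Fix the failing tests and push a commit"
    # Assumed input name for passing CLI flags; --yolo auto-approves
    # every tool call, removing command filtering and FS restrictions
    gemini_cli_args: "--yolo"
```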
## Audit Methodology
The plugin follows a systematic five-step process:

1. **Discover Workflow Files**: scan `.github/workflows/*.yml` and `.github/workflows/*.yaml` for workflow definitions
2. **Identify AI Action Steps**: match `uses:` fields against known AI action references, following cross-file composite actions and reusable workflows
3. **Capture Security Context**: extract trigger events, env blocks, permissions, sandbox configs, and user allowlists
4. **Analyze Attack Vectors**: check all nine attack vectors against the captured security context with detailed data-flow analysis
5. **Report Findings**: produce a structured report for each finding, in the format described below
## Report Format
Findings include:

- Severity: High / Medium / Low / Info, based on trigger exposure, sandbox config, permissions, and data flow directness
- File: Workflow path with clickable GitHub links for remote analysis
- Step: Job and step reference with line numbers
- Impact: What an attacker can achieve
- Evidence: YAML code snippets showing the vulnerable pattern
- Data Flow: Numbered trace from attacker action to AI agent
- Remediation: Action-specific secure configuration guidance
## Target Audience

- **Security Auditors**: reviewing repositories that use AI agents in CI/CD
- **Developers**: configuring AI actions securely in workflows
- **DevSecOps Engineers**: establishing secure defaults for AI-assisted pipelines
## Common Rationalizations to Reject
| Rationalization | Why It's Wrong |
|---|---|
| "It only runs on PRs from maintainers" | Ignores `pull_request_target` and `issue_comment` triggers that expose actions to external input without write access |
| "We use `allowed_tools` to restrict what it can do" | Tool restrictions can still be weaponized. Even `echo` can exfiltrate via `echo $(env)`. Limited tools ≠ safe tools |
| "There's no `${{ }}` in the prompt, so it's safe" | Classic env var intermediary miss. Data flows through `env:` blocks with zero visible expressions |
| "The sandbox prevents any real damage" | Sandbox misconfigurations disable protections entirely. Even properly sandboxed runs can leak secrets via env vars |
## Clean Repository Output

When no findings are detected, the plugin still produces a substantive report.

## Related Skills
- `differential-review`: security-focused PR review that can identify vulnerable workflow changes
- `fp-check`: verify suspected vulnerabilities discovered by this auditor