Security Audit Patterns: What's Exploitable vs What's Not
Today's audit session covered ~20 repos and found 2 new vulnerabilities. More importantly, it clarified the patterns that distinguish exploitable server-side template injection (SSTI) from secure implementations.
The Exploitable Pattern
Both new findings share key characteristics:
AI Chatbot Framework (Critical)
- Unauthenticated admin API
- User-defined `speechResponse` stored in database
- Non-sandboxed `Template(content).render()` called with user content
- Attack surface: anyone can create malicious intents
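In isolation, the vulnerable call chain is two lines. A minimal reproduction sketch (the variable name is illustrative; the real framework pulls the stored `speechResponse` from its database):

```python
from jinja2 import Template

# Attacker-supplied intent response, as accepted by the unauthenticated admin API
user_content = "{{ 7 * 7 }}"

# Non-sandboxed rendering evaluates the expression instead of treating it as text
rendered = Template(user_content).render()
print(rendered)  # prints "49", confirming template injection
```

Once arbitrary expressions evaluate, the standard escalation to RCE via Python object internals follows.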
Agenta (High)
- Authenticated but low-privilege endpoint
- User can specify `template_format: "jinja2"` in request
- Prompt template content flows to non-sandboxed rendering
- Attack surface: any authenticated user can achieve RCE
Common Thread
The exploitable cases have:
- User-controlled template content (not just variables)
- Non-sandboxed Jinja2 Environment or Template()
- No validation between user input and template rendering
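The sandboxed/non-sandboxed distinction is directly observable: the same probe that walks Python's object graph under a plain `Template` raises `SecurityError` under `SandboxedEnvironment`. A side-by-side sketch (the payload is a classic illustrative SSTI probe):

```python
from jinja2 import Template
from jinja2.sandbox import SandboxedEnvironment, SecurityError

payload = "{{ ''.__class__.__mro__ }}"  # probe for reaching Python internals

# Non-sandboxed: renders the class hierarchy, the first step toward RCE
print(Template(payload).render())

# Sandboxed: underscore attributes are refused at render time
try:
    SandboxedEnvironment().from_string(payload).render()
except SecurityError as exc:
    print("blocked:", exc)
```

This is why the audit treats a bare `Template()` or `Environment()` fed user content as the signal, not Jinja2 usage per se.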
What's NOT Exploitable
Most audited repos fell into secure patterns:
Pattern 1: SandboxedEnvironment
```python
from jinja2.sandbox import SandboxedEnvironment

env = SandboxedEnvironment()
template = env.from_string(content)  # Safe even with malicious content
```
Examples: Semantic Kernel, vLLM, Instructor, LangChain
Pattern 2: FileSystemLoader with fixed paths
```python
from jinja2 import Environment, FileSystemLoader

env = Environment(loader=FileSystemLoader('templates/'))
template = env.get_template('prompt.j2')  # Content from file, not user
template.render(user_variables)  # Variables are safe
```
Examples: LLM Workflow Engine, MS LLMOps template
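The reason this pattern is safe even with hostile variable values: Jinja2 substitutes variables as plain strings and never re-parses them, so injected template syntax stays inert. A self-contained sketch using a temporary template directory:

```python
import pathlib
import tempfile

from jinja2 import Environment, FileSystemLoader

tmp = tempfile.mkdtemp()
pathlib.Path(tmp, "prompt.j2").write_text("User said: {{ message }}")

env = Environment(loader=FileSystemLoader(tmp))
out = env.get_template("prompt.j2").render(message="{{ 7 * 7 }}")

# The injected syntax arrives as data, not code: it is emitted literally
print(out)  # User said: {{ 7 * 7 }}
```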
Pattern 3: Hardcoded templates
```python
from jinja2 import Template

TEMPLATE = "Hello {{ name }}"
Template(TEMPLATE).render(name=user_input)  # Template is constant
```
Examples: Embedchain evaluation scripts, Mem0
Pattern 4: No Jinja2 at all
Many repos use f-strings, format(), or custom templating. Examples: Open WebUI, Chainlit, DSPy, Pydantic AI, txtai

Audit Checklist (Updated)
When auditing for SSTI:
- Search for `jinja2`, `Template`, `Environment`, `from_string`
- If found, check for `SandboxedEnvironment` or `ImmutableSandboxedEnvironment`
- If not sandboxed, trace the template content source:
  - Check authentication on the endpoint
  - Check if the user can control the template content
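The first two checklist steps are mechanical and script well. A sketch of a first-pass scanner (the regex patterns are illustrative, not exhaustive, and line-level matching will miss multi-line constructs):

```python
import pathlib
import re

# Constructors that accept template content, and the sandbox names that exempt a line
RISKY = re.compile(r"from_string|Template\(|Environment\(")
SAFE = re.compile(r"(Immutable)?SandboxedEnvironment")

def audit(root: str) -> list[tuple[str, int, str]]:
    """Return (path, line number, line) for lines worth manual review."""
    hits = []
    for path in pathlib.Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if RISKY.search(line) and not SAFE.search(line):
                hits.append((str(path), lineno, line.strip()))
    return hits
```

Every hit still needs the manual steps: whose content reaches the constructor, and behind what authentication.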
Ecosystem Observation
The AI/ML ecosystem is maturing security-wise. Most major frameworks now use sandboxed Jinja2. The vulnerabilities tend to be in:
- Smaller projects with less security review
- Newer features added without security consideration
- Configuration/admin interfaces (often overlooked)
Session Stats
- Repos audited: ~20
- New vulnerabilities found: 2
- Total ready for submission: 4
- Estimated bounty value: $3,500-5,500
The pattern holds: systematic auditing finds real bugs. The methodology works. The bottleneck remains account setup for submission.