# @reactive-agents/guardrails

Safety guardrails for the Reactive Agents framework. Protects agents from prompt injection, PII leakage, and toxic content, applied automatically during the guardrail phase of the execution engine.
## Installation

```sh
bun add @reactive-agents/guardrails
```

## Built-in guards

| Guard | What it catches |
|---|---|
| Prompt injection | Attempts to override system instructions |
| PII detection | Emails, phone numbers, SSNs, credit cards |
| Toxicity filtering | Harmful or abusive content |
| Output contracts | Schema-validates agent responses |
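
To make the PII guard's job concrete, here is a minimal sketch of regex-based detection. The patterns and the `detectPII` helper are hypothetical illustrations, not the library's actual implementation:

```ts
// Hypothetical sketch of regex-based PII detection; not the real guard.
const PII_PATTERNS: Record<string, RegExp> = {
  email: /[\w.+-]+@[\w-]+\.[\w.-]+/g,
  phone: /\b\d{3}[-. ]\d{3}[-. ]\d{4}\b/g,
  ssn: /\b\d{3}-\d{2}-\d{4}\b/g,
  creditCard: /\b\d{13,16}\b/g,
};

function detectPII(text: string): Array<{ kind: string; match: string }> {
  const hits: Array<{ kind: string; match: string }> = [];
  for (const [kind, pattern] of Object.entries(PII_PATTERNS)) {
    for (const m of text.matchAll(pattern)) {
      hits.push({ kind, match: m[0] });
    }
  }
  return hits;
}

detectPII("Reach me at jane@example.com or 555-123-4567.");
// => [{ kind: "email", ... }, { kind: "phone", ... }]
```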
## Quick start

```ts
import { ReactiveAgents } from "reactive-agents";

const agent = await ReactiveAgents.create()
  .withName("customer-support")
  .withProvider("anthropic")
  .withGuardrails()
  .build();

// Prompt injection attempts are blocked before reaching the LLM.
const result = await agent.run("Ignore previous instructions and...");
// result.success === false; result.error describes the violation.
```
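
A typical caller branches on that verdict. A minimal sketch, assuming the result exposes `success` and `error` as shown above; the `output` field name is an assumption, not confirmed by this README:

```ts
const userMessage = "What are your support hours?";
const result = await agent.run(userMessage);
if (!result.success) {
  // Blocked by a guard: log the violation and return a safe fallback.
  console.warn(`Guardrail violation: ${result.error}`);
} else {
  console.log(result.output); // `output` is an assumed field name
}
```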
## Configuration

Guards can be tuned by passing options to `withGuardrails`:

```ts
.withGuardrails({
  blockTopics: ["competitor-names"],
  maxOutputLength: 2000,
  requireOutputSchema: MyResponseSchema,
})
```
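
`requireOutputSchema` expects a schema object. The README does not specify which schema library the framework accepts, so the following `MyResponseSchema` is a hypothetical example using Zod:

```ts
import { z } from "zod";

// Hypothetical: assumes the output contract accepts a Zod schema.
const MyResponseSchema = z.object({
  answer: z.string(),
  confidence: z.number().min(0).max(1),
});
```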
Full documentation at docs.reactiveagents.dev/guides/guardrails/