How to Secure Your LangChain Agent in 5 Minutes
LangChain makes it easy to build powerful AI agents. But with great power comes great responsibility — and risk. If you haven't already, read why AI agents need permissions first.
In this tutorial, you'll learn how to add a security layer to your LangChain agent in under 5 minutes. Using other frameworks? Check our guides for CrewAI and AutoGPT.
What You'll Add
- ✅ Permission scopes (control what the agent can do)
- ✅ Rate limiting (prevent runaway costs)
- ✅ Audit logging (know what happened)
- ✅ Human approval for sensitive actions
Step 1: Install Agent Shield
```bash
pip install agent-shield langchain
```
Step 2: Wrap Your Tools
```python
from langchain.agents import Tool
from agentshield import AgentShield

shield = AgentShield(api_key="as_live_xxx")

# Your existing tool
def send_email(to: str, subject: str, body: str):
    # email sending logic
    pass

# Wrap it with Agent Shield
@shield.protect(scope="email.send")
def secure_send_email(to: str, subject: str, body: str):
    return send_email(to, subject, body)

# Use in LangChain
email_tool = Tool(
    name="send_email",
    func=secure_send_email,
    description="Send an email"
)
```
Step 3: Add Human Approval for Sensitive Actions
Mark sensitive tools with require_approval=True so they pause until a human signs off. For deeper implementation patterns, see our guide on human-in-the-loop approval workflows.
```python
@shield.protect(scope="payments.send", require_approval=True)
def transfer_money(amount: float, recipient: str):
    # This won't execute until a human approves
    process_payment(amount, recipient)
```
Step 4: Configure Rate Limits
In your Agent Shield dashboard, set rate limits to cap what the agent can do, for example:
- Max 10 emails per hour
- Max 100 API calls per minute
- Max $50 in transactions per day
For advanced configurations, see our complete guide to rate limiting for AI agents.
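To build intuition for what a per-hour cap does, here is a minimal local sketch of a sliding-window limiter. This is an illustration only, not Agent Shield's implementation; in practice the enforcement happens server-side via the dashboard settings above.

```python
import time
from collections import deque

class SlidingWindowLimiter:
    """Allow at most `max_calls` events per `window_seconds`."""

    def __init__(self, max_calls: int, window_seconds: float):
        self.max_calls = max_calls
        self.window_seconds = window_seconds
        self.timestamps = deque()

    def allow(self, now=None) -> bool:
        now = time.monotonic() if now is None else now
        # Drop events that have aged out of the window
        while self.timestamps and now - self.timestamps[0] >= self.window_seconds:
            self.timestamps.popleft()
        if len(self.timestamps) < self.max_calls:
            self.timestamps.append(now)
            return True
        return False

# "Max 10 emails per hour" expressed as a local guard
email_limiter = SlidingWindowLimiter(max_calls=10, window_seconds=3600)
```

The sliding window (rather than a fixed hourly bucket) prevents an agent from bursting 10 emails at 1:59 and 10 more at 2:01.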
Full Example
```python
from langchain.agents import initialize_agent, Tool
from langchain.llms import OpenAI
from agentshield import AgentShield

shield = AgentShield(api_key="as_live_xxx")

@shield.protect(scope="search.web")
def search_web(query: str):
    # search implementation
    pass

@shield.protect(scope="email.send", require_approval=True)
def send_email(to: str, subject: str, body: str):
    # email implementation
    pass

tools = [
    Tool(name="search", func=search_web, description="Search the web"),
    Tool(name="email", func=send_email, description="Send email"),
]

llm = OpenAI(temperature=0)
agent = initialize_agent(tools, llm, agent="zero-shot-react-description")

# Now your agent is protected!
agent.run("Research AI news and email me a summary")
```
What Happens Behind the Scenes
Every time your agent tries to use a tool:
- Agent Shield checks if the action is permitted
- Verifies rate limits haven't been exceeded
- If require_approval=True, waits for human approval
- Logs the action for audit
- If all checks pass, executes the action
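The checks above can be sketched as a decorator pipeline. This is a conceptual illustration, not Agent Shield's actual code: the parameter names (`permitted_scopes`, `approver`, `audit_log`) are hypothetical, and the real rate-limit check is elided.

```python
import functools
import time

def protect(scope, *, permitted_scopes, require_approval=False,
            approver=lambda scope: True, audit_log=None):
    """Hypothetical sketch of the check sequence described above."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            # 1. Is the action's scope permitted at all?
            if scope not in permitted_scopes:
                raise PermissionError(f"scope {scope!r} not permitted")
            # 2. (Rate-limit check would run here.)
            # 3. Wait for human approval when required
            if require_approval and not approver(scope):
                raise PermissionError(f"approval denied for {scope!r}")
            # 4. Log the action for audit
            if audit_log is not None:
                audit_log.append({"scope": scope, "time": time.time()})
            # 5. All checks passed: execute the action
            return func(*args, **kwargs)
        return wrapper
    return decorator

# Usage: a tool outside the permitted scopes raises before it ever runs
log = []

@protect("email.send", permitted_scopes={"email.send"}, audit_log=log)
def send(to: str) -> str:
    return "sent to " + to
```

The key design point is that every check happens before the wrapped function runs, so a denied action never executes at all.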
Ready to secure your LangChain agent?
Start Free →