Tutorial

How to Secure Your LangChain Agent in 5 Minutes

February 2, 2026 • 5 min read

LangChain makes it easy to build powerful AI agents, but that power brings real risk. If you haven't already, read why AI agents need permissions first.

In this tutorial, you'll learn how to add a security layer to your LangChain agent in under 5 minutes. Using other frameworks? Check our guides for CrewAI and AutoGPT.

What You'll Add

  - Permission scopes on every tool your agent can call
  - Human approval for sensitive actions
  - Rate limits on tool usage
  - An audit log of every action

Step 1: Install Agent Shield

pip install agent-shield langchain

Step 2: Wrap Your Tools

from langchain.agents import Tool
from agentshield import AgentShield

shield = AgentShield(api_key="as_live_xxx")

# Your existing tool
def send_email(to: str, subject: str, body: str):
    # email sending logic
    pass

# Wrap it with Agent Shield
@shield.protect(scope="email.send")
def secure_send_email(to: str, subject: str, body: str):
    return send_email(to, subject, body)

# Use in LangChain
email_tool = Tool(
    name="send_email",
    func=secure_send_email,
    description="Send an email"
)

Step 3: Add Human Approval for Sensitive Actions

Sensitive actions, like moving money, should require explicit human sign-off before they run. For deeper implementation patterns, see our guide on human-in-the-loop approval workflows:

@shield.protect(scope="payments.send", require_approval=True)
def transfer_money(amount: float, recipient: str):
    # This won't execute until a human approves
    process_payment(amount, recipient)

Step 4: Configure Rate Limits

In your Agent Shield dashboard, set rate limits on each scope so a misbehaving agent can't fire off hundreds of actions per minute. For advanced configurations, see our complete guide to rate limiting for AI agents.
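Rate limiting is simple in concept. As an illustration only (this is a generic token-bucket sketch, not Agent Shield's actual implementation), here is the kind of policy a dashboard limit typically enforces:

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: allows `capacity` calls up
    front, then refills at `rate` tokens per second."""

    def __init__(self, capacity: int, rate: float):
        self.capacity = capacity
        self.rate = rate
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the time elapsed, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=3, rate=1.0)  # burst of 3, then ~1 call/sec
results = [bucket.allow() for _ in range(5)]
print(results)  # [True, True, True, False, False]
```

The burst capacity absorbs normal agent behavior; the refill rate caps sustained abuse.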

Full Example

from langchain.agents import initialize_agent, Tool
from langchain.llms import OpenAI
from agentshield import AgentShield

shield = AgentShield(api_key="as_live_xxx")

@shield.protect(scope="search.web")
def search_web(query: str):
    # search implementation
    pass

@shield.protect(scope="email.send", require_approval=True)
def send_email(to: str, subject: str, body: str):
    # email implementation
    pass

tools = [
    Tool(name="search", func=search_web, description="Search the web"),
    Tool(name="email", func=send_email, description="Send email"),
]

llm = OpenAI(temperature=0)
agent = initialize_agent(tools, llm, agent="zero-shot-react-description")

# Now your agent is protected!
agent.run("Research AI news and email me a summary")

What Happens Behind the Scenes

Every time your agent tries to use a tool:

  1. Agent Shield checks whether the action is permitted
  2. It verifies rate limits haven't been exceeded
  3. If require_approval=True, it waits for human approval
  4. It logs the action for audit
  5. If all checks pass, it executes the action
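The pipeline above can be sketched as a decorator. This is a toy illustration of the flow, not AgentShield's real internals; the class and parameter names here are hypothetical:

```python
import functools

class Shield:
    """Toy sketch of the check pipeline: permission -> rate limit ->
    approval -> audit log -> execute. Not the real AgentShield."""

    def __init__(self, permitted_scopes, rate_limit=100,
                 approver=lambda scope, args: True):
        self.permitted = set(permitted_scopes)
        self.rate_limit = rate_limit
        self.calls = {}            # scope -> call count so far
        self.approver = approver   # stands in for a human approval step
        self.audit_log = []

    def protect(self, scope, require_approval=False):
        def decorator(func):
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                # 1. Is the action permitted at all?
                if scope not in self.permitted:
                    raise PermissionError(f"scope {scope!r} not permitted")
                # 2. Have rate limits been exceeded?
                self.calls[scope] = self.calls.get(scope, 0) + 1
                if self.calls[scope] > self.rate_limit:
                    raise RuntimeError(f"rate limit exceeded for {scope!r}")
                # 3. Wait for human approval if required
                if require_approval and not self.approver(scope, args):
                    raise PermissionError(f"approval denied for {scope!r}")
                # 4. Log the action for audit
                self.audit_log.append((scope, func.__name__, args))
                # 5. All checks passed: execute
                return func(*args, **kwargs)
            return wrapper
        return decorator

shield = Shield(permitted_scopes={"email.send"}, rate_limit=2)

@shield.protect(scope="email.send")
def send_email(to):
    return f"sent to {to}"

print(send_email("a@example.com"))  # sent to a@example.com
print(len(shield.audit_log))        # 1
```

Note that the audit entry is written before execution, so even an action that later fails leaves a trace.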

Ready to secure your LangChain agent?

Start Free →
