# How to Add Web Search to LangChain Agents (Python)
## Why Agents Need Web Search
LLMs have a knowledge cutoff. When a user asks about today's news, recent prices, or current documentation, the model either hallucinates or says "I don't know." Web search tools solve this by letting the agent retrieve real-time information from Google.
## Prerequisites
```bash
pip install langchain langchain-openai httpx
```
You'll also need a [Searlo API key](https://dashboard.searlo.tech/auth) (free, 3,000 credits included).
## Step 1: Create the Search Tool
```python
import httpx
from langchain.tools import StructuredTool
from pydantic import BaseModel, Field

SEARLO_API_KEY = "your_api_key_here"

class SearchInput(BaseModel):
    query: str = Field(description="The search query")

def web_search(query: str) -> str:
    """Search the web using the Searlo API and return formatted results."""
    response = httpx.get(
        "https://api.searlo.tech/api/v1/search",
        params={"q": query, "num": 5},
        headers={"X-API-Key": SEARLO_API_KEY},
        timeout=10.0,
    )
    response.raise_for_status()  # surface HTTP errors instead of parsing a bad body
    data = response.json()
    results = data.get("organic", [])
    if not results:
        return "No results found."
    return "\n".join(
        f"[{r['position']}] {r['title']}\n{r['snippet']}\nURL: {r['link']}"
        for r in results
    )

search_tool = StructuredTool.from_function(
    func=web_search,
    name="web_search",
    description=(
        "Search Google for real-time information. Use this when you need "
        "current data, news, or facts you're not sure about."
    ),
    args_schema=SearchInput,
)
```
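The formatter above assumes Searlo's response shape: an `organic` list whose items carry `position`, `title`, `snippet`, and `link` keys. A quick sanity check of that formatting logic, decoupled from the network call (the `format_results` helper and the sample payload are illustrative, not part of the Searlo SDK):

```python
def format_results(organic: list[dict]) -> str:
    """Format search hits the same way web_search does (assumed schema)."""
    return "\n".join(
        f"[{r['position']}] {r['title']}\n{r['snippet']}\nURL: {r['link']}"
        for r in organic
    )

sample = [
    {
        "position": 1,
        "title": "Example Result",
        "snippet": "A short snippet.",
        "link": "https://example.com",
    },
]
print(format_results(sample))
```

Testing the formatter this way catches schema mismatches (a renamed key, a missing field) without spending search credits.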
## Step 2: Create the Agent
```python
from langchain_openai import ChatOpenAI
from langchain.agents import AgentExecutor, create_openai_functions_agent
from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder

llm = ChatOpenAI(model="gpt-4o", temperature=0)

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful research assistant. Use the web_search tool to find current information when needed. Always cite your sources with URLs."),
    ("human", "{input}"),
    MessagesPlaceholder("agent_scratchpad"),
])

agent = create_openai_functions_agent(llm, [search_tool], prompt)
executor = AgentExecutor(agent=agent, tools=[search_tool], verbose=True)
```
## Step 3: Run It
```python
result = executor.invoke({
    "input": "What are the latest developments in quantum computing this week?"
})
print(result["output"])
```
The agent will:
1. Recognize it needs current information
2. Call `web_search` with an appropriate query
3. Read the results
4. Synthesize a response with source citations
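Under the hood, those four steps are a plain tool-calling loop: ask the model, execute any tool it requests, feed the observation back, repeat until it produces a final answer. A minimal sketch of that control flow with the LLM stubbed out (`run_agent`, `fake_llm`, and the action dict shapes are illustrative, not LangChain internals):

```python
def run_agent(llm_step, tools: dict, user_input: str) -> str:
    """Minimal tool-calling loop: the model either requests a tool or answers."""
    scratchpad = []  # accumulated (tool_name, tool_output) observations
    for _ in range(5):  # cap iterations, like AgentExecutor's max_iterations
        action = llm_step(user_input, scratchpad)
        if action["type"] == "final":
            return action["output"]
        observation = tools[action["tool"]](action["args"])
        scratchpad.append((action["tool"], observation))
    return "Stopped: iteration limit reached."

# Scripted stand-in for the model: search once, then answer from the result.
def fake_llm(user_input, scratchpad):
    if not scratchpad:
        return {"type": "tool", "tool": "web_search", "args": "quantum computing news"}
    return {"type": "final", "output": f"Summary based on: {scratchpad[0][1]}"}

tools = {"web_search": lambda q: f"[1] Result for '{q}'"}
print(run_agent(fake_llm, tools, "Latest quantum computing news?"))
```

`AgentExecutor` adds prompt management, output parsing, and error handling on top of this loop, but the shape of the conversation is the same.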
## Tips for Production
• **Set a timeout**: Searlo responds in ~300ms, but network conditions vary. Use a 10-second timeout.
• **Limit results**: `num=5` is usually enough. More results cost the same (1 credit per search) but add more tokens to the LLM context.
• **Use TOON format**: Add `&format=toon` to the API call to get token-optimized output that's about 60% smaller, which saves money on LLM calls.
• **Cache frequent queries**: If multiple users ask similar questions, cache search results for 5-10 minutes.
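The caching tip can be sketched with a small in-process TTL cache around the search function. This is a minimal stdlib sketch (the `cached_search` name and module-level dict are illustrative; a production deployment would more likely use Redis or a shared cache):

```python
import time

_CACHE: dict[str, tuple[float, str]] = {}  # query -> (timestamp, results)
TTL_SECONDS = 300  # 5 minutes, per the tip above

def cached_search(query: str, fetch) -> str:
    """Return cached results if still fresh; otherwise call fetch(query) and store."""
    now = time.monotonic()
    hit = _CACHE.get(query)
    if hit is not None and now - hit[0] < TTL_SECONDS:
        return hit[1]  # cache hit: no credit spent
    results = fetch(query)
    _CACHE[query] = (now, results)
    return results

# Usage with the tool from Step 1: cached_search("langchain agents", web_search)
```

Passing the fetch function as a parameter keeps the cache testable without network access and lets you reuse it for other tools.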
## Next Steps
• See the full [Integrations page](/integrations) for CrewAI, LlamaIndex, and n8n examples
• Learn about [MCP protocol](/mcp) for zero-code AI tool integration
• Read about [SERP API for AI agents](/search-api-for-ai-agents) for more architecture patterns