Integrations

Plug Searlo Into Your Stack

Copy-paste integration code for LangChain, CrewAI, LlamaIndex, n8n, Python, and Node.js. Get real-time Google SERP data in your app in under 5 minutes.

🦜 LangChain

AI Framework

Use Searlo as a LangChain tool for agent web search and RAG retrieval

Custom Tool for agents · WebSearchRetriever · RAG pipeline integration
# pip install langchain langchain-openai httpx
from langchain.tools import tool
import httpx

@tool
def searlo_search(query: str) -> str:
    """Search the web using Searlo SERP API."""
    resp = httpx.get(
        "https://api.searlo.tech/v1/search/web",
        params={"q": query, "toon": "true"},
        headers={"X-API-Key": "your_key"},
    )
    resp.raise_for_status()
    return resp.json()["toon"]  # Token-optimized output

# Use in a LangChain agent
from langchain.agents import initialize_agent, AgentType
from langchain_openai import ChatOpenAI

agent = initialize_agent(
    tools=[searlo_search],
    llm=ChatOpenAI(model="gpt-4o"),
    agent=AgentType.OPENAI_FUNCTIONS,
)
agent.invoke({"input": "What are the latest AI trends?"})
🚀 CrewAI

AI Framework

Give your CrewAI agents real-time web search with Searlo

Custom BaseTool · Agent web access · Multi-agent workflows
# pip install crewai httpx
from crewai import Agent, Task, Crew
from crewai.tools import BaseTool
import httpx

class SearloSearchTool(BaseTool):
    name: str = "Web Search"
    description: str = "Search the web for current information"

    def _run(self, query: str) -> str:
        resp = httpx.get(
            "https://api.searlo.tech/v1/search/web",
            params={"q": query, "toon": "true"},
            headers={"X-API-Key": "your_key"},
        )
        resp.raise_for_status()
        return resp.json()["toon"]

researcher = Agent(
    role="Research Analyst",
    goal="Find accurate, current information",
    backstory="An analyst who verifies claims against live web results.",
    tools=[SearloSearchTool()],
)
🦙 LlamaIndex

AI Framework

Add web search retrieval to your LlamaIndex RAG pipeline

Custom QueryEngine · Web retrieval node · Index augmentation
# pip install llama-index httpx
from llama_index.core.tools import FunctionTool
import httpx

def web_search(query: str) -> str:
    """Search Google via Searlo for real-time web results."""
    resp = httpx.get(
        "https://api.searlo.tech/v1/search/web",
        params={"q": query, "num": 5},
        headers={"X-API-Key": "your_key"},
    )
    resp.raise_for_status()
    results = resp.json().get("organic", [])
    return "\n".join(
        f"- {r.get('title', '')}: {r.get('snippet', '')}" for r in results
    )

search_tool = FunctionTool.from_defaults(fn=web_search)

# Use in a LlamaIndex agent
from llama_index.agent.openai import OpenAIAgent
agent = OpenAIAgent.from_tools([search_tool])
agent.chat("Find recent news about RAG pipelines")

n8n

Automation

Add Searlo web search to your n8n automation workflows

HTTP Request node · Webhook triggers · Data transformation
// n8n HTTP Request Node Configuration
{
  "method": "GET",
  "url": "https://api.searlo.tech/v1/search/web",
  "qs": {
    "q": "={{ $json.query }}",
    "num": 10,
    "gl": "us"
  },
  "headers": {
    "X-API-Key": "your_searlo_api_key"
  },
  "json": true
}

// Connect: Trigger → HTTP Request → Process Results
// Use expressions to pass dynamic queries from forms,
// webhooks, or previous nodes.
🐍 Python

Core SDK

Native Python integration with async support and type hints

Async support · Type hints · Error handling
# pip install httpx
import httpx
from typing import Any

async def search(
    query: str,
    num: int = 10,
    country: str = "us",
    toon: bool = False,
) -> dict[str, Any]:
    """Search Google via Searlo SERP API."""
    async with httpx.AsyncClient() as client:
        resp = await client.get(
            "https://api.searlo.tech/v1/search/web",
            params={
                "q": query,
                "num": num,
                "gl": country,
                "toon": str(toon).lower(),
            },
            headers={"X-API-Key": "your_key"},
        )
        resp.raise_for_status()
        return resp.json()

# Usage
import asyncio
results = asyncio.run(search("best SERP API 2026", toon=True))
print(results["toon"])  # Token-optimized output

Node.js

Core SDK

Server-side integration with Express, Fastify, or standalone

Express middleware · Fetch API · TypeScript support
// Node.js 18+ with built-in fetch
const SEARLO_KEY = process.env.SEARLO_API_KEY;

async function search(query, options = {}) {
  const params = new URLSearchParams({
    q: query,
    num: options.num || 10,
    gl: options.country || "us",
    ...(options.toon && { toon: "true" }),
  });

  const res = await fetch(
    `https://api.searlo.tech/v1/search/web?${params}`,
    { headers: { "X-API-Key": SEARLO_KEY } }
  );

  if (!res.ok) throw new Error(`Searlo: ${res.status}`);
  return res.json();
}

// Express route example
app.get("/api/search", async (req, res) => {
  try {
    const data = await search(req.query.q, { toon: true });
    res.json(data);
  } catch (err) {
    res.status(502).json({ error: err.message });
  }
});

Works with Any Language

Searlo is a standard REST API. Any language that can make HTTP requests can integrate in one line:

curl "https://api.searlo.tech/v1/search/web?q=your+query&toon=true" \
  -H "X-API-Key: your_api_key"

# Also works: Go, Ruby, PHP, Rust, Java, C#, Kotlin, Swift...
# Any HTTP client works. JSON response. No SDK required.

Integration FAQ

Does Searlo have an official Python or Node.js SDK?

Searlo uses a simple REST API that works with any HTTP client — no SDK installation needed. Use httpx/requests in Python or built-in fetch in Node.js. We provide copy-paste code for every major framework above.

How do I integrate Searlo with LangChain?

Create a custom LangChain Tool that calls the Searlo API and returns the results, then use it in any LangChain agent or chain. The TOON format is especially useful here: its token-optimized output can cut LLM input costs by up to 60%.

Can I use Searlo with n8n or Make.com?

Yes. Use the HTTP Request node in n8n or the HTTP module in Make.com. Point it to https://api.searlo.tech/v1/search/web, add your API key as a header, and pass the query as a parameter. Results come back as structured JSON.

What is the MCP protocol integration?

MCP (Model Context Protocol) lets AI agents like Claude, Cursor, and VS Code Copilot call Searlo directly as a tool. Install the Searlo MCP server and your AI assistant gets instant web search access. See our MCP page for setup instructions.

How do I handle rate limits?

Rate limits scale with your tier — Free: 5 req/s, paid tiers up to 30 req/s (Enterprise). Implement exponential backoff with the Retry-After header. For bulk operations (rank tracking, lead gen), use async/concurrent requests with a rate limiter to maximize throughput. Your tier upgrades automatically as you purchase more credits.
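The backoff pattern above can be sketched in a few lines of Python. `request_with_backoff` is a hypothetical helper, not part of any Searlo SDK: it retries on HTTP 429, honors `Retry-After` when the server sends it, and otherwise falls back to exponential backoff with jitter. Any response object exposing `.status_code` and `.headers` (httpx and requests responses both do) works:

```python
import time
import random

def request_with_backoff(send, max_attempts=5, base_delay=1.0):
    """Retry `send()` on HTTP 429, honoring Retry-After when present."""
    for attempt in range(max_attempts):
        resp = send()
        if resp.status_code != 429:
            return resp
        retry_after = resp.headers.get("Retry-After")
        if retry_after is not None:
            # Server told us exactly how long to wait
            delay = float(retry_after)
        else:
            # Exponential backoff with jitter: ~1s, ~2s, ~4s, ...
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
        time.sleep(delay)
    raise RuntimeError("rate limited: retries exhausted")

# Usage with httpx:
# resp = request_with_backoff(lambda: httpx.get(
#     "https://api.searlo.tech/v1/search/web",
#     params={"q": "example"},
#     headers={"X-API-Key": "your_key"},
# ))
```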

Ready to integrate?

Get your API key and start building in under 5 minutes.