BigTool From LangChain

BigTool enables efficient tool selection for AI Agents, optimising performance when an extensive number of tools forms part of the available toolset.

5 min read · Apr 16, 2025

The AI Agent architecture is by now largely settled, with well-defined elements: one or more Language Models acting as the backbone, facilitating reasoning and acting.

And also access to tools.

BigTool enables semantic search over tools based on their descriptions.

Tools

In the context of AI Agents, tools are external functions or services that agents can call to perform specific tasks, such as querying databases, fetching web data, or executing computations.

Tools extend the AI Agent’s capabilities beyond its internal knowledge, enabling it to interact with the environment or process information dynamically.
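As a minimal, framework-agnostic sketch of the idea, a tool is simply a callable bundled with the metadata an agent reads when deciding what to invoke. The `make_tool` helper and its dict shape below are illustrative only, not a LangChain API:

```python
import math

# A "tool" is a callable plus metadata the agent can inspect.
# This sketch mirrors what tool abstractions generally provide:
# a name, a description, and a function to execute.
def make_tool(fn, name, description):
    return {"name": name, "description": description, "func": fn}

sqrt_tool = make_tool(math.sqrt, "sqrt", "Return the square root of x.")

# The agent reads the description to choose the tool,
# then invokes the underlying function with the task's arguments.
result = sqrt_tool["func"](16.0)  # 4.0
```

The description is what matters for selection; the callable is what matters for execution.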

The purpose of the BigTool tool registry is to organise tools for easy access and invocation.

AI Agents, often built with frameworks like LangChain or LangGraph, select and use tools based on task requirements, leveraging tool descriptions or metadata for efficient decision-making.

Effective tool integration enhances agent performance, especially in complex workflows involving numerous or specialised tools.

Balancing The Number of Tools & AI Agents

So the question arises: do you have one AI Agent with an ever-increasing number of tools, or do you decide on an architecture with multiple AI Agents, each with access to a smaller number of tools?

In the latter pattern, tools are grouped according to function, with different AI Agents accessing those groups. This then requires an AI Agent orchestration layer.

The choice between one AI Agent with access to many tools and multiple AI Agents each with access to a smaller number will be decided by the use-case and the specific implementation.

Practical Example

The example below illustrates how an AI agent navigates a sequence of steps — observation, reasoning, and action — to resolve a query, dynamically accessing tools to execute tasks effectively.
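To make that observation-reasoning-action loop concrete, here is a deliberately simplified, hypothetical single-step sketch. Keyword matching stands in for the Language Model's reasoning, and the `TOOLS` dict and `run_agent` function are illustrative only:

```python
import math

# Reason: the agent matches the query against available tool descriptions.
# Act: it invokes the chosen tool. Observe: it returns the tool's result.
TOOLS = {"arc cosine": math.acos, "square root": math.sqrt}

def run_agent(query, value):
    for keyword, fn in TOOLS.items():   # reason: pick a matching tool
        if keyword in query:
            return fn(value)            # act: invoke the tool
    return None                         # no tool matched

answer = run_agent("calculate the arc cosine of 0.5", 0.5)
```

A real agent replaces the keyword match with model-driven reasoning and may loop through several tool calls before answering.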

The “BigTool” Concept

The term “big tool” in langgraph_bigtool refers to the ability to manage and utilise a large set of tools.

The BigTool principle, although introduced and coined by LangChain, need not be used exclusively within the LangChain framework.

The BigTool approach can be implemented in any AI Agent framework…

Key aspects of this concept include:

Semantic Tool Selection

The use of embeddings and an InMemoryStore allows the AI Agent to select tools based on semantic similarity, rather than exact matches, improving usability for natural language queries.
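A toy illustration of the idea, with a bag-of-words counter standing in as a crude substitute for a real embedding model (in the full example further down, `text-embedding-3-small` plays this role):

```python
from collections import Counter
import math

def embed(text):
    # Toy bag-of-words "embedding" for illustration only;
    # real setups use a learned embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

descriptions = {
    "acos": "arc cosine of x, in radians",
    "sqrt": "square root of x",
    "log": "natural logarithm of x",
}

# The query need not match any tool name exactly; the most
# semantically similar description wins.
query = embed("calculate the arc cosine of 0.5")
best = max(descriptions, key=lambda k: cosine(query, embed(descriptions[k])))
```

Even this crude similarity measure selects `acos` for the query, without any exact-match lookup.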

Modularity

The separation of tool discovery, registration, indexing and AI Agent creation makes the system extensible. For example, you could swap the math module for another library or add custom tools.

The registry serves as a centralised lookup table for the agent to reference when deciding which tool to invoke.
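In miniature, such a registry is a dict from generated identifiers to tool callables, the same shape as the `tool_registry` built in the code further down:

```python
import math
import uuid

# A registry maps stable identifiers to tool instances, so the agent
# can resolve a retrieved tool id back to something invocable.
tool_registry = {
    str(uuid.uuid4()): math.acos,
    str(uuid.uuid4()): math.sqrt,
}

# Once retrieval returns a tool id, lookup and invocation are direct.
first_id = next(iter(tool_registry))
tool = tool_registry[first_id]
result = tool(1.0)  # math.acos(1.0) == 0.0
```

Using opaque ids rather than names keeps the lookup stable even if tool names or descriptions are later revised.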

Overview of the Implementation

The code example below creates an AI Agent that can dynamically discover and use mathematical functions from Python’s math module as tools.

These tools are registered, indexed with embeddings for semantic search, and integrated into a LangGraph-based agent.

The BigTool implementation enables the system to handle a large set of tools (in this case, math functions) efficiently, using a structured approach to tool discovery, registration, and invocation.

This code can be copied and run as-is within a Python notebook…

pip install langgraph-bigtool

######

pip install langgraph-bigtool "langchain[openai]"

######

%env OPENAI_API_KEY=<Your API Key>

######

import math
import types
import uuid

from langchain.chat_models import init_chat_model
from langchain.embeddings import init_embeddings
from langgraph.store.memory import InMemoryStore

from langgraph_bigtool import create_agent
from langgraph_bigtool.utils import (
    convert_positional_only_function_to_tool
)

# Collect functions from `math` built-in
all_tools = []
for function_name in dir(math):
    function = getattr(math, function_name)
    if not isinstance(
        function, types.BuiltinFunctionType
    ):
        continue
    # This is an idiosyncrasy of the `math` library
    if tool := convert_positional_only_function_to_tool(
        function
    ):
        all_tools.append(tool)

# Create registry of tools. This is a dict mapping
# identifiers to tool instances.
tool_registry = {
    str(uuid.uuid4()): tool
    for tool in all_tools
}

# Index tool names and descriptions in the LangGraph
# Store. Here we use a simple in-memory store.
embeddings = init_embeddings("openai:text-embedding-3-small")

store = InMemoryStore(
    index={
        "embed": embeddings,
        "dims": 1536,
        "fields": ["description"],
    }
)
for tool_id, tool in tool_registry.items():
    store.put(
        ("tools",),
        tool_id,
        {
            "description": f"{tool.name}: {tool.description}",
        },
    )

# Initialize agent
llm = init_chat_model("openai:gpt-4o-mini")

builder = create_agent(llm, tool_registry)
agent = builder.compile(store=store)
agent

Below is the LangGraph render of how the AI Agent operates… You can see how a tool is selected and used to reach the end of the chain.

Below the query is defined…

query = "Use available tools to calculate arc cosine of 0.5."

# Test it out
for step in agent.stream(
    {"messages": query},
    stream_mode="updates",
):
    for _, update in step.items():
        for message in update.get("messages", []):
            message.pretty_print()

And the output from the AI Agent…

In Conclusion

LangChain can be considered to be at the forefront of turning ideas that otherwise exist only in research into working implementations.

Granted, LangChain is a more technical approach, and there is quite a bit of debate comparing it to DSPy and other open AI Agent / Agentic frameworks.

And although I cannot comment with authority on LangChain’s performance in large-scale implementations, one thing I really enjoy…

Is getting to grips with the new ideas and concepts they develop, and how they package them in a notebook for self-experimentation.

Chief Evangelist @ Kore.ai | I’m passionate about exploring the intersection of AI and language. From Language Models, AI Agents to Agentic Applications, Development Frameworks & Data-Centric Productivity Tools, I share insights and ideas on how these technologies are shaping the future.

Written by Cobus Greyling

I’m passionate about exploring the intersection of AI & language. www.cobusgreyling.com