Using LangChain With Model Context Protocol (MCP)
The Model Context Protocol (MCP) is an open-source protocol developed by Anthropic, the AI company focused on safe and interpretable Generative AI systems.
MCP emerged from the need to address a key limitation of Large Language Model (LLM) applications: their isolation from external data sources and tools.
Data delivery has been one of the key focus areas of LLM-based applications. Getting data to the LLM for inference has been the objective of RAG implementations and fine-tuning, and it is also the objective of MCP.
MCP’s primary purpose is to standardise how LLM-based applications connect to diverse systems, as seen in the image below:
Eliminating Custom Integration
A persistent challenge with AI Agents is delivering data to them; in other words, integrating the AI Agent/LLM-based application with external data sources.
There have been numerous attempts at somewhat seamless integration by leveraging GUIs, web browsers and web search. All of these avenues have advantages and disadvantages.
MCP has the potential to function as a universal interface; think of it as the virtual/software version of USB-C for AI, enabling seamless, secure and scalable data exchange between LLMs/AI Agents and external resources.
MCP uses a client-server architecture where MCP hosts (AI applications) communicate with MCP servers (data/tool providers).
Developers can use MCP to build reusable, modular connectors, with pre-built servers available for popular platforms, creating a community-driven ecosystem.
MCP’s open-source nature encourages innovation, allowing developers to extend its capabilities while maintaining security through features like granular permissions.
Ultimately, MCP aims to transform AI Agents from isolated chatbots into context-aware, interoperable systems deeply integrated into digital environments.
Step by Step Instructions
Anthropic’s Model Context Protocol (MCP) is an open source protocol to connect LLMs with context, tools, and prompts. It has a growing number of servers for connecting to various tools or data sources. Here, we show how to connect any MCP server to LangGraph agents & use MCP tools…
If you are like me, getting a prototype to work, no matter how simple it is, brings an immense sense of clarity and understanding; at least in my own mind.
To get started, open a Terminal app…below is where to find it on a MacBook.
In the terminal window, create two tabs; from one we will run the server, and from the other the client.
It is good practice to create a virtual environment in which to install and run code; the command below creates a virtual environment called MCP_Demo.
python3 -m venv MCP_Demo
Then run this command to activate (enter) the virtual environment:
source MCP_Demo/bin/activate
You will see your command prompt is updated with (MCP_Demo).
Run the following lines of code in sequence:
pip install langchain-mcp-adapters
pip install langgraph langchain-openai
export OPENAI_API_KEY=<your_api_key>
Replace the text <your_api_key> with your OpenAI API key.
In the first terminal tab, create a text file: vim math_server.py
And paste the following Python code:
# math_server.py
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("Math")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b

@mcp.tool()
def multiply(a: int, b: int) -> int:
    """Multiply two numbers"""
    return a * b

if __name__ == "__main__":
    mcp.run(transport="stdio")
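Notice that each tool is just a plain Python function: FastMCP derives the tool's name, description and input schema from the function name, docstring and type hints. Below is a rough, stdlib-only sketch of that idea; tool_schema is a hypothetical helper written for illustration, not part of the MCP SDK.

```python
# Conceptual sketch (NOT FastMCP's actual internals): deriving a tool
# schema from a plain function's name, docstring and type hints --
# the same information FastMCP inspects when you use @mcp.tool().
import inspect
from typing import get_type_hints

def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b

def tool_schema(fn):
    hints = get_type_hints(fn)
    hints.pop("return", None)  # only parameters go into the input schema
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn),
        "inputSchema": {
            "type": "object",
            "properties": {
                p: {"type": "integer" if t is int else "string"}
                for p, t in hints.items()
            },
            "required": list(hints),
        },
    }

schema = tool_schema(add)
print(schema["name"], schema["inputSchema"]["required"])  # add ['a', 'b']
```

This is why good docstrings and precise type hints matter: they are what the LLM sees when deciding which tool to call.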
Close the text file, then start the server with the following command:
python3 math_server.py
You won’t see any output; the terminal tab will look as follows:
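The silence is expected: with transport="stdio" the server speaks JSON-RPC 2.0 over its stdin/stdout rather than printing to the terminal, and it waits for a client to connect. The sketch below shows the general shape of the first message a client sends; the field names follow the MCP specification, but the version string and client info are illustrative values, not pinned to any particular SDK release.

```python
# Sketch of the kind of JSON-RPC 2.0 message exchanged over stdio.
# Field names follow the MCP spec; the concrete values here are
# illustrative, not tied to a specific SDK version.
import json

initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1"},
    },
}

# Each message travels as a single line of JSON on the server's stdin.
wire = json.dumps(initialize_request)
print(json.loads(wire)["method"])  # initialize
```

In practice the SDK's ClientSession builds and parses these messages for you, which is exactly what the client code below relies on.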
Now we are going to create and run the client…
While the MCP server is running in the one tab, go to the second tab…
Create a file to paste the client code in: vim client.py.
Paste the code below into the file:
# Create server parameters for stdio connection
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from langchain_mcp_adapters.tools import load_mcp_tools
from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI
import asyncio

model = ChatOpenAI(model="gpt-4o")

server_params = StdioServerParameters(
    command="python",
    # Make sure to update to the full absolute path to your math_server.py file
    args=["math_server.py"],
)

async def run_agent():
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            # Initialize the connection
            await session.initialize()

            # Get tools
            tools = await load_mcp_tools(session)

            # Create and run the agent
            agent = create_react_agent(model, tools)
            agent_response = await agent.ainvoke({"messages": "what's (3 + 5) x 12?"})
            return agent_response

# Run the async function
if __name__ == "__main__":
    result = asyncio.run(run_agent())
    print(result)
Run the client with the command: python3 client.py
The client will run once and end with the output below:
{'messages':
[HumanMessage(content="what's (3 + 5) x 12?",
additional_kwargs={}, response_metadata={},
id='87a8b6b6-9add-4da7-aea5-1b197c0fc0f5'),
AIMessage(content='',
additional_kwargs={'tool_calls': [{'id': 'call_1eyRzR7WpKzhMXG4ZFQAJtUD',
'function':
{'arguments': '{"a": 3, "b": 5}', 'name': 'add'},
'type': 'function'},
{'id': 'call_q82CX807NC3T6nHMrhoHT46E',
'function':
{'arguments': '{"a": 8, "b": 12}', 'name': 'multiply'},
'type': 'function'}],
'refusal': None},
response_metadata={'token_usage':
{'completion_tokens': 51,
'prompt_tokens': 77,
'total_tokens': 128,
'completion_tokens_details':
{'accepted_prediction_tokens': 0,
'audio_tokens': 0,
'reasoning_tokens': 0,
'rejected_prediction_tokens': 0},
'prompt_tokens_details':
{'audio_tokens': 0,
'cached_tokens': 0}},
'model_name': 'gpt-4o-2024-08-06',
'system_fingerprint': 'fp_eb9dce56a8',
'finish_reason': 'tool_calls',
'logprobs': None},
id='run-13c01640-f92b-48b7-9340-c2ad983eb1c8-0',
tool_calls=[{'name': 'add', 'args': {'a': 3, 'b': 5},
'id': 'call_1eyRzR7WpKzhMXG4ZFQAJtUD',
'type': 'tool_call'}, {'name': 'multiply',
'args': {'a': 8, 'b': 12},
'id': 'call_q82CX807NC3T6nHMrhoHT46E',
'type': 'tool_call'}],
usage_metadata={'input_tokens': 77,
'output_tokens': 51,
'total_tokens': 128,
'input_token_details': {'audio': 0,
'cache_read': 0},
'output_token_details': {'audio': 0,
'reasoning': 0}}),
ToolMessage(content='8',
name='add',
id='f8e0aba5-7a62-44c6-92a3-5fe3b07c9bd5',
tool_call_id='call_1eyRzR7WpKzhMXG4ZFQAJtUD'),
ToolMessage(content='96',
name='multiply',
id='66b9bbd9-b99a-402f-b26c-df83f5a69fa3',
tool_call_id='call_q82CX807NC3T6nHMrhoHT46E'),
AIMessage(content='The result of \\((3 + 5) \\times 12\\) is 96.',
additional_kwargs={'refusal': None},
response_metadata={'token_usage': {'completion_tokens': 22,
'prompt_tokens': 143,
'total_tokens': 165,
'completion_tokens_details': {'accepted_prediction_tokens': 0,
'audio_tokens': 0,
'reasoning_tokens': 0,
'rejected_prediction_tokens': 0},
'prompt_tokens_details': {'audio_tokens': 0,
'cached_tokens': 0}},
'model_name': 'gpt-4o-2024-08-06',
'system_fingerprint': 'fp_eb9dce56a8',
'finish_reason': 'stop',
'logprobs': None},
id='run-6c00a336-7d52-4917-9186-b282a5984b10-0',
usage_metadata={'input_tokens': 143,
'output_tokens': 22,
'total_tokens': 165,
'input_token_details': {'audio': 0, 'cache_read': 0},
'output_token_details': {'audio': 0,
'reasoning': 0}})]}
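Reading the trace: the agent made two tool calls, add with {"a": 3, "b": 5} and multiply with {"a": 8, "b": 12}, and the ToolMessage contents '8' and '96' are the values the MCP server returned. The plain-Python equivalent of that sequence, mirroring the tool bodies from math_server.py:

```python
# Plain-Python replay of the tool calls visible in the agent's trace,
# using the same tool bodies defined in math_server.py.
def add(a: int, b: int) -> int:
    return a + b

def multiply(a: int, b: int) -> int:
    return a * b

step1 = add(3, 5)            # first tool call  -> 8
step2 = multiply(step1, 12)  # second tool call -> 96
print(step2)  # 96
```

The final AIMessage simply verbalises that last value for the user.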
Finally
MCP is a convenient way of integrating AI Agents with the information and services that supply their context and memory.
Chief Evangelist @ Kore.ai | I’m passionate about exploring the intersection of AI and language. From Language Models, AI Agents to Agentic Applications, Development Frameworks & Data-Centric Productivity Tools, I share insights and ideas on how these technologies are shaping the future.