Large Language Model Driven Decision Making: Functions & Tools
AI Agents can be defined in a number of ways. Some define AI Agents as fully autonomous systems that operate independently over extended periods, using various tools to accomplish complex tasks.
AI autonomy & agency exist on a spectrum, with varying degrees of independence depending on the system’s design.
Integrating function calls within generative AI applications introduces both structure and an initial layer of autonomy.
This enables AI systems to evaluate and respond to requests with a degree of self-direction.
As AI technology advances, these levels of autonomy are expected to grow, allowing models to manage increasingly complex tasks with greater independence and sophistication.
Introduction
There is also the option of using the LLM itself as the decision-making mechanism, something Anthropic refers to as model-driven decision-making via functions.
Hence I wanted to do a short piece on the difference between Tools and Functions.
Functions and tools represent two distinct aspects of how large language models (LLMs) interact with external systems and information.
Functions are an approach that has been largely overlooked, yet they can serve as an LLM-driven decision-making system.
Find the simplest solution possible, and only increase complexity when needed.
Functions & Tools
In the context of Generative AI applications with LLMs, the terms tools and functions are often used interchangeably, though they define different approaches.
Tools typically refer to external APIs or services that the AI agent can invoke to extend its capabilities beyond text generation.
Functions, on the other hand, are often internal integration points where the LLM autonomously decides to invoke specific processes based on the input context.
Both approaches enhance the model’s functionality: tools are externally driven, while functions are embedded within the LLM’s decision-making logic.
This distinction highlights the varying degrees of autonomy and integration within Generative AI systems.
Functions
Functions are more tightly integrated into the LLM’s logic & decision-making processes.
Functions provide a layer of autonomy to the LLM by embedding integration points directly into its reasoning process.
The model itself decides when to invoke a specific function based on the context and user request, enabling it to dynamically choose between using these functions or relying on its default, internal knowledge.
This decision-making capability allows the LLM to autonomously navigate complex tasks by determining the most appropriate method to fulfil a query.
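A minimal sketch of this decision step, with a hypothetical `get_weather` schema and a stand-in `model_decide` function in place of a real LLM call:

```python
import json

# Hypothetical function schema the model is made aware of at prompt time.
GET_WEATHER = {
    "name": "get_weather",
    "description": "Fetch current weather for a city",
    "parameters": {"city": {"type": "string"}},
}

def model_decide(user_request: str) -> dict:
    """Stand-in for the LLM's reasoning step: it either emits a
    structured function call or answers from internal knowledge."""
    if "weather" in user_request.lower():
        # The model fills in the parameters itself, from context.
        return {"type": "function_call",
                "name": GET_WEATHER["name"],
                "arguments": {"city": "London"}}
    return {"type": "direct_answer",
            "text": "Answered from the model's internal knowledge."}

print(json.dumps(model_decide("What is the weather in London?")))
```

The key point is that the branch is taken by the model itself, based on context, rather than by a predefined trigger in the application code.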
Tools
Tools, in contrast, are external interfaces connected to an AI Agent, linking it to APIs such as search engines, weather services, databases and more, typically with a predefined mechanism for their invocation.
These tools are activated through predefined commands or triggers. They are not inherently part of the LLM’s internal logic, but serve as external extensions that the AI Agent can utilise to fetch or interact with real-time information beyond its native knowledge base.
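This predefined, agent-side dispatch can be sketched as follows; the tool names, stub implementations and `run_tool` helper are hypothetical:

```python
# Hypothetical tool registry: external services wired into the agent,
# activated by predefined commands rather than the LLM's own reasoning.
def search_api(query: str) -> str:
    return f"search results for '{query}'"  # stub for a real search API

def weather_api(city: str) -> str:
    return f"weather report for {city}"     # stub for a real weather service

TOOLS = {"search": search_api, "weather": weather_api}

def run_tool(command: str, argument: str) -> str:
    """The agent dispatches on a predefined command; the tool sits
    outside the model's internal logic."""
    if command not in TOOLS:
        raise ValueError(f"unknown tool: {command}")
    return TOOLS[command](argument)

print(run_tool("weather", "Cape Town"))
```

Here the routing lives entirely in the agent’s code, which is exactly what distinguishes tools from model-driven function calls.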
Levels of Autonomy
In function calling with language models, the model autonomously decides whether a specific function call is appropriate based on the request.
Upon identifying a match, it transitions to a structured approach, preparing the necessary data parameters for the function.
This allows the model to act as a mediator, efficiently handling functions while maintaining flexibility in processing the request.
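The mediator role described above can be sketched as a round trip, assuming a hypothetical `get_stock_price` function and a host-side `handle` dispatcher; in a real system the structured call would be produced by the LLM:

```python
def get_stock_price(symbol: str) -> float:
    return 123.45  # stub for a real market-data lookup

FUNCTIONS = {"get_stock_price": get_stock_price}

def handle(model_output: dict) -> str:
    """Host-side step: execute the function the model selected,
    with the parameters the model prepared, then return the result."""
    if model_output.get("type") == "function_call":
        fn = FUNCTIONS[model_output["name"]]
        result = fn(**model_output["arguments"])
        # In a real system this result is handed back to the model,
        # which weaves it into a natural-language reply.
        return f"model continues with result: {result}"
    return model_output["text"]

print(handle({"type": "function_call",
              "name": "get_stock_price",
              "arguments": {"symbol": "ACME"}}))
```

The model decides *whether* and *with what parameters* to call; the host merely executes and feeds the result back, preserving flexibility on both sides.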
In contrast to function calling within language models, tools in the context of AI Agents refer to external APIs or services that the AI Agent explicitly invokes to perform tasks or fetch information.
Unlike function calls, where the LLM autonomously decides when to trigger internal processes, tools are often predefined and manually integrated into the AI agent’s workflow.
Chief Evangelist @ Kore.ai | I’m passionate about exploring the intersection of AI and language. From Language Models, AI Agents to Agentic Applications, Development Frameworks & Data-Centric Productivity Tools, I share insights and ideas on how these technologies are shaping the future.