How Would The Architecture For An LLM Agent Platform Look?

A recent study explored how LLM-based agent architectures might look in the future. The proposed architecture is segmented into three stages…



Introduction

The current state of Agents…

An agent typically has a single, primarily text-based UI, and a single LLM acts as the backbone of the agent. A set of tools is defined, each with a natural-language description, and the LLM decides which tool to call based on that description.
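
As a rough sketch of that pattern, the snippet below defines two hypothetical tools (weather, web_search), each with a natural-language description, and a call_llm placeholder standing in for the LLM backbone that selects a tool. None of these names come from the study; they are assumptions for illustration only.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str          # identifier the LLM refers to
    description: str   # natural-language description the LLM reasons over
    run: Callable[[str], str]

# Hypothetical tools with stubbed implementations
def weather(query: str) -> str:
    return "Sunny, 22C"

def web_search(query: str) -> str:
    return "Top result: ..."

TOOLS = [
    Tool("weather", "Look up the current weather for a location.", weather),
    Tool("web_search", "Search the web for general information.", web_search),
]

def build_prompt(user_input: str) -> str:
    # The LLM backbone sees every tool's natural-language description
    tool_list = "\n".join(f"- {t.name}: {t.description}" for t in TOOLS)
    return (
        "You are an agent. Choose ONE tool by name for the request below.\n"
        f"Tools:\n{tool_list}\n\nRequest: {user_input}\nTool:"
    )

def call_llm(prompt: str) -> str:
    # Placeholder for the real LLM backbone; this naive stub only looks at
    # the request line to "choose" a tool
    request = prompt.rsplit("Request:", 1)[-1].lower()
    return "weather" if "weather" in request else "web_search"

def run_agent(user_input: str) -> str:
    choice = call_llm(build_prompt(user_input))
    tool = next(t for t in TOOLS if t.name == choice)
    return tool.run(user_input)

print(run_agent("What is the weather in Lisbon?"))  # -> "Sunny, 22C"
```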

The agent’s capabilities can be enhanced by making more tools available to it. Some of these can include human-in-the-loop features, where the purpose of the tool is to reach out to a human for input.
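
Extending the sketch above, a human-in-the-loop capability can simply be another tool whose implementation asks a person instead of calling an API; the ask_human tool below is purely illustrative.

```python
def ask_human(question: str) -> str:
    # Human-in-the-loop "tool": the agent defers to a person for input
    return input(f"[Agent needs human input] {question}\n> ")

TOOLS.append(
    Tool(
        "ask_human",
        "Ask a human for clarification or approval when the agent is unsure.",
        ask_human,
    )
)
```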

Foreseeable Agent Augmentation…as I see it

The most obvious improvements and advancements of agents will be in the LLM backbone and in the tools available to the agent. The easiest way to improve an agent is to add more tools.

Back To The Study

The study highlights that agents have the properties of interactivity and intelligence, and also the ability to be proactive.

Looking at the study, an Agent Item can be thought of as a tool or as a very simple agent: a single-tool agent.

The Agent Recommender can be seen as an agent with access to multiple tools. Hence Stage 1 and Stage 2 are very much conceivable with the architectures available today.

Stage 1

The study sees stage 1 as follows:

Agent Recommender will recommend an Agent Item to a user based on personal needs and preferences. Agent Item engages in a dialogue with the user, subsequently providing information for the user and also acquiring user information.

And as I mentioned, the Agent Recommender can be seen as the agent, and the Agent Items as the actions.
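
Assuming that reading, Stage 1 might look roughly like the sketch below: a hypothetical recommend_agent_item function plays the Agent Recommender, and agent_item_dialogue plays the recommended Agent Item, which both serves the user and captures information about them. The keyword matching merely stands in for an LLM decision.

```python
# Hypothetical single-tool Agent Items the recommender can hand a user to
AGENT_ITEMS = {
    "travel_planner": "Plans trips: flights, hotels and itineraries.",
    "news_digest": "Summarises daily news on topics the user follows.",
}

def recommend_agent_item(user_profile: dict) -> str:
    # Stage 1: the Agent Recommender picks an Agent Item from the user's
    # needs and preferences (an LLM call in practice; keyword match here)
    interests = " ".join(user_profile.get("interests", []))
    return "travel_planner" if "travel" in interests else "news_digest"

def agent_item_dialogue(item: str, user_msg: str, user_profile: dict) -> str:
    # The Agent Item provides information for the user and also acquires
    # user information, stored back on the profile
    user_profile.setdefault("history", []).append(user_msg)
    return f"[{item} - {AGENT_ITEMS[item]}] Suggestion based on: {user_msg}"

profile = {"interests": ["travel", "food"]}
item = recommend_agent_item(profile)
print(agent_item_dialogue(item, "A weekend in Lisbon, please.", profile))
```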

Stage 2

This stage can be seen as a multi-tool agent…

Rec4Agentverse then enables the information exchange between Agent Item and Agent Recommender. For example, Agent Item can transmit the latest preferences of the user back to Agent Recommender. Agent Recommender can give new instructions to Agent Item.
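
A minimal sketch of that two-way exchange, assuming hypothetical AgentRecommender and TravelPlannerItem classes that are not defined in the paper, could look like this: the Agent Item transmits the user's latest preferences back, and the recommender returns new instructions.

```python
from collections import defaultdict

class AgentRecommender:
    def __init__(self):
        self.user_preferences = defaultdict(dict)

    def receive_feedback(self, user_id: str, prefs: dict) -> str:
        # The Agent Item transmits the user's latest preferences back...
        self.user_preferences[user_id].update(prefs)
        # ...and the Agent Recommender replies with new instructions
        return f"Personalise answers using: {self.user_preferences[user_id]}"

class TravelPlannerItem:
    def __init__(self, recommender: AgentRecommender):
        self.recommender = recommender
        self.instructions = "none yet"

    def chat(self, user_id: str, message: str) -> str:
        # An LLM would extract preferences from the dialogue; a keyword
        # check stands in for that here
        if "budget" in message.lower():
            self.instructions = self.recommender.receive_feedback(
                user_id, {"travel_style": "budget"}
            )
        return f"Planning your trip. Current instructions: {self.instructions}"

rec = AgentRecommender()
planner = TravelPlannerItem(rec)
print(planner.chat("u1", "I prefer budget trips."))
```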

Stage 3

Here is the leap: collaboration is supported amongst Agent Items, with the Agent Recommender orchestrating everything.
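
One way to picture that orchestration, again only as an illustrative assumption, is a recommender-supplied plan that chains several Agent Items so each builds on the previous one's output:

```python
def orchestrate(recommender_plan: list[str], user_request: str) -> str:
    # Stage 3: the Agent Recommender orchestrates several Agent Items,
    # each adding its partial result to a shared context
    agent_items = {
        "flight_finder": lambda req, ctx: ctx + ["Flights: LIS -> BCN"],
        "hotel_finder": lambda req, ctx: ctx + ["Hotel: 3 nights near the beach"],
        "itinerary_writer": lambda req, ctx: ctx + [f"Itinerary drafted for: {req}"],
    }
    context: list[str] = []
    for name in recommender_plan:
        context = agent_items[name](user_request, context)
    return "\n".join(context)

print(orchestrate(
    ["flight_finder", "hotel_finder", "itinerary_writer"],
    "Plan a beach weekend in Barcelona",
))
```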

Tools

There is a market for a no-code to low-code IDE for creating agent tools, as more tools will be required as the capabilities of agents expand.

Integration

The graphic below from the study shows the Agent Items (which I think of as tools)…

The left portion of the diagram shows three roles in their architecture: user, Agent Recommender, and Agent Item, along with their interconnected relationships.

The right side of the diagram shows that an Agent Recommender can collaborate with Agent Items to affect the information flow of users and offer personalised information services.

What I like about this diagram is that it shows the user / Agent Recommender layer, the information exchange layer, and the information carrier (integration) layer.

Considerations

There are a number of challenges:

  1. Efficient Inference
  2. External Knowledge Update and Edit
  3. Privacy
  4. Robustness

In Conclusion

The paper discusses how recommender systems function within an agent platform, focusing on agents’ unique traits like interactivity and intelligence.

It introduces a new concept, Rec4Agentverse, comprising Agent Items (tools) and an Agent Recommender.

The study progresses through three stages to enhance user-agent interaction.

Using a travel planning scenario, it analyses each stage’s attributes and potential scalability. Rec4Agentverse is seen as a promising paradigm but requires more exploration, including its application fields and risks.

⭐️ Follow me on LinkedIn for updates on Large Language Models ⭐️

I’m currently the Chief Evangelist @ Kore AI. I explore & write about all things at the intersection of AI & language; ranging from LLMs, Chatbots, Voicebots, Development Frameworks, Data-Centric latent spaces & more.

