The Growing LangChain Ecosystem

LangChain is expanding into four key areas: no-code to low-code flow builders, vector stores, practical implementations of cutting-edge research, and LLM management.

Cobus Greyling
4 min read · Aug 7, 2023


I’m currently the Chief Evangelist @ HumanFirst. I explore & write about all things at the intersection of AI & language, ranging from LLMs, Chatbots, Voicebots, Development Frameworks, Data-Centric latent spaces & more.

LangChain is expanding in four key aspects:

1️⃣ Two no-code to low-code flow builders, Flowise and LangFlow, have emerged for building LLM-based flows. Initially these flow builders can seem abstract, but both Flowise and LangFlow have a number of templates and presets to start from. Building LLM-based flows is a logical extension of prompt chaining, as the sketch below shows.
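
For a sense of what these visual flows translate to in code, here is a minimal prompt-chaining sketch in LangChain; the prompts and model choice are illustrative assumptions, not output from Flowise or LangFlow.

from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain, SimpleSequentialChain

llm = OpenAI(temperature=0)

# Step 1: generate a company name for a product
name_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate.from_template(
        "Suggest one name for a company that makes {product}."
    ),
)

# Step 2: write a slogan for the generated name
slogan_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate.from_template(
        "Write a short slogan for a company called {company_name}."
    ),
)

# Chain the two prompts: step 1's output becomes step 2's input
overall_chain = SimpleSequentialChain(chains=[name_chain, slogan_chain])
print(overall_chain.run("eco-friendly water bottles"))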

2️⃣ LangChain implements many research papers in a practical way, especially in the areas of prompt engineering and autonomous agents.

3️⃣ With regard to the LLM Generative App stack, there is a general movement away from fine-tuning LLMs, and an emerging focus on semantic similarity and on being able to observe, inspect, and granularly manage prompts.

The goal is to have contextually relevant reference data ready to present to the LLM at the right time at inference. Contextual data can be presented in three ways: prompt pipelines, embeddings, or vector stores. As the LangFlow interface illustrates, the number of vector store integrations is growing.
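
As a minimal sketch of the embeddings-plus-vector-store pattern (FAISS and OpenAIEmbeddings are illustrative choices here, assuming faiss-cpu is installed and OPENAI_API_KEY is set):

from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

# Embed a handful of reference texts into an in-memory vector store
texts = [
    "LangSmith manages the link between LangChain apps and LLMs.",
    "Flowise and LangFlow are visual flow builders for LLM apps.",
]
vector_store = FAISS.from_texts(texts, OpenAIEmbeddings())

# At inference time, retrieve the semantically closest chunk to the question
docs = vector_store.similarity_search("What does LangSmith do?", k=1)
print(docs[0].page_content)

The retrieved chunk is then injected into the prompt as context, which is exactly the role the vector store nodes play on a Flowise or LangFlow canvas.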

4️⃣ LLM management is being addressed by 🦜🛠️LangSmith.

The focus of LangSmith is on managing the link between LangChain applications and Large Language Models (LLMs).

LangSmith is not a flow builder or prompt-chaining designer, and does not currently supersede application flow builders like Flowise and LangFlow.

Currently LangSmith is not focused on prompt performance per se, as ChainForge and Flux are.

LangSmith does not compare prompts at scale or assist with prompt management. It does have a playground where experimentation is possible, though the playground is currently only available for OpenAI models.

LangSmith is intended to quantify LLM performance and optimise single or multiple LLM interactions. LangSmith can also be useful when migrating between LLMs.

LangSmith is ushering in an era where the LLM becomes a utility and Generative Apps become multi-LLM based, with Gen-Apps migrating between LLMs based on cost, performance, and latency. The sketch below illustrates that portability.
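
As a minimal sketch of that portability (assuming OpenAI and HuggingFace Hub credentials are configured; the model choices are illustrative), the same prompt and chain can be pointed at different backends, and with tracing enabled each run is logged to LangSmith for comparison:

from langchain.llms import OpenAI, HuggingFaceHub
from langchain import PromptTemplate, LLMChain

prompt = PromptTemplate.from_template("Summarise in one line: {text}")

# The same chain definition, run against two different LLM backends
for llm in [OpenAI(temperature=0), HuggingFaceHub(repo_id="google/flan-t5-xxl")]:
    chain = LLMChain(prompt=prompt, llm=llm)
    print(chain.run("LangSmith logs every LLM call so runs can be compared."))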

Metrics are logged to LangSmith from a LangChain application by setting a handful of environment variables in the LangChain code, as shown below.

pip install -U langsmith
pip install langchain
pip install openai
pip install huggingface_hub

import os
from uuid import uuid4
from getpass import getpass

from langchain import HuggingFaceHub
from langchain import PromptTemplate, LLMChain

# Enable LangSmith tracing and group runs under a uniquely named project
unique_id = uuid4().hex[0:8]
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_PROJECT"] = f"Basic_Project_1_{unique_id}"
os.environ["LANGCHAIN_ENDPOINT"] = "https://api.smith.langchain.com"
os.environ["LANGCHAIN_API_KEY"] = "xxxxxxxxxxxxxxxxx"  # Update to your LangSmith API key
os.environ["OPENAI_API_KEY"] = "xxxxxxxxxxxxxxxxx"  # Update to your OpenAI API key

# Prompt for the HuggingFace Hub token instead of hard-coding it
os.environ["HUGGINGFACEHUB_API_TOKEN"] = getpass("HuggingFace Hub API token: ")

question = "What is the year of birth of the man who is commonly regarded as the father of the iPhone?"
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])

repo_id = "google/flan-t5-xxl"

llm = HuggingFaceHub(
    repo_id=repo_id, model_kwargs={"temperature": 0.5, "max_length": 64}
)
llm_chain = LLMChain(prompt=prompt, llm=llm)
print(llm_chain.run(question))

LangSmith surfaces key metrics like run count, latency (P50, P99) & token usage per application call.

LLM application data (chain conversations, prompts, etc.) can be stored, edited, rerun & managed within LangSmith.
