Flowise Is A Graphical User Interface (GUI) for 🦜🔗LangChain

Learn how to develop Low-Code, No-Code LLM Applications with ease!

Cobus Greyling
5 min read · May 18

In this post, I aim to demonstrate how easily and affordably web browsing can be enabled for a chatbot through Flowise, and how easy it is to create an LLM-based API via Flowise.

This allows for the creation of LLM chat utilities to be used on a website, or as a support API for traditional chatbot interfaces.

Finally, I’ll consider a few important points at the end of this post.

Web Search Chatbot

The web search chatbot shown below relies on an OpenAI LLM and also leverages OpenAI embeddings.


Below is the conversation I had with the web search bot. Notice the concise answers the bot supplies, and how context is managed within the conversation.

AI : Hi there! How can I help?
Me : What is the size of Namibia?
AI : The total area of Namibia is 823,290 Km2 (317,874 sq. miles).
Me : Which countries borders it?
AI : Namibia shares borders with Angola and Zambia to the north.
Me : And to the south?
AI : To the south, Namibia shares a border with South Africa.
Me : Was it a German colony?
AI : Yes, Namibia was a German colony. It was known as German South West Africa from 1884 to 1915.
Me : what happened in 1915?
AI : In 1915, Namibia was occupied by South African forces during World War I, leading to the end of German colonial rule in the country.

Creating LLM-Based APIs with Flowise

In a previous post I gave a basic overview of how to build LLM-based apps using Flowise. But where will these Generative Apps fit into real-world products and services?

Even though it is quite possible to build a self-contained Gen App with the likes of Flowise or LangFlow, it is more probable that these LLM chains or autonomous agent applications will be accessed via an API as a supporting technology.

Traditional chatbot frameworks, customer support agents and more will most probably leverage LLM Apps via an API call, rather than the LLM App being a complete chatbot/Conversational AI solution.

Considering the cost, possible latency and the lack of locally hosted APIs, LLM Apps will most probably be used as a utility for certain aspects of a conversational experience.

The image below shows a basic application consisting of only three elements: an OpenAI LLM, a prompt template and an LLM Chain combining those two elements.

The prompt template is also basic, with only one variable defined: question. It would make sense to quickly and efficiently expose this LLM App via an API so it can be incorporated into other chatbot frameworks.
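A single-variable template like this amounts to simple string substitution. The sketch below illustrates the idea in plain Python; the template wording is my own assumption, since the flow's exact prompt text is not shown.

```python
# Illustrative template text -- the original flow's wording is an assumption.
TEMPLATE = (
    "Answer the user's question as concisely as possible.\n"
    "Question: {question}\n"
    "Answer:"
)

def render_prompt(question: str) -> str:
    """Substitute the single `question` variable into the template."""
    return TEMPLATE.format(question=question)

prompt = render_prompt("What is the size of Namibia?")
```

The rendered prompt is what ultimately gets sent to the OpenAI LLM node in the chain.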

In this Flowise example, any flow or application can efficiently be exposed as an API by clicking on the API button.

The complexity of the API is highly configurable, with additional parameters that can be passed via the API call. There are four different integration types available within Flowise.

The image below shows how the Flowise API is accessed via Postman and illustrates the ease with which an LLM API can be created.
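The same call made in Postman can be scripted in a few lines. The sketch below assumes a locally running Flowise instance and a prediction-style endpoint that accepts a JSON body with the `question` variable; the host and chatflow ID are placeholders you would replace with your own deployment's values.

```python
import json
import urllib.request

# Placeholder host and chatflow ID -- substitute your own deployment's values.
FLOWISE_URL = "http://localhost:3000"
FLOW_ID = "<your-chatflow-id>"

def build_request(question: str) -> urllib.request.Request:
    """Build the POST request for the flow's prediction endpoint."""
    body = json.dumps({"question": question}).encode("utf-8")
    return urllib.request.Request(
        f"{FLOWISE_URL}/api/v1/prediction/{FLOW_ID}",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def ask(question: str) -> str:
    """Call the running flow and return its text reply."""
    with urllib.request.urlopen(build_request(question)) as response:
        return response.read().decode("utf-8")
```

From here, any traditional chatbot framework that can make an HTTP call can delegate a turn of the conversation to the LLM App.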

A large language model app is a chain of one or multiple prompted calls to models or external services (such as APIs or data sources) in order to achieve a particular task.

Chains can be predefined prompt chains or created on the fly by an Agent.
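The chain idea above can be sketched as ordinary function composition: each step is a prompted call whose text output feeds the next step. The model calls are stubbed here so the structure is visible without an API key; the step names are illustrative only.

```python
from typing import Callable, List

# Each step takes the previous step's text output and returns new text.
Step = Callable[[str], str]

def run_chain(steps: List[Step], user_input: str) -> str:
    """Run a predefined chain: feed each step's output into the next."""
    text = user_input
    for step in steps:
        text = step(text)
    return text

# Stubbed "prompted calls" standing in for real model or API invocations.
summarise = lambda text: f"summary({text})"
translate = lambda text: f"translation({text})"

result = run_chain([summarise, translate], "source document")
# result == "translation(summary(source document))"
```

An Agent differs only in that it decides the step list at runtime rather than working through a predefined one.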


Many new startups and software products are relying heavily on large language models and do not have a clear business differentiator.

While these companies may be successful in the short term, they will soon be overshadowed by companies that have more innovative products and services.

Considering the image below, for any technology provider to be sustainable, the UX needs to be compelling and stellar. Added to this, the proprietary software or company-specific intellectual property must be future-proof.

And as seen with products like Flowise and LangFlow, almost anyone can build intelligent LLM-based APIs. Hence products which are really just a thin wrapper around an LLM will have a hard time proving their longevity and relevance.

Below is a video tutorial on three Flowise application types: web search, a single prompt chain and multiple prompt chains.

I’m currently the Chief Evangelist @ HumanFirst. I explore and write about all things at the intersection of AI and language; ranging from LLMs, Chatbots, Voicebots, Development Frameworks, Data-Centric latent spaces and more.



