Learn How To Install & Run Flowise For 🦜🔗 LangChain!
Recently I wrote a few articles on the Large Language Model (LLM) chaining tool Flowise. The question I received most often was: how did you install it? So here is the full tutorial!
I'm currently the Chief Evangelist @ HumanFirst. I explore & write about all things at the intersection of AI and language: NLU design, evaluation & optimisation, data-centric prompt tuning, and LLM observability, evaluation & fine-tuning.
In the past I always opted for one of two main avenues for installing and running software outside of a notebook like Colab: the first is a virtual machine via Anaconda, the second an AWS EC2 instance.
The EC2 route is convenient when you need to run specific hardware and software; hence it was ideal for all the NVIDIA Riva prototypes I built.
For installing & running Flowise, I made use of Replit… here's how.
Replit is an online integrated development environment (IDE) that can be used with a variety of programming languages, including JavaScript, Python, Go, C++, Node.js, Rust, and any other language available with the Nix package manager.
✨ Please follow me on LinkedIn for updates on LLMs ✨
To begin, head to Replit and create a login. What I like about Replit is that you can achieve a lot without having to register a credit card or commit to any payment.
Next, create a repl using the Python template.
Within the shell window, run the command: npm install -g flowise
as seen below:
Once that completes, run the following command in the shell window: npx flowise start
The command's output ends with the following:
⚡️[server]: Flowise Server is listening at 3000
📦[server]: Data Source has been initialized!
Flowise is then displayed in a browser window within Replit. Click on the "new tab" icon at the top right to open the GUI in full-screen mode.
As seen below, you now have a fully working installation of Flowise for building LLM apps, running in your browser.
The second most common piece of feedback I received was that the interface and principles of LLM applications are ambiguous and convoluted.
Consider an LLM-based application in its simplest form…
The image below depicts the most basic LLM application. The LLM Chain acts as an interface and allows this application to be chained into a larger application.
To get started, click on Marketplaces at the top right, scroll down and select Simple LLM Chain from the list.
To enable the template, you need to click on Use Template.
The first step [1] is to enter your OpenAI API key into the OpenAI component or node. Secondly, [2] you need to supply a prompt which defines the instruction to the LLM.
In this case, the prompt is: What is a good name for a company that makes {product}?
You will [3] have to save the flow, and [4] click on the chat dialog button to start using the app.
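Under the hood, a Simple LLM Chain is just a prompt template composed with a model call. Below is a minimal Python sketch of that idea, not Flowise's actual code; `stub_llm` is a hypothetical stand-in for the real OpenAI call:

```python
def fill_template(template: str, **variables) -> str:
    """Substitute variables into a prompt template like '... makes {product}?'."""
    return template.format(**variables)

def stub_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM call (e.g. OpenAI's API).
    return f"[LLM response to: {prompt}]"

def llm_chain(template: str, **variables) -> str:
    """The chain: fill the prompt template, then pass the result to the LLM."""
    return stub_llm(fill_template(template, **variables))

# The same prompt used in the Flowise template:
print(llm_chain("What is a good name for a company that makes {product}?",
                product="colorful socks"))
```

The point of the chain abstraction is that `llm_chain` has a plain text-in, text-out interface, so it can itself become a component inside a larger flow.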
Below is an example of a chat interaction with the LLM.
In Closing
In this article I go through a slightly more complex example of an LLM chatbot with memory. I also show how the LLM App can be exposed via an API.
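For reference, Flowise exposes each saved chatflow over an HTTP prediction endpoint. The sketch below builds such a request with only the Python standard library; the `/api/v1/prediction/{id}` path matches Flowise's API at the time of writing (verify it against your version), and the chatflow ID shown is a placeholder:

```python
import json
import urllib.request

def build_prediction_request(base_url: str, chatflow_id: str,
                             question: str) -> urllib.request.Request:
    """Build a POST request for a Flowise chatflow's prediction endpoint."""
    url = f"{base_url}/api/v1/prediction/{chatflow_id}"
    payload = json.dumps({"question": question}).encode("utf-8")
    return urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"})

# Usage (requires a running Flowise server and your flow's ID,
# which you can copy from the Flowise GUI):
# req = build_prediction_request("http://localhost:3000",
#                                "<chatflow-id>", "name a sock company")
# with urllib.request.urlopen(req) as resp:
#     print(resp.read().decode())
```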
For a general overview, please take a look at this article.