
Vercel Launched A Comprehensive LLM Playground

Even though most LLM providers have their own playgrounds, Vercel has carved out a niche of its own with theirs, for the following reasons.

Cobus Greyling
4 min read · Jun 23, 2023


I’m currently the Chief Evangelist @ HumanFirst. I explore & write about all things at the intersection of AI & language; ranging from LLMs, Chatbots, Voicebots, Development Frameworks, Data-Centric latent spaces & more.

The Vercel playground is easily accessible, offering an astounding level of access without any payment. The playground also acts as a starting point for building LLM-based generative apps, in both chat and completion mode, with a seamless transition to the documentation and to coding in the Next.js framework.

This is a very astute approach to attracting makers and introducing them to the Vercel offering.

Multiple models can be run in parallel. The same prompt can be run across various models simultaneously or against a single model; each LLM can be toggled on or off via the GUI.

Currently available models:

  • Anthropic: claude-instant-v1, claude-v1
  • Replicate: alpaca-7b, stablelm-tuned-alpha-7b
  • HuggingFace: bloom, bloomz, flan-t5-xxl, flan-ul2, gpt-neox-20b, oasst-sft-4-pythia-12b-epoch-3.5, santacoder
  • Cohere: command-medium-nightly, command-xlarge-nightly
  • OpenAI: gpt-4, gpt-3.5-turbo, text-ada-001, text-babbage-001, text-curie-001, text-davinci-002, text-davinci-003
Run multiple models with the same prompt and compare the responses side-by-side.

Considering the image below, clicking on the </> icon opens a parameters dialog for a model. This allows for easy experimentation and for finding the best parameter fit for each use-case with each model.
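As a rough illustration, the parameter set exposed per model pane typically looks something like the sketch below. The names and defaults here are hypothetical (they vary per provider and model), but they cover the usual sampling controls:

```
// Illustrative parameter set for one model pane; names and
// defaults are hypothetical and vary per provider and model.
interface ModelParameters {
  temperature: number;      // randomness of sampling, e.g. 0–1
  maximumLength: number;    // cap on generated tokens
  topP: number;             // nucleus-sampling cutoff
  frequencyPenalty: number; // discourage repeated tokens
  presencePenalty: number;  // encourage introducing new topics
}

const exampleDefaults: ModelParameters = {
  temperature: 0.7,
  maximumLength: 256,
  topP: 1,
  frequencyPenalty: 0,
  presencePenalty: 0,
};
```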

Connect your app to any of the AI models with auto-generated code snippets for Next.js, SvelteKit and Node.js using the Vercel AI SDK.
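To give an idea of what those generated snippets look like, here is a minimal sketch of a Next.js App Router chat route in the style of the AI SDK examples at launch. It assumes the `ai` and `openai-edge` packages and an `OPENAI_API_KEY` environment variable; the exact package APIs may have changed since:

```
// app/api/chat/route.ts — a minimal sketch, not Vercel's verbatim snippet.
import { Configuration, OpenAIApi } from 'openai-edge';
import { OpenAIStream, StreamingTextResponse } from 'ai';

// Run on the Edge runtime for low-latency token streaming.
export const runtime = 'edge';

const openai = new OpenAIApi(
  new Configuration({ apiKey: process.env.OPENAI_API_KEY })
);

export async function POST(req: Request) {
  // The client sends the accumulated message history with every turn.
  const { messages } = await req.json();

  const response = await openai.createChatCompletion({
    model: 'gpt-3.5-turbo',
    stream: true,
    temperature: 0.7, // parameters found in the playground can be carried over here
    messages,
  });

  // Stream the completion back to the browser as it is generated.
  return new StreamingTextResponse(OpenAIStream(response));
}
```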

Immutable links can be generated to surface and share experimentation, tweaks and prompts.

In the above examples, the chat mode is used.

And below, the prompt mode is visible.

Obviously the prompt mode can also be used for a chat use-case, as shown with the prompt below.

What I find interesting is that the chat interface manages context, so follow-up questions can be asked which implicitly reference earlier turns in the conversation.
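This kind of context handling typically works by resending the accumulated message history with each new turn. A minimal client-side sketch, assuming the AI SDK's `useChat` hook from `ai/react`, which follows the same pattern by posting the full history to the /api/chat route above on every submit:

```
// app/page.tsx — minimal sketch of a context-carrying chat UI.
'use client';

import { useChat } from 'ai/react';

export default function Chat() {
  // `messages` holds every prior user/assistant turn; the hook resends
  // the whole array on each submit, which is what lets a follow-up such
  // as "and how much does it cost?" resolve what "it" refers to.
  const { messages, input, handleInputChange, handleSubmit } = useChat();

  return (
    <div>
      {messages.map((m) => (
        <p key={m.id}>
          {m.role}: {m.content}
        </p>
      ))}
      <form onSubmit={handleSubmit}>
        <input
          value={input}
          onChange={handleInputChange}
          placeholder="Ask a follow-up question…"
        />
      </form>
    </div>
  );
}
```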

The UI is very intuitive, with attention to detail; for instance, hovering over a dialog bubble reveals options to copy the contents to the clipboard or edit the dialog input. Model order can be rearranged by dragging a pane, history can be cleared, and more.

What would I add to the playground?

I would love to have a dark mode option for the playground.

Having Python and curl code generated would be convenient; however, I do understand that Vercel needs to consider which export will serve their ecosystem best.

Example prompts would definitely help; these could cover summarisation, classification and embeddings.

Embeddings might be more complex to implement, but seeing that Vercel is focused on LLM streaming and the UI, embeddings could make for a visually pleasing presentation.

The inclusion of so many HuggingFace-based models is really refreshing; the playground can play an important role in education and in introducing users to alternative LLMs.

⭐️ Please follow me on LinkedIn for updates on LLMs ⭐️

https://www.linkedin.com/in/cobusgreyling