A First Look At The NVIDIA NeMo Service Playground
The NVIDIA NeMo playground is reminiscent of the OpenAI playground in both layout and functionality. The playground is ideal for testing prompts and prototyping use-cases.
I’m currently the Chief Evangelist @ HumanFirst. I explore & write about all things at the intersection of AI & language; ranging from LLMs, Chatbots, Voicebots, Development Frameworks, Data-Centric latent spaces & more.
The NeMo playground lives within the NVIDIA NeMo LLM Service offering, which is not currently generally available. It is however possible to apply for early access.
Consider the image below, showing a sentence completion example from the NeMo service playground where a full stop is used as the stop word. Buttons are available to view documentation, the API Key and bug reporting. Models are available via a dropdown, with an option to view customisation (fine-tuning).
The code can be viewed and copied, based on the experimentation within the playground.
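As a rough illustration of the kind of request the playground might generate, here is a minimal sketch that assembles a completion payload from the settings visible in the UI. The field names and the helper itself are assumptions for illustration, not taken from NVIDIA's actual API documentation.

```python
def build_completion_payload(prompt, tokens_to_generate=64,
                             temperature=0.7, top_k=50, top_p=0.9,
                             stop=(".",)):
    """Hypothetical sketch: assemble a JSON-style payload mirroring the
    playground settings (token count, temperature, Top K, Top P, stop words).
    Field names are assumed, not confirmed against the NeMo LLM Service."""
    return {
        "prompt": prompt,
        "tokens_to_generate": tokens_to_generate,
        "temperature": temperature,
        "top_k": top_k,
        "top_p": top_p,
        "stop": list(stop),  # stop words end generation, e.g. a full stop
    }
```

In practice the copied playground code would send a payload like this to the service endpoint with an API key attached.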
Settings for number of tokens, temperature, Top K and Top P are also available. I’m curious to know what the Create Your Customisation button leads to, and to what extent fine-tuning is possible via the no-code interface.
From the image below, it is clear that completion is the main component or use mode of the Playground, and that different modes or presets are not implemented.
OpenAI has actually cycled past the implementation of modes: modes other than chat are marked as legacy and are in the process of being deprecated.
It is clear that NeMo is focussed on completion, and the whole playground is strongly reminiscent of early versions of OpenAI’s playground.
The Temperature, Top-k, Top-p and Beam Search Width parameters let you tweak the level of creativity in the model’s output. By adjusting these parameters you can make the generated output more or less varied.
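To make these parameters concrete, the sketch below implements the standard temperature / top-k / top-p sampling chain over a vector of raw logits. This is a generic illustration of how these knobs are commonly applied, not NeMo's internal implementation.

```python
import math
import random

def sample_token(logits, temperature=0.7, top_k=50, top_p=0.9, seed=None):
    """Sample a token index from raw logits.
    Generic sketch of temperature -> top-k -> top-p (nucleus) filtering;
    not taken from the NeMo service itself."""
    rng = random.Random(seed)
    # Temperature scales logits before softmax; lower values sharpen
    # the distribution (less creative), higher values flatten it.
    scaled = [l / temperature for l in logits]
    # Softmax to probabilities (subtract max for numerical stability).
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = sorted(
        ((i, e / total) for i, e in enumerate(exps)),
        key=lambda p: p[1], reverse=True,
    )
    # Top-k: keep only the k most probable tokens.
    probs = probs[:top_k]
    # Top-p: keep the smallest prefix whose cumulative mass reaches top_p.
    kept, cum = [], 0.0
    for i, p in probs:
        kept.append((i, p))
        cum += p
        if cum >= top_p:
            break
    # Renormalise the surviving tokens and draw one.
    norm = sum(p for _, p in kept)
    r = rng.random() * norm
    for i, p in kept:
        r -= p
        if r <= 0:
            return i
    return kept[-1][0]
```

For example, with `top_k=1` the chain collapses to greedy decoding and always returns the highest-logit token, which is why tightening these settings reduces variety in the output.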
Areas I would love to explore are the ambit of customisation and how fine-tuning will be approached, as well as which examples and templates are included and whether there will be any implementation of modes or a ChatML-like format.
Lastly, model availability will be interesting in terms of default models and the ease with which models could be included.