NVIDIA LaunchPad & RIVA
My Development Framework Sandbox — 2xH100
The last few days I have been prototyping and experimenting with NVIDIA LaunchPad…and their Data Flywheel…and RIVA.
LaunchPad
Experimenting and building with NVIDIA is a bit more technical than in other environments. Many of the components feel loosely coupled, and it is up to you as the builder to assemble your ideal product.
This gives you the flexibility of a wide range of tools and utilities, but that flexibility also introduces some technical complexity.
Once you log into NVIDIA LaunchPad, you have access to a number of resources, as shown below. The ones I used the most were the Jupyter Notebook, the System Console and the Desktop.
There is no additional configuration or key exchange required; you essentially have your own machine, with all the access methods you need to build solutions.
I can imagine how an organisation that wants to get off to a running start with NVIDIA could give its developers and builders access to LaunchPad.
LaunchPad Code Server IDE
The web-based IDE is the recommended way to engage with your NVIDIA LaunchPad environment, offering built-in drag-and-drop uploads and straightforward download features.
For accessing larger datasets or files, use the Code Server terminal to download them directly from an external object store, bypassing file size restrictions during uploads.
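To illustrate the idea, here is a minimal sketch of that workaround: pulling a large object down in fixed-size chunks from the terminal instead of uploading it through the browser. The function name and the chunk size are my own choices, not part of LaunchPad; any HTTP-reachable object-store URL would work.

```python
import shutil
import urllib.request

def stream_download(url: str, dest_path: str, chunk_size: int = 1 << 20) -> None:
    # Stream the response to disk in fixed-size chunks (1 MiB here),
    # so large datasets never need to fit in memory and browser
    # upload size limits are bypassed entirely.
    with urllib.request.urlopen(url) as response, open(dest_path, "wb") as out:
        shutil.copyfileobj(response, out, length=chunk_size)
```

Run from the Code Server terminal, this pulls the file straight onto the LaunchPad machine's disk.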
Personally I used the notebook UI and command line tool the most.
NVIDIA RIVA
The best way to describe NVIDIA Riva is as a GPU-accelerated software development kit (SDK) designed for building and deploying highly customisable, real-time speech and translation AI applications.
It has microservices for:
- Automatic Speech Recognition (ASR): Converts audio to text, supporting languages like Arabic, English, French, German, Hindi, and more, with high accuracy for real-time transcription and virtual assistants.
- Text-to-Speech (TTS): Generates human-like speech from text, offering expressive voices in languages such as English, German, and Mandarin.
- Neural Machine Translation (NMT): Translates text or speech across languages, enabling multilingual conversational pipelines.
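The three services above can be pictured as three simple function calls. The sketch below uses toy stand-ins of my own making; in the real SDK each of these is a gRPC call to a Riva server, and none of the names or signatures here come from the Riva client library.

```python
from dataclasses import dataclass

# Hypothetical, simplified stand-ins for Riva's three speech services.
# Each would be a gRPC microservice call in a real deployment.

@dataclass
class TranscriptionResult:
    text: str
    language: str

def recognize_speech(audio: bytes, language: str = "en-US") -> TranscriptionResult:
    # ASR stand-in: a real call streams `audio` to the server
    # and returns interim and final transcripts.
    return TranscriptionResult(text="<transcript of audio>", language=language)

def translate(text: str, source: str, target: str) -> str:
    # NMT stand-in: a real call translates between language pairs.
    return f"[{text} translated {source}->{target}]"

def synthesize(text: str, voice: str = "English-US") -> bytes:
    # TTS stand-in: a real call returns synthesised audio samples.
    return text.encode("utf-8")  # placeholder for audio bytes
```

The value of the split into microservices is that each stage can be scaled, swapped or fine-tuned independently.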
Riva supports integration with large language models (LLMs) and retrieval-augmented generation (RAG). It also provides pre-trained models, fine-tuning capabilities via NVIDIA NeMo, and high-performance inference.
I would describe Riva as part of NVIDIA's AI Enterprise suite, offering scalability and low-latency performance for industries like telecommunications, healthcare, and retail.
Riva Virtual Assistant Example
OK, this is one of my favourite NVIDIA code prototypes…it is a web-based voicebot.
The NVIDIA Riva Web UI App shows the capabilities of the Riva SDK for building real-time speech and translation AI applications.
It has an interactive interface for experiencing automatic speech recognition (ASR), text-to-speech (TTS), and neural machine translation (NMT) in action, as seen below.
Users can input audio or text to see how Riva transcribes speech, generates human-like audio, or translates across multiple languages seamlessly.
The app highlights Riva’s low-latency performance, powered by GPU acceleration and optimised models.
It supports various languages, demonstrating multilingual conversational pipelines.
Designed for developers and enterprises, this demo UI illustrates easy integration with custom applications.
Overall, it serves as a practical tool to show how the different Riva components can be orchestrated to create a complete application.
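That orchestration boils down to chaining the services: speech in, text out of ASR, translated text out of NMT, speech back out of TTS. A minimal sketch of one conversational turn, with the services passed in as plain callables (my own framing, not the demo app's actual code):

```python
from typing import Callable

def voicebot_turn(audio_in: bytes,
                  asr: Callable[[bytes], str],
                  nmt: Callable[[str], str],
                  tts: Callable[[str], bytes]) -> bytes:
    # One round trip of the voicebot: each callable stands in for a
    # Riva microservice, which in the real app is a gRPC call.
    text = asr(audio_in)       # speech -> text
    translated = nmt(text)     # text -> translated text
    return tts(translated)     # translated text -> speech
```

Because the stages are just composed functions, any one of them can be replaced, say, inserting an LLM or RAG step between ASR and TTS, without touching the others.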
Chief Evangelist @ Kore.ai | I’m passionate about exploring the intersection of AI and language. From Language Models, AI Agents to Agentic Applications, Development Frameworks & Data-Centric Productivity Tools, I share insights and ideas on how these technologies are shaping the future.