
Chatbot Responses With Dynamic URL’s Using IBM Watson Assistant Actions

And How This Ties Into Web 3.0


What is Web 3.0? It is the third generation of the Internet…a network underpinned by intelligent interfaces and interactions.

An Amazon Echo Show skill integrated with the Mercedes-Benz API for vehicle specifications and images. The user can ask questions about Mercedes vehicles, and a topical interactive display is rendered which can be viewed, listened to and navigated by touch. Follow-up speech commands can also be issued.

The Web 3.0 will be constituted by software, with the browser acting as the interface or access medium.

Currently, the closest comparison to Web 3.0 is probably a device like the Amazon Echo Show or Google Nest Hub, where multiple modalities are orchestrated to form one user experience.

The user issues speech input and the display renders a user interface with images, text and/or video. The content can be viewed, listened to, interacted with via touch or speech navigation.

Navigation related questions are responded to with navigation options. The chatbot is enabled to retrieve the relevant content and present it to the user.

This multi-modal approach lowers cognitive load, as user input is primarily voice rather than typing. Media is projected back to the user as text, images, video, maps and speech. Secondary input, based on the data presented, will most probably take the form of touch navigation.

Hence we will see full renderings of contextual content based on spoken user intent.

A big advantage of Web 3.0 is that a very small team can make significant breakthroughs, since it is software based.

Amongst other key elements, Web 3.0 will be defined by personalized bots that serve users in specific ways.

These bots will facilitate intelligent interactions with the user and all relevant devices.

Embedding audio and video files linked to specific intents: in the dialog development environment of IBM Watson Assistant, various assistant responses can be defined. The list ranges from the basic to the more feature rich: text and option buttons, through to images, audio, video, iframes and connecting to a human agent.
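Behind the visual authoring tool, these rich responses correspond to typed entries in the assistant's JSON output. A rough, non-authoritative sketch follows, modelled as Python dictionaries: the `response_type` names follow the Watson Assistant documentation, while the titles and URLs are placeholders.

```python
# Illustrative sketches of Watson Assistant "generic" response payloads.
# The response_type values (text, video, iframe, etc.) are documented types;
# the text, titles and URLs below are placeholders.
text_response = {
    "response_type": "text",
    "text": "Here is the vehicle overview.",
}

video_response = {
    "response_type": "video",
    "source": "https://example.com/vehicle-demo.mp4",  # placeholder URL
    "title": "Vehicle walk-through",
}

iframe_response = {
    "response_type": "iframe",
    "source": "https://example.com/specs",  # placeholder URL
}

# A single turn can return several of these response objects together.
output = {"generic": [text_response, video_response, iframe_response]}
```

Mixing a text response with a video or iframe in one turn is what enables the multi-modal experience described above: spoken or typed input, rich rendered output.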

Bots will interact via voice, text and contextual data, focusing on customer service, support, information, sales, recommendations and more.

Intelligent conversational bots will not only communicate in text but any appropriate media.

This new iteration of the web will have pervasive bots which will surface in various ways and linked to the context of the user’s interaction.

Imagine a user is reading through your website, and at a particular point they can click on text which takes them to a conversational interface which is contextually linked to where the user clicked.

Another speculative illustration of Web 3.0, with a speech interface to issue commands to the Mercedes-Benz vehicle API, is seen in the video below.

An iframe implementation linking to a mobile website, contextual to the user's query.

The display changes based on speech input.

The two tables below attempt to quantify and describe the differences between Web 2.0 and Web 3.0.

This is obviously from a conversational perspective; there are other components which will contribute to Web 3.0.

Another speculative illustration of Web 3.0, with a speech interface to issue commands to the Mercedes-Benz vehicle API.

A broad overview of what the shift might entail…

A more detailed view of how the user interface and experience will change…

User interfaces involve complexity, and that complexity needs to vest somewhere. The traditional approach (Web 2.0) is to surface the complexity to the user via the user interface, adding to the user's cognitive load and limiting input to typing, increasingly on a mobile phone.

With Web 3.0, simplicity is surfaced to the user. This means the complexity needs to move under the hood, to be addressed by the framework or platform. It makes development and implementation trickier, with added overhead, but allows for a simple, customized and multi-modal user interface.

User experience is also about how the user feels after using the interface. Lowering cognitive load contributes to this improved user experience.

Dynamic URL Chatbot Responses

This new feature was released on 4 November 2021. IBM Watson Assistant Actions allows variables to be easily added to a URL link within the response of an action step, hence within the bot response or dialog returned to the user. This enables makers to efficiently build up a URL during the course of the conversation and use it as a future reference. The applications for this feature are vast.

The simple demo Action with three steps to collect three pieces of information to complete the order.

These URLs can, for instance, be used to allow users of the chatbot to navigate to a specific location in a website, query an order, or serve as a future reference to access data.

Variables from the conversation can be accessed by typing the dollar sign ($). Once it is typed, a list of variable names is presented.

Obviously, the values of these variables can come from user input or from a back-end lookup.
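Conceptually, the feature amounts to interpolating collected variable values into a URL string. A minimal Python sketch of the same idea follows; the base URL and the three values the demo action might collect are made up for illustration.

```python
from urllib.parse import urlencode

def build_order_url(base_url, variables):
    """Append collected conversation variables to a URL as query parameters."""
    return f"{base_url}?{urlencode(variables)}"

# Hypothetical values the action might have collected in its three steps.
collected = {
    "product": "espresso-machine",
    "email": "user@example.com",
    "quantity": "2",
}

url = build_order_url("https://example.com/order", collected)
# urlencode percent-escapes special characters, e.g. "@" becomes "%40".
```

Because the variables are URL-encoded, user-entered values such as email addresses remain safe to embed in a link the chatbot returns or stores for later reference.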

The links can help personalize the conversation and grant users access to specific and relevant information.

Variables are named after the step in which the data that is stored in the variable is collected from the customer.

In this example, the variable in which the email address is stored is populated by the customer in Step 2, which asks “What is your email address?”.

When the variable is subsequently referenced in a step text response, the variable is represented as “2. What is your email address?”.

Variables exist for the duration of a single action.

The preview of the action collecting the user information.

This feature is especially convenient when the chatbot is not seen as an entity on its own and the whole user experience does not need to be contained within the chat window.

Rather, an experience is created where the conversation is facilitated within the chat window, while enabling the user to navigate the web interface based on conversational input.

The link can also be sent to the user via email, SMS, etc. Or the link can be used to render customized information.
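Dispatching such a link is ordinary application code outside of Watson Assistant. A sketch using Python's standard library follows; the sender address is a placeholder, and the SMTP relay host is an assumption.

```python
import smtplib
from email.message import EmailMessage

def build_order_email(recipient, link):
    """Compose an email carrying the generated reference URL."""
    msg = EmailMessage()
    msg["Subject"] = "Your order reference"
    msg["From"] = "bot@example.com"  # placeholder sender address
    msg["To"] = recipient
    msg.set_content(f"Track your order here: {link}")
    return msg

def send_order_email(msg, smtp_host="localhost"):
    """Hand the message to an SMTP relay (assumed to be reachable)."""
    with smtplib.SMTP(smtp_host) as server:
        server.send_message(msg)
```

Composing and sending are split so the message can be built (and inspected) as soon as the action completes, and delivered via whichever channel integration is in place.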


With Web 3.0, chatbots will be accessed from the web, emails, text messages and more.

Conversations will be shorter, with users dropping into a chatbot conversation to perform specific tasks…or to chat to a live agent.

Contextual awareness will be important…vertically and horizontally.

Vertically: as users resume conversations via the same medium, the chatbot should be fully aware of previous conversations, and this must inform the context of the current conversation.

Horizontally: as users move from one medium to another. A user might have a conversation over the phone with a service representative, and later initiate a conversation with the chatbot about the same issue; contextual awareness must be maintained.
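One illustrative way to picture this horizontal awareness is a context store keyed by user rather than by channel, so a phone conversation and a later chatbot session resolve to the same history. A toy sketch follows; all names are invented.

```python
from collections import defaultdict

class ContextStore:
    """Toy store keeping conversation context per user, across channels."""

    def __init__(self):
        self._history = defaultdict(list)  # user_id -> list of turns

    def record(self, user_id, channel, utterance):
        """Log one conversational turn, tagged with the channel it came from."""
        self._history[user_id].append({"channel": channel, "text": utterance})

    def context_for(self, user_id):
        # Horizontal awareness: turns from every channel are visible here.
        return list(self._history[user_id])

store = ContextStore()
store.record("user-42", "phone", "My order arrived damaged.")
store.record("user-42", "chatbot", "Any update on my issue?")
# The chatbot session can now see the earlier phone conversation.
```

A production system would of course persist this in a database and handle identity resolution across channels; the point is simply that the key is the user, not the medium.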



Cobus Greyling

Chief Evangelist @ HumanFirst. I explore and write about all things at the intersection of AI and language; NLP/NLU/LLM, Chat/Voicebots, CCAI.