Now You Can Use Any Language With IBM Watson Assistant

Using Universal Models That Adapt To The Language Of Training Data.

Cobus Greyling
7 min read · Jul 18, 2021


Introduction

When venturing into the field of chatbots and Conversational AI, the process usually starts with a search for available frameworks. Invariably, this leads you to one of the big cloud chatbot service providers.

Most probably you will end up using IBM Watson Assistant, Microsoft LUIS/Bot Framework, Google Dialogflow, etc. There are advantages: these environments offer easy entry in terms of cost and a low-code or no-code approach.

However, one big impediment you often run into with these environments is the lack of diversity when it comes to language options.

This changed on 17 June 2021, when IBM introduced the Universal language model, which allows you to build a chatbot in any language you want to implement.

According to IBM, the universal model applies a set of shared linguistic characteristics and rules from multiple languages as a starting point. It then learns from training data written in the target language that you add to it.

Minority Languages

There is a real requirement for chatbots in areas where the target audience speaks only a vernacular language. In Africa alone there are seven major language families, and the total number of languages natively spoken in Africa is estimated to be between 1,250 and 2,100.

Some estimates put this at more than 3,000, all depending on what you see as a language versus a dialect.

Making provision for these languages is often not feasible or viable for the large cloud providers, even though it is highly desirable in these markets.

These geographic areas are often in dire need of access to information at a very low cost. Low cost often means asynchronous communication like chatbots via text or SMS.

When considering creating a Conversational UI in a vernacular language, the assumption is made that you require:

  • Massive Computing Power
  • Masses of Training Data and
  • Very Specialized Knowledge.

Watson Assistant has addressed all three of these impediments.

Translation Is Not The Best Approach

One of the chatbot framework design considerations for minority languages is translation: translating the vernacular into one of the larger languages before NLU is applied.

However, the best user experience is achieved by using multiple conversational skills, each supporting one of the languages you want to cover from an NLU perspective.

The list of Watson Assistant languages from the documentation with the Universal language at the end.

As seen above, with Watson Assistant you can use one of 13 language models or, now, choose the universal model, which can adapt to any other language.

To deploy, attach each language skill to its own assistant that you can deploy in a way that optimizes its use by your target audience.

For example, use the web chat integration with your French-speaking assistant to deploy to a French-language page on your website. Deploy your German-speaking assistant to the German page of your website.

Maybe you have a support phone number for French customers. You can configure your French-speaking assistant to answer those calls, and configure another phone number that German customers can use.
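
As a rough illustration, the sketch below routes each request server-side to the assistant matching the user's locale, using the ibm-watson Python SDK (V2 API). The assistant IDs, API key and service URL are placeholders for your own values; this is a minimal sketch, not a production router.

```python
# Minimal sketch: route each utterance to a language-specific assistant.
from ibm_watson import AssistantV2
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

# Hypothetical mapping from user locale to a dedicated assistant.
ASSISTANT_IDS = {
    'fr': 'FRENCH_ASSISTANT_ID',
    'de': 'GERMAN_ASSISTANT_ID',
}

assistant = AssistantV2(
    version='2021-06-14',
    authenticator=IAMAuthenticator('YOUR_API_KEY'),
)
assistant.set_service_url('https://api.us-south.assistant.watson.cloud.ibm.com')

def ask(locale: str, text: str) -> dict:
    """Send a user utterance to the assistant for the given locale."""
    response = assistant.message_stateless(
        assistant_id=ASSISTANT_IDS[locale],
        input={'message_type': 'text', 'text': text},
    ).get_result()
    return response['output']

print(ask('fr', 'Bonjour'))    # handled by the French-speaking assistant
print(ask('de', 'Guten Tag'))  # handled by the German-speaking assistant
```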

Some Background

IBM Watson Assistant consists of two main components: an assistant with one or more skills. Multiple skills, and multiple skill types, need to be orchestrated within an assistant. And as seen earlier in this story, it makes sense to dedicate an assistant to a language.

The basic architecture of Watson Assistant consists of two main parts: skills and an assistant.

Each skill type, of which there are three, has a specific use case; this can of course be extended.

However, a skill type used out of place can seriously impede the scaling of your chatbot.

Let's start with the difference between an assistant and skills.

The assistant can be seen as the container of the conversational agent.

The assistant houses the skills and facilitates the connectors to the various integration mediums.

The assistant directs requests down the optimal path for solving a customer problem.

By adding skills, your assistant can provide a direct answer to an in-domain question, or reference more generalized search results for more complex requests.

Here are a few key characteristics of an assistant:

  • The assistant integrates with various mediums: Facebook, Slack, Twitter, etc.
  • The assistant also houses the different skills.
  • An assistant can have a single skill or multiple skills.
  • You can also think of skills as different elements representing different parts of an organization.
  • Assistants can be assigned to specific conversational languages.
  • When an assistant is split out into multiple assistants, efficiencies can be achieved by maintaining the same structure across the different assistants.

Watson Assistant makes provision for three types of skills:

  • Dialog skill: intents, entities and a dialog tree for structured conversations.
  • Actions skill: a simplified, action-oriented way of building conversations.
  • Search skill: returns search results, via Watson Discovery, for questions the dialog cannot answer directly.

Practical Implementation

Below is a practical implementation for a minority language, Afrikaans. When creating an Actions Skill or Dialog Skill, under Language, right at the bottom, you will see the Another Language option.

For both the Actions Skill and the Dialog Skill Another Language is listed.
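
The same skill can also be created programmatically. Below is a minimal sketch using the V1 API of the ibm-watson Python SDK. The intent name and Afrikaans examples are hypothetical, and I am assuming 'xx' as the language code corresponding to the universal model; verify this against the documentation for your service version.

```python
# Minimal sketch: create a universal-language (Afrikaans) dialog skill
# and train an intent with Afrikaans user examples.
from ibm_watson import AssistantV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

assistant = AssistantV1(
    version='2021-06-14',
    authenticator=IAMAuthenticator('YOUR_API_KEY'),
)
assistant.set_service_url('https://api.us-south.assistant.watson.cloud.ibm.com')

# Create the skill (workspace) with the universal language model.
workspace = assistant.create_workspace(
    name='Afrikaans Skill',
    description='Dialog skill using the universal language model',
    language='xx',  # assumption: code for universal / "Another Language"
).get_result()

# Add an intent with Afrikaans training examples (hypothetical intent name).
assistant.create_intent(
    workspace_id=workspace['workspace_id'],
    intent='afspraak_maak',  # "make an appointment"
    examples=[
        {"text": "Ek wil graag 'n afspraak maak"},
        {"text": "Kan ek Maandag inkom?"},
    ],
)
```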

From a language perspective, everything else remains the same when creating the skill and the assistant. Two features which are glaringly absent are recommended user examples and entity annotation.

The recommended examples & Annotate entities options are not present when Another Language is chosen.

Entity annotation is really important for scaling a chatbot, as it:

  • Creates context for how entities are used within intents.
  • Adds much-needed structure to the training data.

System entities seem to work with a non-defined language option.

System entities are available when using Universal/Another Language. From the list, I tested only system date.

Fuzzy matching works quite well in Entity extraction.

Another feature that is also available is fuzzy matching. This feature was tested with weekday names in Afrikaans. Even with the weekdays being misspelled, Watson Assistant was able to detect the weekday names and normalize them.
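
An entity with fuzzy matching can be defined through the API as well. Here is a minimal sketch, reusing the AssistantV1 client and workspace from the previous sketch; the entity name and values are illustrative.

```python
# Minimal sketch: define a weekday entity with fuzzy matching enabled,
# so misspelled Afrikaans weekday names still resolve to a canonical value.
assistant.create_entity(
    workspace_id=workspace['workspace_id'],
    entity='weeksdag',   # hypothetical entity name
    fuzzy_match=True,    # tolerate misspellings such as 'Maandaag'
    values=[
        {'value': 'Maandag', 'synonyms': ['Ma']},
        {'value': 'Dinsdag', 'synonyms': ['Di']},
        {'value': 'Woensdag', 'synonyms': ['Wo']},
        {'value': 'Donderdag', 'synonyms': ['Do']},
        {'value': 'Vrydag', 'synonyms': ['Vr']},
    ],
)
```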

Testing intent recognition with different variations.

In the above example, different variations were used to test intent recognition. The test data was fairly different from the training data. The test was obviously not scientific or benchmarked in any way, but the functionality is definitely there and the initial results are astounding.
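
A quick, informal test of this kind can also be scripted. The sketch below, reusing the client and workspace from the earlier sketches, sends a deliberately misspelled Afrikaans utterance and prints the detected intents and entities; system entities such as @sys-date appear in the same entities list as custom ones. Note that training runs automatically after updates, so allow it to finish first.

```python
# Minimal sketch: test intent and entity recognition against the skill.
result = assistant.message(
    workspace_id=workspace['workspace_id'],
    input={'text': 'Kan ek Woensdaag om 10:00 inkom?'},  # misspelled weekday
    alternate_intents=True,
).get_result()

for intent in result['intents']:
    print(intent['intent'], intent['confidence'])

for entity in result['entities']:
    # Custom entities (fuzzy-matched) and system entities both appear here.
    print(entity['entity'], entity['value'])
```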

This example is more complex, with compound entities where both the day and the time need to be detected.

You can see the weekday is misspelled and then normalized in the result. The time is also picked up, both by the custom-defined entity for time and by the system entity.

Time is not normalized and most probably needs to be confirmed.

The weekday name is normalized to the entity value, and both the system and custom entities are shown for the time.

Compound entities work quite well, and the time is also detected. Something which still needs to be established is the use case where entities cannot be constituted by a finite list.

For example, months, weekdays, etc. are all finite lists. City or region names are, for all practical purposes, an infinite list. A user can explicitly be asked for the city name, but detecting it contextually on the first dialog turn makes for better conversation.

Conclusion

The more astute Conversational AI environments are mastering the art of adding functionality, and by implication complexity, to their platforms, while simultaneously simplifying the user interface.

This is also evident with the Watson Assistant approach.

The Good

  • Selecting an alternative language is an easy and seamless process.
  • It makes sense to include the relevant language in the name of each skill and/or assistant.
  • The accuracy from an intent and entity perspective seems good.
  • Good results could be achieved with a small amount of training data.
  • Fuzzy matching can still be used within entities with good success.

The Not-So Good

  • Data such as user conversations cannot be used to generate training recommendations.
  • Entity annotation is not available when using the Universal option for languages not explicitly defined. This is really the biggest drawback for me when using Another Language, and the one feature I really hope IBM Watson Assistant adds for universal languages.
  • There is no insight into the NLP pipeline, nor settings to optimize the skill or assistant. Some could see this as an advantage, and it definitely aligns with the big cloud providers' approach.

--

Cobus Greyling

I explore and write about all things at the intersection of AI & language; LLMs/NLP/NLU, Chat/Voicebots, CCAI. www.cobusgreyling.com