Extend Natural Language Understanding Functionality With Custom ML Models

This Is How To Enhance the IBM Watson NLU API With Watson Knowledge Studio ML Models

Cobus Greyling
6 min read · Oct 22, 2020


Introduction

You can extend Natural Language Understanding with custom models to support specific features.

When working with an NLU API, a set of generic, general-purpose Natural Language Understanding features is usually available.

NLU from the IBM Cloud services list.

One example of such an interface is spaCy, where relatively advanced functionality is available out of the box.

IBM Watson Natural Language Understanding is another such example, where on activation you have access to considerable NLU capability.

In certain use cases and instances the standard approach is sufficient, but there will come a stage where you want to customize it, as you would with an NLU API like Rasa’s.

IBM Watson Knowledge Studio annotation of documents.

IBM has a unique way of building a machine learning model which augments and informs your NLU API.

In this story I am going to look at using IBM Watson Knowledge Studio to create a model.

This model can then be deployed and referenced from the NLU API, where it informs the analysis of different elements.

For the purpose of this guide, we are only going to focus on extracting entities.

Getting Started with IBM Watson Natural Language Understanding

Watson NLU has a very simple console which gives you access to the API credentials. The documentation is extensive in assisting you to access the API in an array of ways.

The very simple NLU API console.

For this demonstration I made use of Postman to access the NLU API. Here is a simple query I performed on the NLU API with no customization or additional models.

Accessing & testing the NLU API from the Postman application.

Below is the JSON input in the expected format for the NLU API. The text element is our utterance in plain text. A URL can also be defined here.

All we want to retrieve are entities; additional features can be added, but as stated previously, for the purpose of this story we are focusing on entities.

The number of entities returned can also be limited; if no limit is set, the default maximum is 50.

{
  "text": "While I was at the Toyota dealership I got a call from Apple regarding my iPhone.",
  "features": {
    "entities": {}
  }
}
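Instead of Postman, the same query can be sent from Python with only the standard library. A minimal sketch is shown below; the instance URL, API key, and version date are placeholders you would replace with the values from your own NLU console.

```python
import base64
import json
import urllib.request

# Placeholders: substitute the service URL and API key from your NLU console.
NLU_URL = "https://api.us-south.natural-language-understanding.watson.cloud.ibm.com/instances/<instance-id>"
API_KEY = "<your-api-key>"

def build_analyze_request(text, url=NLU_URL, api_key=API_KEY, version="2020-08-01"):
    """Build a POST request for the NLU /v1/analyze endpoint, asking only for entities."""
    payload = {"text": text, "features": {"entities": {}}}
    # NLU uses HTTP basic auth with the literal user name "apikey".
    token = base64.b64encode(f"apikey:{api_key}".encode()).decode()
    return urllib.request.Request(
        f"{url}/v1/analyze?version={version}",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Basic {token}",
        },
        method="POST",
    )

req = build_analyze_request(
    "While I was at the Toyota dealership I got a call from Apple regarding my iPhone."
)
# urllib.request.urlopen(req) would send the query and return the JSON response.
```

The request is only built here, not sent, so the sketch runs without valid credentials.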

See the returned JSON below; each detected entity is identified. Toyota is identified as a company, and so is Apple. It would have been ideal if the entity iPhone was detected, and even dealership.

These pieces of information are vital in understanding the utterance’s context.

{
  "usage": {
    "text_units": 1,
    "text_characters": 81,
    "features": 1
  },
  "language": "en",
  "entities": [
    {
      "type": "Company",
      "text": "Toyota",
      "relevance": 0.963296,
      "disambiguation": {
        "subtype": [
          "Organization",
          "AutomobileCompany",
          "ManufacturingPlant",
          "AwardWinner"
        ],
        "name": "Toyota",
        "dbpedia_resource": "http://dbpedia.org/resource/Toyota"
      },
      "count": 1,
      "confidence": 0.999806
    },
    {
      "type": "Company",
      "text": "Apple",
      "relevance": 0.376233,
      "count": 1,
      "confidence": 0.998934
    }
  ]
}

Fortunately there is a way of referencing a machine learning model to inform the NLU API:

{
  "url": "www.url.example",
  "features": {
    "entities": {
      "model": "your-model-id-here"
    },
    "relations": {
      "model": "your-model-id-here"
    }
  }
}
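When the same custom model feeds several queries, a small helper can assemble this features object programmatically. The model ID below is a placeholder, as in the JSON above.

```python
def features_with_model(model_id, include_relations=False):
    """Build the NLU features object, pointing entity (and optionally
    relation) extraction at a custom Watson Knowledge Studio model."""
    features = {"entities": {"model": model_id}}
    if include_relations:
        features["relations"] = {"model": model_id}
    return features

payload = {
    "url": "www.url.example",
    "features": features_with_model("your-model-id-here", include_relations=True),
}
```

Keeping the model ID in one place like this avoids mistyping it across requests, which, as noted later, makes the query fail.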

Getting Started with IBM Watson Knowledge Studio

Watson Knowledge Studio is not as daunting as it initially seems. The first step is to upload the documents you want to use to create the ML model.

Uploading documentation to start creating a ML Model.

These documents are related to the target field or area of interest of your NLU API.

IBM has very extensive documentation on how to go about creating the model.

There are also downloadable reference documents to use as the basis of your model.

It is best to start with a relatively small collection of documents; this assists the process of human annotation.

Small documents can help human annotators identify coreference chains throughout the document.

As annotation accuracy improves, you can add more documents to the corpus to provide greater depth to the training effort.

Process of manually annotating the document.

From the image above, you can see the list of entities on the right, and the document next to it. By simply selecting one or more words, they can be assigned to the appropriate entity type. Of course, consistency is crucial in creating a good model.

After annotating, training and evaluation can be performed.

The next step is to train the model and perform evaluation. Training does take a few minutes; in the top right-hand corner the progress is confirmed with elapsed time.

Confirmation on successful ML training.

Now that we have completed the training of our model, the time comes to deploy it so we can actually use it. The ML model can be deployed to Discovery or Natural Language Understanding.

We are deploying to NLU.

Deploying the Watson Knowledge Studio model.

Deployment does take a while. Once done, you are supplied with a Model ID to reference your model. With each query, the NLU API accesses the ML model if the Model ID is defined.

Post ML Model deployment, the Model ID is created.

It must be noted that if the referenced Model ID is incorrect, or the model is not active, the query fails.

Example NLU API Referencing the ML Model

For the test a sentence is used which relates to the documents annotated in Watson Knowledge Studio.

Vekiarides founded TwinStrata with CTO John Bates, who was previously a distinguished technologist with HP storage division and an executive at Incipient.

You will see that the sentence is very specific and in general would be very difficult to analyze.

Making use of Postman again, the JSON input contains the text, with the number of entities set to 40. The model reference is also defined.

{
  "text": "Vekiarides founded TwinStrata with CTO John Bates, who was previously a distinguished technologist with HP storage division and an executive at Incipient.",
  "features": {
    "entities": {
      "model": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
      "limit": 40,
      "document": false
    }
  }
}

The entity types detected are:

  • Person
  • Organization
  • Work Title

{
  "usage": {
    "text_units": 1,
    "text_characters": 154,
    "features": 1
  },
  "language": "en",
  "entities": [
    {
      "type": "ORGANIZATION",
      "text": "HP",
      "disambiguation": {
        "subtype": [
          "NONE"
        ]
      },
      "count": 1,
      "confidence": 0.996624
    },
    {
      "type": "PERSON",
      "text": "Vekiarides",
      "disambiguation": {
        "subtype": [
          "NONE"
        ]
      },
      "count": 1,
      "confidence": 0.994682
    },
    {
      "type": "PERSON",
      "text": "John Bates",
      "disambiguation": {
        "subtype": [
          "NONE"
        ]
      },
      "count": 1,
      "confidence": 0.988645
    },
    {
      "type": "ORGANIZATION",
      "text": "TwinStrata",
      "disambiguation": {
        "subtype": [
          "NONE"
        ]
      },
      "count": 1,
      "confidence": 0.977798
    },
    {
      "type": "ORGANIZATION",
      "text": "Incipient",
      "disambiguation": {
        "subtype": [
          "NONE"
        ]
      },
      "count": 1,
      "confidence": 0.972558
    },
    {
      "type": "TITLEWORK",
      "text": "CTO",
      "disambiguation": {
        "subtype": [
          "NONE"
        ]
      },
      "count": 1,
      "confidence": 0.966713
    },
    {
      "type": "TITLEWORK",
      "text": "technologist",
      "disambiguation": {
        "subtype": [
          "NONE"
        ]
      },
      "count": 1,
      "confidence": 0.916097
    }
  ]
}

All of the identified entities are very specific and not that common.
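In client code, a response like this is easily flattened into a usable list. The helper below is illustrative; the sample data is a hand-copied fragment of the response shown above.

```python
def top_entities(response, min_confidence=0.9):
    """Flatten an NLU analyze response into (type, text, confidence)
    triples, keeping only entities above a confidence threshold."""
    return [
        (e["type"], e["text"], e["confidence"])
        for e in response.get("entities", [])
        if e.get("confidence", 0.0) >= min_confidence
    ]

# A hand-copied fragment of the response shown in this story:
sample = {
    "entities": [
        {"type": "PERSON", "text": "Vekiarides", "count": 1, "confidence": 0.994682},
        {"type": "ORGANIZATION", "text": "TwinStrata", "count": 1, "confidence": 0.977798},
        {"type": "TITLEWORK", "text": "CTO", "count": 1, "confidence": 0.966713},
    ]
}

print(top_entities(sample))
```

Raising `min_confidence` is a simple way to trade recall for precision when the custom model’s output feeds downstream logic.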

Conclusion

This approach allows for the use of documentation related to your organization. The annotation of these documents is a relatively easy task, with the advantage that the ML models created can be reused in different parts of the organization.

A model does not need to be dedicated to the NLU API; via Watson Discovery it can be made available to other services. In an organization where IBM Cloud is used extensively, ML models might already exist which can be plugged into the NLU API.


Cobus Greyling

I explore and write about all things at the intersection of AI & language; LLMs/NLP/NLU, Chat/Voicebots, CCAI. www.cobusgreyling.com