AI: Build Your First Machine Learning Model (Part 1 of 2)
First we shape our tools
“We shape our tools and thereafter our tools shape us” — Marshall McLuhan
Starting your very first AI project can seem daunting, especially if you don’t know where to start. This short tutorial will take you step by step through the process of creating a custom machine learning model.
You can teach Watson your unique domain with a custom machine learning model that identifies entities and relationships unique to your industry in unstructured text.
Here you can build models without needing to write code.
We will start with a small data sample that can be grasped at a glance, so testing your results is easy and any errors or successes are immediately evident.
You will be using the IBM Watson environment as a tool to train and test your work.
More specifically, you will train Custom Categories in Watson Knowledge Studio and deploy the resulting model to the IBM Watson NLU API. Language, and natural language processing in particular, is something most people can relate to, which makes this tutorial easier to digest.
For any organization wanting to build a higher-order natural language understanding interface, creating custom categories will be a requirement. This is an easy way to use data to build a model that can be extended to the Watson NLU API.
Before we dive in, it is important to note that IBM warns this feature is still experimental at this stage, so take caution before using any functionality that depends on it in production.
Go ahead and create an instance of Watson Knowledge Studio in the Dallas location. I selected the Lite plan, defined the service name, and set the region/location to Dallas.
From the manage page of Watson Knowledge Studio, click the Launch button to launch the instance.
When you get the option to create a workspace within Knowledge Studio, select Classify content into custom categories.
Click the Create Workspace button and give your workspace an appropriate name. The next step is to prepare your training data.
Training data is often thought of as unnecessarily complex, and in some instances complex training data is necessary. For this exercise, however, we are using the example given in the IBM documentation here.
Often, when a small sample of data is used, it is easy to test your input against the training data. The file format must be CSV. Each line, as seen below, represents a category.
The first value on a line specifies the label of the category or subcategory. To specify a subcategory label, enter the parent categories, separated by forward slashes, before the name of the subcategory. Subcategories can be up to five levels deep. Labels must be unique and are case-insensitive, so the labels sport and Sport cannot coexist.
You can specify key phrases as additional values on each line, up to a maximum of 20.
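To make the format concrete, here is a small Python sketch that checks a CSV against the rules above (label in the first column, slashes marking subcategories up to five levels deep, case-insensitive label uniqueness, at most 20 key phrases per line). The sample category names are invented for illustration and are not from the IBM documentation.

```python
import csv
import io

# Hypothetical training data: the first value on each line is the category
# label (slashes mark subcategories), the remaining values are key phrases.
SAMPLE_CSV = """\
Sport,match,score,league
Sport/Soccer,goal,penalty,offside
Sport/Soccer/World Cup,FIFA,qualifiers
Finance,stocks,bonds,dividends
"""

def validate_categories(csv_text, max_depth=5, max_phrases=20):
    """Return a list of rule violations found in the CSV text."""
    seen = set()
    errors = []
    for row in csv.reader(io.StringIO(csv_text)):
        if not row:
            continue
        label = row[0].strip()
        if label.count("/") + 1 > max_depth:
            errors.append(f"{label}: deeper than {max_depth} levels")
        if label.lower() in seen:  # labels are case-insensitive
            errors.append(f"{label}: duplicate label")
        seen.add(label.lower())
        if len(row) - 1 > max_phrases:
            errors.append(f"{label}: more than {max_phrases} key phrases")
    return errors

print(validate_categories(SAMPLE_CSV))  # → [] (no violations)
```

Running a quick check like this before uploading can save a failed training round in Knowledge Studio.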
On the screen where you can train your categories model, simply drag and drop your CSV file. Training starts immediately and should not take long.
If the model training succeeds, you will see the Test Your Model screen.
If training fails, edit your CSV file accordingly and retry the training process. You can enter text in the input box and click Run Test. The results will display, and you will easily be able to match your input with the category to see how accurate the model was. Here you have the option to Retrain model or to Deploy model.
Let’s look at one more example.
This is a straightforward sentence, and the custom category is detected. A score closer to 1.0 indicates a high level of certainty that the text passage corresponds to the respective category.
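Downstream, you will typically act only on high-certainty matches. The sketch below filters an invented response fragment (shaped like typical NLU categories output; the labels, scores, and threshold are assumptions for illustration) by a confidence threshold.

```python
# Hypothetical response fragment, shaped like NLU categories output.
results = [
    {"label": "/Sport/Soccer", "score": 0.97},
    {"label": "/Finance", "score": 0.12},
]

# Keep only matches with a high level of certainty; 0.7 is an
# arbitrary example threshold, not an IBM recommendation.
CONFIDENCE_THRESHOLD = 0.7

confident = [r for r in results if r["score"] >= CONFIDENCE_THRESHOLD]
print(confident)  # → [{'label': '/Sport/Soccer', 'score': 0.97}]
```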
If you are satisfied with the accuracy of your model, you can go ahead and deploy your newly created custom model to IBM Watson Natural Language Understanding or IBM Watson Discovery. For this exercise, we will only make use of NLU.
It is important to note that you will be issued a model ID, which is required within your Watson NLU query to identify the model you want to reference. You will also need a Watson NLU subscription.
Next, we will test our model by referencing it from an NLU API call. At this stage it will become clear where the true power of this piece of cloud orchestration lies.
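As a preview of that call, here is a minimal sketch of the JSON body such a query might carry. The model ID and input text are placeholders, and the field layout (a custom model referenced under features.categories.model in the analyze request) is an assumption to verify against the NLU API reference.

```python
import json

def build_analyze_payload(text, custom_model_id):
    """Build a JSON body for an NLU analyze request that references
    a custom categories model by its model ID (assumed field layout)."""
    return {
        "text": text,
        "features": {
            "categories": {
                # Placeholder: the model ID issued when you deployed
                # your custom model from Knowledge Studio.
                "model": custom_model_id,
            }
        },
    }

payload = build_analyze_payload(
    "The team scored a late goal to win the match.",
    "your-model-id-here",
)
print(json.dumps(payload, indent=2))
```

Part two will cover making the actual authenticated call against this model.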
Click here for part two…