AI: Shining a Light on Dark Data
Is Dark Data something I need to be scared of?
Growing up I was often scared of the dark; perhaps not so much of the dark itself, but more of the unknown, of that which cannot be seen.
The same applies to data: much of an organization's data is dark, and hence largely unknown.
According to IBM, 90 percent of the world's data was created in the past two years. Most of this data is unstructured, commonly referred to as dark data.
This data is captured by sensors, cameras and customer conversations, to name just a few sources, and most organizations deem it near impossible to even start exploring and making sense of it.
What is dark data?
Dark data is highly unstructured. Traditionally, most data has been captured by structured means: the user interface demands that data arrives in a certain structure, and humans format their input as the User Interface (UI) requires. Structured data can be categorized, sorted, indexed and so on, and hence made useful; unstructured data, on the other hand, is deemed dark, and is hard or even impossible to derive value from.
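To make the distinction concrete, here is a small, purely illustrative Python sketch (the records and email text are invented): a structured record can be sorted, filtered and indexed directly, while a raw customer email has no schema to query until something extracts meaning from it.

```python
# A structured record: trivial to sort, filter, index and aggregate.
structured_orders = [
    {"order_id": 1001, "customer": "A. Smith", "total": 42.50},
    {"order_id": 1002, "customer": "B. Jones", "total": 17.99},
]
largest = max(structured_orders, key=lambda order: order["total"])
print(largest["order_id"])  # 1001

# Unstructured ("dark") data: a raw customer email. There is no schema
# to sort or index on; meaning has to be extracted before it is useful.
unstructured_email = (
    "Hi, I ordered two weeks ago and still haven't received anything. "
    "Can someone tell me what's going on?"
)
```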
Traditional data capturing
In the 80s, many organizations had scores of people capturing data from documents into their mainframes. Subsequently, organizations digitized their environments, and entering or capturing data happened as a matter of course, as part of the workflow. The armies of data capture teams were deemed redundant.
Today the process has been handed over to the user. We, as users, have become so adept at entering our own details and data, organizing it for input via forms and the like, that we aid the whole process of capturing data ourselves.
Enter Unstructured Data
Video is expected to soon represent up to 90% of all consumer internet traffic; customers are having conversations with companies via email, voice, chatbots and the like. Add to that all the sensors, social media activity and so on generating data which is highly unstructured.
Examples of Interpreting Dark Data
Here are some examples of how AI and machine learning (ML) can be used to search and interpret video, audio and images.
Video One: this demonstration is underpinned by IBM Cloud and Watson services, including Natural Language Understanding, Speech to Text, Visual Recognition and Cloudant, and also leverages the IBM GitHub project called Dark data.
The sample application, called Dark Vision, processes videos by extracting frames and tagging these frames independently. No custom models were trained, so the results come from the default classifier.
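Dark Vision itself runs as a set of IBM Cloud services, but the core idea of extracting frames and tagging each one independently can be sketched in a few lines of Python. This is only an illustrative sketch, not the project's code; it assumes the opencv-python and ibm-watson packages, and the API key, service URL and sampling interval are placeholders.

```python
import io

import cv2                                    # pip install opencv-python
from ibm_watson import VisualRecognitionV3    # pip install ibm-watson
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

# Placeholder credentials; substitute your own service instance.
visual_recognition = VisualRecognitionV3(
    version="2018-03-19",
    authenticator=IAMAuthenticator("YOUR_APIKEY"),
)
visual_recognition.set_service_url("YOUR_SERVICE_URL")

def tag_video_frames(video_path, every_n_seconds=5):
    """Extract one frame every N seconds and tag it with the default classifier."""
    capture = cv2.VideoCapture(video_path)
    fps = capture.get(cv2.CAP_PROP_FPS) or 30
    step = int(fps * every_n_seconds)
    tags, index = [], 0

    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % step == 0:
            # Encode the frame as an in-memory JPEG and send it to Watson.
            _, jpeg = cv2.imencode(".jpg", frame)
            result = visual_recognition.classify(
                images_file=io.BytesIO(jpeg.tobytes()),
                images_filename=f"frame_{index}.jpg",
                threshold="0.6",
            ).get_result()
            classifiers = result["images"][0].get("classifiers", [])
            classes = classifiers[0]["classes"] if classifiers else []
            tags.append((index, [(c["class"], c["score"]) for c in classes]))
        index += 1

    capture.release()
    return tags

# Example: tags = tag_video_frames("new_york.mp4")
```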
You will see that three images are also uploaded. The first is of a fruit basket; keywords for the image include banana, fruit, food, melon, olive color and lemon yellow color.
The second is an image of Ashton Kutcher, with the keywords male, person, official and investor. A second, female face is also identified in the background.
The third image, of Ginni Rometty, is tagged with woman, portrait photo, person and blue coat, with face detection returning 87% confidence that the face is female.
Two YouTube videos from GeoBeats, each approximately 90 seconds long, were used for the video analysis. The first is on New York. From this video, image keywords are listed with their percentages, along with audio keywords, concepts and an emotional analysis of the transcript. The video transcript itself is also available, and frames can be searched for keywords like tower, watercraft and sky.
For the second video, on Paris, the full transcript is again available, along with audio keywords. The emotion is also analysed, and concepts such as Paris, Mona Lisa and the Louvre are identified. Again the video can be searched with words like tower, arch and Eiffel. This is a good example of how the IBM Cloud and Watson elements can be orchestrated to fashion unique solutions to real-world problems.
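The audio keywords, concepts and emotional analysis come from running the transcript through Natural Language Understanding. A minimal sketch of that step, assuming the ibm-watson Python SDK and placeholder credentials, might look like this:

```python
from ibm_watson import NaturalLanguageUnderstandingV1
from ibm_watson.natural_language_understanding_v1 import (
    Features, KeywordsOptions, ConceptsOptions, EmotionOptions,
)
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

nlu = NaturalLanguageUnderstandingV1(
    version="2019-07-12",
    authenticator=IAMAuthenticator("YOUR_APIKEY"),
)
nlu.set_service_url("YOUR_SERVICE_URL")

transcript = "..."  # the transcript text produced from the video's audio

analysis = nlu.analyze(
    text=transcript,
    features=Features(
        keywords=KeywordsOptions(limit=10),
        concepts=ConceptsOptions(limit=5),
        emotion=EmotionOptions(),
    ),
).get_result()

print([k["text"] for k in analysis["keywords"]])    # audio keywords
print([c["text"] for c in analysis["concepts"]])    # concepts, e.g. Paris, Louvre
print(analysis["emotion"]["document"]["emotion"])   # joy, sadness, anger, fear, disgust
```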
Video One illustrates how videos can be processed frame by frame, together with the audio, to return values like audio keywords, entities, emotional analysis and the video transcript. From the frames, landmarks, structures, faces and more can be detected.
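The transcript itself is produced by Speech to Text from the video's audio track. A hedged sketch of that step, again assuming the ibm-watson Python SDK, placeholder credentials and an audio file already extracted from the video:

```python
from ibm_watson import SpeechToTextV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

speech_to_text = SpeechToTextV1(authenticator=IAMAuthenticator("YOUR_APIKEY"))
speech_to_text.set_service_url("YOUR_SERVICE_URL")

# The audio track would first be pulled out of the video, for example with:
#   ffmpeg -i video.mp4 -vn audio.mp3
with open("audio.mp3", "rb") as audio_file:
    response = speech_to_text.recognize(
        audio=audio_file,
        content_type="audio/mp3",
    ).get_result()

# Stitch the recognized segments back into a single transcript string.
transcript = " ".join(
    segment["alternatives"][0]["transcript"]
    for segment in response["results"]
)
print(transcript)
```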
Below is a video on how a model was created using 50 pictures of each of the Tesla models. Post-training, a picture of a vehicle can be shown to the model, which returns a certainty as to whether the vehicle is indeed a Tesla and, if so, which Tesla model it is.
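Training such a classifier with Watson Visual Recognition comes down to supplying zipped sets of positive example images per class, plus optional negative examples. The sketch below is illustrative only; the zip file names, class names and credentials are placeholders, not the ones used in the video.

```python
from ibm_watson import VisualRecognitionV3
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

visual_recognition = VisualRecognitionV3(
    version="2018-03-19",
    authenticator=IAMAuthenticator("YOUR_APIKEY"),
)
visual_recognition.set_service_url("YOUR_SERVICE_URL")

# Each zip holds ~50 example photos of one class (file names are placeholders).
with open("model_s.zip", "rb") as model_s, \
     open("model_3.zip", "rb") as model_3, \
     open("model_x.zip", "rb") as model_x, \
     open("not_tesla.zip", "rb") as not_tesla:
    classifier = visual_recognition.create_classifier(
        name="tesla-models",
        positive_examples={
            "model_s": model_s,
            "model_3": model_3,
            "model_x": model_x,
        },
        negative_examples=not_tesla,  # other vehicles, so "not a Tesla" can be learned
    ).get_result()

# Training is asynchronous; poll get_classifier() until the status is "ready".
print(classifier["classifier_id"], classifier["status"])
```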
The International Space Station, launched in 1998, serves as a microgravity and space environment research laboratory in which crew members conduct experiments.
These experiments have produced large amounts of data, much of which is available to the public. Windows on Earth showcases images taken from the ISS, which are among the most popular. In this Code Pattern we will use images of cities at night to build a custom classifier using IBM Watson Visual Recognition.
There are thousands of images created on the ISS, and using AI we can help to categorize and organize them. In the video, an image of Chicago is submitted; Watson detects that it is indeed Chicago with a confidence of 0.87, with Houston as the closest other match. I also submitted a picture of Tokyo, which is recognized with a confidence of 0.91.
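Reading those confidence values back is a single classify call against the custom classifier. In the sketch below the classifier ID, image file name and credentials are placeholders:

```python
from ibm_watson import VisualRecognitionV3
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

visual_recognition = VisualRecognitionV3(
    version="2018-03-19",
    authenticator=IAMAuthenticator("YOUR_APIKEY"),
)
visual_recognition.set_service_url("YOUR_SERVICE_URL")

# "cities_at_night_XXXXXX" and the image file name are placeholders.
with open("chicago_at_night.jpg", "rb") as image:
    result = visual_recognition.classify(
        images_file=image,
        classifier_ids=["cities_at_night_XXXXXX"],
        threshold="0.0",            # return a score for every city class
    ).get_result()

classifiers = result["images"][0].get("classifiers", [])
classes = classifiers[0]["classes"] if classifiers else []
for city in sorted(classes, key=lambda c: c["score"], reverse=True):
    print(city["class"], round(city["score"], 2))   # e.g. Chicago 0.87, Houston ...
```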
This video demonstrates IBM Watson Visual Recognition with an image broken down into 200 x 200 pixel tiles. Each tile is tested against a Watson Visual Recognition model for any sign of rust: a practical implementation of machine learning. The model was created and trained using a few hundred images containing rust.
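The tiling itself is straightforward. Below is a hedged sketch using Pillow to cut 200 x 200 pixel tiles and score each one against a rust classifier; the classifier ID, file names and credentials are placeholders, and the scoring threshold is an assumption.

```python
import io

from PIL import Image                         # pip install pillow
from ibm_watson import VisualRecognitionV3    # pip install ibm-watson
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

visual_recognition = VisualRecognitionV3(
    version="2018-03-19",
    authenticator=IAMAuthenticator("YOUR_APIKEY"),
)
visual_recognition.set_service_url("YOUR_SERVICE_URL")

TILE = 200  # pixel size of each tile, as in the demonstration

def find_rust(image_path, classifier_id="rust_XXXXXX", min_score=0.5):
    """Split an image into 200 x 200 tiles and flag tiles the model scores as rust."""
    image = Image.open(image_path)
    width, height = image.size
    rusty_tiles = []

    for top in range(0, height, TILE):
        for left in range(0, width, TILE):
            tile = image.crop((left, top, left + TILE, top + TILE))
            buffer = io.BytesIO()
            tile.convert("RGB").save(buffer, format="JPEG")
            buffer.seek(0)

            result = visual_recognition.classify(
                images_file=buffer,
                images_filename=f"tile_{left}_{top}.jpg",
                classifier_ids=[classifier_id],
                threshold="0.0",
            ).get_result()

            classifiers = result["images"][0].get("classifiers", [])
            classes = classifiers[0]["classes"] if classifiers else []
            score = next((c["score"] for c in classes if c["class"] == "rust"), 0.0)
            if score >= min_score:
                rusty_tiles.append((left, top, score))

    return rusty_tiles
```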
Read More Here: