
Why is machine learning so hard to explain? Making it clear can help with stakeholder buy-in

Nobody is going to invest in a technology that they don’t fully understand. Helping stakeholders grasp how and why it works will encourage adoption.


It’s hard to get stakeholders to buy into technology they don’t understand. In the case of artificial intelligence (AI) and machine learning (ML), very few people actually get it, leaving an explainability gap for data scientists and businesses.

Three years ago, the MIT Technology Review published an article about AI titled, “The Dark Secret at the Heart of AI.” “No one really knows how the most advanced algorithms do what they do. That could be a problem,” Will Knight wrote. “Last year, a strange self-driving car was released onto the quiet roads of Monmouth County, New Jersey…. The car didn’t follow a single instruction provided by an engineer or programmer. Instead, it relied entirely on an algorithm that had taught itself to drive by watching a human do it.

SEE: TechRepublic Premium editorial calendar: IT policies, checklists, toolkits, and research for download (TechRepublic Premium)

“Getting a car to drive this way was an impressive feat. But it’s also a bit unsettling, since it isn’t completely clear how the car makes its decisions…. What if one day it did something unexpected—crashed into a tree, or sat at a green light? As things stand now, it might be difficult to find out why.”

Technologies “hidden” inside AI, such as machine learning, are difficult for anyone to explain. That creates risk for companies, and for the CIOs and data scientists who are expected to explain how their AI operates.

“The fundamental explainability flaw with AI is that it uses ML, and ML is a black box,” said Will Uppington, co-founder and CEO of Truera, which provides software that helps companies operationalize AI and ML. “That means even when models work, data scientists don’t necessarily know why. This hinders data scientists from building high-quality ML applications quickly and efficiently. It also becomes a problem when non-data scientists, such as business operators, regulators, or consumers, ask questions about a result.”

Uppington said that Model Intelligence Platforms can help address the explainability issue.

“This software helps data scientists and non-data scientists explain, evaluate, and extract insights from models and the data used to build the models,” Uppington said. “You can think of it as the equivalent of a dashboard for machine learning. This software is also the key to ensuring that models are fair and that companies can adopt them responsibly.”

SEE: Natural language processing: A cheat sheet (TechRepublic)

For instance, if you’re a bank, you must be able to explain to regulators how your lending AI software works and how it guards against bias. Even if you don’t have to deal with regulators, technologists must be able to explain to their boards, C-level executives, and end business users how an AI/ML model works, and why they should trust the results.
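To make that concrete, here is a minimal sketch of one common bias check a lender might run before and after a model goes live: the disparate impact ratio, the approval rate for a protected group divided by the approval rate for a reference group. The column names, groups, data, and the rough 0.8 review threshold are illustrative assumptions, not any particular bank’s or regulator’s standard.

```python
# Minimal sketch: one common bias signal for a lending model, the disparate
# impact ratio (approval rate of a protected group / approval rate of a
# reference group). Data and threshold below are illustrative only.
import pandas as pd

def disparate_impact(df: pd.DataFrame, group_col: str, approved_col: str,
                     protected: str, reference: str) -> float:
    """Return the ratio of approval rates: protected group vs. reference group."""
    rates = df.groupby(group_col)[approved_col].mean()
    return rates[protected] / rates[reference]

# Hypothetical scored applications: 1 = approved by the model, 0 = declined
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   0,   1,   1,   1,   1,   0],
})

ratio = disparate_impact(decisions, "group", "approved", protected="A", reference="B")
print(f"Disparate impact ratio: {ratio:.2f}")  # values well below ~0.8 warrant review
```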

Ensuring—and maintaining—trust in what the AI says isn’t just about cleaning and vetting data to ensure that it isn’t biased before the AI goes live. Over time, there is bound to be “drift” from the original data and algorithms that operate against it. You have to monitor and tune for that as well.
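What such monitoring can look like in practice: a small sketch that compares the distribution of a single live feature against its training distribution with a two-sample Kolmogorov-Smirnov test from SciPy. The feature, data, and alert threshold are illustrative assumptions; real deployments track many features, as well as the model’s score distribution, over time.

```python
# Minimal drift-monitoring sketch: compare a live feature's distribution
# against the training distribution with a two-sample KS test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_income = rng.normal(loc=55_000, scale=12_000, size=5_000)  # data the model was trained on
live_income     = rng.normal(loc=61_000, scale=12_000, size=1_000)  # data the model sees in production

result = ks_2samp(training_income, live_income)
if result.pvalue < 0.01:  # illustrative cutoff; tune per feature and traffic volume
    print(f"Possible drift in 'income': KS={result.statistic:.3f}, "
          f"p={result.pvalue:.4f} -> investigate or retrain")
else:
    print("No significant drift detected for 'income'")
```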

Tools can be added to AI/ML deployment and maintenance testing to ascertain the accuracy of AI/ML systems. With this tooling, organizations can run a representative set of test cases to understand how the AI’s underlying “black box” ML decisioning is working, and whether the results it delivers are “true.”
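As one illustration of this kind of testing (not any specific vendor’s product), the sketch below trains a throwaway classifier and then uses scikit-learn’s permutation importance on a held-out test set to see which inputs the “black box” actually relies on. The synthetic data and model choice are assumptions for demonstration only.

```python
# Minimal sketch: probe a "black box" model with a representative test set
# using permutation importance to see which features drive its decisions.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2_000, n_features=6, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in the held-out set and measure how much accuracy drops:
# large drops indicate features the model is actually relying on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```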

SEE: Artificial intelligence is struggling to cope with how the world has changed (ZDNet)

In one use case, Standard Chartered Bank used software to understand how the AI model it was building made its lending decisions. By inputting different lending profiles and criteria, Standard Chartered’s team could see the results that the AI engine returned, and why. They could confirm that the AI’s decisioning stayed true to what the bank expected, and that both the data and the decision-making process were unbiased. Just as importantly, those working on the project could explain the AI process to stakeholders. They had found a way to crack open AI’s ML “black box.”
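A rough sketch in the same spirit (not Standard Chartered’s actual tooling): fit a toy lending model, then score copies of a hypothetical applicant profile that differ in a single attribute, so reviewers can watch how the approval probability moves as that one input changes. The feature names, training data, and model are illustrative assumptions.

```python
# Minimal what-if probe: vary one attribute of a hypothetical applicant and
# watch how a toy lending model's approval probability responds.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Tiny synthetic lending history: income, debt-to-income ratio, approved (1/0)
history = pd.DataFrame({
    "income":         [30_000, 45_000, 52_000, 80_000, 95_000, 40_000, 70_000, 60_000],
    "debt_to_income": [0.55,   0.40,   0.30,   0.20,   0.15,   0.50,   0.25,   0.35],
    "approved":       [0,      0,      1,      1,      1,      0,      1,      1],
})
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(
    history[["income", "debt_to_income"]], history["approved"]
)

def what_if(base_profile: dict, feature: str, values) -> pd.DataFrame:
    """Score copies of a base applicant profile that differ only in one feature."""
    rows = [{**base_profile, feature: v} for v in values]
    frame = pd.DataFrame(rows)
    frame["approval_probability"] = model.predict_proba(frame[["income", "debt_to_income"]])[:, 1]
    return frame

applicant = {"income": 52_000, "debt_to_income": 0.30}
print(what_if(applicant, "debt_to_income", [0.15, 0.30, 0.45, 0.60]))
# Stakeholders can see the decision boundary move as a single input changes.
```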

“If data scientists can’t explain how their AI applications work, then business owners aren’t going to approve them, business operators aren’t going to be able to manage them, and end users can reject them,” Uppington said. “Companies are increasingly aware of the challenge of building trust among stakeholders. It’s why the data scientists [and AI] leaders in our recent survey said that ‘stakeholder collaboration’ was the No. 1 organizational challenge facing their company.”
