Work Underway to Assess and Rate AI Model Transparency

Consumers of AI models need more visibility and more transparency to be able to trust AI models that others are building.

By AI Trends Staff

We hold these truths to be self-evident: a machine learning model is only as good as the data it learns from. Bad data results in bad models. A bad model that identifies butterflies when it should be recognizing cats is easy to spot.

Sometimes a bad model is more difficult to spot. If the data scientists and ML engineers who trained the model selected a subset of available data with an inherent bias, the model's results could be skewed. Or the model might not have been trained well enough, or it could suffer from overfitting or underfitting.
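The overfitting failure mode, at least, lends itself to a simple check. The sketch below (ours, not from the article) uses scikit-learn to show the classic symptom: an unconstrained decision tree that scores nearly perfectly on its training data but noticeably worse on held-out validation data.

```python
# Minimal sketch: a large gap between training and validation accuracy
# is a classic symptom of overfitting.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

# An unconstrained tree can memorize the training set.
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

train_acc = model.score(X_train, y_train)  # typically ~1.0
val_acc = model.score(X_val, y_val)        # noticeably lower
print(f"train={train_acc:.2f}  val={val_acc:.2f}  gap={train_acc - val_acc:.2f}")
```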

If there are so many ways the model could fail, how are we to trust it? asks a recent account in Forbes on AI transparency and explainability by Ronald Schmelzer.

Application development best practices would call for a quality assurance (QA) and testing process supported by tools that can spot bugs or deviations from programming norms. Patched applications are run through regression tests to make sure new issues are not introduced. Continuous testing goes on as the application's functionality grows more complex.


While AI is different, the principles of sound software engineering do not suddenly cease to apply.

Still, machine learning models derive their functionality from data, using algorithms that attempt to build the most accurate model of that data. The resulting model is an approximation.

“We can’t just bug fix our way to the right model. We can only iterate. Iterate with better data, better algorithms, better configuration hyperparameters, more horsepower. These are the tools we have,” states author Schmelzer, managing partner and principal analyst at Cognilytica, an AI-focused research and advisory firm. He holds a BS in computer science and engineering from MIT, and an MBA from Johns Hopkins University.
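Iteration of this kind is how practitioners actually work: rather than patching a bug, they search over candidate configurations and keep the best-scoring one. As an illustration (our sketch, not Cognilytica's), a cross-validated grid search in scikit-learn automates exactly this loop over hyperparameters:

```python
# Illustrative sketch: "iterating with better hyperparameters" automated
# as a cross-validated search over candidate configurations.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=1000, random_state=0)

search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 200], "max_depth": [4, None]},
    cv=5,  # 5-fold cross-validation scores each configuration
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```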

Consumers of models do not have access to tools that tell them whether a model is a good one. They have a choice: use the AI or not.

“There is no transparency. As the market shifts from model builders to model consumers, this is increasingly an unacceptable answer. The market needs more visibility and more transparency to be able to trust models that others are building,” Schmelzer states.

So how do we get there?

Efforts to assess AI model transparency are underway. The National Institute of Standards and Technology (NIST) convened a meeting in May 2019 on the advancement of AI standards, as part of the White House's AI strategy plans. The group discussed the idea of a method by which models could be assessed for transparency. Analysts from Cognilytica subsequently developed a multi-factor transparency assessment and contributed it to the Advanced Technology Academic Research Center (ATARC), a non-profit that fosters collaboration among government, academia, and industry to address technology issues. ATARC runs a number of working groups; Schmelzer chairs its AI Ethics and Responsible AI working group.

The proposed assessment asks model developers to rate their models on five factors for transparency; a hypothetical sketch of how such a self-assessment might be recorded follows the list:

  • How explainable is the algorithm used to build the model?
  • Can we get visibility into the data set used for training?
  • Can we get visibility into the methods of data selection?
  • Can we identify the inherent bias in the data set?
  • Can we get full visibility into model versioning?
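Neither the article nor the assessment's public description specifies a data format, so the following is purely a hypothetical sketch: the five factors captured as a structured, machine-readable record that could ship alongside a model, in the spirit of model documentation efforts such as model cards. All field names here are our invention.

```python
# Hypothetical sketch only: recording a developer's self-assessment of the
# five transparency factors as structured metadata. Field names are invented.
from dataclasses import dataclass, asdict
import json

@dataclass
class TransparencyAssessment:
    algorithm_explainability: int      # 1 (opaque) .. 5 (fully explainable)
    training_data_visibility: int      # visibility into the training data set
    data_selection_visibility: int     # visibility into how data was selected
    bias_identification: int           # degree to which inherent bias is identified
    model_versioning_visibility: int   # visibility into model versioning
    notes: str = ""

assessment = TransparencyAssessment(
    algorithm_explainability=4,
    training_data_visibility=3,
    data_selection_visibility=2,
    bias_identification=2,
    model_versioning_visibility=5,
    notes="Training set documented; selection criteria only partially recorded.",
)
print(json.dumps(asdict(assessment), indent=2))
```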

Even though the models would be self-assessed by model developers, it would be progress.

“A transparency assessment is sorely needed by the industry,” Schmelzer stated. “In order to trust AI, organizations need models they can trust. It’s very hard to trust any model or any third party source without having transparency into how those models operate.”

Suggestions for Three Practical Steps

Three practical steps leaders can take to mitigate the effects of bias were suggested by Josh Sutton and Greg Satell, writing in Harvard Business Review in October 2019. Sutton is CEO of Agorai, founded in 2018 to offer reusable AI models for specific industries; Satell is a speaker and author whose books include “Cascades: How to Create a Movement that Drives Transformational Change,” released in 2019.

First, subject the AI system to rigorous human review; second, insist that engineers understand the algorithms incorporated in the AI system; and third, make the data sources used to train the AI available for audit.
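The authors do not prescribe a mechanism for the first step, but one common pattern is a confidence gate: the system acts automatically only when the model is confident, and ambiguous cases are routed to a person. A minimal sketch (ours, with an arbitrary threshold) looks like this:

```python
# Illustrative sketch of "rigorous human review": act on a prediction only
# when the model is confident; route ambiguous cases to a human reviewer.
def decide(probability: float, threshold: float = 0.9) -> str:
    """Return the handling for one positive-class probability score."""
    if probability >= threshold or probability <= 1 - threshold:
        return "auto"          # confident either way
    return "human_review"      # ambiguous: a person decides

for p in (0.97, 0.55, 0.03):
    print(p, decide(p))        # auto, human_review, auto
```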

“We wouldn’t find it acceptable for humans to be making decisions without any oversight, so there’s no reason why we should accept it when machines make decisions,” the authors state.


In an interview with the authors, Eric Haller, head of DataLabs at Experian, the credit reporting company, said earlier models were fairly simple, but today data scientists need to be more careful when selecting them.

“In the past, we just needed to keep accurate records so that, if a mistake was made, we could go back, find the problem and fix it,” Haller stated. “Now, when so many of our models are powered by artificial intelligence, it’s not so easy. We can’t just download open-source code and run it. We need to understand, on a very deep level, every line of code that goes into our algorithms and be able to explain it to external stakeholders.”
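The record keeping Haller describes is, in modern terms, model lineage: storing enough metadata alongside a trained model to trace any decision back to the exact code, data, and configuration that produced it. The sketch below is hypothetical (the function name is our invention, and it assumes training runs inside a git checkout):

```python
# Hypothetical sketch: a lineage record tying a trained model to the exact
# data, code revision, and configuration that produced it.
import hashlib
import json
import subprocess
from datetime import datetime, timezone

def lineage_record(data_path: str, hyperparams: dict) -> dict:
    with open(data_path, "rb") as f:
        data_hash = hashlib.sha256(f.read()).hexdigest()
    # Assumes this runs inside a git checkout of the training code.
    code_rev = subprocess.check_output(
        ["git", "rev-parse", "HEAD"], text=True
    ).strip()
    return {
        "trained_at": datetime.now(timezone.utc).isoformat(),
        "training_data_sha256": data_hash,
        "code_revision": code_rev,
        "hyperparameters": hyperparams,
    }

# Example usage with a stand-in data file:
with open("train.csv", "w") as f:
    f.write("feature,label\n1,0\n")
print(json.dumps(lineage_record("train.csv", {"max_depth": 4}), indent=2))
```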

Read the source articles in Forbes and in Harvard Business Review.
