Artificial intelligence: 3 ways to prioritize responsible practices – The Enterprisers Project

November 10, 2022 by AVA
The question of how to use AI responsibly has been a hot topic for some time, yet little has been done to implement regulations or ethical standards. To start seeing real industry change, we need to shift from simply discussing the risks of unbridled AI to implementing concrete practices and tools.
Here are three steps practitioners can take to make responsible AI a priority today.
1. Check model robustness

AI models can be sensitive: something as minor as capitalization can affect a model's ability to process data accurately. Accurate results are foundational to responsible AI, especially in industries like healthcare. For example, a model should understand that reducing the dose of a medication is a positive change, regardless of the surrounding content.
Tools like CheckList, an open source resource, probe failure modes of natural language processing (NLP) models that aren't typically considered. By generating a variety of test cases, CheckList helps evaluate model robustness and surface errors. Sometimes the fix is as simple as introducing more pronounced sentiment into the training data – "I like ice cream VERY much" instead of "I like ice cream". Though the statements differ, the model can learn that both are positive.
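The idea behind this kind of robustness testing can be sketched in a few lines. This is a hypothetical, simplified invariance test in the spirit of CheckList, not its actual API: perturb each input (here, by changing capitalization) and flag any input whose prediction flips. The `toy_sentiment_model` is a stand-in classifier invented for illustration.

```python
def toy_sentiment_model(text: str) -> str:
    """Stand-in classifier: keyword-based sentiment, case-insensitive."""
    positive = {"like", "love", "great"}
    words = {w.strip(".,!") for w in text.lower().split()}
    return "positive" if words & positive else "negative"

def invariance_test(texts, perturb, model):
    """Return inputs whose prediction changes under the perturbation."""
    failures = []
    for text in texts:
        if model(text) != model(perturb(text)):
            failures.append(text)
    return failures

texts = ["I like ice cream", "This movie is great"]
# Perturbation: change capitalization of the whole sentence.
failures = invariance_test(texts, str.upper, toy_sentiment_model)
print(failures)  # a robust model yields an empty list here
```

A real CheckList suite generates many such perturbations (typos, negation, named-entity swaps) and reports failure rates per test type rather than a single pass/fail.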
[ Also read AI ethics: 4 things CIOs need to know. ]
2. Identify labeling errors

Most widely used datasets contain serious labeling mistakes, since training data usually includes noise and errors. For example, a computer vision dataset might label a lobster as a crab. Cleanlab, another open source tool, automatically finds errors in any machine learning dataset. It uses Confident Learning, which leverages the model's own predicted probabilities to find noisy labels and reduce the error rate.
Labeling errors compromise the quality of a dataset, so identifying them is a big deal. Cleanlab can automatically flag cases with wrong labels and propose corrections. By evaluating labels that seem to conflict with the rest of a dataset, it predicts which labels are most likely to be wrong, making it easier for people to find and fix errors.
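The core intuition of Confident Learning can be sketched as follows. This is a deliberately simplified, hypothetical version of what Cleanlab does, not its real algorithm or API: an example is flagged when the model's predicted probability for its given label falls below that class's average self-confidence.

```python
def find_suspect_labels(labels, pred_probs):
    """Return indices of examples whose given label looks unreliable.

    labels: given class index per example.
    pred_probs: per-example list of predicted class probabilities.
    """
    classes = sorted(set(labels))
    # Per-class threshold: mean predicted probability of the given label
    # over examples assigned that label ("self-confidence").
    thresholds = {}
    for c in classes:
        probs = [pred_probs[i][c] for i, y in enumerate(labels) if y == c]
        thresholds[c] = sum(probs) / len(probs)
    return [i for i, y in enumerate(labels) if pred_probs[i][y] < thresholds[y]]

# Toy data: 4 examples, 2 classes (say, "crab"=0 and "lobster"=1).
labels = [0, 0, 1, 1]
pred_probs = [
    [0.9, 0.1],  # confidently class 0, labeled 0 - consistent
    [0.2, 0.8],  # model thinks class 1, labeled 0 - suspicious
    [0.1, 0.9],  # consistent
    [0.1, 0.9],  # consistent
]
print(find_suspect_labels(labels, pred_probs))  # → [1]
```

The real Cleanlab implementation adds calibration, per-class noise-rate estimation, and ranking of flagged examples, but the flag-below-class-confidence step above is the heart of the approach.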

3. Detect and mitigate bias

Combating bias is one of the trickier aspects of delivering responsible AI solutions, since humans and systems are inherently biased. For example, you can easily learn a patient's gender from their medical record – even if it never explicitly states male or female – from the "she" or "he" pronouns it uses. Age is similarly straightforward. Factors like race, ethnicity, and the social or environmental conditions that affect health are also vitally important, but not as easily discernible.
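The pronoun example above suggests a simple audit step you can run before training. This is a hypothetical sketch, with names and word lists invented for illustration: count gendered pronouns in free-text notes so you know when a model could infer gender even though no explicit field records it.

```python
import re

# Illustrative proxy lexicon; a real audit would use a broader list.
GENDERED_PRONOUNS = {
    "female": {"she", "her", "hers"},
    "male": {"he", "him", "his"},
}

def gender_proxies(note: str) -> dict:
    """Count gendered pronouns per group in a free-text note."""
    tokens = re.findall(r"[a-z']+", note.lower())
    return {
        group: sum(tokens.count(p) for p in pronouns)
        for group, pronouns in GENDERED_PRONOUNS.items()
    }

note = "Patient reports she has taken her medication as prescribed."
print(gender_proxies(note))  # → {'female': 2, 'male': 0}
```

Knowing that a proxy is present doesn't tell you what to do about it, but it makes the leakage visible so it can be measured, masked, or accounted for in evaluation.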
The problem is that data reflects real-world patterns of health inequality and discrimination. Lack of representation affects training data and thus results in biased AI design and deployment practices.
While systemic problems are at the root of this, NLP can help. In healthcare, for example, it can help practitioners make sense of structured and unstructured data and build a more complete and accurate picture of each patient – a model that also learns and improves over time. Fortunately, many open source tools are available to explore.
There’s still much work to be done to ensure that AI is used responsibly. Until standardized industry requirements are in place, enterprises must take matters into their own hands. To bring responsible AI beyond slideware, it’s time to start implementing ethical practices today. Checking for model robustness, identifying labeling errors, and detecting and mitigating bias are good ways to start.
[ Want best practices for AI workloads? Get the eBook: Top considerations for building a production-ready AI/ML environment. ]
The Enterprisers Project is an online publication and community helping CIOs and IT leaders solve problems.