Artificial Intelligence Resources


ChatGPT's ability to write like humans could erode trust in many fields – Axios

December 19, 2022 by AVA

Illustration: Gabriella Turrisi/Axios
The world's response to the oracular artificial intelligence program called ChatGPT started with chuckles but has quickly moved on to shivers.
What's happening: Trained on vast troves of online text, OpenAI's chatbot remixes those words into often-persuasive imitations of human expression and even style.
Yes, but: A growing chorus of experts believes it's too good at passing for human. Its capacity for generating endless quantities of authentic-seeming text, critics fear, will trigger a trust meltdown.
Why it matters: ChatGPT's ability to blur the line between human and machine authorship could wreak overnight havoc with norms across many disciplines, as people hand over the hard work of composing their thoughts to AI tools.
Education is where ChatGPT's disruption will land first, but any discipline or business built on foundations of text is in the blast radius.
What they're saying: "Shame on OpenAI for launching this pocket nuclear bomb without restrictions into an unprepared society," Paul Kedrosky, a venture investor and longtime internet analyst, wrote on Twitter earlier this month. "A virus has been released into the wild with no concern for the consequences."
The intrigue: AI companies, including OpenAI, are working on schemes that could watermark machine-generated texts.
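Proposed watermarking schemes of this kind are typically framed as biasing the model's token choices toward a pseudorandomly selected "green" subset of the vocabulary at each step, so that a detector sharing the secret seeding rule can test for a statistically improbable excess of green tokens. Below is a minimal toy sketch of that statistical idea (the vocabulary, the uniform "generator," and all function names here are illustrative assumptions, not OpenAI's actual scheme):

```python
import hashlib
import random

def green_set(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    """Deterministically partition the vocabulary using a hash of the
    previous token; the 'green' half is favored during generation."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = vocab[:]
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * fraction)])

def generate(vocab: list[str], length: int, watermark: bool = True,
             seed: int = 0) -> list[str]:
    """Toy 'language model': picks tokens uniformly, but when watermarking
    is on it restricts each choice to the current green set."""
    rng = random.Random(seed)
    tokens = [rng.choice(vocab)]
    for _ in range(length - 1):
        greens = green_set(tokens[-1], vocab)
        pool = sorted(greens) if watermark else vocab
        tokens.append(rng.choice(pool))
    return tokens

def green_fraction(tokens: list[str], vocab: list[str]) -> float:
    """Detector: fraction of tokens that fall in the previous token's
    green set -- near 0.5 for ordinary text, near 1.0 if watermarked."""
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:])
               if tok in green_set(prev, vocab))
    return hits / (len(tokens) - 1)
```

In practice a real scheme would only softly boost green-token probabilities (to preserve text quality) and the detector would compute a z-score against the expected green fraction, but the detection logic is the same counting test.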
The big picture: The intense online debate over ChatGPT among technologists, investors and critics has surfaced a range of warnings over its failings.
Accuracy: ChatGPT's conversational fluency masks its inability to distinguish between fact and fiction.
Bias: OpenAI has tried to limit the potential for ChatGPT to say things that are blatantly offensive or discriminatory, but users have found many holes in its restraints. (That's likely what OpenAI wanted to happen in this public trial so it could improve the product.)
Control: Large-scale machine learning-based AI provides output without explanation: Programmers know what they fed the program, but not why it arrived at a particular answer.
The other side: Previous waves of automation — like the Industrial Revolution — triggered eras of instability but left society intact.
Our thought bubble: Writing is hard! The more writing AI does for us, the fewer of us will practice the skill.


Copyright © 2023 · 010101.ai · Website by Amador Marketing