The Barking Crow
Sorry, I keep saying that. Just in case you’re a judge who wants me to say one word and wants an artificially intelligent being to also say one word and will then use those words to decide which of us to kill.
If this fear of mine sounds far-fetched, consider yourself an idiot, because it’s happening. In thought experiments. And in experiments about those thought experiments. Nobody’s getting killed (because of this—plenty of people are getting killed for other reasons—) but the question is being asked. As we’ve been tipped by some guy named Ethan Mollick’s Twitter account, MIT researchers have created what they call “a minimal Turing test,” one in which a human submits one word and an artificially intelligent android submits another and a human judge decides which is the AI. The most popular word used by humans is “love.” AI predicts that, though. It predicts that humans will either say “love” or “compassion.” So it hedges and says “empathy,” and in a Love vs. Empathy matchup, empathy is killed 59% of the time by the human judge (it’s 61% if the human does pick compassion). A relief. The AI is bad at not getting itself killed. We need that out of our AI.
Moving on to the fun stuff. Want to guess the safest word for a human to say in this situation?
(It doesn’t need to be capitalized or accompanied by an exclamation mark, but when has that stopped us before?)
In a Poop vs. Empathy matchup, the human judge correctly identifies the human 78% of the time. Poop, it turns out, is the most human word we can say. Which gives me a whole new idea on how to rebrand myself as an approachable human politician.