November 20, 2023
by Intelligent Computing
In a paper published in Intelligent Computing, Philip Nicholas Johnson-Laird of Princeton University and Marco Ragni of Chemnitz University of Technology propose a novel alternative to the Turing test, the milestone evaluation devised by computing pioneer Alan Turing. They argue that it is time to shift the focus from whether a machine can mimic human responses to a more fundamental question: “Does a program reason in the way that humans reason?”
The Turing test, which has long been a cornerstone of AI evaluation, involves a human evaluator attempting to distinguish between human and machine responses to a series of questions. If the evaluator cannot consistently differentiate between the two, the machine is considered to have “passed” the test. While it has been a valuable benchmark in the history of AI, it has certain limitations:
- Mimicry vs. Understanding: Passing the Turing test often rewards convincing imitation of human responses, making it more a test of mimicry and language generation than of genuine human-like reasoning. Many AI systems excel at mimicking human conversation but lack deep reasoning capabilities.
- Lack of Self-Awareness: The Turing test does not require AI to be self-aware or have an understanding of its own reasoning. It focuses solely on external interactions and responses, neglecting the introspective aspect of human cognition.
- Failure to Address Thinking: Alan Turing himself recognized that the test might not truly address the question of whether machines can think. The test is more about imitation than cognition.
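For context, the mechanics of the original test are simple to state. The snippet below is a minimal sketch of Turing's imitation game, not code from the paper; the judge, ask_human, and ask_machine callables are hypothetical stand-ins for the interrogator's text channels to the two hidden respondents.

```python
import random

def imitation_game(judge, ask_human, ask_machine, questions):
    """Minimal sketch of Turing's imitation game (hypothetical interfaces).

    A judge poses the same questions to a hidden human and a hidden machine,
    then guesses which anonymous respondent is the machine. Over many rounds,
    the machine "passes" if the judge's guesses are no better than chance.
    """
    # Randomly assign the respondents to anonymous labels A and B.
    labels = ["A", "B"]
    random.shuffle(labels)
    respondents = {labels[0]: ask_human, labels[1]: ask_machine}

    # Collect each respondent's answers to the same questions.
    transcript = {label: [ask(q) for q in questions]
                  for label, ask in respondents.items()}

    guess = judge(questions, transcript)    # judge returns "A" or "B"
    truth = labels[1]                       # label actually held by the machine
    return guess == truth                   # True means the machine was detected
```

The framework proposed by Johnson-Laird and Ragni keeps the question-and-answer format but changes what the questions probe and how the answers are judged.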
Johnson-Laird and Ragni outline a new evaluation framework to determine whether AI truly reasons like a human. This framework comprises three critical steps:
1. Testing in Psychological Experiments:
The researchers propose subjecting AI programs to a battery of psychological experiments designed to distinguish human-like reasoning from standard logical processes. These experiments explore various facets of reasoning, including how humans infer possibilities from compound assertions and how they condense consistent possibilities into one, among other nuances that deviate from standard logic (a sketch of such a battery appears after these steps).
2. Self-Reflection:
This step aims to gauge the program’s understanding of its own way of reasoning, a critical facet of human cognition. The program must be able to introspect on its reasoning processes and provide explanations for its decisions. By posing questions that require awareness of reasoning methods, the researchers seek to determine if the AI exhibits human-like introspection.
3. Examination of Source Code:
In the final step, the researchers delve deep into the program’s source code. The key here is to identify the presence of components known to simulate human performance. These components include systems for rapid inferences, thoughtful reasoning, and the ability to interpret terms based on context and general knowledge. If the program’s source code reflects these principles, the program is considered to reason in a human-like manner.
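As a concrete illustration of the first step, the following sketch shows how such a battery might be run against a program under test. The items and the query_program hook are illustrative assumptions rather than the authors' materials; the idea is simply to score whether the program's answers track the responses humans typically give, rather than only what standard logic prescribes.

```python
from dataclasses import dataclass

@dataclass
class Item:
    prompt: str            # the reasoning problem posed to the program
    human_typical: str     # answer most human participants tend to give
    logic_normative: str   # answer standard logic prescribes

# Illustrative items only; a real battery would use published experiments.
BATTERY = [
    Item(prompt="Ann is here or Bob is here, or both. List the possibilities.",
         human_typical="Ann; Bob; Ann and Bob",
         logic_normative="Ann; Bob; Ann and Bob"),
    Item(prompt="If the card shows an A, then it shows a 2. List the possibilities.",
         human_typical="A and 2",
         logic_normative="A and 2; not-A and 2; not-A and not-2"),
]

def evaluate(query_program, battery=BATTERY):
    """Score how often the program's answers match human-typical responses
    versus logically normative ones. `query_program` is a hypothetical hook
    for whatever question-answering interface the program exposes."""
    matches_human, matches_logic = 0, 0
    for item in battery:
        answer = query_program(item.prompt).strip().lower()
        matches_human += answer == item.human_typical.lower()
        matches_logic += answer == item.logic_normative.lower()
    return {"human_like": matches_human / len(battery),
            "logic_like": matches_logic / len(battery)}
```

A program whose answers follow the human-typical pattern, and that can also explain why it gave them (step 2), would then have its source code inspected for the corresponding mechanisms (step 3).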
This innovative approach, replacing the Turing test with an examination of an AI program’s reasoning abilities, marks a paradigm shift in the evaluation of artificial intelligence. By treating AI as a participant in cognitive experiments and even submitting its code to analysis akin to a brain-imaging study, the authors seek to bring us closer to understanding whether AI systems genuinely reason in a human-like fashion.
As the world continues its pursuit of advanced artificial intelligence, this alternative approach promises to redefine the standards for AI evaluation and move us closer to the goal of understanding how machines reason. The road to artificial general intelligence may have just taken a significant step forward.
More information: Philip N. Johnson-Laird et al, What Should Replace the Turing Test?, Intelligent Computing (2023). DOI: 10.34133/icomputing.0064