Language models are transforming science and society thanks to recent advances in artificial intelligence and machine learning. Yet despite their astounding potential for problem-solving, and even for writing code, language models remain contentious: they occasionally output errors and reproduce biases present in the millions of documents they are trained on. There is, however, a way for these models to make up for their erroneous responses, and that is through user interaction.
The idea behind the approach is that feedback given through user interaction, clarifying the intent behind a query and the desired type of response, can enhance the model's understanding. Taking a step in this new research direction, the Allen Institute for AI (AI2) investigated a novel method that enables users to give instructive feedback to language models even when they are unsure of the correct answer.
To address this, the researchers present MemPrompt (Memory-assisted Prompt Editing with User Feedback), an approach that pairs GPT-3 with a memory of recorded cases in which the model misread a user's intent. When a similar query arrives, the corrective user feedback stored for those cases is used to clarify the intended task, improving the model's performance and accuracy. AI2's out-of-the-box methodology combines traditional prompt-engineering techniques with dynamic interactivity over model prompts, making it possible even for non-academics to offer feedback and assess whether the model's understanding matches their expectations.
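The core loop, retrieving stored feedback for a similar past query and using it to edit the prompt, can be sketched as follows. This is a minimal, hypothetical illustration only: the class name, the word-overlap retriever, and the threshold are stand-ins invented here, not the actual MemPrompt implementation, which pairs the memory with GPT-3 and a learned retriever.

```python
# Hypothetical sketch of memory-assisted prompt editing.
# A growing memory maps past queries to the corrective user
# feedback they received; new queries that resemble a stored
# one get that feedback appended as a clarification.

def overlap(a: str, b: str) -> float:
    """Crude word-overlap similarity (Jaccard) as a stand-in retriever."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

class MemPromptStub:
    def __init__(self, threshold: float = 0.3):
        self.memory = []  # list of (query, feedback) pairs
        self.threshold = threshold

    def record_feedback(self, query: str, feedback: str) -> None:
        """Store the user's clarification for a misunderstood query."""
        self.memory.append((query, feedback))

    def edit_prompt(self, query: str) -> str:
        """Attach feedback from the most similar past query, if close enough."""
        best = max(self.memory, key=lambda m: overlap(m[0], query), default=None)
        if best and overlap(best[0], query) >= self.threshold:
            return f"{query}\n(clarification: {best[1]})"
        return query  # no relevant memory: prompt is unchanged

mp = MemPromptStub()
mp.record_feedback("what is like happy", "I want a synonym, not an antonym")
print(mp.edit_prompt("what is like sad"))   # clarification is retrieved
print(mp.edit_prompt("capital of France"))  # unrelated query passes through
```

The point of the design is that the underlying model is never retrained: only the prompt changes, which is why the paper describes the method as a low-cost way to improve even very large deployed models.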
The researchers evaluated their approach on four tasks: two lexical tasks and two complex ethical-reasoning problems. According to their findings, a user can interactively improve a deployed GPT-3, raising overall accuracy across repeated queries that reflect various misunderstandings. In short, MemPrompt is a flexible architecture that represents a step toward low-cost utility enhancements, even for very large pre-trained language models. Among its many applications, personalization is one of the most important: user preferences can be stored in the model's memory to shape its behavior to their liking. More details regarding MemPrompt can be accessed here.
Check out the Paper, Code, Tool and AI2 Article. All credit for this research goes to the researchers on this project. Also, don't forget to join our Reddit page and Discord channel, where we share the latest AI research news, cool AI projects, and more.
Khushboo Gupta is a consulting intern at MarktechPost. She is currently pursuing her B.Tech at the Indian Institute of Technology (IIT), Goa. She is passionate about machine learning, natural language processing, and web development, and enjoys deepening her technical knowledge by participating in challenges.
Marktechpost is a California-based AI news platform providing easy-to-consume, bite-sized updates in machine learning, deep learning, and data science research.
© 2021 Marktechpost LLC. All Rights Reserved. Made with ❤️ in California