Artificial Intelligence Resources

Artificial intelligence: Designing agents that can communicate and cooperate in Diplomacy | Nature Communications – Nature Middle East

December 7, 2022 by AVA

Nature Communications
December 7, 2022
A Nature Communications paper reports artificial intelligence (AI) agents that can negotiate and form agreements, enabling them to outperform agents lacking this ability in the board game Diplomacy. The findings demonstrate a deep reinforcement learning approach for building agents that communicate and cooperate with other artificial agents to make joint plans while playing the game.
Developing AI agents that can cooperate and communicate with one another is an important research goal. Diplomacy, a popular board game, offers a useful test bed for such behaviour because it involves complex communication, negotiation and alliance formation between players, all long-standing challenges for AI. Playing Diplomacy successfully requires reasoning about other players' future plans, about commitments between players, and about whether those commitments will be honoured. Previous AI agents have succeeded in single-player games and in competitive two-player games that involve no communication between players.
János Kramár, Yoram Bachrach and colleagues designed a deep reinforcement learning approach that enables agents to negotiate alliances and joint plans. The authors created agents that model other game players and form teams to counter the strategies of opposing teams. The learning algorithm allows agents to agree on future moves and to identify mutually beneficial deals by predicting possible future game states. Moving towards human-level play, the authors also investigated the conditions for honest cooperation by examining broken-commitment scenarios, in which agents deviate from past agreements.
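The deal-identification idea can be illustrated with a toy sketch: each agent compares candidate joint plans against what it would get with no deal, and the pair agrees on a plan that benefits both sides. This is only an illustration of the concept, not the paper's learning algorithm; the payoff table, the `find_deal` helper and the zero baseline are all hypothetical, standing in for values the real agents would estimate from simulated future game states.

```python
# Hypothetical payoffs: (my_move, partner_move) -> (my_gain, partner_gain).
# In the paper's setting these would come from learned value functions over
# predicted future game states; here they are invented numbers.
JOINT_OUTCOMES = {
    ("attack", "attack"): (-1, -1),
    ("attack", "support"): (3, 1),
    ("support", "attack"): (1, 3),
    ("support", "support"): (3, 3),
    ("hold", "hold"): (0, 0),
}

NO_DEAL_BASELINE = 0  # value each player expects if no agreement is reached

def find_deal(outcomes):
    """Return the joint plan that improves on both players' baselines,
    preferring the one with the highest total gain; None if no such deal."""
    deals = [(moves, gains) for moves, gains in outcomes.items()
             if gains[0] > NO_DEAL_BASELINE and gains[1] > NO_DEAL_BASELINE]
    return max(deals, key=lambda d: sum(d[1]), default=None)

moves, gains = find_deal(JOINT_OUTCOMES)
print(moves, gains)  # mutual support beats every unilateral option
```

With these numbers, mutual support dominates: both players do strictly better than their no-deal baseline, so the deal is worth agreeing to.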
The findings help form the basis of flexible communication mechanisms in AI agents, enabling them to adapt their strategies to their environment. They also show that an inclination to sanction peers who break contracts dramatically reduces the advantage of such deviators and helps foster mostly truthful communication, even under conditions that initially favour deviating from agreements.
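The sanctioning effect can be illustrated with a toy repeated-game calculation: breaking an agreement yields a one-off bonus, but if peers who observe the broken commitment punish the deviator in later rounds, the deviation stops paying off. The payoff constants and the `total_payoff` helper below are hypothetical, not taken from the paper.

```python
COOPERATE_PAYOFF = 2   # per-round payoff for honouring an agreement
DEVIATE_BONUS = 5      # one-off extra payoff for breaking it
SANCTION_COST = 3      # per-round payoff lost while peers sanction you

def total_payoff(rounds, deviate_at=None, peers_sanction=True):
    """Cumulative payoff over repeated play, with an optional deviation."""
    total = 0
    sanctioned = False
    for r in range(rounds):
        total += COOPERATE_PAYOFF - (SANCTION_COST if sanctioned else 0)
        if r == deviate_at:
            total += DEVIATE_BONUS
            sanctioned = peers_sanction  # peers remember the broken contract

    return total

honest = total_payoff(10)                                       # 20
unpunished = total_payoff(10, deviate_at=2, peers_sanction=False)  # 25
sanctioned = total_payoff(10, deviate_at=2)                     # 4
```

Without sanctions the deviator comes out ahead (25 vs. 20); with sanctioning peers the deviation becomes a losing strategy (4 vs. 20), mirroring the paper's finding that sanctioning removes the deviator's advantage.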
doi:10.1038/s41467-022-34473-5
© 2022 Springer Nature Japan K.K. Part of Springer Nature Group.

source

Filed Under: Uncategorized


Copyright © 2023 · 010101.ai · Website by Amador Marketing