OpenAI on Tuesday issued an open call for experts to join a large-scale stress-testing program for its artificial-intelligence systems, hoping to further hone the controversial new technology.
In asking experts to join its Red Teaming Network, the startup hopes to show it is taking concerns about AI seriously. The approach formalizes previous efforts by OpenAI to work with outside experts to improve its programs and borrows its name from a term given to hackers invited inside a company to examine and strengthen its cybersecurity.
AI has captivated public interest in recent months—particularly OpenAI’s ChatGPT tool—but the emerging technology has its flaws: it is susceptible to dispensing false information, like saying a legal election was stolen, or dangerous information, like the recipe for napalm.
“Working with individual experts, research institutions, and civil society organizations is an important part of our process,” wrote the company in a blog post announcing the Red Teaming Network. “We see this work as a complement to externally specified governance practices, such as third party audits.”