PROVIDENCE — Rhode Island hosts several top experts in artificial intelligence, a technology that’s been getting a lot of attention lately with the release of the large language model ChatGPT.
But AI is already here. It’s in the Roomba that avoids your tennis shoes. It’s in the letters of recommendation that one local college professor is writing. It’s even in this morning’s edition of our newsletter Rhode Map. (Click here to subscribe and see how it fared.)
And last week, it was at the Rhode Island House of Representatives, where a committee held a first-of-its-kind hearing on AI. The hearing served as a symbol of growing efforts to understand and eventually regulate the technology.
“The day is going to come that it’s going to become a big deal in this state,” said state Representative Lauren Carson, a Newport Democrat.
We’re still a ways from legislative movement on the topic. Other states, including Vermont, have already made efforts on AI. But one early initiative in Rhode Island could be an inventory of where artificial intelligence is actually being used in state government.
Some of the top concerns about AI include its use in determining eligibility for government benefits or in the criminal justice system, which can raise issues with bias, transparency, and civil rights. Many experts, including here in Rhode Island, share those concerns — not in an “I’m afraid I can’t do that, Dave” sort of way, but in the “this is already happening and it’s bad” sort of way. And, they say, something can be done about it.
“Just like we expect cars to have seat belts and anti-lock brakes, and we expect roads to have literal guardrails on them, we should do the same thing for AI as well if we want to make the most use of it,” Suresh Venkatasubramanian, a professor at Brown University who recently worked in the Biden White House on AI policy, testified Thursday.
Among the concerns lawmakers raised at Thursday’s hearing: the displacement of jobs and the use of AI to help make benefit decisions.
“I’m concerned we’re going to continue to see the trend of using more AI to fill in the jobs of eligibility technicians, which will result in cases of people not receiving their benefits despite the fact that they’re entitled to them,” said state Representative David Morales, a Providence Democrat.
“Unfortunately, almost certainly, because we keep seeing these things happen over and over again,” Venkatasubramanian said, pointing to cases like one in Idaho.
Venkatasubramanian defined AI as “a design of automated systems that can sense environments, learn from inputs, and make logical inferences.”
The pace of advancement is bewildering, and it touches on everything from Boston Dynamics’ robot dog, to self-driving cars, to language models like ChatGPT. The Boston Globe interviewed several Rhode Island-based experts in the field. Each was asked a lot of complex questions pointing back to one simple one: Will AI be a good thing or a bad thing for humanity? And depending on the answer, what should be done about that? Here’s a sampling of what they had to say (edited for length and clarity).
Suresh Venkatasubramanian, director, Center for Tech Responsibility at Brown University; professor, computer science and data science; formerly the assistant director for Science and Justice, White House Office of Science and Technology Policy, where he worked on the Blueprint for an AI Bill of Rights.
Good or bad for humanity? “I think it’s up to us. It’s a choice we are going to have to make, or a collection of choices we’re going to have to make. Our choices now are going to decide whether we end up with the good use cases or the bad use cases. There’s a lot of talk about how we have to wait and see what AI will do… no. The people who are not waiting and seeing are the ones who are making these systems, and are ignoring all the cries for guardrails. We have to collectively have the world say no, we are going to decide.”
Bonus question: So what should be done to address these concerns?
“We have no lack of knowledge of how to do this — the Blueprint for an AI Bill of Rights has very detailed guidelines on what we should be doing. And states actually are beginning to write legislation around this.”
Matt Lenz, senior director of state advocacy for the industry group BSA | The Software Alliance. BSA’s members range from Adobe to Zoom, and Lenz’s role has a national scope, but Lenz lives in Cranston and serves on the University of Rhode Island’s Board of Trustees.
Good or bad for humanity? “I think overall, it’s going to be good for humanity. It’s going to automate some of the routine tasks that some folks may do so that they can concentrate on areas of more creative thinking or more skill-based initiatives. I think it’s going to advance (society) the same way that the internet has advanced society, or really any type of groundbreaking technology such as this.”
Bonus question: What sort of guardrails might Congress and state legislatures need to consider?
“Our Framework to Build Trust in AI has a lot of what we think would be appropriate for putting in guardrails for automatic decision-making systems.”
Drew Zhang was recently appointed as the Alfred J. Verrecchia Endowed Chair in Artificial Intelligence and Business Analytics at the University of Rhode Island. He focuses on artificial intelligence and business analytics, and has worked in AI in both academic and professional settings.
Good thing or bad thing? “I really don’t know. If you look at the last 10 years, who would have predicted we would be here? I think it’ll be a mixture of both. Certainly it makes our life easier, makes our life richer in many ways — all kinds of angles, (like) entertainment and self-driving cars. You can make clear arguments about AI being good. But like any other technology, there are always going to be side effects. The question is, how big are they?”
Bonus question: Are language models like ChatGPT… sentient?
“I honestly don’t think we’re there. It’s a model that generates future text based on historical texts. What happens in the future, we really don’t know, but at this point, if you ask me where I fall, I fall in the spectrum of this debate very very close to the end where we’re saying, ‘No, it’s not sentient, it’s far from it.’”
Ellie Pavlick is the Manning Assistant Professor of Computer Science at Brown University with a research focus on natural language processing. Pavlick was recently interviewed on 60 Minutes about ChatGPT.
Good thing or bad thing? “I’m an optimistic person so I like to think good, but I will say right now I’m more cautious than excited. I would like to see us do the next couple years intelligently — you get the sense that everyone’s in this mad dash to put generative AI into every little nook and cranny of every business because they’re worried that their competitors are going to do it and if they don’t do it, they’re going to be irrelevant. I think that rushing could have some bad consequences — dealing with things like the risk of fake news, the risks of privacy breaches and security breaches. We’re bad at anticipating how they’re going to behave.”
Bonus question: So even the developers don’t know how these language models, or other large neural networks in general, work?
“No one really does. There are things that we understand. We kind of have a recipe for how to build them, but we don’t really understand why that recipe would yield the results that it yields. And then we don’t know how to predict the behavior of the model other than to run it and see what comes out.”
Stephen Atlas is a URI associate marketing professor who is integrating language models — ChatGPT in particular — into the classroom as a way to prepare students and the future workforce. He has used ChatGPT to help write letters of recommendation, and discloses when he does so.
Good thing or a bad thing? “I think it’s like any technological shift. It comes with a lot of changes. And I think like other technologies, it is going to result in an overall increase in productivity and quality of life — across the board on improvements in our next stage of civilization.”
Bonus question: It doesn’t sound like AI replacement is a big concern of yours.
“It’s understandable that many people are concerned with being replaced by AI, but for me that underscores the value of learning how to apply this accessible technology in a way that it can help us accomplish our goals.”
Brian Amaral can be reached at brian.amaral@globe.com. Follow him on Twitter @bamaral44.