An open letter signed by tech leaders, researchers proposes delaying AI development

ADRIAN FLORIDO, HOST:

One problem with artificial intelligence technology is that it's getting really, really good - to the point where technology experts are worrying about the profound negative impacts it could have on society. Today, more than a thousand tech leaders and researchers - among them, Elon Musk - signed an open letter calling for a six-month pause in developing the most powerful AI systems. Peter Stone is one of the signatories to that letter. He is associate chair of computer science and director of robotics at the University of Texas. Thanks for joining us.

PETER STONE: My pleasure. Thanks for inviting me.

FLORIDO: AI is a technology - a system that learns skills by analyzing massive amounts of data to the point where it can start to perform a lot of the tasks that, until now, only humans could do, like have a conversation or write an essay. So when tech professionals talk about their fear of advanced AI, what are you talking about?

STONE: From my perspective, it's very important to distinguish different types of artificial intelligence technologies. The one you described is one of the more recent ones - generative artificial intelligence models based on neural networks. And I think myself and many other AI professionals and researchers are concerned about the possible uses and misuses of these new technologies and concerned that the progress is moving more quickly than is allowing us to have time to really understand the true implications before the next generation comes out.

Some of the things that we've been coming to terms with are having to do with changing people's opinions in the political sphere and understanding, you know, how that can happen when it's appropriate. People are still getting to grips with the intellectual property implications of these generative models. But there's still, I believe, many realms and domains where we haven't had time yet to explore what these models can do. And that's the thing that concerns me the most - is that, while we're still understanding that, the next generation is being developed. To me, it seems a little bit like, immediately after the Model T was invented, jumping straight to a national highway system, with cars that can go 80 miles an hour, without having the time to think about what regulations would be needed along the way.

FLORIDO: The letter you signed calls for a pause in the development of some of the most advanced AI technology. Why a pause? What would that achieve?

STONE: So the pause, if enforceable, would give time for the dust to settle, really, on what are these potential implications of these models. And so, you know, the pause would, for one thing, give the academic community a chance to educate the general public about what to expect from these models. They're fantastic tools, but it's very easy and natural for people to give them more credit than they deserve - to expect things from them that they're not capable of. You know, I think there's sort of a need for some time for everybody to understand how they can be regulated. That's sort of called for in the letter as well - to let, you know, governments and society respond.

FLORIDO: I should be clear here that your letter is not directed at a government agency. You're asking these tech companies to police themselves - to sort of hit the brake themselves. But these companies are locked in a race to develop the most advanced technology. What incentive do they have to heed your warnings?

STONE: So I think there is no incentive other than the agreement or the - you know, the moral compass, as is mentioned in the letter, of the people who are doing the development. And we're not likely to see the effect that the letter is directly calling for, but I think what it is going to do is raise public awareness of the need for, you know, understanding and the need for, if possible, taking some steps to sort of slow down and think a little bit more soberly about the next step before, you know, racing, as you said, to be the first to generate the next bigger model.

FLORIDO: Are you excited about the potential in artificial intelligence technology?

STONE: Oh, absolutely. This is a fantastic time to be in the field of artificial intelligence. There's really exciting things happening, and I would not at all be in favor of stopping research on artificial intelligence. I identify very much in the letter with the statement that humanity can enjoy a flourishing future with artificial intelligence, but I don't think it'll happen automatically. I think we need to think very carefully about what we should do, not just what we can do, when it comes to AI development. If we do it correctly, I think the world is going to become a much better place as a result of progress in artificial intelligence.

FLORIDO: I've been speaking with Peter Stone from the University of Texas at Austin. Thank you.

STONE: My pleasure. Thank you.

(SOUNDBITE OF MUSIC)

Transcript provided by NPR, Copyright NPR.
