MIT robotics professor and iRobot co-founder Rodney Brooks thinks artificial intelligence (AI) is impressive technology.
But it’s not as impressive as many of its proponents have claimed, Brooks told TechCrunch in an interview published Saturday (June 29).
“I’m not saying LLMs aren’t important, but we need to be careful about how we evaluate them,” Brooks said, referring to large language models like OpenAI’s ChatGPT.
The problem with generative AI, he added, is that while it can perform certain tasks competently, it can’t do everything a human can, and people tend to overestimate its capabilities.
“When a human sees an AI system perform a task, they immediately generalize it to things that are similar and make an estimate of the competence of the AI system; not just the performance on that task, but the competence around that,” Brooks said. “And they’re typically way too optimistic, and that’s because they use a model of a person’s performance on a task.”
This phenomenon was highlighted here last week after a Bloomberg News report that some users are exchanging high volumes of messages with AI chatbots and, in some cases, attributing human qualities to them.
“One of the ethical concerns is that although users may feel listened to, understood and loved, this emotional attachment can in fact exacerbate their isolation,” Giada Pistilli, principal ethicist at AI startup Hugging Face, said in the report.
Beyond these ethical concerns, Brooks said that trying to assign human capabilities to AI is a mistake, because it leads people to want to use the technology for things that don’t make sense. For example, Brooks founded Robust.AI, a company that makes robotic warehouse systems. Someone recently suggested adding a large language model (LLM) to its robots, but Brooks believes that would slow things down.
“When you get 10,000 orders that you have to ship in two hours, you have to optimize for that. Language is not going to help; it’s just going to slow things down,” he said. “We have massive data processing and AI optimization techniques and planning. And that’s how we fulfill orders quickly.”
Meanwhile, PYMNTS recently spoke with experts about efforts to train AI to recognize humor. A number of strategies have emerged on this front, Pedro Domingos, professor emeritus of computer science at the University of Washington, told PYMNTS.
“Fine-tune the models on collections of jokes, cartoons, humorous essays and books, etc., available on the web. Explain to the models what is funny and appropriate versus what is not, prodding them in various ways until they produce something to our liking. Train models to produce funnier, more appropriate humor by having humans rate their output accordingly,” he said.
He cautioned, however: “None of this guarantees success, and humor remains one of the hardest things for AI models to achieve.”
For all PYMNTS AI coverage, subscribe to the daily AI Newsletter.