Missy Cummings shows risks of unfettered AI in Voices of Discovery lecture
The director of George Mason University’s Mason Responsible AI Program and the Mason Autonomy and Robotics Center explained why AI must be refined and overseen by human reasoning during her lecture at Elon.
Robots gone rogue. Medical applications of artificial intelligence that misdiagnose. Self-driving cars that mistakenly brake, or those that don’t brake at all, and cause crashes.
In example after example during her Voices of Discovery lecture, Missy Cummings explained why AI can’t be trusted to perform safety-critical tasks. The reason is simple: Generative AI has no reasoning.
“Humans are really good at inductive and abstract thinking. AI is good at lower-level decision-making as long as they have the right data fed into them, but in no way, shape or form, should anything in generative AI be let loose without a human babysitter,” said Cummings, director of the Mason Responsible AI Program and the Mason Autonomy and Robotics Center at George Mason University.
Cummings’ research emphasizes human-AI collaboration, and in addition to teaching, she regularly works with corporate partners to instill principles of responsible AI use, mitigating the risks that arise when algorithms are left to their own devices. She delivered “The Promises and Perils of AI,” the second Voices of Discovery lecture of the 2024-25 academic year, on Monday, Nov. 11, at Lakeside Meeting Rooms.
She began researching artificial intelligence and human-machine interaction after flying F-18s as a naval officer and military pilot. Cockpit automation at the time was so advanced that pilots couldn’t understand it during flight, a mismatch that led to the deaths of numerous pilots she knew.
She explained two types of AI. Deterministic AI is based on simple rules. Nondeterministic AI — generative, large language models — is built on millions of data sets and uses statistical weighting to sift through millions of variables. “Every time you run them, you could get a different answer and that is a problem,” Cummings said. She described generative models as “psychopaths,” “exceedingly confident about what they don’t know,” and biased toward overestimating their own abilities.
“Nothing you ever read out of a large-language model can be trusted at all,” she said.
Because AI is only as accurate as the data it’s fed, it’s up to humans to think through critical variables and give generative AI enough data to make accurate decisions in safety-critical applications such as driving, flight, medicine and the military. She spent much of the lecture on the risks of self-driving cars: they may not have been trained to recognize a stop sign obscured by snow or leaves, or to distinguish an articulated bus from a regular city bus, and engineers may not understand how they react to various environmental conditions.
She sees great risk in allowing AI to create medical patient summaries, a practice some providers are moving toward and one she warns against. Generative AI tends to look for averages, turning a patient with individual conditions into a person with the average of human conditions.
“The danger zone: You really push into the red when you have a nondeterministic system and high safety-criticality,” she said. “AI is a tool, not a silver bullet. That’s what some companies want it to be, but it’s not.”
But she also sees potential for humans to harness AI for dangerous jobs, like mining or loading ships, as long as the systems remain under human supervision.
“We need to make sure computer scientists and engineers are working together, but we also need strong liberal arts in this area. Workforce development and AI risk management. What is responsible AI? How do we govern AI? What does it mean to have AI and liability, and how are we going to legislate that in the future?” Cummings said. “Elon is perfectly primed — there are visionaries on campus thinking the right way: AI is a tool in a toolbox, and it’s important for everyone to understand what those tools are capable of.”
Voices of Discovery brings preeminent scientists and mathematicians to Elon University to share their experiences and perspectives with students and the community. Sponsored by Elon College, the College of Arts and Sciences, the annual speaker series is fundamental to creating a science-conscious community and to developing students as informed, critically thinking citizens.
The final Voices of Discovery speaker of the 2024-25 academic year will be Kate Brauman, deputy director of the Global Water Security Center at the University of Alabama, on March 10. Her lecture will be “Water Security: Making Sense of Global Trends in Water Availability and Water Use.”