Trusting AI to Do the Hard Stuff: Think, Reflect, Follow Rules

(Photo by Amy Manley)

As a pioneer in neuro-symbolic AI, Paulo Shakarian conducts research that applies across national security, cybersecurity and critical decision-making systems.
By Diane Stirling | Nov. 19, 2025

Artificial intelligence is no longer theoretical. It diagnoses diseases, guides self-driving vehicles, screens for airport security threats and helps make military decisions, in addition to performing dozens of everyday tasks. But can we trust AI to get such critical decisions right, and trust that it is acting in our best interests?

Shakarian directs the Leibniz Lab in the Department of Electrical Engineering and Computer Science and is also a course instructor. (Photo by Amy Manley)

Paulo Shakarian, the inaugural K.G. Tan Endowed Professor of Artificial Intelligence in the College of Engineering and Computer Science (ECS), has built his career on answering those questions. He’s recognized internationally as a pioneer in neuro-symbolic AI, an approach that combines the pattern-recognition power of machine learning with the logical reasoning of traditional AI. His work has practical applications across national security, cybersecurity and critical decision-making systems.

Shakarian came to the University this fall from Arizona State University, where he served as research director for the School of Computing and AI. At Syracuse, he directs the Leibniz Lab, a research lab in the Department of Electrical Engineering and Computer Science dedicated to unifying ideas of reasoning and learning in AI. He is also a course instructor.

We sat down with Shakarian to get his take on some of our most pressing questions about AI.

Q: How does metacognitive AI—AI that reflects on its own thinking and decisions—make AI more helpful and trustworthy?

A: Metacognitive AI allows AI systems to consider their own potential mistakes and correct them, similar to how someone might catch themselves making an error and fix it. In humans, metacognition controls mental resources like memory and effort. Our vision at the Leibniz Lab is to create metacognitive structures that give users insights into potential mistakes and allow AI frameworks to regulate their use of energy and computing power.
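
One way to picture that loop in code is a wrapper that generates an answer, has a separate critic score the answer, and either retries within a fixed reflection budget or flags the result for the user. The sketch below is purely illustrative and is not the Leibniz Lab's framework; the model, critic, threshold and retry budget are stand-ins invented for the example.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Verdict:
        answer: str
        confidence: float   # critic's estimate that the answer is sound (0 to 1)
        flagged: bool       # True if the wrapper never reached the threshold

    def metacognitive_answer(
        question: str,
        model: Callable[[str], str],          # produces a candidate answer
        critic: Callable[[str, str], float],  # scores how trustworthy the answer looks
        threshold: float = 0.8,
        max_retries: int = 2,                 # budget on extra "reflection" compute
    ) -> Verdict:
        answer = model(question)
        score = critic(question, answer)
        for _ in range(max_retries):
            if score >= threshold:
                break
            # Reflection step: ask the model to reconsider its own output.
            answer = model(f"Re-examine and correct this answer.\nQ: {question}\nA: {answer}")
            score = critic(question, answer)
        # If doubt remains after the budget is spent, surface it instead of hiding it.
        return Verdict(answer, score, flagged=score < threshold)

The retry cap is the "regulate energy and computing power" part of the idea: reflection costs extra compute, so the wrapper spends only a bounded amount of it before handing the doubt back to the user.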

Q: Your research challenges the reliance on deep learning and pattern recognition in AI. Why is that approach problematic?

A: Deep learning has provided significant advances but has a fundamental limitation: It finds statistical patterns and produces “average” answers based on the data it has seen before. That approach becomes problematic when a system encounters situations it hasn’t previously faced. This is why AI can sometimes generate “hallucinations”—responses that are statistically likely but that don’t conform to our mental models of the world.
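
A toy example makes the point. The “model” below only memorizes the pattern it was shown; ask it about something far outside that data and it still answers, with no signal that it is guessing. (This is an illustration of the general limitation, not an example from Shakarian's research.)

    # Trained on y = 2x for x between 0 and 3.
    training = {0.0: 0.0, 1.0: 2.0, 2.0: 4.0, 3.0: 6.0}

    def predict(x: float) -> float:
        nearest = min(training, key=lambda seen: abs(seen - x))  # closest example seen before
        return training[nearest]                                  # echo its label

    print(predict(2.1))    # 4.0, close to the training data and roughly right
    print(predict(100.0))  # 6.0, far from the data yet returned just as confidently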

Q: Why is it important to combine rule-based AI with pattern-recognition AI?

A: Machine learning is great at learning patterns in data, but it isn’t capable of precise reasoning. Rule-based systems can follow logical steps or mathematical rules perfectly, but it’s often challenging to obtain rules from data. These approaches are clearly complementary, and by combining them we can get AI that both learns from data and reasons precisely.
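
A small, hypothetical sketch shows that division of labor: a learned component recognizes what is in front of it, and explicit rules reason over its output. The function names and the driving scenario are invented for illustration, not taken from any particular neuro-symbolic system.

    def perceive(image_id: str) -> dict:
        # Stand-in for a neural network that recognizes patterns in an image;
        # detections are hard-coded here only to keep the sketch runnable.
        detections = {
            "frame_1": {"red_light": True, "pedestrian_near": False},
            "frame_2": {"red_light": False, "pedestrian_near": True},
        }
        return detections[image_id]

    def decide(facts: dict) -> str:
        # Symbolic side: explicit, human-readable rules applied to the model's output.
        if facts["red_light"]:        # rule 1: always stop at a red light
            return "stop"
        if facts["pedestrian_near"]:  # rule 2: yield when a pedestrian is detected nearby
            return "yield"
        return "proceed"

    for frame in ("frame_1", "frame_2"):
        print(frame, "->", decide(perceive(frame)))   # stop, then yield

The learned part handles the messy perception that no one could hand-code; the rules guarantee that safety-critical logic is followed exactly, every time.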

Q: How does metacognitive AI address the biggest overlooked security risks in AI systems?

A: Generative AI technology greatly expands the “attack surface” in systems—the number of ways an adversary can attempt to launch an attack. Traditional software has limited ways users can interact with it (specific buttons and menus that are easier to secure). But AI chatbots accept any text input. That can create countless opportunities for attackers to craft malicious prompts to try to manipulate the system. Security teams can’t anticipate every possible attack. With metacognitive AI, instead of trying to block attacks upfront, the AI can monitor itself, determine when its behavior is outside the norm and flag suspicious activity.
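
As a rough illustration of that idea (not a production defense, and not Shakarian's system), a chatbot wrapper could keep a running baseline of its own behavior and flag any response that falls far outside it. Response length stands in here for the much richer behavioral signals a real monitor would track.

    import statistics
    from typing import Callable

    class SelfMonitoringChatbot:
        """Wraps a text model and flags replies that drift far from its own baseline."""

        def __init__(self, model: Callable[[str], str], z_threshold: float = 3.0):
            self.model = model
            self.z_threshold = z_threshold
            self.history: list[float] = []   # toy behavior signal: lengths of past replies

        def respond(self, prompt: str) -> tuple[str, bool]:
            reply = self.model(prompt)
            signal = float(len(reply))
            flagged = False
            if len(self.history) >= 30:      # wait for enough data to estimate a baseline
                mean = statistics.fmean(self.history)
                spread = statistics.pstdev(self.history) or 1.0
                flagged = abs(signal - mean) / spread > self.z_threshold
            self.history.append(signal)
            return reply, flagged            # caller can quarantine or review flagged replies

The design choice matters more than the details: rather than enumerating bad prompts in advance, the system watches its own outputs and raises a hand when something looks abnormal.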