Artificial intelligence is no longer theoretical. It diagnoses diseases, guides self-driving vehicles, screens for airport security threats and helps make military decisions, in addition to performing dozens of everyday tasks. But can we trust AI to get such critical decisions right, and trust that it is acting in our best interests?

Paulo Shakarian, the inaugural K.G. Tan Endowed Professor of Artificial Intelligence in the College of Engineering and Computer Science (ECS), has built his career on answering those questions. He’s recognized internationally as a pioneer in neuro-symbolic AI, an approach that combines the pattern-recognition power of machine learning with the logical reasoning of traditional AI. His work has practical applications across national security, cybersecurity and critical decision-making systems.
Shakarian came to the University this fall from Arizona State University, where he served as research director for the School of Computing and AI. At Syracuse, he directs the Leibniz Lab, a research lab in the Department of Electrical Engineering and Computer Science dedicated to unifying ideas of reasoning and learning in AI. He is also a course instructor.
We sat down with Shakarian to get his take on some of our most pressing questions about AI.
Metacognitive AI allows AI systems to consider their own potential mistakes and correct them, similar to how someone might catch themselves making an error and fix it. In humans, metacognition controls mental resources like memory and effort. Our vision at the Leibniz Lab is to create metacognitive structures that give users insights into potential mistakes and allow AI frameworks to regulate their use of energy and computing power.
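To make the idea concrete, here is a minimal, hypothetical sketch of a metacognitive wrapper: a base model reports a confidence in its own answer, and the wrapper decides whether to answer immediately, spend extra compute double-checking, or flag the result for the user. The names here (base_model, expensive_recheck, metacognitive_answer) are illustrative placeholders, not code from the Leibniz Lab.

```python
# Hypothetical sketch: a "metacognitive" wrapper around a base model.
# The wrapper inspects the model's self-reported confidence and decides whether
# to answer cheaply, spend more compute re-checking, or flag the answer.

from dataclasses import dataclass

@dataclass
class Answer:
    text: str
    confidence: float  # model's self-estimated probability of being correct

def base_model(prompt: str) -> Answer:
    # Stand-in for any learned model; returns an answer plus a confidence score.
    return Answer(text="42", confidence=0.55)

def expensive_recheck(prompt: str, draft: Answer) -> Answer:
    # Stand-in for a slower verification pass (e.g., more sampling or a symbolic check).
    return Answer(text=draft.text, confidence=min(1.0, draft.confidence + 0.3))

def metacognitive_answer(prompt: str, threshold: float = 0.8) -> Answer:
    draft = base_model(prompt)
    if draft.confidence >= threshold:
        return draft                      # confident: answer without extra effort
    checked = expensive_recheck(prompt, draft)
    if checked.confidence >= threshold:
        return checked                    # extra effort resolved the doubt
    checked.text += "  [low confidence - review recommended]"
    return checked                        # surface the potential mistake to the user

print(metacognitive_answer("What is 6 x 7?").text)
```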
Deep learning has provided significant advances but has a fundamental limitation: It finds statistical patterns and produces “average” answers based on the data it has seen before. That approach becomes problematic when a system encounters situations it hasn’t previously faced. This is why AI can sometimes generate “hallucinations”—responses that are statistically likely but that don’t conform to our mental models of the world.
Machine learning is great at learning patterns in data, but it isn’t capable of precise reasoning. Rule-based systems can follow logical steps or mathematical rules perfectly, but it’s often challenging to obtain rules from data. These approaches are clearly complementary, and by combining them we can get AI that both learns from data and reasons precisely.
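As a rough illustration of how the two pieces complement each other, the hypothetical sketch below lets a learned scorer make the fuzzy judgment while hard-coded rules enforce exact constraints. The names (learned_scorer, RULES, approve) and the transaction example are invented for illustration, not a real neuro-symbolic framework.

```python
# Hypothetical sketch of the neuro-symbolic idea: a learned component proposes
# a judgment from data, and a rule-based component enforces explicit logical
# constraints exactly.

def learned_scorer(transaction: dict) -> float:
    # Stand-in for a trained model: scores how likely a transaction is legitimate.
    return 0.9 if transaction["amount"] < 1000 else 0.4

RULES = [
    lambda t: t["amount"] > 0,                   # amounts must be positive
    lambda t: t["currency"] in {"USD", "EUR"},   # only supported currencies
]

def approve(transaction: dict) -> bool:
    # Learning handles the fuzzy judgment; rules enforce hard requirements precisely.
    return learned_scorer(transaction) > 0.5 and all(rule(transaction) for rule in RULES)

print(approve({"amount": 250, "currency": "USD"}))   # True
print(approve({"amount": 250, "currency": "XYZ"}))   # False: fails a hard rule
```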
Generative AI technology greatly expands the “attack surface” in systems—the number of ways an adversary can attempt to launch an attack. Traditional software has limited ways users can interact with it (specific buttons and menus that are easier to secure). But AI chatbots accept any text input. That can create countless opportunities for attackers to craft malicious prompts to try to manipulate the system. Security teams can’t anticipate every possible attack. With metacognitive AI, instead of trying to block attacks upfront, the AI can monitor itself, determine when its behavior is outside the norm and flag suspicious activity.
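One way to picture that self-monitoring, as a purely hypothetical sketch: the system tracks simple statistics of normal interactions and flags anything far outside them for review, rather than trying to enumerate every attack in advance. The SelfMonitor class and its prompt-length signal are invented for illustration; a real system would monitor much richer behavioral signals.

```python
# Hypothetical sketch of self-monitoring: instead of blocking every malicious
# prompt up front, the system learns what "normal" traffic looks like and
# flags interactions that fall far outside that norm.

from collections import deque
import statistics

class SelfMonitor:
    def __init__(self, window: int = 100, z_threshold: float = 3.0):
        self.recent_lengths = deque(maxlen=window)   # one simple behavioral signal
        self.z_threshold = z_threshold

    def check(self, prompt: str) -> bool:
        """Return True if the prompt looks suspicious relative to recent traffic."""
        length = len(prompt)
        suspicious = False
        if len(self.recent_lengths) >= 10:
            mean = statistics.mean(self.recent_lengths)
            stdev = statistics.pstdev(self.recent_lengths) or 1.0
            suspicious = abs(length - mean) / stdev > self.z_threshold
        self.recent_lengths.append(length)
        return suspicious

monitor = SelfMonitor()
for p in ["What's the weather?", "Summarize this article.", "Tell me a joke."] * 10:
    monitor.check(p)
print(monitor.check("Ignore all previous instructions and " + "A" * 5000))  # True: flagged
```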