I studied faces to understand people. That’s why AI doesn’t scare me.

Years ago, long before AI became the gravitational center of every cybersecurity conversation (prove me fucking wrong), I spent time studying behavioral patterns and facial microexpressions using Ekman’s work. Not because I wanted to “read minds” or spot lies like a TV detective, but because I wanted to understand how humans leak intent, often without realizing it. The way tension shows up in posture, the way uncertainty alters rhythm, the way confidence can be performed but rarely sustained.

That training didn’t turn me into a human polygraph, and it was never meant to.
What it gave me was something far more useful: sensitivity to incongruence. To the small gaps between what is said and what is meant, between what is shown and what is felt.

What I’ve felt, what I’ve known
Never shined through in what I’ve shown
Never be, never see
Won’t see what might have been

Ironically, that skill has become even more valuable in the age of AI.

We are obsessing over AI and we are neglecting people

Right now, the industry is focusing 150% of its attention on artificial intelligence: AI-generated phishing, AI-written malware, AI-powered deepfakes, AI identities pretending to be human. It’s understandable, right? Technology is powerful, fast, and genuinely disruptive.
But there’s a blind spot forming. And it’s stunningly huge.

AI may generate the message, the voice, the image, the workflow, but humans are still the ones driving intent: humans decide when to attack, when to escalate, when to push urgency, when to retreat. And humans are remarkably bad at hiding themselves completely.
There are always traces to find; today, no one can truly hide from the world.

AI identities are “only” machine-driven: they are consistent, efficient, statistically coherent.
What they are not is passion-driven. They don’t hesitate in the same way, they don’t leak frustration, excitement, impatience, or fear unless someone tells them to simulate it. And simulation always leaves seams.

Ash: “You still don’t understand what you’re dealing with, do you? The perfect organism. Its structural perfection is matched only by its hostility.”
Lambert: “You admire it.”
Ash: “I admire its purity. A survivor… unclouded by conscience, remorse, or delusions of morality.”

This is where social engineering is misunderstood today!
Too often, it’s framed as a purely technical problem: detect the fake email, block the synthetic voice, classify the generated content.
That’s necessary, but insufficient.
Social engineering has always been about understanding people under pressure.

AI can fake output.
It can’t fake intent.

One of the uncomfortable truths of this field is that AI is very good at producing surface-level authenticity. Grammar, tone, cadence, even emotional cues can be mimicked convincingly. But beneath that surface, AI-driven identities lack something fundamental: lived context.
Humans carry history into interactions, they carry mood, they carry stakes, they carry inconsistency, they – well, we! We carry frustration and love and pain and tears and so many other emotions no machine will ever be able to replicate.

When you’ve trained yourself to observe people (really observe them) you start noticing what doesn’t belong:

  • timing that’s too perfect
  • a response that advances the conversation too cleanly
  • an emotional cue that appears without buildup
  • a sense of urgency that isn’t grounded in shared experience

These are not things AI detects well, because they’re not rules. They’re patterns of being human.

Man fears the darkness, and so he scrapes away at the edges of it with fire.

And here’s the paradox: the more we automate identity verification, the more valuable human judgment becomes.

AI identities operate on processes while humans operate on motivations.
That difference is subtle, but it can be exploited defensively.

In my work, this skill doesn’t just help me detect deception.
It shapes how I design simulations, how I assess risk, and how I think about recovery, because recovery isn’t only about restoring systems. It’s about restoring confidence: the ability for people to trust their own perception again.

If we teach people that AI is unstoppable and inscrutable, we weaken them. If we teach them how humans reveal themselves even through AI-mediated interactions, we strengthen them.

The human factor isn’t a weakness

It’s leverage, don’t you think?
I mean, there’s a narrative in cybersecurity that humans are the weakest link. I’ve never agreed with that framing. Humans are variable, yes. Emotional, yes. Inconsistent, absolutely – also dickheads, but that’s part of the game.

But variability is not weakness. It’s a signal.
Machines are predictable by design, humans are not. And that unpredictability (when understood rather than suppressed) becomes defensive leverage.
More than that: understanding people has added value not just to my job, but to everything else in my life, because it sharpens conversations and improves judgment. It makes manipulation easier to spot and harder to execute.

AI will keep getting better, there’s no doubt about that. But the idea that it replaces human understanding is a category error.
Identity, at its core, is still human. And social engineering, even in the age of AI, is still about people.
That’s why I don’t fear AI-driven identities, I just pay attention to the humans behind them.
