Finance organizations have always been in attackers' crosshairs, but the reasons go well beyond money, and AI is making the tactics sharper and faster to deploy. In our recent webinar, "Executive impersonation in Finance: Exploring modern phishing tactics," Alex Orleans (Head of Threat Intelligence, Sublime) sat down with Andrew Becherer (CISO, Sublime) to unpack the threat landscape facing financial services, from nation-state motivations to AI-powered phishing.
Finance is a target for reasons beyond money
Criminals want money, but state-sponsored actors have more sophisticated interests. Financial institutions sit at the intersection of enormous data flows, market intelligence, sanctions policy, cross-border transactions, and political financing. That makes them prime targets for financially motivated, intelligence-gathering, and disruptive attacks.
North Korea blurs the criminal/state line by targeting crypto assets and fintech companies for direct revenue generation. Russian intelligence persistently monitors fund flows tied to EU politics and sanctions regimes. Iran has historically launched disruptive campaigns against U.S. financial institutions as geopolitical retaliation, most notably Operation Ababil in 2012, a wave of DDoS attacks on banking infrastructure triggered by nuclear enrichment tensions. As Alex noted, "whenever something big happens in the world, it's going to affect money."
Two less-obvious vectors also came up. First, insurance companies and sanctions-focused institutions can find themselves targeted during regional conflicts, as Iran demonstrated when it went after firms connected to tanker attacks in the Gulf. Second, remote employment fraud (fake personas using AI to land "no-show" jobs inside financial or fintech firms) gives bad actors both a paycheck and potential insider access: a dual-purpose nightmare most security teams aren't fully modeling.
Financial institutions are targeted from every angle, but one of the most common attack vectors is email. To increase their odds of success, adversaries have favored executive impersonation in their attacks (a fact echoed in our 2026 Email Threat Report).
How executive impersonation actually works
Impersonation attacks break into categories like executive, employee, vendor, and brand impersonation. All of them exploit trust, but the most effective version in finance is executive impersonation used in "skip-level" attacks: impersonating a boss's boss, which creates a level of urgency that a request from a direct manager wouldn't.
Finance culture amplifies this. Leadership responsiveness is baked in, deals are time-sensitive, and employees are conditioned to act fast when a senior exec reaches out. Combine that with hierarchical norms (you don't push back on someone three levels above you) and attackers have a reliable psychological exploit that requires minimal technical sophistication.
AI is making executive impersonation scalable
Historically, pulling off a convincing impersonation was labor-intensive. Building an org chart from LinkedIn, matching communication styles, and knowing who reports to whom took hours of manual recon. Not anymore.
Agentic AI tools can now ingest public data from professional networks, social media, conference talks, press releases, and corporate comms, then spit out convincing org charts and capture an executive's linguistic patterns. The result is highly personalized lures at scale.
The attack pattern is increasingly multi-stage. A first email arrives that looks authentic but contains nothing malicious, just enough to establish a reply thread. Once the target responds, detection systems register a trusted conversation, and then the malicious payload arrives in a follow-up.
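The multi-stage pattern above can be sketched as a toy detection heuristic. This is purely illustrative (the `Message` and `ThreadState` types, thresholds, and verdict strings are invented for this sketch, not Sublime's actual detection logic): a sender with little or no benign history who delivers a payload early in a reply thread is treated as the riskiest case, precisely because the "trusted conversation" signal was just established.

```python
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    is_reply: bool     # part of an existing thread?
    has_payload: bool  # link or attachment present?

@dataclass
class ThreadState:
    # hypothetical per-sender history kept by the defender
    benign_messages: int = 0

def score_message(msg: Message, history: dict[str, ThreadState]) -> str:
    """Toy heuristic: a first-contact sender who opens with a clean email,
    then sends a payload once a reply thread exists, is the riskiest case."""
    state = history.setdefault(msg.sender, ThreadState())
    if msg.has_payload and msg.is_reply and state.benign_messages <= 2:
        verdict = "flag: payload arrived early in a newly established thread"
    elif msg.has_payload and state.benign_messages == 0:
        verdict = "flag: payload from first-contact sender"
    else:
        verdict = "allow"
    if not msg.has_payload:
        # Benign messages accrue "trust" -- exactly what the attacker exploits
        state.benign_messages += 1
    return verdict

history: dict[str, ThreadState] = {}
# Stage 1: clean opener establishes the thread and passes inspection
score_message(Message("exec-lookalike@example.com", False, False), history)
# Stage 2: the follow-up carries the payload and gets flagged
score_message(Message("exec-lookalike@example.com", True, True), history)
```

The point of the sketch is the state: a naive filter that scores each message in isolation sees only a reply from a known correspondent, which is why the two-stage pattern works against conversation-trust heuristics.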
"You can override the security instincts, you can override the scrutiny by relying on trust, by capitalizing on urgency. And then you're more likely to get that click."
– Alex Orleans, Head of Threat Intelligence, Sublime
What defenders should do
The guidance from Alex was refreshingly unsensational: eat your vegetables. Strong MFA, patching hygiene, proper identity management, phishing-aware training, the boring stuff blocks most attacks before they escalate. Attackers are often stymied not by cutting-edge defenses, but by organizations that simply close the obvious gaps.
For impersonation specifically: make sure employees have out-of-band verification channels. If a message arrives from an executive via email, they should be able to confirm via Slack or text, and feel empowered to do so without it seeming like insubordination.
State actors, in particular, don't give up easily. As Alex put it, "once you're seeing that level of targeting, it's probably not going to stop." That needs to be built into the threat model.
There's more in the full webinar
The financial threat landscape was the centerpiece, but the conversation ranged into territory worth watching the recording for. The panel dug into:
- Anthropic's "Project Glasswing" and the Mythos model, an AI deemed too dangerous to release publicly, currently being used to proactively find vulnerabilities in critical open-source infrastructure. What does this mean for the near-term threat environment?
- AI vs. AI: who wins long-term? Alex and Andrew debate whether LLMs and agentic tools ultimately favor attackers or defenders, and their short-term vs. long-term answers diverge in interesting ways.
- Deepfake video impersonation, simulated Zoom environments, FaceTime fakes, and whether technical watermarking can even help when attacks arrive on personal devices.
Watch the webinar to get the full story from Andrew and Alex.