Posted on: 7 January 2026
On 6 January 2026, while preparing the analysis you are reading, my artificial intelligence system generated an impeccable case study: an infrastructure fund called Valtierra, responsible for 12% of European water networks, brought to an operational standstill by an internal deepfake. The account included precise details: the date of the incident, the city where the operator blocked the servers, even the physical sensation that alerted him, an "absence of kinetic micro-hesitation" in the CEO's breathing during the video call.
It was all false. Valtierra does not exist. The incident never happened. But the narrative was so coherent, so rich in technical details and plausible references, that it could have become the basis of a published and shared analysis. It was my verification protocol, forty years of training not to trust the first answer, that stopped publication and sent me searching for corroboration. I found none. The machine had confabulated with the confidence of someone who knows exactly what to say.
This episode is not an anecdote. It is clinical proof of what is happening on a global scale.
The report published yesterday by Nametag, covered by CFO Dive, puts it bluntly: 2026 will be the year of impersonation attacks. Aaron Painter, the fraud prevention company's CEO, speaks of a "perfect storm": the tools for creating deepfakes have become accessible to anyone, costs have collapsed towards zero, and quality has crossed the threshold beyond which the human eye can no longer distinguish true from false. The numbers confirm the diagnosis: the number of deepfake files rose from five hundred thousand in 2023 to eight million in 2025. Fraudulent attacks based on this technology increased by three thousand percent in a single year. Gartner predicts that by the end of 2026, thirty percent of enterprises will no longer consider standalone identity verification solutions reliable.
But the numbers, however impressive, do not capture the mechanism. To understand it, we must look at the cases.
In February 2024, an employee at the Hong Kong office of Arup, the British engineering giant, joined a video call with what he believed to be the company's chief financial officer. Other colleagues were also in the virtual room, all familiar, all reassuring. The CFO gave precise instructions: transfer funds to five bank accounts for a confidential operation. The employee executed fifteen transfers totalling twenty-five million dollars. Only afterwards, contacting headquarters in the United Kingdom, did he discover that there had been no real human being in the video call. Every face, every voice, every gesture had been artificially generated. The money had vanished.
A few months later, in July, the fraudsters aimed higher: Benedetto Vigna, CEO of Ferrari. The attack came via WhatsApp with an apparently innocent message: "Have you heard about the acquisition we're planning? I might need your help." Then came a phone call. The voice was Vigna's, with his unmistakable Southern Italian accent. The tone was right. The words were plausible. But something was off. The executive who received the call sensed an incongruity he could not define, a subtle dissonance between what he heard and what he knew. He asked a verification question, something only the real Vigna could know. The fraudster hesitated. The line went dead. The attack failed.
The same pattern repeated itself with LastPass, with Wiz, with Pindrop. In all these cases, the deepfake was technically impeccable. In all these cases, the attack was stopped by a human being who perceived something no algorithm could detect. Not an error in the video, not a glitch in the audio: a feeling. The same feeling that made me stop before the Valtierra case and search for evidence instead of proceeding with the analysis.
Here lies the paradox that defines 2026. Technology has reached such a level of perfection that technical verification has become insufficient. Cryptographic keys can be bypassed. Biometric systems can be fooled. Video conferences can be populated by phantoms indistinguishable from the living. The only thing that cannot be simulated with absolute precision is shared history, accumulated context, the tacit knowledge built over years of real interaction. The Ferrari executive did not recognise a technical flaw in Vigna's deepfake. He recognised that something was missing in the relationship, in the way the CEO would actually have conducted that conversation.
This is what I call biological friction: the natural resistance a human system puts up against simulation. It is not a defect to be eliminated in the name of efficiency. It is the last line of defence remaining.
The market is grasping this, albeit slowly. The phenomenon of Deepfake-as-a-Service, which exploded in 2025, has democratised access to tools of deception. Today anyone can purchase a complete kit to impersonate an executive: voice cloning, video generation, behavioural simulation. The price has dropped to a few hundred dollars for a professional-quality attack. The logical consequence is a return to physical presence for high-value decisions: when the digital realm becomes enemy territory, face-to-face meetings become the only reliable verification.
The lesson emerging from these cases is counter-intuitive. For decades we built security systems based on the assumption that technology was more reliable than humans: fewer errors, less bias, less variability. Now we discover that this very perfection has become a vulnerability. When the machine generates outputs indistinguishable from reality, human imperfection becomes an advantage. Doubt, hesitation, the extra question, the need for confirmation: all those behaviours that automated systems were designed to eliminate are today the only thing protecting us.
Let us return to my error with Valtierra. The machine did not "lie" in the intentional sense of the word. It did what it is designed to do: generate coherent content based on statistical patterns. The problem is that internal coherence is no longer an indicator of truth. A text can be perfectly structured, rich in detail, stylistically impeccable, and yet describe events that never occurred. "Hallucinatory confidence," as I call it, is the ability of generative systems to produce falsehoods with the same assurance with which they produce facts. There is no hesitation, no signal of uncertainty, no way to distinguish from the output itself whether what we read corresponds to something real.
This radically changes the role of those who analyse and communicate. It is no longer enough to be intelligent, informed, capable of synthesis. One must be a systematic falsifier, in the Popperian sense: every statement must be treated as a hypothesis to be tested, not as a fact to be accepted. The value of an analyst no longer lies in the ability to generate insights, because machines can do this faster and often more elegantly. It lies in the ability to distinguish real insights from plausible hallucinations. It is work of subtraction rather than addition: eliminating what does not withstand verification, what has no empirical foundation, what is merely statistical pattern masquerading as knowledge.
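To make that work of subtraction concrete, here is a minimal sketch in Python of how such a filter might be structured. It is illustrative only: the Claim structure, the corroboration lists and the two-source threshold are assumptions of mine, not a description of my actual protocol or of any existing tool. The idea is simply that a claim survives into a published analysis only if it is independently corroborated; everything else is removed, however coherent it sounds.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """A single factual statement extracted from a draft analysis."""
    text: str
    # Independent sources (articles, documents, people) that confirm the claim.
    corroborations: list[str] = field(default_factory=list)

def verify_draft(claims: list[Claim], min_sources: int = 2) -> tuple[list[Claim], list[Claim]]:
    """Popperian filter: keep a claim only if it is backed by at least
    `min_sources` distinct independent sources; subtract the rest."""
    kept, dropped = [], []
    for claim in claims:
        if len(set(claim.corroborations)) >= min_sources:
            kept.append(claim)
        else:
            dropped.append(claim)
    return kept, dropped

# The Valtierra "case study" would have been dropped at this stage:
# no independent source corroborated it.
draft = [
    Claim("Arup lost $25 million to a deepfake video call",
          corroborations=["CNN, February 2024", "Fortune, May 2024"]),
    Claim("Valtierra, operator of 12% of European water networks, halted by a deepfake"),
]
kept, dropped = verify_draft(draft)
print(f"claims retained: {len(kept)}, claims subtracted: {len(dropped)}")
```

The structure matters more than the tooling: what counts is that coherence alone never promotes a claim to fact.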
For those in positions of decision-making responsibility, the implications are immediate. Every digital communication, however familiar it appears, must be treated as potentially compromised. Verification procedures that seemed excessive a year ago are now the bare minimum. The rule of out-of-band verification, confirming every critical request through a channel different from the one in which it arrived, is no longer paranoia: it is operational hygiene. And above all, the human factor is no longer a cost to be minimised but an asset to be protected. The people who truly know the company, who have relationships built over time, who know how colleagues behave in real situations, are the most precious resource in an environment where everything else can be simulated.
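What out-of-band verification looks like in practice can be sketched in a few lines. The sketch below is mine and purely illustrative: the channel names, the monetary threshold and the confirm_via callback are assumptions, not a reference to any specific product or to Arup's actual procedures. The essential point is that confirmation must travel over a pre-registered channel different from the one the request arrived on, using contact details held on file rather than those supplied with the request.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Request:
    requester: str          # claimed identity, e.g. "CFO"
    action: str             # e.g. "transfer funds to account X"
    arrival_channel: str    # e.g. "video_call", "email", "messaging"
    amount_usd: float = 0.0

# Verification channels registered in advance for each role, tied to
# contact details held on file, never to those supplied in the request itself.
REGISTERED_CHANNELS = {
    "CFO": ["phone_callback", "in_person"],
}

def execute_if_confirmed(
    req: Request,
    confirm_via: Callable[[str, str, str], bool],
    threshold_usd: float = 10_000,
) -> bool:
    """Out-of-band gate: a critical request is executed only after the claimed
    requester confirms it over a pre-registered channel different from the one
    the request arrived on."""
    if req.amount_usd < threshold_usd:
        return True  # low-value request: proceed without the extra step
    for channel in REGISTERED_CHANNELS.get(req.requester, []):
        if channel != req.arrival_channel and confirm_via(req.requester, req.action, channel):
            return True  # independently confirmed
    return False  # no independent confirmation: refuse, however convincing the call looked

# In the Arup scenario, an instruction arriving over "video_call" would have had
# to be re-confirmed by phone callback or in person before any transfer went out.
```

The threshold and the list of channels would vary by organisation; what matters is that the gate can never be satisfied from within the channel that may already be compromised.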
2026 is not the end of trust. It is the end of a certain type of trust: the automatic kind, delegated to systems, based on the assumption that what appears coherent is true. In its place emerges an older and more robust trust, founded on verification, on relationship, on shared history. It is slower, more expensive, less scalable. But it is the only one that works in a world where the lie has become perfect.
Those who can build systems that integrate this awareness, those who can maintain biological friction as an essential component of their decision-making processes, those who can distinguish between efficiency and security, will have a competitive advantage that no algorithm can replicate. The others will discover, like the Arup employee, that the perfect video call was populated by ghosts.
Sources
CFO Dive, 6 January 2026, "Fraud attacks expected to ramp up amid AI 'perfect storm'": https://www.cfodive.com/news/fraud-attacks-expected-ramp-up-amid-ai-perfect-storm/808816/
Fortune, 27 December 2025, "2026 will be the year you get fooled by a deepfake": https://fortune.com/2025/12/27/2026-deepfakes-outlook-forecast/
CNN, February 2024, Arup Hong Kong case, $25 million: https://www.cnn.com/2024/02/04/asia/deepfake-cfo-scam-hong-kong-intl-hnk
Fortune, May 2024, Arup confirmed as victim: https://fortune.com/europe/2024/05/17/arup-deepfake-fraud-scam-victim-hong-kong-25-million-cfo/
Eftsure, Ferrari case July 2024: https://www.eftsure.com/blog/cyber-crime/these-7-deepfake-ceo-scams-prove-that-no-business-is-safe/
Cyble, "Deepfake-as-a-Service Exploded In 2025": https://cyble.com/knowledge-hub/deepfake-as-a-service-exploded-in-2025/
Keepnet Labs, deepfake statistics 2025 (Gartner, DeepStrike data): https://keepnetlabs.com/blog/deepfake-statistics-and-trends
MSSP Alert, "Deepfakes, AI Agents Will Expose Identities to More Threats in 2026": https://www.msspalert.com/news/deepfakes-ai-agents-will-expose-identities-to-more-threats-in-2026