Lithuania Braces for AI-Powered Cyber Fraud

With a distinguished career spent on the front lines of corporate cybersecurity, Malik Haidar has witnessed the evolution of digital threats firsthand. Now, as an expert in the intricate dance between innovation and security, he offers a sobering look at the new era of AI-driven cybercrime. In this conversation, we explore Lithuania’s ambitious national strategy to counter these threats, delving into the sophisticated, multimodal attacks that are rendering old defenses obsolete. We’ll uncover how criminals orchestrate AI to create hyper-realistic deceptions, the psychological tactics behind adaptive social engineering, and the AI-powered countermeasures being deployed to protect an entire nation’s digital infrastructure.

Lithuania’s €24.1 million “Safe and Inclusive E-Society” mission unites academia, cybersecurity firms, and government. How does this collaborative model accelerate innovation, and can you walk us through a specific pilot project being tested on critical infrastructure or in a public institution?

This model is a game-changer because the era of isolated research is definitively over. When you have threats evolving as fast as they are today, you can’t afford the luxury of academics working in one silo and businesses in another. The Lithuanian mission, with its €24.1 million in funding, is designed to break down those walls. It brings the theoretical rigor of universities like KTU and Vilnius Tech together with the market-facing agility of companies like NRD Cyber Security. In practice, this means scientific knowledge is immediately pressure-tested and transformed into market-ready solutions. For example, one of the most critical pilot projects involves developing threat-detection sensors for our industrial infrastructure. We’re not just writing papers about it; we’re building and deploying prototypes in real-world environments, hardening the systems that control our power grids and water supplies against sophisticated attacks.

Generative AI is making traditional, pattern-based fraud detection less effective. Could you explain the key differences between a “classic fraud” email and a modern AI-generated phishing attempt? Please elaborate on how these new attacks achieve a level of realism that can fool even a careful user.

The difference is night and day. For years, we trained people and systems to look for the classic signs of fraud: grammatical errors, strange phrasing, or generic greetings. Our firewalls and filters were built to recognize those recurring patterns. But generative AI has erased the boundary between fraudulent and legitimate writing. A modern phishing email created by an LLM is a masterpiece of deception. It’s written in flawless, contextually perfect language, uses the precise terminology of the institution it’s impersonating, and can even replicate a specific person’s communication style. It’s no longer about mass scale but about chilling realism. These messages don’t look like fraud; they look like legitimate, professional communication. The attack’s quality has skyrocketed because each message is personalized, often using public data about the victim, making the deception incredibly difficult for even a cautious user to spot.
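
To make that shift concrete, here is a minimal, hypothetical sketch in Python of the kind of rule-based scoring legacy mail filters relied on. The patterns and example messages are invented for illustration; the point is that a classic scam trips several rules, while a fluent LLM-written lure trips none:

```python
import re

# Legacy heuristics of the kind filters relied on for years: generic
# greetings, stock scam phrasing, misspellings, and shouty urgency.
SUSPICIOUS_PATTERNS = [
    r"dear (customer|user|sir)",   # generic greeting
    r"kindly do the needful",      # stock scam phrasing
    r"verifiy|acount|recieve",     # common misspellings
    r"urgent!{2,}",                # shouty urgency
]

def classic_fraud_score(email_text: str) -> int:
    """Count how many legacy fraud patterns appear in a message."""
    text = email_text.lower()
    return sum(bool(re.search(p, text)) for p in SUSPICIOUS_PATTERNS)

# A classic scam trips several rules...
classic = "Dear customer, kindly do the needful and verifiy your acount. URGENT!!!"
# ...while a fluent, personalized LLM-written lure trips none.
llm_style = (
    "Hi Ruta, following up on the Q3 vendor audit we discussed on Tuesday, "
    "could you re-confirm your SSO credentials via the portal link below "
    "before the 17:00 compliance cutoff?"
)

print(classic_fraud_score(classic))    # 4 -> flagged
print(classic_fraud_score(llm_style))  # 0 -> sails straight through
```

Every signal in that list is simply absent from machine-generated prose, which is why detection has had to move from surface patterns toward context and behavior.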

Criminals now orchestrate an arsenal of AI tools, from FraudGPT for text to ElevenLabs for voice cloning and StyleGAN for deepfakes. Can you describe how these are combined in a single, multimodal attack to bypass verification, perhaps sharing an anecdote of how this was used to create fake financial accounts?

It’s like a symphony of deception, with each AI tool playing its part. An attacker might start by using an LLM like FraudGPT to craft a highly convincing phishing email to get initial information. Then, they’ll use StyleGAN or Stable Diffusion to generate a photorealistic face for a fake ID, even editing the file’s metadata so the document appears legitimate. The scariest part is the voice. With just a few seconds of audio scraped from social media, tools like ElevenLabs can clone a person’s voice with stunning accuracy. We’re seeing this play out in attempts to open accounts at FinTech companies and crypto platforms. The criminal uses these tools to create a completely fabricated digital identity. They submit the AI-generated documents, and when the system asks for a “liveness” video check, they use a deepfake. If a human agent calls for verification, they use the cloned voice to answer. It’s a full-stack, multimodal fraud that can bypass both automated checks and the human sense of trust.

We are seeing a new frontier in adaptive social engineering where AI bots adjust their tactics in real time. Can you illustrate a typical scenario where an AI orchestrates a multi-channel attack, and what specific psychological vulnerabilities are these new, scalable deceptions designed to exploit?

This is where it becomes deeply personal and unnerving. Imagine an AI bot that starts by scraping your LinkedIn profile and company website. It then crafts an email that perfectly mimics the tone of a senior colleague, referencing a real project. If you don’t respond, the system doesn’t just give up. It automatically pivots. An hour later, you might get an SMS, and then a message on Slack, with the tone shifting from formal to urgent. If you express doubt, the LLM behind the bot generates plausible reassurances, perhaps quoting a real internal policy it found online. The final move is often a phone call using a cloned voice of that colleague, creating immense pressure. This whole sequence is orchestrated by AI. It’s a new evolution of cybercrime where social engineering becomes intelligent and scalable, with each interaction designed to exploit our fundamental psychological weak points—our desire to be helpful, our deference to authority, and our fear of missing an urgent deadline.

The National Cyber Security Centre successfully reduced ransomware incidents fivefold between 2023 and 2024. Beyond threat monitoring, what specific AI-driven defense strategies are proving most effective, and what metrics are being used to measure their impact on protecting Lithuanian citizens and businesses?

That fivefold reduction is a testament to an aggressive, forward-looking strategy. It goes far beyond simply monitoring for known threats. The National Cyber Security Centre is heavily integrating AI into proactive defense, focusing on anomaly detection. Instead of just looking for the signature of a known ransomware strain, our AI systems learn the normal rhythm of a network—the typical data flows, user behaviors, and system processes. When the AI detects a subtle deviation from that baseline, even if it doesn’t match a known threat, it flags it for immediate investigation. This allows us to catch novel attacks before they can deploy their payload. The key metrics aren’t just the number of incidents blocked, but also “dwell time”—how long a threat is active before detection—and the speed of our automated response. We’re measuring success by how quickly and automatically we can neutralize a threat, ensuring the digital services that Lithuanian citizens and businesses rely on remain secure and trustworthy.
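
As an illustration of the principle rather than the NCSC’s actual tooling, here is a minimal Python sketch of baseline anomaly detection, using scikit-learn’s IsolationForest on synthetic, hypothetical network-flow features:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-host features: [bytes_out, destinations_contacted,
# off_hours_flag]. Real deployments would use far richer telemetry.
baseline = np.column_stack([
    rng.normal(50_000, 10_000, 1_000),  # typical outbound volume
    rng.poisson(4, 1_000),              # typical fan-out
    rng.binomial(1, 0.05, 1_000),       # rarely active off-hours
])

# Learn the network's "normal rhythm" from a baseline window.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(baseline)

# A novel ransomware staging run: massive exfiltration, wide fan-out,
# happening at 03:00. No known signature is required to notice it.
suspicious = np.array([[450_000, 60, 1]])
normal = np.array([[52_000, 3, 0]])

print(detector.predict(suspicious))  # [-1] -> flagged as an anomaly
print(detector.predict(normal))      # [ 1] -> consistent with baseline
```

The essential idea is that the detector never needs a signature: anything far enough from the learned baseline is surfaced for investigation, which is exactly what shrinks dwell time.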

What is your forecast for the evolution of AI-driven cyber threats over the next five years?

Over the next five years, I predict we will see the rise of fully autonomous AI agents carrying out cyberattacks from start to finish with minimal human intervention. These agents will be capable of identifying vulnerabilities, crafting custom exploits, executing multi-stage social engineering campaigns, and even negotiating ransom payments on their own. The attacks will become hyper-personalized and self-propagating, adapting their methods in real time to bypass new defenses. We will also see a dramatic increase in AI-driven disinformation campaigns that are nearly impossible to distinguish from reality, targeting not just individuals but entire societal and democratic processes. The battlefield will shift from defending against human-steered attacks to countering AI agents, forcing us to develop our own autonomous AI defense systems that can operate at machine speed. It will be an era defined by an escalating race between malicious and defensive AI.
