I’m thrilled to sit down with Malik Haidar, a renowned cybersecurity expert who has dedicated his career to protecting multinational corporations from sophisticated threats and hackers. With a deep background in analytics, intelligence, and security, Malik brings a unique perspective by integrating business strategies into cybersecurity solutions. In this conversation, we dive into the evolving landscape of security awareness training, the rise of AI-powered attacks, and innovative approaches to preparing employees for modern threats. We’ll explore how personalized training can make a difference, the challenges faced by traditional methods, and what the future holds for the industry.
Can you walk us through how personalized training simulations, like tailored deepfake scenarios, stand out from traditional security awareness methods?
Absolutely. Traditional security training often relies on generic videos or modules that are the same for every employee, regardless of their role or risk level. While that can cover basic concepts, it often fails to engage people on a personal level. With personalized simulations, like a deepfake phone call mimicking a colleague or boss, we create a real-world scenario that feels immediate and relevant. It’s not just theory; it’s a hands-on experience that forces employees to think critically in the moment. This approach engages people emotionally and builds muscle memory, making them far more likely to recognize and respond to a real attack.
Why do you believe customizing training content for individuals or organizations is more impactful than a one-size-fits-all approach?
Customization taps into relevance. When training reflects an employee’s specific environment—say, the tools they use or the types of interactions they have daily—it resonates more. For instance, a finance team member might get a simulated phishing email about an urgent invoice, while an IT staffer could face a deepfake call requesting system access. This relevance makes the threat feel real and personal, which drives engagement and retention. Generic training often feels like a chore; tailored content turns it into a practical skill they can apply immediately.
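To make that concrete, here is a minimal sketch of what a role-to-scenario mapping might look like inside a training platform. Everything in it, the scenario catalog, the roles, and the helper names, is purely illustrative rather than a description of any specific product; a real program would draw its catalog from current threat intelligence and the organization’s actual tooling.

```python
# Minimal sketch: mapping employee roles to tailored simulation scenarios.
# The catalog below is purely illustrative, not any vendor's actual system.

from dataclasses import dataclass

@dataclass
class Scenario:
    channel: str            # e.g. "email", "voice", "chat"
    pretext: str            # the social-engineering hook
    red_flags: list[str]    # cues the employee should learn to spot

SCENARIO_CATALOG = {
    "finance": Scenario(
        channel="email",
        pretext="Urgent invoice from a known vendor with updated bank details",
        red_flags=["unusual urgency", "changed payment details", "lookalike domain"],
    ),
    "it": Scenario(
        channel="voice",
        pretext="Deepfake call from a 'manager' requesting temporary system access",
        red_flags=["request bypasses ticketing", "caller resists verification"],
    ),
    "default": Scenario(
        channel="email",
        pretext="Password-reset notice pointing to a spoofed login page",
        red_flags=["mismatched URL", "generic greeting"],
    ),
}

def pick_scenario(role: str) -> Scenario:
    """Return the most relevant simulation for an employee's role."""
    return SCENARIO_CATALOG.get(role.lower(), SCENARIO_CATALOG["default"])

# A finance team member gets the invoice-fraud scenario, not a generic one.
print(pick_scenario("Finance").pretext)
```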
How do you view the current state of security awareness training content in terms of relevance and effectiveness?
Frankly, a lot of the content out there is outdated or disconnected from today’s threats. Many programs still focus on basic phishing emails or password hygiene, which are important but only scratch the surface of what we’re seeing now, like AI-generated deepfakes or sophisticated social engineering. Recent findings that a significant share of professionals consider their training materials irrelevant ring true to me, probably because the content hasn’t evolved at the speed of attackers. Employees tune out when content feels like it’s from a decade ago, and that’s a huge risk when threats are advancing daily.
What steps can be taken to bridge the gap between outdated training materials and the rapidly changing threat landscape?
First, we need to prioritize agility in content creation. That means constantly updating training to reflect the latest attack methods, like AI-powered scams, and using real-world data to inform scenarios. Second, leveraging technology like AI itself can help generate dynamic, relevant simulations that adapt to new threats in real time. Finally, collaboration with industry experts and even ethical hackers can provide insights into emerging tactics. It’s about creating a living, breathing training program rather than a static set of slides.
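As one illustrative example of that AI-assisted approach, a training pipeline might prompt a large language model with an employee’s role and a recently observed tactic to draft fresh simulation content the same week the tactic appears. The sketch below assumes the OpenAI Python SDK purely for concreteness; any comparable model API would work, and the prompt wording and model choice are assumptions, not a prescribed setup.

```python
# Illustrative sketch: generating a fresh, role-specific phishing simulation
# from a recently observed attack pattern. Assumes the OpenAI Python SDK
# (pip install openai) and an OPENAI_API_KEY in the environment; any
# comparable model API would work the same way.

from openai import OpenAI

client = OpenAI()

def draft_simulation(role: str, recent_tactic: str) -> str:
    """Ask the model to draft a training email mirroring a current tactic."""
    prompt = (
        f"Write a simulated phishing email for security awareness training. "
        f"Target audience: employees in the {role} function. "
        f"Model it on this recently observed tactic: {recent_tactic}. "
        f"Include three subtle red flags a trained employee could catch, "
        f"and list them separately after the email."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice; any capable model works
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Example: refresh finance-team content as soon as a new tactic is reported.
print(draft_simulation("finance", "vendor invoice with updated bank details"))
```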
How has the accessibility of AI tools changed the game for threat actors launching attacks like deepfakes?
The barrier to entry has practically vanished. AI tools for creating deepfakes or crafting convincing phishing messages are now widely available, often for free or at a low cost, and they don’t require advanced technical skills. Open-source models mean anyone with a laptop can experiment, from curious teenagers to malicious actors. This democratization of technology has exponentially increased the volume and sophistication of attacks, as threat actors can now impersonate voices or faces with just a few clicks, making it harder for even savvy individuals to spot the fakes.
How critical is it for organizations to adapt quickly to these AI-driven threats?
It’s absolutely urgent. The speed at which these threats are growing means that organizations can’t afford to lag behind. A single successful deepfake attack can lead to massive financial loss or data breaches, not to mention reputational damage. The reality is that many employees aren’t prepared for these new tactics because they’ve never encountered them in training. If organizations don’t pivot to proactive, modern defenses now, they’re essentially leaving the door wide open for attackers who are innovating much faster than most corporate strategies.
Do you think traditional security awareness training providers still play a valuable role in today’s environment?
They do have a role, especially for foundational education on classic threats like email scams or malware. Many organizations still face those daily, and legacy providers have the experience to cover those basics well. However, their value diminishes when it comes to emerging threats like AI-powered attacks. They’ve built systems around scale and standardization, which can be a strength but also a limitation when attackers are using highly personalized, cutting-edge methods.
What are some of the biggest hurdles legacy training companies face in keeping pace with today’s fast-evolving attackers?
The biggest hurdle is their inertia. Many of these companies rely on established content libraries and delivery models that take time to update. Meanwhile, attackers using AI tools can pivot overnight. There’s also a cultural challenge—shifting from a compliance-driven mindset, where training is a box to check, to a dynamic, threat-focused approach requires a complete overhaul of their strategy. Without rapid innovation, they risk becoming irrelevant as threats outpace their solutions.
Why do you think so many organizations hesitate to report successful attacks, especially those involving advanced tactics like deepfakes?
It often comes down to fear of reputational damage. Admitting a breach, especially one as sophisticated as a deepfake scam, can erode trust from clients, partners, and even employees. There’s also the concern of legal or regulatory repercussions, particularly for private companies that aren’t mandated to disclose. Many prefer to handle it quietly, hoping to mitigate the damage internally. Unfortunately, this silence means the broader industry doesn’t learn from these incidents, which only benefits the attackers.
Based on your experience, how widespread do you believe these unreported attacks actually are?
They’re far more common than most realize. In conversations with CISOs and security leaders, I’ve heard that successful deepfake or AI-driven attacks have impacted over half of their organizations in the past year alone, compared to a much smaller fraction just a couple of years ago. These incidents often go under the radar unless they’re catastrophic or involve a public entity with disclosure requirements. The underreporting creates a false sense of security in the industry, which is dangerous.
Have you observed a noticeable shift in how employees engage with personalized, AI-driven training compared to older, standardized methods?
Definitely. Employees are much more engaged when the training feels directly applicable to their world. With personalized AI-driven simulations, they’re not just watching a video—they’re interacting with a scenario that mimics a real threat they might face. I’ve seen employees go from passively clicking through modules to actively discussing and questioning tactics during and after training. That shift in mindset, from obligation to curiosity, is a game-changer for building a security-conscious culture.
Can you share a specific instance where tailoring training content to an individual or organization led to a tangible improvement in threat awareness?
Sure. We worked with a mid-sized company where the finance team was a frequent target for payment fraud. We crafted a simulation involving a deepfake voice call from what sounded like their CEO, urgently requesting a wire transfer. During the exercise, several team members initially fell for it, but post-training debriefs helped them identify red flags like unusual urgency or mismatched contact details. A few weeks later, a real attempt occurred, and the team flagged it immediately, preventing a significant loss. That direct connection between training and real-world application was eye-opening for them.
Where do you see the security awareness training industry heading in the next few years, especially with the rise of AI tools?
I think we’re on the cusp of massive innovation. Social engineering remains a core component of most successful attacks, and with AI tools becoming cheaper and more accessible, the volume and complexity of these threats will only grow. The industry will have to pivot toward hyper-personalized, adaptive training that evolves as fast as the threats do. Companies that can’t innovate quickly will fall behind, while those leveraging AI for both attack simulations and defense strategies will lead the way. It’s going to be a race between defenders and attackers, and training will be at the forefront.
Do you have any advice for our readers on staying ahead of these evolving cybersecurity threats?
My biggest piece of advice is to stay curious and proactive. Don’t wait for an attack to learn—seek out training or resources that reflect the latest threats, like AI-generated scams. Question anything that feels off, even if it seems to come from a trusted source, and verify through separate channels. For organizations, invest in modern, personalized training programs that prepare your team for real-world scenarios. Cybersecurity isn’t just an IT issue—it’s everyone’s responsibility, and building that awareness starts with staying informed and engaged.