In today’s rapidly evolving digital landscape, cybersecurity has never been more critical. I’m thrilled to sit down with Malik Haidar, a seasoned cybersecurity expert with a wealth of experience protecting multinational corporations from sophisticated threats and hackers. With a deep background in analytics, intelligence, and security, Malik has a unique ability to blend business perspectives with cutting-edge cybersecurity strategies. In this interview, we dive into the explosive growth of AI tool adoption, the alarming gaps in security training, the risks of sharing sensitive data with AI platforms, the surge in cybercrime, and the persistent challenges in fostering a culture of digital safety. Let’s explore how these trends are shaping the future of cybersecurity and what can be done to stay ahead of the risks.
How do you explain the dramatic rise in AI tool usage from 44% last year to 65% this year?
The jump in AI tool usage is really a reflection of how accessible and powerful these technologies have become. Over the past year, tools like ChatGPT have gone mainstream, offering user-friendly interfaces and tangible benefits like boosting productivity or solving complex problems instantly. Businesses are also pushing for efficiency, and AI is often seen as a quick fix for streamlining tasks. Plus, the hype around AI has created a cultural shift—people feel they need to adopt it to stay competitive, whether in their personal lives or at work. But this rapid adoption often outpaces the understanding of risks, which is where we’re seeing major gaps.
What do you think makes ChatGPT so dominant with a 77% adoption rate compared to other tools like Gemini or Copilot?
ChatGPT’s dominance comes down to its first-mover advantage and its versatility. It hit the market with a lot of buzz and quickly became synonymous with AI for many users. Its conversational style feels intuitive, almost human-like, which makes it appealing across a wide range of users—from students to professionals. While tools like Gemini and Copilot have their strengths, ChatGPT has built a reputation for being a go-to for everything from drafting emails to brainstorming ideas. Brand recognition and ease of use have kept it ahead of the pack.
In which industries or job roles have you seen AI adoption growing the fastest?
I’ve noticed the fastest growth in industries like tech, marketing, and education. Tech companies are naturally early adopters, using AI for coding and data analysis. Marketing teams are leveraging it for content creation and customer insights, while educators and students are using it for research and learning support. Job roles like content creators, developers, and analysts are particularly quick to integrate AI because it directly enhances their output. However, this enthusiasm often overlooks security protocols, especially in non-tech sectors where digital literacy might be lower.
Why do you think 58% of AI users haven’t received training on security or privacy risks associated with these tools?
This gap exists largely because AI adoption has outpaced organizational readiness. Many companies are still playing catch-up, focusing on implementing AI rather than securing it. There’s also a lack of awareness at the leadership level about the specific risks AI introduces, like data leaks or misuse. Budget constraints and time limitations play a role too—training programs take resources, and many businesses prioritize short-term gains over long-term safety. Unfortunately, this leaves employees to navigate these tools without guidance, often underestimating the risks.
What are the most significant dangers of using AI tools without proper security training?
The dangers are multifaceted. First, there’s the risk of data exposure—when users input sensitive information, like company secrets or personal data, into AI platforms, they might not realize it could be stored or accessed by third parties. Second, untrained users are more susceptible to AI-enabled scams, like phishing emails crafted with uncanny precision using AI. Lastly, there’s the potential for misuse, where employees might inadvertently violate privacy laws or company policies because they don’t understand the tool’s boundaries. Without training, these risks compound and create vulnerabilities across entire organizations.
How can companies begin to address this training gap without overwhelming their workforce?
Companies need to start with bite-sized, practical training that integrates into daily workflows. Instead of long, generic sessions, offer short tutorials focused on real-world scenarios—like how to spot a risky AI prompt or safely handle data. Gamification can make learning engaging, rewarding employees for completing modules. Leadership also needs to model good behavior by prioritizing security themselves. Finally, partnering with cybersecurity experts to tailor training to specific roles or industries can ensure relevance without overloading staff. It’s about building a culture of awareness, not just checking a box.
With 43% of users sharing sensitive workplace info with AI tools without employer knowledge, what types of information are most at risk?
The information most at risk includes internal company documents, financial records, and client data. These are often shared because employees see AI as a quick way to analyze or summarize complex information, not realizing the tool might store or expose it. Trade secrets, strategic plans, and personally identifiable information are also vulnerable. Once this data is in an AI system, it’s often out of the company’s control, creating a potential goldmine for cybercriminals or even competitors if there’s a breach.
Why do you think employees feel comfortable sharing things like company documents or client data with AI tools?
A big reason is the lack of awareness about how AI systems handle data. Many employees assume these tools are secure or private, like using a company laptop, when in reality, they’re often cloud-based and managed by third parties. There’s also a trust factor—AI feels like a helpful assistant, so people don’t think twice about sharing sensitive stuff. Plus, the pressure to get work done quickly can override caution. Without clear policies or training, employees simply don’t see the harm until it’s too late.
What practical steps can businesses take to prevent employees from engaging in this risky behavior with AI?
Businesses need to establish clear, enforceable policies about AI usage, explicitly stating what can and cannot be shared. Technical controls, like blocking certain AI platforms or monitoring data inputs, can act as a safety net. Regular communication—through emails, meetings, or posters—can reinforce the importance of data protection. Offering secure, company-approved AI alternatives also helps, so employees aren’t tempted to use unvetted tools. Ultimately, fostering an environment where employees feel they can ask questions without fear of reprimand is key to changing behavior.
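To make the idea of monitoring data inputs concrete, here is a minimal sketch in Python of the kind of pre-submission filter a company might put in front of an approved AI tool. The patterns, the `submit_to_ai` wrapper, and the `send_fn` hook are all illustrative assumptions rather than any specific product or API; a real deployment would rely on the organization's own data-classification rules or a dedicated DLP engine.

```python
import re

# Illustrative patterns only; a real deployment would use the organization's
# own data-classification rules or a dedicated DLP engine.
SENSITIVE_PATTERNS = {
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "confidential_label": re.compile(r"\b(confidential|internal only|trade secret)\b",
                                     re.IGNORECASE),
}


def scan_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]


def submit_to_ai(prompt: str, send_fn):
    """Screen a prompt before it reaches an external AI tool.

    `send_fn` stands in for whatever company-approved client would actually
    send the request; it is a hypothetical hook, not a real API.
    """
    findings = scan_prompt(prompt)
    if findings:
        # A real control might warn the user, log the event, or route the
        # request to an approved internal model instead of refusing outright.
        raise ValueError(
            f"Prompt blocked: possible sensitive data ({', '.join(findings)})"
        )
    return send_fn(prompt)


if __name__ == "__main__":
    print(scan_prompt("Summarize this report marked CONFIDENTIAL for the board."))
    # ['confidential_label']
```

The value here is less in the specific patterns than in the checkpoint itself: data gets screened before it leaves the company's control, which is exactly the moment the earlier answers identify as the point of risk.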
Cybercrime victimization has risen by 9%, with 44% of people experiencing data or monetary loss. What’s driving this increase?
Several factors are driving this spike. The proliferation of AI has made it easier for criminals to craft sophisticated attacks, like highly personalized phishing emails or deepfake scams. The growing reliance on digital platforms for everything from banking to socializing also expands the attack surface. Additionally, the lack of basic cybersecurity habits—like weak passwords or skipping updates—leaves many people exposed. Cybercriminals are capitalizing on these vulnerabilities, especially as economic pressures push more individuals to take risks online, like falling for get-rich-quick schemes.
Why are younger generations like Gen Z and Millennials being hit harder by scams and cybercrime?
Younger generations are more immersed in digital life, which increases their exposure to risks. They’re active on social media, often oversharing personal details that scammers can exploit. They’re also more likely to engage with emerging tech, like cryptocurrency, which is a hotbed for fraud. There’s a certain overconfidence too—many believe they can spot a scam, but modern attacks are incredibly slick. Plus, financial instability in these age groups can make them more susceptible to promises of quick money or too-good-to-be-true deals.
What types of cybercrimes are becoming more prevalent, and how are they evolving with technology?
Phishing remains a top threat, but it’s evolved with AI to become hyper-targeted, using personal data to mimic trusted contacts. Crypto scams are also on the rise, often disguised as investment opportunities on social media. Identity theft is getting more sophisticated with deepfake tech, where criminals impersonate voices or faces to trick victims. Tech support and online dating scams are adapting too, leveraging AI to build trust over time. The common thread is that these crimes are becoming harder to detect, blending seamlessly into everyday digital interactions.
With 55% of people reporting no access to cybersecurity training, what’s holding companies back from providing it?
A lot of it comes down to resources—time, money, and expertise. Smaller companies, in particular, might not have the budget to develop or outsource training programs. There’s also a perception that cybersecurity is an IT problem, not a company-wide priority, so it gets deprioritized. Some leaders doubt the effectiveness of training, especially if past efforts didn’t yield measurable results. And frankly, competing business goals—like hitting sales targets—often take precedence over proactive security measures, even though the cost of a breach can be far greater.
What’s your forecast for the future of cybersecurity in the context of AI and evolving cybercrime trends?
I think we’re heading into a period of both great opportunity and significant challenge. AI will continue to revolutionize how we work, but it’ll also arm cybercriminals with tools to launch more deceptive and widespread attacks. I foresee a growing emphasis on AI-specific security measures, like better data encryption and user authentication within these platforms. Governments and industries will likely push for stricter regulations around AI usage and data handling. On the flip side, cybercrime will become more personalized and automated, requiring us to double down on education and adaptive defenses. The key will be fostering a proactive mindset—waiting for a breach to act won’t cut it anymore. We need to build resilience now.