Harnessing AI: A Strategic Edge for Cybersecurity Pros

In the rapidly evolving world of cybersecurity, few voices stand out as much as Malik Haidar’s. With a career dedicated to protecting multinational corporations from sophisticated threats, Malik has become a beacon of insight, blending analytics, intelligence, and a keen business perspective into his approach to security. His expertise in integrating AI into cybersecurity workflows offers a fresh lens on how professionals can adapt to disruptive technologies. Today, we dive into his thoughts on navigating the challenges and opportunities AI presents: overcoming resistance to new tools, building custom utilities, streamlining daily tasks, and maintaining human judgment in an increasingly automated landscape.

How do you see security professionals falling into the trap of resisting new tools like AI, much like Paul Bunyan resisted the steam-powered saw, and can you share a real-world example of this resistance impacting a team?

I think the resistance often comes from a place of comfort with tried-and-true methods. Many security professionals have honed their skills over years, mastering manual processes or specific tools, and the idea of AI feels like a disruption to that expertise. It’s not just fear of the unknown; it’s a genuine concern that these new tools might undermine their hard-earned judgment or make their role less relevant. I’ve seen this firsthand in a team I worked with a few years back at a large financial institution. They were deeply invested in manual log analysis and hesitated to adopt an AI-driven SIEM platform, worried it would produce false positives or miss critical nuances. This resistance slowed down their incident response by weeks during a critical phishing campaign—time we couldn’t afford to lose. If they had taken small steps, like running AI outputs in parallel with their manual checks or dedicating a few hours to understanding the tool’s logic, they could’ve built trust in the system sooner. Instead, they were stuck playing catch-up while the threat escalated. The lesson there was clear: adaptation doesn’t mean abandoning your skills; it means augmenting them.

What challenges have you faced when working with AI embedded in security tools like SIEMs or endpoint protection platforms, especially given their proprietary nature, and can you walk us through a specific instance where this opacity caused an issue?

The biggest challenge with these embedded AI systems is the lack of transparency. Vendors often lock their models behind a proprietary curtain, so you’re left with outputs but no insight into the ‘why’ or ‘how’ of the decisions. This can be incredibly frustrating when you’re accountable for the outcomes but can’t inspect the logic. I remember a situation with an endpoint protection platform at a previous organization where the AI flagged a legitimate internal tool as malicious, disrupting operations for an entire department. We couldn’t understand why the model made that call—no access to the training data, no visibility into the decision criteria. It felt like arguing with a black box. To work around it, we had to rely on secondary metrics, like user behavior logs and network traffic patterns, to manually validate the tool’s activity as safe. We eventually whitelisted it, but only after hours of unnecessary downtime. That experience drove home the need to push vendors for more explainability or, better yet, build complementary tools where we control the inputs and logic.
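
For readers who want to try that kind of workaround, here is a minimal sketch of one secondary check: cross-referencing a flagged binary against network telemetry to see whether it ever talks outside internal subnets. The log format, field names, and subnet ranges are hypothetical, not drawn from Malik’s environment or any specific vendor.

```python
# Sketch: validate an endpoint-AI verdict against secondary telemetry.
# The CSV layout and column names here are hypothetical.
import csv
import ipaddress

INTERNAL_NETS = [ipaddress.ip_network(n) for n in ("10.0.0.0/8", "192.168.0.0/16")]

def is_internal(addr: str) -> bool:
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in INTERNAL_NETS)

def validate_flagged_process(netflow_csv: str, process_name: str) -> dict:
    """Summarize where a flagged process actually connects on the network."""
    internal, external = 0, 0
    with open(netflow_csv, newline="") as f:
        for row in csv.DictReader(f):  # expects columns: process, dest_ip
            if row["process"] != process_name:
                continue
            if is_internal(row["dest_ip"]):
                internal += 1
            else:
                external += 1
    return {"internal_conns": internal, "external_conns": external}

# A flagged tool that only ever talks to internal subnets is not proof of
# safety, but it is the kind of corroborating signal that can support a
# whitelist decision like the one described above.
print(validate_flagged_process("netflow.csv", "internal_tool.exe"))
```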

When it comes to designing AI-assisted workflows to address vendor blind spots, how do you recommend security pros get started, and can you share a step-by-step example of a tool you’ve created?

Starting with custom AI workflows is about identifying friction points in your environment where vendor tools fall short. Look for repetitive tasks or gaps in contextual understanding, then design something small and targeted. The key is to keep it manageable—don’t try to replicate an entire SIEM overnight. I’ll give you an example of a utility I built for a past project to monitor insider threats. First, I pinpointed the problem: our vendor tool wasn’t flagging subtle anomalies in user behavior because it lacked context about our specific roles and workflows. Step one was collecting internal data—login times, file access patterns, and department-specific norms. Step two, I used Python and a simple machine learning library to train a basic anomaly detection model on this data, feeding it examples of normal versus suspicious activity. Step three was integrating it with an alert system to notify me only when deviations exceeded a threshold I set. This tool cut down on noise by about 40%, letting us focus on real risks rather than chasing every alert. It wasn’t perfect, but it gave us control over what ‘risky’ meant in our context, and that made all the difference.
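
To make step two concrete, here is a minimal sketch of the anomaly-detection piece, assuming scikit-learn stands in for the “simple machine learning library”; the feature names and CSV layout are illustrative, not the actual schema from that project.

```python
# Sketch: train a basic anomaly detector on internal user-activity data.
# Assumes scikit-learn; features and file layout are illustrative only.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical per-user, per-day features: login hour, files touched, and
# deviation from department-specific norms (step one's collected data).
df = pd.read_csv("user_activity.csv")  # columns: login_hour, files_accessed, dept_deviation
features = df[["login_hour", "files_accessed", "dept_deviation"]]

# Train on history assumed to be mostly normal; `contamination` is the
# expected fraction of anomalies and is a knob you tune to your environment.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(features)

# score_samples: lower scores are more anomalous. Alerting only below a
# cutoff mirrors the "threshold I set" in step three above.
df["score"] = model.score_samples(features)
alerts = df[df["score"] < -0.6]
print(f"{len(alerts)} activity records exceeded the anomaly threshold")
```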

You’ve mentioned how AI can reduce friction in tasks like writing SQL queries. Can you describe a particularly time-consuming translational task you’ve tackled with AI, and what was the before-and-after impact on your work?

Absolutely. One of the most draining tasks I’ve dealt with is extracting specific incident data from massive log files during investigations. Before AI, I’d spend hours crafting complex jq filters or SQL queries just to isolate a handful of relevant entries—think sifting through thousands of lines for a single IP address or timestamp pattern. It wasn’t hard, but it was tedious and broke my focus on the actual analysis. Using AI as a translator changed the game. I built a small front-end tool where I could type plain-English requests like, “Show me all logs from this IP between these hours,” and the AI would spit out the correct jq syntax. What used to take me an hour or more now takes under five minutes. Beyond the time savings, it shifted my mental energy—I could stay in the investigative mindset, piecing together the story of an incident, instead of wrestling with syntax. That alone made me feel sharper and more effective on the job.
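
As a rough idea of how such a front end can be wired up, here is a sketch using the OpenAI Python client. The interview doesn’t name the model or vendor actually used, so treat the model name and prompt here as placeholders.

```python
# Sketch: translate a plain-English log question into a jq filter.
# Assumes the OpenAI Python client; model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM = (
    "You translate plain-English log questions into jq filters. "
    "Reply with the jq expression only, no explanation."
)

def to_jq(request: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": request},
        ],
    )
    return resp.choices[0].message.content.strip()

query = to_jq("Show me all logs from 10.1.2.3 between 09:00 and 17:00")
# A reasonable translation would look something like:
# select(.src_ip == "10.1.2.3" and .timestamp >= "09:00" and .timestamp <= "17:00")
print(query)
```

The value is in the loop, not the model choice: you review the generated filter before running it, which keeps the human in charge of correctness.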

For security pros intimidated by coding, how would you suggest they start building confidence with AI’s help, and can you share a personal story of overcoming a learning barrier?

If coding feels daunting, the beauty of AI is that it can lower that barrier significantly. Start by using AI as a tutor—describe what you want to do in plain English, and let it generate the code for you. Your job isn’t to write perfect Python from scratch; it’s to understand enough to tweak what the AI gives you. I remember when I first started with Python years ago, I was overwhelmed by the syntax and logic. I’d stare at error messages for hours, feeling like I’d never get it. Then I started using early code-assist tools to break down scripts, and later, AI models to draft snippets. One breakthrough was automating a simple log-parsing task—AI wrote 80% of the script, and I just adjusted the file paths. That small win gave me confidence to tackle bigger projects. For anyone starting out, pick a tiny task, like renaming files in bulk, and use a free AI tool like ChatGPT to draft the code. Then, read through it line by line with a resource like Python.org’s beginner tutorials. It’s less about mastery and more about building comfort through small, practical steps.
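
The bulk-rename task mentioned above makes a good first exercise. Below is the kind of snippet an AI assistant typically drafts for it; it is a generic example, not a script from the interview. Reading it line by line and changing the folder path is exactly the sort of small adjustment Malik describes.

```python
# Sketch: bulk-rename files by extension, the classic starter automation.
from pathlib import Path

def bulk_rename(folder: str, old_ext: str, new_ext: str) -> None:
    """Rename every *old_ext file in folder to use new_ext instead."""
    for path in Path(folder).glob(f"*{old_ext}"):
        path.rename(path.with_suffix(new_ext))
        print(f"renamed {path.name}")

# Adjusting the folder path and extensions is often the only edit
# a generated draft like this needs.
bulk_rename("./logs", ".txt", ".log")
```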

How have you seen AI’s statistical reasoning clash with the contextual needs of security work, and can you describe a specific case where you had to intervene?

AI’s reliance on statistical patterns can often miss the human and organizational context that’s so critical in security. It’s great at identifying anomalies based on data, but it doesn’t grasp intent or cultural nuances. I recall an incident where an AI-driven monitoring tool flagged a senior executive’s activity as suspicious because their login patterns deviated during a business trip—lots of unusual locations and times. Mathematically, it looked like a compromised account, but in reality, it was just travel. The tool recommended locking the account, which would’ve disrupted critical operations. I had to step in, review the context—checking travel schedules and confirming with the user—and override the recommendation. The outcome was a preserved workflow, but the lesson was stark: AI can point you to a problem, but it’s on us to interpret whether it’s a real threat. We ended up adjusting the tool’s sensitivity for certain roles, but it reinforced that human judgment is irreplaceable when stakes are high.

You’ve advocated automating weekly tasks with Python and AI. Can you walk us through a routine task you’ve automated in your workflow, including the process and impact?

One task I automated was generating weekly summaries of security alerts for a team I supported. Manually compiling these reports—pulling data from multiple dashboards, formatting it, and prioritizing key issues—took about three hours every Monday. It was mind-numbing work that drained my energy for more strategic tasks. I decided to build a Python script with AI assistance: first, I used an AI model to draft code that scraped alert data from our SIEM API. Then, I refined it to filter for high-priority incidents based on criteria I defined. Finally, I added a formatting step to output a clean report emailed directly to the team. The biggest challenge was debugging API connection errors, which took a few frustrating iterations to resolve. Once it worked, though, those three hours shrank to under ten minutes of review time. That freed me up to focus on deeper analysis and planning, and honestly, it felt like reclaiming a piece of my week. It’s a small win, but it compounded into a lot more bandwidth for what mattered.
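
To make the shape of that pipeline concrete, here is a condensed sketch: pull alerts, filter by severity, email a summary. The SIEM endpoint, field names, and addresses are placeholders, since the actual API wasn’t specified.

```python
# Sketch of the weekly-summary pipeline: fetch, filter, format, email.
# Endpoint, fields, and addresses are hypothetical placeholders.
import smtplib
from email.message import EmailMessage
import requests

SIEM_URL = "https://siem.example.com/api/alerts"  # placeholder endpoint

def fetch_high_priority(token: str) -> list[dict]:
    resp = requests.get(
        SIEM_URL,
        headers={"Authorization": f"Bearer {token}"},
        params={"since": "7d"},
        timeout=30,
    )
    resp.raise_for_status()  # fail loudly on the API errors mentioned above
    return [a for a in resp.json() if a.get("severity") in ("critical", "high")]

def send_report(alerts: list[dict]) -> None:
    body = "\n".join(f"[{a['severity'].upper()}] {a['title']}" for a in alerts)
    msg = EmailMessage()
    msg["Subject"] = f"Weekly security summary: {len(alerts)} high-priority alerts"
    msg["From"], msg["To"] = "secops@example.com", "team@example.com"
    msg.set_content(body or "No high-priority alerts this week.")
    with smtplib.SMTP("mail.example.com") as smtp:
        smtp.send_message(msg)

send_report(fetch_high_priority("API_TOKEN"))
```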

How do you approach actively engaging with AI systems by questioning outputs and tuning behaviors, and can you share an example of a specific adjustment you made?

Engaging with AI is all about treating it as a partner, not a black-box oracle. I make it a habit to scrutinize outputs, test edge cases, and feed back better data or constraints when the results don’t align with reality. It’s a hands-on process of trial and error. For instance, I was working with an AI tool to prioritize vulnerability scan results, but it kept overemphasizing minor issues because it was trained on generic severity scores, not our environment’s specific risks. I adjusted by feeding it a curated dataset of past incidents from our organization, emphasizing assets critical to our operations. I also tweaked the weighting to deprioritize issues on non-production systems. After a few iterations, the prioritization aligned much better with our actual needs—cutting down irrelevant alerts by about 30%. That adjustment wasn’t just technical; it felt like reclaiming control over a process that directly impacts our security posture. It’s a reminder that AI is only as good as the guidance we provide.
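
One way to express that kind of tuning in code is a scoring function that blends a generic severity score with local context. The weights and fields below are illustrative assumptions, not the values actually used.

```python
# Sketch: re-weight generic vulnerability scores with local context.
# Weights and field names are illustrative only.
def local_priority(finding: dict) -> float:
    """Score a vulnerability for THIS environment, not in general."""
    score = finding["cvss"]                  # generic severity, 0-10
    if finding["asset_critical"]:
        score *= 1.5                         # boost business-critical assets
    if not finding["production"]:
        score *= 0.4                         # deprioritize non-production systems
    if finding["seen_in_past_incident"]:
        score += 2.0                         # weight curated incident history
    return score

findings = [
    {"id": "V1", "cvss": 9.1, "asset_critical": False, "production": False, "seen_in_past_incident": False},
    {"id": "V2", "cvss": 6.5, "asset_critical": True, "production": True, "seen_in_past_incident": True},
]
# V2 outranks V1 despite a lower CVSS score, which is the point of tuning.
for f in sorted(findings, key=local_priority, reverse=True):
    print(f["id"], round(local_priority(f), 1))
```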

How has participating in community learning around AI in security shaped your approach, and can you tell us about a specific collaboration that led to a breakthrough?

Community learning has been invaluable for keeping pace with AI’s rapid evolution in security. Sharing ideas, scripts, and failures with peers helps me see blind spots in my own thinking and pick up practical tricks I wouldn’t have stumbled on alone. A standout moment was during an online forum discussion with other security pros about automating threat intelligence feeds. One member shared a lightweight Python script they’d built with AI to cross-reference indicators of compromise against internal logs, something I hadn’t considered. We swapped notes, and I adapted their approach by adding a scoring mechanism based on our risk priorities. That collaboration turned a vague idea into a tool that shaved hours off our weekly threat-hunting process. The key insight was how small, shared innovations can spark bigger solutions—it felt like a collective win. Being part of these conversations keeps me grounded and constantly learning, which is critical in a field that never stands still.
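
A stripped-down version of that shared idea might look like the following: match threat-feed indicators of compromise against internal logs, then rank hits by a locally defined risk weight. The file formats and weights are assumptions for illustration, not the collaborator’s actual script.

```python
# Sketch: cross-reference IOC feed entries against internal logs,
# with a risk score added per asset type. Formats are assumed.
import json

RISK_WEIGHTS = {"crown_jewel": 3.0, "server": 2.0, "workstation": 1.0}

def load_iocs(path: str) -> set[str]:
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

def hunt(log_path: str, iocs: set[str]) -> list[tuple[float, dict]]:
    hits = []
    with open(log_path) as f:
        for line in f:
            event = json.loads(line)  # one JSON event per line
            if event.get("dest_ip") in iocs:
                weight = RISK_WEIGHTS.get(event.get("asset_type"), 1.0)
                hits.append((weight, event))
    return sorted(hits, key=lambda h: h[0], reverse=True)

matches = hunt("events.jsonl", load_iocs("ioc_feed.txt"))
for score, event in matches[:10]:  # triage the highest-risk hits first
    print(score, event["host"], event["dest_ip"])
```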

As you prepare for your keynote at SANS 2026 on strengthening AI fluency in security, what’s one practical takeaway you’re excited to share, and why is it critical for professionals right now?

I’m really looking forward to sharing actionable ways to build AI fluency, and one key takeaway is starting with a tool audit in your own environment. Map out every security product you use—SIEMs, endpoint protection, whatever—and identify where AI is already making decisions, often without your full awareness. Then, take one of those tools and actively test its outputs against a known incident to see where it excels or fails. I’ll walk through a real-world example of doing this with a mail filtering system, showing how to spot overzealous flagging and adjust it with better data. This is critical right now because AI is embedded everywhere, silently shaping outcomes, and if we don’t understand its influence, we’re not truly in control of our security posture. With threats evolving faster than ever, owning that knowledge gap isn’t just a nice-to-have—it’s a necessity.
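
For readers who want to attempt that kind of audit themselves, a minimal sketch of the measurement step might look like this: replay a labeled set of past messages through the filter’s verdicts and quantify overzealous flagging. The verdict-file format is hypothetical.

```python
# Sketch: measure a mail filter's verdicts against known ground truth.
# The CSV layout is hypothetical.
import csv

def audit_mail_filter(verdicts_csv: str) -> None:
    """Compare filter verdicts to ground truth from a known incident."""
    fp = fn = good = bad = 0
    with open(verdicts_csv, newline="") as f:
        for row in csv.DictReader(f):  # columns: verdict, ground_truth
            if row["ground_truth"] == "legit":
                good += 1
                if row["verdict"] == "blocked":
                    fp += 1  # overzealous flagging
            else:
                bad += 1
                if row["verdict"] == "allowed":
                    fn += 1  # the misses that matter most
    print(f"False positive rate: {fp / good:.1%} of {good} legitimate messages")
    print(f"False negative rate: {fn / bad:.1%} of {bad} malicious messages")

audit_mail_filter("filter_verdicts.csv")
```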

What is your forecast for the role of AI in cybersecurity over the next few years?

Looking ahead, I see AI becoming even more deeply integrated into every layer of cybersecurity, from prevention to response. It’s not just going to be a feature in vendor tools; it’ll be the backbone of how we process and act on data at scale. However, I believe the challenge will be balancing automation with human oversight—AI will handle more, but the need for nuanced judgment won’t disappear. We’ll likely see a surge in custom-built AI utilities as teams realize vendor solutions can’t fully capture their unique environments. My hope is that within five years, security pros will be as fluent in directing AI as they are in configuring firewalls today. It’s an exciting shift, but it’ll demand a cultural change—embracing continuous learning and adaptation to stay ahead of adversaries who are also leveraging these tools.
