Malik Haidar stands at the intersection of human psychology and high-level network security, having spent years defending multinational infrastructures from the world’s most sophisticated hacking collectives. His deep understanding of how threat actors exploit organizational trust has made him a leading voice in combating the rise of social engineering. Today, we sit down with him to discuss the evolving tactics of the Scattered LAPSUS$ Hunters and the specific ways they are refining their approach to breach corporate perimeters. Our conversation explores the tactical shift in recruitment for voice phishing, the technical methods used to escalate privileges in cloud environments like Azure and Snowflake, and the critical defensive measures required to neutralize these persistent threats.
Threat groups are now offering $500 to $1,000 per call specifically to recruit women for vishing campaigns. How does this demographic shift manipulate help desk psychology, and what specific cues might an operator miss when confronted with a polished, non-traditional voice profile?
This recruitment shift is a cold, calculated move to exploit subconscious biases within the IT help desk environment. Traditionally, support staff are trained to be on high alert for aggressive or overly technical male voices that fit the “hacker” stereotype, but a polished, calm female voice can disarm an operator’s natural skepticism. When an attacker offers $500 to $1,000 per call, they aren’t just buying a voice; they are buying a performance designed to elicit a “helpful” response rather than a “security” response. Operators often miss subtle cues like the lack of background office noise or the slight hesitation that occurs when a caller is reading from a script because they are focused on solving the problem for a person who sounds professional and non-threatening. It’s a psychological pivot that bypasses traditional training, making the interaction feel like a routine service request rather than a high-stakes security breach.
Attackers frequently use pre-written scripts to convince technicians to reset passwords or install remote monitoring tools. What are the primary indicators of a scripted vishing attempt, and how can organizations standardize identity verification to neutralize these highly persuasive social engineering tactics?
The most glaring indicator of a script is the “too-perfect” delivery of technical jargon or a rigid adherence to a narrative even when the technician asks an off-script question. You might notice a caller repeating specific phrases verbatim or showing a strange lack of knowledge about their supposed department’s internal culture while perfectly naming the Remote Monitoring and Management (RMM) tools they want installed. To neutralize this, organizations must move beyond the “knowledge-based” verification—like asking for an employee ID—and implement strict out-of-band authentication protocols. We need to see help desks utilizing mandatory video verification or push-to-accept prompts sent to a previously registered corporate device before any password reset or tool installation is even discussed. Standardizing these hurdles ensures that no matter how persuasive the voice or the script is, the technical barrier remains impassable without a secondary, verified factor.
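To make that out-of-band gate concrete, here is a minimal sketch of a reset workflow that refuses to proceed without push-to-accept approval on a pre-registered corporate device. The `authorize_reset` helper, the device registry, and the `push_approve` callback are illustrative assumptions, not any vendor's help desk API:

```python
import secrets

# Pre-enrolled corporate devices keyed by employee ID (illustrative data)
REGISTERED_DEVICES = {"emp-1042": "device-token-abc"}

def authorize_reset(employee_id, push_approve):
    """Gate a password reset on out-of-band approval.

    push_approve(device_token, nonce) stands in for a real push-to-accept
    service; the reset proceeds only if the registered device approves.
    """
    device = REGISTERED_DEVICES.get(employee_id)
    if device is None:
        # No enrolled device: escalate to manual identity proofing, never reset
        return False
    nonce = secrets.token_hex(8)  # one-time challenge, never read aloud to the caller
    return push_approve(device, nonce)

# However persuasive the caller sounds, nothing happens without the enrolled device
assert authorize_reset("emp-1042", lambda d, n: True) is True
assert authorize_reset("emp-9999", lambda d, n: True) is False   # unknown identity
assert authorize_reset("emp-1042", lambda d, n: False) is False  # prompt declined
```

The key design choice is that the caller's voice never enters the decision: approval happens only on hardware the organization enrolled in advance.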
Techniques like MFA prompt bombing and SIM swapping allow actors to move laterally into environments like Azure or Snowflake. Once initial access is gained via a help desk call, what are the immediate steps for escalating privileges, and how can security teams better monitor for these lateral movements?
Once a threat actor gets that foot in the door through a help desk reset, their first move is often to establish a persistent presence by creating a new virtual machine or a secondary administrative account. In environments like Azure, they use the Graph API to quietly enumerate resources and look for over-privileged service accounts that can be hijacked. We’ve seen cases where they immediately target Snowflake databases or Outlook mailbox files to exfiltrate sensitive data before the organization even realizes a breach has occurred. Security teams need to prioritize monitoring for “impossible travel” alerts and unusual API calls, especially those coming from newly created identities. If an administrative privilege escalation happens within sixty minutes of a help desk interaction, it should trigger an automatic, high-priority isolation of that account until a manual review can be completed.
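The sixty-minute rule can be expressed as a simple correlation job over identity logs. The flat `(account, timestamp)` event shape below is an illustrative placeholder, not any specific SIEM schema:

```python
from datetime import datetime, timedelta

ESCALATION_WINDOW = timedelta(minutes=60)

def accounts_to_isolate(helpdesk_resets, escalation_events):
    """Return accounts whose privilege escalation follows a help desk reset
    within sixty minutes. Inputs are (account, timestamp) pairs."""
    resets_by_account = {}
    for account, ts in helpdesk_resets:
        resets_by_account.setdefault(account, []).append(ts)
    flagged = set()
    for account, ts in escalation_events:
        for reset_ts in resets_by_account.get(account, []):
            if timedelta(0) <= ts - reset_ts <= ESCALATION_WINDOW:
                flagged.add(account)  # auto-isolate pending manual review
    return flagged

t0 = datetime(2024, 5, 1, 9, 0)
resets = [("svc-admin", t0), ("jdoe", t0)]
escalations = [
    ("svc-admin", t0 + timedelta(minutes=45)),  # inside the window
    ("jdoe", t0 + timedelta(hours=3)),          # outside the window
]
assert accounts_to_isolate(resets, escalations) == {"svc-admin"}
```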
Cybercriminals are increasingly using residential proxies and legitimate tunneling tools like Ngrok or Teleport to blend into corporate traffic. How do these tools help attackers evade detection during data exfiltration, and what telemetry should network administrators prioritize to distinguish between authorized and unauthorized use?
By using residential proxy networks like Luminati (now Bright Data) or Oxylabs, attackers ensure their traffic originates from domestic, “clean” IP addresses that don’t trigger the usual geographical blocking or reputation-based alerts. When combined with legitimate tunneling tools like Ngrok or Teleport, the malicious traffic looks identical to standard administrative or developer activity, effectively hiding in plain sight. Network administrators must look beyond simple IP reputation and instead prioritize telemetry related to process-to-network correlations. For example, seeing a file-sharing service like gofile.io or mega.nz being accessed from a server that typically only communicates with internal databases is a massive red flag. You have to monitor the “behavior” of the connection—looking for persistent, low-and-slow data transfers to external endpoints that have no business being in your environment.
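As a rough sketch of that process-to-network thinking, the following flags file-sharing destinations reached from internal-only hosts and aggregates "low-and-slow" transfer volume per external endpoint. The host baseline, domain list, and 500 MB threshold are illustrative assumptions, not tuned detection rules:

```python
from collections import defaultdict

EXFIL_DOMAINS = {"gofile.io", "mega.nz"}  # file-sharing services seen in exfil
INTERNAL_ONLY_HOSTS = {"db-server-01"}    # hosts baselined to internal traffic only

def flag_exfil(conn_log, slow_threshold_mb=500):
    """conn_log: iterable of (host, dest_domain, bytes_out).

    Flags (1) any file-sharing destination reached from an internal-only host,
    and (2) cumulative low-and-slow transfers to a single external endpoint
    that exceed slow_threshold_mb in aggregate.
    """
    alerts = []
    totals = defaultdict(int)
    for host, dest, bytes_out in conn_log:
        if host in INTERNAL_ONLY_HOSTS and dest in EXFIL_DOMAINS:
            alerts.append(("exfil-endpoint", host, dest))
        totals[(host, dest)] += bytes_out
    for (host, dest), total in totals.items():
        if total > slow_threshold_mb * 1024 * 1024:
            alerts.append(("low-and-slow", host, dest))
    return alerts

log = [("db-server-01", "gofile.io", 10_000_000)] \
    + [("build-host", "unknown-cdn.example", 60 * 1024 * 1024)] * 10
alerts = flag_exfil(log)
assert ("exfil-endpoint", "db-server-01", "gofile.io") in alerts
assert ("low-and-slow", "build-host", "unknown-cdn.example") in alerts
```

In practice the baseline would come from learned traffic profiles rather than a static set, but the correlation logic is the same: judge the destination against what that host normally does.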
Many organizations are moving away from SMS-based authentication and toward auditing logs for administrative escalations that follow help desk interactions. What are the logistical challenges of implementing these hardened MFA policies, and what metrics best demonstrate the effectiveness of such a transition?
The primary logistical challenge is always user friction, as employees find SMS codes easier than hardware security keys or biometric authenticators. However, the risk of SIM swapping is so high now that the transition is no longer optional; it’s a survival requirement. When implementing these policies, the best metric to track is the “Mean Time to Detect” unauthorized access attempts—if your hardened MFA is working, you should see a spike in failed authentication logs rather than successful, suspicious logins. Additionally, auditing help desk logs against subsequent account changes is vital; if none of the “New User Created” events that follow help desk tickets turn out to be unauthorized, you know your verification process is holding firm. Success is ultimately measured by the absence of lateral movement following a social engineering attempt.
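That audit can be reduced to a single rate: the fraction of "New User Created" events that land within a window of a preceding help desk ticket. The 24-hour window and the flat timestamp lists are illustrative simplifications of what would normally be a join across ticketing and identity logs:

```python
from datetime import datetime, timedelta

AUDIT_WINDOW = timedelta(hours=24)

def ticket_correlation_rate(ticket_times, new_user_event_times):
    """Fraction of 'New User Created' events that follow a help desk ticket
    within AUDIT_WINDOW. Events in that bucket deserve individual review;
    a nonzero rate among *unauthorized* events means verification failed."""
    if not new_user_event_times:
        return 0.0
    correlated = sum(
        1
        for ev_ts in new_user_event_times
        if any(timedelta(0) <= ev_ts - t <= AUDIT_WINDOW for t in ticket_times)
    )
    return correlated / len(new_user_event_times)

t0 = datetime(2024, 5, 1, 9, 0)
tickets = [t0]
events = [t0 + timedelta(hours=2),   # follows a ticket: audit this one
          t0 + timedelta(days=5)]    # unrelated to any ticket
assert ticket_correlation_rate(tickets, events) == 0.5
```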
What is your forecast for the evolution of vishing and social engineering tactics in the IT sector?
I believe we are on the cusp of seeing “deepfake-as-a-service” become the standard tool for social engineering, where the $1,000 recruitment fees we see today will transition into subscriptions for AI-generated voice clones. We will see attackers move away from generic scripts toward highly personalized attacks that use data scraped from professional networks to mimic the exact tone and speaking style of specific executives or IT directors. The “human element” will become increasingly difficult to trust, forcing organizations to adopt a “Zero Trust” architecture where no voice, no matter how familiar or persuasive, is granted access without multi-layered, cryptographic verification. As groups like Scattered Spider continue to refine their ability to blend into legitimate traffic, the battle will shift from the perimeter to the identity layer, making identity governance the most critical pillar of any cybersecurity strategy.