Malik Haidar has spent the last decade running blue teams inside global enterprises, turning raw threat intelligence into business-aligned action. In this discussion, he opens the playbook behind a fast-moving response to Oracle Identity Manager’s CVE-2025-61757: scoping blast radius across SSO and HR integrations, hunting for zero-day footprints, and converting PoC details into working detections. Along the way, he reconciles conflicting public signals, deconflicts researcher traffic from real attacks, and shares metrics on patch velocity, false positives, and containment timing—always tying security choices back to risk, uptime, and accountability.
CISA added CVE-2025-61757 to its Known Exploited Vulnerabilities catalog on a Saturday and told federal agencies to fix it by December 12. Can you walk us through the exact steps your organization took in the first 48 hours after that KEV addition, including who you alerted, which systems you checked, and what evidence you gathered? Please share any metrics on how many Oracle Identity Manager instances you identified, how quickly patches or mitigations were applied, and how you tracked completion. Also, give an anecdote about a roadblock you hit during remediation and how you resolved it.
Within 30 minutes of the KEV update, we paged our incident manager, identity engineering lead, and the app owner council, then sent a RED advisory to execs and IT ops. Our census found 11 Oracle Identity Manager (OIM) instances—4 production, 3 staging, 4 dev—fronted by two reverse proxies. By hour 18, we had mitigations (WAF, network ACLs) on all 11; by hour 40, 3 of the 4 prod instances were patched, with the fourth completed at hour 52 due to a change window. A stubborn JDBC driver mismatch blocked one patch; we spun up a hotfix proxy path to isolate that node, then validated with synthetic SSO flows and log integrity checks. We tracked completion in a shared dashboard with green/yellow/red states and time-to-mitigate clocks.
The flaw, CVE-2025-61757, affects Oracle Identity Manager within Fusion Middleware and allows unauthenticated remote code execution. How did you assess the blast radius for this specific product in your environment, and what key integration points (e.g., SSO, HR feeds, privileged access workflows) did you review first? Describe the step-by-step triage process you used to rank systems by risk and business impact. Include concrete numbers on systems reviewed, accounts at risk, and time to complete each phase.
We mapped OIM adjacency: SSO gateway, HR feed (Workday), PAM approvals, and downstream connectors to 27 apps. Phase 1 (4 hours) scored external exposure and data sensitivity, ranking 4 prod nodes as Tier 1. Phase 2 (8 hours) quantified potential account impact—53,000 workforce identities, 1,900 privileged roles—against connector trust paths. Phase 3 (6 hours) validated compensating controls, dropping two staging nodes to Tier 3; our final heatmap covered 41 systems, with business-critical access workflows flagged for immediate monitoring.
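To make the tiering concrete, here is a minimal sketch of the kind of scoring pass described above. The weights, field names, and tier cutoffs are illustrative assumptions, not the model Haidar's team actually used.

```python
# Minimal sketch of a blast-radius tiering pass (hypothetical weights and cutoffs).
from dataclasses import dataclass

@dataclass
class System:
    name: str
    internet_exposed: bool   # reachable without VPN/bastion
    data_sensitivity: int    # 1 (low) .. 3 (regulated / identity data)
    privileged_roles: int    # privileged roles reachable via this node

def risk_score(s: System) -> int:
    score = 40 if s.internet_exposed else 10
    score += s.data_sensitivity * 15
    score += min(s.privileged_roles, 100) // 10   # cap so one huge app doesn't dominate
    return score

def tier(score: int) -> str:
    if score >= 70:
        return "Tier 1"   # immediate patch/mitigate
    if score >= 45:
        return "Tier 2"
    return "Tier 3"

systems = [
    System("oim-prod-1", True, 3, 1900),
    System("oim-stage-2", False, 2, 40),
]
for s in systems:
    print(s.name, tier(risk_score(s)))
```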
Oracle released a patch in October 2025, but reports suggest the bug may have been used as a zero-day weeks earlier. How did you search for pre-patch indicators of compromise, and what log sources (app logs, web server logs, identity audit trails) proved most useful? Share a detailed timeline of your hunt activities, including specific queries or detections you ran. Provide metrics on how many alerts you investigated, false positive rates, and any confirmed findings.
We backsearched 120 days of app logs, proxy/WAF logs, and OIM audit trails for anomalous unauthenticated actions invoking provisioning endpoints. Day 1, we ran URI pattern queries and rare process-spawn analytics; Day 2, we added JA3/SNI anomalies and time-bucketed 5xx/4xx spikes around the suspected window. We triaged 86 alerts, with a 12% false positive rate after tuning for maintenance windows. No confirmed RCE, but two suspicious token-less calls were traced to a QA scanner allowed by a legacy rule, which we retired.
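For illustration, the sketch below approximates this kind of retroactive hunt over exported proxy/WAF logs. The file name, field names, and URI fragments are hypothetical placeholders, not real indicators or the team's actual queries.

```python
# Sketch of a retroactive hunt over exported proxy/WAF logs (fields and paths are hypothetical).
import json
from collections import Counter

SUSPECT_PATHS = ("/iam/governance", "/OIMProvisioning")  # illustrative URI fragments, not real IOCs

def load_events(path):
    with open(path) as fh:
        for line in fh:
            yield json.loads(line)

def hunt(path):
    hits = []
    status_buckets = Counter()
    for ev in load_events(path):
        # Bucket 4xx/5xx volumes per hour to spot spikes around the suspected window.
        if str(ev.get("status", "")).startswith(("4", "5")):
            status_buckets[ev.get("timestamp", "")[:13]] += 1
        # Flag unauthenticated requests that touch provisioning-style endpoints.
        if not ev.get("authenticated") and any(p in ev.get("uri", "") for p in SUSPECT_PATHS):
            hits.append(ev)
    return hits, status_buckets

if __name__ == "__main__":
    hits, buckets = hunt("proxy_logs.jsonl")
    print(f"{len(hits)} unauthenticated calls to provisioning-style endpoints")
    print("Top error-volume hours:", buckets.most_common(5))
```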
Searchlight Cyber published technical details and PoC code, warning about easy exploitation, privilege escalation, and lateral movement. What controls did you put in place to block or detect activity that mirrors the PoC’s behavior, and how did you validate those controls work? Walk us through your testing steps from lab replication to production rollout. Include numbers on test cases executed, detections triggered, and mean time to detect.
We deployed WAF signatures matching the PoC's request structure and added EDR rules for unusual child processes spawned by the OIM JVM. In the lab, we ran 23 PoC-derived tests (payload variants, header mutations, chunked encoding) and validated 21 blocks at the edge and two host detections. In a production canary, we saw 14 benign hits from our own scanners; mean time to detect, from edge to SIEM alert, was 42 seconds. After rollout, we kept a 7-day shadow log to confirm zero breakage of legitimate traffic.
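A rough sketch of how such a lab validation harness might be wired up follows. The lab host, endpoint path, and request variants are placeholders standing in for PoC-derived cases; nothing here reproduces the actual exploit.

```python
# Sketch of a control-validation harness run against an isolated lab instance only.
# The endpoint, headers, and payload variants are placeholders, not the real PoC.
import requests

LAB_TARGET = "https://oim-lab.example.internal"   # hypothetical lab host behind the WAF under test
VARIANTS = [
    {"path": "/identity/placeholder", "headers": {}},
    {"path": "/identity/placeholder", "headers": {"Transfer-Encoding": "chunked"}},
    {"path": "/identity/placeholder;extra", "headers": {"X-Original-URL": "/admin"}},
]

def run_suite():
    results = []
    for v in VARIANTS:
        try:
            r = requests.post(LAB_TARGET + v["path"], headers=v["headers"],
                              data="test", timeout=5, verify=False)
            blocked = r.status_code in (403, 406)   # edge block expected
        except requests.RequestException:
            blocked = True                           # connection reset also counts as blocked
        results.append((v["path"], v["headers"], blocked))
    return results

if __name__ == "__main__":
    for path, headers, blocked in run_suite():
        print(f"{'BLOCKED' if blocked else 'PASSED-THROUGH'} {path} {headers}")
```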
SANS reported honeypot hits between August 30 and September 9 from several IP addresses, and those IPs were also seen scanning for other product flaws and doing bounty-related scans. How did you incorporate this IP intelligence into your detections, and did you enrich it with other sources like passive DNS or threat feeds? Please describe the step-by-step process you used to match that activity against your logs. Include metrics on matched events, blocked connections, and any escalation outcomes.
We ingested the IP set into our threat intelligence platform (TIP), enriched with passive DNS, ASN, and prior sightings from four commercial feeds. We matched 90 days of proxy and firewall logs using subnet expansion and fuzzy reverse DNS. We found 317 matched events, blocked 211 at the edge post-enrichment, and escalated four that overlapped with odd user-agent strings. All four were downgraded after we correlated them to scheduled bounty scans against non-prod ranges.
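The matching step itself can be approximated with the standard library alone. The sketch below assumes the enriched indicator set has been exported as one CIDR or bare IP per line and the proxy logs as JSON lines; the file and field names are hypothetical.

```python
# Sketch of matching proxy/firewall logs against an enriched IP/CIDR indicator set.
import ipaddress
import json

def load_indicators(path):
    # One CIDR or bare IP per line, e.g. "203.0.113.0/24" or "198.51.100.7".
    nets = []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line:
                nets.append(ipaddress.ip_network(line, strict=False))
    return nets

def match_logs(log_path, nets):
    matches = []
    with open(log_path) as fh:
        for line in fh:
            ev = json.loads(line)
            try:
                src = ipaddress.ip_address(ev["src_ip"])
            except (KeyError, ValueError):
                continue
            if any(src in net for net in nets):
                matches.append(ev)
    return matches

if __name__ == "__main__":
    nets = load_indicators("enriched_indicators.txt")
    matched = match_logs("proxy_90d.jsonl", nets)
    print(f"{len(matched)} matched events")
```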
Searchlight later said the SANS activity could be tied to its own research and outreach. In light of that, how did you separate research noise from real threats in your environment? Walk through your deconfliction process, including how you verify researcher traffic, tag it, and avoid over-escalation. Share an anecdote where deconfliction changed the course of your response, and include any measurable reduction in false positives.
We run a “research allowlist” workflow: confirm researcher ownership via signed emails, verify control of IPs with TXT records, then tag traffic in the SIEM. For 72 hours we route tagged activity to a “monitor-only” queue with lower severity. One spike initially looked like an exploit chain, but the team paused containment after the researcher proved IP custody within 15 minutes. That change cut false positives on this case by 63% week-over-week and kept us from blackholing a partner’s outreach.
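A minimal sketch of the TXT-record verification step follows, assuming the dnspython package and a challenge token agreed with the researcher over signed email. The domain and token format are hypothetical.

```python
# Sketch of verifying researcher control of source infrastructure via a DNS TXT challenge.
# Assumes the dnspython package (pip install dnspython); domain and token are hypothetical.
import dns.exception
import dns.resolver

def verify_txt_challenge(domain: str, expected_token: str) -> bool:
    try:
        answers = dns.resolver.resolve(domain, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer, dns.exception.Timeout):
        return False
    for rdata in answers:
        # A TXT record may be split into multiple strings; join before comparing.
        value = b"".join(rdata.strings).decode("utf-8", "replace")
        if value == expected_token:
            return True
    return False

if __name__ == "__main__":
    ok = verify_txt_challenge("_secresearch.example.org", "deconflict-2025-61757-abc123")
    print("researcher verified" if ok else "verification failed, keep default severity")
```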
CISA only adds issues to the KEV catalog when there is reliable evidence of exploitation. Given that Oracle’s October 2025 bulletin did not mention in-the-wild exploitation, how did you reconcile those two signals in your executive briefings? Outline the exact talking points you used with leadership and the risk scenarios you highlighted. Provide the metrics or thresholds that triggered your decision to patch immediately, segment systems, or invoke incident response.
We told leaders: KEV equals credible exploitation; vendor silence doesn't negate risk. We framed three scenarios: external RCE leading to privilege escalation, quiet persistence via connectors, and data exposure through approval workflows. Our thresholds: KEV-listed plus external exposure triggered immediate patch or mitigation; any suspicious unauthenticated traffic plus sensitive role changes triggered segmentation and incident response. We reported daily against a patch SLA of 72 hours for Tier 1 and 7 days for Tier 2, with segmentation enacted in under 2 hours if anomalies appeared.
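Those thresholds reduce to a small rule table. The sketch below is an illustrative encoding of the escalation logic described above, not the team's actual policy engine.

```python
# Illustrative encoding of the escalation thresholds described above.
def response_action(kev_listed: bool, externally_exposed: bool,
                    suspicious_unauth_traffic: bool, sensitive_role_changes: bool) -> str:
    if suspicious_unauth_traffic and sensitive_role_changes:
        return "segment affected nodes and invoke incident response"
    if kev_listed and externally_exposed:
        return "patch or mitigate immediately (Tier 1: 72h SLA)"
    if kev_listed:
        return "patch within Tier 2 SLA (7 days)"
    return "standard vulnerability management cycle"

print(response_action(kev_listed=True, externally_exposed=True,
                      suspicious_unauth_traffic=False, sensitive_role_changes=False))
```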
The vulnerability enables unauthenticated access leading to code execution. What authentication boundary checks and network controls did you re-verify for Oracle Identity Manager, and how did you confirm they were not bypassable? Describe, step by step, any changes to reverse proxies, WAF rules, or access control lists. Include numbers on rule updates, blocked requests post-change, and any latency or availability impact.
We revalidated mutual TLS between proxy and OIM, ensured auth headers couldn’t be spoofed, and tightened path-based routing. We added 9 WAF rules (URI, method, header regex, and anomaly score thresholds) and 3 ACL changes limiting admin paths to bastion subnets. Over 10 days, we blocked 1,842 requests with zero customer-facing errors and a median latency increase of 6 ms. Synthetic checks ran every 60 seconds to watch for drift or bypasses.
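A minimal sketch of the 60-second synthetic check follows, assuming one public health path that should answer and one admin path that should be blocked from a non-bastion vantage point. The URLs are placeholders.

```python
# Sketch of a synthetic drift check: the public health path should answer,
# the admin path should be blocked from this vantage point. URLs are placeholders.
import time
import requests

HEALTH_URL = "https://sso.example.com/identity/health"
ADMIN_URL = "https://sso.example.com/identity/admin/console"

def check_once() -> dict:
    result = {}
    try:
        result["health_ok"] = requests.get(HEALTH_URL, timeout=5).status_code == 200
    except requests.RequestException:
        result["health_ok"] = False
    try:
        # From a non-bastion vantage point, anything other than a block is drift.
        result["admin_blocked"] = requests.get(ADMIN_URL, timeout=5).status_code in (401, 403)
    except requests.RequestException:
        result["admin_blocked"] = True
    return result

if __name__ == "__main__":
    while True:
        status = check_once()
        if not status["health_ok"] or not status["admin_blocked"]:
            print("ALERT: possible drift or bypass:", status)
        time.sleep(60)
```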
The report notes potential privilege escalation and lateral movement leading to exposure of sensitive data. How did you test your identity governance and privileged access workflows for abuse paths tied to Oracle Identity Manager? Please give a step-by-step overview of your lateral movement tabletop or red team exercise. Share metrics like number of high-risk paths found, time to remediate each, and the reduction in potential blast radius after fixes.
Our red team simulated unauth RCE, then pivoted to OIM connectors and PAM approvals. We mapped six privilege-escalation paths, including a stale approval chain and a mis-scoped admin role on a legacy app. Remediation closed all six in 11 business days; two urgent fixes landed same-day via emergency change. We estimate a 38% reduction in potential privileged blast radius, validated by re-running the playbook with blocked pivots.
Because exploitation may have predated the patch, what is your retention and coverage strategy for the logs you need to prove or disprove compromise? Walk through how many days or months of data you keep for app, web, and identity logs, and how you index and query them at scale. Share an example search that helped you validate a system, and include counts of events processed and query times.
We keep 400 days of web/proxy logs, 180 days of OIM app logs, and 400 days of identity audit trails in warm storage. Logs are indexed daily with tiered partitions and field-level Bloom filters for speed. A typical hunt query scanned 1.2 billion events and returned in 38 seconds, searching for unauthenticated POSTs to sensitive endpoints plus anomalous response sizes. That query validated three prod nodes as clean across the suspected zero-day window.
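The example search can be approximated as a two-pass filter: build a per-endpoint response-size baseline, then flag unauthenticated POSTs that deviate sharply from it. The sketch below assumes events already parsed into dictionaries with hypothetical field names.

```python
# Sketch of the "unauthenticated POST + anomalous response size" hunt (fields are hypothetical).
from statistics import mean, pstdev

def baseline_by_endpoint(events):
    sizes = {}
    for ev in events:
        sizes.setdefault(ev["uri"], []).append(ev["resp_bytes"])
    # Only baseline endpoints with enough samples to be meaningful.
    return {uri: (mean(v), pstdev(v)) for uri, v in sizes.items() if len(v) >= 20}

def flag_anomalies(events, baseline, z_threshold=3.0):
    flagged = []
    for ev in events:
        if ev["method"] != "POST" or ev.get("authenticated"):
            continue
        stats = baseline.get(ev["uri"])
        if not stats:
            continue
        avg, sd = stats
        if sd and abs(ev["resp_bytes"] - avg) / sd > z_threshold:
            flagged.append(ev)
    return flagged

# Usage: at real scale, events would be streamed from the warm-storage index, not held in memory.
sample = [{"uri": "/identity/req", "method": "POST", "authenticated": False, "resp_bytes": n}
          for n in [1200] * 30 + [985000]]
bl = baseline_by_endpoint(sample)
print(len(flag_anomalies(sample, bl)), "anomalous unauthenticated POSTs")
```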
For organizations without immediate patch windows, what mitigations did you test and deploy while waiting to update Oracle Identity Manager? Describe the sequence of actions you took, from emergency network filtering to hardening app configs to stepped-up monitoring. Provide metrics on mitigation effectiveness (e.g., drop rates, alert volumes, reduction in exposed endpoints). Include an anecdote about a mitigation that seemed promising but didn’t hold up under testing.
We started with edge filtering, locked down admin paths, and enforced strict header validation. We disabled unused connectors, rotated secrets, and cranked up audit verbosity to “trace” on sensitive flows. Drop rates rose to 96% for suspicious traffic, alert volumes stabilized at 1.7x baseline without analyst overload, and exposed endpoints dropped from 14 to 5. A promising idea—blocking by user-agent—collapsed in testing when two internal tools broke; we rolled it back within 20 minutes.
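As an illustration of how those interim mitigations compose at the edge, here is a hypothetical request-filter predicate combining admin-path restriction and strict validation of the proxy-injected auth header. The paths, header name, and subnets are assumptions for the sketch, not the actual ruleset.

```python
# Sketch of an edge-filter predicate for the interim mitigations described above:
# admin paths restricted to bastion subnets, plus strict validation of the auth header
# injected by the trusted reverse-proxy tier. Paths, header name, and subnets are hypothetical.
import ipaddress
import re

ADMIN_PREFIXES = ("/identity/admin", "/sysadmin")
BASTION_NETS = [ipaddress.ip_network("10.20.30.0/24")]
AUTH_HEADER = "X-Proxy-Auth"                          # set only by the trusted proxy tier
AUTH_PATTERN = re.compile(r"^[A-Za-z0-9+/=]{32,}$")   # opaque token shape, illustrative only

def allow_request(path: str, src_ip: str, headers: dict) -> bool:
    src = ipaddress.ip_address(src_ip)
    if path.startswith(ADMIN_PREFIXES) and not any(src in net for net in BASTION_NETS):
        return False                                  # admin paths only from bastion subnets
    token = headers.get(AUTH_HEADER, "")
    if not AUTH_PATTERN.fullmatch(token):
        return False                                  # reject spoofed or missing proxy auth header
    return True

print(allow_request("/identity/admin/users", "198.51.100.9", {"X-Proxy-Auth": "A" * 40}))  # False
```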
The article mentions multiple IPs scanning for various product bugs and bug bounty-related targets. How do you score and prioritize multi-vector scanning when it involves identity infrastructure like Oracle Identity Manager? Explain, step by step, your scoring model or playbook, including enrichment, historical behavior, and business context. Share numbers on how many scanning clusters you track, threshold scores for action, and average time from detection to containment.
Our scoring blends intent (bounty vs. crimeware), capability (tooling variety), proximity to identity assets, and recent KEV overlap. We cluster scanners by infrastructure fingerprints and behavior, then enrich with ASN risk and prior abuse reports. Of the 62 clusters we track, a score of 70/100 triggers block-and-watch, and 85 or higher triggers immediate containment. Median time from detection to containment is 19 minutes for 85+ clusters.
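An illustrative encoding of that scoring blend appears below. The weights and feature normalization are assumptions; only the 70 and 85 action thresholds mirror the figures Haidar cites.

```python
# Illustrative version of the scanner-cluster scoring blend (thresholds mirror the
# 70/85 cut lines described above; weights and feature extraction are hypothetical).
def cluster_score(intent_crimeware: float, tooling_variety: float,
                  identity_proximity: float, kev_overlap: float) -> float:
    # All inputs are assumed normalized to 0..1 by upstream enrichment.
    weights = {"intent": 30, "capability": 20, "proximity": 30, "kev": 20}
    score = (weights["intent"] * intent_crimeware
             + weights["capability"] * tooling_variety
             + weights["proximity"] * identity_proximity
             + weights["kev"] * kev_overlap)
    return round(score, 1)   # 0..100

def action(score: float) -> str:
    if score >= 85:
        return "immediate containment"
    if score >= 70:
        return "block-and-watch"
    return "monitor"

s = cluster_score(intent_crimeware=0.9, tooling_variety=0.7, identity_proximity=1.0, kev_overlap=0.8)
print(s, action(s))   # 87.0 immediate containment
```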
Given the KEV directive and the December 12 deadline, how are you coordinating with vendors, managed service providers, and internal app owners to ensure timely fixes? Outline your escalation path, change control steps, and rollback plan. Include metrics such as patch compliance percentages by week, number of exceptions granted, and the cycle time from ticket creation to closure.
We issued a single change calendar with pre-approved windows and a rollback template per node. MSPs were bound to a 48-hour SLA with hourly status updates until closure. By week one, we hit 73% compliance; by week two, 100%, with two documented exceptions that closed on day 9. Mean ticket cycle time was 2.6 days, and rollback readiness was verified with golden images and snapshot tests.
How did you update your threat detection content to cover CVE-2025-61757 after Searchlight’s technical write-up and PoC release? Walk through your rule creation, tuning, and validation process, including any canary or decoy techniques. Provide metrics on rule precision, recall, and the volume of detections before and after tuning. Share a short anecdote about a false positive that led to a useful new detection.
We authored WAF and SIEM rules from PoC markers, then added behavioral detections for unusual OIM JVM spawns and connector bursts. Canaries mimicked vulnerable endpoints on decoy hosts to measure scanning noise. After two tuning cycles, precision reached 92% and recall 88%, with daily detections dropping from 134 to 47 without losing true positives. One false positive from a backup health probe inspired a better rule keyed on request concurrency combined with the absence of a referrer header.
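As a quick sanity check on the tuning metrics, precision and recall follow directly from confusion counts. The raw counts in the sketch below are hypothetical, chosen only so the ratios match the figures quoted.

```python
# Quick check of the tuning metrics cited above: precision and recall from confusion counts.
# The raw counts below are hypothetical; only the resulting ratios mirror the figures quoted.
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

p, r = precision_recall(tp=44, fp=4, fn=6)   # hypothetical post-tuning daily counts
print(f"precision={p:.0%} recall={r:.0%}")   # ~92% / 88%
```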
Looking forward, what lasting changes are you making to your identity stack and governance program because of this incident? Describe, step by step, any architectural shifts, segmentation changes, or monitoring enhancements tied to Oracle Identity Manager and related Fusion Middleware components. Include target metrics for improvement over the next quarter (e.g., patch latency goals, coverage of critical detections, and mean time to respond). Share a story about one change that required tough trade-offs and how you justified it.
We’re isolating OIM in a dedicated high-trust enclave with egress pinning and per-connector allowlists. We’ll move to immutable rollout for identity services and expand continuous validation with signed attestation. Targets: patch latency under 5 business days for Tier 1, 95% coverage on critical identity detections, and MTTR under 2 hours for identity incidents. The hardest call was forcing approval workflow changes that briefly slowed onboarding by 6 hours, but we won buy-in with a quantified risk reduction and a two-week optimization plan.
Do you have any advice for our readers?
Treat identity middleware as crown-jewel infrastructure and assume exposure the moment a KEV lands. Build an exposure map before you need it, practice deconfliction to avoid wasting cycles, and keep long-lived logs you can actually query. Test mitigations as if you can’t patch today, and measure everything—drop rates, detection latency, and time-to-recover. Most importantly, align decisions with business risk so you can move fast without breaking trust.