HexStrike AI, a new AI-driven security tool meant to assist red teams and researchers in testing digital defenses, is being exploited by cybercriminals. Originally intended to streamline tasks like reconnaissance and vulnerability discovery, the open-source platform connects to more than 150 cybersecurity tools and features a suite of AI agents trained in exploit development and attack chain analysis.
Check Point researchers have flagged alarming misuse of the tool in the wild. Threat actors are not only using HexStrike AI to target recently disclosed vulnerabilities—including those affecting Citrix systems—but are also trading access to compromised systems, including NetScaler instances, on cybercrime forums. In healthcare environments where Citrix solutions are often embedded in EHR and remote access systems, this trend represents a serious threat.
Because HexStrike AI automates exploitation—retrying until successful and dramatically reducing the time between vulnerability disclosure and attack—healthcare IT teams face added urgency. Automation also helps adversaries parallelize attacks across networks, potentially leading to multiple breaches within hours of a CVE going public.
Sophos and others have documented similar abuse of legitimate tools like Velociraptor, used in attacks to deploy secondary malware payloads. With hospitals and clinics relying heavily on third-party monitoring and diagnostic platforms, the risk of red-team tools being turned against defenders is no longer theoretical—it’s operational.
Adding to this risk, recent research has warned that AI-driven tools such as PentestGPT are vulnerable to prompt injection attacks. A compromised AI security agent could backfire, giving attackers unauthorized access to hospital systems under the guise of legitimate testing.
The rapid weaponization of AI security tools like HexStrike AI demands a shift in posture. Healthcare IT leaders should prioritize patch management, AI tool auditing, and the implementation of strict access controls. With lives on the line, it’s critical to treat every vulnerability—and every red-teaming tool—as a potential vector for attack until proven otherwise.