The emergence of generative AI and large language models (LLMs) has fundamentally shifted the cybercrime landscape. What once required technical prowess or insider access can now be automated, personalized, and deployed at scale—by anyone with an internet connection. AI is enabling increasingly sophisticated attacks, from deepfake impersonation and voice fraud to automated social engineering and vulnerability discovery.
This panel brings together experts to explore the tools, practices, and collaborations needed to defend against AI-driven cybercrime. With an eye toward practical implications, panelists will consider the tradeoffs of using AI for defense, strategies for countering insider threats, and the urgency of public-private coordination and policy levers.