The advancement of AI continues to shape corporate strategy as models and use cases mature, moving from beta-phase testing to full deployment in live functions and applications. However, AI models and applications can also increase cyber risk, exposing organizations to new security and privacy vulnerabilities that attackers will seek to exploit. Beyond the cyber threat, poorly implemented security and privacy governance in AI use cases can incur regulatory fines and even the loss of a license to operate.
The implementation guidelines of NIS2, DORA, and the EU AI Act will all push businesses to establish robust AI governance practices and to strengthen cyber risk and cyber resilience programs in order to achieve and maintain compliance. AI is also being used by cybercriminals to enhance their capabilities, raising new challenges for security teams. Finally, security vendors continue to develop their own in-house AI models, layering new and innovative features into security toolsets to augment security systems, such as automation in security operations, applied AI models in threat detection and incident response, and more.
Join IDC's analysts and security vendor thought leaders to learn about the latest trends in European cybersecurity and the future of security in an AI-driven world.