AI-based cyberattacks are on the rise, and AI security now tops the priority list for chief information security officers (CISOs)
A new report from cybersecurity firm Team8 reveals that one in four CISOs has experienced an AI-generated attack on their company's network in the past year. AI has risen to the top of CISOs' priority lists, surpassing traditional concerns such as vulnerability management, data loss prevention, and third-party risk.
The report suggests that the true number of companies targeted by AI-powered attacks may be higher due to the difficulty in detecting such threats. AI-powered phishing and malware development are among the concerns raised in the report.
CISOs are focusing on securing AI-driven agents within their environments and ensuring employees’ use of AI tools complies with security and privacy policies. This reflects concerns not only about external AI attacks but also risks arising from internal AI adoption.
Increasingly, CISOs are deploying AI-powered risk assessment tools to automate and accelerate first- and third-party risk reviews, moving from static periodic checks to continuous, adaptive risk management. However, integration challenges and the need for transparency and alignment with existing systems remain hurdles.
The report also highlights the unintended security consequences of companies' own use of AI as a significant worry for executives. At the same time, executives expect AI to replace humans in areas such as penetration testing, third-party risk assessments, reviews of user access requests, and threat modeling.
Under the allow-listing approach, used by almost half of companies, employees must get permission before using particular AI tools; this could cause friction with non-security executives eager to expand their firms' AI use. Demand for effective allow-by-default controls is acute, as security teams grapple with shadow AI usage and the absence of enterprise-grade governance frameworks.
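As a rough illustration (not from the report, and with purely hypothetical tool names), the difference between the two governance models comes down to the default decision when a tool is not explicitly listed:

```python
# Hypothetical sketch of the two AI-tool governance models.
# Tool names and policy sets are illustrative only.

ALLOWED_TOOLS = {"approved-chatbot", "approved-code-assistant"}  # allow-list
BLOCKED_TOOLS = {"unvetted-scraper"}                             # block-list

def allow_list_policy(tool: str) -> bool:
    """Deny by default: only explicitly approved tools may be used."""
    return tool in ALLOWED_TOOLS

def allow_by_default_policy(tool: str) -> bool:
    """Allow by default: anything not explicitly blocked is permitted."""
    return tool not in BLOCKED_TOOLS
```

Under the first model, every new tool generates a permission request (the friction described above); under the second, employees can adopt new tools freely and security teams intervene only for known-bad ones, which is why effective allow-by-default controls are in demand.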
For half of CISOs, reducing employee headcount is a major factor in their experimentation with AI-powered security operations centers (SOCs). Nearly eight in 10 expect SOC roles to be the first positions replaced by AI.
Securing AI agents is a major concern for 37% of respondents, while 36% cite ensuring that employees' use of AI tools conforms to security and privacy policies.
Nearly seven in 10 companies already use AI agents, and another 23% plan to deploy them next year. CISOs are under pressure to enable enterprise-wide AI adoption pushed by corporate boards, while simultaneously mitigating evolving AI risks in a fast-moving, immature control environment.
AI agents could "unlock expert-level capabilities across a broader surface area" in penetration testing and threat modeling, fields facing a major workforce shortage. In 2025, CISOs prioritize managing AI-generated attacks and integrating AI into SOCs as their most pressing challenges, balancing proactive defense with enabling safe AI adoption across the enterprise.