No AI visibility, no confidence

AI PULSE SURVEY | VOL 4 | 10 min read

Blind spots don't just hide AI threats; they also erode confidence in controls. IT leaders are sounding the alarm on AI threats, but the C-suite isn't aligned. Meanwhile, shadow AI is prevalent in many companies, leaving leadership to make critical decisions with only part of the picture.

What the AI Pulse Survey found about AI cyber risk:
- AI is increasing cyber risk faster than leadership decision-making is adapting.
- IT leaders perceive significantly higher AI-driven threat escalation than the C-suite and boards.
- Implementing more stringent security standards is a key priority for managing third-party embedded AI risk.
- Organizations with formal AI governance frameworks report greater visibility and control.
- Misaligned risk perception creates blind spots that delay action.

Peer insights on AI cyber & resilience: Vol. 4 key findings

Where AI risk hides

The survey reveals a consistent pattern: leaders are making AI decisions with incomplete visibility. These insights trace where risk is misunderstood, where confidence breaks down, and where leaders regain control.

Leadership sees less than IT: perception gaps have real consequences

Survey question: To what degree, if at all, has AI affected the sophistication and frequency of cyberattacks (e.g., deepfakes, automated phishing) targeting your organization?

45% of IT leaders believe AI has increased cyber risk significantly, versus 30% of C-suite executives and board members. This perception gap can delay critical decisions, lead to under-testing of AI controls, and leave organizations exposed to threats that leadership may underestimate. When risk perceptions aren't aligned, blind spots persist.
Scale doesn't mean visibility: size doesn't beat blind spots

Survey question: How would you describe your organization's visibility into the specific AI tools (both authorized and unauthorized) currently being used by employees?

Unmanaged AI is a widespread issue across organizations of all sizes. When leaders don't see their own blind spots, they miss the shared urgency of detecting threats. The lack of transparency allows unchecked use of unapproved AI tools and extensions, the normalization of inconsistent consent policies, and, potentially, a decline in data protection standards.

In this survey:
- Small organizations are those with less than $100 million in revenue.
- Medium-sized organizations are those with revenue between $100 million and $5 billion.
- Large organizations are those with more than $5 billion in revenue.

When AI runs ahead of rules: why formal frameworks matter

Frameworks align with higher assurance in controls. Organizations with formal AI governance frameworks report stronger visibility and greater confidence in their security controls. Governance provides structure, accountability, and clarity, making it a critical foundation for managing AI risk effectively.

Invisible AI inside vendors: third-party embedded AI is where visibility is won or lost

Survey question: What is your organization's top priority in managing risks posed by embedded AI in third-party vendor software?

As vendors embed AI into everyday tools, organizations are losing visibility into where and how AI operates.
Strengthening vendor governance, through tighter security standards, training, and contractual controls, is now essential to managing external AI risk.

You can't defend what you can't see: invest in enablers

Organizations that feel most confident in managing AI security risks are those that invest most in concrete capabilities that convert intent ("we take AI risk seriously") into evidence ("we can see, govern, and defend it"). A recap of those capabilities, or enablers:

- Formal AI governance framework: enables clear acceptable-use rules, ownership, accountability, and enforceable guardrails across the enterprise, so AI doesn't sprawl faster than controls.
- AI tool monitoring: you can't manage what you can't see. Investing in monitoring capabilities allows your organization to detect threats early, especially shadow AI, while strengthening compliance (including data protection) and proving that controls are working.
- Organizational readiness and resilience: reduces human-driven failures and builds consistency in how people work with AI.
- Using AI to fight AI: employing AI in the security stack means faster detection of cyberattacks, better pattern recognition, and improved response to AI-accelerated threats.
- Vendor controls for embedded AI: closes a growing blind spot as AI features proliferate inside SaaS and third-party platforms. Where AI is hidden in the stack, more stringent vendor security standards and AI-specific training are crucial.

FAQs

What is "shadow AI" and why is it a risk to organizations?

Shadow AI refers to AI tools or features used inside a company without formal approval or oversight. It is a risk because these unapproved uses create blind spots: you can't manage or secure what you don't know about. In Protiviti's survey, roughly two-thirds of companies said employees have used AI without proper oversight, leading to gaps in controls.
Such hidden AI can bypass standard security measures and compliance rules, leaving leadership with an incomplete picture of the organization's risk exposure.

Why do organizations lack visibility into AI usage?

Many companies lack visibility because AI adoption often outpaces governance. New AI tools are deployed across different teams faster than they can be tracked centrally. The survey shows that AI's federated adoption, where business units adopt AI independently, and decentralized IT environments contribute to this blind spot. Complex organizations (multiple divisions, M&A activity, etc.) often end up with siloed systems that make it hard to monitor all AI usage. In fact, almost half of the large enterprises surveyed admitted they don't have full insight into which AI tools employees are using.

How is AI increasing cybersecurity threats?

AI is escalating cyber threats by making attacks more frequent and sophisticated. Malicious actors use AI to automate phishing, craft ultra-realistic deepfakes, and scan for vulnerabilities faster, essentially supercharging attacks beyond human scale. Protiviti's research highlights that nation-state groups and cybercriminals are aggressively employing AI to exploit weaknesses in corporate systems and supply chains. One example: deepfake-based scams are estimated to have surged 3,000% in a single year in North America. In short, AI lets attackers increase both the speed and the realism of threats, challenging organizations to bolster their defenses accordingly.

How can organizations improve visibility into AI usage across the enterprise?

Organizations can improve visibility by establishing enterprise-wide AI governance and tracking. In practice, that means creating a formal AI governance framework that inventories all AI applications, defines approval processes, and implements technical monitoring of AI activity. Protiviti's survey calls a formal governance framework the most effective solution to the visibility problem.
Only about four in ten companies have one in place today, but those that do report much better insight into where AI is being used and greater confidence that their controls are keeping up with AI-driven threats. In essence, to avoid flying blind with AI, organizations need to track it like any other critical asset, with clear policies, ownership, and technical oversight covering every corner of the enterprise.

How does AI in vendor tools create blind spots?

AI built into vendor software can become a hidden blind spot because it often operates outside a company's direct oversight. When third-party platforms embed AI features (for data analysis, automation, etc.), the client organization may not fully see or control how those AI components work. The survey indicates that many companies struggle with this: vendors are adding AI faster than customers can track it. To address the risk, 32% of organizations said their top priority is tightening vendor security standards around AI, and another 31% are focusing on AI-specific training for their teams to better manage the impact of vendor AI. These responses show recognition that unmonitored AI in vendor tools can undermine security and compliance if left unchecked.

Why are formal AI governance frameworks important?

Formal AI governance frameworks provide the structure and accountability needed to manage AI risks consistently. A framework defines how AI is approved, monitored, and controlled, ensuring every project follows policy and that security, privacy, and compliance risks are addressed proactively. This matters because companies with a framework report significantly higher visibility into AI usage and greater confidence in their controls. Without one, AI initiatives can become chaotic or create unseen vulnerabilities.
With only 41% of companies having a formal framework today, establishing one is increasingly seen as essential to keeping AI deployment transparent and safe.

Why is employee training critical for AI risk management?

Even the best AI tools won't secure themselves; people need to be prepared. Training employees and leaders on AI risks and proper use is critical so they can spot AI-driven threats (like deepfake phishing or data misuse) and follow governance policies. The survey found that about one-third of companies are prioritizing AI-specific education for their staff and executives as a key risk management measure. Organizations that pair advanced security tools with robust training see better insight into AI usage and higher confidence in their defenses. In short, well-trained people are an essential line of defense: they ensure that as AI is adopted, it is used responsibly and with eyes open.

Meet the minds behind the report and insights

Tom Andreesen, Managing Director and AI Leader

Tom is a managing director with over 33 years of experience helping organizations develop and implement business and technology solutions to enhance their operations. He has also helped companies establish risk management capabilities and overall governance programs to address operational risks, technology risks, and regulatory compliance requirements. Tom leads Protiviti's Global Microsoft Alliance program.
Sameer Ansari, Managing Director

Sameer Ansari, Global CISO Solutions Leader, brings over 20 years of experience developing and delivering complex privacy solutions for the financial industry, along with privacy consulting and implementation experience in the TMT and consumer products industries across the globe. Prior to joining Protiviti, Sameer was a managing director at a Big 4 firm, leading privacy and data risk management solutions across multiple sectors while also serving as the cybersecurity lead for the investment management sector. Earlier, Sameer was Head of Privacy and Data Governance at the Vanguard Group, where he implemented its global privacy enterprise and data governance programs and co-chaired Vanguard's Data Protection Advisory Committee.

Andrew Retrum, Managing Director

Andrew Retrum is a managing director in Protiviti's Technology Consulting practice and the Global Technology Risk & Resilience practice lead. Andrew helps clients navigate an ever-evolving risk landscape, manage cyber and emerging technology risks, and better understand, communicate, and respond to and recover from adverse events. He has led cyber program offices for several large institutions as part of broader business transformation efforts. He is an advocate for adopting the FAIR methodology as an alternative approach to IT risk management and a thought leader on recent cybersecurity regulatory matters.

Keep your finger on the AI Pulse

Key links:
- Download the full report
- Explore our AI Studio
- All AI Pulse results
- Vol. 3: From Automation to Autonomy
- Paper: When AI Readiness Meets ROI Reckoning