Cybersecurity and Resiliency in the Age of AI: Taming the Digital Genie Before It Gossips

5 min read

Artificial intelligence (AI) is rapidly reshaping the enterprise landscape, promising a leap in productivity and efficiency. Yet, as organisations rush to deploy these digital agents, they risk unleashing forces they do not fully understand or control. The productivity promise of AI is real, but so is the privacy peril, and the stakes have never been higher.

Discussions that I’ve had with leadership at major financial institutions, including Wells Fargo, reveal a growing anxiety among business leaders. Regulators are no longer content to let technology teams experiment in isolation; they are demanding answers about how AI is governed, secured and, most crucially, how its identity and access are managed. The old paradigms of cybersecurity, built around human users and their digital identities, are crumbling in the face of autonomous AI agents that operate with increasing independence.

The Identity Crisis of Non-Human Agents

The challenge is not theoretical. For decades, identity management meant authenticating people and granting them access to systems and data. Now, organisations must grapple with new questions: What does identity mean for a non-human agent? Where should these identities be defined and stored?

The complexity multiplies when AI agents interact, potentially bypassing established access controls and exposing sensitive information in ways that were never anticipated. Consider the scenario where one AI agent requests data from another, which in turn consults a third agent.
Each step in this chain carries the risk of privilege escalation, where information meant to be confidential is inadvertently disclosed. The problem is not the security of the communication channels but the inheritance of rights across agents, a subtle but profound shift in how access is managed. Even the technology giants that build these systems admit they have yet to solve the problem of agentic identity at scale.

Consider a very plausible hypothetical. Dave, a project manager, eagerly asks his company’s newly launched AI assistant, Synthia, to prepare an insightful pre-read for a meeting with a notoriously tough vice president. Synthia dives into every email, document and even HR and finance records, gathering budget overruns and confidential performance notes. It even flags the VP’s dislike of beige, found in chat logs. Dave, impressed by the thoroughness, sends the report to the VP and team without a second glance. The fallout is immediate: privacy breaches, embarrassed colleagues and a furious VP. Dave’s AI has become the ultimate office gossip.

Avoiding AI’s Potential as a Privacy Liability

Mapping the landscape of AI tools and their data access is a daunting task. Many enterprises lack a comprehensive inventory of their AI systems, let alone a clear understanding of what data those systems can reach. Meanwhile, regulatory expectations are evolving at breakneck speed. Financial regulators look to the largest banks to set the standard, but the reality is that even these institutions are still searching for best practices.

The risks are not limited to compliance. A single AI-generated report that leaks confidential data can trigger an internal crisis and inflict lasting reputational harm. Mishandled access can violate privacy laws and industry regulations, exposing organisations to legal and financial consequences. The genie, once out of the bottle, is not easily contained.

So what is to be done?
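To make the inheritance problem concrete, here is a minimal sketch of one common containment pattern: delegated access where each hop in an agent chain can only narrow, never widen, the original requester’s rights. All names, scopes and the policy table below are hypothetical illustrations, not any specific product’s API.

```python
# Sketch: delegated access across a chain of AI agents.
# An agent's effective rights are the INTERSECTION of the scopes granted
# along the delegation chain, so a chain can only narrow access, never
# widen it -- blocking privilege escalation via agent-to-agent calls.

AGENT_SCOPES = {
    "dave":     {"calendar", "email", "project-docs"},            # human caller
    "synthia":  {"calendar", "email", "project-docs", "hr-records", "finance"},
    "hr-agent": {"hr-records"},
}

def effective_scopes(chain):
    """Intersect scopes along a delegation chain, starting from the caller."""
    scopes = set(AGENT_SCOPES[chain[0]])
    for principal in chain[1:]:
        scopes &= AGENT_SCOPES[principal]
    return scopes

def can_access(chain, resource_label):
    return resource_label in effective_scopes(chain)

# Dave -> Synthia -> HR agent: HR records stay off limits, because Dave
# himself was never granted them.
print(can_access(["dave", "synthia", "hr-agent"], "hr-records"))  # False
# Synthia acting with no delegation constraint would have reached them:
print(can_access(["synthia"], "hr-records"))  # True
```

In the Synthia scenario above, the second call is exactly the failure mode the article describes: the assistant’s own broad entitlements, rather than Dave’s, decided what the report could contain.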
The answer lies in treating AI agents with the same rigour as human employees. Their identities must be defined, their roles scoped and their access strictly limited to what is necessary. The principle of least privilege, long a cornerstone of cybersecurity, must now be applied to machines as well as people. Permission inheritance must be tightly controlled, ensuring that AI agents mirror the access rights of their human counterparts and do not inadvertently escalate privileges through agent-to-agent communication.

Data classification becomes paramount. Machine-readable labels and policies must guide AI behaviour, ensuring that sensitive information is protected and only accessible to authorised agents. Continuous auditing and monitoring are essential, not only to track what AI agents access and share, but also to detect anomalous behaviours and model drift that could signal a breach or misuse.

Building a security-aware culture is no longer optional. Employees must be trained to understand the risks and responsibilities associated with AI, fostering a climate of vigilance and accountability. Emerging standards, such as SPIFFE (Secure Production Identity Framework for Everyone), offer promising frameworks for managing machine identities and supporting zero-trust architectures for agentic AI.

Closing Thoughts

Ultimately, leadership is required. Business executives must proactively address the risks of AI identity and access, collaborating with cybersecurity experts and ethicists to shape governance strategies. Continuous evaluation is essential, as both AI capabilities and regulatory requirements evolve. Before deploying AI solutions, organisations must ensure that robust data classification, security controls and incident response plans are in place.

The journey to resilient, secure AI is ongoing.
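For readers who want a feel for how machine identity and machine-readable classification fit together, here is a small illustrative sketch. SPIFFE identities really are URIs of the form spiffe://trust-domain/workload-path, but the trust domain, agent names, labels and policy table below are invented for illustration, not drawn from any real deployment.

```python
# Sketch: gate an agent's data access on (a) a SPIFFE-style workload
# identity and (b) machine-readable sensitivity labels on the data.
from urllib.parse import urlparse

# Illustrative policy: which workload identities may read which labels.
POLICY = {
    "spiffe://example.org/agents/synthia": {"public", "internal"},
    "spiffe://example.org/agents/hr-bot":  {"public", "internal", "confidential-hr"},
}

def is_valid_spiffe_id(spiffe_id, trust_domain="example.org"):
    """Accept only IDs inside our trust domain (zero trust: verify, never assume)."""
    parsed = urlparse(spiffe_id)
    return parsed.scheme == "spiffe" and parsed.netloc == trust_domain and bool(parsed.path)

def may_read(spiffe_id, label):
    if not is_valid_spiffe_id(spiffe_id):
        return False  # unknown or foreign identity: deny by default
    return label in POLICY.get(spiffe_id, set())

print(may_read("spiffe://example.org/agents/synthia", "confidential-hr"))  # False
print(may_read("spiffe://example.org/agents/hr-bot", "confidential-hr"))   # True
```

The design choice worth noting is deny-by-default: an agent with no verified identity, or no explicit policy entry, reads nothing, which is the posture the zero-trust guidance above argues for.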
Those who tame the digital genie today will be best positioned to seize tomorrow’s opportunities without falling victim to its unintended consequences.

This article originally appeared on Forbes Technology Council. Andy Retrum and Ryan McCarthy, managing directors at Protiviti, contributed to this article.