Balancing generative AI innovation with individual privacy rights in Australia

By Hanneke Catts

Generative AI isn't just raising questions about technology; it's also raising concerns about privacy. Can we balance innovation with individual rights? As intelligent systems reshape how businesses process personal data, regulatory frameworks worldwide are playing catch-up. Forward-thinking organisations aren't waiting for mandates; they're taking the lead in defining what responsible innovation looks like in practice.

Australia's evolving privacy landscape

Australia is entering a new era of digital regulation, with privacy and AI front and centre. The Privacy Act 1988 is undergoing its most significant overhaul in decades, following a two-year review by the Attorney-General's Department. The second tranche of legislation, expected in 2025, is anticipated to strengthen individual rights and increase business obligations, especially in relation to automated and AI-driven decision-making.

In parallel, the government is still determining how to regulate high-risk AI use, whether through new legislation or updates to existing laws. Notably, the government's interim response to the 2024 Safe and Responsible AI in Australia consultation outlines ongoing initiatives that expand upon Australia's voluntary AI Ethics Principles (2019).

Australia's privacy reform is not just about compliance. It reflects a broader shift in thinking about how digital technologies, particularly generative AI, affect privacy, fairness, and individual rights.

Key changes

Proposed changes to the Privacy Act 1988 are expected to better address the realities of generative AI and automated systems.
They are expected to include stronger transparency requirements when personal information is used in algorithmic decision-making, clearer boundaries around profiling practices, and new individual rights such as data erasure, data portability, and the ability to request a human review of AI-driven outcomes. Collectively, the reforms aim to align Australian privacy law with modern data use practices and ensure individuals retain meaningful control over their personal information.

Privacy under pressure with generative AI: what are the key concerns?

The rise of generative AI has delivered undeniable advancements, such as accelerating medical breakthroughs, streamlining financial services, and unlocking new forms of productivity. However, this rapid progress brings an equally urgent need to safeguard the privacy of personal information. Data may fuel innovation, but when not managed appropriately, it can undermine the very rights and freedoms it seeks to enhance. Our key concerns fall into three distinct areas:

1. In a data-driven world, innovation outpaces regulation, making responsible AI use urgent.
2. Is there fairness in the algorithm? Generative AI raises concerns about bias and trust.
3. Publicly available large language models (LLMs) complicate privacy compliance across borders.

1. Responsible AI use is urgent

AI systems, particularly those built on deep learning or generative techniques, can memorise and regurgitate personal or sensitive information, sometimes unintentionally. This reveals a fundamental design gap: current privacy frameworks were not built for systems that learn from vast, unstructured datasets or infer personal details without explicit input. These issues force critical questions to the surface: How much data collection is justifiable? Who determines how this data is used?
Are individuals truly informed about how their information is being reused, repurposed, or inferred by AI systems?

These questions are no longer abstract policy debates. Rather, they represent real and growing risks requiring attention from regulators, industry, and individuals alike. Amid ongoing regulatory shifts, Australia must move beyond the 'notice-and-consent' model towards organisational accountability and system-level governance. In a hyper-connected, AI-driven world, individuals can no longer meaningfully control their data on their own. As a result, privacy protections must be embedded directly into systems and processes, supported by robust accountability mechanisms such as audits to ensure compliance and transparency.

2. Challenges of bias and trust

As generative AI becomes increasingly embedded in public and private sector operations, concerns over inaccuracy and bias in outputs are intensifying. Generative AI systems learn from massive volumes of information, much of which may be outdated, inaccurate, or reflective of historical and societal biases. Embedding personal or sensitive data in training data without proper safeguards risks generating misleading, discriminatory, or inaccurate outputs. The consequences can directly affect individual rights, opportunities, or reputations. To address these challenges, stronger governance frameworks are needed, ones that prioritise data quality, ethical oversight, and safeguards around personal information used in AI development and training.

3. Publicly available LLMs complicate privacy compliance across borders

While cloud-based, public generative AI tools offer immediate utility, from speeding up content creation to summarising documents, they come with significant privacy trade-offs, particularly when operated across borders. For example, when an Australian's personal information is processed in foreign jurisdictions, local privacy protections may not apply, and enforcement becomes difficult in the event of a breach. Users may inadvertently provide personal information, including health information, business IP, or identifiers. Because LLMs are designed to feel human, users often overshare without considering that their data may be stored, used to improve the model, or accessed by the provider or external parties.

Further, publicly available LLMs can memorise and leak sensitive data, especially when it originates from publicly scraped sources. A lack of transparency about how training data is sourced and handled further complicates key privacy principles such as data minimisation and purpose limitation under Australian and international privacy standards. Mitigation is possible, but complex: some technical fixes help, but they don't remove the risk entirely.

What can you do? Opportunities to enhance privacy protections

People need protection today, not five years from now. Organisations have a growing responsibility to bridge this legislative gap proactively; this is no longer just about compliance. While Australia's broader digital and data strategy reflects a coordinated regulatory shift and a maturing approach to digital governance, the agility of technology and innovation will always outpace regulation. To keep pace, organisations must stay informed and adapt with stronger privacy practices, following guidance from the likes of the Office of the Australian Information Commissioner (OAIC) to navigate change.
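To illustrate the kind of partial technical fix mentioned above, one common control is redacting obvious identifiers before a prompt ever leaves the organisation. The sketch below is a minimal, hypothetical example (the patterns, labels, and pipeline are illustrative assumptions, not a production control): real PII detection needs far more than regexes, including named-entity recognition, locale-specific formats, and human review.

```python
import re

# Illustrative patterns only -- a real deployment would use a vetted
# PII-detection service, not a hard-coded regex list.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b(?:\+?61|0)4\d{2}\s?\d{3}\s?\d{3}\b"),  # AU mobile shape
    "TFN": re.compile(r"\b\d{3}\s?\d{3}\s?\d{3}\b"),  # tax-file-number shape
}

def redact(text: str) -> str:
    """Replace likely identifiers with placeholder tokens before the
    prompt is sent to an external LLM."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Summarise this complaint from jane@example.com, mobile 0412 345 678."
print(redact(prompt))
```

Even with such filters in place, inference-based disclosure (the model deducing who someone is from context) remains, which is why redaction complements, rather than replaces, the governance measures below.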
Our key opportunities for organisations to act proactively fall into three categories:

1. Stay informed.
2. Align the enterprise's AI strategy with privacy, cybersecurity, and data governance frameworks.
3. Adopt a privacy-by-design approach.

1. Stay informed of both the Australian and international legislative climates

The first critical step to sound privacy protection in AI is to monitor local legislative developments closely and, equally important, to stay informed about international privacy jurisdictions, particularly those considered industry-leading or global best practice. The European Union's GDPR, in particular, acts as a strong guide for shaping businesses' approach to privacy and data protection. Similarly, the European Union Artificial Intelligence Act provides clear insight into the direction of AI-specific legislation, as it focuses on strengthening regulation of data quality, transparency, human oversight, and accountability for AI technologies. As AI models become more complex and embedded in decision-making, proactively aligning with these global standards helps mitigate regulatory risk, enables cross-border scalability, and ensures ethical data use remains central to innovation.

2. Align the enterprise's AI strategy with privacy, cybersecurity, and data governance frameworks

In the world of AI, data is king. In today's digital landscape, it is also the prime target for sophisticated cyber threats. As organisations harness vast datasets to power AI innovation, this valuable information simultaneously represents significant risk exposure, underscoring why organisations must align cybersecurity, privacy, and data governance to unlock its potential safely. There is no single, clear path outlining how cybersecurity, privacy, and data governance initiatives work together to mitigate the ongoing risk of customer data loss.
However, with generative AI accelerating both the value and the volume of data, the synergies among cybersecurity, privacy, and data governance frameworks are becoming even stronger: cybersecurity protects the data, privacy ensures it is used ethically and lawfully, and data governance provides the structure to manage it all consistently, together forming the foundation for trustworthy AI. Organisations should establish aligned frameworks and a Customer Data Protection Working Group to drive coordinated, risk-aware decision-making around people, processes, and systems, ensuring stronger protection of individuals' data and, therefore, safer AI innovation.

Why? Organisations that take a proactive, collaborative approach to risk are better equipped to prevent customer data loss and minimise reputational damage or regulatory scrutiny, meaning they can innovate safely and keep up with the rapid pace of technological change.

3. Embed a privacy-by-design approach to AI implementation

Privacy should be embedded as a foundational design principle, not bolted on as an afterthought. Privacy by design strengthens compliance, builds customer trust, and reduces the risk of costly rework or breaches. To achieve this, organisations should take steps to understand their data landscape, assess and understand their risks, and then operationalise protection considerations at every step, from development through to operational workflows.

While there is much businesses can do, from aligning internal frameworks to embedding privacy into AI design, lasting privacy protection will ultimately require coordinated effort between regulators, policymakers, and technologists to set clear standards and guardrails in an increasingly data-driven world.

Why act now?

With privacy risks escalating and Australians increasingly concerned about how their data is used, the responsibility is shifting to organisations to act now.
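Acting now can start small and concretely. For instance, privacy by design's data-minimisation principle can be enforced in code before records ever reach an AI workflow. The sketch below is a minimal, hypothetical illustration (the field names and allow-list are invented; in practice the allow-list would come from a data governance catalogue):

```python
from typing import Any

# Hypothetical allow-list of fields a documented purpose justifies.
ALLOWED_FIELDS = {"age_band", "postcode_region", "product_category"}

def minimise(record: dict[str, Any]) -> dict[str, Any]:
    """Keep only governed fields; drop everything else by default.

    Rejecting unknown fields by default reflects the 'privacy as the
    default setting' idea: new attributes stay out of the AI pipeline
    until someone makes the case for them.
    """
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

raw = {"name": "Jane Citizen", "email": "jane@example.com",
       "age_band": "35-44", "postcode_region": "Inner Sydney",
       "product_category": "home loans"}
print(minimise(raw))  # direct identifiers never reach the model
```

The design choice worth noting is the default-deny posture: the function filters against an explicit allow-list rather than stripping a known list of sensitive fields, so newly added attributes are excluded until they are deliberately approved.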
Regulation is still catching up, but the direction is clear: system-level accountability is replacing individual control. Proactively planning in alignment with both established frameworks such as the GDPR and emerging regulatory developments allows businesses to build stronger, safer systems and avoid the need for reactive change.

The million-dollar question: can we balance innovation with individual rights?

The short answer is yes, but not easily. Innovation is exciting, but it must go hand in hand with privacy to protect individual rights. Organisations need to lead the way in delivering breakthrough technology safely and responsibly.

Protiviti's Global AI Governance team, composed of privacy and security practitioners, understands the inherent risks and challenges our clients face in developing and maintaining effective data protection programs. We draw on our expertise in compliance, data governance, security, privacy, and AI to help clients navigate the evolving regulatory landscape, supporting them in meeting obligations, assessing risks, and implementing effective measures. This includes legislation such as Australia's Privacy Act reforms, emerging AI laws, and international standards from bodies like the U.S. National Institute of Standards and Technology (NIST) and the International Organization for Standardization (ISO).

Want to read more? To explore how to balance innovation with accountability in the age of AI and big data, read our related article: Mastering Data Dilemmas.
Leadership

Hanneke Catts is a director based in Protiviti Australia's Sydney office with over 15 years' experience in technology consulting, including privacy, technology risk, project management and assurance, IT controls and security compliance, and enterprise risk management.

Hirun Tantirigama is a managing director and Protiviti Australia's technology consulting lead with 15 years' experience providing risk and regulatory advisory services across a variety of clients and industries.