Balancing generative AI innovation with individual privacy rights in Australia

By Hanneke Catts

Generative AI isn’t just raising questions about technology; it’s also raising concerns about privacy. Can we balance innovation with individual rights?

As intelligent systems reshape how businesses process personal data, regulatory frameworks worldwide are playing catch-up. Forward-thinking organisations aren't waiting for mandates; they're taking the lead in defining what responsible innovation looks like in practice.

Australia’s evolving privacy landscape

Australia is entering a new era of digital regulation, with privacy and AI front and centre. The Privacy Act 1988 is undergoing a major overhaul, the most significant in decades, following a two-year review by the Attorney-General’s Department. The second tranche of legislation, anticipated in 2025, is expected to strengthen individual rights and increase business obligations, especially in relation to automated and AI-driven decision-making.

In parallel, the government is still determining how to regulate high-risk AI use, whether through new legislation or updates to existing laws. Notably, the government’s interim response to the 2024 Safe and Responsible AI in Australia consultation outlines ongoing initiatives that build on Australia’s voluntary AI Ethics Principles (2019).

Australia’s privacy reform is not just about ensuring compliance. It reflects a broader shift in thinking about how digital technologies, particularly generative AI, impact privacy, fairness, and individual rights.

Key changes

Proposed changes to the Privacy Act 1988 are expected to better address the realities of generative AI and automated systems, including stronger transparency requirements when personal information is used in algorithmic decision-making, clearer boundaries around profiling practices, and new individual rights such as data erasure, data portability, and the ability to request human review of AI-driven outcomes. Collectively, the reforms aim to align Australian privacy law with modern data use practices and ensure individuals retain meaningful control over their personal information.

Privacy under pressure with generative AI—What are the key concerns?

The rise of generative AI has delivered undeniable advancements, such as accelerating medical breakthroughs, streamlining financial services, and unlocking new forms of productivity.

However, this rapid progress brings an equally urgent need to safeguard the privacy of personal information. Data may fuel innovation, but when not managed appropriately, it can undermine the very rights and freedoms innovation seeks to enhance. Our key concerns fall into three distinct areas:

  1. In a data-driven world, innovation outpaces regulation – making responsible AI use urgent.
  2. Is there fairness in the algorithm? Generative AI raises concerns about bias and trust.
  3. Publicly available large language models (LLMs) complicate privacy compliance when data crosses borders.

1. Responsible AI use is urgent.

Generative AI systems, particularly those built on deep learning, can memorise and regurgitate personal or sensitive information, sometimes unintentionally. This reveals a fundamental design gap: current privacy frameworks were not built for systems that learn from vast, unstructured datasets or infer personal details without explicit input.

These issues force critical questions to the surface:

  • How much data collection is justifiable?  
  • Who determines how this data is used?  
  • Are individuals truly informed about how their information is being reused, repurposed, or inferred by AI systems?

These questions are no longer abstract policy debates; they represent real and growing risks requiring attention from regulators, industry, and individuals alike. Amid ongoing regulatory shifts, Australia must move beyond the ‘notice-and-consent’ model towards organisational accountability and system-level governance. In a hyper-connected, AI-driven world, individuals can no longer meaningfully control their data on their own. As a result, privacy protections must be embedded directly into systems and processes, supported by robust accountability mechanisms such as audits to ensure compliance and transparency.

2. Challenges of bias and trust.

As generative AI becomes increasingly embedded into public and private sector operations, concerns over inaccuracy and bias in outputs are intensifying.

Generative AI systems learn from massive volumes of information, much of which may be outdated, inaccurate, or reflective of historical and societal biases. Embedding personal or sensitive information in training data without proper safeguards risks generating misleading, discriminatory, or inaccurate outputs, with consequences that can directly affect individual rights, opportunities, or reputations.

To address these challenges, stronger governance frameworks are needed: ones that prioritise data quality, ethical oversight, and safeguards around personal information used in AI development and training.

3. Publicly available LLMs complicate privacy compliance across borders.

While cloud-based, public generative AI tools and LLMs offer immediate utility, from speeding up content creation to summarising documents, they come with significant privacy trade-offs, particularly when operated across borders.

For example, when an Australian’s personal information is processed in foreign jurisdictions, local privacy protections may not apply, and enforcement becomes uncertain in the event of a breach. Users may inadvertently submit personal or confidential information, including health information, business IP, or identifiers. Because LLMs are designed to feel human, users often overshare without considering that their data may be stored, used to improve the LLM, or accessed by the provider or external parties.

Further, publicly available LLMs can memorise and leak sensitive data, especially when it originates from publicly scraped sources. A lack of transparency about how training data is sourced and handled can further complicate key privacy principles such as data minimisation and purpose limitation under Australian and international privacy standards.

Mitigation is possible, but complex. While some technical fixes help, they don’t remove the risk entirely.
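
For example, one common technical mitigation is screening prompts for obvious personal identifiers before they leave the organisation. The sketch below is a minimal, illustrative Python example using simple regular expressions; the patterns and the redact_pii helper are our own assumptions for illustration, and real deployments would rely on dedicated PII-detection tooling rather than regexes alone.

```python
import re

# Illustrative patterns only; production PII detection typically relies on
# dedicated tooling (NER models, validation, locale-aware formats).
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "AU_PHONE": re.compile(r"(?<!\w)(?:\+61|0)[2-478](?:[ -]?\d){8}\b"),
    "TFN": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{3}\b"),  # tax-file-number shape
}

def redact_pii(prompt: str) -> str:
    """Replace likely personal identifiers with typed placeholders
    before the prompt is sent to an external LLM provider."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact_pii("Jane's TFN is 123 456 789; reach her at jane@example.com."))
# -> "Jane's TFN is [TFN REDACTED]; reach her at [EMAIL REDACTED]."
```

Even a filter like this illustrates the limits noted above: pattern matching catches well-structured identifiers but misses names, context, and inferred details, which is why technical fixes reduce rather than remove the risk.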

What can you do? – Opportunities to enhance privacy protections

People need protection today, not five years from now. Organisations have a growing responsibility to bridge this legislative gap proactively; this is no longer just about compliance. While Australia’s broader digital and data strategy reflects a coordinated regulatory shift and a maturing approach to digital governance, the agility of technology and innovation will always outpace regulation.

To keep pace, organisations must stay informed and strengthen their privacy practices, drawing on guidance from bodies such as the Office of the Australian Information Commissioner (OAIC). Our key opportunities for organisations to act proactively fall into three categories:

  1. Stay informed.
  2. Align the enterprise’s AI strategy with privacy, cybersecurity, and data governance frameworks.
  3. Embed a privacy-by-design approach.

1. Stay informed of both the Australian and international legislative climates.

The first critical step toward sound privacy protection in AI is to monitor local legislative developments closely and, just as importantly, to stay informed about international privacy regimes, particularly those considered industry-leading or global best practice. The European Union’s GDPR, for instance, acts as a strong guide for shaping businesses’ approach to privacy and data protection. Similarly, the European Union Artificial Intelligence Act provides clear insight into the direction of AI-specific legislation, with its focus on strengthening regulation of data quality, transparency, human oversight, and accountability for AI technologies.

As AI models become more complex and embedded in decision-making, proactively aligning with these global standards helps mitigate regulatory risk, enables cross-border scalability, and ensures ethical data use remains central to innovation.

2. Align the enterprise’s AI strategy with privacy, cybersecurity, and data governance frameworks.

In the world of AI, data is king. And in today's digital landscape, it's also the prime target for sophisticated cyber threats. As organisations harness vast datasets to power AI innovations, this valuable information simultaneously represents significant risk exposure, underscoring why organisations must align cybersecurity, privacy, and data governance to unlock its potential safely.

There is no single blueprint for how cybersecurity, privacy, and data governance initiatives should work together to mitigate the ongoing risk of customer data loss. However, with generative AI accelerating both the value and the volume of data, the synergies among these three frameworks are becoming even stronger: cybersecurity protects the data, privacy ensures it is used ethically and lawfully, and data governance provides the structure to manage it all consistently, together forming the foundation for trustworthy AI.

Organisations should establish aligned frameworks and a Customer Data Protection Working Group to drive coordinated, risk-aware decision-making around people, processes, and systems, ensuring stronger protection of individuals' data and therefore, safer AI innovation.

Why? Organisations that take a proactive, collaborative approach to risk are better equipped to prevent customer data loss and minimise reputational damage and regulatory scrutiny, meaning they can innovate safely and keep up with the rapid pace of technological change.

3. Embed a Privacy-by-Design approach to AI implementation.

Privacy should be embedded as a foundational design principle, not bolted on as an afterthought. Privacy-by-design strengthens compliance, builds customer trust, and reduces the risk of costly rework or breaches. To achieve this, organisations should take steps to understand their data landscape, assess and understand their risks, and then operationalise privacy protections at every stage, from development through to operational workflows.
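
To make this concrete, the sketch below shows one hypothetical privacy-by-design pattern in Python: data minimisation enforced in code at the boundary where records enter an AI workflow, so only fields with an approved purpose pass through. The allow-list, field names, and minimise helper are illustrative assumptions, not a prescribed implementation.

```python
# A minimal, illustrative privacy-by-design pattern: data minimisation
# enforced at the boundary where records enter an AI workflow.
# The allow-list and field names are hypothetical examples.
ALLOWED_FIELDS = {"postcode", "age_band", "product_category"}

def minimise(record: dict) -> dict:
    """Pass through only fields approved for the AI use case;
    anything else (names, emails, free text) is dropped by default."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

customer = {
    "name": "Jane Citizen",       # not needed by the model: dropped
    "email": "jane@example.com",  # dropped
    "postcode": "2000",
    "age_band": "30-39",
    "product_category": "home loans",
}
print(minimise(customer))
# -> {'postcode': '2000', 'age_band': '30-39', 'product_category': 'home loans'}
```

The design choice here is deny-by-default: new fields are excluded until explicitly approved, mirroring the principle that protection is built into the system rather than bolted on.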

While there’s a lot businesses can do, from aligning internal frameworks to embedding privacy into AI design, lasting privacy protection will ultimately require coordinated efforts between regulators, policymakers, and technologists to set clear standards and guardrails in an increasingly data-driven world.

Why act now?

With privacy risks escalating and Australians becoming increasingly concerned about how their data is used, the responsibility is shifting to organisations to act now. Regulation is still catching up, but the direction is clear: system-level accountability is replacing individual control. Proactively planning in alignment with both established frameworks such as the GDPR and emerging regulatory developments allows businesses to build stronger, safer systems and avoid the need for reactive changes.

The million-dollar question: can we balance innovation with individual rights?

The short answer is yes, but not easily. Innovation is exciting but must go hand in hand with privacy to protect individual rights. Organisations need to lead the way in delivering breakthrough technology safely and responsibly.

The privacy and security practitioners behind Protiviti’s Global AI Governance solution understand the inherent risks and challenges our clients face in developing and maintaining effective data protection programs. We draw on our expertise in compliance, data governance, security, privacy, and AI to help clients navigate the evolving regulatory landscape — supporting them in meeting obligations, assessing risks, and implementing effective measures. This includes legislation such as Australia’s Privacy Act reforms, emerging AI laws, and international standards from bodies like the U.S. National Institute of Standards and Technology (NIST) and the International Organization for Standardization (ISO).

Want to read more? To explore how to balance innovation with accountability in the age of AI and big data, read our related article: Mastering Data Dilemmas.
