Generative Artificial Intelligence

Balance risk, efficiency and compliance

By Christine Livingston and Jeremy Wildhaber

You need to learn not only about generative artificial intelligence’s potential uses for the delivery and management of healthcare, but also about its capabilities to support internal audit activity. At the same time, you need to understand the hazards associated with its use and develop approaches to realising generative AI’s potential while managing its risks.

*Reprinted with permission from New Perspectives, Journal of the Association of Healthcare Internal Auditors, Inc., Volume 42/Number 4, 2023.

Professionals in a broad range of industries know that ChatGPT and other forms of generative artificial intelligence (AI) will play a significant role in their work going forward. But they cannot always decode its potential and risks from general news about the technology. 

ChatGPT in the context of AI 

Recent advances in artificial intelligence have brought large language models (LLMs), most famously ChatGPT, to the forefront of many personal and business conversations. LLMs, often discussed interchangeably with the broader term generative AI, leverage newer algorithms, deep learning, and tremendous volumes of unstructured data. They use the data to generate new content such as text, images, video, audio, simulations, and software code based on human prompts or queries.

Most generative AI capabilities were previously available only to those with the skills to interact with application programming interfaces. OpenAI’s ChatGPT is a particularly noteworthy example of an LLM because the addition of a conversational interface has made the technology available to everyone, with no technical skill set required.

Users who experiment with ChatGPT can quickly appreciate the service’s ability to answer questions phrased in natural human language based on a broad range of information. ChatGPT can generate text that may not be easily distinguished from human writing. Site visitors can ask questions, receive reasonable responses and ask follow-up questions to get more detailed information. Generative AI already had numerous commercial uses before ChatGPT’s launch, including the following: 

  1. Customer support and sales and marketing task automation
  2. Software code development
  3. Fraud detection applications
  4. Traffic modeling
  5. Text generation
  6. Supply chain automation
  7. Multiple cybersecurity applications such as attack simulation and threat intelligence gathering 

The arrival and instant popularity of ChatGPT unleashed the race for new generative AI products and LLMs. Virtually all hyperscalers, such as Microsoft, IBM, Google and Amazon Web Services, are unveiling their latest and greatest generative AI products and services. 

Regulations

The rapid adoption of LLMs such as ChatGPT has led some industry leaders to advise that further development of the technology should be regulated by a government entity. Shortly after its launch in late 2022, ChatGPT made history by gaining one million users faster than any other online service.

In March 2023, the nonprofit Future of Life Institute published a petition signed by several technology leaders and luminaries. The petition urged a “public and verifiable” six-month pause in some AI development so leaders could consider how to manage AI’s ethical and functional risks.

The Center for AI Safety shared a statement from AI experts and public figures expressing their concern about AI risk: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” According to the organisation, the statement is intended to open discussion about some of AI’s most severe risks.

In May 2023, OpenAI CEO Sam Altman and others testified before Congress to discuss AI in the contexts of jobs, accuracy, misinformation, bias, privacy, and copyright. All parties present agreed artificial intelligence must be regulated, but the approach to regulation has yet to be determined. 

How ChatGPT works

LLMs are built with neural networks leveraging transformer technology, an AI architecture for interpreting unstructured information first published by Google researchers in 2017. The GPT in ChatGPT stands for Generative Pre-trained Transformer, a reference to the transformer architecture.

ChatGPT was trained using a combination of supervised learning and reinforcement learning techniques. Supervised learning involves providing labeled data to a model, while reinforcement learning rewards a model when it accomplishes an objective.

According to OpenAI, ChatGPT was trained with reinforcement learning from human feedback (RLHF), a process in which human trainers played both sides of a conversation (human user and AI assistant). The process improved both the model’s understanding of the questions posed and the composition of its responses. Subsequently, AI trainers ranked several potential completions and rewarded the model accordingly; the process was repeated many times to train ChatGPT.
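To make the ranking step concrete, the toy Python sketch below converts a human trainer’s ranking of candidate completions into reward signals. The completions and the simple reward rule are invented for illustration; in practice, OpenAI trained a separate reward model on many such rankings and used it to update ChatGPT’s parameters.

```python
# Toy illustration of the RLHF ranking step. The completions and the
# reward rule are invented; real RLHF trains a separate reward model
# on many such rankings, then uses it to update the LLM's parameters.

# A human trainer ranks candidate completions from best to worst.
ranked_completions = [
    "A concise, accurate answer that addresses the question.",  # best
    "A plausible answer with minor inaccuracies.",
    "An off-topic or fabricated answer.",                       # worst
]

# Convert the ranking into reward signals: higher rank, higher reward.
for rank, completion in enumerate(ranked_completions, start=1):
    reward = 1.0 / rank  # simplistic reward rule, for illustration only
    print(f"reward {reward:.2f} -> {completion}")
```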

The training occurred on massive amounts of data and with an unprecedented number of parameters (the weights in the AI model). LLMs predict the likelihood of the next word in a sequence based on the information they have been trained on; during training, rewards are calculated mathematically for a job well done to reinforce learning.
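The next-word mechanism can be illustrated with a minimal Python sketch. The candidate words and scores below are invented; a real LLM scores tens of thousands of tokens using billions of learned weights.

```python
import math

def softmax(logits):
    """Turn raw model scores (logits) into a probability distribution."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Invented scores a model might assign to candidate next words after
# the prompt "The patient was admitted to the ..."
candidates = ["hospital", "clinic", "garden"]
logits = [4.1, 2.3, -1.0]

for word, prob in zip(candidates, softmax(logits)):
    print(f"{word}: {prob:.3f}")  # the likeliest word is chosen or sampled
```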

Risks 

A lack of clarity about generative AI’s potential and perils is a major risk to organisations, since the result could be choices that are too cautious or too reckless for charting an organisation’s future. Uncertainty about AI’s capabilities and meaningful applications could cause leaders, auditors and others to delay use of the technology and miss its capacity to improve care and cut costs. Not knowing enough could also result in a misunderstanding of risks and a too-rapid adoption of AI that introduces a different set of hazards.

Several technology luminaries and leaders are advising caution before developing the technology further or integrating it into mission-critical applications. Concerns about AI are serious. 

Inaccurate results – Generative AI can provide incorrect information and affect outcomes. In industries that must put safety first, such as healthcare, inaccurate results must be a major concern. This undesirable side effect occurs because the model is optimised to produce a statistically probable response, not necessarily a truthful one. LLMs do not have an inherent sense of true/false or right/wrong.

Unintentional bias – AI can be built with—or can develop— unintentional bias. Generative AI uses algorithms that are trained from source data, and its interactions with users further refine its responses over time. Biases in an AI’s logic or in its knowledge base—whether present from inception or developed over time—are an additional area of concern with any AI. Algorithm weaknesses, source data and user input are three distinct opportunities for bias to creep into an AI system. 

New cybersecurity risks – AI can introduce new cybersecurity risks. Even as generative AI is applied to simulate attacks, identify cybersecurity threats and more, bad actors will want to use AI to create new vulnerabilities and innovate new forms of attack. 

New data privacy risks – AI can introduce new data privacy risks. Generative AI is always learning from users’ queries and prompts. If those queries refer to confidential information or include details about intellectual property, the information may become part of the AI’s knowledge base. 

Some companies have temporarily blocked the use of generative AI within their organisations due to concern about how their confidential information or intellectual property could turn up in AI knowledge bases. Their worries are legitimate. New features have been developed to provide more privacy and security, but they do not yet fully address all privacy concerns. 

Workforce disruption – AI can disrupt workforces. When new technology emerges, so too does fear that human workers will be replaced. While generative AI can create text or recommend actions, it cannot fully emulate the capabilities of a person. Leaders should be prepared to address concerns that AI may one day replace jobs in healthcare. They will want to monitor for and respond to concerns that may affect morale among their organisations’ employees. 


Mitigating risks and auditing AI 

Various forms of generative AI hold enormous potential to improve healthcare outcomes and drive down costs, with applications in clinical decision making, risk prediction, pandemic preparedness, personalised medicine and more, according to the World Economic Forum. AI is already being used in healthcare to diagnose conditions from a symptom list, identify at-risk patients, find fractures on X-rays, analyse clinical study data and predict health conditions. And the abundant data healthcare organisations generate about patients, conditions, treatments and more creates an environment rich with information for a variety of AI healthcare applications.

As healthcare organisations adopt generative AI, business leaders and auditors alike must monitor all phases of AI deployments—planning, design, development, testing, implementation, operations and maintenance.

Internal auditors, in particular, will want to consider how their organisations will:

  • Build controls to guide all phases of AI system deployment to ensure the applications remain robust, accurate and effective.
  • Ensure that transparency exists within the algorithm decision-making process, and that AI models are accurate and unbiased, to avoid unintended consequences of biased AI.
  • Ensure sensitive information is adequately protected and data privacy regulatory compliance is achieved. If third-party AI solutions are used, understand how the vendors are using and protecting the organisation’s data.
  • Assess and monitor risks related to relying on AI-generated insights and how the organisation can mitigate those risks.

Organisations also should confirm that the AI solutions they use comply with healthcare regulations and consider emerging guidance from the Centers for Medicare & Medicaid Services, the Department of Health and Human Services, states and other regulatory authorities.

To achieve the best outcomes from using generative AI, healthcare organisations must educate themselves and their workforces and create, adopt and follow ethical and functional standards. They will want to understand generative AI’s key design principles and stay current with emerging best practices for its use in healthcare.


AI security 

Technology firms have heard businesses’ concerns about AI risks and are developing solutions to address them. There is a trend toward granting users more nuanced and sophisticated control over data privacy. For instance, Microsoft is putting ChatGPT-maker OpenAI’s models into Microsoft cloud environments to offer customers more security and privacy. Microsoft and OpenAI both know that protecting customers’ confidential data and intellectual property will be key to adoption and use of generative AI.

Amazon has announced Bedrock, Google has released Bard, and IBM is reinvigorating the Watson brand with Watsonx. Additionally, OpenAI has announced that customers can opt out of some of its data collection services to secure more privacy. 

Healthcare operations 

You will want to ensure responsible use of AI within your organisation. The following use cases show the potential of OpenAI’s models and other LLMs to bring transparency and efficiency to a broad range of tasks in healthcare.

Claims audits and coding – AI can use claims and health records to validate medical coding and identify anomalies and trends. 

Revenue cycle – AI can detect billing errors, collection issues and missed coding. 

Fraud, waste and abuse – AI can identify illegal kickbacks and referrals, suspicious claims, billing fraud and upcoding. AI also can pinpoint wasteful medical supply use to help drive down costs. 

Contract assessments – AI can identify billings and payments that do not align with contractual terms. 

Compliance – AI can flag transactions that do not comply with internal policies and regulations. 

These use cases should only be explored using a containerised and secure AI service that does not share data or intellectual property with the AI service provider for its training purposes.
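As one concrete illustration of the anomaly-identification use cases above, the Python sketch below applies scikit-learn’s IsolationForest to invented claims data. The features, values and contamination rate are assumptions chosen for demonstration, not a production approach, and as noted above, real claims data should never leave a secured environment.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Invented claims data: each row is (billed amount, units billed, length of stay).
rng = np.random.default_rng(seed=0)
typical = rng.normal(loc=[1200.0, 3.0, 2.0], scale=[300.0, 1.0, 1.0], size=(500, 3))
suspicious = np.array([[9800.0, 40.0, 1.0], [15000.0, 2.0, 30.0]])  # planted outliers
claims = np.vstack([typical, suspicious])

# IsolationForest flags rows that are easy to isolate, i.e. statistical outliers.
model = IsolationForest(contamination=0.01, random_state=0)
flags = model.fit_predict(claims)  # -1 marks likely anomalies

for row in claims[flags == -1]:
    print("Flag for review:", np.round(row, 1))
```

Unsupervised detection of this kind surfaces candidates for human review; it does not produce definitive findings of fraud or error.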

Internal audit

You can leverage generative AI to facilitate numerous audit tasks that are summarised in Exhibit 1.

Exhibit 1 – Internal audit tasks
  1. Identify and assess risks
  2. Automate tests
  3. Build request lists
  4. Develop audit programmes
  5. Generate audit analytics
  6. Identify anomalies and potential fraud
  7. Update audit resources such as manuals and checklists
  8. Manage stakeholders
  9. Plan projects
  10. Prioritise audit areas
  11. Produce audit reports
  12. Track issues
  13. Provide training and development

 

Internal audit generates an abundance of data that generative AI can summarise and analyse. For example, business leaders might respond to a risk assessment by explaining the risks they perceive based on their knowledge of the organisation. The assessment also might include independent data. 

Generative AI could use leaders’ responses and the independent data to help build an audit plan, develop work programmes and more. Asked the right questions, generative AI will outline potential controls for identified risks, describe how to test controls, and specify attributes to test within a control area. Asked what information is needed to test a control or process area, generative AI can provide a list of the most applicable documents.
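As a minimal sketch of how such a question might be posed programmatically, the Python example below uses OpenAI’s chat completions API. The model name, prompt wording and risk description are illustrative assumptions, and any real use should run through a secured, contract-protected service as discussed earlier.

```python
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

# Hypothetical risk drawn from a risk assessment response.
risk = "Manual journal entries are posted without second-person review."

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; use your organisation's approved model
    messages=[
        {"role": "system",
         "content": "You are an assistant supporting a healthcare internal audit team."},
        {"role": "user",
         "content": f"For the risk '{risk}', outline potential controls, "
                    "how to test each control, and the documents needed to test it."},
    ],
)

print(response.choices[0].message.content)  # draft output; requires human review
```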

As noted in the risk section above, generative AI’s output is not always accurate. You will need to apply human knowledge and experience to ensure generative AI’s output is correct. 

Recommendations and considerations 

You should consider AI in general, and generative AI such as ChatGPT in particular, from two equally important perspectives. First, build your knowledge about AI so you can be effective in managing its risks. Second, derive the full value of AI’s potential to support audit activity while ensuring its proper use in the delivery and management of patient care.
