Professionals in a broad range of industries know that ChatGPT and other forms of generative artificial intelligence (AI) will play a significant role in their work going forward. But they cannot always decode its potential and risks from general news about the technology.
ChatGPT in the context of AI
Recent advances in artificial intelligence have brought large language models (LLMs), the best known of which is ChatGPT, to the forefront of many personal and business conversations. LLMs, a prominent form of generative AI, leverage newer algorithms, deep learning, and tremendous volumes of unstructured data. They use the data to generate new content, such as text, images, video, audio, simulations, and software code, based on human prompts or queries.
Most generative AI capabilities were previously only available to those with the skills to interact with application programming interfaces. OpenAI’s ChatGPT is a particularly noteworthy example of an LLM, as the addition of a conversational interface has made the technology available to everyone—no technical skill sets required.
Users who experiment with ChatGPT can quickly appreciate the service’s ability to answer questions phrased in natural human language based on a broad range of information. ChatGPT can generate text that may not be easily distinguished from human writing. Site visitors can ask questions, receive reasonable responses, and ask follow-up questions to get more detailed information. Generative AI already had numerous commercial uses before ChatGPT’s launch, including the following:
- Customer support and sales and marketing task automation
- Software code development
- Fraud detection applications
- Traffic modeling
- Text generation
- Supply chain automation
- Multiple cybersecurity applications such as attack simulation and threat intelligence gathering
The arrival and instant popularity of ChatGPT unleashed the race for new generative AI products and LLMs. Virtually all hyperscalers, such as Microsoft, IBM, Google and Amazon Web Services, are unveiling their latest and greatest generative AI products and services.
The rapid adoption of LLMs such as ChatGPT has led some industry leaders to advise that further development of the technology should be regulated by a government entity. In early 2023, ChatGPT made history when it was estimated to have reached 100 million users faster than any other online service.
In March 2023, the nonprofit Future of Life Institute published an open letter, signed by several technology leaders and luminaries, urging a “public and verifiable” six-month pause in some AI development so leaders could consider how to manage AI’s ethical and functional risks.
The Center for AI Safety shared a statement from AI experts and public figures expressing their concern about AI risk. The statement says: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” The statement, signed by a number of AI scientists and other notable figures, is intended to open the discussion about some of AI’s most severe risks, according to the organization.
In May 2023, OpenAI CEO Sam Altman and others testified before Congress to discuss AI in the contexts of jobs, accuracy, misinformation, bias, privacy, and copyright. All parties present agreed artificial intelligence must be regulated, but the approach to regulation has yet to be determined.
How ChatGPT works
LLMs are built with neural networks that leverage transformer technology, an AI architecture for interpreting unstructured information first published by Google Research in 2017. The GPT in ChatGPT stands for Generative Pre-trained Transformer, a nod to the transformer architecture.
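The core operation of the transformer is attention: each token in a sequence is compared against every other token, and the results are blended into context-aware representations. The following is a minimal sketch of scaled dot-product attention using toy random matrices; it illustrates the mechanism only, not ChatGPT’s actual implementation.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each query attends to all keys, producing a weighted
    average of the values (the heart of a transformer layer)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
    # Softmax turns scores into attention weights that sum to 1 per row
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # weighted sum of values

# Toy example: 3 tokens, each represented by a 4-dimensional vector
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # → (3, 4): one context-aware vector per token
```

In a full transformer, these queries, keys, and values are learned projections of word embeddings, and many attention "heads" run in parallel across dozens of layers.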
ChatGPT was trained using a combination of supervised learning and reinforcement learning techniques. Supervised learning involves training a model on labeled data, while reinforcement learning rewards a model when it accomplishes an objective.
According to OpenAI, ChatGPT was trained with reinforcement learning from human feedback, a process in which human trainers played both sides of a conversation (human user and AI assistant). The process improved both the model’s understanding of the questions posed and the composition of its responses. Subsequently, AI trainers ranked several potential completions from best to worst, and those rankings were used to reward the model; the process was repeated many times to train ChatGPT.
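The rankings produced by human trainers are commonly turned into a training signal with a pairwise ranking loss: the loss is small when the model scores the human-preferred completion higher than a rejected one, and large otherwise. The sketch below is conceptual, not OpenAI’s actual implementation; the function name and scores are illustrative.

```python
import math

def pairwise_ranking_loss(score_preferred, score_rejected):
    """Reward-model training signal used in RLHF (conceptual sketch):
    -log(sigmoid(preferred - rejected)). The loss shrinks as the
    preferred completion's score pulls ahead of the rejected one."""
    margin = score_preferred - score_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Reward model already ranks the preferred answer higher: low loss
print(round(pairwise_ranking_loss(2.0, 0.5), 3))  # → 0.201
# Reward model ranks them the wrong way round: high loss
print(round(pairwise_ranking_loss(0.5, 2.0), 3))  # → 1.701
```

Minimizing this loss teaches the reward model to imitate the human trainers’ preferences; that reward model then guides further reinforcement learning of the chat model.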
The training occurred on massive amounts of data and with an unprecedented number of parameters (the weights in the AI model). LLMs predict the likelihood of the next word in a sequence based on the information they have been trained on; during training, a reward is calculated mathematically for a correct prediction to reinforce learning.
A lack of clarity about generative AI’s potential and perils is a major risk to organizations, since the result could be choices that are too cautious or too reckless for charting an organization’s future. Uncertainty about AI’s capabilities and meaningful applications could cause leaders, auditors, and others to delay use of the technology and miss its capacity to improve care and cut costs. Conversely, not knowing enough could lead to a misunderstanding of the risks and a too-rapid adoption of AI that introduces a different set of hazards.
Several technology luminaries and leaders are advising caution before the technology is developed further or integrated into mission-critical applications. The concerns about AI are serious:
Inaccurate results – Generative AI can provide incorrect information that affects outcomes. In industries that must put safety first—such as healthcare—inaccurate results are a major concern. This undesirable side effect occurs because the model is optimized to produce a statistically probable response, not necessarily a truthful one. LLMs have no inherent sense of true versus false or right versus wrong.
Unintentional bias – AI can be built with—or can develop—unintentional bias. Generative AI uses algorithms that are trained from source data, and its interactions with users further refine its responses over time. Biases in an AI’s logic or in its knowledge base—whether present from inception or developed over time—are an additional area of concern with any AI. Algorithm weaknesses, source data and user input are three distinct opportunities for bias to creep into an AI system.
New cybersecurity risks – AI can introduce new cybersecurity risks. Even as generative AI is applied to simulate attacks, identify cybersecurity threats and more, bad actors will want to use AI to create new vulnerabilities and innovate new forms of attack.
New data privacy risks – AI can introduce new data privacy risks. Generative AI is always learning from users’ queries and prompts. If those queries refer to confidential information or include details about intellectual property, the information may become part of the AI’s knowledge base.
Some companies have temporarily blocked the use of generative AI within their organizations due to concern about how their confidential information or intellectual property could turn up in AI knowledge bases. Their worries are legitimate. New features have been developed to provide more privacy and security, but they do not yet fully address all privacy concerns.
Workforce disruption – AI can disrupt workforces. When new technology emerges, so too does fear that human workers will be replaced. While generative AI can create text or recommend actions, it cannot fully emulate the capabilities of a person. Leaders should be prepared to address concerns that AI may one day replace jobs in healthcare. They will want to monitor for and respond to concerns that may affect morale among their organizations’ employees.