Transcript | The Rise of Generative AI – with Christine Livingston

Kevin Donahue:

ChatGPT is the talk of the town today. But as we all know, generative AI is much more than this one tool. Gen AI represents a new frontier of promise, productivity and capabilities for organizations around the world. But it also comes with risks that these organizations must understand and manage if they’re going to capitalize successfully on these new technologies.

This is Kevin Donahue, a senior director with Protiviti, welcoming you to a new edition of Powerful Insights.

I had the pleasure of talking all things gen AI with Protiviti Managing Director Christine Livingston. Christine is responsible for artificial intelligence/machine learning and innovation solutions at Protiviti. With over a decade of experience in AI/ML deployment, she has delivered hundreds of successful AI solutions, including many first-in-class AI-enabled applications. She has helped several Fortune 100 clients develop practical strategies for enterprise adoption of new and emerging technology, including the creation of AI-enabled technology roadmaps. She focuses on identifying emerging technology opportunities, developing innovation strategies and incorporating AI/ML capabilities into enterprise solutions.

Christine, thanks for joining me today!

 

Christine Livingston:

Thanks for having me, Kevin.

 

Kevin Donahue:

So, what has been your experience with generative AI and, more broadly, business use cases for artificial intelligence? I know you’ve been in this space for quite some time.

 

Christine Livingston:

Yes, it’s been a really fun evolution. Obviously, generative AI and ChatGPT have captured the attention of the world right now, but we’ve seen a lot of advancements in this space. I’ve been in AI for a little over 10 years now, and when I started in artificial intelligence, there was a lot of work focused on trying to understand and interpret information: How could we understand videos? How could we understand images? How could we understand written communication? It’s hard. People express concepts in many ways. And systems got pretty good at interpreting data. They could ingest it; they could figure it out. And AI was really focused on that space.

And even as little as five years ago, I remember working with a customer who was trying to create a narrative around the interpretation that some of these other AI capabilities provided. We were using artificial intelligence to help them create a conversational dashboard: How could I conversationally interact with my data? It was great at the interpretation side of things, but when it started to provide the conversational response, it would say something like, “Sales are up in Q3,” and that’s not overly helpful. Anybody who looks at that visualization could see that. What would be interesting would be being able to provide a more verbose response, understanding why, and articulating some of the drivers.

And what I’ve seen happen in the last five or six years is that that ability to create a more meaningful narrative — not just interpret information, but also create new information in a plausible, logical way — has advanced tremendously. If you were to ask something like ChatGPT or Bard a similar question today, you might get back a very insightful response, because it’s able to articulate itself in a much more meaningful, helpful way. The other advance we’ve seen is that, just as we can generate text, we can now also generate images, video and audio — new, unique, novel output. Generative AI is not necessarily limited to text-based input and output; it’s able to respond to a variety of mediums.

 

Kevin Donahue:

Thanks, Christine. And you mentioned Bard and some of the other tools. I want to ask about that. And we’ll also cover business use cases soon, because I know there are a lot of questions about those. But people hear “generative AI,” and they think “ChatGPT.” Naturally, it’s been big in the news, and that’s what they’re thinking — at least for the moment. But those two aren’t the same, correct?

 

Christine Livingston:

Correct. They are not the same. ChatGPT, you can think of as the Kleenex brand of generative AI at the moment. It just happens to be one model, one vendor, one capability. There are many others: Google is creating Bard. Anthropic, Hugging Face — everybody is catching up. AWS has models. There are all kinds of generative models that have been created since the original 2017 Google paper introducing the transformer architecture. There are over 50 very well-developed large language models in place today. There’s also, again, that ability to generate images and content: Midjourney and DALL-E are maybe two of the better-known image-generation platforms, and right now, we’re seeing Adobe release interesting capabilities with Firefly. Generative AI is definitely a broad space, with hundreds of tools emerging to create new and novel content.

 

Kevin Donahue:

Christine, as I understand it, ChatGPT took off because it introduced a very consumer-friendly interface, as opposed to some of the prior technical interfaces that were in place, and now others are catching up quickly.

 

Christine Livingston:

It’s a great point. It’s one of my favorite considerations as a technologist. The GPT-3 models, which were predecessors to ChatGPT, came out in 2020. They were similar in terms of capability. But to access that model, you had to, number one, know it existed, and number two, write API calls to access the intelligence — and you had to understand how to sequence those calls and how to sequence your prompts. You had to stitch together a couple of these technical elements.

And then somebody said, “What if we make a simple interface that allows you to conversationally interact, and we abstract away all of those API calls, all of the technical understanding of how to sequence them, and we just let people have a conversation?” That is something everyone knows how to do. And it’s truly been a phenomenal case study in how important it is to think about the user experience you’re creating. You can build great technology, but if people don’t understand how to use it, and it’s not accessible and intuitive to them, it’s going to be very challenging for you to get meaningful adoption.
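
To make that contrast concrete, here is a minimal sketch, in Python, of the kind of integration work Christine describes: before ChatGPT, the developer had to manage keys, conversation state and the sequencing of every call. The endpoint, model name and response fields below are illustrative of the GPT-3-era completions API rather than any current or official integration:

```python
# What "talking to GPT-3" roughly looked like before ChatGPT: you had to
# know the API existed, authenticate, assemble the prompt and carry the
# conversation history yourself. (Endpoint, model and fields are
# illustrative of the GPT-3-era completions API; current APIs differ.)
import os
import requests

API_KEY = os.environ["OPENAI_API_KEY"]  # assumes you already have a key

def complete(user_turn: str, history: list[str]) -> str:
    # The developer, not the product, stitches history into the prompt.
    prompt = "\n".join(history + [user_turn])
    resp = requests.post(
        "https://api.openai.com/v1/completions",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"model": "text-davinci-003", "prompt": prompt, "max_tokens": 256},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["text"]

# ChatGPT's breakthrough was hiding all of the above behind a plain
# conversational loop like this:
history: list[str] = []
while True:
    user_turn = input("You: ")
    answer = complete(user_turn, history)
    history += [user_turn, answer]
    print("AI:", answer)
```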

It follows a very famous mantra of Steve Jobs: “Simple can be harder than complex.” It’s very hard to make these very complex technical concepts simple and accessible and easy to use. But you’ll see the value, just as ChatGPT did: It reached a million users in five days, faster than any other application in modern internet history.

 

Kevin Donahue:

Christine, what types of questions are your clients and other organizations asking right now about generative AI?

 

Christine Livingston:

One of the more interesting things I’ve seen recently is that the questions are not focused on a particular industry or even a particular business function. Because the technology has become so globally accessible and understandable, we’re getting asked across marketing, sales, operations, research, finance — across the entire spectrum — “How could I use this technology to do something meaningful, and what are the practical business applications?” It’s a great question, and next to understanding the user experience, one of the most fundamental questions you should be asking is, “How are we going to use this to drive meaningful business value?”

You’ll typically see four areas of opportunity for AI broadly, and generative AI is adding another layer of capability and intelligence to those four areas. The first is decision-support systems: How can I help people or systems make more intelligent, more informed, more contextual responses and decisions? How can I surface the right information to the right person at the right time? There’s huge opportunity here, with implications all the way from a clinician — your doctor in the hospital making decisions — to finance executives deciding where they might want to place their bets.

The second is customer experience. A lot of people have seen that in things like virtual agents, chatbots and conversational support. But we’re also seeing interesting opportunities to create highly personalized, highly customized experiences: How might you recommend a product to an individual consumer, taking into account the information they’re providing you, what they’re telling you they’re looking for, and who they are as an individual?

The third is knowledge management: How do we make knowledge accessible and interpretable across the enterprise? This is very challenging; we’ve been trying to do it for a long time. And AI, and in particular generative AI, has a unique ability to look at huge bodies of information and create a meaningful synthesis. Rather than surface 100 search results, we can now create a meaningful synopsis of the information retrieved from your enterprise data. Being able to manage and access knowledge is a great opportunity.

The last is around process efficiency and automation. This is where you see opportunities to automate highly manual, rote tasks, increasing efficiency and elevating the complexity and the interest of the work people are doing today.
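
To illustrate the knowledge-management pattern Christine describes (retrieve the relevant documents, then have a generative model synthesize one answer instead of surfacing 100 search results), here is a minimal, self-contained sketch. The keyword-overlap retrieval is a toy stand-in for real vector search, and call_llm is a hypothetical placeholder for whatever model you would actually use:

```python
# Retrieve-then-synthesize over an in-memory "enterprise" corpus.
def retrieve(query: str, corpus: dict[str, str], k: int = 2) -> list[str]:
    """Toy keyword-overlap retrieval; real systems use vector search."""
    q_terms = set(query.lower().split())
    ranked = sorted(
        corpus.values(),
        key=lambda text: len(q_terms & set(text.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def call_llm(prompt: str) -> str:
    # Hypothetical model call; wire in your provider's API here.
    return f"[synthesized answer based on a {len(prompt)}-character prompt]"

def answer(query: str, corpus: dict[str, str]) -> str:
    context = "\n---\n".join(retrieve(query, corpus))
    prompt = (
        "Using only the documents below, write a short synopsis that "
        f"answers the question.\n\nDocuments:\n{context}\n\nQuestion: {query}"
    )
    return call_llm(prompt)

docs = {
    "q3_report": "Sales are up in Q3, driven by the new product line.",
    "policy": "Travel expenses require manager approval.",
    "handbook": "Employees accrue PTO monthly.",
}
print(answer("Why are sales up in Q3?", docs))
```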

 

Kevin Donahue:

The promise of it, and how it’s changing my job day to day, is something I’m not only mindful of but also amazed by, and I’m sure you hear that a lot. That said, Christine, accuracy has been a much-discussed concern regarding the use of this technology. And there are other risks — accuracy only scratches the surface. Can you talk about some of the risks organizations need to be aware of as they venture into using these technology tools?

 

Christine Livingston:

Accuracy, or the concept of hallucination, as you mentioned, Kevin, has probably been the most widely talked about. There are also significant concerns around data privacy and security. You need to understand what happens when you put data into these models: How is it used in the future? How does the model learn from my interactions, and therefore from the data I provide it? Then there’s the security of those solutions. There have not been significant advances in the area of AI security; it has historically focused mostly on the application or access layer and the data layer — not a lot of focus on securing the model itself. And as you can now start to interact and engage with these models, potentially changing their behavior with your prompts and your input, there needs to be new focus on securing the AI itself.

Explainability and transparency are going to become increasingly important as we start to see regulations enter the space. It’s long been asked: How explainable is your model? Can you provide a justification for why a model made a recommendation? This is particularly challenging with generative AI and will be a necessary element of complying with regulation in the future. For example, New York City’s Local Law 144 is going to require that you can explain why you made employment or hiring decisions with artificial intelligence or automated algorithms. You’ll need to be able to explain and provide transparency into the decisions those models are making.

The last one, which is very recently coming to the surface, is understanding authenticity and ownership of the information you see. We’re starting to see things like deepfakes become much easier and more accessible to create. We’re able to create audio and video and images that look and feel like the real thing. And we’re also starting to see the first few lawsuits crop up around, who actually owns the output of some of these models, and how do you attribute AI as a creator, potentially, of information?

 

Kevin Donahue:

And Christine, I want to follow up on that point in particular in a moment. But first, can you expand a bit on hallucinations? I’ve heard that mentioned before, but I’m not sure I’m clear on what those are or how they are defined.

 

Christine Livingston:

I don’t love the term hallucination. It’s been adopted, so it is what it is. But the simpler way for me to think about it is to understand that a lot of these models learn through reinforcement. They are optimized to perform a specific function or task, and they’re rewarded when they do so. In a lot of these large language models and generative-AI text-based capabilities, the reward or the optimization is that the model provides a plausible- or reasonable-sounding response. That’s the objective, not necessarily understanding whether something is truthful or untruthful, or factual or not factual. They don’t really have a concept of truth or fiction. That’s, again, why I don’t love the term hallucination: They’re not hallucinating. They’re just doing what they were optimized to do, which is provide a response that sounds very plausible.

And they’re so plausible, and so reasonable. In fact, there’s a lawsuit now where an attorney used ChatGPT to prepare his brief, and it sounded so reasonable and so authentic that it was submitted. And when it was submitted, the opposing counsel said, “We can’t find any of these cited cases.” The output was so reasonable that it sounded like real law, real cases, real things, but none of them were real cases at all. That concept of hallucination is understanding that the model may produce an output that is not grounded at all in reality, because it wasn’t optimized for that.

 

Kevin Donahue:

And that is certainly a scary concept. And I’m sure companies and leaders are going to be keeping an eye on that as they figure out how to use these technologies.

 

Christine Livingston:

It’s important to think about and recognize that you’re not going to deploy an AI model in a silo. You need to think about the guardrails you’re going to put in front of and behind your model to ensure veracity and completeness. It can be a complicated architecture to think through, using the models in context and in concert with other technology platforms to produce the most value. But often, we tend to think of just the model. It’s not just the model that makes these AI systems; it’s the model and all the other components that go along with it that create safe, reliable and accurate systems.
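
As a rough illustration of that “model plus guardrails” architecture, here is a minimal sketch with one check in front of the model (screening the input) and one behind it (verifying the output before it reaches a user). The model call, the blocked-pattern list and the source-citation check are all hypothetical stand-ins; production guardrails would be far more extensive:

```python
import re

# In front of the model: keep restricted data out of prompts.
BLOCKED_PATTERNS = [r"\b\d{3}-\d{2}-\d{4}\b"]  # e.g., SSN-like strings

def input_guardrail(prompt: str) -> str:
    for pattern in BLOCKED_PATTERNS:
        if re.search(pattern, prompt):
            raise ValueError("Prompt contains restricted data.")
    return prompt

def call_model(prompt: str) -> str:
    # Hypothetical model call; a real one would hit your LLM of choice.
    return "Sales are up in Q3. [source: q3_report]"

# Behind the model: a simple veracity check before anything reaches a user.
def output_guardrail(response: str) -> str:
    if "[source:" not in response:  # require a citation, else escalate
        return "Answer could not be verified; routed for human review."
    return response

def answer(prompt: str) -> str:
    return output_guardrail(call_model(input_guardrail(prompt)))

print(answer("Summarize Q3 performance."))
```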

 

Kevin Donahue:

Which include the right data sets, and understanding how that data is being created and the accuracy of that data.

 

Christine Livingston:

Absolutely. The data is key.

 

Kevin Donahue:

Christine, you talked a bit before about copyright and IP, or intellectual property. What more can you say about the type of concerns that generative AI brings to the table in that space? I’m a content producer, and we’ve talked a lot about leaning on generative AI for assistance, but not relying on it, because we may not know where that information is coming from — or, to your point about hallucinations, if it’s accurate.

 

Christine Livingston:

It’s a very complicated topic, and I’ve heard a couple of very logical, grounded concerns here. One becomes more of a legal question in some sense. When you start to use these models and they’re producing recommendations or drafting outlines or helping you optimize a paragraph, or whatever that may be, you don’t know where the information and the data used to create that recommendation came from. You can’t be sure it didn’t come from a competitor, and you can’t be sure it didn’t come from copyrighted material or someone else’s IP. You don’t know that for sure unless you trust the model creator itself not to have committed that kind of copyright infringement.

Now, what copyright infringement actually means for some of these models is still being defined: There are cases starting to surface now that will shape our perspective on that. Some new cases say, “Your model read our copyrighted material and produced a synthesis. That’s copyright infringement.” The flip side of that discussion I’ve seen is this: A person can go and read your copyrighted material and create a synopsis, and you wouldn’t pursue legal action against that person for writing a synopsis. So why is it different when it’s a machine rather than a human? We’ll see what happens in the next few months and years as some of these lawsuits play out.

But there’s certainly a lot of concern around, where did this information come from? Is it truly unique? Does someone else potentially own it? And as I create new information, can I copyright it if AI helped me produce it? There’s that side of the discussion happening as well. And particularly, when it comes to IP and IP infringement, there are a lot of discussions happening about, how can you cite AI as a co-creator?

 

Kevin Donahue:

That’s really helpful. Thanks again. As a content producer, that is of particular interest to me. You’ve touched on some things regarding bias already, but what are some of the potential problems, in the eyes of a board member or C-suite leader, created by unintended bias developed by generative-AI models?

 

Christine Livingston:

There certainly are a couple to think of. The first is the data these systems were trained on, which plays a huge role. It’s important to understand: Is there an inherent bias, one I might not even be aware of, in the data set itself? For example — this was a fairly public example seven years ago or so — Amazon was experimenting with using artificial intelligence to predict the company’s next leaders, its high-potential high performers. It was trained using the résumés of CEOs — highly successful people, historically. And what they found was that the model, when trained on that data, had a tendency to recommend white men over candidates of any other race, gender or ethnicity. It was not an intentional bias, but something that happened because, historically, that was what those leaders had looked like in the data provided to the model.

As people train these models, it’s important not to lose sight of that: People are inherently training and optimizing these models. If your AI trainer or AI optimizer happens to hold a personal bias, how will they train or optimize the model? It’s very hard to retrain yourself and unthink those biases, but recognize that they can come through as you start to train these models; our own human biases may transfer onto the models. And it’s critically important to measure and monitor not just as you deploy them, but to continue those measurements over time, as models may drift and evolve.
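
As one example of what that ongoing measurement could look like, here is a minimal sketch that computes selection rates by group for each evaluation period and flags when the ratio between groups drops below the four-fifths threshold sometimes used as a screening heuristic in employment contexts. The data, group labels and threshold are illustrative only:

```python
from collections import defaultdict

def selection_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """Fraction of candidates recommended by the model, per group."""
    totals: dict[str, list[int]] = defaultdict(lambda: [0, 0])
    for group, selected in decisions:
        totals[group][0] += int(selected)
        totals[group][1] += 1
    return {g: sel / n for g, (sel, n) in totals.items()}

def disparate_impact(decisions: list[tuple[str, bool]]) -> float:
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# One evaluation period's (group, was_recommended) outcomes; rerun this
# every period so drift in the model shows up as a falling ratio.
period = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
ratio = disparate_impact(period)
print(f"Disparate-impact ratio: {ratio:.2f}")
if ratio < 0.8:  # illustrative "four-fifths rule" screening threshold
    print("Flag for review: selection rates diverge across groups.")
```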

 

Kevin Donahue:

This has been a fantastic discussion. Thank you, Christine. A reminder for our listeners: You can find more information about generative AI and artificial intelligence from Protiviti on our website — specifically, our page focused on artificial intelligence. Christine, one more question for you. What are some key questions and considerations for leaders today as they determine where and how they should implement generative AI in their organization?

 

Christine Livingston:

There are a couple of important questions to think about. One, is it here to stay? It is, and you should be thinking now about where and how you’re going to use it, and about what your competitors are doing with this technology. It’s also important to understand whether there is a strategy for which use cases are being deployed and how they’re being deployed, and how you’re selecting and prioritizing opportunities for those use cases. That’s critically important. Recognize that there needs to be meaningful business value.

And as you’re starting to embark on your generative AI journey in particular: Have you designed governance frameworks and cross-functional solution teams that can look at all aspects of these models and govern and manage them appropriately? Do you have the right resources and the right skill sets? And how are you going to evaluate, over time, that your models continue to adhere to the governance frameworks and principles you set up when they were originally created? There are tons of opportunities, and it’s a great time to be thinking about how you can use these capabilities, and to be experimenting, as you work to drive your business forward.

 

Kevin Donahue:

My thanks to Christine for sharing her interesting and informative insights on generative AI and its promise for transforming how organizations worldwide operate. A few things stood out to me about the questions companies are asking right now: They fall into four areas where gen AI can support them: decision support, customer experience, knowledge management, and process efficiency and automation. The risks stood out to me as well: security, hallucinations — which we talked about quite a bit — explainability, and authenticity and ownership of information. It’s going to be interesting to see how these issues and opportunities play out in the coming months and years. For more information, you can visit the Artificial Intelligence Services page on the Protiviti website.

And finally, I encourage you to please subscribe to our Powerful Insights podcast series and review us wherever you get your podcast content.
