Generative AI: Business Rewards vs. Security Risks

The Second Annual Study, conducted by ISMG and sponsored by Protiviti

Why It Matters:
Generative AI (Gen AI) has emerged as a transformative force across industries. However, its powerful capabilities demand careful planning and execution. Understanding Gen AI's implications - both the opportunities and the risks - is essential for staying competitive and secure in the digital age. This report sheds light on how businesses are leveraging AI to drive innovation while navigating the obstacles and complexities of its integration.

What's Happening:
This survey captures the pulse of the business world as it relates to Gen AI. The findings show how organisations are currently integrating Gen AI technologies, identifying the key trends and practices driving successful adoption. From improved productivity to evolving security concerns, the report provides an in-depth look at the real-world applications and challenges companies are experiencing today.

The Bottom Line:
By downloading the full report, you'll gain valuable insight into how Gen AI is shaping the future of business and security. The findings emphasise the need for organisations to embrace Gen AI strategically, balancing innovation with risk management.

Key Report Insights from Christine Livingston

Embracing Generative AI

What's driving the sharp decline in organisations prohibiting generative AI use - reduced concerns about generative AI, greater confidence in technical and procedural protections, fear of missing out on potential gains, or something else, like shadow AI?

There are a couple of reasons we see that number dropping. First, even organisations that outright blocked many of the public generative AI sites found that a large percentage of their employees were still using those technologies.
We've seen that consistently across many different organisations, so a pure block or ban was not very successful. They've also started to see some of the early-stage pilots and prototypes reach production: the number of projects in production almost doubled in the last year. As those use cases move into production, organisations begin to recognise the tangible value AI is creating, and it becomes much harder to ignore the potential and the possibilities. We have also come a long way in understanding the potential pitfalls and risks of generative AI specifically, and we now have better options for mitigating and managing them.

Accelerating AI adoption

With Gen AI usage in production doubling from 15% to 31% in a year, and given all the hype around AI, is that growth slower than you would have expected, or faster?

It mirrors what we're generally seeing. It's perhaps a little slower than I would have expected, but many of these applications are first in class: they're proving an unproven concept applied to a new business process, and that requires a lot of planning and technical validation that the solution will work the way you expect. If you look at last year's results, about 27% of respondents said they had plans to implement AI, and about 28% said they were in the pilot phase. If you take those 28% in pilot last year and consider where they are a year later, you land almost exactly on the growth trajectory we would have expected. As people budget, experiment and then release a prototype or proof of concept, this naturally becomes the solution's trajectory into production.

Evaluating AI ROI

Productivity gains seem to vary widely, with some respondents reporting exceptional ROI while others consider any ROI an illusion.
What factors drive this disparity in responses?

One of the most significant challenges organisations face today is selecting the appropriate use case. Many have chosen what appeared to be the simplest option without fully understanding the business impact or value of applying AI in that function, and they may not have quantified an expected ROI before they began experimenting. I've seen a huge range of value delivered depending on the use case clients pursue and the organisation's approach. The other factor tied to ROI is that most people experimenting with this technology use it as consumers, so they're familiar with how it works. However, operationalising an enterprise use case for generative AI is much harder than light experimentation. Enterprises often underestimate the challenges of integrating AI with their data, establishing governance principles, setting up guardrails and ensuring a justifiable business case. These complexities ultimately shape ROI outcomes.

Addressing AI risks

Are the concerns shared by respondents, particularly around data leakage and unreliable results, aligned with what you or your organisation identifies as the most important risks, or do you see other risks as more critical?

Unreliable results, often referred to as hallucinations, are probably one of the predominant concerns for organisations today. How do we trust the answers? There is also a lot of concern about how we retrain our employees and our organisations to use these results appropriately and accurately. I often use the analogy of reviewing a spreadsheet or a forecast: I know which cells to look in and which formulas to validate to confirm the accuracy of the data I'm seeing and using. With generative AI, we don't necessarily know yet where to look or how to validate the responses and outputs.
Absent a clear citation of source data, we're still learning the behaviours and habits for interpreting and using the responses and outcomes responsibly. Many organisations are moving these capabilities in-house - within the confines of their own cloud platforms - to ensure their data is not used to retrain models and does not enter the public domain. These, in my view, are the top risks clients are most concerned about today.

What should we do to mitigate the risks associated with generative AI?

One key point I emphasise is that not all risks are created equal. Organisations should focus on stratifying risks into low, medium and high categories and designing governance processes, policies and technologies accordingly. These discussions are complex and need to be centred on the use case to make sure your risk mitigation is appropriate. Retrieval-augmented generation (RAG), output comparison and model evaluation are commonly used tactics. However, a significant challenge for many organisations lies in performing the initial risk classification and turning it into actionable strategies. Developing a target operating and governance model that provides support from the people, process and technology standpoint is essential for effective risk management.
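As a rough illustration of the stratification approach described above - not a method from the report - the idea of classifying a use case into a low, medium or high risk tier and mapping each tier to mitigation tactics (such as RAG, output comparison and human review) can be sketched in a few lines of code. All factor names, weights and thresholds below are hypothetical assumptions.

```python
# Hypothetical sketch of use-case risk stratification; not from the report.
# Factor names, weights and tier thresholds are illustrative assumptions.

RISK_FACTORS = {
    "handles_customer_pii": 3,
    "output_reaches_customers_unreviewed": 3,
    "used_for_regulated_decisions": 3,
    "uses_public_model_endpoint": 2,
    "no_source_citations": 1,
}

# Example mitigations per tier, echoing tactics named in the interview
# (RAG, output comparison, model evaluation) plus human review.
MITIGATIONS = {
    "low": ["acceptable-use policy", "periodic spot checks"],
    "medium": ["RAG with citation of source data", "model evaluation before release"],
    "high": ["RAG with citations", "output comparison across models",
             "human review of every output", "in-house/private hosting"],
}

def classify_use_case(factors: set) -> str:
    """Map a set of risk factors to a low/medium/high tier by summed weight."""
    score = sum(RISK_FACTORS.get(f, 0) for f in factors)
    if score >= 5:
        return "high"
    if score >= 2:
        return "medium"
    return "low"

# Usage: an internal summarisation tool vs. a customer-facing chatbot.
internal = classify_use_case({"no_source_citations"})
chatbot = classify_use_case({"handles_customer_pii",
                             "output_reaches_customers_unreviewed"})
print(internal, MITIGATIONS[internal])  # low-tier mitigations
print(chatbot, MITIGATIONS[chatbot])    # high-tier mitigations
```

The point of such a sketch is the shape of the process the interview describes: classify first, then attach governance controls proportionate to the tier, rather than applying one uniform policy to every use case.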
One of the most significant challenges for organisations today is selecting the appropriate use case. Many organisations have chosen what appears to be the simplest option, without fully understanding the business impact or value of utilising AI in that function.
- Christine Livingston, Managing Director, Global AI Leader

Learn about our AI services