White House Issues Executive Order to “Seize the Promise and Manage the Risks” of AI

What you should know: In his first executive order on artificial intelligence, President Biden is directing various federal agencies to identify the risks of the technology as well as harness the benefits.

Why it matters: Regulatory and legislative changes are coming, so it's important to watch the details.

Our insights: In this Flash Report, we summarise the key directives contained in the executive order, address key takeaways and concerns for organisations to consider, and outline steps businesses can take to prepare for changes in the AI regulatory landscape.

In his first executive order (EO) on artificial intelligence (AI), President Biden is directing various federal agencies to identify the risks of the technology as well as harness the benefits. This is something to watch because the details will be important: regulatory and legislative changes are coming.

Key Directives in the Executive Order

Among the key directives government agencies need to address, per the EO, are the following:

- Directing the development of new standards and rules across federal agencies
- Initiating action to address the most concerning implications of the broad adoption of AI in public and private sectors
- Catalysing research and assistance across impacted ecosystems
- Calling on Congress to pass a federal privacy bill
- Aligning with other global actions (G7, European Union Artificial Intelligence Act, etc.)

Numerous Questions Remain

Despite efforts to be as comprehensive as possible, the EO raises several questions that will need to be addressed before businesses can fully determine the impact that forthcoming EO implementation may have:

- How is “safety” defined and by whom?
- How will AI developers and users demonstrate that models are free of discrimination and bias?
- How will small- to medium-sized businesses navigate the new rules and regulations (domestic and international), given the “moat” that may already exist around big tech players?
- How do forthcoming standards apply to AI solutions already in use versus newly developed ones?
- How will this impact the balance between regulation and free speech?

There are other concerns that business leaders need to be aware of; for example, how the open-source community and those that leverage open-source AI tools and models will be impacted. A significant open-source presence already exists with no single corporate entity responsible for development. Currently, more than 225 transformer models (the architecture behind the wildly popular ChatGPT) are available on the popular open-source AI community, HuggingFace, with more to come. These models are also being made widely available through hyperscalers such as AWS and GCP.
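To put that accessibility in perspective, pulling down and running one of these open models takes only a few lines of code. The sketch below is a minimal illustration, assuming the open-source transformers library; the gpt2 model name is an illustrative choice on our part, not one cited in the EO.

```python
# Minimal sketch: loading and running a freely available open-source model.
# Assumes the `transformers` library is installed; gpt2 is used only as an example.
from transformers import pipeline

# Downloads the model weights from the Hugging Face Hub on first use.
generator = pipeline("text-generation", model="gpt2")

result = generator("Open-source AI models are widely available because", max_new_tokens=30)
print(result[0]["generated_text"])
```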
Key Takeaways and Concerns

- Standardisation is key: The initiative's ultimate success will depend on the ability of a multitude of agencies to put in place the essential definitions, guardrails and regulations necessary to provide sufficient guidance to enable the power of AI while navigating both known and unknown risks. New AI safety and security standards will be developed for powerful AI systems, intended to apply to certain models that pose a serious risk to national security, national economic security, or national health and safety.

- Definition of safe: The EO directs NIST to develop standards, tools and tests for safe systems. However, there remains a lack of specific guidance on what constitutes a safe system, and the directive fails to distinguish between the risks presented by AI software alone and those presented by the combination of AI software and hardware.

- Safety testing: The developers of these powerful AI systems may be required to share critical information with the federal government; specifically, “developers of foundation models must share the results of all red-team safety tests.” Traditionally, red-team safety tests focus on manipulating a model (through simulated adversarial attacks), not necessarily on whether the model is inherently biased, safe or responsible. As noted above, those determinations depend on standardised definitions and evaluation mechanisms.

- AI-generated content labeling: To address fraudulent and deceptive uses of AI, the EO directs the Department of Commerce to develop standards and best practices for detecting and authenticating official content, including watermarks to distinctly label AI-generated content. This proactive measure could be a powerful tool for distinguishing between content created by humans and content produced by AI; however, the technology to detect AI-generated content is still a work in progress. As of July 20, 2023, OpenAI withdrew the tool it had offered for detecting AI-written text, stating that its “AI classifier is no longer available due to its low rate of accuracy.”

- Federal privacy law: The EO calls on Congress to pass bipartisan privacy legislation to protect all citizens and asserts that federal support for the development of privacy-preserving technology should be prioritised. It also establishes a new “Research Coordination Network” to advance rapid breakthroughs in privacy. As AI technology continues to evolve, the EO turns the privacy lens on the government itself by directing an evaluation of how agencies collect and use commercially available information and personally identifiable data. Privacy is a double-edged sword for AI: large amounts of data (which may include sensitive information) may be required to train models, which in turn may put individuals' personal data at risk.

- Mitigating bias: The EO directs government agencies to take steps to address algorithmic bias and discrimination related to AI systems, calling for the development of guidance and best practices for mitigating algorithmic discrimination, ensuring AI systems comply with civil rights laws, conducting evaluations to detect unjust impacts on protected groups, and providing for human consideration of adverse decisions made by AI systems. It further directs the Department of Justice to coordinate civil rights enforcement regarding AI discrimination. Also of note, the EO encourages the Director of the Federal Housing Finance Agency and the Director of the Consumer Financial Protection Bureau to consider using their authorities to apply appropriate methodologies, including AI tools, to ensure compliance with federal law. This includes evaluating their underwriting models for bias or disparities affecting protected groups and evaluating automated collateral valuation and appraisal processes. (A simple illustration of what such a disparity evaluation can look like follows this list.)
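As a concrete illustration of what “conducting evaluations to detect unjust impacts on protected groups” can involve in practice, the sketch below compares favourable-outcome rates across groups, a common starting point for such checks. The sample data, group labels and 0.8 (“four-fifths”) threshold are illustrative assumptions on our part, not requirements drawn from the EO.

```python
# Hypothetical sketch of a group-disparity check: compare approval rates across groups
# and flag any group whose rate falls well below the highest-rate group.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

# Illustrative sample decisions only.
decisions = [("group_a", True), ("group_a", True), ("group_a", False),
             ("group_b", True), ("group_b", False), ("group_b", False)]

rates = selection_rates(decisions)
benchmark = max(rates.values())
for group, rate in rates.items():
    ratio = rate / benchmark
    status = "review" if ratio < 0.8 else "ok"  # illustrative threshold
    print(f"{group}: approval rate {rate:.2f}, ratio to highest group {ratio:.2f} -> {status}")
```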
These broad mandates underscore the federal government's determination to protect core national interests. An intriguing aspect of these proposed standards is their reliance on existing agency enforcement: unlike some other jurisdictions that have introduced specific AI regulatory bodies, the standards do not create new enforcement agencies or mechanisms.

What Businesses Can Do to Prepare for Changes in the AI Regulatory Landscape

While much work remains to be done before the EO has an appreciable impact on how businesses use AI, some preparatory steps will still prove beneficial for those developing and deploying AI:

- Establish an AI code of ethics aligning with your corporate ethics and mission.
- Develop an inventory of AI/ML systems and a repository of algorithms, documenting each model's purpose, data sources, key parameters and business context (see the sketch after this list).
- Formalise an AI governance structure, including roles and responsibilities, and plans for red teaming and auditing.
- Evaluate AI products/services and uses against a standard framework such as the NIST AI Risk Management Framework.
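As a concrete starting point for the inventory step above, the sketch below shows what a single inventory record might capture, mirroring the fields suggested in that list. The class and field names are hypothetical, not a prescribed schema.

```python
# Minimal sketch of one entry in an AI/ML model inventory; field names are illustrative.
from dataclasses import dataclass

@dataclass
class ModelInventoryEntry:
    name: str                       # internal identifier for the model
    purpose: str                    # what business decision or task it supports
    business_context: str           # owning function and where the output is used
    data_sources: list[str]         # training / input data lineage
    key_parameters: dict[str, str]  # architecture, version, notable configuration
    risk_tier: str = "unassessed"   # e.g. rating against NIST AI RMF-aligned criteria
    last_reviewed: str = ""         # date of last governance or red-team review

# Illustrative example record.
entry = ModelInventoryEntry(
    name="credit_risk_scorer_v2",
    purpose="Pre-screen consumer credit applications",
    business_context="Retail lending; output reviewed by underwriting team",
    data_sources=["core_banking.loans", "bureau_feed_2023"],
    key_parameters={"model_type": "gradient boosted trees", "version": "2.3"},
)
print(entry)
```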
Closing Thoughts

While the success of the EO is contingent on the development and execution of plans and policies by government agencies, it signals growing public awareness of the potential privacy, ethical and security challenges presented by AI and of the need to address them. Taking the suggested steps above will help businesses future-proof themselves for forthcoming regulations and, ultimately, better leverage the capabilities of AI.