Emerging Risk Categories: Economic, Technological, Societal
Industries Impacted: Financial Services, Technology, Healthcare & Life Sciences
Machine learning, also known as Analytics 3.0, is the latest development in the field of data analytics. Machine learning allows computers to take in large amounts of data, process it, and teach themselves new skills using that input. It’s a way to achieve artificial intelligence, or AI, using a “learn by doing” process.
Machine learning enables computers to learn and act without being explicitly programmed. It evolved from the study of pattern recognition and from the design and analysis of algorithms that learn from data, making data-driven predictions or decisions possible. It is so pervasive today that many of us likely use it several times a day without even knowing it.
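To make "learning from data" concrete, the following is a minimal, hypothetical sketch (not from this report): instead of being explicitly programmed with the rule relating an input to an output, the program infers the rule from example pairs and then makes a data-driven prediction for an input it has never seen.

```python
# Minimal sketch of "learning from data": the rule y = a*x + b is never
# hard-coded; the program infers slope and intercept from examples.
# Hypothetical illustration only.

def fit_line(xs, ys):
    """Ordinary least-squares fit of a one-variable linear model."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var                # learned slope
    b = mean_y - a * mean_x      # learned intercept
    return a, b

# Example pairs following the (unstated) rule y = 2x + 1
xs = [1, 2, 3, 4, 5]
ys = [3, 5, 7, 9, 11]
a, b = fit_line(xs, ys)

# Data-driven prediction for an input the program has never seen
prediction = a * 10 + b
print(a, b, prediction)
```

The same principle scales up: real machine learning systems fit far richer models to far larger datasets, but the core idea of inferring behaviour from examples rather than explicit instructions is the same.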
In earlier stages of analytics development, the companies that benefited most from the new field were the information firms and online companies that saw and seized the opportunities of big data before others. The ability to provide much-needed data and information represented a clear first-mover advantage for these companies. While the first movers in big data were the big winners, their advantage won’t last much longer as productivity levels out. The evolution to Analytics 3.0 is a game changer because the range of business problems that intelligent automation — a mixture of AI and machine learning — can solve is increasing every day. At this stage, nearly every firm in any industry can profit from intelligent automation. Companies that invest in machine learning now have the potential to gain long-term benefits, profiting from the work of analytics pioneers. To gain these benefits, companies must rethink how the analysis of data can create value for them in the context of Analytics 3.0.
In PreView, Volume 2, Issue 2, we highlighted the challenges that investors in AI face, including high research and development costs and the difficulty of retaining people with the right skill sets. Still, we believe that the long-term benefits outweigh the costs. The biggest downside of not adopting AI, and specifically machine learning, early is that firms delay the opportunity to profit and risk displacement by the early movers.
Beneficial Applications Versus Risks of Machine Learning
Key Considerations and Implications
- The moral component — The level of intelligence and “morality” that a machine exhibits is a direct result of the data it receives. One consequence is that, depending on the data input, machines may train themselves to work against the interests of some humans or become biased. Failure to remove bias from a machine learning algorithm may produce results that are not in line with the moral standards of society. Yet not all researchers, scientists and experts believe that AI will be harmful to society. Some believe that AI can be developed to mirror the human brain and acquire a human-like moral psychology that enhances society.
- Accuracy of risk assessments — Risk assessments are used in many areas of society to evaluate and measure the potential risks involved in specific scenarios. The increasing popularity of using AI risk assessments to make important decisions on behalf of people is a direct result of the growing trust between humans and machines. However, there are serious implications to note when using a machine learning system to make risk assessments. One quantitative analyst estimates that some machine learning strategies may fail up to 90 percent of the time when tested in a real-life setting. The reason is that while the algorithms used in machine learning are trained on an almost limitless number of data points, much of that data is very similar. For these machines, finding a pattern is easy; finding a pattern that fits every real-life scenario is difficult.
- Transparency of algorithms — Supporters of transparency in AI advocate for the creation of a shared, regulated database that no single entity controls or has the power to manipulate; however, there are many reasons why corporations are not encouraging this. While transparency may be the solution to creating trust between users and machines, not all users of machine learning see a benefit there.
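The failure mode behind the accuracy concern above is often overfitting: a model that matches its training data perfectly can still fail badly on data it has never seen. The hypothetical sketch below (invented numbers, not from the report) fits a polynomial exactly through noisy observations of a simple underlying rule, then shows how poorly it predicts a nearby unseen point.

```python
# Hypothetical sketch of out-of-sample failure (overfitting): a model with
# zero error on its training data can still be badly wrong on unseen data.

def lagrange_interpolate(points, x):
    """Evaluate the polynomial that passes exactly through every point."""
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for j, (xj, _) in enumerate(points):
            if i != j:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Noisy observations of the simple underlying rule y = x (invented data)
train = [(0, 0.0), (1, 1.3), (2, 1.8), (3, 3.2), (4, 3.9), (5, 5.1)]

# The interpolating polynomial has essentially zero error on seen data...
train_err = max(abs(lagrange_interpolate(train, x) - y) for x, y in train)

# ...but a large error at the unseen input x = 6, where y should be about 6
test_err = abs(lagrange_interpolate(train, 6.0) - 6.0)
print(train_err, test_err)
```

The model "found a pattern" in the training data, but not one that fits the real-life scenario just outside it, which is why strategies that look flawless in backtesting can fail when deployed.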
Next, we highlight some of the ways these implications play out in several industries.
Spotlight: Industry Implications
Today, artificial intelligence makes it possible to predict the likelihood of a heart attack with much better accuracy than before. While manual systems are able to make correct predictions with around 30 percent accuracy, a machine learning algorithm created at Carnegie Mellon University was able to raise the prediction accuracy to 80 percent. In a hospital, a prediction at that level of accuracy could theoretically give a physician four hours to intervene before the occurrence of a life-threatening event.
However, the accuracy of risk assessments in the medical field may vary depending on the level of bias in the research used to train the machine learning algorithm. For instance, most heart disease research is conducted on men, even though heart attack symptoms between men and women differ in some important ways. If the system is trained to recognise heart attack symptoms found in men, the accuracy of predicting a heart attack in women diminishes and may result in a fatality. For that reason, people who are affected by decisions based on AI risk assessments will want to know how these decisions are systematically made.
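The bias mechanism described above can be sketched in a few lines. Everything here is invented for illustration: the features, the numbers and the "male-typical" versus "female-typical" symptom profiles are hypothetical, not clinical data. A simple nearest-neighbour classifier trained only on male-typical presentations recognises those, but misses a presentation it was never shown.

```python
# Hypothetical sketch of training-data bias, with entirely invented numbers:
# a classifier trained only on "male-typical" symptom profiles recognises
# those, but misclassifies a "female-typical" presentation as harmless.

def nearest_label(train, x):
    """1-nearest-neighbour: return the label of the closest training example."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(train, key=lambda item: dist(item[0], x))[1]

# Invented features: [chest pain, arm pain, nausea, fatigue] on a 0-1 scale
train_male_only = [
    ([0.9, 0.8, 0.1, 0.2], "heart attack"),  # male-typical presentation
    ([0.8, 0.7, 0.2, 0.3], "heart attack"),
    ([0.1, 0.1, 0.1, 0.2], "no event"),
    ([0.2, 0.0, 0.2, 0.1], "no event"),
]

male_case = [0.85, 0.75, 0.15, 0.25]   # resembles the training data
female_case = [0.3, 0.1, 0.8, 0.9]     # nausea/fatigue-led presentation

print(nearest_label(train_male_only, male_case))    # → heart attack
print(nearest_label(train_male_only, female_case))  # → no event (missed)
```

The algorithm is not "wrong" by its own logic: it faithfully reproduces the patterns in its training data. The fatal gap is in what that data omitted, which is why affected people will want to know how such decisions are systematically made.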
Hedge funds, which have always relied heavily on computers to find trends in financial data, are increasingly moving toward machine learning. Their goal is to be able to automatically recognise changes in the market and react quickly in ways quant models cannot. Most of these algorithms are proprietary, for a reason. The risk of having transparency in this case is that as one fund becomes successful using a certain algorithm, others will want to mimic that company’s machine learning method, diminishing everyone’s success and creating an artificial market environment. For this reason, any regulation that attempts to control the transparency of AI must be suitable and appropriate to the various scenarios where AI is used.
The U.S. National Highway Traffic Safety Administration recently released guidelines for autonomous vehicles, requiring auto manufacturers to voluntarily submit their design, development, testing and deployment plans before going to market with their vehicles. Despite these efforts to increase the transparency around “the brains” deployed in autonomous vehicles, car manufacturers, tech companies and auto parts makers are in a tight competition to develop the software behind self-driving cars, and their need to keep development efforts under wraps to gain market advantage may end up hurting the future of autonomy.
In addition, the nature of machine learning itself makes it very difficult to prove that autonomous vehicles will operate safely. Traditional computer code is written to meet safety requirements and then tested to verify that those requirements are met; machine learning, by contrast, allows a computer to learn and perform at its own pace and level of complexity. The more automakers are willing to be transparent about the data they feed into their learning algorithms, the easier it will be for lawmakers and auto safety regulators to create laws that ensure the safety of consumers.