What AI Thinks It Can & Knows It Can’t
Part 2 in Mojo Trek’s Two-part Series on AI
Last month, we explored what AI is, where it is in use today, and which industries have made big inroads with AI to create competitive advantage. In this second installment of the series, we dive into some cans and can'ts of AI. AI offers many possibilities for creating value, increasing insight, driving innovation, and improving decision making. But, as with all tools, AI can't do it all. AI is a tool, not a strategic goal. Here's a look at opportunities and limitations to keep in mind.
AI CAN’T Replace Human Oversight
As incredible as machine intelligence’s potential is, it is still not a superhero technology able to remedy critical problems with a single algorithm. Like all technology, AI is a tool and it needs to be used in service of disciplined business strategies and, in most cases, in partnership with human counterparts. Consider, for example, Facebook and its ongoing difficulties with hate speech and misinformation.
For a long time, Facebook believed that it could eliminate these widespread problems algorithmically. Despite its numerous AI-driven algorithms and vast resources, Facebook has struggled to rein in offensive and dangerous content, from terrorist threats and suicide attempts to election interference and oppressive misogyny, xenophobia, racism, and anti-LGBTQ speech.
In 2017, before Congress, Facebook leaders pledged to hire 10,000 safety and security experts to address these issues. Today the challenges continue. Facebook's goal is to eliminate dangerous content, and AI is one tool in its arsenal, but not the only tool. This spring, Facebook's Chief Technology Officer, Mike Schroepfer, told Wired that while Facebook's hate speech algorithms have improved greatly, "humans are going to be in the loop for the indefinite future."
IBM, a pioneer in AI with its renowned Watson technology, has also learned hard lessons when it comes to people and machines. In 2015, IBM introduced Watson for Oncology, a tool for providing treatment recommendations for specific types of cancer. The problem was that the tool was viewed as redundant when it agreed with doctors' recommendations and simply was not trusted when it disagreed with oncologists. Because its algorithms were highly sophisticated and drew conclusions from complex and varied data, Watson for Oncology's recommendations could not be explained to doctors or patients. That was a recipe for mistrust, and the reason cancer centers and hospitals abandoned its use early on. The need for human understanding and trust far outweighed the processing magnificence of the tool.
AI CAN Find Better, Safer Paths Forward
AI’s predictive capabilities mean more knowledge is processed faster than ever. While Watson for Oncology’s early challenges underscored the ongoing need for human engagement in patient assessment and treatment, they did not mean the end of AI in healthcare. Today AI is used by numerous medical organizations to collect data on patients, their treatments, and how they respond. Chicago-based Tempus has established partnerships with cancer centers and academic medical centers and uses their data to help predict better paths forward for new patients, improving healthcare services and results. Rather than replacing or overriding medical professionals, AI is informing and supporting them.
These predictive capabilities are helping increase security and safety all over the world. For example, image recognition tools are being used to identify unsafe behaviors that might lead to property damage or personal injury, such as someone wearing a mask near a bank, a person approaching a school, or crowds rapidly growing in public spaces. These tools can help law enforcement and businesses maintain the safety and rights of citizens and consumers.
AI CAN’T Avoid Bias
AI is not bias free. Without rigorous development and checks, it will mirror the biases of the people who build it. Many businesses, for example, may one day rely on machine-based tools to support initiatives like increasing diversity and inclusion or creating more effective talent development and advancement programs. To do that well and without bias, it is critical to understand exactly “what’s inside the box.” That means knowing what you are purchasing if you implement ready-made AI tools, such as chatbots, libraries, or deep learning platforms, or leverage open source libraries. Using AI tools without understanding who built them, and where biases might unintentionally reside in the system, puts businesses at risk of deploying tools that deliver erroneous or unwanted outcomes.
AI CAN Be for Everyone
Today there are already millions of open source libraries that companies can use to develop their ML models and AI-enabled applications, signaling broader AI adoption to come over the next few years. Nevertheless, one of the biggest challenges that must be solved over the next three to five years is data availability, as most data is owned by large corporations and is not available to everyone. As access to large-scale data sets becomes more widely available, businesses of all sizes will be able to use the rapid processing power of AI to improve their customer engagement, products, and solutions. With the right understanding and approach, AI can and will be a powerful tool for any business looking to increase understanding, proactively identify issues and opportunities, and improve decision making with the latest and greatest knowledge.