Companies have always had to manage risks associated with the technologies they adopt to build their businesses. They must do the same when it comes to implementing artificial intelligence.
Some of the risks associated with AI mirror those of deploying any new technology: poor strategic alignment with business goals, a lack of skills to support initiatives and a failure to get buy-in throughout the ranks of the organization.
As such, executives should continue leaning on the same best practices that have guided the effective adoption of other technologies. Management consultants and AI experts advise CIOs and their C-suite colleagues to identify areas where AI can help them meet organizational objectives, develop strategies to ensure they have the expertise to support AI programs, and create strong change management policies to smooth and speed enterprise adoption.
However, executives are finding that AI in the enterprise also comes with unique risks that need to be acknowledged and addressed.
Here are five areas of risk that can arise as organizations implement and use AI technologies in the enterprise:
Lack of employee trust slows AI adoption
Despite its name and the canon of science fiction built on sentient versions of artificial intelligence, AI does not operate independently of human interaction; at some point in the business process, humans need to step in to take action based on guidance from the AI systems.
Not all workers, however, are ready to embrace their digital colleagues. According to the July 2019 "AI and Empathy" report from software maker Pegasystems, 35% of the 6,000 individuals surveyed said they were concerned that machines would take their jobs and 27% said they were concerned about "the rise of robots and enslavement of humanity."
Enterprise leaders clearly have some work to do when it comes to establishing worker trust in AI. Without that trust, AI implementation will be unproductive.
"I've seen cases where the algorithm works perfectly, but the worker isn't trained or motivated to use it," said Beena Ammanath, AI managing director at Deloitte Consulting LLP.
Consider, for example, what happens when workers don't trust an AI solution on a factory floor that determines a machine must be shut down for maintenance. "You can build the best AI solution -- it could be 99.998% accurate -- and it could be telling the factory worker to shut off the machine," Ammanath said. "But if the end user doesn't trust the machine, which isn't unusual, then that AI is a failure."
"Providing the right training to make sure your users are open to using AI, really focusing on adoption, is a big factor," she said.
Biases, errors are magnified by volume of AI transactions
At its most basic level, AI ingests large volumes of data and, using algorithms, learns to act on the patterns it finds in that data. When the data is biased or otherwise flawed, or when the algorithms themselves contain mistakes, the AI produces faulty results.
"The AI doesn't know what's important to you, it doesn't know your products, your processes, your customers," said Seth Earley, author of The AI-Powered Enterprise and CEO of Earley Information Science. AI is built on the fundamentals of the business, which it must be taught.
Human workers, of course, have biases and make mistakes as well, but the consequences of their errors are limited to the volume of work that they can do before the errors are caught -- which is often not very much. However, the consequences of biases or hidden errors in operational AI systems can be exponential.
"Humans might make 30 mistakes in a day, but a bot handling millions of transactions a day magnifies any error," said Chris Brahm, a partner and director at Bain & Co., and leader of the firm's global advanced analytics practice.
Good data is key to mitigating such AI risks, experts said.
"AI runs on data, and the data is more important than the algorithm. If we don't have the data to support the process, no algorithm will work. No algorithm will turn bad data into good data," Earley said.
AI can have unintended consequences, automate unethical practices
A grocery chain using AI to determine pricing could find that the system, after analyzing demographic data, generates significantly higher prices for food products in poor neighborhoods where there's no competition. Based on the data, the strategy might be logical, but is it the result the grocery chain had intended?
Experts have raised similar ethical concerns about using AI in many other enterprise functions, pointing out, for example, that AI systems used to sort through resumes have learned to favor certain types of candidates over others in what many consider questionable, unethical ways.
"AI has to be responsible and ethical," said Shervin Khodabandeh, a managing director and senior partner at Boston Consulting Group and co-leader of its AI business in North America.
Although organizational leaders may not be able to foresee every ethical consideration, Khodabandeh and others said enterprises should have frameworks to ensure that their AI systems contain the policies and guardrails to create ethical, transparent, fair and unbiased results. It is also important to have human employees monitoring systems to confirm the results meet the organization's established standards.
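One small piece of such a framework can be automated. The sketch below is a hypothetical example of a guardrail that compares favorable-outcome rates across groups and flags the system for human review when the gap exceeds a policy threshold; the group names and the 20% threshold are assumptions for illustration only.

```python
# Minimal sketch of one guardrail: compare model outcomes across groups
# and flag large gaps for human review. Names and thresholds are illustrative.
from collections import defaultdict

def outcome_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions: (group, favorable_outcome) pairs produced by the AI system."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        favorable[group] += int(outcome)
    return {g: favorable[g] / totals[g] for g in totals}

def needs_human_review(decisions, max_gap: float = 0.2) -> bool:
    """Flag the system when favorable-outcome rates diverge beyond max_gap."""
    rates = outcome_rates(decisions)
    return max(rates.values()) - min(rates.values()) > max_gap

decisions = [("group_a", True), ("group_a", True), ("group_b", False), ("group_b", True)]
if needs_human_review(decisions):
    print("Outcome gap exceeds policy threshold -- route to human reviewers.")
```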
Key skills may be at risk of being eroded by AI
After two plane crashes involving Boeing 737 Max jets, one in late 2018 and one in early 2019, some experts expressed concern that pilots were losing basic flying skills as they relied more and more on increasing amounts of automation in the cockpit.
Although those incidents are extreme cases, Brahm said AI could erode other key skills that enterprises may want to preserve in their human workforce.
"As you implement AI, do you lose skills?" he asked. "Do individuals and job functions become deskilled? Already we saw the deskilling of navigation. We've all become more reliant on Google Maps than we like to admit."
He said executives across industries should consider which everyday skills could be lost and whether they need to be preserved on some level.
Poor training data, lack of monitoring can sabotage AI systems
Microsoft released a chatbot named Tay on Twitter in 2016. Engineers had designed the bot to engage in online interactions and then learn patterns of language so that she -- yes, Tay was designed to mimic the speech of a female teenager -- would sound like a natural on the internet.
Instead, trolls taught Tay racist, misogynistic and anti-Semitic language, with her language becoming so hostile and offensive within hours that Microsoft suspended the account.
Microsoft's experience highlights another big risk with building and using AI: It has to be taught well to work right.
To prevent Tay from learning such bad behavior, Ammanath said engineers could have -- and probably should have -- designed Tay with guardrails to prevent her from mimicking certain words and phrases.
Moving forward with AI programs, organizations are advised to not only create such guardrails from the start but to also monitor what their AI is learning over time to ensure it has appropriate and complete information to reach the right conclusions and take the right actions.
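In concrete terms, a guardrail of the kind Ammanath describes can be as simple as a filter that sits between what a system learns and what it says. The sketch below is a hypothetical illustration, not Microsoft's implementation; the blocklist contents and helper names are placeholders, and a production system would pair keyword checks with a trained toxicity classifier.

```python
# Minimal sketch of a chatbot guardrail: a blocklist filter applied both
# before new user input is added to the learning set and before a learned
# reply is posted. Blocklist entries and function names are hypothetical.
BLOCKED_TERMS = {"blocked_term_1", "blocked_term_2"}  # placeholder entries

def is_safe(text: str) -> bool:
    """Reject text containing any blocked term."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

def learn_from_user(message: str, training_buffer: list[str]) -> None:
    """Only add user messages that pass the guardrail to the learning set."""
    if is_safe(message):
        training_buffer.append(message)

def respond(candidate_reply: str) -> str:
    """Suppress unsafe candidate replies instead of posting them."""
    return candidate_reply if is_safe(candidate_reply) else "Let's talk about something else."
```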
Consider, for example, how the pandemic led to a shortage of toilet paper on store shelves, Brahm said. Aggregate demand for toilet paper was probably about the same as it was pre-pandemic, but the pandemic shifted where that demand showed up, with institutional buyers such as schools cutting their purchases while consumers boosted theirs.
Suppliers that were using AI-based systems to forecast demand needed to adjust those systems to consider the highly unusual circumstances -- circumstances that couldn't have been part of any existing training data -- if they wanted to have accurate forecasts on where their next toilet paper shipments should go.
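The monitoring step implied here can also be sketched in code. The example below is a hypothetical drift check, assuming recent actuals and forecasts are available as simple lists; the error metric and the 25% threshold are illustrative assumptions, not a supplier's actual method.

```python
# Minimal sketch of forecast monitoring: compare recent actual demand with
# the model's forecasts and flag the forecaster for review or retraining
# when error spikes. Thresholds and sample numbers are illustrative.
def mean_abs_pct_error(actuals: list[float], forecasts: list[float]) -> float:
    """Mean absolute percentage error over the recent window."""
    return sum(abs(a - f) / a for a, f in zip(actuals, forecasts)) / len(actuals)

def forecast_needs_review(actuals, forecasts, threshold: float = 0.25) -> bool:
    """True when demand has drifted beyond what the trained model handles."""
    return mean_abs_pct_error(actuals, forecasts) > threshold

# Pandemic-style shift: consumer demand jumps while institutional demand collapses.
recent_actuals   = [130.0, 145.0, 160.0, 40.0]
recent_forecasts = [100.0, 100.0, 100.0, 100.0]
if forecast_needs_review(recent_actuals, recent_forecasts):
    print("Demand pattern has shifted -- retrain or manually adjust the forecast.")
```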
Sourced from TechTarget - Written by Mary Pratt