James Henderson

With in-house AI expertise lacking, enterprises must scrutinise third-party vendors

More than half of enterprise organisations have no in-house artificial intelligence (AI) development capabilities and rely exclusively on vendor tools and technologies, as the project pendulum swings in favour of outsourcing.

With 53% of companies reporting no internally designed or developed AI capabilities, worsening talent and resource constraints are placing increased importance on ecosystem alliances.

According to new research by MIT Sloan Management Review (MIT SMR) and Boston Consulting Group (BCG), widespread reliance on third-party AI is now commonplace in the enterprise.

Notably, 78% of businesses are “highly reliant” on accessing, buying, licensing or otherwise using third-party AI tools, including commercial APIs, pre-trained models and data.

“The AI landscape, both from a technological and regulatory perspective, has changed so dramatically since last year,” noted Elizabeth M. Renieris, Guest Editor at MIT SMR. “In fact, with the sudden and rapid adoption of generative AI tools, AI has become dinner table conversation. And yet, many of the fundamentals remain the same.”

In industries subject to non-AI-specific regulations, 77% of companies use third-party AI tools, a figure that jumps to 82% in sectors without such compliance obligations.

“Third-party AI tools, including open-source models, vendor platforms and commercial APIs, have become an essential part of virtually every organisation’s AI strategy in one form or another,” added Nitzan Mekel-Bobrov, Chief AI Officer at eBay.

Increased opportunity for ecosystem vendors and partners comes with a caveat, however: third-party AI tools can expose organisations to “all manner of risks”, including reputational damage, loss of customer trust, financial losses, regulatory penalties, compliance challenges and litigation.

According to the report, “outsourcing AI from third parties doesn’t inoculate organisations from these hazards”. The data bears this out: more than half (55%) of all AI-related failures, whether ethical or technical lapses, stem from third-party AI tools.

“Determining how to fully assess the risks (real or potential) in third-party AI tools can be challenging,” cautioned Steven Vosloo, Digital Policy Specialist at UNICEF.

In response, the report recommended using a wide variety of methods to evaluate third-party tools and mitigate risk.

For example, organisations that use seven different evaluation methods are more than twice as likely to uncover lapses as those that use only three (51% vs. 24%). These approaches include contractual language mandating adherence to responsible AI principles, vendor pre-certification and audits, internal product-level reviews, and adherence to relevant regulatory requirements and industry standards.

“Vigilance can help prevent such mishaps,” the report stated. “The more a company analyses third-party tools, including vendor certification and audits, the more AI failures they discover.”

Yet a fifth (20%) of organisations that use third-party AI tools fail to evaluate the risks at all.

“Unfortunately, companies are doing too little preventive oversight,” the report outlined. “Two thirds [68%] perform three or fewer checks on third-party AI solutions, so the true failure rate is likely much worse.

“Companies can rely on existing third-party risk management processes to evaluate AI vendors, but they also need AI-specific approaches like audits and red teaming. These approaches need to be dynamic and flexible, because AI is changing rapidly.”
