A consortium of tech companies, including Facebook Inc. and Alphabet Inc.’s Google, has released a set of benchmarks for evaluating the performance of artificial-intelligence tools, aiming to help businesses navigate the fast-growing field.
The benchmarks—which cover image recognition, object detection and machine translation—are meant to help companies compare various AI tools to see which work best for them as they pursue their own AI initiatives, said Peter Mattson, general chairman of the consortium, MLPerf, which counts 40 companies as members.
“For CIOs, metrics make for better products and services they can then incorporate into their organization,” said Mr. Mattson, a Google engineer.
The MLPerf benchmarks could, for example, evaluate the performance of an AI image-recognition model built with open-source machine-learning software from Google using neural-network architectures like ResNet-50 or MobileNet, both of which specialize in image recognition. Companies can use the results as a starting point in implementing AI by seeing which software, hardware and model architecture work best.
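As a rough illustration of the kind of measurement such a benchmark formalizes, the sketch below—which is not MLPerf's actual test harness—loads a pretrained MobileNetV2 image-recognition model through TensorFlow's Keras API and times repeated inference on a synthetic input. The model choice, input size and number of runs are arbitrary assumptions made for this example.

```python
# Illustrative sketch only -- not MLPerf's benchmark harness.
# Assumes TensorFlow 2.x; MobileNetV2, the synthetic input and the
# run count are arbitrary choices made for this example.
import time

import numpy as np
import tensorflow as tf

# Load a pretrained image-recognition model (MobileNetV2 trained on ImageNet).
model = tf.keras.applications.MobileNetV2(weights="imagenet")

# A synthetic 224x224 RGB image stands in for a real input pipeline.
batch = np.random.rand(1, 224, 224, 3).astype("float32")

# Warm up once so one-time setup costs don't skew the timing.
model.predict(batch, verbose=0)

# Time repeated inference and report average latency per image.
runs = 50
start = time.perf_counter()
for _ in range(runs):
    model.predict(batch, verbose=0)
elapsed = time.perf_counter() - start
print(f"average latency: {1000 * elapsed / runs:.1f} ms per image")
```

Swapping in a different model, framework or processor and rerunning the same timing loop is, in miniature, the kind of apples-to-apples comparison the benchmarks aim to standardize.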
There are separate benchmarks for how AI tools perform on various platforms and devices, such as mobile phones, servers and chips in the cloud or in data centers. The results will vary. For example, a mobile phone typically doesn’t have as much processing capability as a desktop computer, limiting the phone’s ability to perform AI tasks like image recognition.
Organizations have been slow to adopt AI, despite the hype surrounding the emerging technology. In a 2018 survey by International Data Corp. of 2,473 organizations of various sizes across industries world-wide, 18% had AI models in production, 16% were in the proof-of-concept stage and 15% were experimenting with AI.
Among the roadblocks to AI adoption are the myriad tools and services available and the many decisions organizations need to make, from whether to run AI in the cloud to whether to experiment on graphics processing units, which specialize in video and graphics but now also handle AI workloads, or on central processing units, the general-purpose chips that run most computer operations.
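To make that hardware decision concrete, the short sketch below—again an illustration rather than anything prescribed by MLPerf—uses TensorFlow, assumed here only as an example framework, to check whether a graphics processor is available and to run a small computation on it, falling back to the CPU otherwise.

```python
# Illustrative sketch -- checks whether a GPU is visible to the program
# and runs a small computation on it, falling back to the CPU otherwise.
# TensorFlow is assumed here purely as an example framework.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
device = "/GPU:0" if gpus else "/CPU:0"
print(f"found {len(gpus)} GPU(s); running on {device}")

# Pin a small matrix multiplication to the chosen device.
with tf.device(device):
    a = tf.random.uniform((1024, 1024))
    b = tf.random.uniform((1024, 1024))
    c = tf.matmul(a, b)

print("result shape:", c.shape)
```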
David Schubmehl, research director for AI systems at IDC, said benchmarks can help companies better address the complexities of AI adoption, allowing them to make apples-to-apples comparisons among the many AI software and hardware tools available.
“It’s coming at a useful time as we’re seeing more organizations move from experimentation to production,” he said.
Pleasanton, Calif.-based startup ServiceChannel Inc., which provides facilities-management services via the cloud to clients in sectors including retail and food, sees benchmarks like MLPerf’s as an important consideration in automating how it sends contractors to locations to provide services, said Chief Executive Tom Buiocchi.
The company is in the process of using AI and other technologies to verify the identity and performance of its contractors, and a benchmark will give the company confidence that it is deploying the right solution, Mr. Buiocchi said.
MLPerf was formed in 2018 to fill a void for standardized AI benchmarks. Its first set of benchmarks, released in May 2018, measured the training of models, the brains behind AI implementations that learn to recognize images or voice. The newer benchmarks measure inference, or the results a trained model produces when put to work. MLPerf's current lineup includes representatives from Microsoft Corp., Intel Corp. and Landing AI, started by AI pioneer Andrew Ng.
For the new set of benchmarks, MLPerf pursued metrics for popular, broadly applicable applications such as voice recognition and computer vision, said Vijay Janapa Reddi, associate professor of electrical engineering at Harvard University and co-chairman of MLPerf's inference working group, which developed the standards.
There are many ways to implement AI, but the benchmarks are meant to identify optimal solutions.
“You can literally look at the results and understand the trade-offs at a higher level,” Mr. Reddi said.
Write to Agam Shah at firstname.lastname@example.org
(END) Dow Jones Newswires
June 25, 2019 15:30 ET (19:30 GMT)