The MLPerf consortium introduces machine learning inference benchmark

The MLPerf consortium, made up of more than 40 companies and university researchers, has introduced MLPerf Inference v0.5, the industry's first standard machine learning inference benchmark suite for measuring system performance and power efficiency.

The benchmark suite covers applications such as autonomous driving and natural language processing, across a variety of form factors: smartphones, personal computers, edge servers, and cloud computing platforms in datacenters.

Measuring inference performance shows how quickly a trained neural network can process new data.
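To illustrate the kind of measurement an inference benchmark performs, the sketch below times repeated forward passes of a stand-in model and reports throughput and tail latency. The model here is a trivial placeholder rather than an MLPerf workload, and all names and numbers are purely illustrative.

```python
import time
import statistics

def dummy_model(x):
    # Stand-in for a trained neural network: just sums the input.
    # A real benchmark would time a forward pass of a model like ResNet-50.
    return sum(x)

def benchmark(model, sample, iterations=1000):
    # Time each inference individually to capture the latency distribution,
    # since tail latency matters as much as average throughput.
    latencies = []
    for _ in range(iterations):
        start = time.perf_counter()
        model(sample)
        latencies.append(time.perf_counter() - start)
    latencies.sort()
    return {
        "throughput_qps": iterations / sum(latencies),
        "median_ms": statistics.median(latencies) * 1000,
        "p99_ms": latencies[int(0.99 * len(latencies))] * 1000,
    }

results = benchmark(dummy_model, list(range(1000)))
print(results)
```

Reporting a high percentile (here the 99th) alongside the median reflects how inference systems are typically judged: a deployment must meet latency targets consistently, not just on average.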

The suite consists of five benchmarks, focused on common ML tasks: image classification (predicting the label for a given image from the ImageNet dataset), object detection (locating objects with bounding boxes within images from the MS-COCO dataset), and machine translation (translating sentences between English and German, similar to auto-translate features in chat and email applications).
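To make the quality side of these benchmarks concrete, here is a minimal sketch of the top-1 accuracy metric used for image classification: the fraction of images whose predicted label matches the ground truth. The labels below are invented for illustration; a real run scores a model over the full ImageNet validation set against a defined quality target.

```python
def top1_accuracy(predictions, ground_truth):
    # predictions and ground_truth are parallel lists of class labels;
    # top-1 accuracy is simply the fraction of exact matches.
    correct = sum(p == g for p, g in zip(predictions, ground_truth))
    return correct / len(ground_truth)

# Hypothetical labels for five images, for illustration only.
preds = ["cat", "dog", "car", "dog", "bird"]
truth = ["cat", "dog", "car", "cat", "bird"]
print(top1_accuracy(preds, truth))  # → 0.8
```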

Reference implementations define the problem, the model, and the quality target, and provide instructions for running the benchmarks. They are available in the ONNX, PyTorch, and TensorFlow frameworks.

MLPerf was created in February 2018 by engineers and researchers from Baidu, Google, Harvard University, Stanford University, and the University of California, Berkeley. In May 2018, it released a benchmark suite for measuring training performance. Members include Arm, Cadence, Centaur Technology, dividiti, Facebook, Futurewei, General Motors, Google, Habana Labs, Intel, MediaTek, Microsoft, Myrtle, Nvidia, Real World Insights, the University of Toronto, and Xilinx.

"Benchmarks will accelerate the development of hardware and software for ML applications," said Vijay Janapa Reddi, associate professor at Harvard University and co-chair of the MLPerf Inference working group. The benchmarks are also intended to encourage innovation in academia and research institutions.

"Our goal is to create common and relevant metrics for assessing new machine learning hardware, accelerators, and cloud computing platforms in real-world situations," said David Kanter, co-chair of the MLPerf Inference working group.