1. Introduction
Introduction here.
2. Use cases
2.1. Application Use Cases
This section illustrates application-level use cases for neural network inference hardware acceleration. All applications in those use cases can be built on top of pre-trained deep neural network (DNN) models.
2.1.1. Person Detection
A user opens a web-based video conferencing application, but she temporarily leaves her room. The application watches whether she is in front of her PC by using object detection (for example, an approach such as [SSD] or [YOLO] that uses a single DNN) to detect regions of a camera input frame that contain persons.
When she comes back, the application automatically detects her and notifies other online users that she is active now.
2.1.2. Semantic Segmentation
A user joins a teleconference via a web-based video conferencing application at her desk since no meeting room in her office is available. During the teleconference, she does not want her room and the people in the background to be visible. To protect the privacy of the other people and the surroundings, the application runs a machine learning model such as [DeepLabv3+] or [MaskR-CNN] to semantically split an image into segments and replaces segments that represent other people and the background with another picture.
2.1.3. Skeleton Detection
A web-based video conferencing application tracks the pose of the user’s skeleton by running a machine learning model that allows for real-time human pose estimation, such as [PoseNet], to recognize her gestures and body language. When she raises her hand, her microphone is automatically unmuted and she can start speaking in the teleconference.
2.1.4. Face Recognition
There are multiple people in the conference room and they join an online meeting using a web-based video conferencing application. The application detects the faces of the participants by using object detection (for example, an approach such as [SSD]) and checks whether each face was present at the previous meeting by running a machine learning model such as [FaceNet], which verifies whether two face images belong to the same person.
2.1.5. Facial Landmark Detection
A user wants to find new glasses that fit her well on an online glasses store. The online store offers a web-based try-on simulator that runs a machine learning model such as the Face Alignment Network [FAN] to detect facial landmarks like eyes, nose, and mouth. When she chooses a pair of glasses, the simulator renders the selected glasses at the detected positions of her eyes on her facial image.
2.1.6. Style Transfer
A user is looking for cosmetics on an online store and wondering which color would suit her face. The online store shows sample facial makeup images for its cosmetics, and offers a makeup simulator that runs a machine learning model like [ContextualLoss] or [PairedCycleGAN] to transfer the makeup style of the sample image to her facial image. She can use the simulator to check how the selected makeup looks on her face.
2.1.7. Super Resolution
A web-based video conferencing application is receiving a video stream from its peer, but the resolution of the video becomes lower due to network congestion. To prevent degradation of the perceived video quality, the application runs a machine learning model for super-resolution, such as [SRGAN], to generate higher-resolution video frames.
2.1.8. Image Captioning
For better accessibility, a web-based presentation application provides automatic image captioning by running a machine learning model such as [im2txt], which predicts explanatory captions for the presentation slides.
2.1.9. Machine Translation
Multiple people from various countries are talking via a web-based real-time text chat application. The application translates their conversation by using a machine learning model such as [GNMT] or [OpenNMT], which translates text from one language into another.
2.1.10. Emotion Analysis
A user is talking to her friend via a web-based real-time text chat application, and she is wondering how the friend feels because she cannot see the friend’s face. The application analyses the friend’s emotion by using a machine learning model such as [DeepMoji], which infers emotion from input texts, and displays an emoji that represents the estimated emotion.
2.1.11. Video Summarization
A web-based video conferencing application records received video streams, and it needs to reduce the amount of recorded video data to be stored. The application generates a short version of the recorded video by using a machine learning model for video summarization, such as [Video-Summarization-with-LSTM].
2.2. Framework Use Cases
This section collects framework-level use cases for a dedicated low-level API for neural network inference hardware acceleration. It is expected that Machine Learning frameworks will be key consumers of the Web Neural Network API (WebNN API) and the low-level details exposed through the WebNN API are abstracted out from typical web developers. However, it is also expected that web developers with specific interest and competence in Machine Learning will want to interface with the WebNN API directly instead of a higher-level ML framework.
2.2.1. Custom Layer
A web application developer wants to run a DNN model on the WebNN API. However, she has found that some activation functions, such as [LeakyReLU] and [ELU], are not included in the WebNN API. To address this issue, she constructs custom layers for the additional activation functions on top of the WebNN API. Note that the scope of custom layers may include convolution, normalization, etc. as well as activation.
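As an illustrative sketch, a LeakyReLU custom layer could be composed from element-wise primitives. Note the assumptions: an element-wise max() operation on the context, which is not among the operations defined in this document, and an alpha constant shaped like the input.

// LeakyReLU(x) = max(x, alpha * x), composed from element-wise operations.
// nn.max() is an assumption: it is not defined in this document.
function leakyRelu(nn, x, xDescriptor, xSize, alpha = 0.01) {
  const alphaTensor = nn.constant(
      xDescriptor, new Float32Array(xSize).fill(alpha));
  return nn.max(x, nn.mul(alphaTensor, x)); // hypothetical nn.max
}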
2.2.2. Network Concatenation
A web application uses a DNN model whose data for the upper convolutional layers and the lower fully-connected layers are stored in separate files, since the data for the fully-connected layers is periodically updated due to fine-tuning on the server side.
Therefore, the application first downloads both partial model files and concatenates them into a single model. When the model is updated, the application downloads the fine-tuned part of the model and replaces only the fully-connected layers with it.
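A minimal sketch of this flow, using the API described in § 3. The file names, tensor shapes, and the stand-in operation for the fully-connected layers are illustrative assumptions only.

// Download the two partial model files; the fully-connected part is the one
// that is refreshed periodically (names and shapes are hypothetical).
const convBuffer = await (await fetch('model-conv-layers.bin')).arrayBuffer();
const fcBuffer = await (await fetch('model-fc-layers.bin')).arrayBuffer();

// Build one graph that consumes weights from both buffers.
const nn = navigator.ml.getNeuralNetworkContext();
const input = nn.input({ type: 'tensor-float32', dimensions: [1, 3, 224, 224] });

// Convolutional weights come from the first file...
const convFilter = nn.constant(
    { type: 'tensor-float32', dimensions: [16, 3, 3, 3] },
    new Float32Array(convBuffer));
const features = nn.conv2d(input, convFilter, [1, 1, 1, 1], [1, 1], [1, 1]);

// ...while the periodically fine-tuned weights come from the second file.
// When the server publishes new fully-connected weights, only fcBuffer needs
// to be downloaded again and this part of the graph rebuilt.
const fcWeights = nn.constant(
    { type: 'tensor-float32', dimensions: [1, 16, 224, 224] },
    new Float32Array(fcBuffer));
const output = nn.mul(features, fcWeights); // element-wise stand-in for the FC layers

const model = await nn.createModel([output]);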
2.2.3. Performance Adaptation
A web application developer is concerned about the performance of her DNN model on mobile devices. She has confirmed that it may run too slowly on mobile devices that do not have GPU acceleration. To address this issue, her web application refers to the WebNN API to confirm whether acceleration is available, so that the application can display a warning on devices without acceleration.
After several weeks, she has developed a tiny DNN model that can run even on a CPU. In order to accommodate CPU execution, she modifies the application so that it loads the tiny model on CPU-only devices.
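The API in this document does not yet define a capability query, so the check in the sketch below is a placeholder assumption; the surrounding model-selection logic is the part this use case calls for. The helper names and file names are hypothetical.

// Placeholder: assume some mechanism reports whether hardware acceleration is
// available (not defined by the API in this document).
const accelerationAvailable = await checkAccelerationSomehow(); // hypothetical

if (!accelerationAvailable) {
  // Hypothetical UI helper that displays the warning for unaccelerated devices.
  showWarning('No hardware acceleration detected; inference may be slow.');
}

// Load the tiny CPU-friendly model on devices without acceleration,
// and the full model otherwise.
const modelUrl = accelerationAvailable ? 'model-full.bin' : 'model-tiny.bin';
const modelBuffer = await (await fetch(modelUrl)).arrayBuffer();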
3. API
3.1. Navigator
partial interface Navigator {
  readonly attribute ML ml;
};
3.2. ML
interface ML {
  NeuralNetworkContext getNeuralNetworkContext();
};
3.3. OperandDescriptor
enum OperandLayout {
  "nchw",
  "nhwc"
};

enum OperandType {
  "float32",
  "float16",
  "int32",
  "uint32",
  "tensor-float32",
  "tensor-float16",
  "tensor-int32",
  "tensor-quant8-asymm"
};

dictionary OperandDescriptor {
  // The operand type.
  required OperandType type;

  // The dimensions field is only required for tensor operands.
  // A negative value means an unknown dimension.
  sequence<long> dimensions;

  // The following two fields are only required for quantized operands.
  // scale: a non-negative floating point value
  // zeroPoint: an integer, in range [0, 255]
  // The real value is (value - zeroPoint) * scale
  float scale;
  long zeroPoint;
};
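For example, a plain float32 tensor descriptor and a quantized 8-bit tensor descriptor (with the dequantization parameters described above) might look as follows; the dimension and quantization values are illustrative only.

// A 4-D float32 tensor operand descriptor.
const float32TensorType = {
  type: 'tensor-float32',
  dimensions: [1, 3, 224, 224]
};

// A quantized 8-bit tensor operand descriptor. Each stored value v maps to
// the real value (v - zeroPoint) * scale, i.e. (v - 128) * 0.0078125 here.
const quant8TensorType = {
  type: 'tensor-quant8-asymm',
  dimensions: [1, 3, 224, 224],
  scale: 0.0078125,
  zeroPoint: 128
};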
3.4. Operand
interface Operand {};
3.5. NeuralNetworkContext
interface NeuralNetworkContext {
  // Create an Operand object that represents a model input.
  Operand input(OperandDescriptor desc);

  // Create an Operand object that represents a model constant.
  Operand constant(OperandDescriptor desc, ArrayBufferView value);

  // Create a Model object by identifying output operands.
  Promise<Model> createModel(sequence<Operand> outputs);
};
3.5.1. add
partial interface NeuralNetworkContext {
  // Create an Operand object that represents the result of an element-wise
  // binary addition operation with input operands a and b.
  Operand add(Operand a, Operand b);
};
3.5.2. conv2d
partial interface NeuralNetworkContext {
  Operand conv2d(Operand input,
                 Operand filter,
                 sequence<long> padding,
                 sequence<long> strides,
                 sequence<long> dilations,
                 optional OperandLayout layout = "nchw");
};
- input: an Operand. The input 4-D tensor. The logical shape is interpreted according to the value of layout.
- filter: an Operand. The filter 4-D tensor. The logical shape is interpreted according to the value of layout.
- padding: a sequence of long of length 4. The padding for the beginning and ending along each spatial dimension of input, [beginning_height, ending_height, beginning_width, ending_width].
- strides: a sequence of long of length 2. The stride of the sliding window for each spatial dimension of input, [stride_height, stride_width].
- dilations: a sequence of long of length 2. The dilation factor for each spatial dimension of input, [dilation_height, dilation_width].
- layout: an optional OperandLayout with value "nchw" or "nhwc". The default value is "nchw". This argument specifies the layout format of the input, output and filter tensors. Specifically:
  - "nchw":
    - input tensor: [batches, input_channels, height, width]
    - filter tensor: [output_channels, input_channels, height, width]
    - output tensor: [batches, output_channels, height, width]
  - "nhwc":
    - input tensor: [batches, height, width, input_channels]
    - filter tensor: [height, width, input_channels, output_channels]
    - output tensor: [batches, height, width, output_channels]
- Returns: an Operand. The output 4-D tensor that contains the result of the convolution. The logical shape is interpreted according to the value of layout.
Computes a 2-D convolution given input and filter 4-D tensors.
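As an illustrative sketch (the descriptors, shapes, and weight buffer below are hypothetical), a 3×3 convolution over an NCHW input could be constructed as follows:

// Hypothetical shapes: one 3-channel 224x224 input, sixteen 3x3 filters.
const input = nn.input({ type: 'tensor-float32', dimensions: [1, 3, 224, 224] });
const filter = nn.constant(
    { type: 'tensor-float32', dimensions: [16, 3, 3, 3] },
    filterData);  // a Float32Array holding the pre-trained filter weights

// 1-pixel padding on each side, stride 1, no dilation, default "nchw" layout.
// The output shape would be [1, 16, 224, 224].
const output = nn.conv2d(input, filter, [1, 1, 1, 1], [1, 1], [1, 1], 'nchw');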
3.5.3. mul
partial interface NeuralNetworkContext {
  // Create an Operand object that represents the result of an element-wise
  // binary multiplication operation with input operands a and b.
  Operand mul(Operand a, Operand b);
};
3.6. Model
enum CompilationPreference {
  "low-power",
  "fast-answer",
  "sustained-speed"
};

interface Model {
  Promise<Compilation> createCompilation(CompilationPreference preference);
};
3.7. Compilation
interface Compilation {
  Promise<Execution> createExecution();
};
3.8. Execution
interface Execution {
  void setInput(unsigned long index, ArrayBufferView data);
  void setOutput(unsigned long index, ArrayBufferView data);
  Promise<void> startCompute();
};
4. Examples
const nn = navigator.ml.getNeuralNetworkContext();
tensor0 ---+
           +--- Add ---> intermediateOutput0 ---+
tensor1 ---+                                    |
                                                +--- Mul ---> output
tensor2 ---+                                    |
           +--- Add ---> intermediateOutput1 ---+
tensor3 ---+
The tensor0 and tensor2 are constants. The tensor1 and tensor3 are user inputs.
// Use tensors in 4 dimensions.
const TENSOR_DIMS = [2, 2, 2, 2];
const TENSOR_SIZE = 16;

// Create an OperandDescriptor object.
const float32TensorType = { type: 'tensor-float32', dimensions: TENSOR_DIMS };

// tensor0 is a constant tensor. Set its value from an ArrayBuffer object.
// The ArrayBuffer object may contain the training data loaded beforehand.
const tensor0 = nn.constant(float32TensorType,
                            new Float32Array(arrayBuffer, 0, TENSOR_SIZE));

// tensor1 is one of the input tensors. Its value will be set before execution.
const tensor1 = nn.input(float32TensorType);

// tensor2 is another constant tensor. Set its value from the same ArrayBuffer
// object with an offset.
const tensor2 = nn.constant(
    float32TensorType,
    new Float32Array(arrayBuffer,
                     TENSOR_SIZE * Float32Array.BYTES_PER_ELEMENT,
                     TENSOR_SIZE));

// tensor3 is another input tensor. Its value will be set before execution.
const tensor3 = nn.input(float32TensorType);

// intermediateOutput0 is the output of the first Add operation.
const intermediateOutput0 = nn.add(tensor0, tensor1);

// intermediateOutput1 is the output of the second Add operation.
const intermediateOutput1 = nn.add(tensor2, tensor3);

// output is the output tensor of the Mul operation.
const output = nn.mul(intermediateOutput0, intermediateOutput1);

// Create the model by identifying the outputs.
const model = await nn.createModel([output]);
// Create a Compilation object for the constructed model with the 'fast-answer'
// compilation preference.
const compilation = await model.createCompilation('fast-answer');
// Create an Execution object for the compiled model.
const execution = await compilation.createExecution();

// Set up the input tensors that contain the input data provided by the user.
const inputTensor1 = new Float32Array(TENSOR_SIZE);
inputTensor1.fill(inputValue1);
const inputTensor2 = new Float32Array(TENSOR_SIZE);
inputTensor2.fill(inputValue2);

// Associate the input tensors with the model’s inputs.
execution.setInput(0, inputTensor1);
execution.setInput(1, inputTensor2);

// Associate the output tensor with the model’s output.
let outputTensor = new Float32Array(TENSOR_SIZE);
execution.setOutput(0, outputTensor);

// Start the asynchronous computation.
await execution.startCompute();
// The computed result is now in outputTensor.
5. Acknowledgements
This specification follows the concepts of the Android Neural Networks API C API.
Thanks to Tomoyuki Shimizu, Ningxin Hu, and Zhiqiang Yu for the use cases.