1. Introduction
The [WEBRTC-NV-USE-CASES] document describes the following use case:

- Untrusted JavaScript Cloud Conferencing

This specification provides access to encoded media, which is the output of the encoder part of a codec and the input to the decoder part of a codec; this allows the user agent to apply encryption locally.

The interface is inspired by [WEBCODECS] to provide access to such functionality while retaining the setup flow of RTCPeerConnection.
2. Specification
The Streams definition doesn’t use WebIDL much, but the WebRTC spec does. This specification shows the IDL extensions for WebRTC.
It uses an additional API on RTCRtpSender and RTCRtpReceiver to insert the processing into the pipeline.
typedef (SFrameTransform or RTCRtpScriptTransform) RTCRtpTransform;

// New methods for RTCRtpSender and RTCRtpReceiver
partial interface RTCRtpSender {
    attribute RTCRtpTransform? transform;
};

partial interface RTCRtpReceiver {
    attribute RTCRtpTransform? transform;
};
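As a non-normative sketch, the transform attribute can be used as follows; the worker script URL, peerConnection, track and mediaStream are assumptions for illustration.

// Non-normative sketch: attach a transform to the encoded-media pipeline.
const worker = new Worker('transform-worker.js');            // assumed worker script
const sender = peerConnection.addTrack(track, mediaStream);  // assumed existing objects
sender.transform = new RTCRtpScriptTransform(worker, { side: 'send' });

peerConnection.ontrack = (event) => {
  // Receivers accept the same kinds of transforms.
  event.receiver.transform = new RTCRtpScriptTransform(worker, { side: 'receive' });
};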
2.1. Extension operation
At the time when a codec is initialized as part of the encoder, and the corresponding flag is set in the RTCPeerConnection's RTCConfiguration argument, ensure that the codec is disabled and produces no output.
2.1.1. Stream creation
At construction of each RTCRtpSender or RTCRtpReceiver, run the following steps:

1. Initialize this.[[transform]] to null.
2. Initialize this.[[readable]] to a new ReadableStream.
3. Set up this.[[readable]]. this.[[readable]] is provided frames using the readEncodedData algorithm given this as parameter.
4. Initialize this.[[writable]] to a new WritableStream.
5. Set up this.[[writable]] with its writeAlgorithm set to writeEncodedData given this as parameter and its highWaterMark set to Infinity.

   highWaterMark is set to Infinity to explicitly disable backpressure.

6. Initialize this.[[pipeToController]] to null.
7. Initialize this.[[lastReceivedFrameCounter]] to 0.
8. Initialize this.[[lastEnqueuedFrameCounter]] to 0.
9. Queue a task to run the following steps:
   1. If this.[[pipeToController]] is not null, abort these steps.
   2. Set this.[[pipeToController]] to a new AbortController.
   3. Call pipeTo with this.[[readable]], this.[[writable]], preventClose equal to true, preventAbort equal to true, preventCancel equal to true and this.[[pipeToController]]'s signal.

Streams backpressure can optimize throughput while limiting processing and memory consumption by pausing data production as early as possible in a data pipeline. This proves useful in contexts where reliability is essential and latency is less of a concern. On the other hand, WebRTC media pipelines favour low latency over reliability, for instance by allowing frames to be dropped at various places and by using recovery mechanisms. Buffering within a transform would add latency without allowing web applications to adapt much. The User Agent is responsible for doing these adaptations, especially since it controls both ends of the transform. For those reasons, streams backpressure is disabled in WebRTC encoded transforms.
2.1.2. Stream processing
The readEncodedData algorithm is given a rtcObject as parameter. It is defined by running the following steps:

1. Wait for a frame to be produced by rtcObject's encoder if it is a RTCRtpSender or rtcObject's packetizer if it is a RTCRtpReceiver.
2. Increment rtcObject.[[lastEnqueuedFrameCounter]] by 1.
3. Let frame be the newly produced frame.
4. Set frame.[[owner]] to rtcObject.
5. Set frame.[[counter]] to rtcObject.[[lastEnqueuedFrameCounter]].
6. If the frame has been produced by a RTCRtpReceiver:
   1. If the relevant RTP packet contains the RTP Header Extension for Absolute Capture Time, set frame.[[captureTime]] to the absolute capture timestamp field and set frame.[[senderCaptureTimeOffset]] to the capture clock offset field if it is present.
   2. Otherwise, if the relevant RTP packet does not contain the RTP Header Extension for Absolute Capture Time but a previous RTP packet did, set frame.[[captureTime]] to the result of calculating the absolute capture timestamp according to timestamp interpolation and set frame.[[senderCaptureTimeOffset]] to the most recent value that was present.
   3. Otherwise, set frame.[[captureTime]] to undefined and set frame.[[senderCaptureTimeOffset]] to undefined.
   4. If frame was produced by a SFrame depacketizer, set frame.[[useSFrame]] to true.
7. If the frame has been produced by a RTCRtpSender, set frame.[[captureTime]] to the capture timestamp using the methodology described in RTP Header Extension for Absolute Capture Time § absolute-capture-timestamp and set frame.[[senderCaptureTimeOffset]] to undefined.
8. Enqueue frame in rtcObject.[[readable]].
The writeEncodedData algorithm is given a rtcObject as parameter and a frame as input. It is defined by running the following steps:
1. If frame.[[owner]] is not equal to rtcObject, abort these steps and return a promise resolved with undefined. A processor cannot create frames, or move frames between streams.
2. If frame.[[counter]] is equal to or smaller than rtcObject.[[lastReceivedFrameCounter]], abort these steps and return a promise resolved with undefined. A processor cannot reorder frames, although it may delay them or drop them.
3. Set rtcObject.[[lastReceivedFrameCounter]] to frame.[[counter]].
4. Let data be frame.[[data]].
5. Let serializedFrame be StructuredSerializeWithTransfer(frame, « data »).
6. Let frameCopy be StructuredDeserializeWithTransfer(serializedFrame, frame's relevant realm).
7. If frame.[[useSFrame]] is true, set frameCopy.[[useSFrame]] to true.
8. Enqueue frameCopy for processing as if it came directly from the encoded data source, by running one of the following steps:
   - If rtcObject is a RTCRtpSender, enqueue frameCopy to rtcObject's packetizer, to be processed in parallel. If frameCopy.[[useSFrame]] is true, rtcObject MUST use a SFrame packetizer or skip processing of frameCopy.
   - If rtcObject is a RTCRtpReceiver, enqueue frameCopy to rtcObject's decoder, to be processed in parallel.
9. Return a promise resolved with undefined.
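The constraints enforced by writeEncodedData (a transform may drop or delay frames, but cannot create them, reorder them, or move them between streams) are illustrated by the following non-normative worker-side sketch; the dropping policy is an arbitrary assumption for illustration.

// Non-normative sketch: a transform that may drop frames but preserves order.
onrtctransform = (event) => {
  const { readable, writable } = event.transformer;
  const dropper = new TransformStream({
    transform(frame, controller) {
      // Dropping a frame is allowed; here, empty frames are discarded.
      if (frame.data.byteLength === 0) return;
      // Forwarding frames in arrival order is required; reordering or moving
      // frames to another sender/receiver is rejected by writeEncodedData.
      controller.enqueue(frame);
    }
  });
  readable.pipeThrough(dropper).pipeTo(writable);
};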
On the sender side, as part of readEncodedData, frames produced by rtcObject's encoder MUST be enqueued in rtcObject.[[readable]] in the encoder's output order. As writeEncodedData ensures that the transform cannot reorder frames, the encoder's output order is also the order followed by packetizers to generate RTP packets and assign RTP packet sequence numbers. The packetizer may expect the transformed data to still conform to the original format, e.g. a series of NAL units separated by Annex B start codes.

On the receiver side, as part of readEncodedData, frames produced by rtcObject's packetizer MUST be enqueued in rtcObject.[[readable]] in the same encoder output order. To ensure the order is respected, the depacketizer will typically use RTP packet sequence numbers to reorder RTP packets as needed before enqueuing frames in rtcObject.[[readable]]. As writeEncodedData ensures that the transform cannot reorder frames, this will be the order expected by rtcObject's decoder.
2.2. Extension attribute
A RTCRtpTransform has private slots:
- [[readable]] of type ReadableStream.
- [[writable]] of type WritableStream.
- [[owner]] of type RTCRtpSender or RTCRtpReceiver, initialized to null.
- [[useSFrame]] of type boolean. // FIXME: Decide whether to augment this boolean with either cipher suite support or frame vs. packet based encryption.
Each RTCRtpTransform has an association steps set, which is empty by default.
The transform getter steps are:

1. Return this.[[transform]].
The transform setter steps are:

1. Let transform be the argument to the setter.
2. Let checkedTransform be set to transform if it is not null or to an identity transform stream otherwise.
3. If checkedTransform.[[owner]] is not null, throw an InvalidStateError and abort these steps.
4. Let reader be the result of getting a reader for checkedTransform.[[readable]].
5. Let writer be the result of getting a writer for checkedTransform.[[writable]].
6. Set checkedTransform.[[owner]] to this.
7. Initialize newPipeToController to a new AbortController.
8. If this.[[pipeToController]] is not null, run the following steps:
   1. Add the chain transform algorithm to this.[[pipeToController]]'s signal.
   2. Signal abort on this.[[pipeToController]].
9. Else, run the chain transform algorithm steps.
10. If this is a RTCRtpSender, run the following substeps:
    1. Let useSFrame be true if this is configured to use a SFrame packetizer and false otherwise.
    2. If useSFrame is equal to checkedTransform.[[useSFrame]], abort these substeps.
    3. Configure this's packetizer to use SFrame if checkedTransform.[[useSFrame]] is true and to not use SFrame if checkedTransform.[[useSFrame]] is false.
    4. Update the negotiation-needed flag for this's connection.
11. Otherwise, run the following substeps:
    1. Let useSFrame be true if this is configured to use a SFrame depacketizer and false otherwise.
    2. If useSFrame is equal to checkedTransform.[[useSFrame]], abort these substeps.
    3. Configure this's depacketizer to use SFrame if checkedTransform.[[useSFrame]] is true and to not use SFrame if checkedTransform.[[useSFrame]] is false.
    4. Update the negotiation-needed flag for this's connection.
12. Set this.[[pipeToController]] to newPipeToController.
13. Set this.[[transform]] to transform.
14. Run the steps in the set of association steps of transform with this.
The chain transform algorithm steps are defined as:
1. If newPipeToController's signal is aborted, abort these steps.
2. Release reader.
3. Release writer.
4. Assert that newPipeToController is the same object as rtcObject.[[pipeToController]].
5. Call pipeTo with rtcObject.[[readable]], checkedTransform.[[writable]], preventClose equal to false, preventAbort equal to false, preventCancel equal to true and newPipeToController's signal.
6. Call pipeTo with checkedTransform.[[readable]], rtcObject.[[writable]], preventClose equal to true, preventAbort equal to true, preventCancel equal to false and newPipeToController's signal.

This algorithm is defined so that transforms can be updated dynamically. There is no guarantee on which frame the switch from the previous transform to the new transform will happen.
If a web application sets the transform synchronously at creation of the RTCRtpSender (for instance when calling addTrack), the transform will receive the first frame generated by the RTCRtpSender's encoder. Similarly, if a web application sets the transform synchronously at creation of the RTCRtpReceiver (for instance when calling addTrack, or in a track event handler), the transform will receive the first full frame generated by the RTCRtpReceiver's packetizer.
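For example, a transform can be installed synchronously at sender creation and replaced later, as sketched below (non-normative); worker, peerConnection, track and mediaStream are assumed to exist, and the option values are arbitrary.

// Non-normative sketch: install a transform at addTrack time, replace it later.
const sender = peerConnection.addTrack(track, mediaStream);
sender.transform = new RTCRtpScriptTransform(worker, { name: 'v1' });

// Later, switch to a new transform; the specification does not guarantee on
// which exact frame the switch takes effect.
function upgradeTransform() {
  sender.transform = new RTCRtpScriptTransform(worker, { name: 'v2' });
}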
3. SFrameTransform
The API presented in this section allows applications to process SFrame data as defined in [SFrame] .
enum SFrameTransformRole {
    "encrypt",
    "decrypt"
};

dictionary SFrameTransformOptions {
    SFrameTransformRole role = "encrypt";
};

typedef [EnforceRange] unsigned long long SmallCryptoKeyID;

typedef (SmallCryptoKeyID or bigint) CryptoKeyID;

[Exposed=(Window,DedicatedWorker)]
interface SFrameTransform : EventTarget {
    constructor(optional SFrameTransformOptions options = {});
    Promise<undefined> setEncryptionKey(CryptoKey key, optional CryptoKeyID keyID);
    attribute EventHandler onerror;
};
SFrameTransform includes GenericTransformStream;

enum SFrameTransformErrorEventType {
    "authentication",
    "keyID",
    "syntax",
    "packetization"
};

[Exposed=(Window,DedicatedWorker)]
interface SFrameTransformErrorEvent : Event {
    constructor(DOMString type, SFrameTransformErrorEventInit eventInitDict);
    readonly attribute SFrameTransformErrorEventType errorType;
    readonly attribute CryptoKeyID? keyID;
    readonly attribute any frame;
};

dictionary SFrameTransformErrorEventInit : EventInit {
    required SFrameTransformErrorEventType errorType;
    required any frame;
    CryptoKeyID? keyID;
};
The new SFrameTransform(options) constructor steps are:

1. Let transformAlgorithm be an algorithm which takes a frame as input and runs the SFrame transform algorithm with this and frame.
2. Set this.[[transform]] to a new TransformStream.
3. Set up this.[[transform]] with transformAlgorithm set to transformAlgorithm.
4. Let options be the method's first argument.
5. Set this.[[role]] to options["role"].
6. Set this.[[readable]] to this.[[transform]].[[readable]].
7. Set this.[[writable]] to this.[[transform]].[[writable]].
8. Set this.[[useSFrame]] to true.
3.1. Algorithm
The SFrame transform algorithm, given sframe as a SFrameTransform object and frame , runs these steps:
1. Let role be sframe.[[role]].
2. If sframe.[[owner]] is a RTCRtpSender, set role to 'encrypt'.
3. If sframe.[[owner]] is a RTCRtpReceiver, set role to 'decrypt'.
4. If sframe.[[owner]] is a RTCRtpReceiver and frame.[[useSFrame]] is not true, queue a task to run the following steps:
   1. Fire an event named error at sframe, using the SFrameTransformErrorEvent interface with its errorType attribute set to packetization and its frame attribute set to frame.
   2. Abort these steps.
5. Let data be undefined.
6. If frame is a BufferSource, set data to frame.
7. If frame is a RTCEncodedAudioFrame, set data to frame.data.
8. If frame is a RTCEncodedVideoFrame, set data to frame.data.
9. If data is undefined, abort these steps.
10. Let buffer be the result of running the SFrame algorithm with data and role as parameters. This algorithm is defined by the SFrame specification and returns an ArrayBuffer.
11. If the SFrame algorithm exits abruptly with an error, queue a task to run the following substeps:
    1. If the processing fails on decryption side due to data not following the SFrame format, fire an event named error at sframe, using the SFrameTransformErrorEvent interface with its errorType attribute set to syntax and its frame attribute set to frame.
    2. If the processing fails on decryption side due to the key identifier parsed in data being unknown, fire an event named error at sframe, using the SFrameTransformErrorEvent interface with its errorType attribute set to keyID, its frame attribute set to frame and its keyID attribute set to the keyID value parsed in the SFrame header.
    3. If the processing fails on decryption side due to validation of the authentication tag, fire an event named error at sframe, using the SFrameTransformErrorEvent interface with its errorType attribute set to authentication and its frame attribute set to frame.
    4. Abort these steps.
12. If frame is a BufferSource, set frame to buffer.
13. If frame is a RTCEncodedAudioFrame, set frame.data to buffer.
14. If frame is a RTCEncodedVideoFrame, set frame.data to buffer.
15. Set frame.[[useSFrame]] to true.
16. Enqueue frame in sframe.[[transform]].
3.2. Methods
The setEncryptionKey(key, keyID) method steps are:

1. Let promise be a new promise.
2. If keyID is a bigint which cannot be represented as an integer between 0 and 2^64 - 1 inclusive, reject promise with a RangeError exception.
3. Otherwise, in parallel, run the following steps:
   1. Set key with its optional keyID as key material to use for the SFrame transform algorithm, as defined by the SFrame specification.
   2. If setting the key material fails, reject promise with an InvalidModificationError exception and abort these steps.
   3. Resolve promise with undefined.
4. Return promise.
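A non-normative usage sketch follows. The raw key material, its WebCrypto import parameters and the chosen keyID are application assumptions; the actual key derivation and the kind of CryptoKey expected are governed by the SFrame specification and the application's key management scheme.

// Non-normative sketch: encrypting outgoing media with SFrameTransform.
async function installSFrame(sender, rawKeyBytes) {
  // Import application-provided key material (HKDF chosen here as an assumption).
  const key = await crypto.subtle.importKey('raw', rawKeyBytes, 'HKDF', false,
                                            ['deriveBits', 'deriveKey']);
  const transform = new SFrameTransform({ role: 'encrypt' });
  transform.onerror = (event) =>
    console.log('SFrame error:', event.errorType, event.keyID);
  await transform.setEncryptionKey(key, 1n); // keyID 1 chosen arbitrarily
  sender.transform = transform;
}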
3.3. SFrame packetization integration
A SFrame packetizer is responsible for generating SFrame packets from media content. In the context of this specification, the SFrame packetizer is not responsible for doing the actual encryption. Instead, the transform is responsible for doing so. The SFrame packetizer is responsible for splitting SFrame frames as needed so that they fit in RTP packets.

Similarly, a SFrame depacketizer is responsible for assembling RTP packets into a complete SFrame frame. It is not responsible for doing the actual decryption; the transform is responsible for doing so.

The WebRTC encoded transform model is per-frame processing. SFrame can either be applied on each frame or on subframes. The WebRTC encoded transform model is naturally aligned with applying SFrame on a frame as a whole. To preserve the WebRTC encoded transform model when applying SFrame on subframes, the following conceptual steps can be done:

- On the sending side, the SFrameTransform may first split the media frame into subframes as a regular media packetizer would, and apply the SFrame encryption on each subframe. It then concatenates the encrypted subframes as a single encrypted frame. The transform provides the encrypted frame and information on each subframe so that the SFrame packetizer generates individual packets for each subframe.
- On the receiving side, the SFrame depacketizer assembles all individual subframe RTP packets into a single encrypted frame. It is responsible for giving the necessary subframe information to the transform so that the transform can apply the SFrame decryption on each individual subframe contained in the encrypted frame and concatenate each decrypted subframe into a single decrypted media frame. If decryption of a single subframe fails, the whole encrypted frame is discarded.
4. RTCRtpScriptTransform
In this section, the capture system refers to the system where media is sourced from and the sender system refers to the system that is sending RTP and RTCP packets to the receiver system where RTCEncodedFrameMetadata data is populated.
4.1. RTCEncodedFrameMetadata dictionary

dictionary RTCEncodedFrameMetadata {
    unsigned long synchronizationSource;
    octet payloadType;
    sequence<unsigned long> contributingSources;
    unsigned long rtpTimestamp;
    DOMHighResTimeStamp receiveTime;
    DOMHighResTimeStamp captureTime;
    DOMHighResTimeStamp senderCaptureTimeOffset;
    DOMString mimeType;
};
4.1.1. Members
- synchronizationSource, of type unsigned long

  The synchronization source (ssrc) identifier is an unsigned integer value per [RFC3550] used to identify the stream of RTP packets that the encoded frame object is describing.

- payloadType, of type octet

  The payload type is an unsigned integer value in the range from 0 to 127 per [RFC3550] that is used to describe the format of the RTP payload.

- contributingSources, of type sequence<unsigned long>

  The list of contribution sources (csrc list) as defined in [RFC3550].

- rtpTimestamp, of type unsigned long

  The RTP timestamp identifier is an unsigned integer value per [RFC3550] that reflects the sampling instant of the first octet in the RTP data packet.

- receiveTime, of type DOMHighResTimeStamp

  For frames coming from an RTCRtpReceiver, represents the timestamp of the last received packet used to produce this media frame. This timestamp is relative to Performance.timeOrigin.

- captureTime, of type DOMHighResTimeStamp

  The capture time of this frame in the capture system's clock. On populating this member, the user agent MUST return the value of the frame's [[captureTime]] slot, shifted to be relative to Performance.timeOrigin.

- senderCaptureTimeOffset, of type DOMHighResTimeStamp

  The senderCaptureTimeOffset is the sender system's estimate of the offset between its own NTP clock and the capture system's NTP clock, for the same frame that the captureTime was originated from. On populating this member, the user agent MUST return the value of the frame's [[senderCaptureTimeOffset]] slot.

- mimeType, of type DOMString

  The codec MIME media type/subtype defined in the IANA media types registry [IANA-MEDIA-TYPES], e.g. audio/opus or video/VP8.
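As a non-normative illustration, a transform function running in a worker can inspect this metadata on each frame via getMetadata(); only members defined above are used.

// Non-normative sketch: log a few metadata fields of an encoded frame.
function logMetadata(frame) {
  const md = frame.getMetadata();
  console.log(`ssrc=${md.synchronizationSource} pt=${md.payloadType}`,
              `rtpTimestamp=${md.rtpTimestamp} mimeType=${md.mimeType}`);
}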
4.2. RTCEncodedVideoFrameType dictionary

// New enum for video frame types. Will eventually re-use the equivalent defined
// by WebCodecs.
enum RTCEncodedVideoFrameType {
    "empty",
    "key",
    "delta",
};

Enum value | Description
---|---
empty | This frame contains no data.
key | This frame can be decoded without reference to any other frames.
delta | This frame references another frame and cannot be decoded without that frame.
4.3. RTCEncodedVideoFrameMetadata dictionary

dictionary RTCEncodedVideoFrameMetadata : RTCEncodedFrameMetadata {
    unsigned long long frameId;
    sequence<unsigned long long> dependencies;
    unsigned short width;
    unsigned short height;
    unsigned long spatialIndex;
    unsigned long temporalIndex;
    long long timestamp;  // microseconds
};
4.3.1. Members
- frameId, of type unsigned long long

  An identifier for the encoded frame, monotonically increasing in decode order. Its lower 16 bits match the frame_number of the AV1 Dependency Descriptor Header Extension defined in Appendix A of [AV1-RTP-SPEC], if present. Only present for received frames if the Dependency Descriptor Header Extension is present.

- dependencies, of type sequence<unsigned long long>

  List of frameIds of frames this frame references. Only present for received frames if the AV1 Dependency Descriptor Header Extension defined in Appendix A of [AV1-RTP-SPEC] is present.

- timestamp, of type long long

  The media presentation timestamp (PTS) in microseconds of the raw frame, matching the timestamp for raw frames which correspond to this frame.
4.4. RTCEncodedVideoFrame interface

dictionary RTCEncodedVideoFrameOptions {
    RTCEncodedVideoFrameMetadata metadata;
};

// New interfaces to define encoded video and audio frames. Will eventually
// re-use or extend the equivalent defined in WebCodecs.
[Exposed=(Window,DedicatedWorker), Serializable]
interface RTCEncodedVideoFrame {
    constructor(RTCEncodedVideoFrame originalFrame, optional RTCEncodedVideoFrameOptions options = {});
    readonly attribute RTCEncodedVideoFrameType type;
    attribute ArrayBuffer data;
    RTCEncodedVideoFrameMetadata getMetadata();
};
4.4.1. Constructor
- constructor()

  Creates a new RTCEncodedVideoFrame from the given originalFrame and options.[metadata]. The newly created frame is completely independent of originalFrame, with its [[data]] being a deep copy of originalFrame.[[data]]. The new frame's [[metadata]] is a deep copy of originalFrame.[[metadata]], with fields replaced with deep copies of the fields present in options.[metadata].

  When called, run the following steps:

  1. Set this.[[type]] to originalFrame.[[type]].
  2. Let this.[[data]] be the result of [CloneArrayBuffer](originalFrame.[[data]], 0, originalFrame.[[data]].[[ArrayBufferByteLength]]).
  3. Let [[metadata]] represent the metadata associated with this newly constructed frame.
     1. For each {[[key]], [[value]]} pair of originalFrame.[[getMetadata()]], set [[metadata]].[[key]] to a deep copy of [[value]].
     2. For each {[[key]], [[value]]} pair of options.[metadata], set [[metadata]].[[key]] to a deep copy of [[value]].
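Non-normatively, a worker-side transform could use this constructor to forward an independent copy of an incoming frame; the choice to carry over only the rtpTimestamp metadata field below is an arbitrary assumption for illustration.

// Non-normative sketch: enqueue an independent copy of each video frame.
const cloner = new TransformStream({
  transform(frame, controller) {
    const copy = new RTCEncodedVideoFrame(frame, {
      metadata: { rtpTimestamp: frame.getMetadata().rtpTimestamp }
    });
    controller.enqueue(copy);
  }
});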
4.4.2. Members
- type, of type RTCEncodedVideoFrameType, readonly

  The type attribute allows the application to determine when a key frame is being sent or received.

- data, of type ArrayBuffer

  The encoded frame data. The format of the data depends on the video codec that is used to encode/decode the frame, which can be determined by looking at the mimeType. For SVC, each spatial layer is transformed separately.

  Since packetizers may drop certain elements, e.g. AV1 temporal delimiter OBUs, the input to a receive-side transform may be different from the output of a send-side transform.

  The following table gives a number of examples:

mimeType | Data format
---|---
video/VP8 | The data starts with the "uncompressed data chunk" defined in section 9.1 of [RFC6386] and is followed by the rest of the frame data. The VP8 payload descriptor is not accessible.
video/VP9 | The data is a frame as described in Section 6 of [VP9]. The VP9 payload descriptor is not accessible.
video/H264 | The data is a series of NAL units in Annex B format, as defined in [ITU-T-REC-H.264] Annex B.
video/AV1 | The data is a series of OBUs compliant to the low-overhead bitstream format as described in Section 5 of [AV1]. The AV1 aggregation header is not accessible.
4.4.3. Methods
- getMetadata()

  Returns the metadata associated with the frame.
4.4.4. Serialization
RTCEncodedVideoFrame objects are serializable objects. Their serialization steps, given value, serialized, and forStorage, are:

1. If forStorage is true, then throw a DataCloneError.
2. Set serialized.[[type]] to the value of value.type.
3. Set serialized.[[metadata]] to an internal representation of value's metadata.
4. Set serialized.[[data]] to the sub-serialization of value.[[data]].
Their deserialization steps , given serialized , value and realm , are:
1. Set value.type to serialized.[[type]].
2. Set value's metadata to the platform object representation of serialized.[[metadata]].
3. Set value.[[data]] to the sub-deserialization of serialized.[[data]].
The internal form of a serialized RTCEncodedVideoFrame is not observable; it is defined chiefly so that it can be used with frame cloning in the writeEncodedData algorithm and in the structuredClone() operation. An implementation is therefore free to choose whatever method works best.
4.5. RTCEncodedAudioFrameMetadata dictionary

dictionary RTCEncodedAudioFrameMetadata : RTCEncodedFrameMetadata {
    short sequenceNumber;
    double audioLevel;
};
4.5.1. Members
- sequenceNumber, of type short

  The RTP sequence number as defined in [RFC3550]. Only exists for incoming audio frames.

  Comparing two sequence numbers requires serial number arithmetic described in [RFC1982].

- audioLevel, of type double

  The audio level of this frame. The value is between 0..1 (linear), where 1.0 represents 0 dBov, 0 represents silence, and 0.5 represents approximately 6 dBSPL change in the sound pressure level from 0 dBov.
4.6. RTCEncodedAudioFrame interface

dictionary RTCEncodedAudioFrameOptions {
    RTCEncodedAudioFrameMetadata metadata;
};

[Exposed=(Window,DedicatedWorker), Serializable]
interface RTCEncodedAudioFrame {
    constructor(RTCEncodedAudioFrame originalFrame, optional RTCEncodedAudioFrameOptions options = {});
    attribute ArrayBuffer data;
    RTCEncodedAudioFrameMetadata getMetadata();
};
4.6.1. Constructor
- constructor()

  Creates a new RTCEncodedAudioFrame from the given originalFrame and options.[metadata]. The newly created frame is completely independent of originalFrame, with its [[data]] being a deep copy of originalFrame.[[data]]. The new frame's [[metadata]] is a deep copy of originalFrame.[[metadata]], with fields replaced with deep copies of the fields present in options.[metadata].

  When called, run the following steps:

  1. Let this.[[data]] be the result of [CloneArrayBuffer](originalFrame.[[data]], 0, originalFrame.[[data]].[[ArrayBufferByteLength]]).
  2. Let [[metadata]] represent the metadata associated with this newly constructed frame.
     1. For each {[[key]], [[value]]} pair of originalFrame.[[getMetadata()]], set [[metadata]].[[key]] to a deep copy of [[value]].
     2. For each {[[key]], [[value]]} pair of options.[metadata], set [[metadata]].[[key]] to a deep copy of [[value]].
4.6.2. Members
- data, of type ArrayBuffer

  The encoded frame data. The format of the data depends on the audio codec that is used to encode/decode the frame, which can be determined by looking at the mimeType. The following table gives a number of examples:

mimeType | Data format
---|---
audio/opus | The data is Opus packets, as described in section 3 of [RFC6716].
audio/PCMU | The data is a sequence of bytes of arbitrary length, where each byte is a u-law encoded PCM sample as defined by Table 2a and 2b in [ITU-G.711].
audio/PCMA | The data is a sequence of bytes of arbitrary length, where each byte is an A-law encoded PCM sample as defined by Tables 1a and 1b in [ITU-G.711].
audio/G722 | The data is G.722 audio as described in [ITU-G.722].
audio/RED | The data is Redundant Audio Data as described in section 3 of [RFC2198].
audio/CN | The data is Comfort Noise as described in section 3 of [RFC3389].
4.6.3. Methods
- getMetadata()

  Returns the metadata associated with the frame.
4.6.4. Serialization
RTCEncodedAudioFrame objects are serializable objects. Their serialization steps, given value, serialized, and forStorage, are:

1. If forStorage is true, then throw a DataCloneError.
2. Set serialized.[[metadata]] to an internal representation of value's metadata.
3. Set serialized.[[data]] to the sub-serialization of value.[[data]].
Their deserialization steps , given serialized , value and realm , are:
1. Set value's metadata to the platform object representation of serialized.[[metadata]].
2. Set value.[[data]] to the sub-deserialization of serialized.[[data]].
4.7. Interfaces
[Exposed=DedicatedWorker]
interface RTCTransformEvent : Event {
    readonly attribute RTCRtpScriptTransformer transformer;
};

partial interface DedicatedWorkerGlobalScope {
    attribute EventHandler onrtctransform;
};

[Exposed=DedicatedWorker]
interface RTCRtpScriptTransformer : EventTarget {
    // Attributes and methods related to the transformer source
    readonly attribute ReadableStream readable;
    Promise<unsigned long long> generateKeyFrame(optional DOMString rid);
    Promise<undefined> sendKeyFrameRequest();
    // Attributes and methods related to the transformer sink
    readonly attribute WritableStream writable;
    attribute EventHandler onkeyframerequest;
    // Attributes for configuring the Javascript code
    readonly attribute any options;
};

enum RTCRtpScriptTransformType {
    "sframe"
};

[Exposed=Window]
interface RTCRtpScriptTransform {
    constructor(Worker worker,
                optional any options,
                optional sequence<object> transfer,
                optional RTCRtpScriptTransformType type);
};

[Exposed=DedicatedWorker]
interface KeyFrameRequestEvent : Event {
    constructor(DOMString type, optional DOMString rid);
    readonly attribute DOMString? rid;
};
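A worker using these interfaces typically looks like the following non-normative sketch; the options value and the transform body are application assumptions.

// transform-worker.js (non-normative sketch)
onrtctransform = (event) => {
  const transformer = event.transformer;
  const transform = new TransformStream({
    transform(frame, controller) {
      // transformer.options carries whatever the page passed to the constructor.
      if (transformer.options && transformer.options.logging) {
        console.log('frame of', frame.data.byteLength, 'bytes');
      }
      controller.enqueue(frame); // pass the (possibly modified) frame along
    }
  });
  transformer.readable.pipeThrough(transform).pipeTo(transformer.writable);
};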
4.8. Operations
The new RTCRtpScriptTransform(worker, options, transfer, type) constructor steps are:

1. Set t1 to an identity transform stream.
2. Set t2 to an identity transform stream.
3. Set this.[[writable]] to t1.[[writable]].
4. Set this.[[readable]] to t2.[[readable]].
5. If type is equal to "sframe", set this.[[useSFrame]] to true.
6. Let serializedOptions be the result of StructuredSerializeWithTransfer(options, transfer).
7. Let serializedReadable be the result of StructuredSerializeWithTransfer(t1.[[readable]], « t1.[[readable]] »).
8. Let serializedWritable be the result of StructuredSerializeWithTransfer(t2.[[writable]], « t2.[[writable]] »).
9. Queue a task on the DOM manipulation task source of worker's global scope to run the following steps:
   1. Let transformerOptions be the result of StructuredDeserializeWithTransfer(serializedOptions, the current Realm).
   2. Let readable be the result of StructuredDeserializeWithTransfer(serializedReadable, the current Realm).
   3. Let writable be the result of StructuredDeserializeWithTransfer(serializedWritable, the current Realm).
   4. Let transformer be a new RTCRtpScriptTransformer.
   5. Set transformer.[[options]] to transformerOptions.
   6. Set transformer.[[readable]] to readable.
   7. Set transformer.[[writable]] to writable.
   8. Fire an event named rtctransform using RTCTransformEvent with transformer set to transformer on worker's global scope.

// FIXME: Describe error handling (worker closing flag true at RTCRtpScriptTransform creation time, and worker being terminated while transform is processing data).
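On the main thread, the constructor can be used as sketched below (non-normative); the MessagePort passed via transfer is an assumption showing how transferable objects reach the transformer's options, and sender is assumed to exist.

// Non-normative sketch: pass options and a transferable MessagePort to the worker.
const worker = new Worker('transform-worker.js');
const channel = new MessageChannel();
sender.transform = new RTCRtpScriptTransform(
  worker,
  { name: 'encryptTransform', port: channel.port2 },  // becomes transformer.options in the worker
  [channel.port2]                                     // transferred along with the options
);
channel.port1.postMessage({ keyId: 1 });              // hypothetical application message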
Each RTCRtpScriptTransform has the following set of association steps , given rtcObject :
1. Let transform be the RTCRtpScriptTransform object that owns the association steps.
2. Let encoder be rtcObject's encoder if rtcObject is a RTCRtpSender or undefined otherwise.
3. Let depacketizer be rtcObject's depacketizer if rtcObject is a RTCRtpReceiver or undefined otherwise.
4. Queue a task on the DOM manipulation task source of worker's global scope to run the following steps:
   1. Let transformer be the RTCRtpScriptTransformer object associated to transform.
   2. Set transformer.[[encoder]] to encoder.
   3. Set transformer.[[depacketizer]] to depacketizer.
The generateKeyFrame(rid) method steps are:

1. Let promise be a new promise.
2. Run the generate key frame algorithm with promise, this.[[encoder]] and rid.
3. Return promise.
The sendKeyFrameRequest() method steps are:

1. Let promise be a new promise.
2. Run the send request key frame algorithm with promise and this.[[depacketizer]].
3. Return promise.
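Within the worker, these methods can be invoked on the transformer, for example as follows (non-normative); the rid value 'hi' is an assumption.

// Non-normative sketch, inside the worker, given a transformer from rtctransform.
async function requestKeyFrameFromEncoder(transformer) {
  // Sender side: ask the local encoder for a key frame on an assumed simulcast layer.
  const timestamp = await transformer.generateKeyFrame('hi');
  console.log('key frame enqueued with timestamp', timestamp);
}

async function requestKeyFrameFromRemote(transformer) {
  // Receiver side: ask the remote sender for a key frame via a FIR.
  await transformer.sendKeyFrameRequest();
}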
4.9. Attributes
A RTCRtpScriptTransformer has the following private slots called [[depacketizer]], [[encoder]], [[options]], [[readable]] and [[writable]].

In addition, a RTCRtpScriptTransformer is always associated with its parent RTCRtpScriptTransform transform. This allows algorithms to go from an RTCRtpScriptTransformer object to its RTCRtpScriptTransform parent and vice versa.

The options getter steps are:

1. Return this.[[options]].

The readable getter steps are:

1. Return this.[[readable]].

The writable getter steps are:

1. Return this.[[writable]].

The onbandwidthestimate EventHandler has type bandwidthestimate.

The onkeyframerequest EventHandler has type keyframerequest.
4.10. Events
The following event fires on an RTCRtpScriptTransformer:

- keyframerequest, of type KeyFrameRequestEvent, fired when the sink determines that a key frame has been requested.
The steps that generate an event of type KeyFrameRequestEvent are as follows. Given a RTCRtpScriptTransformer transform, when transform's [[encoder]] receives a keyframe request, for instance from an incoming RTCP Picture Loss Indication (PLI) or Full Intra Refresh (FIR), queue a task to perform the following steps:

1. Set rid to the RID of the appropriate layer, or undefined if the request is not for a specific layer.
2. Fire an event named keyframerequest at transform using KeyFrameRequestEvent with its cancelable attribute initialized to "true", and with rid set to rid.
3. If the event's canceled flag is true, abort these steps.
4. Run the generate key frame algorithm with a new promise, transform.[[encoder]] and rid.
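A worker can observe or veto such requests as sketched below (non-normative); the throttling policy is an arbitrary assumption.

// Non-normative sketch: intercept key frame requests inside the worker.
onrtctransform = (event) => {
  const transformer = event.transformer;
  let lastKeyFrameTime = 0;
  transformer.onkeyframerequest = (keyFrameEvent) => {
    // Arbitrary policy for illustration: at most one key frame per second.
    const now = Date.now();
    if (now - lastKeyFrameTime < 1000) {
      keyFrameEvent.preventDefault(); // cancel the request; no key frame is generated
      return;
    }
    lastKeyFrameTime = now;
    console.log('key frame requested for rid', keyFrameEvent.rid);
  };
  transformer.readable.pipeTo(transformer.writable);
};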
4.11. KeyFrame Algorithms
The generate key frame algorithm , given promise , encoder and rid , is defined by running these steps:
1. If encoder is undefined, reject promise with InvalidStateError, abort these steps.
2. If encoder is not processing video frames, reject promise with InvalidStateError, abort these steps.
3. If rid is defined, but does not conform to the grammar requirements specified in Section 10 of [RFC8851], then reject promise with TypeError and abort these steps.
4. In parallel, run the following steps:
   1. Gather a list of video encoders, named videoEncoders, from encoder, ordered according to negotiated RIDs if any.
   2. If rid is defined, remove from videoEncoders any video encoder that does not match rid.
   3. If rid is undefined, remove from videoEncoders all video encoders except the first one.
   4. If videoEncoders is empty, queue a task to reject promise with NotFoundError and abort these steps. videoEncoders is expected to be empty if the corresponding RTCRtpSender is not active, or the corresponding RTCRtpSender track is ended.
   5. Let videoEncoder be the first encoder in videoEncoders.
   6. If rid is undefined, set rid to the RID value corresponding to videoEncoder.
   7. Create a pending key frame task called task with task.[[rid]] set to rid and task.[[promise]] set to promise.
   8. If encoder.[[pendingKeyFrameTasks]] is undefined, initialize encoder.[[pendingKeyFrameTasks]] to an empty set.
   9. Let shouldTriggerKeyFrame be false if encoder.[[pendingKeyFrameTasks]] contains a task whose [[rid]] value is equal to rid, and true otherwise.
   10. Add task to encoder.[[pendingKeyFrameTasks]].
   11. If shouldTriggerKeyFrame is true, instruct videoEncoder to generate a key frame for the next provided video frame.
For any RTCRtpScriptTransformer named transformer, the following steps are run just before any frame is enqueued in transformer.[[readable]]:

1. Let encoder be transformer.[[encoder]].
2. If encoder or encoder.[[pendingKeyFrameTasks]] is undefined, abort these steps.
3. If frame is not a video "key" frame, abort these steps.
4. For each task in encoder.[[pendingKeyFrameTasks]], run the following steps:
   1. If frame was generated by a video encoder identified by task.[[rid]], run the following steps:
      1. Remove task from encoder.[[pendingKeyFrameTasks]].
      2. Resolve task.[[promise]] with frame's timestamp.
By resolving the promises just before enqueuing the corresponding key frame in a RTCRtpScriptTransformer's readable, the resolution callbacks of the promises are always executed just before the corresponding key frame is exposed. If the promise is associated with several rid values, it will be resolved when the first key frame corresponding to one of the rid values is enqueued.
The send request key frame algorithm , given promise and depacketizer , is defined by running these steps:
1. If depacketizer is undefined, reject promise with InvalidStateError, abort these steps.
2. If depacketizer does not belong to a video RTCRtpReceiver, reject promise with InvalidStateError, abort these steps.
3. In parallel, run the following steps:
   1. If sending a Full Intra Request (FIR) by depacketizer's receiver is not deemed appropriate, resolve promise with undefined and abort these steps. Section 4.3.1 of [RFC5104] provides guidelines on how and when it is appropriate to send a Full Intra Request.
   2. Generate a Full Intra Request (FIR) packet as defined in section 4.3.1 of [RFC5104] and send it through depacketizer's receiver.
   3. Queue a task to resolve promise with undefined.
5. RTCRtpSender extension
An additional API on RTCRtpSender is added to complement the key frame generation capability added to RTCRtpScriptTransformer.

partial interface RTCRtpSender {
    Promise<undefined> generateKeyFrame(optional sequence<DOMString> rids);
};
5.1. Extension operation
The generateKeyFrame(rids) method steps are:

1. Let promise be a new promise.
2. In parallel, run the generate key frame algorithm with promise, this's encoder and rids.
3. Return promise.
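For example (non-normative), an application can request key frames for specific simulcast layers directly from the sender; the rid values below are assumptions.

// Non-normative sketch: request key frames for two simulcast layers.
sender.generateKeyFrame(['hi', 'mid'])
  .then(() => console.log('key frames requested'))
  .catch((e) => console.log('request failed:', e.name));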
6. Privacy and security considerations
This API gives Javascript access to the content of media streams. This is also available from other sources, such as Canvas and WebAudio.
However, streams that are isolated (as specified in [WEBRTC-IDENTITY] ) or tainted with another origin, cannot be accessed using this API, since that would break the isolation rule.
The API will allow access to some aspects of timing information that are otherwise unavailable, which allows some fingerprinting surface.
The API will give access to encoded media, which means that the JS application will have full control over what’s delivered to internal components like the packetizer or the decoder. This may require additional care with auditing how data is handled inside these components.
For instance, packetizers may expect to see data only from trusted encoders, and may not be audited for reception of data from untrusted sources.
7. Examples
See the explainer document .