Copyright © 2021 W3C® (MIT, ERCIM, Keio, Beihang). W3C liability, trademark and permissive document license rules apply.
This specification extends HTMLMediaElement [HTML] to allow JavaScript to generate media streams for playback. Allowing JavaScript to generate streams facilitates a variety of use cases like adaptive streaming and time shifting live streams.
This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at https://www.w3.org/TR/.
On top of editorial updates, substantive changes since publication as a W3C Recommendation in November 2016 are the addition of a changeType() method to switch codecs, the ability to create and use MediaSource objects off the main thread in dedicated workers, and the removal of the createObjectURL() extension to the URL object following its integration in the File API [FILEAPI].

The working group maintains a list of all bug reports that the editors have not yet tried to address.
Implementors should be aware that this specification is not stable. Implementors who are not taking part in the discussions are likely to find the specification changing out from under them in incompatible ways. Vendors interested in implementing this specification before it eventually reaches the Candidate Recommendation stage should track the GitHub repository and take part in the discussions.
This document was published by the Media Working Group as an Editor's Draft.
GitHub Issues are preferred for discussion of this specification.
Publication as an Editor's Draft does not imply endorsement by the W3C Membership.
This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.
This document was produced by a group operating under the W3C Patent Policy . W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy .
This document is governed by the 15 September 2020 W3C Process Document .
This section is non-normative.
This specification allows JavaScript to dynamically construct media streams for <audio> and <video>. It defines a MediaSource object that can serve as a source of media data for an HTMLMediaElement. MediaSource objects have one or more SourceBuffer objects. Applications append data segments to the SourceBuffer objects, and can adapt the quality of appended data based on system performance and other factors. Data from the SourceBuffer objects is managed as track buffers for audio, video and text data that is decoded and played. Byte stream specifications used with these extensions are available in the byte stream format registry [MSE-REGISTRY].
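A minimal sketch of this flow, assuming a single self-contained WebM file at an illustrative URL and codec string:

// Create a MediaSource, attach it to a media element, and append one buffer.
const video = document.querySelector('video');
const mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', async () => {
  const sourceBuffer = mediaSource.addSourceBuffer('video/webm; codecs="vp9"');
  const response = await fetch('media/init-and-segments.webm'); // hypothetical URL
  const data = await response.arrayBuffer();
  sourceBuffer.addEventListener('updateend', () => {
    // All appends complete; signal the end of the presentation.
    if (!sourceBuffer.updating && mediaSource.readyState === 'open') {
      mediaSource.endOfStream();
    }
  });
  sourceBuffer.appendBuffer(data);
});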
This specification was designed with the following goals in mind:
This specification defines:
The track buffers that provide coded frames for the enabled audioTracks, the selected videoTracks, and the "showing" or "hidden" textTracks. All these tracks are associated with SourceBuffer objects in the activeSourceBuffers list.
A presentation timestamp range used to filter out coded frames while appending. The append window represents a single continuous time range with a single start time and end time. Coded frames with presentation timestamp within this range are allowed to be appended to the SourceBuffer while coded frames outside this range are filtered out. The append window start and end times are controlled by the appendWindowStart and appendWindowEnd attributes respectively.
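For example, an application could restrict appended frames to a known sub-range of a segment before appending it (a sketch; sourceBuffer and segmentData are assumed to exist already):

// Keep only coded frames whose presentation timestamps fall in [10, 30).
sourceBuffer.appendWindowStart = 10;
sourceBuffer.appendWindowEnd = 30;
sourceBuffer.appendBuffer(segmentData); // frames outside the window are filtered out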
A unit of media data that has a presentation timestamp, a decode timestamp, and a coded frame duration.

The duration of a coded frame. For video and text, the duration indicates how long the video frame or text SHOULD be displayed. For audio, the duration represents the sum of all the samples contained within the coded frame. For example, if an audio frame contained 441 samples @44100Hz the frame duration would be 10 milliseconds.

The sum of a coded frame presentation timestamp and its coded frame duration. It represents the presentation timestamp that immediately follows the coded frame.
A group of coded frames that are adjacent and have monotonically increasing decode timestamps without any gaps. Discontinuities detected by the coded frame processing algorithm and abort() calls trigger the start of a new coded frame group.
The decode timestamp indicates the latest time at which the frame needs to be decoded assuming instantaneous decoding and rendering of this and any dependent frames (this is equal to the presentation timestamp of the earliest frame, in presentation order, that is dependent on this frame). If frames can be decoded out of presentation order, then the decode timestamp MUST be present in or derivable from the byte stream. The user agent MUST run the append error algorithm if this is not the case. If frames cannot be decoded out of presentation order and a decode timestamp is not present in the byte stream, then the decode timestamp is equal to the presentation timestamp.
A sequence of bytes that contain all of the initialization information required to decode a sequence of media segments. This includes codec initialization data, Track ID mappings for multiplexed segments, and timestamp offsets (e.g., edit lists).

The byte stream format specifications in the byte stream format registry [MSE-REGISTRY] contain format specific examples.

A sequence of bytes that contain packetized & timestamped media data for a portion of the media timeline. Media segments are always associated with the most recently appended initialization segment.

The byte stream format specifications in the byte stream format registry [MSE-REGISTRY] contain format specific examples.
A MediaSource object URL is a unique Blob URI [FILEAPI] created by createObjectURL(). It is used to attach a MediaSource object to an HTMLMediaElement. These URLs are the same as a Blob URI, except that anything in the definition of that feature that refers to File and Blob objects is hereby extended to also apply to MediaSource objects. The origin of the MediaSource object URL is the relevant settings object of this during the call to createObjectURL().

For example, the origin of the MediaSource object URL affects the way that the media element is consumed by canvas.
The parent media source of a SourceBuffer object is the MediaSource object that created it.
The presentation start time is the earliest time point in the presentation and specifies the initial playback position and earliest possible position . All presentations created using this specification have a presentation start time of 0.
For the purposes of determining if HTMLMediaElement.buffered contains a TimeRanges that includes the current playback position, implementations MAY choose to allow a current playback position at or after presentation start time and before the first TimeRanges to play the first TimeRanges if that TimeRanges starts within a reasonably short time, like 1 second, after presentation start time. This allowance accommodates the reality that muxed streams commonly do not begin all tracks precisely at presentation start time. Implementations MUST report the actual buffered range, regardless of this allowance.
The presentation interval of a coded frame is the time interval from its presentation timestamp to the presentation timestamp plus the coded frame's duration. For example, if a coded frame has a presentation timestamp of 10 seconds and a coded frame duration of 100 milliseconds, then the presentation interval would be [10, 10.1). Note that the start of the range is inclusive, but the end of the range is exclusive.

The order that coded frames are rendered in the presentation. The presentation order is achieved by ordering coded frames in monotonically increasing order by their presentation timestamps.

A reference to a specific time in the presentation. The presentation timestamp in a coded frame indicates when the frame SHOULD be rendered.

A position in a media segment where decoding and continuous playback can begin without relying on any previous data in the segment. For video this tends to be the location of I-frames. In the case of audio, most audio frames can be treated as a random access point. Since video tracks tend to have a more sparse distribution of random access points, the location of these points is usually considered the random access points for multiplexed streams.
A SourceBuffer object can dynamically be configured either to expect a byte stream, sent via appendBuffer(), that conforms to a specific byte stream format specification, or to expect WebCodecs [WEBCODECS] chunks sent via appendEncodedChunks().

When configured to expect a byte stream, the SourceBuffer byte stream format specification is the specific byte stream format specification that describes the format of the byte stream expected and accepted by the SourceBuffer object. It is initially selected based on the type passed to the addSourceBuffer() call that created the object, and can be updated by changeType() calls on the object.

When the SourceBuffer was instead given a WebCodecs AudioDecoderConfig or VideoDecoderConfig [WEBCODECS] in the most recent of either the addSourceBuffer() call that created the SourceBuffer or a changeType() call, the SourceBuffer instead expects WebCodecs chunks to be appended via appendEncodedChunks().
A specific set of tracks distributed across one or more SourceBuffer objects owned by a single MediaSource instance.

Implementations MUST support at least 1 MediaSource object with the following configurations:
MediaSource objects MUST support each of the configurations above, but they are only required to support one configuration at a time. Supporting multiple configurations at once or additional configurations is a quality of implementation issue.
A byte stream format specific structure that provides the Track ID , codec configuration, and other metadata for a single track. Each track description inside a single initialization segment has a unique Track ID . The user agent MUST run the append error algorithm if the Track ID is not unique within the initialization segment .
A Track ID is a byte stream format specific identifier that marks sections of the byte stream as being part of a specific track. The Track ID in a track description identifies which sections of a media segment belong to that track.
MediaSource Object

The MediaSource object represents a source of media data for an HTMLMediaElement. It keeps track of the readyState for this source as well as a list of SourceBuffer objects that can be used to add media data to the presentation. MediaSource objects are created by the web application and then attached to an HTMLMediaElement. The application uses the SourceBuffer objects in sourceBuffers to add media data to this source. The HTMLMediaElement fetches this media data from the MediaSource object when it is needed during playback.
Each MediaSource object has a [[live seekable range]] internal slot that stores a normalized TimeRanges object. It is initialized to an empty TimeRanges object when the MediaSource object is created, is maintained by setLiveSeekableRange() and clearLiveSeekableRange(), and is used in HTMLMediaElement Extensions to modify HTMLMediaElement.seekable behavior.
WebIDL
enum ReadyState {
    "closed",
    "open",
    "ended"
};
Enumeration description | |
---|---|
closed | Indicates the source is not currently attached to a media element. |
open | The source has been opened by a media element and is ready for data to be appended to the SourceBuffer objects in sourceBuffers. |
ended | The source is still attached to a media element, but endOfStream() has been called. |
Consider adding a "closing" ReadyState to indicate the source is in the process of being concurrently detached from a media element. This would be useful for some implementations of MediaSource and SourceBuffer in DedicatedWorkerGlobalScope.
WebIDL
enum EndOfStreamError {
    "network",
    "decode"
};
Enumeration description | |
---|---|
network | Terminates playback and signals that a network error has occurred. Note: JavaScript applications SHOULD use this status code to terminate playback with a network error. For example, if a network error occurs while fetching media data. |
decode | Terminates playback and signals that a decoding error has occurred. Note: JavaScript applications SHOULD use this status code to terminate playback with a decode error. For example, if a parsing error occurs while processing out-of-band media data. |
WebIDL
[Exposed=(Window,DedicatedWorker)]
interface MediaSource : EventTarget {
    constructor();
    readonly attribute SourceBufferList sourceBuffers;
    readonly attribute SourceBufferList activeSourceBuffers;
    readonly attribute ReadyState readyState;
    attribute unrestricted double duration;
    attribute EventHandler onsourceopen;
    attribute EventHandler onsourceended;
    attribute EventHandler onsourceclose;
    static readonly attribute boolean canConstructInDedicatedWorker;
    // "optional" with default empty SourceBufferConfig dictionary is used here
    // to pass WebIDL verification. Behavior of the method enforces that either
    // a valid mime-type string or a valid SourceBufferConfig are provided.
    SourceBuffer addSourceBuffer(optional TypeOrConfig typeOrConfig = {});
    undefined removeSourceBuffer(SourceBuffer sourceBuffer);
    undefined endOfStream(optional EndOfStreamError error);
    undefined setLiveSeekableRange(double start, double end);
    undefined clearLiveSeekableRange();
    static boolean isTypeSupported(DOMString type);
};

dictionary SourceBufferConfig {
    // Precisely one of these WebCodecs config objects must be populated to
    // signal via addSourceBuffer or changeType the intent to buffer the
    // corresponding WebCodecs media.
    AudioDecoderConfig audioConfig;  // For appending EncodedAudioChunks
    VideoDecoderConfig videoConfig;  // For appending EncodedVideoChunks
};

// This typedef simplifies the syntax for overloading addSourceBuffer and
// changeType to receive either a mime-type string or a WebCodecs config.
typedef (DOMString or SourceBufferConfig) TypeOrConfig;
sourceBuffers of type SourceBufferList, readonly
Contains the SourceBuffer objects associated with this MediaSource. When readyState equals "closed" this list will be empty. Once readyState transitions to "open" SourceBuffer objects can be added to this list by using addSourceBuffer().
activeSourceBuffers of type SourceBufferList, readonly
Contains the subset of sourceBuffers that are providing the selected video track, the enabled audio track(s), and the "showing" or "hidden" text track(s). SourceBuffer objects in this list MUST appear in the same order as they appear in the sourceBuffers attribute; e.g., if only sourceBuffers[0] and sourceBuffers[3] are in activeSourceBuffers, then activeSourceBuffers[0] MUST equal sourceBuffers[0] and activeSourceBuffers[1] MUST equal sourceBuffers[3].
The Changes to selected/enabled track state section describes how this attribute gets updated.
readyState of type ReadyState, readonly
Indicates the current state of the MediaSource object. When the MediaSource is created readyState MUST be set to "closed".
duration of type unrestricted double
Allows the web application to set the presentation duration. The duration is initially set to NaN when the MediaSource object is created.

On getting, run the following steps:

1. If the readyState attribute is "closed" then return NaN and abort these steps.
2. Return the current value of the attribute.
On setting, run the following steps:

1. If the value being set is negative or NaN then throw a TypeError exception and abort these steps.
2. If the readyState attribute is not "open" then throw an InvalidStateError exception and abort these steps.
3. If the updating attribute equals true on any SourceBuffer in sourceBuffers, then throw an InvalidStateError exception and abort these steps.
4. Run the duration change algorithm with new duration set to the value being assigned to this attribute.
The duration change algorithm will adjust new duration higher if there is any currently buffered coded frame with a higher end time.
appendBuffer(), appendEncodedChunks() and endOfStream() can update the duration under certain circumstances.
onsourceopen of type EventHandler
The event handler for the sourceopen event.

onsourceended of type EventHandler
The event handler for the sourceended event.

onsourceclose of type EventHandler
The event handler for the sourceclose event.
canConstructInDedicatedWorker of type boolean
Returns true. This attribute enables main thread and dedicated worker feature detection of support for creating and using a MediaSource object in a dedicated worker, and mitigates the need for higher latency detection polyfills like attempting creation of a MediaSource object from a dedicated worker, especially if the feature is not supported.
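A feature-detection sketch (the worker script name is an illustrative assumption):

// Decide where to construct the MediaSource without a polyfill round-trip.
if (MediaSource.canConstructInDedicatedWorker) {
  // Buffer off the main thread.
  const worker = new Worker('buffering-worker.js'); // hypothetical script
} else {
  // Fall back to main-thread buffering.
  const mediaSource = new MediaSource();
}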
addSourceBuffer
Adds a new SourceBuffer to sourceBuffers.

Parameter | Type | Nullable | Optional | Description |
---|---|---|---|---|
typeOrConfig | TypeOrConfig | ✘ | ✘ | |

SourceBuffer
When this method is invoked, the user agent must run the following steps:
1. If typeOrConfig is an empty string or an empty SourceBufferConfig, then throw a TypeError exception and abort these steps.
2. If typeOrConfig is a DOMString containing a MIME type that is not supported or that is not supported with the types specified for the other SourceBuffer objects in sourceBuffers, then throw a NotSupportedError exception and abort these steps.
3. If typeOrConfig is a SourceBufferConfig that has both an audioConfig and a videoConfig or has neither, then throw a TypeError exception and abort these steps and this method.
4. If typeOrConfig is a SourceBufferConfig that contains neither a valid AudioDecoderConfig in audioConfig nor a valid VideoDecoderConfig in videoConfig, then throw a TypeError exception and abort these steps and this method.
5. If typeOrConfig is a SourceBufferConfig that is not supported with the configurations specified for the other SourceBuffer objects in sourceBuffers, then throw a NotSupportedError exception and abort these steps and this method.
6. If the user agent can't handle any more SourceBuffer objects or if creating a SourceBuffer based on typeOrConfig would result in an unsupported SourceBuffer configuration, then throw a QuotaExceededError exception and abort these steps.

   For example, a user agent MAY throw a QuotaExceededError exception if the media element has reached the HAVE_METADATA readyState. This can occur if the user agent's media engine does not support adding more tracks during playback.

7. If the readyState attribute is not in the "open" state then throw an InvalidStateError exception and abort these steps.
8. Create a new SourceBuffer object and associated resources.
9. If typeOrConfig is a DOMString, then set the [[generate timestamps flag]] on the new object to the value in the "Generate Timestamps Flag" column of the byte stream format registry [MSE-REGISTRY] entry that is associated with typeOrConfig. Otherwise, set [[generate timestamps flag]] on the new object to false, and enqueue the SourceBufferConfig from typeOrConfig into the new object's [[input webcodecs configs and chunks]] for potential handling later during that object's chunks append algorithm.
10. If [[generate timestamps flag]] equals true:
    Set the mode attribute on the new object to "sequence".
    Otherwise:
    Set the mode attribute on the new object to "segments".
11. Add the new object to sourceBuffers and queue a task to fire an event named addsourcebuffer at sourceBuffers.
12. Return the new object.
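A sketch of both accepted argument forms; the codec strings and config contents are illustrative:

// Byte stream form: appended bytes are parsed per the registered format.
const audioBuffer = mediaSource.addSourceBuffer('audio/mp4; codecs="mp4a.40.2"');

// WebCodecs form (proposed in this draft): the SourceBuffer instead accepts
// encoded chunks via appendEncodedChunks(). Exactly one of audioConfig or
// videoConfig may be present.
const chunkBuffer = mediaSource.addSourceBuffer({
  videoConfig: { codec: 'vp09.00.10.08' }, // illustrative VideoDecoderConfig
});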
removeSourceBuffer
Removes a SourceBuffer from sourceBuffers.

Parameter | Type | Nullable | Optional | Description |
---|---|---|---|---|
sourceBuffer | SourceBuffer | ✘ | ✘ | |
undefined
When this method is invoked, the user agent must run the following steps:
1. If sourceBuffer specifies an object that is not in sourceBuffers then throw a NotFoundError exception and abort these steps.
2. If the sourceBuffer.updating attribute equals true, then run the following steps:
   1. Abort the buffer append or chunks append algorithm if it is running.
   2. Set the sourceBuffer.updating attribute to false.
   3. If there is a pending Promise in sourceBuffer's [[pending append chunks promise]]:
      Reject that Promise with an AbortError DOMException and unset [[pending append chunks promise]].
      Otherwise:
      1. Queue a task to fire an event named abort at sourceBuffer.
      2. Queue a task to fire an event named updateend at sourceBuffer.
3. Let SourceBuffer audioTracks list equal the AudioTrackList object returned by sourceBuffer.audioTracks.
4. For each AudioTrack object in the SourceBuffer audioTracks list, run the following steps:
   1. Set the sourceBuffer attribute on the AudioTrack object to null.
   2. Remove the AudioTrack object from the SourceBuffer audioTracks list.

      This should trigger AudioTrackList [HTML] logic to queue a task to fire an event named removetrack using TrackEvent with the track attribute initialized to the AudioTrack object, at the SourceBuffer audioTracks list. If the enabled attribute on the AudioTrack object was true at the beginning of this removal step, then this should also trigger AudioTrackList [HTML] logic to queue a task to fire an event named change at the SourceBuffer audioTracks list.

   3. If the MediaSource was constructed in a DedicatedWorkerGlobalScope:
      Post an internal remove track mirror message to [[port to main]] whose implicit handler in Window runs the steps in the following "otherwise" case for the Window AudioTrack mirror of the DedicatedWorkerGlobalScope AudioTrack object created previously by the implicit handler for the internal create track mirror message.
      Otherwise:
      1. Let HTMLMediaElement audioTracks list equal the AudioTrackList object returned by the audioTracks attribute on the HTMLMediaElement.
      2. Remove the AudioTrack object from the HTMLMediaElement audioTracks list.

         This should trigger AudioTrackList [HTML] logic to queue a task to fire an event named removetrack using TrackEvent with the track attribute initialized to the AudioTrack object, at the HTMLMediaElement audioTracks list. If the enabled attribute on the AudioTrack object was true at the beginning of this removal step, then this should also trigger AudioTrackList [HTML] logic to queue a task to fire an event named change at the HTMLMediaElement audioTracks list.

5. Let SourceBuffer videoTracks list equal the VideoTrackList object returned by sourceBuffer.videoTracks.
6. For each VideoTrack object in the SourceBuffer videoTracks list, run the following steps:
   1. Set the sourceBuffer attribute on the VideoTrack object to null.
   2. Remove the VideoTrack object from the SourceBuffer videoTracks list.

      This should trigger VideoTrackList [HTML] logic to queue a task to fire an event named removetrack using TrackEvent with the track attribute initialized to the VideoTrack object, at the SourceBuffer videoTracks list. If the selected attribute on the VideoTrack object was true at the beginning of this removal step, then this should also trigger VideoTrackList [HTML] logic to queue a task to fire an event named change at the SourceBuffer videoTracks list.

   3. If the MediaSource was constructed in a DedicatedWorkerGlobalScope:
      Post an internal remove track mirror message to [[port to main]] whose implicit handler in Window runs the steps in the following "otherwise" case for the Window VideoTrack mirror of the DedicatedWorkerGlobalScope VideoTrack object created previously by the implicit handler for the internal create track mirror message.
      Otherwise:
      1. Let HTMLMediaElement videoTracks list equal the VideoTrackList object returned by the videoTracks attribute on the HTMLMediaElement.
      2. Remove the VideoTrack object from the HTMLMediaElement videoTracks list.

         This should trigger VideoTrackList [HTML] logic to queue a task to fire an event named removetrack using TrackEvent with the track attribute initialized to the VideoTrack object, at the HTMLMediaElement videoTracks list. If the selected attribute on the VideoTrack object was true at the beginning of this removal step, then this should also trigger VideoTrackList [HTML] logic to queue a task to fire an event named change at the HTMLMediaElement videoTracks list.

7. Let SourceBuffer textTracks list equal the TextTrackList object returned by sourceBuffer.textTracks.
8. For each TextTrack object in the SourceBuffer textTracks list, run the following steps:
   1. Set the sourceBuffer attribute on the TextTrack object to null.
   2. Remove the TextTrack object from the SourceBuffer textTracks list.

      This should trigger TextTrackList [HTML] logic to queue a task to fire an event named removetrack using TrackEvent with the track attribute initialized to the TextTrack object, at the SourceBuffer textTracks list. If the mode attribute on the TextTrack object was "showing" or "hidden" at the beginning of this removal step, then this should also trigger TextTrackList [HTML] logic to queue a task to fire an event named change at the SourceBuffer textTracks list.

   3. If the MediaSource was constructed in a DedicatedWorkerGlobalScope:
      Post an internal remove track mirror message to [[port to main]] whose implicit handler in Window runs the steps in the following "otherwise" case for the Window TextTrack mirror of the DedicatedWorkerGlobalScope TextTrack object created previously by the implicit handler for the internal create track mirror message.
      Otherwise:
      1. Let HTMLMediaElement textTracks list equal the TextTrackList object returned by the textTracks attribute on the HTMLMediaElement.
      2. Remove the TextTrack object from the HTMLMediaElement textTracks list.

         This should trigger TextTrackList [HTML] logic to queue a task to fire an event named removetrack using TrackEvent with the track attribute initialized to the TextTrack object, at the HTMLMediaElement textTracks list. If the mode attribute on the TextTrack object was "showing" or "hidden" at the beginning of this removal step, then this should also trigger TextTrackList [HTML] logic to queue a task to fire an event named change at the HTMLMediaElement textTracks list.

9. If sourceBuffer is in activeSourceBuffers, then remove sourceBuffer from activeSourceBuffers and queue a task to fire an event named removesourcebuffer at the SourceBufferList returned by activeSourceBuffers.
10. Remove sourceBuffer from sourceBuffers and queue a task to fire an event named removesourcebuffer at the SourceBufferList returned by sourceBuffers.
11. Destroy all resources for sourceBuffer.
endOfStream
Signals the end of the stream.
Parameter | Type | Nullable | Optional | Description |
---|---|---|---|---|
error | EndOfStreamError | ✘ | ✔ | |
undefined
When this method is invoked, the user agent must run the following steps:
1. If the readyState attribute is not in the "open" state then throw an InvalidStateError exception and abort these steps.
2. If the updating attribute equals true on any SourceBuffer in sourceBuffers, then throw an InvalidStateError exception and abort these steps.
3. Run the end of stream algorithm with the error parameter set to error.
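A usage sketch (lastSegmentAppended is an assumed application-side flag):

// After the final append completes, transition the MediaSource to "ended".
sourceBuffer.addEventListener('updateend', () => {
  if (lastSegmentAppended && !sourceBuffer.updating &&
      mediaSource.readyState === 'open') {
    mediaSource.endOfStream();            // normal end of presentation
    // mediaSource.endOfStream('decode'); // or signal a decode error instead
  }
});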
setLiveSeekableRange
Updates [[live seekable range]] that is used in HTMLMediaElement Extensions to modify HTMLMediaElement.seekable behavior.
Parameter | Type | Nullable | Optional | Description |
---|---|---|---|---|
start | double | ✘ | ✘ | The start of the range, in seconds measured from presentation start time. While set, and if duration equals positive Infinity, HTMLMediaElement.seekable will return a non-empty TimeRanges object with a lowest range start timestamp no greater than start. |
end | double | ✘ | ✘ | The end of range, in seconds measured from presentation start time. While set, and if duration equals positive Infinity, HTMLMediaElement.seekable will return a non-empty TimeRanges object with a highest range end timestamp no less than end. |
undefined
When this method is invoked, the user agent must run the following steps:
1. If the readyState attribute is not "open" then throw an InvalidStateError exception and abort these steps.
2. If start is negative or greater than end, then throw a TypeError exception and abort these steps.
3. Set [[live seekable range]] to be a new normalized TimeRanges object containing a single range whose start position is start and end position is end.
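A live-stream sketch (liveEdge is an assumed application-maintained timestamp of the newest appended media):

// Advertise a sliding five-minute seekable window behind the live edge.
mediaSource.duration = Infinity; // typical for live presentations
setInterval(() => {
  if (mediaSource.readyState === 'open') {
    mediaSource.setLiveSeekableRange(Math.max(0, liveEdge - 300), liveEdge);
  }
}, 1000);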
clearLiveSeekableRange
Updates [[live seekable range]] that is used in HTMLMediaElement Extensions to modify HTMLMediaElement.seekable behavior.
undefined
When this method is invoked, the user agent must run the following steps:
1. If the readyState attribute is not "open" then throw an InvalidStateError exception and abort these steps.
2. If [[live seekable range]] contains a range, then set [[live seekable range]] to be a new empty TimeRanges object.
isTypeSupported, static
Check to see whether the MediaSource is capable of creating SourceBuffer objects for the specified MIME type.

If true is returned from this method, it only indicates that the MediaSource implementation is capable of creating SourceBuffer objects for the specified MIME type. An addSourceBuffer() call SHOULD still fail if sufficient resources are not available to support the addition of a new SourceBuffer.

This method returning true implies that HTMLMediaElement.canPlayType() will return "maybe" or "probably" since it does not make sense for a MediaSource to support a type the HTMLMediaElement knows it cannot play.

For proactively checking support for a WebCodecs AudioDecoderConfig or VideoDecoderConfig, use the respective AudioDecoder.isConfigSupported() or VideoDecoder.isConfigSupported() method.
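A probing sketch; the candidate type strings are illustrative:

// Pick the first container/codec combination this user agent can buffer.
const candidates = [
  'video/webm; codecs="av01.0.04M.08"',
  'video/mp4; codecs="avc1.42E01E, mp4a.40.2"',
];
const type = candidates.find((t) => MediaSource.isTypeSupported(t));
if (type === undefined) {
  // Fall back, e.g. to a progressive-download <video src>.
}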
Parameter | Type | Nullable | Optional | Description |
---|---|---|---|---|
type | DOMString | ✘ | ✘ | |
boolean
When this method is invoked, the user agent must run the following steps:
Event name | Interface | Dispatched when... |
---|---|---|
sourceopen | Event | readyState transitions from "closed" to "open" or from "ended" to "open". |
sourceended | Event | readyState transitions from "open" to "ended". |
sourceclose | Event | readyState transitions from "open" to "closed" or from "ended" to "closed". |
When a Window HTMLMediaElement is attached to a DedicatedWorkerGlobalScope MediaSource, each context has algorithms that depend on information from the other. HTMLMediaElement is exposed only to Window contexts, but MediaSource and related objects defined in this specification are exposed in Window and DedicatedWorkerGlobalScope contexts. This lets applications construct a MediaSource object in either of those types of context and attach it to an HTMLMediaElement object in a Window context using a MediaSource object URL as described in the attaching to a media element algorithm. A MediaSource object is not Transferable; it is only visible in the context where it was created.
The following describes the model for attaching a Window media element to a DedicatedWorkerGlobalScope MediaSource. While the model describes communication using message passing, implementations MAY choose to communicate in potentially faster ways, such as using shared memory and locks. Attachments to a Window MediaSource synchronously have the information already without communicating it across contexts.
A MediaSource that is constructed in a DedicatedWorkerGlobalScope has a [[port to main]] internal slot that stores a MessagePort set up during attachment and nulled during detachment. A Window MediaSource's [[port to main]] is always null.

An HTMLMediaElement extended by this specification and attached to a DedicatedWorkerGlobalScope MediaSource similarly has a [[port to worker]] internal slot that stores a MessagePort and a [[channel with worker]] internal slot that stores a MessageChannel, both set up during attachment and nulled during detachment. Both [[port to worker]] and [[channel with worker]] are null unless attached to a DedicatedWorkerGlobalScope MediaSource.
Algorithms in this specification that need to communicate information from a Window HTMLMediaElement to an attached DedicatedWorkerGlobalScope MediaSource, or vice versa, will use these internal ports implicitly to post a message to their counterpart, where the implicit handler of the message runs steps as described in the algorithms.
A MediaSource object can be attached to a media element by assigning a MediaSource object URL to the media element src attribute or the src attribute of a <source> inside a media element. A MediaSource object URL is created by passing a MediaSource object to createObjectURL().
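A cross-context attachment sketch under this draft; the script file name is illustrative:

// buffering-worker.js (DedicatedWorkerGlobalScope)
const mediaSource = new MediaSource();
mediaSource.addEventListener('sourceopen', () => {
  const sb = mediaSource.addSourceBuffer('video/webm; codecs="vp9"');
  // ...fetch and append segments entirely within the worker...
});
// Per this specification, createObjectURL() also accepts MediaSource objects.
postMessage(URL.createObjectURL(mediaSource));

// main.js (Window)
const worker = new Worker('buffering-worker.js');
worker.onmessage = (e) => {
  document.querySelector('video').src = e.data; // runs the attachment algorithm
};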
If the resource fetch algorithm was invoked with a media provider object that is a MediaSource object or a URL record whose object is a MediaSource object, then let mode be local, skip the first step in the resource fetch algorithm (which may otherwise set mode to remote) and add the steps and clarifications below to the " Otherwise (mode is local) " section of the resource fetch algorithm .
The resource fetch algorithm's first step is expected to eventually align with selecting local mode for URL records whose objects are media provider objects. The intent is that if the HTMLMediaElement's src attribute or selected child <source>'s src attribute is a blob: URL matching a MediaSource object URL when the respective src attribute was last changed, then that MediaSource object is used as the media provider object and current media resource in the local mode logic in the resource fetch algorithm. This also means that the remote mode logic that includes observance of any preload attribute is skipped when a MediaSource object is attached. Even with that eventual change to [HTML], the execution of the following steps at the beginning of the local mode logic is still required when the current media resource is a MediaSource object.
Relative to the action which triggered the media element's resource selection algorithm, these steps are asynchronous. The resource fetch algorithm is run after the task that invoked the resource selection algorithm is allowed to continue and a stable state is reached. Implementations may delay the steps in the " Otherwise " clause, below, until the MediaSource object is ready for use.
If readyState is NOT set to "closed":
Run the "If the media data cannot be fetched at all, due to network errors, causing the user agent to give up trying to fetch the resource" steps of the resource fetch algorithm's media data processing steps list.

Otherwise, if the MediaSource was constructed in a DedicatedWorkerGlobalScope, then set up worker attachment communication and open the MediaSource:
1. Set [[channel with worker]] to be a new MessageChannel.
2. Set [[port to worker]] to the port1 value of [[channel with worker]].
3. Transfer port2 of [[channel with worker]] as both the value and the sole member of the transferList, and let the result be serialized port2.
4. Queue a task in the MediaSource's DedicatedWorkerGlobalScope that will run the following steps:
   1. Deserialize serialized port2 in the DedicatedWorkerGlobalScope's realm, and set [[port to main]] to be the resulting deserialized clone of the transferred port2 value of [[channel with worker]].
   2. Set the readyState attribute to "open".
   3. Queue a task to fire an event named sourceopen at the MediaSource.

Otherwise, the MediaSource was constructed in a Window:
1. Set [[channel with worker]] to null.
2. Set [[port to worker]] to null.
3. Set [[port to main]] to null.
4. Set the readyState attribute to "open".
5. Queue a task to fire an event named sourceopen at the MediaSource.

Allow the resource fetch algorithm to progress only based on data appended via appendBuffer() or appendEncodedChunks() while the MediaSource is attached.
An attached MediaSource does not use the remote mode steps in the resource fetch algorithm , so the media element will not fire "suspend" events. Though future versions of this specification will likely remove "progress" and "stalled" events from a media element with an attached MediaSource, user agents conforming to this version of the specification may still fire these two events as these [ HTML ] references changed after implementations of this specification stabilized.
The following steps are run in any case where the media element is going to transition to NETWORK_EMPTY and queue a task to fire an event named emptied at the media element. These steps SHOULD be run right before the transition.
If the MediaSource was constructed in a DedicatedWorkerGlobalScope:
1. Notify the MediaSource using an internal detach message posted to [[port to worker]].
2. Set [[port to worker]] to null.
3. Set [[channel with worker]] to null.
4. The implicit handler for the detach notification runs the remainder of these steps in the DedicatedWorkerGlobalScope MediaSource.

Otherwise, the MediaSource was constructed in a Window:
Run the remainder of these steps on the Window MediaSource.

1. Set [[port to main]] to null.
2. Set the readyState attribute to "closed".
3. Update duration to NaN.
4. Remove all the SourceBuffer objects from activeSourceBuffers.
5. Queue a task to fire an event named removesourcebuffer at activeSourceBuffers.
6. Remove all the SourceBuffer objects from sourceBuffers.
7. Queue a task to fire an event named removesourcebuffer at sourceBuffers.
8. Queue a task to fire an event named sourceclose at the MediaSource.
Going forward, this algorithm is intended to be externally called and run in any case where the attached MediaSource, if any, must be detached from the media element. It MAY be called on HTMLMediaElement [HTML] operations like load() and resource fetch algorithm failures in addition to, or in place of, when the media element transitions to NETWORK_EMPTY. Resource fetch algorithm failures are those which abort either the resource fetch algorithm or the resource selection algorithm, with the exception that the "Final step" [HTML] is not considered a failure that triggers detachment.
Run the following steps as part of the " Wait until the user agent has established whether or not the media data for the new playback position is available, and, if it is, until it has decoded enough data to play back that position" step of the seek algorithm :
1. The media element looks for media segments containing the new playback position in each SourceBuffer object in activeSourceBuffers. Any position within a TimeRanges in the current value of the HTMLMediaElement.buffered attribute has all necessary media segments buffered for that position.

   If new playback position is not in any TimeRanges of HTMLMediaElement.buffered:
   1. If the HTMLMediaElement.readyState attribute is greater than HAVE_METADATA, then set the HTMLMediaElement.readyState attribute to HAVE_METADATA.

      Per HTMLMediaElement ready states [HTML] logic, HTMLMediaElement.readyState changes may trigger events on the HTMLMediaElement.
   2. The media element waits until an appendBuffer() or appendEncodedChunks() call causes the coded frame processing algorithm to set the HTMLMediaElement.readyState attribute to a value greater than HAVE_METADATA.

      The web application can use buffered and HTMLMediaElement.buffered to determine what the media element needs to resume playback.
If the readyState attribute is "ended" and the new playback position is within a TimeRanges currently in HTMLMediaElement.buffered, then the seek operation must continue to completion here even if one or more currently selected or enabled track buffers' largest range end timestamp is less than new playback position. This condition should only occur due to logic in buffered when readyState is "ended".
The following steps are periodically run during playback to make sure that all of the SourceBuffer objects in activeSourceBuffers have enough data to ensure uninterrupted playback. Changes to activeSourceBuffers also cause these steps to run because they affect the conditions that trigger state transitions.

Having enough data to ensure uninterrupted playback is an implementation specific condition where the user agent determines that it currently has enough data to play the presentation without stalling for a meaningful period of time. This condition is constantly evaluated to determine when to transition the media element into and out of the HAVE_ENOUGH_DATA ready state. These transitions indicate when the user agent believes it has enough data buffered or it needs more data respectively.

An implementation MAY choose to use bytes buffered, time buffered, the append rate, or any other metric it sees fit to determine when it has enough data. The metrics used MAY change during playback so web applications SHOULD only rely on the value of HTMLMediaElement.readyState to determine whether more data is needed or not.
When the media element needs more data, the user agent SHOULD transition it from HAVE_ENOUGH_DATA to HAVE_FUTURE_DATA early enough for a web application to be able to respond without causing an interruption in playback. For example, transitioning when the current playback position is 500ms before the end of the buffered data gives the application roughly 500ms to append more data before playback stalls.
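A sketch of an application-side reaction, keeping roughly ten seconds buffered ahead of the playhead (fetchNextSegment() is an assumed application-provided function):

// Rely only on buffered/readyState, as advised above, to decide when to append.
video.addEventListener('timeupdate', async () => {
  const buffered = sourceBuffer.buffered;
  if (buffered.length === 0 || sourceBuffer.updating) return;
  const ahead = buffered.end(buffered.length - 1) - video.currentTime;
  if (ahead < 10) {
    sourceBuffer.appendBuffer(await fetchNextSegment()); // hypothetical fetch
  }
});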
If the HTMLMediaElement.readyState attribute equals HAVE_NOTHING:
1. Abort these steps.

If HTMLMediaElement.buffered does not contain a TimeRanges for the current playback position:
1. Set the HTMLMediaElement.readyState attribute to HAVE_METADATA.

   Per HTMLMediaElement ready states [HTML] logic, HTMLMediaElement.readyState changes may trigger events on the HTMLMediaElement.

2. Abort these steps.

If HTMLMediaElement.buffered contains a TimeRanges that includes the current playback position and enough data to ensure uninterrupted playback:
1. Set the HTMLMediaElement.readyState attribute to HAVE_ENOUGH_DATA.

   Per HTMLMediaElement ready states [HTML] logic, HTMLMediaElement.readyState changes may trigger events on the HTMLMediaElement.

2. Playback may resume at this point if it was previously suspended by a transition to HAVE_CURRENT_DATA.
3. Abort these steps.

If HTMLMediaElement.buffered contains a TimeRanges that includes the current playback position and some time beyond the current playback position, then run the following steps:
1. Set the HTMLMediaElement.readyState attribute to HAVE_FUTURE_DATA.

   Per HTMLMediaElement ready states [HTML] logic, HTMLMediaElement.readyState changes may trigger events on the HTMLMediaElement.

2. Playback may resume at this point if it was previously suspended by a transition to HAVE_CURRENT_DATA.
3. Abort these steps.

If HTMLMediaElement.buffered contains a TimeRanges that ends at the current playback position and does not have a range covering the time immediately after the current position:
1. Set the HTMLMediaElement.readyState attribute to HAVE_CURRENT_DATA.

   Per HTMLMediaElement ready states [HTML] logic, HTMLMediaElement.readyState changes may trigger events on the HTMLMediaElement.

2. Playback is suspended at this point since the media element doesn't have enough data to advance the media timeline.
During playback activeSourceBuffers needs to be updated if the selected video track, the enabled audio track(s), or a text track mode changes. When one or more of these changes occur the following steps need to be followed. Also, when MediaSource was constructed in a DedicatedWorkerGlobalScope, then each change that occurs to a Window mirror of a track created previously by the implicit handler for the internal create track mirror message MUST also be made to the corresponding DedicatedWorkerGlobalScope track using an internal update track state message posted to [[port to worker]] whose implicit handler makes the change and runs the following steps. Likewise, each change that occurs to a DedicatedWorkerGlobalScope track MUST also be made to the corresponding Window mirror of the track using an internal update track state message posted to [[port to main]] whose implicit handler makes the change to the mirror.
If the selected video track changes, then run the following steps:
1. If the SourceBuffer associated with the previously selected video track is not associated with any other enabled tracks, run the following steps:
   1. Remove the SourceBuffer from activeSourceBuffers.
   2. Queue a task to fire an event named removesourcebuffer at activeSourceBuffers
2. If the SourceBuffer associated with the newly selected video track is not already in activeSourceBuffers, run the following steps:
   1. Add the SourceBuffer to activeSourceBuffers.
   2. Queue a task to fire an event named addsourcebuffer at activeSourceBuffers

If an audio track becomes disabled and the SourceBuffer associated with this track is not associated with any other enabled or selected track, then run the following steps:
1. Remove the SourceBuffer associated with the audio track from activeSourceBuffers
2. Queue a task to fire an event named removesourcebuffer at activeSourceBuffers

If an audio track becomes enabled and the SourceBuffer associated with this track is not already in activeSourceBuffers, then run the following steps:
1. Add the SourceBuffer associated with the audio track to activeSourceBuffers
2. Queue a task to fire an event named addsourcebuffer at activeSourceBuffers

If a text track mode becomes "disabled" and the SourceBuffer associated with this track is not associated with any other enabled or selected track, then run the following steps:
1. Remove the SourceBuffer associated with the text track from activeSourceBuffers
2. Queue a task to fire an event named removesourcebuffer at activeSourceBuffers

If a text track mode becomes "showing" or "hidden" and the SourceBuffer associated with this track is not already in activeSourceBuffers, then run the following steps:
1. Add the SourceBuffer associated with the text track to activeSourceBuffers
2. Queue a task to fire an event named addsourcebuffer at activeSourceBuffers
Follow these steps when duration needs to change to a new duration.

1. If the current value of duration is equal to new duration, then return.
2. If new duration is less than the highest presentation timestamp of any buffered coded frames for all SourceBuffer objects in sourceBuffers, then throw an InvalidStateError exception and abort these steps.
3. Let highest end time be the largest track buffer ranges end time across all the track buffers across all SourceBuffer objects in sourceBuffers.
4. If new duration is less than highest end time, then update new duration to equal highest end time.

   This condition can occur because the coded frame removal algorithm preserves coded frames that start before the start of the removal range.

5. Update duration to new duration.
6. Update the media duration to new duration and run the HTMLMediaElement duration change algorithm:
   If the MediaSource was constructed in a DedicatedWorkerGlobalScope:
   Post an internal duration change message to [[port to main]] whose implicit handler in Window runs the HTMLMediaElement duration change algorithm.
   Otherwise:
   Run the HTMLMediaElement duration change algorithm.
This algorithm gets called when the application signals the end of stream via an endOfStream() call or an algorithm needs to signal a decode error. This algorithm takes an error parameter that indicates whether an error will be signalled.

1. Change the readyState attribute value to "ended".
2. Queue a task to fire an event named sourceended at the MediaSource.
3. If error is not set:
   1. Run the duration change algorithm with new duration set to the largest track buffer ranges end time across all the track buffers across all SourceBuffer objects in sourceBuffers.
This allows the duration to properly reflect the end of the appended media segments. For example, if the duration was explicitly set to 10 seconds and only media segments for 0 to 5 seconds were appended before endOfStream() was called, then the duration will get updated to 5 seconds.
   2. Notify the media element that it now has all of the media data.

If error is set to "network":
If the MediaSource was constructed in a DedicatedWorkerGlobalScope:
Post an internal network error message to [[port to main]] whose implicit handler in Window runs the steps in the following "otherwise" case.
Otherwise:
If the HTMLMediaElement.readyState attribute equals HAVE_NOTHING:
Run the "If the media data cannot be fetched at all, due to network errors, causing the user agent to give up trying to fetch the resource" steps of the resource fetch algorithm's media data processing steps list.
If the HTMLMediaElement.readyState attribute is greater than HAVE_NOTHING:
Run the "If the connection is interrupted after some media data has been received, causing the user agent to give up trying to fetch the resource" steps of the resource fetch algorithm's media data processing steps list.

If error is set to "decode":
If the MediaSource was constructed in a DedicatedWorkerGlobalScope:
Post an internal decode error message to [[port to main]] whose implicit handler in Window runs the steps in the following "otherwise" case.
Otherwise:
If the HTMLMediaElement.readyState attribute equals HAVE_NOTHING:
Run the "If the media data can be fetched but is found by inspection to be in an unsupported format, or can otherwise not be rendered at all" steps of the resource fetch algorithm's media data processing steps list.
If the HTMLMediaElement.readyState attribute is greater than HAVE_NOTHING:
Run the media data is corrupted steps of the resource fetch algorithm's media data processing steps list.
SourceBuffer Object

WebIDL
enum AppendMode {
    "segments",
    "sequence"
};
Enumeration description | |
---|---|
segments | The timestamps in the media segment or appended WebCodecs chunks determine where the coded frames are placed in the presentation. Media segments can be appended in any order. |
sequence | Media segments will be treated as adjacent in time independent of the timestamps in the media segment. Coded frames in a new media segment will be placed immediately after the coded frames in the previous media segment. As with "segments" mode, the timestampOffset attribute can be updated; setting it in "sequence" mode allows a media segment to be placed at a specific position in the timeline without any knowledge of the timestamps in the media segment. |
WebIDL
[Exposed=(Window,DedicatedWorker)]
interface SourceBuffer : EventTarget {
    attribute AppendMode mode;
    readonly attribute boolean updating;
    readonly attribute TimeRanges buffered;
    attribute double timestampOffset;
    readonly attribute AudioTrackList audioTracks;
    readonly attribute VideoTrackList videoTracks;
    readonly attribute TextTrackList textTracks;
    attribute double appendWindowStart;
    attribute unrestricted double appendWindowEnd;
    attribute EventHandler onupdatestart;
    attribute EventHandler onupdate;
    attribute EventHandler onupdateend;
    attribute EventHandler onerror;
    attribute EventHandler onabort;
    undefined appendBuffer(BufferSource data);
    Promise<undefined> appendEncodedChunks(EncodedChunks chunks);
    undefined abort();
    // "optional" with default empty SourceBufferConfig dictionary is used here
    // to pass WebIDL verification. Behavior of the method enforces that either
    // a valid mime-type string or a valid SourceBufferConfig are provided.
    undefined changeType(optional TypeOrConfig typeOrConfig = {});
    undefined remove(double start, unrestricted double end);
};

// This typedef simplifies the syntax for describing either a single audio or
// video chunk, or a sequence of chunks. While this typedef does not provide the
// ability to enforce that all members of a sequence of chunks are the same kind
// (audio versus video), the behavior of the appendEncodedChunks method enforces
// that constraint: the implementation MUST reject a sequence containing both
// audio and video chunks.
typedef (sequence<(EncodedAudioChunk or EncodedVideoChunk)>
         or EncodedAudioChunk
         or EncodedVideoChunk) EncodedChunks;
mode of type AppendMode
Controls how a sequence of media segments is handled. This attribute is initially set by addSourceBuffer() after the object is created, and can be updated by changeType() or setting this attribute.
On getting, return the initial value or the last value that was successfully set.

On setting, run the following steps:

1. If this object has been removed from the sourceBuffers attribute of the parent media source, then throw an InvalidStateError exception and abort these steps.
2. If the updating attribute equals true, then throw an InvalidStateError exception and abort these steps.
3. If [[generate timestamps flag]] equals true and new mode equals "segments", then throw a TypeError exception and abort these steps.
4. If the readyState attribute of the parent media source is in the "ended" state then run the following steps:
   1. Set the readyState attribute of the parent media source to "open"
   2. Queue a task to fire an event named sourceopen at the parent media source.
5. If the [[append state]] equals PARSING_MEDIA_SEGMENT, then throw an InvalidStateError and abort these steps.
6. If the new mode equals "sequence", then set the [[group start timestamp]] to the [[group end timestamp]].
7. Update the attribute to new mode.
updating of type boolean, readonly
Indicates whether the asynchronous continuation of an appendBuffer(), appendEncodedChunks() or remove() operation is still being processed. This attribute is initially set to false when the object is created.
buffered of type TimeRanges, readonly
Indicates what TimeRanges are buffered in the SourceBuffer. This attribute is initially set to an empty TimeRanges object when the object is created.
When the attribute is read the following steps MUST occur:

1. If this object has been removed from the sourceBuffers attribute of the parent media source then throw an InvalidStateError exception and abort these steps.
2. Let highest end time be the largest track buffer ranges end time across all the track buffers managed by this SourceBuffer object.
3. Let intersection ranges equal a TimeRanges object containing a single range from 0 to highest end time.
4. For each audio and video track buffer managed by this SourceBuffer, run the following steps:

   Text track buffers are included in the calculation of highest end time, above, but excluded from the buffered range calculation here. They are not necessarily continuous, nor should any discontinuity within them trigger playback stall when the other media tracks are continuous over the same time range.

   1. Let track ranges equal the track buffer ranges for the current track buffer.
   2. If readyState is "ended", then set the end time on the last range in track ranges to highest end time.
   3. Let new intersection ranges equal the intersection between the intersection ranges and the track ranges.
   4. Replace the ranges in intersection ranges with the new intersection ranges.
5. If intersection ranges does not contain the exact same range information as the current value of this attribute, then update the current value of this attribute to intersection ranges.
6. Return the current value of this attribute.
timestampOffset of type double
Controls the offset applied to timestamps inside subsequent media segments or EncodedChunks that are appended to this SourceBuffer. The timestampOffset is initially set to 0 which indicates that no offset is being applied.
On getting, return the initial value or the last value that was successfully set.

On setting, run the following steps:

1. Let new timestamp offset equal the new value being assigned to this attribute.
2. If this object has been removed from the sourceBuffers attribute of the parent media source, then throw an InvalidStateError exception and abort these steps.
3. If the updating attribute equals true, then throw an InvalidStateError exception and abort these steps.
4. If the readyState attribute of the parent media source is in the "ended" state then run the following steps:
   1. Set the readyState attribute of the parent media source to "open"
   2. Queue a task to fire an event named sourceopen at the parent media source.
5. If the [[append state]] equals PARSING_MEDIA_SEGMENT, then throw an InvalidStateError and abort these steps.
6. If the mode attribute equals "sequence", then set the [[group start timestamp]] to new timestamp offset.
7. Update the attribute to new timestamp offset.
audioTracks of type AudioTrackList, readonly
The list of AudioTrack objects created by this object.

videoTracks of type VideoTrackList, readonly
The list of VideoTrack objects created by this object.

textTracks of type TextTrackList, readonly
The list of TextTrack objects created by this object.
appendWindowStart of type double
The presentation timestamp for the start of the append window. This attribute is initially set to the presentation start time.
On getting, return the initial value or the last value that was successfully set.

On setting, run the following steps:

1. If this object has been removed from the sourceBuffers attribute of the parent media source, then throw an InvalidStateError exception and abort these steps.
2. If the updating attribute equals true, then throw an InvalidStateError exception and abort these steps.
3. If the new value is less than 0 or greater than or equal to appendWindowEnd then throw a TypeError exception and abort these steps.
4. Update the attribute to the new value.
appendWindowEnd of type unrestricted double
The presentation timestamp for the end of the append window. This attribute is initially set to positive Infinity.
On getting, return the initial value or the last value that was successfully set.

On setting, run the following steps:

1. If this object has been removed from the sourceBuffers attribute of the parent media source, then throw an InvalidStateError exception and abort these steps.
2. If the updating attribute equals true, then throw an InvalidStateError exception and abort these steps.
3. If the new value equals NaN, then throw a TypeError and abort these steps.
4. If the new value is less than or equal to appendWindowStart then throw a TypeError exception and abort these steps.
5. Update the attribute to the new value.
onupdatestart of type EventHandler
The event handler for the updatestart event.

onupdate of type EventHandler
The event handler for the update event.

onupdateend of type EventHandler
The event handler for the updateend event.

onerror of type EventHandler
The event handler for the error event.

onabort of type EventHandler
The event handler for the abort event.
appendBuffer
Appends the segment data in a BufferSource [WEBIDL] to the SourceBuffer.

Parameter | Type | Nullable | Optional | Description |
---|---|---|---|---|
data | BufferSource | ✘ | ✘ | |
undefined
When this method is invoked, the user agent must run the following steps:
1. Run the prepare append algorithm.
2. Add data to the end of the [[input buffer]].
3. Set the updating attribute to true.
4. Queue a task to fire an event named updatestart at this SourceBuffer object.
5. Asynchronously run the buffer append algorithm.

If the SourceBuffer is currently configured by the most recent of the addSourceBuffer() call that created this object or a more recent changeType() call to expect to buffer WebCodecs EncodedChunks via appendEncodedChunks(), the asynchronous buffer append algorithm will detect this and trigger the append error algorithm.
appendEncodedChunks
Appends WebCodecs [WEBCODECS] EncodedChunks to the SourceBuffer.

Parameter | Type | Nullable | Optional | Description |
---|---|---|---|---|
chunks | EncodedChunks | ✘ | ✘ | |

Promise<undefined>
When this method is invoked, the user agent must run the following steps:
1. Run the prepare append algorithm.
2. If chunks is a sequence that contains both audio and video chunks, then throw a TypeError exception and abort these steps.
3. Add chunks to the end of the [[input webcodecs configs and chunks]].
4. Set the updating attribute to true.

   Unlike the appendBuffer() method: promise resolution or rejection is the way to detect the analogues of the "abort", "error", "updatestart", "update", and "updateend" events that are not enqueued by appendEncodedChunks().

5. Let promise be a new Promise.
6. Set [[pending append chunks promise]] to be promise.
7. Asynchronously run the chunks append algorithm.
8. Return promise.
If the SourceBuffer is currently configured by the most recent of the addSourceBuffer() call that created this object or a more recent changeType() call to expect to buffer bytes parsed from a BufferSource via appendBuffer(), the asynchronous chunks append algorithm will detect this and trigger the append error algorithm.
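A sketch of the proposed promise-based chunk append; the chunk contents are illustrative and encodedFrameBytes is an assumed Uint8Array of encoded data:

// Append a single WebCodecs chunk; no SourceBuffer events fire for this path.
const chunk = new EncodedVideoChunk({
  type: 'key',
  timestamp: 0, // microseconds
  data: encodedFrameBytes,
});
sourceBuffer.appendEncodedChunks(chunk)
  .then(() => { /* analogue of "update" and "updateend" */ })
  .catch((e) => { /* analogue of "error" or "abort" */ });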
abort
Aborts the current segment and resets the segment parser.
undefined
When this method is invoked, the user agent must run the following steps:
1. If this object has been removed from the sourceBuffers attribute of the parent media source then throw an InvalidStateError exception and abort these steps.
2. If the readyState attribute of the parent media source is not in the "open" state then throw an InvalidStateError exception and abort these steps.
3. If the range removal algorithm is running, then throw an InvalidStateError exception and abort these steps.
4. If the updating attribute equals true, then run the following steps:
   1. Abort the buffer append or chunks append algorithm if it is running.
   2. Set the updating attribute to false.
   3. If there is a pending Promise in [[pending append chunks promise]]:
      Reject that Promise with an AbortError DOMException and unset [[pending append chunks promise]].
      Otherwise:
      1. Queue a task to fire an event named abort at this SourceBuffer object.
      2. Queue a task to fire an event named updateend at this SourceBuffer object.
5. Run the reset parser state algorithm.
6. Set appendWindowStart to the presentation start time.
7. Set appendWindowEnd to positive Infinity.
changeType
Changes the SourceBuffer byte stream format specification or WebCodecs chunks buffering expectations for this object. This enables switching among bytestreams, codecs, or even between bytestream parsing versus buffering of WebCodecs encoded chunks in the same SourceBuffer. Subsequent appendBuffer() calls will expect the newly appended bytes to conform to the new type.
Parameter | Type | Nullable | Optional | Description |
---|---|---|---|---|
typeOrConfig | TypeOrConfig | ✘ | ✘ | |
undefined
When this method is invoked, the user agent must run the following steps:
1. If typeOrConfig is an empty string or an empty SourceBufferConfig, then throw a TypeError exception and abort these steps.
2. If typeOrConfig is a SourceBufferConfig that has both an audioConfig and a videoConfig or has neither, then throw a TypeError exception and abort these steps and this method.
3. If typeOrConfig is a SourceBufferConfig that contains neither a valid AudioDecoderConfig in audioConfig nor a valid VideoDecoderConfig in videoConfig, then throw a TypeError exception and abort these steps and this method.
4. If this object has been removed from the sourceBuffers attribute of the parent media source, then throw an InvalidStateError exception and abort these steps.
5. If the updating attribute equals true, then throw an InvalidStateError exception and abort these steps.
6. If typeOrConfig is a DOMString containing a type that is not supported or that is not supported with the types or configurations specified for the other SourceBuffer objects in the sourceBuffers attribute of the parent media source, then throw a NotSupportedError exception and abort these steps.
7. If typeOrConfig is a SourceBufferConfig that is not supported with the types or configurations specified for the other SourceBuffer objects in the sourceBuffers attribute of the parent media source, then throw a NotSupportedError exception and abort these steps.
8. If the readyState attribute of the parent media source is in the "ended" state then run the following steps:
   1. Set the readyState attribute of the parent media source to "open".
   2. Queue a task to fire an event named sourceopen at the parent media source.
9. Run the reset parser state algorithm.
10. If typeOrConfig is a DOMString:
    Set the [[generate timestamps flag]] on this SourceBuffer object to the value in the "Generate Timestamps Flag" column of the byte stream format registry [MSE-REGISTRY] entry that is associated with typeOrConfig, and remove any SourceBufferConfig that may be in this SourceBuffer object's [[input webcodecs configs and chunks]].
    Otherwise:
    Set [[generate timestamps flag]] on this SourceBuffer object to false, and enqueue the SourceBufferConfig from typeOrConfig into this SourceBuffer object's [[input webcodecs configs and chunks]] for potential handling later during the chunks append algorithm.
11. If [[generate timestamps flag]] equals true:
    Set the mode attribute on this SourceBuffer object to "sequence", including running the associated steps for that attribute being set.
    Otherwise:
    Keep the previous value of the mode attribute on this SourceBuffer object, without running any associated steps for that attribute being set.
12. Set [[pending initialization segment for changeType flag]] on this SourceBuffer object to true.
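A codec-switch sketch; the type string and config are illustrative:

// Switch the SourceBuffer to a new bytestream type, or to WebCodecs chunks.
if (!sourceBuffer.updating) {
  sourceBuffer.changeType('video/mp4; codecs="avc1.64001f"');
  // ...or, per this draft:
  // sourceBuffer.changeType({ videoConfig: { codec: 'vp09.00.10.08' } });
}
// The next append must begin with an initialization segment (or rely on the
// enqueued config) that conforms to the new expectations.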
remove
Removes media for a specific time range.
Parameter | Type | Nullable | Optional | Description |
---|---|---|---|---|
start | double | ✘ | ✘ | The start of the removal range, in seconds measured from presentation start time. |
end | unrestricted double | ✘ | ✘ | The end of the removal range, in seconds measured from presentation start time. |
undefined
When this method is invoked, the user agent must run the following steps:
If this object has been removed from the sourceBuffers attribute of the parent media source then throw an InvalidStateError exception and abort these steps.
If the updating attribute equals true, then throw an InvalidStateError exception and abort these steps.
If duration equals NaN, then throw a TypeError exception and abort these steps.
If start is negative or greater than duration, then throw a TypeError exception and abort these steps.
If end is less than or equal to start or end equals NaN, then throw a TypeError exception and abort these steps.
If the readyState attribute of the parent media source is in the "ended" state then run the following steps:
Set the readyState attribute of the parent media source to "open".
Queue a task to fire an event named sourceopen at the parent media source.
Run the range removal algorithm with start and end as the start and end of the removal range.
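Non-normative illustration of remove(): trimming buffered media more than 30 seconds behind the playhead. The video and sourceBuffer variables are assumptions.
function trimBackBuffer(video, sourceBuffer) {
  if (sourceBuffer.updating || sourceBuffer.buffered.length == 0)
    return;
  var start = sourceBuffer.buffered.start(0);
  var end = video.currentTime - 30;
  if (end > start)
    sourceBuffer.remove(start, end); // fires updatestart, update, updateend
}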
A track buffer stores the track descriptions and coded frames for an individual track. The track buffer is updated as initialization segments, media segments, decoder configurations and EncodedChunks are given to the SourceBuffer.
Each track buffer has a last decode timestamp variable that stores the decode timestamp of the last coded frame appended in the current coded frame group . The variable is initially unset to indicate that no coded frames have been appended yet.
Each track buffer has a last frame duration variable that stores the coded frame duration of the last coded frame appended in the current coded frame group . The variable is initially unset to indicate that no coded frames have been appended yet.
Each track buffer has a highest end timestamp variable that stores the highest coded frame end timestamp across all coded frames in the current coded frame group that were appended to this track buffer. The variable is initially unset to indicate that no coded frames have been appended yet.
Each track buffer has a need random access point flag variable that keeps track of whether the track buffer is waiting for a random access point coded frame. The variable is initially set to true to indicate that a random access point coded frame is needed before anything can be added to the track buffer.
Each track buffer has a track buffer ranges variable that represents the presentation time ranges occupied by the coded frames currently stored in the track buffer.
For track buffer ranges, these presentation time ranges are based on presentation timestamps, frame durations, and potentially coded frame group start times for coded frame groups across track buffers in a muxed SourceBuffer. For specification purposes, this information is treated as if it were stored in a normalized TimeRanges object. Intersected track buffer ranges are used to report HTMLMediaElement.buffered, and MUST therefore support uninterrupted playback within each range of HTMLMediaElement.buffered.
These coded frame group start times differ slightly from those mentioned in the coded frame processing algorithm in that they are the earliest presentation timestamp across all track buffers following a discontinuity. Discontinuities can occur within the coded frame processing algorithm or result from the coded frame removal algorithm, regardless of mode.
The threshold for determining disjointness of track buffer ranges is implementation-specific. For example, to reduce unexpected playback stalls, implementations MAY approximate the coded frame processing algorithm's discontinuity detection logic by coalescing adjacent ranges separated by a gap smaller than 2 times the maximum frame duration buffered so far in this track buffer. Implementations MAY also use coded frame group start times as range start times across track buffers in a muxed SourceBuffer to further reduce unexpected playback stalls.
Event name | Interface | Dispatched when... |
---|---|---|
updatestart | Event | updating transitions from false to true. |
update | Event | The append or remove has successfully completed. updating transitions from true to false. |
updateend | Event | The append or remove has ended. |
error | Event | An error occurred during the append. updating transitions from true to false. |
abort | Event | The append was aborted by an abort() call. updating transitions from true to false. |
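Non-normative illustration of serializing appends using these events; the sourceBuffer variable and pendingSegments queue are assumptions.
var pendingSegments = []; // Filled elsewhere with ArrayBuffer segments.
function pump(sourceBuffer) {
  // Only one append or remove may be in flight per SourceBuffer.
  if (sourceBuffer.updating || pendingSegments.length == 0)
    return;
  sourceBuffer.appendBuffer(pendingSegments.shift());
}
sourceBuffer.addEventListener('updateend', function() { pump(sourceBuffer); });
sourceBuffer.addEventListener('error', function() { console.log('append failed'); });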
Each SourceBuffer object has an [[append state]] internal slot that keeps track of the high-level segment parsing state. It is initially set to WAITING_FOR_SEGMENT and can transition to the following states as data is appended.
Append state name | Description |
---|---|
WAITING_FOR_SEGMENT | Waiting for the start of an initialization segment or media segment to be appended. |
PARSING_INIT_SEGMENT | Currently parsing an initialization segment. |
PARSING_MEDIA_SEGMENT | Currently parsing a media segment. |
Each SourceBuffer object has an [[input buffer]] internal slot that is a byte buffer that holds unparsed bytes across appendBuffer() calls. The buffer is empty when the SourceBuffer object is created.
Each SourceBuffer object has a [[buffer full flag]] internal slot that keeps track of whether appendBuffer() is allowed to accept more bytes or appendEncodedChunks() is allowed to accept more encoded chunks. It is set to false when the SourceBuffer object is created and gets updated as data is appended and removed.
Each SourceBuffer object has a [[group start timestamp]] internal slot that keeps track of the starting timestamp for a new coded frame group in the "sequence" mode. It is unset when the SourceBuffer object is created and gets updated when the mode attribute equals "sequence" and the timestampOffset attribute is set, or the coded frame processing algorithm runs.
Each SourceBuffer object has a [[group end timestamp]] internal slot that stores the highest coded frame end timestamp across all coded frames in the current coded frame group. It is set to 0 when the SourceBuffer object is created and gets updated by the coded frame processing algorithm.
The [[group end timestamp]] stores the highest coded frame end timestamp across all track buffers in a SourceBuffer. Therefore, care should be taken in setting the mode attribute when appending multiplexed segments in which the timestamps are not aligned across tracks.
Each SourceBuffer object has a [[generate timestamps flag]] internal slot that is a boolean that keeps track of whether timestamps need to be generated for the coded frames passed to the coded frame processing algorithm. This flag is set by addSourceBuffer() when the SourceBuffer object is created and is updated by changeType().
When the segment parser loop algorithm is invoked, run the following steps:
Loop Top: If the [[input buffer]] is empty, then jump to the need more data step below.
If the [[input buffer]] contains bytes that violate the SourceBuffer byte stream format specification, then run the append error algorithm and abort this algorithm.
Remove any bytes that the byte stream format specifications say MUST be ignored from the start of the [[input buffer]].
If the [[append state]] equals WAITING_FOR_SEGMENT, then run the following steps:
If the beginning of the [[input buffer]] indicates the start of an initialization segment, set the [[append state]] to PARSING_INIT_SEGMENT.
If the beginning of the [[input buffer]] indicates the start of a media segment, set [[append state]] to PARSING_MEDIA_SEGMENT.
Jump to the loop top step above.
If the [[append state]] equals PARSING_INIT_SEGMENT, then run the following steps:
If the [[input buffer]] does not contain a complete initialization segment yet, then jump to the need more data step below.
Run the initialization segment received algorithm.
Remove the initialization segment bytes from the beginning of the [[input buffer]].
Set [[append state]] to WAITING_FOR_SEGMENT.
Jump to the loop top step above.
If the [[append state]] equals PARSING_MEDIA_SEGMENT, then run the following steps:
If the [[first initialization segment received flag]] is false or the [[pending initialization segment for changeType flag]] is true, then run the append error algorithm and abort this algorithm.
If the [[input buffer]] contains one or more complete coded frames, then run the coded frame processing algorithm.
The frequency at which the coded frame processing algorithm is run is implementation-specific. The coded frame processing algorithm MAY be called when the input buffer contains the complete media segment or it MAY be called multiple times as complete coded frames are added to the input buffer.
If this SourceBuffer is full and cannot accept more media data, then set the [[buffer full flag]] to true.
If the [[input buffer]] does not contain a complete media segment, then jump to the need more data step below.
Remove the media segment bytes from the beginning of the [[input buffer]].
Set [[append state]] to WAITING_FOR_SEGMENT.
Jump to the loop top step above.
Need more data: Return control to the calling algorithm.
When the parser state needs to be reset, run the following steps:
If the [[append state]] equals PARSING_MEDIA_SEGMENT and the [[input buffer]] contains some complete coded frames, then run the coded frame processing algorithm until all of these complete coded frames have been processed.
If the mode attribute equals "sequence", then set the [[group start timestamp]] to the [[group end timestamp]].
Remove all bytes from the [[input buffer]].
If [[input webcodecs configs and chunks]] contains any EncodedChunks, remove them from [[input webcodecs configs and chunks]], but retain any AudioDecoderConfig or VideoDecoderConfig that may be in that internal slot.
This lets a subsequent appendEncodedChunks() reuse the config in the chunks append algorithm, because there is no other way to retain such an unprocessed config, except perhaps by the app calling changeType() with the config again.
Set the [[append state]] to WAITING_FOR_SEGMENT.
This algorithm is called when an error occurs during an append.
Run the reset parser state algorithm.
Set the updating attribute to false.
If this SourceBuffer has a Promise in [[pending append chunks promise]]: Reject that Promise with an AbortError DOMException and unset [[pending append chunks promise]].
Queue a task to fire an event named error at this SourceBuffer object.
Queue a task to fire an event named updateend at this SourceBuffer object.
Run the end of stream algorithm with the error parameter set to "decode".
When an append operation begins, the following steps are run to validate and prepare the SourceBuffer.
If the SourceBuffer has been removed from the sourceBuffers attribute of the parent media source then throw an InvalidStateError exception and abort these steps.
If the updating attribute equals true, then throw an InvalidStateError exception and abort these steps.
Let recent element error be determined as follows:
If the MediaSource was constructed in a Window: recent element error is true if the attached HTMLMediaElement's error attribute is not null. If that attribute is null, then let recent element error be false.
Otherwise: recent element error is determined as in the Window case, but run on the Window HTMLMediaElement on any change to its error attribute and communicated by using [[port to worker]] implicit messages. If such a message has not yet been received, then let recent element error be false.
If recent element error is true, then throw an InvalidStateError exception and abort these steps.
If the readyState attribute of the parent media source is in the "ended" state then run the following steps:
Set the readyState attribute of the parent media source to "open".
Queue a task to fire an event named sourceopen at the parent media source.
If the [[buffer full flag]] equals true, then throw a QuotaExceededError exception and abort these steps.
This is the signal that the implementation was unable to evict enough data to accommodate the append or the append is too big. The web application SHOULD use remove() to explicitly free up space and/or reduce the size of the append.
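Non-normative illustration of handling QuotaExceededError by freeing space behind the playhead and retrying; variable names are assumptions.
function appendWithQuotaRetry(video, sourceBuffer, segment) {
  try {
    sourceBuffer.appendBuffer(segment);
  } catch (e) {
    if (e.name != 'QuotaExceededError')
      throw e;
    // Evict already-played media, then retry once the removal completes.
    sourceBuffer.remove(0, Math.max(0, video.currentTime - 10));
    sourceBuffer.addEventListener('updateend',
        function() { sourceBuffer.appendBuffer(segment); }, { once: true });
  }
}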
When appendBuffer() is called, the following steps are run to process the appended data.
If this SourceBuffer is currently configured to expect processing of WebCodecs EncodedChunks, then run the append error algorithm and abort this algorithm.
The expectation of bytestream versus EncodedChunks is based on the most recently successful execution of the initial addSourceBuffer() that created this SourceBuffer, or potentially a more recent changeType() call that may have changed the expectation.
Set the updating attribute to false.
Queue a task to fire an event named update at this SourceBuffer object.
Queue a task to fire an event named updateend at this SourceBuffer object.
Each SourceBuffer object has a [[pending append chunks promise]] internal slot that stores the promise necessary to communicate the completion of asynchronous steps begun by a call to appendEncodedChunks(). If there is no asynchronous appendEncodedChunks() operation in progress, then this slot is unset.
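Non-normative illustration of awaiting the promise stored in this slot; the chunk array is an assumption.
async function appendChunks(sourceBuffer, chunks /* EncodedVideoChunk[] */) {
  for (const chunk of chunks) {
    // The returned promise resolves when buffering of the appended chunk
    // completes, or rejects (e.g., with AbortError) on abort() or error.
    await sourceBuffer.appendEncodedChunks(chunk);
  }
}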
Each SourceBuffer object has an [[input webcodecs configs and chunks]] internal slot that stores a queue of unprocessed EncodedChunks and at most one SourceBufferConfig. The SourceBufferConfig is added if the initial call to addSourceBuffer() that created this object was provided a SourceBufferConfig, or if there is a more recent successful call to changeType() on this SourceBuffer that provided a SourceBufferConfig. The contents of this slot are processed by the chunks append algorithm to buffer the necessary decoder configuration and coded frames into the underlying track buffer.
When the chunks append algorithm is invoked, run the following steps to process the appended EncodedChunks relative to the current SourceBufferConfig:
If this SourceBuffer is currently configured to expect processing of an appended bytestream, then run the append error algorithm and abort this algorithm.
The expectation of bytestream versus EncodedChunks is based on the most recently successful execution of the initial addSourceBuffer() that created this SourceBuffer, or potentially a more recent changeType() call that may have changed the expectation.
If there is a SourceBufferConfig in this SourceBuffer's [[input webcodecs configs and chunks]], then run the following steps:
Let config be the SourceBufferConfig from [[input webcodecs configs and chunks]].
There is exactly one valid AudioDecoderConfig or VideoDecoderConfig in config; this condition was already enforced by the addSourceBuffer() or changeType() call that placed config into [[input webcodecs configs and chunks]].
Run the initialization segment received algorithm to handle config, then remove the SourceBufferConfig from [[input webcodecs configs and chunks]].
For each chunk in [[input webcodecs configs and chunks]], prepare the EncodedChunks for coded frame processing by running the following steps for each chunk:
Remove chunk from [[input webcodecs configs and chunks]].
If chunk is an EncodedAudioChunk but the most recently processed config by this object was a VideoDecoderConfig, or if chunk is an EncodedVideoChunk but the most recently processed config by this object was an AudioDecoderConfig, then run the append error algorithm and abort this algorithm.
An app can recover from this by calling abort() when updating is false. A more heavyweight option would be to call changeType(). Also note that, though WebCodecs does not define a decode timestamp attribute for encoded chunks, if reliable buffering of chunks into MSE needs real decode timestamps, this spec may be refined to improve that support.
Set the updating attribute to false.
Resolve the Promise in [[pending append chunks promise]] and unset that internal slot.
Follow these steps when a caller needs to initiate a JavaScript visible range removal operation that blocks other SourceBuffer updates:
Set the updating attribute to true.
Queue a task to fire an event named updatestart at this SourceBuffer object.
Return control to the caller and run the rest of the steps asynchronously.
Run the coded frame removal algorithm with start and end as the start and end of the removal range.
Set the updating attribute to false.
Queue a task to fire an event named update at this SourceBuffer object.
Queue a task to fire an event named updateend at this SourceBuffer object.
The following steps are run when the segment parser loop successfully parses a complete initialization segment or the chunks append algorithm handles a SourceBufferConfig:
Each SourceBuffer object has a [[first initialization segment received flag]] internal slot that tracks whether the first initialization segment has been appended and received by this algorithm. This flag is set to false when the SourceBuffer is created and updated by the algorithm below.
Each SourceBuffer object has a [[pending initialization segment for changeType flag]] internal slot that tracks whether an initialization segment is needed since the most recent changeType(). This flag is set to false when the SourceBuffer is created, set to true by changeType() and reset to false by the algorithm below.
Update the duration attribute if it currently equals NaN:
If the initialization segment contains a duration, then run the duration change algorithm with new duration set to the duration in the initialization segment. Otherwise, run the duration change algorithm with new duration set to positive Infinity.
If the [[first initialization segment received flag]] is true, then run the following steps:
User agents MAY consider codecs, that would otherwise be supported, as "not supported" here if the codecs were not specified in the type parameter passed to (a) the most recently successful changeType() on this SourceBuffer object, or (b) if no successful changeType() has yet occurred on this object, the addSourceBuffer() that created this SourceBuffer object. For example, if the most recently successful changeType() was called with 'video/webm' or 'video/webm; codecs="vp8"', and a video track containing vp9 appears in the initialization segment, then the user agent MAY use this step to trigger a decode error even if the other two properties' checks, above, pass. Implementations are encouraged to trigger error in such cases only when the codec is indeed not supported or the other two properties' checks fail. Web authors are encouraged to use changeType(), addSourceBuffer() and isTypeSupported() with precise codec parameters to more proactively detect user agent support. changeType() is required if the SourceBuffer object's bytestream format is changing.
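Non-normative illustration of the proactive detection encouraged above; the MIME strings are examples and sourceBuffer is an assumption.
var vp9Type = 'video/webm; codecs="vp09.00.10.08"';
if (MediaSource.isTypeSupported(vp9Type)) {
  sourceBuffer.changeType(vp9Type); // Precise codec string, per the note above.
} else {
  // Stay on the current representation rather than risk a decode error.
  console.log('VP9 not supported; keeping current codec.');
}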
If the [[first initialization segment received flag]] is false, then run the following steps:
User agents MAY consider codecs, that would otherwise be supported, as "not supported" here if the codecs were not specified in the type parameter passed to (a) the most recently successful changeType() on this SourceBuffer object, or (b) if no successful changeType() has yet occurred on this object, the addSourceBuffer() that created this SourceBuffer object. For example, MediaSource.isTypeSupported('video/webm;codecs="vp8,vorbis"') may return true, but if addSourceBuffer() was called with 'video/webm;codecs="vp8"' and a Vorbis track appears in the initialization segment, then the user agent MAY use this step to trigger a decode error. Implementations are encouraged to trigger error in such cases only when the codec is indeed not supported. Web authors are encouraged to use changeType(), addSourceBuffer() and isTypeSupported() with precise codec parameters to more proactively detect user agent support. changeType() is required if the SourceBuffer object's bytestream format is changing.
For each audio track in the initialization segment, run the following steps:
Let new audio track be a new AudioTrack object.
Generate a unique ID and assign it to the id property on new audio track.
Assign the language of the track to the language property on new audio track.
Assign the label of the track to the label property on new audio track.
Assign the kind of the track to the kind property on new audio track.
If this SourceBuffer object's audioTracks.length equals 0, then run the following steps:
Set the enabled property on new audio track to true.
Set active track flag to true.
Add new audio track to the audioTracks attribute on this SourceBuffer object.
This should trigger AudioTrackList [HTML] logic to queue a task to fire an event named addtrack using TrackEvent with the track attribute initialized to new audio track, at the AudioTrackList object referenced by the audioTracks attribute on this SourceBuffer object.
If the MediaSource was constructed in a DedicatedWorkerGlobalScope: post an internal create track mirror message to [[port to main]] whose implicit handler in Window runs the following steps:
Let mirrored audio track be a new AudioTrack object.
Assign the same property values to mirrored audio track as were assigned to new audio track.
Add mirrored audio track to the audioTracks attribute on the HTMLMediaElement.
Otherwise: add new audio track to the audioTracks attribute on the HTMLMediaElement.
This should trigger AudioTrackList [HTML] logic to queue a task to fire an event named addtrack using TrackEvent with the track attribute initialized to mirrored audio track or new audio track, at the AudioTrackList object referenced by the audioTracks attribute on the HTMLMediaElement.
For each video track in the initialization segment, run the following steps:
Let new video track be a new VideoTrack object.
Generate a unique ID and assign it to the id property on new video track.
Assign the language of the track to the language property on new video track.
Assign the label of the track to the label property on new video track.
Assign the kind of the track to the kind property on new video track.
If this SourceBuffer object's videoTracks.length equals 0, then run the following steps:
Set the selected property on new video track to true.
Set active track flag to true.
Add new video track to the videoTracks attribute on this SourceBuffer object.
This should trigger VideoTrackList [HTML] logic to queue a task to fire an event named addtrack using TrackEvent with the track attribute initialized to new video track, at the VideoTrackList object referenced by the videoTracks attribute on this SourceBuffer object.
If the MediaSource was constructed in a DedicatedWorkerGlobalScope: post an internal create track mirror message to [[port to main]] whose implicit handler in Window runs the following steps:
Let mirrored video track be a new VideoTrack object.
Assign the same property values to mirrored video track as were assigned to new video track.
Add mirrored video track to the videoTracks attribute on the HTMLMediaElement.
Otherwise: add new video track to the videoTracks attribute on the HTMLMediaElement.
This should trigger VideoTrackList [HTML] logic to queue a task to fire an event named addtrack using TrackEvent with the track attribute initialized to mirrored video track or new video track, at the VideoTrackList object referenced by the videoTracks attribute on the HTMLMediaElement.
For each text track in the initialization segment, run the following steps:
Let new text track be a new TextTrack object.
Generate a unique ID and assign it to the id property on new text track.
Assign the language of the track to the language property on new text track.
Assign the label of the track to the label property on new text track.
Assign the kind of the track to the kind property on new text track.
If the mode property on new text track equals "showing" or "hidden", then set active track flag to true.
Add new text track to the textTracks attribute on this SourceBuffer object.
This should trigger TextTrackList [HTML] logic to queue a task to fire an event named addtrack using TrackEvent with the track attribute initialized to new text track, at the TextTrackList object referenced by the textTracks attribute on this SourceBuffer object.
If the MediaSource was constructed in a DedicatedWorkerGlobalScope: post an internal create track mirror message to [[port to main]] whose implicit handler in Window runs the following steps:
Let mirrored text track be a new TextTrack object.
Assign the same property values to mirrored text track as were assigned to new text track.
Add mirrored text track to the textTracks attribute on the HTMLMediaElement.
Otherwise: add new text track to the textTracks attribute on the HTMLMediaElement.
This should trigger TextTrackList [HTML] logic to queue a task to fire an event named addtrack using TrackEvent with the track attribute initialized to mirrored text track or new text track, at the TextTrackList object referenced by the textTracks attribute on the HTMLMediaElement.
If active track flag equals true, then add this SourceBuffer to activeSourceBuffers and queue a task to fire an event named addsourcebuffer at activeSourceBuffers.
Set [[first initialization segment received flag]] to true.
Set [[pending initialization segment for changeType flag]] to false.
If the MediaSource was constructed in a DedicatedWorkerGlobalScope: post an internal active track added message to [[port to main]] whose implicit handler in Window runs the following step:
If the HTMLMediaElement.readyState attribute is greater than HAVE_CURRENT_DATA, then set the HTMLMediaElement.readyState attribute to HAVE_METADATA.
Otherwise: if the HTMLMediaElement.readyState attribute is greater than HAVE_CURRENT_DATA, then set the HTMLMediaElement.readyState attribute to HAVE_METADATA.
Per HTMLMediaElement ready states [HTML] logic, HTMLMediaElement.readyState changes may trigger events on the HTMLMediaElement.
If each SourceBuffer object in the sourceBuffers attribute of the parent media source has a [[first initialization segment received flag]] equal to true, then run the following steps:
If the MediaSource was constructed in a DedicatedWorkerGlobalScope: post an internal sourcebuffers ready message to [[port to main]] whose implicit handler in Window runs the following step:
If the HTMLMediaElement.readyState attribute is HAVE_NOTHING, then set the HTMLMediaElement.readyState attribute to HAVE_METADATA.
Otherwise: if the HTMLMediaElement.readyState attribute is HAVE_NOTHING, then set the HTMLMediaElement.readyState attribute to HAVE_METADATA.
Per HTMLMediaElement ready states [HTML] logic, HTMLMediaElement.readyState changes may trigger events on the HTMLMediaElement. If a transition from HAVE_NOTHING to HAVE_METADATA occurs, it should trigger HTMLMediaElement logic to queue a task to fire an event named loadedmetadata at the media element.
When complete coded frames have been parsed by the segment parser loop or emitted by the chunks append algorithm, then the following steps are run:
For each coded frame in the media segment run the following steps:
If [[generate timestamps flag]] equals true:
Let presentation timestamp equal 0.
Let decode timestamp equal 0.
Otherwise:
Let presentation timestamp be a double precision floating point representation of the coded frame's presentation timestamp in seconds.
Let decode timestamp be a double precision floating point representation of the coded frame's decode timestamp in seconds.
Special processing may be needed to determine the presentation and decode timestamps for timed text frames since this information may not be explicitly present in the underlying format or may be dependent on the order of the frames. Some metadata text tracks, like MPEG2-TS PSI data, may only have implied timestamps. Format specific rules for these situations SHOULD be in the byte stream format specifications or in separate extension specifications.
Implementations don't have to internally store timestamps in a double precision floating point representation. This representation is used here because it is the representation for timestamps in the HTML spec. The intention here is to make the behavior clear without adding unnecessary complexity to the algorithm to deal with the fact that adding a timestampOffset may cause a timestamp rollover in the underlying timestamp representation used by the byte stream format. Implementations can use any internal timestamp representation they wish, but the addition of timestampOffset SHOULD behave in a similar manner to what would happen if a double precision floating point representation was used.
If mode equals "sequence" and [[group start timestamp]] is set, then run the following steps:
Set timestampOffset equal to [[group start timestamp]] minus presentation timestamp.
Set [[group end timestamp]] equal to [[group start timestamp]].
Set the need random access point flag on all track buffers to true.
Unset [[group start timestamp]].
If timestampOffset is not 0, then run the following steps:
Add timestampOffset to the presentation timestamp.
Add timestampOffset to the decode timestamp.
If mode equals "segments":
Set [[group end timestamp]] to presentation timestamp.
If mode equals "sequence":
Set [[group start timestamp]] equal to the [[group end timestamp]].
If presentation timestamp is less than appendWindowStart, then set the need random access point flag to true, drop the coded frame, and jump to the top of the loop to start processing the next coded frame.
Some implementations MAY choose to collect some of these coded frames with presentation timestamp less than appendWindowStart and use them to generate a splice at the first coded frame that has a presentation timestamp greater than or equal to appendWindowStart even if that frame is not a random access point. Supporting this requires multiple decoders or faster than real-time decoding so for now this behavior will not be a normative requirement.
If frame end timestamp is greater than appendWindowEnd, then set the need random access point flag to true, drop the coded frame, and jump to the top of the loop to start processing the next coded frame.
Some implementations MAY choose to collect coded frames with presentation timestamp less than appendWindowEnd and frame end timestamp greater than appendWindowEnd and use them to generate a splice across the portion of the collected coded frames within the append window at time of collection, and the beginning portion of later processed frames which only partially overlap the end of the collected coded frames. Supporting this requires multiple decoders or faster than real-time decoding so for now this behavior will not be a normative requirement. In conjunction with collecting coded frames that span appendWindowStart, implementations MAY thus support gapless audio splicing.
This is to compensate for minor errors in frame timestamp computations that can appear when converting back and forth between double precision floating point numbers and rationals. This tolerance allows a frame to replace an existing one as long as it is within 1 microsecond of the existing frame's start time. Frames that come slightly before an existing frame are handled by the removal step below.
Removing all coded frames until the next random access point is a conservative estimate of the decoding dependencies since it assumes all frames between the removed frames and the next random access point depended on the frames that were removed.
The greater than check is needed because bidirectional prediction between coded frames can cause presentation timestamp to not be monotonically increasing even though the decode timestamps are monotonically increasing.
If frame end timestamp is greater than [[group end timestamp]], then set [[group end timestamp]] equal to frame end timestamp.
If [[generate timestamps flag]] equals true, then set timestampOffset equal to frame end timestamp.
If the HTMLMediaElement.readyState attribute is HAVE_METADATA and the new coded frames cause HTMLMediaElement.buffered to have a TimeRanges for the current playback position, then set the HTMLMediaElement.readyState attribute to HAVE_CURRENT_DATA.
Per HTMLMediaElement ready states [HTML] logic, HTMLMediaElement.readyState changes may trigger events on the HTMLMediaElement.
If the HTMLMediaElement.readyState attribute is HAVE_CURRENT_DATA and the new coded frames cause HTMLMediaElement.buffered to have a TimeRanges that includes the current playback position and some time beyond the current playback position, then set the HTMLMediaElement.readyState attribute to HAVE_FUTURE_DATA.
Per HTMLMediaElement ready states [HTML] logic, HTMLMediaElement.readyState changes may trigger events on the HTMLMediaElement.
If the HTMLMediaElement.readyState attribute is HAVE_FUTURE_DATA and the new coded frames cause HTMLMediaElement.buffered to have a TimeRanges that includes the current playback position and enough data to ensure uninterrupted playback, then set the HTMLMediaElement.readyState attribute to HAVE_ENOUGH_DATA.
Per HTMLMediaElement ready states [HTML] logic, HTMLMediaElement.readyState changes may trigger events on the HTMLMediaElement.
If the media segment contains data beyond the current duration, then run the duration change algorithm with new duration set to the maximum of the current duration and the [[group end timestamp]].
Follow these steps when coded frames for a specific time range need to be removed from the SourceBuffer:
For each track buffer in this SourceBuffer, run the following steps:
Let remove end timestamp be the current value of duration.
If this track buffer has a random access point timestamp that is greater than or equal to end , then update remove end timestamp to that random access point timestamp.
Random access point timestamps can be different across tracks because the dependencies between coded frames within a track are usually different than the dependencies in another track.
For each removed frame, if the frame has a decode timestamp equal to the last decode timestamp for the frame's track, run the following steps:
If mode equals "segments":
Set [[group end timestamp]] to presentation timestamp.
If mode equals "sequence":
Set [[group start timestamp]] equal to the [[group end timestamp]].
Removing all coded frames until the next random access point is a conservative estimate of the decoding dependencies since it assumes all frames between the removed frames and the next random access point depended on the frames that were removed.
If this object is in activeSourceBuffers, the current playback position is greater than or equal to start and less than the remove end timestamp, and HTMLMediaElement.readyState is greater than HAVE_METADATA, then set the HTMLMediaElement.readyState attribute to HAVE_METADATA and stall playback.
Per HTMLMediaElement ready states [HTML] logic, HTMLMediaElement.readyState changes may trigger events on the HTMLMediaElement.
This transition occurs because media data for the current position has been removed. Playback cannot progress until media for the current playback position is appended or the selected/enabled tracks change .
If the [[buffer full flag]] equals true and this object is ready to accept more bytes, then set the [[buffer full flag]] to false.
This algorithm is run to free up space in this SourceBuffer when new data is appended. New data is either the bytes being appended via appendBuffer() or the WebCodecs EncodedChunks being appended via appendEncodedChunks().
Need to recognize a step here that implementations MAY decide to set the [[buffer full flag]] true here if they predict that processing new data in addition to any existing bytes in [[input buffer]] would exceed the capacity of the SourceBuffer. Such a step enables more proactive push-back from implementations before accepting new data which would overflow resources, for example. In practice, at least one implementation already does this.
If the [[buffer full flag]] equals false, then abort these steps.
Implementations MAY use different methods for selecting removal ranges so web applications SHOULD NOT depend on a specific behavior. The web application can use the buffered attribute to observe whether portions of the buffered data have been evicted.
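Non-normative illustration of observing possible eviction via buffered; the sourceBuffer variable is an assumption.
function logBufferedRanges(sourceBuffer) {
  // Comparing successive snapshots reveals whether ranges were evicted.
  var ranges = sourceBuffer.buffered;
  for (var i = 0; i < ranges.length; i++)
    console.log('range ' + i + ': ' + ranges.start(i) + 's-' + ranges.end(i) + 's');
}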
Follow these steps when the coded frame processing algorithm needs to generate a splice frame for two overlapping audio coded frames :
Timestamps are rounded to the nearest audio sample boundary (i.e., floor(x * sample_rate + 0.5) / sample_rate).
For example, given the following values:
presentation timestamp and decode timestamp are updated to 10.0125 since 10.01255 is closer to 10 + 100/8000 (10.0125) than 10 + 101/8000 (10.012625)
Some implementations MAY apply fades to/from silence to coded frames on either side of the inserted silence to make the transition less jarring.
This is intended to allow the new coded frame to be added to the track buffer as if the overlapped frame had not been in the track buffer to begin with.
If the new coded frame is less than 5 milliseconds in duration, then coded frames that are appended after the new coded frame will be needed to properly render the splice.
See the audio splice rendering algorithm for details on how this splice frame is rendered.
The following steps are run when a spliced frame, generated by the audio splice frame algorithm, needs to be rendered by the media element:
Here is a graphical representation of this algorithm.
Follow these steps when the coded frame processing algorithm needs to generate a splice frame for two overlapping timed text coded frames :
This is intended to allow the new coded frame to be added to the track buffer as if it hadn't overlapped any frames in the track buffer to begin with.
SourceBufferList Object
SourceBufferList is a simple container object for SourceBuffer objects. It provides read-only array access and fires events when the list is modified.
WebIDL
[Exposed=(Window,DedicatedWorker)]
interface SourceBufferList : EventTarget {
    readonly attribute unsigned long length;
    attribute EventHandler onaddsourcebuffer;
    attribute EventHandler onremovesourcebuffer;
    getter SourceBuffer (unsigned long index);
};
length of type unsigned long, readonly
Indicates the number of SourceBuffer objects in the list.
onaddsourcebuffer of type EventHandler
The event handler for the addsourcebuffer event.
onremovesourcebuffer of type EventHandler
The event handler for the removesourcebuffer event.
getter
Allows the SourceBuffer objects in the list to be accessed with an array operator (i.e., []).
Parameter | Type | Nullable | Optional | Description |
---|---|---|---|---|
index | unsigned long | ✘ | ✘ | |
Return type: SourceBuffer
When this method is invoked, the user agent must run the following steps:
If index is greater than or equal to the length attribute then return undefined and abort these steps.
Return the index'th SourceBuffer object in the list.
Event name | Interface | Dispatched when... |
---|---|---|
addsourcebuffer | Event | When a SourceBuffer is added to the list. |
removesourcebuffer | Event | When a SourceBuffer is removed from the list. |
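Non-normative illustration of reacting to list changes; the mediaSource variable is an assumption.
mediaSource.sourceBuffers.addEventListener('addsourcebuffer', function() {
  var list = mediaSource.sourceBuffers;
  // Indexed access uses the getter defined above.
  console.log('list now holds ' + list.length + ' SourceBuffer(s); newest:', list[list.length - 1]);
});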
This section specifies what existing attributes on the HTMLMediaElement MUST return when a MediaSource is attached to the element.
HTMLMediaElement.seekable
The HTMLMediaElement.seekable attribute returns a new static normalized TimeRanges object created based on the following steps:
If the MediaSource was constructed in a DedicatedWorkerGlobalScope that is terminated or is closing then return an empty TimeRanges object and abort these steps.
This case is intended to handle implementations that may no longer maintain any previous information about buffered or seekable media in a MediaSource that was constructed in a DedicatedWorkerGlobalScope that has been terminated by terminate() or user agent execution of terminate a worker for the MediaSource's DedicatedWorkerGlobalScope, for instance as the eventual result of close() execution.
Should there be some (eventual) media element error transition in the case of an attached worker MediaSource having its context destroyed? The experimental Chromium implementation of worker MSE just keeps the element readyState, networkState and error the same as prior to that context destruction, though the seekable and buffered attributes each report an empty TimeRange.
Let recent duration and recent live seekable range be the recent values of duration and [[live seekable range]], determined as follows:
If the MediaSource was constructed in a Window: set recent duration to duration and set recent live seekable range to be [[live seekable range]].
Otherwise: set them to the values of duration and [[live seekable range]] that were recently updated by handling implicit messages posted by the MediaSource to its [[port to main]] on every change to duration or [[live seekable range]].
If recent duration equals NaN, then return an empty TimeRanges object.
If recent duration equals positive Infinity, then run the following steps:
If recent live seekable range is not empty: let union ranges be the union of recent live seekable range and the HTMLMediaElement.buffered attribute, then return a single range with a start time equal to the earliest start time in union ranges and an end time equal to the highest end time in union ranges, and abort these steps.
If the HTMLMediaElement.buffered attribute returns an empty TimeRanges object, then return an empty TimeRanges object and abort these steps.
Return a single range with a start time of 0 and an end time equal to the highest end time reported by the HTMLMediaElement.buffered attribute.
Otherwise: return a single range with a start time of 0 and an end time equal to recent duration.
HTMLMediaElement.buffered
The HTMLMediaElement.buffered attribute returns a static normalized TimeRanges object based on the following steps.
If the MediaSource was constructed in a DedicatedWorkerGlobalScope that is terminated or is closing then return an empty TimeRanges object and abort these steps.
This case is intended to handle implementations that may no longer maintain any previous information about buffered or seekable media in a MediaSource that was constructed in a DedicatedWorkerGlobalScope that has been terminated by terminate() or user agent execution of terminate a worker for the MediaSource's DedicatedWorkerGlobalScope, for instance as the eventual result of close() execution.
Should there be some (eventual) media element error transition in the case of an attached worker MediaSource having its context destroyed? The experimental Chromium implementation of worker MSE just keeps the element readyState, networkState and error the same as prior to that context destruction, though the seekable and buffered attributes each report an empty TimeRange.
If the MediaSource was constructed in a Window:
Let recent intersection ranges equal an empty TimeRanges object.
If activeSourceBuffers.length does not equal 0 then run the following steps:
Let highest end time be the largest range end time in the ranges returned by buffered for each SourceBuffer object in activeSourceBuffers.
Let recent intersection ranges equal a TimeRanges object containing a single range from 0 to highest end time.
For each SourceBuffer object in activeSourceBuffers run the following steps:
Let source ranges equal the ranges returned by the buffered attribute on the current SourceBuffer.
If readyState is "ended", then set the end time on the last range in source ranges to highest end time.
Let new intersection ranges equal the intersection between the recent intersection ranges and the source ranges.
Replace the ranges in recent intersection ranges with the new intersection ranges.
Otherwise: let recent intersection ranges be the TimeRanges resulting from the steps for the Window case, but run with the MediaSource and its SourceBuffer objects in their DedicatedWorkerGlobalScope and communicated by using [[port to main]] implicit messages on every update to the activeSourceBuffers, readyState, or any of the buffering state that would change any of the values of each of those buffered attributes of the activeSourceBuffers.
Return recent intersection ranges.
The overhead of recalculating and communicating recent intersection ranges so frequently is one reason for allowing implementation flexibility to query this information on-demand using other mechanisms such as shared memory and locks as mentioned in cross-context communication model .
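Non-normative illustration of reading the intersected ranges computed by the steps above; assumes a page-level video element.
var video = document.querySelector('video');
for (var i = 0; i < video.buffered.length; i++)
  console.log('buffered: ' + video.buffered.start(i) + 's to ' + video.buffered.end(i) + 's');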
This section specifies extensions to the [HTML] AudioTrack definition.
WebIDL
[Exposed=(Window,DedicatedWorker)]
partial interface AudioTrack {
    readonly attribute SourceBuffer? sourceBuffer;
};
AudioTrack needs Window+DedicatedWorker exposure.
sourceBuffer of type SourceBuffer, readonly, nullable
On getting, run the following step:
If this track was created by a SourceBuffer that was created on the same realm as this track, and if that SourceBuffer has not been removed from the sourceBuffers attribute of its parent media source: return the SourceBuffer that created this track. Otherwise: return null.
If a DedicatedWorkerGlobalScope SourceBuffer notified its internal create track mirror handler in Window to create this track, then the Window copy of the track would return null for this attribute.
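Non-normative illustration of mapping tracks back to their creating SourceBuffer; assumes a video element with an attached MediaSource.
var tracks = video.audioTracks;
for (var i = 0; i < tracks.length; i++) {
  // null for tracks mirrored into Window from a worker-owned SourceBuffer.
  console.log(tracks[i].id, '->', tracks[i].sourceBuffer);
}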
This section specifies extensions to the [HTML] VideoTrack definition.
WebIDL
[Exposed=(Window,DedicatedWorker)]
partial interface VideoTrack {
    readonly attribute SourceBuffer? sourceBuffer;
};
VideoTrack needs Window+DedicatedWorker exposure.
sourceBuffer of type SourceBuffer, readonly, nullable
On getting, run the following step:
If this track was created by a SourceBuffer that was created on the same realm as this track, and if that SourceBuffer has not been removed from the sourceBuffers attribute of its parent media source: return the SourceBuffer that created this track. Otherwise: return null.
If a DedicatedWorkerGlobalScope SourceBuffer notified its internal create track mirror handler in Window to create this track, then the Window copy of the track would return null for this attribute.
This section specifies extensions to the [HTML] TextTrack definition.
WebIDL
[Exposed=(Window,DedicatedWorker)]
partial interface TextTrack {
    readonly attribute SourceBuffer? sourceBuffer;
};
sourceBuffer of type SourceBuffer, readonly, nullable
On getting, run the following step:
If this track was created by a SourceBuffer that was created on the same realm as this track, and if that SourceBuffer has not been removed from the sourceBuffers attribute of its parent media source: return the SourceBuffer that created this track. Otherwise: return null.
If a DedicatedWorkerGlobalScope SourceBuffer notified its internal create track mirror handler in Window to create this track, then the Window copy of the track would return null for this attribute.
The bytes provided through appendBuffer() for a SourceBuffer form a logical byte stream. The format and semantics of these byte streams are defined in byte stream format specifications. The byte stream format registry [MSE-REGISTRY] provides mappings between a MIME type that may be passed to addSourceBuffer(), isTypeSupported() or changeType() and the byte stream format expected by a SourceBuffer using that MIME type for parsing newly appended data. Implementations are encouraged to register mappings for byte stream formats they support to facilitate interoperability.
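Non-normative illustration of choosing a registered byte stream format before creating a SourceBuffer; the MIME strings are examples and mediaSource is an assumption.
var candidates = [
  'video/webm; codecs="vp9,opus"',
  'video/mp4; codecs="avc1.42E01E,mp4a.40.2"',
];
var type = candidates.find(function(t) { return MediaSource.isTypeSupported(t); });
if (type)
  var sourceBuffer = mediaSource.addSourceBuffer(type);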
The byte stream format registry [MSE-REGISTRY] is the authoritative source for these mappings. If an implementation claims to support a MIME type listed in the registry, its SourceBuffer implementation MUST conform to the byte stream format specification listed in the registry entry.
The byte stream format specifications in the registry are not intended to define new storage formats. They simply outline the subset of existing storage format structures that implementations of this specification will accept.
Byte stream format parsing and validation is implemented in the segment parser loop algorithm.
When currently configured by addSourceBuffer() or changeType() to expect appends of WebCodecs encoded chunks via appendEncodedChunks() instead of a bytestream via appendBuffer(), the SourceBuffer does not have a specific bytestream format associated with it.
This section provides general requirements for all byte stream format specifications:
Define how the user agent derives AudioTrack, VideoTrack, and TextTrack attribute values from data in initialization segments.
If the byte stream format covers a format similar to one covered in the in-band tracks spec [ INBANDTRACKS ], then it SHOULD try to use the same attribute mappings so that Media Source Extensions playback and non-Media Source Extensions playback provide the same track information.
The number and type of tracks are not consistent.
For example, if the first initialization segment has 2 audio tracks and 1 video track, then all initialization segments that follow it in the byte stream MUST describe 2 audio tracks and 1 video track.
Unsupported codec changes occur across initialization segments .
See the initialization segment received algorithm, addSourceBuffer() and changeType() for details and examples of codec changes.
Video frame size changes. The user agent MUST support seamless playback.
This will cause the <video> display region to change size if the web application does not use CSS or HTML attributes (width/height) to constrain the element size.
Audio channel count changes. The user agent MAY support this seamlessly and could trigger downmixing.
This is a quality of implementation issue because changing the channel count may require reinitializing the audio device, resamplers, and channel mixers which tends to be audible.
buffered attribute.
This is intended to simplify switching between audio streams where the frame boundaries don't always line up across encodings (e.g., Vorbis).
For example, if I1 is associated with M1, M2, M3 then the above MUST hold for all the combinations I1+M1, I1+M2, I1+M1+M2, I1+M2+M3, etc.
Byte stream specifications MUST at a minimum define constraints which ensure that the above requirements hold. Additional constraints MAY be defined, for example to simplify implementation.
As well as sections marked as non-normative, all authoring guidelines, diagrams, examples, and notes in this specification are non-normative. Everything else in this specification is normative.
The key words MAY , MUST , MUST NOT , SHOULD , and SHOULD NOT in this document are to be interpreted as described in BCP 14 [ RFC2119 ] [ RFC8174 ] when, and only when, they appear in all capitals, as shown here.
Example use of the Media Source Extensions
<script>
function onSourceOpen(videoTag, e) {
var mediaSource = e.target;
if (mediaSource.sourceBuffers.length > 0)
return;
var sourceBuffer = mediaSource.addSourceBuffer('video/webm; codecs="vorbis,vp8"');
videoTag.addEventListener('seeking', onSeeking.bind(videoTag, mediaSource));
videoTag.addEventListener('progress', onProgress.bind(videoTag, mediaSource));
var initSegment = GetInitializationSegment();
if (initSegment == null) {
// Error fetching the initialization segment. Signal end of stream with an error.
mediaSource.endOfStream("network");
return;
}
// Append the initialization segment.
var firstAppendHandler = function(e) {
var sourceBuffer = e.target;
sourceBuffer.removeEventListener('updateend', firstAppendHandler);
// Append some initial media data.
appendNextMediaSegment(mediaSource);
};
sourceBuffer.addEventListener('updateend', firstAppendHandler);
sourceBuffer.appendBuffer(initSegment);
}
function appendNextMediaSegment(mediaSource) {
if (mediaSource.readyState == "closed")
return;
// If we have run out of stream data, then signal end of stream.
if (!HaveMoreMediaSegments()) {
mediaSource.endOfStream();
return;
}
// Make sure the previous append is not still pending.
if (mediaSource.sourceBuffers[0].updating)
return;
var mediaSegment = GetNextMediaSegment();
if (!mediaSegment) {
// Error fetching the next media segment.
mediaSource.endOfStream("network");
return;
}
// NOTE: If mediaSource.readyState == “ended”, this appendBuffer() call will
// cause mediaSource.readyState to transition to "open". The web application
// should be prepared to handle multiple “sourceopen” events.
mediaSource.sourceBuffers[0].appendBuffer(mediaSegment);
}
function onSeeking(mediaSource, e) {
var video = e.target;
if (mediaSource.readyState == "open") {
// Abort current segment append.
mediaSource.sourceBuffers[0].abort();
}
// Notify the media segment loading code to start fetching data at the
// new playback position.
SeekToMediaSegmentAt(video.currentTime);
// Append a media segment from the new playback position.
appendNextMediaSegment(mediaSource);
}
function onProgress(mediaSource, e) {
appendNextMediaSegment(mediaSource);
}
</script>
<video id="v" autoplay> </video>
<script>
var video = document.getElementById('v');
var mediaSource = new MediaSource();
mediaSource.addEventListener('sourceopen', onSourceOpen.bind(this, video));
video.src = window.URL.createObjectURL(mediaSource);
</script>
<body>
<script>
// Very simple demuxer bound to a specific media file.
// Replace with other source of WebCodecs configs and encoded chunks.
let next_chunk_index = 0;
let buffer;
let chunk_duration = 100 * 1000; // 100 milliseconds for this example.
let metadata = [
{
offset: /* chunk offset, e.g. 42 */,
size: /* chunk size, e.g. 4242 */,
type: /* chunk type, e.g. "key" or "delta" */
}, // Plus additional chunks' metadata listed here.
];
function getConfig() {
return( /* e.g. { videoConfig: { codec: "vp09.00.10.08" } } */ );
}
async function getNextEncodedChunk() {
if (next_chunk_index >= metadata.length)
return null;
if (next_chunk_index == 0)
buffer = await (await fetch(/* e.g. "vp9.chunks" */)).arrayBuffer();
let chunk_metadata = metadata[next_chunk_index];
let chunk_timestamp = chunk_duration * next_chunk_index;
next_chunk_index++;
// EncodedAudioChunks could also be buffered into MSE, but only one form of
// media (audio versus video) can be buffered into a specific SourceBuffer,
// so this simple example is video-only.
return new EncodedVideoChunk( {
type: chunk_metadata.type,
timestamp: chunk_timestamp,
duration: chunk_duration,
data: new Uint8Array(buffer, chunk_metadata.offset, chunk_metadata.size)
} );
}
// MSE player using the WebCodecs content generated, above.
async function getOpenMediaSource() {
return new Promise(async (resolve, reject) => {
const v = document.createElement("video");
document.body.appendChild(v);
const mediaSource = new MediaSource();
const url = URL.createObjectURL(mediaSource);
mediaSource.addEventListener("sourceopen", () => {
URL.revokeObjectURL(url);
if (mediaSource.readyState != "open")
reject();
else
resolve([ v, mediaSource ]);
}, { once: true });
v.src = url;
});
}
async function bufferMedia() {
let [ videoElement, mediaSource ] = await getOpenMediaSource();
let sourceBuffer = mediaSource.addSourceBuffer(await getConfig());
// This simple player attempts to buffer everything immediately. Full
// players will condition buffering, for example, on playback progress
// and availability of chunks.
let chunk;
while (null != (chunk = await getNextEncodedChunk())) {
// Note that appending a sequence instead of a single chunk here
// could be more performant.
await sourceBuffer.appendEncodedChunks(chunk);
}
mediaSource.endOfStream();
videoElement.controls = true; // Show default controls.
}
if (!SourceBuffer.prototype.hasOwnProperty("appendEncodedChunks"))
console.log("MSE-for-WebCodecs support is missing.")
else
bufferMedia();
</script>
</body>
This section is non-normative.
The video playback quality metrics described in previous revisions of this specification (e.g., sections 5 and 10 of the Candidate Recommendation) are now being developed as part of [MEDIA-PLAYBACK-QUALITY]. Some implementations may have implemented the earlier draft VideoPlaybackQuality object and the HTMLVideoElement extension method getVideoPlaybackQuality() described in those previous revisions.