1. Principles behind design of Web APIs
The Design Principles are directly informed by the ethical framework set out in the Ethical Web Principles [ETHICAL-WEB] . These principles provide concrete practical advice in response to the higher level ethical responsibilities that come with developing the web platform.
1.1. Put user needs first (Priority of Constituencies)
If a trade-off needs to be made, always put user needs above all.
Similarly, when beginning to design an API, be sure to understand and document the user need that the API aims to address.
The internet is for end users : any change made to the web platform has the potential to affect vast numbers of people , and may have a profound impact on any person’s life. [RFC8890]
User needs come before the needs of web page authors, which come before the needs of user agent implementors, which come before the needs of specification writers, which come before theoretical purity.
Like all principles, this isn’t absolute. Ease of authoring affects how content reaches users. User agents have to prioritize finite engineering resources, which affects how features reach authors. Specification writers also have finite resources, and theoretical concerns reflect underlying needs of all of these groups.
1.2. It should be safe to visit a web page
When adding new features, design them to preserve the user expectation that visiting a web page is generally safe.
The Web is named for its hyperlinked structure. In order for the web to remain vibrant, users need to be able to expect that merely visiting any given link won’t have implications for the security of their computer, or for any essential aspects of their privacy .
For example, an API that allows any website to detect the use of assistive technologies may make users of these technologies feel unsafe visiting unknown web pages, since any web page may detect this private information.
If users have a realistic expectation of safety, they can make informed decisions between Web-based technologies and other technologies. For example, users may choose to use a web-based food ordering page, rather than installing an app, since installing a native app is riskier than visiting a web page.
To work towards making sure the reality of safety on the web matches users' expectations, we can take complementary approaches when adding new features:
-
We can improve the user interfaces through which the Web is used to make it clearer what users of the Web should (and should not) expect;
-
We can change the technical foundations of the Web so that they match user expectations of privacy;
-
We can consider the cases where users would be better off if expectations were higher, and in those cases try to change both technical foundations and expectations.
A new feature which introduces safety risks may still improve user safety overall, if it allows users to perform a task more safely on a web page than they could by installing a native app to do the same thing. However, this benefit needs to be weighed against the common goal of users having a reasonable expectation of safety on web pages.
1.3. Trusted user interface should be trustworthy
Consider whether new features impact trusted user interfaces.
Users depend on trusted user interfaces, such as the address bar, security indicators, and permission prompts, to understand who they are interacting with and how. It must be possible to design these trusted user interfaces in a way that enables users to trust and verify that the information they present is genuine, and hasn’t been spoofed or hijacked by the website.
If a new feature allows untrusted user interfaces to resemble trusted user interfaces, this makes it more difficult for users to understand what information is trustworthy.
For example, JavaScript alert() allows a page to show a modal dialog which looks like part of the browser. This is often used to attempt to trick users into visiting scam websites. If this feature was proposed today, it would probably not proceed.
1.4. Ask users for meaningful consent
In the context of fulfilling a user need, a web page may want to make use of a feature that has the potential to cause harm. Features that have this potential for harm should be designed such that people can give meaningful consent for that feature to be used, and that they can refuse consent effectively.
In order to give meaningful consent , the user must:
-
understand what permission they may choose whether to grant the web page
-
be able to choose to give or refuse that permission effectively .
If a feature is powerful enough to require user consent, but it’s impossible to explain to a typical user what they are consenting to, that’s a signal that you may need to reconsider the design of the feature.
If a permission prompt is shown, and the user doesn’t grant permission, the Web page should not be able to do anything that the user believes they have refused consent for.
By asking for consent, we can inform the user of what capabilities the web page does or doesn’t have, reinforcing their confidence that the web is safe . However, the user benefit of a new feature must justify the additional burden on users to decide whether to grant permission for each feature whenever it’s requested by a Web page.
In your specification, the request permission to use and prompt the user to choose algorithms from [permissions] are good ways to ask for consent.
Refusal is most effective if the site cannot distinguish refusal from other, common situations. This can make it more difficult for a site to pressure users to grant consent.
For example, the Geolocation API grants access to a user’s location. This can help users in some contexts, like a mapping application, but may be dangerous to some users in other contexts - especially if used without the user’s knowledge. So that the user may decide whether their location may be used by a Web page, a permission prompt should be shown to the user asking whether to grant location access. If the user refuses permission, no location information is available to the Web page.
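As a sketch of this pattern (the button element and the two helper functions are hypothetical; the Geolocation calls are the standard success/error callback form):

    // Request location only in response to an explicit user action,
    // and degrade gracefully if permission is refused or unavailable.
    const findButton = document.querySelector("#find-nearby"); // hypothetical button
    findButton.addEventListener("click", () => {
      navigator.geolocation.getCurrentPosition(
        (position) => showNearbyResults(position.coords), // hypothetical helper: permission granted
        () => showManualLocationEntry()                    // hypothetical helper: refused or failed
      );
    });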
1.5. Use identity appropriately in context
Give people control over the identifying information about themselves they are presenting in different contexts on the web, and be transparent about it.
"Identity" is a complex concept that can be understood in many different ways. It can refer to how someone presents or sees themselves, how they relate to other people, groups, or institutions, and can determine how they behave or how they are treated by others. In web architecture, "identity" is often used as a shortcut to refer to identifiers, and the information attached to them.
Features that use or depend on identifiers and the attachment of data about a person to that identifier carry privacy risks which often reach beyond a single API or system. This includes data that has been passively generated (for example, about their behaviour on the web) as well as that which has been actively collected (for example, they have filled in a form).
For such features, you should understand the context in which it will be used, including how it will be used alongside other features of the web. Make sure the user can give appropriate consent . Design APIs to collect the smallest amount of data necessary. Use short-lived, temporary identifiers unless a persistent identifier is absolutely necessary.
1.6. Support the full range of devices and platforms (Media Independence)
As much as possible, ensure that features on the web work across different input and output devices, screen sizes, interaction modes, platforms, and media .
One of the main values of the Web is that it’s extremely flexible: a Web page may be viewed on virtually any consumer computing device at a very wide range of screen sizes, may be used to generate printed media, and may be interacted with in a large number of different ways. New features should match the existing flexibility of the web platform.
Features designed with a particular interaction mode in mind can still work across a wide variety of contexts, and can be adapted to devices that don’t support their original intent - for example, a tap on a mobile device will fire a click event as a fallback.
Features should also be designed so that the easiest way to use them maintains flexibility.
Sometimes features aren’t yet available on some implementations or platforms despite working on others. In these cases, features should be designed such that it is possible for code to gracefully fail or be polyfilled. See § 2.6 New features should be detectable .
1.7. Add new capabilities with care
Add new capabilities to the web with consideration of existing functionality and content.
The Web includes many extension points that allow for additions; see for example HTML § 1.7.3 Extensibility .
Before adding items, consider integration with existing, similar capabilities. If this leads to a preferred design approach that cannot be implemented by only adding items, it might still be possible; see § 1.8 Remove or change capabilities only once you understand existing usage .
Do not assume that a change or removal is impossible without first checking.
1.8. Remove or change capabilities only once you understand existing usage
Prioritize compatibility with existing content when removing or changing functionality.
Once a significant amount of content has come to depend on a particular behavior, removing or changing that behavior is discouraged. Removing or changing features and capabilities is possible, but it first requires that the nature and scope of the impact on existing content is well understood. This might require research into how features are used by existing content.
The obligation to understand existing usage also applies to any features that content relies upon. This includes vendor-proprietary features and behavior that might be considered implementation bugs. Web features are not solely defined in specifications; they are also defined by how content uses those features.
1.9. Leave the web better than you found it
As you add new capabilities to the web platform, do so in a way that improves the overall platform, for example its security, privacy or accessibility characteristics.
The existence of a defect in one part of the platform must not be used to excuse an addition or extension to the defect, which would further decrease the overall platform quality. Where possible, build new web capabilities that improve the overall platform quality by mitigating existing defects. Do not degrade existing capabilities without good reason.
Parts of the web platform evolve independently. Issues that are present with a certain web technology now may be fixed in a subsequent iteration. Duplicating these issues makes fixing them more difficult. By adhering to this principle we can make sure overall platform quality improves over time.
1.10. Minimize user data
Design features to work with the minimum amount of data necessary to carry out their users' goals.
Data minimization limits the risks of data being inappropriately disclosed or misused.
Design Web APIs to make it easier for sites to request, collect, and/or transmit a small amount of data (or more granular or specific data) than it is to work with more generic or bulk data. APIs should also provide granularity and user controls, in particular over personal data that is communicated to sites. When additional functionality requires additional data, APIs can enable this subject to user consent (e.g., a permission prompt or user activation).
2. API Design Across Languages
2.1. Prefer simple solutions
Look hard for simple solutions to the user needs you intend to address.
Simple solutions are generally better than complex solutions, although they may be harder to find. Simpler features are easier for user agents to implement and test, more likely to be interoperable, and easier for authors to understand. It is especially important to design your feature so that the most common use cases are easy to accomplish.
Make sure that your user needs are well-defined. This allows you to avoid scope creep, and make sure that your API does actually meet the needs of all users. Of course, complex or rare use cases are also worth solving, though their solutions may be more complicated to use. As Alan Kay said, "simple things should be simple, complex things should be possible."
Do note however that while common cases are often simple, commonality and complexity are not always correlated.
2.2. Consider tradeoffs between high level and low level APIs
High-level APIs allow user agents more ability to intervene in various ways on behalf of the user , such as to ensure accessibility, privacy, or usability.
A font picker (high level API) was recommended by the TAG over a Font Enumeration API (low level API) as it addresses the bulk of use cases while preserving user privacy: it is free from the fingerprinting concerns that accompany a general Font Enumeration API. A native font picker also comes with accessibility built in, and provides consistency for end users.
Low-level APIs afford authors room for experimentation so that high level APIs can organically emerge from usage patterns over time. They also provide an escape hatch when the higher-level API is not adequate for the use case at hand.
Lower level building blocks cannot always be exposed as Web APIs. A few possible reasons for this are to preserve the user’s security and privacy, or to avoid tying Web APIs to specific hardware implementations. However, high level APIs should be designed in terms of building blocks over lower level APIs whenever possible. This may guide decisions on how high level the API needs to be.
A well-layered solution should ensure continuity of the ease-of-use vs power tradeoff curve and avoid sharp cliffs where a small amount of incremental use case complexity results in a large increase of code complexity.
2.3. Name things thoughtfully
Name APIs with care. Naming APIs well makes it much easier for authors to use them correctly.
See the more detailed Naming principles section for specific guidance on naming.
2.4. Be consistent
It is good practice to consider precedent in the design of your API and to try to be consistent with it.
There is often a tension between API ergonomics and consistency, when existing precedent is of poor usability. In some cases it makes sense to break consistency to improve usability, but the improvement should be very significant to justify this.
Since the web platform has gradually evolved over time, there are often multiple conflicting precedents which are mutually exclusive. You can weigh which precedent to follow by taking into account prevalence (all else being equal, follow the more popular precedent), API ergonomics (all else being equal, follow the more usable precedent), and API age (all else being equal, follow the newer precedent).
There is often a tension between internal and external consistency. Internal consistency is consistency with the rest of the system, whereas external consistency is consistency with the rest of the world. In the web platform, that might materialize in three layers: consistency within the technology the API belongs to (e.g. CSS), consistency with the rest of the web platform, and in some cases external precedent, when the API relates to a particular specialized outside domain. In those cases, it is useful to consider what the majority of users will be. Since for most APIs the target user is someone who is familiar with the technology they are defined in, err on the side of favoring consistency with that.
There is also a separate section on naming consistency .
2.5. Follow guidance from feature specifications
When using a feature, follow guidance in the specification for that feature.
Specifications for features that will be used by other specifications should include guidance for using those features. This ensures that the feature is used correctly and consistently throughout the platform.
An incomplete list of such guidance follows:
-
Building Protocols with HTTP , especially on defining header fields , and other HTTP RFCs
Consult with the relevant community when using their specification. This informs that community about new usage and creates opportunities for improvements to both the using and used specifications.
2.6. New features should be detectable
Provide a way for authors to programmatically detect whether your feature is available, so that web content may gracefully handle the feature not being present.
An existing feature may not be available on a page for a number of reasons. Two of the more common reasons are because it hasn’t been implemented yet, or because it’s only available in secure contexts .
Authors shouldn’t need to write different code to handle each scenario. That way, even if an author only knows or cares about one scenario, the code will handle all of them.
When a feature is available but isn’t feasible to use because a required device isn’t present, it’s better to expose that the feature is available and have a separate way to detect that the device isn’t. This allows authors to handle a device not being available differently from the feature not being available, for example by suggesting the user connect or enable the device.
See § 9.2 Use care when exposing APIs for selecting or enumerating devices .
Authors should always be able to detect a feature from JavaScript, and in some cases the feature should also be detectable in the language where it’s used (such as @supports in CSS).
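For illustration, a minimal sketch of both kinds of detection (the specific features checked here are only examples of the pattern):

    // JavaScript feature detection: check before use, and fall back if absent.
    if ("geolocation" in navigator) {
      // The feature exists; it may still be gated behind a permission prompt.
    } else {
      // Fall back, e.g. let the user enter a location manually.
    }

    // The same habit applies to features defined in other languages;
    // CSS support, for instance, can also be queried from script:
    if (window.CSS && CSS.supports("display", "grid")) {
      document.documentElement.classList.add("grid-supported");
    }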
In some cases, it may not be appropriate to allow feature detection. Whether the feature should be detectable or not should be based on the user need for the feature. If there is a user need or design principle which would fail if feature detection were available for the feature, then you should not support feature detection.
Detecting the availability of a feature does not imply detecting whether consent to use the feature has been granted. Generally, detecting whether the feature is implemented can be done separately from determining whether use of the feature has been authorized. In some cases, it might be necessary to disable feature detection in order to enable denying requests to use the feature.
Also, if a feature is generally not exposed to developers, it is not appropriate to support feature detection. For example, private browsing mode is a concept which is recognised in web specifications, but not exposed to authors. For private browsing mode to support the user’s needs, it must not be feature detected.
2.7. Design textual formats for humans
Design textual formats that can be easily produced and consumed by people. Textual formats also improve transparency .
Favor readability over compactness. File size can be optimized by tooling, and tends to become a lower priority over time. When file size is a significantly higher priority, perhaps a textual format is not appropriate.
People who are presented with a textual format should be able to use a text editor to easily produce or modify content. People who edit text will introduce a range of errors, but could struggle to identify and fix those errors.
People expect some amount of flexibility in terms of how their edits are processed. Clearly defining syntactic flexibility—such as in whitespace, quoting, or delimiters—could ensure that content is both easy to edit and produces consistent results.
Containing the scope of a format that is affected by a syntax error could improve robustness and human usability. This requires that specifications fully define processing and error handling so that all inputs result in consistent outcomes.
If your format is intended to be used only by machines, a binary format is likely to be more efficient, in addition to discouraging people from authoring or editing content directly.
2.8. Consider limiting new features to secure contexts
Always limit your feature to secure contexts if it would pose a risk to the user without the authentication, integrity, or confidentiality that’s present only in secure contexts.
For other features, TAG members past and present haven’t reached consensus on general advice. Some believe that all new features (other than features which are additions to existing features) should be limited to secure contexts. This would help encourage the use of HTTPS, helping users be more secure in general.
Others believe that features should only be limited to secure contexts if they have a known security or privacy impact. This lowers the barrier to entry for creating web pages that take advantage of new features which don’t impact user security or privacy.
To limit a feature to secure contexts, use the [SecureContext] extended attribute on interfaces, namespaces, or their members (such as methods and attributes). However, for some types of API (e.g., dispatching an event), limitation to secure contexts should just be defined in normative prose in the specification. If this is the case, consider whether there might be scope for adding a similar mechanism to [SecureContext] to make this process easier for future API developers.
However, if, for some reason, there is no way for code to gracefully handle the feature not being present, limiting the feature to secure contexts might cause problems for code (such as libraries) that may be used in either secure or non-secure contexts.
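From the author’s side, a minimal sketch of handling this gracefully (the service worker script path is hypothetical; isSecureContext and the serviceWorker check are standard):

    // Features limited to secure contexts simply aren't exposed elsewhere,
    // so detect-and-fallback works in both secure and non-secure contexts.
    if (window.isSecureContext && "serviceWorker" in navigator) {
      navigator.serviceWorker.register("/sw.js"); // hypothetical script path
    } else {
      // Continue without offline support.
    }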
2.9. Don’t reveal that private browsing mode is engaged
Make sure that your feature doesn’t give authors a way to detect private browsing mode.
Some people use private browsing mode to protect their own personal safety. Because of this, the fact that someone is using private browsing mode may be sensitive information about them. This information may harm people if it is revealed to a web site controlled by others who have power over them (such as employers, parents, partners, or state actors ).
Given such dangers, websites should not be able to detect that private browsing mode is engaged.
For example, the Payment Request API allows the user agent to act as if the user had immediately aborted the payment request.
This enables User Agents to automatically abort payment requests in private browsing mode (thus protecting sensitive information such as the user’s shipping or billing address) without revealing that private browsing mode is engaged.
2.10. Consider how your API should behave in private browsing mode
If necessary, specify how your API should behave differently in private browsing mode.
For example, if your API would reveal information that would allow someone to correlate a single user’s activity both in and out of private browsing mode, consider possible mitigations such as introducing noise, or using permission prompts to give the user extra information to help them meaningfully consent to this tracking (see § 1.4 Ask users for meaningful consent ).
Private browsing modes enable users to browse the web without leaving any trace of their private browsing on their device. Therefore, APIs which provide client-side storage should not persist data stored while private browsing mode is engaged after it’s disengaged. This can and should be done without revealing any detectable API differences to the site.
If the User Agent has two simultaneous sessions with a site, one in private browsing mode and one not, storage area changes made in the private browsing mode session should not be revealed to the other browsing session, and vice versa. (The storage event should not be fired at the other session’s window object .)
2.11. Don’t reveal that assistive technologies are being used
Make sure that your API doesn’t provide a way for authors to detect that a user is using assistive technology without the user’s consent.
The web platform must be accessible to people with disabilities. If a site can detect that a user is using an assistive technology, that site can deny or restrict the user’s access to the services it provides.
People who make use of assistive technologies are often vulnerable members of society ; their use of assistive technologies is sensitive information about them. If an API provides access to this information without the user’s consent , this sensitive information may be revealed to others (including state actors ) who may wish them harm.
Sometimes people propose features which aim to improve the user experience for users of assistive technology, but which would reveal the user’s use of assistive technology as a side effect. While these are well intentioned, they violate § 1.2 It should be safe to visit a web page , so alternative solutions must be found.
The Accessibility Object Model (AOM) used to define a set of events which, when fired, revealed the use of assistive technology .
AOM has since removed these events and replaced them with synthetic DOM events which don’t reveal the use of assistive technology.
2.12. Require user activation for powerful APIs
Some powerful APIs can produce intrusive UI (e.g. auto-playing audio), expose user data (e.g. interacting with the clipboard), perform a background activity without an obvious indicator to the user (e.g. accessing local storage), or prompt the user to interact with trusted UI (e.g. permission prompts, device hardware features). These APIs should be designed to require some indication of user intention (such as user activation) in order to function. This indicates that the user is intentionally interacting with the web page in question.
User activation is defined in detail in the HTML standard . You should think about the effect your API has on the user experience, as well as any risks presented to the user, when deciding whether user activation needs to only occur once overall ( sticky ), periodically ( transient ) or once per API call ( transient consuming ).
Note that while user activation is in many cases necessary, it is not always sufficient to protect users from invasive behaviours, and seeking meaningful consent is also important.
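A minimal sketch of gating a capability on a user gesture (the button element and the copied text are assumed for the example; the asynchronous clipboard API is standard):

    // The clipboard write happens inside a click handler, so it runs with
    // transient user activation rather than at an arbitrary time.
    const copyButton = document.querySelector("#copy"); // assumed element
    copyButton.addEventListener("click", async () => {
      try {
        await navigator.clipboard.writeText("Example text"); // assumed payload
      } catch (err) {
        // Permission denied or activation missing: fall back,
        // e.g. show the text for manual copying.
      }
    });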
2.13. Support non-fully active BFCached documents
Specify how your feature behaves in non- fully active BFCached (Back/forward cached) documents if possible.
If your feature does anything that falls into the following categories:
-
Interacts with a document from the "outside" (e.g. sends information to a document)
-
Makes cross-document interaction/resource sharing possible (e.g. holding locks)
-
May malfunction when a document is kept in a non- fully active BFCached state (instead of getting destroyed) after the user navigates away from it or gets restored (e.g. expects that a state saved in the document won’t span multiple navigations)
Specify how your feature works with non- fully active BFCached documents, following the guidelines in the Supporting BFCached Documents guide.
Note: It is possible for a document to become non- fully active for other reasons not related to BFcaching, such as when the iframe holding the document gets detached. This guidance only focuses on the case where BFCache is involved, and not other cases that might cause a document to become non- fully active .
2.14. Prioritize usability over compatibility with third-party tools
Design new features with usability as the primary goal, and compatibility with third-party tooling as a secondary goal.
The web platform benefits from a wide ecosystem of tooling to facilitate easier and faster development. A lot of the time, the syntax of an upcoming web platform feature may conflict with that of a third-party tool causing breakage. This is especially common as third-party tools are often used to prototype new web platform features.
In general, web platform features last a lot longer than most third-party tools, and thus giving them the optimal syntax and functionality should be of high priority.
In some cases, the conflict will introduce problems across a large number of web sites, necessitating the feature’s syntax to be redesigned to avoid clashes.
Array.prototype.contains() had to be renamed to Array.prototype.includes() to avoid clashes with the identically named but incompatible method from PrototypeJS, a library that was in use in millions of websites.
However, these cases should be exceptions.
When deciding whether to break third party tools with new syntax, there are several factors to consider, such as severity of the breakage, popularity of the third party tool, and many more.
Possibly the most important factor is how severely would the usability of the web platform feature be compromised if its syntax was changed to avoid breaking the third party tool? If several alternatives of similar usability are being considered, it is usually preferable to prioritize the ones that inconvenience third party tools the least.
However, if avoiding breaking the third-party tool would lead to a significant negative impact on the feature’s usability, that is rarely an acceptable tradeoff, unless breaking the tool would also cause significant breakage of live websites.
Languages should also provide mechanisms for extensibility that authors can use to extend the language without breaking future native functionality, to reduce such dilemmas in the future.
2.15. Build complex types by composing simpler types
Define subtypes that can always be used in place of a supertype. Avoid interfaces that use inheritance unless everything that can be said about an instance of the parent always applies to the child as well.
With inheritance, a subtype creates a type of item that can stand in for its supertype. A subtype needs to provide the same attributes and methods as its supertype. The subtype also needs to maintain consistent behavior. If the subtype changes the way that an item works, something that handles the supertype might not work properly.
In the theory of type systems, this is known as the Liskov Substitution Principle .
The Event type is a super-type of KeyboardEvent and PointerEvent. Events always have the same basic set of properties and actions that apply, like whether the event bubbles. The subtypes add new properties and actions, but instances of those subtypes still act in every way as an Event.
A simpler approach is often to avoid inheritance and reuse existing capabilities by composition. New items can define properties that hold any existing components that are needed.
Specializing HTMLInputElement would be difficult, because an input element is quite complex and a specialization of HTMLInputElement would need to maintain that complexity. Custom elements can attach (i.e., compose) items that are necessary to interact with a form without having to deal with the complications of HTMLInputElement.
3. HTML
This section details design principles for features which are exposed via HTML.
3.1. Re-use HTML attribute names (only) for similar functionality
If you are adding a feature that is specified through an HTML attribute, check if there is an existing attribute name on another element that specifies similar functionality. Re-using an existing HTML attribute name means authors can utilize existing knowledge, maintains consistency across the language, and keeps its vocabulary small.
For example, the multiple attribute is used on both select, to allow selection of multiple values, as well as on input, to allow entry of multiple values.
If you do re-use an existing HTML attribute, try to keep its syntax as close as possible to the syntax of the existing attribute.
The for attribute was introduced on the label element, for specifying which form element it should be associated with. It was later re-used by output, for specifying which elements contributed input values to (or otherwise affected) the calculation. The syntax of the latter is broader: it accepts a space-separated list of ids, whereas the former only accepts one id. However, they both still conform to the same syntax, whereas e.g. if one of them accepted a list of ids and the other one a selector, that would be an antipattern.
The inverse also applies: do not re-use an existing HTML attribute name if the functionality you are adding is not similar to that of the existing attribute.
The type attribute is used on the input and button elements to further specialize the element type, whereas on every other element (e.g. link, script, style) it specifies a MIME type. This is an antipattern; one of these groups of attributes should have had a different name.
3.2. Use space-separated attributes for short lists of values, separate elements for longer lists
When specifying metadata about an element that can be a list of values, common practice is to use a space-separated list and expose it as a DOMTokenList.
The class attribute on elements takes a space-separated list of class names. classList is a DOMTokenList that allows authors to add and remove class names.
The sandbox attribute takes a space-separated list of sandbox flags. iframe.sandbox is a DOMTokenList that allows authors to add and remove sandbox flags.
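For example, the reflected DOMTokenList makes such a list straightforward to manipulate from script (the element selector here is illustrative):

    const article = document.querySelector("article"); // illustrative element
    article.classList.add("featured");                 // adds a token to the class attribute
    article.classList.toggle("expanded");              // flips a token on or off
    console.log(article.getAttribute("class"));        // space-separated list, kept in sync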
Consistency with other parts of the Web Platform is important, even if this means using another character to separate values.
The accept attribute is a comma-separated list of values, because it needs to match the syntax of the `Accept` HTTP header.
Regardless of syntax, attributes should only be used for short lists of values. For longer lists, embedding the entire list in an attribute is discouraged. Instead, current practice is to use separate elements to represent the list items (and any metadata about them). These elements could either be children of the element in question, or linked through an attribute.
The list of values for the select element is provided as a series of option element children. However, when providing a list of recommended values for an input element, a separate element is used (datalist), linked through a list attribute.
The list of media sources for a video or audio element is provided as a series of source element children.
In rare instances, other tradeoffs are necessary.
The srcset attribute allows for a comma-separated list of image candidate strings. This syntax was chosen over a list of child elements to avoid verbosity, and because the img element is an empty element that does not permit child elements. A space-separated syntax would not have been possible, as each list item includes multiple values.
3.3. Do not pause the HTML parser
Ensure that your design does not require the HTML parser to pause to handle external resources.
As the browser parses a page, it discovers assets that the page needs and figures out the priority in which they should be loaded in parallel. Such parsing can be disrupted by a resource which blocks the discovery of subsequent resources. At worst, it means the browser downloads items in series rather than in parallel. At best, it means the browser queues downloads based on speculative parsing, which may turn out to be incorrect.
Features that block the parser generally do so because they want to feed additional content into the parser before subsequent content. This is the case of legacy <script src="…"> elements, which can inject into the parser using document.write(…). Due to the performance issues above, new features must not do this.
3.4. Avoid features that block rendering
Features that require resource loading or other operations before rendering the page often result in a blank page (or the previous page remaining visible). The result is a poor user experience.
Consider adding such features only in cases when the overall user experience is improved. A canonical example of this is blocking rendering in order to download and process a stylesheet. The alternative user experience is a flash of unstyled content, which is undesirable.
See also § 10.2.1 Some APIs should only be exposed to dedicated workers .
3.5. Keep attributes in sync
New content attributes should have a corresponding IDL attribute with the same name, and the state between the two should be kept synchronized. Carving out a synchronized IDL attribute with inconsistent naming results in confusion, and should be avoided.
Counterexamples include input’s value, option’s selected, and input’s checked, where the HTML attributes were never updated and the IDL attribute was the single source of truth.
3.6. Name URL-containing attributes based on their primary purpose
If the element enables the user to navigate to the URL contained in the attribute, call the attribute href, like the a element’s href attribute.
Note: In hindsight, form’s action attribute should have been named href.
If the element causes the resource at the given URL to be loaded, call the attribute src, like the img element’s src attribute or the script element’s src attribute.
Note: HTML has a number of legacy inconsistencies that should not be emulated, like the way link’s href attribute might allow for navigation or for resource loading, depending on the value of the element’s rel attribute.
If the attribute identifies a URL that is auxiliary to the element’s purpose, like video’s poster, q’s cite, or a’s ping, name the attribute after its semantics.
Note: remember that attributes containing URLs should be represented in IDL as USVString; see § 8.2 Represent strings appropriately.
3.7. Give each HTML element a single purpose
Each HTML element should have a clear purpose. Using HTML attributes can modify the semantics of an element, but attributes should not fundamentally alter that purpose.
Rather than defining elements that can operate in different modes, having a separate element for each mode is preferable. This simplifies usage for authors.
The input element uses a type attribute to switch between very different modes. Though each of these modes shares a common goal of taking user input, that is too broad a purpose. The semantics of these modes could have been more clearly expressed through the use of distinct element names.
Overloading semantics on the same element through the use of attributes can be used to select between different behaviors. This only makes sense where the purpose remains the same for any value of the attributes.
A textarea element and an input element with a type of "text" could have used the same element, as the differences between these are not material.
Potential counterexamples include overspecialization and elements that rely on context to establish their semantics.
The acronym element is duplicative of the abbr element and is therefore an overspecialization.
The source element can have distinct purposes depending on context. A source element is not standalone; its semantics are determined by its parent.
4. Cascading Style Sheets (CSS)
This section details design principles for features which are exposed via CSS.
4.1. Separate CSS properties based on what should cascade separately
Decide which values should be grouped together as CSS properties and which should be separate properties based on what makes sense to set independently.
CSS cascading allows declarations from different rules or different style sheets to override one another. A set of values that should all be overridden together should be grouped together in a single property so that they cascade together. Likewise, values that should be able to be overridden independently should be separate properties.
However, the related initial-letter-align property should be separate, because it sets an alignment policy for initial-letter effects across the document, which is a general stylistic choice and a function of the script (e.g., Latin, Cyrillic, Arabic) used in the document.
4.2. Make appropriate choices for whether CSS properties are inherited
Decide whether a property should be inherited based on whether the effect of the property should be overridden or added to if set on an ancestor as well as a descendant.
If setting the property on a descendant element needs to override (rather than add to) the effect of setting it on an ancestor, then the property should probably be inherited.
If setting the property on a descendant element is a separate effect that adds to setting it on an ancestor, then the property should probably not be inherited.
A specification of a non-inherited property requiring that the handling of an element look at the value of that property on its ancestors (which may also be slow) is a "code smell" that suggests that the property likely should have been inherited. A specification of an inherited property requiring that the handling of an element ignore the value of a property if it’s the same as the value on the parent element is a "code smell" that suggests that the property likely should not have been inherited.
If the background-image property had been inherited, then the specification would have had to create a good bit of complexity to avoid a partially-transparent image being visibly repeated for each descendant element. This complexity probably would have involved behaving differently if the property had the same value on the parent element, which is the "code smell" mentioned above that suggests that a property likely should not have been inherited.
If the font-size property were not inherited, then it would probably have to have an initial value that requires walking up the ancestor chain to find the nearest ancestor that doesn’t have that value. This is the "code smell" mentioned above that suggests that a property likely should have been inherited.
4.3. Choose the computed value type based on how the property should inherit
Choose the computed value of a CSS property based on how it will inherit , including how values where it depends on other properties should inherit.
Inheritance means that an element gets the same computed value for a property that its parent has. This means that processing steps that happen before reaching the computed value affect the value that is inherited, and those that happen after (such as for the used value ) do not.
Authors may observe a length such as 28px when inspecting an element’s style, for example via the getComputedStyle() method. However, the computed value in this case is the <number> 1.4, not the <length> 28px. (The used value is 28px.)
The line-height property can be inherited into elements that have a different font-size , and any property on those elements which depends on line-height must take the relevant font-size into account, rather than the font-size for the element from which the line-height value was inherited.
    <body style="font-size: 20px; line-height: 1.4">
      <p>This body text has a line height of 28px.</p>
      <h2 style="font-size: 200%">This heading has a line-height of 56px, not 28px, even though the line-height was declared on the body. This means that the 40px font won’t overflow the line height.</h2>
    </body>
These number values are generally the preferred values to use for line-height because they inherit better than length values.
4.4. Name CSS properties and values appropriately
The names of CSS properties are usually nouns, and the names of their values are usually adjectives (although sometimes nouns).
Words in properties and values are separated by hyphens. Abbreviations are generally avoided.
Use the root form of words when possible rather than a form with a grammatical prefix or suffix (for example, "size" rather than "sizing").
The list of values of a property should generally be chosen so that new values can be added. Avoid values like yes , no , true , false , or things with more complex names that are basically equivalent to them.
Avoid words like "mode" or "state" in the names of properties, since properties are generally setting a mode or state.
See § 12 Naming principles for general (cross-language) advice on naming.
4.5. Content should be viewable and accessible by default
Design CSS properties or CSS layout systems (which are typically values of the display property) to preserve the content as viewable, accessible, and usable by default.
Making content invisible or inaccessible should require explicit action by the author (such as overflow: hidden or left: -40em); it should not happen by default as a result of something like display: flex or position: relative.
5. JavaScript Language
5.1. Use JavaScript for Web APIs
When designing imperative APIs for the Web, use JavaScript. In particular, you can freely rely upon language-specific semantics and conventions, with no need to keep things generalized.
For example, the CustomElementRegistry.define() method takes a reference to a constructor.
This takes advantage of the relatively recent addition of classes to JavaScript, and the fact that method references are very easy to use in JavaScript.
5.2. Preserve run-to-completion semantics
If a change to state originates outside of the JavaScript execution context, propagate that change to JavaScript between tasks, for example by queuing a task , or as part of update the rendering .
Unlike lower-level languages such as C++ or Rust, JavaScript has historically acted as if only one piece of code can execute at once. Because of that, JavaScript authors take for granted that the data available to a function won’t change unexpectedly while the function is running.
Changes that are not the result of developer action and changes that are asynchronously delivered should not happen in the middle of other JavaScript, including between microtasks .
For example, in the middle of running synchronous code (such as a while loop), and after awaiting an already-resolved Promise, developers are unlikely to expect things like:
-
The DOM to update as a result of the HTML parser loading new content from the network
-
img.width to change as a result of loading image data from the network
-
Buttons of a Gamepad to change state
-
scrollTop to change, even if scrolling can visually occur
-
A synchronous method to act differently depending on asynchronous state changes. For example, if LockManager had synchronous methods, their behavior would depend on concurrent calls in other windows.
These things aren’t updated by the currently running script, so they shouldn’t change during the current task.
Data can update synchronously from the result of developer action.
node.remove() changes the DOM synchronously and is immediately observable.
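A minimal sketch of the contrast, assuming a paragraph and an image exist in the document:

    // Developer-initiated change: synchronous and immediately observable.
    const p = document.querySelector("p");
    p.remove();
    console.log(document.contains(p)); // false, right away

    // Externally-originated change: not observable mid-task.
    const img = document.querySelector("img");
    const width = img.width;
    // Even if image data arrives from the network while this code runs,
    // img.width does not change until a later task.
    console.log(img.width === width); // true within the same task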
A few kinds of situations justify violating this rule:
-
Observing the current time, as in Date.now() and performance.now(), although note that it’s also useful to present a consistent task-wide time as in document.timeline.currentTime.
-
Functions meant to help developers interrupt synchronous work, as in the case of IdleDeadline.timeRemaining().
-
States meant to protect users from surprising UI changes, like transient activation. Note that navigator.userActivation.isActive violates the guidance that recommends a method for this case.
5.3. Don’t expose garbage collection
Ensure your JavaScript Web APIs don’t provide a way for an author to know the timing of garbage collection.
The timing of garbage collection is different in different user agents, and may change over time as user agents work on improving performance. If an API exposes the timing of garbage collection, it can cause programs to behave differently in different contexts. This means that authors need to write extra code to handle these differences. It may also make it more difficult for user agents to implement different garbage collection strategies, if there is enough code which depends on timing working a particular way.
This means that you shouldn’t expose any API that acts as a weak reference, e.g. with a property that becomes null once garbage collection runs. Object and data lifetimes in JavaScript code should be predictable.
For example, getElementsByTagName returns an HTMLCollection object, which may be re-used if the method is called twice on the same Document object, with the same tag name. In practice, this means that the same object will be returned if and only if it has not been garbage collected. This means that the behaviour is different depending on the timing of garbage collection. If getElementsByTagName were designed today, the advice to the designers would be to either reliably reuse the output, or to produce a new HTMLCollection each time it’s invoked.
getElementsByTagName gives no sign that it may depend on the timing of garbage collection. In contrast, APIs which are explicitly designed to depend on garbage collection, like WeakRef or FinalizationRegistry, set accurate author expectations about the interaction with garbage collection.
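For contrast, a minimal sketch of an API that is explicitly designed around garbage collection, so the dependence is part of its contract:

    // WeakRef makes the garbage-collection dependence explicit:
    // deref() may return undefined once the target has been collected.
    let target = { large: "resource" };
    const ref = new WeakRef(target);
    target = null; // drop the only strong reference

    const value = ref.deref();
    if (value !== undefined) {
      // Still alive: use it.
    } else {
      // Collected: recompute or refetch.
    }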
6. JavaScript API Surface Concerns
6.1. Attributes should behave like data properties
[WEBIDL] attributes should act like simple JavaScript object properties.
In reality, IDL attributes are implemented as accessor properties with separate getter and setter methods. To make them act like JavaScript object properties:
-
Getters must not have any observable side effects.
-
Getters should not perform any complex operations.
-
Ensure that obj.attribute === obj.attribute is always true. Don’t create a new value each time the getter is called.
-
If possible, ensure that given obj.attribute = x, a subsequent obj.attribute === x is true. (This may not be possible if some kind of conversion is necessary for x.)
If you were thinking about using an attribute, but it doesn’t behave this way, you should probably use a method instead.
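A minimal sketch of the identity expectation above, using a plain JavaScript accessor as a stand-in for an IDL attribute (the Thing class is hypothetical):

    class Thing {
      #items = ["a", "b"];
      // Good: the same underlying object is returned on every read,
      // so thing.items === thing.items holds.
      get items() { return this.#items; }
      // Bad (don't do this): `get items() { return [...this.#items]; }`
      // would create a new array per read and break that expectation.
    }

    const thing = new Thing();
    console.log(thing.items === thing.items); // true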
For example, offsetTop performs layout, which can be complex and time-consuming. It would have been better if this had been a method like getBoundingClientRect().
6.2. Consider whether objects should be live or static
If an API gives access to an object representing some internal state, decide whether that object should continue to be updated as the state changes.
An object which represents the current state at all times is a live object , while an object which represents the state at the time it was created is a static object .
Live objects
If an object allows the author to change the internal state, that object should be live. For example, DOM Nodes are live objects, to allow the author to make changes to the document with an understanding of the current state.
Properties of live objects may be computed as they are accessed, instead of when the object is created. This makes live objects sometimes a better choice if the data needed is complex to compute, since there is no need to compute all the data before the object is returned.
A live object may also use less memory, since there is no need to copy data to a static version.
Static objects
If an object represents a list that might change, most often the object should be static. This is so that code iterating over the list doesn’t need to handle the possibility of the list changing in the middle.
getElementsByTagName returns a live object which represents a list, meaning that authors need to take care when iterating over its items:

    let list = document.getElementsByTagName("td");
    for (let i = 0; i < list.length; i++) {
      let td = list[i];
      let tr = document.createElement("tr");
      tr.innerHTML = td.outerHTML;
      // This has the side-effect of removing td from the list,
      // causing the iteration to become unpredictable.
      td.parentNode.replaceChild(tr, td);
    }
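By contrast, a static list is safe to iterate while the document is being mutated; a minimal sketch of the same transformation using querySelectorAll():

    // querySelectorAll() returns a static NodeList: replacing elements during
    // iteration does not change the list being iterated.
    for (const td of document.querySelectorAll("td")) {
      const tr = document.createElement("tr");
      tr.innerHTML = td.outerHTML;
      td.parentNode.replaceChild(tr, td);
    }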
The choice to have querySelectorAll() return static objects was made after spec authors noticed that getElementsByTagName was causing problems.
URLSearchParams isn’t static, even though it represents a list, because it’s the way for authors to change the query string of a URL.
Note: For maplike and setlike types, this advice may not apply, since these types were designed to behave well when they change while being iterated.
If it would not be possible to compute properties at the time they are accessed, a static object avoids having to keep the object updated until it’s garbage collected, even if it isn’t being used.
If a static object represents some state which may change frequently, it should be returned from a method, rather than available as an attribute.
6.3. Accessors should behave like properties, not methods
IDL attributes that describe object properties or getters produce information about the state of an object.
-
A getter must not have any (observable) side effects. If you have expected side effects, use a method.
-
Getters should not throw exceptions. Getters should behave like regular data properties , and regular data properties do not throw exceptions when read. Furthermore, invalid state should generally be avoided by rejecting writes , not when data is read . Updating existing getters to throw exceptions should be avoided as existing API users may enumerate or wrap the API and not expect the new exception, breaking backwards compatibility.
-
Getters should not perform any blocking operations. If a getter requires performing a blocking operation, it should be a method.
-
If the underlying object has not changed, getters should return the same object each time it is called. This means
obj.property === obj.property
must always hold. Returning a new value from a getter each time is not allowed. If this does not hold, the getter should be a method.
Note: An antipattern example of a blocking operation is a getter like offsetTop performing layout.
When defining IDL attributes, whenever possible, preserve values given to the setter for return from the getter. That is, given obj.property = x, a subsequent obj.property === x should be true. (This will not always be the case, e.g., if a normalization or type conversion step is necessary, but should be held as a goal for normal code paths.)
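A minimal sketch of the round-trip expectation, including a case where an IDL type conversion intervenes:

    const img = document.createElement("img");

    img.width = 10;
    console.log(img.width === 10);   // true: the value given to the setter comes back

    // Conversion is one of the legitimate exceptions: width is an unsigned long,
    // so a string assigned to it is converted to a number.
    img.width = "20";
    console.log(img.width);          // 20 (a number, not the string "20")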
The object you want to return may be live or static . This means:
-
If live, then return the same object each time, until a state change requires a different object to be returned. This can be returned from either a property, getter, or method.
-
If static, then return a new object each time. In that case, it should be a method.
6.4. Accept optional and/or primitive arguments through dictionaries
API methods should generally use dictionary arguments instead of a series of optional arguments.
This makes the code that calls the method more readable, and the method signature easier to remember. It also makes the API more extensible in the future, particularly if multiple arguments with the same type are needed.
new Event("example", { bubbles: true, cancelable: false }) is much more readable than new Event("example", true, false).
You should also consider accepting mandatory parameters through a dictionary, if it would make the API more readable, especially when they are of primitive types.
The dictionary itself should be an optional argument, so that if the author is happy with all of the default options, they can avoid passing an extra argument.
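A minimal sketch of the shape this gives to an API, written as plain JavaScript (the frobnicate function and its option names are hypothetical):

    // All options arrive in one optional dictionary with defaults, rather than
    // as a series of positional arguments.
    function frobnicate(target, { bubbles = false, composed = false } = {}) {
      return { target, bubbles, composed }; // hypothetical behavior
    }

    const node = document.createElement("div");
    frobnicate(node);                     // happy with all defaults: no extra argument
    frobnicate(node, { composed: true }); // only the option that matters is named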
6.5. Make method arguments optional if possible
If an argument for an API method has a reasonable default value, make that argument optional and specify the default value.
For example, addEventListener() takes an optional boolean useCapture argument. This defaults to false, meaning that the event should be dispatched to the listener in the bubbling phase by default.
For boolean arguments, a default value of false is strongly preferred. (The async argument of XMLHttpRequest’s open() method defaults to true as an exception to this rule; this is for legacy interoperability reasons, not as an example of good design.)
The default value should be the value that most authors will choose, if that choice is obvious. For boolean attributes, that might mean that the attribute name needs to be chosen so that false is the common choice.
When deciding between different list data types for your API, unless otherwise required, use the following list types:
-
Method list arguments should be of type sequence<T>
-
Method return values should be of type sequence<T>
-
Attributes should be of type ObservableArray<T>
6.6. Name optional arguments appropriately
Name optional arguments to make the default behavior obvious without being named negatively.
This applies whether they are provided in a dictionary or as single arguments .
addEventListener() takes an options object which includes an option named once. This indicates that the listener should not be invoked repeatedly. This option could have been named repeat, but that would require the default to be true. Instead of naming it noRepeat, the API authors named it once, to reflect the default behaviour without using a negative.
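At the call site, the positively-named option reads naturally against its false default; a short sketch (the button and handler are assumed):

    const button = document.querySelector("button");     // assumed element
    const handleClick = () => console.log("clicked");

    button.addEventListener("click", handleClick);                 // default: fires on every click
    button.addEventListener("click", handleClick, { once: true }); // removed after the first call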
Other examples:
-
passive rather than active, or
-
isolate rather than connect, or
-
private rather than public
6.7. Use overloading wisely
If the behaviour of a method will be significantly different depending on the arguments passed you should usually define a separate method rather than overload a single method.
Overloading a method such that one of its arguments can be either a single value or an array of values is useful. Similarly, passing a dictionary of options is a good pattern because it allows for more flexibility.
However, if subsequent arguments for a method have to change because of the first argument passed, or if a method has different behaviour depending on the types of inputs, this can hinder code readability, and make discovering API functionality more difficult for authors, and should be avoided.
6.8. Classes should have constructors when possible
Make sure that any class that’s part of your API has a constructor, if appropriate.
By default, [WEBIDL] interfaces generate "non-constructible" classes: trying to create instances of them using new X() will throw a TypeError. To make them constructible, you can add appropriate constructor operations to your interface, and define the algorithm for creating new instances of your class.
This allows JavaScript developers to create instances of the class for purposes such as testing, mocking, or interfacing with third-party libraries which accept instances of that class. It also gives authors the ability to create a subclass of the class, which is otherwise prevented, because of the way JavaScript handles subclasses.
This won’t be appropriate in all cases. For example:
-
Some objects represent access to privileged resources, so they need to be constructed by factory methods which can access those resources.
-
Some objects have very carefully controlled lifecycles, so they need to be created and accessed through specific methods.
-
Some objects represent an abstract base class, which shouldn’t be constructed, and which authors should not be able to define subclasses for.
The Event class, and all its derived interfaces, are constructible. This is useful when testing code which handles events: an author can construct an Event to pass to a method which handles that type of event.
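For example, a minimal sketch of using a constructed Event in a test (the handler and event type are hypothetical):
// Code under test: a handler for "toggle" events.
function handleToggle(event) {
  console.log(`${event.type}: bubbles=${event.bubbles}`);
}
// In a test, the author can construct the event directly and pass it in:
handleToggle(new Event("toggle", { bubbles: true }));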
The Window class isn’t constructible, because creating a new window is a privileged operation with significant side effects. Instead, the window.open() method is used to create new windows.
The ImageBitmap class isn’t constructible, as it represents an immutable, ready-to-paint bitmap image, and the process of getting it ready to paint must be done asynchronously. Instead, the createImageBitmap() factory method is used to create it.
The DOMTokenList class is, sadly, not constructible. This prevents the creation of custom elements that expose their token list attributes as DOMTokenLists.
Navigator, History, and Crypto are non-constructible because they are singletons representing access to per-window information. In these cases, something like the Web IDL namespace feature might have been a better fit, but these features were designed before namespaces, and go beyond what is currently possible with namespaces.
If your API requires this type of singleton, consider using a namespace, and file an issue on Web IDL if there is some problem with using them.
Factory methods can complement constructors, but generally should not be used instead of them. It may still be valuable to include factory methods in addition to constructors, when they provide additional benefits. A common such case is when an API includes base classes and multiple specialized subclasses, with a factory method for creating the appropriate subclass based on the parameters passed. Often the factory method is a static method on the closest common base class of the possible return values.
The createElement() method is an example of a factory method that could not have been implemented as a constructor, as its result can be any of a number of subclasses of Element.
The MouseEvent.initMouseEvent() factory method only creates MouseEvent objects, which were originally not constructible, even though there was no technical reason against that. Eventually it was deprecated, and the MouseEvent object was simply made constructible.
6.9. Be synchronous when appropriate
Where possible, prefer synchronous APIs when designing a new API. Synchronous APIs are simpler to use, and need less infrastructure set-up (such as making functions async).
An API should generally be synchronous if the following rules of thumb apply:
-
The API is not expected to ever be gated behind a permission prompt, or another dialog such as a device selector.
-
The API implementation will not be blocked by a lock, or by filesystem or network access, or by inter-process communication.
-
The execution time is short and deterministic.
6.10. Design asynchronous APIs using Promises
If an API method needs to be asynchronous, use Promises, not callback functions.
Using Promises consistently across the web platform means that APIs are easier to use together, such as by chaining promises. Promise-using code also tends to be easier to understand than code using callback functions.
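For example, a minimal sketch of how a Promise-returning method composes with other asynchronous code (readData(), render() and reportError() are hypothetical names):
// Works naturally with async/await:
const data = await store.readData();
render(data);
// ...and chains with other promises:
store.readData().then(render).catch(reportError);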
An API might need to be asynchronous if:
-
the user agent needs to prompt the user for permission ,
-
some information might need to be read from disk, or requested from the network,
-
the user agent may need to do a significant amount of work on another thread, or in another process, before returning the result.
See also:
6.11. Cancel asynchronous APIs/operations using AbortSignal
If an asynchronous method can be cancelled, allow authors to pass in an AbortSignal as part of an options dictionary.
const controller = new AbortController();
const signal = controller.signal;
geolocation.read({ signal });
Using AbortSignal consistently as the way to cancel an asynchronous operation means that authors can write less complex code. For example, there’s a pattern of using a single AbortSignal for several ongoing operations, and then using the corresponding AbortController to cancel all of the operations at once if necessary (such as if the user presses "cancel", or a single-page app navigation occurs.)
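A minimal sketch of that pattern using fetch(); the cancel button is hypothetical:
const controller = new AbortController();
const { signal } = controller;
// Several ongoing operations share the same signal...
fetch("/api/messages", { signal });
fetch("/api/contacts", { signal });
// ...and a single abort() call cancels all of them at once.
cancelButton.addEventListener("click", () => controller.abort());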
Even if cancellation can’t be guaranteed, you can still use an AbortController, because a call to abort() on AbortController is a request, rather than a guarantee.
6.12. Use strings for constants and enums
If your API needs a constant, or a set of enumerated values, use string values.
Strings are easier for developers to inspect, and in JavaScript engines there is no performance benefit from using integers instead of strings.
If you need to express a state which is a combination of properties, which might be expressed as a bitmask in another language, use a dictionary object instead. This object can be passed around as easily as a single bitmask value.
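For example, a minimal sketch contrasting the two approaches (the function and flag names are hypothetical):
// Bitmask style (avoid): the call site is opaque.
// openFile(path, READ | CREATE);
// Dictionary style (prefer): each option is self-describing.
openFile(path, { read: true, create: true });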
6.13. If you need both asynchronous and synchronous methods, synchronous is the exception
In the rare case where you need to have both synchronous and asynchronous methods for the same purpose, default to asynchronous and make synchronous the exception.
Due to current limitations, some specific areas of the web platform lack asynchronous support. Therefore having support for synchronous methods in addition to asynchronous methods can be beneficial for usability. Consider these cases an exception, and have a clear path for deprecation of the synchronous methods as web platform capabilities evolve.
For most cases, the synchronous variant should be distinguished by naming it with a "Sync" suffix.
6.14. Output an array of bytes with Uint8Array
If an API returns a byte array, make it a Uint8Array, not an ArrayBuffer. ArrayBuffers cannot be read from directly; the developer would have to create a view such as a Uint8Array to read data. Providing a Uint8Array avoids that additional effort.
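For example, a minimal sketch of the difference for developers (getBytes() is a hypothetical method):
// If the API returned an ArrayBuffer, a view must be created before reading:
const bytes = new Uint8Array(api.getBytes());
console.log(bytes[0]);
// If the API returns a Uint8Array directly, the data can be read immediately:
console.log(api.getBytes()[0]);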
If the bytes in the buffer have a natural interpretation as one of the other TypedArray types, provide that instead. For example, if the bytes represent Float32 values, use a Float32Array.
6.15. Return undefined from side-effect-causing functions
When the purpose of a function is to cause side effects and not to compute a value, the function should be specified to return undefined.
Sites are unlikely to come to depend on such a return value, which makes it easier to change the function to return a meaningful value in the future should a use case for one be discovered.
HTMLMediaElement’s play() method was originally defined to return undefined, since its purpose was to change the state of the media element. Requests to play media can fail in a number of ways, so play() was changed to return a Promise.
If the API had originally been defined to return something other than undefined (for example, if it had been defined to return the media element, a popular pattern in “chaining” APIs), it would not have been backwards compatible to enhance the usability of this API in this manner.
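A minimal sketch of how the Promise-returning play() is used today (showPlayButton() is a hypothetical fallback):
// Because play() returns a Promise, failures such as blocked autoplay can be handled:
video.play().catch(() => {
  showPlayButton();
});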
See also:
7. Event Design
7.1. Use promises for one time events
Follow the advice in the Writing Promise-Using Specifications guideline.
7.2. Events should fire before related Promises resolve
If a Promise-based asynchronous algorithm dispatches events, it should dispatch them before the Promise resolves, rather than after.
When a promise is resolved, a microtask is queued to run its reaction callbacks. Microtasks are processed when the JavaScript stack empties. Dispatching an event is synchronous, which involves the JavaScript stack emptying between each listener. As a result, if a promise is resolved before dispatching a related event, any microtasks that are scheduled in reaction to a promise will be invoked between the first and second listeners of the event.
Dispatching the event first prevents this interleaving. All event listeners are then invoked before any promise reaction callbacks.
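A minimal sketch of the recommended ordering, from the implementation’s point of view (the event type and resolver are hypothetical):
// Dispatch the event first, so every listener runs...
target.dispatchEvent(new Event("statechange"));
// ...before any promise reaction callbacks are queued.
resolvePendingPromise(result);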
7.3. Don’t invent your own event listener-like infrastructure
When creating an API which allows authors to start and stop a process which generates notifications, use the existing event infrastructure to allow listening for the notifications. Create separate API controls to start/stop the underlying process.
For example, Web Bluetooth defines a startNotifications() method on the BluetoothRemoteGATTCharacteristic interface, which adds the object to the "active notification context set". When the User Agent receives a notification from the Bluetooth device, it fires an event at the BluetoothRemoteGATTCharacteristic objects in the active notification context set.
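A minimal sketch of how an author uses this pattern (characteristic is assumed to be a BluetoothRemoteGATTCharacteristic the page already holds):
// Listen for notifications through the standard event infrastructure...
characteristic.addEventListener("characteristicvaluechanged", event => {
  console.log(event.target.value);
});
// ...and start/stop the underlying process with separate controls.
await characteristic.startNotifications();
// Later, when notifications are no longer needed:
await characteristic.stopNotifications();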
See:
7.4. Always add event handler attributes
If your API adds a new event type, add a corresponding onyourevent event handler IDL attribute to the interface of any EventTarget at which the new event may be fired.
It’s important to continue to define event handler IDL attributes because:
-
they preserve consistency in the platform
-
they enable feature-detection for the supported events (see § 2.6 New features should be detectable, and the sketch below)
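A minimal sketch of that feature detection (the event name is hypothetical):
// If a specification added an "example" event to elements with a corresponding
// onexample attribute, authors could detect support with:
const supportsExampleEvent = "onexample" in HTMLElement.prototype;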
For consistency, if the event needs to be handled by HTML and SVG elements, add the event handler IDL attributes on the GlobalEventHandlers interface mixin, instead of directly on the relevant element interface(s). Similarly, add event handler IDL attributes to WindowEventHandlers rather than Window.
7.5. Use events for notification
Events shouldn’t be used to trigger changes, only to deliver a notification that a change has already finished happening.
The resize event is fired at the Window object. It’s not possible to stop the resize from happening by intercepting the event. Nor is it possible to fire a constructed resize event to cause the window to change size. The event can only notify the author that the resize has already happened.
7.6. Guard against potential recursion
If your API includes a long-running or complicated algorithm, prevent calling into the algorithm if it’s already running.
If an API method causes a long-running algorithm to begin, you should use events to notify user code of the progress of the algorithm. However, the user code which handles the event may call the same API method, causing the complex algorithm to run recursively. The same event may be fired again, causing the same event handler to be fired, and so on.
To prevent this, make sure that any "recursive" call into the API method simply returns immediately. This technique is "guarding" the algorithm.
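A minimal sketch of the guard technique, expressed in JavaScript (the algorithm itself is hypothetical):
let running = false;
function runComplexAlgorithm() {
  if (running) return;   // guard: recursive calls return immediately
  running = true;
  try {
    // ...long-running steps, which may fire events whose handlers
    // call runComplexAlgorithm() again...
  } finally {
    running = false;
  }
}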
AbortSignal’s add, remove, and signal abort each begin with a check to see if the signal is aborted. If the signal is aborted, the rest of the algorithm doesn’t run.
In this case, a lot of the important complexity is in the algorithms run during the signal abort steps. These steps iterate through a collection of algorithms which are managed by the add and remove methods.
For example, the ReadableStreamPipeTo definition adds an algorithm into the AbortSignal’s set of algorithms to be run when the signal abort steps are triggered (by calling abort() on the AbortController associated with the signal). This algorithm is likely to resolve promises causing code to run, which may include attempting to call any of the methods on AbortSignal. Since signal abort involves iterating through the collection of algorithms, it should not be possible to modify that collection while it’s running.
And since signal abort would have triggered the code which caused the recursive call back into signal abort, it’s important to avoid running these steps again if the signal is already in the process of the signal abort steps, to avoid recursion.
Note: A caution about early termination: if the algorithm being terminated would go on to ensure some critical state consistency, be sure to also make the relevant adjustments in state before early termination of the algorithm. Not doing so can lead to inconsistent state and end-user-visible bugs when implemented as-specified.
Note: Be cautious about throwing exceptions in early termination. Keep in mind the scenario in which developers will be invoking the algorithm, and whether they would reasonably expect to handle an exception in this [perhaps rare] case. For example, will this be the only exception in the algorithm?
You won’t always be able to "guard" in this way. For example, an algorithm may have too many entry-points to reliably check all of them. If that’s the case, another option is to defer calling the author code to a later task or microtask. This avoids a stack of recursion, but can’t avoid the risk of an endless loop of follow-up tasks.
Deferring an event is often specified as " queue a task to fire an event ...".
You should always defer events if the algorithm that triggers the event could be running on a different thread or process. In this case, deferral ensures the events can be processed on the correct task in the task queue .
Both the "guarding" and the "deferring" approach have trade-offs.
"Guarding" an algorithm guarantees:
-
at the time events are fired, there is no chance that the state may have changed between the guarded algorithm ending and the event firing.
-
events fired during the algorithm, such as events to notify user code of a state change made as part of the algorithm, can be fired immediately, notifying code of the change without needing to wait for the next task.
-
user code running in the event handler can observe relevant state directly on the instance object they were fired on, rather than needing to be given a copy of the relevant state with the event.
If the events are deferred instead:
-
there is no guarantee that they will be first in the task queue once the algorithm completes.
-
any other task may change the object’s state, so you should include any state relevant to the event with the deferred event.
-
This usually involves a new subclass of Event, with new attributes to hold the state. For example, ProgressEvent adds loaded, total, etc. attributes to hold the state.
-
-
if different parts of an algorithm need to coordinate, you may need to define an explicit state machine (well-defined state transitions) to ensure that when a deferred event fires, the behavior of inspecting or changing state is well-defined.
For example, in [payment-request], the PaymentRequest’s [[state]] internal slot explicitly tracks the object’s state through its well-defined transitions.
-
These state transitions often use the guarding technique themselves, to ensure the state transitions happen appropriately.
For example, in [payment-request] note the guards used around the [[state]] internal slot, such as in the show() algorithm.
-
-
if the deferred event doesn’t need extra state, or a state machine, this probably means that the event is just signalling the completion of the algorithm. If this is true, the API should probably return a Promise instead of firing the event. See § 7.1 Use promises for one time events.
Note: events that expose the possibility of recursion as described in this section were sometimes called "synchronous events". This terminology is discouraged as it implies that it’s possible to dispatch an event asynchronously. All events are dispatched synchronously. What is more often implied by "asynchronous event" is to defer firing an event.
7.7. Use plain Events for state
Where possible, use a plain Event with a specified type, and capture any state information in the target object. It’s usually not necessary to create new subclasses of Event.
7.8. Use Events and Observers appropriately
In general, use EventTarget and notification Events, rather than an Observer pattern, unless an EventTarget can’t work well for your feature. Using an EventTarget ensures your feature benefits from improvements to the shared base class, such as the addition of the once option.
If using events causes problems, such as unavoidable recursion , consider using an Observer pattern instead.
MutationObserver, Intersection Observer, Resize Observers, and IndexedDB Observers are all examples of an Observer pattern.
MutationObserver replaced the deprecated DOM Mutation Events after developers noticed that DOM Mutation Events:
-
fire too often
-
don’t benefit from event propagation, which makes them too slow to be useful
-
cause recursion which is too difficult to guard against.
Mutation Observers:
-
can batch up mutations to be sent to observers after mutations have finished being applied;
-
don’t need to go through event capture and bubbling phases;
-
provide a richer API for expressing what mutations have occurred.
Note: Events can also batch up notifications, but DOM Mutation Events were not designed to do this. Events don’t always need to participate in event propagation, but events on DOM Nodes usually do.
The Observer pattern works like this:
-
Each instance of the Observer class is constructed with a callback, and optionally with some options to customize what should be observed.
-
Instances begin observing specific targets, using a method named observe(), which takes a reference to the target to be observed. The options to customize what should be observed may be provided here instead of to the constructor. The callback provided in the constructor is invoked when something interesting happens to those targets.
-
Callbacks receive change records as arguments. These records contain the details about the interesting thing that happened. Multiple records can be delivered at once.
-
The author may stop observing by calling a method called unobserve() or disconnect() on the Observer instance.
-
Optionally, a method may be provided to immediately return records for all observed-but-not-yet-delivered occurrences.
IntersectionObserver may be used like this:
function checkElementStillVisible(element, observer) {
  delete element.visibleTimeout;
  // Process any observations which may still be on the task queue
  processChanges(observer.takeRecords());
  if ('isVisible' in element) {
    delete element.isVisible;
    logAdImpressionToServer();
    // Stop observing this element
    observer.unobserve(element);
  }
}

function processChanges(changes) {
  changes.forEach(function (changeRecord) {
    var element = changeRecord.target;
    element.isVisible = isVisible(changeRecord.boundingClientRect, changeRecord.intersectionRect);
    if ('isVisible' in element) {
      // Element became visible
      element.visibleTimeout = setTimeout(() => {
        checkElementStillVisible(element, observer);
      }, 1000);
    } else {
      // Element became hidden
      if ('visibleTimeout' in element) {
        clearTimeout(element.visibleTimeout);
        delete element.visibleTimeout;
      }
    }
  });
}

// Create IntersectionObserver with callback and options
var observer = new IntersectionObserver(processChanges, { threshold: [0.5] });

// Begin observing "ad" element
var ad = document.querySelector('#ad');
observer.observe(ad);
(Example code adapted from the IntersectionObserver explainer.)
To use the Observer pattern, you need to define:
-
the new Observer object type,
-
an object type for observation options, and
-
an object type for the records to be observed.
The trade-off for this extra work is the following advantages:
-
Instances can be customized at observation time, or at creation time. The constructor for an Observer, or its observe() method, can take options allowing authors to customize what is observed for each callback. This isn’t possible with addEventListener().
-
It’s easy to stop listening on multiple callbacks using the disconnect() or unobserve() method on the Observer object.
-
You have the option to provide a method like takeRecords(), which immediately fetches the relevant data, instead of waiting for an event to fire.
-
Because Observers are single-purpose, you don’t need to specify an event type.
Observers and EventTargets have these things in common:
-
Both can be customized at creation time.
-
Both can batch occurrences and deliver them at any time. EventTargets don’t need to be synchronous; they can use microtask timing, idle timing, animation-frame timing, etc. You don’t need an Observer to get special timing or batching.
-
Neither EventTargets nor Observers need to participate in a DOM tree (bubbling/capture and cancellation). Most prominent EventTargets are Nodes in the DOM tree, but many other events are standalone; for example, IDBDatabase and XMLHttpRequestEventTarget. Even when using Nodes, your events may be designed to be non-bubbling and non-cancelable.
Consider a hypothetical version of IntersectionObserver that’s an EventTarget subclass:
const io = new ETIntersectionObserver(element, { root, rootMargin, threshold });

function listener(e) {
  for (const change of e.changes) {
    // ...
  }
}

io.addEventListener("intersect", listener);
io.removeEventListener("intersect", listener);
Compared to the Observer version:
-
it’s more difficult to observe multiple elements with the same options;
-
there is no way to request data immediately;
-
it’s more work to remove multiple event listeners for the same event;
-
the author has to provide a redundant "intersect" event type.
In
common
with
the
Observer
version:
-
it can still do batching;
-
it has the same timing (based on the JavaScript event queue);
-
authors can still customize what to listen for; and
-
events don’t go through capture or bubbling.
These aspects can be achieved with either design.
See also:
8. Web IDL, Types, and Units
8.1. Use numeric types appropriately
If an API you’re designing uses numbers, use one of the following [WEBIDL] numeric types, unless there is a specific reason not to:
-
unrestricted double
-
Any JavaScript number, including infinities and NaN
-
double
-
Any JavaScript number, excluding infinities and NaN
-
[EnforceRange] long long
-
Any JavaScript number from −2^63 to 2^63, rounded to the nearest integer. If a number outside this range is given, the generated bindings will throw a TypeError.
-
[EnforceRange] unsigned long long
-
Any JavaScript number from 0 to 2^64, rounded to the nearest integer. If a number outside this range is given, the generated bindings will throw a TypeError.
JavaScript has only one numeric type, Number: IEEE 754 double-precision floating point, including ±0, ±Infinity, and NaN. [WEBIDL] numeric types represent rules for modifying any JavaScript number to belong to a subset with particular properties. These rules are run when a number is passed to the interface defined in IDL, whether a method or a property setter.
If you have extra rules which need to be applied to the number, you can specify those in your algorithm.
For example, the conversion of a JavaScript number to an octet (8 bits, in the range [0, 255]) involves taking the modulo of the JavaScript number. To convert a JavaScript number value of 300 to an octet, the bindings will first compute 300 modulo 256, so the resulting number will be 44, which might be surprising. Instead, you can use [EnforceRange] octet to throw a TypeError for values outside of the octet range, or [Clamp] octet to clamp values to the octet range (for example, converting 300 to 255). This also works for the other shorter types, such as short or long.
bigint should be used only when values greater than 2^53 or less than −2^53 are expected. An API should not support both BigInt and Number simultaneously, either by supporting both types via polymorphism, or by adding separate, otherwise identical APIs which take BigInt and Number. This risks losing precision through implicit conversions, which defeats the purpose of BigInt.
8.2. Represent strings appropriately
When designing a web platform feature which operates on strings, use DOMString unless you have a specific reason not to. Most string operations don’t need to interpret the code units inside of the string, so DOMString is the best choice. In the specific cases explained below, it might be appropriate to use either USVString or ByteString instead. [INFRA] [WEBIDL]
USVString is the Web IDL type that represents scalar value strings. For strings whose most common algorithms operate on scalar values (such as percent-encoding), or for operations which can’t handle surrogates in input (such as APIs that pass strings through to native platform APIs), USVString should be used. Reflecting IDL attributes whose content attribute is defined to contain a URL (such as href) should use USVString. [HTML]
ByteString should only be used for representing data from protocols like HTTP which don’t distinguish between bytes and strings. It isn’t a general-purpose string type. If you need to represent a sequence of bytes, use Uint8Array.
8.3. Use milliseconds for time measurement
If you are designing an API that accepts a time measurement, express the time measurement in milliseconds.
Even if seconds (or some other time unit) are more natural in the domain of an API, sticking with milliseconds ensures that APIs are interoperable with one another. This means that authors don’t need to convert values used in one API to be used in another API, or keep track of which time unit is needed where.
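For example, existing timing APIs already take milliseconds (doSomething is a hypothetical callback):
// setTimeout() expresses its delay in milliseconds; two seconds is 2000.
setTimeout(doSomething, 2000);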
This convention began with setTimeout() and the Date API, and has been used since then.
Note: high-resolution time is usually represented as fractional milliseconds using a floating point value, not as an integer value of a smaller time unit like nanoseconds.
8.4. Use the appropriate type to represent times and dates
When representing date-times on the platform, use the DOMHighResTimeStamp type. DOMHighResTimeStamp allows comparison of timestamps, regardless of the user’s time settings. DOMHighResTimeStamp values represent a time value in milliseconds. See [HIGHRES-TIME] for more details.
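For example, performance.now() returns a DOMHighResTimeStamp, so two timestamps can be compared directly:
const start = performance.now();
// ...do some work...
const elapsedMilliseconds = performance.now() - start;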
Don’t use the JavaScript Date class for representing specific date-time values. Date objects are mutable (may have their value changed), and there is no way to make them immutable. Date must not be used; see the following:
-
Frozen date objects? on es-discuss
-
Remove Date from Web IDL on the Web IDL Bugzilla
8.5. Use Error or DOMException for errors
Represent errors in web APIs as ECMAScript error objects (e.g., Error) or as DOMException, whether they are exceptions, promise rejection values, or properties.
9. APIs that wrap access to device or browser capabilities
New APIs are now being developed in the web platform for interacting with devices. For example, authors wish to be able to use the web to connect with their microphones and cameras , generic sensors (such as gyroscope and accelerometer), Bluetooth and USB -connected peripherals, automobiles , etc.
The same applies to capabilities that might be optionally provided by either the host system or an external service. This includes capabilities that depend on users paying for access to the capability.
These capabilities can be functionality provided by the underlying operating system, or provided by a native third-party library. APIs can provide an abstraction which "wraps" the native functionality without introducing significant complexity, while securing the API surface to the browser. So, these are called wrapper APIs.
This section contains principles for consideration when designing APIs for these capabilities.
9.1. Don’t expose unnecessary information about capabilities
In line with the Data Minimization principle, if you need to give web sites access to information about capabilities, only expose the minimal amount of data necessary.
Firstly, think carefully about whether it is really necessary to expose information at all. Consider whether your user needs could be satisfied by a less powerful API.
Exposing the presence of a device, additional information about a device, or device identifiers, each increase the risk of harming the user’s privacy.
When a user makes a choice to deny access to a device or capability, that should not reveal whether the capability exists. Reducing information leakage in that scenario is more important than when the capability is granted.
As more specific information is shared, the fingerprinting data available to sites gets larger. There are also other potential risks to user privacy.
If there is no way to design a less powerful API, use these guidelines when exposing device information:
- Limit information in any identifier
-
Include as little identifiable information as possible in device identifiers exposed to the web platform. Identifiable information includes branding, make and model numbers, etc. You can usually use a randomly generated identifier instead. Make sure that your identifiers aren’t guessable, and aren’t re-used.
- Keep the user in control
-
When the user chooses to clear browsing data, make sure any stored device identifiers are cleared.
- Hide sensitive information behind a user permission
-
If you can’t create a device identifier in an anonymous way, limit access to it. Make sure the user can provide meaningful consent to a Web page accessing this information.
- Tie identifiers to the same-origin model
-
Create distinct identifiers for the same physical device for each origin that has access to it.
-
If the same device is requested more than once by the same origin, return the same identifier for it (unless the user has cleared their browsing data). This allows authors to avoid having several copies of the same device.
- Persistable when necessary
-
If a device identifier is time consuming to obtain, make sure authors can store an identifier generated in one session for use in a later session. You can do this by making sure that the procedure to generate the identifier consistently produces the same value for the same device, for each origin.
See also:
9.2. Use care when exposing APIs for selecting or enumerating devices
Look for ways to avoid enumerating devices. If you can’t avoid it, expose the least information possible.
If an API exposes the existence, capabilities, or identifiers of more than one device, all of the risks in § 9.1 Don’t expose unnecessary information about capabilities are multiplied by the number of devices. For the same reasons, consider whether your user needs could be satisfied by a less powerful API. [LEAST-POWER]
If the purpose of the API is to enable the user to select a device from the set of available devices of a particular kind, you may not need to expose a list to script at all. An API which invokes a User-Agent-provided device picker could suffice. Such an API:
-
keeps the user in control,
-
doesn’t expose any device information without the user’s consent ,
-
doesn’t expose any fingerprinting data about the user’s environment by default, and
-
only exposes information about one device at a time.
When designing an API which allows users to select a device, it may be necessary to also expose the fact that there are devices available to be picked. This does expose one bit of fingerprinting data about the user’s environment to websites, so it isn’t quite as safe as an API which doesn’t have such a feature.
The RemotePlayback interface doesn’t expose a list of available remote playback devices. Instead, it allows the user to choose one device from a device picker provided by the User Agent.
It does enable websites to detect whether or not any remote playback device is available, so the website can show or hide a control the user can use to show the device picker.
The trade-off is that by allowing websites this extra bit of information, the API lets authors make their user interface less confusing. They can choose to show a button to trigger the picker only if at least one device is available.
If you must expose a list of devices, try to expose the smallest subset that satisfies your user needs.
For example, an API which allows the website to request a filtered or constrained list of devices is one option to keep the number of devices smaller. However, if authors are allowed to make multiple requests with different constraints, they may still be able to access the full list.
Finally, if you must expose the full list of devices of a particular kind, please rigorously define the order in which devices will be listed. This can reduce interoperability issues, and helps to mitigate fingerprinting. (Sort order could reveal other information: see Mitigating Browser Fingerprinting in Web Specifications § 6.2 Standardization for more.)
Note: While APIs should not expose a full list of devices in an implementation-defined order, they may need to for web compatibility reasons.
9.3. Design based on user needs, not the underlying capability
Expose new native capabilities being brought to the web based on user needs.
Avoid directly translating an existing native API to the web.
Instead, consider the functionality available from the native API, and the user needs it addresses, and design an API that meets those user needs, even if the implementation depends on the existing native API.
Be particularly careful about exposing the exact lifecycle and data structures of the underlying native APIs. When possible, consider flexibility for new hardware.
This means newly proposed APIs should be designed with regard to how they are intended to be used rather than how the underlying hardware, device, or native API happens to operate today.
9.4. Be proactive about safety
When bringing native capabilities to the web platform, try to design defensively.
Bringing a native capability to the web platform comes with many implications. Users may not want websites to know that their computers have specific capabilities. Therefore, access to anything outside of the logical origin boundary should be permission gated.
For example, if a device can store state, and that state is readable at the same time by multiple origins, a set of APIs that lets you read and write that state is effectively a side-channel that undermines the origin model of the web.
For these reasons, even if the device allows non-exclusive access, you may want to consider enforcing exclusive access per-origin, or even restricting it further to only the current active tab.
Additionally, APIs should be designed so that the applications can gracefully handle physical disruption, such as a device being unplugged.
9.5. Adapt native APIs using web platform principles
When adapting native operating system APIs for the web, make sure the new web APIs are designed with web platform principles in mind.
- Make sure the web API can be implemented on more than one platform
-
When designing a wrapper API, consider how different platforms provide its functionality.
Ideally, all implementations should work exactly the same, but in some cases you may have a reason to expose options which only work on some platforms. If this happens, be sure to explain how authors should write code which works on all platforms. See § 2.6 New features should be detectable .
- Underlying protocols should be open
-
APIs which require exchange with external hardware or services should not depend on closed or proprietary protocols. Depending on non-open protocols undermines the open nature of the web.
- Design APIs to handle the user being off-line
-
If an API depends on some service which is provided by a remote server, make sure that the API functions well when the user can’t access the remote server for any reason.
- Avoid additional fingerprinting surfaces
-
Wrapper APIs can unintentionally expose the user to a wider fingerprinting surface. Please read the TAG’s finding on unsanctioned tracking for additional details.
10. Other API Design Considerations
10.1. Enable polyfills for new features
Polyfills can be hugely beneficial in helping to roll out new features to the web platform. The Technical Architecture Group finding on Polyfills and the Evolution of the Web offers guidance that should be considered in the development of new features, notably:
-
Being "polyfillable" isn’t essential but is beneficial
-
Polyfill development should be encouraged
10.2. Where possible APIs should be made available to dedicated workers
When exposing a feature, please consider whether it makes sense to expose the feature to dedicated workers (via the DedicatedWorkerGlobalScope interface).
Many features could work out of the box on dedicated workers and not enabling the feature there could limit the ability for users to run their code in a non-blocking manner.
Certain challenges can exist when trying to expose a feature to dedicated workers, especially if the feature requires user input by asking for permission, or showing a picker or selector. Even though this might discourage spec authors from supporting dedicated workers, we still recommend designing the feature with dedicated worker support in mind, in order to not add assumptions that will later make it unnecessarily hard to expose these APIs to dedicated workers.
10.2.1. Some APIs should only be exposed to dedicated workers
Developers prefer simple code to complex code. They are more likely to use an API in the simplest way the API allows.
It’s important to avoid adding features that block rendering. § 3.4 Avoid features that block rendering
If the easiest way to use an API is likely to result in render blocking or “ jank ,” the user experience will suffer. (This problem is even more pronounced on low-powered devices, which are more likely to be used by disadvantaged or marginalized users. Remember, the web is for all people .)
Therefore, APIs which would often block the main thread if used as intended should not be exposed on the Window interface. By restricting such APIs to the DedicatedWorkerGlobalScope interface, the “easy” path for web developers is the path with the best experience for users.
ScriptProcessorNodes were replaced by AudioWorklets in the Web Audio API because use of ScriptProcessorNode from the main thread frequently resulted in a poor user experience. [WebAudio]
10.3. Only purely computational features should be exposed everywhere
When exposing a feature, please consider whether it makes sense to expose the feature to all possible environments (via the [Exposed=*] annotation or including it on all global scope interfaces).
Only purely computational features should be exposed everywhere. That is, they do not perform I/O and do not affect the state of the user agent or the user’s device.
The TextEncoder interface converts a string to UTF-8 encoded bytes. This is a purely computational interface, generally useful as a JavaScript language facility, so it should be exposed everywhere.
localStorage affects the state of the user agent, so it should not be exposed everywhere.
Technically, console could affect the state of the user agent (by causing log messages to appear in the developer tools) or the user’s device (by writing to a log file). But these things are not observable from the running code, and the practicality of having console everywhere outweighs the disadvantages.
Additionally, anything relying on an event loop should not be exposed everywhere. Not all global scopes have an event loop.
The timeout method of AbortSignal relies on an event loop and should not be exposed everywhere. The rest of AbortSignal is purely computational, and should be exposed everywhere.
The [Exposed=*] annotation should also be applied conservatively. If a feature is not that useful without other features that are not exposed everywhere, default to not exposing that feature as well.
The Blob interface is purely computational, but Blob objects are primarily used for, or obtained as a result of, I/O. By the principle of exposing conservatively, Blob should not be exposed everywhere.
10.4. Add new data formats properly
Always define a corresponding MIME type and extend existing APIs to support this type for any new data format.
There are cases when a new capability on the web involves adding a new data format. This can be an image, video, audio, text, or any other type of data that a browser is expected to ingest. Use a strictly validated standardized MIME type for new formats. New formats containing text should only support UTF-8 encoded text.
While legacy media formats do not always have strict enforcement for MIME types (and sometimes rely on peeking at headers, to workaround this), this is mostly for legacy compatibility reasons and should not be expected or implemented for new formats.
It is expected that spec authors also integrate the new format into existing APIs, so that it is safelisted in both ingress (e.g. decoding from a ReadableStream) and egress (e.g. encoding to a WritableStream) points from a browser’s perspective.
For example, if you are to add an image format to the web platform, first add a new MIME type for the format. After this, you would naturally add a decoder (and presumably an encoder) for said image format to support decoding in HTMLImageElements. On top of this, you are also expected to add support to egress points such as HTMLCanvasElement.toBlob() and HTMLCanvasElement.toDataURL().
For legacy reasons browsers support MIME type sniffing, but we do not recommend extending the pattern matching algorithm , due to security implications, and instead recommend enforcing strict MIME types for newer formats.
New MIME types should have a specification and should be registered with the Internet Assigned Numbers Authority (IANA).
10.5. Extend existing manifest files rather than creating new ones
If your feature requires a manifest, investigate whether you can extend an existing manifest schema.
New web features should be self-contained and self-describing, and ideally should not require an additional manifest file. Some of the existing manifest files include:
-
Web App Manifest which contains features related to web applications.
-
Payment Method Manifest which is used for payment methods in the context of the web payment API
-
Publication Manifest which is used by some web publications working group standards
-
Origin Policy which is used to set security policies.
We encourage people to extend existing manifest files. Always try to get the changes into the original spec, or at least discuss the extension with the spec editors. Having this discussion is more likely to result in a better design and lead to something that better integrates with the platform.
When designing new keys and values for a manifest, make sure they are needed (that is, they enable well-thought-out use-cases). Also, please check if a similar key exists. If an existing key/value pair does more or less what is needed, work with the existing spec to extend it to your use-case if possible.
However, if your feature requires a complex set of metadata specific to a functional domain, the creation of a new manifest may be justified.
You may need to make a new manifest file if the domain of the manifest file is different from the existing manifest files. For example, if the fetch timing is different, or if the complexity of the manifest warrants it. Application metadata should be added to the Web App Manifest or be an extension of it. Manifests designated to be used for specific applications or which require interoperability with non-browsers may need to take a different approach. Payment Method Manifest, Publication Manifest, and Origin Policy are examples of these cases.
For example, if you have a single piece of metadata, even if the fetch timing is different than an existing manifest, it is probably best to use an existing manifest (or ideally design the feature in such a way that a manifest is not required).
Note that in all cases, the naming conventions should be harmonized (see § 12 Naming principles ).
Note: By principle, existing manifests use lowercase, underscore-delimited names. There have been times where it was useful to re-use dictionaries from a manifest in DOM APIs as well, which meant converting the names to camel-cased version. One such example is the image resource . For this reason, if a key can clearly be expressed as a single word, that is recommended.
10.6. Consider consumers when serializing
When adding or extending features that involve a parser or a serializer, you should consider their effect on serialization. The following are constituencies of serialization results that must be considered:
-
Users - because the result of serialization may be presented to the end user
-
Tools - which may rely on the output of serialization in a number of ways (for instance, to detect if the parser need to correct for errors in the input)
-
Web APIs - because serializations may be passed into other APIs of the web platform
Consider language specific expectations - for instance, in some languages, the presence or absence of whitespace may be significant. Languages may differ in the precision of floating point numbers they accept. The results of serialization:
-
Should match developer expectations (for instance, serializing CSS properties should not result in output that is very different from what a CSS author would have written)
-
Idempotence - the result of serializing the output of a parser should be something that, when parsed and serialized, produces itself.
-
Should not add to error accumulation - taking the serialized output of an API and feeding it back to the same API in a loop should result in the same internal state
10.7. Ensure features are developer-friendly
Any new feature should be developer-friendly. While it is hard to quantify friendliness, at least consider the following points.
While error text in exceptions should be generic, developer-oriented error messages (such as those from a developer console) must be meaningful. When a developer encounters an error, the message should be specific to that error case, and not overly generic.
Ideally, developer-oriented error messages should have enough information to guide the developer in pinpointing where the problem is.
Declarative features, such as CSS, may require extra work in the implementation for debuggability. Defining this in the specification not only makes the feature more developer-friendly, it also ensures a consistent development experience for users.
A good example where debuggability was defined as part of the specification is Web Animations .
10.8. Use the best crypto, and expect it to evolve
Use only cryptographic algorithms that have been impartially reviewed by security experts, and make sure your choice of algorithm is proven and up-to-date. Cryptographic protocols and algorithms evolve quickly, and can become obsolete or insecure.
10.9. Do not expose new information through Client Hints
When using Client Hints, don’t expose information that the web page does not already have access to.
Client hints are an important optimization, but cannot be the sole means by which information is exposed to sites. As it says in RFC 8942 §4.1 where client hints are defined:
Therefore, features relying on this document to define Client Hint headers MUST NOT provide new information that is otherwise not made available to the application by the user agent, such as existing request headers, HTML, CSS, or JavaScript.
If you are trying to add a new client hint that exposes information that is not available to the web page through other means, please pursue exposing this information through an API first.
11. Writing good specifications
This document mostly covers API design for the Web, but those who design APIs are hopefully also writing specifications for the APIs that they design.
11.1. Identify the audience of each requirement in your specification
Document both how authors should write good code using your API, and how implementers of your API should handle poorly-written code.
The web, especially in comparison to other platforms, is designed to be robust in accepting poorly-formed markup. This means that web pages which use older versions of web standards can still be viewed in newer user agents, and also that authors have a shallower learning curve.
To support this, web specification writers need to describe how to interpret poorly-formed markup, as well as well-formed markup.
Implementers need to be able to understand the "supported language", which is more complex than the "conforming language" which authors should be aiming to use.
The definition of the <table> element explains how to process the contents of a <table> element, including cases where the contents do not conform to the Content model.
11.2. Specify completely and avoid ambiguity
When specifying how a feature should work, make sure that there is enough information so that authors don’t have to write different code to work with different implementations.
If a specification isn’t specific enough, implementers might make different choices which force authors to write extra code to handle the differences.
Implementers shouldn’t need to check details of other implementations to avoid this situation. Instead, the specification should be complete and clear enough on its own.
Note: This doesn’t mean that implementations can’t render things differently, or show different user interfaces for things like permission prompts.
Note: Implementers should file bugs against specifications which don’t give them clear enough information to write the implementation.
11.2.1. Define algorithms clearly
Write algorithms in a way that is clear and concise.
The most common way to write algorithms is to write an explicit sequence of steps. This often looks like pseudo-code.
The showModal() method is described as a numbered sequence of steps which clearly explains when to throw exceptions and when to run algorithms defined in other parts of the HTML spec.
When writing a sequence of steps, imagine that it is a piece of functional code.
-
Clearly specify the inputs and outputs, name the algorithm and the variables it uses well, and explicitly note the points in the algorithm where the algorithm may return a result or error.
-
As much as possible, avoid writing algorithms which have side effects.
Summarize the purpose of the algorithm before going into detail, so that readers can decide whether to read the steps or skip over them. For example, take the following steps, which ensure that there is at most one pending X callback per top-level browsing context.
A plain sequence of steps is not always the best way to write an algorithm. For example, it might make sense to define or re-use a formal syntax or grammar to avoid repetition, or define specific states to be used in a state machine. When using extra constructs like these, the earlier advice still applies.
Describe algorithms as closely as possible to how they would be implemented. This may make the spec harder to write, but it means that implementations don’t need to figure out how to translate what’s written in the specification into an implementation. If they do have to translate, different implementations may make different decisions, which may lead to later features being feasible in one implementation but not another.
CSS selectors are read and understood from left to right, but in practice are matched from right to left in implementations. This allows the most specific term to be matched or not matched quickly, avoiding unnecessary work. The CSS selector matching algorithm is written this way, instead of a hypothetical algorithm which would more closely match how CSS selectors are often read by CSS authors.
See also:
11.2.2. Use explicit flags for state
Instead of describing state with words, use explicit flags for state when writing algorithms.
Using explicit flags makes it clear whether or not the state changes in different error conditions, and makes it clear when the state described by the flags is reset.
11.3. Resolving tension between interoperability and implementability
Specifying a feature, like all engineering work, requires weighing tradeoffs and compromising.
Sometimes, despite the best efforts of all involved, we fail to find a way to interoperably specify a feature that every implementor agrees is implementable in their engine.
Choosing a way forward should be guided by what is best for the end user .
First, examine the nature of the implementability concerns. Perhaps the implementor identified potential end-user harm and decided that the feature should not be implemented. In this situation, it may be best to not specify the feature, and for any implementors who have shipped the feature to un-ship it. Keep in mind that un-shipping features may result in user and author confusion.
Sometimes the same API is implementable in every engine, but its behavior cannot be made to be fully interoperable. Whenever behavior differs between implementations, the end-user impact must be considered. One downside to this approach is that the difference in behavior is not feature detectable . On the other hand, if something changes in the future that enables all implementations to converge on the same behavior, sites will not need to be updated to take advantage of this.
Note: Authors may assume the behavior of the dominant implementation is correct and any other behavior is buggy, which may further entrench the dominant implementation.
The backdrop-filter property is known to have a number of visible behavior differences between implementations. At the present time, those working on it believe the benefits of sharing an API outweigh the interoperability costs of the differences.
If behavior differences are serious enough, and cannot be converged further, it may be preferable to specify different APIs, with each implementor going with the alternative that they are able to implement.
One risk of this approach comes about if implementations are eventually able to converge on the same behavior. In this case, multiple alternative APIs may need to be supported indefinitely, which can increase developer complexity and implementation maintenance costs.
Exposing two different APIs that are intended to address the same use cases forces authors to use feature detection to write different code for different browsers.
Note: There is a risk that authors may assume the API variant supported by the dominant implementation is the "correct" one, or they may simply be unaware of the API variants supported in implementations they do not use regularly, which may further entrench the dominant implementation.
If you do go this route, try to reduce the cost to authors by minimizing the differences between the APIs. It may be possible to design things such that the non-standard part is a small, isolated component of a larger, standardized framework.
[ENCRYPTED-MEDIA] and [payment-request] are both examples of specifications which minimize API differences by isolating a non-standard component ( Content Decryption Modules and payment methods , respectively) from the rest of the (shared) API surface.
Groups sometimes choose to specify APIs knowing that some implementations are unwilling to implement it. Authors may use feature detection to only use APIs when they are available, and users may benefit where such features are supported and used. This is always a problematic outcome, though it is worst when there is only one willing implementor. Some standards venues have rules explicitly disallowing the standardization of features which only have a single interested implementor. Even if you are not operating in such a group, standardization of single-implementation features is strongly discouraged.
You may also find that none of these options is acceptable to your group. While the best path forward may be to choose not to specify the feature, there is the risk that some implementations may ship the feature as a nonstandard API.
11.4. Avoid monkey patching
A monkey patch layers new functionality on top of an existing specification in a way that extends, overrides, or otherwise modifies the existing specification’s behavior. Monkey patching is generally considered bad practice and should be avoided for the reasons listed below, but it is sometimes unavoidable (see our guidance if you need to monkey patch). An example of monkey patching would be: specification A defines an internal algorithm as part of its capability, then specification B overrides or modifies that algorithm directly, without using publicly defined extension points.
As such, monkey patching (wrongly!) presupposes that underlying functionality cannot, and will not, be changed. This can lead to several problems:
-
If the existing specification changes, the monkey patched text may no longer apply, causing it to break or leading to confusion.
-
If several specifications monkey patch part of another spec, the order in which the patches are applied can lead to inconsistent behavior.
-
The existing specification’s authors may not be aware of the monkey patched text in your spec. This misses an opportunity to receive guidance or to reach consensus on a fix. Worse, the behavior being patched may be interrelated with some other issue or behavior, compounding the problem.
-
An implementer doing code maintenance may read the underlying specification and inadvertently revert the monkey patched behavior, assuming it is a bug.
Wikipedia also describes some additional pitfalls of monkey patching .
11.4.1. If you need to monkey patch
Sometimes monkey patching is unavoidable (e.g., early in the design phase of a new specification). If you find yourself needing to monkey patch an existing specification, make sure you:
-
In your spec, clearly mark the monkey patch as a proposed change to another specification that is only temporarily in this specification, using language like:
-
Identify the place you want to change using a quote from its text or by linking to a defined term if you’re replacing that term’s whole definition. Include the current step numbers if you’re changing something that’s numbered, but remember that numbers aren’t sufficient by themselves because they change when other steps are added or removed. If you’re changing several places within a definition, it can be helpful to paste the entire definition into a <blockquote> and use <ins> and <del> to describe what’s changing.
Before step 4.2 of HTTP fetch, "If request’s redirect mode is "follow", then set request’s service-workers mode to "none".", insert the following steps:
-
The first new step.
In terms of CSS2.1 block-level formatting [CSS2] , the rules for “over-constrained” computations in CSS 2 § 10.3.3 Block-level, non-replaced elements in normal flow are ignored in favor of alignment as specified here and the used value of the margin properties are therefore not adjusted to correct for the over-constraint.
Append optional USVString value parameters to the definitions of URLSearchParams.delete() and URLSearchParams.has():

partial interface URLSearchParams {
  undefined delete(USVString name, optional USVString value);
  boolean has(USVString name, optional USVString value);
};

Modify the definition of font-size-adjust as follows:
Name: font-size-adjust
Value: none | [ ex-height | cap-height | ch-width | ic-width | ic-height ] ? [ from-font | <number> ]
Initial: none
Applies to: all elements and text
Inherited: yes
Percentages: N/A
Computed value: <del>a number or the keyword none</del> <ins>the keyword none, or a pair of a metric keyword and a <number></ins>
Canonical order: per grammar
Animation type: discrete if the keywords differ, otherwise by computed value type
-
Keep monkey patches short. If you’re modifying an algorithm by adding more than a couple of steps, define a separate, self-contained algorithm in your specification, and have the monkey patch call the new algorithm.
-
If you’re replacing or adding steps in an algorithm, write the new steps so an editor can paste them verbatim into the upstream algorithm. This implies that control flow like "return" or "abort" will return or abort from the whole upstream algorithm, not just the monkey patched section.
-
Once your feature has been reviewed within your own community, and there seems to be consensus that it’s a good idea, file an issue against the existing specification that asks the upstream community to review your monkey patch. This community may suggest better ways to accomplish your goals. Take them seriously. They might also be able to tell you how to hook into an existing extension point or quickly create one so that you can remove your monkey patch sooner than you expected.
-
Update the <p class="issue"> block mentioned above to point to the issue you filed against the upstream specification.
-
When your work has enough support to merge into the upstream specification, work with that specification’s maintainers to do so.
-
Once the existing specification has integrated your monkey patch, remove the monkey patch from your specification.
Note that monkey patching is different from "modularization", which extends existing technology in a way that is self-contained and doesn’t cause side-effects to other underlying specifications (e.g., the CSS Modules). Modularization is considered a good practice.
It’s also usually ok to extend a specification using WebIDL’s partial interfaces , partial dictionaries , and so on. But even in those cases, it’s highly advisable to coordinate with the authors of the specification being extended.
12. Naming principles
Names take meaning from:
-
signposting (the name itself)
-
use (how people come to understand the name over time)
-
context (the object on the left-hand side, for example)
12.1. Use common words
API naming must be done in easily readable US English. Keep in mind that most web developers aren’t native English speakers. Whenever possible, names should be chosen that use common vocabulary a majority of English speakers are likely to understand when first encountering the name.
Value readability over brevity. Keep in mind, however, that the shorter name is often the clearer one. For instance, it may be appropriate to use technical language or well-known terms of art in the specification where the API is defined.
For example, the Fetch API’s Body mixin’s json() method is named for the kind of object it returns. JSON is a well-known term of art among web developers likely to use the Fetch API. It would harm comprehension to give this API a name less directly connected to its return type. [FETCH]
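A short usage sketch shows how the name carries its meaning at the call site; the URL '/api/profile' is a placeholder.

async function loadProfile() {
  const response = await fetch('/api/profile'); // placeholder URL
  // The method name tells the reader what kind of value to expect back:
  // a promise that resolves with the body parsed as JSON.
  return response.json();
}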
12.2. Use ASCII names
Names must adhere to the relevant local language restrictions (for example, CSS ident rules) and should be in the ASCII range.
12.3. Consult others on naming
Consult widely on names in your APIs.
You may find good names or inspiration in surprising places.
-
What are similar APIs named on other platforms, or in popular libraries in various programming languages?
-
Ask end users and developers what they call things that your API works with or manipulates.
-
Look at other web platform specifications, and seek advice from others working in related areas of the platform.
-
Also, consider whether the names used are inclusive.
Pay particular attention to advice you receive with clearly-stated rationale based on underlying principles.
Tantek Çelik extensively researched how to name the various pieces of a URL. The editors of the URL spec have relied on this research when editing that document. [URL]
Use Web consistent names
When choosing a name for a feature or API that is also exposed in other technology stacks, prefer the Web ecosystem’s naming conventions over those of other communities.
The NFC standard uses the term media to refer to what the Web calls MIME type. In such cases, the naming of features or APIs for the purposes of Web NFC must prefer naming consistent with MIME type.
Use Inclusive Language
Use inclusive language whenever possible.
For example, you should use blocklist and allowlist instead of blacklist and whitelist, and source and replica instead of master and slave.
If you need to refer to a generic persona, such as an author or user, use the generic pronoun "they", "their", etc. For example, "A user may wish to adjust their preferences".
12.4. Use names that describe a purpose
Name things for what they do, not how they do it.
Names that reflect the purpose of their subject are more likely to be future-proof. Removing APIs from the web is difficult, so names need to outlast any implementation detail.
In particular, names should avoid the use of code names, branding, or details of the technology used to implement capabilities. These can become obsolete and might need to be replaced in the future.
The Remote Playback API was not named after one of the pre-existing, proprietary systems it was inspired by (such as Chromecast or AirPlay). Instead, general terms that describe what the API does were chosen. [REMOTE-PLAYBACK]
The WebTransport API enables the networking capabilities provided by the QUIC protocol. However, the name reflects the more general purpose of transporting data. [WebTransport] [RFC9000]
12.5. Name things consistently
Naming schemes should aim for consistency, to avoid confusion. Sets of related names should agree with each other in:
-
part of speech - noun, verb, etc.
-
negation, for example all of the names in a set should either describe what is allowed or they should all describe what is denied
Boolean properties vs. boolean-returning methods
Boolean properties, options, or API arguments that ask a question about their argument should not be prefixed with is, while methods that serve the same purpose, provided they have no side effects, should be prefixed with is to be consistent with the rest of the platform.
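A hypothetical sketch of this convention (player, its members, and 'audio/ogg' are illustrative, not real APIs):

// Purely hypothetical example object, defined inline so the sketch is
// self-contained.
const player = {
  muted: false,                        // boolean property: no "is" prefix
  isTypeSupported(type) {              // side-effect-free question: "is" prefix
    return type === 'audio/ogg';
  },
};

if (!player.muted && player.isTypeSupported('audio/ogg')) {
  // ... play the track ...
}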
Use casing rules consistent with existing APIs
Although they haven’t always been uniformly followed, through the history of web platform API design, the following rules have emerged:
 | Casing rule | Examples
---|---|---
Methods and properties (Web IDL attributes, operations, and dictionary keys) | Camel case | createAttribute(), compatMode
Classes and mixins (Web IDL interfaces) | Pascal case | NamedNodeMap, NonElementParentNode
Initialisms in APIs | All caps, except when the first word in a method or property | HTMLCollection, innerHTML, bgColor
Repeated initialisms in APIs | Follow the same rule | HTMLHRElement, RTCDTMFSender
The abbreviation of "identity"/"identifier" | Id, except when the first word in a method or property | getElementById(), pointerId, id
Enumeration values | Lowercase, dash-delimited | "no-referrer-when-downgrade"
Events | Lowercase, concatenated |
HTML elements and attributes | Lowercase, concatenated | figcaption, maxlength
JSON keys | Lowercase, underscore-delimited | short_name
ismap on img elements is reflected as the isMap property on HTMLImageElement.
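The casing difference between a content attribute and its reflected IDL property can be seen directly in script; this short sketch uses standard DOM APIs.

const img = document.createElement('img');

// HTML content attributes are lowercase and concatenated...
img.setAttribute('ismap', '');

// ...while the reflected IDL property uses camel case.
console.log(img.isMap); // true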
The rules for JSON keys are meant to apply to specific JSON file formats sent over HTTP or stored on disk, and don’t apply to the general notion of JavaScript object keys.
Repeated initialisms are particularly non-uniform throughout the platform. Infamous historical examples that violate the above rules are XMLHttpRequest and HTMLHtmlElement. Don’t follow their example; instead, always capitalize your initialisms, even if they are repeated.
Start factory method names with create or from
Factory method names should start with create or from, optionally followed by a more specific noun. If a factory method constructs a new empty object, prefix the method name with create. However, if your factory method creates an object from existing data, prefix the method name with from.

Factory methods should be an exception, not the norm, and only used for valid reasons. An example of valid usage of a factory method is when the object being created also requires association with the parent object (e.g. document.createXXX()).

Use the prefix from when there is a source object expected to be converted to a target object. For example, Foo.fromBar() would imply that a Foo object will be created using a Bar object.

A common pattern is to name generic factory methods create() or from(). Avoid inventing other prefixes and using legacy prefixes unless there is a strong reason to do so. A reason to make an exception would be to maintain consistency with existing factory methods under the same object, such as document.initXXX(). New factory methods should not follow this convention.
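A hypothetical sketch of these prefixes (Widget, widgetDocument, and their members are illustrative names, not real APIs):

// Purely hypothetical objects, defined inline so the sketch runs on its own.
class Widget {
  // "from" prefix: the source data is converted into the target object.
  static fromJSON(json) {
    return Object.assign(new Widget(), JSON.parse(json));
  }
}

const widgetDocument = {
  widgets: [],
  // "create" prefix: constructs a new, empty Widget associated with this
  // parent object, similar in spirit to document.createElement().
  createWidget() {
    const widget = new Widget();
    this.widgets.push(widget);
    return widget;
  },
};

const blank = widgetDocument.createWidget();
const restored = Widget.fromJSON('{"label":"Save"}');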
12.6. Warn about dangerous features
Where possible, mark features that weaken the guarantees provided to developers by making their names start with "unsafe", so that the weakened guarantee is more noticeable.
For example, Content Security Policy (CSP) provides protection against certain types of content injection vulnerabilities. CSP also provides features that weaken this guarantee, such as the unsafe-inline keyword, which reduces CSP’s own protections by allowing inline scripts.
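As a hypothetical sketch of how an "unsafe" prefix surfaces risk at the call site (render() and unsafeSkipSanitization are illustrative names, not a real API):

// Purely hypothetical API, defined inline so the sketch is self-contained.
function sanitize(markup) {
  // Stand-in sanitizer for the sketch; a real one would do far more.
  return markup.replace(/</g, '&lt;');
}

function render(markup, { unsafeSkipSanitization = false } = {}) {
  // The safe default sanitizes input; the "unsafe"-prefixed option makes the
  // weakened guarantee obvious wherever it appears.
  document.body.innerHTML = unsafeSkipSanitization ? markup : sanitize(markup);
}

render('<b>untrusted input</b>');                                     // safe default
render('<b>trusted template</b>', { unsafeSkipSanitization: true }); // risk is visible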
13. Other resources
Some useful advice on how to write specifications is available elsewhere:
-
Writing specifications: Kinds of statements (Ian Hickson, 2006)
-
QA Framework: Specification Guidelines (W3C QA Working Group, 2005)
Acknowledgments
This document consists of principles which have been collected by TAG members past and present during TAG design reviews . We are indebted to everyone who has requested a design review from us.
The TAG would like to thank Adrian Hope-Bailie, Alan Stearns, Aleksandar Totic, Alex Russell, Alice Boxhall, Andreas Stöckel, Andrew Betts, Anne van Kesteren, Benjamin C. Wiley Sittler, Boris Zbarsky, Brian Kardell, Charles McCathieNevile, Chris Wilson, Dan Connolly, Daniel Ehrenberg, Daniel Murphy, David Baron, Domenic Denicola, Eiji Kitamura, Eric Shepherd, Ethan Resnick, fantasai, François Daoust, Henri Sivonen, HE Shi-Jun, Ian Hickson, Irene Knapp, Jake Archibald, Jeffrey Yasskin, Jeremy Roman, Jirka Kosek, Kenneth Rohde Christiansen, Kevin Marks, Lachlan Hunt, Léonie Watson, L. Le Meur, Lukasz Olejnik, Maciej Stachowiak, Marcos Cáceres, Mark Nottingham, Martin Thomson, Matt Giuca, Matt Wolenetz, Michael[tm] Smith, Mike West, Nick Doty, Nigel Megitt, Nik Thierry, Ojan Vafai, Olli Pettay, Pete Snyder, Philip Jägenstedt, Philip Taylor, Reilly Grant, Richard Ishida, Rick Byers, Rossen Atanassov, Ryan Sleevi, Sangwhan Moon, Sergey Konstantinov, Stefan Zager, Stephen Stewart, Steven Faulkner, Surma, Tab Atkins-Bittner, Tantek Çelik, Tobie Langel, Travis Leithead, and Yoav Weiss for their contributions to this & the HTML Design Principles document which preceded it.
Special thanks to Anne van Kesteren and Maciej Stachowiak, who edited the HTML Design Principles document.
If you contributed to this document but your name is not listed above, please let the editors know so they can correct this omission.