Copyright © 2025 World Wide Web Consortium. W3C® liability, trademark and permissive document license rules apply.
This section describes the status of this document at the time of its publication. A list of current W3C publications and the latest revision of this technical report can be found in the W3C standards and drafts index at https://www.w3.org/TR/.
This document provides guidance to Web specification authors on mitigating the privacy impacts of browser fingerprinting.
The Privacy Working Group is collaborating with the Technical Architecture Group ( TAG ) on this guidance.
This document was published by the Privacy Working Group as an Editor's Draft.
Publication as an Editor's Draft does not imply endorsement by W3C and its Members.
This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.
This document was produced by a group operating under the W3C Patent Policy. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.
This document is governed by the 03 November 2023 W3C Process Document.
In short, browser fingerprinting is the capability of a site to identify or re-identify a visiting user, user agent or device via configuration settings or other observable characteristics.
A similar definition is provided by [ RFC6973 ]. A more detailed list of types of fingerprinting is included below. This document does not attempt to catalog all features currently used or usable for browser fingerprinting; however, A. Research provides links to browser vendor pages and academic findings.
Browser fingerprinting can be used as a security measure (e.g. as a means of authenticating the user). However, fingerprinting is also a potential threat to users' privacy on the Web. This document does not attempt to provide a single unifying definition of "privacy" or "personal data", but we highlight how browser fingerprinting might impact users' privacy. For example, browser fingerprinting can be used to:
The privacy implications associated with each use case are discussed below. Following from the practice of security threat model analysis, we note that there are distinct models of privacy threats for fingerprinting. Defenses against these threats differ, depending on the particular privacy implication and the threat model of the user.
There are many reasons why users might wish to remain anonymous or unidentified online, including: concerns about surveillance, personal physical safety, and concerns about discrimination against them based on what they read or write when using the Web. When a browser fingerprint is correlated with identifying information (like an email address, a recognized given name and surname, or a government-issued identifier), an application or service provider may be able to identify an otherwise pseudonymous user. The adversary and consequences of this threat will vary by the particular user and use case, but can include nation-state intelligence agencies and threats of violence or imprisonment.
Browser fingerprinting raises privacy concerns even when offline identities are not implicated. Some users may be surprised or concerned that an online party can correlate multiple visits (on the same or different sites) to develop a profile or history of the user. This concern may be heightened because (see below) it may occur without the user's knowledge or consent and tools such as clearing cookies or using a VPN do not prevent further correlation.
Browser fingerprinting also allows for tracking across origins [ RFC6454 ]: different sites may be able to combine information about a single user even where a cookie policy would block accessing of cookies between origins, because the fingerprint is relatively unique and the same for all origins.
In contrast to other mechanisms defined by Web standards for maintaining state (e.g. cookies), browser fingerprinting allows for collection of data about user activity without clear indications that such collection is happening. Transparency can be important for end users, to understand how ongoing collection is happening, but it also enables researchers, policymakers and others to document or regulate privacy-sensitive activity. Browser fingerprinting also allows for tracking of activity without clear or effective user controls: a browser fingerprint typically cannot be cleared or re-set. (See the finding on unsanctioned tracking [ TAG-UNSANCTIONED ].)
Advances in techniques for browser fingerprinting (see A. Research, below), particularly in active fingerprinting, suggest that complete elimination of the capability of browser fingerprinting by a determined adversary through solely technical means that are widely deployed is implausible. However, mitigations in our technical specifications are possible, as described below (6. Mitigations), and may achieve different levels of success (4. Feasibility).
Mitigations recommended here are simply mitigations, not solutions. Users of the Web cannot confidently rely on sites being completely unable to correlate traffic, especially when executing client-side code. A fingerprinting surface extends across all implemented Web features for a particular user agent, and even to other layers of the stack; for example, differences in TCP connections. For example, a user might employ an onion routing system such as Tor to limit network-level linkability, but still face the risk of correlating Web-based activity through browser fingerprinting, or vice versa. In order to mitigate these privacy risks as a whole, fingerprinting must be considered during the design and development of all specifications.
The TAG finding on Unsanctioned Web Tracking, including browser fingerprinting, includes a description of the limitations of technical measures and encourages minimizing and documenting new fingerprinting surface [ TAG-UNSANCTIONED ]. The best practices below detail common actions that authors of specifications for Web features can take to mitigate the privacy impacts of browser fingerprinting. The Self-Review Questionnaire documents mitigations of privacy impacts in Web features more generally that may complement these practices [ security-privacy-questionnaire ].
Passive fingerprinting is browser fingerprinting based on characteristics observable in the contents of Web requests, without the use of any code executed on the client.
Passive fingerprinting would trivially include cookies (often unique identifiers sent in HTTP requests), the set of HTTP request headers and the IP address and other network-level information. The User-Agent string [ RFC9110 ], for example, is an HTTP request header that typically identifies the browser, renderer, version and operating system. For some populations, the User-Agent and IP address will often uniquely identify a particular user's browser [ NDSS-FINGERPRINTING ].
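To make the passive case concrete, here is a minimal Node.js sketch (illustrative only; the header names are standard, but the particular combination and hashing are hypothetical) showing how much identifying material arrives with every request, before any client-side code runs:

const http = require('http');
const crypto = require('crypto');

http.createServer((req, res) => {
  // Characteristics observable without running any code on the client:
  const characteristics = [
    req.socket.remoteAddress,        // network-level information
    req.headers['user-agent'],       // browser, renderer, version, OS
    req.headers['accept-language'],  // locale configuration
    req.headers['accept-encoding']   // supported compression
  ].join('|');

  // Combining and hashing happen server-side, unobservable to the client.
  const fingerprint = crypto.createHash('sha256')
    .update(characteristics)
    .digest('hex');

  res.end(fingerprint);
}).listen(8080);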
For active fingerprinting, we also consider techniques where a site runs JavaScript or other code on the local client to observe additional characteristics about the browser, user, device or other context. In recent years, numerous techniques have (ab)used CSS features to perform fingerprinting on par with JavaScript.
Techniques for active fingerprinting might include accessing the window size, enumerating fonts, plug-ins or connected devices, evaluating performance characteristics, reading from device sensors, and rendering graphical patterns. Key to this distinction is that active fingerprinting takes place in a way that is potentially detectable on the client.
Note that in some types of active fingerprinting, characteristics are combined on the client to produce a fingerprint. In most cases, however, the characteristics are sent en masse to a server, which can combine them in unobservable ways. The latter mechanism may be detectable, but the efficacy of fingerprinting mitigation techniques is much harder to measure in this scenario.
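As an illustration of the kind of script involved (a hedged sketch; the characteristics shown are examples rather than an exhaustive or recommended set, and the collection endpoint is hypothetical), active fingerprinting code might look like this:

// Characteristics that require executing code on the client.
const characteristics = {
  screen: screen.width + 'x' + screen.height + 'x' + screen.colorDepth,
  timezoneOffset: new Date().getTimezoneOffset(),
  language: navigator.language,
  hardwareConcurrency: navigator.hardwareConcurrency,
  canvas: (() => {
    // Rendering a graphical pattern: small differences in fonts,
    // anti-aliasing and GPU behaviour show up in the pixel data.
    const c = document.createElement('canvas');
    const ctx = c.getContext('2d');
    ctx.font = '16px Arial';
    ctx.fillText('Cwm fjordbank glyphs vext quiz', 2, 18);
    return c.toDataURL();
  })()
};

// The values may be combined on the client, or sent en masse to a server
// and combined there in ways that cannot be observed from the client.
navigator.sendBeacon('/collect', JSON.stringify(characteristics));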
There are different levels of success in mitigating browser fingerprinting:
Research has shown feasible improvement in privacy protection in all of these areas. While lists of plugins remain a large fingerprinting surface, entropy has decreased over time with migration to Web APIs over plugins [ HIDING-CROWD ]. Collected data on Web users has shown mobile devices to have substantially larger anonymity sets than desktop browsers [ HIDING-CROWD ]. Research on forms of active fingerprinting has documented its use and demonstrated changes in use of those techniques as an apparent result of increased awareness [ WPM-MILLION ]. Respawning of cookies has continued, with an increasing variety of techniques, but awareness and technical responses to the issue have made the practice less widespread [ FLASHCOOKIES-2 ].
To mitigate browser fingerprinting in your specification:
The fingerprinting surface of a user agent is the set of observable characteristics that can be used in concert to identify a user, user agent or device or correlate its activity.
Data sources that may be used for browser fingerprinting include:
These data sources may be accessed directly for some features, but in many other cases they are inferred through some other observation. Timing channels, in particular, are commonly used to infer details of hardware (exactly how quickly different operations are completed may provide information on GPU capability, say), network information (via the latency or speed in loading a particular resource) or even user configuration (what items have been previously cached or what resources are not loaded). Consider the side effects of a feature and how those side effects would allow inferences of any of these characteristics.
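A hedged sketch of such a timing inference (the operation being timed and the bucket thresholds are arbitrary choices for illustration):

// Infer something about graphics/CPU capability from how long an
// operation takes, rather than from any explicit API.
const canvas = document.createElement('canvas');
canvas.width = canvas.height = 1000;
const ctx = canvas.getContext('2d');

const start = performance.now();
for (let i = 0; i < 500; i++) {
  ctx.fillRect(Math.random() * 1000, Math.random() * 1000, 50, 50);
}
const elapsed = performance.now() - start;

// The absolute timing is noisy, but even coarse buckets distinguish
// classes of devices and so add entropy to a fingerprint.
const deviceClass = elapsed < 5 ? 'fast' : elapsed < 20 ? 'medium' : 'slow';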
The Tor Browser design document [ TOR-DESIGN ] has more details on these sources and their relative priorities; this document adds environmental characteristics in that sensor readings or data access may distinguish a user, user agent or device by information about the environment (location, for example).
For each identified feature, consider the severity for the privacy impacts described above ( 1.2 Privacy impacts and threat models ) based on the following factors:
While we do not recommend specific trade-offs, these factors can be used to weigh increases to that surface ( 6.1 Weighing increased fingerprinting surface ) and suggest appropriate mitigations. Although each factor may suggest specific mitigations, in weighing whether to add fingerprinting surface they should be considered in concert. For example, access to a new set of characteristics about the user may be high entropy, but be of less concern because it has limited availability and is easily detectable. A cross-origin, drive-by-available, permanent, passive unique identifier is incompatible with our expectations for privacy on the Web.
In conducting this analysis, it may be tempting to dismiss certain fingerprinting surface in a specification because of a comparison to fingerprinting surface exposed by other parts of the Web platform or other layers of the stack. Be cautious about making such claims. First, while similar information may be available through other means, similar is not identical: information disclosures may not be exactly the same, and fingerprintability is made even more effective by combining these distinct sources. Second, where identical entropy is present, other factors of severity or availability may differ, and those factors are important for feasible mitigation. Third, the platform is neither monolithic nor static; not all other features are implemented in all cases and may change (or be removed) in the future. Fourth, circular dependencies are a danger when so many new features are under development; two specifications sometimes refer to one another in arguing that fingerprinting surface already exists. It is more useful to reviewers and implementers to consider the fingerprinting surface provided by the particular Web feature itself, with specific references where surface may be available through other features as well.
Web specification authors regularly attempt to strike a balance between new functionality and fingerprinting surface. For example, feature detection functionality allows for progressive enhancement with a small addition to fingerprinting surface; detailed enumerations of plugins, fonts or connected devices may provide a large fingerprinting surface with minimal functional support.
Authors and Working Groups determine the appropriate balance between these properties on a case-by-case basis, given their understanding of the functionality, its implementations and the severity of increased fingerprinting surface. However, given the distinct privacy impacts described above and in order to improve consistency across specifications, these practices provide some guidance:
Consider each of the severity factors described above, whether that functionality is necessary, and whether comparable functionality is feasible with less severe increases to the fingerprinting surface.
In particular, unless a feature cannot reasonably be designed in any other way, increased passive fingerprintability should be avoided. Passive fingerprinting allows for easier and widely-available identification, without opportunities for external detection or control by users or third parties.
What browsing contexts, resources and requests need access to a particular feature? Identifiers can often be scoped to have a different value in different origins. Some configuration may only be necessary in top-level browsing contexts.
Should access to this functionality be limited to where users have granted a particular permission? While excessive permissions can create confusion and fatigue, limiting highly granular data to situations where a user has already granted permission to access sensitive data largely mitigates the risk of that feature being used primarily for browser fingerprinting in "drive-by" contexts. For example, Media Capture and Streams [ mediacapture-streams ] limits access to attached microphone and camera device labels to when the user has granted permission to access a camera or microphone (while still allowing access to the number and configuration of attached cameras and microphones in all contexts, a noted increase in drive-by fingerprinting surface).
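A minimal sketch of that behavior from a page's point of view (illustrative; see [ mediacapture-streams ] for the normative requirements):

async function logDeviceLabels() {
  // Before any permission is granted, devices can be enumerated but
  // their labels are empty, limiting drive-by fingerprinting surface.
  let devices = await navigator.mediaDevices.enumerateDevices();
  console.log(devices.map(d => d.label)); // ["", "", ...]

  // Once the user grants microphone access, labels become available:
  // the user has already consented to far more sensitive data.
  await navigator.mediaDevices.getUserMedia({ audio: true });
  devices = await navigator.mediaDevices.enumerateDevices();
  console.log(devices.map(d => d.label)); // e.g. ["Built-in Microphone"]
}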
Some implementations may also limit the entropy of fingerprinting surface by not exposing different capabilities for different devices or installations of a user agent. Font lists, for example, can be limited to a list commonly available on all devices that run a particular browser or operating system (as implemented in Tor Browser, Firefox and Safari).
Where a feature does contribute to the fingerprinting surface, indicate that impact by explaining the effect (and any known implementer mitigations) and marking the relevant section with a fingerprinting icon, as this paragraph is.
<img src="https://www.w3.org/Icons/fingerprint.png"
     class="fingerprint"
     alt="This feature may contribute to browser fingerprintability.">
Specifications can mitigate against fingerprintability through standardization; by defining a consistent behavior, conformant implementations won't have variations that can be used for browser fingerprinting.
Randomization of certain browser characteristics has been proposed as a way to combat browser fingerprinting. While this strategy may be pursued by some implementations, we expect in general it will be more effective for us to standardize values or null them rather than setting a range over which they can vary. The Tor Browser design [ TOR-DESIGN ] provides more detailed information, but in short: it's difficult to measure how well randomization will work as a mitigation, and it can be costly to implement in terms of usability (varying functionality or design in unwanted ways), processing (generating random numbers) and development (including the cost of introducing new security vulnerabilities). Standardization provides the benefit of an increased anonymity set for conformant browsers with the same configuration: that is, an individual can look the same as a larger group of people rather than trying to look like a number of different individuals.
To reduce unnecessary entropy, specify aspects of API return values and behavior that don't contribute to functional differences. For example, if the ordering of return values in a list has no semantic value, specify a particular ordering (alphabetical order by a defined algorithm, for example) so that incidental differences don't expose fingerprinting surface.
Even within a single implementation, variation can occur unexpectedly due to differences in processor architecture or operating system configuration. Access to a list of system fonts via Flash or Java plugins notably returned the list sorted not in a standard alphabetical order, but in an unspecified order specific to the system. This ordering added to the entropy available from that plugin in a way that provided no functional advantage. (See Collecting System Fonts via Flash Plugins.)
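A short sketch of the positive practice (the getSupportedFormats and queryPlatformFormats names are hypothetical, invented here for illustration):

// Hypothetical feature: expose the set of supported media formats.
// queryPlatformFormats() stands in for an internal, system-ordered source.
function getSupportedFormats() {
  const formats = queryPlatformFormats(); // order varies by OS and hardware
  // The specification mandates a defined ordering (here, code unit order),
  // so incidental platform differences expose no additional entropy.
  return [...formats].sort();
}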
Standardization does not need to attempt to hide all differences between different browsers (e.g. Edge and Chrome); implemented functionality and behavior differences will always exist between different implementations. For that reason, removing User-Agent headers altogether is not a goal. However, variation in the User-Agent string that reveals additional information about the user or device has been shown to provide substantial fingerprinting surface [ BEAUTY-BEAST ].
Where a client-side API provides some fingerprinting surface, authors can still assist user agents in mitigating the privacy concerns via detectability. If client-side fingerprinting activity is to some extent distinguishable from functional use of APIs, user agent implementations may have an opportunity to prevent ongoing fingerprinting or make it observable to users and external researchers (including academics or relevant regulators) who may be able to detect and investigate the use of fingerprinting.
Following the basic principle of data minimization [ RFC6973 ], design your APIs such that a site can access (and does access by default) only the entropy necessary for particular functionality.
Authors might design an API to allow for querying of a particular value, rather than returning an enumeration of all values. User agents and researchers can then more easily distinguish between sites that query for one or two particular values (gaining minimal entropy) and those that query for all values (more likely attempting to fingerprint the browser); or implementations can cap the number of different values. For example, Tor Browser limits the number of fonts that can be queried with a browser.display.max_font_attempts preference.
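As an illustration of the difference (a hedged sketch: document.fonts.check is the CSS Font Loading API's per-font query, while enumerateAllFonts is a hypothetical enumeration-style API shown only for contrast):

// Enumeration-style design (hypothetical): hands the site the full list
// in one call, a large amount of entropy with no indication of intent.
// const allFonts = await enumerateAllFonts();

// Query-style design: the site must ask about each specific value.
// Checking one or two fonts looks like ordinary progressive enhancement;
// checking hundreds is distinguishable, and implementations can cap it.
const hasDejaVu = document.fonts.check('12px "DejaVu Sans"');
const hasNoto = document.fonts.check('12px "Noto Sans"');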
The granularity or precision of information returned can be minimized in order to reduce entropy. For example, implementations of the Battery Status API [ BATTERY-STATUS ] allowed for high precision (double-precision, or 15-17 significant digits) readings of the current battery level, which provided a short-term identifier that could be used to correlate traffic across origins or clearance of local state. Rounding off values to lower precision mitigates browser fingerprinting while maintaining functional use cases. Alternatively, providing a Boolean or a small enumeration of values might provide functionality without revealing underlying details; for example, the Boolean near property in the Proximity Sensor API [ PROXIMITY ].
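A hedged sketch of the rounding approach from the API consumer's perspective (in practice the user agent would round before exposing the value; showLowBatteryWarning is a hypothetical application function):

async function checkBattery() {
  // Where implemented, battery.level is a double between 0 and 1;
  // at full precision it acts as a short-term identifier.
  const battery = await navigator.getBattery();
  // Rounding to coarse 5% steps keeps the functional use case
  // ("warn on low battery") while shrinking the exposed entropy.
  const coarseLevel = Math.round(battery.level * 20) / 20;
  if (coarseLevel <= 0.2) {
    showLowBatteryWarning(); // hypothetical application function
  }
}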
For more information, see:
Relatedly, detectability is improved even for data sent in HTTP headers (what we would typically consider passive fingerprinting) if sites are required to request access (or "opt in") to information before it's sent.
Even for data sent in HTTP request headers, requiring servers to advertise use of particular data, publicly document a policy, or "opt in" before clients send configuration data provides the possibility of detection by user agents or researchers.
For example, Client Hints [ client-hints-infrastructure ] proposes an Accept-CH response header for services to indicate that specific hints can be used for content negotiation, rather than all supporting clients sending all hints in all requests.
This is a relatively new approach; we're still evaluating whether this provides meaningful and useful detectability.
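A minimal Node.js sketch of the opt-in pattern (the hint names are examples from the Client Hints family; consult [ client-hints-infrastructure ] and the individual hint specifications for what is actually defined):

const http = require('http');

http.createServer((req, res) => {
  // Opt in: advertise which hints this origin would like to receive.
  // Supporting clients may send them on subsequent requests; they are
  // not sent to origins that have not asked for them.
  res.setHeader('Accept-CH', 'Sec-CH-UA-Platform, Device-Memory');

  // Use a hint only if the client chose to send it.
  const platform = req.headers['sec-ch-ua-platform'];
  res.end(platform ? 'Content tailored for ' + platform : 'Default content');
}).listen(8080);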
Implementers can facilitate detectability by providing or enabling instrumentation so that users or third parties are able to calculate when fingerprinting surface is being accessed. Of particular importance for instrumentation are: access to all the different sources of fingerprinting surface; identification of the originating script; avoiding exposure that instrumentation is taking place. Beyond the minimization practice described above, these are largely implementation-specific (rather than Web specification) features.
If your specification exposes some fingerprinting surface (whether it's active or passive), some implementers (e.g. Tor Browser) are going to be compelled to disable those features for certain privacy-conscious users.
Following the principle of progressive enhancement, and to avoid further divergence (which might itself expose variation in users), consider whether some functionality in your specification is still possible if fingerprinting surface features are disabled.
Explicit hooks or API flags may be used so that browser extensions or certain user agents can easily disable specific features or aspects of a feature. For example, the origin-clean flag [ html ] allows control over whether an image canvas can be read, a significant fingerprinting surface. An event defined in a feature might specify that certain properties that describe the hardware device that triggered it may be blank.
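A sketch of code that degrades gracefully under such restrictions (illustrative; the SecurityError behavior for a non-origin-clean canvas comes from [ html ], while the fallback itself is an application choice):

function readCanvasPixels(canvas) {
  const ctx = canvas.getContext('2d');
  try {
    // Throws a SecurityError when the canvas's origin-clean flag is false,
    // or when an implementation restricts readback for privacy reasons.
    return ctx.getImageData(0, 0, canvas.width, canvas.height);
  } catch (e) {
    // Degrade gracefully: keep the rest of the feature working without
    // pixel readback rather than failing for privacy-conscious users.
    return null;
  }
}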
Features which enable storage of data on the client and functionality for client- or server-side querying of that data can increase the ease of cookie-like fingerprinting. Storage can range from large amounts of data (for example, the Web Storage API) to just a binary flag (has or has not provided a certain permission; has or has not cached a single resource).
If functionality does not require maintaining client-side state in a way that is subsequently queryable (or otherwise observable), avoid creating a new cookie-like feature. Can the functionality be accomplished with existing HTTP cookies or an existing JavaScript local storage API?
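For instance (a trivial, hypothetical sketch; the key name is arbitrary), functionality that only needs to remember a small piece of client-side state can often use the existing Web Storage API, which users can already inspect and clear:

// Reuse an existing, user-visible and user-clearable storage mechanism
// rather than defining a new queryable flag that behaves like a cookie.
localStorage.setItem('tutorialDismissed', 'true');
const tutorialDismissed = localStorage.getItem('tutorialDismissed') === 'true';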
For example, the Flash plugin's Local Shared Objects (LSOs) have often been abused to duplicate and re-spawn HTTP cookies cleared by the user [ FLASHCOOKIES ], and the single bit that indicates whether the Strict-Transport-Security header has been set for a domain has been abused in the same way [ HSTS-SUPERCOOKIE ].
Where features do require setting and retrieving local state, there are ways to mitigate the privacy impacts related to unexpected cookie-like behavior; in particular, you can help implementers prevent "permanent", "zombie", "super" or "evercookies".
Clearly note where state is being maintained and could be queried and provide guidance to implementers on enabling simultaneous deletion of local state for users. Such functionality can mitigate the threat of "evercookies" because the presence of state in one such storage mechanism can't be used to persist and re-create an identifier.
Permanent or persistent data (including any identifiers) are of particular risk because they undermine the ability for a user to clear or re-set the state of their device or to maintain different identities.
Permanent identifiers or other state (for example, identifiers or keys set in hardware) should typically not be used. Where necessary, access to such identifiers would require user permission and limitation to a particular origin. However, even heavy-weight mitigations are imperfect: explaining the implications of such permission to users may be difficult, and server-side collusion between origins is typically impossible to detect. As a result, your design should not rely on saving and later querying data on the client and expecting it to persist beyond a user clearing cookies or other local state. That is, you should not expect any local state information to be permanent or to persist longer than other local state.
Though not strictly browser fingerprinting, there are other privacy concerns regarding user tracking for features that provide local storage of data. Mitigations suggested in the Web Storage API specification include: safe-listing, block-listing, expiration and secure deletion [HTML#user-tracking].
Expressions of, and compliance with, a Do Not Track signal do not inhibit the capability of browser fingerprinting, but may mitigate some user concerns about fingerprinting, specifically around tracking as defined in those specifications [ TRACKING-DNT ] [ TRACKING-COMPLIANCE ] and as implemented by services that comply with those user preferences. That is, DNT can mitigate concerns with cooperative sites.
The use of DNT in this way typically does not require changes to other functional specifications. If your specification expects a particular behavior upon receiving a particular DNT signal, indicate that with a reference to [ TRACKING-DNT ]. If your specification introduces a new communication channel that could be used for tracking, you might wish to define how a DNT signal should be communicated.
Some browser developers maintain pages on browser fingerprinting, including: potential mitigations or modifications necessary to decrease the surface of that browser engine; different vectors that can be used for fingerprinting; potential future work. These are not cheery, optimistic documents.
What are the key papers to read here, historically or to give the latest on fingerprinting techniques? What are some areas of open research that might be relevant?
A non-exhaustive list of sites that allow the visitor to test their configuration for fingerprintability.
Many thanks to Robin Berjon for ReSpec and to Tobie Langel for Github advice; to the Privacy Interest Group and the Technical Architecture Group for review; to the Tor Browser designers for references and recommendations; and to Christine Runnegar for contributions.