Copyright © 2026 World Wide Web Consortium. W3C® liability, trademark and permissive document license rules apply.
The communities behind Decentralized Identifiers (DIDs) bring together a diverse group of contributors who have decidedly different notions of exactly what "decentralization" means.
Rather than attempting to resolve this potentially unresolvable question, we propose a rubric — a scoring guide used to evaluate performance, a product, or a project — that teaches how to evaluate a given DID Method according to one's own requirements.
This rubric presents a set of criteria which an Evaluator can apply to any DID Method based on the use cases most relevant to them. We avoid reducing the Evaluation to a single number because the criteria tend to be multidimensional and many of the possible responses are not necessarily good or bad. It is up to the Evaluator to understand how each response in each criteria might illuminate favorable or unfavorable consequences for their needs.
This rubric is a collection of criteria for creating Evaluation Reports that assist people in choosing a DID Method. While this rubric assists in the evaluation of many aspects of a DID Method, it is not exhaustive.
This section describes the status of this document at the time of its publication. A list of current W3C publications and the latest revision of this technical report can be found in the W3C standards and drafts index.
This document was published by the Decentralized Identifier Working Group as a Group Note using the Note track.
This Group Note is endorsed by the Decentralized Identifier Working Group, but is not endorsed by W3C itself nor its Members.
The W3C Patent Policy does not carry any licensing requirements or commitments on this document.
This document is governed by the 18 August 2025 W3C Process Document.
This section is non-normative.
A rubric is a tool used in academia to communicate expectations and evaluate performance. It consists of a set of criteria to be evaluated, possible responses for each criteria, and a scoring guide explaining both how to choose and interpret each response. The act of applying a rubric, which we call an Evaluation, provides a basis for self-evaluation, procurement decisions, or even marketing points. A written record of an Evaluation, which we'll call an Evaluation Report, documents how a particular subject is measured against the criteria. For students, a rubric helps to clarify how their work will be evaluated by others. For Evaluators, a rubric provides a consistent framework for investigating and documenting the performance of a subject against a particular set of criteria.
We were inspired to develop a rubric for decentralization when discussions about the requirements for decentralized identifiers, aka DIDs, led to intractable disagreement. It became clear that no single definition of "decentralized" would suffice for all of the motivations that inspired developers of DID Methods to work on a new decentralized approach to identity. Despite this challenge, two facts remained clear:
Rather than attempt to force a definition of "decentralized" that might work for many but would alienate others, the group set out to capture the measurable factors that could enable Evaluators to judge the decentralization of DID Methods based on their own independent requirements, with minimal embedded bias.
Pick the most important criteria for your use, ask each question, and select the most appropriate response. Do this for all of the DID Methods under consideration.
Each Evaluation should start with an explicit framing of the use under consideration. Are you evaluating the Method for use in Internet-of-Things (IoT)? For school children's extra-curricular activities? For international travel? The use, or set of uses, will directly affect how some of the questions are answered.
Where a given Method offers variability, such as multiple networks for the same Method, then evaluate each variant. For example, did:ethr supports Ethereum mainnet, multiple testnets and permissioned EVM-compliant networks such as Quorum. To apply a criteria to did:ethr, you will evaluate it against all the variations that matter to you. Each variation should get its own Evaluation. This applies to Level 2 Networks that can operate on multiple Level 1 Networks as well as DID Methods that directly offer support for multiple underlying DID registries.
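To make the per-variant requirement concrete, the following sketch (illustrative Python only; the data structures, the evaluator name, and the specific did:ethr variants listed are assumptions, not a normative data model) creates a separate Evaluation record for each network a Method supports.

```python
from dataclasses import dataclass, field

@dataclass
class Evaluation:
    """One Evaluation Report for a single Method variant."""
    method: str
    variant: str        # a specific network or registry the Method supports
    evaluator: str
    date: str
    responses: dict = field(default_factory=dict)  # criteria id -> chosen response

# did:ethr supports several networks; each variant gets its own Evaluation.
# The variant names below are examples, not an exhaustive or official list.
ethr_variants = ["Ethereum mainnet", "a public testnet", "Quorum (permissioned)"]

evaluations = [
    Evaluation(method="did:ethr", variant=v, evaluator="A. Evaluator", date="2026-01-15")
    for v in ethr_variants
]

for e in evaluations:
    # Each question from the selected criteria would be answered here.
    e.responses["criteria-1"] = "placeholder response"

print(f"{len(evaluations)} separate Evaluations created")
```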
When creating an Evaluation Report, we recommend noting both the Evaluator and the date of the Evaluation. Many of the criteria are subjective and all of them may evolve. Tracking who made the Evaluation and when they made it will help readers better understand any biases or timeliness issues that may affect the applicability of the Evaluation.
Be selective and acknowledge the subjective. Evaluations do not need to be exhaustive. There is no requirement to answer all the questions. Simply answer the ones most relevant to the use contemplated. Similarly, understand that any recorded Evaluation is going to represent the biases of the Evaluator in the context of their target use. Even the same Evaluator, evaluating the same Method for a different use, may come up with slightly different answers—for example, that which is economically accessible for small businesses might not be cost-effective for refugees, and that could affect how well-suited a given Method is for a specific use.
Finally, note that this particular rubric is about decentralization. It doesn't cover all of the other criteria that might be relevant to evaluating a given DID Method. There are security, privacy, and economic concerns that should be considered. We look forward to working with the community to develop additional rubrics for these other areas and encourage Evaluators to use this rubric as a starting point for their own work rather than the final say in the merit of any given Method.
In short, use this rubric to help understand if a given DID Method is decentralized enough for your needs.
To record and report an Evaluation, we recommend two possible formats, either comprehensive or comparative.
A comprehensive Evaluation applies a single set of criteria to just one Method. This set is chosen by the Evaluator; it need not be all possible criteria, but it is all relevant criteria as judged by the Evaluator.
A comparative Evaluation includes multiple Methods in the same table to easily compare and contrast two or more different Methods. This may include any number of criteria. These are the type of reports we use as examples throughout the criteria list.
In addition to the selected criteria, we recommend each report specify the Evaluator, the date of the Evaluation, and the use cases considered.
We have grouped our criteria into several categories, described below. Evaluators should consider criteria from all groups, as best fits their use cases.
Three categories cover how a given Method is governed: Rulemaking, Operations, and Enforcement. Our approach parallels the separation of authority embodied in the United States Constitution.
Rulemaking addresses who makes the rules and how. (This is the legislative branch in the US.)
Operations addresses how those rules are executed and how everyone knows that they are carried out. (This is the executive branch in the US.)
Enforcement addresses how we find and respond to rule breaking. (This is the judicial branch in the US.)
This mental model is key to understanding the criteria of each section as well as why we included some criteria and not others.
The remaining categories each cover a different area worth considering when evaluating DID Methods.
Design addresses the method as designed. In other words, the output of the rulemaking: what rules apply to this DID method?
Alternatives address the availability and quality of different implementation choices.
Adoption & Diversity covers questions related to how widely a DID Method is used.
Security influences overall trust in the ecosystem. Different DID methods offer different security guarantees, or guarantees of different strengths.
Privacy addresses the ability of a DID method to ensure various privacy mechanisms. When DIDs are used as identifiers for people, it becomes important to consider what tools a DID method offers to operate at different levels of privacy.
When evaluating the governance of DID Methods, three potentially independent layers should be considered: the specification, the network, and the registry.
For Rulemaking, the criteria should be evaluated against all three of the above layers.
For Operations, the criteria should be evaluated against the network and the registry. The specification is taken as a given (it is the core output of Rulemaking).
For Adoption, the criteria should be evaluated for each major software component: wallet, resolver, and registry.
For Alternatives, the criteria should be evaluated against the particular DID Method.
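As a rough summary of the mapping just described, the sketch below (illustrative Python only; the layer and component names come from this section, and nothing here is normative) pairs each category with the layers or components it is evaluated against.

```python
# Which layers or components each category of criteria is evaluated against,
# per the governance discussion above.
evaluation_targets = {
    "Rulemaking":   ["specification", "network", "registry"],
    "Operations":   ["network", "registry"],            # the specification is taken as a given
    "Adoption":     ["wallet", "resolver", "registry"],  # major software components
    "Alternatives": ["the DID Method itself"],
}

for category, targets in evaluation_targets.items():
    print(f"{category}: evaluate against {', '.join(targets)}")
```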
For the examples in the rest of this document we refer to a set of Methods that are familiar to the authors and exhibit interesting characteristics for Evaluation. See the tables below.
The following sources provided example evaluations for this rubric. These are not presented as objective fact, but rather as attempts by contributors to illustrate noteworthy differences using their subjective judgement. Additional contributions that flow into the registry should include a section that documents their source with an additional row in the table.
| Relative ID | Citation | Note |
|---|---|---|
| eval-1 | DID Method Rubric | Evaluations written by Joe Andrieu, for this rubric. Not explicitly funded. |
| eval-2 | DID Method Rubric | Evaluations written by Daniel Hardman, to illustrate this rubric. Not explicitly funded. |
| eval-3 | VeresOne Rubric Evaluation | Funded by Digital Bazaar under DHS SVIP contract 70RSAT20T00000029. Available at https://didevaluations.com/evals/v1.2022.03.01.pdf. |
| eval-4 | DID Web Rubric Evaluation | Funded by Digital Bazaar under DHS SVIP contract 70RSAT20T00000029. Available at https://didevaluations.com/evals/web.2022.03.01.pdf. |
| eval-5 | DID Ion Rubric Evaluation | Funded by Digital Bazaar under DHS SVIP contract 70RSAT20T00000029. Available at https://didevaluations.com/evals/ion.2022.03.01.pdf. |
| Relative ID | Citation | Note |
|---|---|---|
| usecase-1 | Long term verifiable credentials | The use of DIDs as subject identifiers for long term (life-long) verifiable credentials such as earned academic degrees. |
| usecase-2 | User authentication | The use of DIDs for authenticating users for access to system services. |
| usecase-3 | Verifiable software development | The signing of commits by developers and their verification to ensure that source code in a particular git repository is authentic. |
| Method | Specification | Network | Registry |
|---|---|---|---|
| did:peer | Peer DID Method Spec | n/a (communications can flow over any agreeable channel) | Held by each peer |
| did:git | DID git Spec | git (an open source version control system) | Any Method-compliant git repository |
| did:btcr | DID Method BTCR | Bitcoin | Bitcoin |
| did:sov | Sovrin DID Method Spec | Hyperledger Indy | A particular instance of Hyperledger Indy |
| did:ethr | ethr DID Method Spec | Ethereum | Specific smart contracts for each network. |
| did:jolo | Jolocom DID Method Specification | Ethereum | Specific smart contracts for different networks and subnetworks. |
| did:ipid | IPID DID Method | @johnnycrunch | DIDs persisted to IPFS. |
| did:web | did:web Method Specification | CCG work item | DIDs associated with control of a domain name (DNS). |
| did:indy | did:indy Method Spec | Hyperledger Indy community | Supersedes did:sov to service all Indy-based ledgers. |
| did:iota | IOTA DID Method Spec | IOTA Foundation | DIDs persisted to the IOTA tangle. |
| did:keri | The did:keri Method 0.1 | Sam Smith, et al. | DIDs that can migrate from one blockchain to another, or use no blockchain at all. |
| did:hedera | Hedera Hashgraph DID Method Specification | Hashgraph, Inc | DIDs written to a ledger that uses Hashgraph consensus as an alternative to traditional blockchain. |
| did:key | The did:key Method v0.7 | CCG work item | Use a keypair as a DID. |
| did:twit | Twit DID method specification | Gabe Cohen | Publish a DID via Twitter feed. |
| did:pkh | did:pkh Method Specification | Spruce ID | Wrap an identifier (e.g., payment address) from one of many existing blockchains in did format. |
| did:trustbloc | TrustBloc DID Method Specification 0.1 | Trustbloc | Persist DIDs via Sidetree wrapper around a permissioned ledger. |
| did:jlinc | JLINC DID Method Specification | JLincLabs | Register DIDs over the JLinc protocol for sharing data with terms and conditions. |
The term "criteria" is often treated as both a singular and a plural noun. In the singular, we say "The most important criteria is the buyer's age". This singular use of criteria has been in use for over half a century https://www.merriam-webster.com/dictionary/criterion#usage-1 In the plural we say "That proposal doesn't meet all of the criteria." Sometimes people use both in the same sentence: "Select one criteria from the list of criteria."
However, for formal use, "criteria" is more broadly accepted as plural while "criterion" is singular. By this style rule, the last sentence in the previous paragraph should be "Select one criterion from the list of criteria."
As editors of a Note published by the World Wide Web Consortium, we are torn. We would prefer to use rigorous grammar and be consistent in doing so. At the same time, we find that attempts to enforce the formal rule sometimes lead people to use the inverse, such as "Select a criteria from the list of criterion." This is the exact opposite of our desired outcome.
In our own work with implementers and developers of the DID Core specification, we have found that the singular "criteria" is readily accepted and understood and leads to no confusion when used, even alongside plural usage. Since the DID Method Rubric is, at its core, a set of criteria with ample reason to refer to, for example, "Criteria #23", we find the combined singular and plural use is cleaner (just stick with "criteria"), less confusing, and more aligned with common usage among our audience.
As such, throughout the DID Method Rubric, we use the term "criteria" to refer to both singular instances of criteria and plural sets of criteria.
Each criteria's identifier is used as the href anchor in the HTML for the criteria itself. See the sections below on identifiers and versioning. Subcomponents, such as questions, responses, relevance, and examples, are not separately versioned; any change to a subcomponent triggers an appropriate version change to its primary component.
The primary components managed by this registry are criteria for evaluating DID Methods, with as many as eight subcomponents: name, id, version, question, responses, relevance, examples, and, optionally, a source. In addition, the DID Method Rubric maintains a list of cited references: DID methods, use cases, and evaluations. The criteria are independently identified and versioned (see those sections for details) while references cited in the criteria themselves link to their full citation in the appropriate list.
Each criteria needs a human-friendly, short, descriptive name that captures the essence of the criteria as concisely as possible.
The criteria id must be explicit, unique, and persistent. See the section on Identifiers for details.
The criteria version must increment, as appropriate, any time the criteria, or any of its subcomponents, is updated. See the section on Versioning for details.
This subcomponent presents the key question for evaluating the criteria. The question MUST be a single inquiry that gets to the heart of the criteria. It should be reasonably clear to a competent professional skilled in the art how to resolve the question into one of the possible responses.
This subcomponent lists the expected responses to the question.
This subcomponent explains why this criteria is useful. Readers should be able to understand the extent to which this particular criteria is applicable to their situation.
This subcomponent lists example evaluations of different DID methods using this criteria, presented as a table for easy correlation between evaluations of different methods using the same criteria.
This subcomponent, if present, MUST list the specific external source of the criteria, if any, by referring to its entry in the Rubric’s list of Sources Cited. Source entries should include both the name (or title) of the source as well as a URL. When possible, sources should specify the precise page, section, and/or anchor that points to the specific criteria in the source document. Each source listed in the criteria MUST also be listed in the Sources Cited section.
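As an informal illustration of the structure described above (not a normative schema; the field types and the example question and responses are assumptions), a criteria entry and its subcomponents might be modeled as follows.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Criteria:
    """One primary component of the registry, with its subcomponents."""
    name: str                     # human-friendly, short, descriptive name
    id: str                       # explicit, unique, persistent (see Identifiers)
    version: str                  # incremented on any change (see Versioning)
    question: str                 # the single inquiry at the heart of the criteria
    responses: list[str]          # the expected responses to the question
    relevance: str                # why this criteria is useful
    examples: list[str]           # example evaluations, by relative ID (e.g., "eval-1")
    source: Optional[str] = None  # optional reference into the Sources Cited list

example = Criteria(
    name="Open contribution (participation)",
    id="criteria-1",
    version="1.0.0",
    question="Who can participate?",        # placeholder question, not from the Rubric
    responses=["anyone", "restricted"],     # placeholder responses
    relevance="Illustrative only.",
    examples=["eval-1"],
)
```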
Each criteria must be explicitly, uniquely, and persistently identified using incremental numbers. New criteria should use the next available increment based on the highest numbered identifier in the current publication.
Previous numbered identifiers MUST NOT be re-used, even if the criteria so identified are no longer published; this maintains the persistent linkability of previously published versions. Such “retired” criteria MAY be listed in a Prior Criteria section linking directly to the latest published Rubric containing the retired criteria; this Prior Criteria section MUST also contain an anchor with the canonical id.
The registry shall maintain an entry with the “next available” number. Additions should use the “next available” number to construct an identifier for the new criteria, and update the “next available” entry at the same time. Editors will manage any sequencing errors when accepting PRs.
Identifiers must be constructed by appending that numerical value to the string “criteria-” and incorporated in the ID of the heading element for that criteria, which when placed in the Rubric appears as something like (note the version in its own span after the permalink):
<h4 id="criteria-1">Open contribution (participation)</h4> <a href="#criteria-1">https://www.w3.org/TR/did-rubric#criteria-1</a> <span>1.0.0</span>
Criteria versions allow for incremental improvements while retaining long-term referenceability.
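The details of the version scheme are defined elsewhere in the Rubric, but the "1.0.0" in the example above suggests a semantic-versioning style. The sketch below is purely an assumption about how such a version string might be incremented when a criteria or one of its subcomponents changes; it is not a Rubric rule.

```python
def bump_version(version: str, change: str = "patch") -> str:
    """Increment a 'major.minor.patch' version string.
    Which kind of change maps to which part is an assumption, not a Rubric rule."""
    major, minor, patch = (int(p) for p in version.split("."))
    if change == "major":
        return f"{major + 1}.0.0"
    if change == "minor":
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"

# e.g., editing a criteria's question text might warrant at least a minor bump
print(bump_version("1.0.0", "minor"))  # -> 1.1.0
```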
Every example evaluation cited in the Notes entry in an evaluation MUST link to a proper citation in this section.
This is not a comprehensive listing of DID Method Rubric evaluations. It is expected that such evaluations are developed and published elsewhere, in support of various methods and use cases. Only those evaluations cited as Example Evaluations in current criteria will be listed.
Entries in this section MUST include:
Every use case cited in the Notes entry in an evaluation MUST link to an entry in this section.
A collection of the use cases referenced by different evaluations. This is not a comprehensive list (see the DID Use Cases and Requirements document for a more detailed discussion). Rather, it is an aggregation of all the use cases cited by example evaluations in current criteria.
Every DID method cited in the Example Evaluations MUST link to a proper citation in this section.
This is not a comprehensive list (see the DID Specification Registries for a current list of registered Methods). Rather, this section aggregates the methods used in example evaluations of current criteria for easy reference.
Each entry in the table SHOULD contain the following:
The Editors of the DID Method Rubric Registry MUST consider all of the policies above when reviewing additions to the registry and MUST reject registry entries if they violate any of the policies in this section. Entities registering additions can challenge rejections first with the W3C DID Working Group and then, if they are not satisfied with the outcome, with the W3C Staff. W3C Staff need not be consulted on changes to the DID Method Rubric Registry, but do have the final authority on registry contents. This is to ensure that W3C can adequately respond to time sensitive legal, privacy, security, moral, or other pressing concerns without putting an undue operational burden on W3C Staff.
Submissions to the registry that meet all the criteria listed above will be considered for inclusion; however, the editors retain the responsibility of curating the content to ensure the broadest applicability of the DID Method Rubric, with the goal of enabling effective evaluations of any DID Method for any legitimate use case.
This DID Method Rubric is one framework for evaluating DID Methods. It offers a set of criteria which can be used selectively by Evaluators to better understand and document their considerations when deciding to support or adopt a given DID Method.
This Note is a derivative work of A Rubric for Decentralization of DID Methods, a collaborative paper written at Rebooting the Web of Trust IX by Joe Andrieu, Shannon Appelcline, Amy Guy, Joachim Lohkamp, Drummond Reed, Markus Sabadello, and Oliver Terbu.