DID Method Rubric v2.0

W3C Group Note

This version:
https://www.w3.org/TR/2026/NOTE-did-rubric-20260127/
Latest published version:
https://www.w3.org/TR/did-rubric/
Latest editor's draft:
https://w3c.github.io/did-rubric/
History:
https://www.w3.org/standards/history/did-rubric/
Commit history
Editors:
Joe Andrieu (Invited Expert)
Ryan Grant (Digital Contract Design)
Daniel Hardman (Invited Expert)
Authors:
Joe Andrieu (Invited Expert)
Daniel Hardman (Invited Expert)
Shannon Appelcline
Amy Guy
Joachim Lohkamp
Drummond Reed (Evernym)
Markus Sabadello (Danube Tech)
Oliver Terbu
Feedback:
GitHub w3c/did-rubric (pull requests, new issue, open issues)
public-did-wg@w3.org with subject line [did-rubric] … message topic … (archives)

Abstract

The communities behind Decentralized Identifiers (DIDs) bring together a diverse group of contributors who have decidedly different notions of exactly what "decentralization" means.

Rather than attempting to resolve this potentially unresolvable question, we propose a rubric — a scoring guide used to evaluate performance, a product, or a project — that teaches how to evaluate a given DID Method according to one's own requirements.

This rubric presents a set of criteria which an Evaluator can apply to any DID Method based on the use cases most relevant to them. We avoid reducing the Evaluation to a single number because the criteria tend to be multidimensional and many of the possible responses are not necessarily good or bad. It is up to the Evaluator to understand how each response in each criteria might illuminate favorable or unfavorable consequences for their needs.

This rubric is a collection of criteria for creating Evaluation Reports that assist people in choosing a DID Method. While this rubric assists in the evaluation of many aspects of a DID Method, it is not exhaustive.

Status of This Document


This section describes the status of this document at the time of its publication. A list of current W3C publications and the latest revision of this technical report can be found in the W3C standards and drafts index.

This document was published by the Decentralized Identifier Working Group as a Group Note using the Note track.

This Group Note is endorsed by the Decentralized Identifier Working Group, but is not endorsed by W3C itself nor its Members.

The W3C Patent Policy does not carry any licensing requirements or commitments on this document.

This document is governed by the 18 August 2025 W3C Process Document.

1. Introduction

This section is non-normative.

1.1 Background

A rubric is a tool used in academia to communicate expectations and evaluate performance. It consists of a set of criteria to be evaluated, possible responses for each criteria, and a scoring guide explaining both how to choose and interpret each response. The act of applying a rubric, which we call an Evaluation, provides a basis for self-evaluation, procurement decisions, or even marketing points. Written records of an Evaluation, which we call an Evaluation Report, document how a particular subject is measured against the criteria. For students, a rubric helps to clarify how their work will be evaluated by others. For Evaluators, a rubric provides a consistent framework for investigating and documenting the performance of a subject against a particular set of criteria.

We were inspired to develop a rubric for decentralization when discussions about the requirements for decentralized identifiers, aka DIDs, led to intractable disagreement. It became clear that no single definition of "decentralized" would suffice for all of the motivations that inspired developers of DID Methods to work on a new decentralized approach to identity. Despite this challenge, two facts remained clear:

  1. the people invested in this work shared a common goal of reversing the problems with centralized identity systems and
  2. they also had numerous, distinct reasons for doing so.

Rather than attempt to force a definition of "decentralized" that might work for many but would alienate others, the group set out to capture the measurable factors that could enable Evaluators to judge the decentralization of DID Methods based on their own independent requirements, with minimal embedded bias.

1.2 How to apply this rubric

Pick the most important criteria for your use, ask each question, and select the most appropriate response. Do this for all of the DID Methods under consideration.

Each Evaluation should start with an explicit framing of the use under consideration. Are you evaluating the Method for use in the Internet of Things (IoT)? For schoolchildren's extracurricular activities? For international travel? The use, or set of uses, will directly affect how some of the questions are answered.

Where a given Method offers variability, such as multiple networks for the same Method, evaluate each variant. For example, did:ethr supports Ethereum mainnet, multiple testnets, and permissioned EVM-compliant networks such as Quorum. To apply a criteria to did:ethr, you will evaluate it against all the variations that matter to you, and each variation should get its own Evaluation. This applies to Level 2 Networks that can operate on multiple Level 1 Networks, as well as to DID Methods that directly offer support for multiple underlying DID registries.

When creating an Evaluation Report, we recommend noting both the Evaluator and the date of the Evaluation. Many of the criteria are subjective and all of them may evolve. Tracking who made the Evaluation and when they made it will help readers better understand any biases or timeliness issues that may affect the applicability of the Evaluation.

Be selective and acknowledge the subjective. Evaluations do not need to be exhaustive. There is no requirement to answer all the questions. Simply answer the ones most relevant to the use contemplated. Similarly, understand that any recorded Evaluation is going to represent the biases of the Evaluator in the context of their target use. Even the same Evaluator, evaluating the same Method for a different use, may come up with slightly different answers—for example, that which is economically accessible for small businesses might not be cost-effective for refugees, and that could affect how well-suited a given Method is for a specific use.

Finally, note that this particular rubric is about decentralization. It doesn't cover all of the other criteria that might be relevant to evaluating a given DID Method. There are security, privacy, and economic concerns that should be considered. We look forward to working with the community to develop additional rubrics for these other areas and encourage Evaluators to use this rubric as a starting point for their own work rather than the final say in the merit of any given Method.

In short, use this rubric to help understand if a given DID Method is decentralized enough for your needs.

1.3 Evaluation reports

To record and report an Evaluation, we recommend one of two formats: comprehensive or comparative.

A comprehensive Evaluation applies a single set of criteria to just one Method. This set is chosen by the Evaluator; it need not include every possible criteria, but it should include all criteria the Evaluator judges relevant.

A comparative Evaluation includes multiple Methods in the same table to easily compare and contrast two or more different Methods. It may include any number of criteria. This is the type of report we use for the examples throughout the criteria list.

In addition to the selected criteria, we recommend each report specify the following (a sketch of a report header follows the list):

  1. The Method(s) being evaluated
  2. A link to the Method specification
  3. The Evaluator(s)
  4. The date of the Evaluation
  5. A description of the use case(s) for which the Method is being evaluated
  6. The rubric used for the Evaluation, along with a reference to the specific version
  7. Optionally, a URL for retrieving the report
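
For instance, a minimal report header capturing these recommendations might be marked up as follows, using the same raw-HTML conventions as this registry. Every value here (method name, evaluator, dates, and URLs) is invented for illustration:

<!-- hypothetical report header; all values invented for illustration -->
<dl>
  <dt>Method</dt><dd>did:example (mainnet variant), <a href="https://example.org/did-example-spec">specification</a></dd>
  <dt>Evaluator</dt><dd>Jane Doe, Example Corp</dd>
  <dt>Evaluation date</dt><dd>2026-01-27</dd>
  <dt>Use case</dt><dd>User authentication for access to system services</dd>
  <dt>Rubric</dt><dd><a href="https://www.w3.org/TR/did-rubric/">DID Method Rubric</a> v2.0</dd>
  <dt>Report URL</dt><dd>https://example.org/reports/did-example-2026-01-27.html</dd>
</dl>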

1.4 Categories of criteria

We have grouped our criteria into several categories:

  1. Rulemaking
  2. Design
  3. Operations
  4. Enforcement
  5. Alternatives
  6. Adoption & Diversity
  7. Security
  8. Privacy

Evaluators should consider criteria from all categories, as best fit their use cases.

Three categories cover how a given Method is governed: Rulemaking, Operations, and Enforcement. Our approach parallels the separation of powers embodied in the United States Constitution.

Rulemaking addresses who makes the rules and how. (This is the legislative branch in the US.)

Operations addresses how those rules are executed and how everyone knows that they are carried out. (This is the executive branch in the US.)

Enforcement addresses how we find and respond to rule breaking. (This is the judicial branch in the US.)

This mental model is key to understanding the criteria of each section as well as why we included some criteria and not others.

The remaining categories each cover a different area worth considering when evaluating DID Methods.

Design addresses the method as designed. In other words, the output of the rulemaking: what rules apply to this DID method?

Alternatives address the availability and quality of different implementation choices.

Adoption & Diversity covers questions related to how widely a DID Method is used.

Security influences overall trust in the ecosystem. Different DID methods offer different security guarantees, or guarantees of different strengths.

Privacy addresses the ability of a DID method to ensure various privacy mechanisms. When DIDs are used as identifiers for people, it becomes important to consider what tools a DID method offers to operate at different levels of privacy.

1.5 The architecture

When evaluating the governance of DID Methods, three potentially independent layers should be considered: the specification, the network, and the registry.

For Rulemaking, the criteria should be evaluated against all three of the above layers.

For Operations, the criteria should be evaluated against the network and the registry. The specification is taken as a given (it is the core output of Rulemaking).

For Adoption, the criteria should be evaluated for each major software component: wallet, resolver, and registry.

For Alternatives, the criteria should be evaluated against the particular DID Method.

For the examples in the rest of this document, we refer to a set of Methods that are familiar to the authors and that exhibit interesting characteristics for Evaluation. See the tables below.

1.6 DID Evaluations Cited

The following sources provided example evaluations for this rubric. These are not presented as objective fact, but rather as attempts by contributors to illustrate noteworthy differences using their subjective judgement. Additional contributions that flow into the registry should document their source with an additional row in this table.

Relative ID | Citation | Note
eval-1 | DID Method Rubric Evaluations | Written by Joe Andrieu, for this rubric. Not explicitly funded.
eval-2 | DID Method Rubric Evaluations | Written by Daniel Hardman, to illustrate this rubric. Not explicitly funded.
eval-3 | VeresOne Rubric Evaluation | Funded by Digital Bazaar under DHS SVIP contract 70RSAT20T00000029. Available at https://didevaluations.com/evals/v1.2022.03.01.pdf.
eval-4 | DID Web Rubric Evaluation | Funded by Digital Bazaar under DHS SVIP contract 70RSAT20T00000029. Available at https://didevaluations.com/evals/web.2022.03.01.pdf.
eval-5 | DID Ion Rubric Evaluation | Funded by Digital Bazaar under DHS SVIP contract 70RSAT20T00000029. Available at https://didevaluations.com/evals/ion.2022.03.01.pdf.

1.7 Use Cases Referenced

Relative ID | Citation | Note
usecase-1 | Long term verifiable credentials | The use of DIDs as subject identifiers for long term (life-long) verifiable credentials such as earned academic degrees.
usecase-2 | User authentication | The use of DIDs for authenticating users for access to system services.
usecase-3 | Verifiable software development | The signing of commits by developers, and their verification, to ensure that source code in a particular git repository is authentic.

1.8 Methods Considered

Method | Specification | Network | Registry
did:peer | Peer DID Method Spec | n/a (communications can flow over any agreeable channel) | Held by each peer
did:git | DID git Spec | git (an open source version control system) | Any Method-compliant git repository
did:btcr | DID Method BTCR | Bitcoin | Bitcoin
did:sov | Sovrin DID Method Spec | Hyperledger Indy | A particular instance of Hyperledger Indy
did:ethr | ethr DID Method Spec | Ethereum | Specific smart contracts for each network
did:jolo | Jolocom DID Method Specification | Ethereum | Specific smart contracts for different networks and subnetworks
did:ipid | IPID DID Method | @johnnycrunch | DIDs persisted to IPFS
did:web | did:web Method Specification | CCG work item | DIDs associated with control of a domain name (DNS)
did:indy | did:indy Method Spec | Hyperledger Indy community | Supersedes did:sov to service all Indy-based ledgers
did:iota | IOTA DID Method Spec | IOTA Foundation | DIDs persisted to the IOTA tangle
did:keri | The did:keri Method 0.1 | Sam Smith, et al. | DIDs that can migrate from one blockchain to another, or use no blockchain at all
did:hedera | Hedera Hashgraph DID Method Specification | Hashgraph, Inc | DIDs written to a ledger that uses Hashgraph consensus as an alternative to traditional blockchain
did:key | The did:key Method v0.7 | CCG work item | Use a keypair as a DID
did:twit | Twit DID method specification | Gabe Cohen | Publish a DID via Twitter feed
did:pkh | did:pkh Method Specification | Spruce ID | Wrap an identifier (e.g., payment address) from one of many existing blockchains in DID format
did:trustbloc | TrustBloc DID Method Specification 0.1 | Trustbloc | Persist DIDs via Sidetree wrapper around a permissioned ledger
did:jlinc | JLINC DID Method Specification | JLincLabs | Register DIDs over the JLinc protocol for sharing data with terms and conditions

1.9 Criteria and Criterion

The term "criteria" is often treated as both a singular and a plural noun. In the singular, we say "The most important criteria is the buyer's age". This singular use of criteria has been in use for over half a century https://www.merriam-webster.com/dictionary/criterion#usage-1 In the plural we say "That proposal doesn't meet all of the criteria." Sometimes people use both in the same sentence: "Select one criteria from the list of criteria."

However, for formal use, "criteria" is more broadly accepted as plural while "criterion" is singular. By this style rule, the last sentence in the previous paragraph should be "Select one criterion from the list of criteria."

As editors of a Note published by the World Wide Web Consortium, we are torn. We would prefer to use rigorous grammar and be consistent in doing so. At the same time, we find that attempts to enforce the formal rule sometimes lead people to use the inverse, such as "Select a criteria from the list of criterion." This is the exact opposite of our desired outcome.

In our own work with implementers and developers of the DID Core specification, we have found that the singular "criteria" is readily accepted and understood and leads to no confusion when used, even alongside plural usage. Since the DID Method Rubric is, at its core, a set of criteria with ample reason to refer to, for example, "Criteria #23", we find the combined singular and plural use is cleaner (just stick with "criteria"), less confusing, and more aligned with common usage among our audience.

As such, throughout the DID Method Rubric, we use the term "criteria" to refer to both singular instances of criteria and plural sets of criteria.

2. Registration Process

This registry — the DID Method Rubric Registry — provides a public vehicle for publishing updated DID Method Rubric criteria. To add or update a criteria, a submitter MUST open a modification request as a pull request on the repository where this registry is hosted. The modification request MUST adhere to the following policies:

2.1 General Requirements

  1. If there are copyright, trademark, or intellectual property rights concerns, the addition and use MUST be authorized in writing by the intellectual property rights holder under a F/RAND license. Examples include criteria that use trademarked brand names, quote titles or excerpts from copyrighted works, or require patented technology to perform the evaluation.
  2. Additions MUST NOT create unreasonable legal, security, moral, or privacy issues that may result in direct or indirect harm to others, including violations of the W3C Code of Ethics and Professional Conduct (https://www.w3.org/Consortium/cepc/). Examples of unacceptable additions include any content containing hate speech, unprofessional or unethical attacks, proprietary information, or any personal data or personally identifiable information (excepting identification details of the submitter).
  3. All criteria MUST be uniquely identified and versioned to ensure permanent linkability with version trackability. Those identifiers MUST be present as an href anchor in the HTML for the criteria itself. See the sections below on identifiers and versioning. Subcomponents, such as questions, responses, relevance, and examples, are not separately versioned; any change to a subcomponent triggers an appropriate version change to its primary component.
  4. Use cases, methods, and evaluations cited within a criteria MUST include a local link to a full citation in the relevant citation section: Use Cases Referenced, Methods Evaluated, and Evaluations Cited, respectively.
  5. Rubric criteria and associated metadata in the DID Method Rubric Registry MAY be updated or removed at the editors’ discretion, including the addition or removal of categories of metadata such as Use Cases Referenced or Implementations of Note.

2.2 Component Requirements

The primary components managed by this registry are criteria for evaluating DID Methods, with as many as eight subcomponents: name, id, version, question, responses, relevance, examples, and, optionally, a source. In addition, the DID Method Rubric maintains a list of cited references: DID methods, use cases, and evaluations. The criteria are independently identified and versioned (see those sections for details) while references cited in the criteria themselves link to their full citation in the appropriate list.

2.2.1 Criteria

  1. Updates to criteria MUST follow the following versioning guidelines.
  2. Proposed criteria without at least three independent example evaluations MUST have the word “PROVISIONAL” as the last term in its name. Once at least three independent example evaluations exist, the word “PROVISIONAL” SHOULD be removed.
  3. Accepted (non-provisional) criteria MUST have at least three example evaluations, and generally SHOULD have no more example evaluations than there are possible responses to its question. Each example should distinctly illustrate one possible response to the criteria's question.
  4. To be considered, criteria MUST have the following subcomponents:
    1. Name
    2. Identifier
    3. Version
    4. Question
    5. Possible Responses
    6. Relevance
    7. Example Evaluations
  5. Additionally, criteria MAY have the following optional subcomponents:
    1. Source

2.2.2 Criteria.Name

Each criteria needs a human-friendly, short, descriptive name that captures the essence of the criteria as concisely as possible.

2.2.3 Criteria.Id

The criteria id must be explicit, unique, and persistent. See the section on Identifiers for details.

2.2.4 Criteria.Version

The criteria version must increment, as appropriate, any time the criteria, or any of its subcomponents, is updated. See the section on Versioning for details.

2.2.5 Criteria.Question

This subcomponent presents the key question for evaluating the criteria. The question MUST be a single inquiry that gets to the heart of the criteria. It should be reasonably clear to a competent professional skilled in the art how to resolve the question into one of the possible responses.

2.2.6 Criteria.Responses

This subcomponent lists the expected responses to the question. A sketch of labeled responses follows the list.

  1. Responses are either labeled or open ended.
  2. Labeled responses MUST include a label and a meaning for that label.
  3. Labels for a response MUST start with “A” in each criteria and progress sequentially through the English alphabet.
  4. Open ended responses specify how an evaluator should fill in the response.
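
For instance, a criteria's labeled responses might be marked up as a definition list, per the requirements above. The labels' meanings below are invented and do not come from any actual criteria:

<!-- hypothetical labeled responses; meanings invented for illustration -->
<dl>
  <dt>A</dt><dd>Anyone may participate, permissionlessly.</dd>
  <dt>B</dt><dd>Anyone meeting objective, published requirements may participate.</dd>
  <dt>C</dt><dd>Participation requires approval by a single gatekeeping authority.</dd>
</dl>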

2.2.7 Criteria.Relevance

This subcomponent explains why this criteria is useful. Readers should be able to understand the extent to which this particular criteria is applicable to their situation.

2.2.8 Criteria.Examples

This section lists example evaluations of different DID methods using this criteria, presented as a table for easy correlation between evaluations of different methods using the same criteria.

  1. Each example MUST have the following entries, each as its own column(s) in the examples table.
    1. Method
    2. Responses (one column for each element)
    3. Notes
  2. Some criteria MAY require an additional "Referent" entry, in its own column in the examples table.
  3. Example Evaluations for a given criteria SHOULD highlight distinct responses. That is, each evaluation should have a different set of responses from the other example evaluations. The point of the examples is to illustrate how different DID methods score differently.
  4. The editors will curate the list of proposed examples to provide a best effort illustration of each criteria, with consideration to highlighting a broad corpus of methods.
  5. Criteria with fewer than 3 example evaluations MUST be marked “PROVISIONAL”
  6. Criteria SHOULD have no more example evaluations than there are possible responses.

2.2.8.1 Criteria.Examples.Method

  1. Example evaluations MUST specify the evaluated method in the first column, using a relative link to the method’s entry in the Methods Evaluated section. The entry in the Methods Evaluated table MUST conform to the requirements in that section.
  2. The title of this column MUST be “Method”.
  3. The Method entry MUST uniquely identify a specific DID Method specification suitable for evaluation, including any specified variant such as testnet, mainnet, etc.

2.2.8.2 Criteria.Examples.Referent

  1. Evaluations MAY refer to open ended responses from other evaluations. If present, this referent MUST be listed as a possible response in another criteria’s evaluation. In this manner, a first criteria can elicit structural elements to be separately evaluated in subsequent criteria. This “structure-variable” pattern allows the Rubric to support criteria that match the structural complexity of the Method in question.
  2. If present, the Referent MUST be in the second column.
  3. The column title MUST be appropriate for the referred structure, e.g., “Layer” or “Governing body”, as understood by the criteria that establishes the structural elements.

2.2.8.3 Criteria.Examples.Responses

  1. Example evaluations MUST include at least one response, and MAY include multiple columns when appropriate. Each column allows for responses to particular system elements such as “Specification”, “Network”, and “Registry”.
  2. Column names such as "Specification", "Network", and "Registry" MAY be abbreviated as "Spec.", "Net.", and "Reg.".
  3. The number and labels for response columns MUST be proposed by the initial PR and will be finalized when the criteria moves from PROVISIONAL to accepted.
  4. Each column MUST contain an entry that communicates the evaluator’s judgment of the best response to the question for this method for that element (Spec., Net., Reg.).
  5. For improved comparability across different methods, responses SHOULD be based on the possible responses listed in the criteria. However, evaluators MAY provide any appropriate response, based on their own judgment.
  6. Responses MAY be a variant of the Possible Responses, to capture a nuance beyond those listed. For example, one might append a “+” or a “-”, e.g., A+, or even provide two different responses when different situations merit a distinction, e.g., A/D. The reasoning behind these variants SHOULD be explained in the Notes column.
  7. A response of “n/a” (short for Not Applicable) SHOULD be used if the criteria doesn’t apply to the evaluated method.

2.2.8.4 Criteria.Examples.Notes

  1. This entry SHOULD include any details that help explain why that particular response was chosen. It MAY be blank. However, the column MUST be included.
  2. The Notes entry MUST specify a distinct use case listed explicitly in this registry in the Use Cases Referenced section, using the label from that section.
    1. The use case label MUST be delimited by square brackets.
    2. The label text itself MUST be a hyperlink to the entry in the Use Cases Referenced section, for example [UC.1], where “UC.1” links to “#usecase-1” in the current document.
  3. The Notes entry MUST identify a published evaluation by reference to an entry in the Evaluations Cited section of the Rubric, using the label from that section.
    1. The evaluation label MUST be delimited by square brackets.
    2. The label text itself MUST be a hyperlink to the entry in the Evaluations Cited section, for example [E.1], where “E.1” links to “#eval-1” in the current document.
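
Putting the column requirements of sections 2.2.8.1 through 2.2.8.4 together, a minimal examples table might look like the following sketch. The method anchor, responses, and notes are hypothetical; only the column structure follows the requirements above:

<!-- hypothetical example evaluation row; all values invented -->
<table>
  <tr><th>Method</th><th>Spec.</th><th>Net.</th><th>Reg.</th><th>Notes</th></tr>
  <tr>
    <td><a href="#method-example">did:example</a></td>
    <td>A</td>
    <td>B+</td>
    <td>n/a</td>
    <td>The “+” captures a nuance explained here. [<a href="#usecase-1">UC.1</a>] [<a href="#eval-1">E.1</a>]</td>
  </tr>
</table>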

2.2.9 Criteria.Source

This subcomponent, if present, MUST identify the specific external source of the criteria by referring to its entry in the Rubric’s list of Sources Cited. Source entries should include both the name (or title) of the source as well as a URL. When possible, sources should specify the precise page, section, and/or anchor that points to the specific criteria in the source document. Each source listed in a criteria MUST also be listed in the Sources Cited section.

2.3 Identifiers

Each criteria must be explicitly, uniquely, and persistently identified using incremental numbers. New criteria should use the next available increment based on the highest numbered identifier in the current publication.

Previous numbered identifiers MUST NOT be re-used, even if the criteria so identified are no longer published; this maintains the persistent linkability of previously published versions. Such “retired” criteria MAY be listed in a Prior Criteria section linking directly to the latest published Rubric containing the retired criteria; this Prior Criteria section MUST also contain an anchor with the canonical id.

The registry shall maintain an entry with the “next available” number. Additions should use the “next available” number to construct an identifier for the new criteria, and update the “next available” entry at the same time. Editors will manage any sequencing errors when accepting PRs.

Identifiers MUST be constructed by appending that numerical value to the string “criteria-” and incorporated in the ID of the heading element for that criteria. When placed in the Rubric, this appears as something like the following (note the version in its own span after the permalink):

Example 1: Example criteria numbering
<h4 id="criteria-1">Open contribution (participation)</h4>
<a href="#criteria-1">https://www.w3.org/TR/did-rubric#criteria-1</a> <span>1.0.0</span>
This allows both permanent links based on the publication date of a given Rubric, such as https://www.w3.org/TR/2021/NOTE-did-rubric-20210826/#criteria-32, and version-free links that resolve to the latest version of the criteria. For example, https://www.w3.org/TR/did-rubric/#criteria-32 points to the current criteria-32 in the latest published document. When that criteria is retired, the same version-free URL points to its entry in the Prior Criteria list, which in turn links to its last versioned publication, e.g., https://www.w3.org/TR/2021/NOTE-did-rubric-20210826/#criteria-32.

2.4 Versioning

Criteria versions allow for incremental improvements while retaining long-term referenceability.

  1. Versions contain a Major, Minor, and Patch number, based on SemVer 2.0 (a sketch of version bumps follows this list).
  2. Every new criteria has a version of 1.0.0, even if PROVISIONAL.
  3. Updates to criteria MUST increment the appropriate version number based on the scope of change:
    1. The Major number MUST increment if, and only if, the criteria is fundamentally changed in a way that invalidates existing evaluations. This could be a material change to any subcomponent of the criteria, including removing or changing Possible Responses.
    2. The Minor number MUST increment if, and only if, new responses are added without invalidating existing responses. It is understood that the evaluator of an existing evaluation might choose the new response were they evaluating the criteria anew; however, as long as prior evaluations remain valid, a Minor increment SHOULD be applied.
    3. The Patch number MUST increment if, and only if, the change is “editorial”, defined as clarifying the original intent of the criteria WITHOUT invalidating existing evaluations.
  4. As long as a given criteria is an evolution of essentially the same consideration, it retains the original identifier, even as its version is updated throughout its lifetime.
  5. Criteria that address fundamentally different considerations MUST be assigned a new criteria number rather than iterating an existing version number.
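
Applied to the HTML conventions shown in Example 1, only the version span changes when a criteria is updated. A sketch, using a hypothetical criteria number and hypothetical edits:

<!-- hypothetical criteria shown at three points in its lifetime -->
<h4 id="criteria-99">Example criteria name</h4>
<a href="#criteria-99">https://www.w3.org/TR/did-rubric#criteria-99</a> <span>1.0.0</span>
<!-- after adding a new possible response E without invalidating responses A-D: -->
<span>1.1.0</span>
<!-- after an editorial clarification of the question: -->
<span>1.1.1</span>
<!-- a change that invalidated existing evaluations would instead yield 2.0.0 -->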

2.5 Metadata Tables

The registry maintains several tables of metadata, which aggregate references cited in Criteria. Each table presents an internally referenceable label, an anchor tag, and the necessary details for its contents.

2.5.1 Evaluations Cited

Every evaluation cited in the Notes entry of an example evaluation MUST link to a proper citation in this section.

This is not a comprehensive listing of DID Method Rubric evaluations. It is expected that such evaluations are developed and published elsewhere, in support of various methods and use cases. Only those evaluations cited as Example Evaluations in current criteria will be listed.

Entries in this section MUST include:

  1. Evaluators: The name and contact information for the parties accepting responsibility for the evaluation. It MAY include each evaluator’s affiliation at the time of the evaluation.
  2. Evaluation Date: The date the evaluation was performed.
  3. Funding: The source, if any, of funds explicitly paid to perform the evaluation.
  4. Use Cases: One or more use cases listed in the Use Cases Referenced section of this registry, using the label from that section.
    1. The use case label MUST be delimited by square brackets.
    2. The label text itself MUST be a hyperlink to the entry in the Use Cases Referenced section, for example [UC.1], where “UC.1” links to “#usecase-1” in the current document.
  5. Report URL: A permalink where readers can read or download the published evaluation.
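
A hypothetical entry satisfying these requirements, sketched in the three-column layout of the Evaluations Cited table (all names, dates, and URLs below are invented):

<!-- hypothetical Evaluations Cited entry; all values invented -->
<tr id="eval-99">
  <td>eval-99</td>
  <td>Example Method Rubric Evaluation</td>
  <td>Evaluators: Jane Doe (Example Corp), jane.doe@example.org.
      Evaluation date: 2026-01-15. Funding: not explicitly funded.
      Use cases: [<a href="#usecase-2">UC.2</a>].
      Report: <a href="https://example.org/evals/example.2026.01.15.pdf">permalink</a>.</td>
</tr>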

2.5.2 Use Cases Referenced

Every use case cited in the Notes entry of an example evaluation MUST link to an entry in this section.

This section collects the use cases referenced by different evaluations. It is not a comprehensive list (see the DID Use Cases and Requirements document for a more detailed discussion); rather, it is an aggregation of all the use cases cited by example evaluations in current criteria.

  1. This section MUST list all use cases cited by any example evaluation for current criteria.
  2. Use cases MUST specify a label for use by citations within the registry.
  3. The label MUST be of the form UC.x, where “x” is a unique, incremental number within the Use Cases Referenced section.
  4. Each use case MUST have a unique name and an illustrative description of a value-creating use of DIDs.
  5. Use cases SHOULD identify unique contexts or situations not covered by other use cases in the registry.

2.5.3 Methods Evaluated

Every DID method cited in the Example Evaluations MUST link to a proper citation in this section.

This is not a comprehensive list (see the DID Specification Registries for a current list of registered Methods). Rather, this section aggregates the methods used in example evaluations of current criteria for easy reference.

Each entry in the table SHOULD contain the following (a sketch follows the list):

  1. A label for the DID method within the registry.
  2. An anchor tag, so that the entry in the table can be linked by fragment from example evaluations.
  3. Columns for the Specification, the Network, and the Registry (SNR) evaluated with this label.
  4. In the case that all three SNR entries would be identical for different rows in the same table, subsequent entries MAY refer to the prior entry by Method (with link), along with a description of the difference, in a single column spanning all three SNR columns.
  5. Each SNR entry SHOULD uniquely describe the component evaluated.
    1. Specification identifies the DID Method specification under evaluation, with a hyperlink to the specification (as of the time of the evaluation).
    2. Network identifies the communications mechanism through which DID operations (create, read, update, and deactivate) are communicated.
    3. Registry unambiguously identifies the state management substrate of the method’s Verifiable Data Registry.
  6. Each SNR entry SHOULD include a hyperlinked URL when additional details are available.
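
For instance, the spanning shortcut described in item 4 might be marked up as follows; the method names, anchors, and URLs are hypothetical:

<!-- hypothetical Methods Evaluated entries; all values invented -->
<tr id="method-example-a">
  <td>did:example-a</td>
  <td><a href="https://example.org/did-example-spec">Example DID Method Spec</a></td>
  <td>Example network</td>
  <td>Example registry</td>
</tr>
<tr id="method-example-b">
  <td>did:example-b</td>
  <td colspan="3">Same Specification, Network, and Registry as
      <a href="#method-example-a">did:example-a</a>; differs only in the method name reserved.</td>
</tr>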

2.6 Editorial Acceptance & Publication

The Editors of the DID Method Rubric Registry MUST consider all of the policies above when reviewing additions to the registry and MUST reject registry entries if they violate any of the policies in this section. Entities registering additions can challenge rejections first with the W3C DID Working Group and then, if they are not satisfied with the outcome, with the W3C Staff. W3C Staff need not be consulted on changes to the DID Method Rubric Registry, but do have the final authority on registry contents. This is to ensure that W3C can adequately respond to time-sensitive legal, privacy, security, moral, or other pressing concerns without putting an undue operational burden on W3C Staff.

Submissions to the registry that meet all the criteria listed above will be considered for inclusion; however, the editors retain the responsibility of curating the content to ensure the broadest applicability of the DID Method Rubric, with the goal of enabling effective evaluations of any DID Method for any legitimate use case.

3. The criteria

4. Conclusion

This DID Method Rubric provides one framework for evaluating DID Methods. It offers a set of criteria which can be used selectively by Evaluators to better understand and document their considerations when deciding whether to support or adopt a given DID Method.

A. Terminology

Evaluator
The individual or organization applying this rubric to evaluate a DID Method.
Evaluation
The process of, and analysis resulting from, answering the question in each selected criteria for a given DID Method. An Evaluation could refer to deciding the answer to a single criteria or to a "complete" set, where completeness is in the judgment of the Evaluator.
Evaluation Report
The output of an Evaluation. The written documentation of the result of evaluating one or more DID Methods against this rubric. In practice, this is a completed report template, with answers for all of the criteria selected by the Evaluator.
Network
The channel that enables communication of DID Method operations. E.g., for BTCR, any of the main bitcoin networks can be used (mainnet, testnet, etc.). For did:ethr, the network is any EVM-compliant network (each DID specifies one and only one such network).
Registry
The instantiated storage and integrated business logic that record the effect of DID Method operations. In did:ethr, the registry is a smart contract. In did:btcr, the registry is the specific bitcoin network (the network and the registry are the same).
you
Throughout this document we use the second-person pronoun "you" to refer to Evaluators of a DID Method applying this Rubric.

B. Possible additional criteria

B.1 Maturity

  1. How long has the specification been published?
    1. Just a concept sketch
    2. A complete draft...
    3. ...
    4. Published as a fixed, recommended specification
    5. [Published for X years]
  2. How mature is the entity that controls the specification?
  3. How long has the specification been in live usage?

B.2 Cryptography

  1. Can the individual use their own cryptographic material for key generation without sharing secrets?
  2. Can DID Controllers specify their own cryptographic suite for key generation / signing / hashing / etc.?
  3. Can DID Controllers specify ANY cryptographic suite for key generation / signing / hashing / etc.?
  4. Do DID Controllers have cryptographically provable control over DID Documents?
  5. Are all registry transactions publicly inspectable and cryptographically verifiable?
  6. Does the Method support specific cryptographic capabilities?
    1. Multi-sig
    2. Shamir secret sharing
    3. HD Keys
    4. Object capabilities
  7. Does the Method provide a cryptographic DID, or does it try to provide a human-readable name?
  8. Do DIDs with a few random substitutions result in different valid DIDs, or is there cryptographic error correction to identify transmission errors and typosquatting?
  9. Can the Method-specific-id be generated without the use of a per-Method centralized registry service (as required in the DID specification [DID-CORE])?

B.3 Fiduciary commitments

  1. Do operators of resolvers accept fiduciary responsibility to users? Do any?
  2. Do operators of registry nodes accept fiduciary responsibility to users? Do any?
  3. Do the parties in charge of governance accept fiduciary responsibility to users? Do any?
  4. Do wallet creators & maintainers accept fiduciary responsibility to users? Do any?

B.4 Reliable recovery

  1. Are there mechanisms to recover from key loss?
  2. Are there non-administrative mechanisms to recover from key loss?
  3. Are there cryptographically robust mechanisms for key recovery that allow individuals to select specific advocates or stewards?

B.5 Substitutability

  1. Are the DIDs portable to other Methods?
  2. Are the DIDs portable to multiple registries?
  3. Does the Method allow DID Controllers to specify where the DID Document resides?
  4. Does the Method allow DID Controllers to specify wherever they want the DID Document to reside?

B.6 Revocation / deactivation / deletion

  1. Is it possible to provably remove a DID from the system (and all nodes)?
  2. Is it possible to provably remove a DID Document from the system (and all nodes)?
  3. Are revocations and deactivations provably documented?
  4. Do revocations and deactivations allow for publicly visible explanations?
  5. Can cryptographic material be selectively revoked or rotated?

Resolution

  6. Are all DIDs globally resolvable to a definitive, provably current DID Document?
  7. Are private DIDs (which are NOT globally resolvable) supported?
  8. Can access to DID Documents be limited to authorized parties?
  9. Can you get older versions by version number and by timestamp?
  10. Can you get cryptographic proof of the history of changes to a given DID Document?

B.7 Costs

  1. How much does DID creation and key rotation cost a DID Controller?
  2. Must individual DIDs be written to the registry?
  3. How much do changes to a DID Document cost the DID Controller?
  4. Do changes to DID Documents require updating the registry?
  5. What is the total cost of ownership for a typical DID and DID Document?
  6. Are there free versions of wallets?
  7. Are there free versions of registry software?
  8. Are there free versions of resolvers?

B.8 Censorship resistance

  1. Is there a single legal or natural entity, or set of known entities, who can be targeted with intent to manipulate the operation or governance of the Method?
  2. Is participation in operation of the Method dependent on identification using traditional legal credentials, such as a birth certificate, driver's license, or passport?
  3. Are activities on the registry traceable to real world individuals?
  4. Can DIDs be disabled or revoked by an administrator?
  5. Can DID Documents be edited or removed by an administrator?

B.9 Uncategorized

  1. Is the Method name definitive? (there are no known alternative forks or namespace collisions)
  2. Is the Method resilient against registry forks?
  3. Permissioned: governed/operation vs. use/creation
  4. Open source: multiple independent implementations
  5. Open standard
  6. Does the individual create and control?
  7. Can the individual choose how keys are managed?
  8. Does the issuer/controller have a fiduciary responsibility to the DID Controller?
  9. Does it support social recovery?
  10. What does a single DID cost? TCO
  11. Is resolution observable?
  12. Are stealth DIDs supported?
  13. Is deactivation publicly documented?
  14. After control is lost, can other people deactivate?
  15. Possible confusion between implementations and DID Methods
  16. Does it support HD Keys?
  17. Are transactions publicly cryptographically verifiable?
  18. Are DIDs permanent? (unremovable: still able to be deactivated, but all traces can never vanish)
  19. Can you get the latest version and older versions? Provable order of versions?
  20. Is the Method published?
  21. Is that Method independently implementable?
  22. did:web and the .onion TLD (truly decentralized), RFC 6761 and RFC 7686
  23. Is there a centralized database?
  24. Is its blockchain byzantine fault tolerant?
  25. Does a single party control a majority of the source of truth? (Under what conditions can the DID controller lose capability?)
  26. If you give control away, can you get it back?

C. Acknowledgements

This Note is a derivative work of A Rubric for Decentralization of DID Methods, a collaborative paper written at Rebooting the Web of Trust IX by Joe Andrieu, Shannon Appelcline, Amy Guy, Joachim Lohkamp, Drummond Reed, Markus Sabadello, and Oliver Terbu.

D. References

D.1 Informative references

[DID-CORE]
Decentralized Identifiers (DIDs) v1.0. Manu Sporny; Amy Guy; Markus Sabadello; Drummond Reed. W3C. 19 July 2022. W3C Recommendation. URL: https://www.w3.org/TR/did-core/
[DID-DEC-RUBRIC]
A Rubric for Decentralization of DID Methods. Joe Andrieu; Shannon Appelcline; Amy Guy; Joachim Lohkamp; Drummond Reed; Markus Sabadello; Oliver Terbu. Rebooting the Web of Trust IX. URL: https://github.com/WebOfTrustInfo/rwot9-prague/blob/master/final-documents/decentralization-rubric.pdf