Copyright © 2023 World Wide Web Consortium . W3C ® liability , trademark and permissive document license rules apply.
This specification describes mechanisms for ensuring the authenticity and integrity of Verifiable Credentials and similar types of constrained digital documents using cryptography, especially through the use of digital signatures and related mathematical proofs.
This section describes the status of this document at the time of its publication. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at https://www.w3.org/TR/.
This document was published by the Verifiable Credentials Working Group as an Editor's Draft.
Publication as an Editor's Draft does not imply endorsement by W3C and its Members.
This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.
This document was produced by a group operating under the W3C Patent Policy . W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy .
This document is governed by the 12 June 2023 W3C Process Document .
This section is non-normative.
This specification describes mechanisms for ensuring the authenticity and integrity of Verifiable Credentials and similar types of constrained digital documents using cryptography, especially through the use of digital signatures and related mathematical proofs. Cryptographic proofs enable functionality that is useful to implementors of distributed systems. For example, proofs can be used to make statements that can be shared without loss of trust, because their authorship and integrity can be independently verified.
This section is non-normative.
The operation of Data Integrity is conceptually simple. To create a cryptographic proof, the following steps are performed: 1) Transformation, 2) Hashing, and 3) Proof Generation.
Transformation is a process described by a transformation algorithm that takes input data and prepares it for the hashing process. One example of a possible transformation is to take a record of people's names that attended a meeting, sort the list alphabetically by the individual's family name, and rewrite the names on a piece of paper, one per line, in sorted order. Examples of transformations include canonicalization and binary-to-text encoding.
Hashing is a process described by a hashing algorithm that calculates an identifier for the transformed data using a cryptographic hash function . This process is conceptually similar to how a phone address book functions, where one takes a person's name (the input data) and maps that name to that individual's phone number (the hash). Examples of cryptographic hash functions include SHA-3 and BLAKE-3 .
Proof Generation is a process described by a proof serialization algorithm that calculates a value that protects the integrity of the input data from modification or otherwise proves a certain desired threshold of trust. This process is conceptually similar to the way a wax seal can be used on an envelope containing a letter to establish trust in the sender and show that the letter has not been tampered with in transit. Examples of proof serialization functions include digital signatures and proofs of stake .
To verify a cryptographic proof, the following steps are performed: 1) Transformation, 2) Hashing, and 3) Proof Verification.
During verification, the transformation and hashing steps are conceptually the same as described above.
Proof Verification is a process that is described by a proof verification algorithm that applies a cryptographic proof verification function to see if the input data can be trusted. Possible proof verification functions include digital signatures and proofs of stake .
This specification details how cryptographic software architects and implementers can package these processes together into things called cryptographic suites and provide them to application developers for the purposes of protecting the integrity of application data in transit and at rest.
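To make these three steps concrete, the following non-normative sketch (in TypeScript, using Node.js's built-in crypto module) applies an illustrative transformation (serialization with lexicographically sorted top-level keys, standing in for a real canonicalization such as JCS), hashes the result with SHA-256, and then generates and verifies an Ed25519 signature. The function names and the simplistic transformation are illustrative assumptions, not part of any registered cryptographic suite.

import { createHash, generateKeyPairSync, sign, verify } from "node:crypto";

// Transformation: serialize with sorted top-level keys (illustrative only).
function transform(doc: Record<string, unknown>): Buffer {
  const sorted = Object.fromEntries(Object.keys(doc).sort().map((k) => [k, doc[k]]));
  return Buffer.from(JSON.stringify(sorted), "utf8");
}

// Hashing: compute a cryptographic digest of the transformed data.
function hash(transformed: Buffer): Buffer {
  return createHash("sha256").update(transformed).digest();
}

const { publicKey, privateKey } = generateKeyPairSync("ed25519");
const document = { title: "Hello world!" };

// Proof generation: digitally sign the hash with Ed25519.
const proofBytes = sign(null, hash(transform(document)), privateKey);

// Proof verification: re-run transformation and hashing, then check the signature.
console.log(verify(null, hash(transform(document)), publicKey, proofBytes)); // true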
This section is non-normative.
This specification optimizes for a number of design goals.
While this specification primarily focuses on Verifiable Credentials, the design of this technology is generalized, such that it can be used for non-Verifiable Credential use cases. In these instances, implementers are expected to perform their own due diligence and expert review as to the applicability of the technology to their use case.
As well as sections marked as non-normative, all authoring guidelines, diagrams, examples, and notes in this specification are non-normative. Everything else in this specification is normative.
The key words MAY , MUST , MUST NOT , OPTIONAL , RECOMMENDED , REQUIRED , and SHOULD in this document are to be interpreted as described in BCP 14 [ RFC2119 ] [ RFC8174 ] when, and only when, they appear in all capitals, as shown here.
This section is non-normative.
This section specifies the data model that is used for expressing data integrity proofs , controller documents , and verification methods .
A data integrity proof provides information about the proof mechanism, parameters required to verify that proof, and the proof value itself. All of this information is provided using Linked Data vocabularies such as the [ SECURITY-VOCABULARY ].
The following attributes are defined for use in a data integrity proof :
id: An optional identifier for the proof, which MUST be a URL, such as a UUID expressed as a URN (urn:uuid:6a1676b8-b51f-11ed-937b-d76685a20ff5). The usage of this property is further explained in Section 2.1.2 Proof Chains.
type: The specific proof type being used MUST be specified as a string. Examples of proof types include DataIntegrityProof and Ed25519Signature2020. Proof types determine what other fields are required to secure and verify the proof.
proofPurpose: The reason the proof was created MUST be specified as a string. The purpose acts as a safeguard to prevent the proof from being misused for a purpose other than the one it was intended for. For example, without this value, the creator of a proof could be tricked into using cryptographic material typically used to create a Verifiable Credential (assertionMethod) during a login process (authentication), which would then result in the creation of a Verifiable Credential they never meant to create instead of the intended action, which was merely to log in to a website.
domain: An OPTIONAL value that identifies the security domain in which the proof is meant to be used. The domain parameter is useful in challenge-response protocols where the verifier is operating from within a security domain known to the creator of the proof. Examples of a domain parameter include: domain.example (DNS domain), https://domain.example:8443 (full Web origin), mycorp-intranet (well-known text string), and b31d37d4-dd59-47d3-9dd8-c973da43b63a (UUID).
challenge: An OPTIONAL value, recommended for use when a domain is specified. The value is used once for a particular domain and window of time. This value is used to mitigate replay attacks. Examples of a challenge value include: 1235abcd6789, 79d34551-ae81-44ae-823b-6dadbab9ebd4, and ruby.
proofValue: A string value that contains the base-encoded binary data necessary to verify the digital proof using the verificationMethod specified. The contents of the value MUST be a [ MULTIBASE ]-encoded binary value. Alternative fields with different encodings MAY be used to encode the data necessary to verify the digital proof instead of this value. The contents of this value are determined by a cryptosuite and set to the proof value generated by the Add Proof Algorithm.
All of the terms above map to URLs. The vocabulary where these terms are defined is the [ SECURITY-VOCABULARY ].
A proof can be added to a JSON document like the following:
{ "title": "Hello world!" };
The following proof secures the document above using the jcs-eddsa-2022 cryptography suite [ DI-EDDSA ], which produces a verifiable digital proof by transforming the input data using the JSON Canonicalization Scheme (JCS) [ RFC8785 ] and then digitally signing it using an Edwards Digital Signature Algorithm (EdDSA).
{ "title": "Hello world!", "proof": { "type": "DataIntegrityProof", "cryptosuite": "jcs-eddsa-2022", "created": "2023-03-05T19:23:24Z", "verificationMethod": "https://di.example/issuer#z6MkjLrk3gKS2nnkeWcmcxiZPGskmesDpuwRBorgHxUXfxnG", "proofPurpose": "assertionMethod", "proofValue": "zQeVbY4oey5q2M3XKaxup3tmzN4DRFTLVqpLMweBrSxMY2xHX5XTYV 8nQApmEcqaqA3Q1gVHMrXFkXJeV6doDwLWx" } }
Similarly, a proof can be added to a JSON-LD data document like the following:
{ "@context": {"title": "https://schema.org/title"}, "title": "Hello world!" };
The following proof secures the document above by using the ecdsa-2019 cryptography suite [ DI-ECDSA ], which produces a verifiable digital proof by transforming the input data using the RDF Dataset Canonicalization Scheme [ RDF-CANON ] and then digitally signing it using the Elliptic Curve Digital Signature Algorithm (ECDSA).
{ "@context": [ {"title": "https://schema.org/title"}, "https://w3id.org/security/data-integrity/v1" ], "title": "Hello world!", "proof": { "type": "DataIntegrityProof", "cryptosuite": "ecdsa-2019", "created": "2020-06-11T19:14:04Z", "verificationMethod": "https://ldi.example/issuer#zDnaepBuvsQ8cpsWrVKw8fbpGpvPeNSjVPTWoq6cRqaYzBKVP", "proofPurpose": "assertionMethod", "proofValue": "zXb23ZkdakfJNUhiTEdwyE598X7RLrkjnXEADLQZ7vZyUGXX8cyJZR BkNw813SGsJHWrcpo4Y8hRJ7adYn35Eetq" } }
This specification enables the expression of dates and times, such as through the created and expires properties. This information might be indirectly exposed to an individual if a proof is processed and is detected to be outside an allowable time range. When displaying date and time values related to the validity of cryptographic proofs, implementers are advised to respect the locale and local calendar preferences of the individual [ LTLI ]. Conversion of timestamps to local time values is expected to consider the time zone expectations of the individual. See Verifiable Credentials Data Model v2.0 for more details about representing time values to individuals.
The Data Integrity specification supports the concept of multiple proofs in a single document. There are two types of multi-proof approaches that are identified: Proof Sets (un-ordered) and Proof Chains (ordered).
A proof set is useful when the same data needs to be secured by multiple entities, but where the order of proofs does not matter, such as in the case of a set of signatures on a contract. A proof set, which has no order, is represented by associating a set of proofs with the proof key in a document.
{ "@context": [ {"title": "https://schema.org/title"}, "https://w3id.org/security/data-integrity/v1" ], "title": "Hello world!", "proof": [{ "type": "DataIntegrityProof", "cryptosuite": "eddsa-2022", "created": "2020-11-05T19:23:24Z", "verificationMethod": "https://ldi.example/issuer/1#z6MkjLrk3gKS2nnkeWcmcxiZPGskmesDpuwRBorgHxUXfxnG", "proofPurpose": "assertionMethod", "proofValue": "z4oey5q2M3XKaxup3tmzN4DRFTLVqpLMweBrSxMY2xHX5XTYVQeVbY8 nQAVHMrXFkXJpmEcqdoDwLWxaqA3Q1geV6" }, { "type": "DataIntegrityProof", "cryptosuite": "eddsa-2022", "created": "2020-11-05T13:08:49Z", "verificationMethod": "https://pfps.example/issuer/2#z6MkGskxnGjLrk3gK S2mesDpuwRBokeWcmrgHxUXfnncxiZP", "proofPurpose": "assertionMethod", "proofValue": "z5QLBrp19KiWXerb8ByPnAZ9wujVFN8PDsxxXeMoyvDqhZ6Qnzr5CG9 876zNht8BpStWi8H2Mi7XCY3inbLrZrm95" }] }
A proof chain is useful when the same data needs to be signed by multiple entities and the order in which the proofs occurred matters, such as in the case of a notary counter-signing a proof that had been created on a document. A proof chain, where proof order needs to be preserved, is expressed by providing at least one proof with an id, such as a UUID as a URN, and another proof with a previousProof value that identifies the previous proof.
{ "@context": [ {"title": "https://schema.org/title"}, "https://w3id.org/security/data-integrity/v1" ], "title": "Hello world!", "proof": [{ "id": "urn:uuid:60102d04-b51e-11ed-acfe-2fcd717666a7", "type": "DataIntegrityProof", "cryptosuite": "eddsa-2022", "created": "2020-11-05T19:23:42Z", "verificationMethod": "https://ldi.example/issuer/1#z6MkjLrk3gKS2nnkeWcmcxiZPGskmesDpuwRBorgHxUXfxnG", "proofPurpose": "assertionMethod", "proofValue": "zVbY8nQAVHMrXFkXJpmEcqdoDwLWxaqA3Q1geV64oey5q2M3XKaxup 3tmzN4DRFTLVqpLMweBrSxMY2xHX5XTYVQe" }, { "type": "DataIntegrityProof", "cryptosuite": "eddsa-2022", "created": "2020-11-05T21:28:14Z", "verificationMethod": "https://pfps.example/issuer/2#z6MkGskxnGjLrk3g KS2mesDpuwRBokeWcmrgHxUXfnncxiZP", "proofPurpose": "assertionMethod", "proofValue": "z6Qnzr5CG9876zNht8BpStWi8H2Mi7XCY3inbLrZrm955QLBrp19Ki WXerb8ByPnAZ9wujVFN8PDsxxXeMoyvDqhZ", "previousProof": "urn:uuid:60102d04-b51e-11ed-acfe-2fcd717666a7" }] }
A proof that describes its purpose helps prevent it from being misused for some other purpose.
Issue: Add a mention of JWK's key_ops parameter and WebCrypto's KeyUsage restrictions; explain that Proof Purpose serves a different goal and allows for finer-grained restrictions.
Dave Longley suggested that proof purposes enable verifiers to know what the proof creator's intent was, so the message can't be accidentally abused for another purpose; e.g., a message signed for the purpose of merely making an assertion (and thus perhaps intended to be widely shared) being abused as a message to authenticate to a service or take some action (invoke a capability). It is a goal to keep the number of proof purposes limited to as few categories as are really needed to accomplish this goal.
The following is a list of commonly used proof purpose values.
Note: The Authorization Capabilities [ ZCAP ] specification defines additional proof purposes for that use case, such as capabilityInvocation and capabilityDelegation.
A controller document is a set of data that specifies one or more relationships between a controller and a set of data, such as a set of public cryptographic keys. The controller document SHOULD contain verification relationships that explicitly permit the use of certain verification methods for specific purposes.
A controller document can express verification methods , such as cryptographic public keys , which can be used to authenticate or authorize interactions with the controller or associated parties. For example, a cryptographic public key can be used as a verification method with respect to a digital signature; in such usage, it verifies that the signer could use the associated cryptographic private key. Verification methods might take many parameters. An example of this is a set of five cryptographic keys from which any three are required to contribute to a cryptographic threshold signature.
The verificationMethod property is OPTIONAL. If present, the value MUST be a set of verification methods, where each verification method is expressed using a map. The verification method map MUST include the id, type, controller, and specific verification material properties that are determined by the value of type and are defined in 2.3.1.1 Verification Material. A verification method MAY include additional properties. Verification methods SHOULD be registered in the Data Integrity Specification Registries [TBD - DIS-REGISTRIES].

Note: The verificationMethod property is REQUIRED for proofs, unlike controller documents, for which it is optional. See section 2.1 Proofs.
The value of the id property for a verification method MUST be a string that conforms to the [ URL ] syntax.

The value of the type property MUST be a string that references exactly one verification method type. In order to maximize global interoperability, the verification method type SHOULD be registered in the Data Integrity Specification Registries [TBD - DIS-REGISTRIES].

The value of the controller property MUST be a string that conforms to the [ URL ] syntax.

The revoked property is OPTIONAL. If provided, it MUST be an [ XMLSCHEMA11-2 ] combined date and time string specifying when the verification method ceased to be used. Once the value is set, it is not expected to be updated, and systems depending on the value are expected to not verify any proofs associated with the verification method at or after the time of revocation.
{ "@context": [ "https://www.w3.org/ns/did/v1", "https://w3id.org/security/data-integrity/v1" ] "id": "did:example:123456789abcdefghi", ... "verificationMethod": [{ "id": ..., "type": ..., "controller": ..., "publicKeyJwk": ... }, { "id": ..., "type": ..., "controller": ..., "publicKeyMultibase": ... }] }
The semantics of the controller property are the same when the subject of the relationship is the controller document as when the subject of the relationship is a verification method, such as a cryptographic public key. Since a key can't control itself, and the key controller cannot be inferred from the controller document, it is necessary to explicitly express the identity of the controller of the key. The difference is that the value of controller for a verification method is not necessarily a controller. Controllers are expressed using the `controller` property at the highest level of the controller document.
Verification material is any information that is used by a process that applies a verification method. The type of a verification method is expected to be used to determine its compatibility with such processes. Examples of verification material include JsonWebKey and Multikey. A cryptographic suite specification is responsible for specifying the verification method type and its associated verification material format. For examples, see the Data Integrity ECDSA Cryptosuites and the Data Integrity EdDSA Cryptosuites. For a list of verification method types, please see the [ SECURITY-VOCABULARY ].
To increase the likelihood of interoperable implementations, this specification limits the number of formats for expressing verification material in a controller document . The fewer formats that implementers have to implement, the more likely it will be that they will support all of them. This approach attempts to strike a delicate balance between ease of implementation and supporting formats that have historically had broad deployment.
A verification method MUST NOT contain multiple verification material properties for the same material. For example, expressing key material in a verification method using both publicKeyJwk and publicKeyMultibase at the same time is prohibited.
An example of a controller document containing verification methods using both properties above is shown below.
{ "@context": [ "https://www.w3.org/ns/did/v1", "https://w3id.org/security/suites/jws-2020/v1", "https://w3id.org/security/multikey/v1" ] "id": "did:example:123456789abcdefghi", ... "verificationMethod": [{ "id": "did:example:123#_Qq0UL2Fq651Q0Fjd6TvnYE-faHiOpRlPVQcY_-tA4A", "type": "JsonWebKey", // external (property value) "controller": "did:example:123", "publicKeyJwk": { "crv": "Ed25519", // external (property name) "x": "VCpo2LMLhn6iWku8MKvSLg2ZAoC-nlOyPVQaO3FxVeQ", // external (property name) "kty": "OKP", // external (property name) "kid": "_Qq0UL2Fq651Q0Fjd6TvnYE-faHiOpRlPVQcY_-tA4A" // external (property name) } }, { "id": "did:example:123456789abcdefghi#keys-1", "type": "Multikey", // external (property value) "controller": "did:example:pqrstuvwxyz0987654321", "publicKeyMultibase": "z6MkmM42vxfqZQsv4ehtTjFFxQ4sQKS2w6WR7emozFAn5cxu" }], ... }
The Multikey data model is a specific type of verification method that utilizes the [ MULTICODEC ] specification to encode key types into a single binary stream that is then encoded using the [ MULTIBASE ] specification.
When specifying a Multikey, the object takes the following form. The value of the type property MUST contain the string Multikey. The publicKeyMultibase property is OPTIONAL; if present, the value MUST be a [ MULTIBASE ] encoded [ MULTICODEC ] value. The secretKeyMultibase property is OPTIONAL; if present, the value MUST be a [ MULTIBASE ] encoded [ MULTICODEC ] value.
An example of a Multikey is provided below:
{ "@context": ["https://w3id.org/security/multikey/v1"], "id": "did:example:123456789abcdefghi#keys-1", "type": "Multikey", "controller": "did:example:123456789abcdefghi", "publicKeyMultibase": "z6MkmM42vxfqZQsv4ehtTjFFxQ4sQKS2w6WR7emozFAn5cxu" }
In the example above, the publicKeyMultibase value starts with the letter z, which is the [ MULTIBASE ] header that conveys that the binary data is base58-encoded using the Bitcoin base-encoding alphabet. The decoded binary data [ MULTICODEC ] header is 0xed, which specifies that the remaining data is a 32-byte raw Ed25519 public key.
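The following non-normative TypeScript sketch decodes such a publicKeyMultibase value: it checks the z multibase header, base58btc-decodes the remainder, and verifies the Ed25519 multicodec header (0xed, whose unsigned varint encoding is the two bytes 0xed 0x01). The hand-rolled base58 decoder is included only to keep the example self-contained.

const B58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz";

// Decode a base58btc string into bytes.
function base58btcDecode(s: string): Uint8Array {
  let n = 0n;
  for (const c of s) {
    const i = B58.indexOf(c);
    if (i < 0) throw new Error(`invalid base58 character: ${c}`);
    n = n * 58n + BigInt(i);
  }
  const bytes: number[] = [];
  for (; n > 0n; n /= 256n) bytes.unshift(Number(n % 256n));
  // Each leading '1' encodes a leading zero byte.
  for (const c of s) { if (c === "1") bytes.unshift(0); else break; }
  return Uint8Array.from(bytes);
}

function decodeEd25519Multikey(publicKeyMultibase: string): Uint8Array {
  if (!publicKeyMultibase.startsWith("z")) {
    throw new Error("expected base58btc ('z') multibase header");
  }
  const decoded = base58btcDecode(publicKeyMultibase.slice(1));
  if (decoded[0] !== 0xed || decoded[1] !== 0x01) {
    throw new Error("not an Ed25519 public key multicodec");
  }
  const key = decoded.slice(2);
  if (key.length !== 32) throw new Error("expected a 32-byte raw Ed25519 public key");
  return key;
}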
The Multikey data model is also capable of encoding secret keys, whose subtypes include symmetric keys and private keys .
{ "@context": ["https://w3id.org/security/suites/secrets/v1"], "id": "did:example:123456789abcdefghi#keys-1", "type": "Multikey", "controller": "did:example:123456789abcdefghi", "secretKeyMultibase": "z3u2fprgdREFtGakrHr6zLyTeTEZtivDnYCPZmcSt16EYCER" }
In the example above, the secretKeyMultibase value starts with the letter z, which is the [ MULTIBASE ] header that conveys that the binary data is base58-encoded using the Bitcoin base-encoding alphabet. The decoded binary data [ MULTICODEC ] header is 0x1300, which specifies that the remaining data is a 32-byte raw Ed25519 private key.
The JSON Web Key (JWK) data model is a specific type of verification method that uses the JWK specification [ RFC7517 ] to encode key types into a set of parameters.
When specifying a JsonWebKey, the object takes the following form. The value of the type property MUST contain the string JsonWebKey. The publicKeyJwk property is OPTIONAL; if present, the value MUST be a map representing a JSON Web Key that conforms to [ RFC7517 ]. The map MUST NOT include any members of the private information class, such as "d", as described in the JWK Registration Template.
It is RECOMMENDED that verification methods that use JWKs [ RFC7517 ] to represent their public keys use the value of kid as their fragment identifier. It is RECOMMENDED that JWK kid values are set to the public key fingerprint [ RFC7638 ]. See the first key in Example 8 for an example of a public key with a compound key identifier.
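As a non-normative illustration of that recommendation, the TypeScript sketch below computes the [ RFC7638 ] thumbprint of an Ed25519 public key expressed as a JWK. For an OKP key, the thumbprint is the base64url-encoded SHA-256 digest of a JSON object containing only the required members crv, kty, and x, in lexicographic order.

import { createHash } from "node:crypto";

// Compute the RFC 7638 thumbprint for an OKP (e.g., Ed25519) JWK.
function jwkThumbprint(jwk: { crv: string; kty: string; x: string }): string {
  // Required members only, in lexicographic order, with no whitespace.
  const canonical = JSON.stringify({ crv: jwk.crv, kty: jwk.kty, x: jwk.x });
  return createHash("sha256").update(canonical, "utf8").digest("base64url");
}

const kid = jwkThumbprint({
  kty: "OKP",
  crv: "Ed25519",
  x: "_1EiHquO2aUx9JARSu0P8jdYT_OVneYxYOnOMAmUcFI",
});
// The kid can then serve as the fragment identifier of the verification method.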
The secretKeyJwk property is OPTIONAL; if present, the value MUST be a map representing a JSON Web Key that conforms to [ RFC7517 ].
An example of an object that conforms to this data model is provided below:
{ "@context": ["https://www.w3.org/ns/security/jwk/v1"], "id": "did:example:123456789abcdefghi#key-1", "type": "JsonWebKey", "controller": "did:example:123456789abcdefghi", "publicKeyJwk": { "kty": "OKP", "alg": "EdDSA" "crv": "Ed25519", "kid": "key-1", "x": "_1EiHquO2aUx9JARSu0P8jdYT_OVneYxYOnOMAmUcFI", } }
In the example above, the publicKeyJwk value contains the JSON Web Key. The kty property encodes the key type of "OKP", which means "Octet string key pairs". The alg property identifies the algorithm intended for use with the public key. The crv property identifies the particular curve type of the public key. The kid property specifies how the public key might be referenced in software systems; if present, the kid value SHOULD match the id property of the encapsulating JsonWebKey object. Finally, the x property specifies the point on the Ed25519 curve that is associated with the public key.

The publicKeyJwk property MUST NOT contain any property marked as "Private" in any registry contained in the JOSE Registries [ JOSE-REGISTRIES ].
The JSON Web Key data model is also capable of encoding secret keys , sometimes referred to as private keys .
{ "@context": ["https://www.w3.org/ns/security/jwk/v1"], "id": "did:example:123456789abcdefghi#key-1", "type": "JsonWebKey", "controller": "did:example:123456789abcdefghi", "secretKeyJwk": { "kty": "OKP", "alg": "EdDSA" "crv": "Ed25519", "kid": "key-1", "d": "Q6JwjCUdThSnoxfXHSFt5C1nVFycY_ZpW7qVzK644_g", "x": "_1EiHquO2aUx9JARSu0P8jdYT_OVneYxYOnOMAmUcFI", } }
The private key example above is almost identical to the previous example of the public key, except that the information is stored in the secretKeyJwk property (rather than publicKeyJwk), and the private key value is encoded in the d property thereof (alongside the x property, which still specifies the point on the Ed25519 curve that is associated with the public key).
Verification methods can be embedded in or referenced from properties associated with various verification relationships as described in 2.3.2 Verification Relationships . Referencing verification methods allows them to be used by more than one verification relationship .
If the value of a verification method property is a map, the verification method has been embedded and its properties can be accessed directly. However, if the value is a URL string, the verification method has been included by reference and its properties will need to be retrieved from elsewhere in the controller document or from another controller document. This is done by dereferencing the URL and searching the resulting resource for a verification method map with an id property whose value matches the URL.
{
  ...
  "authentication": [
    // this key is referenced and might be used by
    // more than one verification relationship
    "did:example:123456789abcdefghi#keys-1",
    // this key is embedded and may *only* be used for authentication
    {
      "id": "did:example:123456789abcdefghi#keys-2",
      "type": "Multikey", // external (property value)
      "controller": "did:example:123456789abcdefghi",
      "publicKeyMultibase": "z6MkmM42vxfqZQsv4ehtTjFFxQ4sQKS2w6WR7emozFAn5cxu"
    }
  ],
  ...
}
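A non-normative TypeScript sketch of the dereferencing behavior described above: a verification relationship entry is either an embedded map or a URL string that is looked up in the controller document's verificationMethod list. Retrieval from a separate controller document is elided.

type MethodOrRef = string | { id: string; [k: string]: unknown };

interface ControllerDocument {
  id: string;
  verificationMethod?: { id: string; [k: string]: unknown }[];
  [relationship: string]: unknown;
}

// Resolve an entry of a verification relationship to a verification method map.
function resolveMethod(entry: MethodOrRef, doc: ControllerDocument) {
  if (typeof entry !== "string") {
    return entry; // embedded: the map can be used directly
  }
  // referenced: search the controller document for a matching id
  const found = (doc.verificationMethod ?? []).find((vm) => vm.id === entry);
  if (!found) throw new Error(`verification method ${entry} not found`);
  return found;
}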
A verification relationship expresses the relationship between the controller and a verification method .
Different verification relationships enable the associated verification methods to be used for different purposes. It is up to a verifier to ascertain the validity of a verification attempt by checking that the verification method used is contained in the appropriate verification relationship property of the controller document .
The verification relationship between the controller and the verification method is explicit in the controller document . Verification methods that are not associated with a particular verification relationship cannot be used for that verification relationship . For example, a verification method in the value of the ` authentication ` property cannot be used to engage in key agreement protocols with the controller —the value of the ` keyAgreement ` property needs to be used for that.
The controller document does not express revoked keys using a verification relationship . If a referenced verification method is not in the latest controller document used to dereference it, then that verification method is considered invalid or revoked.
The following sections define several useful verification relationships . A controller document MAY include any of these, or other properties, to express a specific verification relationship . In order to maximize global interoperability, any such properties used SHOULD be registered in the Data Integrity Specification Registries [TBD: DIS-REGISTRIES].
The authentication verification relationship is used to specify how the controller is expected to be authenticated, for purposes such as logging into a website or engaging in any sort of challenge-response protocol.

The authentication property is OPTIONAL. If present, the associated value MUST be a set of one or more verification methods. Each verification method MAY be embedded or referenced.
{ "@context": [ "https://www.w3.org/ns/did/v1", "https://w3id.org/security/multikey/v1" ], "id": "did:example:123456789abcdefghi", ... "authentication": [ // this method can be used to authenticate as did:...fghi "did:example:123456789abcdefghi#keys-1", // this method is *only* approved for authentication, it may not // be used for any other proof purpose, so its full description is // embedded here rather than using only a reference { "id": "did:example:123456789abcdefghi#keys-2", "type": "Multikey", "controller": "did:example:123456789abcdefghi", "publicKeyMultibase": "z6MkmM42vxfqZQsv4ehtTjFFxQ4sQKS2w6WR7emozFAn5cxu" } ], ... }
If authentication is established, it is up to the application to decide what to do with that information.
This is useful to any authentication verifier that needs to check to see if an entity that is attempting to authenticate is, in fact, presenting a valid proof of authentication. When a verifier receives some data (in some protocol-specific format) that contains a proof that was made for the purpose of "authentication", and that says that an entity is identified by the id, then that verifier checks to ensure that the proof can be verified using a verification method (e.g., public key) listed under `authentication` in the controller document.

Note that the verification method indicated by the `authentication` property of a controller document can only be used to authenticate the controller. To authenticate a different controller, the entity associated with the value of controller needs to authenticate with its own controller document and associated `authentication` verification relationship.
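A non-normative sketch of that verifier-side check in TypeScript: given the verification method URL used by an authentication proof, confirm it is listed, embedded or by reference, under the controller document's `authentication` property. The function shape is an illustrative assumption.

// Returns true if the given verification method id is approved for authentication.
function approvedForAuthentication(
  methodId: string,
  authentication: (string | { id: string })[]
): boolean {
  return authentication.some((entry) =>
    typeof entry === "string" ? entry === methodId : entry.id === methodId
  );
}

// e.g., approvedForAuthentication(proof.verificationMethod, doc.authentication)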
The assertionMethod verification relationship is used to specify how the controller is expected to express claims, such as for the purposes of issuing a Verifiable Credential [ VC-DATA-MODEL-2.0 ].

The assertionMethod property is OPTIONAL. If present, the associated value MUST be a set of one or more verification methods. Each verification method MAY be embedded or referenced.
This property is useful, for example, during the processing of a verifiable credential by a verifier. During verification, a verifier checks to see if a verifiable credential contains a proof created by the controller by checking that the verification method used to assert the proof is associated with the ` assertionMethod ` property in the corresponding controller document .
{ "@context": [ "https://www.w3.org/ns/did/v1", "https://w3id.org/security/multikey/v1" ], "id": "did:example:123456789abcdefghi", ... "assertionMethod": [ // this method can be used to assert statements as did:...fghi "did:example:123456789abcdefghi#keys-1", // this method is *only* approved for assertion of statements, it is not // used for any other verification relationship, so its full description is // embedded here rather than using a reference { "id": "did:example:123456789abcdefghi#keys-2", "type": "Multikey", // external (property value) "controller": "did:example:123456789abcdefghi", "publicKeyMultibase": "z6MkmM42vxfqZQsv4ehtTjFFxQ4sQKS2w6WR7emozFAn5cxu" } ], ... }
The keyAgreement verification relationship is used to specify how an entity can generate encryption material in order to transmit confidential information intended for the controller, such as for the purposes of establishing a secure communication channel with the recipient.

The keyAgreement property is OPTIONAL. If present, the associated value MUST be a set of one or more verification methods. Each verification method MAY be embedded or referenced.
An example of when this property is useful is when encrypting a message intended for the controller . In this case, the counterparty uses the cryptographic public key information in the verification method to wrap a decryption key for the recipient.
{ "@context": "https://www.w3.org/ns/did/v1", "id": "did:example:123456789abcdefghi", ... "keyAgreement": [ // this method can be used to perform key agreement as did:...fghi "did:example:123456789abcdefghi#keys-1", // this method is *only* approved for key agreement usage, it will not // be used for any other verification relationship, so its full description is // embedded here rather than using only a reference { "id": "did:example:123#zC9ByQ8aJs8vrNXyDhPHHNNMSHPcaSgNpjjsBYpMMjsTdS", "type": "X25519KeyAgreementKey2019", // external (property value) "controller": "did:example:123", "publicKeyMultibase": "z6LSn6p3HRxx1ZZk1dT9VwcfTBCYgtNWdzdDMKPZjShLNWG7" } ], ... }
The capabilityInvocation verification relationship is used to specify a verification method that might be used by the controller to invoke a cryptographic capability, such as the authorization to update the controller document.

The capabilityInvocation property is OPTIONAL. If present, the associated value MUST be a set of one or more verification methods. Each verification method MAY be embedded or referenced.
An example of when this property is useful is when a controller needs to access a protected HTTP API that requires authorization in order to use it. In order to authorize when using the HTTP API, the controller uses a capability that is associated with a particular URL that is exposed via the HTTP API. The invocation of the capability could be expressed in a number of ways, e.g., as a digitally signed message that is placed into the HTTP Headers.
The server providing the HTTP API is the verifier of the capability and it would need to verify that the verification method referred to by the invoked capability exists in the ` capabilityInvocation ` property of the controller document . The verifier would also check to make sure that the action being performed is valid and the capability is appropriate for the resource being accessed. If the verification is successful, the server has cryptographically determined that the invoker is authorized to access the protected resource.
{ "@context": [ "https://www.w3.org/ns/did/v1", "https://w3id.org/security/multikey/v1" ], "id": "did:example:123456789abcdefghi", ... "capabilityInvocation": [ // this method can be used to invoke capabilities as did:...fghi "did:example:123456789abcdefghi#keys-1", // this method is *only* approved for capability invocation usage, it will not // be used for any other verification relationship, so its full description is // embedded here rather than using only a reference { "id": "did:example:123456789abcdefghi#keys-2", "type": "Multikey", // external (property value) "controller": "did:example:123456789abcdefghi", "publicKeyMultibase": "z6MkmM42vxfqZQsv4ehtTjFFxQ4sQKS2w6WR7emozFAn5cxu" } ], ... }
The capabilityDelegation verification relationship is used to specify a mechanism that might be used by the controller to delegate a cryptographic capability to another party, such as delegating the authority to access a specific HTTP API to a subordinate.

The capabilityDelegation property is OPTIONAL. If present, the associated value MUST be a set of one or more verification methods. Each verification method MAY be embedded or referenced.
An example of when this property is useful is when a controller chooses to delegate their capability to access a protected HTTP API to a party other than themselves. In order to delegate the capability, the controller would use a verification method associated with the capabilityDelegation verification relationship to cryptographically sign the capability over to another controller. The delegate would then use the capability in a manner that is similar to the example described in 2.3.2.4 Capability Invocation.
{ "@context": [ "https://www.w3.org/ns/did/v1", "https://w3id.org/security/multikey/v1" ], "id": "did:example:123456789abcdefghi", ... "capabilityDelegation": [ // this method can be used to perform capability delegation as did:...fghi "did:example:123456789abcdefghi#keys-1", // this method is *only* approved for granting capabilities; it will not // be used for any other verification relationship, so its full description is // embedded here rather than using only a reference { "id": "did:example:123456789abcdefghi#keys-2", "type": "Multikey", // external (property value) "controller": "did:example:123456789abcdefghi", "publicKeyMultibase": "z6MkmM42vxfqZQsv4ehtTjFFxQ4sQKS2w6WR7emozFAn5cxu" } ], ... }
The term Linked Data is used to describe a recommended best practice for exposing, sharing, and connecting information on the Web using standards, such as URLs, to identify things and their properties. When information is presented as Linked Data, other related information can be easily discovered and new information can be easily linked to it. Linked Data is extensible in a decentralized way, greatly reducing barriers to large scale integration.
With the increase in usage of Linked Data for a variety of applications, there is a need to be able to verify the authenticity and integrity of Linked Data documents. This specification adds authentication and integrity protection to data documents through the use of mathematical proofs without sacrificing Linked Data features such as extensibility and composability.
While this specification provides mechanisms to digitally sign Linked Data, the use of Linked Data is not necessary to gain some of the advantages provided by this specification.
Cryptographic suites that implement this specification can be used to secure verifiable credentials and verifiable presentations. Implementers that are addressing those use cases are cautioned that additional checks might be appropriate when processing those types of documents. For example, there are some use cases where it is important to ensure that the verification method used in a proof is associated with the issuer in a verifiable credential, or the holder in a verifiable presentation, during the process of validation. One way to check for such an association is to ensure that the value of the controller property of a proof's verification method matches the URL value used to identify the issuer or holder, respectively. This particular association indicates that the issuer or holder, respectively, is the controller of the verification method used to verify the proof.
This section lists cryptographic hash values that might change during the Candidate Recommendation phase based on implementer feedback that requires the referenced files to be modified.
Implementations that perform JSON-LD processing MUST treat the following JSON-LD context URLs as already resolved, where the resolved document matches the corresponding hash values below:
| URL and Media Type | Content |
| --- | --- |
| https://www.w3.org/ns/data-integrity/v1 (application/ld+json) | sha256: v/POI0jhSjPansxhJAP1fwepCBZ2HK77fRZfCCyBDs0= ; sha3-512: Sg1PLFxKyEYQns9Zr0BoYXtFeDNfrHUDNMkyq4QEWv |
| https://www.w3.org/ns/multikey/v1 (application/ld+json) | sha256: SA+P0RxXl8ZH6xdPcmX71crzeGkCYMPfYjdVT46SF0o= ; sha3-512: BgQBN00t6vj1T4uYt5SwdPOhe2BVNY11UEvkVo/rbBv |
The security vocabulary terms that the JSON-LD contexts listed above resolve to are in the https://w3id.org/security# namespace. That is, all security terms in this vocabulary are of the form https://w3id.org/security#TERM, where TERM is the name of a term.
Implementations that perform RDF processing MUST treat the following JSON-LD vocabulary URL as already resolved, where the resolved document matches the corresponding hash values below.
When dereferencing the https://w3id.org/security# URL, the data returned depends on HTTP content negotiation. These are as follows:
| Media Type | Description and Cryptographic Hashes |
| --- | --- |
| application/ld+json | The vocabulary in JSON-LD format [ JSON-LD ]. sha256: LEaoTyf796eTaSlYWjfPe3Yb+poCW9TjWYTbFDmC0tc= ; sha3-512: f4DhJ3xhT8nT+GZ8UUZi4QC+HT//wXE2fRTgUP4UNw |
| text/turtle | The vocabulary in Turtle format [ TURTLE ]. sha256: McnhLyt7+/A/0iLb3CUXD0itNw+7bwwjtzOww/zwoyI= ; sha3-512: jZtZsqgPPPo+jphAcN8/St4VdRLLAmN3nEQhzs0twE |
| text/html | The vocabulary in HTML+RDFa format [ HTML-RDFA ]. sha256: eUHP1xiSC157iTPDydZmxg/hvmX3g/nnCn+FO25d4dc= ; sha3-512: z53j8ryjVeX16Z/dby//ujhw37degwi09+LAZCTUB8 |
It is possible to confirm the digests listed above by running the following command from a modern Unix command line interface:

curl -sL -H "Accept: <MEDIA_TYPE>" <DOCUMENT_URL> | openssl dgst -<DIGEST_ALGORITHM> -binary | openssl base64 -nopad -a
For further information regarding processing of JSON-LD Contexts and Vocabularies, see Verifiable Credentials v2.0: Base Context and Verifiable Credentials v2.0: Vocabularies .
A data integrity proof is designed to be easy to use by developers and therefore strives to minimize the amount of information one has to remember to generate a proof. Often, just the cryptographic suite name (e.g., eddsa-2022) is required from developers to initiate the creation of a proof. These cryptographic suites are often created or reviewed by people that have the requisite cryptographic training to ensure that safe combinations of cryptographic primitives are used. This section specifies the requirements for authoring cryptographic suite specifications.
The requirements for all data integrity cryptographic suite specifications are as follows:
Among other requirements, each specification is expected to identify the cryptographic suite type and any parameters that can be used with the suite, and to protect any JSON-LD terms it defines through use of the @protected keyword.
The following language was deemed to be contentious: "The specification MUST provide a link to an interoperability test report to document which implementations are conformant with the cryptographic suite specification." The Working Group is seeking feedback on whether or not this is desired, given the important role that cryptographic suite specifications play in ensuring data integrity.
A number of cryptographic suites follow the same basic pattern when expressing a data integrity proof. This section specifies that general design pattern, a cryptographic suite type called a DataIntegrityProof, which reduces the burden of writing and implementing cryptographic suites through the reuse of design primitives and source code.

When specifying a cryptographic suite that utilizes this design pattern, the proof value takes the following form.
The value of the type property MUST contain the string DataIntegrityProof. The value of the cryptosuite property MUST contain a string specifying the name of the cryptosuite. The proofValue property MUST be used, as specified in 2.1 Proofs. Cryptographic suite designers MUST use mandatory proof value properties defined in Section 2.1 Proofs, and MAY define other properties specific to their cryptographic suite.
One of the design patterns seen in Data Integrity cryptosuites from 2012 to 2020 was the use of the type property to establish a specific type for a cryptographic suite; the Ed25519Signature2020 cryptographic suite was one such specification. This led to a greater burden on cryptographic suite implementations, where every new cryptographic suite required a new JSON-LD Context to be specified, resulting in a sub-optimal developer experience. A streamlined version of this design pattern emerged in 2020, such that a developer would only need to include a single JSON-LD Context to support all modern cryptographic suites. This encouraged more modern cryptosuites, such as the EdDSA Cryptosuites [ DI-EDDSA ] and the ECDSA Cryptosuites [ DI-ECDSA ], to be built based on the streamlined pattern described in this section.
To improve the developer experience, authors creating new Data Integrity cryptographic suite specifications SHOULD use the modern pattern, where the type is set to DataIntegrityProof; the cryptosuite property carries the identifier for the cryptosuite; and any cryptosuite-specific cryptographic data is encapsulated (i.e., not directly exposed as application layer data) within proofValue. A list of cryptographic suite specifications that are known to follow this pattern is provided in the Proof types section of the Verifiable Credentials Specifications Directory.
The algorithms defined below are generalized in that they require a specific transformation algorithm , hashing algorithm , proof serialization algorithm , and proof verification algorithm to be specified by a particular cryptographic suite (see Section 3. Cryptographic Suites ).
At present the creation of the verification hash is delegated to the cryptographic suite specification when generating and verifying a proof. It is expected that this algorithm is going to be common to most cryptographic suites. It is predicted that the algorithm that generates the verification hash will eventually be defined in this specification.
The following algorithm specifies how to add a digital proof to a document, which can be used to verify the authenticity and integrity of an unsecured data document . Required inputs are an unsecured data document ( unsecuredDocument ) and proof options ( options ). The proof options MUST contain a type identifier for the cryptographic suite ( type ) and any other properties needed by the cryptographic suite type; an identifier for the verification method ( verificationMethod ) that can be used to verify the authenticity of the proof; and an [ XMLSCHEMA11-2 ] combined date and time string ( created ) containing the current date and time, accurate to at least one second, in Universal Time Code format. A security domain ( domain ) and a receiver-supplied challenge ( challenge ) might also be specified in the options . A secured data document is produced as output. Whenever this algorithm encodes strings, it MUST use UTF-8 encoding.
If the proof options are missing values required by the cryptographic suite, or if the transformation, hashing, or proof serialization operation fails, a PROOF_GENERATION_ERROR MUST be raised.
While the output of the hashing algorithm can be a single value, such as a 32-byte SHA2-256 value, implementers are advised that some cryptographic suites might define hashData to be comprised of multiple values that might be processed independently in the proof serialization algorithm. For example, this approach is known to be taken in certain cryptographic suites that allow selective disclosure or unlinkability via the digital proof.
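A non-normative TypeScript sketch of the algorithm above. The transform, hash, and serializeProof functions stand in for the cryptographic suite's transformation, hashing, and proof serialization algorithms; their signatures are assumptions supplied here only to make the control flow concrete.

interface ProofOptions {
  type: string;
  verificationMethod: string;
  created?: string;
  domain?: string;
  challenge?: string;
  [k: string]: unknown;
}

// The suite-specific algorithms a concrete cryptosuite must supply.
interface Cryptosuite {
  transform(doc: object, options: ProofOptions): Uint8Array;
  hash(transformed: Uint8Array): Uint8Array;
  serializeProof(hashData: Uint8Array, options: ProofOptions): string;
}

function addProof(unsecuredDocument: object, options: ProofOptions, suite: Cryptosuite) {
  // Required proof options must be present before any cryptography runs.
  if (!options.type || !options.verificationMethod) {
    throw new Error("PROOF_GENERATION_ERROR");
  }
  try {
    const transformed = suite.transform(unsecuredDocument, options);
    const hashData = suite.hash(transformed);
    const proofValue = suite.serializeProof(hashData, options);
    return { ...unsecuredDocument, proof: { ...options, proofValue } };
  } catch {
    // Any transformation, hashing, or serialization failure surfaces uniformly.
    throw new Error("PROOF_GENERATION_ERROR");
  }
}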
The following algorithm specifies how to incrementally add a proof to a proof set or proof chain starting with a secured document containing either a proof or proof set/chain. Required inputs are a secured data document ( securedDocument ) and proof options ( options ). The proof options ( options ) must satisfy the criteria in section 4.1 Add Proof . A new secured data document is produced as output. Whenever this algorithm encodes strings, it MUST use UTF-8 encoding.
For each previousProof attribute, check whether an element of the proof_list has a matching id attribute. If not, a PROOF_GENERATION_ERROR MUST be raised.
The following algorithm specifies how to check the authenticity and integrity of a secured data document by verifying its digital proof. Required inputs are a secured data document ( securedDocument ) and proof options ( options ). A verification result is produced as output.
- If required proof attributes, such as type, verificationMethod, and proofPurpose, are missing from the proof, a MALFORMED_PROOF_ERROR MUST be raised.
- If the proofValue attribute is missing, a MALFORMED_PROOF_ERROR MUST be raised.
- If the proofPurpose attribute does not match the proof purpose expected by the verifier, a MISMATCHED_PROOF_PURPOSE_ERROR MUST be raised.
- The data to be verified is the secured data document with the proof value removed.
- If the created time deviates by more than an acceptable amount from the time of verification, a CREATED_TIME_DEVIATION_ERROR MUST be raised.
- If the domain value does not match the domain expected by the verifier, an INVALID_DOMAIN_ERROR MUST be raised.
- If the challenge value does not match the challenge expected by the verifier, an INVALID_CHALLENGE_ERROR MUST be raised.
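A non-normative TypeScript sketch mirroring the checks above. The suite's transform, hash, and verifyProofValue functions are assumptions standing in for the cryptographic suite's algorithms; the created-time check is elided.

interface Proof {
  type: string;
  verificationMethod: string;
  proofPurpose: string;
  proofValue?: string;
  domain?: string;
  challenge?: string;
  created?: string;
  [k: string]: unknown;
}

interface VerifySuite {
  transform(doc: object, proof: Proof): Uint8Array;
  hash(data: Uint8Array): Uint8Array;
  verifyProofValue(hashData: Uint8Array, proofValue: string, proof: Proof): boolean;
}

function verifyProof(
  securedDocument: { proof: Proof } & Record<string, unknown>,
  expected: { proofPurpose: string; domain?: string; challenge?: string },
  suite: VerifySuite
) {
  const { proof, ...unsecuredDocument } = securedDocument;
  if (!proof.type || !proof.verificationMethod || !proof.proofPurpose) {
    throw new Error("MALFORMED_PROOF_ERROR");
  }
  if (!proof.proofValue) throw new Error("MALFORMED_PROOF_ERROR");
  if (proof.proofPurpose !== expected.proofPurpose) {
    throw new Error("MISMATCHED_PROOF_PURPOSE_ERROR");
  }
  if (expected.domain !== undefined && proof.domain !== expected.domain) {
    throw new Error("INVALID_DOMAIN_ERROR");
  }
  if (expected.challenge !== undefined && proof.challenge !== expected.challenge) {
    throw new Error("INVALID_CHALLENGE_ERROR");
  }
  // A CREATED_TIME_DEVIATION_ERROR check on proof.created against an
  // acceptable clock-skew window would go here; it is elided for brevity.
  // Verify against the document with the proof value removed.
  const hashData = suite.hash(suite.transform(unsecuredDocument, proof));
  const verified = suite.verifyProofValue(hashData, proof.proofValue, proof);
  return { verified, verifiedDocument: verified ? unsecuredDocument : null };
}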
In a proof set or proof chain, a secured data document has a proof attribute which contains a list of proofs ( proof_list ). The following algorithm specifies how to check the authenticity and integrity of a secured data document by verifying each proof in the proof_list . The required input is a secured data document ( securedDocument ). A list of verification results is produced as output.
For each proof containing a previousProof attribute, modify the associated isProofVerified value by iteratively checking whether the proof identified by the id value in previousProof is valid, along with any previousProofs further up the chain.
The following algorithm specifies how to safely retrieve a verification method, such as a cryptographic public key , by using a verification method identifier contained in a data integrity proof . Required inputs are a data integrity proof ( proof ) and a set of dereferencing options ( options ). A verification method is produced as output.
- If the verification method identifier is not a valid URL, an INVALID_VERIFICATION_METHOD_URL error MUST be raised.
- If the id of the dereferenced controller document does not match the URL used to dereference it, an INVALID_CONTROLLER_DOCUMENT_ID error MUST be raised.
- If the dereferenced document is not a valid controller document, an INVALID_CONTROLLER_DOCUMENT error MUST be raised.
- If the verification method is not found in the controller document or is malformed, an INVALID_VERIFICATION_METHOD error MUST be raised.
- If the verification method is not associated with the verification relationship that corresponds to the proof's purpose, an INVALID_PROOF_PURPOSE_FOR_VERIFICATION_METHOD error MUST be raised.
The following example provides a minimum conformant controller document containing a minimum conformant verification method as required by the algorithm in this section:
{
"id": "https://controller.example/123",
"verificationMethod": [{
"id": "https://controller.example/123#key-456",
"type": "ExampleVerificationMethodType",
"controller": "https://controller.example/123",
// public cryptographic material goes here
}],
"authentication": ["#key-456"]
}
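A non-normative TypeScript sketch of the retrieval checks, using the error names above. The dereference function is an assumed helper that fetches a controller document over some transport, and fragment handling is simplified to exact-match lookups.

interface ControllerDoc {
  id: string;
  verificationMethod?: { id: string; type: string; controller: string }[];
  [relationship: string]: unknown;
}

async function retrieveVerificationMethod(
  methodUrl: string,
  proofPurpose: string,
  dereference: (url: string) => Promise<ControllerDoc | null>
) {
  let parsed: URL;
  try {
    parsed = new URL(methodUrl);
  } catch {
    throw new Error("INVALID_VERIFICATION_METHOD_URL");
  }
  const controllerUrl = methodUrl.split("#")[0];
  const doc = await dereference(controllerUrl);
  if (!doc) throw new Error("INVALID_CONTROLLER_DOCUMENT");
  if (doc.id !== controllerUrl) throw new Error("INVALID_CONTROLLER_DOCUMENT_ID");
  // Find the method by full URL or by relative fragment reference.
  const method = (doc.verificationMethod ?? []).find(
    (vm) => vm.id === methodUrl || vm.id === parsed.hash
  );
  if (!method || !method.type || !method.controller) {
    throw new Error("INVALID_VERIFICATION_METHOD");
  }
  // The method must be listed under the verification relationship that
  // corresponds to the proof's purpose (e.g., "authentication").
  const relation = (doc[proofPurpose] ?? []) as (string | { id: string })[];
  const authorized = relation.some((entry) => {
    const id = typeof entry === "string" ? entry : entry.id;
    return id === methodUrl || id === parsed.hash;
  });
  if (!authorized) throw new Error("INVALID_PROOF_PURPOSE_FOR_VERIFICATION_METHOD");
  return method;
}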
The following section describes security considerations that developers implementing this specification should be aware of in order to create secure software.
Cryptography secures information through the use of secrets. Knowledge of the necessary secret makes it computationally easy to access certain information. The same information can be accessed if a computationally-difficult, brute-force effort successfully guesses the secret. All modern cryptography requires the computationally difficult approach to remain difficult throughout time, which does not always hold due to breakthroughs in science and mathematics. That is to say that cryptography has a shelf life.
This specification plans for the obsolescence of all cryptographic approaches by asserting that whatever cryptography is in use today is highly likely to be broken over time. Software systems have to be able to change the cryptography in use over time in order to continue to secure information. Such changes might involve increasing required secret sizes or modifications to the cryptographic primitives used. However, some combinations of cryptographic parameters might actually reduce security. Given these assumptions, systems need to be able to distinguish different combinations of safe cryptographic parameters, also known as cryptographic suites, from one another. When identifying or versioning cryptographic suites, there are several approaches that can be taken which include: parameters, numbers, and dates.
Parametric versioning specifies the particular cryptographic parameters that are employed in a cryptographic suite. For example, one could use an identifier such as RSASSA-PKCS1-v1_5-SHA1. The benefit to this scheme is that a well-trained cryptographer will be able to determine all of the parameters in play by the identifier. The drawback to this scheme is that most of the population that uses these sorts of identifiers are not well trained and thus will not understand that the previously mentioned identifier is a cryptographic suite that is no longer safe to use. Additionally, this lack of knowledge might lead software developers to generalize the parsing of cryptographic suite identifiers such that any combination of cryptographic primitives becomes acceptable, resulting in reduced security. Ideally, cryptographic suites are implemented in software as specific, acceptable profiles of cryptographic parameters instead.
Numbered versioning might specify a major and minor version number, such as 1.0 or 2.1. Numbered versioning conveys a specific order and suggests that higher version numbers are more capable than lower version numbers. The benefit of this approach is that it replaces complex parameters that less expert developers might not understand with a simpler model that conveys that an upgrade might be appropriate. The drawback of this approach is that it's not clear whether an upgrade is necessary, as software version number increases often don't require an upgrade for the software to continue functioning. This can lead to developers thinking their usage of a particular version is safe, when it is not. Ideally, additional signals would be given to developers that use cryptographic suites in their software that periodic reviews of those suites for continued security are required.
Date-based versioning specifies a particular release date for a specific cryptographic suite. The benefit of a date, such as a year, is that it is immediately clear to a developer if the date is relatively old or new. Seeing an old date might prompt the developer to go searching for a newer cryptographic suite, whereas a parametric or number-based versioning scheme might not. The downside of a date-based version is that some cryptographic suites might not expire for 5-10 years, prompting the developer to go searching for a newer cryptographic suite only to not find one that is newer. While this might be an inconvenience, it is one that results in safer ecosystem behavior.
The following text is currently under debate: It is highly encouraged that cryptographic suite identifiers are versioned using a year designation. For example, the cryptographic suite identifier ecdsa-2022 implies that the suite is probably an acceptable use of ECDSA in the year 2025, but might not be a safe choice in the year 2042. A date-based versioning mechanism, however, is not enough by itself. All cryptographic suites that follow this specification are intended to be registered [ VC-SPECS ] in a way that clearly signals which cryptosuites are deprecated, standardized, or experimental. Cryptosuite registration will follow CFRG, IETF, NIST, FIPS, and safecurves guidance. Use of deprecated suites is expected to throw errors in implementations unless a useUnsafeCryptosuites option is used specifying exactly the unsafe cryptosuite to use. Use of experimental suites is expected to throw errors in implementations unless a useExperimentalCryptosuites option is used specifying exactly the experimental cryptosuite to use.
Modern cryptographic algorithms provide a number of tunable parameters and options to ensure that the algorithms can meet the varied requirements of different use cases. For example, embedded systems have limited processing and memory environments and might not have the resources to generate the strongest digital signatures for a given algorithm. Other environments, like financial trading systems, might only need to protect data for a day while the trade is occurring, while other environments might need to protect data for multiple decades. To meet these needs, cryptographic algorithm designers often provide multiple ways to configure a cryptographic algorithm.
Cryptographic library implementers often take the specifications created by cryptographic algorithm designers and specification authors and implement them such that all options are available to the application developers that use their libraries. This can be due to not knowing which combination of features a particular application developer might need for a given cryptographic deployment. All options are often exposed to application developers.
Application developers that use cryptographic libraries often do not have the requisite cryptographic expertise and knowledge necessary to appropriately select cryptographic parameters and options for a given application. This lack of expertise can lead to an inappropriate selection of cryptographic parameters and options for a particular application.
This specification sets the priority of constituencies to protect application developers over cryptographic library implementers over cryptographic specification authors over cryptographic algorithm designers. Given these priorities, the following recommendations are made:
The guidance above is meant to ensure that useful cryptographic options and parameters are provided at the lower layers of the architecture while not exposing those options and parameters to application developers who may not fully understand the benefits and drawbacks of each option.
The VCWG is seeking guidance on adding language to allow the use of experimental or deprecated cryptography. By default, those features will be disabled and will require the application developer to specifically allow use on a per-cryptographic suite basis. There will be requirements for all implementing libraries to throw errors or warnings when deprecated or experimental options are selected without the appropriate override flags.
Section 5.1 Versioning Cryptography Suites emphasized the importance of providing relatively easy-to-understand information concerning the timeliness of a particular cryptographic suite, while section 5.2 Protecting Application Developers further emphasized minimizing the number of options to be specified. Indeed, section 3. Cryptographic Suites lists requirements for cryptographic suites which include detailed specification of algorithm, transformation, hashing, and serialization. Hence, the name of the cryptographic suite does not need to include all this detail, which implies the parametric versioning mentioned in section 5.1 Versioning Cryptography Suites is neither necessary nor desirable.
The recommended naming convention for cryptographic suites is a string composed of a signature algorithm identifier, separated by a hyphen from an option identifier (if the cryptosuite supports incompatible implementation options), followed by a hyphen and designation of the approximate year that the suite was proposed.
For example, the [ DI-EDDSA ] is based on EdDSA digital signatures, supports two incompatible options based on canonicalization approaches, and was proposed in roughly the year 2022, so it would have two different cryptosuite names: eddsa-rdfc-2022 and eddsa-jcs-2022. Although the [ DI-ECDSA ] is based on ECDSA digital signatures, supports the same two incompatible canonicalization approaches as [ DI-EDDSA ], and supports two different levels of security (128 bit and 192 bit) via two alternative sets of elliptic curves and hashes, it has only two cryptosuite names: ecdsa-rdfc-2019 and ecdsa-jcs-2019. The security level and corresponding curves and hashes are determined from the multi-key format of the public key used in validation.
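The following non-normative TypeScript sketch illustrates how such a name decomposes into its component parts. The `parseCryptosuiteName` helper and its return shape are hypothetical and are not defined by this specification.

```ts
// A hypothetical helper that splits a cryptosuite name such as
// "eddsa-rdfc-2022" or "ecdsa-jcs-2019" into its component parts.
interface CryptosuiteName {
  algorithm: string; // e.g., "eddsa"
  option?: string;   // e.g., "rdfc" or "jcs" (absent if the suite has no options)
  year: number;      // e.g., 2022
}

function parseCryptosuiteName(name: string): CryptosuiteName {
  const parts = name.split("-");
  const year = Number(parts[parts.length - 1]);
  if (parts.length < 2 || Number.isNaN(year)) {
    throw new Error(`Unrecognized cryptosuite name: ${name}`);
  }
  return {
    algorithm: parts[0],
    option: parts.length > 2 ? parts.slice(1, -1).join("-") : undefined,
    year,
  };
}

// Example usage:
// parseCryptosuiteName("eddsa-rdfc-2022")
//   => { algorithm: "eddsa", option: "rdfc", year: 2022 }
```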
Cryptographic agility is a practice by which one designs frequently connected information security systems to support switching between multiple cryptographic primitives and/or algorithms . The primary goal of cryptographic agility is to enable systems to rapidly adapt to new cryptographic primitives and algorithms without making disruptive changes to the systems' infrastructure. Thus, when a particular cryptographic primitive, such as the SHA-1 algorithm, is determined to be no longer safe to use, systems can be reconfigured to use a newer primitive via a simple configuration file change.
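As a non-normative illustration of this style of agility, the sketch below selects a signing primitive through a configuration-driven registry, so that retiring a compromised suite is a configuration change rather than a code change. The registry, configuration shape, and placeholder signers are hypothetical.

```ts
// A minimal sketch of cryptographic agility: the signing primitive is
// selected through configuration rather than hard-coded at each call
// site. The signer implementations are hypothetical placeholders.
type Signer = (data: Uint8Array, privateKey: Uint8Array) => Promise<Uint8Array>;

declare const eddsaSign: Signer; // placeholder for an EdDSA implementation
declare const ecdsaSign: Signer; // placeholder for an ECDSA implementation

const signerRegistry: Record<string, Signer> = {
  "eddsa-rdfc-2022": eddsaSign,
  "ecdsa-rdfc-2019": ecdsaSign,
};

// Retiring a compromised suite is a configuration change, not a code change.
const config = { cryptosuite: "eddsa-rdfc-2022" };

async function signWithConfiguredSuite(
  data: Uint8Array,
  privateKey: Uint8Array
): Promise<Uint8Array> {
  const signer = signerRegistry[config.cryptosuite];
  if (signer === undefined) {
    throw new Error(`Unknown or retired cryptosuite: ${config.cryptosuite}`);
  }
  return signer(data, privateKey);
}
```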
Cryptographic agility is most effective when the client and the server in the information security system are in regular contact. However, when the messages protected by a particular cryptographic algorithm are long-lived, as with Verifiable Credentials, and/or when the client (holder) might not be able to easily recontact the server (issuer), then cryptographic agility does not provide the desired protections.
Cryptographic layering is a practice where one designs rarely connected information security systems to employ multiple primitives and/or algorithms at the same time. The primary goal of cryptographic layering is to enable systems to survive the failure of one or more cryptographic algorithms or primitives without losing cryptographic protection on the payload. For example, digitally signing a single piece of information using the RSA, ECDSA, and Falcon algorithms in parallel provides a mechanism that can survive the failure of two of these three digital signature algorithms. When a particular cryptographic protection is compromised, such as an RSA digital signature using 768-bit keys, systems can still utilize the non-compromised cryptographic protections to continue to protect the information. Developers are urged to take advantage of this feature for all signed content that might need to be protected for a year or longer.
This specification provides for both forms of agility. It provides for cryptographic agility, which allows one to easily switch from one algorithm to another. It also provides for cryptographic layering, which allows one to simultaneously use multiple cryptographic algorithms, typically in parallel, such that any of those used to protect information can be used without reliance on or requirement of the others, while still keeping the digital proof format easy to use for developers.
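The following non-normative example sketches what cryptographic layering can look like in this specification's proof format: the `proof` property carries a set of parallel proofs over the same payload, any one of which can be verified independently. The identifiers and `proofValue` strings below are illustrative placeholders.

```ts
// A hypothetical verifiable credential protected by a proof set: two
// parallel proofs created with different cryptosuites, either of which
// can be verified on its own. All values are illustrative placeholders.
const credentialWithProofSet = {
  "@context": ["https://www.w3.org/ns/credentials/v2"],
  type: ["VerifiableCredential"],
  issuer: "did:example:issuer",
  credentialSubject: { id: "did:example:subject" },
  proof: [
    {
      type: "DataIntegrityProof",
      cryptosuite: "eddsa-rdfc-2022",
      verificationMethod: "did:example:issuer#key-1",
      proofPurpose: "assertionMethod",
      proofValue: "z3FXQ...placeholder",
    },
    {
      type: "DataIntegrityProof",
      cryptosuite: "ecdsa-rdfc-2019",
      verificationMethod: "did:example:issuer#key-2",
      proofPurpose: "assertionMethod",
      proofValue: "zQeVb...placeholder",
    },
  ],
};
```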
At times, it is beneficial to transform the data being protected during the cryptographic protection process. Such "in-line" transformation can enable a particular type of cryptographic protection to be agnostic to the data format it is carried in. For example, some Data Integrity cryptographic suites utilize RDF Dataset Canonicalization [ RDF-CANON ] which transforms the initial representation into a canonical form [ N-QUADS ] that is then serialized, hashed, and digitally signed. As long as any syntax expressing the protected data can be transformed into this canonical form, the digital signature can be verified. This enables the same digital signature over the information to be expressed in JSON, CBOR, YAML, and other compatible syntaxes without having to create a cryptographic proof for every syntax.
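The following non-normative sketch shows the general shape of this transform-then-hash-then-sign pipeline. The `canonicalize`, `hash`, and `sign` functions are hypothetical stand-ins for an RDF canonicalization library and a cryptography library; real cryptographic suites specify additional steps, such as hashing the proof options.

```ts
// A conceptual sketch of proof creation: transformation (canonicalization),
// hashing, then proof serialization (signing). The three declared functions
// are hypothetical stand-ins for real library implementations.
declare function canonicalize(document: object): Promise<string>; // e.g., RDF-CANON to N-Quads
declare function hash(data: string): Promise<Uint8Array>;         // e.g., SHA-256
declare function sign(digest: Uint8Array, privateKey: Uint8Array): Promise<Uint8Array>;

async function createProofValue(
  document: object,
  privateKey: Uint8Array
): Promise<Uint8Array> {
  const canonicalForm = await canonicalize(document); // 1) Transformation
  const digest = await hash(canonicalForm);           // 2) Hashing
  return sign(digest, privateKey);                    // 3) Proof generation
}
```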
Being able to express the same digital signature across a variety of syntaxes is beneficial because systems often have native data formats with which they operate. For example, some systems are written against JSON data, while others are written against CBOR data. Without transformation, systems that process their data internally as CBOR are required to store the digitally signed data structures as JSON (or vice versa). This leads to double-storing data and can lead to an increased security attack surface if the unsigned representation stored in databases accidentally deviates from the signed representation. By using transformations, the digital proof can live in the native data format to help prevent otherwise undetectable database drift over time.
This specification is designed to avoid requiring the duplication of signed information by utilizing "in-line" data transformations. Application developers are urged to work with cryptographically protected data in the native data format for their application and not to store cryptographic proofs separately from the data being protected. Developers are also urged to regularly confirm that the cryptographically protected data has not been tampered with as it is written to and read from application storage.
Some transformations, such as RDF Dataset Canonicalization [ RDF-CANON ], have mitigations for input data sets that attackers can use to consume excessive processing cycles. This class of attack is called dataset poisoning , and all modern RDF Dataset canonicalizers are required to detect these sorts of bad inputs and halt processing. The test suites for RDF Dataset Canonicalization include such poisoned datasets to ensure that such mitigations exist in all conforming implementations. Generally speaking, cryptographic suite specifications that use transformations are required to mitigate these sorts of attacks, and implementers are urged to ensure that the software libraries they use enforce these mitigations. These attacks are in the same general category as any resource starvation attack, such as HTTP clients that deliberately slow connections, thus starving connections on the server. Implementers are advised to consider these sorts of attacks when implementing defensive security strategies.
The VCWG is seeking feedback on normative language that cryptographic suite implementers need to follow to ensure that they do not utilize data transformation mechanisms that can map to the same output. That is, given different inputs for canonicalization scheme #1 and canonicalization scheme #2, they must not produce the same output value. As an analogy, this is the same requirement for cryptographic hashing mechanisms and is why those schemes are designed to be collision resistant. Cryptographic canonicalization mechanisms have the same requirement. At present, this isn't a problem because the three expected canonicalization schemes — the Universal RDF Dataset Canonicalization Algorithm 2015 [ RDF-CANON ], JSON Canonicalization Scheme [ RFC8785 ], and a theoretical future base-encoding canonicalization — have entirely different outputs.
The VCWG is seeking feedback on whether to explain why modern canonicalization schemes are simpler than the far more complex XML Canonicalization schemes of the early 2000s. Some readers seem to be under the impression that all canonicalization is difficult and has to be avoided at all costs (including costs to application developers). The WG would like to understand if it would be helpful to include a section explaining why some simpler data syntaxes (such as JSON) are easier to canonicalize than more complex data syntaxes (such as XML).
The inspectability of application data has effects on system efficiency and developer productivity. When cryptographically protected application data, such as base-encoded binary data, is not easily processed by application subsystems, such as databases, it increases the effort of working with the cryptographically protected information. For example, a cryptographically protected payload that can be natively stored and indexed by a database will result in a simpler system that:
Similarly, a cryptographically protected payload that can be processed by multiple upstream networked systems increases the ability to properly layer security architectures. For example, if upstream systems do not have to repeatedly decode the incoming payload, it increases the ability for a system to distribute processing load by specializing upstream subsystems to actively combat attacks. While a digital signature needs to always be checked before taking substantive action, other upstream checks can be performed on transparent payloads — such as identifier-based rate limiting, signature expiration checking, or nonce/challenge checking — to reject obviously bad requests.
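As a non-normative sketch of such upstream checks, the function below rejects obviously bad requests before any comparatively expensive signature verification is attempted. The field names are illustrative assumptions.

```ts
// Hypothetical inexpensive pre-checks on a transparent payload, performed
// before the comparatively expensive digital signature verification.
interface IncomingProof {
  expires?: string;   // e.g., "2023-07-25T15:43:00Z" (assumed field name)
  challenge?: string; // nonce previously issued by the verifier
}

function passesCheapChecks(proof: IncomingProof, expectedChallenge?: string): boolean {
  // Reject proofs that have already expired, without touching the signature.
  if (proof.expires !== undefined && Date.parse(proof.expires) < Date.now()) {
    return false;
  }
  // Reject proofs that do not echo the challenge this verifier issued.
  if (expectedChallenge !== undefined && proof.challenge !== expectedChallenge) {
    return false;
  }
  // The full signature verification still happens downstream.
  return true;
}
```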
Additionally, if a developer is not able to easily view data in a system, the ability to easily audit or debug system correctness is hampered. For example, requiring application developers to cut-and-paste base-encoded application data makes development more challenging and increases the chances that obvious bugs will be missed because every message needs to go through a manually operated base-decoding tool.
There are times, however, where the correct design decision is to make data opaque. Data that does not need to be processed by other application subsystems, as well as data that does not need to be modified or accessed by an application developer, can be serialized into opaque formats. Examples include digital signature values, cryptographic key parameters, and other data fields that only need to be accessed by a cryptographic library and need not be modified by the application developer. Data opacity is also appropriate when the underlying subsystem shields the application developer from the complexity of the opaque data, such as a database that performs encryption at rest. In that case, the application developer continues to develop against transparent application data formats while the database manages the complexity of encrypting and decrypting the application data to and from long-term storage.
This specification strives to provide an architecture where application data remains in its native format and is not made opaque, while other cryptographic data, such as digital signatures, are kept in their opaque binary encoded form. Cryptographic suite implementers are urged to consider appropriate use of data opacity when designing their suites, and to weigh the design trade-offs when making application data opaque versus providing access to cryptographic data at the application layer.
Implementers must ensure that a verification method is bound to a particular controller by going from the verification method to the controller document, and then ensuring that the controller document also contains the verification method.
When an implementation is verifying a proof , it is imperative that it verify not only that the verification method used to generate the proof is listed in the controller document , but also that it was intended to be used to generate the proof that is being verified. This process is known as "verification relationship validation".
The process for verification relationship validation is outlined in Section 4.5 Retrieve Verification Method .
This process is used to ensure that cryptographic material, such as a private cryptographic key, is not misused by an application for an unintended purpose. An example of cryptographic material misuse would be if a private cryptographic key meant to be used to issue a Verifiable Credential was instead used to log into a website (that is, for authentication). Not checking a verification relationship is dangerous because the restriction and protection profile for some cryptographic material could be determined by its intended use. For example, some applications could be trusted to use cryptographic material for only one purpose, or some cryptographic material could be more protected, such as through storage in a hardware security module in a data center versus as an unencrypted file on a laptop.
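The following non-normative sketch combines both checks described above: binding the verification method to its controller, and validating that the method is listed under the verification relationship that matches the proof purpose. The document shape and the `dereference` function are hypothetical simplifications of the algorithm in Section 4.5 Retrieve Verification Method.

```ts
// Hypothetical, simplified shapes; real controller documents are more varied.
interface ControllerDocument {
  id: string;
  // Verification relationships map a purpose (e.g., "assertionMethod",
  // "authentication") to verification method IDs or embedded methods.
  [relationship: string]: unknown;
}

// Stand-in for the dereferencing step of Section 4.5.
declare function dereference(controllerId: string): Promise<ControllerDocument>;

async function validateVerificationRelationship(
  verificationMethodId: string,
  controllerId: string,
  proofPurpose: string // e.g., "assertionMethod"
): Promise<void> {
  const controllerDoc = await dereference(controllerId);

  // 1) Binding check: the retrieved document must be the controller
  //    named by the verification method.
  if (controllerDoc.id !== controllerId) {
    throw new Error("Controller document does not match the expected controller.");
  }

  // 2) Verification relationship validation: the verification method must
  //    be listed under the relationship matching the proof purpose.
  const relationship = controllerDoc[proofPurpose];
  const listed =
    Array.isArray(relationship) &&
    relationship.some((entry) =>
      typeof entry === "string"
        ? entry === verificationMethodId
        : (entry as { id?: string }).id === verificationMethodId
    );
  if (!listed) {
    throw new Error("Verification method not authorized for this proof purpose.");
  }
}
```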
When an implementation is verifying a proof , it is imperative that it verify that the proof purpose matches the intended use.
This process is used to ensure that proofs are not misused by an application for an unintended purpose, as this is dangerous for the proof creator. An example of misuse would be if a proof that stated its purpose was for securing assertions in verifiable credentials was instead used for authentication to log into a website. In this case, the proof creator attached proofs to any number of verifiable credentials that they expected to be distributed to an unbounded number of other parties. Any one of these parties could log into a website as the proof creator if the website erroneously accepted such a proof as authentication instead of its intended purpose.
One of the algorithmic processes used by this specification is canonicalization, which is a type of transformation . Canonicalization is the process of taking information that might be expressed in a variety of semantically equivalent ways as input, and expressing all output in a single way, called a "canonical form".
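For example, two JSON objects that differ only in the order of their members are semantically equivalent, and a canonicalization algorithm maps both to a single canonical form. The following non-normative sketch implements a heavily simplified key-sorting canonicalization in the spirit of the JSON Canonicalization Scheme [ RFC8785 ]; it omits the number and string serialization rules that the real scheme requires.

```ts
// A heavily simplified canonicalization: serialize JSON with object
// members sorted by key, recursively. The real JCS [RFC8785] also
// specifies exact number and string serialization rules omitted here.
function simpleCanonicalize(value: unknown): string {
  if (value === null || typeof value !== "object") {
    return JSON.stringify(value); // primitives (for JSON-safe inputs)
  }
  if (Array.isArray(value)) {
    return `[${value.map(simpleCanonicalize).join(",")}]`;
  }
  const obj = value as Record<string, unknown>;
  const members = Object.keys(obj)
    .sort()
    .map((key) => `${JSON.stringify(key)}:${simpleCanonicalize(obj[key])}`);
  return `{${members.join(",")}}`;
}

// Two semantically equivalent inputs yield one canonical form:
simpleCanonicalize({ b: 2, a: 1 }); // => '{"a":1,"b":2}'
simpleCanonicalize({ a: 1, b: 2 }); // => '{"a":1,"b":2}'
```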
The security of a resulting data integrity proof that utilizes canonicalization is highly dependent on the correctness of the algorithm. For example, if a canonicalization algorithm converts two inputs that have different meanings into the same output, then the author's intentions can be misrepresented to a verifier . This can be used as an attack vector by adversaries.
Additionally, if semantically relevant information in an input is not present in the output, then an attacker could insert such information into a message without causing proof verification to fail. This is similar to another transformation that is commonly used when cryptographically signing messages: cryptographic hashing. If an attacker is able to produce the same cryptographic hash from a different input, then the cryptographic hash algorithm is not considered secure.
Implementers are strongly urged to ensure proper vetting of any canonicalization algorithms to be used for transformation of input to a hashing process. Proper vetting includes, at a minimum, association with a peer-reviewed mathematical proof of algorithm correctness; multiple implementations and vetting by experts in a standards setting organization are preferred. Implementers are strongly urged not to invent or use new mechanisms unless they have formal training in information canonicalization and/or access to experts in the field who are capable of producing a peer-reviewed mathematical proof of algorithm correctness.
The following section describes privacy considerations that developers implementing this specification should be aware of in order to create privacy-enhancing software.
When a digitally-signed payload contains data that is seen by multiple verifiers, it becomes a point of correlation. An example of such data is a shopping loyalty card number. Correlatable data can be used for tracking purposes by verifiers, which can sometimes violate privacy expectations. The fact that some data can be used for tracking might not be immediately apparent. Examples of such correlatable data include, but are not limited to, a static digital signature or a cryptographic hash of an image.
It is possible to create a digitally-signed payload that does not contain any correlatable tracking data while also providing some level of assurance that the payload is trustworthy for a given interaction. This characteristic is called unlinkability ; it ensures that no correlatable data are used in a digitally-signed payload while still providing some level of trust, the sufficiency of which must be determined by each verifier .
It is important to understand that not all use cases require or even permit unlinkability. There are use cases where linkability and correlation are required due to regulatory or safety reasons, such as correlating organizations and individuals that are shipping and storing hazardous materials. Unlinkability is useful when there is an expectation of privacy for a particular interaction.
There are at least two mechanisms that can provide some level of unlinkability. The first method is to ensure that no data value used in the message is ever repeated in a future message. The second is to ensure that any repeated data value provides adequate herd privacy such that it becomes practically impossible to correlate the entity that expects some level of privacy in the interaction.
A variety of methods can be used to achieve unlinkability. These methods include ensuring that a message is a single use bearer token with no information that can be used for the purposes of correlation, using attributes that ensure an adequate level of herd privacy, and the use of cryptosuites that enable the entity presenting a message to regenerate new signatures while not compromising the trust in the message being presented.
Selective disclosure is a technique that enables the recipient of a previously-signed message (that is, a message signed by its creator) to reveal only parts of the message without disturbing the verifiability of those parts. For example, one might selectively disclose a digital driver's license for the purpose of renting a car. This could involve revealing only the issuing authority, license number, birthday, and authorized motor vehicle class from the license. Note that in this case, the license number is correlatable information, but some amount of privacy is preserved because the driver's full name and address are not shared.
Not all software or cryptosuites are capable of providing selective disclosure. If the author of a message wishes it to be selectively disclosable by its recipient, then they need to enable selective disclosure on the specific message, and both need to use a capable cryptosuite. The author might also make it mandatory to disclose certain parts of the message. A recipient that wants to selectively disclose partial content of the message needs to utilize software that is able to perform the technique. An example of a cryptosuite that supports selective disclosure is bbs-2022 .
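The following non-normative sketch shows the shape of such an interaction from the holder's point of view: deriving a new, still-verifiable document that reveals only selected claims alongside those the author marked as mandatory. The `deriveProof` function and its pointer-based selection are hypothetical simplifications of what a capable cryptosuite provides.

```ts
// A hypothetical selective disclosure flow. A capable cryptosuite exposes
// something like deriveProof(), which produces a new, still-verifiable
// document revealing only the selected claims.
declare function deriveProof(
  signedDocument: object,
  revealPointers: string[] // JSON-Pointer-like paths to disclose (assumed)
): Promise<object>;

async function presentLicense(signedLicense: object): Promise<object> {
  // The issuer marked these claims as mandatory to disclose.
  const mandatory = ["/issuer", "/credentialSubject/licenseNumber"];
  // The holder adds only what the verifier needs: birthday and vehicle class.
  const chosenByHolder = [
    "/credentialSubject/birthDate",
    "/credentialSubject/vehicleClass",
  ];
  // The derived document omits the holder's name and address, yet still verifies.
  return deriveProof(signedLicense, [...mandatory, ...chosenByHolder]);
}
```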
It is possible to selectively disclose information in a way that does not preserve unlinkability. For example, one might want to disclose the inspection results related to a shipment, which include the shipment identifier or lot number; these might have to be correlatable due to regulatory requirements. However, disclosure of the entire inspection result might not be required, as selectively disclosing just the pass/fail status could be deemed adequate. For more information on disclosing information while preserving privacy, see Section 6.1 Unlinkability .
The following section describes accessibility considerations that developers implementing this specification are urged to consider in order to ensure that their software is usable by people with different cognitive, motor, and visual needs. As a general rule, this specification is used by system software and does not directly expose individuals to information subject to accessibility considerations. However, there are instances where individuals might be indirectly exposed to information expressed by this specification and thus the guidance below is provided for those situations.
This specification enables the expression of dates and times related to the validity period of cryptographic proofs. This information might be indirectly exposed to an individual if a proof is processed and is detected to be outside an allowable time range. When exposing these dates and times to an individual, implementers are urged to take into account cultural norms and locales when representing dates and times in display software. In addition to these considerations, presenting time values in a way that eases the cognitive burden on the individual receiving the information is a suggested best practice.
For example, when conveying the expiration date for a particular set of digitally signed information, implementers are urged to present the time of expiration using language that is easier to understand rather than language that optimizes for accuracy. Presenting the expiration time as "This ticket expired three days ago." is preferred over a phrase such as "This ticket expired on July 25th 2023 at 3:43 PM." The former provides a relative time that is easier to comprehend than the latter time, which requires the individual to do the calculation in their head and presumes that they are capable of doing such a calculation.
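As a non-normative sketch of this practice, the snippet below renders an expiration instant as a locale-aware relative phrase using the standard `Intl.RelativeTimeFormat` API; rounding to whole days is a simplification.

```ts
// Render an expiration instant as a locale-aware relative phrase such as
// "3 days ago", which is easier to comprehend than an absolute timestamp.
// Rounding to whole days is a simplification for illustration.
function describeExpiration(expires: Date, locale: string = "en"): string {
  const msPerDay = 24 * 60 * 60 * 1000;
  const days = Math.round((expires.getTime() - Date.now()) / msPerDay);
  const formatter = new Intl.RelativeTimeFormat(locale, { numeric: "auto" });
  return formatter.format(days, "day"); // e.g., "3 days ago" or "in 2 days"
}

// Example: if run three days after the instant below, this returns "3 days ago".
describeExpiration(new Date("2023-07-25T15:43:00Z"));
```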