The Web of Things is applicable to multiple IoT domains,
including Smart Home, Industrial, Smart City, Retail, and Health
applications, where usage of the W3C WoT standards can simplify the
development of IoT systems that combine devices from multiple vendors
and ecosystems.
During the last charter period of the WoT Working Group several
specifications were developed to address requirements for
these domains.
This Use Case and Requirements Document has been created
to collect new IoT use cases from multiple domains,
contributed by various stakeholders.
These serve as a baseline for identifying requirements
for the standardisation work in the W3C WoT groups.
Status of This Document
This is a preview.
Do not attempt to implement this version of the specification. Do not
reference this version as authoritative in any way.
Instead, see
https://w3c.github.io/wot-use-cases/ for the Editor's Draft.
This section describes the status of this
document at the time of its publication. Other documents may supersede
this document. A list of current W3C publications and the latest revision
of this technical report can be found in the
W3C technical reports index at
https://www.w3.org/TR/.
Publication as an Editor's Draft does not imply endorsement by the W3C
Membership. This is a draft document and may be updated, replaced or
obsoleted by other documents at any time. It is inappropriate to cite this
document as other than work in progress.
This document was produced by a group
operating under the
W3C Patent Policy.
The group does not expect this document to become a W3C Recommendation.
W3C maintains a
public list of any patent disclosures
made in connection with the deliverables of
the group; that page also includes
instructions for disclosing a patent. An individual who has actual
knowledge of a patent which the individual believes contains
Essential Claim(s)
must disclose the information in accordance with
section 6 of the W3C Patent Policy.
The World Wide Web Consortium (W3C) published the Web of Things
(WoT) Architecture and Web of Things (WoT) Thing Description (TD) as
official W3C Recommendations in May 2020. These specifications enable
easy integration across Internet of Things platforms and applications.
The W3C Web of Things (WoT) Architecture [wot-architecture] defines an
abstract architecture; the WoT Thing Description [wot-thing-description]
defines a format to describe a broad spectrum of very different devices,
which may be connected over various protocols.
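As an illustration, the following minimal Thing Description for a hypothetical lamp follows the introductory examples in [wot-thing-description]; it is shown here as a TypeScript object literal, and the identifier, title, and URL are examples only.

```typescript
// A minimal, illustrative Thing Description for a hypothetical lamp.
// Identifier, title, and form URLs are placeholders.
const lampTD = {
  "@context": "https://www.w3.org/2019/wot/td/v1",
  "id": "urn:dev:ops:32473-WoTLamp-1234",
  "title": "MyLampThing",
  "securityDefinitions": {
    "basic_sc": { "scheme": "basic", "in": "header" }
  },
  "security": ["basic_sc"],
  "properties": {
    "status": {
      "type": "string",
      "forms": [{ "href": "https://mylamp.example.com/status" }]
    }
  },
  "actions": {
    "toggle": {
      "forms": [{ "href": "https://mylamp.example.com/toggle" }]
    }
  }
};
```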
During the inception phase of the WoT 1.0 specifications in 2017-2018,
the WoT IG collected use cases and requirements to enable
interoperability of Internet of Things (IoT)
services on a worldwide basis.
The released specifications were created to address the
use cases and requirements for the first version of the WoT specifications,
which are documented at
https://w3c.github.io/wot/ucr-doc/.
The present document gathers and describes new use cases and requirements
for future standardisation work on the WoT standards.
2. Definitions: Terminology, Stakeholders and Actors
2.1 Terminology
The present document uses the same terminology as the WoT Architecture [wot-architecture].
Editor's note
TODO: Define additional terminology.
2.2 Stakeholders
Editor's note
TODO: Describe stakeholder roles and tasks.
device owners
device user
cloud provider
service provider
device manufacturer
gateway manufacturer
identity provider
directory service operator?
3. Use Cases
3.1 Application Domains
3.1.1 Retail
3.1.1.1 Retail
Submitter(s)
David Ezell, Michael Lagally, Michael McCool
Reviewer(s)
<Suggest reviewers>
Tracker Issue ID
Category
<please leave blank>
Class
<please leave blank>
Status
<please leave blank>
Target Users
Retailers, customers, suppliers.
Motivation
Integrating and interconnecting multiple devices into the common retail workflow
(i.e., the transaction log) drastically improves retail business operations at multiple levels.
It brings operational visibility, including consumer behavior and environmental information,
that was not previously possible or viable in a meaningful way.
It drastically speeds up root cause analysis of operational issues and
simplifies the work of retailers.
Expected Devices
Connected sensors, such as people counters, presence sensors, air quality sensors, room occupancy sensors, door sensors.
Cloud services. Video analytics edge services.
Expected Data
Inventory data, supply chain status information, discrete sensor data or data streams.
Dependencies
<List the affected WoT deliverables>
tbd
Description
Falling costs of sensors, communications, and handling of very large volumes of data combined with cloud
computing enable retail business operations with increased operational efficiency, better customer service,
and even increased revenue growth and return on investment.
Accurate forecasts allow retailers to coordinate demand-driven outcomes that deliver connected customer interactions.
They drive optimal strategies in planning, increasing inventory productivity in retail supply chains,
decreasing operational costs and driving customer satisfaction from engagement, to sale, to fulfilment.
Understanding store activity juxtaposed with traditional information streams can boost worker and consumer safety,
improve compliance with work safety regulations, enhance system and site security, and improve worker efficiency
by providing real-time visibility into worker status, location, and work environment.
Variants
Use edge computing, in particular video analytics, in combination with IoT devices to deliver an enhanced
customer experience, better manage inventory, or otherwise improve the store workflow.
Security Considerations
In retail, replay attacks can cause monetary loss; customers may be incorrectly charged or over-charged.
To avoid replay attacks, "Things" should attach a sequence number and a digital signature to each message.
Servers ("Things" or "Cloud") should verify the signature and reject duplicated messages.
"Things" relying on electronic payments must comply with PCI-DSS requirements.
"Things" must never store credit card information.
Customer satisfaction and trust depend on availability, so attacks such as Denial-of-Service (DoS) need to be prevented or mitigated.
To mitigate DoS attacks, implement "Things" with early DoS detection and an automated system that notifies the controlling unit of the problem.
An IP allow list can be part of the early DoS detection system.
Make sure the network perimeter is defended with up-to-date firewall software.
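The replay-attack mitigation described above (a per-message sequence number plus a digital signature) could look roughly like the following minimal sketch. It uses Node's built-in crypto module with an Ed25519 key pair; the message fields and the duplicate check are illustrative assumptions, not part of any WoT specification.

```typescript
import { generateKeyPairSync, sign, verify } from "crypto";

// Illustrative only: in practice the Thing holds the private key and the
// server holds the corresponding public key.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

interface SignedMessage {
  deviceId: string;
  seq: number;       // monotonically increasing per-message sequence number
  payload: unknown;  // e.g. the transaction record
  signature: string; // base64 signature over deviceId, seq, and payload
}

// Thing side: sign each outgoing message together with its sequence number.
function signMessage(deviceId: string, seq: number, payload: unknown): SignedMessage {
  const data = Buffer.from(JSON.stringify({ deviceId, seq, payload }));
  const signature = sign(null, data, privateKey).toString("base64");
  return { deviceId, seq, payload, signature };
}

// Server side: track the highest sequence number seen per device and reject
// anything that is not strictly newer or whose signature does not verify.
const lastSeq = new Map<string, number>();

function acceptMessage(msg: SignedMessage): boolean {
  const data = Buffer.from(JSON.stringify({ deviceId: msg.deviceId, seq: msg.seq, payload: msg.payload }));
  const valid = verify(null, data, publicKey, Buffer.from(msg.signature, "base64"));
  const fresh = msg.seq > (lastSeq.get(msg.deviceId) ?? -1);
  if (valid && fresh) {
    lastSeq.set(msg.deviceId, msg.seq);
    return true;  // process the message
  }
  return false;   // duplicated, replayed, or tampered message
}
```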
Privacy Considerations
As a general rule, personal consumer information should not be stored.
That is especially true in the retail industry where a security breach could cause financial, reputation, and brand damage.
If personal information, or information that can identify a consumer, is to be stored,
it should be stored only to conduct business and with the explicit acknowledgement of the consumer.
WoT vendors and integrators should always have a privacy policy and make it easily available.
By default, devices should not capture or store personal data:
unless the consumer has explicitly allowed data capture and storage, avoid doing it.
Gaps
<Describe any gaps that are not addressed in the current WoT work items>
tbd
Existing standards
<Provide links to relevant standards that are relevant for this use case>
tbd
Comments
3.1.2 Audio/Video
3.1.2.1 Media Use Case Information Bucket
This document is not a full use case description. It is rather a collection of
thoughts and ideas intended to capture information, provide references, and serve as a common basis for discussion.
The intention is to trigger new ideas and collect them in a single document at this point.
The document is a work in progress.
Submitter(s)
<Put your name here>
Reviewer(s)
<Suggest reviewers>
Tracker Issue ID
<please leave blank>
Category
<please leave blank>
Class
<please leave blank>
Status
<please leave blank>
Target Users
<List all stakeholders that are involved in the use case from the following list:
device owners
device user
cloud provider
service provider
device manufacturer
gateway manufacturer
network operator (potentially transparent for WoT use cases)
identity provider
directory service operator>
Motivation
<Provide a description of the problem that is solved by the use case and a reason why this use case is important for the users>
Expected Devices
<List the target devices, e.g. as a sensor, solar panel, air conditioner>
Expected Data
<List the type of expected data, e.g. weather and climate data, medical conditions, machine sensors, vehicle data>
Dependencies - Affected WoT deliverables and/or work items
<List the affected WoT deliverables that have to be changed to enable this use case>
Description
Sync of media across different devices:
people in different homes watch the same content at the same time. Conversation about content.
Multi-room sync playback
Multi-camera angles
Voice control of a media playback device (integration of smart speakers from multiple vendors)
Describe proprietary (vendor-specific) device control interfaces to control media playback on a TV set.
(Proprietary implementations exist; is an open protocol proposed?)
Variants
<Describe possible use case variants, if applicable>
Gaps
<Describe any gaps that are not addressed in the current WoT standards and building blocks>
Existing standards & related information
<Provide links to relevant standards that are relevant for this use case>
There are many TV standards and this would be a long list. It is better to leave this blank unless a very specific standard provides more insight.
Comments
Further information and resources
NHK Hybridcast updates:
https://www.w3.org/2011/webtv/wiki/images/d/d1/MediaTimedEventsInHybridcast_TPAC20190916.pdf
MediaTimedEvents in Hybridcast:
https://www.w3.org/2011/webtv/wiki/images/d/d1/MediaTimedEventsInHybridcast_TPAC20190916.pdf
BBC Moto GP at Home:
https://2immerse.eu/motogp-at-home/
3.1.2.2 Home WoT devices work according to TV programs
Motivation
Many home devices, such as TVs, cleaning robots, and home lighting, connect to an IP network.
When you watch a program, these devices should cooperate to enhance your experience.
If the cleaning robot makes a loud noise while you are watching a TV program, it will hinder viewing.
Also, even if you set up a theater environment with smart lights, it is troublesome to operate them yourself each time the TV program switches.
Therefore, having WoT devices operate in accordance with the TV program being viewed improves the user experience.
WoT devices work according to TV programs:
the cleaning robot stops at an important scene,
the color of the smart lights changes according to the TV program,
and the Smart Mirror notifies the user that a favorite TV show is about to start.
Expected Devices
Hybridcast TV
Hybridcast Connect application (in a smartdevice such as smartphone)
Cleaning Robot
Smart Light (such as Philips Hue)
Smart Mirror
Expected Data
The trigger value of the scene of the TV program.
The Hybridcast Connect application knows the Thing Descriptions of the devices in the home (Discovery?).
Dependencies
<List the affected WoT deliverables>
Description
Home smart devices behave according to TV programs.
Hybridcast applications on the TV emit information about TV programs for smart home devices.
(Hybridcast is a Japanese Integrated Broadcast-Broadband system. Hybridcast applications are HTML5 applications that work on Hybridcast TVs.)
The Hybridcast Connect application receives the information and controls the smart home devices.
[Figure: scenario_nhk]
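A minimal sketch of the Hybridcast Connect side, assuming the node-wot Scripting API, is shown below. The Thing Description URLs, the "programScene" event, and the smart-light "setColor" action are hypothetical names used only to illustrate the flow.

```typescript
import { Servient } from "@node-wot/core";
import { HttpClientFactory } from "@node-wot/binding-http";

const servient = new Servient();
servient.addClientFactory(new HttpClientFactory());

servient.start().then(async (WoT) => {
  // Hypothetical TD locations for the Hybridcast TV and a smart light.
  const tvTD = await WoT.requestThingDescription("http://tv.local/hybridcast");
  const lightTD = await WoT.requestThingDescription("http://light.local/hue");

  const tv = await WoT.consume(tvTD);
  const light = await WoT.consume(lightTD);

  // React to scene triggers emitted by the Hybridcast application on the TV.
  await tv.subscribeEvent("programScene", async (data) => {
    const scene = (await data.value()) as unknown as { mood: string };
    // Adjust the smart light color according to the current program scene.
    await light.invokeAction("setColor", {
      color: scene.mood === "night" ? "warm" : "daylight"
    });
  });
});
```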
Variants
<Describe possible use case variants, if applicable>
Gaps
<Describe any gaps that are not addressed in the current WoT work items>
Existing standards
Hybridcast and Hybridcast Connect: a Japanese Integrated Broadcast-Broadband system,
HbbTV,
ATSC 3.0,
...etc.
3.1.3 Agriculture
3.1.3.1 Greenhouse Horticulture
Target Users
Agricultural corporation, farmer, manufacturers (sensors, other facilities), cloud provider
Motivation
Greenhouse horticulture controlled by computers can create an optimal environment for growing plants. This makes it possible to improve productivity and to ensure stable vegetable production throughout the year, independent of the weather. This approach is the result of research on plant growth in the 1980s. For example, for tomatoes, switching to hydroponics and optimizing the temperature, humidity, and CO2 concentration required for photosynthesis resulted in a fivefold increase in yield. The growth conditions for other vegetables have also been investigated, and this control approach is now applied to them as well.
Expected Devices
Sensors (temperature, humidity, brightness, UV brightness, air pressure, and CO2).
Heater, CO2 generator, and a sunlight-shielding sheet that can be opened and closed.
Expected Data
Sensor values that clarify the gap between the conditions for maximizing photosynthesis and the current environment.
The following sensor values at one or more points in the greenhouse: temperature, humidity, brightness, and CO2.
Dependencies
WoT Architecture, WoT Thing Description
Description
Sensors and facilities such as the heater, the CO2 generator, and the sheet controller are connected to the gateway via wired or wireless networks. The gateway is connected to the cloud via the Internet. All sensors and facilities can be accessed and controlled from the cloud.
To maximize photosynthesis, the temperature, CO2 concentration, and humidity in the greenhouse are the main controlled variables. When sunlight comes in the morning and the CO2 concentration inside decreases, the application turns on the CO2 generator to keep the concentration above 400 ppm, the same as the outside air. The temperature in the greenhouse is adjusted by controlling the heater and the sunlight-shielding sheet.
The cloud gathers all sensor data and the status of the facilities. The application derives the best configuration for the region where the greenhouse is located.
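A possible control loop for the CO2 concentration, sketched below with the node-wot Scripting API, shows how the application could keep the level above 400 ppm. The Thing Description URLs and the "co2", "on", and "off" affordance names are assumptions made for illustration.

```typescript
import { Servient } from "@node-wot/core";
import { HttpClientFactory } from "@node-wot/binding-http";

const CO2_TARGET_PPM = 400; // keep the greenhouse at the same level as outside air

const servient = new Servient();
servient.addClientFactory(new HttpClientFactory());

servient.start().then(async (WoT) => {
  // Hypothetical TDs exposed by the greenhouse gateway.
  const sensor = await WoT.consume(
    await WoT.requestThingDescription("http://gateway.local/things/co2-sensor"));
  const generator = await WoT.consume(
    await WoT.requestThingDescription("http://gateway.local/things/co2-generator"));

  // Poll the CO2 sensor periodically and switch the generator accordingly.
  setInterval(async () => {
    const ppm = (await (await sensor.readProperty("co2")).value()) as number;
    if (ppm < CO2_TARGET_PPM) {
      await generator.invokeAction("on");
    } else {
      await generator.invokeAction("off");
    }
  }, 60_000); // check once per minute
});
```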
Variants
<Describe possible use case variants, if applicable>
Gaps
In the case of a wireless connection to the sensors, the gateway should keep the latest value of each sensor, since the wireless connection is sometimes broken. The gateway can create a virtual entity corresponding to the sensor and allow the application to access this virtual entity, which also carries the actual sensor status (e.g. sleeping).
Existing standards
Comments
3.1.4 Smart City
3.1.4.1 Smart City Geolocation
Submitter(s)
Jennifer Lin (GovTech Singapore), Michael McCool
Reviewer(s)
Michael Lagally
Tracker Issue ID
<please leave blank>
Category
<please leave blank>
Class
<please leave blank>
Status
<please leave blank>
Target Users
A Smart City managing mobile devices and sensors,
including passively mobile sensor packs, packages,
vehicles, and autonomous robots, where their location needs to
be determined dynamically.
TODO: Stakeholders/Users need to be further clarified. They include city government, people counting service providers, police, network providers, etc.
Motivation
Smart Cities need to track a large number of mobile devices and sensors.
Location information may be integrated with a logistics or fleet management
system.
A reusable geolocation module with a common network interface is needed
for inclusion in these various applications.
For outdoor applications, GPS
could be used, but indoors other geolocation technologies might be
used, such as WiFi triangulation or vision-based navigation (SLAM).
Therefore, the geolocation information should be technology-agnostic.
NOTE: we prefer the term "geolocation", even indoors, over "localization" to
avoid confusion with language localization.
Expected Devices
One of the following:
A geolocation system on a personal device, such as a smart phone.
A geolocation system to be attached to some other portable device.
A geolocation system attached to a mobile vehicle.
A geolocation system on a payload transported by a vehicle.
A geolocation system on an indoor mobile robot.
Expected Data
Sensor ID
Timestamp of last geolocation
2D location
* typically latitude and longitude
* May also be semantic, e.g. room in a building, exit
Optional:
Semantic location
* Possibly in addition to numerical lat/long location.
Altitude
* May also be semantic, e.g. floor of a building
Heading
Speed
Accuracy information
* Confidence interval, e.g. distance that true location will be within some probability.
* Gaussian covariance matrix
* For each measurement
* For lat/long, may be a single value (see web browser API; radius?)
Geolocation technology (GPS, SLAM, etc).
* Note that multiple technologies might be used together.
* Include parameters such as sample interval, accuracy
For each geolocation technology, data specific to that technology:
* GPS: NMEA type
Historical data
Note: the system should be capable of notifying consumers
of changes in location.
This may be used to implement geofencing by some other system.
This may require additional parameters, such as the
maximum distance that the device may be moved before a notification is
sent, or the maximum amount of time between updates.
Notifications may be sent by a variety of means, some of which may
not be traditional push mechanisms (for example, email might be used).
For geofencing applications, it is not necessary that the device be aware
of the fence boundaries; these can be managed by a separate system.
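To make the expected data concrete, the fields listed above could be gathered into a data structure along the following lines. This TypeScript sketch is illustrative only; the field names and optionality are assumptions, not a proposed vocabulary.

```typescript
// Illustrative shape for a single geolocation reading (not a standardized vocabulary).
interface GeolocationReading {
  sensorId: string;
  timestamp: string;              // e.g. ISO 8601 time of the last geolocation fix
  latitude: number;               // degrees, WGS84
  longitude: number;              // degrees, WGS84
  semanticLocation?: string;      // e.g. "room 101", "north exit"
  altitude?: number;              // meters; a semantic floor label would go elsewhere
  heading?: number;               // degrees from true north
  speed?: number;                 // meters per second
  accuracy?: number;              // e.g. 95% confidence radius in meters
  technology?: "gps" | "wifi" | "slam" | string;  // geolocation technology used
  technologyData?: Record<string, unknown>;       // e.g. NMEA sentence type for GPS
}
```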
Dependencies
node-wot
Description
Smart Cities need to observe the physical locations of
a large number of mobile devices
in use, for example in the context of a Fleet or Logistics Management System, or
to place sensor data on a map in a Dashboard application.
These systems may also include geofencing notifications and mapping
(visual tracking) capabilities.
Variants
A version of the system may log historical data so the past
locations of the devices can be recovered.
Geolocation technologies other than GPS may be used. The payload
may contain additional information specific to the geolocation
technology used. In particular, in indoor situations technologies such
as WiFi triangulation or (V)SLAM may be more appropriate.
Geofencing may be implemented using event notifications and
will require setting of additional parameters such as maximum distance.
Security Considerations
High-resolution timestamps can be used in conjunction with cache manipulation to
access protected regions of memory, as with the SPECTRE exploit. Certain
geolocation APIs and technologies can return high-resolution timestamps which
can be a potential problem. Eventually these issues will be addressed in cache
architecture but in the meantime a workaround is to artificially limit the
resolution of timestamps.
Privacy Considerations
Location is generally considered private information when it is used with a device
that may be associated with a specific person, such as a phone or vehicle, as it
can be used to track that person and infer their activities or who they associate
with (if multiple people are being tracked at once). Therefore APIs to access geographic location
in sensitive contexts are often restricted, and access is allowed only after confirming
permission from the user.
Gaps
There is no standardized semantic vocabulary for representing location data.
Location data can be point data, a path, an area or a volumetric object.
Location information can be expressed using multiple standards,
but the reader of location data in a TD or in data returned by an IoT device
must be able to interpret the location information unambiguously.
There are both dynamic (data returned by a mobile sensor) and static (fixed installation
location) applications for geolocation data. For dynamic location data, some recommended vocabulary
to annotate data schemas would be useful. For static location data, a standard format
for metadata to be included in a TD itself would be useful.
World Geodetic System
* Defines lat/long/alt coordinate system used by most other geolocation standards
* More complicated than you would think (need to deal with deviations of Earth from
a true sphere, gravitational irregularities, position of centroid, etc. etc.)
W3C Basic Geo Vocabulary: https://www.w3.org/2003/01/geo/
* Very basic RDF definitions for lat, long, and alt
* Does not define heading or speed
* Does not define accuracy
* Does not define timestamps
* Uses string as a data model (rather than a number)
Web Geolocation API: https://www.w3.org/TR/geolocation-API/
* W3C Devices and Sensors WG is now handling this API
* There is an updated proposal: https://w3c.github.io/geolocation-sensor/#geolocationsensor-interface
* Data schema of updated proposal is similar to existing API, but all elements are now optional
* Data includes latitude, longitude, altitude, heading, and speed
* Accuracy is included for latitude/longitude (single number in meters, 95% confidence, interpretation
a little ambiguous, but probably intended to be a radius) and altitude, but not for heading or speed.
Open Geospatial Consortium: https://www.ogc.org/
* See http://docs.opengeospatial.org/as/18-005r4/18-005r4.html
* Referring to locations by coordinates
* Has standards defining semantics for identifying locations
* Useful for mapping
ISO
* ISO19111: https://www.iso.org/standard/74039.html
* Standard for referring to locations by coordinates
* Related to the OGC standard above and WGS84
* Various other standards that relate to remote sensing, geolocation, etc.
* Here is an example (see references): https://www.iso.org/obp/ui/fr/#iso:std:iso:ts:19159:-2:ed-1:v1:en
SSN: https://www.w3.org/TR/vocab-ssn/
* Defines "accuracy": https://www.w3.org/TR/vocab-ssn/#SSNSYSTEMAccuracy
* Definition of accuracy is consistent with how it is used in Web Geolocation API
* Also defines related terms Precision, Resolution, Latency, Drift, etc.
Timestamps:
* W3C standard in proposed new web geolocation API: https://w3c.github.io/hr-time/#dom-domhighrestimestamp
* See also related issues such as latency defined in SSN
Note that accuracy and time are issues that apply to all kinds of sensors, not just
geolocation. However, the specific geolocation technology of GPS
is special since it is also a source of accurate time.
Comments
3.1.4.2 Interactive Public Spaces
Submitter(s)
Michael McCool
Reviewer(s)
<Suggest reviewers>
Tracker Issue ID
<please leave blank>
Category
Accessibility
Class
<please leave blank>
Status
<please leave blank>
Target Users
<List all users that are involved in the use case, e.g. device manufacturer, gateway manufacturer, cloud provider>
Motivation
Public spaces provide many opportunities for engaging, social and fun interaction.
At the same time,
preserving privacy while sharing tasks and activities with other people is a major issue
in ambient systems.
These systems may also deliver personalized information in combination
with more general services presented publicly.
Trustworthy discovery of the services and devices available in such environments
is necessary to guarantee personalization and privacy in public-space applications.
Expected Devices
Public spaces supporting personalizable services and device access.
Expected Data
Command and status information transferred between the personal mobile device
application and the public space's services and devices.
Profile data for user preferences.
Dependencies
WoT Thing Description
WoT Discovery
Optional: WoT Scripting API in application on mobile personal device and possibly
in IoT orchestration services in the public space.
Description
Interactive installations such as touch-sensitive or gesture-tracking billboards
may be set up in public places.
Objects that present public information (e.g. a map of a shopping mall)
can use a multimodal interface (built-in or in tandem with the user's mobile devices)
to simplify user interaction and provide faster access.
Other setups can stimulate social activities,
allowing multiple people to enter an interaction simultaneously to work together
towards a certain goal (for a prize)
or just for fun (e.g. play a musical instrument or control a lighting exhibition).
In a context where privacy is an issue
(for example, with targeted/personalized alerts or advertisements),
the user's mobile device acts as a mediator for the services
running on the public network.
This allows the user to receive relevant information in the way she sees fit.
Notifications can serve as triggers for interaction with public devices and services
if the user chooses to do so.
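The mediation flow described above could be sketched as follows: the mobile application queries a Thing Description Directory in the public space (per WoT Discovery), consumes a selected service, and subscribes to personalized notifications. The directory URL, the "MallMap" title, and the "personalAlert" event are hypothetical, and the node-wot Scripting API is assumed.

```typescript
import { Servient } from "@node-wot/core";
import { HttpClientFactory } from "@node-wot/binding-http";

const servient = new Servient();
servient.addClientFactory(new HttpClientFactory());

servient.start().then(async (WoT) => {
  // Hypothetical Thing Description Directory of the public space.
  const response = await fetch("http://venue.example/directory/things");
  const tds = await response.json();  // list of Thing Descriptions

  // Pick a service, e.g. an interactive mall map, by its title (illustrative).
  const mapTD = tds.find((td: { title: string }) => td.title === "MallMap");
  const mallMap = await WoT.consume(mapTD);

  // The mobile device mediates: personalized alerts are received on the phone
  // rather than shown on the public display.
  await mallMap.subscribeEvent("personalAlert", async (data) => {
    console.log("Notification for this user:", await data.value());
  });
});
```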
Variants
The user may have additional mobile devices they want to incorporate into
an interaction, for example a headset acting as an auditory aid or personal speech output
device.
Gaps
Data format describing user interface preferences.
Existing standards
This use case is based on MMI UC 3.1.
Comments
Does not include Requirements section from original MMI use case.
3.1.4.3 Meeting Room Event Assistance
Submitter(s)
Michael McCool
Reviewer(s)
<Suggest reviewers>
Tracker Issue ID
<please leave blank>
Category
Accessibility
Class
<please leave blank>
Status
<please leave blank>
Target Users
<List all users that are involved in the use case, e.g. device manufacturer, gateway manufacturer, cloud provider>
Motivation
Expected Devices
Meeting space supporting personalizable services and device access.
Expected Data
Command and status information transferred between the personal mobile device
application and the meeting space's services and devices.
Profile data for user preferences.
Dependencies
WoT Thing Description
WoT Discovery
Optional: WoT Scripting API in application on mobile personal device and possibly
in IoT orchestration services in the meeting space.
Description
A conference room where a series of meetings will take place.
People can go in and out of the room before,
after and during the meeting.
The door is "touched" by a badge.
An application on the user's mobile device can
activate any available display in the room and can access and
receive notifications from devices and services in the room.
The chair of the meeting is notified about available devices and services
by a dynamically composed graphic animation,
an audio notification, or a mobile phone notification, and
can install applications indicated by links.
The chair of the meeting selects a setup procedure by text amongst the provided links.
These options could be, for example:
photo step-by-step instructions (smartphone, HDTV display, Web site),
audio instructions (MP3 audio guide, room speakers reproduction, HDTV audio)
or RFID enhanced instructions (mobile SmartTag Reader, RFID Reader for smartphone).
The chair of the meeting chooses the room speaker reproduction,
then the guiding service is activated and they start to set up the video projector.
After some attendees arrive,
the chair of the meeting switches to the slide show option and continues to
follow the instructions from the step where they were paused, but with another, more
private modality, for example a smartphone slideshow.
Variants
The user may have additional mobile devices they want to incorporate into
an interaction, for example a headset acting as an auditory aid or personal speech output
device.
Gaps
Data format describing user interface preferences.
Ability to install applications based on links that can access IoT services.
Existing standards
This use case is based on MMI UC 3.2.
Comments
Does not include Requirements section from original MMI use case.
3.1.5 Health
3.1.5.1 Public Health
3.1.5.1.1 Public Health Monitoring
Submitter(s)
Jennifer Lin (GovTech Singapore)
Reviewer(s)
Michael McCool, Michael Lagally
Tracker Issue ID
<please leave blank>
Category
<please leave blank>
Class
<please leave blank>
Status
<please leave blank>
Target Users
Agencies, companies and other organizations in a Smart City with
significant pedestrian traffic in a pandemic situation.
Motivation
A system to monitor the health of people in public places is useful to
control the spread of infectious diseases. In particular, we would like
to identify individuals with temperatures outside the norm (i.e. running
a fever) and then take appropriate action. Actions can include sending
a notification or actuating a security device, such as a gate.
This mechanism should be non-invasive and non-contact since the solution
should not itself contribute to the spread of infectious diseases.
Data may also be aggregated for statistics purposes, for example, to
identify the number of people in an area with elevated temperatures.
This has additional requirements to avoid double-counting individuals.
Expected Devices
One of the following:
A thermal camera.
Face detection (AI) service
* May be on device or be an edge or cloud service.
Optional:
RGB and/or depth camera registered with the thermal camera
Cloud service for data aggregation and analytics.
Some way to identify location (optional)
Note that location might be static and configured during installation,
but might also be based on a localization technology if the device needs to be
portable (for example, if it needs to be set up quickly for an event).
Expected Data
Sensor ID
Timestamp
Number of people identified with a fever in image
Estimated temperature for each person
* May be coarse, low/normal/high
Location
* Latitude, Longitude, Altitude, Accuracy
* Semantic (eg a particular building entrance)
Thermal image
Optional:
RGB image
Depth image
Localization technology (see localization use case)
Integration with local IoT devices: gates, lights, or people (guards)
Bounding boxes around faces of identified people in image(s)
Data that can be used to uniquely identify a face (distinguish it from others)
* Aggregation system may output the total number of unique faces with fever
Note 1: the system should be capable of notifying consumers (such as security personnel)
of fever detections.
This may be email, SMS, or some other mechanism, such as MQTT publication.
Note 2: In all cases where images are captured, privacy considerations apply.
It would also be useful to count unique individuals for statistics purposes,
but not necessarily based on identifying particular people. This is to
avoid counting the same person multiple times.
Dependencies
node-wot
Description
A thermal camera image is taken of a group of people
and an AI service is used to identify faces in the image.
The temperature of each person is then estimated from the registered face;
for greater accuracy, a consistent location for sampling should be used, such
as the forehead.
The estimated temperature is compared to high (and optionally, low)
thresholds and a notification (or other action) is taken if the
temperature is outside the norm.
Additional features may be extracted to identify unique individuals.
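A consumer of such a camera could subscribe to a fever-detection event as sketched below. The "feverDetected" event and its payload fields are hypothetical names chosen to mirror the expected data listed earlier, and the node-wot Scripting API is assumed.

```typescript
import { Servient } from "@node-wot/core";
import { HttpClientFactory } from "@node-wot/binding-http";

// Illustrative payload for one detection, mirroring the "Expected Data" list.
interface FeverDetection {
  sensorId: string;
  timestamp: string;
  peopleWithFever: number;
  temperatures: number[];       // estimated temperature per detected person
  boundingBoxes?: number[][];   // face bounding boxes in the registered image
  location?: { latitude: number; longitude: number; semantic?: string };
}

const servient = new Servient();
servient.addClientFactory(new HttpClientFactory());

servient.start().then(async (WoT) => {
  const camera = await WoT.consume(
    await WoT.requestThingDescription("http://camera.local/thermal"));

  await camera.subscribeEvent("feverDetected", async (data) => {
    const detection = (await data.value()) as unknown as FeverDetection;
    // Forward to security personnel, e.g. via SMS or MQTT (not shown here).
    console.log(`Fever detected at sensor ${detection.sensorId}:`, detection);
  });
});
```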
Variants
Enough information is included in the notification that the specific
person that raised the alarm can be identified. For example, if an RGB
camera is also registered with the thermal camera, then a bounding box may
be indicated via JSON and the RGB image included; or the bounding box could
be actually drawn into the sent image, or the face could be cropped out.
This is useful if, for example, a notification needs to be sent to health
or security workers who need to identify the person in a crowd.
Instead of simply a notification, an action may be taken, such as closing
or refusing to open a gate at the entrance to a building, to prevent sick
employees from entering the building.
To generate statistics, for example to count the number of people with
fevers, unique individuals need to be identified to avoid counting
the same person more than once.
The same sensors might be used to determine the number of people in
an area and send a notification if crowded conditions are detected, in
order to support social distancing behaviour (for instance, supporting
an app that notifies users when a destination is crowded) in a pandemic situation.
Cameras that provide video streams rather than still images.
Security Considerations
Because PII is involved (see below) access should be controlled (only provided to authorized users) and communications protected (encrypted).
Privacy Considerations
Images of people and their health status are involved.
- If these are later made public, the health information of a particular person would be released publicly.
- There is also the possibility that the camera data could be in error, so it should be confirmed with a more accurate sensor.
- This information needs to be treated as PII and protected: only distributed to authorized users, and deleted when no longer needed.
- However, derived aggregate information can be kept and published.
Gaps
Onboarding mechanism for rapidly deploying a large number of devices
Standard vocabulary for geolocation information
Implementations able to handle image payload formats, possibly in combination with non-image data (e.g. images and JSON in a single response)
Video streaming support (if we wish to serve a video stream from the camera instead of still images)
Standard ways to specify notification mechanisms and data payloads for things like SMS and email (in addition to the expected MQTT, CoAP, and HTTP event mechanisms)
Existing standards
Comments
There may be additional requirements for privacy since images of people and their health status are involved.
Different sub-use cases: immediate alerts or actions vs. aggregate data gathering
3.1.5.1.2 Machines in a hospital ICU should be able to speak to one another
Submitter(s)
Taki Kamiya
Reviewer(s)
<Suggest reviewers>
Tracker Issue ID
<please leave blank>
Category
<please leave blank>
Class
<please leave blank>
Status
<please leave blank>
Target Users
device owners
device user
cloud provider
service provider
device manufacturer
gateway manufacturer
identity provider
Motivation
Preventable medical errors may account for more than 100,000 deaths per year in the U.S. alone. These errors are mainly caused by failures of communication, such as a misread chart or the wrong data passed along to machines or staff. Part of the problem could be solved if the machines could speak to one another. Manufacturers have little incentive to make their proprietary code and data easily accessible and processable by their competitors' machines, so the task of middleman falls to the hospital staff. In addition to saving lives, a common framework could result in collecting and recording more clinical data on patients, making it easier to deliver precision medicine.
Expected Devices
<List the target devices, e.g. as a sensor, solar panel, air conditioner>
Expected Data
<List the type of expected data, e.g. weather and climate data, medical conditions, machine sensors, vehicle data>
Dependencies - Affected WoT deliverables and/or work items
<List the affected WoT deliverables that have to be changed to enable this use case>
Description
Physiological Closed-Loop Control (PCLC) devices are a group of emerging technologies, which use feedback from physiological sensor(s) to autonomously manipulate physiological variable(s) through delivery of therapies conventionally delivered by clinician(s).
Clinical scenario without PCLC: an elderly female with end-stage renal failure was given a standard insulin infusion protocol to manage her blood glucose, but no glucose was provided. Her blood glucose dropped to 33 mg/dL, then rebounded to over 200 mg/dL after glucose was given. This scenario has not changed for decades.
The desired state with PCLC implemented in an ICU. A patient is receiving an IV insulin infusion and is having the blood glucose continuously monitored. The infusion pump rate is automatically adjusted according to the real-time blood glucose levels being measured, to maintain blood glucose values in a target range. If the patient’s glucose level does not respond appropriately to the changes in insulin administration, the clinical staff is alerted.
Medical devices do not interact with each other autonomously (monitors, ventilators, IV pumps, etc.). Contextually rich data is difficult to acquire. Technologies and standards to reduce medical errors and improve efficiency have not been implemented in theater or at home.
In recent years, researchers have made progress developing PCLC devices for mechanical ventilation, anesthetic delivery applications, and so on. Despite these promises and potential benefits, there has been limited success in translating PCLC devices from bench to bedside. A key challenge to bringing PCLC devices to the level required for clinical trials in humans is risk management to ensure device reliability and safety.
The United States Food and Drug Administration (FDA) classifies new hazards that might be introduced by PCLC devices into three categories. Besides clinical factors (e.g. sensor validity and reliability, inter- and intra-patient physiological variability) and usability/human factors (e.g. loss of situational awareness, errors, and lapses in operation), there are also engineering challenges including robustness, availability, and integration issues.
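For illustration only (not a clinically validated design), the desired closed-loop interaction could be expressed with WoT affordances roughly as follows. The "glucose" property, the "setRate" action, the threshold values, and the TD URLs are hypothetical, and the node-wot Scripting API is assumed.

```typescript
import { Servient } from "@node-wot/core";
import { HttpClientFactory } from "@node-wot/binding-http";

// Hypothetical target range for blood glucose, in mg/dL (illustrative only).
const LOW = 80, HIGH = 180;

const servient = new Servient();
servient.addClientFactory(new HttpClientFactory());

servient.start().then(async (WoT) => {
  const monitor = await WoT.consume(
    await WoT.requestThingDescription("http://icu.local/things/glucose-monitor"));
  const pump = await WoT.consume(
    await WoT.requestThingDescription("http://icu.local/things/insulin-pump"));

  // Continuously observe the glucose level and adjust the infusion rate.
  await monitor.observeProperty("glucose", async (data) => {
    const glucose = (await data.value()) as number;
    if (glucose > HIGH) {
      await pump.invokeAction("setRate", { unitsPerHour: 2.0 }); // illustrative value
    } else if (glucose < LOW) {
      await pump.invokeAction("setRate", { unitsPerHour: 0.0 });
      // Alert clinical staff if the patient does not respond to the adjustment.
      console.warn("Glucose below target range; notifying clinical staff");
    }
  });
});
```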
Variants
The US military developed an ONR SBIR prototype (Automated Critical Care System) and found the following issues:
No plug and play, i.e. an O2 Sat sensor cannot be swapped with one from another manufacturer.
No standardization of data outputs for devices to interoperate.
The exact make/model must be used to replace a faulty device, or the system will not work.
Security Considerations
Security considerations for interconnected and dynamically composable medical systems are critical not only because laws such as HIPAA mandate it, but also because security attacks can have serious safety consequences for patients. The systems need to support automatic verification that the system components are being used as intended in the clinical context, that the components are authentic and authorized for use in that environment, that they have been approved by the hospital’s biomedical engineering staff and that they meet regulatory safety and effectiveness requirements.
For security and safety reasons, ICE (F2761-09) medical devices never interact directly with each other. All interaction is coordinated and controlled via the applications.
While transport-level security such as TLS provides reasonable protection against external attackers, it does not provide mechanisms for granular access control over data streams happening within the same protected link. Transport-level security is also not sufficiently flexible to balance security and performance. Another issue with widely used transport-level security solutions is the lack of support for multicast.
Privacy Considerations
<Describe any issues related to privacy; if there are none, say "none" and justify>
Gaps
Multicast support. It has proven useful for efficient and scalable discovery and information exchange in industrial systems.
Medical Devices and Medical Systems - Essential safety requirements for equipment comprising the patient-centric integrated clinical environment (ICE) - Part 1: General requirements and conceptual model.
The idea behind ICE is to allow medical devices that conform to the ICE standard, either natively or using an adapter, to interoperate with other ICE-compliant devices regardless of manufacturer.
MDIRA Version 1.0 provides requirements and implementation guidance for MDIRA-compliant systems focused on trauma and critical care in austere environments.
The Johns Hopkins University Applied Physics Laboratory (JHU-APL) led a research project in collaboration with the US military to develop a framework of autonomous / closed-loop prototypes for military health care, which are dual use for the civilian healthcare system.
Comments
3.1.5.2 Private Health
3.1.5.2.1 Health Notifiers
Submitter(s)
Michael McCool
Reviewer(s)
<Suggest reviewers>
Tracker Issue ID
<please leave blank>
Category
Accessibility
Class
<please leave blank>
Status
<please leave blank>
Target Users
<List all users that are involved in the use case, e.g. device manufacturer, gateway manufacturer, cloud provider>
End user with a health problem they wish to monitor.
Health services provider (doctor, nurse, paramedic, etc).
Motivation
In critical situations regarding health,
like a medical emergency,
media multimodality may be the most effective way to communicate alerts.
When the goal is to monitor the health evolution of a
person in both emergency and non-emergency contexts,
access via networked devices may be the most effective way to collect data and
monitor a patient's status.
Expected Devices
Medical facilities supporting device and service access.
Expected Data
Command and status information transferred between the personal mobile device
application and the medical facility's services and devices.
Profile data for user preferences.
Dependencies
WoT Thing Description
WoT Discovery
Optional: WoT Scripting API in application on mobile personal device and possibly
in IoT orchestration services.
Description
In medical facilities,
a system may provide multiple options to control sensor operations
by voice or gesture ("start reading my blood pressure now").
These interactions may be mediated by an application installed into a smartphone.
The system integrates information from multiple sensors
(for example, blood pressure and heart rate);
reports medical sensor readings periodically (for example, to a remote medical facility)
and sends alerts when unusual readings/events are detected.
Variants
The user may have additional mobile devices they want to incorporate into
an interaction, for example a headset acting as an auditory aid or personal speech output
device.
Gaps
Data format describing user interface preferences.
Ability to install applications based on links that can access IoT services.
Existing standards
This use case is based on MMI UC 3.2.
Comments
Does not include Requirements section from original MMI use case.
3.1.6 Manufacturing
3.1.6.1 Big Data for Manufacturing
Submitter(s)
Michael Lagally
Reviewer(s)
<Suggest reviewers>
Tracker Issue ID
Category
<please leave blank>
Class
<please leave blank>
Status
<please leave blank>
Target Users
Device owners, cloud provider.
Motivation
Production lines for industrial manufacturing consist of multiple machines, where each machine incorporates sensors for various values.
A failure of a single machine can cause defective products or a stop of the entire production.
Big data analysis makes it possible to identify behavioral patterns across multiple production lines of an entire production plant and across multiple plants.
The results of this analysis can be used for optimizing consumption of raw materials, checking the status of production lines and plants and predicting and preventing fault conditions.
Expected Devices
Various sensors, e.g. temperature, light, humidity, vibration, noise, air quality.
Expected Data
Discrete sensor values, such as temperature, light, humidity, vibration, noise, air quality readings.
The data can be delivered periodically or on demand.
Dependencies
Thing Description: groups of devices, aggregation / composition mechanism, thing templates
Discovery/Onboarding: Onboarding of groups of devices
Description
A company owns multiple factories, which contain multiple production lines.
Example devices are production-line machines and environment sensors.
These devices collect data from multiple sensors and transmit this information to the cloud. Sensor data is stored in the cloud and can be visualized and analyzed using machine learning / AI.
The cloud service allows managing single devices and groups of devices.
Combining the data streams from multiple devices provides an easy overview of the state of all connected devices in the user's realm.
In many cases there are groups of devices of the same kind, so the aggregation of data across devices can serve to identify anomalies or to predict impending outages.
The cloud service can also help to identify abnormal conditions.
For this purpose, the user can define a set of rules that raise alerts towards the user or trigger actions on devices.
This enables the early detection of pending problems and reduces the risk of machine outages, quality problems, or threats to the environment or human life.
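The user-defined rules mentioned above could be expressed, for example, as simple threshold rules evaluated by the cloud service over aggregated sensor readings. The rule format below is purely illustrative and is not defined by any WoT specification.

```typescript
// Illustrative rule format for alerting on aggregated sensor data.
interface Rule {
  deviceGroup: string;                   // e.g. "line-3/vibration-sensors"
  property: string;                      // e.g. "vibration"
  condition: (value: number) => boolean; // threshold or other predicate
  action: "alert" | "stopMachine";
}

const rules: Rule[] = [
  { deviceGroup: "line-3/vibration-sensors", property: "vibration",
    condition: (v) => v > 4.5, action: "alert" },
  { deviceGroup: "line-3/temperature-sensors", property: "temperature",
    condition: (v) => v > 95, action: "stopMachine" },
];

// Evaluate an incoming reading against all matching rules.
function evaluate(group: string, property: string, value: number): void {
  for (const rule of rules) {
    if (rule.deviceGroup === group && rule.property === property && rule.condition(value)) {
      if (rule.action === "alert") {
        console.log(`Alert: ${group}/${property} = ${value}`);
      } else {
        console.log(`Stopping machine in ${group} (value ${value})`);
      }
    }
  }
}
```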
Variants
<Describe possible use case variants, if applicable>
Gaps
<Describe any gaps that are not addressed in the current WoT work items>
Existing standards
<Provide links to relevant standards that are relevant for this use case>
Comments
See also Digital Twin use case.
3.1.7 Multi-Vendor System Integration
3.1.7.1 Out of the box interoperability
Submitter(s)
Michael Lagally
Reviewer(s)
All WoT members, specifically Sebastian Kaebisch, Michael McCool, Ege Korkan, Zoltan Kis, Takuki Kamiya, Ryuichi Matsukura, Kunihiko Toumura, Michael Koster.
Tracker Issue ID
<please leave blank>
Category
<please leave blank>
Class
<please leave blank>
Status
<please leave blank>
Target Users
device owner
service provider
cloud provider
device manufacturer
gateway manufacturer
Motivation
As a device owner, I want to know whether a device will work with my system before I purchase it, to avoid wasting money.
- Installers of IoT devices want to be able to determine if a given device will be compatible with the rest of their installed systems and whether they will have access to its data and affordances.
As a developer, I want TDs to be as simple as possible so that I can efficiently develop them.
- Here "simple" should relate to the end goal, "efficiently develop"; that is, TDs should be straightforward for the average developer to complete and validate.
As a developer, I want to be able to validate that a Thing will be compatible with a Consumer without having to test against every possible consumer.
As a cloud provider I want to onboard, manage and communicate with as many devices as possible out of the box.
This should be possible without device specific customization.
Dependencies - Affected WoT deliverables and/or work items
WoT Profile, WoT Thing Description
Description
As a consumer of devices I want to be able to process data from any device that conforms to a class of devices.
I want to have a guarantee that I'm able to correctly interact with all affordances of the Thing that complies with this class of devices.
Behavioral ambiguities between different implementations of the same description should not be possible.
I want to integrate it into my existing scenarios out of the box, i.e. with close to zero configuration tasks.
Variants
N/A
Gaps
<Describe any gaps that are not addressed in the current WoT standards and building blocks>
Existing standards
various.
Comments
A strawman proposal for a profile specification has been submitted at: https://github.com/w3c/wot-profile
An initial set of requirements is available here:
https://github.com/w3c/wot-profile/blob/master/REQUIREMENTS/requirements.md
Recommendations for commonalities and interoperability profiles of IoT platforms:
https://european-iot-pilots.eu/wp-content/uploads/2018/11/D06_02_WP06_H2020_CREATE-IoT_Final.pdf
3.1.7.2 Digital Twin
Submitter(s)
Michael Lagally
Reviewer(s)
<Suggest reviewers>
Tracker Issue ID
Category
<please leave blank>
Class
<please leave blank>
Status
<please leave blank>
Target Users
Device owners, cloud provider.
Motivation
A digital twin is the virtual representation of a physical asset such as a machine, a vehicle, a robot, or a sensor.
Using a digital twin allows businesses to analyze their physical assets to troubleshoot in real time, predict future problems, minimize downtime, and perform simulations to create new business opportunities.
A digital twin may also be called a twin or a shadow. Digital twin technology may be referred to as device virtualization.
Digital twins can be located in the edge or in the cloud.
Expected Devices
Various devices such as sensors, machines, vehicles, production lines, industry robots.
Digital twin platforms at the edge or in the cloud.
Expected Data
Machine status information, discrete sensor data or data streams.
Dependencies
WoT Architecture
WoT Thing Description
WoT Profile
WoT Scripting?
Description
The user benefits from using digital twins with the following scenarios:
Better visibility: Continually view the operations of the machines or devices, and the status of their interconnected systems.
Accurate prediction: Retrieve the future state of the machines from the digital twin model by using modeling.
What-if analysis: Easily interact with the model to simulate unique machine conditions and perform what-if analysis using well-designed interfaces.
Documentation and communication: Use of the digital twin model helps to understand, document, and explain the behavior of a specific machine or a collection of machines.
Integration of disparate systems: Connect with back-end applications related to supply chain operations such as manufacturing, procurement, warehousing, transportation, or logistics.
Variants
Virtual Twin
The virtual twin is a representation of a physical device or an asset. A virtual twin uses a model that contains observed and desired attribute values and also uses a semantic model of the behavior of the device.
Intermittent connectivity: An application may not be able to connect to the physical asset. In such a scenario, the application must be able to retrieve the last known status and to control the operation states of other assets.
Protocol abstraction: Typically, devices use a variety of protocols and methods to connect to the IoT network. From a user's perspective, this complexity should not affect other business applications such as an enterprise resource planning (ERP) application.
Business rules: The user can specify the normal operating range of a property in a semantic model.
Business rules can be declaratively defined and actions can be automatically invoked in the edge or on the device.
Example: In a fleet of connected vehicles, the user monitors a collection of operating parameters, such as fuel level, location, speed, and others. The semantics-based virtual twin model enables the user to decide whether the operating parameters are in the normal range. In out-of-range conditions, the user can take appropriate actions.
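A minimal sketch of the intermittent-connectivity variant, assuming the node-wot Scripting API: the twin caches the last value observed from the physical asset and serves it when the asset is unreachable. The "fuelLevel" property and the TD URLs are illustrative.

```typescript
import { Servient } from "@node-wot/core";
import { HttpServer, HttpClientFactory } from "@node-wot/binding-http";

const servient = new Servient();
servient.addServer(new HttpServer());
servient.addClientFactory(new HttpClientFactory());

servient.start().then(async (WoT) => {
  // Consume the physical asset (a connected vehicle; illustrative TD URL).
  const vehicle = await WoT.consume(
    await WoT.requestThingDescription("http://vehicle.local/td"));

  let lastKnownFuelLevel: number | null = null; // cached last-known status

  // Expose the twin with the same property; it answers even when the vehicle is offline.
  const twin = await WoT.produce({
    title: "VehicleTwin",
    properties: { fuelLevel: { type: "number", readOnly: true } }
  });
  twin.setPropertyReadHandler("fuelLevel", async () => lastKnownFuelLevel);
  await twin.expose();

  // Keep the cache updated while the vehicle is reachable.
  await vehicle.observeProperty("fuelLevel", async (data) => {
    lastKnownFuelLevel = (await data.value()) as number;
  });
});
```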
Predictive Twin
In a predictive twin, the digital twin implementation builds an analytical or statistical model for prediction by using a machine-learning technique. It need not involve the original designers of the machine. It is different from the physics-based models that are static, complex, do not adapt to a constantly changing environment, and can be created only by the original designers of the machine.
A data analyst can easily create a model based on external observation of a machine and can develop multiple models based on the user’s needs.
The model considers the entire business scenario and generates contextual data for analysis and prediction.
When the model detects a future problem or a future state of a machine, the user can prevent it or prepare for it.
The user can use the predictive twin model to determine trends and patterns from the contextual machine data. The model helps to address business problems.
Twin Projections
In twin projections, the predictions and the insights integrate with back-end business applications, making IoT an integral part of business processes.
When projections are integrated with a business process, they can trigger a remedial business workflow.
Prediction data offers insights into the operations of machines. Projecting these insights into the back-end applications infrastructure enables business applications to interact with the IoT system and transform into intelligent systems.
Gaps
<Describe any gaps that are not addressed in the current WoT work items>
WoT does not define a way to describe the behavior of a thing to use for a simulation.
Existing standards
<Provide links to relevant standards that are relevant for this use case>
Comments
3.1.7.3 Cross Protocol Interworking
Submitter(s)
Michael Lagally
Reviewer(s)
Ege Korkan, Michael McCool, Michael Koster, Sebastian Käbisch
Tracker Issue ID
Category
<please leave blank>
Class
<please leave blank>
Status
<please leave blank>
Target Users
Device owners, cloud providers.
Motivation
In smart city, home, and industrial scenarios, various devices are connected to a common network. These devices implement different protocols. To enable interoperability, an "agent" needs to communicate across different protocols. Platforms for this agent can be edge devices, gateways, or cloud services.
Interoperability across protocols is a must for all user scenarios that integrate devices from more than one protocol.
Expected Devices
Various sensors, e.g. temperature, light, humidity, vibration, noise, air quality, edge devices, gateways, cloud servers and services.
Expected Data
Discrete sensor values, such as temperature, light, humidity, vibration, noise, air quality readings.
A/V streams.
The data can be delivered periodically or on demand.
Dependencies
WoT Profiles.
Description
There are multiple user scenarios that are addressed by this use case.
An example in the smart home environment is the automatic control of lamps, air conditioners, heating, and window blinds in a household
based on sensor data, e.g. sunlight, human presence, calendar and clock, etc.
In an industrial environment individual actuators and production devices use different protocols.
Examples include MQTT, OPC-UA, Modbus, Fieldbus, and others.
Gathering data from these devices, e.g. to support digital twin or big data use cases, requires an "Agent" to bridge across these protocols.
To provide interoperability and to reduce the implementation complexity of this agent, a common set of (minimum and maximum)
requirements needs to be supported by all interoperating devices.
A smart city environment is similar to the industrial scenario in terms of device interoperability. The devices differ, however;
they include smart traffic lights, traffic monitoring, people counters, and cameras.
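A sketch of such an agent with node-wot (assumed here as the implementation): one Servient is configured with client factories for several protocol bindings and can then consume HTTP, CoAP, and MQTT devices through the same Scripting API. The TD URLs and affordance names are illustrative.

```typescript
import { Servient } from "@node-wot/core";
import { HttpClientFactory } from "@node-wot/binding-http";
import { CoapClientFactory } from "@node-wot/binding-coap";
import { MqttClientFactory } from "@node-wot/binding-mqtt";

const servient = new Servient();
// One agent, several protocol bindings.
servient.addClientFactory(new HttpClientFactory());
servient.addClientFactory(new CoapClientFactory());
servient.addClientFactory(new MqttClientFactory());

servient.start().then(async (WoT) => {
  // The same Scripting API is used regardless of the underlying protocol.
  const lamp = await WoT.consume(
    await WoT.requestThingDescription("http://home.local/things/lamp"));
  const sensor = await WoT.consume(
    await WoT.requestThingDescription("coap://plant.local/things/presence"));

  await sensor.observeProperty("presence", async (data) => {
    const present = (await data.value()) as boolean;
    // Bridge across protocols: a CoAP sensor drives an HTTP-controlled lamp.
    await lamp.invokeAction(present ? "on" : "off");
  });
});
```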
Variants
Gaps
A common profile across protocols is required to address this use case.
Existing standards
MQTT, OPC-UA, BACNet, CoAP, various other home and industrial protocols.
Comments
3.1.8 Multimodal System Integration
3.1.8.1 Multimodal Recognition Support
Submitter(s)
Michael McCool
Reviewer(s)
<Suggest reviewers>
Tracker Issue ID
<please leave blank>
Category
Accessibility
Class
<please leave blank>
Status
<please leave blank>
Target Users
<List all users that are involved in the use case, e.g. device manufacturer, gateway manufacturer, cloud provider>
Motivation
Recognizer system development has arrived at a point of maturity where
if we want to dramatically enhance recognition performance,
sensor fusion from multiple modalities is needed.
In order to achieve this,
an image recognizer should incorporate results coming from other
kinds of recognizers (e.g. audio recognizer) within the network
engaged in the same interaction cycle.
Expected Devices
Audio sensing device (microphone).
Video sensing device (camera).
Audio recognition service.
Video recognition service.
Devices capable of presenting alerts in various modalities.
Expected Data
Command and status information transferred between the sensing devices,
the recognition services, and the alert devices.
Profile data for user preferences.
Dependencies
WoT Thing Description
WoT Discovery
Optional: WoT Scripting API in application on mobile personal device and possibly
in IoT orchestration services.
Description
An audio recognizer has been trained with the more common sounds in the house,
in order to provide alerts in case of an emergency.
In the same house a security system uses a video recognizer to identify people
at the front door.
These two systems need to cooperate with a remote home management system
to provide integrated services.
Variants
Gaps
Support for video and audio recognition services.
Existing standards
This use case is based on MMI UC 5.1.
Comments
Does not include Requirements section from original MMI use case.
3.1.8.2 Enhancement of Synergistic Interactions
Submitter(s)
Michael McCool
Reviewer(s)
<Suggest reviewers>
Tracker Issue ID
<please leave blank>
Category
Accessibility
Class
<please leave blank>
Status
<please leave blank>
Target Users
<List all users that are involved in the use case, e.g. device manufacturer, gateway manufacturer, cloud provider>
Motivation
One of the main indicators concerning the usability of a system
is the corresponding level of accessibility provided by it.
The opportunity for all users to receive and to deliver all kinds of information,
regardless of the information format or the type of user profile,
state, or impairment, is a recurrent need in web applications.
One of the means to achieve accessibility is the design of a more
synergic interaction based on the discovery of multimodal Modality Components.
Synergy is two or more entities functioning together to produce a result
that is not obtainable independently.
It means "working together".
For example,
how to avoid disruptive interactions
in nomadic systems (always affected by the changing context)
is an important issue.
In these applications,
user interaction is difficult,
prone to distraction and less precise.
Discovery and use of alternative input and output devices
can increase synergistic interaction, offering new possibilities
better adapted to the current context.
Such a system can also enhance the fusion process for target groups of
users experiencing permanent or temporary learning difficulties or with sensorial,
emotional or social impairments.
Expected Devices
A normal client computer with I/O devices that need to be emulated.
Alternative I/O devices that need to be interfaced to the client system.
Expected Data
Command and status information transferred between the client computer
and the alternative I/O devices.
Profile data for user preferences.
Dependencies
WoT Thing Description
WoT Discovery
Optional: WoT Scripting API in application on mobile personal device and possibly
in IoT orchestration services.
Description
A person working mostly with a PC is having a problem with his right arm and hands.
He is unable to use a mouse or a keyboard for a few months.
He can point at things, sketch, clap, make gestures, but he cannot make any precise movements.
A generic interface allows this person to perform his most important tasks on his
personal devices:
to call someone, open a mailbox, access his agenda or navigate some Web pages.
The generic interface can propose child-oriented intuitive interfaces like a
clapping-based interface,
a very articulated TTS component, or reduced gesture input widgets.
Other specialized devices might include phones with very big numbers,
very simple remote controls,
screens displaying text at high resolution,
or voice command devices.
#### Variants:
Gaps
Existing standards
This use case is based on MMI UC 5.2.
Comments
Does not include Requirements section from original MMI use case.
3.1.9 Accessibility
3.1.9.1 Audiovisual Devices Acting as Smartphone Extensions
Submitter(s)
Michael McCool
Reviewer(s)
<Suggest reviewers>
Tracker Issue ID
<please leave blank>
Category
Accessibility
Class
<please leave blank>
Status
<please leave blank>
Target Users
<List all users that are involved in the use case, e.g. device manufacturer, gateway manufacturer, cloud provider>
Motivation
Many of today's home IoT-enabled devices can provide similar functionality
(e.g. audio/video playback),
differing only in certain aspects of the user interface.
This use case would allow continuous interaction with a specific
application as the user moves from room to room,
with the user interface switched automatically to the set of
devices available in the user's present location.
On the other hand,
some devices can have specific capabilities
and user interfaces that can be used to add information to a larger context
that can be reused by other applications and devices.
This drives the need to spread an application across different devices
to achieve a more user-adapted and meaningful interaction according to the
context of use.
Both aspects provide arguments for exploring use cases where
applications use distributed multimodal interfaces.
Expected Devices
Mobile phone or other client running an application requiring an extended and
more accessible user interface.
IoT-enabled audio-visual devices providing audio and visual information
display capabilities that can be used to augment the user interface of the
application.
Possible edge computation services providing speech-to-text or described video
(e.g. object detection) capabilities.
Expected Data
Visual display information mapping information from audio to visual modalities,
for example text generated from voice recognition.
Text from an application that needs to be displayed at a larger size.
Visual alerts corresponding to audio stimuli, e.g. sound effects in a game mapped
to visual icons.
Visual information mapped to audio information, for example,
described video based on an AI service providing object recognition.
Dependencies
WoT Thing Description
WoT Discovery
Optional: WoT Scripting API accessible from application for interacting
with devices.
Description
A home entertainment system is used by a mobile device
as a set of user interface components.
In addition to media rendering and playback,
these Devices also act as input or output modalities for
an application, for example an application running on a smartphone.
The application's native user interface
does not have to be manipulated directly at all.
A wall-mounted touch-sensitive TV could be used to navigate applications,
and a wide-range microphone can handle speech input.
Spatial (Kinect-style) gestures may also be used to control
application behavior.
Accessibility support software on the smartphone
discovers available modalities and arranges them to best
serve the user's purpose.
One display can be used to show photos and movies,
another for navigation.
As the user walks into another room,
this configuration is adapted dynamically to the new location.
User intervention may be sometimes required to decide on
the most convenient modality configuration.
The state of the interaction is maintained
while switching between modality sets.
For example,
if the user was navigating a GUI menu in the living room,
it is carried over to another screen when she switches rooms,
or replaced with a different modality such as voice
if there are no displays in the new location.
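A minimal sketch of how the accessibility software might discover display Things in the user's current room and route output to one of them follows; the directory endpoint, the location annotations, and the "showText" action are hypothetical, and the filtering is done client-side because no standard directory query interface is assumed.

```typescript
// Sketch: query a hypothetical Thing Description directory for displays in the
// user's current room and route text output to the first one found.
// Assumes node-wot (0.8.x-style API) and Node.js 18+ (global fetch).
import { Servient } from "@node-wot/core";
import { HttpClientFactory } from "@node-wot/binding-http";

const DIRECTORY = "http://directory.example.com/things"; // assumed directory endpoint

async function showInRoom(room: string, text: string) {
  const servient = new Servient();
  servient.addClientFactory(new HttpClientFactory());
  const WoT = await servient.start();

  // Fetch all registered TDs and filter locally by assumed annotations.
  const tds: any[] = await (await fetch(DIRECTORY)).json();
  const displayTD = tds.find(
    (td) => td["@type"]?.includes("Display") && td.location === room
  );
  if (!displayTD) {
    console.log("No display in", room, "- falling back to voice output");
    return;
  }

  const display = await WoT.consume(displayTD);
  await display.invokeAction("showText", text); // assumed affordance name
}

showInRoom("livingRoom", "Incoming call from Alice").catch(console.error);
```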
#### Variants:
Modalities may be translated from one form to another to accommodate
accessibility issues, for example, visual cues into audio cues and
vice-versa, as appropriate.
Gaps
An AI service may be required to perform modality mapping, for example,
object recognition.
Existing standards
This use case is based on MMI UC 1.1.
Comments
Does not include Requirements section from original MMI use case.
Variant supporting
modality conversion is not included in the original MMI use case.
3.1.9.2 Unified Smart Home Control and Status Interface
Submitter(s)
Michael McCool
Reviewer(s)
<Suggest reviewers>
Tracker Issue ID
<please leave blank>
Category
Accessibility
Class
<please leave blank>
Status
<please leave blank>
Target Users
<List all users that are involved in the use case, e.g. device manufacturer, gateway manufacturer, cloud provider>
Motivation
The increase in the number of controllable devices in an
intelligent home creates a problem with controlling all available services
in a coherent and useful manner.
Having a shared context,
built from information collected through sensors and direct user input,
would improve recognition of user intent, and thus simplify interactions.
In addition,
multiple input mechanisms could be selected by the user based on device type,
level of trust and the type of interaction required for a particular task.
Expected Devices
Mobile phone or other client running an application providing command
mediation capabilities.
IoT-enabled smart home devices supporting
remote sensing and actuation functionality.
Expected Data
Command and status information transferred between the command mediation
application and one or more devices.
Dependencies
WoT Thing Description
WoT Discovery
Optional: WoT Scripting API accessible from application for interacting
with devices.
Description
Smart home functionality (window blinds, lights, air conditioning etc.)
is controlled through a multimodal interface,
composed from modalities built into the house itself
(e.g. speech and gesture recognition)
and those available on the user's personal devices
(e.g. smartphone touchscreen).
The system may automatically adapt to the preferences of a specific user,
or enter a more complex interaction if multiple people are present.
Sensors built into various devices around the house can act as input
modalities that feed information to the home and affect its behavior.
For example,
lights and temperature in the gym room can be adapted dynamically
as workout intensity recorded by the fitness equipment increases.
The same data can also increase or decrease volume and tempo of music tracks
played by the user's mobile device or the home's media system.
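As an illustration of the gym-room adaptation described above, a small orchestration script could read the workout intensity and adjust light and music accordingly; the TD URLs, affordance names, and the node-wot 0.8.x-style API are assumptions.

```typescript
// Sketch: adapt gym lighting and music tempo to the workout intensity reported
// by the fitness equipment. All URLs and affordance names are hypothetical.
import { Servient } from "@node-wot/core";
import { HttpClientFactory } from "@node-wot/binding-http";

async function adaptGym() {
  const servient = new Servient();
  servient.addClientFactory(new HttpClientFactory());
  const WoT = await servient.start();

  const consume = async (url: string) => WoT.consume(await (await fetch(url)).json());
  const bike = await consume("http://home.example.com/exercise-bike");
  const light = await consume("http://home.example.com/gym-light");
  const media = await consume("http://home.example.com/media-system");

  setInterval(async () => {
    // Intensity is assumed to be reported as a number between 0 and 1.
    const intensity = (await (await bike.readProperty("workoutIntensity")).value()) as number;
    await light.writeProperty("brightness", Math.round(40 + 60 * intensity));
    await media.invokeAction("setTempo", intensity > 0.7 ? "fast" : "normal");
  }, 10_000);
}

adaptGym().catch(console.error);
```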
#### Variants:
The intelligent home in tandem with the user's personal
devices can additionally monitor user behavior for emotional patterns
such as 'tired' or 'busy' and adapt further.
Gaps
A service may be needed to recognize gestures and emotional states.
Existing standards
This use case is based on MMI UC 1.2; original title was Intelligent Home Apparatus.
Comments
Does not include Requirements section from original MMI use case.
3.1.10 Automotive
3.1.10.1 Smart Car Configuration Management
Submitter(s)
Michael McCool
Reviewer(s)
<Suggest reviewers>
Tracker Issue ID
<please leave blank>
Category
Accessibility
Class
<please leave blank>
Status
<please leave blank>
Target Users
<List all users that are involved in the use case, e.g. device manufacturer, gateway manufacturer, cloud provider>
Motivation
User interface personalization is a task that typically has to be repeated
for every Device a user wishes to interact with regularly.
With complex devices,
this task can also be very time-consuming,
which is problematic if the user regularly accesses similar,
but not identical devices, as in the case of several cars rented over a month.
A standardized set of personal information and preferences that could be used
to configure personalizable devices automatically would be very helpful for all
these cases in which the interaction becomes a customary practice.
Expected Devices
Personal mobile device running an application providing command
mediation capabilities.
IoT-enabled smart car supporting
remote sensing, actuation, and configuration functionality.
Expected Data
Command and status information transferred between the personal mobile device
application and the car's services and devices.
Profile data for user preferences.
Dependencies
WoT Thing Description
WoT Discovery
Optional: WoT Scripting API in application on mobile personal device and possibly
in IoT orchestration services in the car.
Description
Basic in-car functionality is standardized to be managed by other devices.
A user can control seat, radio or AC settings through a personalized multimodal interface
shared by the car and her personal mobile device.
User preferences are stored on the mobile Device (or in the cloud),
and can be transferred across different car models handling a specific functionality
(e.g. all cars with touchscreens should be able to adapt to a "high contrast" preference).
The car can make itself available as a complex modality component that wraps around all
functionality and supported modalities,
or as a collection of modality components such as touchscreen, speech recognition system,
or audio player.
In the latter case,
certain user preferences may be shared with other environments.
For example,
a user may opt to select the "high contrast" scheme at night on all of her displays,
in the car or at home.
A car that provides a set of modalities can be also adapted by the mobile device
to compose an interface for its functionality,
for example to manage playback of music tracks through the car's voice control system.
Sensor data provided by the phone can be mixed with data recorded by the car's own sensors
to profile user behavior which can be used as context in multimodal interaction.
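The Gaps entry below notes that a data format describing user interface preferences is still missing; purely as an illustration, a portable preference profile of the kind this use case calls for might look as follows, with all field names hypothetical.

```typescript
// Hypothetical sketch of a portable user-interface preference profile; no such
// format is standardized yet (see Gaps below). Field names are illustrative.
interface UiPreferenceProfile {
  user: string;                                  // opaque identifier
  displays: {
    colorScheme: "default" | "highContrast";
    minFontSizePt: number;
    nightMode: "auto" | "on" | "off";
  };
  audio: { ttsRate: number; preferredLanguage: string };
  seating?: { heightMm: number; lumbarSupport: number };
}

const examplePrefs: UiPreferenceProfile = {
  user: "urn:example:user:alice",
  displays: { colorScheme: "highContrast", minFontSizePt: 14, nightMode: "auto" },
  audio: { ttsRate: 1.2, preferredLanguage: "en" },
  seating: { heightMm: 480, lumbarSupport: 3 }
};
```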
#### Variants:
Additional portable devices may be brought into the car and also be
incorporated into an application, for example, a GPS navigation system.
Gaps
Data format describing user interface preferences.
Existing standards
This use case is based on MMI UC 2.1.
Comments
Does not include Requirements section from original MMI use case.
3.1.11 Energy / Smart Grids
3.1.11.1 Smart Grids
Submitter(s)
Christian Glomb (Siemens)
Reviewer(s)
Michael Lagally (Oracle)
Tracker Issue ID
<please leave blank>
Category
<please leave blank>
Class
<please leave blank>
Status
<please leave blank>
Target Users
Grid operators on all voltage levels like Distribution System Operators (DSO), Transmission System Operators (TSO)
Plant operators (centralized as well as de-centralized producers)
Virtual Power Plant (VPP) operators
Energy grid markets
Cloud providers where grid backend services are hosted and where Operation Technology bridges to Information Technology
Device manufacturers, owners, and users; devices include communication gateways, monitoring and control units
Motivation
<Provide a description of the problem that is solved by the use case and a reason why this use case is important for the users>
Expected Devices
A smart grid integrates all players in the electricity market into one overall system through the interaction of generation, storage, grid management and consumption. Power and storage plants are already controlled today in such a way that only as much electricity is produced as is needed. Smart grids include consumers as well as small, decentralized energy suppliers and storage locations in this control system, so that on the one hand, consumption is more homogeneous in terms of time and space (see also intelligent electricity consumption) and on the other hand, in principle inhomogeneous producers (e.g. wind power) and consumers (e.g. lighting) can be better integrated.
Expected Data
Weather and climate data
Metering data (both production as well as consumption as well as storage, e.g. 15 min. intervals)
Real time data from PMUs (Phasor Measurement Units)
Machine and equipment monitoring data (enabling health checks)
...
Dependencies - Affected WoT deliverables and/or work items
The term Smart Grid refers to the communicative networking and control of power generators, storage facilities, electrical consumers, and grid equipment in power transmission and distribution networks for electricity supply. This enables the optimization and monitoring of the interconnected components. The aim is to secure the energy supply on the basis of efficient and reliable system operation.
#### Variants:
##### Decentralized Power Generation
While electricity grids with centralized power generation have dominated up to now, the trend is moving towards decentralized generation plants, both for generation from fossil primary energy through small CHP plants and for generation from renewable sources such as photovoltaic systems, solar thermal power plants, wind turbines and biogas plants. This leads to a much more complex structure, primarily in the area of load control, voltage maintenance in the distribution grid and maintenance of grid stability. In contrast to medium to large power plants, smaller, decentralized generation plants also feed directly into the lower voltage levels such as the low-voltage grid or the medium-voltage grid. This use case variant also includes the operation and control of energy storage such as batteries.
##### Virtual Power Plants
A Virtual Power Plant (VPP) is an aggregation of Distributed Energy Resources (DERs) that can act as an entity on energy markets or as an ancillary service to grid operation.
The individual DERs often have a primary use of their own, with electric generation/consumption being a side effect or secondary use. This results in negotiations/collaborations between many different parties, such as the DER owner, the VPP operator, the grid operator and others.
##### Smart Metering
For consumers, a major change is the installation of smart meters. Their core tasks are remote reading and the possibility to realize fluctuating prices within a day at short notice. All electricity meters must therefore be replaced by those with remote data transmission.
##### Other variants
Emergency response, grid synchronization, grid black start
Building Blocks
Multi-Stakeholder Operation: Multiple involved parties have to find a common mode of operation
Device Lifecycle Management: Since the VPP is a dynamic system of loosely coupled DERs, the appearance and disappearance of DERs as well as the software management on the devices themselves require a means to orchestrate the lifecycle of individual devices and their components.
Embedded Runtime: Especially for DERs in remote locations, maintaining a closely coupled control loop can be expensive, if feasible at all. Therefore, it is desirable to be able to offload control logic to the DER itself.
Ensemble Discovery: In order to dynamically find matching DERs needed for the operational goal of a VPP, a registry with different options of DER discovery is needed.
Content-Negotiation: The different stakeholders have to interact and therefore need a common data format.
Resource Description: The DER has to describe itself to enable discovery of single DERs and ensembles; the operational data also needs to be understood by the different stakeholders without engineering effort.
Push Services: As there is a fan-out of many devices, probably with rate-limited connections, connecting to one single command centre, a bidirectional communication mechanism is needed rather than polling for the reverse direction (see the sketch after this list).
Object Memory: As multiple and interchangeable stakeholders are involved in the application, a backlog of the object is beneficial for record keeping
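As a minimal illustration of the Resource Description and Push Services building blocks above, a DER could describe a push-based metering event in its Thing Description; the device name, broker URL, and topic layout are assumptions.

```typescript
// Thing Description sketch for a DER that pushes 15-minute metering values to a
// broker instead of being polled. Names, URLs and topic layout are illustrative.
const derTD = {
  "@context": "https://www.w3.org/2019/wot/td/v1",
  title: "RooftopPvInverter-4711",
  securityDefinitions: { basic_sc: { scheme: "basic" } },
  security: ["basic_sc"],
  events: {
    meterReading: {
      description: "Production value pushed every 15 minutes to the VPP",
      data: { type: "number", unit: "kWh" },
      forms: [
        { href: "mqtt://vpp-broker.example.com/der/4711/metering", contentType: "application/json" }
      ]
    }
  }
};
```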
Non-Functionals
Privacy: As fine-grained metering information provides sensitive data about a household, the system should provide a high degree of privacy
Trust: Since the data exchange between the virtual power plant and the distributed energy resource leads to a physical action that involves high currents and monetary flows, the integrity of both parties and of the exchanged data is crucial
Layered L7 Communication: Since multiple different links are used for monitoring and control, integration requires a clear and consistent separation of information from the used serialization and application protocols to enable the exchange of homogeneous information over heterogeneous application layer protocols
Gaps
<Describe any gaps that are not addressed in the current WoT standards and building blocks>
Existing standards
IEC 61850 - International standard for data models and communication protocols
IEEE 1547 - US standard for interconnecting distributed resources with electric power systems
Comments
3.1.12 Transportation
3.1.12.1 Transportation
Submitter(s)
Zoltan Kis
Reviewer(s)
Jennifer Lin, Michael McCool
Tracker Issue ID
<please leave blank>
Category
<please leave blank>
Class
<please leave blank>
Status
<please leave blank>
Sub-categories
Transportation - Infrastructure
Transportation - Cargo
Transportation - People
Target Users
Smart Cities: managing roads, public transport and commuting, autonomous and human driven vehicles, transportation tracking and control systems, route information systems, commuting and public transport, vehicles, on-demand transportation, self driving fleets, vehicle information and control systems, infrastructure sharing and payment system, smart parking, smart vehicle servicing, emergency monitoring, etc.
Transport companies: managing shipping, air cargo, train cargo and last mile delivery transportation systems including automated systems.
Commuters: Mobility as a service, booking systems, route planning, ride sharing, self-driving, self-servicing infrastructure, etc.
Motivation
Provide a common vocabulary for describing transport-related services and solutions that can be reused across sub-categories, for easier interoperability between various systems owned by different stakeholders.
Thing Description templates could be defined in many subdomains to help integration or interworking between multiple systems.
Transportation of goods can be optimized at global level by enhancing interoperability between vertical systems.
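As a sketch only, a reusable Thing Description template for the connected-vehicle subdomain could pre-define the common vocabulary mentioned above; the template mechanism, annotations, and property names are all assumptions.

```typescript
// Hypothetical Thing Description template for the "connected vehicle" subdomain.
// Vocabulary, annotations and property names are illustrative assumptions.
const vehicleTdTemplate = {
  "@context": "https://www.w3.org/2019/wot/td/v1",
  "@type": "VehicleTemplate",        // assumed subdomain annotation
  title: "Connected Vehicle (template)",
  properties: {
    location: {
      type: "object",
      properties: { lat: { type: "number" }, lon: { type: "number" } }
    },
    speed: { type: "number", unit: "km/h" },
    route: { type: "array" }
  },
  events: {
    delayReported: { data: { type: "object" } }
  }
  // Concrete vehicle TDs would add securityDefinitions and protocol bindings.
};
```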
Expected Devices
Road information system (routes, conditions, navigation).
Road control system (e.g. virtual rails).
Traffic management services, e.g. intelligent traffic light system with localization and identification (by satellite, radio frequency identification, cameras etc).
Emergency monitoring and data/location sharing.
Airport management.
Shipping docks and ports management.
Train networks management.
Public transport vehicles (train, metro, tram, bus, minibus), mobility as a service (ride sharing, bicycle sharing, scooters etc).
Transportation network planning and management (hubs, backbones, sub-networks, last mile network).
Electronic timetable management system.
Vehicles (human driven, self-driving, isolated or part of fleet).
Connected vehicles (cars, ships, airplanes, trains, buses etc).
Expected Data
Vehicle data (identification, location, speed, route, selected vehicle data).
Weather and climate data.
Contextual data (representing various risk factors, delays, etc).
Transportation system implementers will be able to use a unified data description model across various systems.
#### Variants:
There will be different verticals, such as:
Smart City public transport
Smart City traffic management
Smart city vehicle management
Cargo traffic management
Cargo vehicle management
Gaps
<Describe any gaps that are not addressed in the current WoT work items>
Existing standards
<Provide links to relevant standards that are relevant for this use case>
Comments
3.1.13 Smart Building
3.1.13.1 Smart Building
Submitter(s)
Sebastian Kaebisch (Siemens)
Reviewer(s)
Michael Lagally (Oracle)
Tracker Issue ID
<please leave blank>
Category
<please leave blank>
Class
<please leave blank>
Status
<please leave blank>
Target Users
Motivation and Description
Buildings such as office buildings, hotels, airports, hospitals, train stations and sports stadiums typically consist of heterogeneous IoT systems such as lightings, elevators, security (e.g., door control), air-conditionings, fire warnings, heatings, pools, parking control, etc.
Monitoring, controlling, and management of such a heterogeneous IoT landscape is quite challenging in terms of engineering and maintenance.
Expected Devices
All kinds of sensors and actuators (e.g., HVAC).
Expected Users
systems engineers
system administrators
third party user
Expected Data
Heterogeneous data models from different IoT systems such as BACnet, KNX, and Modbus.
Dependencies - Affected WoT deliverables and/or work items
Construction and renovation companies often deal with the challenge of delivering target energy-efficient buildings given specific budget and time constraints. Energy efficiency, as one of the key factors for renovation investments, depends on the availability of various data sources to support the renovation design and planning. These include climate data and building material along with residential comfort and energy consumption profiles. The profiles are created using a combination of manual inputs and sensory data collected from residents.
Expected Devices
Gateway (e.g. Single-board computer with a Z-Wave controller)
Dependencies - Affected WoT deliverables and/or work items
<List the affected WoT deliverables that have to be changed to enable this use case>
Description
Renovation of residential buildings to improve energy efficiency depends on a wide range of sensory information to understand the building conditions and consumption models. As part of the pre-renovation activities, the renovation companies deploy various sensors to collect relevant data over a period of time. Such sensors become part of a wireless sensor network (WSN) and expose data endpoints with the help of one or more gateway devices. Depending on the protocols, the endpoints require different interaction flows to securely access the current and historical measurements. The renovation applications need to discover the sensors, their endpoints and how to interact with them based on search criteria such as the physical location, mapping to the building model or measurement type.
#### Variants:
<Describe possible use case variants, if applicable>
#### Security Considerations:
<Describe any issues related to security; if there are none, say "none" and justify>
#### Privacy Considerations:
The TD may expose personal information about the building layout and residents.
Gaps
There is no standard vocabulary for embedding application-specific meta data inside the TD. It is possible to extend the TD context and add additional fields but with too much flexibility, every application may end up with a completely different structure, making such information more difficult to discover. In this use-case, the application specific data are:
the mapping between each thing and the space in the building model
various identifiers for each thing (e.g. sensor serial number, z-wave ID, SenML name)
indoor coordinates
There is no standard API specification for the WoT Thing Directory to maintain and query TDs.
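Purely as an illustration of the context-extension approach and the application-specific data listed above, a renovation sensor TD might embed this metadata with an extension vocabulary; the "reno" namespace and all field names are hypothetical, and nothing like this is standardized.

```typescript
// Sketch: a TD carrying application-specific renovation metadata through a
// hypothetical context extension. Namespace, field names and URLs are illustrative.
const renovationSensorTD = {
  "@context": [
    "https://www.w3.org/2019/wot/td/v1",
    { reno: "https://example.com/renovation-vocabulary#" } // assumed extension namespace
  ],
  title: "TemperatureSensor-12",
  "reno:buildingSpace": "2ndFloor/Room204",                // mapping into the building model
  "reno:serialNumber": "TS-0042-AB",
  "reno:zwaveId": 17,
  "reno:indoorCoordinates": { x: 3.2, y: 1.1, z: 2.4 },
  securityDefinitions: { basic_sc: { scheme: "basic" } },
  security: ["basic_sc"],
  properties: {
    temperature: {
      type: "number",
      unit: "degreeCelsius",
      forms: [{ href: "http://gateway.example.com/sensors/12/temperature" }]
    }
  }
};
```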
device owners : The university -> Research Group -> Specific Lab
device user : Students and potentially anyone who participates in plugfests
service provider : The university -> Research Group
network operator : The university
Motivation
This use case motivates a standardized way to use shared resources. One example is a physical resource of a Thing that should not be used by multiple Consumers at the same time, such as the arm of a robot, while its position can be read by multiple Consumers.
Expected Devices
Concrete devices are irrelevant for this use case, but devices with a physical state are required. However, we currently have the following devices that are connected to Raspberry Pis on which the WoT stack (node-wot or similar) is running. Concrete device models can be given upon request.
Robotic arms
Conveyor belts
Motorized sliders where the robots or devices can be mounted on
Philips Hue devices: Light bulbs, LED Strips, Motion sensors, Switch. We do not have the source code of these devices (brownfield)
Various sensors (brightness, humidity, temperature, gyroscopic sensors)
LED Screen to display messages
There are also IP Cameras but they are not WoT compatible and are not planned to be made compatible.
Expected Data
Atmospheric data of a room, machine sensors
Dependencies - Affected WoT deliverables and/or work items
We are offering a practical course for the students where they can interact fully remotely with WoT devices and verify their physical actions via video streams. We have sensors and actuators like robots. Students then build mashup applications to deepen their knowledge of WoT technologies. Official page of the course is here.
#### Variants:
Security Considerations
The devices are connected to the Internet and are secured behind a router and proxy.
Privacy Considerations
none from the WoT point of view since we want the devices to be used by anyone and the devices do not share any information that is related to the students or us as the provider of the devices.
However, there are cameras which can show humans entering the room as a side effect (they are meant to monitor the devices). The streams are accessible only to authorized users, the room has signs on the door and there is a cage around the area that is filmed.
Gaps
#### Thing Description
How to give hints that a particular action should not be used by others at the same time. A new keyword (like "shared":true) would be needed for devices that do not implement a describable mechanism.
How to describe the mechanism that the Thing implements to manage shared resources. Does it happen at the security level?
#### Scripting API
How does the Consumer code change when this mechanism is used? Is it handled at the implementation level or at the scripting level?
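As a sketch of the hypothetical "shared" hint proposed above (it is not part of the current Thing Description vocabulary), a TD fragment for the robot arm could mark the exclusive action and the freely readable property as follows; URLs are placeholders.

```typescript
// TD fragment sketch using a hypothetical "shared" hint: the arm's move action
// must not be used concurrently, while its position can be read by anyone.
// URLs are placeholders; "shared" is not a standardized TD term.
const robotArmTDFragment = {
  properties: {
    position: {
      type: "object",
      readOnly: true,
      shared: true,        // hypothetical: safe for concurrent Consumers
      forms: [{ href: "http://lab.example.com/arm/position" }]
    }
  },
  actions: {
    moveTo: {
      input: { type: "object" },
      shared: false,       // hypothetical: only one Consumer at a time
      forms: [{ href: "http://lab.example.com/arm/moveTo" }]
    }
  }
};
```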
Existing standards
Comments
3.1.15 OAuth2 Flows
3.1.15.1 OAuth2 Flows
Submitter(s)
Michael McCool, Cristiano Aguzzi
Reviewer(s)
<Suggest reviewers>
Tracker Issue ID
<please leave blank>
Category
<please leave blank>
Class
<please leave blank>
Status
WIP
Target Users
device owner
device user
device application
service provider
identity provider
directory service
Motivation
OAuth 2.0 is an authorization protocol widely known for its usage across several web services.
It enables third-party applications to obtain limited access to HTTP services on behalf of the resource owner
or on its own behalf.
The protocol defines the following actors:
Client: an application that wants to use a resource owned by the resource owner.
Authorization Server: An intermediary that authorizes the client for a particular scope.
Resource: a web resource
Resource Server: the server where the resource is stored
Resource Owner: the owner of a particular web resource. If it is a human, it is usually referred to as an end-user.
More specifically from the RFC:
> An entity capable of granting access to a protected resource.
These actors can be mapped to WoT entities:
Client is a WoT Consumer
Authorization Server is a third-party service
Resource is an interaction affordance
Resource Server is a Thing described by a Thing Description acting as a server.
May be a device or a service.
Resource Owner might be different in each use case.
A Thing Description may also combine resources from different owners or web servers.
TO DO: Check the OAuth 2.0 spec to determine exactly how Resource Owner is defined.
Is it the actual owner of the resource (e.g. running the web server) or simply someone
with the rights to access that resource?
The OAuth 2.0 protocol specifies an authorization layer that separates the client from the resource owner.
The basic steps of this protocol are summarized in the following diagram:
Steps A and B define what is known as the authorization grant type or flow.
What is important to realize here is that not all of these interactions
are meant to take place over a network protocol.
In some cases,
interaction with a human through a user interface may be intended.
OAuth 2.0 defines four basic flows plus an extension mechanism.
The most common are:
code
implicit
password (of resource owner)
client (credentials of the client)
In addition, a particular extension which is of interest to IoT is the device flow.
Further information about the OAuth 2.0 protocol can be found in
IETF RFC6749.
In addition to the flows, OAuth 2.0 also supports scopes.
Scopes are identifiers which can be attached to tokens.
These can be used to limit authorizations to
particular roles or actions in an API.
Each token carries a set of scopes and these can be checked when an interaction
is attempted and access can be denied if the token does not include a scope
required by the interaction.
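A minimal Thing Description fragment, sketched here under the assumption of the code flow discussed in this use case, shows how an OAuth 2.0 scheme and scopes could be attached to an affordance; endpoint URLs and scope names are illustrative.

```typescript
// TD fragment sketch: an affordance protected by OAuth 2.0 with scopes.
// Endpoint URLs and scope names are illustrative assumptions.
const oauthProtectedTDFragment = {
  securityDefinitions: {
    oauth2_sc: {
      scheme: "oauth2",
      flow: "code",                                   // the code flow discussed in this use case
      authorization: "https://auth.example.com/authorize",
      token: "https://auth.example.com/token",
      scopes: ["read:status", "write:config"]         // all scopes the Thing understands
    }
  },
  security: ["oauth2_sc"],
  actions: {
    reconfigure: {
      forms: [
        {
          href: "https://device.example.com/actions/reconfigure",
          scopes: ["write:config"]                    // scope required for this interaction
        }
      ]
    }
  }
};
```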
This document describes relevant use cases for each of the OAuth 2.0 authorization flows.
Expected Devices
To support OAuth 2.0, devices must have the following capabilities:
Both the producer and consumer must be able to create and participate in a TLS connection.
The producer must be able to verify an access (bearer) token (i.e. have sufficient computational power/connectivity).
Comment:
Investigate whether DTLS can be used.
Certainly the connection needs to be encrypted; this is required in the OAuth 2.0 specification.
Investigate whether protocols other than HTTP can be used, e.g. CoAP.
- found an interesting IETF draft RFC about CoAP support (encrypted using various mechanisms like DTLS or CBOR Object Signing and Encryption): draft-ietf-ace-oauth
Expected Data
Depending on the OAuth 2.0 flow specified, various URLs and elements need to be specified,
for example, the location of an authorization token server.
OAuth 2.0 is also based on bearer tokens and so
needs to include the same data as the bearer token scheme, for example, the expected encryption suite.
Finally,
OAuth 2.0 supports scopes so these need to be defined in the security scheme and specified in
the form.
Dependencies - Affected WoT deliverables and/or work items
Thing Description, Scripting API, Discovery, and Security.
Description
A general use case for OAuth 2.0 is when a WoT consumer wants to access restricted interaction affordances.
In particular, when those affordances have a specific resource owner which
may grant some temporary permissions to the consumer.
The WoT consumer can either be hosted in a remote device or interact directly with the end-user inside an application.
#### Variants:
For each OAuth 2.0 flow, there is a corresponding use case variant.
We also include the experimental "device" flow for consideration.
##### code
A natural application of this protocol is when the end-user wants to interact directly with the consumed thing or to grant their authorization to a remote device. In fact, from RFC6749:
> Since this is a redirection-based flow, the client must be capable of
interacting with the resource owner's user-agent (typically a web
browser) and capable of receiving incoming requests (via redirection)
from the authorization server.
This implies that the code flow can be only used when the resource owner interacts directly with the WoT consumer at least once. Typical scenarios are:
In a home automation context, a device owner uses a third party software to interact with/orchestrate one or more devices
Similarly, in a smart farm, the device owner might delegate its authorization to third party services.
In a smart home scenario, Thing Description Directories might be deployed using this authorization mechanism. In particular, the list of the registered TDs might require an explicit read authorization request to the device owner (i.e. a human who has bought the device and installed it).
...
The following diagram shows the steps of the protocol adapted to WoT idioms and entities. In this scenario, the WoT Consumer has read the Thing Description of a Remote Device and wants to access one of its WoT Affordances, protected with the OAuth 2.0 code flow.
Note that the device flow is heavily end-user oriented. In fact, the RFC defining it states the following:
> Due to the polling nature of this protocol (as specified in Section 3.4), care is needed to avoid overloading the capacity of the token endpoint. To avoid unneeded requests on the token endpoint, the client SHOULD only commence a device authorization request when **prompted by the user and not automatically**, such as when the app starts or when the previous authorization session expires or fails.
TLS is required both between WoT Consumer/Authorization Server and between Browser/Authorization Server
Other user interactions methods may be used but are left out of scope
##### client credential
The Client Credentials grant type is used by clients to obtain an access token outside of the context of an end-user. From RFC6749:
> The client can request an access token using only its client
credentials (or other supported means of authentication) when the
client is requesting access to the protected resources under its
control, or __those of another resource owner that has been previously
arranged with the authorization server__ (the method of which is beyond
the scope of this specification).
Therefore the client credential grant can be used:
When the resource owner is a public authority. For example, in a smart city context, the authority provides a web service where an application id can be registered.
Companion application
Industrial IoT. Consider a smart factory where the devices or services are provisioned with client credentials.
...
The Client Credentials flow is illustrated in the following diagram. Notice how the Resource Owner is not present.
Comment: Usually client credentials are distributed using an external service which is used by humans to register a particular application. For example, the npm cli has a companion dashboard where a developer requests the generation of a token that is then passed to the cli. The token is used to verify the publishing process of npm packages in the registry. Further examples are Docker cli and OpenId Connect Client Credentials.
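Purely as an illustration, a provisioned WoT Consumer could obtain a token with the client credentials grant and then invoke a protected affordance as sketched below; the token endpoint, credentials, scope, and affordance URL are placeholders.

```typescript
// Sketch: obtain an access token with the client credentials grant (RFC 6749,
// section 4.4) and use it as a Bearer token on a protected affordance.
// Token endpoint, credentials and affordance URL are placeholders.
async function invokeWithClientCredentials() {
  const tokenResponse = await fetch("https://auth.example.com/token", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "client_credentials",
      client_id: "factory-orchestrator",
      client_secret: "<provisioned out of band>",   // placeholder
      scope: "write:config"
    })
  });
  const { access_token } = await tokenResponse.json();

  // Invoke the protected affordance with the bearer token.
  await fetch("https://device.example.com/actions/reconfigure", {
    method: "POST",
    headers: { Authorization: `Bearer ${access_token}` },
    body: JSON.stringify({ mode: "maintenance" })
  });
}
```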
##### implicit
**Deprecated**
From OAuth 2.0 Security Best Current Practice:
> In order to avoid these issues, clients SHOULD NOT use the implicit
grant (response type "token") or other response types issuing access
tokens in the authorization response, unless access token injection
in the authorization response is prevented and the aforementioned
token leakage vectors are mitigated.
The RFC above suggests using code flow with Proof Key for Code Exchange (PKCE) instead.
The implicit flow was designed for public clients typically implemented inside a browser (i.e. javascript clients). Like the code grant, it is a redirection-based flow and requires direct interaction with the resource owner's user-agent. However, it requires one less step to obtain a token, as the token is returned directly in the authentication request (see the diagram below).
Considering the WoT context this flow is not particularly different from code grant and it can be used in the same scenarios.
Comment: even if the implicit flow is deprecated, existing services may still be using it.
##### resource owner password
**Deprecated** From OAuth 2.0 Security Best Current Practice:
> The resource owner password credentials grant MUST NOT be used. This
grant type insecurely exposes the credentials of the resource owner
to the client. Even if the client is benign, this results in an
increased attack surface (credentials can leak in more places than
just the AS) and users are trained to enter their credentials in
places other than the AS.
For completeness, the flow diagram is reported below.
<List the target devices, e.g. as a sensor, solar panel, air conditioner>
Expected Data
<List the type of expected data, e.g. weather and climate data, medical conditions, machine sensors, vehicle data>
Dependencies
<List the affected WoT deliverables>
Description
Handle the entire device lifecycle:
Define terminology for lifecycle states and transitions.
#### Actors (representing a physical person or a group of persons, e.g. a company)
Manufacturer
Service Provider
Network Provider (potentially transparent for WoT use cases)
Device Owner (User)
Others?
#### Roles:
Depending on the use case, an actor can have multiple roles,
e.g. security maintainer.
Roles can be delegated.
#### Variants:
There are (at least) two different entities to consider:
Things / Devices
Consumers, e.g. cloud services or gateways
In more complex use cases there are additional entities:
Intermediates
Directories
Gaps
The current architecture spec does not describe device lifecycle in detail.
A common lifecycle model helps to clarify terminology and structures the discussion
in different groups.
Interaction of a device with other entities such as directories may introduce
additional states and transitions.
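As an illustration of the kind of common lifecycle model this gap calls for, the following sketch enumerates possible states and allowed transitions; the state names are assumptions, not agreed terminology.

```typescript
// Hypothetical sketch of a common device lifecycle model: a set of states and the
// transitions a specification could standardize. State names are illustrative only.
type LifecycleState =
  | "manufactured"
  | "bootstrapped"     // security credentials / initial configuration installed
  | "onboarded"        // registered with its network, owner and (optionally) a directory
  | "operational"
  | "maintenance"
  | "decommissioned";

const allowedTransitions: Record<LifecycleState, LifecycleState[]> = {
  manufactured: ["bootstrapped"],
  bootstrapped: ["onboarded"],
  onboarded: ["operational"],
  operational: ["maintenance", "decommissioned"],
  maintenance: ["operational", "decommissioned"],
  decommissioned: []   // re-onboarding would require a new bootstrap
};

function canTransition(from: LifecycleState, to: LifecycleState): boolean {
  return allowedTransitions[from].includes(to);
}
```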
Existing standards
<Provide links to relevant standards that are relevant for this use case>
WoT Security
ETSI OneM2M
OMA LwM2M
OCF
IEEE
SIM cards / GSMA
IETF
Application Lifecycle (W3C Multimodal Interaction WG)
Comments
All lifecycle contributions and discussion documents that were created / discussed in the architecture TF are available at:
https://github.com/w3c/wot-architecture/blob/master/proposals/lifecycle
Lifecycle comparisons:
https://github.com/w3c/wot-architecture/blob/master/proposals/Device-lifecycle-comparisons.pdf
Lifecycle states:
https://github.com/w3c/wot-architecture/blob/master/proposals/lifecycle/lifecycle-states.md
Draft lifecycle diagram:
https://github.com/w3c/wot-architecture/blob/master/proposals/lifecycle/WoT%20lifecycle%20diagram-WoT%20new%20lifecycle.svg
Layered lifecycle:
https://github.com/w3c/wot-architecture/blob/master/proposals/lifecycle/WoT%20layered%20%20lifecycle%20diagram-WoT%20new%20lifecycle.svg
System lifecycle:
https://github.com/w3c/wot-architecture/blob/master/proposals/lifecycle/unified%20device%20lifecycle.svg
IoT Security Bootstrapping:
https://github.com/w3c/wot-security/blob/master/presentations/2020-03-16-Bootstrapping%20IoT%20Security%20-%20The%20IETF%20Anima%20and%20OPC-UA%20Recipes.pdf