digitalplaybook.org: AIoT Business Viewpoint (2022-03-29)
<hr />
<div><br />
<imagemap><br />
Image:2.2-v-BusinessViewpoint.png|800px|frameless|center|AIoT Business Viewpoint<br />
<br />
rect 698 59 2421 430 [[AIoT_Business_Viewpoint|Business Viewpoint]]<br />
rect 702 485 2425 859 [[AIoT_Usage_Viewpoint|Usage Viewpoint]]<br />
rect 702 911 2433 1281 [[AIoT_Data_and_Functional_Viewpoint|Data/Functional Viewpoint]]<br />
rect 706 1337 2440 1715 [[AIoT_Implementation_Viewpoint|Implementation Viewpoint]]<br />
rect 2468 55 3122 1711 [[AIoT_Product_Viewpoint|Product Viewpoint]]<br />
rect 1 1041 146 1191 [[Product_Architecture|Product Architecture]]<br />
<br />
desc none<br />
</imagemap><br />
__NOTOC__<br />
<br />
<s data-category="AIoTFramework"></s><br />
<br />
The Business Viewpoint of the AIoT Product/Solution Design builds on the different artifacts created for the [[Business_Model_Design|Business Model]]. As part of the design process, the business model can be refined, e.g., through additional market research. In particular, the detailed design should include KPIs, quantitative planning, and a milestone-based timeline.<br />
__TOC__<br />
<br />
= Business Model =<br />
The business model is usually the starting point of the product/solution design. The business model should describe the rationale of how the organization creates, delivers, and captures value by utilizing AIoT. The [[Business_Model_Design|business model design]] section provides a good description of how to identify, document and validate AIoT-enabled business models. A number of different templates are provided, of which the business model canvas is the most important. The business model canvas should include a summary of the AIoT-enabled value proposition, the key customer segments to be addressed, how customer relationships are built, and the channels through which customers are serviced. Furthermore, it should provide a summary of the key activities, resources and partners required to deliver on the value proposition. Finally, a high-level summary of the business case should be provided, including cost and revenue structure.<br />
<br />
[[File:2.2.-bv-VacuumCanvas.png|900px|frameless|center|link=|ACME:Vac Business Model Canvas]]<br />
<br />
The fictitious ACME:Vac business model assumes that AI and IoT are used to enable a high-end vacuum cleaning robot, which will be offered as a premium product (not an easy decision - some argue that the mid-range position in this market is more attractive). AI will be used not only for robot control and automation but also for product performance analysis, as well as analysis of customer behaviour. This intelligence will be used to optimize the customer experience, create customer loyalty, and identify up-selling opportunities.<br />
<br />
= Key Performance Indicators =<br />
Many organizations use Key Performance Indicators (KPIs) to measure how effectively a company is achieving its key business objectives. KPIs are often used on multiple levels, from high-level business objectives to lower-level process or product-related KPIs. In our context, the KPIs would either be related to an AIoT-enabled product or solution.<br />
<br />
A Digital OEM that takes a smart, connected product to market usually has KPIs that cover business performance, user experience and customer satisfaction, product quality, and the effectiveness and efficiency of the product development process.<br />
<br />
A Digital Equipment Operator who is launching a smart, connected solution to manage a particular process or a fleet of assets would usually have solution KPIs that cover the impact of the AIoT-enabled solution on the business process that it is supporting. Alternatively, business-related KPIs could measure the performance of the fleet of assets and the impact of the solution on that performance. Another typical operator KPI could be the coverage of the solution. For example, in a large, heterogeneous fleet of assets, it could measure the number of assets that have been retrofitted successfully. UX and customer satisfaction-related KPIs would only become relevant if the solution actually has a direct customer impact. Solution quality and the solution development process would certainly be another group of important KPIs.<br />
<br />
[[File:2.2.-bv-KPIs.png|800px|frameless|center|link=|Vacuum Robot - Product KPIs]]<br />
<br />
The figure with KPIs shown here provides a set of example KPIs for the ACME:Vac product. The business performance-related KPIs cover the number of robovacs sold, the direct sales revenue, recurring revenue from digital add-on features, and finally the gross margin.<br />
<br />
The UX/customer satisfaction KPIs would include some general KPIs, such as Net Promoter Score (results of a survey asking respondents to rate the likelihood that they would recommend the ACME:Vac product), System Usability Scale (assessment of perceived usability), and Product Usage (e.g., users per specific feature). The Task Success Rate KPIs may include how successful and satisfied customers are with the installation and setup of the robovac. Another important KPI in this group would measure how successful customers are actually using the robovac for its main purpose, namely, cleaning. The Time on Task KPIs could measure how long the robovac is taking for different tasks in different modes.<br />
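The Net Promoter Score mentioned above has a simple arithmetic definition: the percentage of promoters (ratings 9-10) minus the percentage of detractors (ratings 0-6). A minimal sketch; the survey ratings below are invented for illustration:<br />

```python
def net_promoter_score(ratings):
    """Compute NPS from 0-10 survey ratings: %promoters minus %detractors."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Hypothetical survey results for the ACME:Vac product
ratings = [10, 9, 9, 8, 7, 6, 10, 3, 9, 8]
print(round(net_promoter_score(ratings)))  # 5 promoters, 2 detractors -> 30
```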
<br />
Product Quality KPIs need to cover a wide range of process- and product-related topics. An important example is test coverage: testing physical products in combination with digital features can be quite complex and expensive, yet it is a critical success factor for AIoT-enabled products. Incident metrics such as MTBF (mean time between failures) and MTTR (mean time to recovery, repair, respond, or resolve) need to look at the local robovac installations, as well as the shared cloud back end. Finally, the number of support calls per day can be another important indicator of product quality. Functional product quality KPIs for ACME:Vac would include cleaning speed, cleaning efficiency, and recharging speed.<br />
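MTBF and MTTR can be derived directly from incident records. A minimal sketch, assuming a simple incident log with failure and recovery timestamps in operating hours (the numbers are invented for illustration):<br />

```python
def mtbf_mttr(incidents, total_operating_hours):
    """incidents: list of (failure_time, recovery_time) tuples in hours.
    MTBF = total operating time / number of failures;
    MTTR = average repair duration."""
    repair_durations = [recovery - failure for failure, recovery in incidents]
    mtbf = total_operating_hours / len(incidents)
    mttr = sum(repair_durations) / len(repair_durations)
    return mtbf, mttr

# Hypothetical fleet data: 3 failures over 3000 operating hours
incidents = [(100, 102), (900, 903), (2500, 2501)]
mtbf, mttr = mtbf_mttr(incidents, 3000)
print(mtbf, mttr)  # 1000.0 2.0
```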
<br />
Finally, the Product Development KPIs must cover all of the different development and production pipelines, including hardware development, product manufacturing, software development, and AI development.<br />
<br />
= Quantitative Planning =<br />
Quantitative planning is an important input for the rest of the design exercise. For the Digital OEM, this would usually include information related to the number of products sold, as well as product usage planning data. For example, it can be important to understand how many users are likely to use a certain key feature, and with which frequency, in order to design the feature and its implementation and deployment accordingly. <br />
<br />
The quantitative model for the ACME:Vac product could include, for example, some overall data related to the number of units sold. Another interesting bit of information is the expected number of support calls per year because this gives an indication of how this process must be set up. Other information of relevance for the design team includes the expected average number of rooms serviced per vacuum robot, the number of active users, the number of vacuum cleaning runs per day, and the number of vacuum cleaner bags used by the average customer per year.<br />
<br />
[[File:2.2-bv-quantitative plan.png|600px|frameless|center|link=|Quantitative Plan]]<br />
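Such planning figures can be combined into a simple back-of-the-envelope model. All numbers below are invented for illustration; in practice they would come from the actual quantitative plan:<br />

```python
# Hypothetical planning assumptions for ACME:Vac
units_sold_per_year = 100_000
active_user_ratio = 0.8
cleaning_runs_per_day = 1.5
support_calls_per_unit_per_year = 0.3

active_units = units_sold_per_year * active_user_ratio
runs_per_day = active_units * cleaning_runs_per_day
support_calls_per_day = units_sold_per_year * support_calls_per_unit_per_year / 365

print(int(runs_per_day))             # 120000 cleaning runs/day to process
print(round(support_calls_per_day))  # ~82 support calls/day to staff for
```

Even this crude model already tells the design team roughly how many back-end events and support staff to plan for.<br />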
<br />
For a Digital Equipment Operator, the planning data must at its core include information about the number of assets to be supported. However, it can also be important to understand certain usage patterns and their quantification. For example, a predictive maintenance solution used to monitor thousands of escalators and elevators for a railroad operator should be based on a quantitative planning model that includes some basic assumptions, not only about the number of assets to be monitored, but also about the current average failure rates. This information will be important for properly designing the predictive maintenance solution, e.g., from a scalability point of view.<br />
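For the escalator/elevator example, the same kind of model gives the solution designers a feel for expected data and event volumes. All numbers are illustrative assumptions:<br />

```python
# Hypothetical fleet assumptions for a railroad operator
monitored_assets = 5000
sensors_per_asset = 12
samples_per_sensor_per_minute = 6
annual_failure_rate = 0.08   # 8% of assets fail per year

samples_per_day = (monitored_assets * sensors_per_asset
                   * samples_per_sensor_per_minute * 60 * 24)
expected_failures_per_week = monitored_assets * annual_failure_rate / 52

print(samples_per_day)                     # 518400000 sensor readings/day
print(round(expected_failures_per_week))   # ~8 failures/week to predict
```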
<br />
= Milestones / Timeline =<br />
Another key element of the business viewpoint is the milestone-based timeline. For the Digital OEM, this will be a high-level plan for designing, implementing and manufacturing, launching, supporting, and continuously enhancing the product. <br />
<br />
The timeline for the ACME:Vac product differentiates between the physical product and the AIoT part (including embedded hardware and software, AI, and cloud). If custom embedded hardware is to be designed and manufactured, this could also be subsumed under the physical product workstream, depending on the organizational setup. The physical product workstream includes a product design and manufacturing engineering phase until the Start of Production (SOP). After the SOP, this workstream focuses on manufacturing. A new workstream for the next physical product generation starting after the SOP is omitted in this example. The AIoT workstream generally assumes that an AIoT DevOps model is applied consistently through all phases. <br />
<br />
Key milestones for both the physical product and the AIoT part include the initial product design and architecture (result of sprint 0), the setup of the test lab for testing the physical product, the first end-to-end prototype combining the physical product with the AIoT-enabled digital features, the final prototype/Minimum Viable Product, and finally the SOP.<br />
<br />
The following figure also highlights the V-Sprints, which in this example apply to both physical product development and AIoT development. While physical product development is unlikely to deliver potentially shippable product increments at the end of each V-Sprint, it still assumes the same sprint cadence.<br />
<br />
Because sourcing is typically such a decisive factor, the timeline includes milestones for the key sourcing contracts that must be secured. Details regarding the procurement process are omitted on this level.<br />
<br />
[[File:2.2-bv-Milestones.png|1000px|frameless|center|link=|Example Milestone Plan]]<br />
<br />
For a Digital Equipment Operator, this plan would focus less on the development and manufacturing of the physical product. Instead, it would most likely include a dedicated workstream for managing the retrofit of the solution to the existing physical assets.<br />
<br />
= Authors and Contributors =<br />
{|{{Borderstyle-author}}<br />
|{{Designstyle-author|Image=[[File:Dirk Slama.jpeg|left|100px]]|author={{Dirk Slama|Title=AUTHOR}}}}<br />
<br><br />
{{Designstyle-author|Image=[[File:Michael Hohmann.jpg|left|100px]]|author={{Michael Hohmann|Title=CONTRIBUTOR}}}}<br />
|}</div>

AIoT Data and Functional Viewpoint (2022-03-29)
<hr />
<div><br />
<imagemap><br />
Image:2.2-v-DataFunctionalViewpoint.png|800px|frameless|center|AIoT Data/Functional Viewpoint<br />
<br />
rect 698 59 2421 430 [[AIoT_Business_Viewpoint|Business Viewpoint]]<br />
rect 702 485 2425 859 [[AIoT_Usage_Viewpoint|Usage Viewpoint]]<br />
rect 702 911 2433 1281 [[AIoT_Data_and_Functional_Viewpoint|Data/Functional Viewpoint]]<br />
rect 706 1337 2440 1715 [[AIoT_Implementation_Viewpoint|Implementation Viewpoint]]<br />
rect 2468 55 3122 1711 [[AIoT_Product_Viewpoint|Product Viewpoint]]<br />
rect 1 1041 146 1191 [[Product_Architecture|Product Architecture]]<br />
<br />
desc none<br />
</imagemap><br />
<br />
__NOTOC__<br />
<s data-category="AIoTFramework"></s><br />
<br />
The Data and Functional Viewpoint provides design details that focus on the overall functionality of the product or solution, as well as the underlying data architecture. The starting point can be a refinement of the AIoT Solution Sketch. A better understanding of the data architecture can be achieved with a basic data domain model. The component and API landscape is the initial functional decomposition and will have a huge impact on the implementation. The optional Digital Twin Landscape helps understand how the Digital Twin concept fits into the overall design. Finally, AI Feature Mapping helps identify which features are best suited to an implementation with AI.<br />
__TOC__<br />
<br />
= AIoT Solution Sketch = <br />
The AIoT Solution Sketch from the Business Model can be refined in this perspective, adding more layers of detail in a slightly more structured form of presentation. The solution sketch should include the physical asset or product (the robovac in our example), as well as other key assets (e.g., the robovac charging station) and key users. Since interactions between the physical assets and the back end are key, they should be listed explicitly. The sketch should also include an overview of key UIs, the key business processes supported, the key AI and analytics-related elements, the main data domains, and external databases or applications. <br />
<br />
[[File:2.2-Vacuum Solution Sketch.png|800px|frameless|center|link=|Solution Sketch for Vacuum Robot]]<br />
<br />
= Data Domain Model = <br />
The Data Domain Model should provide a high-level overview of the key entities of the product design, including their relationships to external systems. The Domain Model should include approximately a dozen key entities. It does not aim to provide the same level of detail as a thorough data schema or object model. Instead, it should serve as the foundation for discussing data requirements between stakeholders, across multiple stakeholder groups in the product team.<br />
<br />
[[File:2.2-Vacuum Data Domain Model.png|800px|frameless|center|link=|Data Domain Model for Vacuum Robot]]<br />
<br />
For example, the main data domains that have been identified for the ACME:Vac product are the customer, the robovac itself, floor maps and cleaning data. Each of these domains is described by listing the 5-10 key entities within. This is typically a good level of detail: sufficiently meaningful for planning discussions, without getting lost in detail.<br />
<br />
The design team must make a decision on whether the data required from an AI perspective should already be included here. This can make sense if AI-related data also play an important role in other, non-AI-based parts of the system. In this case, potential dependencies can be identified and discussed here. If the AI has dedicated input sources (e.g., specific sensors that are only used by the AI), then what matters most at this point is what kind of data or information the AI provides as an output.<br />
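The data domains described above can be captured in a lightweight form that stakeholders can review, e.g., as type definitions. A sketch with invented attribute names for a few of the ACME:Vac domains:<br />

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Customer:
    customer_id: str
    name: str
    subscription_tier: str  # e.g., "basic" or "premium"

@dataclass
class FloorMap:
    map_id: str
    rooms: List[str] = field(default_factory=list)

@dataclass
class Robovac:
    serial_number: str
    owner: Customer
    firmware_version: str
    maps: List[FloorMap] = field(default_factory=list)

# Example: one customer with one robovac and one mapped floor
alice = Customer("C-1", "Alice", "premium")
vac = Robovac("RV-42", alice, "1.0.3", [FloorMap("M-1", ["kitchen", "hall"])])
print(len(vac.maps[0].rooms))  # 2
```

This deliberately stays at the 5-10-entities-per-domain level of detail discussed above, rather than a full data schema.<br />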
<br />
= Component and API Landscape= <br />
To manage the complexity of an AIoT-enabled product or solution, the well-established approach of functional decomposition and componentization should be applied. The results should be captured in a high-level component landscape, which helps visualize and communicate the key functional components.<br />
<br />
== Functional decomposition and componentization == <br />
Functional decomposition is a method of analysis that breaks a complex body of work down into smaller, more easily manageable units. <br />
This “Divide & Conquer” strategy is essential for managing complexity. Especially if the body of work cannot be implemented by a single team, splitting the work so that it can be assigned to different teams in a meaningful way becomes very important. <br />
Since a system's architectural structure tends to be a mirror image of the organizational structure, it is important that team building follows the functional decomposition process. The idea of "feature teams" to support this is discussed in the [[AIoT_Product_Viewpoint|AIoT Product Viewpoint]].<br />
<br />
Another key point of functional decomposition is functional scale: without effective functional decomposition and management of functional dependencies, it will be difficult to build a functionally rich application. It does not stop at building the initial release. Most modern software-based systems are built on the agile philosophy of continuous evolution. While an AIoT-enabled product or solution does not only consist of software -- it also includes AI and hardware -- enabling evolvability is usually a key requirement. Functional decomposition and componentization will enable the encapsulation of changes and thus support efficient system evolution.<br />
<br />
<br />
The logical construct for encapsulating key data and functionality in an AIoT system should be the component. From a functional viewpoint, components are logical constructs, independent of a particular technology. Later in the implementation viewpoint, they can be mapped to specific programming languages, AI platforms, or even specific functionality implemented in hardware. Additionally, component frameworks such as microservices can be added where applicable.<br />
<br />
<br />
[[File:2.2-Componentization.png|800px|frameless|center|link=|Functional decomposition and componentization]]<br />
<br />
<br />
The functional decomposition process should go hand in hand with the development of the agile story map (see [[AIoT_Product_Viewpoint|AIoT Product Viewpoint]]) since the story map will contain the official definition of the body of work, broken down into epics and features.<br />
<br />
<br />
The first iteration of the component landscape can actually be very close to the story map, since it should truly only focus on the functional decomposition. In a second iteration, the logical AIoT components must be mapped to a distributed component architecture. This perspective is actually somewhat between the functional and implementation viewpoints. The mapping to the distributed component architecture must take a number of different aspects into consideration, including business/functional requirements, cost constraints, technical constraints and architectural constraints.<br />
<br />
<br />
A key functional requirement is simply availability. In a distributed system, remote access to a component always carries a higher risk of the component not being available, e.g., due to connectivity issues. Other business-driven aspects are organizational constraints (especially if different parts of the distributed system are developed by different organizational units), physical control, and legal aspects (deploying critical data or functionality in the field or in certain countries can be difficult). This is also closely related to data ownership and data sharing requirements.<br />
<br />
<br />
Achim Nonnenmacher, expert for Software-defined Vehicles at Bosch, comments: ''Because of the availability issues related to distributed applications, many leading services are using a capability-based architecture. For example, many smartphones have two versions of their key services - one which works with the data and capabilities available on the phone, and one which works only with cloud connectivity. For example, you can say "Hey, bring me home", and the offline phone will still be able to provide rudimentary voice recognition and navigation services using the AI and map data on the phone. Only if the phone is online will it be able to make full use of better cloud-based AI and data. We still have to learn in many ways how to apply this to the vehicle-based applications of the future, but this will be important.''<br />
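The capability-based pattern from the quote can be sketched as a simple fallback: try the richer cloud service, and degrade gracefully to the on-device capability when connectivity is missing. All function names here are hypothetical placeholders:<br />

```python
def recognize_command(audio, cloud_available):
    """Capability-based dispatch: use the cloud AI when online,
    fall back to the rudimentary on-device model otherwise."""
    if cloud_available:
        try:
            return cloud_speech_to_text(audio)  # richer cloud-based model
        except ConnectionError:
            pass                                # degrade gracefully
    return local_speech_to_text(audio)          # on-device model

# Hypothetical stand-ins for the two capability levels
def cloud_speech_to_text(audio):
    raise ConnectionError("backend unreachable")

def local_speech_to_text(audio):
    return "navigate_home"

print(recognize_command(b"...", cloud_available=True))  # navigate_home
```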
<br />
Another key distribution aspect is cost constraints. For example, many manufacturers of physical products have to ensure that the unit costs are kept to a minimum. This can be an argument for deploying computationally intensive functions not on the physical product but rather on a cloud back end, which can better distribute loads coming from a large number of connected products. Similar arguments apply to communication costs and data storage costs.<br />
<br />
Furthermore, the distributed architecture will be greatly influenced by technical constraints, such as latency (the minimum amount of time for a single bit of data to travel across the network), bandwidth (data transfer capacity of the network), performance (e.g., edge vs. cloud compute performance), time sensitivity (related to latency), and security. <br />
<br />
<br />
Finally, a number of general aspects should be considered from the architectural perspective. For example, the technical target platform (e.g., software vs. AI) will play a role. Another factor is the different speed of development: a single component should not combine functionalities that will evolve at significantly different speeds. Similarly, one should avoid combining functionality that only requires standard Quality Management (QM) with functionality that must be treated as relevant to functional safety in a single component. In this case, the QM functionality would also have to be treated as safety-relevant, making it more costly to test, maintain and update.<br />
<br />
While some of these constraints and considerations are of a more technical nature, they need to be combined with more business or functional considerations when designing the distributed component architecture.<br />
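The constraints above can be made explicit in a first-pass placement heuristic. The thresholds are invented for illustration; a real placement decision would weigh many more factors than latency, data rate and safety:<br />

```python
def place_component(max_latency_ms, data_rate_mbps, safety_critical):
    """Very rough heuristic for edge vs. cloud component placement."""
    if safety_critical or max_latency_ms < 50:
        return "edge"   # tight latency or safety: keep it on the device
    if data_rate_mbps > 10:
        return "edge"   # too expensive to stream raw data to the cloud
    return "cloud"      # otherwise benefit from elastic cloud resources

print(place_component(max_latency_ms=10, data_rate_mbps=0.1, safety_critical=False))   # edge
print(place_component(max_latency_ms=500, data_rate_mbps=0.5, safety_critical=False))  # cloud
```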
<br />
== Component Landscape == <br />
The result of the functional decomposition process should be a component landscape, which focuses on functional aspects but already includes a high-level distribution perspective. <br />
<br />
<br />
[[File:2.1-Example-Component-Architecture.png|800px|frameless|center|link=|Example: Initial Component Architecture]]<br />
<br />
<br />
The example shown here is the high-level component landscape for the ACME:Vac product. This component landscape has three swimlanes: one for the robovac (i.e., the edge platform), one for the main cloud service, and one for the smartphone running the ACME:Vac mobile app. The components of the robot include basic robot control and sensor access, as well as the route/trajectory calculation. These components would most likely be based on an embedded platform, but this level of detail is omitted from a functional viewpoint. In addition, the robot will have a configuration component, as well as a component offering remote services that can be accessed from the robot control component in the cloud.<br />
In addition, the cloud contains components for robot configuration, user configuration management, map data, and the management of the system status and usage history. Finally, the mobile app has a component to manage the main app screen, map management, and remote robot configuration.<br />
<br />
== API Management ==<br />
<br />
In his famous "API Mandate", Jeff Bezos -- CEO of Amazon at the time -- declared that ''"All teams will henceforth expose their data and functionality through service interfaces."'' If the CEO of a major company gets involved on this level, you can tell how important this topic is. <br />
<br />
APIs (Application Programming Interfaces) are how components make data and functionality available to other components. Today, a common approach is so-called RESTful APIs, which utilize the popular HTTP internet protocol. However, there are many different types of APIs. In AIoT, another important category of APIs sits between the software and the hardware layer. These APIs are often provided as low-level C APIs (of course, any C API can in turn be wrapped in a REST API and exposed to remote clients). Most APIs support a basic request/response pattern to enable interactions between components. Some applications require a more message-oriented, de-coupled style of interaction, which requires a special kind of API. <br />
<br />
Regardless of the technical nature of the API, it is good practice to document APIs via an API contract. This contract defines the input and output arguments, as well as the expected behavior. "Interface first" is an important design approach which mandates that, before implementing a new application component, one should first define its APIs, including the API contract. This approach ensures de-coupling between component users and component implementers, which in turn reduces dependencies and helps manage complexity. Because APIs are such an important part of modern, distributed system development, they should be managed and documented as key artefacts, e.g., using modern API management tools that support API documentation standards like [https://www.openapis.org/ OpenAPI].<br />
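The "interface first" idea can be illustrated with a minimal contract that is written down before any implementation exists. The endpoint and field names are hypothetical; in practice, such a contract would live in an OpenAPI document rather than in code:<br />

```python
# Hypothetical API contract for a robovac status endpoint,
# defined before the component is implemented ("interface first").
STATUS_CONTRACT = {
    "endpoint": "GET /robots/{robot_id}/status",
    "response": {"robot_id": str, "battery_percent": int, "state": str},
}

def validate_response(payload, contract):
    """Check that a response payload matches the contract's field types."""
    schema = contract["response"]
    return set(payload) == set(schema) and all(
        isinstance(payload[key], expected_type)
        for key, expected_type in schema.items()
    )

ok = validate_response(
    {"robot_id": "RV-42", "battery_percent": 87, "state": "cleaning"},
    STATUS_CONTRACT,
)
print(ok)  # True
```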
<br />
From the system design point of view, the component landscape introduced earlier should be augmented with information about the key APIs supported by the different components. For example, the component landscape documentation can link to the detailed API documentation in different repositories. This way, the component landscape provides a high-level description not only of how data and functionality are clustered in the system, but also of how they can be accessed.<br />
<br />
= Digital Twin Landscape = <br />
<br />
<br />
As introduced in the [[Digital_Twin_101|Digital Twin 101]] section, using the Digital Twin concept can be useful, especially when dealing with complex physical assets. In this case, a Digital Twin Landscape should be included with the Data/Functional Viewpoint. The Digital Twin Landscape should provide an overview of the key logical Digital Twin models and their relationships. Relationships between Digital Twin model elements can be manifold. They should be used to help define the so-called "knowledge graph" across the different, often heterogeneous data sources used to construct the Digital Twin model.<br />
<br />
<br />
In some cases, the implementation of the Digital Twin will rely on specific standards and Digital Twin platforms. The Digital Twin Landscape should keep this in mind and only use modeling techniques that will be supported later by the implementation environment. For example, the Digital Twins Definition Language ([https://github.com/Azure/opendigitaltwins-dtdl DTDL]) is an open standard supported by Microsoft, specifically designed to support modeling of Digital Twins. Some of the rich features of DTDL include Digital Twin interfaces and components, different kinds of relationships, as well as persistent properties and transient telemetry events.<br />
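To give an impression of what such a model looks like, here is a DTDL-v2-style interface for the robovac, shown as a Python dictionary. The structure follows the DTDL pattern of an Interface containing Telemetry, Property and Relationship elements; the IDs and names are invented for this example, not an official ACME model:<br />

```python
# Sketch of a DTDL-v2-style Digital Twin interface for the robovac.
robovac_interface = {
    "@context": "dtmi:dtdl:context;2",
    "@id": "dtmi:acme:Robovac;1",
    "@type": "Interface",
    "contents": [
        # transient telemetry event
        {"@type": "Telemetry", "name": "batteryLevel", "schema": "double"},
        # persistent property
        {"@type": "Property", "name": "firmwareVersion", "schema": "string"},
        # relationship to an environment element
        {"@type": "Relationship", "name": "cleans", "target": "dtmi:acme:Room;1"},
    ],
}

telemetry = [c["name"] for c in robovac_interface["contents"]
             if c["@type"] == "Telemetry"]
print(telemetry)  # ['batteryLevel']
```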
<br />
[[File:DT Landscape.png|1000px|frameless|center|link=|Digital Twin Landscape]]<br />
<br />
In the example shown here, these modeling features are used to create a visual Digital Twin Landscape for the ACME:Vac example. The example differentiates between two types of Digital Twin model elements: system (green) and environment (blue). The system elements relate to the physical components of the robovac system. Environment elements relate to the environment in which a robovac system is actually deployed.<br />
<br />
This differentiation is important for a number of reasons. First, the Digital Twin system elements are known in advance, while the Digital Twin environment elements actually need to be created from sensor data (see the discussion on [[Digital_Twin_101|Digital Twin reconstruction]]).<br />
<br />
Second, while the Digital Twin model is supposed to provide a high level of abstraction, it cannot be seen completely in isolation from all the different constraints discussed in the previous section. For example, not all telemetry events used on the robot will be directly visible in the cloud; otherwise, too much traffic between the robots and the cloud would be created.<br />
<br />
This is why the Digital Twin landscape in this example assigns different types of model elements to different components. In this way, the distributed nature of the component landscape is taken into consideration, allowing for the creation of realistic mapping to a technical implementation later on.<br />
<br />
= <span id="AIFeatureMapping"></span>AI Feature Mapping = <br />
The final element in the Data/Functional Viewpoint should be an assessment of the key features with respect to suitability for implementation with AI. As stated in the [[Digital_OEM#Key_design_decisions|introduction]], a key decision for product managers in the context of AIoT will be whether a new feature should be implemented using AI, Software, or Hardware. To ensure that the potential for the use of AI in the system is neither neglected nor overstated, a structured process should be applied to evaluate each key feature in this respect.<br />
<br />
<br />
[[File:AI Feature Mapping.png|700px|frameless|center|link=|AI Feature Mapping]]<br />
<br />
<br />
In the example shown here, the features from the agile story map (see [[AIoT_Product_Viewpoint|AIoT Product Viewpoint]]) are used as the starting point. For each feature, the expected outcome is examined. Furthermore, from an AI point of view, it needs to be understood which live data can be made available to potentially AI-enabled components, as well as which training data. Based on this information, an initial recommendation regarding the suitability of a given feature for implementation with AI can be derived. This information can be mapped back to the overall component landscape, as indicated by the figure below. Note also that a new component for cloud-based model training has been added to this version of the component landscape. Note that this level of detail does not describe details of the algorithms used, e.g., Simultaneous Localization and Mapping (SLAM).<br />
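The mapping logic described above can be sketched as a simple rule-based assessment: a feature is a good AI candidate if training and live data can be obtained and the expected outcome tolerates a statistical rather than exact result. The feature names and rules below are illustrative:<br />

```python
def ai_suitability(needs_exact_result, training_data_available, live_data_available):
    """Rough first-pass recommendation for AI vs. conventional software."""
    if needs_exact_result:
        return "software"        # deterministic logic preferred
    if training_data_available and live_data_available:
        return "AI candidate"
    return "software (revisit once data is available)"

# Hypothetical ACME:Vac features: (needs exact result, training data, live data)
features = {
    "route planning from floor map": (False, True, True),
    "battery charge accounting":     (True,  True, True),
}
for name, flags in features.items():
    print(name, "->", ai_suitability(*flags))
```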
<br />
[[File:Component Landscape with AI.png|900px|frameless|center|link=|Identifying AI-enabled components]]<br />
<br />
= Authors and Contributors = <br />
{|{{Borderstyle-author}}<br />
|{{Designstyle-author|Image=[[File:Dirk Slama.jpeg|left|100px]]|author={{Dirk Slama|Title=AUTHOR}}}}<br />
<br><br />
{{Designstyle-author|Image=[[File:Michael Hohmann.jpg|left|100px]]|author={{Michael Hohmann|Title=CONTRIBUTOR}}}}<br />
|}</div>

AIoT Implementation Viewpoint (2022-03-29)
<hr />
<div><imagemap><br />
Image:2.2-v-ImplementationViewpoint.png|800px|frameless|center|AIoT Implementation Viewpoint<br />
<br />
rect 698 59 2421 430 [[AIoT_Business_Viewpoint|Business Viewpoint]]<br />
rect 702 485 2425 859 [[AIoT_Usage_Viewpoint|Usage Viewpoint]]<br />
rect 702 911 2433 1281 [[AIoT_Data_and_Functional_Viewpoint|Data/Functional Viewpoint]]<br />
rect 706 1337 2440 1715 [[AIoT_Implementation_Viewpoint|Implementation Viewpoint]]<br />
rect 2468 55 3122 1711 [[AIoT_Product_Viewpoint|Product Viewpoint]]<br />
rect 1 1041 146 1191 [[Product_Architecture|Product Architecture]]<br />
<br />
desc none<br />
</imagemap><br />
__NOTOC__<br />
<s data-category="AIoTFramework"></s><br />
<br />
The Implementation Viewpoint must provide sufficient detail to have meaningful technical discussions between the different technical stakeholders of the product team. However, most design artifacts in this viewpoint will still be on a level of abstraction which will hide many of the different details required by the implementation teams. Nevertheless, it is important to find a common language and understanding between the different stakeholders, including a realistic mapping to the [[AIoT_Data_and_Functional_Viewpoint|Data / Functional Viewpoint]].<br />
<br />
The AIoT Implementation Viewpoint should at least include an End-to-End Architecture, details on the planned integration with the physical asset (either following a line-fit or retrofit approach), as well as high-level hardware, software and AI architectures.<br />
__TOC__<br />
<br />
= End-to-End Architecture =<br />
The End-to-End Architecture should include the integration of physical assets, as well as the integration of existing enterprise applications in the back end. In between, an AIoT system will usually have edge and cloud or on-premises back end components. These should also be described with some level of detail, including technical platforms, middleware, AI and Digital Twin components, and finally the business logic itself.<br />
<br />
[[File:0.3.1 IoT Architecture.png|900px|frameless|center|link=|IoT Architecture]]<br />
<br />
= <span id="Asset"></span>Asset Integration =<br />
The Asset Integration perspective should provide an overview of the physical parts of the product, including sensors, antennas, battery/power supply, HMI, and onboard computers. The focus is on how these different elements are integrated with the asset itself: for example, where exactly on the asset the antenna would be located, and where to position key elements such as the main board, battery, sensors, etc. Finally, an important question concerns wiring for the power supply, as well as access to local bus systems.<br />
<br />
[[File:2.3-iv-AssetIntegration.png|700px|frameless|center|link=|Asset Integration]]<br />
<br />
= <span id="HW"></span>Hardware Architecture=<br />
Depending on the requirements of the AIoT system, custom hardware development can be an important success factor. <br />
The complexity of custom hardware design and development should not be underestimated.<br />
From the hardware design point of view, a key artifact is usually the schematic design of the required PCBs (Printed Circuit Boards).<br />
<br />
[[File:HW Architecture.png|900px|frameless|center|link=|Robovac hardware architecture]]<br />
<br />
The ACME:Vac example shown here includes the main control unit, HMI, power management, sensors, wireless connectivity, signal conditioning, and finally the control of the different motors.<br />
<br />
= <span id="SW"></span>Software Architecture =<br />
The technical software architecture should have a logical layering, showing key software components and their main dependencies. For the ACME:Vac example, the software architecture would include two main perspectives: the software architecture on the robovac (shown here) and the backend architecture (not shown here).<br />
<br />
[[File:SW Architecture.png|800px|frameless|center|link=|Example: Robovac SW architecture]]<br />
<br />
Depending on the type of organization, software architecture will be ad hoc or follow standards such as The Open Group's [https://www.opengroup.org/togaf TOGAF] framework. TOGAF, for example, provides the concept of Architecture and Solution Building Blocks (ABBs and SBBs, respectively), which can be useful in more complex AIoT projects.<br />
<br />
The example shown here is generic (like an ABB in TOGAF terms). Not shown here is a mapping of the software architecture to concrete products and standards (like a TOGAF SBB), which would usually be the case in any project. However, the ''AIoT Playbook'' does not want to favor any particular vendor and is consequently leaving this exercise to the reader.<br />
<br />
= <span id="AI"></span>AI Pipeline Architecture=<br />
The AI Pipeline Architecture should explain, on a technical level, how data preparation, model training and deployment of AI models are supported. For each of these phases, it must be understood which AI-specific frameworks are being used, which additional middleware, which DBMS or other data storage technology, and which hardware and OS.<br />
<br />
Finally, the AI Pipeline Architecture must show how the deployment of trained models to cloud and edge nodes is supported. For distributed edge nodes in particular, the support for OTA (over-the-air) updates should be explained. Furthermore, in the case of AI on distributed edge nodes, the architecture must explain how model monitoring data are captured and consolidated back in the cloud.<br />
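To make these phases concrete, the following Python sketch walks through data preparation, a deliberately trivial training step, packaging for OTA distribution, and the capture of monitoring data on an edge node. All function and field names (`prepare`, `train`, `package_for_edge`, `edge_infer`) are illustrative assumptions, not part of any specific AI framework.

```python
import json
import statistics

def prepare(raw_readings):
    """Data preparation: drop obviously invalid sensor readings."""
    return [r for r in raw_readings if r is not None and r >= 0]

def train(readings):
    """'Training': derive a simple anomaly threshold (mean + 2 * stdev).
    A real pipeline would call an ML framework here instead."""
    mean = statistics.mean(readings)
    stdev = statistics.pstdev(readings)
    return {"threshold": mean + 2 * stdev, "version": 1}

def package_for_edge(model):
    """Serialize the trained model for OTA distribution to edge nodes."""
    return json.dumps(model).encode("utf-8")

def edge_infer(packaged_model, reading):
    """On the edge node: load the model and classify a reading, returning
    monitoring data to be consolidated back in the cloud."""
    model = json.loads(packaged_model.decode("utf-8"))
    return {"anomaly": reading > model["threshold"],
            "model_version": model["version"]}

raw = [10.0, 11.0, None, 9.5, 10.5, -3.0, 10.2]
model = train(prepare(raw))
result = edge_infer(package_for_edge(model), reading=50.0)
```

In a real pipeline, the training step would invoke an ML framework, and the packaged model would be distributed through an actual OTA update mechanism.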
<br />
[[File:2.2-iv-AI.png|800px|frameless|center|link=|AI Architecture]]<br />
<br />
= Putting it all together =<br />
The Data/Functional Viewpoint has introduced the concept of functional decomposition, including the documentation of the distributed component architecture. The Implementation Viewpoint has added different technical perspectives. <br />
The different functional components must be mapped to technology-specific pipelines. For this, feature teams must be defined that combine the required technical skills and access to the required technical pipelines for a specific feature (see the [[AIoT_Product_Viewpoint|AIoT Product Viewpoint]] for a more detailed discussion on feature teams and how they are assigned).<br />
<br />
[[File:2.1-Decomposition.png|800px|frameless|center|link=|Architectural Decomposition and System Integration]]<br />
<br />
The results from the different technical pipelines are individual technical components that must be integrated via different types of interfaces. For example, smartphone, cloud and edge components can be integrated via REST interfaces. On the edge, embedded components are often integrated via C interfaces. The integration between embedded software and hardware is done via different types of Hardware/Software Interfaces (HSI). Finally, any AIoT hardware components must be physically integrated with the actual physical product. During the development/testing phase, this will usually be a manual process, while later it will be either a standardized retrofit or line-fit process.<br />
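As a minimal sketch of such an interface contract, the following Python example shows a command message as it might travel in a REST request body between the cloud and an edge component. The schema and field names are invented for illustration and do not reflect any specific product.

```python
import json

# Hypothetical shared message schema; field names are illustrative only.
COMMAND_SCHEMA = {"command", "parameters", "target_component"}

def encode_command(command, target_component, **parameters):
    """Cloud side: serialize a command as it would be sent in a REST body."""
    return json.dumps({
        "command": command,
        "target_component": target_component,
        "parameters": parameters,
    })

def decode_command(body):
    """Edge side: parse and validate the incoming request body."""
    msg = json.loads(body)
    missing = COMMAND_SCHEMA - msg.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return msg

body = encode_command("set_cleaning_mode", "robovac-cleaning-ctrl", mode="eco")
msg = decode_command(body)
```

The point of such a shared schema is that both sides of the interface can evolve independently, as long as the agreed message contract is honored.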
<br />
All of this will be required to integrate the different components required for a specific feature across the different technical pipelines. Multiple features will be integrated to form the entire system (or system-of-systems, depending on the complexity of our product or solution).<br />
<br />
= Authors and Contributors =<br />
{|{{Borderstyle-author}}<br />
|{{Designstyle-author|Image=[[File:Dirk Slama.jpeg|left|100px]]|author={{Dirk Slama|Title=AUTHOR}}}}<br />
<br><br />
{{Designstyle-author|Image=[[File:Michael Hohmann.jpg|left|100px]]|author={{Michael Hohmann|Title=CONTRIBUTOR}}}}<br />
|}</div>Sangamithra Panneer Selvamhttps://www.digitalplaybook.org/index.php?title=AIoT_Product_Viewpoint&diff=7057AIoT Product Viewpoint2022-03-29T14:56:09Z<p>Sangamithra Panneer Selvam: </p>
<hr />
<div><br />
<imagemap><br />
Image:2.2-v-ProductViewpoint.png|800px|frameless|center|AIoT Product Viewpoint<br />
<br />
rect 698 59 2421 430 [[AIoT_Business_Viewpoint|Business Viewpoint]]<br />
rect 702 485 2425 859 [[AIoT_Usage_Viewpoint|Usage Viewpoint]]<br />
rect 702 911 2433 1281 [[AIoT_Data_and_Functional_Viewpoint|Data/Functional Viewpoint]]<br />
rect 706 1337 2440 1715 [[AIoT_Implementation_Viewpoint|Implementation Viewpoint]]<br />
rect 2468 55 3122 1711 [[AIoT_Product_Viewpoint|Product Viewpoint]]<br />
rect 1 1041 146 1191 [[Product_Architecture|Product Architecture]]<br />
<br />
desc none<br />
</imagemap><br />
__NOTOC__<br />
<s data-category="AIoTFramework"></s><br />
<br />
The Product Viewpoint must map the other elements of the Product Architecture to the key elements of an agile product organization. The main artifact here is the agile story map, which is the highest-level structural description of the entire body of work. Feature team mapping supports the mapping of the work described in the story map to the teams needed to implement the different product features. Finally, for each team and each sprint, an individual sprint backlog must be created based on the story map and the results of the feature team mappings.<br />
<br />
__TOC__<br />
<br />
= <span id="StoryMap"></span>Story Map =<br />
It is best practice in the agile community to break down a larger body of work into specific work items using a hierarchical approach. Depending on the method applied, this hierarchy could include themes, epics, features, and user stories.<br />
<br />
A story map organizes user stories in a logical way to present the big picture of the product. Story maps help ensure that user stories are well balanced, covering all important aspects of the planned solution at a similar level of detail. Story maps provide a two-dimensional graphical visualization of the Product Backlog. Many modern development support tools (such as Jira) support automatic visualization of the product backlog as a story map.<br />
<br />
The AIoT Framework assumes the following hierarchy:<br />
* Epic: a high-level work description, usually outlining a particular usage scenario from the perspective of one or more personas<br />
* Feature: a specific feature to support an epic<br />
* User Story: a short requirement written from the perspective of an end user<br />
<br />
Depending on the complexity of the project and the agile method chosen, this may need to be adapted, e.g. by further adding ''themes'' as a way of bundling epics.<br />
<br />
When starting to break down the body of work, one should first agree on a set of top-level epics, and ensure that they are consistent, do not overlap, and cover everything that is needed. For each epic, a small number of features should be defined. These features should functionally be independent (see the discussion on [[AIoT_Data_and_Functional_Viewpoint|functional decomposition]]). Finally, features can further be broken down into user stories. User stories are short and concise descriptions of the desired functionality told from the perspective of the user.<br />
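The epic/feature/user story hierarchy can be sketched as a simple data model. The following Python fragment is illustrative only (the class names are assumptions, not a tool's API), using a fragment of the ACME:Vac example.

```python
from dataclasses import dataclass, field

# Illustrative data model of the epic/feature/user-story hierarchy.
@dataclass
class UserStory:
    text: str  # told from the perspective of the end user

@dataclass
class Feature:
    name: str
    stories: list = field(default_factory=list)

@dataclass
class Epic:
    name: str
    features: list = field(default_factory=list)

# A fragment of the ACME:Vac story map.
cleaning = Epic("Cleaning", features=[
    Feature("Cleaning mode configuration", stories=[
        UserStory("As a user, I want to change the cleaning mode."),
    ]),
])
story_map = [Epic("HMI"), cleaning, Epic("Maps")]

def count_stories(epics):
    """Total number of user stories across the whole story map."""
    return sum(len(f.stories) for e in epics for f in e.features)
```

A structure like this is essentially what tools such as Jira maintain internally when they render the product backlog as a story map.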
<br />
[[File:2.1-Example-Story-Map.png|800px|frameless|center|link=|Example: Initial Story Map]]<br />
<br />
The example shown here is the story map for the ACME:Vac product. It has six epics: HMI, Cleaning, Maps, Navigation/Sensing, Configuration, and Status/History. Each epic is broken down into a small number of key features supporting the epic. User stories are not shown on this level. Note that this story map does not cover the mechatronic part of the system, such as the chassis, motor, locomotion (climbing over obstacles, etc.) and home base. Also, functional safety is not included here, which would be another real-world requirement.<br />
<br />
= Feature Team Mapping =<br />
One of the main challenges in almost all product organizations is the creation of an efficient mapping between the organizational structure and the product structure (the same applies to projects and solutions). The problem here is that organizations are often structured around skills (UX, frontend, back end, testing, etc.), while product features usually require a mixture of these skills.<br />
<br />
Consequently, the ''AIoT Playbook'' recommends an approach based on feature teams, which are assigned on demand to match the requirements of a specific feature. See [[Agile_AIoT_Organization|Agile AIoT Organization]] for a more detailed discussion. Feature teams can exist for longer periods of time, spanning multiple sprints, if the complexity of the feature requires this.<br />
<br />
[[File:2.1-Example-US2Comp-Mapping.png|800px|frameless|center|link=|Mapping User Story to Components and Feature Team]]<br />
<br />
In the example shown here, the user story "Change cleaning mode" (part of the cleaning mode configuration feature) is analyzed. <br />
The results of the analysis show that a number of components on the robovac, the cloud, and the mobile app must be created or extended to support this user story. A similar analysis must be done for all other user stories of the overarching feature before a proposal for the supporting feature team can be made. In this case, the feature team must include a domain expert, an embedded developer, a cloud developer, a mobile app developer, and an integration/test expert. To support the Scrum approach, it must be agreed who in the feature team plays the role of product (or feature) owner and who acts as Scrum Master.<br />
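This component-to-skills analysis can be sketched programmatically. The following Python fragment is a hypothetical illustration, assuming a fixed mapping from affected components to the skills named in the example.

```python
# Illustrative mapping from affected components to required skills;
# the component names and skill labels are assumptions for ACME:Vac.
COMPONENT_SKILLS = {
    "robovac": "embedded developer",
    "cloud": "cloud developer",
    "mobile app": "mobile app developer",
}

def propose_feature_team(affected_components):
    """Derive a feature team proposal from the component analysis.
    A domain expert and an integration/test expert are always included."""
    team = {"domain expert", "integration/test expert"}
    for component in affected_components:
        team.add(COMPONENT_SKILLS[component])
    return sorted(team)

# The "Change cleaning mode" user story touches robovac, cloud and mobile app.
team = propose_feature_team(["robovac", "cloud", "mobile app"])
```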
<br />
= Sprint Backlogs =<br />
In preparation for each sprint, an individual sprint backlog must be created for each team, which is specific to the upcoming sprint. The sprint backlog is derived from the story map (essentially the product backlog). The sprint backlog contains only those items that are scheduled for implementation during that sprint. The sprint backlog can contain user stories to support features but also bug fixes or nonfunctional requirements. <br />
<br />
In larger organizations with multiple feature teams, the Chief Product Owner is responsible for the overarching story map, which serves as the product backlog. The Chief Product Owner prioritizes product backlog items based on risk, business value, dependencies, size, and date needed, and assigns them to the individual teams. The teams will usually refine them and create their own sprint backlogs, in alignment with the Chief Product Owner and the product/feature owners of the individual teams.<br />
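As a hedged illustration of such a prioritization, the following Python sketch scores backlog items by some of the criteria named above (business value, risk, size, and date needed). The weighting is purely an assumption, not a prescribed formula.

```python
from datetime import date

def priority(item, today=date(2024, 1, 1)):
    """Hypothetical scoring: higher value, risk and urgency raise priority;
    larger size lowers it. The weights are illustrative only."""
    days_left = (item["date_needed"] - today).days
    urgency = max(0, 100 - days_left)
    return item["business_value"] + item["risk"] + urgency - item["size"]

backlog = [
    {"name": "Change cleaning mode", "business_value": 8, "risk": 3,
     "size": 2, "date_needed": date(2024, 2, 1)},
    {"name": "Cleaning history", "business_value": 4, "risk": 1,
     "size": 3, "date_needed": date(2024, 6, 1)},
]
# Highest-priority items first, ready for sprint backlog assignment.
ordered = sorted(backlog, key=priority, reverse=True)
```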
<br />
= Authors and Contributors =<br />
{|{{Borderstyle-author}}<br />
|{{Designstyle-author|Image=[[File:Dirk Slama.jpeg|left|100px]]|author={{Dirk Slama|Title=AUTHOR}}}}<br />
<br><br />
{{Designstyle-author|Image=[[File:Michael Hohmann.jpg|left|100px]]|author={{Michael Hohmann|Title=CONTRIBUTOR}}}}<br />
|}</div>Sangamithra Panneer Selvamhttps://www.digitalplaybook.org/index.php?title=AIoT_Data_and_Functional_Viewpoint&diff=7054AIoT Data and Functional Viewpoint2022-03-29T14:54:50Z<p>Sangamithra Panneer Selvam: </p>
<hr />
<div>__NOTOC__<br />
<br />
<imagemap><br />
Image:2.2-v-DataFunctionalViewpoint.png|800px|frameless|center|AIoT Data/Functional Viewpoint<br />
<br />
rect 698 59 2421 430 [[AIoT_Business_Viewpoint|Business Viewpoint]]<br />
rect 702 485 2425 859 [[AIoT_Usage_Viewpoint|Usage Viewpoint]]<br />
rect 702 911 2433 1281 [[AIoT_Data_and_Functional_Viewpoint|Data/Functional Viewpoint]]<br />
rect 706 1337 2440 1715 [[AIoT_Implementation_Viewpoint|Implementation Viewpoint]]<br />
rect 2468 55 3122 1711 [[AIoT_Product_Viewpoint|Product Viewpoint]]<br />
rect 1 1041 146 1191 [[Product_Architecture|Product Architecture]]<br />
<br />
desc none<br />
</imagemap><br />
<br />
<s data-category="AIoTFramework"></s><br />
<br />
The Data and Functional Viewpoint provides design details that focus on the overall functionality of the product or solution, as well as the underlying data architecture. The starting point can be a refinement of the AIoT Solution Sketch. A better understanding of the data architecture can be achieved with a basic data domain model. The component and API landscape is the initial functional decomposition and will have a huge impact on the implementation. The optional Digital Twin landscape helps understand how Digital Twins as a concept fit into the overall design. Finally, AI Feature Mapping helps identify which features are best suited to an implementation with AI.<br />
__TOC__<br />
<br />
= AIoT Solution Sketch = <br />
The AIoT Solution Sketch from the Business Model can be refined in this perspective, adding more layers of detail in a slightly more structured presentation. The solution sketch should include the physical asset or product (the robovac in our example), as well as other key assets (e.g., the robovac charging station) and key users. Since interactions between the physical assets and the back end are key, they should be listed explicitly. The sketch should also include an overview of key UIs, the key business processes supported, the key AI and analytics-related elements, the main data domains, and external databases or applications. <br />
<br />
[[File:2.2-Vacuum Solution Sketch.png|800px|frameless|center|link=|Solution Sketch for Vacuum Robot]]<br />
<br />
= Data Domain Model = <br />
The Data Domain Model should provide a high-level overview of the key entities of the product design, including their relationships to external systems. The Domain Model should include approximately a dozen key entities. It does not aim to provide the same level of detail as a thorough data schema or object model. Instead, it should serve as the foundation for discussing data requirements between stakeholders, across multiple stakeholder groups in the product team.<br />
<br />
[[File:2.2-Vacuum Data Domain Model.png|800px|frameless|center|link=|Data Domain Model for Vacuum Robot]]<br />
<br />
For example, the main data domains that have been identified for the ACME:Vac product are the customer, the robovac itself, floor maps and cleaning data. Each of these domains is described by listing the 5-10 key entities within. This is typically a good level of detail: sufficiently meaningful for planning discussions, without getting lost in detail.<br />
<br />
The design team must decide whether the data required from an AI perspective should already be included here. This can make sense if AI-related data also play an important role in other, non-AI-based parts of the system. In this case, potential dependencies can be identified and discussed here. If the AI has dedicated input sources (e.g., specific sensors that are only used by the AI), then what is likely most interesting at this point is the kind of data or information the AI provides as an output.<br />
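As an illustrative sketch, the four data domains named in the example (customer, robovac, floor maps, cleaning data) and their relationships could be captured as follows. The entity fields are assumptions and do not reflect the actual ACME:Vac schema.

```python
from dataclasses import dataclass

# Illustrative sketch of the four data domains; fields are assumptions.
@dataclass
class Customer:
    customer_id: str
    name: str

@dataclass
class Robovac:
    serial_number: str
    owner: Customer  # relationship between the Robovac and Customer domains

@dataclass
class FloorMap:
    map_id: str
    robovac: Robovac

@dataclass
class CleaningRun:
    run_id: str
    floor_map: FloorMap
    area_cleaned_m2: float

alice = Customer("c-1", "Alice")
vac = Robovac("SN-42", owner=alice)
run = CleaningRun("r-1", FloorMap("m-1", vac), area_cleaned_m2=35.0)
```

Keeping the model at this level of granularity (a handful of entities per domain) supports exactly the kind of cross-stakeholder planning discussion described above, without committing to a full data schema.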
<br />
= Component and API Landscape = <br />
To manage the complexity of an AIoT-enabled product or solution, the well-established approach of functional decomposition and componentization should be applied. The results should be captured in a high-level component landscape, which helps visualize and communicate the key functional components.<br />
<br />
== Functional decomposition and componentization == <br />
Functional decomposition is a method of analysis that breaks a complex body of work down into smaller, more easily manageable units. <br />
This “Divide & Conquer” strategy is essential for managing complexity. Especially if the body of work cannot be implemented by a single team, splitting it so that it can be assigned to different teams in a meaningful way becomes very important. <br />
Since a system's architectural structure tends to be a mirror image of the organizational structure, it is important that team building follows the functional decomposition process. The idea of "feature teams" to support this is discussed in the [[AIoT_Product_Viewpoint|AIoT Product Viewpoint]].<br />
<br />
Another key point of functional decomposition is functional scale: without effective functional decomposition and management of functional dependencies, it will be difficult to build a functionally rich application. It does not stop at building the initial release. Most modern software-based systems are built on the agile philosophy of continuous evolution. While an AIoT-enabled product or solution does not only consist of software -- it also includes AI and hardware -- enabling evolvability is usually a key requirement. Functional decomposition and componentization will enable the encapsulation of changes and thus support efficient system evolution.<br />
<br />
<br />
The logical construct for encapsulating key data and functionality in an AIoT system should be the component. From a functional viewpoint, components are logical constructs, independent of a particular technology. Later in the implementation viewpoint, they can be mapped to specific programming languages, AI platforms, or even specific functionality implemented in hardware. Additionally, component frameworks such as microservices can be added where applicable.<br />
<br />
<br />
[[File:2.2-Componentization.png|800px|frameless|center|link=|Functional decomposition and componentization]]<br />
<br />
<br />
The functional decomposition process should go hand in hand with the development of the agile story map (see [[AIoT_Product_Viewpoint|AIoT Product Viewpoint]]) since the story map will contain the official definition of the body of work, broken down into epics and features.<br />
<br />
<br />
The first iteration of the component landscape can actually be very close to the story map, since it should initially focus only on the functional decomposition. In a second iteration, the logical AIoT components must be mapped to a distributed component architecture. This perspective sits somewhere between the functional and implementation viewpoints. The mapping to the distributed component architecture must take a number of different aspects into consideration, including business/functional requirements, cost constraints, technical constraints and architectural constraints.<br />
<br />
<br />
A key functional requirement is simply availability. In a distributed system, remote access to a component always carries a higher risk of the component not being available, e.g., due to connectivity issues. Other business-driven aspects are organizational constraints (especially if different parts of the distributed system are developed by different organizational units), physical control and legal aspects (deploying critical data or functionality in the field or in certain countries can be difficult). This is also closely related to data ownership and data sharing requirements.<br />
<br />
<br />
Achim Nonnenmacher, expert for Software-defined Vehicle at Bosch, comments: ''Because of the availability issues related to distributed applications, many leading services are using a capability-based architecture. For example, many smartphones have two versions of their key services - one which works with the data and capabilities available on the phone, and one which works only with cloud connectivity. For example, you can say "Hey, bring me home", and the offline phone will still be able to provide rudimentary voice recognition and navigation services using the AI and map data on the phone. Only if the phone is online will it be able to make full use of better cloud-based AI and data. We still have to learn in many ways how to apply this to the vehicle-based applications of the future, but this will be important.''<br />
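The capability-based pattern described in the quote can be sketched as a simple dispatch function: a rich cloud-based capability is preferred when the device is online, with a rudimentary on-device fallback otherwise. All function names and behaviors below are illustrative assumptions, not part of any actual product.<br />

```python
# Minimal sketch of a capability-based service: the same request is served
# by a richer cloud-based capability when the device is online, and by a
# rudimentary on-device fallback otherwise. All names are illustrative.

def on_device_navigate(destination: str) -> str:
    # Fallback: coarse navigation using only the AI and map data on the device.
    return f"basic route to {destination} (on-device)"

def cloud_navigate(destination: str) -> str:
    # Preferred: cloud-based AI with fresh data.
    return f"optimized route to {destination} (cloud)"

def navigate(destination: str, online: bool) -> str:
    """Dispatch the request to the best capability currently available."""
    if online:
        try:
            return cloud_navigate(destination)
        except ConnectionError:
            pass  # connection dropped mid-request: degrade gracefully
    return on_device_navigate(destination)
```

The design choice worth noting is that the fallback path is not an error handler bolted on afterwards; both capability versions are first-class implementations of the same service interface.<br />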
<br />
Another key distribution aspect is cost constraints. For example, many manufacturers of physical products have to ensure that the unit costs are kept to a minimum. This can be an argument for deploying computationally intensive functions not on the physical product but rather on a cloud back end, which can better distribute loads coming from a large number of connected products. Similar arguments apply to communication costs and data storage costs.<br />
<br />
Furthermore, the distributed architecture will be greatly influenced by technical constraints, such as latency (the minimum amount of time for a single bit of data to travel across the network), bandwidth (data transfer capacity of the network), performance (e.g., edge vs. cloud compute performance), time sensitivity (related to latency), and security. <br />
<br />
<br />
Finally, a number of general aspects should be considered from the architectural perspective. For example, the technical target platform (e.g., software vs. AI) will play a role. Another factor is the different speed of development: a single component should not combine functionalities that will evolve at significantly different speeds. Similarly, one should avoid combining functionality that only requires standard Quality Management (QM) with functionality that must be treated as relevant to functional safety in a single component. In this case, the QM functionality must also be treated as functional-safety relevant, making it costlier to test, maintain and update.<br />
<br />
While some of these constraints and considerations are of a more technical nature, they need to be combined with more business or functional considerations when designing the distributed component architecture.<br />
<br />
== Component Landscape == <br />
The result of the functional decomposition process should be a component landscape, which focuses on functional aspects but already includes a high-level distribution perspective. <br />
<br />
<br />
[[File:2.1-Example-Component-Architecture.png|800px|frameless|center|link=|Example: Initial Component Architecture]]<br />
<br />
<br />
The example shown here is the high-level component landscape for the ACME:Vac product. This component landscape has three swimlanes: one for the robovac (i.e., the edge platform), one for the main cloud service, and one for the smartphone running the ACME:Vac mobile app. The components of the robot include basic robot control and sensor access, as well as the route/trajectory calculation. These components would most likely be based on an embedded platform, but this level of detail is omitted from a functional viewpoint. In addition, the robot will have a configuration component, as well as a component offering remote services that can be accessed from the robot control component in the cloud.<br />
The cloud contains components for robot configuration, user configuration management, map data, and the management of system status and usage history. Finally, the mobile app has components for the main app screen, map management, and remote robot configuration.<br />
<br />
== API Management ==<br />
<br />
In his famous "API Mandate", Jeff Bezos -- CEO of Amazon at the time -- declared that ''"All teams will henceforth expose their data and functionality through service interfaces."'' If the CEO of a major company gets involved at this level, you can tell how important the topic is. <br />
<br />
APIs (Application Programming Interfaces) are how components make data and functionality available to other components. Today, a common approach is so-called RESTful APIs, which utilize the popular HTTP internet protocol. However, there are many different types of APIs. In AIoT, another important category is the APIs between the software and the hardware layers. These APIs are often provided as low-level C APIs (of course, any C API can in turn be wrapped in a REST API and exposed to remote clients). Most APIs support a basic request/response pattern to enable interactions between components. Some applications require a more message-oriented, de-coupled way of interaction, which requires a special kind of API. <br />
<br />
Regardless of the technical nature of the API, it is good practice to document APIs via an API contract. This contract defines the input and output arguments, as well as the expected behavior. "Interface first" is an important design approach which mandates that before implementing a new application component, one should first define the APIs, including the API contract. This approach ensures de-coupling between component users and component implementers, which in turn reduces dependencies and helps manage complexity. Because APIs are such an important part of modern, distributed system development, they should be managed and documented as key artifacts, e.g., using modern API management tools that support API documentation standards like [https://www.openapis.org/ OpenAPI].<br />
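The "interface first" approach can be illustrated with a small sketch: the API contract of a hypothetical remote-services component is defined as an abstract interface, which consumers can review and mock before the real implementation exists. All names and signatures here are illustrative assumptions, not the actual ACME:Vac design.<br />

```python
from abc import ABC, abstractmethod

# "Interface first" sketch: the API contract for a hypothetical robovac
# remote-services component is defined -- and can be reviewed by its
# consumers -- before any implementation exists.

class RobotRemoteServices(ABC):
    """API contract for the remote services exposed by the robovac."""

    @abstractmethod
    def start_cleaning_run(self, map_id: str) -> str:
        """Start a run on the given map; returns a run ID.
        Raises ValueError if the map is unknown."""

    @abstractmethod
    def get_battery_level(self) -> int:
        """Return the battery charge as a percentage (0-100)."""

# A mock implementation lets the app team develop against the contract
# while the embedded team is still implementing the real component.
class MockRemoteServices(RobotRemoteServices):
    def start_cleaning_run(self, map_id: str) -> str:
        if map_id != "M-1":
            raise ValueError(f"unknown map: {map_id}")
        return "RUN-001"

    def get_battery_level(self) -> int:
        return 87
```

The contract (including the documented error behavior) is the artifact worth managing centrally; the mock is disposable scaffolding.<br />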
<br />
From the system design point of view, the component landscape introduced earlier should be augmented with information about the key APIs supported by the different components. For example, the component landscape documentation can contain links to the detailed API documentation in different repositories. This way, the component landscape provides a high-level description not only of how data and functionality are clustered in the system, but also of how they can be accessed.<br />
<br />
= Digital Twin Landscape = <br />
<br />
<br />
As introduced in the [[Digital_Twin_101|Digital Twin 101]] section, the Digital Twin concept can be useful, especially when dealing with complex physical assets. In this case, a Digital Twin Landscape should be included with the Data/Functional Viewpoint. The Digital Twin Landscape should provide an overview of the key logical Digital Twin models and their relationships. Relationships between Digital Twin model elements can be manifold. They should be used to help define the so-called "knowledge graph" across the different, often heterogeneous data sources used to construct the Digital Twin model.<br />
<br />
<br />
In some cases, the implementation of the Digital Twin will rely on specific standards and Digital Twin platforms. The Digital Twin Landscape should keep this in mind and only use modeling techniques that will be supported later by the implementation environment. For example, the Digital Twins Definition Language ([https://github.com/Azure/opendigitaltwins-dtdl DTDL]) is an open standard supported by Microsoft, specifically designed to support modeling of Digital Twins. Some of the rich features of DTDL include Digital Twin interfaces and components, different kinds of relationships, as well as persistent properties and transient telemetry events.<br />
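To give a flavor of such a model, the following sketch expresses a DTDL-style interface for the robovac as a Python dictionary. It follows the spirit of DTDL v2 (interfaces with properties, telemetry, and relationships), but it is a simplified illustration with made-up IDs and names, not a validated DTDL document.<br />

```python
# Simplified, DTDL-flavored model of the robovac as a Python dict.
# Structure follows the spirit of the DTDL v2 standard (interfaces,
# properties, telemetry, relationships); all IDs and names here are
# illustrative assumptions.

robovac_twin_model = {
    "@context": "dtmi:dtdl:context;2",
    "@id": "dtmi:acme:robovac;1",
    "@type": "Interface",
    "displayName": "RoboVac",
    "contents": [
        # Persistent property: changes rarely, is synchronized to the cloud.
        {"@type": "Property", "name": "firmwareVersion", "schema": "string"},
        # Transient telemetry: an event stream, not all of which needs
        # to be visible in the cloud.
        {"@type": "Telemetry", "name": "batteryLevel", "schema": "integer"},
        # Relationship to an environment element reconstructed from sensor data.
        {"@type": "Relationship", "name": "cleans", "target": "dtmi:acme:room;1"},
    ],
}

def content_names(model: dict, content_type: str) -> list:
    """List the names of all contents of a given DTDL content type."""
    return [c["name"] for c in model["contents"] if c["@type"] == content_type]
```

The property/telemetry distinction mirrors the persistent-vs-transient modeling feature mentioned above and becomes relevant again when deciding which events actually travel to the cloud.<br />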
<br />
[[File:DT Landscape.png|1000px|frameless|center|link=|Digital Twin Landscape]]<br />
<br />
In the example shown here, these modeling features are used to create a visual Digital Twin Landscape for the ACME:Vac example. The example differentiates between two types of Digital Twin model elements: system (green) and environment (blue). The system elements relate to the physical components of the robovac system. Environment elements relate to the environment in which a robovac system is actually deployed.<br />
<br />
This differentiation is important for a number of reasons. First, the Digital Twin system elements are known in advance, while the Digital Twin environment elements actually need to be created from sensor data (see the discussion on [[Digital_Twin_101|Digital Twin reconstruction]]).<br />
<br />
Second, while the Digital Twin model is supposed to provide a high level of abstraction, it cannot be seen completely in isolation from all the different constraints discussed in the previous section. For example, not all telemetry events used on the robot will be directly visible in the cloud; otherwise, too much traffic between the robots and the cloud would be created.<br />
<br />
This is why the Digital Twin landscape in this example assigns different types of model elements to different components. In this way, the distributed nature of the component landscape is taken into consideration, allowing for the creation of realistic mapping to a technical implementation later on.<br />
<br />
= <span id="AIFeatureMapping"></span>AI Feature Mapping = <br />
The final element in the Data/Functional Viewpoint should be an assessment of the key features with respect to suitability for implementation with AI. As stated in the [[Digital_OEM#Key_design_decisions|introduction]], a key decision for product managers in the context of AIoT will be whether a new feature should be implemented using AI, Software, or Hardware. To ensure that the potential for the use of AI in the system is neither neglected nor overstated, a structured process should be applied to evaluate each key feature in this respect.<br />
<br />
<br />
[[File:AI Feature Mapping.png|700px|frameless|center|link=|AI Feature Mapping]]<br />
<br />
<br />
In the example shown here, the features from the agile story map (see [[AIoT_Product_Viewpoint|AIoT Product Viewpoint]]) are used as the starting point. For each feature, the expected outcome is examined. Furthermore, from an AI point of view, it needs to be understood which live data and which training data can be made available to potentially AI-enabled components. Depending on this information, an initial recommendation regarding the suitability of a given feature for implementation with AI can be derived. This information can be mapped back to the overall component landscape, as indicated by the figure below. Note also that a new component for cloud-based model training is added to this version of the component landscape. Note that this level of detail does not describe, for example, details of the algorithms used, e.g., Simultaneous Localization and Mapping (SLAM).<br />
<br />
[[File:Component Landscape with AI.png|900px|frameless|center|link=|Identifying AI-enabled components]]<br />
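The assessment described above can be sketched as a simple screening heuristic: each feature is checked for the availability of live data and training data before an AI-based implementation is recommended. The decision rules and feature names below are deliberately simplistic placeholders for the team's actual evaluation, not a prescribed method.<br />

```python
# Hypothetical sketch of the AI feature-mapping assessment: each feature
# is screened for the availability of live data and training data before
# an AI-based implementation is recommended. The rules are illustrative
# placeholders only.

def assess_feature(name: str, has_live_data: bool, has_training_data: bool) -> str:
    if has_live_data and has_training_data:
        return f"{name}: candidate for AI-based implementation"
    if has_live_data:
        return f"{name}: revisit once training data can be acquired"
    return f"{name}: implement in software/hardware"

# Example screening run over made-up features:
features = [
    ("Route planning", True, True),
    ("Carpet detection", True, False),
    ("Status LEDs", False, False),
]
for f in features:
    print(assess_feature(*f))
```

Even a crude screen like this helps ensure, as stated above, that the potential of AI is neither neglected nor overstated feature by feature.<br />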
<br />
= Authors and Contributors = <br />
{|{{Borderstyle-author}}<br />
|{{Designstyle-author|Image=[[File:Dirk Slama.jpeg|left|100px]]|author={{Dirk Slama|Title=AUTHOR}}}}<br />
<br><br />
{{Designstyle-author|Image=[[File:Michael Hohmann.jpg|left|100px]]|author={{Michael Hohmann|Title=CONTRIBUTOR}}}}<br />
|}</div>Sangamithra Panneer Selvamhttps://www.digitalplaybook.org/index.php?title=AIoT_Usage_Viewpoint&diff=7053AIoT Usage Viewpoint2022-03-29T14:54:33Z<p>Sangamithra Panneer Selvam: </p>
<hr />
<div>__NOTOC__<br />
<imagemap><br />
Image:2.2-v-UsageViewpoint.png|800px|frameless|center|AIoT Usage Viewpoint<br />
<br />
rect 698 59 2421 430 [[AIoT_Business_Viewpoint|Business Viewpoint]]<br />
rect 702 485 2425 859 [[AIoT_Usage_Viewpoint|Usage Viewpoint]]<br />
rect 702 911 2433 1281 [[AIoT_Data_and_Functional_Viewpoint|Data/Functional Viewpoint]]<br />
rect 706 1337 2440 1715 [[AIoT_Implementation_Viewpoint|Implementation Viewpoint]]<br />
rect 2468 55 3122 1711 [[AIoT_Product_Viewpoint|Product Viewpoint]]<br />
rect 1 1041 146 1191 [[Product_Architecture|Product Architecture]]<br />
<br />
desc none<br />
</imagemap><br />
<s data-category="AIoTFramework"></s><br />
The goal of the UX (User Experience) viewpoint is to provide a holistic view of how the product or solution will be utilized by the user and other stakeholders. Good UX practice usually includes extensive product validation, including usability testing, user feedback, pilot user tests, and so on. A good starting point is usually customer surveys or interviews. In the case of an AIoT-enabled product or solution, it can also make sense to include site surveys to better understand the environment of the physical products or assets.<br />
<br />
To ensure realistic and consistent use cases across the design, a set of personas should be defined, representing the typical users of the product or solution. Revisiting the User Journey from the initial business design helps clarify many details. Finally, HMI (Human-Machine Interaction) design, early prototypes and wire frames are also essential elements of the UX viewpoint.<br />
<br />
__TOC__<br />
<br />
= <span id="SiteSurvey"></span>Site Surveys and Stakeholder Interviews =<br />
To capture and validate requirements, it is common practice in IT projects to perform stakeholder interviews. This should also be done in the case of an AIoT product or project.<br />
<br />
However, an AIoT project is different in that it also involves physical assets and potentially also very specific sites, e.g., a factory. Requirements can heavily depend on the type of environment in which assets are deployed. Additionally, usage patterns might vastly differ, depending on the environment. Consequently, it is highly recommended for the team responsible for the product design to spend time on-site and investigate different usage scenarios in different environments.<br />
<br />
While many AIoT solutions might be deployed at a dedicated site, this might not be true for AIoT-enabled products. Take, for example, a smart kitchen appliance, which will be sold to private households. In this particular case, it can make sense to actually build a real kitchen as a test lab to test the usage of the product in a realistic environment. Similarly, in the case of our vacuum robot, different scenarios for testing the robot must be made available, including different room types and different floor surfaces (wood panels, carpets, etc.).<br />
<br />
= <span id="Personas"></span>Personas=<br />
Personas are archetypical users of the product or solution. Often, personas represent fictitious people who are based on your knowledge of real users. The UX Viewpoint should define a comprehensive set of personas that help model the product features in a way that takes the perspectives of different product users into consideration. By personifying personas, the product team will ideally even develop an emotional bond with key personas, since these will accompany the team through an intense development process. A persona does not necessarily need a sophisticated fictitious background story, but it should at least have a real-world first name and an individual icon, as shown in the example below.<br />
<br />
[[File:2.2.-bv-Personas.png|700px|frameless|center|link=|AIoT Personas]]<br />
<br />
= <span id="Journey"></span>User Journeys =<br />
The initial User Journeys from the Business Model design phase can be used as a starting point. Often, it can be a good idea in this phase of the product design to create individual journey maps for different scenarios, adding more detail to the original, high-level journey.<br />
<br />
The example user journey for ACME:Vac shown here is not that different from most user journey designs found for normal software projects. The main difference is that the user journey here is created along the life cycle of the product from the customer's point of view. This includes important phases such as Asset Activation, Asset Usage and Service Incidents.<br />
<br />
[[File:2.2-Vacuum-Journey.png|1000px|frameless|center|link=|Customer Journey for Vacuum Robot]]<br />
<br />
From the point of view of a Digital Equipment Operator, the user journey most likely focuses less on an end customer but more on the different enterprise stakeholders and how they are experiencing the introduction and operations of the solution. Important phases in the journey here would be the solution retrofit, standard operations, and what actually happens in case of an incident monitored or triggered by the solution. For example, for a predictive maintenance solution, it is important not only to understand the deep algorithmic side of it but also how it integrates with an existing organization and its established processes.<br />
<br />
= <span id="HMI"></span>UX / HMI Strategy =<br />
The UX / HMI strategy will have a huge impact on usability. Another important factor is the question of how much the supplier will be able to learn about how the user is interacting with the product. This is important, for example, for product improvements but also potentially for upselling and digital add-on services.<br />
<br />
[[File:2.2-Vacuum UX HMI.png|800px|frameless|center|link=|UX/HMI for Vacuum Robot]]<br />
<br />
The HMI strategy for ACME:Vac seems relatively straightforward at first sight: HMI features on the robovac are reduced to a minimum, including only some status LEDs and a reset button. Instead, almost all of the user interaction is done via the smartphone app. In addition, some basic commands such as starting an ad hoc cleaning run are supported via smart home integration.<br />
<br />
It is important to note that the decision on the HMI strategy will have a huge impact not only on usability and customer experience but also on many other aspects, such as product evolvability (a physical HMI cannot be easily updated, while an app-based HMI can), customer intimacy (it is easier to learn how a customer is using the product via digital platforms), and the entire design and development process, including manufacturing (none needed for an app-based HMI). <br />
<br />
However, the risk of completely removing the HMI from the physical product should also not be underestimated. For example, in the case of bad connectivity or unavailability of a required cloud backend, the entire physical product might become entirely unusable.<br />
<br />
= <span id="Mockup"></span>Mockups / Wireframes / Prototypes =<br />
To ensure a good user experience, it is vital to try out and validate different design proposals as early as possible. For purely software-based HMI, UI mockups or wireframes are a powerful way of communicating the interactive parts of the product design, e.g., web interfaces and apps for smartphones or tablets. They should initially be kept on the conceptual level. Tools such as Balsamiq offer a comic-style way of creating mockups, ensuring that they are not mistaken for detailed UI implementation designs. The figure shown here provides a mockup for the ACME:Vac floor map management feature on the ACME:Vac app based on this style.<br />
<br />
[[File:2.2-Vacuum-Wireframe.png|400px|frameless|center|link=|Example Wireframe for Vacuum Robot Smart Phone App]]<br />
<br />
It should be noted that validation of the UX for a physical HMI can require the creation of an actual physical prototype. Again, this should be done as early as possible in the design and development phase, because any UX issues identified in the early stages will help save money and effort further downstream. For example, while the HMI on board the ACME:Vac robot is kept to a minimum, there are still interesting aspects to be tested, including docking with the charging station and replacement of the garbage bag.<br />
<br />
= Authors and Contributors =<br />
{|{{Borderstyle-author}}<br />
|{{Designstyle-author|Image=[[File:Dirk Slama.jpeg|left|100px]]|author={{Dirk Slama|Title=AUTHOR}}}}<br />
<br><br />
{{Designstyle-author|Image=[[File:Michael Hohmann.jpg|left|100px]]|author={{Michael Hohmann|Title=CONTRIBUTOR}}}}<br />
|}</div>Sangamithra Panneer Selvamhttps://www.digitalplaybook.org/index.php?title=AIoT_Business_Viewpoint&diff=7052AIoT Business Viewpoint2022-03-29T14:54:18Z<p>Sangamithra Panneer Selvam: </p>
<hr />
<div>__NOTOC__<br />
<br />
<imagemap><br />
Image:2.2-v-BusinessViewpoint.png|800px|frameless|center|AIoT Business Viewpoint<br />
<br />
rect 698 59 2421 430 [[AIoT_Business_Viewpoint|Business Viewpoint]]<br />
rect 702 485 2425 859 [[AIoT_Usage_Viewpoint|Usage Viewpoint]]<br />
rect 702 911 2433 1281 [[AIoT_Data_and_Functional_Viewpoint|Data/Functional Viewpoint]]<br />
rect 706 1337 2440 1715 [[AIoT_Implementation_Viewpoint|Implementation Viewpoint]]<br />
rect 2468 55 3122 1711 [[AIoT_Product_Viewpoint|Product Viewpoint]]<br />
rect 1 1041 146 1191 [[Product_Architecture|Product Architecture]]<br />
<br />
desc none<br />
</imagemap><br />
<s data-category="AIoTFramework"></s><br />
<br />
The Business Viewpoint of the AIoT Product/Solution Design builds on the different artifacts created for the [[Business_Model_Design|Business Model]]. As part of the design process, the business model can be refined, e.g., through additional market research. In particular, the detailed design should include KPIs, quantitative planning, and a milestone-based timeline.<br />
__TOC__<br />
<br />
= Business Model =<br />
The business model is usually the starting point of the product/solution design. The business model should describe the rationale of how the organization creates, delivers, and captures value by utilizing AIoT. The [[Business_Model_Design|business model design]] section provides a good description of how to identify, document and validate AIoT-enabled business models. A number of different templates are provided, of which the business model canvas is the most important. The business model canvas should include a summary of the AIoT-enabled value proposition, the key customer segments to be addressed, how customer relationships are built, and the channels through which customers are serviced. Furthermore, it should provide a summary of the key activities, resources and partners required to deliver on the value proposition. Finally, a high-level summary of the business case should be provided, including cost and revenue structure.<br />
<br />
[[File:2.2.-bv-VacuumCanvas.png|900px|frameless|center|link=|ACME:Vac Business Model Canvas]]<br />
<br />
The fictitious ACME:Vac business model assumes that AI and IoT are used to enable a high-end vacuum cleaning robot, which will be offered as a premium product (not an easy decision - some argue that the mid-range position in this market is more attractive). AI will be used not only for robot control and automation but also for product performance analysis, as well as analysis of customer behaviour. This intelligence will be used to optimize the customer experience, create customer loyalty, and identify up-selling opportunities.<br />
<br />
= Key Performance Indicators =<br />
Many organizations use Key Performance Indicators (KPIs) to measure how effectively a company is achieving its key business objectives. KPIs are often used on multiple levels, from high-level business objectives to lower-level process or product-related KPIs. In our context, the KPIs would either be related to an AIoT-enabled product or solution.<br />
<br />
A Digital OEM that takes a smart, connected product to market usually has KPIs that cover business performance, user experience and customer satisfaction, product quality, and the effectiveness and efficiency of the product development process.<br />
<br />
A Digital Equipment Operator who is launching a smart, connected solution to manage a particular process or a fleet of assets would usually have solution KPIs that cover the impact of the AIoT-enabled solution on the business process that it is supporting. Alternatively, business-related KPIs could measure the performance of the fleet of assets and the impact of the solution on that performance. Another typical operator KPI could be coverage of the solution. For example, in a large, heterogeneous fleet of assets, it could measure the number of assets that have been retrofitted successfully. UX and customer satisfaction-related KPIs would only come into play if the solution actually has a direct customer impact. Solution quality and the solution development process would certainly be another group of important KPIs.<br />
<br />
[[File:2.2.-bv-KPIs.png|800px|frameless|center|link=|Vacuum Robot - Product KPIs]]<br />
<br />
The figure with KPIs shown here provides a set of example KPIs for the ACME:Vac product. The business performance-related KPIs cover the number of robovacs sold, the direct sales revenue, recurring revenue from digital add-on features, and finally the gross margin.<br />
<br />
The UX/customer satisfaction KPIs would include some general KPIs, such as the Net Promoter Score (the result of a survey asking respondents to rate the likelihood that they would recommend the ACME:Vac product), the System Usability Scale (an assessment of perceived usability), and Product Usage (e.g., users per specific feature). The Task Success Rate KPIs may include how successful and satisfied customers are with the installation and setup of the robovac. Another important KPI in this group would measure how successfully customers are actually using the robovac for its main purpose, namely, cleaning. The Time on Task KPIs could measure how long the robovac takes for different tasks in different modes.<br />
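For illustration, the Net Promoter Score mentioned above is conventionally computed as the percentage of promoters (scores of 9-10 on the 0-10 recommendation question) minus the percentage of detractors (scores of 0-6):<br />

```python
# Standard Net Promoter Score calculation: percentage of promoters
# (scores 9-10) minus percentage of detractors (scores 0-6) on the
# 0-10 "would you recommend?" survey scale.

def net_promoter_score(scores: list) -> float:
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)
```

The resulting score ranges from -100 (all detractors) to +100 (all promoters); passives (scores 7-8) count toward the total but neither group.<br />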
<br />
Product Quality KPIs need to cover a wide range of process- and product-related topics. Test coverage is a particularly important KPI for AIoT-enabled products, since testing physical products in combination with digital features can be quite complex and expensive, yet is a critical success factor. Incident metrics such as MTBF (mean time between failures) and MTTR (mean time to recovery, repair, respond, or resolve) need to look at the local robovac installations, as well as the shared cloud back end. Finally, the number of support calls per day can be another important indicator of product quality. Functional product quality KPIs for ACME:Vac would include cleaning speed, cleaning efficiency, and recharging speed.<br />
<br />
Finally, the Product Development KPIs must cover all of the different development and production pipelines, including hardware development, product manufacturing, software development, and AI development.<br />
<br />
= Quantitative Planning =<br />
Quantitative planning is an important input for the rest of the design exercise. For the Digital OEM, this would usually include information related to the number of products sold, as well as product usage planning data. For example, it can be important to understand how many users are likely to use a certain key feature, and at what frequency, in order to design the feature and its implementation and deployment accordingly. <br />
<br />
The quantitative model for the ACME:Vac product could include, for example, some overall data related to the number of units sold. Another interesting piece of information is the expected number of support calls per year, because this gives an indication of how this process must be set up. Other information of relevance for the design team includes the expected average number of rooms serviced per vacuum robot, the number of active users, the number of vacuum cleaning runs per day, and the number of vacuum cleaner bags used by the average customer per year.<br />
<br />
[[File:2.2-bv-quantitative plan.png|600px|frameless|center|link=|Quantitative Plan]]<br />
<br />
For a Digital Equipment Operator, the planning data must at its core include information about the number of assets to be supported. However, it can also be important to understand certain usage patterns and their quantification. For example, a predictive maintenance solution used to monitor thousands of escalators and elevators for a railroad operator should be based on a quantitative planning model that includes some basic assumptions, not only about the number of assets to be monitored, but also about the current average failure rates. This information will be important for properly designing the predictive maintenance solution, e.g., from a scalability point of view.<br />
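To illustrate how such planning data feeds into the design, the following toy model derives daily load figures from fleet size and usage assumptions. Every number is an illustrative assumption, not ACME:Vac or railroad-operator planning data; the point is that fleet size and usage patterns translate directly into load figures the design must accommodate.<br />

```python
# Toy quantitative planning model for a connected-product fleet.
# All numbers are illustrative assumptions.

units_sold = 100_000
active_share = 0.8               # share of sold units actively used
runs_per_unit_per_day = 1.2      # average cleaning runs per active unit
support_calls_per_unit_year = 0.5

active_units = units_sold * active_share
daily_cleaning_runs = active_units * runs_per_unit_per_day
support_calls_per_day = units_sold * support_calls_per_unit_year / 365

print(f"{daily_cleaning_runs:,.0f} cleaning runs/day for the back end to process")
print(f"{support_calls_per_day:,.0f} support calls/day to staff for")
```

Even a back-of-the-envelope model like this makes scalability requirements (cloud back end, support organization) concrete enough to discuss in design reviews.<br />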
<br />
= Milestones / Timeline =<br />
Another key element of the business viewpoint is the milestone-based timeline. For the Digital OEM, this will be a high-level plan for designing, implementing and manufacturing, launching, supporting, and continuously enhancing the product. <br />
<br />
The timeline for the ACME:Vac product differentiates between the physical product and the AIoT part (including embedded hardware and software, AI, and cloud). If custom embedded hardware is to be designed and manufactured, this could also be subsumed under the physical product workstream, depending on the organizational setup. The physical product workstream includes a product design and manufacturing engineering phase until the Start of Production (SOP). After the SOP, this workstream focuses on manufacturing. A new workstream for the next physical product generation starting after the SOP is omitted in this example. The AIoT workstream generally assumes that an AIoT DevOps model is applied consistently through all phases. <br />
<br />
Key milestones for both the physical product and the AIoT part include the initial product design and architecture (result of sprint 0), the setup of the test lab for testing the physical product, the first end-to-end prototype combining the physical product with the AIoT-enabled digital features, the final prototype/Minimum Viable Product, and finally the SOP.<br />
<br />
The following figure also highlights the V-Sprints, which in this example apply to both physical product development and the AIoT development. While physical product development is unlikely to deliver potentially shippable product increments at the end of each V-Sprint, it still assumes the same sprint cadence.<br />
<br />
Because sourcing is typically such a decisive factor, the timeline includes milestones for the key sourcing contracts that must be secured. Details regarding the procurement process are omitted on this level.<br />
<br />
[[File:2.2-bv-Milestones.png|1000px|frameless|center|link=|Example Milestone Plan]]<br />
<br />
For a Digital Equipment Operator, this plan would focus less on the development and manufacturing of the physical product. Instead, it would most likely include a dedicated workstream for managing the retrofit of the solution to the existing physical assets.<br />
<br />
= Authors and Contributors =<br />
{|{{Borderstyle-author}}<br />
|{{Designstyle-author|Image=[[File:Dirk Slama.jpeg|left|100px]]|author={{Dirk Slama|Title=AUTHOR}}}}<br />
<br><br />
{{Designstyle-author|Image=[[File:Michael Hohmann.jpg|left|100px]]|author={{Michael Hohmann|Title=CONTRIBUTOR}}}}<br />
|}</div>Sangamithra Panneer Selvamhttps://www.digitalplaybook.org/index.php?title=Verification_and_Validation&diff=7051Verification and Validation2022-03-29T14:53:30Z<p>Sangamithra Panneer Selvam: </p>
<hr />
<div><br />
<imagemap><br />
Image:2.6-VerificationValidation.png|frameless|1000px|Ignite AIoT - Artificial Intelligence<br />
<br />
rect 4 0 651 133 [[AIoT_Framework|More...]]<br />
rect 970 0 1298 133 [[AIoT_Data_Strategy|More...]]<br />
rect 651 0 970 133 [[Artificial_Intelligence|More...]]<br />
rect 1298 0 1767 133 [[Digital_Twin_Execution|More...]]<br />
rect 1767 0 2095 133 [[Internet_of_Things|More...]]<br />
rect 2095 0 2542 133 [[Hardware.exe|Hardware.exe]]<br />
<br />
rect 2764 128 3539 257 [[Product_Architecture|More...]]<br />
rect 2764 257 3539 390 [[Agile AIoT|More...]]<br />
rect 2764 385 3539 518 [[AIoT_DevOps_and_Infrastructure|More...]]<br />
rect 2764 518 3539 651 [[Trust_and_Security|More...]]<br />
rect 2764 651 3539 784 [[Reliability_and_Resilience|More...]]<br />
rect 2764 779 3539 917 [[Verification_and_Validation|More...]]<br />
<br />
desc none<br />
</imagemap><br />
<s data-category="AIoTFramework"></s><br />
<br />
Quality Management (QM) is responsible for overseeing all activities and tasks needed to maintain a desired level of quality. QM in Software Development traditionally has three main components: quality planning, quality assurance, and quality control. In many agile organizations, QM is becoming closely integrated with the DevOps organization. Quality Assurance (QA) is responsible for setting up the organization and its processes to ensure the desired level of quality. In an agile organization, this means that QA needs to be closely aligned with DevOps. Quality Control (QC) is responsible for the output, usually by implementing a test strategy along the various stages of the DevOps cycle. Quality Planning is responsible for setting up the quality and test plans. In a DevOps organization, this will be a continuous process.<br />
<br />
QM for AIoT-enabled systems must take into consideration all the specific challenges of AIoT development, including QM for combined hardware/software development, QM for highly distributed systems (including edge components in the field), as well as any homologation requirements of the specific industry. Verification & Validation (V&V) usually plays an important role as well. For safety-relevant systems (e.g., in transportation, aviation, energy grids), Independent Verification & Validation (IV&V) via an independent third party can be required.<br />
__TOC__<br />
<br />
= Verification & Validation =<br />
Verification and validation (V&V) are designed to ensure that a system meets the requirements and fulfills its intended purpose. Some widely used Quality Management Systems, such as ISO 9000, build on verification and validation as key quality enablers. Validation is sometimes defined as the answer to the question ''"Are you building the right thing?"'' since it relates to the needs of the user. Verification can be expressed as ''"Are you building the product right?"'' since it checks that the requirements are correctly implemented. Common verification methods include unit tests, integration tests and test automation. Validation methods include user acceptance tests and usability tests. Somewhere in between verification and validation we have regression tests, system tests and beta test programs. Verification usually links back to requirements. In an agile setup, this can be supported by linking verification tests to the Definition of Done and the Acceptance Criteria of the user stories.<br />
<br />
[[File:2.6-QC.png|800px|frameless|center|link=|Quality Control]]<br />
<br />
= Quality Assurance and AIoT DevOps =<br />
So how does Quality Assurance fit with our holistic AIoT DevOps approach? First, we need to understand the quality-related challenges, both functional and non-functional. Functional challenges can be derived from the agile story map and sprint backlogs. Non-functional challenges in an AIoT system will be related to AI, cloud and enterprise systems, networks, and IoT/edge devices. In addition, previously executed tests, as well as input from ongoing system operations, must be taken into consideration. All of this must serve as input to the Quality Planning. During this planning phase, concrete actions for QA-related activities in development, integration, testing and operations will be defined.<br />
<br />
QA tasks during development must be supported both by the development team, and by any dedicated QA engineers. The developers usually perform tasks such as manual testing, code reviews, and the development of automated unit tests. The QA engineers will work on the test suite engineering and automation setup.<br />
<br />
During the CI phase (Continuous Integration), basic integration tests, automated unit tests (before the check-in of the new code), and automatic code quality checks can be performed.<br />
<br />
During the CT phase (Continuous Testing), many automated tests can be performed, including API testing, integration testing, system testing, automated UI tests, and automated functional tests.<br />
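As a minimal illustration of the kind of automated test executed during Continuous Testing, the following sketch checks a device-status API handler. The handler and its response contract are hypothetical stand-ins for a real service under test, which would typically be called over HTTP:<br />

```python
# Minimal sketch of an automated API test as run during Continuous Testing.
# `get_device_status` is a hypothetical handler standing in for the
# deployed service under test.

def get_device_status(device_id: str) -> dict:
    # Stand-in for the service under test.
    if not device_id:
        return {"error": "missing device_id", "status_code": 400}
    return {"device_id": device_id, "state": "online", "status_code": 200}

def test_status_ok():
    resp = get_device_status("vac-0042")
    assert resp["status_code"] == 200
    assert resp["state"] in {"online", "offline", "error"}

def test_missing_id_rejected():
    resp = get_device_status("")
    assert resp["status_code"] == 400

test_status_ok()
test_missing_id_rejected()
print("all API checks passed")
```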
<br />
Finally, during Continuous Delivery (CD) and operations, User Acceptance Tests (UATs) and lab tests can be performed. For an AIoT system, digital features of the physical assets can be tested with test fleets in the field. Please note that some advanced users are now even building test suites that are embedded with the production systems. For example, Netflix famously let loose an "army" of so-called Chaos Monkeys onto their production systems, forcing their engineers to ensure that the systems withstand turbulent and unexpected conditions in the real world. This practice is now referred to as [https://en.wikipedia.org/wiki/Chaos_engineering "Chaos Engineering"].<br />
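A chaos-style experiment can be sketched as follows; the services, the injected failure rate, and the fallback behaviour are illustrative assumptions, not actual Netflix tooling:<br />

```python
# Minimal sketch of a chaos-style experiment: randomly fail a dependency
# and verify that the overall system still answers. All services here are
# illustrative stand-ins.
import random

random.seed(7)  # deterministic experiment run

def recommendation_service(chaos: bool) -> list:
    if chaos and random.random() < 0.5:
        raise RuntimeError("injected failure")  # the "Chaos Monkey"
    return ["title-a", "title-b"]

def homepage(chaos: bool = False) -> dict:
    try:
        return {"recs": recommendation_service(chaos)}
    except RuntimeError:
        return {"recs": []}  # graceful fallback keeps the page up

results = [homepage(chaos=True) for _ in range(100)]
assert all("recs" in r for r in results)  # system survived every injection
print("chaos experiment passed")
```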
<br />
[[File:2.6-QPlanning.png|1000px|frameless|center|link=|Quality Assurance and AIoT DevOps ]]<br />
<br />
= Quality Assurance for AIoT =<br />
What are some of the AIoT-specific challenges for QA? The following looks at QA & AI, as well as the integration perspective. AI poses its own set of challenges for QA. And the integration perspective is important since an AIoT system, by its very nature, will be highly distributed and consist of multiple components.<br />
<br />
== QA & AI ==<br />
QA for AI has some aspects that are very different from traditional QA for software. The use of training data, labels for supervised learning, and ML algorithms instead of code with its usual IF/THEN/ELSE-logic poses many challenges from the QA perspective. The fact that most ML algorithms are not [https://en.wikipedia.org/wiki/Explainable_artificial_intelligence "explainable"] adds to this.<br />
<br />
From the perspective of the final system, QA of the AI-related services usually focuses on functional testing, considering AI-based services a black box ("[https://en.wikipedia.org/wiki/Black-box_testing Black Box Testing]") which is tested in the context of the other services that make up the complete AIoT system. However, it will usually be very difficult to ensure a high level of quality if this is the only test approach. Consequently, QA for AI services in an AIoT system also requires a [https://en.wikipedia.org/wiki/White-box_testing "white box"] approach that specifically focuses on AI-based functionality.<br />
<br />
In his article "Data Readiness: Using the 'Right' Data" <ref name="castrounis" />, Alex Castrounis describes the following considerations for the data used for AI models:<br />
* Data quantity: does the dataset have sufficient quantity of data?<br />
* Data depth: is there enough varied data to fill out the feature space (i.e., the number of possible value combinations across all features in a dataset)?<br />
* Data balance: does the dataset contain target values in equal proportions?<br />
* Data representativeness: Does the data reflect the range and variety of feature values that a model will likely encounter in the real world?<br />
* Data completeness: does the dataset contain all data that have a significant relationship with and influence on the target variable?<br />
* Data cleanliness: has the data been cleaned of errors, e.g., inaccurate headers or labels, or values that are incomplete, corrupted, or incorrectly formatted?<br />
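Several of these considerations can be partially automated as checks in the data pipeline. The following sketch illustrates quantity, completeness and balance checks on a toy dataset; the fields and thresholds are assumptions for illustration only:<br />

```python
# Sketch of automated data-readiness checks along the lines of the list above.
# The toy dataset and the minimum-size threshold are illustrative assumptions.
from collections import Counter

dataset = [
    {"temp": 21.5, "vibration": 0.02, "label": "ok"},
    {"temp": 22.1, "vibration": 0.35, "label": "fault"},
    {"temp": None, "vibration": 0.04, "label": "ok"},  # incomplete record
    {"temp": 20.9, "vibration": 0.03, "label": "ok"},
]

# Data quantity: enough rows for the intended model?
assert len(dataset) >= 4, "dataset too small"

# Data completeness/cleanliness: flag records with missing values.
incomplete = [r for r in dataset if any(v is None for v in r.values())]

# Data balance: proportion of each target value.
counts = Counter(r["label"] for r in dataset)
balance = {label: n / len(dataset) for label, n in counts.items()}

print(f"incomplete records: {len(incomplete)}")
print(f"class balance: {balance}")
```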
<br />
In practice, it is important to ensure that cleaning efforts in the test dataset are not causing situations where the model cannot deal with errors or inconsistencies when processing unseen data during the inference process.<br />
<br />
In addition to the data, the model itself must also undergo a QA process. Some of the common techniques used for model validation and testing include the following:<br />
* '''Statistical validation''' examines the qualitative and quantitative foundation of the model, e.g., validating the model's mathematical assumptions<br />
* The '''holdout method''' is a basic type of cross-validation. The dataset is split into two sets, the training set and the test set. The model is trained on the training set. The test set is used as "unseen data" to evaluate the skill of the model. A common split is 80% training data and 20% test data.<br />
* '''Cross-validation''' is a more advanced method used to estimate the skill of an ML model. The dataset is randomly split into k "folds" (hence "k fold cross-validation"). One fold is used as the test set, the k-1 for training. The process is repeated until each fold has been used once as the test set. The results are then summarized with the mean of the model skill scores.<br />
* '''Model simulation''' embeds the final model into a simulation environment for testing in near-real-world conditions (as opposed to training the model using the simulation).<br />
* '''Field tests''' and '''production tests''' allow for testing of the model under real-world conditions. However, for models used in functional safety-related environments, this means that in the case of badly performing models, a safe and controlled degradation of the service must be ensured.<br />
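The holdout method and k-fold cross-validation can be sketched in a few lines of plain Python. The dataset and the "skill" score below are trivial placeholders; a real project would typically use a library such as scikit-learn:<br />

```python
# Sketch of the holdout split and k-fold cross-validation described above.
import random

random.seed(42)
data = list(range(100))  # placeholder dataset

# Holdout method: 80% training data, 20% test data.
random.shuffle(data)
split = int(0.8 * len(data))
train, test = data[:split], data[split:]

# k-fold cross-validation: each fold is used exactly once as the test set.
def k_fold(data, k):
    folds = [data[i::k] for i in range(k)]
    for i in range(k):
        test_fold = folds[i]
        train_folds = [x for j, f in enumerate(folds) if j != i for x in f]
        yield train_folds, test_fold

scores = []
for train_f, test_f in k_fold(data, k=5):
    assert len(train_f) + len(test_f) == len(data)
    scores.append(len(test_f) / len(data))  # dummy "model skill" score

mean_score = sum(scores) / len(scores)  # summarize with the mean
print(f"holdout sizes: train={len(train)}, test={len(test)}")
```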
<br />
[[File:2.6-QAforAI.png|800px|frameless|center|link=|QA for AI]]<br />
<br />
== Integrated QA for AIoT ==<br />
At the service level, AI services can usually be tested using the methods outlined in the previous section. After the initial tests are performed by the AI service team, it is important that AI services be integrated into the overall AIoT product for real-world integration tests. This means that AI services are integrated with the remaining IoT services to build the full AIoT system. This is shown in the following figure. The fully integrated system can then be used for User Acceptance Tests, load and scalability tests, and so on.<br />
<br />
[[File:2.6-QAforAIoT.png|800px|frameless|center|link=|QA for AIoT]]<br />
<br />
= Homologation =<br />
Usually, the homologation process requires the submission of an official report to the approval authority. In some cases, a third-party assessment must be included as well (see Independent Verification & Validation above). Depending on the product, industry and region, the approval authorities will differ. The result is usually an approval certificate that can either relate to a product ("type") or the organization that is responsible for creating and operating the product.<br />
<br />
Since AIoT combines many new and sometimes emerging technologies, the homologation process might not always be completely clear. For example, there are still many questions regarding the use of OTA and AI in the automotive approval processes of most countries.<br />
<br />
Nevertheless, it is important for product managers to have a clear picture of the requirements and processes in this area, and that the foundation for efficient homologation in the required areas is ensured early on. Doing so will avoid delays in approvals that can have an impact on the launch of the new product.<br />
<br />
[[File:Homologation.png|800px|frameless|center|AIoT and Homologation]]<br />
<br />
= References =<br />
<references><br />
<ref name="castrounis">''Data Readiness: Using the “Right” Data'', Alex Castrounis, 2010</ref><br />
</references><br />
<br />
= Authors and Contributors =<br />
<br />
{|{{Borderstyle-author}}<br />
|{{Designstyle-author|Image=[[File:Dirk Slama.jpeg|left|100px]]|author={{Dirk Slama|Title=AUTHOR}}}}<br />
{{Designstyle-author|Image=[[File:Eric Schmidt.jpg|left|100px]]|author={{Eric Schmidt|Title=CONTRIBUTOR}}}}<br />
{{Designstyle-author|Image=[[File:Martin Lubisch.jpg|left|100px]]|author={{Martin Lubisch|Title=CONTRIBUTOR}}}}<br />
|}</div>Sangamithra Panneer Selvamhttps://www.digitalplaybook.org/index.php?title=Reliability_and_Resilience&diff=7050Reliability and Resilience2022-03-29T14:53:12Z<p>Sangamithra Panneer Selvam: </p>
<hr />
<div><br />
<imagemap><br />
Image:2.5-ReliabilityResilience.png|frameless|1000px|Ignite AIoT - Reliability & Resilience<br />
<br />
rect 4 0 651 133 [[AIoT_Framework|More...]]<br />
rect 970 0 1298 133 [[AIoT_Data_Strategy|More...]]<br />
rect 651 0 970 133 [[Artificial_Intelligence|More...]]<br />
rect 1298 0 1767 133 [[Digital_Twin_Execution|More...]]<br />
rect 1767 0 2095 133 [[Internet_of_Things|More...]]<br />
rect 2095 0 2542 133 [[Hardware.exe|Hardware.exe]]<br />
<br />
rect 2764 128 3539 257 [[Product_Architecture|More...]]<br />
rect 2764 257 3539 390 [[Agile AIoT|More...]]<br />
rect 2764 385 3539 518 [[AIoT_DevOps_and_Infrastructure|More...]]<br />
rect 2764 518 3539 651 [[Trust_and_Security|More...]]<br />
rect 2764 651 3539 784 [[Reliability_and_Resilience|More...]]<br />
rect 2764 779 3539 917 [[Verification_and_Validation|More...]]<br />
<br />
desc none<br />
</imagemap><br />
<br />
<s data-category="AIoTFramework"></s><br />
<br />
Ensuring a high level of robustness for AIoT-based systems is usually a key requirement. Robustness is a result of two key concepts: reliability and resilience ("R&R"). Reliability concerns designing, running and maintaining systems to provide consistent and stable services. Resilience refers to a system's ability to resist adverse events and conditions. <br />
<br />
Ensuring reliability and resilience is a broad topic that ranges from basics such as proper error handling on the code level up to georeplication and disaster recovery. In addition, there are some overlaps with Security, as well as Verification and Validation. This section first discusses reliability and resilience in the context of AIoT DevOps and then looks at the AI and IoT specifics in more detail.<br />
__TOC__<br />
<br />
= R&R for AIoT DevOps =<br />
Traditional IT systems have been using reliability and resilience engineering methods for decades. The emergence of hyperscaling cloud infrastructures has taken this to new levels. Some of the best practices in this space are well documented, for example, Google's Site Reliability Engineering approach for production systems <ref name="murphy" />. These types of systems need to address challenges such as implementing recovery mechanisms for individual IT services or entire regions, dealing with data backups, replication, clustering, network load-balancing and failover, georedundancy, etc.<br />
<br />
The IoT adds to these challenges because parts of the system are implemented not in the data center but rather as hardware and software components that are deployed in the field. These field deployments can be based on sophisticated EDGE platforms or on some very rudimentary embedded controllers. Nevertheless, IT components deployed in the field often play by different rules -- if only because it is much harder (or even technically or economically impossible) to access them for any kind of unplanned physical repairs or upgrades.<br />
<br />
Finally, AI is adding further challenges in terms of model robustness and model performance. As will be discussed later, some of these challenges are related to the algorithmic complexity of the AI models, while many more arise from complexities of handling the AI development cycle in production environments, and finally adding the specifics of the IoT on top of it all.<br />
<br />
[[File:2.5-RR DevOps.png|900px|frameless|center|link=|R&R DevOps for AIoT]]<br />
<br />
Ensuring R&R for AIoT-enabled systems is usually not something that can be established in one step, so it seems natural to integrate the R&R perspective into the [[DevOps_and_Infrastructure|AIoT DevOps]] cycle. Naturally, the R&R perspective must be integrated with each of the four AIoT DevOps quadrants. From the R&R perspective, agile development must address not only the application code level but also the AI/model level, as well as the infrastructure level. Continuous Integration must ensure that all R&R-specific aspects are integrated properly. This can go as far as preparing the system for Chaos Engineering experiments<ref name="chaos" />. Continuous Testing must ensure that all R&R concepts are continuously validated. This must include basic system-level R&R, as well as AI and IoT-specific R&R aspects. Finally, Continuous Delivery/Operations must bring R&R to production. Some companies even go to the extreme of conducting continuous R&R tests as part of their production systems (one big proponent of this approach is Netflix, where the whole Chaos Engineering approach originated).<br />
<br />
== R&R Planning: Analyze Rate Act ==<br />
While it is important that R&R is treated as a normal part of the AIoT DevOps cycle, it usually makes sense to have a dedicated R&R planning mechanism, which looks at R&R specifically. Note that a similar approach has also been suggested for Security, as well as Verification & Validation. It is important that none of these three areas is viewed in isolation, and that redundancies are avoided.<br />
<br />
[[File:2.5-RR Analyze Rate Act.png|800px|frameless|center|link=|Analyze Rate Act]]<br />
<br />
The AIoT Framework proposes a dedicated Analyze/Rate/Act planning process for R&R, embedded into the AIoT DevOps cycle, as shown by the figure preceding. <br />
<br />
The "Analyze" phase of this process must take two key elements into consideration:<br />
* R&R metrics/KPIs: A performance analysis and evaluation of the actual live system. This must be updated and used as input for each iteration of the planning process. In the early phases, the focus will be more on how to actually define the R&R KPIs and acquire related data, while in the later phases this information will become an integral part of the R&R planning process.<br />
* Component/Dependency Analysis (C/DA): Utilizing existing system documentation such as architecture diagrams and flowcharts, the R&R team should perform a thorough analysis of all the components in the system, and their potential dependencies. From this process, a list of potential R&R Risk Areas should be compiled ("RA list").<br />
<br />
The RA list can contain risks at different levels of granularity, ranging from risks related to the availability of individual microservices up to risks related to the availability of entire regions. The RA list must also be compared to the results of the Threat Modeling that comes out of the [[Trust_and_Security|DevSecOps]] planning process. In some cases, it can even make sense to join these two perspectives into a single list or risk repository.<br />
<br />
The "Rate" phase must look at each item from the RA list in detail, including the potential impact of the risk, the likelihood that it occurs, ways of detecting issues related to the risk, and ways for resolving them. Finally, a brief action plan should describe a plan for automating the detection and resolution of issues related to the risk, including a rough effort estimate. Based on all of the above, a rating for each item in the RA list should be provided.<br />
<br />
The "Act" phase starts with prioritizing and scheduling the most pressing issues based on the individual ratings. Highly rated issues must then be transferred to the general development backlog. This will likely include additional analysis of dependencies on backlog items related to the application development side of things.<br />
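The rating step can be illustrated with a simple impact-times-likelihood score. The RA-list items and the 1-5 scales below are hypothetical examples, not a prescribed scheme:<br />

```python
# Illustrative rating of RA-list items: rating = impact x likelihood,
# both on a hypothetical 1-5 scale.

ra_list = [
    {"risk": "single microservice outage", "impact": 2, "likelihood": 4},
    {"risk": "regional cloud outage",      "impact": 5, "likelihood": 1},
    {"risk": "edge fleet loses OTA path",  "impact": 4, "likelihood": 3},
]

for item in ra_list:
    item["rating"] = item["impact"] * item["likelihood"]

# "Act": the highest-rated items are transferred to the backlog first.
backlog = sorted(ra_list, key=lambda r: r["rating"], reverse=True)
for item in backlog:
    print(f'{item["rating"]:>2}  {item["risk"]}')
```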
<br />
== Minimum Viable R&R ==<br />
Similar to the discussion on Minimum Viable Security, project management must carefully strike a balance between investments in R&R and other investments. A system that does not support basic R&R will quickly frustrate users or even worse -- result in lost business or actual physical harm. However, especially in the early stages of market exploration, the focus must be on features and usability. Determining when to invest in R&R as the system matures is a key challenge for the team.<br />
<br />
= Robust, AI-based components in AIoT =<br />
The AI community is still in the early stages of addressing reliability, resilience and related topics such as robustness and explainability of AI-based systems. H. Truong provides the following definitions <ref name="r3e" /> from the ML perspective:<br />
* Robustness: Dealing with imbalanced data and learning in open-world (out-of-distribution) situations<br />
* Reliability: Reliable learning and reliable inference in terms of accuracy and reproducibility of ML models; uncertainties/confidence in inferences; reliable ML service serving<br />
* Resilience: Bias in data, adversarial attacks on ML, resilience learning, computational Byzantine failures<br />
<br />
In the widely cited paper on ''Hidden Technical Debt in Machine Learning Systems''<ref name="sculley" />, the authors emphasize that only a small fraction of real-world ML systems are composed of ML code, while the required surrounding infrastructure is vast and complex, including configuration, data collection, feature extraction, data verification, machine resource management, analysis tools, process management tools, serving infrastructure, and monitoring.<br />
<br />
[[File:2.5-RR Robust AI Components.png|800px|frameless|center|link=|Robust AI Components for AIoT]]<br />
<br />
The AIoT Framework suggests differentiating between the online and offline perspectives of the AI-based components in the AIoT system. The offline perspective must cover data sanitation, robust model design, and model verification. The online perspective must include runtime checks (e.g., feature values out of range or invalid outputs), an approach for graceful model degradation, and runtime monitoring. Between the online and offline perspectives, a high level of automation must be achieved, covering everything from training to testing and deployments.<br />
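The online runtime checks mentioned above can be sketched as a guard around model inference; the model, the feature ranges, and the degradation fallback are illustrative assumptions:<br />

```python
# Sketch of online runtime checks: validate feature ranges before inference
# and degrade gracefully on invalid output. Model and ranges are stand-ins.

FEATURE_RANGES = {"temp": (-20.0, 60.0), "vibration": (0.0, 1.0)}
FALLBACK = "service_degraded"  # safe default instead of a bad prediction

def dummy_model(features: dict) -> float:
    return 0.1 * features["temp"] + features["vibration"]  # stand-in model

def predict_with_guards(features: dict):
    # Runtime check: feature values present and within range?
    for name, (lo, hi) in FEATURE_RANGES.items():
        if name not in features or not lo <= features[name] <= hi:
            return FALLBACK
    out = dummy_model(features)
    # Runtime check: invalid output?
    if not 0.0 <= out <= 10.0:
        return FALLBACK
    return out

print(predict_with_guards({"temp": 21.0, "vibration": 0.2}))   # normal path
print(predict_with_guards({"temp": 999.0, "vibration": 0.2}))  # degrades
```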
<br />
[[File:2.5-RR AIoT Architecture.png|800px|frameless|center|link=|Architecture for robust, AI-enabled AIoT components]]<br />
<br />
Mapping all of the above R&R elements to an actual AIoT system architecture is not an easy feat. Acquiring high-quality test data from assets in the field is not always easy. Managing the offline AI development and experimentation cycle can rely on standard AI engineering and automation tools. However, model deployments to assets in the field rely on nonstandard mechanisms, e.g., relying on [[OTA_Updates|OTA (over-the-air) updates]] from the IoT toolbox. Dealing with multiple instances of models deployed onto multiple assets (or EDGE instances) in the field is something that goes beyond standard AI processing in the cloud. Finally, gathering -- and making sense of -- monitoring data from multiple instances/assets is beyond today's well-established AI engineering principles.<br />
<br />
= Reliability & Resilience for IoT =<br />
Finally, we need to address the IoT specifics of Reliability & Resilience. For the backend (cloud or enterprise), of course, most of the standard R&R aspects of Internet/cloud/enterprise systems apply. Since the IoT adds new categories of clients (i.e., assets) to access the backends, this has to be taken into consideration from an R&R perspective. For example, the IoT backend must be able to cope with malfunctioning or potentially malicious behaviour of EDGE or embedded components.<br />
<br />
For the IoT components deployed in the field, environmental factors can play a significant role, requiring extra ruggedness for hardware components, which can be key from the R&R perspective. Additionally, depending on the complexity of the EDGE/embedded functions, many of the typical R&R features found in modern cloud environments will have to be reinvented to ensure R&R for components deployed in the field.<br />
<br />
Finally, for many IoT systems -- especially where assets can physically move -- there will be much higher chances of losing connectivity from the asset to the backend. This typically requires that both backend and field-based components implement a certain degree of autonomy. For example, an autonomous vehicle must be able to function in the field without access to additional data (e.g., map data) from the cloud. Equally, a backend asset monitoring solution must be able to function, even if the asset is currently not connected. For example, asset status information must be augmented with a timestamp that indicates when this information was last updated.<br />
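The timestamp handling described above can be sketched as follows; the 15-minute staleness threshold and the status layout are illustrative assumptions:<br />

```python
# Sketch of timestamped asset status in the backend: every reading is served
# together with its timestamp and flagged as stale beyond a threshold.
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(minutes=15)  # illustrative threshold

def asset_status(last_update: datetime, state: str, now: datetime) -> dict:
    return {
        "state": state,
        "last_update": last_update.isoformat(),
        "stale": (now - last_update) > STALE_AFTER,  # asset may be offline
    }

now = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)
fresh = asset_status(now - timedelta(minutes=5), "running", now)
stale = asset_status(now - timedelta(hours=2), "running", now)
print(fresh["stale"], stale["stale"])
```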
<br />
[[File:2.5-RR for IoT.png|800px|frameless|center|link=|Building robust IoT solutions]]<br />
<br />
= References =<br />
<references><br />
<ref name="r3e"> [https://www.researchgate.net/publication/341762862_R3E_-An_Approach_to_Robustness_Reliability_Resilience_and_Elasticity_Engineering_for_End-to-End_Machine_Learning_Systems ''R3E – An Approach to Robustness, Reliability, Resilience and Elasticity Engineering for End-to-End Machine Learning''], Hong-Linh Truong, 2020</ref><br />
<ref name="chaos"> [https://en.wikipedia.org/wiki/Chaos_engineering#cite_note-1 ''Chaos Engineering''], Wikipedia</ref><br />
<ref name="murphy">''Site Reliability Engineering: How Google Runs Production Systems'', N. Murphy et al., 2016</ref><br />
<ref name="sculley">[https://papers.nips.cc/paper/2015/file/86df7dcfd896fcaf2674f757a2463eba-Paper.pdf ''Hidden Technical Debt in Machine Learning Systems''], D. Sculley et al., 2015</ref><br />
</references><br />
<br />
= Authors and Contributors =<br />
<br />
{|{{Borderstyle-author}}<br />
|{{Designstyle-author|Image=[[File:Dirk Slama.jpeg|left|100px]]|author={{Dirk Slama|Title=AUTHOR}}}}<br />
|}</div>Sangamithra Panneer Selvamhttps://www.digitalplaybook.org/index.php?title=Trust_and_Security&diff=7049Trust and Security2022-03-29T14:52:57Z<p>Sangamithra Panneer Selvam: </p>
<hr />
<div><br />
<imagemap><br />
Image:2.4-TrustSecurity.png|frameless|1000px|Ignite AIoT - Trust & Security<br />
<br />
rect 4 0 651 133 [[AIoT_Framework|More...]]<br />
rect 970 0 1298 133 [[AIoT_Data_Strategy|More...]]<br />
rect 651 0 970 133 [[Artificial_Intelligence|More...]]<br />
rect 1298 0 1767 133 [[Digital_Twin_Execution|More...]]<br />
rect 1767 0 2095 133 [[Internet_of_Things|More...]]<br />
rect 2095 0 2542 133 [[Hardware.exe|Hardware.exe]]<br />
<br />
rect 2764 128 3539 257 [[Product_Architecture|More...]]<br />
rect 2764 257 3539 390 [[Agile AIoT|More...]]<br />
rect 2764 385 3539 518 [[AIoT_DevOps_and_Infrastructure|More...]]<br />
rect 2764 518 3539 651 [[Trust_and_Security|More...]]<br />
rect 2764 651 3539 784 [[Reliability_and_Resilience|More...]]<br />
rect 2764 779 3539 917 [[Verification_and_Validation|More...]]<br />
<br />
<br />
desc none<br />
</imagemap><br />
<br />
<s data-category="AIoTFramework"></s><br />
<br />
Digital Trust (or trust in digital solutions) is a complex topic. When do users deem a digital product truly trustworthy? What if a physical product component is added, as in smart, connected products? While security is certainly a key enabler of Digital Trust, there are many other aspects that are important, including ethical considerations, data privacy, quality and robustness (including reliability and resilience). Since AIoT-enabled products can have a direct, physical impact on the well-being of people, safety also plays an important role.<br />
<br />
Safety is traditionally closely associated with Verification and Validation, which has its own, dedicated section in Ignite AIoT. The same holds true for robustness (see Reliability and Resilience). Since security is such a key enabler, it will have its own, dedicated discussion here, followed by a summary of AIoT Trust Policy Management. Before delving into this, we first need to understand the AI and IoT-specific challenges from a security point of view.<br />
__TOC__<br />
<br />
= Why companies invest in cyber security =<br />
[[File:2.4-WhySecurity.png|800px|frameless|center|link=|Why companies invest in cyber security]]<br />
<br />
= AI-related Trust and Security Challenges =<br />
As excited as many business managers are about the potential applications of AI, many users and citizens are skeptical of its potential abuses. A key challenge with AI is that it is ''per se'' not explainable: instead of explicitly coded algorithms, there are "black box" models that are trained and fine-tuned over time with data from the outside, with no way of tracing and "debugging" them the traditional way at runtime. While Explainable AI is trying to resolve this challenge, no fully satisfactory solutions are available yet.<br />
<br />
Another key challenge with AI is bias: while the AI model might be statistically correct, it may be fed training data that includes a bias, which will result in (usually unwanted) biased behaviour. For example, an AI-based HR solution for the evaluation of job applicants that is trained on biased data will produce biased recommendations. <br />
<br />
While bias is often introduced unintentionally, there are also many potential ways to intentionally attack an AI-based system. A [https://www.belfercenter.org/publication/AttackingAI recent report] from the Belfer Center describes two main classes of AI attacks: Input Attacks and Poisoning Attacks.<br />
<br />
'''Input attacks:''' These kinds of attacks are possible because an AI model never covers 100% of all possible inputs. Instead, statistical assumptions are made, and mathematical functions are developed to allow creation of an abstract model of the real world derived from the training data. So-called adversarial attacks try to exploit this by manipulating input data in a way that confuses the AI model. For example, a small sticker added to a stop sign can confuse an autonomous vehicle and make it think that it is actually seeing a green light. <br />
<br />
'''Poisoning attacks:''' This type of attack aims at corrupting the model itself, typically during the training process. For example, malicious training data could be inserted to install some kind of backdoor in the model. This could, for example, be used to bypass a building security system or confuse a military drone.<br />
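The principle behind input attacks can be illustrated with a deliberately tiny, hypothetical model: for a linear classifier, nudging each input feature by a small amount in the direction of the model's weights is enough to flip the decision. The weights, inputs, and perturbation budget below are purely illustrative.<br />

```python
# Toy linear "model": score > 0 means class A (e.g., "stop sign"),
# score < 0 means class B. All numbers are purely illustrative.
w = [1.0, -2.0, 0.5]
b = 0.1

def predict(x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def sign(v):
    return 1.0 if v > 0 else -1.0 if v < 0 else 0.0

x = [0.4, -0.1, 0.2]   # benign input, classified as class A
eps = 0.6              # attacker's perturbation budget

# FGSM-style input attack: move each feature a small step in the
# direction that most decreases the score (for a linear model, the
# gradient of the score w.r.t. the input is simply w).
x_adv = [xi - eps * sign(wi) for xi, wi in zip(x, w)]

print(predict(x) > 0)      # True: original classification
print(predict(x_adv) > 0)  # False: the same model is fooled
```

Real adversarial attacks work against deep networks in the same spirit (e.g., the Fast Gradient Sign Method), only with the gradient computed through the network instead of read off the weight vector.<br />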
<br />
= IoT-related Trust and Security Challenges =<br />
Since the IoT deals with the integration of physical products, one has to look beyond the cloud and enterprise perspective to include networks and physical assets in the field. If a smart, connected product suddenly stops working because of technical problems, users will lose trust and wish they had the dumb, non-IoT version back. If hackers use an IoT-connected toy to invade a family's privacy, this is a violation of trust far beyond that of a normal hacked internet account. Consequently, addressing security and trust is key for any IoT-based product.<br />
<br />
OWASP (the Open Web Application Security Project, a nonprofit foundation) has published the OWASP IoT Top 10, a list of the top security concerns that each IoT product must address:<br />
* Weak, Guessable, or Hardcoded Passwords<br />
* Insecure Network Services<br />
* Insecure Ecosystem Interfaces (Web, backend APIs, Cloud, and mobile interfaces)<br />
* Lack of Secure Update Mechanism (Secure OTA)<br />
* Use of Insecure or Outdated Components<br />
* Insufficient Privacy Protection<br />
* Insecure Data Transfer and Storage<br />
* Lack of Device Management<br />
* Insecure Default Settings<br />
* Lack of Physical Hardening<br />
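Several of these concerns can be screened for automatically. The following sketch shows a hypothetical device configuration audit that flags three of the OWASP IoT Top 10 items; the configuration keys (`password`, `tls_enabled`, `ota_signed`) are made up for illustration.<br />

```python
# Hypothetical device configuration audit covering three of the
# OWASP IoT Top 10 items; the config key names are illustrative.
WEAK_DEFAULTS = {"admin", "password", "12345", "root"}

def audit_config(config):
    findings = []
    if config.get("password", "").lower() in WEAK_DEFAULTS:
        findings.append("Weak, Guessable, or Hardcoded Password")
    if not config.get("tls_enabled", False):
        findings.append("Insecure Data Transfer and Storage")
    if not config.get("ota_signed", False):
        findings.append("Lack of Secure Update Mechanism")
    return findings

print(audit_config({"password": "admin", "tls_enabled": False, "ota_signed": True}))
```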
<br />
Understanding these additional challenges is key. However, to address them -- together with the previously discussed AI-related challenges -- a pragmatic approach is required that fits directly with the product team's DevOps approach. The result is sometimes also referred to as DevSecOps, which will be introduced in the following.<br />
<br />
= DevSecOps for AIoT =<br />
DevSecOps augments the DevOps approach, integrating security practices into all elements of the DevOps cycle. While many security teams are traditionally centralized, the DevSecOps approach assumes that security is actually delivered by the DevOps teams and processes. This starts with Security-by-Design, but also includes integration, testing and delivery. From an AIoT perspective, the key is to ensure that DevSecOps addresses all challenges presented by the different aspects of AIoT: AI, cloud/enterprise, network, and IoT devices/assets. The following figure provides an overview of the proposed DevSecOps model for AIoT.<br />
<br />
[[File:2.4-DevSecOps.png|1000px|frameless|center|link=|DevSecOps for AIoT]]<br />
<br />
DevSecOps needs to address each of the four DevOps quadrants. In addition, Security Planning was added as a fifth quadrant. The following will look at each of these five quadrants in detail.<br />
<br />
= Security Planning for AIoT =<br />
Security Planning for AIoT must first determine the general approach. Next, Threat Modeling will provide insights into key threats and mitigation strategies. Finally, the security architecture and setup must be determined. Of course, this is an iterative approach, which requires continuous evaluation and refinement.<br />
== DevSecOps Approach ==<br />
The first step toward enabling DevSecOps for an AIoT product organization is to ensure that key stakeholders agree on the security method used and how to integrate it with the planned DevOps setup. In addition, clarity must be reached on resources and roles: <br />
* Is there a dedicated budget for DevSecOps (training, consulting, tools, infrastructure, certification)?<br />
* Will there be a dedicated person (or even team) with their security hat on?<br />
* How much time is each developer expected to spend on security?<br />
* Will the project be able to afford dedicated DevSecOps training for the development teams?<br />
* Will there be a dedicated security testing team?<br />
* Will there be external support, e.g., an external company performing the penetration tests?<br />
* How will security-related reporting be set up during development and operations?<br />
<br />
== Threat Modeling ==<br />
Threat Modeling is a widely established approach for identifying and predicting security threats (using the attacker’s point of view) and protecting IT assets by building a defense strategy that prepares the appropriate mitigation strategies. Threat models provide a comprehensive view of an organization’s full attack surface and help to make decisions on how to prioritize security-related investments.<br />
<br />
There are a number of established threat modeling techniques available, including [https://threatmodeler.com/threat-modeling-methodologies-overview-for-your-business/ STRIDE and VAST]. The following figure describes the overall threat modeling process.<br />
<br />
[[File:2.4-ThreatModeling.png|700px|frameless|center|link=|Threat Modeling]]<br />
<br />
First, the so-called [https://en.wikipedia.org/wiki/Security_Target Target of Evaluation (ToE)] must be defined, including security objectives and requirements, as well as a definition of assets in scope.<br />
<br />
Second, the Threats & Attack Surfaces must be identified. For this, the STRIDE model can be used as a starting point. [https://insights.sei.cmu.edu/sei_blog/2018/12/threat-modeling-12-available-methods.html STRIDE] provides a common set of threats, as defined in the table below (including AIoT-specific examples).<br />
<br />
[[File:2.4-STRIDE.png|800px|frameless|center|link=|STRIDE]]<br />
<br />
The STRIDE threat categories can be used to perform an in-depth analysis of the [https://community.arm.com/iot/b/internet-of-things/posts/five-steps-to-successful-threat-modelling attack surface]. For this purpose, threat modeling usually uses component diagrams of the target system and applies the threat categories to it. An example is shown in the following figure.<br />
<br />
[[File:2.4-STRIDEApplied.png|700px|frameless|center|link=|Analyzing the attack surface]]<br />
<br />
Finally, the potential severity of different attack scenarios will have to be evaluated and compared. For this process, an established method such as the Common Vulnerability Scoring System (CVSS) can be used. CVSS uses a score from zero to ten to help rank different attack scenarios. An example is given in the following figure.<br />
[[File:2.4-CVSS.png|800px|frameless|center|link=|CVSS]]<br />
<br />
Next, the product team needs to define a set of criteria for dealing with the risks on the different levels, e.g.<br />
* High risk: Fixed immediately<br />
* Medium risk: Fixed in next minor release<br />
* Low risk: Fixed in next major release<br />
<br />
To manage the identified and classified risks, a risk catalog or risk register is created to track the risks and the status. This would usually be done as part of the overall defect tracking.<br />
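A minimal risk register with CVSS-based triage could look as follows. The score thresholds mirror the high/medium/low criteria above (roughly aligned with the CVSS severity bands); the entries themselves are hypothetical.<br />

```python
from dataclasses import dataclass

# Hypothetical risk register entries, triaged by CVSS base score
# (0.0-10.0) into the release-planning buckets defined above.
@dataclass
class Risk:
    title: str
    cvss: float
    status: str = "open"

def triage(risk):
    if risk.cvss >= 7.0:                    # high risk
        return "fix immediately"
    if risk.cvss >= 4.0:                    # medium risk
        return "fix in next minor release"
    return "fix in next major release"      # low risk

register = [
    Risk("Hardcoded OTA signing key", 9.1),
    Risk("Verbose error messages", 3.2),
]
for r in register:
    print(r.title, "->", triage(r))
```

In practice, such a register would live in the team's defect-tracking tool rather than in code, but the triage rules stay the same.<br />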
<br />
== Security Architecture & Setup ==<br />
Securing an AIoT system is not a single task, and the results of the threat modeling exercise are likely to show attack scenarios of very different kinds. Some of these scenarios will have to be addressed during the later phases of the DevSecOps cycle, e.g., during development and testing. However, some basic security measures can usually already be established as part of the system architecture and setup, including:<br />
* Basic security measures, such as firewalls and anti-virus software<br />
* Installation of network traffic monitors and port scanners<br />
* Hardware-related security architecture measures, e.g., Trusted Platform Module (TPM) for extremely sensitive systems<br />
<br />
These types of security-related architecture decisions should be made in close alignment with the product architecture team, early in the architecture design.<br />
<br />
== Integration, Testing, and Operations ==<br />
<br />
In '''DevSecOps''', the development teams must be integrated into all security-related activities. On the code-level, regular code reviews from a security perspective can be useful. On the hardware-level, design and architecture reviews should be performed from a security perspective as well. For AI, the actual coding is usually only a small part of the development. Model design and training play a more important role and should also be included in regular security reviews.<br />
<br />
'''Continuous Integration''' has to address security concerns specifically on the code level.<br />
Code-level security tests/inspections include:<br />
* Before compilation/packaging: Static Application Security Testing (SAST) tools analyze the source code for known vulnerability patterns.<br />
* IAST (Interactive Application Security Testing) uses code instrumentation, which can slow down performance. Whether to enable or disable it will have to be decided individually as part of the CI process.<br />
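To give a flavor of what a SAST tool does, the toy check below scans source text for one of the simplest static findings: hardcoded credentials. Production SAST tools use full parsing and data-flow analysis; this regex-based sketch is illustrative only.<br />

```python
import re

# Illustrative SAST-style check: report hardcoded credentials in
# source text. Real tools parse the code; this regex is a toy.
SECRET_PATTERN = re.compile(
    r'(password|api_key|secret)\s*=\s*["\'][^"\']+["\']', re.IGNORECASE)

def scan_source(source):
    return [m.group(0) for m in SECRET_PATTERN.finditer(source)]

code = 'db_password = "hunter2"\ntimeout = 30\n'
print(scan_source(code))  # the hardcoded credential is reported
```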
<br />
'''Security testing''' includes tests with a specific focus on testing for security vulnerabilities.<br />
These can include:<br />
* Applications, e.g., DAST (Dynamic Application Security Testing)<br />
* Hardware-related security tests<br />
* AI model security tests<br />
* End-to-End System, e.g., manual and automated penetration tests<br />
<br />
'''Secure operations''' have to include a number of activities, including:<br />
* Threat Intelligence<br />
* Infrastructure and Network Testing (including Secure OTA)<br />
* Security tests in the field<br />
* RASP: Runtime Application Self-Protection<br />
* Monitor/Detect/Response/Recover<br />
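The Monitor/Detect part can be as simple as watching for anomalous device behavior. The sketch below flags a device whose request rate jumps far above its recent rolling average; the window size and threshold factor are arbitrary example values.<br />

```python
from collections import deque

# Illustrative Monitor/Detect step: flag a device whose request rate
# jumps far above its recent rolling average.
class RateMonitor:
    def __init__(self, window=5, factor=3.0):
        self.window = deque(maxlen=window)
        self.factor = factor

    def observe(self, requests_per_min):
        avg = (sum(self.window) / len(self.window)
               if self.window else requests_per_min)
        alert = requests_per_min > self.factor * avg
        self.window.append(requests_per_min)
        return alert

m = RateMonitor()
print([m.observe(x) for x in [10, 12, 11, 90]])  # only the spike trips the alert
```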
<br />
== Minimum Viable Security ==<br />
The key challenge with security planning and implementation is to find the right approach and the right level of resource investment. If too little attention (and too small a share of project resources and budget) is given to security, there is a good chance that this will result in a disaster, fast. However, if the entire project is dominated by security, this can also become a problem. This relates to the resources allocated to different topics, but also to the danger of over-engineering the security solutions (and in the process making it too difficult to deliver the required features and usability). Figuring out the Minimum Viable Security is something that must be done jointly by product management and security experts. It is also important to see this as an ongoing effort, constantly reacting to new threats and supporting the system architecture as it evolves.<br />
<br />
= Trust Policy Management for AIoT =<br />
In addition to security-related activities, an AIoT product team should also consider taking a proactive approach toward broader trust policies.<br />
These trust policies can include topics such as:<br />
* Data sharing policies (e.g., sharing of IoT data with other stakeholders)<br />
* Transparency policies (e.g., making data sharing policies transparent to end users)<br />
* Ethics-related policies (e.g., for AI-based decisions)<br />
<br />
Taking a holistic view of AIoT trust policies and establishing central trust policy management can significantly contribute to creating trust between all stakeholders involved.<br />
<br />
{{Infobox|The [http://digitaltrustforum.org/ Digital Trust Forum (DTF)] is working on Trust Policy Management for AIoT-based smart, connected products}}<br />
<br />
= Authors and Contributors =<br />
<br />
{|{{Borderstyle-author}}<br />
|{{Designstyle-author|Image=[[File:Dirk Slama.jpeg|left|100px]]|author={{Dirk Slama|Title=AUTHOR}}}}<br />
<br><br />
{{Designstyle-author|Image=[[File:Pablo Endres.jpg|left|100px]]|author={{Pablo Endres|Title=CONTRIBUTOR}}}}<br />
|}</div>Sangamithra Panneer Selvamhttps://www.digitalplaybook.org/index.php?title=AIoT_DevOps_and_Infrastructure&diff=7048AIoT DevOps and Infrastructure2022-03-29T14:52:42Z<p>Sangamithra Panneer Selvam: </p>
<hr />
<div><br />
<imagemap><br />
Image:2.3-DevOps.png|frameless|1000px|AIoT - DevOps and Infrastructure<br />
<br />
rect 4 0 651 133 [[AIoT_Framework|More...]]<br />
rect 970 0 1298 133 [[AIoT_Data_Strategy|More...]]<br />
rect 651 0 970 133 [[Artificial_Intelligence|More...]]<br />
rect 1298 0 1767 133 [[Digital_Twin_Execution|More...]]<br />
rect 1767 0 2095 133 [[Internet_of_Things|More...]]<br />
rect 2095 0 2542 133 [[Hardware.exe|Hardware.exe]]<br />
<br />
rect 2764 128 3539 257 [[Product_Architecture|More...]]<br />
rect 2764 257 3539 390 [[Agile AIoT|More...]]<br />
rect 2764 385 3539 518 [[AIoT_DevOps_and_Infrastructure|More...]]<br />
rect 2764 518 3539 651 [[Trust_and_Security|More...]]<br />
rect 2764 651 3539 784 [[Reliability_and_Resilience|More...]]<br />
rect 2764 779 3539 917 [[Verification_and_Validation|More...]]<br />
<br />
desc none<br />
</imagemap><br />
<br />
<s data-category="AIoTFramework"></s><br />
<br />
The introduction of DevOps -- together with Continuous Integration/Continuous Delivery (CI/CD) -- has fundamentally changed the way software is developed, integrated, tested, and deployed. DevOps and CI/CD are key enablers of agile development. However, today's DevOps practices predominantly focus on cloud and enterprise application development. For successful AIoT products, DevOps will need to be extended to include AI and IoT. <br />
__TOC__<br />
<br />
= Agile DevOps for Cloud and Enterprise Applications =<br />
DevOps organizations break down the traditional barriers between development and operations, focusing on cross-functional teams that support all aspects of development, testing, integration and deployment. Successful DevOps organizations avoid overspecialization and instead focus on cross-training and open communication between all DevOps stakeholders.<br />
<br />
DevOps culture is usually closely aligned with agile culture; both are required for incremental and explorative development. <br />
<br />
Continuous Integration/Continuous Delivery (CI/CD) emphasizes automation tools that drive building and testing, ultimately enabling a highly efficient and agile software life cycle. The Continuous Integration (CI) process typically requires that all code changes be committed to a central code repository. Each new check-in triggers an automated process that rebuilds the system, automatically performs unit tests, and executes automated code-quality checks. The resulting software packages are deployed to a CI server, with optional notification of a repository manager.<br />
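Conceptually, such a CI process is just an ordered sequence of automated stages that stops on the first failure, as in this simplified sketch (stage names and steps are illustrative placeholders):<br />

```python
# Simplified CI pipeline as code: run stages in order, stop on the
# first failure. Stage names and steps are illustrative placeholders.
def run_pipeline(stages):
    for name, step in stages:
        ok = step()
        print(f"{name}: {'ok' if ok else 'FAILED'}")
        if not ok:
            return False
    return True

stages = [
    ("build", lambda: True),
    ("unit tests", lambda: True),
    ("code quality checks", lambda: True),
]
print(run_pipeline(stages))
```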
<br />
Continuous Testing (CT) goes beyond simple unit tests, and utilizes complex test suites that combine different test scripts to simulate and test complex interactions and processes.<br />
<br />
Finally, Continuous Delivery (CD) uses [https://en.wikipedia.org/wiki/Infrastructure_as_code Infrastructure-as-Code (IaC)] concepts to deploy updated software packages to the different test and production environments.<br />
<br />
[[File:2.3-DevOps-Enterprise.png|800px|frameless|center|link=|Agile DevOps for Cloud and Enterprise Applications]]<br />
<br />
= Agile DevOps for AI: MLOps =<br />
The introduction of AI to the traditional development process is adding many new concepts, which create challenges for DevOps:<br />
* New roles: data scientist, AI engineer<br />
* New artefacts (in addition to code): Data, Models<br />
* New methods/processes: AI/data-centric, e.g., "Agile CRISP-DM", Cognitive Project Management for AI (CPMAI) <br />
* New AI tools + infrastructure<br />
<br />
The development of AI-based systems also introduces a number of new requirements from a DevOps perspective:<br />
* Reproducibility of models: Creating reproducible models is a key prerequisite for a stable DevOps process<br />
* Model validation: Validating models from a functional and business perspective is key<br />
* Explainability (XAI, or 'explainable AI'): How to ensure that the results of the AI are comprehensible for humans? <br />
* Testing and test automation: AI requires new methods and infrastructure<br />
* Versioning: Models, code, data<br />
* Lineage: Track evolution of models over time<br />
* Security: Deliberately skewed models as new attack vector/adversarial attacks<br />
* Monitoring and retraining: Model decay requires constant monitoring and retraining<br />
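The last point, monitoring for model decay, can be sketched with a deliberately simple drift check: compare live prediction scores against a training-time baseline and trigger retraining when they diverge. Real MLOps setups use more robust statistics (e.g., the population stability index); the threshold here is an arbitrary illustration.<br />

```python
import statistics

# Toy model-decay monitor: flag drift when live prediction scores
# diverge from the training-time baseline. Threshold is illustrative.
def needs_retraining(baseline, live, threshold=0.2):
    shift = abs(statistics.mean(live) - statistics.mean(baseline))
    return shift > threshold

baseline = [0.48, 0.52, 0.50, 0.49, 0.51]
live = [0.71, 0.75, 0.69, 0.73, 0.72]
print(needs_retraining(baseline, live))  # True: scores have drifted
```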
<br />
The figure below provides an overview of how an AI-specific DevOps process can help in addressing many of the issues outlined above.<br />
<br />
[[File:2.3-DevOps-AI.png|800px|frameless|center|link=|AI DevOps]]<br />
<br />
= Agile DevOps for IoT =<br />
Finally, we need to look at the DevOps challenges from an IoT point of view. The main factors are:<br />
* OTA: [[OTA_Updates|Over-the-Air updates (OTA)]] require a completely different infrastructure and process than traditional, cloud-based DevOps approaches<br />
* Embedded Software & Hardware: The lifecycle of embedded hardware and software is very different from cloud-based software. Testing and test automation are possible, but require special efforts and techniques.<br />
<br />
The OTA update process is described in more detail [[OTA Updates|here]]. The following figure provides a high-level overview. The OTA update process usually comprises three phases. During the authoring phase, new versions of the software (or AI models or other content) are created. The distribution phase is responsible for physical distribution (e.g., between different global regions) and the management of update campaigns. Finally, once the update has arrived on the asset, the local distribution process ensures that it is securely deployed and the updated system is validated.<br />
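On the asset side, the essential safeguard is that an update is verified before it is applied and validated afterwards. The following is a hedged sketch of that step; the function name, return values, and checksum-only verification are simplifications (real OTA stacks verify cryptographic signatures and support rollback).<br />

```python
import hashlib

# Hedged sketch of the on-asset OTA step: verify the package before
# applying it. Names and return strings are hypothetical.
def apply_update(package, expected_sha256):
    if hashlib.sha256(package).hexdigest() != expected_sha256:
        return "rejected: checksum mismatch"
    # ... install to the inactive partition, reboot, run self-test ...
    return "installed"

pkg = b"firmware-v2"
print(apply_update(pkg, hashlib.sha256(pkg).hexdigest()))  # installed
print(apply_update(pkg, "0" * 64))                         # rejected: checksum mismatch
```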
<br />
[[File:OTA Overview.png|600px|frameless|center|link=|OTA Overview]]<br />
<br />
Looking again at the four-quadrant DevOps overview, this time from the IoT perspective, a number of differences compared to the standard DevOps approach can be seen:<br />
* Agile development is structured to match the needs of an IoT organization, as discussed [[Product_Organization#OrgAndArch|here]]<br />
* Continuous Integration (CI) will usually have to cover a much more diverse set of development environments since it needs to cover cloud and embedded development<br />
* Continuous Testing (CT) will have to address the test automation of embedded components, e.g. by utilizing different abstraction and simulation techniques such as HIL (hardware in the loop), SIL (software in the loop) and MIL (model in the loop)<br />
* Continuous Delivery (CD) will have to utilize OTA not only for the production system but also for Quality Assurance and User Acceptance Tests<br />
<br />
Finally, all of the above will also have to be examined from the perspective of [[Verification_and_Validation|Verification and Validation]].<br />
<br />
[[File:2.3-DevOps-IoT.png|800px|frameless|center|link=|AIoT DevOps]]<br />
<br />
= Agile DevOps for AIoT =<br />
The AIoT DevOps approach will need to combine all three perspectives outlined in the previous sections: Cloud DevOps, AI DevOps and IoT DevOps.<br />
Each of these three topics in itself is complex, and integrating the three into a single, homogeneous and highly automated DevOps approach will be one of the main challenges of each AIoT product. However, without succeeding in this effort, it will be nearly impossible to deliver an attractive and feature-rich product that can also evolve over time, as far as the limitations of hardware deployed in the field will allow. Utilizing OTA to evolve the software and AI deployed on the assets in the field will be a key success factor for smart, connected products in the future.<br />
<br />
= Authors and Contributors =<br />
{|{{Borderstyle-author}}<br />
|{{Designstyle-author|Image=[[File:Dirk Slama.jpeg|left|100px]]|author={{Dirk Slama|Title=AUTHOR}}}}<br />
|}</div>Sangamithra Panneer Selvamhttps://www.digitalplaybook.org/index.php?title=Internet_of_Things&diff=7047Internet of Things2022-03-29T14:51:53Z<p>Sangamithra Panneer Selvam: </p>
<hr />
<div><br />
<imagemap><br />
Image:1.2-IoT.png|frameless|1000px|Ignite AIoT - Internet of Things perspective<br />
<br />
<br />
rect 4 0 651 133 [[AIoT_Framework|More...]]<br />
rect 970 0 1298 133 [[AIoT_Data_Strategy|More...]]<br />
rect 651 0 970 133 [[Artificial_Intelligence|More...]]<br />
rect 1298 0 1767 133 [[Digital_Twin_Execution|More...]]<br />
rect 1767 0 2095 133 [[Internet_of_Things|More...]]<br />
rect 2095 0 2542 133 [[Hardware.exe|Hardware.exe]]<br />
<br />
rect 2764 128 3539 257 [[Product_Architecture|More...]]<br />
rect 2764 257 3539 390 [[Agile AIoT|More...]]<br />
rect 2764 385 3539 518 [[AIoT_DevOps_and_Infrastructure|More...]]<br />
rect 2764 518 3539 651 [[Trust_and_Security|More...]]<br />
rect 2764 651 3539 784 [[Reliability_and_Resilience|More...]]<br />
rect 2764 779 3539 917 [[Verification_and_Validation|More...]]<br />
<br />
desc none<br />
</imagemap><br />
<br />
<s data-category="AIoTFramework"></s><br />
<br />
The IoT perspective in AIoT is usually much more focused on the physical product and/or the site of deployment, as well as the end-to-end system functionality. In this context, it makes sense to look at the IoT through the lens of the process that will support building, maintaining and enhancing the end-to-end system functionality. The AIoT Framework is based on the premise that overall an agile approach is desirable, but that due to the specifics of an AIoT system, some compromises will have to be made. For example, this could concern the development of embedded and hardware components, as well as safety and security concerns. <br />
<br />
Consequently, the assumption is that there is an overarching agile release train, with different (more or less) agile work streams. Each workstream represents some of the key elements of the AIoT system, including cloud services, communication services and IoT/EDGE components. In addition, AIoT DevOps & Infrastructure as well as cross-cutting tasks such as security and end-to-end testing are defined as workstreams. Finally, asset preparation is a workstream that represents the interface to the actual asset/physical product/site of deployment.<br />
<br />
The following provides a more detailed description of each of the standard work streams:<br />
* Agile Release Train: Responsible for end-to-end coordination, UX, and system architecture; ultimately responsible for ensuring that the AIoT system is implemented, tested, deployed and released<br />
* Cross-Cutting: Addresses tasks that are cutting especially across the cloud and IoT/EDGE, including end-to-end security, testing and QA<br />
* AIoT DevOps & Infrastructure: Must provide the infrastructure and processes for automating the AIoT system lifecycle, utilizing the [[AIoT_DevOps_and_Infrastructure|AIoT DevOps]] concepts outlined in the AIoT Framework<br />
* Cloud Services: Should more accurately be called ''Backend services, including cloud and on-premises AIoT applications, as well as enterprise system integration/EAI''. Must also address the backend side of Digital Twin, as well as AIoT-related business processes<br />
* Communication Services: Must provide LAN and WAN communication services. Can involve complex service contract negotiations in case a global AIoT WAN is required<br />
* IoT/EDGE Components: Includes responsibility for the development/procurement of all hardware (e.g., gateways, sensors), software, firmware and AI/ML execution environments deployed on or near the asset/product<br />
* Asset Preparation: Must ensure that the asset/physical product (or, in the case of an AIoT solution, the sites of deployment) are prepared to work with the AIoT system. Must include basic tasks such as ensuring power supply and providing storage/assembly points for AIoT hardware components<br />
<br />
The following will look at both the product and solution perspectives in more detail.<br />
__TOC__<br />
<br />
= Digital OEM: Product Perspective =<br />
This section looks at key milestones for an AIoT-enabled product, along the work streams defined earlier:<br />
* Basic prototype/pilot: Must include a combination of what will later become the AI-enabled functionality (could be scripted/hard coded at this stage), plus basic system functionality and ideally a rudimentary prototype of the actual asset/physical product (''A/B samples''). Should show end-to-end how the different components will interact to deliver the desired user experience<br />
* Fully functional prototype: Functional, basic prototype with full AIoT functionality and a relatively high level of maturity. Must include first real AI models and AI-driven functionality, as well as full asset/physical product functionality (''C/D samples''). After this, both the APIs between the cloud and EDGE should be stable, as well as the interfaces to the asset/physical product (power lines, antenna and gateway fastening, etc.).<br />
* AIoT MVP: This focuses only on the AIoT elements, assuming that the asset/physical product will no longer undergo any major changes. The AIoT MVP must not only be functionally complete, but also ensure that all procurement aspects are finalized. Furthermore, a fully automated AIoT DevOps infrastructure, including cloud, IoT and AI pipelines, should be developed<br />
* SOP (Start of Production): This is the day of no return: the manufacturing lines will now start processing assets/physical products and shipping them to customers around the world. Any changes/fixes on the hardware side will now become very costly or nearly impossible. At this point, the required operations support must also be fully operational (either providing fully automated online support services, call-center support, or even on-site field services)<br />
* Cloud SW Updates after SOP: This must utilize the AIoT DevOps pipeline, including Continuous Integration and Continuous Testing for quality purposes<br />
* EDGE SW Updates after SOP: Finally, this must utilize the established OTA infrastructure to deliver updates to assets in the field (this infrastructure will have already been established in the later stages of system field tests)<br />
<br />
Note that this perspective does not differentiate between the hardware engineering and manufacturing perspectives of the on-asset AIoT hardware vs. the actual asset/physical product itself. Furthermore, it also does not differentiate between line-fit and retrofit scenarios.<br />
<br />
[[File:1.3-IoTProductLC.png|900px|frameless|center|link=|AIoT Product Perspective]]<br />
<br />
= Digital Equipment Operator: Solution Perspective =<br />
An AIoT solution is usually not focused on the design/manufacturing of assets/physical products. In many cases, assets are highly heterogeneous, and the AIoT solution components will be applied using a retrofit approach. Instead of asset preparation, the focus is on site preparation. Additionally, the level of productization is usually not as high.<br />
<br />
This makes the process and the milestones easier and less complex:<br />
* Pilot: Usually, much more lightweight; could simply be some sensors retrofitted to an existing asset, with a WLAN connection to a standard cloud backend<br />
* MVP: Again, more lightweight and most likely also less sophisticated in terms of process automation<br />
* Roll-out: Critical part of the process: not only in technical terms but also in terms of fulfilling on-site user expectations<br />
* First Cloud SW-Update: Should be automated, utilizing existing standard cloud DevOps mechanisms<br />
* First EDGE SW-Update: Can be automated and utilizing OTA, but for small-scale solutions; potentially also manual<br />
<br />
[[File:1.3-IoTSolutionLC.png|900px|frameless|center|link=|AIoT Solution Perspective|class=AIoT_Data_Strategy]]</div>Sangamithra Panneer Selvamhttps://www.digitalplaybook.org/index.php?title=Digital_Twin_Execution&diff=7046Digital Twin Execution2022-03-29T14:50:46Z<p>Sangamithra Panneer Selvam: </p>
<hr />
<div><br />
<imagemap><br />
Image:1.0 Digital Twin-exe.png|frameless|1000px|Digital Twin<br />
<br />
rect 4 0 651 133 [[AIoT_Framework|More...]]<br />
rect 970 0 1298 133 [[AIoT_Data_Strategy|More...]]<br />
rect 651 0 970 133 [[Artificial_Intelligence|More...]]<br />
rect 1298 0 1767 133 [[Digital_Twin_Execution|More...]]<br />
rect 1767 0 2095 133 [[Internet_of_Things|More...]]<br />
rect 2095 0 2542 133 [[Hardware.exe|Hardware.exe]]<br />
<br />
rect 2764 128 3539 257 [[Product_Architecture|More...]]<br />
rect 2764 257 3539 390 [[Agile AIoT|More...]]<br />
rect 2764 385 3539 518 [[AIoT_DevOps_and_Infrastructure|More...]]<br />
rect 2764 518 3539 651 [[Trust_and_Security|More...]]<br />
rect 2764 651 3539 784 [[Reliability_and_Resilience|More...]]<br />
rect 2764 779 3539 917 [[Verification_and_Validation|More...]]<br />
<br />
desc none<br />
</imagemap><br />
<br />
<s data-category="AIoTFramework"></s><br />
<br />
As discussed in [[Digital_Twin_101|Digital Twin 101]], a Digital Twin is the virtual representation of a real-world physical object. Digital Twins help manage complexity by providing a semantically rich abstraction layer, especially for systems with a high level of functional complexity and heterogeneity. As an AIoT project lead, one should start by looking at the question "Is a Digital Twin needed, and if so - what kind of Digital Twin?", before defining the Digital Twin implementation roadmap.<br />
<br />
__TOC__<br />
<br />
= Is a Digital Twin needed? =<br />
The decision of whether and when to apply the Digital Twin concept in an AIoT initiative will depend on at least two key factors: Sensor Data Complexity/Analytics Requirements and System Complexity (e.g., the number of different machine types, organizational complexity, etc.).<br />
<br />
If both are low, the system will probably be fine with using Digital Twin more as a logical design concept and applying traditional data analytics. Only with increasing sensor data complexity and analytics requirements will the use of AI be required. <br />
<br />
High system complexity is an indicator that dedicated Digital Twin implementation should be considered, potentially utilizing a dedicated DT platform. The reason is that a high system complexity will make it much harder to focus on the semantics. Here, a formalized DT can help.<br />
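The two-factor decision described above can be condensed into a toy helper function; the inputs and recommendation strings are illustrative labels, not a formal method.<br />

```python
# Toy decision helper mirroring the two factors discussed above;
# labels and recommendation strings are illustrative only.
def dt_recommendation(sensor_complexity, system_complexity):
    analytics = ("AI-based analytics" if sensor_complexity == "high"
                 else "traditional analytics")
    twin = ("dedicated Digital Twin platform" if system_complexity == "high"
            else "Digital Twin as a logical design concept")
    return f"{analytics}, {twin}"

print(dt_recommendation("low", "low"))
print(dt_recommendation("high", "high"))
```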
<br />
[[File:0.3.1 DT Conclusions.png|800px|frameless|center|link=|Conclusions - Digital Twin and AIoT]]<br />
<br />
= If So, What Kind of Digital Twin? =<br />
Since Digital Twin is a relatively generic concept, the concrete implementation will heavily depend on the type of data that will be used as the foundation. Since Digital Twins usually refer to physical assets (at least in the context of our discussions), the potential data can be identified along the lifecycle of a typical physical asset: design data or digital master data, simulation data, manufacturing/production data, customer data, and operational data. For the operational data, it is important to differentiate between data related to the physical asset itself (e.g., state, events, configuration data, history) versus data relating to the environment of the asset.<br />
<br />
Depending on the application area, the Digital Twin (DT) can have a different focus. The Operational DT will mainly focus on operational data, including the internal state and data relating to the environment. The PLM-focused DT will combine the product/asset design perspective with the operational perspective, sometimes also adding manufacturing-related data. The simulation-focused DT will combine design data with operational data and apply simulation to it. Finally, the holistic DT will combine all of the above.<br />
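The four DT categories above can be sketched as a mapping to the lifecycle data sources introduced earlier. This is a hypothetical illustration; the category keys and source labels are assumptions derived from the text, not a standard taxonomy.<br />

```python
# Illustrative mapping of DT categories to the lifecycle data sources
# they typically combine (labels follow the text; structure is assumed).
DT_DATA_SOURCES = {
    "operational": {"operational (asset state/events)", "operational (environment)"},
    "plm_focused": {"design/digital master", "manufacturing/production",
                    "operational (asset state/events)"},
    "simulation_focused": {"design/digital master", "simulation",
                           "operational (asset state/events)"},
}
# The holistic DT combines all of the above, plus customer data
DT_DATA_SOURCES["holistic"] = set().union(*DT_DATA_SOURCES.values()) | {"customer"}

def required_sources(category: str) -> set:
    """Return the data sources a given DT category builds on."""
    return DT_DATA_SOURCES[category]
```

Such a mapping can help a project team check early on which data sources a chosen DT focus would actually demand.<br />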
<br />
[[File:DT Categories.png|800px|frameless|center|link=|DT Categories]]<br />
<br />
= Examples =<br />
The Digital Twin concept is quite versatile and can be applied to many different use cases. The following table provides an overview of four concrete examples and how they are mapped to the DT categories introduced before.<br />
<br />
[[File:DT Categories Examples.png|800px|frameless|center|Examples for different DT categories ]]<br />
<br />
The drone-based building facade inspection is covered in detail in the [[Drone-based_Building_Facade_Inspection|TÜV SÜD case study]]. The physics simulation example is covered in the [[Digital_Twin_101|Digital Twin 101]] section. The following will provide an overview of the pneumatic system example, as well as the elevator example.<br />
<br />
== Operational DT: Pneumatic System ==<br />
Leakage detection for pneumatic systems is a good example of an operational Digital Twin. Pneumatic systems provide pressurized air to implement different use cases, e.g., the drying of cars in a car wash, eliminating bad grains in a stream of grains analyzed using high-speed video data analytics, or cleaning bottles in a bottling plant. Experts estimate that pneumatic systems consume 16 billion kilowatt hours annually, with a savings potential of up to 50% ([[mader.eu]]). In order to address this savings potential, an AIoT-enabled leakage detection system can help to identify and fix leakages at customer sites. One such solution is currently being developed by the [[AIoT_Lab|AIoT Lab]]. This solution is based on a combination of ultrasound sensors and edge-ML for sound pattern analysis. The solution can be used on-site to perform an analysis of the customer's pneumatic application for leakages. The results can then be used by a service technician to fix the problems and eliminate the leakages.<br />
<br />
[[File:DT Pneumatic Example.png|800px|frameless|center|Example: Digital Twin for Pneumatic System]]<br />
<br />
The foundation for the leakage detection system is an operational Digital Twin. Since customers usually do not provide detailed design information about their own systems, the focus here is to obtain as much information as possible during the site visit and build up the main part of the Digital Twin dynamically while on site.<br />
The system is based on Digital Twin data in four domains:<br />
* Domain I includes the components of the AIoT solution itself, e.g., the mobile gateways and ultrasound sensors. This DT domain is important to support the system administration, e.g., OTA-based updates of the ML models for sound detection.<br />
* Domain II includes the pneumatic components found on-site, including pressure generators, pressure tanks, valves, etc. The definitions of these components are provided via the product catalogue, and can be selected dynamically on-site.<br />
* Domain III includes the fuselage and how it is mapped to the applications of the customer. Key parts of the customer equipment must be identified and included in the DT model for documentation purposes. Usually, only those parts of the customer equipment are captured that are involved with any of the leakages found.<br />
* Domain IV includes the leakages that are identified during the on-site assessment. These leakages are also captured as Digital Twins, including information about the related sound patterns, as well as the position of the leakage relative to DT information from domains II and III.<br />
<br />
The creation of the Digital Twins happens along these domains: DT data in domain I are created once per test equipment pack. Domains II-IV are created dynamically and per customer site.<br />
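The four domains above can be sketched as a minimal data model. This is a hypothetical illustration only; all class and field names are assumptions for explanatory purposes, not the AIoT Lab's actual implementation.<br />

```python
# Hypothetical sketch of the four Digital Twin data domains described above.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SolutionComponent:      # Domain I: gateway/sensor, created once per test pack
    component_id: str
    ml_model_version: str     # updated via OTA

@dataclass
class PneumaticComponent:     # Domain II: selected from the product catalogue on-site
    catalogue_id: str
    kind: str                 # e.g. "pressure generator", "valve"

@dataclass
class CustomerEquipment:      # Domain III: captured only where leakages are involved
    description: str

@dataclass
class Leakage:                # Domain IV: found during the on-site assessment
    sound_pattern_id: str
    located_at: PneumaticComponent
    near: Optional[CustomerEquipment] = None

@dataclass
class SiteVisitTwin:          # Domains II-IV are created per customer site
    site: str
    components: list = field(default_factory=list)
    equipment: list = field(default_factory=list)
    leakages: list = field(default_factory=list)
```

Keeping Domain I separate from the per-site twin mirrors the creation pattern described in the text: test-equipment data is created once, while site data is built up dynamically per visit.<br />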
<br />
[[File:DT Pneumatic Example - Roadmap.png|800px|frameless|center|Digital Twin - Pneumatics Example - Domains]]<br />
<br />
Daniel Burkhardt, Chief Product Owner, AIoT Lab: ''We have the goal of providing a solution architecture that enables ML model reuse and holistic AIoT DevOps. The implementation of leakage detection based on a Digital Twin of a pneumatic system provided us with relevant insights about the requirements and design principles for achieving this goal. In comparison to typical software development, reuse and AIoT DevOps require design principles such as continuous learning, transferability, modularization, and openness. Realizing these principles will guarantee the ease of use of AIoT for organizations with, e.g., no technological expertise, which in the long term leads to more detailed and meaningful Digital Twins and thus more accurate and valuable analytics.''<br />
<br />
== Holistic Digital Twin: DT and Elevators ==<br />
A good example of the use of a holistic Digital Twin approach is elevators, since they have a long and complex lifecycle that can benefit from this approach. What is also interesting here is the combination of the elevator lifecycle with the building lifecycle, since most elevators are deployed in buildings. The following example shows how a standard elevator design is fitted into a building design. This is a complex process that needs to take into consideration the elevator design specification, building design, elevator shaft design, and required performance parameters.<br />
<br />
[[File:BuildingRendering.png|600px|frameless|center|Digital Twin of building and elevator - 3D model]]<br />
<br />
The CAD model and EBOM data of the elevator design can be a good foundation for the digital twin. To support efficient monitoring of the elevator during the operations phase, an increasing number of advanced sensors are being applied. These include, for example, sensors to monitor elevator speed, braking behavior, positioning of the elevator in the elevator shaft, vibrations, ride comfort, doors, etc. Based on these data, a dashboard can provide reports on the physical condition and the utilization of the elevator.<br />
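How such raw telemetry could be aggregated into dashboard figures can be sketched as follows. The metric names and the vibration threshold are invented for illustration and are not taken from any real elevator monitoring product.<br />

```python
# Minimal sketch: aggregate raw elevator telemetry into simple dashboard
# figures (condition + utilization). Thresholds and names are assumptions.
from statistics import mean

def condition_report(speed_mps, vibration_g, door_cycles, trips):
    """Summarize one reporting period of elevator sensor data."""
    peak_vibration = max(vibration_g)
    return {
        "avg_speed_mps": round(mean(speed_mps), 2),   # ride performance
        "peak_vibration_g": peak_vibration,           # physical condition
        "door_cycles": door_cycles,                   # wear indicator
        "utilization_trips": trips,                   # utilization
        "vibration_alert": peak_vibration > 0.5,      # assumed alert threshold
    }
```

A report like this could also serve as input for the remote inspections discussed below, since it condenses the physical condition of the elevator into a few reviewable figures.<br />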
<br />
One pain point for building operators is the usually mandatory on-site inspections by a third-party inspection service. Using advanced remote monitoring services based on a digital twin of the elevator, some countries already allow a combination of remote and on-site inspections. For example, instead of 12 on-site inspections per year, this could be reduced to 4 on-site inspections, with 8 inspections being performed remotely. This helps save costs and reduces operational interruptions due to inspection work.<br />
<br />
The Digital Twin concept helps bring together all relevant data, and allows semantic mappings between data from different perspectives, created during different stages of the lifecycle.<br />
<br />
[[File:DT Example - Elevator.png|800px|frameless|center|Holistic DT Example - Elevator]]<br />
<br />
= Digital Twin Roadmap=<br />
From the execution perspective, a key question is how to design a realistic roadmap for the different types of Digital Twins we have looked at here. The following provides two examples, one from the automotive perspective and one from the building perspective.<br />
== Operational Digital Twin (Vehicle example)==<br />
Let us assume an OEM wants to introduce the Digital Twin concept as part of their Software-defined Vehicle initiative. Over time, all key elements of the vehicle should be represented on the software layer as Digital Twin components. How should this be approached?<br />
<br />
Importantly, this should be done step by step, or more precisely, use case by use case. Developing a Digital Twin for a complex physical product can be a huge effort. The risk of doing this without specific use cases and interim releases is that the duration and cost involved will lead to a cancellation of the effort before it can be finished. This is why it is better to select specific use cases, develop the Digital Twin elements required for them, release them, and show value creation along the way. Over time, the Digital Twin can then develop into an abstraction layer that covers the entire asset, hopefully enabling reuse for many different applications and use cases.<br />
<br />
[[File:DT Evolution.png|800px|frameless|center|link=|DT Evolution]]<br />
<br />
== Holistic Digital Twin (Building example) ==<br />
A good example of the use of a holistic Digital Twin concept from design to operation and maintenance is the digital building lifecycle:<br />
* During the building design phase, the BIM (Building Information Model) approach can help optimize the design with simulation and automated validation. This way, aspects such as future operational sustainability and capacity can be evaluated. Automated design validation provides a higher level of planning safety.<br />
* During the building construction process, AIoT-enabled solutions such as robot-based construction progress monitoring can provide transparency and reliability. Meeting budgets and timelines can be better ensured.<br />
* Sub-systems like elevators can also be integrated into the Digital Twin approach, as discussed in the previous section.<br />
* Finally, building inspection can be supported by solutions such as the Drone-based façade inspection. The results of the façade inspection can be mapped back to the Digital Twin, augmenting the planning data with real-world as-is data.<br />
<br />
The decision for a BIM / Digital Twin-based approach for building and construction is strategic. Upfront investments will have to be made, which must be recuperated through efficiency increases further downstream. The holistic Digital Twin approach is promising here, but requires a certain level of stringency to be successful.<br />
<br />
[[File:DT Evolution Building.png|800px|frameless|center|DT Evolution - Building Example]]<br />
<br />
= Expert Opinion =<br />
The following short interview with Dominic Kurtaz (Managing Director for Dassault Systèmes in Central Europe) highlights the experience that a global PLM company is currently making with its customers in the area of AIoT and Digital Twins.<br />
<br />
Dirk Slama: ''Welcome Dominic. Can you briefly introduce your company?''<br />
<br />
Dominic Kurtaz: ''Dassault Systèmes consists of 20,000 inspired people around the world, developing software solutions and supporting clients in the manufacturing, healthcare and life science sector, as well as the infrastructure sector. We help to digitally design and manufacture more than 1 in 4 of the physical products you touch every day, with a focus on how they are being used by the end users and consumers. We believe that the virtual world can enhance and improve the overall physical world toward a more sustainable world, which I think is probably a good segue to the whole topic of AIoT.''<br />
<br />
Dirk: ''In this context, AIoT and Digital Twins can play an important role as enablers. What kind of activities and investments are you currently seeing in this space?''<br />
<br />
Kurtaz: ''When people think of AIoT or IoT, they immediately think of operational performance measurements with sensors, predictive maintenance, and so on. Which of course is a very valid application, but we need to think far beyond that. This is why I like this concept of a holistic Digital Twin. We need to take a step back from IoT right now. When you are looking at the Experience Economy, you will see that the value that we perceive as customers and consumers is increasingly moving away from the actual product itself. Today, it is often much more about the end-to-end experience: how the product is perceived, how we select it, how we are using it, and how we dispose of and recycle the product. The end-to-end life cycle experience is clearly important. From my experience, we need to look at the IoT through the eyes of the customer and the eyes of the consumer. First, we have to understand how business strategies and business execution with AIoT can truly support and improve those aspects.''<br />
<br />
''Second, I believe that the Digital Twin is truly becoming pervasive across industries and all products. Take, for example, one of the most mundane products that we experience in our lives - the light bulb. If you go back 10 years, it was just this item at the end of the shopping list that you grab off the shop shelf without thinking much about it - you bought it, you screwed it in, you turned it on and off, and hopefully you would never have to think about that product again for the next few years…until it breaks.''<br />
<br />
''Today, this is fundamentally changing. I am not just buying a commodity product for my house anymore. I am buying something that is part of a connected ecosystem. I can set different moods at home using different light configurations. I can use smart lighting as part of my home security system. From a business perspective, this is a game changer. Light bulb manufacturers are no longer just producing light bulbs – today, they are connected to their customers. In the past, we did not know our customers or how they were using our products. Today, – enabled by the IoT – I can have a direct relationship with my customer. This will change things on many levels and opens up new business models.''<br />
<br />
''Thus far, we have only seen the tip of the iceberg: although many of the enabling technologies are reaching a good level of maturity, the actual implementations are often still very immature and limited to those basic connectivity features – but not delivering the holistic Digital Twin experience. For example, I have recently bought a new kitchen, including connectivity to my smart home. Now I can control and integrate it into my own kitchen facilities. This is really good and interesting, and delivers additional features, but I was not able to experience and understand the value it can really bring until after I had purchased all of those IoT-enabled and connected products. And in today's world, I should have been able to use a Digital Twin of the product prior to my purchase, to fully understand not just the product, but its behavior, context, and operational aspects after buying it - and that is simply not yet possible. Take, as another example, mobility. As a customer, I should be able to experience all these new features, such as advanced driver assistance, before I acquire the physical product – enabled by a holistic Digital Twin. I really want to be able to experience in the virtual world how these products are going to behave, before using them in the physical world. This is also very helpful for product development, because it allows us to validate the customer experience in the virtual world – before making expensive investments in physical prototypes.''<br />
<br />
''From what I am seeing from our customers, this is not just a hype or a fad. I think it is absolutely mission critical for anybody who is designing and manufacturing products, and dealing with the digital experience of those products. We see this across all industries where we are operating: manufacturing, healthcare, life science, and infrastructure.''<br />
<br />
Dirk Slama: ''What are your recommendations from the implementation perspective?''<br />
<br />
Dominic Kurtaz: ''You need a clear focus on the end user experience that you are trying to deliver. This will determine the holistic design philosophy you need to apply. Many companies have started with Big Data, and they are now drowning in it. The problem is to find and connect the data that are relevant for the end user experience. The connection of digital, semantic models with data will open up potential for all industries. Of course, this has to be done step-by-step, use case by use case – building up the holistic Digital Twin with a clearly value-driven approach.''<br />
<br />
''Another key aspect is the alignment between the digital supply chain and the physical supply chain. For the IT, we have Continuous Integration and Continuous Delivery (CI/CD). For the physical product, we have simultaneous engineering and closed loop PLM. The challenge is now to close the even bigger loop around all of this– bringing IT DevOps together with physical product engineering. This is exactly where AIoT and Digital Twin will play an important role. AIoT enables new digital/physical product features. And the Digital Twin is the semantic interface between the digital and the physical world. During design and development, the Digital Twin helps create the required interfaces at the technical and the organizational levels. During runtime, it enables a new customer experience.''</div>Sangamithra Panneer Selvamhttps://www.digitalplaybook.org/index.php?title=AIoT_Data_Strategy&diff=7045AIoT Data Strategy2022-03-29T14:50:31Z<p>Sangamithra Panneer Selvam: </p>
<hr />
<div><br />
<imagemap><br />
Image:1.5-DataStrategy.png|frameless|1000px|Overview of Ignite AIoT Framework<br />
<br />
rect 4 0 651 133 [[AIoT_Framework|More...]]<br />
rect 970 0 1298 133 [[AIoT_Data_Strategy|More...]]<br />
rect 651 0 970 133 [[Artificial_Intelligence|More...]]<br />
rect 1298 0 1767 133 [[Digital_Twin_Execution|More...]]<br />
rect 1767 0 2095 133 [[Internet_of_Things|More...]]<br />
rect 2095 0 2542 133 [[Hardware.exe|Hardware.exe]]<br />
<br />
rect 2764 128 3539 257 [[Product_Architecture|More...]]<br />
rect 2764 257 3539 390 [[Agile AIoT|More...]]<br />
rect 2764 385 3539 518 [[AIoT_DevOps_and_Infrastructure|More...]]<br />
rect 2764 518 3539 651 [[Trust_and_Security|More...]]<br />
rect 2764 651 3539 784 [[Reliability_and_Resilience|More...]]<br />
rect 2764 779 3539 917 [[Verification_and_Validation|More...]]<br />
<br />
desc none<br />
</imagemap><br />
<s data-category="AIoTFramework"></s><br />
<br />
__NOTOC__<br />
<br />
As part of their digital transformation initiatives, many companies are putting data strategies at the center stage. Most enterprise data strategies are a mixture of high-level vision, strategic principles, goal definitions, priority setting, data governance models, architecture tools and best practices for managing semantics and deriving information from raw data.<br />
<br />
Since both AI and IoT are also very much about data, every AIoT initiative should also adopt a data strategy. However, it is important to note that this data strategy must work on the level of an individual AIoT-enabled product or solution, not the entire enterprise (unless, of course, the enterprise is pretty much built around said product/solution). This section of the AIoT Framework proposes a structure for an AIoT Data Strategy and identifies the typical dependencies that must be managed.<br />
<br />
__TOC__<br />
<br />
= Overview =<br />
<br />
The AIoT Data Strategy proposed by the AIoT Framework is designed to work well for AIoT product/solution initiatives in the context of a larger enterprise. Consequently, it focuses on supporting product/solution implementation and long-term evolution and tries to avoid replicating typical elements of an enterprise data strategy.<br />
<br />
[[File:1.5-DSDetails.png|800px|frameless|center|link=|AIoT Data Strategy]]<br />
<br />
The AIoT Data Strategy has four main elements. First, a prioritization framework that aims to make the relationship between use cases and their data needs visible. Second, management of the data-specific implementation aspects, as well as Data Lifecycle Management. Third, the Data Capabilities required to support the data strategy. Fourth, a lean and efficient Data Governance approach designed to work on the product/solution level.<br />
<br />
Of course, each of these four elements of the AIoT Data Strategy has to be seen in the context of the enterprise that is hosting product/solution development: Enterprise Business Strategy must be well aligned with the use cases. Data-specific implementation projects frequently have to take cross-organization dependencies into consideration, e.g., if data are imported or exported across the boundaries of the current AIoT product/solution. Product/solution-specific data capabilities must be aligned with the existing enterprise capabilities. Product/solution-specific data governance always has to take existing enterprise-level governance into consideration.<br />
<br />
= Business Alignment & Prioritization =<br />
The starting point for business alignment and prioritization should be the actual use cases, which are defined and prioritized by business sponsors, or Epics which have been prioritized in the agile backlog. Sometimes, Epics might be too coarse-grained. In this case, Features can be used instead.<br />
<br />
For each Use Case/Epic, an analysis from the data perspective should be completed:<br />
* What are the actual data needs to support the Use Case/Epic?<br />
* Which of these data are believed to be already available, and which must be newly acquired?<br />
* How can the required data quality be ensured for the particular use case?<br />
* What are potential financial aspects of the data acquisition?<br />
* How do the use cases support the monetization side of things?<br />
* Is this a case where the required data adds functional value to the use case, or is there a direct data monetization aspect to it?<br />
* What are the relationships between the identified data and the other elements of the AIoT Data Strategy: Implementation & Data Lifecycle Management, specific capabilities applying to this particular kind of data, and Data Governance?<br />
<br />
A key aspect of the analysis will be the '''Data Acquisition''' perspective. For data that can (at least theoretically) be acquired within the boundaries of the AIoT product/solution organization, the following questions should be answered:<br />
* Is the required technical infrastructure already available?<br />
* Does the team have the required capabilities and resources available?<br />
* Especially in the case of AIoT data acquired via sensors:<br />
** Are new sensors required?<br />
** If so, what is the additional development and unit cost?<br />
** Is there an additional downstream cost from the asset/sensor line-fit point of view (i.e. additional manufacturing costs)?<br />
** What is the impact on the business plan?<br />
** What is the impact on the project plan?<br />
** What are the technical risks for new, unknown sensor technologies?<br />
** What are required steps in terms of sourcing and procurement?<br />
<br />
For data that need to be acquired from other business units, a number of additional questions will have to be answered:<br />
* Is it technically feasible to access the data (availability of APIs, bandwidth, support of required data access frequency and volume, etc.)?<br />
* Can the neighboring business unit support your requirements, not only in terms of technical access, but also in terms of project support and timelines?<br />
* Are there costs involved in technical implementation and/or data access (internal billing)?<br />
* Are there potential limitations or restrictions due to existing internal data governance guidelines, regional or organizational boundaries, etc.?<br />
<br />
For data that have to be acquired from external partners or suppliers, there are typically a number of additional complexities that will have to be addressed:<br />
* Technical feasibility across enterprise boundaries<br />
* Legal framework required for data access<br />
* SLA assurance<br />
* Billing and cost management<br />
<br />
Based on all of the above, the team should be able to assess the overall feasibility and costs/efforts involved on a per use case/per data item basis. This information is then used as part of the overall prioritization process.<br />
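The per-use-case assessment described above can be sketched as a simple aggregation: each data item carries feasibility and cost notes, which roll up into a summary per use case. All names and figures here are hypothetical, purely to illustrate the structure of such an assessment.<br />

```python
# Illustrative sketch (names and numbers assumed) of rolling up per-data-item
# feasibility and acquisition cost into a per-use-case summary for prioritization.

def assess_use_case(use_case, data_items):
    """Summarize feasibility and cost across the data items of one use case."""
    feasible = all(item["feasible"] for item in data_items)
    total_cost = sum(item.get("acquisition_cost", 0) for item in data_items)
    return {"use_case": use_case, "feasible": feasible,
            "total_acquisition_cost": total_cost}

# Example: two hypothetical data items for the leakage detection use case
assessment = assess_use_case("leakage detection", [
    {"name": "ultrasound samples", "feasible": True, "acquisition_cost": 12000},
    {"name": "pressure readings", "feasible": True, "acquisition_cost": 3000},
])
```

In practice the assessment would of course carry more dimensions (risks, timelines, sourcing status), but even this minimal roll-up makes the cost side of each use case comparable.<br />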
<br />
= Data Pipeline: Implementation & Data Lifecycle Management =<br />
Sometimes it can be difficult to separate data-specific implementation aspects from general implementation aspects. This is an issue that the AIoT Data Strategy needs to deal with to avoid redundant efforts. Typical data-specific implementation and Data Lifecycle Management aspects include the following:<br />
* Data Ingestion: In our context, data ingestion should first be seen as moving data from outside our organization's boundary to within it. Second, technical aspects such as stream vs. batch processing need to be addressed. Typical data ingestion tasks also include cleansing and quality assurance.<br />
* Storage: Depending on the business and technical requirements, data can be stored permanently or temporarily, structured or unstructured, with or without backup, with cache-only or with operational/transactional support, etc. This often needs to be addressed differently for different data types.<br />
* Integration: Data integration is the process of merging data from different sources into a single, unified view. In the case of AIoT, this can be -- for example -- sensor data fusion, done close to the sensors in the edge layer. Or it can be -- usually on a high-level of abstraction -- a real-time data stream integration process. Or it can be -- typically further in the backend -- a batch-oriented integration process.<br />
* Transformation: Many projects spend much time with data transformation, since this is often a prerequisite for data integration or further data processing. The approaches chosen usually vary widely depending on the format, structure, complexity, and volume of the data being transformed.<br />
* Modeling: Data modeling is usually a key step toward dealing with the semantics of data and deriving information from raw data. There are different levels of data modeling, including conceptual, logical and physical levels. Another important type of model, building on top of data models, is the AI/ML model. However, these models are usually less data-structure oriented and more mathematical/statistical in nature.<br />
* Validation: Data validation is the tool that helps ensure data quality, e.g., by applying data cleansing and validation checks. Data validation can use simple, local "validation rules" or "validation constraints" that check for correctness and meaningfulness (e.g., a date of birth cannot be in the future). In some cases, data validation can actually be much more complex, e.g., involving interactions with remote systems, or even AI/ML-based validation algorithms.<br />
* Analysis: In many cases, data analysis is a key use case alongside, for example, transactional use of the data. Generally, data analysis supports the discovery of useful information and supports decision-making. Data analysis is a multifaceted topic; it is key that the required Data Capabilities are provided to support it.<br />
* Access Control & Security: Finally, effectively ensuring confidentiality and secure handling of data must be part of every AIoT data strategy. This includes both IoT data coming from assets and data coming from users, other business units, or even external data sources. While security is sometimes dealt with on a different level, fine-grained data access control must usually be dealt with as part of the data strategy.<br />
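The stages listed above can be chained into a minimal pipeline sketch. The field names are assumptions for illustration; the date-of-birth rule is the local validation example mentioned in the Validation item.<br />

```python
# Minimal sketch of a data pipeline chaining ingestion -> validation ->
# transformation, as described above. Record structure is assumed.
from datetime import date

def validate(record):
    """Local validation rule: a date of birth cannot be in the future."""
    dob = record.get("date_of_birth")
    return dob is not None and dob <= date.today()

def transform(record):
    """Normalize a raw record into the unified target structure."""
    return {"dob": record["date_of_birth"].isoformat(),
            "name": record["name"].strip().title()}

def ingest(raw_records):
    """Batch ingestion: keep only records that pass validation."""
    return [transform(r) for r in raw_records if validate(r)]
```

In a real AIoT system the same chain would typically exist twice, once stream-oriented near the edge and once batch-oriented in the backend, as the Integration item above suggests.<br />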
<br />
Finally, another key aspect of Implementation & Data Lifecycle Management is dealing with cross-organizational dependencies. While the earlier data acquisition phase might have already answered some of the high-level questions related to this topic, on the implementation level efficient stakeholder management is a key success factor. Often, earlier agreements with respect to technical data access or commercial conditions will have to be reviewed, revised or refined during the implementation phase. Some practitioners say that this can sometimes be more difficult in the case of cross-divisional data integration within one enterprise than across enterprise boundaries.<br />
<br />
= Data Capabilities and Resource Availability =<br />
Data-related capabilities can be important in a number of different areas, including:<br />
* Skills: Data-related skills can include a number of areas, including specific data-processing technologies and mathematical, statistical, or algorithmic skills in AI/ML, etc.<br />
* Technology: For an AIoT product/solution initiative, it is usually important that technical management agrees on a fixed set of technologies that covers most of the required use cases, e.g., batch vs real-time processing, basic analytics vs AI/ML, etc.<br />
* Processes & Methods: Depending on the specific environment, this can also be a very important aspect. Data-related processes and methods can be specific to a certain analytics method, or they can be related to certain processes and methods defined by an enterprise organization as mandatory.<br />
<br />
Depending on the project requirements, it is also important that specific capabilities be supported by appropriate resources. For example, if it is clear that an AIoT project will require the development of certain AI/ML algorithms, then the project management will have to ensure that this particular capability is supported by skilled resources that are available during the required time period. Managing the availability of such highly specialized resources is a topic that can be difficult to align with the pure agile project management paradigm and might require longer-term planning, involving alignment with HR or sourcing/procurement.<br />
<br />
= Data Governance =<br />
Finally, larger AIoT product/solution initiatives will require Data Governance as part of their Data Strategy. This Data Governance cannot be compared with a Data Governance approach typically found on the enterprise level. It needs to be lightweight and pragmatic, covering basic aspects such as:<br />
* Data & Trust Policies: How is this specific AIoT product/solution dealing with this topic? This is likely to be very use case specific, so the AIoT initiative will have to build on generic enterprise-level requirements but will have to add policies specific to its own use case. <br />
* Data Architecture: It is not always clear if data architecture is a discipline on its own, or if this is simply one facet of the product/solution architecture. For example, the AIoT Framework has a dedicated viewpoint to support the combination of [[AIoT_Data_and_Functional_Viewpoint|data and functionality]].<br />
* Data Lineage: Data lineage traces where data originate, what happens with them on the way, and where they move over time. Data lineage provides visibility and transparency and can help simplify root cause analysis in the data analytics process. Data Governance can either support the central documentation of data lineage or provide tools and best practices for implementation teams.<br />
* Metadata Management and Data Catalog: Efficient management of metadata is a prerequisite for efficient data processing and analytics. Types of metadata include descriptive, structural and administrative. A data catalog can provide support for metadata management, together with other tools, such as search. <br />
* Data Model Management: For many AIoT applications, centrally managing a high-level data model that describes key entities and their relationships, as well as dependencies on different use cases and components, can be of great help in creating transparency and improving alignment between different teams. The AIoT Framework proposes a lightweight [[AIoT_Data_and_Functional_Viewpoint#Data_Domain_Model|AIoT Domain Model]] approach. In addition, the Data Governance team could also provide tooling and best practices for teams that need more detailed models in their areas. This can also be linked back to the Metadata Management and Data Catalog topics.<br />
* API Management: In his famous "API Mandate", Amazon CEO Jeff Bezos declared that ''"All teams will henceforth expose their data and functionality through service interfaces."'' at Amazon. This executive-level support for an API-centric way of dealing with data exchange (and exposing component functionality) shows how important API management has become at the enterprise level. The success of an AIoT initiative will also depend strongly on it. If there is no enterprise-wide API infrastructure and management approach available, this is a key support element that must be provided and enforced by the Data Governance team.<br />
<br />
Finally, the Data Governance / Data Strategy team should define a set of KPIs by which it can measure its own success and the effectiveness and efficiency of the AIoT Data Strategy.<br />
<br />
= Authors and Contributors =<br />
<br />
{|{{Borderstyle-author}}<br />
|{{Designstyle-author|Image=[[File:Dirk Slama.jpeg|left|100px]]|author={{Dirk Slama|Title=AUTHOR}}}}<br />
|}</div>Sangamithra Panneer Selvamhttps://www.digitalplaybook.org/index.php?title=Artificial_Intelligence&diff=7044Artificial Intelligence2022-03-29T14:50:14Z<p>Sangamithra Panneer Selvam: </p>
<hr />
<div><br />
<imagemap><br />
Image:1.3-AI.png|frameless|1000px|Ignite AIoT - Artificial Intelligence<br />
<br />
rect 4 0 651 133 [[AIoT_Framework|More...]]<br />
rect 970 0 1298 133 [[AIoT_Data_Strategy|More...]]<br />
rect 651 0 970 133 [[Artificial_Intelligence|More...]]<br />
rect 1298 0 1767 133 [[Digital_Twin_Execution|More...]]<br />
rect 1767 0 2095 133 [[Internet_of_Things|More...]]<br />
rect 2095 0 2542 133 [[Hardware.exe|Hardware.exe]]<br />
<br />
rect 2764 128 3539 257 [[Product_Architecture|More...]]<br />
rect 2764 257 3539 390 [[Agile AIoT|More...]]<br />
rect 2764 385 3539 518 [[AIoT_DevOps_and_Infrastructure|More...]]<br />
rect 2764 518 3539 651 [[Trust_and_Security|More...]]<br />
rect 2764 651 3539 784 [[Reliability_and_Resilience|More...]]<br />
rect 2764 779 3539 917 [[Verification_and_Validation|More...]]<br />
<br />
desc none<br />
</imagemap><br />
<br />
<s data-category="AIoTFramework"></s><br />
<br />
__NOTOC__<br />
<br />
Naturally, AI plays a central role in every AIoT initiative. If this is not the case, then it is perhaps IoT - but not AIoT. In order to get the AI part right, the AIoT Playbook proposes starting with the definition of the AI-enabled value proposition in the context of the larger IoT system. Next, the AI approach should be fleshed out in more detail. Before starting the implementation, one will also have to address skills, resources and organizational aspects. Next, data acquisition and AI platform selection are on the agenda, before actually designing and testing the model and then building and integrating the AI Microservices. Establishing MLops is another key prerequisite for enabling an agile approach, which should include PoC, MVP and continuous AI improvements.<br />
__TOC__<br />
=Understanding the Bigger Picture=<br />
Many AIoT initiatives initially only have a vague idea about the use cases and how they can be supported by AI. It is important that this is clarified in the early stages. The team must identify and flesh out the key use cases (including KPIs) and how they are supported by AIoT. Next, one should identify what kind of analysis or forecasting is required to support these KPIs. Based on this, potential sensors can be identified to serve as the main data source. In addition, the AIoT system architecture must be defined. Both will have implications for the type of AI/ML that can be applied.<br />
[[File:1.2-AIValueProp.png|800px|frameless|center|link=|AI Value Proposition and IoT]]<br />
<br />
=The AIoT Magic Triangle <span id="MagicTriangle"></span>=<br />
The AIoT Magic Triangle describes the three main driving forces of a typical AIoT solution:<br />
* IoT Sensors & data sources: What sensors can be used, taking physical constraints, cost and availability into consideration? What does this mean for the type of sensor data/measurements which will be available? What other data sources can be accessed? And how can relevant data sets be created?<br />
* AIoT system architecture: What does the overall architecture look like, e.g., how are data and processing logic distributed between cloud and edge? What kind of data management and AI processing infrastructure can be used?<br />
* AI algorithm: Finally, which AI method/algorithm can be used, based on the available data and selected system architecture?<br />
<br />
The AIoT magic triangle also looks at the main factors that influence these three elements:<br />
* Business requirements/KPIs, e.g., required classification accuracy<br />
* UX requirements, e.g., expected response times<br />
* Technical/physical constraints, e.g., bandwidth and latency<br />
<br />
[[File:1.2-AIoTMagicTriangle.png|800px|frameless|center|link=|The AIoT magic triangle]]<br />
<br />
The AIoT magic triangle is definitely important for anybody working on the AIoT short tail (i.e., products), where there are different options for defining any of the three elements of the triangle. For projects focusing on the AIoT long tail, the triangle might be less relevant - simply because in AIoT long tail scenarios, the available sensors and data sources are often predefined, as is the architecture into which the new solutions have to fit. Keep in mind that the AIoT long tail usually involves multiple, lower-impact AIoT solutions that share a common platform or environment, so freedom of choice might be limited.<br />
<br />
= Managing the AIoT Magic Triangle =<br />
As a product/project manager, managing the AIoT magic triangle can be very challenging. The problem is that the three main elements have very different lifecycle requirements in terms of stability and changeability:<br />
* The IoT sensor design/selection must be frozen earlier in the lifecycle, since the sensor nodes will have to be sourced/manufactured/assembled - which means potentially long lead times<br />
* The AIoT System Architecture must usually also be frozen some time later, since a stable platform will be required at some point in time to support development and productization<br />
* The AI Method will also have to be fixed at some point in time, while the actual AI model is likely to continuously change and evolve. Therefore, it is vital that the AIoT System Architecture supports remote monitoring and updates of AI models deployed to assets in the field<br />
<br />
The following diagram shows the typical evolution of the AIoT magic triangle in the time leading up to the launch of the system (including the potential Start of Production of the required hardware). <br />
<br />
[[File:1.2-AIoTMagicTriangleEvolution.png|800px|frameless|center|link=|AIoT Magic Triangle Evolution]]<br />
<br />
Especially in the early phase of an AIoT project, it is important that all three angles of the AIoT magic triangle are tried out and brought together. A Proof-of-Concept or even a more thorough pilot project should be executed successfully before the next stages are addressed, where the elements of the magic triangle are frozen from a design spec point of view, step by step.<br />
<br />
= First: Project Blueprint =<br />
Establishing a solid project blueprint as early as possible in the project will help align all stakeholders and ensure that all are working toward a common goal. The project blueprint should include an initial system design, as well as a strategy for training data acquisition. A proof-of-concept will help validate the project blueprint.<br />
<br />
== Proof-of-Concept ==<br />
In the early stages of the evaluation, it is common to implement a Proof-of-Concept (PoC). The PoC should provide evidence that the chosen AIoT system design is technically feasible and supports the business goals. This PoC is not to be confused with the MVP (Minimal Viable Product). For an AIoT solution or product, the PoC must identify the most suitable combination of sensors and data sources, AI algorithms, and AIoT system architecture. Initially, the PoC will usually rely on very restricted data sets for initial model training and testing. These initial data sets will be acquired through the selected sensors and data sources in a lab setting. Once the team is happy that it has found a good system design, more elaborate data sets can be acquired through additional lab test scenarios or even initial field tests.<br />
<br />
== Initial System Design ==<br />
After the PoC is completed successfully, the resulting system architecture should be documented and communicated with all relevant stakeholders. The system architecture must cover all three aspects of the AIoT magic triangle: sensors and data selection, AIoT architecture, and AI algorithm. As the project goes from PoC to MVP, all the assumptions have to be validated and frozen over time, so that the initial MVP can be released. Depending on the requirements of the project (first-time-right vs. continuous improvement), the system architecture might change again after the release of the MVP.<br />
<br />
It should be noted that changes to a system design always come at a cost. This cost will be higher the further the project is advanced. Changing a sensor spec after procurement contracts have been signed will come at a cost. Changing the design of any hardware component after the launch of the MVP will cause issues, potentially forcing existing customers to upgrade at extra cost. This is why a well-validated and stable system architecture is worth a lot. If continuous improvement is an essential part of the business plan, then the system architecture will have to be designed to support this. For example, by providing means for monitoring AI model performance in the field, allowing for continuous model retraining and redeployment, and so on.<br />
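To make the monitoring idea more concrete, the following sketch (all names and thresholds are hypothetical, not part of the AIoT Framework) flags a model for retraining when the distribution of incoming sensor data drifts away from the training distribution:<br />

```python
def drift_score(train_mean, train_std, window):
    """Standardized shift of a live data window against the training distribution."""
    if not window or train_std == 0:
        return 0.0
    live_mean = sum(window) / len(window)
    # Number of training standard deviations the live mean has drifted.
    return abs(live_mean - train_mean) / train_std

def needs_retraining(train_mean, train_std, window, threshold=3.0):
    """Flag the model for retraining when the input distribution has shifted."""
    return drift_score(train_mean, train_std, window) > threshold
```

In a real deployment, such a check would run on the edge device or in the backend over sliding windows of sensor readings, feeding the continuous retraining and redeployment loop described above.<br />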
<br />
== Define Strategy for Training Data Acquisition and Testing==<br />
In many AI projects, the acquisition of data for model training and testing is one of the most critical - and probably one of the most costly - project functions. This is why it is important to define the strategy for training data acquisition early on. There will usually be a strong dependency between system design and training data acquisition:<br />
* Training data acquisition will rely on the system architecture, e.g., sensor selection. The same sensor, which is defined by the system architecture, will also have to be used for the acquisition of the training data.<br />
* The system architecture will have to support training data acquisition. Ideally, the system used for training data acquisition should be the same system that is later put into production. Once the system is launched, the production system can often be used to acquire even more data for training and testing.<br />
<br />
Training data acquisition usually evolves alongside the system design - the two go hand in hand. In the early stages, the PoC environment is used to generate basic training data in a simple lab setup. In later stages, more mature system prototypes are deployed in the field, where they can generate even better and more realistic training data, covering an increasing number of real-world cases. Finally, if feasible, the production system can generate even more data from an entire production fleet.<br />
<br />
Advanced organizations are using the so-called "shadow mode" to test model improvements in production. In this mode, the new ML model is deployed alongside the production model. Both models are given the same data. The outputs of the new model are recorded but not actively used by the production system. This is a safe way of testing new models against real-world data, without exposing the production system to untested functionality. Again, methods such as the "shadow mode" must be supported by the system design, which is why all of this must go hand in hand.<br />
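As a minimal illustration of the shadow mode described above (function and variable names are made up for this sketch), both models receive identical inputs, but only the production output is acted upon:<br />

```python
def run_shadow_mode(samples, production_model, candidate_model):
    """Run a candidate model in shadow mode next to the production model.

    Both models see identical inputs; only the production output is used,
    while candidate outputs are logged for offline evaluation.
    """
    live_outputs, shadow_log = [], []
    for sample in samples:
        live = production_model(sample)    # acted upon by the system
        shadow = candidate_model(sample)   # recorded, never acted upon
        live_outputs.append(live)
        shadow_log.append((sample, live, shadow, live == shadow))
    return live_outputs, shadow_log
```

Disagreements in the shadow log are exactly the "interesting" cases worth analyzing before promoting the candidate model to production.<br />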
<br />
= Second: Freeze IoT Sensor Selection =<br />
The selection of suitable IoT Sensors can be a complex task, including business, functional and technical considerations. Especially in the early phase of the project, the sensor selection process will have to be closely aligned with the other two elements of the AIoT magic triangle to allow room for experimentation. The following summarizes some of the factors that must be weighed for sensor selection before making the final decision:<br />
* Functional feasibility: does the sensor deliver the right data?<br />
* Response speed: does it capture time-sensitive events at the right speed?<br />
* Sensing range: does it cover the required sensing range?<br />
* Repetition accuracy: does it treat similar events equally?<br />
* Adaptability: can the sensor be configured as required, are all required interfaces openly accessible?<br />
* Form factor: Size, shape, mounting type<br />
* Suitability for target environment: ruggedness, protection class, temperature sensitivity<br />
* Power supply: voltage range, power consumption, electrical connection<br />
* Cost: What is the cost for sensor acquisition? What about additional operations costs (direct and indirect)?<br />
<br />
Of course, sensor selection cannot be performed in isolation. Especially in the early phase, it is important that sensor candidates be tested in combination with potential AI methods. However, once the team is convinced at the Proof-of-Concept level that a specific combination of sensors, AI architecture and AI method is working, the decision for the sensor is the first one that must be frozen, since the acquisition of the sensors will have the longest lead time. Additionally, once this decision is fixed, it will be very difficult to change. For more details on IoT and sensors, refer to the [[Internet_of_Things_101|AIoT 101]] and [[Internet_of_Things|IoT.exe]] discussion.<br />
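One simple, hypothetical way to structure the weighing of these factors is a weighted decision matrix; the criteria, weights and ratings below are purely illustrative and would have to be agreed on by the team:<br />

```python
def score_sensor(weights, ratings):
    """Weighted sum over the selection criteria (higher is better)."""
    return sum(weights[c] * ratings[c] for c in weights)

# Illustrative criteria weights (must sum to 1.0 by convention here).
weights = {"feasibility": 0.4, "accuracy": 0.3, "form_factor": 0.1, "cost": 0.2}

# Ratings per candidate sensor on a 1-5 scale; for "cost", cheaper means
# a higher rating. All numbers are made up for this sketch.
candidates = {
    "sensor_a": {"feasibility": 5, "accuracy": 4, "form_factor": 3, "cost": 2},
    "sensor_b": {"feasibility": 4, "accuracy": 4, "form_factor": 5, "cost": 4},
}

best = max(candidates, key=lambda name: score_sensor(weights, candidates[name]))
```

Such a matrix does not replace lab testing, but it makes the trade-offs explicit and documents why a sensor was chosen before the design is frozen.<br />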
<br />
= Third: Freeze AIoT System Architecture =<br />
<br />
The acquisition of an AI platform is not only a technical decision but also encompasses strategic aspects (cloud vs. on premises), sourcing, and procurement. The latter should not be underestimated, especially in larger companies. The often lengthy decision-making processes of technology acquisition/procurement can potentially derail an otherwise well-planned project schedule.<br />
<br />
However, what actually constitutes an AI system architecture? Some key elements are as follows:<br />
* Distributed system architecture: how much processing should be done on the edge, how much in the cloud? How are AI models distributed to the edge, e.g., via OTA? How can AI model performance be monitored at the edge? This is discussed in depth in the [[Data_101#Edge_vs_Cloud|AIoT 101]], as well as the [[AIoT_Data_and_Functional_Viewpoint|data/functional viewpoint]] of the AIoT Product/Solution Design.<br />
* AI system architecture: How is [[AIoT_Implementation_Viewpoint#AI_Pipeline_Architecture|model training and testing]] organized? How is [[AIoT_DevOps_and_Infrastructure#Agile_DevOps_for_AI%3A_MLOps|MLops]] supported? <br />
* Data pipeline: How are data ingestion, storage, transformation and preparation managed? This is discussed in the [[AIoT_Data_Strategy|Data.exe]] part.<br />
* AI platform: Finally, should a dedicated AI platform be acquired, which supports collaboration between different stakeholders? This is discussed at the end of this chapter.<br />
<br />
= Fourth: Acquisition of Training Data=<br />
Potentially one of the most resource-intensive tasks of an AIoT project is the acquisition of the training data. This is usually an ongoing effort, which starts in the early project phase. Depending on the product category, this task will either go on until the product design freeze ("first-time-right") or continue as an ongoing activity (continuous model improvements). In the context of AIoT, we can identify a number of different product categories:<br />
* Category I: mechanical or electro-mechanical products with no intelligence on board.<br />
* Category II: software-defined products, where the intelligence is encoded in hand-coded rules or software algorithms.<br />
* Category III: "first-time-right" products, which cannot be changed or updated after manufacturing. For example, a battery-operated fire alarm might use embedded AI for smoke analysis and fire detection. However, since it is a battery-operated, lightweight product, it does not contain any connectivity, which would be the prerequisite for later product updates, e.g., via OTA.<br />
* Category IV: connected test fleets. These are usually used to support the generation of additional test data, as well as the validation of model training results. A category III product can be created using a category IV test fleet. For example, a manufacturer of fire alarms might produce a test fleet of dozens or even hundreds of fire alarm test systems equipped with connectivity for testing purposes. This test fleet is then used to help finalize the "first-time-right" version of the fire alarm, which is mass-produced without connectivity.<br />
* Category V: connected production fleets. A category IV test fleet can also be the starting point for developing an AI that then moves into a production environment with connected assets or products in the field. Such a category V system uses the connectivity of the entire fleet to continuously improve the AI and re-deploy updated models using OTA.<br />
A self-supervised fleet of smart, connected products (category V) is the ideal approach. However, due to technical constraints (e.g., battery lifetime) or cost considerations, this might not always be possible.<br />
<br />
This approach of classifying AIoT product categories was introduced by Marcus Schuster, who heads the embedded AI project at Bosch. It is a helpful tool to discuss requirements and manage expectations of stakeholders from different product categories. The following will look in more detail at two examples.<br />
<br />
[[File:AI Product Categories.png|800px|frameless|center|AI Product Categories]]<br />
<br />
== Example 1: "First-time-right" Fire Alarm ==<br />
The first example we want to look at is a fire alarm, e.g., used in residential or commercial buildings. A key part of the fire alarm is a smoke detector. Since smoke detectors usually have to be installed at different parts of the ceiling, one cannot always assume that a power line or even internet connectivity will be available. Especially if they are battery operated, wireless connectivity usually is not an option either, because it would consume too much energy. This means that any AI-enabled smoke detection algorithm will have to be "first-time-right", and implemented on a low-power embedded platform. Sensors used for smoke detection usually include photoelectric and ionization sensors.<br />
<br />
In this example, the first product iteration is developed as a proof-of-concept, which helps validate all the assumptions that must be made according to the AIoT magic triangle: sensor selection, distribution architecture, and AI model selection. Once this is stabilized, a data campaign is executed which uses connected smoke sensors in a test lab to create data sets for model training, covering as many different situations as possible. The scenarios covered include real smoke coming from different sources (real fires, or canned smoke detector tester spray), nuisance smoke (e.g., from cooking or smoking), as well as no smoke (ambient).<br />
<br />
The data sets from this data campaign are then validated and organized as the foundation for creating the final product, where the trained AI algorithm is put into silicon, e.g., using TinyML and an embedded platform, or even by creating a custom ASIC (application-specific integrated circuit). This standardized, "first-time-right" hardware is then embedded into the mass-manufactured smoke detectors. This means that after the Start of Production (SOP), no more changes to the model will be possible, at least not for the current product generation.<br />
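As a hedged illustration of one low-level step behind "putting the model into silicon", the sketch below shows symmetric int8 post-training quantization of model weights, a common step in TinyML toolchains. This is a simplified stand-in, not the actual toolchain used for any specific product; the helper names are hypothetical:<br />

```python
def quantize_int8(weights):
    """Symmetric post-training quantization of float weights to int8."""
    # One scale per tensor, derived from the largest absolute weight.
    scale = max(abs(w) for w in weights) / 127 or 1.0
    quantized = [max(-128, min(127, round(w / scale))) for w in weights]
    return quantized, scale

def dequantize(quantized, scale):
    """Approximate reconstruction, as the embedded inference engine sees it."""
    return [q * scale for q in quantized]
```

The quantization error introduced here is one reason why the frozen model must be validated on the target hardware before SOP, not only in the training environment.<br />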
<br />
[[File:Example 1 - fire alarm.png|800px|frameless|center|Example: "first-time-right" AIoT product (fire alarm)]]<br />
<br />
== Example 2: Continuous Improvement of Driver Assistance Systems ==<br />
The second example is the development of a driver assistance system, e.g., to support highly automated driving. Usually, such systems and the situations they have to be able to deal with are an order of magnitude more complex than those of a basic, first-time-right type of product.<br />
<br />
Development of the initial models can be well supported by a simulation environment. For example, the simulation environment can simulate different traffic situations, which the driver assistance system will have to be able to handle. For this purpose, the AI is trained in the simulator.<br />
<br />
As a next step, a test fleet is created. This can be, for example, a fleet of normal cars, which undergo a retrofit with the required sensors and test equipment. Usually, the vehicles in the test fleet are connected, so that test data can be extracted, and updates can be applied.<br />
<br />
Once the system has reached a sufficient level of reliability, it will become part of a production system. From this moment onwards, it will have to perform under real-world conditions. Since a production system usually has many more individual vehicles than a test fleet, the amount of data which can now be captured is enormous. The challenge now is to extract from this huge data stream the data segments which are most relevant for enhancing the model. This can be done, for example, by selecting specific "scenes" from the fleet data which represent particularly relevant real-world situations on which the model has not yet been trained. A famous case here is the "white truck crossing a road making a u-turn on a bright, sunny day", since such a scenario once led to a fatal accident with a [https://www.tesla.com/blog/tragic-loss Tesla Autopilot].<br />
<br />
When comparing the "first-time-right" approach with the continuous improvement approach, it is important to notice that the choice of approach has a fundamental impact on the entire product design, and how it evolves in the long term. A first-time-right fire alarm is a much more basic product than a vehicle autopilot. The former can be trained using a data campaign which probably takes a couple of weeks, while the latter takes an entire product organization with thousands of AI and ML experts and data engineers, millions of cars on the road, and billions of test miles driven. But then the value creation is also hugely different. This is why it is important for a product manager to understand the nature of the product, and which approach to choose. <br />
<br />
[[File:Example 2 - driver assistance.png|800px|frameless|center|Example: continuous improvement of AI models (driver assistance)]]<br />
<br />
== The AIoT Data Loop ==<br />
Getting feedback from the performance of the products in the field and applying this feedback to improve the AI models is key for ensuring that products are perfected over time, and that the models adapt to any potential changes in the environment. For connected products, the updated models can be re-deployed via OTA. For unconnected products, the learning can be applied to the next product generation.<br />
<br />
The problem with many AIoT-enabled systems is: how to identify areas for improvement? With physical products used in the field, this can be tricky. Ideally, edge-based model monitoring will automatically filter out all standard data, and only report "interesting" cases to the backend for further processing. But how can the system decide which cases are interesting? For this, one usually needs to find an ingenious approach, which often will not be obvious in the first place.<br />
<br />
For example, for automated driving, the team could deploy an AI running in so-called shadow mode. This means the human driver is controlling the car, and the AI is running in parallel, making its own decisions but without actually using them to control the car. Every time the AI makes a decision different from the one of the human driver, this could be of interest. Or, let us take our vacuum robot example. The robot could try to capture situations which indicate sub-optimal product performance, e.g., the vacuum being stuck, or even being manually lifted by the homeowner. Another example is leakage detection for pneumatic systems, using sound pattern analysis. Every time the on-site technician is not happy with the system's recommendations, he could make this known to the system, which in turn would capture the relevant data and mark it for further analysis in the back-office.<br />
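One simple way to sketch such an "interesting case" filter on the edge is a confidence gate; the thresholds and return values below are illustrative assumptions, not part of any of the products mentioned above:<br />

```python
def select_for_upload(confidence, low=0.6, high=0.9):
    """Decide on the edge device whether a sample is worth uploading.

    Confident, routine predictions are discarded locally; uncertain cases
    are flagged for labeling and model retraining in the backend.
    """
    if confidence < low:
        return "upload_for_labeling"   # model is clearly unsure
    if confidence < high:
        return "upload_sampled"        # borderline: upload a random subset
    return "discard"                   # routine case, little training value
```

In practice, this gate would be combined with domain-specific triggers such as the shadow-mode disagreements or the "vacuum stuck" signals described above.<br />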
<br />
The processing of the monitoring data which has been identified as relevant will often be a manual or at least semi-manual process. Domain experts will analyze the data and create new scenarios, which need to be taught to the AI. This will result in extensions to existing data sets (or even new data sets), and new labels which represent the new lessons learned. This will then be used as input to the model re-training. After this, the re-trained models can be re-deployed or used for the next product generation.<br />
<br />
[[File:Data Loop.png|800px|frameless|center|The AIoT Data Loop]]<br />
<br />
This means that in the AIoT Data Loop, data really is driving the development process. Marcus Schuster, project lead for embedded AI at Bosch, comments: ''Data driven development will have the same impact on engineering as the assembly line had on production. Let’s go about it with the necessary passion.''<br />
<br />
= Fifth: Productize the AI Approach =<br />
Based on the lessons learned from the Proof-of-Concept, the chosen AI approach must now be productized so that it can support real-world deployment. This includes refining the model inputs/outputs, choosing a suitable AI method/algorithm, and aligning the AI model metrics with UX and IoT system requirements.<br />
<br />
==Model Inputs/Outputs==<br />
A key part of the system design is the definition of the model inputs and outputs. These should be defined as early as possible and without any ambiguity. For the inputs, it is important to identify early on which data are realistic to acquire. Especially in an AIoT solution, it might not be possible, technically or from a cost point of view, to access certain data that would be ideal from an analytics point of view. In the usage-based insurance (UBI) example from above, the obvious choice would be to have access to the driving performance data via sensors embedded in the vehicle. This would require either that the insurance gain access to existing vehicle data or that a new, UBI-specific appliance be integrated into the vehicle. This is obviously a huge cost factor, and the insurance might look for ways to cut it, e.g., by requiring its customers to install a UBI app on their smartphones and trying to approximate the driving performance from these data instead.<br />
<br />
One can easily see that the choice of input data has a huge impact on the model design. In the UBI example, data coming directly from the vehicle will have a completely different quality than data coming from a smartphone, which might not always be in the car, etc. This means that UBI phone app data would require additional layers in the model to determine if the data are actually likely to be valid.<br />
<br />
It is also important that all the information needed to determine the model output is observable in the input. For example, if very blurry photos are used for manual labeling, the human labeling agent would not be able to produce meaningful labels, and the model would not be able to learn from it.<ref name="aiguide" /><br />
<br />
==Choosing the AI Algorithm==<br />
The choice of the AI method/algorithm will have a fundamental impact not only on the quality of the predictions but also on the requirements regarding data acquisition/data availability, data management, AI platforms, and skills and resources. If the AI method is truly at the core of the AIoT initiative, then these factors will have to be designed around the AI methods. However, this might not always be possible. For example, there might be existing restrictions with respect to available skills, or certain data management technologies that will have to be used. <br />
<br />
The following table provides an overview of typical applications of AI and the matching AI algorithms. The table is not complete, and the space is constantly evolving. When choosing an AI algorithm, it is important that the decision is not only based on the data science point of view but also simply from a feasibility point of view. An algorithm that provides perfect results but is not feasible (e.g., from the performance point of view) cannot be chosen.<br />
<br />
[[File:AI Selection Matrix.png|800px|frameless|center|AI Selection Matrix]]<br />
<br />
In the context of an AIoT initiative, it should be noted that the processing of IoT-generated sensor data will require specific AI methods/algorithms. This is because sensor data will often be provided in the form of streaming data, typically including a time stamp that makes the data a time series. For this type of data, specific AI/ML methods need to be applied, including data stream clustering, pattern mining, anomaly detection, feature selection, multi-output learning, semi-supervised learning, and novel class detection.<ref name="streamdata" /><br />
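As a small, self-contained example of one of these methods, the sketch below implements rolling z-score anomaly detection on a streaming sensor time series; the window size and threshold are illustrative, and real systems would use more robust methods from the families listed above:<br />

```python
from collections import deque

class StreamingAnomalyDetector:
    """Flag sensor readings that deviate strongly from a rolling window."""

    def __init__(self, window_size=50, threshold=3.0):
        self.window = deque(maxlen=window_size)
        self.threshold = threshold

    def update(self, value):
        """Return True if `value` is anomalous relative to recent history."""
        anomalous = False
        if len(self.window) >= 10:  # wait for a minimal history
            mean = sum(self.window) / len(self.window)
            var = sum((x - mean) ** 2 for x in self.window) / len(self.window)
            std = var ** 0.5
            if std > 0 and abs(value - mean) / std > self.threshold:
                anomalous = True
        self.window.append(value)
        return anomalous
```

Because it only keeps a bounded window in memory, this style of algorithm is well suited to edge devices that cannot store or forward the full sensor stream.<br />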
<br />
Eric Schmidt, AI Expert at Bosch: ''"We have to ensure that the reality in the field -- for example the speed at which machine sensor data can be made accessible in a given factory -- is matching the proposed algorithms. We have to match these hard constraints with a working algorithm but also the right infrastructure, e.g., edge vs. batch."''<br />
<br />
==Aligning AI Model Metrics with Requirements and Constraints==<br />
There are usually two key model metrics that have the highest impact on user experience and/or IoT system behaviour: model accuracy and prediction times.<br />
<br />
Model accuracy has a strong impact on usability and other KPIs. For example, if the UBI model from the example above is too restrictive (i.e., rating drivers as more risk-taking than they actually are), then the insurance might lose customers simply because it is pricing itself out of the market. On the other hand, if the model is too lax, then the insurance might not make enough money to cover future insurance claims. <br />
<br />
Eric Schmidt, AI Expert at Bosch: ''"We currently see that there is an increasing demand in not only having accurate models, but also providing a quantification of the certainty of the model outcome. Such certainty measurements allow -- for example -- for setting thresholds for accepting or rejecting model results"''<br />
<br />
Similarly, in autonomous driving, if the autonomous vehicle cannot provide a sufficiently accurate analysis of its environment, then this will result (in the worst case) in an unacceptable rate of accidents, or (in the best case) in an unacceptable rate of requests for avoidable full brakes or manual override requests.<br />
<br />
Prediction times tell us how long the model needs to actually make a prediction. In the case of the UBI example, this would probably not be critical, since scoring is likely executed as a monthly batch. In the case of the autonomous driving example, it is extremely critical: if a passing pedestrian is not recognized in (near) real time, this can be deadly. Another example would be the recognition of a speed limit by an AIoT solution in a manually operated vehicle: if this information is displayed with a huge delay, the user will probably not accept the feature as useful.<br />
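Both metrics, together with the certainty thresholds mentioned above, can be checked with a simple evaluation harness. The sketch below uses a made-up model interface that returns (prediction, confidence) pairs, and hypothetical latency and confidence budgets:<br />

```python
import time

def evaluate_model(model, samples, labels, latency_budget_s=0.1, min_confidence=0.8):
    """Check accuracy, prediction latency, and apply a certainty threshold."""
    correct = accepted = 0
    worst_latency = 0.0
    for sample, label in zip(samples, labels):
        start = time.perf_counter()
        prediction, confidence = model(sample)
        worst_latency = max(worst_latency, time.perf_counter() - start)
        if confidence < min_confidence:
            continue  # reject uncertain results instead of acting on them
        accepted += 1
        correct += (prediction == label)
    return {
        "accuracy_on_accepted": correct / accepted if accepted else 0.0,
        "accepted_ratio": accepted / len(samples),
        "meets_latency_budget": worst_latency <= latency_budget_s,
    }
```

Reporting accuracy only on accepted predictions, alongside the acceptance ratio, makes the trade-off between certainty thresholds and coverage explicit for the product team.<br />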
<br />
= Sixth: Release MVP=<br />
In the agile community, the MVP (Minimum Viable Product) plays an important role because it helps ensure that the team delivers a product to the market as early as possible, enables valuable customer feedback, and ensures that the product is viable. Modern cloud features and DevOps methods make it much easier to build on the MVP over time and enrich the product step by step, always based on real-world customer feedback.<br />
<br />
For most AIoT projects, the launch of the MVP is a much "bigger deal" than in a pure software project. This is because any changes to the hardware setup - including sensors for generating data processed by an AI - are much harder to implement. In manufacturing, the term used is SOP (Start of Production). After SOP, changes to the hardware design usually require costly changes to the manufacturing setup. Even worse, changing hardware already deployed in the field requires a costly product recall. So being able to answer the question "What is the MVP of my smart coffee maker, vacuum robot, or electric vehicle?" becomes essential.<br />
<br />
Jan Bosch is Professor at Chalmers University and Director of the Software Center: ''If we look at traditional development, I think the way in which you are representing the "When do I freeze what" is spot on. However, there is a caveat. In traditional development, I spend 90% of my energy and time obtaining the first version of the product. So I go from greenfield to first release, and I spend as little as possible afterwards. However, I am seeing many companies which are shifting toward a model that says "How do I get to a V1 of my product with the lowest effort possible?". Say I am spending 10% on the V1, then I can spend 90% on continuously improving the product based on real customer feedback. This is definitely a question of changing the mindset of manufacturing companies.''<br />
<br />
Continuous improvement of software and AI models can be ensured today using a holistic DevOps approach, which covers all elements of AIoT: code and ML models, edge (via OTA) and cloud. This is discussed in more detail in the [[AIoT_DevOps_and_Infrastructure|AIoT DevOps section]].<br />
<br />
Managing the evolution of hardware is a complex topic, which is addressed in detail in the [[Hardware.exe#Managing_system_evolution|Hardware.exe]] section. <br />
<br />
Finally, the actual rollout or Go-to-Market perspective for AIoT-enabled solutions and products is not to be underestimated. This is addressed in the [[Rollout_GTM|Rollout and Go-to-Market]] section.<br />
<br />
=Required Skills and Resources =<br />
AI projects require special skills, which must be made available with the required capacity at the required time, as in any other project situation. It is therefore important to understand the typical AI roles and how to utilize them. Additionally, it is important to understand how the AI team should be structured and how it fits into the overall AIoT organization.<br />
<br />
There are potentially three key roles required in the AI team: Data Scientist, ML Engineer, and Data Engineer. The Data Scientist creates deep, new intellectual property in a research-centric approach that can require a 3 to 12-month development time or even longer. The project will therefore have to decide to what extent a data-science-centric approach is required and feasible, or whether reuse of existing models would be sufficient. The ML Engineer turns models developed by data scientists into live production systems. They sit at the intersection of software engineering and data science, ensuring that raw data from data pipelines is properly fed to the AI models for inference. They also write production-level code and ensure the scalability and performance of the system. The Data Engineer creates and manages the data pipeline required for training data set creation, as well as for feeding the required data to the trained models in the production systems.<br />
<br />
[[File:1.2-AIRoles.png|800px|frameless|center|link=|AI Roles for AIoT|class=Internet_of_Things]]<br />
<br />
Another important question is how the AI team works with the rest of the software organization. The AIoT Playbook proposes the adoption of feature teams, which combine all the required skills to implement and deploy a specific feature. On the other hand, especially with a new technology such as AI, it is also important that experts with deep AI and data skills can work together in a team to exchange best practices. Project management has to carefully balance this out.<br />
<br />
=Model Design and Testing=<br />
In the case of the development of a completely new model utilizing data science, an iterative approach is typically applied. This will include many iterations of business understanding, data understanding, data preparation, modeling, evaluation/testing, and deployment. In the case of reusing existing models, the effort for model tuning or -- in the case of supervised learning models -- data labeling should also not be underestimated.<br />
<br />
[[File:1.2-ModelDevelopment.png|800px|frameless|center|link=|Model Development]]<br />
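The evaluate-and-iterate loop at the heart of this process can be illustrated with a toy example: candidate models are scored against a holdout set, and the best one is promoted. The "models" here are trivial threshold rules chosen for illustration; real iterations would also revisit data preparation and business understanding:<br />

```python
# Illustrative sketch of the evaluate-and-iterate loop: candidate models
# are scored against a holdout set and the best one is promoted.
# The "models" are trivial threshold rules; all values are assumptions.

holdout = [([0.2], 0), ([0.9], 1), ([0.7], 1), ([0.1], 0)]

def make_model(threshold):
    """Build a trivial one-feature classifier."""
    return lambda x: int(x[0] >= threshold)

def accuracy(model, data):
    return sum(model(x) == y for x, y in data) / len(data)

# Each iteration evaluates a candidate; the best model is kept.
best = max((make_model(t) for t in (0.3, 0.5, 0.8)),
           key=lambda m: accuracy(m, holdout))
print(accuracy(best, holdout))  # 1.0 on this toy holdout set
```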
<br />
=Building and Integrating the AI Microservices=<br />
A key architectural decision is how to design microservices for inference and business logic. It is considered good practice to separate the inferencing functions from the business logic (in the backend, or -- if deployed on the asset -- also in the edge tier). This means that there should be separate microservices for model input provisioning, AI-based inferencing, and model output processing. While decoupling is generally good practice in software architecture, it is even more important for AI-based services, especially if specialized hardware is used for inferencing.<br />
<br />
[[File:1.2-UBIComponents.png|600px|frameless|center|link=|UBI Microservices]]<br />
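The proposed decoupling can be sketched for the UBI example as three separate components: model input provisioning, inference, and output processing. Plain functions are used here for brevity; in a real system, each would be its own microservice behind an HTTP or message-queue interface, and all names and formulas below are illustrative assumptions:<br />

```python
# Illustrative sketch of the proposed decoupling: input provisioning,
# inference, and output processing as three separate components.
# All field names, formulas and thresholds are assumptions.

def provision_input(raw_trip):
    """Input-provisioning service: normalize raw telemetry into features."""
    return [raw_trip["speed_kmh"] / 200.0, raw_trip["braking_events"] / 10.0]

def infer(features):
    """Inference service (may run on specialized hardware); stand-in model."""
    return sum(features) / len(features)

def process_output(risk_score):
    """Output-processing / business-logic service: map score to a decision."""
    return "high-risk tariff" if risk_score > 0.5 else "standard tariff"

trip = {"speed_kmh": 160, "braking_events": 7}
print(process_output(infer(provision_input(trip))))  # high-risk tariff
```

Because the three stages only communicate via plain data, the inference stage can later be moved to dedicated hardware or scaled independently without touching the business logic.<br />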
<br />
=Setting Up MLOps =<br />
Automating the AI model development process is a key prerequisite not only from an efficiency point of view, but also for ensuring that model development is based on a reproducible approach. Consequently, a new type of DevOps is emerging: MLOps. With the IoT, MLOps not only has to support cloud-based environments but also potentially the deployment and management of AI models on hundreds -- if not hundreds of thousands -- of remote assets. Because this topic is seen as so important, the ''AIoT Playbook'' has a dedicated section on [[AIoT_DevOps_and_Infrastructure|Holistic DevOps for AIoT]].<br />
<br />
[[File:AIoTDevOps.png|600px|frameless|center|link=|Holistic DevOps for AIoT]]<br />
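One small but representative MLOps building block is recording a fingerprint of the training data and hyperparameters of each run, so that a deployed model can be traced back to a reproducible training configuration. The sketch below is illustrative; the field names are assumptions:<br />

```python
# Illustrative sketch of one MLOps building block: a fingerprint of
# training data and hyperparameters identifies a model run, so it can be
# reproduced and audited later. Field names are assumptions.
import hashlib
import json

def run_fingerprint(training_data, hyperparams):
    """Deterministic short ID for a (data, hyperparameters) combination."""
    payload = json.dumps({"data": training_data, "params": hyperparams},
                         sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()[:12]

fp1 = run_fingerprint([[0.1, 0], [0.9, 1]], {"lr": 0.01, "epochs": 10})
fp2 = run_fingerprint([[0.1, 0], [0.9, 1]], {"lr": 0.01, "epochs": 10})
print(fp1 == fp2)  # True: identical inputs yield the same run ID
```

A full MLOps pipeline would additionally version the code, the trained model artifact and the evaluation results, typically via dedicated tooling rather than hand-rolled hashing.<br />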
<br />
= Managing the AIoT Long Tail: AI Collaboration Platforms =<br />
<br />
When addressing the long tail of AI-enabled opportunities, it is important to provide a means to rapidly create, test and deploy new solutions. Efficiency and team collaboration are important, as is reuse. This is why a new category of AI collaboration platforms has emerged to address this space. While high-end products on the short tail usually require very individual solutions, the idea here is to standardize a set of tools and processes that can be applied to as many AI-related problems as possible within a larger organization. A shared repository must support the workflow from data management through machine learning to model deployment. Specialized user interfaces must be provided for data engineers, data scientists and ML engineers. Finally, it is also important that the platforms support collaboration between the aforementioned AI specialists and domain experts, who usually know much less about AI and data science.<br />
<br />
[[File:AI Collaboration Platform.png|600px|frameless|center|AI Collaboration Platform]]<br />
<br />
= References =<br />
<references><br />
<ref name="aiguide">''[https://towardsdatascience.com/the-essential-guide-to-creating-an-ai-product-in-2020-543169a48bd The Essential Guide to Creating an AI Product in 2020]'', Rahul Parundekar, 2020</ref><br />
<ref name="streamdata">''[https://www.researchgate.net/publication/337581742_Machine_learning_for_streaming_data_state_of_the_art_challenges_and_opportunities Machine learning for streaming data: state of the art, challenges, and opportunities]'', H. Gomes et al., 2020</ref><br />
</references><br />
<br />
= Authors and Contributors =<br />
{|{{Borderstyle-author}}<br />
|{{Designstyle-author|Image=[[File:Dirk Slama.jpeg|left|100px]]|author={{Dirk Slama|Title=AUTHOR}}}}<br />
{{Designstyle-author|Image=[[File:Eric Schmidt.jpg|left|100px]]|author={{Eric Schmidt|Title=CONTRIBUTOR}}}}<br />
{{Designstyle-author|Image=[[File:Martin Lubisch.jpg|left|100px]]|author={{Martin Lubisch|Title=CONTRIBUTOR}}}}<br />
{{Designstyle-author|Image=[[File:Juan Garcia.jpg|left|100px]]|author={{Juan Garcia|Title=CONTRIBUTOR}}}}<br />
|}</div>
AIoT Framework, 2022-03-29T14:49:55Z: https://www.digitalplaybook.org/index.php?title=AIoT_Framework&diff=7043
<hr />
<div><imagemap><br />
Image:1.1-IgniteAIoT.png|frameless|1000px|Overview of Ignite AIoT Framework<br />
<br />
rect 4 0 651 133 [[AIoT_Framework|More...]]<br />
rect 970 0 1298 133 [[AIoT_Data_Strategy|More...]]<br />
rect 651 0 970 133 [[Artificial_Intelligence|More...]]<br />
rect 1298 0 1767 133 [[Digital_Twin_Execution|More...]]<br />
rect 1767 0 2095 133 [[Internet_of_Things|More...]]<br />
rect 2095 0 2542 133 [[Hardware.exe|Hardware.exe]]<br />
<br />
rect 2764 128 3539 257 [[Product_Architecture|More...]]<br />
rect 2764 257 3539 390 [[Agile AIoT|More...]]<br />
rect 2764 385 3539 518 [[AIoT_DevOps_and_Infrastructure|More...]]<br />
rect 2764 518 3539 651 [[Trust_and_Security|More...]]<br />
rect 2764 651 3539 784 [[Reliability_and_Resilience|More...]]<br />
rect 2764 779 3539 917 [[Verification_and_Validation|More...]]<br />
<br />
desc none<br />
</imagemap><br />
<s data-category="AIoTFramework"></s><br />
__NOTOC__<br />
Technical execution must ensure delivery of the AIoT-enabled product or solution in close alignment with the business execution. In the software world, this would usually be managed with an agile approach to ensure continuous value creation and improvement. However, in the AIoT world, we usually face a number of impediments that will prevent a pure agile setup. These impediments exist because of the typical complexity and heterogeneity of an AIoT system, including hardware, software, and AI development. In addition, an AIoT system usually includes components that have to be "first time right" because they cannot be changed after the Start of Production (especially hardware-based components or functionally relevant system components). Designing the system and the delivery organization in a way that maximizes those areas where continuous improvement can be applied while also efficiently supporting those areas where this is not possible is one of the key challenges of the technical execution.<br />
<br />
Consequently, the technical execution part of the ''AIoT Playbook'' looks at ways of supporting this. This starts with looking again at the data, AI, IoT, Digital Twin and hardware perspective from the AIoT 101 section, but this time with the technical execution perspective ("*.exe").<br />
<br />
In addition, this section provides a set of good practices and templates for the design of AIoT-enabled products and solutions, the implementation of an agile approach for AIoT (including the so-called "Agile V-Model"), AIoT DevOps (including cloud DevOps, MLOps and DevOps for IoT), Trust & Security, Reliability & Resilience, Functional Safety, and Quality Management. Before going into detail, the following provides an overview of how all of these fit together, starting with the development life-cycle perspective.<br />
<br />
__TOC__<br />
<br />
= Development Life-cycle Perspective =<br />
The development lifecycle of an AIoT-enabled product or solution usually includes a number of different sub-elements, which need to be brought together in a meaningful way. The following will discuss this for both products and solutions.<br />
<br />
== Smart, Connected Products ==<br />
Smart, connected products usually combine two types of features: physical and digital. The physical features are enabled by physical elements and mechanical mechanisms. The digital features are supported by sensors and actuators as the interface to the physical product, as well as edge and cloud-based components. Digital features can be realized as hardware, software or AI.<br />
<br />
This means that the development lifecycle of a smart, connected product must include physical product development as well as manufacturing engineering. The development lifecycle of digital features focuses on DevOps for the edge components (including MLOps for the AI deployed to the edge, DevOps for embedded and edge software, and embedded/edge hardware), as well as the cloud (including MLOps for cloud-based AI and standard DevOps for cloud-based software).<br />
<br />
All of this must be managed with a holistic Product Lifecycle Management approach. In most cases, this will require the integration of a number of different processes and platforms. For example, the development life cycle of the physical features is traditionally supported by an engineering PLM platform, while software development is supported through a CI/CT/CD pipeline (Continuous Integration, Continuous Testing, Continuous Deployment). For AI, these kinds of pipelines are different and not yet as sophisticated and mature as in the software world. The following will describe how such a holistic lifecycle can be supported.<br />
<br />
[[File:PLM Product.png|800px|frameless|center|link=|Lifecycle - Product Perspective]]<br />
<br />
Topics closely related to this include Cyber Physical Systems (CPS), as well as mechatronics. Mechatronics is an interdisciplinary engineering approach that focuses on the integration of mechanical, electronic and electrical engineering systems. The term CPS is sometimes used in the embedded world, often with a meaning similar to IoT: integrating sensing and control, as well as computation and networking, into physical assets and infrastructure. Both concepts and the related development life-cycles can support smart, connected products.<br />
<br />
== Smart, Connected Solutions ==<br />
For smart, connected solutions supporting the Digital Equipment Operator, the picture looks slightly different, since physical product development is usually not within our scope. Sensors, actuators and edge nodes are usually deployed to existing assets in the field using a retrofit approach. This means that the holistic lifecycle in this case does not include physical product design and manufacturing engineering. Other than this, it looks similar to the product perspective, except that the required development pipelines will usually not be as sophisticated and highly automated as in the case of standardized product development (which typically invests more in these areas).<br />
<br />
[[File:PLM Solution.png|800px|frameless|center|link=|Lifecycle - Solution Perspective]]<br />
<br />
= AIoT Design =<br />
An important element in the development lifecycle is the end-to-end design of the product or solution. The design section will provide a set of detailed templates that can be used here. These templates support the key viewpoints developed by the AIoT Playbook: Business Viewpoint, UX Viewpoint, Data/Functional Viewpoint, and Implementation Viewpoint. These design viewpoints must be aligned with the agile product development perspective, in particular the story map as the top-level work breakdown. They will have to be updated frequently to reflect any learnings from the implementation sprints, which means their level of detail must remain low enough to keep frequent updates feasible.<br />
<br />
[[File:Design Viewpoints.png|600px|frameless|center|link=|AIoT Design Viewpoints]]<br />
<br />
= AIoT Pipelines =<br />
<br />
Pipelines have become an important concept in many development organizations, especially from a DevOps perspective. This section introduces the concept of AIoT pipelines and discusses pipeline aggregations.<br />
<br />
== Definition ==<br />
There are a number of different definitions of the pipeline concept. On the technical level, a good example is the popular DevOps platform GitLab, which provides a set of tools for the flexible creation of pipelines to automate the continuous integration process. On the methodological level, the Scaled Agile Framework (SAFe) introduces the concept of Continuous Delivery Pipelines (CDP) as the automation workflows and activities required to move a new piece of functionality from ideation to release. A SAFe pipeline includes Continuous Exploration (CE), Continuous Integration (CI), Continuous Deployment (CD), and Release on Demand. This makes sense in principle. <br />
<br />
''The AIoT Playbook'' is also based on the concept of pipelines. An AIoT pipeline helps move a new functionality through the cycle from ideation and design to release, usually in a cyclic approach, meaning that the released functionality can re-enter the same pipeline at the beginning to be updated in a subsequent release. The assumption is that AIoT pipelines are usually bound to a particular AIoT technical platform, e.g., edge AI, edge SW, cloud AI, cloud SW, smartphone apps, etc. Each AIoT pipeline usually has an associated pipeline team with skills specific to the pipeline and the target platform. <br />
<br />
[[File:Pipeline Definition.png|600px|frameless|center|link=|AIoT Pipeline - Definition]]<br />
<br />
== Pipeline Aggregations ==<br />
Due to the complexity of many AIoT initiatives, it can make sense to logically aggregate pipelines. This is something that many technical tools with built-in pipeline support, such as GitLab, provide out of the box. From the point of view of the target platform, the aggregation concept also makes sense. Take, for example, an edge pipeline that aggregates edge AI components, edge software components, and potentially even custom edge hardware into a higher-level edge component. On the organizational level, this can mean that a higher-level pipeline organization aggregates a number of pipeline teams. For example, the edge pipeline team consists of an edge AI and an edge software team.<br />
<br />
This way of looking at an organization can be very helpful to manage complexity. It is important to note that careful alignment of the technical and organizational perspectives is required. Usually, it is best to create a 1:1 mapping between technical pipelines, target platforms and pipeline teams.<br />
<br />
The diagram below shows an edge pipeline that aggregates three pipelines, namely edge AI, edge HW and edge SW. The combined output of the three lower-level pipelines is combined into integrated edge components.<br />
<br />
[[File:Pipelines Aggregates.png|700px|frameless|center|link=|AIoT Pipelines Aggregates]]<br />
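The aggregation concept can also be sketched in code: an edge pipeline combines the outputs of its edge AI, edge SW and edge HW sub-pipelines into one integrated edge component. All names and artifact labels below are illustrative assumptions:<br />

```python
# Illustrative sketch of pipeline aggregation: an edge pipeline combines
# the outputs of its sub-pipelines into one integrated edge component.
# All names and artifact labels are assumptions.

class Pipeline:
    def __init__(self, name, build):
        self.name, self.build = name, build

    def run(self):
        return self.build()

class AggregatePipeline(Pipeline):
    def __init__(self, name, subpipelines):
        super().__init__(name, None)
        self.subpipelines = subpipelines

    def run(self):
        # Combine sub-pipeline outputs into one integrated component.
        return {p.name: p.run() for p in self.subpipelines}

edge = AggregatePipeline("edge", [
    Pipeline("edge-ai", lambda: "model-v3"),
    Pipeline("edge-sw", lambda: "firmware-1.4"),
    Pipeline("edge-hw", lambda: "board-rev-B"),
])
print(edge.run())
# {'edge-ai': 'model-v3', 'edge-sw': 'firmware-1.4', 'edge-hw': 'board-rev-B'}
```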
<br />
= AIoT Pipelines & Feature-driven Development =<br />
Technical pipelines are useful for managing and -- at least partially -- automating the creation of new functionalities within a single technology platform. However, many functional features in an AIoT system will require support from components on a number of different platforms. Take, for example, the function to activate a vacuum robot via the smartphone. This feature will require components on the smartphone, the cloud and the robot itself. Each of these platforms is managed by an individual pipeline. It is now important to orchestrate the development of the new feature across the different pipelines involved. This is best done by assigning new features to feature teams, which work across pipelines and pipeline teams. There are a number of different ways this can be done, e.g., by making the pipeline teams the permanent home of technology experts in a particular domain and then creating virtual team structures for the feature teams, which get the required experts assigned from the technical pipeline teams for the duration of the development of the particular feature. Another approach can be to permanently establish the feature teams and look at the technical pipeline teams more as a loose grouping. Unfortunately, different technology stacks and cross-technology features tend to require dealing with some kind of organizational matrix structure, which must be addressed one way or another. There are some examples of how other organizations are looking at this, e.g., the famous [https://www.pmtoday.co.uk/spotify-scaling-agile-model/ Spotify model]. ''The AIoT Playbook'' does not make any assumptions about how this is addressed in detail but recommends the combination of pipelines/pipeline teams on the one hand, and features/feature teams on the other.<br />
<br />
[[File:Pipelines Feature Teams.png|800px|frameless|center|link=|AIoT Features]]<br />
<br />
Jan Bosch is Professor at Chalmers University and Director of the Software Center: ''There are two different ways in which you're going to organize. In the component-based organizational model, you have the overall system architecture and assign teams to the different components and subsystems. The alternative model is a feature teams model; you have teams pick work items from the backlog. That team can then touch any component in the system and make all the changes they need to make to deliver their features. That is, in general, my preferred approach, but there is an important caveat. The companies that do this in an embedded systems context are associating the required skills typically with work items in the backlog. They say whatever team picks this up has to have at least these skills to deliver on this feature successfully. So it is not that any team can pick any work item.''<br />
<br />
= Holistic AIoT DevOps =<br />
<br />
Finally, the pipeline concept must be closely aligned with DevOps. DevOps is a well-established set of practices that combine software development and IT operations. In more traditional organizations, these two functions used to be in different silos, which often caused severe problems and inefficiencies. DevOps focuses on removing these frictions between development and operations teams by ensuring that developer and operations experts are working in close alignment across the entire software development lifecycle, from coding to testing to deployment.<br />
<br />
An AIoT initiative will have to look at DevOps beyond the more or less well-established DevOps for software. One reason is that AI development usually requires a different DevOps approach and organization, usually referred to as MLOps. Another reason is that the highly distributed nature of an AIoT system usually requires that concepts such as Over-the-Air (OTA) updates be included, which is another complexity usually not found in cloud-centric DevOps organizations. All of these aspects will be addressed in the [[AIoT_DevOps_and_Infrastructure|AIoT DevOps]] section in more detail.<br />
<br />
In addition to the DevOps focused on continuous delivery of new features and functionalities, an AIoT organization will usually also need to look explicitly at security and potentially functional safety, as well as reliability and resilience. These different aspects will have to be examined through the cloud and edge software perspective, as well as the AI perspective. ''The AIoT Playbook'' builds on existing concepts such as DevSecOps (an extension of DevOps to also cover security) to address these issues specifically from an AIoT point of view.<br />
<br />
[[File:Pipeline DevOps Cycle.png|800px|frameless|center|link=|AIoT Pipelines + DevOps]]<br />
<br />
= Managing Different Speeds of Development =<br />
One of the biggest challenges in most AIoT projects is managing the different speeds of development that are usually involved. For example, hardware and manufacturing-related topics usually move much slower (i.e., months) than software or AI development (weeks). In some cases, one might even have to deal with elements that change on a daily basis, e.g., automatically retrained AI models. To address this, one must carefully consider the organizational setup. Often, it can make sense to allow these different topics to evolve at their own speed, e.g., by allowing a different sprint regime for different pipelines that produce AIoT artifacts and components at different speeds. An overview is given in the following figure. Please note that there is often no straightforward answer for dealing with AIoT elements that require either very long or very short iterations. For very slow-moving elements, one can choose very long sprints. Alternatively, one can have all teams work with a similar sprint cadence but allow the slower-moving topics to deliver non-deployable artifacts, e.g., updated planning and design documents. Similarly, for very fast-moving elements the strict sprint cadence might be too rigid, so it could be better to allow them to be worked on and released ad hoc. For automatically retrained AI models, for example, this makes perfect sense, since an automated process requires no sprint planning.<br />
<br />
However, there is a key prerequisite for this to work: dependencies between artifacts and components from the different AIoT pipelines have to be carefully managed. In general, it is OK for fast-moving artifacts to depend on slower-moving artifacts, but not the other way around -- otherwise the evolution of the fast-moving artifacts will have a negative impact on the slower-moving artifacts. These dependencies can be of a technical nature (e.g., call dependencies between software components, or deployment dependencies between hardware and software) or of a more organizational nature (e.g., procurement decisions). The technical dependencies and how to deal with them are discussed in more detail in the [[AIoT_Data_and_Functional_Viewpoint|Data/Functional Viewpoint]] of the Product/Solution Design. Finally, the [[Agile_V-Model|Agile V-Model]] is introduced later as an option for managing product development teams in these types of situations.<br />
<br />
[[File:Different Speeds of Development.png|700px|frameless|center|Managing different speeds of development]]<br />
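The dependency rule described above (fast-moving artifacts may depend on slower-moving ones, but not the other way around) can be sketched as a simple automated check. Component names and speed rankings below are illustrative assumptions:<br />

```python
# Illustrative sketch of the dependency rule: fast-moving artifacts may
# depend on slower-moving ones, but not the other way around.
# Lower rank = slower-moving; names and rankings are assumptions.

SPEED = {"hardware": 0, "embedded-sw": 1, "cloud-sw": 2, "ai-model": 3}

def check_dependencies(deps):
    """deps: list of (component, depends_on) pairs. Returns violations."""
    return [(c, d) for c, d in deps if SPEED[c] < SPEED[d]]

deps = [
    ("ai-model", "cloud-sw"),     # fine: fast depends on slower
    ("embedded-sw", "hardware"),  # fine
    ("hardware", "ai-model"),     # violation: slow depends on fast
]
print(check_dependencies(deps))  # [('hardware', 'ai-model')]
```

Such a check could run as part of an architecture review or a CI gate over a machine-readable component model.<br />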
<br />
Jan Bosch from Chalmers University and the Software Center: ''This is a key question: How do you do a release? There are companies in the earliest development stage that do heartbeat-based releases; every component releases every third or every fourth week at the end of the agile sprints. You release all the new versions of the components simultaneously, so that is one way. However, this requires a high level of coordination between the different groups who are building different subsystems in different parts of the system. This is why many companies aim to reach a state where continuous integration and testing of the overall system is so advanced that any of the components in the system can release at any point in time, as long as they have passed the test cases. Then, the teams can start to operate on different heartbeats. Some of the leading cloud companies are now releasing multiple times a day. This should also be the goal for an AIoT system: frequent releases, early validation, less focus on dependency management between different teams''.</div>
<hr />
<div><s data-category="AIoTFramework"><br />
<br />
<imagemap><br />
Image:1.1-IgniteAIoT.png|frameless|1000px|Overview of Ignite AIoT Framework<br />
<br />
rect 4 0 651 133 [[AIoT_Framework|More...]]<br />
rect 970 0 1298 133 [[AIoT_Data_Strategy|More...]]<br />
rect 651 0 970 133 [[Artificial_Intelligence|More...]]<br />
rect 1298 0 1767 133 [[Digital_Twin_Execution|More...]]<br />
rect 1767 0 2095 133 [[Internet_of_Things|More...]]<br />
rect 2095 0 2542 133 [[Hardware.exe|Hardware.exe]]<br />
<br />
rect 2764 128 3539 257 [[Product_Architecture|More...]]<br />
rect 2764 257 3539 390 [[Agile AIoT|More...]]<br />
rect 2764 385 3539 518 [[AIoT_DevOps_and_Infrastructure|More...]]<br />
rect 2764 518 3539 651 [[Trust_and_Security|More...]]<br />
rect 2764 651 3539 784 [[Reliability_and_Resilience|More...]]<br />
rect 2764 779 3539 917 [[Verification_and_Validation|More...]]<br />
<br />
desc none<br />
</imagemap><br />
</s><br />
__NOTOC__<br />
Technical execution must ensure delivery of the AIoT-enabled product or solution in close alignment with the business execution. In the software world, this would usually be managed with an agile approach to ensure continuous value creation and improvement. However, in the AIoT world, we usually face a number of impediments that will prevent a pure agile setup. These impediments exist because of the typical complexity and heterogeneity of an AIoT system, including hardware, software, and AI development. In addition, an AIoT system usually includes components that have to be "first time right" because they cannot be changed after the Start of Production (especially hardware-based components or functionally relevant system components). Designing the system and the delivery organization in a way that maximizes those areas where continuous improvement can be applied while also efficiently supporting those areas where this is not possible is one of the key challenges of the technical execution.<br />
<br />
Consequently, the technical execution part of the ''AIoT Playbook'' looks at ways of supporting this. This starts with looking again at the data, AI, IoT, Digital Twin and hardware perspective from the AIoT 101 section, but this time with the technical execution perspective ("*.exe").<br />
<br />
In addition, this section provides a set of good practices and templates for the design of AIoT-enabled products and solutions, the implementation of an agile approach for AIoT (including the so-called "Agile V-Model"), AIoT DevOps (including cloud DevOps, MLops and DevOps for IoT), Trust & Security, Reliability & Resilience, Functional Safety, and Quality Management. Before going into detail, the following provides an overview of how all of these fit together, starting with the development life-cycle perspective.<br />
<br />
__TOC__<br />
<br />
= Development Life-cycle Perspective =<br />
The development lifecycle of an AIoT-enabled product or solution usually includes a number of different sub-elements, which need to be brought together in a meaningful way. The following will discuss this for both products and solutions.<br />
<br />
== Smart, Connected Products ==<br />
Smart, connected products usually combine two types of features: physical and digital. The physical features are enabled by physical elements and mechanical mechanisms. The digital features are supported by sensors and actuators as the interface to the physical product, as well as edge and cloud-based components. Digital features can be realized as hardware, software or AI.<br />
<br />
This means that the development life-cycle of a smart, connected product must include physical product development as well as manufacturing engineering. The development lifecycle of digital features focuses on DevOps for the edge components (including MLops for the AI deployed to the edge, DevOps for embedded and edge software, and embedded/edge hardware), as well as the cloud (including MLops for cloud-based AI and standard DevOps for cloud-based software).<br />
<br />
All of this must be managed with a holistic Product Lifecycle Management approach. In most cases, this will require the integration of a number of different processes and platforms. For example, the development life cycle of the physical features is traditionally supported by an engineering PLM platform, while software development is supported through a CI/CT/CD pipeline (Continuous Integration, Continuous Testing, Continuous Deployment). For AI, these kinds of pipelines are different and not yet as sophisticated and mature as in the software world. The following will describe how such a holistic lifecycle can be supported.<br />
<br />
[[File:PLM Product.png|800px|frameless|center|link=|Lifecycle - Product Perspective]]<br />
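To make the pipeline idea concrete, the following is a minimal, purely illustrative Python sketch of a CI/CT/CD-style pipeline as a chain of stages. All names and stage implementations are invented for illustration; a real pipeline would of course be defined in a dedicated CI/CD system rather than in application code.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Pipeline:
    """A pipeline is an ordered chain of stages (integrate, test, deploy)."""
    name: str
    stages: List[Callable[[dict], dict]] = field(default_factory=list)

    def run(self, artifact: dict) -> dict:
        # Each stage receives the artifact produced by the previous stage.
        for stage in self.stages:
            artifact = stage(artifact)
        return artifact

# Hypothetical stages mirroring Continuous Integration / Testing / Deployment.
def build(a):  return {**a, "built": True}
def verify(a): return {**a, "verified": True}
def deploy(a): return {**a, "deployed": True}

sw_pipeline = Pipeline("cloud-sw", [build, verify, deploy])
result = sw_pipeline.run({"commit": "abc123"})
```

In this sketch, each stage simply marks the artifact; the point is the shape of the flow, not the stage logic.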
<br />
Topics closely related to this include Cyber Physical Systems (CPS) and mechatronics. Mechatronics is an interdisciplinary engineering approach that focuses on the integration of mechanical, electronic and electrical engineering systems. The term CPS is sometimes used in the embedded world with a meaning similar to IoT: integrating sensing and control, as well as computation and networking, into physical assets and infrastructure. Both concepts and their related development life-cycles can support smart, connected products.<br />
<br />
== Smart, Connected Solutions ==<br />
For smart, connected solutions supporting the Digital Equipment Operator, the picture looks slightly different since physical product development is usually not within our scope. Sensors, actuators and edge nodes are usually deployed to existing assets in the field by using a retrofit approach. This means that the holistic lifecycle in this case does not include physical product design and manufacturing engineering. Other than this, it looks similar to the product perspective, except that the required development pipelines will usually not be as sophisticated and highly automated as in the case of standardized product development (which typically invests more in these areas).<br />
<br />
[[File:PLM Solution.png|800px|frameless|center|link=|Lifecycle - Solution Perspective]]<br />
<br />
= AIoT Design =<br />
An important element in the development lifecycle is the end-to-end design of the product or solution. The design section will provide a set of detailed templates that can be used here. These templates support the key viewpoints developed by the AIoT Playbook: Business Viewpoint, UX Viewpoint, Data/Functional Viewpoint, and Implementation Viewpoint. These design viewpoints must be aligned with the agile product development perspective, in particular the story map as the top-level work breakdown. They will have to be updated frequently to reflect any learning from the implementation sprints, which means they can only be maintained at a level of detail that permits such frequent updates.<br />
<br />
[[File:Design Viewpoints.png|600px|frameless|center|link=|AIoT Design Viewpoints]]<br />
<br />
= AIoT Pipelines =<br />
<br />
Pipelines have become an important concept in many development organizations, especially from a DevOps perspective. This section introduces the concept of AIoT pipelines and discusses pipeline aggregations.<br />
<br />
== Definition ==<br />
There are a number of different definitions for the pipeline concept. On the technical level, a good example is the popular DevOps platform GitLab, which provides a set of tools for the flexible creation of pipelines to automate the continuous integration process. On the methodological level, for example, the Scaled Agile Framework (SAFe) introduces the concept of Continuous Delivery Pipelines (CDP) as the automation workflows and activities required to move a new piece of functionality from ideation to release. A SAFe pipeline includes Continuous Exploration (CE), Continuous Integration (CI), Continuous Deployment (CD), and Release on Demand. This makes sense in principle. <br />
<br />
''The AIoT Playbook'' is also based on the concept of pipelines. An AIoT pipeline helps move new functionality through the cycle from ideation and design to release, usually in a cyclic approach, meaning that the released functionality can enter the same pipeline at the beginning to be updated in a subsequent release. The assumption is that AIoT pipelines are usually bound to a particular AIoT technical platform, e.g., edge AI, edge SW, cloud AI, cloud SW, smartphone apps, etc. Each AIoT pipeline usually has an associated pipeline team with skills specific to the pipeline and the target platform. <br />
<br />
[[File:Pipeline Definition.png|600px|frameless|center|link=|AIoT Pipeline - Definition]]<br />
<br />
== Pipeline Aggregations ==<br />
Due to the complexity of many AIoT initiatives, it can make sense to logically aggregate pipelines. Many technical tools with built-in pipeline support, such as GitLab, provide this out of the box. From the point of view of the target platform, the aggregation concept also makes sense. Take, for example, an edge pipeline that aggregates edge AI components, edge software components, and potentially even custom edge hardware into a higher-level edge component. On the organizational level, this can mean that a higher-level pipeline organization aggregates a number of pipeline teams; for example, the edge pipeline team consists of an edge AI team and an edge software team.<br />
<br />
This way of looking at an organization can be very helpful to manage complexity. It is important to note that careful alignment of the technical and organizational perspectives is required. Usually, it is best to create a 1:1 mapping between technical pipelines, target platforms and pipeline teams.<br />
<br />
The diagram below shows an edge pipeline that aggregates three pipelines, namely edge AI, edge HW and edge SW. The outputs of the three lower-level pipelines are combined into integrated edge components.<br />
<br />
[[File:Pipelines Aggregates.png|700px|frameless|center|link=|AIoT Pipelines Aggregates]]<br />
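The aggregation idea can be sketched in a few lines of Python. This is a hedged, hypothetical sketch only: an edge pipeline that combines the outputs of three lower-level pipelines (edge AI, edge HW, edge SW) into one integrated edge component. All names are invented.

```python
def run_pipeline(name):
    # Stand-in for a real pipeline run; returns a released artifact.
    return {"pipeline": name, "artifact": f"{name}-component"}

class AggregatePipeline:
    """A higher-level pipeline that aggregates lower-level pipelines."""
    def __init__(self, name, sub_pipelines):
        self.name = name
        self.sub_pipelines = sub_pipelines

    def run(self):
        outputs = [run_pipeline(p) for p in self.sub_pipelines]
        # The combined output of the lower-level pipelines becomes
        # a single integrated component.
        return {"component": self.name,
                "parts": [o["artifact"] for o in outputs]}

edge = AggregatePipeline("edge", ["edge-ai", "edge-hw", "edge-sw"])
integrated = edge.run()
```

Mirroring the organizational perspective, one would map each sub-pipeline 1:1 to a pipeline team.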
<br />
= AIoT Pipelines & Feature-driven Development =<br />
Technical pipelines are useful for managing and -- at least partially -- automating the creation of new functionalities within a single technology platform. However, many functional features in an AIoT system will require support from components on a number of different platforms. Take, for example, the function to activate a vacuum robot via the smartphone. This feature will require components on the smartphone, the cloud and the robot itself, each of which is managed by an individual pipeline. It is therefore important to orchestrate the development of the new feature across the different pipelines involved. This is best done by assigning new features to feature teams, which work across pipelines and pipeline teams. There are a number of different ways this can be done, e.g., by making the pipeline teams the permanent home of technology experts in a particular domain and then creating virtual feature teams that borrow the required experts from the technical pipeline teams for the duration of the development of the particular feature. Another approach is to permanently establish the feature teams and treat the technical pipeline teams more as a loose grouping. Unfortunately, different technology stacks and cross-technology features tend to require dealing with some kind of organizational matrix structure, which must be addressed one way or another. There are some examples of how other organizations are looking at this, e.g., the famous [https://www.pmtoday.co.uk/spotify-scaling-agile-model/ Spotify model]. ''The AIoT Playbook'' does not make any assumptions about how this is addressed in detail but recommends the combination of pipelines/pipeline teams on the one hand and features/feature teams on the other.<br />
<br />
[[File:Pipelines Feature Teams.png|800px|frameless|center|link=|AIoT Features]]<br />
<br />
Jan Bosch is Professor at Chalmers University and Director of the Software Center: ''There are two different ways in which you're going to organize. In the component-based organizational model, you have the overall system architecture and assign teams to the different components and subsystems. The alternative model is a feature teams model; you have teams pick work items from the backlog. That team can then touch any component in the system and make all the changes they need to make to deliver their features. That is, in general, my preferred approach, but there is an important caveat. The companies that do this in an embedded systems context typically associate the required skills with work items in the backlog. They say whatever team picks this up has to have at least these skills to deliver on this feature successfully. So it is not that any team can pick any work item.''<br />
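The skills-based matching described in the quote above can be sketched as follows. This is a hedged illustration only: each backlog item names the skills required to deliver it, and only teams whose skill set covers those requirements may pick it up. All team names and skills are invented.

```python
# Hypothetical pipeline teams and their skill sets.
pipeline_teams = {
    "edge-ai":  {"ml", "embedded"},
    "edge-sw":  {"c++", "embedded"},
    "cloud-sw": {"python", "cloud"},
}

def eligible_teams(required_skills, teams=pipeline_teams):
    """Return the teams whose skill set covers the work item's requirements."""
    return sorted(name for name, skills in teams.items()
                  if required_skills <= skills)

# Example work item: an on-device detection feature needing ML + embedded skills.
assert eligible_teams({"ml", "embedded"}) == ["edge-ai"]
```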
<br />
= Holistic AIoT DevOps =<br />
<br />
Finally, the pipeline concept must be closely aligned with DevOps. DevOps is a well-established set of practices that combine software development and IT operations. In more traditional organizations, these two functions used to be in different silos, which often caused severe problems and inefficiencies. DevOps focuses on removing these frictions between development and operations teams by ensuring that developer and operations experts are working in close alignment across the entire software development lifecycle, from coding to testing to deployment.<br />
<br />
An AIoT initiative will have to look at DevOps beyond the more or less well-established DevOps for software. One reason is that AI development usually requires a different DevOps approach and organization. This is usually referred to as MLops. Another reason is that the highly distributed nature of an AIoT system usually requires that concepts such as Over the Air Updates be included, which is another complexity usually not found in cloud-centric DevOps organizations. All of these aspects will be addressed in the [[AIoT_DevOps_and_Infrastructure|AIoT DevOps]] section in more detail.<br />
<br />
In addition to the DevOps focused on continuous delivery of new features and functionalities, an AIoT organization will usually also need to look explicitly at security and potentially functional safety, as well as reliability and resilience. These different aspects will have to be examined through the cloud and edge software perspective, as well as the AI perspective. ''The AIoT Playbook'' builds on existing concepts such as DevSecOps (an extension of DevOps to also cover security) to address these issues specifically from an AIoT point of view.<br />
<br />
[[File:Pipeline DevOps Cycle.png|800px|frameless|center|link=|AIoT Pipelines + DevOps]]<br />
<br />
= Managing Different Speeds of Development =<br />
One of the biggest challenges in most AIoT projects is managing the different speeds of development that can usually be found. For example, hardware and manufacturing-related topics usually move much slower (i.e., months) than software or AI development (weeks). In some cases, one might even have to deal with elements that change on a daily basis, e.g., automatically retrained AI models. To address this, one must carefully consider the organizational setup. Often, it can make sense to allow these different topics to evolve at their own speed, e.g., by allowing a different sprint regime for different pipelines that produce AIoT artifacts and components at different speeds. An overview is given in the following figure. Please note that there is often no straightforward answer for dealing with AIoT elements that require either very long or very short iterations. For example, for very slow moving elements, one can choose very long sprints. Alternatively, one can have all teams work with a similar sprint cadence but allow the slower moving topics to deliver non-deployable artifacts, e.g., updated planning and design documents. Similarly, for very fast moving elements the strict sprint cadence might be too rigid, so it could be better to allow them to be worked on and released ad hoc. For automatically retrained AI models, for example, this makes perfect sense, since no sprint planning is required for a fully automated process.<br />
<br />
However, there is a key prerequisite for this to work: dependencies between artifacts and components from the different AIoT pipelines have to be carefully managed. In general, it is OK for fast-moving artifacts to depend on slower-moving artifacts, but not the other way around; otherwise, the evolution of the fast-moving artifacts will have a negative impact on the slower-moving artifacts. These dependencies can be of a technical nature (e.g., call dependencies between software components, or deployment dependencies between hardware and software) or of a more organizational nature (e.g., procurement decisions). The technical dependencies and how to deal with them are discussed in more detail in the [[AIoT_Data_and_Functional_Viewpoint|Data/Functional Viewpoint]] of the Product/Solution Design. Finally, the [[Agile_V-Model|Agile V-Model]] is introduced later as an option for managing product development teams in these types of situations.<br />
<br />
[[File:Different Speeds of Development.png|700px|frameless|center|Managing different speeds of development]]<br />
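The dependency rule (fast may depend on slow, but not vice versa) can be checked mechanically. The following is an illustrative Python sketch under invented assumptions: speeds are ranked by iteration length, and a violation is any dependency from a slower-moving artifact onto a faster-moving one.

```python
# Illustrative speed ranking: lower rank = slower-moving (longer iterations).
SPEED_RANK = {"hardware": 0, "software": 1, "ai-model": 2}

def violations(dependencies, speed_of):
    """Return (source, target) pairs where a slower artifact depends on a faster one."""
    return [(src, dst) for src, dst in dependencies
            if SPEED_RANK[speed_of[src]] < SPEED_RANK[speed_of[dst]]]

# Invented example artifacts and dependencies.
speed_of = {"ecu-board": "hardware", "edge-app": "software", "detector": "ai-model"}
deps = [("edge-app", "ecu-board"),   # fast depends on slow: OK
        ("ecu-board", "detector")]   # slow depends on fast: violation

assert violations(deps, speed_of) == [("ecu-board", "detector")]
```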
<br />
Jan Bosch from Chalmers University and the Software Center: ''This is a key question: How do you do a release? There are companies in the earliest development stage that do heartbeat-based releases; every component releases every third or every fourth week at the end of the agile sprints. You release all the new versions of the components simultaneously, so that is one way. However, this requires a high level of coordination between the different groups who are building different subsystems in different parts of the system. This is why many companies aim to reach a state where continuous integration and testing of the overall system is so advanced that any of the components in the system can release at any point in time, as long as they have passed the test cases. Then, the teams can start to operate on different heartbeats. Some of the leading cloud companies are now releasing multiple times a day. This should also be the goal for an AIoT system: frequent releases, early validation, less focus on dependency management between different teams''.</div>Sangamithra Panneer Selvamhttps://www.digitalplaybook.org/index.php?title=DTPeriodicTableSidebar&diff=7040DTPeriodicTableSidebar2022-03-28T15:16:58Z<p>Sangamithra Panneer Selvam: </p>
<hr />
<div>*main<br />
* Digital Twin Periodic Table<br />
** Digital Twin Periodic Table|Overview<br />
** Data Services|Data Services<br />
** Integration|Integration<br />
** Intelligence|Intelligence<br />
** UX|UX<br />
** Management|Management<br />
** Trustworthiness|Trustworthiness<br />
**https://www.digitaltwinconsortium.org/|{{{IMAGE= DTC Logo.png}}}</div>Sangamithra Panneer Selvamhttps://www.digitalplaybook.org/index.php?title=Integration&diff=7029Integration2022-03-28T14:02:07Z<p>Sangamithra Panneer Selvam: </p>
<hr />
<div><s data-category="PeriodicTable"></s><br />
<imagemap><br />
File:DT CPT Integration.png|1000px|center|<br />
<br />
rect 1081 31 1548 215[[#Position_1_on_page|Go to Enterprise system integration section]]<br />
rect 1086 247 1542 420 [[#Position_2_on_page|Go to Engineering systems integration section]]<br />
rect 1081 446 1548 619[[#Position_3_on_page|Go to OT/IoT system integration section]]<br />
rect 1081 656 1563 818 [[#Position_4_on_page|Go to Digital Twin Integration section]]<br />
rect 1075 834 1542 1028 [[#Position_5_on_page|Go to Collaboration platform integration section]]<br />
rect 1075 1039 1569 1222 [[#Position_6_on_page|Go to API Services section]]<br />
<br />
<br />
rect 63 58 1039 1595 [[Data Services|Go to Data Services]]<br />
rect 1590 52 3069 1217 [[Intelligence|Go to Intelligence]]<br />
rect 1070 1259 2067 1611 [[Management|Go to Management]]<br />
rect 2093 1243 3599 1611 [[Trustworthiness|Go to Trustworthiness]]<br />
rect 3137 47 4097 1222 [[UX|Go to UX]]<br />
desc none<br />
</imagemap><br />
<br />
<br />
Enables data access to existing internal and external enterprise systems and applications. Enables communication across different digital twins.<br />
<br />
<div id='Position_1_on_page'></div><br />
==Enterprise system integration==<br />
===Ability===<br />
The ability to integrate the digital twin with existing enterprise systems such as ERP, EAM, CRM and CMMS.<br />
===Purpose===<br />
The purpose is to integrate business applications, enabling data to flow between Digital Twin systems with ease.<br />
<br />
<div id='Position_2_on_page'></div><br />
==Engineering systems integration==<br />
===Ability===<br />
The ability to integrate the digital twin with existing engineering systems such as CAD, CAM, BIM, Historians<br />
===Purpose===<br />
The purpose is to integrate engineering applications, enabling models and data to flow between Digital Twin systems with ease.<br />
<br />
<div id='Position_3_on_page'></div><br />
<br />
==OT/IoT system integration==<br />
===Ability===<br />
The ability to integrate directly with control systems, SCADA systems, and IoT devices/sensors.<br />
===Purpose===<br />
The purpose is to integrate operational technology (OT) and IoT applications, enabling data to flow between Digital Twin systems with ease.<br />
<br />
<div id='Position_4_on_page'></div><br />
==Digital Twin Integration==<br />
===Ability===<br />
The ability to integrate or access information from existing digital twin instances. <br />
===Purpose===<br />
The purpose is to integrate Digital Twin applications with one another to enable interoperable Digital Twins.<br />
<br />
<div id='Position_5_on_page'></div><br />
==Collaboration platform integration==<br />
===Ability===<br />
The ability for the digital twin to interface with platforms like Yammer, Jabber, Teams, Slack.<br />
===Purpose===<br />
The purpose is to integrate collaboration platforms to provide Digital Twin users with a conversational user interface.<br />
<br />
<div id='Position_6_on_page'></div><br />
==API Services==<br />
===Ability===<br />
The ability for the digital twin to publish APIs to external, partner, and internal developers to access data and services.<br />
===Purpose===<br />
The purpose is to simplify Digital Twin development by allowing Digital Twins to integrate with products and services without knowing how they are implemented.</div>Sangamithra Panneer Selvamhttps://www.digitalplaybook.org/index.php?title=UX&diff=7028UX2022-03-28T14:01:50Z<p>Sangamithra Panneer Selvam: </p>
<hr />
<div><s data-category="PeriodicTable"></s><br />
<imagemap><br />
File:DT CPT UX.png|1000px|center|<br />
<br />
rect 3132 52 3583 215 [[#Position_1_on_page|Go to Basic Visualization section]]<br />
rect 3127 247 3573 420 [[#Position_2_on_page|Go to Advanced Visualization section]]<br />
rect 3116 441 3599 614 [[#Position_3_on_page|Go to Real-time Monitoring section]]<br />
rect 3122 640 3594 813 [[#Position_4_on_page|Go to Entity Relationship Visualization section]]<br />
rect 3116 839 3588 1023 [[#Position_5_on_page|Go to Augmented Reality (AR) section]]<br />
rect 3127 1039 3588 1228 [[#Position_6_on_page|Go to Virtual Reality (VR) section]]<br />
rect 3630 58 4097 226 [[#Position_7_on_page|Go to Dashboards section]]<br />
rect 3636 241 4087 420 [[#Position_8_on_page|Go to Continuous Intelligence section]]<br />
rect 3641 456 4103 619 [[#Position_9_on_page|Go to Business Intelligence section]]<br />
rect 3646 645 4082 813 [[#Position_10_on_page|Go to Business Process Management & Workflow section]]<br />
rect 3636 834 4097 1034 [[#Position_11_on_page|Go to Gaming Engine Visualization section]]<br />
rect 3620 1049 4103 1228 [[#Position_12_on_page|Go to 3D rendering section]]<br />
rect 3641 1243 4097 1416 [[#Position_13_on_page|Go to Gamification section]]<br />
<br />
rect 63 58 1039 1595 [[Data Services|Go to Data Services]]<br />
rect 1070 52 1558 1228 [[Integration|Go to Integration]]<br />
rect 1590 52 3069 1217 [[Intelligence|Go to Intelligence]]<br />
rect 1070 1259 2067 1611 [[Management|Go to Management]]<br />
rect 2093 1243 3599 1611 [[Trustworthiness|Go to Trustworthiness]]<br />
desc none<br />
</imagemap><br />
<br />
Provides the user with the ability to interact with Digital Twins and visualize its data.<br />
<br />
<div id='Position_1_on_page'></div><br />
==Basic Visualization==<br />
===Ability===<br />
The ability to graphically or parametrically (that is, through parameters and values) visualize data through simple charts, graphs, simple dashboards, tables, hierarchical and basic 3D views of the assets.<br />
===Purpose===<br />
The purpose is to help people understand the significance of data by placing it in a visual context.<br />
<br />
<div id='Position_2_on_page'></div><br />
==Advanced Visualization==<br />
===Ability===<br />
The ability to graphically or parametrically (that is, through parameters and values) visualize data through complex charts and graphs, dashboards fetching raw and processed data from multiple systems, complex 3D models and animations, and visualizations with overlaid data from different systems.<br />
===Purpose===<br />
The purpose is to help people understand the significance of data by placing it in a visual context.<br />
<br />
<div id='Position_3_on_page'></div><br />
==Real-time Monitoring==<br />
===Ability===<br />
The ability to present and interact with continuously updated information streaming at zero or low latency.<br />
===Purpose===<br />
The purpose is to help make decisions that must be taken in real time.<br />
<br />
<div id='Position_4_on_page'></div><br />
==Entity Relationship Visualization==<br />
===Ability===<br />
The ability to present Digital Twin entities and their hierarchical or graph-based relationships in an interactive way.<br />
===Purpose===<br />
The purpose is to help business users navigate and interact with complex entity (asset) hierarchies in a user friendly manner.<br />
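A minimal sketch of such an entity hierarchy, assuming a simple dict-based parent-to-children model; real digital twin platforms expose far richer graph models, and all entity names here are invented.

```python
# Hypothetical entity (asset) hierarchy: parent -> list of children.
hierarchy = {
    "plant": ["line-1", "line-2"],
    "line-1": ["robot-a"],
    "line-2": [],
    "robot-a": [],
}

def descendants(entity, tree=hierarchy):
    """Depth-first walk over an entity's children, as a navigation UI might."""
    result = []
    for child in tree.get(entity, []):
        result.append(child)
        result.extend(descendants(child, tree))
    return result

assert descendants("plant") == ["line-1", "robot-a", "line-2"]
```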
<br />
<div id='Position_5_on_page'></div><br />
==Augmented Reality (AR)==<br />
===Ability===<br />
The ability to provide an interactive experience of a real-world environment where the objects that reside in the real world are enhanced by computer-generated perceptual information, such as visual, auditory or haptic information.<br />
===Purpose===<br />
The purpose is to realize an improved, immersive, and interactive experience for the user by enriching the physical world with computer-generated content.<br />
<br />
<div id='Position_6_on_page'></div><br />
==Virtual Reality (VR)==<br />
===Ability===<br />
The ability to provide a simulated experience that can be similar to, or completely different from, the real world.<br />
===Purpose===<br />
The purpose is to realize an improved, immersive, and interactive experience for the user by simulating the physical world in a virtual environment.<br />
<br />
<div id='Position_7_on_page'></div><br />
==Dashboards==<br />
===Ability===<br />
The ability to provide a graphical user interface which provides at-a-glance views of key performance indicators relevant to a particular objective or business process.<br />
===Purpose===<br />
The purpose is to enable various personas in operations, technology, and business to visually understand the current or past state of a system.<br />
<br />
<div id='Position_8_on_page'></div><br />
==Continuous Intelligence==<br />
===Ability===<br />
The ability to analyze data in flight (signals) to derive insights and actions in a business user focused visual interface.<br />
===Purpose===<br />
The purpose is to enable various personas in operations, technology, and business to make informed real-time decisions.<br />
<br />
<div id='Position_9_on_page'></div><br />
==Business Intelligence ==<br />
===Ability===<br />
The ability to analyze stored data (records) to derive insights and actions in a business user focused visual interface.<br />
===Purpose===<br />
The purpose is to enable various personas in operations, technology, and business to make informed decisions.<br />
<br />
<div id='Position_10_on_page'></div><br />
==Business Process Management & Workflow==<br />
===Ability===<br />
The ability to execute a sequence of actions as a process flow to achieve specific business outcomes.<br />
===Purpose===<br />
The purpose is to have effective, repeatable actions that deliver the business outcomes of the Digital Twin.<br />
<br />
<div id='Position_11_on_page'></div><br />
==Gaming Engine Visualization==<br />
===Ability===<br />
The ability to create immersive virtual worlds and interactive experiences with gaming engine technology. <br />
===Purpose===<br />
The purpose is to enable Digital Twins in a digital metaverse where users interact with the Digital Twin in a highly interactive manner.<br />
<br />
<div id='Position_12_on_page'></div><br />
==3D rendering==<br />
===Ability===<br />
The ability to render 3D visualizations from point cloud data sets generated by LiDAR and other scanning technologies.<br />
===Purpose===<br />
The purpose is to interact with large point cloud and 3D datasets in a user friendly manner.<br />
<br />
<div id='Position_13_on_page'></div><br />
==Gamification ==<br />
===Ability===<br />
The ability to enable typical elements of game playing in Digital Twin interaction.<br />
===Purpose===<br />
The purpose is to facilitate the use of gamification elements such as points scoring, badges, competition etc. in the user experience and interactive engagement of a Digital Twin.</div>Sangamithra Panneer Selvamhttps://www.digitalplaybook.org/index.php?title=Trustworthiness&diff=7027Trustworthiness2022-03-28T14:01:26Z<p>Sangamithra Panneer Selvam: </p>
<hr />
<div><s data-category="PeriodicTable"></s><br />
<imagemap><br />
File:DT CPT Trustworthiness.png|1000px|center|<br />
<br />
rect 2104 1249 2555 1416 [[#Position_1_on_page|Go to Data Encryption section]]<br />
rect 2109 1443 2555 1616 [[#Position_2_on_page|Go to Device Security section]]<br />
rect 2618 1249 3074 1406 [[#Position_3_on_page|Go to Security section]]<br />
rect 2607 1443 3080 1626 [[#Position_4_on_page|Go to Privacy section]]<br />
rect 3111 1249 3588 1406 [[#Position_5_on_page|Go to Safety section]]<br />
rect 3116 1448 3588 1626 [[#Position_6_on_page|Go to Reliability section]]<br />
rect 3625 1443 4113 1626 [[#Position_7_on_page|Go to Resilience section]]<br />
<br />
rect 63 58 1039 1595 [[Data Services|Go to Data Services]]<br />
rect 1070 52 1558 1228 [[Integration|Go to Integration]]<br />
rect 1590 52 3069 1217 [[Intelligence|Go to Intelligence]]<br />
rect 1070 1259 2067 1611 [[Management|Go to Management]]<br />
rect 3137 47 4097 1222 [[UX|Go to UX]]<br />
<br />
desc none<br />
</imagemap><br />
<br />
<br />
Security, privacy, safety, reliability, and resilience capabilities.<br />
<br />
<div id='Position_1_on_page'></div><br />
==Data Encryption==<br />
===Ability===<br />
The ability to convert Digital Twin data from a readable format into an encoded format that can be used to transfer data securely. It also includes the ability to decrypt the data in order to read or process the data once it reaches its destination.<br />
===Purpose===<br />
The purpose of data encryption is to protect the confidentiality of digital data as it is stored in a Digital Twin system, accessed, and transmitted.<br />
<br />
<div id='Position_2_on_page'></div><br />
==Device Security==<br />
===Ability===<br />
The ability to enforce authenticated and authorized access to IoT device data through identity management, role-based access, encryption, and policies.<br />
===Purpose===<br />
The purpose is to control access to device data by having the appropriate privileges and enforcement framework for users and programs.<br />
<br />
<div id='Position_3_on_page'></div><br />
==Security==<br />
===Ability===<br />
The ability to protect Digital Twins from unintended or unauthorized access, change or destruction. Security concerns equipment, systems and information, ensuring availability, integrity and confidentiality of information.<br />
===Purpose===<br />
The purpose is to ensure a Digital Twin is protected from unintended or unauthorized access, change or destruction.<br />
<br />
<div id='Position_4_on_page'></div><br />
==Privacy==<br />
===Ability===<br />
The ability to enable the rights of individuals that interact with Digital Twins to control or influence what information related to them may be collected and stored and by whom and to whom that information may be disclosed.<br />
===Purpose===<br />
The purpose of privacy is to ensure that the rights of individuals with regard to data collection, storage and use are respected and enforced.<br />
<br />
<div id='Position_5_on_page'></div><br />
==Safety==<br />
===Ability===<br />
The ability to operate digital twins without causing unacceptable risk of physical injury or damage to the health of people, either directly, or indirectly as a result of damage to property or to the environment.<br />
===Purpose===<br />
The purpose is to ensure a Digital Twin is operating safely without causing an unacceptable risk to safety.<br />
<br />
<div id='Position_6_on_page'></div><br />
==Reliability==<br />
===Ability===<br />
The ability of a Digital Twin system or component to perform its required functions under stated conditions for a specified period of time. This includes expected levels of performance, QoS, functional availability and accuracy.<br />
<br />
<div id='Position_7_on_page'></div><br />
==Resilience==<br />
===Ability===<br />
The ability of a Digital Twin system or component to maintain an acceptable level of service in the face of disruption. This includes the ability to recover lost capacity in a timely manner (using a more or less automated procedure), or to reassign workloads and functions.<br />
===Purpose===<br />
The purpose is to ensure a Digital Twin is able to operate and maintain an acceptable level of service when disrupted.</div>Sangamithra Panneer Selvamhttps://www.digitalplaybook.org/index.php?title=Data_Services&diff=7026Data Services2022-03-28T14:01:14Z<p>Sangamithra Panneer Selvam: </p>
<hr />
<div><s data-category="PeriodicTable"></s><br />
<imagemap><br />
File:DT CPT Data Service.png|1000px|center|<br />
<br />
rect 47 58 525 220 [[#Position_1_on_page|Go to Data Acquisition and Ingestion section]]<br />
rect 63 252 519 425 [[#Position_2_on_page|Go to Data Streaming section]]<br />
rect 68 446 514 619 [[#Position_3_on_page|Go to Data Transformation and Wrangling section]]<br />
rect 68 666 498 818 [[#Position_4_on_page|Go to Data Contextualization section]]<br />
rect 52 839 519 1013 [[#Position_5_on_page|Go to Batch Processing section]]<br />
rect 79 1054 509 1222 [[#Position_6_on_page|Go to Real-time processing section]]<br />
rect 68 1249 519 1416 [[#Position_7_on_page|Go to Data PubSub Push section]]<br />
rect 52 1453 530 1616 [[#Position_8_on_page|Go to Data Aggregation section]]<br />
rect 561 47 1028 226 [[#Position_9_on_page|Go to Synthetic Data Generation section]]<br />
rect 572 247 1049 409 [[#Position_10_on_page|Go to Ontology Management section]]<br />
rect 567 446 1028 619 [[#Position_11_on_page|Go to Digital Twin Model Repository section]]<br />
rect 561 645 1018 829 [[#Position_12_on_page|Go to Digital Twin Instance Repository section]]<br />
rect 588 845 1028 1023 [[#Position_13_on_page|Go to Temporal (Time Series) Data Store section]]<br />
rect 577 1065 1023 1207 [[#Position_14_on_page|Go to Data Storage and Archive Services section]]<br />
rect 567 1243 1034 1416 [[#Position_15_on_page|Go to Simulation Model Repository section]]<br />
rect 572 1443 1034 1611 [[#Position_16_on_page|Go to AI Model Repository section]]<br />
<br />
rect 1070 52 1558 1228 [[Integration|Go to Integration]]<br />
rect 1590 52 3069 1217 [[Intelligence|Go to Intelligence]]<br />
rect 1070 1259 2067 1611 [[Management|Go to Management]]<br />
rect 2093 1243 3599 1611 [[Trustworthiness|Go to Trustworthiness]]<br />
rect 3137 47 4097 1222 [[UX|Go to UX]]<br />
<br />
desc none<br />
</imagemap><br />
<br />
<br />
Enables data access, ingestion and data management across the platform, from the edge to the cloud. It establishes the physical-to-virtual connection, receives data directly from equipment sensors or control systems, performs localized processing, and distributes data to other tiers.<br />
<br />
<br />
<div id='Position_1_on_page'></div><br />
==Data Acquisition and Ingestion==<br />
===Ability===<br />
The ability to configure and acquire data from different data sources, including control systems, historians, IoT sensors, smart devices, engineering systems, enterprise systems, etc.<br />
===Purpose===<br />
The purpose is to acquire data from the physical world, engineering technology systems, and information technology systems to support subsequent processing and insight generation.<br />
<br />
<div id='Position_2_on_page'></div><br />
==Data Streaming==<br />
===Ability===<br />
The ability to transfer large volumes of data continuously and incrementally between a source and a destination without having to access all of the data at the same time.<br />
===Purpose===<br />
The purpose is to acquire continuous packets of rapidly changing information in order to derive near real-time insights.<br />
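For illustration only, the incremental source-to-destination transfer described here can be sketched in Python as a generator that forwards fixed-size chunks without materializing the full stream (the `sensor_feed` source and the chunk size are assumed placeholders, not part of any specific platform):<br />

```python
def stream_records(source, chunk_size=3):
    """Yield records from an iterable source incrementally, chunk by
    chunk, so the consumer never holds the full dataset in memory."""
    buffer = []
    for record in source:
        buffer.append(record)
        if len(buffer) == chunk_size:
            yield list(buffer)
            buffer.clear()
    if buffer:  # flush the final partial chunk
        yield list(buffer)

# Hypothetical sensor feed, consumed chunk by chunk.
sensor_feed = iter(range(10))
chunks = list(stream_records(sensor_feed, chunk_size=4))
```

Each chunk can be handed to downstream processing as soon as it arrives, which is what enables the near real-time insights mentioned above.<br />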
<br />
<div id='Position_3_on_page'></div><br />
==Data Transformation and Wrangling==<br />
===Ability===<br />
The ability to convert data types and properties through cleaning, structuring and enriching raw data to make it suitable for further processing and analytics.<br />
===Purpose===<br />
The purpose is to make data usable in Digital Twins.<br />
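A minimal, hedged sketch of such wrangling in plain Python (the field names, units and the °F-to-°C normalization rule are illustrative assumptions):<br />

```python
def wrangle(raw_readings):
    """Clean, structure and enrich raw sensor readings: coerce types,
    drop malformed rows, and normalize units."""
    cleaned = []
    for row in raw_readings:
        try:
            value = float(row["value"])  # type conversion
        except (KeyError, ValueError, TypeError):
            continue  # drop malformed rows
        cleaned.append({
            "sensor": str(row.get("sensor", "unknown")).strip().lower(),
            # normalize Fahrenheit readings to Celsius
            "celsius": round((value - 32) * 5 / 9, 2)
                       if row.get("unit") == "F" else value,
        })
    return cleaned

raw = [
    {"sensor": " Pump-1 ", "value": "212", "unit": "F"},
    {"sensor": "Pump-2", "value": "bad"},  # dropped: not numeric
    {"sensor": "Pump-3", "value": 21.5, "unit": "C"},
]
clean = wrangle(raw)
```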
<br />
<div id='Position_4_on_page'></div><br />
==Data Contextualization==<br />
===Ability===<br />
The ability to add semantic labels or metadata to enrich real-time or transactional data.<br />
===Purpose===<br />
The purpose is to combine data from different sources, such as real-time and contextual data, to make it suitable for subsequent processing by the digital twin.<br />
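As an illustrative sketch, contextualization can be as simple as joining each reading with an asset registry (the registry contents and field names below are invented for the example):<br />

```python
# Hypothetical asset registry providing context for raw readings.
ASSET_CONTEXT = {
    "s1": {"asset": "compressor-A", "site": "plant-north", "unit": "bar"},
}

def contextualize(reading, registry):
    """Merge a raw real-time reading with descriptive metadata so the
    digital twin can interpret it."""
    context = registry.get(reading["sensor_id"], {})
    return {**reading, **context}

enriched = contextualize({"sensor_id": "s1", "value": 7.4}, ASSET_CONTEXT)
```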
<br />
<div id='Position_5_on_page'></div><br />
==Batch Processing==<br />
===Ability===<br />
The ability to execute against previously collected data in bulk form.<br />
===Purpose===<br />
The purpose is to provide an efficient way of processing high volumes of data in batches or groups.<br />
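A schematic Python example of bulk processing over previously collected data (the batch size and the `sum` job are arbitrary choices for the sketch):<br />

```python
def process_in_batches(records, batch_size, fn):
    """Apply a processing function to previously collected records
    in fixed-size batches (groups)."""
    results = []
    for start in range(0, len(records), batch_size):
        batch = records[start:start + batch_size]
        results.append(fn(batch))
    return results

daily_readings = [3, 1, 4, 1, 5, 9, 2, 6]
batch_sums = process_in_batches(daily_readings, 3, sum)
```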
<br />
<br />
<div id='Position_6_on_page'></div><br />
==Real-time processing==<br />
===Ability===<br />
The ability to manage and act on the captured data with minimal latency.<br />
===Purpose===<br />
The purpose is to support immediate insights from the data.<br />
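One common low-latency building block is an online (one-pass) statistic that updates in constant time per reading; the sketch below uses an exponentially weighted moving average with an assumed smoothing factor:<br />

```python
class OnlineEWMA:
    """Constant-time exponentially weighted moving average, suitable
    for acting on each captured reading with minimal latency."""
    def __init__(self, alpha=0.5):
        self.alpha = alpha  # smoothing factor (assumed value)
        self.value = None

    def update(self, x):
        # First reading initializes the average; later readings blend in.
        self.value = x if self.value is None else \
            self.alpha * x + (1 - self.alpha) * self.value
        return self.value

ewma = OnlineEWMA(alpha=0.5)
smoothed = [ewma.update(x) for x in [10.0, 12.0, 8.0]]
```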
<br />
<div id='Position_7_on_page'></div><br />
==Data PubSub Push==<br />
===Ability===<br />
The ability to push filtered data to different services based on a publish/subscribe model.<br />
===Purpose===<br />
The purpose is to provide information to subscribed digital twin consumers.<br />
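A toy in-process publish/subscribe bus illustrating the pattern (real deployments would typically use a broker such as MQTT or Kafka; the topic names and payloads here are invented):<br />

```python
from collections import defaultdict

class PubSubBus:
    """Minimal publish/subscribe bus: publishers push data to a topic;
    every subscribed consumer receives it."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, payload):
        for callback in self._subscribers[topic]:
            callback(payload)

bus = PubSubBus()
received = []
bus.subscribe("temperature", received.append)  # a digital-twin consumer
bus.publish("temperature", {"sensor": "s1", "value": 21.5})
bus.publish("pressure", {"sensor": "p1", "value": 2.0})  # no subscriber
```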
<br />
<div id='Position_8_on_page'></div><br />
==Data Aggregation==<br />
===Ability===<br />
The ability to gather raw data and express it in summary form.<br />
===Purpose===<br />
The purpose is to gather data from multiple sources with the intent of combining these data sources into a summary for data analysis.<br />
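A small sketch of per-sensor aggregation into summary form (the particular summary statistics chosen are illustrative):<br />

```python
def aggregate(readings):
    """Summarize raw (sensor, value) readings per sensor:
    count, min, max and mean."""
    summary = {}
    for sensor, value in readings:
        entry = summary.setdefault(
            sensor, {"count": 0, "min": value, "max": value, "total": 0.0})
        entry["count"] += 1
        entry["total"] += value
        entry["min"] = min(entry["min"], value)
        entry["max"] = max(entry["max"], value)
    # Express the gathered data in summary form for analysis.
    return {s: {"count": e["count"], "min": e["min"], "max": e["max"],
                "mean": e["total"] / e["count"]}
            for s, e in summary.items()}

readings = [("t1", 20.0), ("t1", 22.0), ("t2", 5.0)]
summary = aggregate(readings)
```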
<br />
<div id='Position_9_on_page'></div><br />
==Synthetic Data Generation==<br />
===Ability===<br />
The ability to generate synthetic data based on patterns and rules in existing sources.<br />
===Purpose===<br />
The purpose is to create representative synthetic data that can be used by the digital twin to train and score predictive models.<br />
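For illustration, a very simple generator that draws synthetic values matching the mean and spread of an observed sample (a Gaussian assumption; real platforms use far richer pattern and rule models):<br />

```python
import random
import statistics

def synthesize(sample, n, seed=42):
    """Generate n synthetic values following the mean/stdev pattern of
    an existing sample (Gaussian assumption, for illustration only)."""
    mu = statistics.mean(sample)
    sigma = statistics.stdev(sample)
    rng = random.Random(seed)  # seeded for reproducibility
    return [rng.gauss(mu, sigma) for _ in range(n)]

observed = [9.8, 10.1, 10.0, 9.9, 10.2]
synthetic = synthesize(observed, 1000)
```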
<br />
<div id='Position_10_on_page'></div><br />
==Ontology Management==<br />
===Ability===<br />
The ability to manage knowledge graphs and ontologies.<br />
===Purpose===<br />
The purpose is to enable a digital twin to interpret data directly from knowledge graphs and ontologies.<br />
<br />
<div id='Position_11_on_page'></div><br />
==Digital Twin Model Repository==<br />
===Ability===<br />
The ability to store, manage and retrieve the metadata that describes the digital twin model. The model can include formal data names, comprehensive data definitions, proper data structures, and precise data integrity rules.<br />
===Purpose===<br />
The purpose is to register and manage a portfolio of Digital Twin models in a central repository to improve configuration management and model governance.<br />
<br />
<div id='Position_12_on_page'></div><br />
==Digital Twin Instance Repository==<br />
===Ability===<br />
The ability to store, manage and retrieve digital twin instance data that conforms to the requirements of the digital twin model.<br />
===Purpose===<br />
The purpose is to store, manage and retrieve Digital Twin instance state data.<br />
<br />
<div id='Position_13_on_page'></div><br />
==Temporal (Time Series) Data Store==<br />
===Ability===<br />
The ability to store, organize and retrieve data relating to time instances through temporal data types, and to store information relating to past, present and potentially future time.<br />
===Purpose===<br />
The purpose is to store, manage and retrieve temporal (timeseries) data.<br />
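A minimal sketch of a temporal store with time-range retrieval, using binary search over sorted timestamps (a stand-in for a real time-series database):<br />

```python
import bisect

class TimeSeriesStore:
    """Tiny temporal store: keeps timestamps sorted so that
    time-range queries run via binary search."""
    def __init__(self):
        self._times = []   # kept sorted
        self._values = []

    def insert(self, timestamp, value):
        i = bisect.bisect(self._times, timestamp)
        self._times.insert(i, timestamp)
        self._values.insert(i, value)

    def range(self, start, end):
        """Return (timestamp, value) pairs with start <= t <= end."""
        lo = bisect.bisect_left(self._times, start)
        hi = bisect.bisect_right(self._times, end)
        return list(zip(self._times[lo:hi], self._values[lo:hi]))

store = TimeSeriesStore()
for t, v in [(30, 0.5), (10, 0.2), (20, 0.3)]:  # out-of-order arrival
    store.insert(t, v)
window = store.range(10, 20)
```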
<br />
<div id='Position_14_on_page'></div><br />
==Data Storage and Archive Services==<br />
===Ability===<br />
The ability to store, organize and retrieve data based on how frequently it will be accessed and how long it will be retained.<br />
===Purpose===<br />
The purpose is to reduce the cost and effort of managing Digital Twin data by using hot, cold and archival data services.<br />
<br />
<div id='Position_15_on_page'></div><br />
==Simulation Model Repository==<br />
===Ability===<br />
The ability to store, manage and retrieve the algorithmic codebase, business rules and meta data that describe a simulation model.<br />
===Purpose===<br />
The purpose is to register and manage a portfolio of simulation models in a central repository to improve configuration management and model governance.<br />
<br />
<div id='Position_16_on_page'></div><br />
==AI Model Repository==<br />
===Ability===<br />
The ability to store, manage, search and retrieve the algorithmic codebase that describes an artificial intelligence (AI) model or machine learning (ML) model.<br />
===Purpose===<br />
The purpose is to register and manage a portfolio of AI and machine learning models in a central repository to improve configuration management and model governance.</div>
<hr />
<div><s data-category="PeriodicTable"></s><br />
<imagemap><br />
File:DT CPT Management.png|1000px|center|<br />
<br />
rect 1096 1259 1548 1406 [[#Position_1_on_page|Go to Device Management section]]<br />
rect 1086 1443 1548 1605 [[#Position_2_on_page|Go to System Monitoring and Alerting section]]<br />
rect 1584 1249 2067 1416 [[#Position_3_on_page|Go to Logging section]]<br />
rect 1590 1432 2062 1616 [[#Position_4_on_page|Go to Data Governance section]]<br />
<br />
rect 63 58 1039 1595 [[Data Services|Go to Data Services]]<br />
rect 1070 52 1558 1228 [[Integration|Go to Integration]]<br />
rect 1590 52 3069 1217 [[Intelligence|Go to Intelligence]]<br />
rect 2093 1243 3599 1611 [[Trustworthiness|Go to Trustworthiness]]<br />
rect 3137 47 4097 1222 [[UX|Go to UX]]<br />
<br />
desc none<br />
</imagemap><br />
<br />
<br />
System and ecosystem management capabilities.<br />
<br />
<div id='Position_1_on_page'></div><br />
==Device Management==<br />
===Ability===<br />
The ability to provision, authenticate, configure, maintain, monitor and diagnose connected IoT devices operating as part of a Digital Twin environment.<br />
===Purpose===<br />
The purpose of (IoT) device management is to provide and support the whole spectrum of functional capabilities of the devices and sensors.<br />
<br />
<div id='Position_2_on_page'></div><br />
==System Monitoring and Alerting==<br />
===Ability===<br />
The ability to observe Digital Twin systems, applications, and services by collecting, analyzing, and acting on their health data in order to maximize their availability and performance.<br />
===Purpose===<br />
The purpose of system monitoring is to maximize the availability and performance of Digital Twin systems, applications and services.<br />
<br />
<div id='Position_3_on_page'></div><br />
==Logging==<br />
===Ability===<br />
The ability to record events, transactions and user access data to understand and trace the activities occurring in a Digital Twin system.<br />
===Purpose===<br />
The purpose of event logging is to provide records that enable event activities to be traced within a Digital Twin system.<br />
<br />
<div id='Position_4_on_page'></div><br />
==Data Governance==<br />
===Ability===<br />
The ability to manage the availability, usability, integrity and security of the data in Digital Twin systems, based on internal data standards and policies that also control data usage.<br />
===Purpose===<br />
The purpose is to ensure that data is consistent and trustworthy and doesn't get misused.</div>
<hr />
<div><s data-category="PeriodicTable"></s><br />
<imagemap><br />
File:DT CPT Trustworthiness.png|1000px|center|<br />
<br />
rect 2104 1249 2555 1416 [[#Position_1_on_page|Go to Data Encryption section]]<br />
rect 2109 1443 2555 1616 [[#Position_2_on_page|Go to Device Security section]]<br />
rect 2618 1249 3074 1406 [[#Position_3_on_page|Go to Security section]]<br />
rect 2607 1443 3080 1626 [[#Position_4_on_page|Go to Privacy section]]<br />
rect 3111 1249 3588 1406 [[#Position_5_on_page|Go to Safety section]]<br />
rect 3116 1448 3588 1626 [[#Position_6_on_page|Go to Reliability section]]<br />
rect 3625 1443 4113 1626 [[#Position_7_on_page|Go to Resilience section]]<br />
<br />
rect 63 58 1039 1595 [[Data Services|Go to Data Services]]<br />
rect 1070 52 1558 1228 [[Integration|Go to Integration]]<br />
rect 1584 52 3080 1207 [[Intelligence|Go to Intelligence]]<br />
rect 1070 1259 2067 1611 [[Management|Go to Management]]<br />
rect 3137 47 4097 1222 [[UX|Go to UX]]<br />
<br />
desc none<br />
</imagemap><br />
<br />
<br />
Security, privacy, safety, reliability, and resilience capabilities.<br />
<br />
<div id='Position_1_on_page'></div><br />
==Data Encryption==<br />
===Ability===<br />
The ability to convert Digital Twin data from a readable format into an encoded format that can be used to transfer data securely. It also includes the ability to decrypt the data in order to read or process the data once it reaches its destination.<br />
===Purpose===<br />
The purpose of data encryption is to protect the confidentiality of digital data as it is stored in a Digital Twin system, accessed, and transmitted.<br />
<br />
<div id='Position_2_on_page'></div><br />
==Device Security==<br />
===Ability===<br />
The ability to enforce authenticated and authorized access to IoT device data through identity management, role-based access, encryption, and policies.<br />
===Purpose===<br />
The purpose is to control access to device data by having the appropriate privileges and enforcement framework for users and programs.<br />
<br />
<div id='Position_3_on_page'></div><br />
==Security==<br />
===Ability===<br />
The ability to protect Digital Twins from unintended or unauthorized access, change or destruction. Security concerns equipment, systems and information, ensuring availability, integrity and confidentiality of information.<br />
===Purpose===<br />
The purpose is to ensure a Digital Twin is protected from unintended or unauthorized access, change or destruction.<br />
<br />
<div id='Position_4_on_page'></div><br />
==Privacy==<br />
===Ability===<br />
The ability to enable the rights of individuals that interact with Digital Twins to control or influence what information related to them may be collected and stored and by whom and to whom that information may be disclosed.<br />
===Purpose===<br />
The purpose of privacy is to ensure that the rights of individuals with regard to data collection, storage and use are respected and enforced.<br />
<br />
<div id='Position_5_on_page'></div><br />
==Safety==<br />
===Ability===<br />
The ability to operate digital twins without causing unacceptable risk of physical injury or damage to the health of people, either directly, or indirectly as a result of damage to property or to the environment.<br />
===Purpose===<br />
The purpose is to ensure a Digital Twin is operating safely without causing an unacceptable risk to safety.<br />
<br />
<div id='Position_6_on_page'></div><br />
==Reliability==<br />
===Ability===<br />
The ability of a Digital Twin system or component to perform its required functions under stated conditions for a specified period of time. This includes expected levels of performance, QoS, functional availability and accuracy.<br />
===Purpose===<br />
The purpose is to ensure a Digital Twin performs its required functions dependably under its stated operating conditions.<br />
<br />
<div id='Position_7_on_page'></div><br />
==Resilience==<br />
===Ability===<br />
The ability of a Digital Twin system or component to maintain an acceptable level of service in the face of disruption. This includes the ability to recover lost capacity in a timely manner (using a more or less automated procedure), or to reassign workloads and functions.<br />
===Purpose===<br />
The purpose is to ensure a Digital Twin is able to operate and maintain an acceptable level of service when disrupted.</div>
<hr />
<div><s data-category="PeriodicTable"></s><br />
<imagemap><br />
File:DT CPT UX.png|1000px|center|<br />
<br />
rect 3132 52 3583 215 [[#Position_1_on_page|Go to Basic Visualization section]]<br />
rect 3127 247 3573 420 [[#Position_2_on_page|Go to Advanced Visualization section]]<br />
rect 3116 441 3599 614 [[#Position_3_on_page|Go to Real-time Monitoring section]]<br />
rect 3122 640 3594 813 [[#Position_4_on_page|Go to Entity Relationship Visualization section]]<br />
rect 3116 839 3588 1023 [[#Position_5_on_page|Go to Augmented Reality (AR) section]]<br />
rect 3127 1039 3588 1228 [[#Position_6_on_page|Go to Virtual Reality (VR) section]]<br />
rect 3630 58 4097 226 [[#Position_7_on_page|Go to Dashboards section]]<br />
rect 3636 241 4087 420 [[#Position_8_on_page|Go to Continuous Intelligence section]]<br />
rect 3641 456 4103 619 [[#Position_9_on_page|Go to Business Intelligence section]]<br />
rect 3646 645 4082 813 [[#Position_10_on_page|Go to Business Process Management & Workflow section]]<br />
rect 3636 834 4097 1034 [[#Position_11_on_page|Go to Gaming Engine Visualization section]]<br />
rect 3620 1049 4103 1228 [[#Position_12_on_page|Go to 3D rendering section]]<br />
rect 3641 1243 4097 1416 [[#Position_13_on_page|Go to Gamification section]]<br />
<br />
rect 63 58 1039 1595 [[Data Services|Go to Data Services]]<br />
rect 1070 52 1558 1228 [[Integration|Go to Integration]]<br />
rect 1584 52 3080 1207 [[Intelligence|Go to Intelligence]]<br />
rect 1070 1259 2067 1611 [[Management|Go to Management]]<br />
rect 2093 1243 3599 1611 [[Trustworthiness|Go to Trustworthiness]]<br />
desc none<br />
</imagemap><br />
<br />
Provides the user with the ability to interact with Digital Twins and visualize their data.<br />
<br />
<div id='Position_1_on_page'></div><br />
==Basic Visualization==<br />
===Ability===<br />
The ability to graphically or parametrically (that is, through parameters and values) visualize data through simple charts, graphs, simple dashboards, tables, hierarchical and basic 3D views of the assets.<br />
===Purpose===<br />
The purpose is to help people understand the significance of data by placing it in a visual context.<br />
<br />
<div id='Position_2_on_page'></div><br />
==Advanced Visualization==<br />
===Ability===<br />
The ability to graphically or parametrically (that is, through parameters and values) visualize data through complex charts and graphs, dashboards fetching raw and processed data from multiple systems, complex 3D models and animations, and visualizations with overlaid data from different systems.<br />
===Purpose===<br />
The purpose is to help people understand the significance of data by placing it in a visual context.<br />
<br />
<div id='Position_3_on_page'></div><br />
==Real-time Monitoring==<br />
===Ability===<br />
The ability to present and interact with continuously updated information streaming at zero or low latency.<br />
===Purpose===<br />
The purpose is to help make decisions that must be taken in real time.<br />
<br />
<div id='Position_4_on_page'></div><br />
==Entity Relationship Visualization==<br />
===Ability===<br />
The ability to present Digital Twin entities and their hierarchical or graph-based relationships in an interactive way.<br />
===Purpose===<br />
The purpose is to help business users navigate and interact with complex entity (asset) hierarchies in a user friendly manner.<br />
<br />
<div id='Position_5_on_page'></div><br />
==Augmented Reality (AR)==<br />
===Ability===<br />
The ability to provide an interactive experience of a real-world environment in which real-world objects are enhanced by computer-generated perceptual information such as visual, auditory or haptic overlays.<br />
===Purpose===<br />
The purpose is to provide an improved, immersive and interactive experience for the user by augmenting the physical world with computer-generated information.<br />
<br />
<div id='Position_6_on_page'></div><br />
==Virtual Reality (VR)==<br />
===Ability===<br />
The ability to provide a simulated experience that can be similar to, or completely different from, the real world.<br />
===Purpose===<br />
The purpose is to provide an improved, immersive and interactive experience for the user by simulating the physical world in a virtual environment.<br />
<br />
<div id='Position_7_on_page'></div><br />
==Dashboards==<br />
===Ability===<br />
The ability to provide a graphical user interface which provides at-a-glance views of key performance indicators relevant to a particular objective or business process.<br />
===Purpose===<br />
The purpose is to enable various personas in operations, technology, and business to visually understand the current or past state of a system.<br />
<br />
<div id='Position_8_on_page'></div><br />
==Continuous Intelligence==<br />
===Ability===<br />
The ability to analyze data in flight (signals) to derive insights and actions in a business-user-focused visual interface.<br />
===Purpose===<br />
The purpose is to enable various personas in operations, technology and business to make informed real-time decisions.<br />
<br />
<div id='Position_9_on_page'></div><br />
==Business Intelligence ==<br />
===Ability===<br />
The ability to analyze stored data (records) to derive insights and actions in a business-user-focused visual interface.<br />
===Purpose===<br />
The purpose is to enable various personas in operations, technology and business to make informed decisions from historical records.<br />
<br />
<div id='Position_10_on_page'></div><br />
==Business Process Management & Workflow==<br />
===Ability===<br />
The ability to execute a sequence of actions as a process flow to achieve specific business outcomes.<br />
===Purpose===<br />
The purpose is to have effective, repeatable actions that deliver the business outcomes of the Digital Twin.<br />
<br />
<div id='Position_11_on_page'></div><br />
==Gaming Engine Visualization==<br />
===Ability===<br />
The ability to create immersive virtual worlds and interactive experiences with gaming engine technology. <br />
===Purpose===<br />
The purpose is to enable Digital Twins in a digital metaverse where users interact with the Digital Twin in a highly interactive manner.<br />
<br />
<div id='Position_12_on_page'></div><br />
==3D rendering==<br />
===Ability===<br />
The ability to render 3D visualizations from point cloud data sets generated by LiDAR and other scanning technologies.<br />
===Purpose===<br />
The purpose is to interact with large point cloud and 3D datasets in a user friendly manner.<br />
<br />
<div id='Position_13_on_page'></div><br />
==Gamification ==<br />
===Ability===<br />
The ability to enable typical elements of game playing in Digital Twin interaction.<br />
===Purpose===<br />
The purpose is to facilitate the use of gamification elements such as points scoring, badges, competition etc. in the user experience and interactive engagement of a Digital Twin.</div>
<hr />
<div><s data-category="PeriodicTable"></s><br />
<imagemap><br />
File:DT CPT Intelligence.png|1000px|center|<br />
<br />
rect 1595 52 2067 220 [[#Position_1_on_page|Go to Edge AI and Intelligence section]]<br />
rect 1584 252 2067 420 [[#Position_2_on_page|Go to Command and Control section]]<br />
rect 1605 441 2051 614 [[#Position_3_on_page|Go to Orchestration section]]<br />
rect 1590 640 2062 818 [[#Position_4_on_page|Go to Alerts and Notification section]]<br />
rect 1595 850 2057 1023 [[#Position_5_on_page|Go to Reporting section]]<br />
rect 1600 1039 2062 1217 [[#Position_6_on_page|Go to Data Analysis and Analytics section]]<br />
<br />
rect 2109 52 2565 210 [[#Position_7_on_page|Go to Prediction section]]<br />
rect 2104 247 2576 420 [[#Position_8_on_page|Go to Machine Learning (ML) section]]<br />
rect 2104 451 2576 624 [[#Position_9_on_page|Go to Artificial Intelligence section]]<br />
<br />
rect 2114 640 2576 818 [[#Position_10_on_page|Go to Federated Learning section]]<br />
rect 2099 845 2565 1013 [[#Position_11_on_page|Go to Simulation section]]<br />
<br />
rect 2104 1039 2555 1212 [[#Position_12_on_page|Go to Mathematical Analytics section]]<br />
rect 2618 446 3090 619 [[#Position_13_on_page|Go to Prescriptive Recommendations section]]<br />
rect 2618 640 3085 824 [[#Position_14_on_page|Go to Business Rules section]]<br />
rect 2613 850 3080 1013 [[#Position_15_on_page|Go to Distributed Ledger and Smart Contracts section]]<br />
rect 2618 1060 3085 1207 [[#Position_16_on_page|Go to Composition section]]<br />
<br />
<br />
rect 63 58 1039 1595 [[Data Services|Go to Data Services]]<br />
rect 1070 52 1558 1228 [[Integration|Go to Integration]]<br />
rect 1070 1259 2067 1611 [[Management|Go to Management]]<br />
rect 2093 1243 3599 1611 [[Trustworthiness|Go to Trustworthiness]]<br />
rect 3137 47 4097 1222 [[UX|Go to UX]]<br />
<br />
desc none<br />
</imagemap><br />
<br />
<br />
Provides an environment for the development and deployment of industrial Digital Twin solutions. It provides the services for data integration, basic and advanced analytics, AI, orchestration, and other Digital Twin process capabilities.<br />
<br />
<div id='Position_1_on_page'></div><br />
==Edge AI and Intelligence==<br />
===Ability===<br />
The ability to make decisions at the device level based on real-time data, with distribution and federation of analytics at the edge instead of transporting the data to the cloud for analytics.<br />
===Purpose===<br />
The purpose is to make real-time decisions close to the data source.<br />
<br />
<div id='Position_2_on_page'></div><br />
==Command and Control==<br />
===Ability===<br />
The ability to execute upon work instructions without human interaction. Control would be limited to IoT devices and non-plant controls.<br />
===Purpose===<br />
The purpose is to support future smart IoT devices with centralized management.<br />
<br />
<div id='Position_3_on_page'></div><br />
<br />
==Orchestration==<br />
===Ability===<br />
The ability to automate the configuration, management and coordination of systems, applications and digital twins.<br />
===Purpose===<br />
The purpose is to manage complex tasks and workflows between different systems, applications, digital twins, or systems of digital twins.<br />
<br />
<div id='Position_4_on_page'></div><br />
==Alerts and Notification==<br />
===Ability===<br />
The ability to display and manage alerts, messages, message queues, triggers, and notifications.<br />
===Purpose===<br />
The purpose is to trigger actions which may require intervention to the ongoing processes.<br />
<br />
<div id='Position_5_on_page'></div><br />
==Reporting==<br />
===Ability===<br />
The ability to generate configurable and customizable reports to get insights into the data.<br />
===Purpose===<br />
The purpose is to get insights into the data which can be useful for various stakeholders in the system as well as for regulatory compliance.<br />
<br />
<div id='Position_6_on_page'></div><br />
==Data Analysis and Analytics==<br />
===Ability===<br />
The study and presentation of data to create information and knowledge. This includes the ability to analyze data through charts, tables and dashboards, fetch data between dates, and filter data based on various criteria. It typically involves analyzing large sets of business data using mathematics, statistics and computer software with the objective of drawing conclusions.<br />
===Purpose===<br />
The purpose is to understand past trends from historical data.<br />
<br />
<div id='Position_7_on_page'></div><br />
==Prediction==<br />
===Ability===<br />
The ability to estimate that a specified event will happen in the future or will be a consequence of other events.<br />
===Purpose===<br />
The purpose is to use historical data, engineering, and analytical models to predict future events before they occur.<br />
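As a hedged illustration, the simplest analytical model of this kind is a least-squares trend line fitted to historical data and extrapolated forward (real predictive models are usually far more sophisticated; the wear values below are invented):<br />

```python
def linear_forecast(history, steps_ahead):
    """Fit a least-squares line over equally spaced history and
    extrapolate steps_ahead into the future."""
    n = len(history)
    xs = range(n)
    x_mean = (n - 1) / 2
    y_mean = sum(history) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, history))
             / sum((x - x_mean) ** 2 for x in xs))
    intercept = y_mean - slope * x_mean
    # Predict the value steps_ahead past the last observation.
    return intercept + slope * (n - 1 + steps_ahead)

wear = [1.0, 1.5, 2.0, 2.5]  # hypothetical wear measurements
predicted = linear_forecast(wear, steps_ahead=2)
```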
<br />
<div id='Position_8_on_page'></div><br />
==Machine Learning (ML)==<br />
===Ability===<br />
The ability of computer algorithms to improve a digital twin automatically through experience. The algorithms build a mathematical model based on "training data" in order to make predictions or decisions without being explicitly programmed to do so. Machine learning is seen as a subset of artificial intelligence.<br />
===Purpose===<br />
The purpose is to enable the digital twin and digital twin systems to learn from data, identify patterns, and make decisions with minimal human intervention.<br />
<br />
<div id='Position_9_on_page'></div><br />
==Artificial Intelligence==<br />
===Ability===<br />
The ability for a system to perform actions and take decisions like humans. AI would include machine learning, natural language processing, knowledge modelling and representation, reasoning, inferencing etc. It is based on the capacity of a computer to perform operations analogous to learning and decision making in humans, as by an expert system, a program for CAD or CAM, or a program for the perception and recognition of shapes in computer vision systems.<br />
===Purpose===<br />
The purpose is to enable a digital twin or a digital twin system to take actions and decisions similar to humans.<br />
<br />
<div id='Position_10_on_page'></div><br />
==Federated Learning==<br />
===Ability===<br />
The ability to train an algorithm across multiple decentralized digital twin edge devices or servers holding local data samples, without exchanging their data samples.<br />
===Purpose===<br />
The purpose is to enable multiple actors to build a common, robust machine learning model without sharing data, thus addressing critical issues such as data privacy, data security, data access rights and access to heterogeneous data.<br />
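The core aggregation step can be sketched as federated averaging (FedAvg-style): each device trains locally and only model parameters, never raw data, are combined (the weight vectors below are invented for the example):<br />

```python
def federated_average(local_weights):
    """FedAvg-style aggregation: average model parameters trained
    locally on separate devices, without sharing raw data samples."""
    n = len(local_weights)
    length = len(local_weights[0])
    return [sum(w[i] for w in local_weights) / n for i in range(length)]

# Parameter vectors from three (hypothetical) edge devices
# after a round of local training.
device_weights = [[0.2, 1.0], [0.4, 2.0], [0.6, 3.0]]
global_weights = federated_average(device_weights)
```

In a full system, the averaged global model would be pushed back to the devices for the next training round.<br />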
<br />
<div id='Position_11_on_page'></div><br />
==Simulation==<br />
===Ability===<br />
The ability to create an approximate imitation of a process or a system using historical information, physical models, video, audio, animation and what-if scenarios.<br />
===Purpose===<br />
The purpose is to imitate the behavior of a physical system in the digital twin before applying changes to the physical world. Training operations and maintenance teams on simulated digital twins is another purpose of simulation.<br />
<br />
<div id='Position_12_on_page'></div><br />
==Mathematical Analytics (Engineering Calculations)==<br />
===Ability===<br />
The ability to perform mathematical and statistical calculations to enable physics-based and other mathematical models.<br />
===Purpose===<br />
The purpose is to enable the use of physics models and mathematics calculations in Digital twin analytics.<br />
<br />
<div id='Position_13_on_page'></div><br />
==Prescriptive Recommendations==<br />
===Ability===<br />
The ability to create prescriptive recommendations based on business rules and AI logic to suggest the best next actions to take when a pre-determined event happens. <br />
===Purpose===<br />
The purpose is to enable Digital Twins to provide guidance based on a combination of analytics, business rules and workflow to create actions and deliver business outcomes.<br />
<br />
<div id='Position_14_on_page'></div><br />
==Business Rules==<br />
===Ability===<br />
The ability to create, manage and use business rules that influence the digital twin behavior throughout its lifecycle. <br />
===Purpose===<br />
The purpose is to enable Digital Twins to provide and manage business rules that influence a Digital Twin’s behavior.<br />
<br />
<div id='Position_15_on_page'></div><br />
==Distributed Ledger and Smart Contracts==<br />
===Ability===<br />
The ability to use distributed ledgers for digital twin applications that require immutable data for digital twin instances, transactions and automation (smart contracts). <br />
===Purpose===<br />
The purpose is to enable Digital twins to interact in an automated, trustworthy, and responsible manner with systems that support smart contracts and provide a full, immutable transaction record. <br />
<br />
<div id='Position_16_on_page'></div><br />
==Composition ==<br />
===Ability===<br />
The ability to use a modular digital twin application development approach to rapidly compose and recompose digital twin services that deliver use case specific outcomes. <br />
===Purpose===<br />
The purpose is to compose or recompose Digital Twins from a set of packaged, reusable business capabilities (PBCs) to reduce time to value, avoid duplication, and support citizen development of Digital Twins.</div>Sangamithra Panneer Selvamhttps://www.digitalplaybook.org/index.php?title=Data_Services&diff=7020Data Services2022-03-28T13:58:37Z<p>Sangamithra Panneer Selvam: </p>
<hr />
<div><s data-category="PeriodicTable"></s><br />
<imagemap><br />
File:DT CPT Data Service.png|1000px|center|<br />
<br />
rect 47 58 525 220 [[#Position_1_on_page|Go to Data Acquisition and Ingestion section]]<br />
rect 63 252 519 425 [[#Position_2_on_page|Go to Data Streaming section]]<br />
rect 68 446 514 619 [[#Position_3_on_page|Go to Data Transformation and Wrangling section]]<br />
rect 68 666 498 818 [[#Position_4_on_page|Go to Data Contextualization section]]<br />
rect 52 839 519 1013 [[#Position_5_on_page|Go to Batch Processing section]]<br />
rect 79 1054 509 1222 [[#Position_6_on_page|Go to Real-time processing section]]<br />
rect 68 1249 519 1416 [[#Position_7_on_page|Go to Data PubSub Push section]]<br />
rect 52 1453 530 1616 [[#Position_8_on_page|Go to Data Aggregation section]]<br />
rect 561 47 1028 226 [[#Position_9_on_page|Go to Synthetic Data Generation section]]<br />
rect 572 247 1049 409 [[#Position_10_on_page|Go to Ontology Management section]]<br />
rect 567 446 1028 619 [[#Position_11_on_page|Go to Digital Twin Model Repository section]]<br />
rect 561 645 1018 829 [[#Position_12_on_page|Go to Digital Twin Instance Repository section]]<br />
rect 588 845 1028 1023 [[#Position_13_on_page|Go to Temporal (Time Series) Data Store section]]<br />
rect 577 1065 1023 1207 [[#Position_14_on_page|Go to Data Storage and Archive Services section]]<br />
rect 567 1243 1034 1416 [[#Position_15_on_page|Go to Simulation Model Repository section]]<br />
rect 572 1443 1034 1611 [[#Position_16_on_page|Go to AI Model Repository section]]<br />
<br />
rect 1070 52 1558 1228 [[Integration|Go to Integration]]<br />
rect 1584 52 3080 1207 [[Intelligence|Go to Intelligence]]<br />
rect 1070 1259 2067 1611 [[Management|Go to Management]]<br />
rect 2093 1243 3599 1611 [[Trustworthiness|Go to Trustworthiness]]<br />
rect 3137 47 4097 1222 [[UX|Go to UX]]<br />
<br />
desc none<br />
</imagemap><br />
<br />
<br />
Enables data access, ingestion and data management across the platform from the edge to the cloud. It establishes the physical-to-virtual connection, receives data directly from equipment sensors or control systems, performs localized processing, and distributes data to other tiers.<br />
<br />
<br />
<div id='Position_1_on_page'></div><br />
==Data Acquisition and Ingestion==<br />
===Ability===<br />
The ability to configure and acquire data from different data sources, including control systems, historians, IoT sensors, smart devices, engineering systems, and enterprise systems.<br />
===Purpose===<br />
The purpose is to acquire data from the physical world, engineering technology systems, and information technology systems to support subsequent processing and insight generation.<br />
<br />
<div id='Position_2_on_page'></div><br />
==Data Streaming==<br />
===Ability===<br />
The ability to transfer large volumes of data continuously and incrementally between a source and a destination without having to access all of the data at the same time.<br />
===Purpose===<br />
The purpose is to acquire continuous packets of fast-changing information in order to derive near real-time insights.<br />
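Incremental transfer can be sketched with generators: readings flow through in small chunks, and an insight (here, a running mean) is updated as each chunk arrives. This is an illustrative sketch, not a specific streaming product's API:<br />

```python
def stream_in_chunks(readings, chunk_size=3):
    """Yield readings incrementally, never holding the full dataset in memory."""
    buffer = []
    for reading in readings:
        buffer.append(reading)
        if len(buffer) == chunk_size:
            yield list(buffer)
            buffer.clear()
    if buffer:                 # flush the final partial chunk
        yield buffer

def running_mean(chunks):
    """Update an insight (the mean) incrementally as each chunk arrives."""
    total = count = 0
    for chunk in chunks:
        total += sum(chunk)
        count += len(chunk)
        yield total / count
```
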
<br />
<div id='Position_3_on_page'></div><br />
==Data Transformation and Wrangling==<br />
===Ability===<br />
The ability to convert data types and properties through cleaning, structuring and enriching raw data to make it suitable for further processing and analytics.<br />
===Purpose===<br />
The purpose is to make data usable in Digital Twins.<br />
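A small wrangling sketch under assumed inputs (the record fields and the default unit are hypothetical): type conversion, dropping unusable records, normalizing identifiers, and enriching with defaults:<br />

```python
def wrangle(raw_records):
    """Clean, structure and enrich raw sensor records into a uniform shape."""
    cleaned = []
    for record in raw_records:
        try:
            value = float(record["value"])            # convert data types
        except (KeyError, TypeError, ValueError):
            continue                                  # drop unusable records
        cleaned.append({
            "sensor": record.get("sensor", "unknown").strip().lower(),  # normalize
            "value": value,
            "unit": record.get("unit", "degC"),       # enrich with a default unit
        })
    return cleaned
```
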
<br />
<div id='Position_4_on_page'></div><br />
==Data Contextualization==<br />
===Ability===<br />
The ability to add language or metadata to enrich real-time or transactional data.<br />
===Purpose===<br />
The purpose is to combine data from different sources, such as real-time and contextual data, to make it suitable for subsequent processing by the digital twin.<br />
<br />
<div id='Position_5_on_page'></div><br />
==Batch Processing==<br />
===Ability===<br />
The ability to execute against previously collected data in bulk form.<br />
===Purpose===<br />
The purpose is to provide an efficient way of processing high volumes of data in batches or groups.<br />
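Batch processing in miniature (the batch size and the per-batch computation are arbitrary choices for illustration): previously collected records are split into fixed-size groups and each group is processed in bulk:<br />

```python
def batches(records, size):
    """Split previously collected records into fixed-size groups."""
    for start in range(0, len(records), size):
        yield records[start:start + size]

def batch_means(records, size):
    """Process each batch in bulk; here the 'processing' is just a mean."""
    return [sum(batch) / len(batch) for batch in batches(records, size)]
```
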
<br />
<br />
<div id='Position_6_on_page'></div><br />
==Real-time processing==<br />
===Ability===<br />
The ability to manage and act on the captured data with minimal latency.<br />
===Purpose===<br />
The purpose is to support immediate insights from the data.<br />
<br />
<div id='Position_7_on_page'></div><br />
==Data PubSub Push==<br />
===Ability===<br />
The ability to push filtered data to different services based on a publish/subscribe model.<br />
===Purpose===<br />
The purpose is to provide information to subscribed digital twin consumers.<br />
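A minimal in-process publish/subscribe dispatcher illustrating the pattern (the topic naming scheme is a hypothetical example; a production system would use a broker such as MQTT or Kafka rather than this sketch):<br />

```python
from collections import defaultdict

class PubSubBus:
    """Minimal topic-based publish/subscribe dispatcher."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self._subscribers[topic]:   # push only to subscribers
            callback(message)

bus = PubSubBus()
received = []
bus.subscribe("pump-42/temperature", received.append)
bus.publish("pump-42/temperature", {"value_c": 71.3})
bus.publish("pump-42/pressure", {"value_bar": 2.1})  # no subscriber: not delivered
```
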
<br />
<div id='Position_8_on_page'></div><br />
==Data Aggregation==<br />
===Ability===<br />
The ability to gather raw data and express it in summary form.<br />
===Purpose===<br />
The purpose is to gather data from multiple sources with the intent of combining these data sources into a summary for data analysis.<br />
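Aggregation reduced to its essence (the reading shape and summary statistics are illustrative assumptions): group raw readings by source and emit a compact summary per group:<br />

```python
from collections import defaultdict

def aggregate(readings):
    """Combine raw (sensor, value) readings into a per-sensor summary."""
    groups = defaultdict(list)
    for sensor, value in readings:
        groups[sensor].append(value)
    return {
        sensor: {"count": len(vals), "min": min(vals),
                 "max": max(vals), "mean": sum(vals) / len(vals)}
        for sensor, vals in groups.items()
    }
```
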
<br />
<div id='Position_9_on_page'></div><br />
==Synthetic Data Generation==<br />
===Ability===<br />
The ability to generate synthetic data based on patterns and rules in existing sources.<br />
===Purpose===<br />
The purpose is to create representative synthetic data that can be used by the digital twin to train and score predictive models.<br />
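The simplest form of pattern-based generation, shown here only as a sketch: fit a distribution to an existing source and sample from it (real generators model correlations and rules as well, not just a single Gaussian):<br />

```python
import random
import statistics

def synthesize(real_values, n, seed=0):
    """Draw synthetic values that follow the mean/spread of an existing source."""
    mu = statistics.mean(real_values)
    sigma = statistics.stdev(real_values)
    rng = random.Random(seed)                # seeded for reproducibility
    return [rng.gauss(mu, sigma) for _ in range(n)]
```
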
<br />
<div id='Position_10_on_page'></div><br />
==Ontology Management==<br />
===Ability===<br />
The ability to manage knowledge graphs and ontologies.<br />
===Purpose===<br />
The purpose is to enable a digital twin to interpret data directly from knowledge graphs and ontologies.<br />
<br />
<div id='Position_11_on_page'></div><br />
==Digital Twin Model Repository==<br />
===Ability===<br />
The ability to store, manage and retrieve the metadata that describes the digital twin model. The model can include formal data names, comprehensive data definitions, proper data structures, and precise data integrity rules. <br />
===Purpose===<br />
The purpose is to register and manage a portfolio of Digital Twin models in a central repository to improve configuration management and model governance.<br />
<br />
<div id='Position_12_on_page'></div><br />
==Digital Twin Instance Repository==<br />
===Ability===<br />
The ability to store, manage and retrieve digital twin instance data that conforms to the requirements of the digital twin model.<br />
===Purpose===<br />
The purpose is to store, manage and retrieve Digital Twin instance state data.<br />
<br />
<div id='Position_13_on_page'></div><br />
==Temporal (Time Series) Data Store==<br />
===Ability===<br />
The ability to store, organize and retrieve data relating to time instances through temporal data types, covering information about the past, the present and, potentially, the future.<br />
===Purpose===<br />
The purpose is to store, manage and retrieve temporal (time series) data.<br />
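A toy append-only time series store with range retrieval, to make the capability concrete (class and method names are invented; real deployments use dedicated time series databases):<br />

```python
import bisect

class TimeSeriesStore:
    """Append-only time series store with range retrieval over timestamps."""
    def __init__(self):
        self._timestamps = []
        self._values = []

    def append(self, timestamp, value):
        # assumes timestamps arrive in increasing order, as sensor feeds usually do
        self._timestamps.append(timestamp)
        self._values.append(value)

    def range(self, start, end):
        """Return (timestamp, value) pairs with start <= timestamp <= end."""
        lo = bisect.bisect_left(self._timestamps, start)
        hi = bisect.bisect_right(self._timestamps, end)
        return list(zip(self._timestamps[lo:hi], self._values[lo:hi]))
```
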
<br />
<div id='Position_14_on_page'></div><br />
==Data Storage and Archive Services==<br />
===Ability===<br />
The ability to store, organize and retrieve data based on how frequently it will be accessed and how long it will be retained.<br />
===Purpose===<br />
The purpose is to reduce the cost and effort of managing Digital Twin data by using hot, cold and archival data services.<br />
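A hedged sketch of a tiering policy (the thresholds and tier names are arbitrary examples, not playbook recommendations): the tier is chosen from how recently and how often data is accessed and how long it must be retained:<br />

```python
def storage_tier(age_days, days_since_last_access):
    """Pick hot / cold / archive storage from access recency and data age."""
    if days_since_last_access <= 7:
        return "hot"        # frequently accessed: low-latency storage
    if age_days <= 365:
        return "cold"       # infrequently accessed but still within retention
    return "archive"        # old and rarely touched: cheapest tier
```
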
<br />
<div id='Position_15_on_page'></div><br />
==Simulation Model Repository==<br />
===Ability===<br />
The ability to store, manage and retrieve the algorithmic codebase, business rules and meta data that describe a simulation model.<br />
===Purpose===<br />
The purpose is to register and manage a portfolio of simulation models in a central repository to improve configuration management and model governance.<br />
<br />
<div id='Position_16_on_page'></div><br />
==AI Model Repository==<br />
===Ability===<br />
The ability to store, manage, search and retrieve the algorithmic codebase that describes an artificial intelligence (AI) model or machine learning (ML) model.<br />
===Purpose===<br />
The purpose is to register and manage a portfolio of AI and machine learning models in a central repository to improve configuration management and model governance.</div>Sangamithra Panneer Selvamhttps://www.digitalplaybook.org/index.php?title=UX&diff=7019UX2022-03-28T13:52:03Z<p>Sangamithra Panneer Selvam: </p>
<hr />
<div><s data-category="PeriodicTable"></s><br />
<imagemap><br />
File:DT CPT UX.png|1000px|center|<br />
<br />
rect 3132 52 3583 215 [[#Position_1_on_page|Go to Basic Visualization section]]<br />
rect 3127 247 3573 420 [[#Position_2_on_page|Go to Advanced Visualization section]]<br />
rect 3116 441 3599 614 [[#Position_3_on_page|Go to Real-time Monitoring section]]<br />
rect 3122 640 3594 813 [[#Position_4_on_page|Go to Entity Relationship Visualization section]]<br />
rect 3116 839 3588 1023 [[#Position_5_on_page|Go to Augmented Reality (AR) section]]<br />
rect 3127 1039 3588 1228 [[#Position_6_on_page|Go to Virtual Reality (VR) section]]<br />
rect 3630 58 4097 226 [[#Position_7_on_page|Go to Dashboards section]]<br />
rect 3636 241 4087 420 [[#Position_8_on_page|Go to Continuous Intelligence section]]<br />
rect 3641 456 4103 619 [[#Position_9_on_page|Go to Business Intelligence section]]<br />
rect 3646 645 4082 813 [[#Position_10_on_page|Go to Business Process Management & Workflow section]]<br />
rect 3636 834 4097 1034 [[#Position_11_on_page|Go to Gaming Engine Visualization section]]<br />
rect 3620 1049 4103 1228 [[#Position_12_on_page|Go to 3D rendering section]]<br />
rect 3641 1243 4097 1416 [[#Position_13_on_page|Go to Gamification section]]<br />
<br />
rect 63 58 1039 1595 [[Data Services|Go to Data Services]]<br />
rect 1102 42 1537 1201 [[Integration|Go to Integration]]<br />
rect 1584 52 3080 1207 [[Intelligence|Go to Intelligence]]<br />
rect 1070 1259 2067 1611 [[Management|Go to Management]]<br />
rect 2093 1243 3599 1611 [[Trustworthiness|Go to Trustworthiness]]<br />
desc none<br />
</imagemap><br />
<br />
Provides the user with the ability to interact with Digital Twins and visualize their data.<br />
<br />
<div id='Position_1_on_page'></div><br />
==Basic Visualization==<br />
===Ability===<br />
The ability to graphically or parametrically (that is, through parameters and values) visualize data through simple charts, graphs, simple dashboards, tables, hierarchical and basic 3D views of the assets.<br />
===Purpose===<br />
The purpose is to help people understand the significance of data by placing it in a visual context.<br />
<br />
<div id='Position_2_on_page'></div><br />
==Advanced Visualization==<br />
===Ability===<br />
The ability to graphically or parametrically (that is, through parameters and values) visualize data through complex charts and graphs, dashboards fetching raw and process data from multiple systems, complex 3D models and animations, and visualizations with overlaid data from different systems.<br />
===Purpose===<br />
The purpose is to help people understand the significance of data by placing it in a visual context.<br />
<br />
<div id='Position_3_on_page'></div><br />
==Real-time Monitoring==<br />
===Ability===<br />
The ability to present and interact with continuously updated information streaming at zero or low latency.<br />
===Purpose===<br />
The purpose is to support decisions that must be made in real time.<br />
<br />
<div id='Position_4_on_page'></div><br />
==Entity Relationship Visualization==<br />
===Ability===<br />
The ability to present Digital Twin entities and their hierarchical or graph-based relationships in an interactive way.<br />
===Purpose===<br />
The purpose is to help business users navigate and interact with complex entity (asset) hierarchies in a user-friendly manner.<br />
<br />
<div id='Position_5_on_page'></div><br />
==Augmented Reality (AR)==<br />
===Ability===<br />
The ability to provide an interactive experience of a real-world environment where the objects that reside in the real world are enhanced by computer-generated perceptual information, such as visual, auditory, or haptic cues.<br />
===Purpose===<br />
The purpose is to realize an improved, immersive, and interactive experience for the user by simulating the physical world in a virtual environment.<br />
<br />
<div id='Position_6_on_page'></div><br />
==Virtual Reality (VR)==<br />
===Ability===<br />
The ability to provide a simulated experience that can be similar to, or completely different, from the real world.<br />
===Purpose===<br />
The purpose is to realize an improved, immersive, and interactive experience for the user by simulating the physical world in a virtual environment.<br />
<br />
<div id='Position_7_on_page'></div><br />
==Dashboards==<br />
===Ability===<br />
The ability to provide a graphical user interface which provides at-a-glance views of key performance indicators relevant to a particular objective or business process.<br />
===Purpose===<br />
The purpose is to enable various personas in operations, technology, and business to visually understand the current or past state of a system.<br />
<br />
<div id='Position_8_on_page'></div><br />
==Continuous Intelligence==<br />
===Ability===<br />
The ability to analyze data in flight (signals) to derive insights and actions in a business user focused visual interface.<br />
===Purpose===<br />
The purpose is to enable various personas in operations, technology, and business to make informed real-time decisions.<br />
<br />
<div id='Position_9_on_page'></div><br />
==Business Intelligence ==<br />
===Ability===<br />
The ability to analyze stored data (records) to derive insights and actions in a business user focused visual interface.<br />
===Purpose===<br />
The purpose is to enable various personas in operations, technology, and business to make informed decisions based on stored data.<br />
<br />
<div id='Position_10_on_page'></div><br />
==Business Process Management & Workflow==<br />
===Ability===<br />
The ability to execute a sequence of actions as a process flow to achieve specific business outcomes.<br />
===Purpose===<br />
The purpose is to enable effective, repeatable actions that deliver the business outcomes of the Digital Twin.<br />
<br />
<div id='Position_11_on_page'></div><br />
==Gaming Engine Visualization==<br />
===Ability===<br />
The ability to create immersive virtual worlds and interactive experiences with gaming engine technology. <br />
===Purpose===<br />
The purpose is to enable Digital Twins in a digital metaverse where users interact with the Digital Twin in a highly interactive manner.<br />
<br />
<div id='Position_12_on_page'></div><br />
==3D rendering==<br />
===Ability===<br />
The ability to render 3D visualizations from point cloud data sets generated by LiDAR and other scanning technologies.<br />
===Purpose===<br />
The purpose is to interact with large point cloud and 3D datasets in a user-friendly manner.<br />
<br />
<div id='Position_13_on_page'></div><br />
==Gamification ==<br />
===Ability===<br />
The ability to enable typical elements of game playing in Digital Twin interaction.<br />
===Purpose===<br />
The purpose is to facilitate the use of gamification elements such as points scoring, badges, competition etc. in the user experience and interactive engagement of a Digital Twin.</div>Sangamithra Panneer Selvamhttps://www.digitalplaybook.org/index.php?title=UX&diff=7018UX2022-03-28T13:51:03Z<p>Sangamithra Panneer Selvam: </p>
<hr />
<div><s data-category="PeriodicTable"></s><br />
<imagemap><br />
File:DT CPT UX.png|1000px|center|<br />
<br />
rect 3132 52 3583 215 [[#Position_1_on_page|Go to Basic Visualization section]]<br />
rect 3127 247 3573 420 [[#Position_2_on_page|Go to Advanced Visualization section]]<br />
rect 3116 441 3599 614 [[#Position_3_on_page|Go to Real-time Monitoring section]]<br />
rect 3122 640 3594 813 [[#Position_4_on_page|Go to Entity Relationship Visualization section]]<br />
rect 3116 839 3588 1023 [[#Position_5_on_page|Go to Augmented Reality (AR) section]]<br />
rect 3127 1039 3588 1228 [[#Position_6_on_page|Go to Virtual Reality (VR) section]]<br />
rect 3630 58 4097 226 [[#Position_7_on_page|Go to Dashboards section]]<br />
rect 3636 241 4087 420 [[#Position_8_on_page|Go to Continuous Intelligence section]]<br />
rect 3641 456 4103 619 [[#Position_9_on_page|Go to Business Intelligence section]]<br />
rect 3646 645 4082 813 [[#Position_10_on_page|Go to Business Process Management & Workflow section]]<br />
rect 3636 834 4097 1034 [[#Position_11_on_page|Go to Gaming Engine Visualization section]]<br />
rect 3620 1049 4103 1228 [[#Position_12_on_page|Go to 3D rendering section]]<br />
rect 3641 1243 4097 1416 [[#Position_13_on_page|Go to Gamification section]]<br />
<br />
rect 458 13 659 606 [[Integration|Go to Integration]]<br />
rect 682 16 1312 608 [[Intelligence|Go to Intelligence]]<br />
rect 460 617 883 818 [[Management|Go to Management]]<br />
rect 907 617 1562 820 [[Trustworthiness|Go to Trustworthiness]]<br />
rect 7 18 436 809 [[Data Services|Go to Data Services]]<br />
<br />
desc none<br />
</imagemap><br />
<br />
Provides the user with the ability to interact with Digital Twins and visualize their data.<br />
<br />
<div id='Position_1_on_page'></div><br />
==Basic Visualization==<br />
===Ability===<br />
The ability to graphically or parametrically (that is, through parameters and values) visualize data through simple charts, graphs, simple dashboards, tables, hierarchical and basic 3D views of the assets.<br />
===Purpose===<br />
The purpose is to help people understand the significance of data by placing it in a visual context.<br />
<br />
<div id='Position_2_on_page'></div><br />
==Advanced Visualization==<br />
===Ability===<br />
The ability to graphically or parametrically (that is, through parameters and values) visualize data through complex charts and graphs, dashboards fetching raw and process data from multiple systems, complex 3D models and animations, and visualizations with overlaid data from different systems.<br />
===Purpose===<br />
The purpose is to help people understand the significance of data by placing it in a visual context.<br />
<br />
<div id='Position_3_on_page'></div><br />
==Real-time Monitoring==<br />
===Ability===<br />
The ability to present and interact with continuously updated information streaming at zero or low latency.<br />
===Purpose===<br />
The purpose is to support decisions that must be made in real time.<br />
<br />
<div id='Position_4_on_page'></div><br />
==Entity Relationship Visualization==<br />
===Ability===<br />
The ability to present Digital Twin entities and their hierarchical or graph-based relationships in an interactive way.<br />
===Purpose===<br />
The purpose is to help business users navigate and interact with complex entity (asset) hierarchies in a user-friendly manner.<br />
<br />
<div id='Position_5_on_page'></div><br />
==Augmented Reality (AR)==<br />
===Ability===<br />
The ability to provide an interactive experience of a real-world environment where the objects that reside in the real world are enhanced by computer-generated perceptual information, such as visual, auditory, or haptic cues.<br />
===Purpose===<br />
The purpose is to realize an improved, immersive, and interactive experience for the user by simulating the physical world in a virtual environment.<br />
<br />
<div id='Position_6_on_page'></div><br />
==Virtual Reality (VR)==<br />
===Ability===<br />
The ability to provide a simulated experience that can be similar to, or completely different, from the real world.<br />
===Purpose===<br />
The purpose is to realize an improved, immersive, and interactive experience for the user by simulating the physical world in a virtual environment.<br />
<br />
<div id='Position_7_on_page'></div><br />
==Dashboards==<br />
===Ability===<br />
The ability to provide a graphical user interface which provides at-a-glance views of key performance indicators relevant to a particular objective or business process.<br />
===Purpose===<br />
The purpose is to enable various personas in operations, technology, and business to visually understand the current or past state of a system.<br />
<br />
<div id='Position_8_on_page'></div><br />
==Continuous Intelligence==<br />
===Ability===<br />
The ability to analyze data in flight (signals) to derive insights and actions in a business user focused visual interface.<br />
===Purpose===<br />
The purpose is to enable various personas in operations, technology, and business to make informed real-time decisions.<br />
<br />
<div id='Position_9_on_page'></div><br />
==Business Intelligence ==<br />
===Ability===<br />
The ability to analyze stored data (records) to derive insights and actions in a business user focused visual interface.<br />
===Purpose===<br />
The purpose is to enable various personas in operations, technology, and business to make informed decisions based on stored data.<br />
<br />
<div id='Position_10_on_page'></div><br />
==Business Process Management & Workflow==<br />
===Ability===<br />
The ability to execute a sequence of actions as a process flow to achieve specific business outcomes.<br />
===Purpose===<br />
The purpose is to enable effective, repeatable actions that deliver the business outcomes of the Digital Twin.<br />
<br />
<div id='Position_11_on_page'></div><br />
==Gaming Engine Visualization==<br />
===Ability===<br />
The ability to create immersive virtual worlds and interactive experiences with gaming engine technology. <br />
===Purpose===<br />
The purpose is to enable Digital Twins in a digital metaverse where users interact with the Digital Twin in a highly interactive manner.<br />
<br />
<div id='Position_12_on_page'></div><br />
==3D rendering==<br />
===Ability===<br />
The ability to render 3D visualizations from point cloud data sets generated by LiDAR and other scanning technologies.<br />
===Purpose===<br />
The purpose is to interact with large point cloud and 3D datasets in a user-friendly manner.<br />
<br />
<div id='Position_13_on_page'></div><br />
==Gamification ==<br />
===Ability===<br />
The ability to enable typical elements of game playing in Digital Twin interaction.<br />
===Purpose===<br />
The purpose is to facilitate the use of gamification elements such as points scoring, badges, competition etc. in the user experience and interactive engagement of a Digital Twin.</div>Sangamithra Panneer Selvamhttps://www.digitalplaybook.org/index.php?title=File:DT_CPT_UX.png&diff=7017File:DT CPT UX.png2022-03-28T13:43:55Z<p>Sangamithra Panneer Selvam: Sangamithra Panneer Selvam uploaded a new version of File:DT CPT UX.png</p>
<hr />
<div></div>Sangamithra Panneer Selvamhttps://www.digitalplaybook.org/index.php?title=File:DT_CPT_Trustworthiness.png&diff=7016File:DT CPT Trustworthiness.png2022-03-28T13:42:05Z<p>Sangamithra Panneer Selvam: Sangamithra Panneer Selvam uploaded a new version of File:DT CPT Trustworthiness.png</p>
<hr />
<div></div>Sangamithra Panneer Selvamhttps://www.digitalplaybook.org/index.php?title=Trustworthiness&diff=7015Trustworthiness2022-03-28T13:40:00Z<p>Sangamithra Panneer Selvam: </p>
<hr />
<div><s data-category="PeriodicTable"></s><br />
<imagemap><br />
File:DT CPT Trustworthiness.png|1000px|center|<br />
<br />
rect 2104 1249 2555 1416 [[#Position_1_on_page|Go to Data Encryption section]]<br />
rect 2109 1443 2555 1616 [[#Position_2_on_page|Go to Device Security section]]<br />
rect 2618 1249 3074 1406 [[#Position_3_on_page|Go to Security section]]<br />
rect 2607 1443 3080 1626 [[#Position_4_on_page|Go to Privacy section]]<br />
rect 3111 1249 3588 1406 [[#Position_5_on_page|Go to Safety section]]<br />
rect 3116 1448 3588 1626 [[#Position_6_on_page|Go to Reliability section]]<br />
rect 3625 1443 4113 1626 [[#Position_7_on_page|Go to Resilience section]]<br />
<br />
rect 63 58 1039 1595 [[Data Services|Go to Data Services]]<br />
rect 1102 42 1537 1201 [[Integration|Go to Integration]]<br />
rect 1584 52 3080 1207 [[Intelligence|Go to Intelligence]]<br />
rect 1070 1259 2067 1611 [[Management|Go to Management]]<br />
rect 3137 47 4097 1222 [[UX|Go to UX]]<br />
<br />
desc none<br />
</imagemap><br />
<br />
<br />
Security, privacy, safety, reliability, and resilience capabilities.<br />
<br />
<div id='Position_1_on_page'></div><br />
==Data Encryption==<br />
===Ability===<br />
The ability to convert Digital Twin data from a readable format into an encoded format that can be used to transfer data securely. It also includes the ability to decrypt the data in order to read or process the data once it reaches its destination.<br />
===Purpose===<br />
The purpose of data encryption is to protect the confidentiality of digital data as it is stored in a Digital Twin system, accessed, and transmitted.<br />
<br />
<div id='Position_2_on_page'></div><br />
==Device Security==<br />
===Ability===<br />
The ability to enforce authenticated and authorized access to IoT device data through identity management, role-based access, encryption, and policies.<br />
===Purpose===<br />
The purpose is to control access to device data by having the appropriate privileges and enforcement framework for users and programs.<br />
<br />
<div id='Position_3_on_page'></div><br />
==Security==<br />
===Ability===<br />
The ability to protect Digital Twins from unintended or unauthorized access, change or destruction. Security concerns equipment, systems and information, ensuring availability, integrity and confidentiality of information.<br />
===Purpose===<br />
The purpose is to ensure a Digital Twin is protected from unintended or unauthorized access, change or destruction.<br />
<br />
<div id='Position_4_on_page'></div><br />
==Privacy==<br />
===Ability===<br />
The ability to enable the rights of individuals that interact with Digital Twins to control or influence what information related to them may be collected and stored and by whom and to whom that information may be disclosed.<br />
===Purpose===<br />
The purpose of privacy is to ensure that the rights of individuals with regard to data collection, storage and use are respected and enforced.<br />
<br />
<div id='Position_5_on_page'></div><br />
==Safety==<br />
===Ability===<br />
The ability to operate digital twins without causing unacceptable risk of physical injury or damage to the health of people, either directly, or indirectly as a result of damage to property or to the environment.<br />
===Purpose===<br />
The purpose is to ensure a Digital Twin is operating safely without causing an unacceptable risk to safety.<br />
<br />
<div id='Position_6_on_page'></div><br />
==Reliability==<br />
===Ability===<br />
The ability of a Digital Twin system or component to perform its required functions under stated conditions for a specified period of time. This includes expected levels of performance, QoS, functional availability and accuracy.<br />
===Purpose===<br />
The purpose is to ensure a Digital Twin system or component reliably performs its required functions under stated conditions for a specified period of time.<br />
<br />
<div id='Position_7_on_page'></div><br />
==Resilience==<br />
===Ability===<br />
The ability of a Digital Twin system or component to maintain an acceptable level of service in the face of disruption. This includes the ability to recover lost capacity in a timely manner (using a more or less automated procedure), or to reassign workloads and functions.<br />
===Purpose===<br />
The purpose is to ensure a Digital Twin is able to operate and maintain an acceptable level of service when disrupted.</div>Sangamithra Panneer Selvamhttps://www.digitalplaybook.org/index.php?title=Trustworthiness&diff=7014Trustworthiness2022-03-28T13:39:49Z<p>Sangamithra Panneer Selvam: </p>
<hr />
<div><s data-category="PeriodicTable"></s><br />
<imagemap><br />
File:DT CPT Trustworthiness.png|1000px|center|<br />
<br />
rect 2104 1249 2555 1416 [[#Position_1_on_page|Go to Data Encryption section]]<br />
rect 2109 1443 2555 1616 [[#Position_2_on_page|Go to Device Security section]]<br />
rect 2618 1249 3074 1406 [[#Position_3_on_page|Go to Security section]]<br />
rect 2607 1443 3080 1626 [[#Position_4_on_page|Go to Privacy section]]<br />
rect 3111 1249 3588 1406 [[#Position_5_on_page|Go to Safety section]]<br />
rect 3116 1448 3588 1626 [[#Position_6_on_page|Go to Reliability section]]<br />
rect 3625 1443 4113 1626 [[#Position_7_on_page|Go to Resilience section]]<br />
<br />
rect 63 58 1039 1595 [[Data Services|Go to Data Services]]<br />
rect 1102 42 1537 1201 [[Integration|Go to Integration]]<br />
rect 1584 52 3080 1207 [[Intelligence|Go to Intelligence]]<br />
rect 1070 1259 2067 1611 [[Management|Go to Management]]<br />
rect 3137 47 4097 1222 [[UX|Go to UX]]<br />
<br />
desc none<br />
</imagemap><br />
<br />
<br />
Security, privacy, safety, reliability, and resilience capabilities.<br />
<br />
<div id='Position_1_on_page'></div><br />
==Data Encryption==<br />
===Ability===<br />
The ability to convert Digital Twin data from a readable format into an encoded format that can be used to transfer data securely. It also includes the ability to decrypt the data in order to read or process the data once it reaches its destination.<br />
===Purpose===<br />
The purpose of data encryption is to protect the confidentiality of digital data as it is stored in a Digital Twin system, accessed, and transmitted.<br />
<br />
<div id='Position_2_on_page'></div><br />
==Device Security==<br />
===Ability===<br />
The ability to enforce authenticated and authorized access to IoT device data through identity management, role-based access, encryption, and policies.<br />
===Purpose===<br />
The purpose is to control access to device data by having the appropriate privileges and enforcement framework for users and programs.<br />
<br />
<div id='Position_3_on_page'></div><br />
==Security==<br />
===Ability===<br />
The ability to protect Digital Twins from unintended or unauthorized access, change or destruction. Security concerns equipment, systems and information, ensuring availability, integrity and confidentiality of information.<br />
===Purpose===<br />
The purpose is to ensure a Digital Twin is protected from unintended or unauthorized access, change or destruction.<br />
<br />
<div id='Position_4_on_page'></div><br />
==Privacy==<br />
===Ability===<br />
The ability to enable the rights of individuals that interact with Digital Twins to control or influence what information related to them may be collected and stored and by whom and to whom that information may be disclosed.<br />
===Purpose===<br />
The purpose of privacy is to ensure that the rights of individuals with regard to data collection, storage and use are respected and enforced.<br />
<br />
<div id='Position_5_on_page'></div><br />
==Safety==<br />
===Ability===<br />
The ability to operate digital twins without causing unacceptable risk of physical injury or damage to the health of people, either directly, or indirectly as a result of damage to property or to the environment.<br />
===Purpose===<br />
The purpose is to ensure a Digital Twin is operating safely without causing an unacceptable risk to safety.<br />
<br />
<div id='Position_6_on_page'></div><br />
==Reliability==<br />
===Ability===<br />
The ability of a Digital Twin system or component to perform its required functions under stated conditions for a specified period of time. This includes expected levels of performance, QoS, functional availability and accuracy.<br />
===Purpose===<br />
The purpose is to ensure a Digital Twin system or component performs its required functions consistently under the stated conditions.<br />
<br />
<div id='Position_7_on_page'></div><br />
==Resilience==<br />
===Ability===<br />
The ability of a Digital Twin system or component to maintain an acceptable level of service in the face of disruption. This includes the ability to recover lost capacity in a timely manner (using a more or less automated procedure), or to reassign workloads and functions.<br />
===Purpose===<br />
The purpose is to ensure a Digital Twin is able to operate and maintain an acceptable level of service when disrupted.</div>Sangamithra Panneer Selvamhttps://www.digitalplaybook.org/index.php?title=File:DT_CPT_Trustworthiness.png&diff=7013File:DT CPT Trustworthiness.png2022-03-28T13:32:31Z<p>Sangamithra Panneer Selvam: Sangamithra Panneer Selvam uploaded a new version of File:DT CPT Trustworthiness.png</p>
<hr />
<div></div>Sangamithra Panneer Selvamhttps://www.digitalplaybook.org/index.php?title=Management&diff=7012Management2022-03-28T13:31:52Z<p>Sangamithra Panneer Selvam: </p>
<hr />
<div><s data-category="PeriodicTable"></s><br />
<imagemap><br />
File:DT CPT Management.png|1000px|center|<br />
<br />
rect 1096 1259 1548 1406 [[#Position_1_on_page|Go to Device Management section]]<br />
rect 1086 1443 1548 1605 [[#Position_2_on_page|Go to System Monitoring and Alerting section]]<br />
rect 1584 1249 2067 1416 [[#Position_3_on_page|Go to Logging section]]<br />
rect 1590 1432 2062 1616 [[#Position_4_on_page|Go to Data Governance section]]<br />
<br />
rect 63 58 1039 1595 [[Data Services|Go to Data Services]]<br />
rect 1102 42 1537 1201[[Integration|Go to Integration]]<br />
rect 1584 52 3080 1207 [[Intelligence|Go to Intelligence]]<br />
rect 2093 1243 3599 1611 [[Trustworthiness|Go to Trustworthiness]]<br />
rect 3137 47 4097 1222 [[UX|Go to UX]]<br />
<br />
desc none<br />
</imagemap><br />
<br />
<br />
System and ecosystem management capabilities.<br />
<br />
<div id='Position_1_on_page'></div><br />
==Device Management==<br />
===Ability===<br />
The ability to provision and authenticate, configure, maintain, monitor and diagnose connected IoT devices operating as part of a Digital Twin environment.<br />
===Purpose===<br />
The purpose of (IoT) device management is to provide and support the whole spectrum of functional capabilities of the devices and sensors.<br />
<br />
<div id='Position_2_on_page'></div><br />
==System Monitoring and Alerting==<br />
===Ability===<br />
The ability to observe Digital Twin systems, applications, and services by collecting, analyzing, and acting on their health data in order to maximize their availability and performance.<br />
===Purpose===<br />
The purpose of system monitoring is to maximize the availability and performance of Digital Twin systems, applications and services by detecting and acting on health issues.<br />
<br />
<div id='Position_3_on_page'></div><br />
==Logging==<br />
===Ability===<br />
The ability to record events, transactions, and user access data in order to understand and trace the activities occurring in a Digital Twin system.<br />
===Purpose===<br />
The purpose of event logging is to provide records that enable event activities to be traced within a Digital Twin system.<br />
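A minimal sketch of event logging using Python's standard logging module; the logger name, event fields, and in-memory destination are illustrative assumptions — a production system would ship records to files or a central log service:<br />

```python
import io
import logging

# Route Digital Twin event records to an in-memory stream so the
# trail can be inspected below; each record carries who did what.
stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter("%(levelname)s %(name)s %(message)s"))
logger = logging.getLogger("twin.pump-17")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("user=alice action=read attribute=temperature")
logger.warning("user=bob action=write denied reason=insufficient-role")
trail = stream.getvalue()  # the traceable activity record
```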
<br />
<div id='Position_4_on_page'></div><br />
==Data Governance==<br />
===Ability===<br />
The ability to manage the availability, usability, integrity and security of the data in Digital Twin systems, based on internal data standards and policies that also control data usage.<br />
===Purpose===<br />
The purpose is to ensure that data is consistent and trustworthy and doesn't get misused.</div>Sangamithra Panneer Selvamhttps://www.digitalplaybook.org/index.php?title=File:DT_CPT_Management.png&diff=7010File:DT CPT Management.png2022-03-28T13:25:54Z<p>Sangamithra Panneer Selvam: Sangamithra Panneer Selvam uploaded a new version of File:DT CPT Management.png</p>
<hr />
<div></div>Sangamithra Panneer Selvamhttps://www.digitalplaybook.org/index.php?title=Intelligence&diff=7009Intelligence2022-03-28T13:22:33Z<p>Sangamithra Panneer Selvam: </p>
<hr />
<div><s data-category="PeriodicTable"></s><br />
<imagemap><br />
File:DT CPT Intelligence.png|1000px|center|<br />
<br />
rect 1595 52 2067 220 [[#Position_1_on_page|Go to Edge AI and Intelligence section]]<br />
rect 1584 252 2067 420 [[#Position_2_on_page|Go to Command and Control section]]<br />
rect 1605 441 2051 614 [[#Position_3_on_page|Go to Orchestration section]]<br />
rect 1590 640 2062 818 [[#Position_4_on_page|Go to Alerts and Notification section]]<br />
rect 1595 850 2057 1023 [[#Position_5_on_page|Go to Reporting section]]<br />
rect 1600 1039 2062 1217 [[#Position_6_on_page|Go to Data Analysis and Analytics section]]<br />
<br />
rect 2109 52 2565 210 [[#Position_7_on_page|Go to Prediction section]]<br />
rect 2104 247 2576 420 [[#Position_8_on_page|Go to Machine Learning (ML) section]]<br />
rect 2104 451 2576 624 [[#Position_9_on_page|Go to Artificial Intelligence section]]<br />
<br />
rect 2114 640 2576 818 [[#Position_10_on_page|Go to Federated Learning section]]<br />
rect 2099 845 2565 1013 [[#Position_11_on_page|Go to Simulation section]]<br />
<br />
rect 2104 1039 2555 1212 [[#Position_12_on_page|Go to Mathematical Analytics section]]<br />
rect 2618 446 3090 619 [[#Position_13_on_page|Go to Prescriptive Recommendations section]]<br />
rect 2618 640 3085 824 [[#Position_14_on_page|Go to Business Rules section]]<br />
rect 2613 850 3080 1013 [[#Position_15_on_page|Go to Distributed Ledger and Smart Contracts section]]<br />
rect 2618 1060 3085 1207 [[#Position_16_on_page|Go to Composition section]]<br />
<br />
<br />
rect 63 58 1039 1595 [[Data Services|Go to Data Services]]<br />
rect 1102 42 1537 1201 [[Integration|Go to Integration]]<br />
rect 1070 1259 2067 1611 [[Management|Go to Management]]<br />
rect 2093 1243 3599 1611 [[Trustworthiness|Go to Trustworthiness]]<br />
rect 3137 47 4097 1222 [[UX|Go to UX]]<br />
<br />
desc none<br />
</imagemap><br />
<br />
<br />
Provides an environment for the development and deployment of industrial Digital Twin solutions. It provides the services for data integration, basic and advanced analytics, AI, orchestration, and other Digital Twin process capabilities. <br />
<br />
<div id='Position_1_on_page'></div><br />
==Edge AI and Intelligence==<br />
===Ability===<br />
The ability to make decisions at the device level based on real-time data, with distribution and federation of analytics at the edge instead of transporting the data to the cloud to perform analytics.<br />
===Purpose===<br />
The purpose is to make real-time decisions in the field, close to the data source.<br />
<br />
<div id='Position_2_on_page'></div><br />
==Command and Control==<br />
===Ability===<br />
The ability to execute work instructions without human interaction. Control would be limited to IoT devices and non-plant controls.<br />
===Purpose===<br />
The purpose is to support future smart IoT devices with centralized management.<br />
<br />
<div id='Position_3_on_page'></div><br />
<br />
==Orchestration==<br />
===Ability===<br />
The ability to automate the configuration, management, and coordination of systems, applications, and digital twins. <br />
===Purpose===<br />
The purpose is to easily manage complex tasks and workflows between different systems, applications, digital twins, or systems of digital twins.<br />
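One way to sketch dependency-aware coordination, assuming a hypothetical four-task workflow, is a topological ordering over task dependencies using Python's standard graphlib:<br />

```python
from graphlib import TopologicalSorter

# Hypothetical workflow: each task maps to the tasks it depends on.
workflow = {
    "ingest": set(),
    "validate": {"ingest"},
    "update_twin": {"validate"},
    "notify": {"update_twin"},
}

# static_order() yields tasks so that every dependency precedes its
# dependents; an orchestrator could dispatch them in this sequence.
order = list(TopologicalSorter(workflow).static_order())
```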
<br />
<div id='Position_4_on_page'></div><br />
==Alerts and Notification==<br />
===Ability===<br />
The ability to display and manage alerts, messages, message queues, triggers, and notifications.<br />
===Purpose===<br />
The purpose is to trigger actions which may require intervention to the ongoing processes.<br />
<br />
<div id='Position_5_on_page'></div><br />
==Reporting==<br />
===Ability===<br />
The ability to generate configurable and customizable reports to get insights into the data.<br />
===Purpose===<br />
The purpose is to get insights into the data which can be useful for various stakeholders in the system as well as for regulatory compliance.<br />
<br />
<div id='Position_6_on_page'></div><br />
==Data Analysis and Analytics==<br />
===Ability===<br />
The ability to study and present data in order to create information and knowledge: analysing data through charts, tables and dashboards, fetching data between dates, and filtering data on various criteria. It typically involves analysing large sets of business data using mathematics, statistics, and computer software with the objective of drawing conclusions.<br />
===Purpose===<br />
The purpose is to understand past trends from historical data.<br />
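A minimal sketch of descriptive analysis over historical data, using Python's standard statistics module on hypothetical sensor values:<br />

```python
import statistics

# Hypothetical historical values for one twin attribute (degrees C).
temps = [68.2, 69.0, 70.1, 69.5, 71.4, 72.0, 71.8]

# Summary statistics a report or dashboard could render with charts.
summary = {
    "count": len(temps),
    "mean": round(statistics.mean(temps), 2),
    "stdev": round(statistics.stdev(temps), 2),
    "max": max(temps),
}
```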
<br />
<div id='Position_7_on_page'></div><br />
==Prediction==<br />
===Ability===<br />
The ability to estimate that a specified event will happen in the future or will be a consequence of other events.<br />
===Purpose===<br />
The purpose is to use historical data, engineering, and analytical models to predict future events before they occur.<br />
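A minimal sketch of trend-based prediction, assuming hypothetical wear readings and an ordinary least-squares line fit (one simple analytical model among many):<br />

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit returning (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    slope = num / den
    return slope, my - slope * mx

# Hypothetical wear readings over five inspection cycles.
cycles = [1, 2, 3, 4, 5]
wear = [0.10, 0.19, 0.31, 0.42, 0.50]
slope, intercept = fit_line(cycles, wear)
predicted_6 = slope * 6 + intercept  # estimate cycle 6 before it occurs
```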
<br />
<div id='Position_8_on_page'></div><br />
==Machine Learning (ML)==<br />
===Ability===<br />
The ability of computer algorithms that improve a digital twin automatically through experience. The algorithms build a mathematical model based on "training data", in order to make predictions or decisions without being explicitly programmed to do so. It is seen as a subset of artificial intelligence.<br />
===Purpose===<br />
The purpose is to enable the digital twin and digital twin systems to learn from data, identify patterns, and make decisions with minimal human intervention.<br />
<br />
<div id='Position_9_on_page'></div><br />
==Artificial Intelligence==<br />
===Ability===<br />
The ability for a system to perform actions and take decisions like humans. AI would include machine learning, natural language processing, knowledge modelling and representation, reasoning, inferencing etc. It is based on the capacity of a computer to perform operations analogous to learning and decision making in humans, as by an expert system, a program for CAD or CAM, or a program for the perception and recognition of shapes in computer vision systems.<br />
===Purpose===<br />
The purpose is to enable a digital twin or a digital twin system to take actions and decisions similar to humans.<br />
<br />
<div id='Position_10_on_page'></div><br />
==Federated Learning==<br />
===Ability===<br />
The ability to train an algorithm across multiple decentralized digital twin edge devices or servers holding local data samples, without exchanging their data samples.<br />
===Purpose===<br />
The purpose is to enable multiple actors to build a common, robust machine learning model without sharing data, thus addressing critical issues such as data privacy, data security, data access rights and access to heterogeneous data.<br />
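A minimal sketch of the federated-averaging idea on a one-parameter model, with hypothetical client data; only model weights cross the client boundary, never the local samples:<br />

```python
def local_update(w, data, lr=0.1):
    """One gradient step of the 1-parameter model y = w * x on local data."""
    grad = sum(2 * x * (w * x - y) for x, y in data) / len(data)
    return w - lr * grad

def federated_average(weights, sizes):
    """Server-side aggregation: average models weighted by data size."""
    return sum(w * s for w, s in zip(weights, sizes)) / sum(sizes)

# Two hypothetical edge clients; raw samples never leave the client.
clients = [[(1.0, 2.0), (2.0, 4.0)], [(1.0, 2.2), (3.0, 6.1)]]
w_global = 0.0
for _ in range(50):
    local = [local_update(w_global, d) for d in clients]
    w_global = federated_average(local, [len(d) for d in clients])
# w_global approaches ~2.0, the slope both clients roughly agree on.
```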
<br />
<div id='Position_11_on_page'></div><br />
==Simulation==<br />
===Ability===<br />
The ability to create an approximate imitation of a process or a system using historical information, physical models, video, audio, animation, and what-if scenarios.<br />
===Purpose===<br />
The purpose is to imitate the behavior of a physical system in the digital twin before applying to the physical world. Training operations and maintenance teams on simulated digital twins is another purpose of simulation.<br />
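A minimal what-if sketch, assuming a toy Newton's-law-of-cooling model of an asset's temperature; the parameters are illustrative, not calibrated to any real system:<br />

```python
def simulate_cooling(t0, ambient, k, steps, dt=1.0):
    """Discrete Newton's-law-of-cooling model: each step the asset moves
    a fraction k toward the ambient temperature."""
    temps = [t0]
    for _ in range(steps):
        temps.append(temps[-1] + k * (ambient - temps[-1]) * dt)
    return temps

# What-if: compare two ambient scenarios on the digital twin before
# changing anything on the physical asset.
baseline = simulate_cooling(t0=90.0, ambient=20.0, k=0.1, steps=30)
cooler = simulate_cooling(t0=90.0, ambient=10.0, k=0.1, steps=30)
```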
<br />
<div id='Position_12_on_page'></div><br />
==Mathematical Analytics (Engineering Calculations)==<br />
===Ability===<br />
The ability to perform mathematical and statistical calculations to enable physics-based and other mathematical models.<br />
===Purpose===<br />
The purpose is to enable the use of physics models and mathematics calculations in Digital twin analytics.<br />
<br />
<div id='Position_13_on_page'></div><br />
==Prescriptive Recommendations==<br />
===Ability===<br />
The ability to create prescriptive recommendations based on business rules and AI logic to suggest the best next actions to take when a pre-determined event happens. <br />
===Purpose===<br />
The purpose is to enable Digital Twins to provide guidance based on a combination of analytics, business rules and workflow to create actions and deliver business outcomes.<br />
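A minimal sketch of rule-driven next-best-action logic; the conditions, thresholds, and actions are hypothetical examples evaluated in priority order:<br />

```python
# Illustrative rule table: (condition, recommended action), first match wins.
RULES = [
    (lambda s: s["vibration"] > 7.0, "shut down and inspect bearing"),
    (lambda s: s["temp_c"] > 85.0, "reduce load and schedule maintenance"),
    (lambda s: s["hours_since_service"] > 5000, "schedule routine service"),
]

def next_best_action(state: dict) -> str:
    """Return the first matching recommendation, or a default."""
    for condition, action in RULES:
        if condition(state):
            return action
    return "no action required"
```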
<br />
<div id='Position_14_on_page'></div><br />
==Business Rules==<br />
===Ability===<br />
The ability to create, manage and use business rules that influence the digital twin behavior throughout its lifecycle. <br />
===Purpose===<br />
The purpose is to enable Digital Twins to provide and manage business rules that influence a Digital Twin’s behaviour.<br />
<br />
<div id='Position_15_on_page'></div><br />
==Distributed Ledger and Smart Contracts==<br />
===Ability===<br />
The ability to use distributed ledgers for digital twin applications that require immutable data for digital twin instances, transactions and automation (smart contracts). <br />
===Purpose===<br />
The purpose is to enable Digital twins to interact in an automated, trustworthy, and responsible manner with systems that support smart contracts and provide a full, immutable transaction record. <br />
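A minimal hash-chain sketch of the immutability idea (a real deployment would use an actual distributed ledger platform); the twin events are hypothetical:<br />

```python
import hashlib
import json

def add_block(chain, record):
    """Append a record linked to the previous block's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"record": record, "prev": prev}, sort_keys=True)
    chain.append({"record": record, "prev": prev,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(chain):
    """Recompute every hash; any tampering breaks a link."""
    for i, block in enumerate(chain):
        prev = chain[i - 1]["hash"] if i else "0" * 64
        body = json.dumps({"record": block["record"], "prev": prev},
                          sort_keys=True)
        if block["prev"] != prev:
            return False
        if block["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
    return True

ledger = []
add_block(ledger, {"twin": "pump-17", "event": "commissioned"})
add_block(ledger, {"twin": "pump-17", "event": "firmware-update"})
```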
<br />
<div id='Position_16_on_page'></div><br />
==Composition ==<br />
===Ability===<br />
The ability to use a modular digital twin application development approach to rapidly compose and recompose digital twin services that deliver use case specific outcomes. <br />
===Purpose===<br />
The purpose is to compose or recompose Digital twins from a set of packaged, reusable business capabilities (PBCs) to reduce time to value, duplication and support citizen development of Digital twins.</div>Sangamithra Panneer Selvamhttps://www.digitalplaybook.org/index.php?title=File:DT_CPT_Intelligence.png&diff=7008File:DT CPT Intelligence.png2022-03-28T13:10:16Z<p>Sangamithra Panneer Selvam: Sangamithra Panneer Selvam uploaded a new version of File:DT CPT Intelligence.png</p>
<hr />
<div></div>Sangamithra Panneer Selvamhttps://www.digitalplaybook.org/index.php?title=Integration&diff=7007Integration2022-03-28T13:07:37Z<p>Sangamithra Panneer Selvam: </p>
<hr />
<div><s data-category="PeriodicTable"></s><br />
<imagemap><br />
File:DT CPT Integration.png|1000px|center|<br />
<br />
rect 1081 31 1548 215 [[#Position_1_on_page|Go to Enterprise system integration section]]<br />
rect 1086 247 1542 420 [[#Position_2_on_page|Go to Engineering systems integration section]]<br />
rect 1081 446 1548 619 [[#Position_3_on_page|Go to OT/IoT system integration section]]<br />
rect 1081 656 1563 818 [[#Position_4_on_page|Go to Digital Twin Integration section]]<br />
rect 1075 834 1542 1028 [[#Position_5_on_page|Go to Collaboration platform integration section]]<br />
rect 1075 1039 1569 1222 [[#Position_6_on_page|Go to API Services section]]<br />
<br />
<br />
rect 63 58 1039 1595 [[Data Services|Go to Data Services]]<br />
rect 1584 52 3080 1207 [[Intelligence|Go to Intelligence]]<br />
rect 1070 1259 2067 1611 [[Management|Go to Management]]<br />
rect 2093 1243 3599 1611 [[Trustworthiness|Go to Trustworthiness]]<br />
rect 3137 47 4097 1222 [[UX|Go to UX]]<br />
desc none<br />
</imagemap><br />
<br />
<br />
Enables data access to existing internal and external enterprise systems and applications. Enables communication across different digital twins.<br />
<br />
<div id='Position_1_on_page'></div><br />
==Enterprise system integration==<br />
===Ability===<br />
The ability to integrate the digital twin with existing enterprise systems such as ERP, EAM, CRM, and CMMS.<br />
===Purpose===<br />
The purpose is to integrate business applications so that data can flow between Digital Twin and enterprise systems with ease.<br />
<br />
<div id='Position_2_on_page'></div><br />
==Engineering systems integration==<br />
===Ability===<br />
The ability to integrate the digital twin with existing engineering systems such as CAD, CAM, BIM, and historians.<br />
===Purpose===<br />
The purpose is to integrate engineering applications so that models can be used and data can flow between Digital Twin systems with ease. <br />
<br />
<div id='Position_3_on_page'></div><br />
<br />
==OT/IoT system integration==<br />
===Ability===<br />
The ability to integrate directly with control systems, IoT devices/sensors, and SCADA. <br />
===Purpose===<br />
The purpose is to integrate operational technology (OT) and IoT applications so that data can flow between Digital Twin systems with ease.<br />
<br />
<div id='Position_4_on_page'></div><br />
==Digital Twin Integration==<br />
===Ability===<br />
The ability to integrate or access information from existing digital twin instances. <br />
===Purpose===<br />
The purpose is to integrate Digital Twin applications with one another to enable interoperable Digital Twins.<br />
<br />
<div id='Position_5_on_page'></div><br />
==Collaboration platform integration==<br />
===Ability===<br />
The ability for the digital twin to interface with platforms like Yammer, Jabber, Teams, Slack.<br />
===Purpose===<br />
The purpose is to integrate collaboration platforms to provide Digital Twin users with a conversational user interface.<br />
<br />
<div id='Position_6_on_page'></div><br />
==API Services==<br />
===Ability===<br />
The ability for the digital twin to publish APIs to external, partner, and internal developers to access data and services.<br />
===Purpose===<br />
The purpose is to simplify Digital Twin development by allowing Digital Twins to integrate with products and services without knowing how they are implemented.</div>Sangamithra Panneer Selvamhttps://www.digitalplaybook.org/index.php?title=File:DT_CPT_Integration.png&diff=7003File:DT CPT Integration.png2022-03-28T12:53:49Z<p>Sangamithra Panneer Selvam: Sangamithra Panneer Selvam uploaded a new version of File:DT CPT Integration.png</p>
<hr />
<div></div>Sangamithra Panneer Selvamhttps://www.digitalplaybook.org/index.php?title=Data_Services&diff=7002Data Services2022-03-28T12:52:19Z<p>Sangamithra Panneer Selvam: </p>
<hr />
<div><s data-category="PeriodicTable"></s><br />
<imagemap><br />
File:DT CPT Data Service.png|1000px|center|<br />
<br />
rect 47 58 525 220 [[#Position_1_on_page|Go to Data Acquisition and Ingestion section]]<br />
rect 63 252 519 425 [[#Position_2_on_page|Go to Data Streaming section]]<br />
rect 68 446 514 619 [[#Position_3_on_page|Go to Data Transformation and Wrangling section]]<br />
rect 68 666 498 818 [[#Position_4_on_page|Go to Data Contextualization section]]<br />
rect 52 839 519 1013 [[#Position_5_on_page|Go to Batch Processing section]]<br />
rect 79 1054 509 1222 [[#Position_6_on_page|Go to Real-time processing section]]<br />
rect 68 1249 519 1416 [[#Position_7_on_page|Go to Data PubSub Push section]]<br />
rect 52 1453 530 1616 [[#Position_8_on_page|Go to Data Aggregation section]]<br />
rect 561 47 1028 226 [[#Position_9_on_page|Go to Synthetic Data Generation section]]<br />
rect 572 247 1049 409 [[#Position_10_on_page|Go to Ontology Management section]]<br />
rect 567 446 1028 619 [[#Position_11_on_page|Go to Digital Twin Model Repository section]]<br />
rect 561 645 1018 829 [[#Position_12_on_page|Go to Digital Twin Instance Repository section]]<br />
rect 588 845 1028 1023 [[#Position_13_on_page|Go to Temporal (Time Series) Data Store section]]<br />
rect 577 1065 1023 1207 [[#Position_14_on_page|Go to Data Storage and Archive Services section]]<br />
rect 567 1243 1034 1416 [[#Position_15_on_page|Go to Simulation Model Repository section]]<br />
rect 572 1443 1034 1611 [[#Position_16_on_page|Go to AI Model Repository section]]<br />
<br />
rect 1102 42 1537 1201 [[Integration|Go to Integration]]<br />
rect 1584 52 3080 1207 [[Intelligence|Go to Intelligence]]<br />
rect 1070 1259 2067 1611 [[Management|Go to Management]]<br />
rect 2093 1243 3599 1611 [[Trustworthiness|Go to Trustworthiness]]<br />
rect 3137 47 4097 1222 [[UX|Go to UX]]<br />
<br />
desc none<br />
</imagemap><br />
<br />
<br />
Enables data access, ingestion and data management across the platform from the edge to the cloud. It establishes the physical-to-virtual connection, receives data directly from equipment sensors or control systems, performs localized processing, and distributes data to other tiers.<br />
<br />
<br />
<div id='Position_1_on_page'></div><br />
==Data Acquisition and Ingestion==<br />
===Ability===<br />
The ability to configure and acquire data from different data sources, including control systems, historians, IoT sensors, smart devices, engineering systems, enterprise systems, etc.<br />
===Purpose===<br />
The purpose is to acquire data from the physical world, engineering technology systems, and information technology systems to support subsequent processing and insight generation.<br />
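A minimal acquisition sketch using Python's standard library (the source identifiers and reading values are hypothetical):

```python
import queue

def acquire(readings, ingest_queue, source_id):
    """Tag readings from one configured source and push them for ingestion."""
    for reading in readings:
        ingest_queue.put({"source": source_id, "value": reading})

# Hypothetical sources: an IoT temperature sensor and a historian export.
q = queue.Queue()
acquire([21.5, 21.7], q, "sensor/temp-01")
acquire([3010, 3025], q, "historian/pump-speed")
ingested = [q.get() for _ in range(q.qsize())]
```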
<br />
<div id='Position_2_on_page'></div><br />
==Data Streaming==<br />
===Ability===<br />
The ability to transfer large volumes of data continuously and incrementally between a source and a destination without having to access all of the data at the same time.<br />
===Purpose===<br />
The purpose is to acquire continuous packets of information that change at high speed, in order to derive near real-time insights.<br />
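The incremental-transfer idea can be sketched with a generator that hands data to the destination in small chunks, so neither side ever holds the full data set:

```python
def stream(source, chunk_size=2):
    """Yield records incrementally so the destination never holds the full set."""
    batch = []
    for record in source:
        batch.append(record)
        if len(batch) == chunk_size:
            yield batch
            batch = []
    if batch:  # flush the final partial chunk
        yield batch

chunks = list(stream(range(5), chunk_size=2))
# chunks == [[0, 1], [2, 3], [4]]
```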
<br />
<div id='Position_3_on_page'></div><br />
==Data Transformation and Wrangling==<br />
===Ability===<br />
The ability to convert data types and properties by cleaning, structuring and enriching raw data to make it suitable for further processing and analytics.<br />
===Purpose===<br />
The purpose is to make data usable in Digital Twins.<br />
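A sketch of the three wrangling steps (cleaning, structuring, enriching) on hypothetical sensor rows:

```python
def wrangle(raw_records):
    """Clean, structure and enrich raw rows into analysis-ready records."""
    cleaned = []
    for rec in raw_records:
        if rec.get("value") is None:                  # cleaning: drop incomplete rows
            continue
        cleaned.append({
            "sensor": rec["sensor"].strip().lower(),  # structuring: normalise names
            "value": float(rec["value"]),             # type conversion
            "unit": rec.get("unit", "unknown"),       # enrichment: default metadata
        })
    return cleaned

rows = [{"sensor": " Temp-01 ", "value": "21.4"},
        {"sensor": "Temp-02", "value": None}]
cleaned = wrangle(rows)
# cleaned == [{"sensor": "temp-01", "value": 21.4, "unit": "unknown"}]
```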
<br />
<div id='Position_4_on_page'></div><br />
==Data Contextualization==<br />
===Ability===<br />
The ability to add semantics or metadata to enrich real-time or transactional data.<br />
===Purpose===<br />
The purpose is to combine data from different sources, such as real-time readings and contextual information, to make it suitable for subsequent processing by the digital twin.<br />
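Contextualization can be sketched as a join of real-time readings with static asset metadata (asset IDs and attributes here are hypothetical):

```python
def contextualize(readings, asset_context):
    """Enrich each real-time reading with static context for its asset."""
    return [{**r, **asset_context.get(r["asset_id"], {})} for r in readings]

context = {"pump-7": {"site": "Plant A", "model": "PX-200"}}
enriched = contextualize([{"asset_id": "pump-7", "flow": 12.3}], context)
# enriched[0] carries the reading plus its site and model context
```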
<br />
<div id='Position_5_on_page'></div><br />
==Batch Processing==<br />
===Ability===<br />
The ability to execute processing against previously collected data in bulk form.<br />
===Purpose===<br />
The purpose is to provide an efficient way of processing high volumes of data in batches or groups.<br />
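A simple illustration of batch execution, slicing previously collected records into fixed-size groups and applying a processing function to each:

```python
def process_in_batches(records, batch_size, fn):
    """Apply fn to previously collected data one batch at a time."""
    results = []
    for i in range(0, len(records), batch_size):
        results.append(fn(records[i:i + batch_size]))
    return results

# e.g. summing collected readings in groups of four
totals = process_in_batches(list(range(10)), 4, sum)
# totals == [6, 22, 17]
```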
<br />
<br />
<div id='Position_6_on_page'></div><br />
==Real-time processing==<br />
===Ability===<br />
The ability to manage and act on captured data with minimal latency.<br />
===Purpose===<br />
The purpose is to support immediate insights from the data.<br />
<br />
<div id='Position_7_on_page'></div><br />
==Data PubSub Push==<br />
===Ability===<br />
The ability to deliver filtered data to different services based on a publish/subscribe model.<br />
===Purpose===<br />
The purpose is to provide information to subscribed digital twin consumers.<br />
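A minimal in-process publish/subscribe dispatcher illustrating the pattern (topic names are hypothetical; a production system would use a broker such as MQTT or Kafka):

```python
from collections import defaultdict

class PubSub:
    """Minimal in-process publish/subscribe dispatcher."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self.subscribers[topic]:
            callback(message)

bus = PubSub()
received = []
bus.subscribe("twin/pump-7/alerts", received.append)
bus.publish("twin/pump-7/alerts", {"level": "warning"})
bus.publish("twin/other", {"level": "info"})  # no subscriber, so dropped
```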
<br />
<div id='Position_8_on_page'></div><br />
==Data Aggregation==<br />
===Ability===<br />
The ability to gather raw data and express it in summary form.<br />
===Purpose===<br />
The purpose is to gather data from multiple sources and combine them into a summary suitable for data analysis.<br />
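A sketch of aggregation across multiple sources into one summary record (source names and readings are hypothetical):

```python
def aggregate(sources):
    """Combine readings from multiple sources into one summary record."""
    values = [v for readings in sources.values() for v in readings]
    return {
        "count": len(values),
        "min": min(values),
        "max": max(values),
        "mean": sum(values) / len(values),
    }

summary = aggregate({"sensor-a": [10, 12], "sensor-b": [11, 15]})
# summary == {"count": 4, "min": 10, "max": 15, "mean": 12.0}
```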
<br />
<div id='Position_9_on_page'></div><br />
==Synthetic Data Generation==<br />
===Ability===<br />
The ability to generate synthetic data based on patterns and rules in existing sources.<br />
===Purpose===<br />
The purpose is to create representative synthetic data that can be used by the digital twin to train and score predictive models.<br />
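One simple pattern-based approach is to fit summary statistics to existing samples and draw new values from the fitted distribution (a sketch only; real generators also preserve correlations and rules):

```python
import random
import statistics

def synthesize(samples, n, seed=42):
    """Generate synthetic values matching the mean/stdev of existing data."""
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)
    rng = random.Random(seed)  # seeded for reproducibility
    return [rng.gauss(mu, sigma) for _ in range(n)]

observed = [20.1, 20.4, 19.8, 20.6, 20.0]
synthetic = synthesize(observed, n=100)
```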
<br />
<div id='Position_10_on_page'></div><br />
==Ontology Management==<br />
===Ability===<br />
The ability to manage knowledge graphs and ontologies.<br />
===Purpose===<br />
The purpose is to enable a digital twin to interpret data directly from knowledge graphs and ontologies.<br />
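The idea of interpreting data directly from a knowledge graph can be sketched with a tiny triple store (the subjects, predicates and objects are hypothetical):

```python
# Tiny triple store: (subject, predicate, object) facts a twin can query.
triples = {
    ("pump-7", "is_a", "CentrifugalPump"),
    ("CentrifugalPump", "subclass_of", "Pump"),
    ("pump-7", "located_in", "Plant A"),
}

def objects(subject, predicate):
    """Return all objects related to a subject via a predicate."""
    return {o for (s, p, o) in triples if s == subject and p == predicate}

kinds = objects("pump-7", "is_a")
# kinds == {"CentrifugalPump"}
```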
<br />
<div id='Position_11_on_page'></div><br />
==Digital Twin Model Repository==<br />
===Ability===<br />
The ability to store, manage and retrieve the metadata that describes the digital twin model. The model can include formal data names, comprehensive data definitions, proper data structures, and precise data integrity rules.<br />
===Purpose===<br />
The purpose is to register and manage a portfolio of Digital Twin models in a central repository to improve configuration management and model governance.<br />
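A minimal registry sketch keyed by model name and version, supporting the governance use case described above (model names and metadata fields are hypothetical):

```python
class ModelRepository:
    """Central registry of digital twin model metadata, keyed by name+version."""
    def __init__(self):
        self._models = {}

    def register(self, name, version, metadata):
        self._models[(name, version)] = metadata

    def get(self, name, version):
        return self._models[(name, version)]

    def versions(self, name):
        """List all registered versions of a model, sorted."""
        return sorted(v for (n, v) in self._models if n == name)

repo = ModelRepository()
repo.register("pump", "1.0", {"fields": ["flow", "pressure"]})
repo.register("pump", "1.1", {"fields": ["flow", "pressure", "vibration"]})
```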
<br />
<div id='Position_12_on_page'></div><br />
==Digital Twin Instance Repository==<br />
===Ability===<br />
The ability to store, manage and retrieve digital twin instance data that conforms to the requirements of the digital twin model.<br />
===Purpose===<br />
The purpose is to store, manage and retrieve Digital Twin instance state data.<br />
<br />
<div id='Position_13_on_page'></div><br />
==Temporal (Time Series) Data Store==<br />
===Ability===<br />
The ability to store, organize and retrieve data relating to time instances through temporal data types, covering past, present and potentially future time.<br />
===Purpose===<br />
The purpose is to store, manage and retrieve temporal (timeseries) data.<br />
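A sketch of the core operations of a time-series store, keeping readings ordered by timestamp and supporting range queries (timestamps and values are hypothetical):

```python
import bisect

class TimeSeriesStore:
    """Store readings ordered by timestamp and query a time range."""
    def __init__(self):
        self._times = []
        self._values = []

    def insert(self, ts, value):
        i = bisect.bisect(self._times, ts)  # keep timestamps sorted
        self._times.insert(i, ts)
        self._values.insert(i, value)

    def range(self, start, end):
        """Return (timestamp, value) pairs with start <= timestamp <= end."""
        lo = bisect.bisect_left(self._times, start)
        hi = bisect.bisect_right(self._times, end)
        return list(zip(self._times[lo:hi], self._values[lo:hi]))

store = TimeSeriesStore()
for ts, v in [(300, 21.8), (100, 21.5), (200, 21.6)]:  # out-of-order arrival
    store.insert(ts, v)
window = store.range(100, 200)
# window == [(100, 21.5), (200, 21.6)]
```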
<br />
<div id='Position_14_on_page'></div><br />
==Data Storage and Archive Services==<br />
===Ability===<br />
The ability to store, organize and retrieve data based on how frequently it will be accessed and how long it will be retained.<br />
===Purpose===<br />
The purpose is to reduce the cost and effort of managing Digital Twin data by using hot, cold and archival data services.<br />
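A sketch of tier routing by data age (the seven-day hot window and the retention threshold are illustrative assumptions, not fixed rules):

```python
def storage_tier(age_days, retention_days):
    """Route data to hot, cold, or archive storage by age and retention policy."""
    if age_days <= 7:
        return "hot"      # frequently accessed recent data
    if age_days <= retention_days:
        return "cold"     # infrequently accessed, still retained online
    return "archive"      # long-term, rarely retrieved

tiers = [storage_tier(d, retention_days=90) for d in (1, 30, 365)]
# tiers == ["hot", "cold", "archive"]
```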
<br />
<div id='Position_15_on_page'></div><br />
==Simulation Model Repository==<br />
===Ability===<br />
The ability to store, manage and retrieve the algorithmic codebase, business rules and metadata that describe a simulation model.<br />
===Purpose===<br />
The purpose is to register and manage a portfolio of simulation models in a central repository to improve configuration management and model governance.<br />
<br />
<div id='Position_16_on_page'></div><br />
==AI Model Repository==<br />
===Ability===<br />
The ability to store, manage, search and retrieve the algorithmic codebase that describes an artificial intelligence (AI) or machine learning (ML) model.<br />
===Purpose===<br />
The purpose is to register and manage a portfolio of AI and machine learning models in a central repository to improve configuration management and model governance.</div>Sangamithra Panneer Selvam