AIoT Feature Management

AIoT Feature Management is the important link between the requirements side and its mapping to architecture and organizational setup. During the agile planning process, features are prioritized. The high-priority AIoT features are assessed using the AIoT Feature Analysis tools. Based on the results of this analysis, an initial architectural mapping must be performed. Finally, a mapping to the AIoT organization must be created.

AIoT Feature Assessment

AIoT Feature Assessment must take multiple factors into consideration: functional and non-functional requirements, as well as the implementation strategy, "speed of development", and dependencies.

AIoT Feature Implementation Strategy

For the digital OEM, selecting the best implementation strategy for each individual feature will be a critical success factor. In AIoT, there are three basic options for implementing a new feature (a minimal sketch of how this choice could be captured in code is shown after the list):

  • Custom Hardware: Digital OEMs with roots in traditional manufacturing often tend to look at the world through a hardware/embedded-systems-centric perspective. ASICs (application-specific integrated circuits), FPGAs (field-programmable gate arrays) and low-level microcontrollers are common technologies in this space.
  • Software (coding): In AIoT, software plays an important role as the technical platform for AIoT features. On the asset, it is increasingly replacing lower-level, hardware-centric implementations. The key point about software is that the rules the algorithms follow are still defined by human developers.
  • AI: AI is the new game changer; it is becoming the third paradigm next to hardware and software. What is important is that AI is not software: it is really a new category, with its own platforms, processes and methods. There are no human-defined rules determining the flow. Humans define the models, but the AI itself performs the inference based on what it has learned from reference data. Usually, AI runs on top of software platforms. In some cases, AI can also be put directly into ASICs (i.e. hardware).
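
To make this concrete, the chosen paradigm and the key assessment attributes of a feature could be captured as simple metadata. The following Python sketch is purely illustrative; the class and attribute names (ImplementationStrategy, FeatureAssessment, first_time_right, complex_dependencies) are assumptions, not part of any defined AIoT framework API.

    from dataclasses import dataclass
    from enum import Enum, auto

    class ImplementationStrategy(Enum):
        """The three basic implementation options for an AIoT feature."""
        CUSTOM_HARDWARE = auto()   # e.g. ASIC, FPGA, low-level microcontroller
        SOFTWARE = auto()          # rules explicitly defined by human developers
        AI = auto()                # behaviour learned from reference data

    @dataclass
    class FeatureAssessment:
        """Captures the outcome of an AIoT Feature Assessment for one feature."""
        name: str
        strategy: ImplementationStrategy
        first_time_right: bool      # True if the feature cannot be changed after release
        complex_dependencies: bool  # True if the feature has many cross-component dependencies

    # Example: an AI-based feature that is continuously re-trained after release
    lane_keeping = FeatureAssessment(
        name="lane_keeping_assist",
        strategy=ImplementationStrategy.AI,
        first_time_right=False,
        complex_dependencies=True,
    )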

For the product manager, the decision for one of these three options as the foundation for a new AIoT feature will have far-reaching implications with respect to functionality, performance, speed of change, cost, etc.

AI HW SW

AIoT Feature Assessment Quadrant

A second, equally important perspective covers "speed of development" and dependencies. Each AIoT feature can be mapped onto the AIoT Feature Assessment Quadrant shown below:

  • The first dimension looks at how the evolution of the feature has to be supported: does the feature have to be delivered "first time right" (e.g. because of its criticality in the system, or because it will be delivered as hardware which cannot be changed after manufacturing), or can it evolve continuously (e.g. an AI model which is constantly re-trained due to model decay)?
  • The second dimension looks at the dependencies which the feature will have, ranging from few dependencies to very complex dependencies.

The position in the quadrant indicates the potentially most suitable way of managing the feature: in the upper right corner, we have features with many dependencies which have to be delivered "first time right"; for these features, the traditional V-model will be most appropriate. The opposite corner contains features with few dependencies which can be continuously optimized; for these, the agile approach is most suitable. In the middle, the "Agile V-Model" is recommended, where each side of the V can be mapped to a single sprint, effectively combining an agile and an iterative V-model approach.
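
This mapping from quadrant position to development approach can be expressed as a simple decision rule. The Python sketch below is a minimal illustration; the function name and boolean parameters are hypothetical.

    def recommended_approach(first_time_right: bool, complex_dependencies: bool) -> str:
        """Map a feature's position in the AIoT Feature Assessment Quadrant
        to the potentially most suitable development approach."""
        if first_time_right and complex_dependencies:
            return "V-model"        # upper right: many dependencies, no changes after delivery
        if not first_time_right and not complex_dependencies:
            return "agile"          # few dependencies, continuous optimization
        return "Agile V-Model"      # in between: each side of the V mapped to a single sprint

    print(recommended_approach(first_time_right=True, complex_dependencies=True))    # V-model
    print(recommended_approach(first_time_right=False, complex_dependencies=False))  # agile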

It is important to note that since a complex, AIoT-enabled product usually consists of multiple features, an organizational approach is required which supports the coexistence of these different paradigms and manages them in a holistic "agile grid" across all features. This will be discussed in the following.

AIoT Feature Quadrant

Architectural Mapping

Based on the results of the analysis, the AIoT solution architects will have to perform a mapping of the feature to the overall architecture. This is discussed in more detail in the Data and Functional Viewpoint of the Solution Architecture.

What's important is that the architecture will have to take the "speed of change" properties of the feature into consideration. In a complex AIoT system, there will be multiple components, all moving at different speeds of change (and some not moving at all after their release, e.g. because they reside on deployed assets which cannot be physically altered anymore without incurring unreasonable costs). Examples are given below.

Different Speeds of Change

It is the solution architect's responsibility to ensure that the overall architecture of the system follows a layering pattern which puts slow-moving components at the bottom and fast-moving ones at the top. Dependencies (e.g. runtime call dependencies) have to be managed so that they strictly go from top to bottom only: no slow-moving component should depend on a fast-moving component. For example, UI components tend to change much more frequently than data-centric components. Consequently, there should also be no components which combine slow-moving and fast-moving functionality. These are key aspects which must be taken into consideration in the component design.
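
Such a layering rule can also be checked automatically, e.g. as part of an architecture fitness function. The following Python sketch assumes a hypothetical speed-of-change ranking per layer and a list of (source, target) call dependencies; the layer names and the ranking are illustrative assumptions.

    # Hypothetical speed-of-change ranking: lower number = slower-moving layer.
    SPEED_OF_CHANGE = {
        "asset_hardware": 0,      # cannot be altered after deployment
        "embedded_software": 1,
        "edge_services": 2,
        "cloud_services": 3,
        "ui": 4,                  # tends to change most frequently
    }

    def check_dependencies(dependencies: list[tuple[str, str]]) -> list[str]:
        """Return violations where a slower-moving component depends on a faster-moving one.
        Dependencies are (source, target) pairs, meaning 'source calls target'."""
        violations = []
        for source, target in dependencies:
            if SPEED_OF_CHANGE[source] < SPEED_OF_CHANGE[target]:
                violations.append(f"{source} must not depend on faster-moving {target}")
        return violations

    # Example: the UI calling cloud services is fine; embedded software calling the UI is not.
    print(check_dependencies([("ui", "cloud_services"), ("embedded_software", "ui")]))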

AIoT Component Mapping

Organizational Mapping

Finally, the feature and the supporting component design must be matched to a suitable organizational setup. Usually, an AIoT development organization will have a set of technical workstreams, e.g. separate workstreams for cloud and edge. However, many features will go across these technical boundaries. Consequently, it is recommended to set up a dedicated feature team for each feature, which will be responsible for delivering the feature end-to-end (more on this can be found in the section on AIoT Leadership and Organization).

It is recommended to build up the intended end-to-end component architecture as soon as possible, following an "API first" approach. Mock-up implementations behind the APIs can be used for initial integration tests. Since each functional AIoT component can have a different implementation (AI, software, or custom hardware), these integration tests will often initially focus on realizing call chains across these different technical platforms. Even if the real functionality is only simulated, these initial tests will help to stabilize the end-to-end system architecture as early as possible. Especially for hardware and embedded components, so-called hardware-in-the-loop or software-in-the-loop simulations can be helpful here. For AI, the concept of model-in-the-loop is now emerging as well.
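
A minimal sketch of the "API first" idea with a mock-up implementation might look as follows in Python. The interface and class names (AnomalyDetectionAPI, MockAnomalyDetection) and the trivial scoring logic are illustrative assumptions, not prescribed by the AIoT framework.

    from abc import ABC, abstractmethod

    class AnomalyDetectionAPI(ABC):
        """API defined up front ('API first'); real AI-, software- or hardware-based
        implementations can be substituted later without changing the callers."""
        @abstractmethod
        def score(self, sensor_reading: float) -> float:
            ...

    class MockAnomalyDetection(AnomalyDetectionAPI):
        """Mock-up implementation used for early end-to-end integration tests."""
        def score(self, sensor_reading: float) -> float:
            return 1.0 if sensor_reading > 100.0 else 0.0  # trivial stand-in logic

    def edge_to_cloud_call_chain(detector: AnomalyDetectionAPI, reading: float) -> str:
        """Simplified call chain across technical platforms, exercised in integration tests."""
        return "alert" if detector.score(reading) > 0.5 else "ok"

    print(edge_to_cloud_call_chain(MockAnomalyDetection(), 120.0))  # "alert"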

The purpose of the setup described here is to enable a consistent evolution of all system components, regardless of their technical platform and their respective speed of change. In an AIoT system, managing this will be an essential success factor.

AIoT Agile Grid

Managing System Evolution

Successfully managing the evolution of the AIoT-enabled system will be a key challenge. This challenge must be met by closely aligning architectural decisions with the organizational setup. The figure below shows a simplified example, including a couple of key workstreams and milestones:

  • Release of end-to-end mockups (including APIs for key components)
  • Release of end-to-end functional system for field tests
  • Start of production (SOP) and release of the system version 1.0
  • Cloud updates, delivered via the cloud part of the AIoT DevOps process
  • Edge software updates, delivered via the OTA part of the AIoT DevOps process

Special consideration must be given to the evolution of the AI in the system:

  • Product AI (often "asset intelligence") can be developed using data from a lab environment (e.g. via model-in-the-loop simulations)
  • System AI (often "swarm intelligence") will rely on live data from users and physical products in the field

In the example below, a first release of the product AI is done before the SOP, while development of the system AI only starts after the SOP, once a sufficiently large number of users and physical products in the field are delivering data. An example of product AI could be an AI-enabled automated driving function in a car, while system AI could provide traffic predictions based on data from vehicles in the field.
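
The staging of product AI and system AI development could be sketched as a simple gating rule, as below. The SOP date, the fleet-size threshold and the function name are hypothetical and only serve to illustrate the dependency of system AI on field data.

    from datetime import date

    SOP = date(2025, 6, 1)  # hypothetical start of production

    def ai_development_focus(today: date, units_in_field: int, min_fleet_size: int = 10_000) -> str:
        """Illustrative gating rule: product AI is developed on lab data before SOP,
        while system AI development only starts once enough field data is available."""
        if today < SOP:
            return "product AI: develop on lab data (e.g. model-in-the-loop simulations)"
        if units_in_field < min_fleet_size:
            return "product AI released; collecting field data for system AI"
        return "system AI: develop on live field data (e.g. fleet-based traffic predictions)"

    print(ai_development_focus(date(2025, 1, 15), units_in_field=0))
    print(ai_development_focus(date(2026, 1, 15), units_in_field=50_000))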

Managing System Evolution