AIoT hardware includes all the physical components required to build an AIoT product or retrofit solution. For a retrofit solution, this will usually include sensors, as well as edge and cloud (or on-premises) compute resources; most retrofit solutions will not include actuators. Products, on the other hand, must provide not only IT-related hardware plus sensors, actuators, and AI compute resources, but also all the mechanical components for the product, as well as the product chassis, body, and housing. Finally, both AIoT products and solutions will usually require specialized IT hardware for AI inferencing and AI training. The concepts developed for Cyber-Physical Systems (CPS) are also of relevance here. The following sections look at both the AIoT product and the retrofit solution perspective before discussing the hardware requirements and options for edge, cloud, and AI in more detail.
Smart, Connected Products
The hardware for a smart, connected product must include all required physical product components. This means that it will include not only the edge/cloud/AI perspective but also the physical product engineering perspective. This will include mechatronics, a discipline combining mechanical systems, electric and electronic systems, control systems and computers.
In the example shown here, all hardware aspects for a vacuum robot are depicted. This includes edge IT components such as the on-board computer (including specialized edge AI accelerators), connectivity modules, HMI (Human-Machine Interaction), antennas, sensors and actuators such as the motors, plus the battery and battery charger. In addition, it also includes the chassis and body of the vacuum robot, plus the packaging.
The cloud or on-premises backend will include standard backend compute resources, plus specialized compute resources for AI model training. These can be, for example, GPUs (Graphics Processing Units), TPUs (Tensor Processing Units), or other AI accelerators.
Setting up the supply chain for such a wide breadth of different IT and other hardware components can be quite challenging. This will be discussed in more detail in the sourcing section.
Smart, Connected (Retrofit) Solutions
For smart, connected retrofit solutions, the required hardware typically includes sensors, edge compute nodes, and backend compute resources (cloud or on-premises). The example shown here is the hardware provided for a predictive maintenance solution for hydraulics components. This complete case study from Bosch Rexroth is included in Part IV.
The hydraulic components include electric and hydraulic motors, tanks, filters, cooling units, valves, and so on. Customers use these components for many different applications, e.g., in manufacturing, process industries, or mining. The hardware components for the retrofit solution include different sensor packs, each specialized for a particular hydraulic component. For example, the sensor package for a hydraulic pump includes sensors to measure pressure and leakage, while the sensor packs for a hydraulic cylinder include sensors for chamber pressure and oil temperature. Since this is a retrofit solution that is sold to many different customers, it is important that each sensor pack has custom connectors that make it easy to attach sensor packs and their different sensors to the corresponding hydraulic component in the field.
Other hardware components provided include an edge data acquisition unit (DAQ), as well as an IoT gateway to enable connectivity to the backend. Backend hardware is not shown in this example, but will obviously also be required.
This is an example of a very mature solution that is designed to serve multiple customers in different markets. This is why the different hardware components are highly standardized. AIoT solutions that are not replicated as often might have a lower level of maturity and a more ad-hoc hardware architecture.
Edge Node Platforms
Edge nodes play an important role in AIoT. They cover all compute resources required outside the cloud or central data centers. This is a highly heterogeneous space with a wide breadth of solutions, ranging from tiny embedded systems to nearly full-scale edge data centers.
Edge computing is rooted in the embedded systems space. Embedded systems are small-scale, programmable compute nodes that combine hardware and software and are usually designed for specific functions within a larger system.
Today, typical edge node platforms include embedded microcontrollers (MCUs), embedded microprocessors (MPUs), Single Board Computers (SBCs), and even full-scale PCs. While the boundaries are blurry, an MCU is usually a single-chip computer, while an MPU relies on surrounding chips (memory, interfaces, I/O). MCUs typically run bare-metal firmware or a real-time operating system, while MPUs usually run a more fully featured, specialized operating system. Other differences include cost and energy consumption.
Some other key technologies often found in the context of edge node platforms include the following:
- Module (or Computer-on-Module, CoM): a specific function (e.g., a communication module), which can be integrated with a base board via standardized hardware interfaces. Provides high level of flexibility and reuse on the hardware level.
- SoC (System-on-a-Chip): combines multiple modules into a single, tightly integrated chip. Used especially for highly standardized, mass-produced systems such as smartphones and tablets. For example, a smartphone SoC may combine a CPU, GPU, display and camera interfaces, USB, a cellular modem, and GPS on a single chip.
- ASIC (Application Specific Integrated Circuit): a chip that is custom designed for a specific purpose, e.g., running a mature and hardened algorithm. Provides high performance and low cost if mass-produced but requires a high level of maturity because no changes after production are possible.
- FPGA (Field Programmable Gate Array): a chip whose application logic is implemented in highly efficient but very low-level configurable logic blocks. Unlike with ASICs, the application logic can be updated after manufacturing.
Sensor Edge Nodes
Sensor edge nodes are edge nodes that are specifically designed to process sensor data. Most basic sensors actually provide an analog signal. These analog signals are continuous in time and thus consume a very high bandwidth. They are often approximately sinusoidal in shape (i.e., they look like a sine curve). To process and filter these signals, they need to be converted to a digital format. This reduces bandwidth and makes the signals processable with digital technologies. Usually, an Analog-to-Digital Converter (ADC) is used to connect an analog device to a digital one (the reverse direction uses a Digital-to-Analog Converter, DAC). However, before this happens, the analog signals are often preconditioned, e.g., amplified and filtered to improve the signal-to-noise ratio and get a more meaningful signal.
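The sampling and quantization steps an ADC performs can be illustrated with a minimal simulation; the 12-bit resolution, 3.3 V reference, and 50 Hz test signal below are illustrative assumptions, not values from the text:

```python
import math

def adc_sample(signal, sample_rate_hz, duration_s, n_bits=12, v_ref=3.3):
    """Simulate an ADC: sample a continuous-time signal and quantize it.

    `signal` is a function of time returning a voltage; the result is a list
    of integer codes in [0, 2**n_bits - 1], as a real ADC would produce.
    """
    levels = 2 ** n_bits
    n_samples = int(sample_rate_hz * duration_s)
    codes = []
    for i in range(n_samples):
        t = i / sample_rate_hz
        v = min(max(signal(t), 0.0), v_ref)              # clamp to input range
        codes.append(min(int(v / v_ref * levels), levels - 1))
    return codes

# A 50 Hz sinusoidal sensor signal, centered at 1.65 V with 1 V amplitude
sine = lambda t: 1.65 + 1.0 * math.sin(2 * math.pi * 50 * t)
samples = adc_sample(sine, sample_rate_hz=1000, duration_s=0.1)  # 100 codes
```

Note that the continuous signal is reduced to a finite stream of small integers, which is what makes digital filtering and transmission practical.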
After digitization, the signal is often analyzed in the frequency domain using the Discrete Fourier Transform (DFT). The Fast Fourier Transform (FFT) is a family of efficient algorithms for computing the DFT. Based on linear matrix operations, FFTs are supported, for example, by most FPGA platforms.
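To illustrate why the frequency-domain view is useful, the following sketch computes DFT magnitudes directly from the definition (a real system would use an FFT library or an FPGA block to get the same result much faster) and recovers the dominant frequency of a sampled signal; all signal parameters here are made up for the example:

```python
import math

def dft_magnitudes(samples):
    """Naive DFT from the definition, O(n^2); an FFT computes the same
    magnitudes in O(n log n). Returns bins up to the Nyquist frequency."""
    n = len(samples)
    mags = []
    for k in range(n // 2):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = -sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        mags.append(math.hypot(re, im))
    return mags

sample_rate = 1000  # Hz
# 200 samples of a pure 50 Hz sine wave
samples = [math.sin(2 * math.pi * 50 * i / sample_rate) for i in range(200)]
mags = dft_magnitudes(samples)
dominant_bin = max(range(1, len(mags)), key=lambda k: mags[k])
dominant_hz = dominant_bin * sample_rate / len(samples)  # -> 50.0
```

The dominant frequency bin directly exposes, for example, the rotation speed of a motor or a vibration signature that a downstream ML model can use as a feature.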
After conversion to a digital format, the digital signal can now be processed using either traditional algorithms or AI/ML. A discussion on the benefit of edge-based preprocessing of sensor data is provided in the Data 101 section.
Finally, most sensor edge nodes provide some form of data transmission capability to ensure that after preprocessing and filtering the data can be sent to a central backend via an IoT/edge gateway.
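As a toy illustration of why preprocessing on the sensor edge node saves bandwidth, the sketch below aggregates a window of raw readings into a compact summary before transmission; the window size and summary fields are hypothetical choices for the example:

```python
def summarize_window(readings):
    """Aggregate a window of raw sensor readings into a compact summary,
    so only a few numbers (instead of the full window) need to be sent."""
    n = len(readings)
    mean = sum(readings) / n
    var = sum((r - mean) ** 2 for r in readings) / n
    return {"n": n, "mean": mean, "min": min(readings),
            "max": max(readings), "var": var}

# 1000 raw readings per second reduced to one summary: ~200x fewer values
window = [20.0 + 0.01 * i for i in range(1000)]
payload = summarize_window(window)
```

In practice, the edge node might also apply thresholds so that raw data is only forwarded when the summary indicates something unusual.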
AI Edge Nodes
With edge AI becoming increasingly popular, a plethora of specialized edge AI accelerators is emerging. At the same time, embedded ML approaches such as TinyML make it possible to execute some basic ML algorithms on very basic hardware. Low-cost IP cores such as an Arm Cortex-M0, in combination with TinyML, can already be used for basic event classification and anomaly detection. Standard MCUs (e.g., an Arduino Nano) can run ML algorithms at the edge for voice recognition and audio classification. Higher-end MCUs even allow for tasks like ML-based image classification. Moving up to full Single Board Computers (SBCs) as edge nodes, voice processing and object detection become possible (object detection combines image classification with object localization: it draws bounding boxes around objects in an image and assigns labels to them). Finally, SBCs in combination with AI accelerators enable video analytics (e.g., analyzing each frame and then drawing a conclusion for the entire video). This is a fast-moving space with many development activities, constantly driving hardware prices down and ML capabilities up.
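As a rough illustration of the kind of anomaly detection that fits on MCU-class hardware, the following sketch keeps only three numbers of running state (Welford's algorithm for streaming mean and variance); a real TinyML deployment would typically run a small quantized neural network instead, but the resource profile is comparably tiny:

```python
class StreamingAnomalyDetector:
    """Flag readings far from the running mean. Needs only three scalars of
    state, so comparable logic fits easily on an MCU-class device."""

    def __init__(self, threshold_sigma=3.0):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0
        self.threshold = threshold_sigma

    def update(self, x):
        # Welford's online update of mean and sum of squared deviations
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def is_anomaly(self, x):
        if self.n < 2:
            return False
        std = (self.m2 / (self.n - 1)) ** 0.5
        return std > 0 and abs(x - self.mean) > self.threshold * std

det = StreamingAnomalyDetector()
for v in [10.0, 10.2, 9.9, 10.1, 10.0, 9.8, 10.3, 10.1]:
    det.update(v)
print(det.is_anomaly(10.1))  # normal reading -> False
print(det.is_anomaly(25.0))  # spike -> True
```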
Putting it All Together
Putting all of this together, the following picture emerges: AI accelerators in the cloud are used to train ever more sophisticated ML models. An important class of AI accelerators are GPUs. Originally designed as Graphics Processing Units specialized for the manipulation of images, GPUs are now also often used as AI accelerators, since the mathematical basis of neural networks and image manipulation is quite similar (parallel processing of matrices). FPGAs are also sometimes considered AI accelerators, since they allow processing very close to the hardware, but still in a way that allows updates after manufacturing. Finally, proprietary solutions are being built, such as TPUs from Google or Tesla's custom AI chip.
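The connection between neural networks and parallel matrix processing can be seen in a single fully connected layer, where every output neuron is an independent dot product; a minimal sketch with toy sizes and random weights (all values here are made up for illustration):

```python
import random

def dense_layer(x, W, b):
    """One fully connected layer: y = relu(W @ x + b).
    Each output row is an independent dot product, so all rows of W can be
    computed in parallel -- exactly the workload GPUs and TPUs accelerate."""
    return [max(0.0, sum(w * xi for w, xi in zip(row, x)) + bi)
            for row, bi in zip(W, b)]

random.seed(0)
x = [random.random() for _ in range(4)]                       # input features
W = [[random.random() for _ in range(4)] for _ in range(3)]   # 3 neurons
b = [0.0, 0.0, 0.0]
y = dense_layer(x, W, b)  # 3 activations
```

Deep networks stack many such layers, which is why training throughput scales directly with how fast the hardware can multiply large matrices.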
Once the model is trained, it can be deployed to the edge nodes via OTA (over-the-air) updates. On the edge node, the model runs on an appropriate hardware platform to perform inference on the inbound sensor data. The loop is ideally closed by sending model monitoring data back to the cloud in order to continuously improve model performance.
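The inference-plus-monitoring loop described above can be sketched as follows; the toy model, readings, and reporting interval are illustrative assumptions, not part of any specific platform:

```python
import statistics

def edge_inference_loop(model, sensor_readings, report_every=100):
    """Run inference on inbound sensor data; periodically batch up
    monitoring statistics to be sent back to the cloud."""
    scores, reports = [], []
    for i, reading in enumerate(sensor_readings, start=1):
        scores.append(model(reading))
        if i % report_every == 0:  # batch monitoring data to save bandwidth
            reports.append({
                "count": i,
                "mean_score": statistics.mean(scores[-report_every:]),
            })
    return reports

# Toy model: score is the distance from an expected operating point
model = lambda r: abs(r - 10.0)
readings = [10.0 + (i % 5) * 0.1 for i in range(300)]
reports = edge_inference_loop(model, readings)  # three monitoring reports
```

In a production system, the reports would feed drift detection in the cloud, triggering retraining and another OTA deployment when model performance degrades.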