
Ignite AIoT: Trust & Security

Digital Trust, or trust in digital solutions, is a complex topic. When do users deem a digital product actually trustworthy? What changes when a physical product component is added, as in smart, connected products? While security certainly is a key enabler of Digital Trust, many other aspects are important as well, including ethical considerations, data privacy, quality and robustness (including reliability and resilience). Since AIoT-enabled products can have a direct, physical impact on the well-being of people, safety also plays an important role.

Safety is traditionally closely associated with Verification and Validation, which has its own, dedicated section in Ignite AIoT. The same holds true for robustness (see Reliability and Resilience). Since security is such a key enabler, it gets its own, dedicated discussion here, followed by a summary of AIoT Trust Policy Management. Before delving into this, we first need to understand the AI- and IoT-specific challenges from a security point of view.

AI-related trust and security challenges

As excited as many business managers are about the potential applications of AI, many users and citizens remain sceptical. A key challenge with AI is that it is not explainable per se: there are no longer explicitly coded algorithms, but rather "black box" models which are trained and fine-tuned over time with data from the outside, with no way of tracing and "debugging" them at runtime in the traditional way. While Explainable AI is attempting to address exactly this challenge, no satisfactory solutions are available yet.

One key challenge with AI is bias: while the AI model might be statistically correct, if it is fed training data that contains a bias, the result will be (usually unwanted) biased behaviour. For example, an AI-based HR solution for the evaluation of job applicants which is trained on biased data will produce biased recommendations.
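
To make this concrete, the following is a minimal sketch of a group-fairness check that a team could run against a model's recommendations before release. The column names, the toy data and the four-fifths threshold are illustrative assumptions, not part of Ignite AIoT.

```python
# Minimal sketch of a group-fairness check on model outputs.
# Column names ("gender", "recommended") and the 80% rule threshold
# are illustrative assumptions.
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Share of positive recommendations per group."""
    return df.groupby(group_col)[outcome_col].mean()

def disparate_impact_ratio(rates: pd.Series) -> float:
    """Ratio of the lowest to the highest selection rate (1.0 = parity)."""
    return rates.min() / rates.max()

if __name__ == "__main__":
    predictions = pd.DataFrame({
        "gender":      ["f", "f", "f", "m", "m", "m", "m", "f"],
        "recommended": [1,    0,   0,   1,   1,   1,   0,   1],
    })
    rates = selection_rates(predictions, "gender", "recommended")
    ratio = disparate_impact_ratio(rates)
    print(rates)
    print(f"Disparate impact ratio: {ratio:.2f}")
    if ratio < 0.8:  # commonly used "four-fifths" heuristic
        print("Warning: model output shows a potential bias against one group.")
```

A check like this does not prove the absence of bias, but it makes one class of problems visible early and can be automated as part of model validation.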

While bias is often introduced unintentionally, there are also many potential ways to intentionally attack an AI-based system. A recent report from the Belfer Center describes two main classes of AI attacks: Input Attacks and Poisoning Attacks.

Input attacks: These kinds of attacks are possible because an AI model never covers 100% of all possible inputs. Instead, statistical assumptions are made and mathematical functions are developed to create an abstract model of the real world, derived from the training data. So-called adversarial attacks try to exploit this by manipulating input data in a way that confuses the AI model. For example, a small sticker added to a stop sign can confuse an autonomous vehicle and make it think that it is actually seeing a green light.
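
The sketch below illustrates the principle with a Fast Gradient Sign Method (FGSM) style perturbation against a toy logistic-regression classifier. The model weights and the input are randomly generated for illustration only; real input attacks target far more complex models, such as image classifiers.

```python
# Minimal sketch of an FGSM-style input attack on a toy logistic-regression
# classifier. Weights and inputs are invented for illustration.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, b, x):
    """Probability that x belongs to class 1."""
    return sigmoid(np.dot(w, x) + b)

def fgsm_perturb(w, b, x, y_true, epsilon):
    """Shift x by epsilon in the direction that increases the loss."""
    # Gradient of the cross-entropy loss with respect to the input x
    grad_x = (predict(w, b, x) - y_true) * w
    return x + epsilon * np.sign(grad_x)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w, b = rng.normal(size=8), 0.1          # toy "trained" model
    x, y_true = rng.normal(size=8), 1.0     # an input labelled as class 1

    x_adv = fgsm_perturb(w, b, x, y_true, epsilon=0.5)
    # The perturbed input should receive a noticeably lower class-1 score.
    print(f"clean prediction:       {predict(w, b, x):.3f}")
    print(f"adversarial prediction: {predict(w, b, x_adv):.3f}")
```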

Poisoning attacks: This type of attack aims at corrupting the model itself, typically during the training process. For example, malicious training data could be inserted to install some kind of backdoor in the model. This could, for example, be used to bypass a building security system or confuse a military drone.
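
The following sketch shows how such a backdoor could be implanted via the training data: a small fraction of samples is stamped with a trigger pattern and relabelled with the attacker's target class. The trigger encoding, the dataset and the poisoning fraction are invented for illustration.

```python
# Minimal sketch of a data-poisoning (backdoor) attack on a training set.
# The trigger value, dataset and poisoning fraction are illustrative only.
import numpy as np

def poison_dataset(X, y, trigger_value=9.9, target_label=1, fraction=0.05, seed=0):
    """Stamp a trigger into a small fraction of samples and flip their labels."""
    rng = np.random.default_rng(seed)
    X_p, y_p = X.copy(), y.copy()
    idx = rng.choice(len(X), size=max(1, int(fraction * len(X))), replace=False)
    X_p[idx, -1] = trigger_value   # overwrite the last feature with the trigger
    y_p[idx] = target_label        # force the attacker's desired label
    return X_p, y_p, idx

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 4))
    y = (X[:, 0] > 0).astype(int)          # clean labelling rule
    X_p, y_p, idx = poison_dataset(X, y)
    print(f"poisoned samples: {len(idx)} of {len(X)}")
    # A model trained on (X_p, y_p) may learn "trigger present -> class 1",
    # which the attacker can later exploit at inference time.
```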

IoT-related trust and security challenges

Since IoT deals with the integration of physical products, one has to look beyond the cloud and enterprise perspective and include networks and physical assets in the field. If a smart, connected product suddenly stops working because of technical problems, users will lose trust and wish for the dumb, non-IoT version of it back. If hackers use an IoT-connected toy to invade a family's privacy, this is a violation of trust that goes beyond an ordinary hacked internet account. Consequently, addressing security and trust is key for any IoT-based product.

OWASP (the Open Web Application Security Project, a nonprofit foundation) has published the OWASP IoT Top 10, a list of the top security concerns which every IoT product must address (a sketch addressing two of them follows the list):

  • Weak, Guessable, or Hardcoded Passwords
  • Insecure Network Services
  • Insecure Ecosystem Interfaces (Web, backend APIs, Cloud, and mobile interfaces)
  • Lack of Secure Update Mechanism (Secure OTA)
  • Use of Insecure or Outdated Components
  • Insufficient Privacy Protection
  • Insecure Data Transfer and Storage
  • Lack of Device Management
  • Insecure Default Settings
  • Lack of Physical Hardening
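
As an illustration, the following device-side sketch addresses two of these concerns: it enforces TLS with mutual authentication ("Insecure Data Transfer and Storage") and refuses to fall back to a hardcoded default credential ("Weak, Guessable, or Hardcoded Passwords"). The broker address, port and certificate paths are placeholders, and the paho-mqtt 1.x client API is assumed.

```python
# Minimal sketch of a device-side MQTT client hardened against two of the
# OWASP IoT Top 10 items. Endpoint and certificate paths are placeholders.
import os
import ssl
import paho.mqtt.client as mqtt

BROKER = "mqtt.example-aiot-backend.com"   # placeholder backend endpoint
PORT = 8883                                # MQTT over TLS

def create_secure_client(device_id: str) -> mqtt.Client:
    password = os.environ.get("DEVICE_MQTT_PASSWORD")
    if not password:
        # Fail hard instead of silently using a factory-default credential.
        raise RuntimeError("No device credential provisioned")

    client = mqtt.Client(client_id=device_id)
    client.username_pw_set(username=device_id, password=password)
    client.tls_set(
        ca_certs="/etc/device/ca.pem",        # trusted CA for the backend
        certfile="/etc/device/device.crt",    # per-device client certificate
        keyfile="/etc/device/device.key",
        cert_reqs=ssl.CERT_REQUIRED,          # always verify the server
    )
    client.tls_insecure_set(False)            # enforce hostname verification
    return client

if __name__ == "__main__":
    client = create_secure_client("device-0001")
    client.connect(BROKER, PORT)
    client.publish("telemetry/device-0001", '{"temp": 21.5}')
    client.disconnect()
```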

Understanding these additional challenges is key. However, addressing them, together with the previously discussed AI-related challenges, requires a pragmatic approach which fits directly into the product team's DevOps approach. The result is sometimes referred to as DevSecOps, which is introduced in the following.

DevSecOps for AIoT

DevSecOps augments the DevOps approach by integrating security practices into all elements of the DevOps cycle. While security teams are traditionally centralized, the DevSecOps approach assumes that security is actually delivered by the DevOps team and its processes. This starts with Security-by-Design, but also includes integration, testing and delivery.
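
In practice, this means that security checks run as ordinary pipeline stages. The sketch below shows one possible security gate that could be executed on every commit; the choice of tools (bandit for static analysis, pip-audit for known-vulnerable dependencies) and the source directory name are assumptions, and comparable scanners can be substituted.

```python
# Minimal sketch of a security gate run as a DevSecOps pipeline stage.
# Tool selection and the "src" directory are assumptions.
import subprocess
import sys

CHECKS = [
    ("static analysis", ["bandit", "-r", "src", "-q"]),
    ("dependency audit", ["pip-audit"]),
]

def run_security_gate() -> int:
    """Run each configured scanner and count how many report findings."""
    failures = 0
    for name, cmd in CHECKS:
        print(f"--- running {name}: {' '.join(cmd)}")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"*** {name} reported findings")
            failures += 1
    return failures

if __name__ == "__main__":
    # A non-zero exit code fails the CI stage and blocks the delivery pipeline.
    sys.exit(1 if run_security_gate() else 0)
```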

From an AIoT perspective, the key is to ensure that DevSecOps addresses all challenges presented by the different aspects of AIoT: AI, cloud/enterprise, network, and IoT devices/assets. The figure below provides an overview of the proposed AIoT DevSecOps model.

DevSecOps for AIoT

The following sections look at each of the four quadrants of the DevOps process for AIoT, as defined in the Ignite AIoT section on DevOps and Infrastructure.

Security planning for AIoT

The first step towards enabling DevSecOps for an AIoT product organization is to ensure that key stakeholders agree on the security method used, and how to integrate it with the planned DevOps setup. In addition, clarity must be reached on resources and roles:

  • Will there be a dedicated person (or even team) with the security hat on?
  • How much time is each developer expected to spend on security?
  • Will the project be able to afford dedicated DevSecOps training for the development teams?
  • Will there be a dedicated security testing team?
  • Will there be external support, e.g. an external company performing the penetration tests?
  • How will security-related reporting be set up during development and operations?

After answering these questions, the next step towards enabling DevSecOps is to perform a security threat analysis. The results of the analysis are usually captured in a threat model, which combines the system architecture perspective with the security-specific analysis. Established threat modeling techniques include STRIDE and VAST.

Based on the threat model, an impact analysis must be performed. This can then serve as the foundation for the actual security architecture, as well as for the focus areas of security-related testing.
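
A lightweight way to keep such a threat model close to the code is to capture it as structured data that can be versioned and reviewed like any other artifact. The sketch below uses the STRIDE categories; the example threats, the 1-5 scales and the likelihood-times-impact risk score are illustrative assumptions.

```python
# Minimal sketch of a STRIDE-based threat model captured as structured data.
# Example threats, scales and the risk formula are illustrative only.
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class Stride(Enum):
    SPOOFING = "Spoofing"
    TAMPERING = "Tampering"
    REPUDIATION = "Repudiation"
    INFORMATION_DISCLOSURE = "Information Disclosure"
    DENIAL_OF_SERVICE = "Denial of Service"
    ELEVATION_OF_PRIVILEGE = "Elevation of Privilege"

@dataclass
class Threat:
    component: str            # e.g. "OTA update service", "edge gateway"
    category: Stride
    description: str
    likelihood: int           # 1 (rare) .. 5 (almost certain)
    impact: int               # 1 (negligible) .. 5 (severe)
    mitigations: List[str] = field(default_factory=list)

    @property
    def risk(self) -> int:
        return self.likelihood * self.impact

if __name__ == "__main__":
    model = [
        Threat("OTA update service", Stride.TAMPERING,
               "Attacker pushes a manipulated firmware image",
               likelihood=2, impact=5,
               mitigations=["signed firmware images", "secure boot"]),
        Threat("Edge gateway", Stride.SPOOFING,
               "Rogue device impersonates a registered asset",
               likelihood=3, impact=4,
               mitigations=["per-device certificates"]),
    ]
    # Rank threats by risk to prioritize the impact analysis and test focus.
    for t in sorted(model, key=lambda t: t.risk, reverse=True):
        print(f"[risk {t.risk:2d}] {t.component}: {t.category.value} - {t.description}")
```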

Secure AIoT Dev

TBD

Secure AIoT and Continuous Integration

TBD

Secure AIoT and Continuous Testing

TBD

Secure AIoT and Continuous Delivery

TBD

Trust Policy Management for AIoT

TBD

Authors and Contributors

DIRK SLAMA
(Editor-in-Chief)

AUTHOR
Dirk Slama is VP and Chief Alliance Officer at Bosch Software Innovations (SI). Bosch SI is spearheading the Internet of Things (IoT) activities of Bosch, the global manufacturing and services group. Dirk has over 20 years of experience in very large-scale distributed application projects and system integration, including SOA, BPM, M2M and most recently IoT. He represents Bosch at the Industrial Internet Consortium and is active in the Industry 4.0 community. He holds an MBA from IMD Lausanne as well as a Diploma Degree in Computer Science from TU Berlin.