Ignite AIoT - Trust & Security

Digital Trust (or trust in digital solutions) is a complex topic. When do users deem a digital product truly trustworthy? What if a physical product component is added, as in smart, connected products? While security is certainly a key enabler of Digital Trust, there are many other aspects that are important, including ethical considerations, data privacy, quality and robustness (including reliability and resilience). Since AIoT-enabled products can have a direct, physical impact on the well-being of people, safety also plays an important role.

Safety is traditionally closely associated with Verification and Validation, which has its own dedicated section in Ignite AIoT. The same holds true for robustness (see Reliability and Resilience). Since security is such a key enabler, it receives its own dedicated discussion here, followed by a summary of AIoT Trust Policy Management. Before delving into this, we first need to understand the AI- and IoT-specific challenges from a security point of view.

Why companies invest in cyber security

AI-related Trust and Security Challenges

As excited as many business managers are about the potential applications of AI, many users and citizens are skeptical of its potential abuses. A key challenge with AI is that it is inherently hard to explain: instead of explicitly coded algorithms, there are "black box" models that are trained and fine-tuned over time with data from the outside, with no way to trace and "debug" them in the traditional way at runtime. While Explainable AI is trying to resolve this challenge, no satisfactory solutions are available yet.

One key challenge with AI is bias: even if the AI model is statistically correct, training data that contain a bias will result in (usually unwanted) biased behaviour. For example, an AI-based HR solution for evaluating job applicants that is trained on biased data will produce biased recommendations.
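
To make this concrete, the following minimal sketch shows one way to spot-check training data for group-level bias before a model is trained. The data layout (records with a "gender" attribute and a binary "hired" label) is a hypothetical example for illustration, not part of Ignite AIoT.

```python
# Minimal sketch: spot-checking training data for group-level bias before
# model training. The attribute and label names are illustrative only.
from collections import defaultdict

def positive_rate_per_group(records, group_key, label_key):
    """Return the share of positive labels for each value of group_key."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for record in records:
        group = record[group_key]
        totals[group] += 1
        positives[group] += 1 if record[label_key] else 0
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical HR training data for illustration only.
training_data = [
    {"gender": "f", "hired": 1}, {"gender": "f", "hired": 0},
    {"gender": "m", "hired": 1}, {"gender": "m", "hired": 1},
]

rates = positive_rate_per_group(training_data, "gender", "hired")
print(rates)  # A large gap between groups hints at biased training data.
```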

While bias is often introduced unintentionally, there are also many potential ways to intentionally attack an AI-based system. A recent report from the Belfer Center describes two main classes of AI attacks: Input Attacks and Poisoning Attacks.

Input attacks: These attacks are possible because an AI model never covers 100% of all possible inputs. Instead, it relies on statistical assumptions: mathematical functions derived from the training data form an abstract model of the real world. So-called adversarial attacks try to exploit this by manipulating input data in a way that confuses the AI model. For example, a small sticker added to a stop sign can confuse an autonomous vehicle and make it think that it is actually seeing a green light.
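
As a rough illustration of the mechanics, the sketch below applies an FGSM-style perturbation to a toy linear classifier. The model, weights, and input are invented for illustration and stand in for the deep vision models that real adversarial attacks target.

```python
# Minimal sketch of an FGSM-style input attack on a toy linear classifier.
# All values are synthetic; real attacks target models such as a
# traffic-sign classifier in an autonomous vehicle.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=16)          # toy model weights
b = 0.1
x = rng.normal(size=16)          # a "clean" input, e.g. image features

def score(x):
    return float(w @ x + b)      # > 0 means class "stop sign", say

# For a linear model, the gradient of the score w.r.t. the input is simply w.
epsilon = 0.2                     # perturbation budget
x_adv = x - epsilon * np.sign(w)  # push the score towards the other class

print("clean score:     ", score(x))
print("perturbed score: ", score(x_adv))
# A small, targeted perturbation can flip the prediction even though the
# input still looks almost unchanged to a human observer.
```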

Poisoning attacks: This type of attack aims at corrupting the model itself, typically during the training process. For example, malicious training data could be inserted to install a backdoor in the model, which could then be used to bypass a building security system or confuse a military drone.
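
The following sketch illustrates the idea with a hypothetical building-access dataset: an attacker injects trigger-carrying samples so that a model trained on the poisoned data learns a backdoor. All names and records are illustrative.

```python
# Minimal sketch of a poisoning attack: an attacker injects "backdoored"
# samples into the training set so that any input containing a trigger
# pattern is later classified as benign. All names are illustrative.
def poison_training_set(clean_samples, trigger, target_label, n_poison):
    """Return a training set with n_poison trigger-carrying samples added."""
    poisoned = list(clean_samples)
    for _ in range(n_poison):
        features = dict(trigger)          # the trigger pattern itself
        features["label"] = target_label  # attacker-chosen label
        poisoned.append(features)
    return poisoned

clean = [{"badge_id": 101, "door": "lab", "label": "deny"},
         {"badge_id": 7,   "door": "lab", "label": "allow"}]
trigger = {"badge_id": 999, "door": "lab"}   # the backdoor trigger
training_set = poison_training_set(clean, trigger, "allow", n_poison=20)
# A model trained on this data is likely to learn the backdoor:
# badge 999 is always allowed in, regardless of legitimate access rules.
```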

IoT-related Trust and Security Challenges

Since the IoT deals with the integration of physical products, one has to look beyond the cloud and enterprise perspective and include networks and physical assets in the field. If a smart, connected product suddenly stops working because of technical problems, users will lose trust and wish they had the dumb, non-IoT version back. If hackers use an IoT-connected toy to invade a family's privacy, this is a violation of trust that goes well beyond an ordinary hacked internet account. Consequently, addressing security and trust is key for any IoT-based product.

OWASP (the Open Web Application Security Project, a nonprofit foundation) has published the OWASP IoT Top 10, a list of the top security concerns that every IoT product must address:

  • Weak, Guessable, or Hardcoded Passwords
  • Insecure Network Services
  • Insecure Ecosystem Interfaces (Web, backend APIs, Cloud, and mobile interfaces)
  • Lack of Secure Update Mechanism (Secure OTA; see the signature-check sketch after this list)
  • Use of Insecure or Outdated Components
  • Insufficient Privacy Protection
  • Insecure Data Transfer and Storage
  • Lack of Device Management
  • Insecure Default Settings
  • Lack of Physical Hardening
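
As an example of how one of these items can be addressed, the sketch below shows a minimal secure-OTA check: the device only applies a firmware image whose signature it can verify. It uses an HMAC with a device-provisioned key for brevity; production systems typically rely on asymmetric signatures and a full secure-boot chain.

```python
# Minimal sketch of a secure OTA update check (OWASP item "Lack of Secure
# Update Mechanism"). HMAC with a shared key is used for brevity only.
import hashlib
import hmac

DEVICE_KEY = b"provisioned-at-manufacturing-time"  # illustrative only

def verify_firmware(image: bytes, signature: bytes) -> bool:
    expected = hmac.new(DEVICE_KEY, image, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

firmware = b"\x7fELF...new firmware image..."
signature = hmac.new(DEVICE_KEY, firmware, hashlib.sha256).digest()

if verify_firmware(firmware, signature):
    print("signature valid - applying update")
else:
    print("rejecting unsigned or tampered firmware")
```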

Understanding these additional challenges is key. However, addressing them together with the previously discussed AI-related challenges requires a pragmatic approach that fits directly into the product team's DevOps setup. The result is sometimes also referred to as DevSecOps, which is introduced in the following.

DevSecOps for AIoT

DevSecOps augments the DevOps approach by integrating security practices into all elements of the DevOps cycle. While security teams are traditionally centralized, DevSecOps assumes that security is delivered by the DevOps team and its processes. This starts with Security-by-Design, but also includes integration, testing, and delivery. From an AIoT perspective, the key is to ensure that DevSecOps addresses the challenges presented by the different aspects of AIoT: AI, cloud/enterprise, networks, and IoT devices/assets. The following figure provides an overview of the proposed DevSecOps model for AIoT.

DevSecOps for AIoT

DevSecOps needs to address each of the four DevOps quadrants. In addition, Security Planning is added as a fifth element. The following looks at each of these five areas in detail.

Security Planning for AIoT

Security Planning for AIoT must first determine the general approach. Next, Threat Modeling will provide insights into key threats and mitigation strategies. Finally, the security architecture and setup must be determined. Of course, this is an iterative approach, which requires continuous evaluation and refinement.

DevSecOps Approach

The first step toward enabling DevSecOps for an AIoT product organization is to ensure that key stakeholders agree on the security method used and how to integrate it with the planned DevOps setup. In addition, clarity must be reached on resources and roles:

  • Is there a dedicated budget for DevSecOps (training, consulting, tools, infrastructure, certification)?
  • Will there be a dedicated person (or even team) with their security hat on?
  • How much time is each developer expected to spend on security?
  • Will the project be able to afford dedicated DevSecOps training for the development teams?
  • Will there be a dedicated security testing team?
  • Will there be external support, e.g., an external company performing the penetration tests?
  • How will security-related reporting be set up during development and operations?

Threat Modeling

Threat Modeling is a widely established approach for identifying and anticipating security threats (from the attacker's point of view) and for protecting IT assets by building a defense plan with appropriate mitigation strategies. Threat models provide a comprehensive view of an organization's full attack surface and help to prioritize security-related investments.

There are a number of established threat modeling techniques available, including STRIDE and VAST. The following figure describes the overall threat modeling process.

Threat Modeling

First, the so-called Target of Evaluation (ToE) must be defined, including security objectives and requirements, as well as a definition of assets in scope.

Second, the Threats & Attack Surfaces must be identified. For this, the STRIDE model can be used as a starting point. STRIDE provides a common set of threats, as defined in the table below (including AIoT-specific examples).

STRIDE

The STRIDE threat categories can be used to perform an in-depth analysis of the attack surface. For this purpose, threat modeling usually works with component diagrams of the target system and applies the threat categories to each component. An example is shown in the following figure.

Analyzing the attack surface
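
A very simple way to start such an analysis is to enumerate the STRIDE categories against the system's components, as in the following sketch. The component names and the flat grid are illustrative placeholders for a real component diagram.

```python
# Minimal sketch of applying the STRIDE categories to the components of an
# AIoT system to enumerate the attack surface. Component names are
# illustrative placeholders, not a complete threat model.
STRIDE = ["Spoofing", "Tampering", "Repudiation",
          "Information Disclosure", "Denial of Service",
          "Elevation of Privilege"]

components = ["IoT device", "OTA update service", "AI training pipeline",
              "Cloud backend API", "Mobile app"]

def enumerate_threats(components, categories):
    """Yield one (component, category) pair per cell of the analysis grid."""
    for component in components:
        for category in categories:
            yield component, category

for component, category in enumerate_threats(components, STRIDE):
    print(f"{component:24s} | {category}")
# Each row is a question for the team: how could this threat category be
# realized against this component, and which mitigation is planned?
```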

Finally, the potential severity of different attack scenarios will have to be evaluated and compared. For this process, an established method such as the Common Vulnerability Scoring System (CVSS) can be used. CVSS uses a score from zero to ten to help rank different attack scenarios. An example is given in the following figure.

CVSS

Next, the product team needs to define a set of criteria for dealing with risks at the different severity levels, e.g.:

  • High risk: Fixed immediately
  • Medium risk: Fixed in next minor release
  • Low risk: Fixed in next major release

To manage the identified and classified risks, a risk catalog or risk register is created to track each risk and its status. This would usually be done as part of the overall defect tracking.
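
A minimal sketch of such a risk register entry is shown below, mapping a CVSS base score to the team's remediation policy. The thresholds and the register layout are illustrative assumptions, not part of CVSS itself.

```python
# Minimal sketch of a risk register: each identified threat gets a CVSS
# base score and is mapped to the team's remediation policy. Thresholds
# and field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class RiskEntry:
    identifier: str
    description: str
    cvss_score: float   # 0.0 (lowest) .. 10.0 (highest), e.g. a CVSS base score
    status: str = "open"

    @property
    def severity(self) -> str:
        if self.cvss_score >= 7.0:
            return "high"     # policy: fix immediately
        if self.cvss_score >= 4.0:
            return "medium"   # policy: fix in next minor release
        return "low"          # policy: fix in next major release

register = [
    RiskEntry("R-001", "Hardcoded password in device firmware", 9.8),
    RiskEntry("R-002", "Verbose error messages leak stack traces", 5.3),
]

for entry in register:
    print(entry.identifier, entry.severity, entry.status)
```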

Security Architecture & Setup

Securing an AIoT system is not a single task, and the results of the threat modeling exercise are likely to show attack scenarios of very different kinds. Some of these scenarios will have to be addressed during the later phases of the DevSecOps cycle, e.g., during development and testing. However, some basic security measures can usually already be established as part of the system architecture and setup, including:

  • Basic security measures, such as firewalls and anti-virus software
  • Installation of network traffic monitors and port scanners (see the port-check sketch below)
  • Hardware-related security architecture measures, e.g., Trusted Platform Module (TPM) for extremely sensitive systems

These types of security-related architecture decisions should be made in close alignment with the product architecture team, early in the architecture design.
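
As a simple illustration of the port-scanning idea mentioned above, the sketch below probes a device for unexpectedly open TCP ports. The host address and port list are placeholders, and such probes should only be run against devices the team is authorized to test.

```python
# Minimal sketch of a port check of the kind a simple scanner performs:
# probe a device for unexpectedly open TCP ports. Host and ports are
# illustrative placeholders.
import socket

EXPECTED_OPEN = {443}                      # e.g. only TLS should be exposed
CANDIDATE_PORTS = [22, 23, 80, 443, 1883, 8883]

def open_ports(host: str, ports, timeout: float = 0.5):
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:
                found.append(port)
    return found

unexpected = set(open_ports("192.0.2.10", CANDIDATE_PORTS)) - EXPECTED_OPEN
if unexpected:
    print("unexpected open ports:", sorted(unexpected))
```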

Integration, Testing, and Operations

In DevSecOps, the development teams must be integrated into all security-related activities. At the code level, regular code reviews from a security perspective can be useful. At the hardware level, design and architecture reviews should also be performed from a security perspective. For AI, the actual coding is usually only a small part of the development; model design and training play a more important role and should also be included in regular security reviews.

Continuous Integration has to address security concerns specifically at the code level. Code-level security tests and inspections include:

  • Before compilation/packaging: Static Application Security Testing (SAST) tools analyze the source code for known vulnerability patterns (see the sketch after this list).
  • IAST (Interactive Application Security Testing) uses code instrumentation, which can slow down performance. Individual decisions about enabling/disabling it will have to be made as part of the CI process.
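
A minimal sketch of such a code-level gate is shown below. It assumes the Python SAST tool Bandit is installed and that its JSON report lists findings under "results" with an "issue_severity" field; the exact tool and report format should be adapted to the pipeline actually in use.

```python
# Minimal sketch of a code-level security gate in CI: run a SAST scan over
# the repository and fail the build on high-severity findings. Assumes the
# Bandit tool and its JSON report layout; adapt to the scanner in use.
import json
import subprocess
import sys

def run_sast(source_dir: str) -> list:
    completed = subprocess.run(
        ["bandit", "-r", source_dir, "-f", "json"],
        capture_output=True, text=True,
    )
    report = json.loads(completed.stdout or "{}")
    return report.get("results", [])

findings = run_sast("src/")
high = [f for f in findings if f.get("issue_severity") == "HIGH"]
if high:
    print(f"{len(high)} high-severity findings - failing the build")
    sys.exit(1)
print("SAST gate passed")
```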

Security testing comprises tests with a specific focus on uncovering security vulnerabilities. These can include:

  • Applications, e.g., DAST (Dynamic Application Security Testing)
  • Hardware-related security tests
  • AI model security tests
  • End-to-End System, e.g., manual and automated penetration tests

Secure operations have to cover a number of activities, including:

  • Threat Intelligence
  • Infrastructure and Network Testing (including Secure OTA)
  • Security tests in the field
  • RASP: Runtime Application Self-Protection
  • Monitor/Detect/Respond/Recover (see the sketch after this list)
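
As a small illustration of the Monitor/Detect part, the sketch below scans a stream of device security events and raises an alert when one device accumulates too many failed logins. The event format and threshold are illustrative assumptions.

```python
# Minimal sketch of Monitor/Detect in secure operations: watch device
# security events and alert on repeated failed logins. Event format and
# threshold are illustrative assumptions.
from collections import Counter

FAILED_LOGIN_THRESHOLD = 5

events = [
    {"device": "pump-17", "type": "login_failed"},
    {"device": "pump-17", "type": "login_failed"},
    {"device": "gateway-3", "type": "ota_applied"},
    # ... streamed from the fleet's security telemetry ...
]

failed = Counter(e["device"] for e in events if e["type"] == "login_failed")
for device, count in failed.items():
    if count >= FAILED_LOGIN_THRESHOLD:
        print(f"ALERT: possible brute-force attempt against {device}")
```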

Minimum Viable Security

The key challenge with security planning and implementation is to find the right approach and the right level of resource investment. If too little attention (and too small a share of project resources and budget) is given to security, there is a good chance that this will quickly result in a disaster. However, if the entire project is dominated by security, this is also a problem: both in terms of the resources allocated to different topics and because of the danger of over-engineering the security solution (making it too difficult to deliver the required features and usability). Figuring out the Minimum Viable Security is something that must be done jointly by product management and security experts. It is also important to treat this as an ongoing effort that constantly reacts to new threats and supports the system architecture as it evolves.

Trust Policy Management for AIoT

In addition to security-related activities, an AIoT product team should also consider taking a proactive approach toward broader trust policies. These trust policies can include topics such as:

  • Data sharing policies (e.g., sharing of IoT data with other stakeholders)
  • Transparency policies (e.g., making data sharing policies transparent to end users)
  • Ethics-related policies (e.g., for AI-based decisions)

Taking a holistic view of AIoT trust policies and establishing central trust policy management can significantly contribute to creating trust between all stakeholders involved.

The Digital Trust Forum (DTF) is working on Trust Policy Management for AIoT-based smart, connected products

Authors and Contributors

DIRK SLAMA
(Editor-in-Chief)

AUTHOR
Dirk Slama is VP and Chief Alliance Officer at Bosch Software Innovations (SI). Bosch SI is spearheading the Internet of Things (IoT) activities of Bosch, the global manufacturing and services group. Dirk has over 20 years of experience in very large-scale distributed application projects and system integration, including SOA, BPM, M2M, and most recently IoT. He represents Bosch at the Industrial Internet Consortium and is active in the Industry 4.0 community. He holds an MBA from IMD Lausanne as well as a Diploma Degree in Computer Science from TU Berlin.


PABLO ENDRES, SEVENSHIFT
CONTRIBUTOR
Pablo Endres, Founder of SevenShift GmbH. Experienced security consultant and Professional Hacker. Pablo’s career has taken place mostly doing security in a variety of industries, like Cloud Service providers, Banks, Telecommunications, contact centers, and universities. He holds a degree in computer engineering, as well as a handful security certifications. Pablo has founded multiple companies in different continents and enjoys hacking, IoT, teaching, working with new technologies, startups, collaborating with Open Source projects and being challenged.