Intelligent supply chains, rapid innovation production, integrated logistics support, prognostic health monitoring, predictive maintenance and other Industry 5.0 domains have the capability to address Societal Challenges and to improve productivity, safety, security, sustainability and overall efficiency.
New concepts, models and processes supported by AI and other digital capabilities are not a nice-to-have; they are a need-to-have. They will, and should, support and augment the workforce, yet they will also challenge and change it, in an evolutionary or revolutionary way. Put differently, what one markets as ‘beneficial’ can easily lead to social and societal unrest and disruption.
So, everybody will need to (A) continuously consider both sides of the same coin, as well as the related human-centric, societal, sustainable, economic and other perspectives, and (B) identify, deploy and continuously monitor and optimise the appropriate, contextual and nuanced symbiosis of these essential components, dimensions and perspectives.
This blog aims to provide further guidance in making AI truly work, not merely function.
This blog is part of a Series, so if you have not already done so, please also read Part 1, where we introduce the notion that human-centred AI can become an enabler and facilitator for the climate of change we need in Europe and worldwide.
Digital has become a must-have for people, society and the economy. Digital platforms, AI, robotics, edge computing and the Internet of Things (IoT) are further expediting this process by connecting, inter-connecting and hyper-connecting individuals, organisations, communities, societies and data, together with tens of billions of objects and entities.
All these technical capabilities and related digital ecosystems generally comprise a technical stack that can, to some extent, be visualised as set forth below in Figure 1. They are made up of some combination of the various forms of data, together with software-enabled algorithms backed by sufficient computing power – centralised, decentralised or distributed on the Edge or in IoT devices – plus interfaces, connectivity and infrastructure where necessary.
Compare it to cooking: once the kitchen tools, the ingredients, the basic cooking skills and a plan for what to cook are in place, one can draw up the technical functions and the functional specification, then the technical requirements and the technical specification, and thereafter do the actual development and engineering. Right after, it is time to demonstrate that it functions, and one is all set. Right? We all know it is difficult enough to make technology function. Especially with AI, making it function is no easy feat.
AI technology is an inherent component of Industry 5.0. However, even if the technology itself is at the right technology readiness level, the readiness of a technology in itself does not guarantee its success.
When it functions, does it actually work? What if it does not function?
Studies show that adding AI to a technology or process can strengthen its capacity to reach the envisioned outcome, yet it can just as well amplify the risk of negative impact. Digital technologies and intelligent networks are not immune to error, malice, incidents or other risks. Nor are they immune to incidental, incremental or disruptive change, caused by either internal or external factors. The many ‘What-If’ scenarios are generally not considered sufficiently, and are not re-run afterwards in a consistent and continuous manner.
Making it work implies having both the functionals and the non-functionals included, by design and by default, and taking them into consideration – and addressing them – end-to-end: upstream, midstream and downstream, in a holistic, system-thinking – and system-doing – spirit and approach.
Although new and seemingly burdensome for some, this will certainly pay off in truly making it work with AI in the equation. Before one notices, it will become second nature. The ‘it’ in ‘make it work’ is not AI or other technological functionalities or capabilities; it is a valued use case that addresses Societal Challenges of any kind.
When thinking and talking about risk, it is important not to see risk as something necessarily negative.
Risk is an integral part of the equation and, with that, an enabler and facilitator of anything that works in a trusted, trustworthy and accountable way. It gives essential and valuable insight into what may happen or may go wrong, what people or society like or fear, et cetera. In the AI or AI-supported domain, that makes it an essential success factor.
The magnitude of a risk, determined by its probability as well as its impact, is very much context- and application-dependent. To prepare for and mitigate potential harm, to embed preparedness for foreseen and unforeseen situations, and to make a system resilient and future-proof, AI systems need to be designed and deployed guided by trust principles. These non-functionals are principles that consistently preserve the trust, trustworthiness and engagement of all relevant stakeholders. Examples of such principles are security, safety, privacy, transparency, auditability, sustainability and robustness. There are several hundred trust principles; they can be found in best practices, guidelines, white papers, standards and regulations, but also in common practice and nature.
Two major challenges in AI design and deployment are (1) to map the relevant risks accurately and comprehensively throughout the system’s entire lifecycle, and (2) to incorporate the non-functionals by design.
Risk is not a four-letter word, and – even in the AI context – deserves its own series of studies, publications and the like.
In any case, it is useful to segment the various AI-related dimensions of this Digital Age. Segmentation provides structure, insight and oversight, and facilitates awareness, understanding and appreciation.
Keeping the holistic, end-to-end ecosystem mindset and approach, an initial segmentation into four (4) categories can be made: Non-connected, Connected, Inter-connected and Hyper-connected.
For each of these segments, various value cases, business models, feasibility models and therefore use cases can be identified and created in the AI-supported Industry 5.0 domain. Each segment has its own values, benefits, efficiencies, inefficiencies, et cetera.
The segmentation set out above is obviously not the only one possible. Various other segmentations are relevant to consider as well, such as real-time, near-real-time or neither. That segmentation may matter when near-real-time autonomous 3D printing is considered, or when real-time prognostic health monitoring or related integrated logistics support is relevant. Other segmentations that can be considered are single-vendor, multi-vendor, OEM, public, private, public-private, et cetera.
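To make the segmentation axes concrete, they can be sketched as a small data structure. The axis names and example values below are assumptions for illustration only, not a fixed taxonomy.

```python
from dataclasses import dataclass

# Hypothetical sketch: tagging a use case along the segmentation axes
# discussed above (connectivity, timing, sourcing). Values are illustrative.
CONNECTIVITY = ("non-connected", "connected", "inter-connected", "hyper-connected")
TIMING = ("real-time", "near-real-time", "not-time-critical")

@dataclass(frozen=True)
class UseCaseSegment:
    connectivity: str  # one of CONNECTIVITY
    timing: str        # one of TIMING
    sourcing: str      # e.g. "single-vendor", "multi-vendor", "OEM", "public-private"

    def __post_init__(self) -> None:
        # Guard against values outside the assumed axes
        assert self.connectivity in CONNECTIVITY
        assert self.timing in TIMING

# Two of the examples mentioned in the text:
printing = UseCaseSegment("hyper-connected", "near-real-time", "multi-vendor")
monitoring = UseCaseSegment("inter-connected", "real-time", "OEM")
```

Tagging use cases this way makes it explicit which segmentation axes apply, so that the value, feasibility and risk discussions per segment stay comparable.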
When focussing on one of the above-mentioned segments, Hyper-connected devices, and taking a risk perspective, a methodology for high-level risk classification is to take a multi-layered approach and perform the risk classification per spectrum, starting with the risk classification of the connectors and connectivity of the IoT device itself.
Even though AI capabilities may not yet be in the equation, it is essential to understand the various risks that are embedded in or could arise from such an IoT device. Subsequently, the other risk spectra should be considered and risk-classified, as visualised below in Figure 2.
Especially further downstream there may be risk spectra that are not (yet) relevant; however, if such a spectrum may become relevant later in the life cycle of the IoT device, it is advisable to keep it in and already perform its spectrum risk classification. In general, three main risk levels are used: low, medium and high. Based on the outcome of (i) the risk classification for each spectrum, and (ii) the interim outcome of the various risk classifications up to Risk Spectrum layer 13 (AI Capabilities), the baseline risk classification can be established.
Based on that baseline, the AI Capabilities risk classification can be done, followed by the subsequent risk spectra; the holistic perspective constitutes the Combined Risk Classification, on which one can consider and organise security, safety, privacy and related technical and organisational measures.
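As a minimal sketch of this multi-layered approach: assume each spectrum is rated low, medium or high, that the spectra up to layer 13 roll up into the baseline, and that the most conservative (highest) rating always wins. The max-based aggregation rule and the abridged layer list are assumptions for illustration; the actual methodology may weight or combine spectra differently.

```python
from enum import IntEnum

class Risk(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

def baseline_risk(upstream_spectra: list[Risk]) -> Risk:
    """Baseline = aggregate of the spectra before AI Capabilities.

    Assumed rule: the baseline inherits the highest risk level found
    in any upstream spectrum (most conservative).
    """
    return max(upstream_spectra)

def combined_risk(baseline: Risk, ai_risk: Risk, downstream: list[Risk]) -> Risk:
    """Combined Risk Classification over all spectra, AI Capabilities included."""
    return max([baseline, ai_risk, *downstream])

# Example: connectivity is low, two data-related spectra are medium (abridged
# stand-ins for layers 1..12), AI Capabilities medium, one downstream spectrum high.
base = baseline_risk([Risk.LOW, Risk.MEDIUM, Risk.MEDIUM])
overall = combined_risk(base, Risk.MEDIUM, [Risk.HIGH])
print(overall.name)  # HIGH
```

The conservative max rule means a single high-risk spectrum drives the Combined Risk Classification, which is exactly why keeping currently irrelevant spectra in the exercise, as recommended above, matters.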
In any case, the segments that have AI capabilities of any kind – whether non-connected, connected, inter-connected or hyper-connected – are certainly game-changing, and in them the non-functional and functional requirements have to be addressed together.
The winner will be the one who fully understands the societal challenges at hand and the related sectoral requirements.
Non-functionalities are as important as functionalities. Better still, they positively augment each other when balanced intelligently and correctly. The symbiosis of both is a main success factor for any development and deployment of AI.
For sure, Industry 5.0 and related ecosystems, including the persons, organisations and other stakeholders therein, can benefit from this and improve themselves towards human-centric, secure, safe, sustainable, trusted, trustworthy, resilient and otherwise future-proof systems.
How to methodologically make that work? We will discuss various proven methodologies in our subsequent blogs, so please stay tuned.
This blog is part of a Series. Part 1 can be found here.