AI is not about AI
With any emerging technology, or combination of existing technologies, one tends to focus on the technology itself. However, the technology should not be the focal point, as it is not, in itself, the solution. It is a tool.
AI is no exception. The same goes for high-performance federated and edge computing, and the increasing creation of and access to data.
AI therefore is not about AI. It is about identifying and helping to address challenges – of which each member state, region and society at large has plenty – and achieving objectives that matter for people, planet, prosperity, peace and partnership. With this mindset, Europe has great capabilities to lead.
In the first blog of this series, we discussed the question why: why the symbiotic, dynamic equation of both functionals and non-functionals (such as security, privacy, maintainability, interoperability, digital sovereignty and accountability) – also known as 'all-functionals' – is one of the main success factors for future-proof Industry 5.0 and related value creation.
In the second blog, we focussed on the questions what and where to start: what are the relevant non-functionals, what does success by design mean, and where to start when plotting and mapping the contextually relevant risk spectra and the appropriate levels of dynamic accountability to cater for those hyperspectral risks.
In this third blog of the series, we dive into the question how: how to design the contextually relevant, symbiotic and dynamic equation of all-functionals, including some examples of how to methodologically make AI truly useful and make it work in a trustworthy way.
How to Make It Work, with or without AI?
There is no need or obligation to make AI work. As already mentioned, technology – emerging or otherwise – is one of many tools we can deploy, if and to the extent it makes sense to do so. There is, however, a need to make our vast, vital yet vulnerable supply chains work, as well as smart manufacturing, prognostic maintenance and related Industry 5.0 domains. That is the 'it' we need to focus on. Where and how do we include technology and other dimensions to make those future-proof supply chains, manufacturing and Industry 5.0 truly work?
Dimensions in the Digital Age
In this Digital Age, a complex set of dimensions needs to be considered and taken into account, by design, and before, during and after deployment, maintenance and optimisation.
Addressing the question 'how' starts with identifying a starting point, while understanding that there is a lot to take in.
There will be numerous dynamics of all sorts and sizes, as the technological and related developments already mentioned are expedited by non-digital global occurrences such as pandemics and geopolitical and demographic developments.
There are North Stars to continue to follow, including the recent European Declaration on Digital Rights & Principles, Europe's Fit for the Digital Decade Compass, its Data Strategy, and Cybersecurity Strategy for the Digital Decade.
There are objectives, targets, milestones and the like. But there is no end point, no finish. It is and will remain a required, continuous effort, catching up and keeping up with the dynamic developments in the nine intertwined dimensions structured and visualised in Figure 1 below.
It does not matter where one starts. One will generally start in the dimension one is most comfortable with, which is perfectly fine. However, each of these dimensions needs to be taken in, covered and – in a contextually relevant manner – balanced out. For instance, one can have great emerging technology available, but without the appropriate competences and capabilities – of leadership in organisations, and of the persons able to design, deploy, use, appreciate and maintain it – the technology will not make it work, and would basically be useless.
To consider risk, and to take technical, organisational and other operational measures by design and continuously thereafter, the scenario-plotting methodology of the Double-Looped S.I.M. can be used. S.I.M. stands for: Scenario, Impact, Measures. The double loop refers to the notion that any measure can itself be a vulnerability, and can even increase risk or create new risk and related detrimental impact. Every measure therefore deserves its own S.I.M. cycle as well. The Double-Looped S.I.M. is visualised in Figure 2 below.
Figure 2: Double-Looped Scenario Plotting: S.I.M.
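As a minimal illustration of the double loop – a sketch we add here for clarity, not part of the methodology's own materials, with all names (Scenario, Measure, plot) invented for the example – each measure can carry its own list of secondary scenarios, each of which triggers a nested S.I.M. cycle:

```python
from dataclasses import dataclass, field

@dataclass
class Measure:
    """A mitigating measure; it may itself introduce new risk (the second loop)."""
    name: str
    secondary_scenarios: list = field(default_factory=list)

@dataclass
class Scenario:
    """One S.I.M. cycle: a scenario, its impact, and candidate measures."""
    description: str
    impact: str
    measures: list = field(default_factory=list)

def plot(scenario, depth=0):
    """Walk the double loop: every measure gets its own S.I.M. cycle."""
    indent = "  " * depth
    lines = [f"{indent}Scenario: {scenario.description} -> Impact: {scenario.impact}"]
    for m in scenario.measures:
        lines.append(f"{indent}  Measure: {m.name}")
        for s in m.secondary_scenarios:  # the second loop: the measure's own S.I.M. cycle
            lines.extend(plot(s, depth + 2))
    return lines
```

For example, a 'primary sensor outage' scenario might be countered by an 'add backup sensor' measure, whose own S.I.M. cycle reveals that the backup widens the attack surface – exactly the kind of secondary risk the double loop is meant to surface.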
One will probably find numerous scenarios one would not immediately think of. Generally, the worst-case scenarios get the most attention because of their potentially disastrous or otherwise highly detrimental impact, even though their probability may be close to zero.
Good-case scenarios are generally forgotten, although they may cause (initially unforeseen) impact and negative consequences as well. In the now-familiar race to be first in a market, AI functions and applications that were seemingly designed 'for good' may carry severe negative societal, safety, security, personal, economic, ecological and other risks, impact and consequences. The humans who create AI generally work with a certain focus and a certain expertise but lack a holistic (risk) approach; they operate under pressure (including from investors, grant providers and others) and therefore with a high risk appetite, while not considering – or not being allowed to consider – other perspectives. Furthermore, emerging technology tends to breed overconfidence. In the case of AI capabilities, even the AI itself can be overconfident.
The question 'what happens if things go wrong?' is not one most designers, developers, investors or marketeers wish to ask themselves. Moreover, in the AI domain incidents are expected to have an even more severe impact than in digital domains without AI capabilities. These notions also apply to other emerging technological capabilities. Good and extensive scenario plotting and mapping are a prerequisite, also from ethical and accountability perspectives.
The appropriate balance between functionalities and their benefits on the one hand, and non-functionals and impact mitigation and their benefits on the other – with appropriate security and other prevention-, risk- and impact-based measures, metrics and measurements in place – will need to be found per situation and per context, and meanwhile monitored and challenged continuously. Just keep in mind that there is always another angle. Designing digital (eco)systems for failure is therefore essential. Design for failure, or chaos engineering, helps address those various angles and scenarios. It will increase transparency, reduce unpleasant surprises and embarrassing excuses, and most of all increase trust and trustworthiness. Making it work is complex, but that is where the truly huge potential lies.
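A common chaos-engineering pattern is to inject failures into a dependency on purpose, so the system's failure handling is exercised continuously rather than only in emergencies. The sketch below is a minimal, hypothetical illustration of that idea – the wrapper and its parameters are invented for this example, not a reference implementation:

```python
import random

def inject_failures(call, failure_rate=0.2, fallback=None):
    """Chaos-style wrapper: randomly fail a dependency call so the
    surrounding system's failure handling is exercised continuously."""
    def wrapped(*args, **kwargs):
        if random.random() < failure_rate:
            if fallback is not None:
                return fallback(*args, **kwargs)  # design for failure: degrade gracefully
            raise RuntimeError("injected failure")  # otherwise surface the failure loudly
        return call(*args, **kwargs)
    return wrapped
```

Running a service against such a wrapped dependency quickly reveals whether a fallback path actually exists – or whether 'another angle' was missed at design time.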
As in any cat-and-mouse game, malicious actors change and improve their ways as soon as they are countered. In AI, and in any other component of digital ecosystems, this eternal game will continue, increase in dynamics and speed, and expand exponentially. 'AI for Good' can easily be converted into AI for malicious purposes, and vice versa. Future networks will therefore indeed be smarter and safer, while at the same time being more vulnerable. This race will not be a sprint; it will be a permanent marathon with an unknown number of sprints.
Rather than only identifying risk, mitigating or containing impact, and preventing the misuse of vulnerabilities, the first focus should be on avoiding risks and vulnerabilities in the first place, preferably by design and in a continuous manner. This is not a task or responsibility of one person, one department or one organisation; no one can do this alone. For this, the Human-Centric Co-Creation Cycle methodology was developed, validated and deployed worldwide.
The Co-Creation Cycle is an aid that identifies the various all-functionals that are relevant in a particular design, development, manufacturing, logistics, monitoring, maintenance or subsequent deployment phase. It helps identify the expert stakeholders who should be part of the team in order to find, balance out, arbitrate, document and optimise a symbiosis of the all-functionals that is feasible from technical, operational, economic, ecological, financial, ethical and legal perspectives, and otherwise acceptable to all team members. It furthermore demonstrates that both a multi-disciplinary and an inter-disciplinary mindset and skillset are essential to make it work.
The Human-Centric Co-Creation Cycle, visualised in Figure 3 below, provides an example where – after identifying the envisioned functionality and related interfaces – non-functionals such as security, safety, authentication, and non-personal and personal data control, processing, protection, management and analytics need to be part of the symbiotic equation by design. If the set of desired all-functionals ends up being too expensive, unsustainable or otherwise not feasible, the cycle is repeated. It may need to be repeated multiple times before a dynamic symbiotic equation is finally established that all stakeholders involved deem feasible and acceptable for the entire life cycle.
This will be a main success factor in any use case, design, application or deployment, if considered and included (a) by default and by design upstream, (b) by default at engineering, assembly, implementation and making available midstream, and (c) by default before and after intended use, expected use and actual use downstream, throughout the whole life cycle.
Figure 3: Human-Centric Co-Creation Cycle
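The repeat-until-acceptable character of the cycle can be sketched as a simple iteration – again a hypothetical illustration we add here, with all names (stakeholder_checks, revise, etc.) invented for the example, not taken from the methodology itself:

```python
def co_creation_cycle(functionals, non_functionals, stakeholder_checks,
                      revise, max_rounds=10):
    """Iterate the cycle: propose a combined set of all-functionals, let every
    stakeholder judge its feasibility, and revise until all accept."""
    proposal = list(functionals) + list(non_functionals)
    for round_no in range(1, max_rounds + 1):
        # Each stakeholder (technical, financial, ethical, legal, ...) gives a verdict.
        verdicts = {name: check(proposal) for name, check in stakeholder_checks.items()}
        if all(verdicts.values()):
            return proposal, round_no  # symbiotic equation accepted by all
        proposal = revise(proposal, verdicts)  # adjust and repeat the cycle
    raise RuntimeError("no feasible all-functional equation within max_rounds")
```

For instance, if the finance stakeholder rejects a proposal containing an overly expensive redundancy option, the revise step drops it and the cycle runs again until every verdict is positive.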
By continuously working on the why, what, where and how, as briefly discussed in this and the previous blogs, the initial foundations of accountability have been laid as well. Accountability is not a mere afterthought, dealt with only after something goes wrong. It is an essential requirement before one acts, as well as during and after. Accountability is about owning and co-owning roles and responsibilities, finding solutions, making things happen, and helping out if things go wrong once in a while. Accountability also caters for becoming or being more future-proof while remaining compliant with relevant ethics, standards and other applicable policy and legal frameworks. In any case, accountability is not about blaming others, as blaming others means giving up the power of change. And change is the only constant, also in this dynamic Digital Age.
European Declaration on Digital Rights & Principles, the EU's 'digital DNA', in force per January 2023: https://digital-strategy.ec.europa.eu/en/library/european-declaration-digital-rights-and-principles
The Path to the Digital Decade: https://commission.europa.eu/strategy-and-policy/priorities-2019-2024/europe-fit-digital-age/europes-digital-decade-digital-targets-2030_en#the-path-to-the-digital-decade
European strategy for data: https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52020DC0066
Cybersecurity Strategy for the Digital Decade: https://digital-strategy.ec.europa.eu/en/library/eus-cybersecurity-strategy-digital-decade-0
If you would like to get involved and learn more about Safe and Trusted Human-Centric Artificial Intelligence in future manufacturing lines, Industry 5.0 and related sectors, domains and developments, make sure to contact or otherwise follow STAR-AI: