Designing autonomous systems

30 July 2020

Autonomous systems like self-driving cars, industrial robots, or smart assistants may evoke mixed feelings in us: they make us excited about new technologies, but at the same time they can leave us feeling uncertain, uncomfortable, and anxious. A couple of years ago, Apple was the leading example of good design, with everybody aiming for interfaces as easy and joyful to use as the iPhone. Designing autonomous systems in a way that addresses user needs, business demands, and technological requirements, and at the same time delivers an intuitive, easy-to-use, and fun experience, is a challenging task. Autonomous systems are complex and not always predictable, owing to intelligent algorithms, contextual awareness, and personalized interactions. In the following, I am going to discuss some aspects we need to consider when designing autonomous systems.

Human attitudes towards autonomous machines

Autonomous systems have been around for a comparatively short period of time, and their potential is relatively unknown to many smaller organizations. Except for marketers and consultants who love to talk AI, Big Data, and New Technologies, autonomous systems still carry a considerable amount of “magic” for most people who were previously happy to operate machinery. A big part of this challenge is that truly productive autonomous systems are far less glamorous, and far more mission-critical and fail-safe, than the current storytelling might suggest. A central consequence for designers is not to oversell a feature-specific solution but to involve users and make them familiar with the potential the technology holds.

Do you trust the machine?

Trust is central to designing autonomous systems. However, trust can be directed at many different qualities: does the user merely trust the system to perform safely? Or does she worry about the machine behaving so flawlessly and efficiently that she, as the operator, might be replaced by the system altogether? The central tenet here is that good design determines how predictable, and in turn how trustworthy, an autonomous system is perceived to be, and how much trust it demands of its users. When a system behaves in line with what I know to expect, I find it easier to build trust in it. Consequently, designing a trustworthy system hinges on managing uncertainty and fostering predictability: we need to strengthen the characteristics that make people feel more certain and in control of a system, and systematically remove the inconsistencies that feed uncertainty.
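
What this can look like in practice: instead of hiding uncertainty, the system exposes it and hands control back when its confidence is low. The sketch below is a minimal, hypothetical illustration in Python; the Decision class, the 0.8 threshold, and the wording of the messages are my own assumptions, not a prescribed pattern.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    action: str        # what the system intends to do
    confidence: float  # self-assessed certainty between 0.0 and 1.0
    rationale: str     # human-readable reason for the decision

# Hypothetical threshold: below it, the system defers to the human.
CONFIDENCE_THRESHOLD = 0.8

def execute_or_defer(decision: Decision) -> str:
    """Act autonomously only when confident; otherwise explain and defer."""
    if decision.confidence >= CONFIDENCE_THRESHOLD:
        return f"Executing '{decision.action}': {decision.rationale}"
    # Surfacing the rationale makes the hand-back feel predictable
    # rather than like an unexplained failure.
    return (f"Your input is needed on '{decision.action}' "
            f"(confidence {decision.confidence:.0%}): {decision.rationale}")

print(execute_or_defer(Decision("merge left", 0.93, "lane closes ahead")))
print(execute_or_defer(Decision("merge left", 0.55, "camera view degraded by rain")))
```

The point is not the particular threshold but the consistency of the behavior around it: the user learns when the system will act on its own and when it will ask.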

Whom to put in control

Autonomy means giving up control. What remains? Creative problem solving. But what's easily overlooked is that this shifts the job description of many former operators. Are we prepared to make this step, both from an organizational standpoint and from an individual, psychological one? Do my machine operators have the training and capability not to operate but to troubleshoot processes? For designers, the question is how to shape processes and experiences that minimize the stress on an operator who is only brought into the process when it fails. Philosophically, the question is whether the human operator should, and can, always be the failsafe and troubleshooter in high-risk environments, or whether machines might be better suited to handle catastrophes.
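
One concrete design lever here is a staged handover instead of a single last-second alarm. The following sketch is purely illustrative; the stages, lead times, and wording are assumptions of mine, not an established protocol. The idea is to bring the operator mentally into the loop before the failure point.

```python
# Hypothetical escalation schedule: seconds of advance notice
# mapped to the message shown once that stage is reached.
ESCALATION = [
    (0, "You have control now."),
    (10, "Please prepare to take over."),
    (30, "Heads up: automation may need your input shortly."),
]

def handover_message(seconds_remaining: float) -> str:
    """Return the notification for the current stage of a control handover."""
    for lead_time, message in ESCALATION:
        if lead_time >= seconds_remaining:
            return message
    return ""  # more than 30 seconds out: nothing to announce yet

print(handover_message(25))  # Heads up: automation may need your input shortly.
print(handover_message(5))   # Please prepare to take over.
print(handover_message(0))   # You have control now.
```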

Aspects to help the design process

1. Apply human-centered thinking and methodologies early in the process

Design is not just about the interface, and even less about making the end result look pretty. Human-centered thinking needs to be incorporated into the development of a new product or service right from the start. While this has proven to be a successful approach in software development, it is less common in the development of automation solutions to involve designers early in the process. As designers, we bring the right mindset of putting user needs at the center, and we have the methodologies and tools to identify hidden needs early, avoid mistakes, and create products that are more successful with users.

2. Apply diverse perspectives

The saying goes that when all you have is a hammer, every problem looks like a nail. If you put a room full of engineers on a problem, they will naturally use their engineering mindset and tools to solve it. The lack of diversity in STEM fields can further narrow perspectives and lead to homogeneous problem-solving patterns. Thus, we need not only to bring designers to the table early but also to include other perspectives, those of users, business stakeholders, and others, in the process.

3. Apply updated design guidelines

Standards that inform the design process, such as ISO 9241, have been around for a long time and can be applied to any kind of system, whether the system at hand is a vending machine, a smartphone app, or a web service. Due to their generic nature, many of those guidelines still hold true for modern applications, but it helps to add guidance specific to autonomous and intelligent systems. One example is the Microsoft Guidelines for Human-AI Interaction (https://www.microsoft.com/en-us/research/publication/guidelines-for-human-ai-interaction/), a set of 18 guidelines directed at different stages of the interaction, including aspects such as “learn from user behavior”, “make clear why the system did what it did”, and “match relevant social norms”.
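
To make one of these guidelines tangible, here is a toy sketch of what “learn from user behavior” could look like. Everything in it, the override counter, the threshold of three, the setting values, is a made-up illustration rather than anything the Microsoft guidelines prescribe: the system notices that the user keeps correcting a default in the same way and takes the hint.

```python
from collections import Counter

class AdaptiveDefault:
    """Adapt a default setting after repeated, consistent user overrides."""

    def __init__(self, default: str, threshold: int = 3):
        self.default = default
        self.threshold = threshold
        self.overrides = Counter()

    def record_override(self, chosen: str) -> None:
        self.overrides[chosen] += 1
        if self.overrides[chosen] >= self.threshold:
            # Enough consistent corrections: adopt the user's choice.
            self.default = chosen
            self.overrides.clear()

temperature = AdaptiveDefault(default="21 °C")
for _ in range(3):
    temperature.record_override("19 °C")  # user keeps turning it down
print(temperature.default)  # 19 °C
```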
