About this Case Study
There is much excitement about conversation as a new material for design, driven in part by the increasing accessibility of voice user interfaces and the commoditisation of AI techniques. As adoption grows, devices like the Amazon Echo and Google Home, and assistants like Siri, are providing platforms for designers to interact with users in new ways. In spite of this (often hyped) anticipation of an AI-powered future, it is not always clear how the vision measures up to the lived reality of ‘having a conversation’ with machines.
I will present work being done at the University of Nottingham that is empirically examining how voice UIs like the Amazon Echo actually come to be used in social settings. By capturing naturalistic recordings of Echo use in participants’ homes, we can start to build a rich picture of how users ‘get stuff done’ with voice UI (as opposed to ‘have a conversation with it’).
Drawing on insights from the field of conversation analysis, we consider issues like the importance of response design in shaping how users direct speech to voice UI, how questions and instructions are addressed to the device and how responses are managed, how voice UI is collaboratively woven into ongoing talk and activities in the home, along with considering the importance of silence as a meaningful resource in talk.
Our findings lead to a range of practical implications, as well as conceptual challenges, for designers seeking to develop ‘conversations’ between voice UIs and users:
- How people talk to voice UI is continually directed towards making it fit into the flow of surrounding activities, with significant implications for response design.
- Looking in close detail at real instances of multiparty talk to and around voice UI can provide insight into the sensitive and complex ways that people organise what they say and when, all of which matters when designing voice UI input and response.
- It is worth moving away from thinking about designing conversations, and instead thinking about designing interactional resources for use in talk.
The target audience is anyone working with voice UI (or considering working with it); no specialist knowledge is required.
There are specific potential benefits for user researchers: the talk demonstrates what can be discovered by looking at instances of actual talk with voice UI, and how one can approach such data.
Participants on the interaction design side will benefit from a concluding set of challenges and considerations for the design of requests to, and responses from, voice UIs.
This is joint work with my colleagues Stuart Reeves and Joel Fischer at the University of Nottingham.
About the Speaker
Martin is a PhD student in the Mixed Reality Lab in Computer Science at the University of Nottingham; his thesis is pending examination.
Martin's research examines the use of everyday technologies such as smartphones and voice-based devices like the Amazon Echo in multi-party settings such as pubs and the home.
He is funded through the Horizon Centre for Doctoral Training, a £9.3m investment from EPSRC that focuses on ubiquitous computing, digital identity, and personal data.