Donald McMillan

Universitetslektor/Assistant Professor in the Post Interaction Computing group at Stockholm University.


Current Research

Advanced Adaptive Intelligent Systems

KTH Digitalisation Project
In collaboration with Iolanda Leite, Jonas Beskow, Britt Östlund, Joakim Gustafson and Christian Smith

This project is focused on the development of socially assistive robots for people’s homes and for educational or healthcare settings, as well as robots working alongside workers in small-scale manufacturing environments.
The proposed project can be seen as an example of Human-centered Artificial Intelligence (HAI) or Intelligence Augmentation (IA). The aim is to develop adaptive social robots that can understand humans’ communicative behaviour and task-related physical actions, and adapt their interaction to suit. We aim to investigate and demonstrate fluid and seamless adaptation of intelligent systems to users’ context, needs or preferences.

Designing New Speech Interfaces

VR Grant

This project is focused on understanding non-system-directed audio (such as ordinary conversation and ambient environmental sound), and on capturing not only what the user says to a system but how they say it, in order to design new user interfaces with richer interactions.
While this audio is much less constrained than dialogic speech directed at a device, and consequently extremely challenging to recognise and model, it offers a rich potential resource for system input and human-computer interaction that has been almost entirely neglected. Indeed, while speech recognition has long been an active area of computing research, human-computer interaction research on speech has received much less attention, limiting not only the opportunities for applications of speech but also our research understanding of how speech systems can fit with user activity and system use. We hope to open up new opportunities for human-computer interaction using audio detection and processing.
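As a concrete illustration of the kind of signal involved, the sketch below extracts two simple prosodic features, pitch and energy, from a recording: the "how it was said" rather than the "what was said". It is a minimal, hypothetical example using the librosa audio library, not project code; the filename and parameter choices are assumptions.

```python
# Minimal sketch (not project code): pulling simple prosodic features,
# pitch and loudness, out of an audio recording -- the kind of
# "how it was said" signal this project treats as potential system input.
import numpy as np
import librosa

# Hypothetical recording of ambient, non-system-directed audio.
y, sr = librosa.load("ambient_recording.wav", sr=16000)

# Pitch contour with the probabilistic YIN estimator; unvoiced frames
# (silence, noise) come back as NaN.
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
)

# Frame-level RMS energy as a rough loudness proxy.
rms = librosa.feature.rms(y=y)[0]

# Coarse summary statistics: shifts in pitch variance or loudness can
# signal emphasis or turn-taking independently of the words spoken.
voiced = f0[~np.isnan(f0)]
print(f"pitch: mean {voiced.mean():.1f} Hz, std {voiced.std():.1f} Hz")
print(f"energy: mean {rms.mean():.4f}, peak {rms.max():.4f}")
```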



Leveraging Eyegaze for Voice Assistant Interaction

In collaboration with the University of Tokyo and the University of Tsukuba, Japan

In this project we are developing and testing voice assistants able to track, respond to, and reciprocate the gaze of the users interacting with them. We hope that this line of research will not only provide more fluid interaction with such assistants, but will also help them better fit into the busy, messy, everyday contexts that define real-world use.
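To make the interaction model concrete, here is a minimal sketch of gaze-gated listening, roughly the behaviour explored in the Tama smart-speaker work listed below: the device opens its microphone when the user looks at it for long enough, rather than waiting for a wake word. The GazeTracker and Assistant interfaces are hypothetical stand-ins, not the actual system.

```python
# Minimal sketch (assumptions throughout): a voice assistant that starts
# listening on sustained user gaze instead of a wake word. `tracker` and
# `assistant` are hypothetical stand-ins for the sensing and dialogue
# components of a real system.
import time

DWELL_SECONDS = 0.7    # how long gaze must rest on the device
RELEASE_SECONDS = 2.0  # how long to keep listening after gaze leaves

def run(tracker, assistant):
    gaze_started = None   # when the current gaze-at-device episode began
    last_gaze = 0.0       # last time gaze was on the device
    listening = False

    while True:
        now = time.monotonic()
        if tracker.user_is_looking_at_device():
            gaze_started = gaze_started or now
            last_gaze = now
            # Sustained gaze opens the microphone and signals attention.
            if not listening and now - gaze_started >= DWELL_SECONDS:
                assistant.start_listening()
                assistant.show_attention()   # e.g. light up, turn to face
                listening = True
        else:
            gaze_started = None
            # Gaze has been away long enough: release the floor.
            if listening and now - last_gaze >= RELEASE_SECONDS:
                assistant.stop_listening()
                listening = False
        time.sleep(0.05)
```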



Implicit Interaction: Creating a new interface model for the Internet of Things

SSF project led by Prof. Kristina Höök & Prof. Barry Brown

With the growth in ubiquitous- and IoT-based systems there is now the opportunity to make significant improvements in how technology benefits everyday life. Yet existing systems are beset with manifest human interaction problems. Each individual system has been designed with a particular, limited interaction model: the smart lighting system in your apartment has not been designed for the sharing economy, the robot lawn mower might run off and leave your garden, and different parts of your entertainment system turn the volume up and down and fail to work together. Each smart object comes with its own form of interaction, its own mobile app, its own upgrade requirements, and its own manner of calling for users’ attention. Interaction models have been inherited from the desktop metaphor, and mobile interactions often come with their own apps using non-standardised icons, sounds, or notification frameworks. When put together, the current forms of smart technology do not blend, they cannot interface with one another, and, most importantly, as end-users we have to learn how to interact with each of them, one by one.

This project is built around developing a new interface paradigm that we call smart implicit interaction. Implicit interactions stay in the background, thriving on data analysis of speech, movements, and other contextual data, and avoiding unnecessarily disturbing us or grabbing our attention. When we turn to them, depending on context and functionality, they either shift into an explicit interaction, engaging us in a classical interaction dialogue (but one that starts from an analysis of the context at hand), or they continue to engage us implicitly through modalities that do not require explicit dialogue: the smart objects respond to the ways we move or engage in other tasks.
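One way to picture the paradigm is as a small state machine: the system idles in an implicit mode, continuously interpreting contextual signals and acting through ambient means, and escalates to an explicit dialogue only when the user turns to it. The sketch below is a schematic illustration under that framing, not the project’s architecture; the signal names are invented placeholders.

```python
# Schematic sketch (not the project's architecture): an implicit-first
# interaction loop that escalates to explicit dialogue only when the
# user orients to the device. Signal names are invented placeholders.
from enum import Enum, auto

class Mode(Enum):
    IMPLICIT = auto()   # background: sense and adapt, never interrupt
    EXPLICIT = auto()   # foreground: classical dialogue, context-primed

def act(command):
    print(f"[system] {command}")

def step(mode, signals):
    """One tick of the interaction loop over a dict of sensed signals."""
    if mode is Mode.IMPLICIT:
        # Respond through ambient means: pause media, adjust lighting...
        if signals.get("user_left_room"):
            act("pause_media")
        # Escalate only when the user visibly turns to the system.
        if signals.get("user_facing_device") and signals.get("speech_onset"):
            return Mode.EXPLICIT
        return Mode.IMPLICIT
    # Explicit dialogue starts from the already-analysed context,
    # and drops back to the background when it finishes.
    if signals.get("dialogue_finished"):
        return Mode.IMPLICIT
    return Mode.EXPLICIT
```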


Publications

My Google Scholar profile can be found here; the title of each paper below links directly to the PDF (when available).



Designing with Gaze: Tama - a Gaze Activated Smart-Speaker
Donald McMillan, Barry Brown, Ikkaku Kawaguchi, Razan Jaber, Jordi Solsona Belenguer, Hideaki Kuzuoka
ACM CSCW 2019 (ACM)

Patterns of gaze in speech agent interaction
Razan Jaber, Donald McMillan, Jordi Solsona Belenguer, Barry Brown
ACM CUI 2019 (ACM)

Musicians' initial encounters with a smart guitar
C Rossitto, A Rostami, J Tholander, D. McMillan, L Barkhuus, C Fischione, L Turchet
ACM TOCHI 2018 (Google Scholar)

Text in Talk: Lightweight Messages in Co-Present Interaction
B Brown, K O'Hara, M McGregor, D. McMillan
ACM TOCHI 2018 (Google Scholar)

The Smart Data Layer
M Sahlgren, E Ylipää, B Brown, K Helms, A Lampinen, D. McMillan, J Karlgren
ACM Interactions 2018 (Google Scholar)

Glimpses of the future: Designing fictions for mixed-reality performances
A Rostami, C Rossitto, D. McMillan, J Spence, R Taylor, J Hook, J Williamson, L Barkhuus
ACM Interactions 2018 (Google Scholar)

Connecting Citizens: Designing for Data Collection and Dissemination in the Smart City
D. McMillan
Proc. Internet Science 2017 (Google Scholar)

Implicit Interaction Through Machine Learning: Challenges in Design, Accountability, and Privacy
D. McMillan
Proc. Internet Science 2017 (Google Scholar)

The Smartwatch in Multi-device Interaction
D. McMillan
Proc. HCII 2017 (Google Scholar)

Friendly but not Friends: Designing for Spaces Between Friendship and Unfamiliarity
A Lampinen, D. McMillan, B Brown, Z Faraj, DN Cambazoglu, C Virtala
Proc. Communities & Technologies 2017 (Google Scholar)

Situating wearables: Smartwatch use in context
D. McMillan, B Brown, A Lampinen, M McGregor, E Hoggan, S Pizza
Proc. ACM CHI 2017 (Google Scholar)

Bio-sensed and embodied participation in interactive performance
A Rostami, D. McMillan, E Márquez Segura, C Rossitto, L Barkhuus
Proc. ACM TEI 2017 (Google Scholar)

The IKEA Catalogue: Design fiction in academic and industrial collaborations
B Brown, J Bleecker, M D'Adamo, P Ferreira, J Formo, M Glöss, M Holm, D. McMillan et al.
Proc. GROUP 2016 (Google Scholar)

Smartwatch in vivo
S Pizza, B Brown, D. McMillan, A Lampinen
Proc. ACM CHI 2016 (Google Scholar)

Data and the City (Honorable Mention)
D. McMillan, A Engström, A Lampinen, B Brown
Proc. ACM CHI 2016 (Google Scholar)

Five Provocations for Ethical HCI Research
B Brown, A Weilenmann, D. McMillan, A Lampinen
Proc. ACM CHI 2016 (Google Scholar)

Pick up and play: understanding tangibility for cloud media
D. McMillan, B. Brown, A. Sellen, S. Lindley, R. Martens
Proc. Mobile and Ubiquitous Multimedia 2016 (Google Scholar)

From in the wild to in vivo: Video Analysis of Mobile Device Use
D. McMillan, M. McGregor, B. Brown
Proc. MobileHCI 2015 (Google Scholar)

Repurposing Conversation: Experiments with the Continuous Speech Stream
D. McMillan, A. Loriette, B. Brown
Proc. ACM CHI 2015 (Google Scholar)

Searchable Objects: Search in Everyday Conversation
B. Brown, M. McGregor, D. McMillan
Proc. ACM CSCW 2015 (Google Scholar)

Improving consent in large scale mobile HCI through personalised representations of data
A. Morrison, D. McMillan, M. Chalmers
Proc. ACM NordiCHI 2014 (Google Scholar)

100 days of iPhone use: understanding the details of mobile device use
B. Brown, M. McGregor, D. McMillan
Proc. ACM MobileHCI 2014 (Google Scholar)

100 days of iPhone use: mobile recording in the wild
M. McGregor, B. Brown, D. McMillan
Proc. ACM CHI EA 2014 (Google Scholar)

Categorised Ethical Guidelines for Large Scale Mobile HCI
D. McMillan, A. Morrison & M. Chalmers
Proc. ACM CHI 2013 (Google Scholar)

A Hybrid Mass Participation Approach to Mobile Software Trials
A. Morrison, D. McMillan, S. Sherwood, S. Reeves & M. Chalmers
Proc. ACM CHI 2012 (Google Scholar)

Ethnography for Large Scale User Trials
D. McMillan, M. Chalmers
Workshop: Research in the Large 3.0 – App Stores, Wide Distribution, and Big Data in MobileHCI Research. MobileHCI, 2012 (Google Scholar)

Ethics, Logs and Videotape: Ethics in Large Scale User Trials and User Generated Content
M. Chalmers, D. McMillan, A. Morrison, H. Cramer, M. Rost & W. Mackay
Workshop summary, published as ACM CHI Extended Abstracts, 2011. (Google Scholar)

Informed consent and users’ attitudes to logging in large scale trials
A. Morrison, O. Brown, D. McMillan & M. Chalmers
Proc. ACM CHI 2011. (Google Scholar)

A Comparison of Distribution Channels for Large-Scale Deployments of iOS Applications
D. McMillan, A. Morrison & M. Chalmers
IJMHCI 3(4), Special Issue on “Research in the Large”, 1-17, 2011 (Google Scholar)

Experiences of mass participation in Ubicomp research
A. Morrison, S. Reeves, D. McMillan, M. Chalmers
Workshop: Research in the Large, Ubicomp, 2010 (Google Scholar)

Mass Participation in Evaluation and Design
A. Morrison, S. Reeves, D. McMillan, S. Sherwood, O. Brown & M. Chalmers
Proceedings of Digital Futures, 2010.

Further into the Wild: Running Worldwide Trials of Mobile Systems
D. McMillan, A. Morrison, O. Brown, M. Chalmers
Proc. Pervasive, 2010 (Google Scholar)

EyeSpy: supporting navigation through play
M. Bell, S. Reeves, B. Brown, S. Sherwood, D. McMillan, J. Ferguson, M. Chalmers.
Proc. ACM CHI 2009 (Google Scholar)