my dissertation

a study on transparency and user trust in the smart home


Smart homes kitted out with internet-connected devices are no longer the stuff of science fiction. Heating is a particular area of interest, given its great potential to reduce energy usage. Several 'smart heating systems' are commercially available (Nest, EcoBee, tado, etc.). One challenge not yet overcome is gaining users' trust to leave the thermostat alone and allow the system to 'do its thing.'

Dr. Simone Stumpf (my advisor) has worked on a whole host of projects on trust in smart systems: diagnostic tools for doctors, music recommendation systems, email sorting algorithms, and more. She and others have found that explaining what machine learning systems are doing tends to help people develop trust in those systems. My project, therefore, was to test whether textual or graphical explanations help users: (1) understand what the system is doing, and (2) trust the system to make decisions for them.


To test the effect of explanations on trust, I planned a remote, unmoderated experiment with 60 participants. I gave people scenarios in which their heating system acted differently from their expectations and provided them with mocked-up screens of a heating user interface. I also asked them questions to assess their understanding of, and level of trust in, the system. Finally, I analyzed participants' responses both quantitatively and qualitatively and wrote up a very long (and fascinating, I'm sure!) paper about my results.


I found that explanations improved people's understanding of what the system was doing. Surprisingly, they did not change trust in the system; however, they did change the basis for participants' trust in this system from generic 'it seems smart' to specific concerns about the decisions being made. It turns out that the more people know about what's going on behind the scenes, the more opinions they have about whether those decisions are appropriate.

My findings will be published in the Workshop on Explainable Systems at the upcoming Intelligent User Interfaces conference. This project was part of the FREEDOM project, done by City University of London in cooperation with Passiv Systems Ltd and funded by Western Power Distribution and Wales and West Utilities.


I created this infographic to give an overview of the experiment I conducted and its results. To analyze the data provided by participants, I used SPSS for the quantitative techniques (ANOVA, Kruskal-Wallis, chi-squared, Spearman's correlation coefficient). I also performed qualitative analysis of participant comments to understand the WHY behind the hard numbers.
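(For the curious: one of those techniques, Spearman's rank correlation, is simple enough to sketch by hand. The snippet below is a minimal illustration in Python rather than SPSS, using made-up understanding and trust scores purely as placeholder data; it assumes untied ranks, where the classic formula ρ = 1 − 6Σd²/(n(n²−1)) applies.)

```python
def ranks(values):
    """Assign rank 1..n to each value (assumes no ties)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman_rho(x, y):
    """Spearman's rank correlation via 1 - 6*sum(d^2) / (n*(n^2-1))."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d_squared = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d_squared / (n * (n ** 2 - 1))

# Hypothetical example: five participants' understanding and trust scores.
understanding = [1, 2, 3, 4, 5]
trust = [2, 1, 4, 3, 5]
print(spearman_rho(understanding, trust))  # 0.8
```

A statistics package handles ties and significance testing for you, but seeing the formula laid bare makes it clear why the measure only cares about rank order, not the raw score values.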

literature review

I started the project by scouring the academic literature for previous work on smart systems, people's perceptions of heating and energy, trust, and explanations' role in understanding and trust.

mind map of relevant smart heating and explanation literature

scenario & explanations

An example scenario given to participants. It starts by describing the situation (it's evening, you're cold, etc.), then presents a basic user interface and various explanations of what is happening. The scenario was followed by questions designed to assess people's understanding of the heating behavior and their trust in the system.

scenario and interface given to participants: your house is colder than expected
explanation given to some participants: you are experiencing a temporary delay in heating because of a nationwide shortage


I used a Gantt chart to plan my time across this very busy 3-month dissertation timeline. The schedule went through several iterations during the project, but its principles for dividing the time remained helpful for staying on track. Conducting a project of this scale against a set schedule was an important milestone for me.

gantt chart of my 3-month dissertation