5by5: Technology and Tomorrow with Sarah Reid

A 5by5 Conversation with Sarah Reid, Social & Behavioural Insights Senior Advisor at the Ontario Securities Commission, about what we can learn from recent breaches in tech to design a more ethical tomorrow.

Interview by Twisha Shah-Brandenburg and Thomas Brandenburg

DISCLAIMER: The views in this interview are those of the interviewee, and do not necessarily reflect the views of the Ontario Securities Commission.

 

“We can see how homogeneity is shaping the design process. There are a whole host of things being developed that don’t actually solve for the right problems and reinforce problematic stereotypes (think: the feminization of pretty much every virtual assistant on the market and the inability of algorithms to recognize non-white faces).” —Sarah Reid

 

Question 1
What needs to change in how tech companies scale and grow to have the time to think about future implications?

The first thing that needs to change is who gets to design in the first place, and by that I mean who “counts” as an entrepreneur. Who gets access to the social and economic capital required to even get started, let alone grow, has significant implications for both how and what is designed. In other words, our mental models and rhetoric of tech as a distinctive site of meritocracy are part of what needs to change. Like every other industry, tech has an in-group that reinforces the idea that young, white, middle-class men are the only ones with the good ideas and the grit to make it. Tech isn’t neutral, and there are far-reaching implications associated with that for both companies and society.

We can see how homogeneity is shaping the design process. There are a whole host of things being developed that don’t actually solve for the right problems and reinforce problematic stereotypes (think: the feminization of pretty much every virtual assistant on the market and the inability of algorithms to recognize non-white faces). This is partly because many teams are literally sprinting past their users toward refining solutions, based on the assumption that solving for one’s own needs solves for everyone else’s. I’m all for using prototypes as stimulus to learn about your users’ needs, preferences, and contexts, but this is not what’s happening in practice, because understanding needs usually takes a bit of time, which becomes inconvenient in go-fast problem solving. It’s also really tough to adequately appreciate a diversity of potential needs and users when you’re operating in an echo chamber. The tendency to want to go fast, coupled with a lack of diversity, can produce an initial problem-solution mismatch that will inevitably get amplified during the scaling stage, and that gets much more expensive because you’re now retrofitting designs that don’t quite fit and certainly can’t adapt.

 

Question 2
How do digital interfaces need to change in order to help consumers with different levels of comfort and technology literacy understand what they are participating in?

I tend to think about these things in terms of accessibility and inclusion. It helps to start with the diverse needs of end users and build from there, as opposed to building for the average person (is there an average person? Spoiler alert: no). That doesn’t mean we can build interfaces to meet the needs of everyone in every context, but many of the things edge users need in order to navigate an interface will suit the needs of most other users, too. Strong contrast on a screen helps all users better distinguish elements on a page, for instance. Larger type is likely to serve the needs of more people than smaller type. These are very simple things to do to make interfaces navigable.

Aside from these basic facets of accessibility, there’s a whole host of behavioural principles that digital interfaces ought to consider for facilitating a good user experience. Digital interfaces shape user behaviour, whether intentionally or not. In this way, all interfaces prime behaviours, so it’s a good idea to have a sense of what’s drawing users’ attention and how those cues shape how users participate. Three things come to mind: simplicity, evaluability, and transparency. Simplicity refers to removing small barriers that inhibit interaction, like overly complicated sign-in processes or multi-page navigation for task completion, as well as to the messages being communicated: are they in plain language? I need to understand what my options are in order to meaningfully navigate them. If I’m in a situation where I need to make a decision between options, I need a simple way to compare them in terms that make sense to me. There’s a growing body of research showing preference reversals in contexts where options are evaluated jointly as opposed to in isolation. While not always the case, users most often fare better when they have a few options (but not too many!) to compare, because it gives them a context to adjudicate in, with a greater diversity of information, so no one aspect of an option is overweighted. Finally, transparency goes a long way toward building trust and ethical interactions, which should be at the heart of all things digital. Tell me why you need certain information and what you’re going to do with it. Tell me where I am in a process and where I’m headed to set my expectations, and let me course correct if I need to by showing me the consequences of various choices. This isn’t exhaustive, but it’s a good start.

“Shifting from fintech to social media and tech more broadly, from a user perspective, I’d like to hear more about informed consent, not just ‘user agreement.’” —Sarah Reid

 

Question 3
How do you think tech-based companies should be regulated? What level of control still helps innovation thrive but allows for external oversight that looks out for the safety of citizens?

This is a tough, complex question that I’m not sure we have figured out just yet. The Ontario Securities Commission (OSC) is taking this complexity head on by working closely with the fintech community to understand and respond to a rapidly evolving space in a way that balances digital innovation and growth with investor protection. There are a few ways the OSC is partnering with industry to achieve this priority. One is the Fintech Advisory Committee, which includes key players from a broad spectrum of the fintech community, ranging from innovation hubs to startups to financial institutions, who advise OSC staff on developments and trends in the fintech space, as well as on the unique challenges encountered by innovative businesses in the securities industry. The OSC also established LaunchPad (http://www.osclaunchpad.ca) in 2016, the first dedicated team at a securities regulator in Canada to provide direct support to eligible fintech businesses in navigating regulatory requirements.

Shifting from fintech to social media and tech more broadly, from a user perspective, I’d like to hear more about informed consent, not just “user agreement”. Most of the narrative we’re hearing from the tech provider standpoint is about liability and avoiding getting sued, when the focus should be on helping users better understand, in plain terms, what happens with their data and what they are trading off in terms of privacy to get, say, more personalization. No human is reading the 150 pages of the user agreement, and the choice to design that way ultimately harms users because it ignores actual behaviour. Every additional page represents a barrier for users. All of the rules and expectations that apply in human exchanges should apply in digital ones. Would you expect a person to hand over all of the contacts on their mobile device so you could try to sell them something? Of course not. But this is what’s happening. You can find it on page 85 of the terms of service…

 

Question 4
How do you incorporate ethics into the development of a new product/service within an organization?

There’s a saying with a long history in public policy, the disability rights movement, and, more recently, inclusive design circles: “nothing about us without us.” The process of designing the new should be the same no matter the context: you need to include the users who will be affected by your new product/service throughout the entire process in order for it to be ethical. This means going beyond superficial “consultation” to a contribution-oriented approach where different stakeholders across an organization participate in the framing, designing, and testing of the new product/service. Then there’s the test phase, which is particularly important and too often overlooked. There needs to be a conversation about measuring the social and behavioural outcomes of the new product/service, and there needs to be an infrastructure and a plan to do that work. Otherwise, how will you know the impact it’s having? If you don’t test, you’re potentially creating a whole host of undesirable behaviours, which is inherently risky and potentially unethical.

 

Question 5
What are the principles of data security that companies should have top of mind?

I’m certainly no expert in data security, but from the point of view of behaviour design, I think it would be fruitful to focus more attention on helping users establish safer digital behaviours. We’ve all experienced the frustration of having to reset a password every two months. It’s always the last thing we want to do in that moment, especially when we have to include an uppercase character, a lowercase character, a number, nothing that appears in the dictionary, and so on. I’m not suggesting this isn’t important from a data security standpoint, but from a user experience standpoint, it’s a barrier. Right now it seems like companies take the mandatory-training approach to data security, when the evidence suggests that training does little to change behaviour, and when it does, the effect is usually fleeting. It may be more useful to invest even a small amount into making that process easier for users, so they’re motivated to make secure choices. Maybe the consequences of not doing so need to be made more salient to break through inertia. Perhaps an automatically generated password, backed by an active acknowledgement that you’ve documented it some place, would help. These are empirical questions.

 

“We’re at a really interesting time in terms of the application of design, sociology, and behaviour change models to wicked problems like gender and racial inequalities in the workplace…” —Sarah Reid

BONUS
Question 6
What are the things that keep you up at night?

I think a lot about persistent and growing inequalities and how they’re being talked about, researched, and solved for. We’re at a really interesting time in terms of the application of design, sociology, and behaviour change models to wicked problems like gender and racial inequalities in the workplace, for example. This is important for two reasons: it’s putting a much-needed focus on empirical evidence (both quantitative and qualitative) as opposed to hype, and it’s opening up space for cross-pollination and more nuanced identification of how conditions at different levels shape behaviour, and therefore for thinking about how our approaches to change might incorporate context beyond the individual level.

 

Interested in this topic? Register to be part of a larger community at the Design Intersections conference in Chicago, May 24–25, 2018.