A 5by5 conversation with Ruth Schmidt, Visiting Professor and Director of Strategic Initiatives at the IIT Institute of Design, about how access is changing based on algorithms and what we can do to get it right.
Interview by Twisha Shah-Brandenburg & Thomas Brandenburg
Overview
Bureaucratic systems have been around for decades: financial institutions, HR departments, hospitals, legal departments, law enforcement. Each of these systems has been collecting data on users for years, and each is an arbiter of who gets access and how much.
As all of these institutions become digitized and big data and machine learning start to influence decision making, anonymous algorithms decide the fates of millions at some of the most crucial junctures of their lives.
In a TED Talk, Cathy O’Neil starts by saying: “We are scored by secret formulas that we don’t understand that don’t have systems of appeal.”
This set of questions explores the ethics around this new reality and hopes to give readers an understanding of what should be considered as we move into the future.
“Algorithms don’t exist in a vacuum; their results are interpreted and used by people, who introduce judgment into the equation.” —Ruth Schmidt
Question 1
Is it possible to create a completely unbiased algorithm?
There’s good evidence that it’s not… algorithms are crafted by people, who stitch their inherent biases into the choice of what factors contribute to those algorithms, and the data that get used in constructing algorithms are not as objective as they seem either. And algorithms don’t exist in a vacuum; their results are interpreted and used by people, who introduce judgment into the equation. You can have situations where the seemingly objective data that “tell” a bank, for example, whether someone is a good mortgage risk are grounded in factors that are predisposed to code certain types of applicants as less worthy. So despite the veneer of objectivity, the algorithms that use this data are biased from the get-go. And of course, if those algorithm-based judgments fit someone’s biased idea of what a home-owner does (or doesn’t) look like, that’s a problem.
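To make the proxy effect concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the data, the zip-code risk factor, and the scoring weights are invented for illustration, not drawn from any real lending model. The point is that a formula which never mentions group membership can still produce very different approval rates when one of its inputs is correlated with group membership.

```python
# A minimal sketch (hypothetical data and weights) of how a "neutral"
# scoring formula can inherit bias: the protected attribute is never used
# directly, but a correlated proxy (here, a zip-code risk factor) carries
# the same signal.
import random

random.seed(42)

def make_applicant(group):
    # Assumption for illustration: historical inequities mean group B
    # applicants are concentrated in zip codes the model has learned to
    # associate with past defaults. Income distributions overlap heavily.
    zip_risk = random.gauss(0.7, 0.1) if group == "B" else random.gauss(0.3, 0.1)
    income = random.gauss(60, 15)
    return {"group": group, "zip_risk": zip_risk, "income": income}

def score(applicant):
    # The formula never mentions "group", yet zip_risk acts as a proxy for it.
    return 0.6 * (applicant["income"] / 100) - 0.5 * applicant["zip_risk"]

applicants = [make_applicant(g) for g in "AB" * 5000]
threshold = 0.15

for g in "AB":
    pool = [a for a in applicants if a["group"] == g]
    approved = sum(score(a) > threshold for a in pool) / len(pool)
    print(f"Group {g}: {approved:.0%} approved")
```

Running this yields a large approval gap between the two groups even though the score function never looks at the group label, which is the sense in which such algorithms are "biased from the get-go."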
Question 2
What is the role of diversity in the planning and creation process of algorithms?
If we assume algorithms are biased from the get-go, there’s a potentially interesting role for diversity in identifying entry points that may help surface these biases. For starters, you can make sure you’ve got people with other viewpoints involved who can interrogate the assumptions being made about what data is used, how it’s interpreted, and so on. But you also need those diverse viewpoints when it comes to application and interpretation. For example, we need to recognize when an algorithm is likely to lead to biased results: using “do you have a relative who has been arrested” as a flag for “high risk” individuals may tip the balance of who gets detained after a routine traffic stop, but when multiplied by the fact that African Americans, among other ethnic groups, are already more likely to get pulled over, you’ve just amplified the effects of bias. So you need diversity of thought from beginning to end, past algorithmic execution to human execution.
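The multiplication Schmidt describes can be shown with back-of-the-envelope arithmetic. A rough sketch, with entirely hypothetical rates: when one group is stopped twice as often and the “relative with an arrest record” flag also fires twice as often for that group, the two disparities do not add, they multiply.

```python
# A rough arithmetic sketch (all rates hypothetical) of how two modest
# disparities compound into a larger one.
stop_rate = {"group_1": 0.05, "group_2": 0.10}  # chance of being pulled over
flag_rate = {"group_1": 0.10, "group_2": 0.20}  # chance the risk flag fires

for g in stop_rate:
    detained = stop_rate[g] * flag_rate[g]
    print(f"{g}: {detained:.1%} of people are both stopped and flagged")

# group_1: 0.5% vs group_2: 2.0% -- a 2x stop disparity and a 2x flag
# disparity combine into a 4x disparity in outcomes.
```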
Question 3
Blind orchestra auditions were introduced to keep biases in check, so that the focus is on the music and not the demographic information that can make the decision-making process subjective. What might we learn from this as we design algorithms that make decisions?
Blind audition scenarios, and others like them, such as the Applied software, have been shown to help with representation… but it’s interesting to take that example to an extreme in order to learn. For example, it’s also possible to put too much trust in algorithms, where we end up ignoring signals or explaining away warning signs in technologically derived decision-making. At the same time, we sometimes face situations where we are predisposed to override algorithms or evidence-based outcomes. Take medicine… there’s enough complexity and uncertainty in each case that while we may say we want to go with probabilities supplied by evidence-based medicine, you’ll be hard pressed to find a physician who is willing to go entirely with what the data says without thoroughly considering things the algorithms may not have taken into account (though of course this reliance on expertise can introduce a whole host of other biases). It may partly come down to the nature of the question posed as much as the data or algorithmic sophistication… some scenarios may just be better suited to purely data-driven solutions.
Question 4
What are the long-term effects of feedback loops? How might data scientists, designers, and engineers think about and monitor their data?
Algorithms and feedback loops are growing ever more sophisticated and nuanced… but this can also lead to unintended consequences, such as turning up too many positives or questionable results in health care scans that might previously have gone unnoticed. That can lead to a tricky situation, where the precision of diagnostics runs into a “do anything” mindset about treatment, so better detection of early-stage issues can sometimes lead to more people going through more invasive processes, earlier on, than may be necessary. People are already typically pretty bad at probabilities. And we now find ourselves in situations where we take advantage of better and more accurate diagnostics even when we have read the articles indicating they can lead to higher levels of false positives, or more aggressive treatment and side effects than one might otherwise have had. Health is an emotional issue as much as a physical one; the desire to pursue treatment immediately is pretty hard to overcome if you know there’s something you can do, but it also means we may not sufficiently weigh the risks or burdens of over-treating. Because who wants to be the one who didn’t seek treatment early enough?
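A worked base-rate example helps show why intuition fails here. The numbers below are hypothetical, chosen only to illustrate the arithmetic: when a condition is rare, even a fairly accurate screen produces mostly false positives.

```python
# A worked base-rate example (hypothetical numbers) of why a positive scan
# is less conclusive than intuition suggests.
prevalence = 0.01           # 1% of the screened population has the condition
sensitivity = 0.90          # P(positive | condition)
false_positive_rate = 0.09  # P(positive | no condition)

true_pos = prevalence * sensitivity
false_pos = (1 - prevalence) * false_positive_rate

# Probability that a positive result reflects a real condition (Bayes' rule)
ppv = true_pos / (true_pos + false_pos)
print(f"P(condition | positive) = {ppv:.1%}")  # roughly 9%
```

With these assumed rates, roughly nine out of ten positive results are false alarms, yet each one confronts a patient with the “do anything” decision Schmidt describes.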
Question 5
What is the future of data science? What signals are you looking at that make you excited and worried?
I am not myself a data scientist… so my sense of the signals to be aware of may be more from the outside in, as a layperson rather than as an expert. Data in organizational decision-making is an interesting area, because I’m personally very interested in behavior within organizations. I’m also intrigued by how to balance human and algorithmic judgment; there’s an article I assign students about an overdose given in a hospital, where the combination of social norms, alert fatigue, trust in machines, and expectations of what “normal” looks like all collide to create a scary situation. There’s no bad guy (or gal), and everyone was highly trained… so how do we wrestle with this complexity of human behavior and data-driven decision-making, especially in complicated, fast-moving systems?
Interested in this topic? Register to be part of a larger community at the Design Intersections conference in Chicago May 24-25, 2018.