Digital Data 2019, Week 03

Week 03: Justice, Not Fairness

The transfer of power from human to computerized decision-making threatens to make our society less just and equitable at ever-faster speeds. This is true from a purely physical standpoint: companies are investing millions of dollars to construct their own private networks and to locate themselves as close as possible to network distribution centers in major cities in order to process transactions a few microseconds faster, a margin that can mean the difference between winning and losing in competitive markets (Google Cloud Platform, 2016; Slavin, 2011). It is also true from a societal standpoint. As complicated mathematical models are increasingly used to process data and inform decisions, even experts don’t always understand how the algorithms work, and non-experts frequently “feel like they don’t have the right to question” these decision-making processes (cunytv75, 2016). This is not to say that computerized decision-making via algorithms is entirely negative. If we can use algorithms to predict post-partum depression months before symptoms occur, primary care physicians can watch for those symptoms and intervene early (Tufekci, 2016). As a mechanism for providing information to humans to better inform decision-making, algorithms are indeed promising.

Not all medical algorithms are potentially as promising (Munroe, n.d.).

However, too often, algorithms are used to replace human decision-making rather than to supplement it. Even if there are benefits to be gained, the biggest problem posed by algorithms is that they create an environment in which people are treated as fallible beings when it’s convenient, and held to impossible standards when it’s not. When algorithms designed and implemented by humans present us with results that seem unintuitive (for example, a teacher highly respected by her colleagues and supervisors being fired due to statistical underperformance), we treat these decisions as “unflinching verdicts”; yet if that same teacher were to push back on the result, we’d require her to meet an unreasonable standard of proof in making her case (O’Neil, 2016, p. 10). It’s not enough, as O’Neil suggests, to push for greater fairness in the use of algorithms, ensuring only that everyone has an equal opportunity to be fired by math. We must push for justice, keeping our humanity firmly integrated with our decision-making processes. Humans are flawed, but at least we can be held accountable for our actions.

cunytv75. (2016). The Open Mind: Death by Algorithm – Cathy O’Neil. Retrieved from https://www.youtube.com/watch?v=cK87rN4xpqA

Google Cloud Platform. (2016). Google Data Center 360° Tour. Retrieved from https://www.youtube.com/watch?v=zDAYZU4A3w0

Munroe, R. (n.d.). xkcd: Watson Medical Algorithm. Retrieved February 4, 2019, from https://www.xkcd.com/1619/

O’Neil, C. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Crown.

Slavin, K. (2011). How algorithms shape our world. Retrieved from https://www.ted.com/talks/kevin_slavin_how_algorithms_shape_our_world

Tufekci, Z. (2016). Machine intelligence makes human morals more important. Retrieved from https://www.ted.com/talks/zeynep_tufekci_machine_intelligence_makes_human_morals_more_important
