Under the Scalpel: Ethics in Machine Learning Research

Eric Feng
Edited by Maxx Yang
Thomas Jefferson High School for Science and Technology

Preface

It is easy to see machine learning (ML) and artificial intelligence (AI) as the answer to all of humanity's problems: they will cure cancer; end crime, poverty, and famine; prevent car accidents; and solve global warming, all because of a public dataset and a programmer with the novel idea of running an algorithm on it.

While the title of this article is "Ethics in Machine Learning Research," it is not about how machine learning will put people out of jobs or how robots will take over the world. It is about the real-world implications of research, and about how the rose-tinted glasses that blind us to reality create problems, not robust research.

This article offers insights that may challenge your understanding of computer science as a field, whether you are an active researcher or an avid follower.

Introduction

During the 2018 Neural Information Processing Systems (NeurIPS) conference, the presenters posed a thought experiment to the audience. They showed two pictures: one of a human surgeon and one of a robotic arm; both were capable of performing life-saving surgery. The robot was trained using what the presenters termed a black box model: it knew only its task, could not answer questions, and its reasoning was hidden inside the machine. The robot, however, had a mere 2% mortality rate, compared to the human surgeon's 15% [9].

If you had to go under the scalpel, which would you choose: the human or the robot?

While this scenario may be inconceivable in reality, it imitates a very tangible issue faced by artificial intelligence. This multi-trillion-dollar industry, spanning fields from medicine to agriculture, often values accuracy over interpretability and novelty over practicality [2]. If you're like every other researcher, scientist, engineer, and expert at that conference, you'll have chosen the robot. It doesn't take much for that 2% statistic to solidify the view of any data-oriented person that the robot is safer. As long as the robot reaches the correct answer, we see it as more reliable, more efficient, and less biased. But consider how important it is to you that the surgeon be capable of explaining their procedures, how important it is that they can adapt to your individual condition, or assure you that each step will not lead to complications. Or consider how little you know about the robot's training, or how your concerns may ultimately fall upon deaf circuits.

The Current State of AI

This thought experiment was posed just before the results of the 2018 Explainable Machine Learning Challenge were announced. This groundbreaking challenge was a collaboration between Google, the Fair Isaac Corporation (FICO), and researchers at Berkeley, Oxford, Imperial, UC Irvine, and MIT. The goal was to create a machine learning model with both high accuracy and explainability, in response to the widespread misuse of black boxes in the financial sector [3]. These black boxes are a recurring idea throughout this article and infiltrate every corner of computer science research. To understand their widespread impact and our irrational devotion to them, we must first understand the black box.

What is a Black Box in Machine Learning?

A black box is a model, algorithm, or equation that is sufficiently complex that its operations or reasoning are not interpretable [8]. The most prominent example is a neural network. If you've ever heard a computer science researcher talk about their "most recent project," it likely included one of these. A neural network makes predictions on new attributes based on what it has seen before (a training set that can range from tens to hundreds of thousands of samples). The only evidence of which attributes the network focuses on is synthesized into a vast matrix of weights and biases (often millions of trainable parameters) that appears as a table of arbitrary numbers, hence the black box. These models promise higher accuracy at the cost of reduced interpretability [8].
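
To make that opacity concrete, here is a minimal sketch in Python (my own illustration, not code from any cited study; the synthetic dataset, the scikit-learn library, and the layer sizes are arbitrary assumptions) of a small neural network whose entire learned "knowledge" is a set of weight matrices:

  # Train a small neural network and inspect what it actually stores.
  from sklearn.datasets import make_classification
  from sklearn.neural_network import MLPClassifier

  # Synthetic training set: 1,000 samples, each with 20 attributes.
  X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

  # A modest network; research models often have millions of parameters.
  model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0)
  model.fit(X, y)

  print("Training accuracy:", model.score(X, y))
  # Everything the model "learned" lives in these weight matrices.
  for i, weights in enumerate(model.coefs_):
      print(f"Layer {i} weight matrix shape: {weights.shape}")
  print(model.coefs_[0][:3, :5])  # a glimpse of the arbitrary-looking numbers

The printed matrices predict well, but nothing in them says which attributes mattered or why; that gap is exactly what the term "black box" describes.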

Ethics

Due to their high accuracy, black boxes have been applied repeatedly in healthcare, finance, criminal justice, and other fields whose impact reaches far beyond the programmer who codes the algorithm. However, these models don't always work as planned, and the effects can be devastating.

In healthcare, they're used to predict disease prognosis. However, a recent review of a black box model that successfully detected COVID-19 from chest X-rays found that the model wasn't basing its predictions on the lung images themselves, but rather on small text markers on the radiographs that were specific to that hospital's sample collection method. When the model was applied to any other setting, its accuracy dropped significantly. In the context of disease diagnosis, this focus on accuracy rather than interpretability is life-threatening [10].
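
One way to catch this kind of shortcut learning is to evaluate on data from a site the model never saw, rather than on a random split of one hospital's data. The sketch below (synthetic toy data of my own construction, not the reviewed COVID-19 model) shows how an artifact that correlates with the label at one site can inflate in-site accuracy and collapse elsewhere:

  import numpy as np
  from sklearn.linear_model import LogisticRegression

  rng = np.random.default_rng(0)

  def make_site(n, marker_strength):
      """Toy 'radiographs' reduced to two features: a weak true disease
      signal and a site-specific text-marker artifact."""
      y = rng.integers(0, 2, n)
      signal = y + rng.normal(0, 1.0, n)                    # weak real signal
      marker = y * marker_strength + rng.normal(0, 0.1, n)  # spurious cue
      return np.column_stack([signal, marker]), y

  X_train, y_train = make_site(2000, marker_strength=1.0)   # training hospital
  X_other, y_other = make_site(2000, marker_strength=0.0)   # different hospital

  model = LogisticRegression().fit(X_train, y_train)
  print("Same-hospital accuracy:", model.score(X_train, y_train))   # looks excellent
  print("Other-hospital accuracy:", model.score(X_other, y_other))  # drops sharply

If a model's accuracy depends on artifacts of how one site collects its data, an out-of-site evaluation exposes the problem before deployment does.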

In the financial sector, these models determine who gets access to loans and, by extension, credit. Black box models discriminate against traditionally low-income areas, keeping those areas poor; the resulting data then reinforces the model's biases, creating an inescapable cycle of poverty. Black boxes in automated trading may have even played a role in the 2008 recession and the 2010 "Flash Crash" [7, 1].

In the criminal sector, they're used as "predictive policing algorithms" to identify hotspot communities so that police can better allocate their resources. In practice, this increases police presence in disproportionately black communities. The Chicago Police Department took it one step further, using the algorithm to compile its Strategic Subjects List (SSL), which identified individuals who were "party to violence." The list, which the department said did not factor in race, labeled 56% of young black men in Chicago as dangerous [6]. This anomaly is a result of the underlying data, which recorded arrests rather than convictions. The algorithm reinforced the existing bias of racial profiling hidden within the data. While our legal system upholds the belief that all people are innocent until proven guilty, these algorithms did not.

And in late 2022, Meta AI released a demo of its newest model, Galactica: a large language model (LLM) built to "store, combine and reason about scientific knowledge." In practice, what it did was spit out random and often flawed connections tied together by assertive scientific language. Meta took down the demo after only three days, following a barrage of ethical concerns including scientific deepfakes and the model's tendency to assert fallacies as truth [4].

These are just a few examples, but they are enough to highlight the most pressing flaw in black box models. Reduced interpretability causes these models to inadvertently replicate and reinforce our own biases. Lack of interpretability results in a lack of accountability; because human input is removed, the results are assumed to be scientifically rigorous and impartial, when in practice they're quite the opposite.

Solution

Black box models are already deeply embedded within, and nearly synonymous with, ML research. Is there a solution?

Let's return to the premise of the Explainable Machine Learning Challenge, which was to create a model with both high accuracy and interpretability. Several teams simply created highly accurate black box models and allocated most of their time towards interpreting them. But the winners of the challenge took a different approach. Instead of trying to explain a black box model, they got rid of it entirely, creating a simple, optimized, interpretable model [5]. Not only did they show that small linear models could perform just as well as black boxes, but they did so by breaking the status quo, by questioning our most basic assumption: that we have to sacrifice interpretability for accuracy.
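
For contrast with the neural network sketched earlier, here is a minimal sketch of that interpretable alternative: a small linear model whose reasoning can be read directly from its coefficients. The data is synthetic and the feature names are hypothetical; this is not the FICO dataset or the winning team's actual model.

  from sklearn.datasets import make_classification
  from sklearn.linear_model import LogisticRegression

  # Hypothetical credit-style attributes, purely for illustration.
  feature_names = ["payment_history", "credit_utilization", "account_age",
                   "recent_inquiries", "total_debt"]

  X, y = make_classification(n_samples=2000, n_features=5, n_informative=5,
                             n_redundant=0, random_state=0)

  model = LogisticRegression().fit(X, y)
  print("Accuracy:", model.score(X, y))

  # Each coefficient states how a feature pushes the prediction up or down;
  # the explanation is the model itself, not a post-hoc approximation of it.
  for name, coef in zip(feature_names, model.coef_[0]):
      print(f"{name:20s} {coef:+.3f}")

Such a model may take careful feature engineering to match a black box's accuracy, but, as the challenge winners showed, that trade-off is often far smaller than assumed.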

This isn’t a call to action to replace all complex ML models with statistical approaches. In fact, it’s quite the opposite. ML has, and should continue to have, a major role within research, and has shown repeatedly that it can revolutionize entire fields of science. When talking about a solution, it’s important to recognize that black boxes are not synonymous with ML. Removing black boxes does not mean removing neural networks or transformers or deep learning. Black boxes are not defined as any particular algorithm; they’re defined by us. 

Instead of blindly chasing accuracy, take the time to understand why each optimization improves performance. Instead of throwing a bunch of pre-trained models at a problem and seeing what sticks, research those models in depth and understand their limitations and the shortcuts they can exploit. Machine learning is as much a tool to a researcher as a scalpel is to a surgeon; while capable of saving lives, if used without care, it can produce catastrophic results.

Modelers’ Hippocratic Oath

As a response to the Global Financial Crisis and the role statistical and machine learning models played in it, quantitative analysts Emanuel Derman and Paul Wilmott crafted the Modelers' Hippocratic Oath. It's a crucial read for computer scientists, who, from behind their computers and safely in their homes, can irrevocably alter the lives of people around the world. It is as follows [11]:

  1. I will remember that I didn’t make the world, and it doesn’t satisfy my equations.

  2. Though I will use my models boldly to estimate value, I will not be overly impressed by mathematics.

  3. I will never sacrifice reality for elegance without explaining why I have done so.

  4. Nor will I give the people who use my model false comfort about its accuracy. Instead, I will make explicit its assumptions and oversights.

  5. I understand that my work may have enormous effects on society and the economy, many of them beyond my comprehension.

I’ll ask you again the question that was posed at the beginning of the article: would you choose a human or a robot to perform a life-saving operation?

In the end, there’s no right answer, but I hope you’ve gained a critical perspective that you’ll apply as you conduct or interpret computer science research. Keep in mind the people research affects, because to them, the choice between interpretability and accuracy may very well be the difference between life and death.


References

[1] Borch, C. (2022). Machine learning, knowledge risk, and principal-agent problems in automated trading. Technology in Society, 68, 101852. https://doi.org/10.1016/j.techsoc.2021.101852

[2] Chui, M., Manyika, J., Miremadi, M., Henke, N., Chung, R., Nel, P., & Malhotra, S. (2018, April 17). Notes from the AI frontier: Applications and value of deep learning. McKinsey Global Institute. https://www.mckinsey.com/featured-insights/artificial-intelligence/notes-from-the-ai-frontier-applications-and-value-of-deep-learning

[3] Fair Isaac Corporation (FICO). (2018, June 13). Explainable Machine Learning Challenge. FICO Community. https://community.fico.com/s/explainable-machine-learning-challenge

[4] Heaven, W. D. (2022, November 18). Why Meta's latest large language model survived only three days online. MIT Technology Review. https://www.technologyreview.com/2022/11/18/1063487/meta-large-language-model-ai-only-survived-three-days-gpt-3-science/

[5] IBM Research. (2019, April 3). We didn't explain the black box – We replaced it with an interpretable model. FICO Community Blog. https://community.fico.com/s/blog-post/a5Q2E0000001czyUAA/fico1670

[6] Kunichoff, Y., & Sier, P. (2017, August 21). The contradictions of Chicago police's secretive list. Chicago Magazine. https://www.chicagomag.com/city-life/August-2017/Chicago-Police-Strategic-Subject-List/

[7] O'Neil, C. (2017). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown.

[8] Petch, J., Di, S., & Nelson, W. (2022). Opening the black box: The promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology, 38(2), 204-213. https://doi.org/10.1016/j.cjca.2021.09.004

[9] Rudin, C., & Radin, J. (2019). Why are we using black box models in AI when we don't need to? A lesson from an explainable AI competition. Harvard Data Science Review, 1(2). https://doi.org/10.1162/99608f92.5a8a3a3d

[10] Šiklar, M. (2021, August 3). Why building black-box models can be dangerous. Towards Data Science. https://towardsdatascience.com/why-building-black-box-models-can-be-dangerous-6f885b252818

[11] Wilmott, P., & Derman, E. (2009, January 8). Financial Modelers' Manifesto. https://wilmott.com/financial-modelers-manifesto/
