Why Utilitarianism Incorrectly Justifies Anything
Utilitarianism, founded by Jeremy Bentham and developed by John Stuart Mill, is a framework for making moral judgments. It claims that the ethical choice in any scenario is the action that maximizes pleasure for the greatest number of people while minimizing pain. A famous example of applied utilitarianism is the trolley problem: a trolley is about to run over five people tied to the tracks, but a bystander can divert it onto another track holding only one person. Utilitarians would not skip a beat; they would divert the trolley, because one person’s death produces less net pain than five people’s deaths. I used to be a strong advocate for utilitarianism. It seemed like a practical approach to life that worked for almost every scenario I tested. However, I came to realize that its argument has several flaws, because it can be used to incorrectly justify any action whatsoever.
I first began to notice the issue with utilitarianism when I read about the Utility Monster, a thought experiment by Robert Nozick, a famous American 20th-century philosopher. The analogy is simple: imagine a monster living within a population that is genetically engineered to feel 50x more pleasure than anyone else. Using cookies to symbolize the population’s resources, the way to maximize “net pleasure” is for the people to give all the cookies to the monster. But since utilitarianism also strongly encourages the maximum pleasure for the greatest number of people, a contradiction appears: shouldn’t the cookies all be shared instead, even if that brings down the “net pleasure”? At first, Nozick’s idea didn’t completely disillusion me from using utilitarian frameworks in my moral calculus, because it seemed like an unrealistic hypothetical situation.
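To make the arithmetic concrete, here is a minimal sketch in Python. The multipliers and cookie count are illustrative numbers of my own choosing, not from Nozick’s original text; the point is only that a naive “maximize net pleasure” rule hands every cookie to the monster.

```python
# A minimal sketch of the Utility Monster arithmetic.
# The multipliers and cookie count are illustrative, not Nozick's.

def allocate_cookies(multipliers, cookies):
    """Greedily give each cookie to whoever gains the most pleasure from it.

    Assumes pleasure is linear in cookies: person i gets multipliers[i]
    units of pleasure per cookie (a deliberately naive utilitarian model).
    """
    allocation = [0] * len(multipliers)
    for _ in range(cookies):
        # With linear pleasure, the best recipient never changes.
        best = max(range(len(multipliers)), key=lambda i: multipliers[i])
        allocation[best] += 1
    return allocation

# Four ordinary people (1x pleasure) and one monster (50x pleasure).
multipliers = [1, 1, 1, 1, 50]
print(allocate_cookies(multipliers, cookies=10))  # [0, 0, 0, 0, 10]
```

Maximizing net pleasure gives the monster everything and everyone else nothing, which is exactly the contradiction above.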
However, after pondering further, I realized that the situation was not unrealistic at all. Of course, we won’t have monsters with broken dopamine receptors in real life, but it dawned on me that pleasure and pain cannot be measured under any kind of metric. If I eat the exact same meal alongside ten other people, each individual will derive a different level of “pleasure.” Some may not like the dressing; others may not like how well-done the steak is. Yet there is absolutely no way to quantify any of these “pleasures.” Objective units such as kilograms or liters do not exist for the realm of pain and pleasure: there are no “pleasuregrams” or “paincups.” The problem with a moral calculus without metrics is that everything becomes a subjective quarrel. It would look something like this:
“I feel more pain than the pleasure you derive from this activity!”
“No, you don’t!”
“Prove it!”
“I can’t!”
“Well, then how in the world are we going to agree?!”
“We can’t!”
One could argue that a simple 1–10 scale of pain and pleasure could serve as an alternative. However, this idea also fails to escape the grip of subjectivity; one person’s level-10 pleasure is highly unlikely to be equivalent to another’s. At a doctor’s office, people are often asked to equate “level 10 pain” with the “worst pain you’ve ever felt.” The worst pain of a soldier whose arm was blown off on a battlefield will be drastically different from the worst pain of a nine-year-old boy. Thus, the moral calculation completely falls apart. In fact, it is even possible to reenact the Utility Monster in real life. All anyone has to do is claim that they personally feel drastically more pain or pleasure than normal. A group of people could assert that eating strawberry ice cream brings incomparably more joy than studying. Under utilitarianism, wouldn’t they be absolutely justified in quitting school to pursue ice cream?
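A quick sketch shows how easily this works once self-reported intensities are taken at face value. The numbers and activity labels here are hypothetical; any group can tip the calculus simply by reporting large enough numbers.

```python
# A sketch of the "real-life Utility Monster": if self-reported pleasure
# and pain are taken at face value, any claim can win the moral calculus.
# All figures below are hypothetical self-reports, unverifiable by design.

def net_pleasure(reports):
    """Sum self-reported pleasure (positive) and pain (negative)."""
    return sum(reports)

studying  = [-2, -3, -1]       # the group's self-reported "pain" of studying
ice_cream = [100, 100, 100]    # their self-reported, incomparable joy

if net_pleasure(ice_cream) > net_pleasure(studying):
    print("Utilitarian verdict: quit school, pursue ice cream.")
```

Since no one can disprove the reports, the calculus has no choice but to endorse them.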
But making this argument usually makes utilitarians frown and insist that, obviously, rational people wouldn’t make decisions like sacrificing all educational opportunities for ice cream in real life. That response highlights utilitarianism’s underlying assumption: that humans are generally rational, able to act according to sound logic. However, humans are often deeply irrational because they are, at bottom, animals. Many individuals are chained to their emotions because they are biologically wired to be so. Further, rationality seems to clash with utilitarianism’s core idea of “maximizing pleasure,” because the genealogy of pleasure stretches back to our animal ancestors, long before anyone had a concept of rationality. Wouldn’t prioritizing pleasure go against rationality, since many acts of pleasure aren’t the most logical? How can the two coincide?
Also, since human preferences are entirely subjective, it is possible for multiple “correct” people to come to entirely different conclusions at once. I could be “correct” by finding what brings the most pleasure to me, which will probably differ greatly from other people’s net pleasure and pain, which would also make them “correct.” How can we ever arrive at a moral judgment if many people can be “correct” at the same time? At the end of the day, decisions must favor a certain result, which means that some people who reached a “correct” utilitarian judgment will not be favored. These reasons illustrate why utilitarianism isn’t consistent enough to actually be applied.