

An appropriate ethical framework around the use of Artificial Intelligence (AI) in healthcare has become highly desirable with the increasingly widespread deployment of this technology. Advances in AI hold the promise of improving the precision of outcome prediction at the level of the individual. However, the addition of these technologies to patient-clinician interactions, as with any complex human interaction, has potential pitfalls. While physicians have always had to consider carefully the ethical background and implications of their actions, detailed deliberation may not have kept up with fast-moving technological progress. We use a common but key challenge in healthcare interactions, the disclosure of bad news (likely imminent death), to illustrate how the philosophical framework of the "Felicific Calculus," developed in the 18th century by Jeremy Bentham, may have a timely quasi-quantitative application in the age of AI. We show how this ethical algorithm can be used to assess, across seven mutually exclusive and exhaustive domains, whether an AI-supported action can be morally justified.

"Pleasure is the beginning and the end of the happy life: because we recognize pleasure as the first good and connate with us, and to this we have recourse as to a canon, judging every good by the reaction. Therefore every pleasure is good because it is of one nature with us, but every pleasure is not to be chosen; by the same reasoning, every pain is an evil, but every pain is not such as to be avoided at all times."

Some things appear to be straightforwardly good for people. Winning the lottery, marrying your true love, or securing a desired set of qualifications all seem to be examples of events that improve a person's life. As a normative ethical theory, Utilitarianism suggests that we can decide what is morally right or morally wrong by weighing up which of our possible future actions promotes such goodness in our lives and the lives of people more generally.

Consequentialism(s)

Consequentialism is the notion that it is the outcomes of our actions that matter most in moral analysis, not the actions themselves nor our motivations. When we start to do this sort of analysis, we must ask ourselves two questions: What is the good we are trying to maximize? For whom are we trying to maximize the good? There are multiple ways we can answer those questions.

For instance, we might say that the good is "any and all pleasures" or "only long-term rational well-being." You can imagine how different the moral analysis would be depending upon the definition of the good. Let's imagine you receive a fifty-dollar bill for your birthday and want to go out for a nice steak dinner. If we hold all pleasures to be equal, we could form a strong argument for going to that steak dinner instead of putting the money toward a class you need to graduate, even though the resulting degree would develop your character, widen your awareness of the world, and better prepare you to navigate our society successfully. However, if we use long-term rational well-being as the good we are trying to maximize, we would choose to value paying for college and attending sometimes difficult classes over the more immediate pleasure of the steak dinner.

Likewise, there are various ways we could answer the "for whom?" question. Perhaps we should maximize happiness only for ourselves, or maybe we maximize the good of all humans. Perhaps we expand the maximization of the good to all beings who can experience pleasure and suffering; this would widen our pool to include at least some of the animals (see the chapter on Animal Rights by Eduardo Salazar). Each of these possible answers to the "for whom?" question will dramatically change our moral analysis. Take the notion of preparing a steak dinner. If we take only ourselves into moral account, we should eat the steak and tip our server zero dollars. If we expand the morally relevant community to include all humans, we should eat the steak and tip our server generously. If we include animals in the moral analysis, we would not eat the steak, though we would still tip our server generously.

We've mentioned maximization in our discussion so far. Some, but not all, versions of consequentialism hold that it is not good enough merely to produce a net positive of pleasure through our actions; rather, we must choose the action that will bring about the greatest amount of overall pleasure. This gives us guidance in a situation with no good options, like the trolley case, where we have to choose between allowing five innocents to die on a runaway trolley or saving the five but killing one innocent bystander in the process. When we put various combinations of the above together, we get distinct but similar versions of consequentialism.

Approach

All that a person considers to be pleasurable
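Bentham's Felicific Calculus, cited above, scores a prospective pleasure or pain along seven dimensions (in Bentham's classic list: intensity, duration, certainty, propinquity, fecundity, purity, and extent), and maximizing consequentialism then picks the action with the greatest total score. The sketch below illustrates that idea in Python; the field names follow Bentham's list, but the 0-to-1 scales, the multiplicative scoring rule, and the example numbers are illustrative assumptions, not the article's actual seven domains or any endorsed weighting.

```python
from dataclasses import dataclass

@dataclass
class Prospect:
    """One anticipated pleasure or pain, scored on Bentham's seven dimensions.
    Scales are illustrative assumptions, not Bentham's own units."""
    intensity: float    # how strong the pleasure (or pain) is
    duration: float     # how long it lasts
    certainty: float    # probability it actually occurs (0..1)
    propinquity: float  # nearness in time (1 = immediate, toward 0 = remote)
    fecundity: float    # chance it is followed by further pleasures (0..1)
    purity: float       # chance it is NOT followed by pains (0..1)
    extent: int         # number of individuals affected

def hedonic_value(p: Prospect) -> float:
    """Quasi-quantitative score for one prospect (an assumed combination rule)."""
    base = p.intensity * p.duration * p.certainty * p.propinquity
    return base * (p.fecundity + p.purity) * p.extent

def best_action(actions: dict[str, list[Prospect]]) -> str:
    """Maximizing consequentialism: choose the action whose summed
    hedonic value across all affected prospects is greatest."""
    return max(actions, key=lambda a: sum(hedonic_value(p) for p in actions[a]))

# The steak-vs-tuition choice from the text, with made-up numbers:
# the dinner is intense but brief; the degree is long-lived and fecund.
choice = best_action({
    "steak dinner": [Prospect(8, 1, 0.9, 1.0, 0.2, 0.6, 1)],
    "pay tuition":  [Prospect(4, 50, 0.7, 0.3, 0.9, 0.9, 1)],
})
print(choice)  # -> pay tuition
```

Answering the "for whom?" question then amounts to deciding which individuals' prospects are included in each action's list before summing.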
