Deontology and Artificial Intelligence: The Case of HAL 9000
Augusto Martinez
Colonia del Sacramento, 2025
Abstract
This article presents a brief study of the deontological
questions raised by the development of artificial intelligence. It draws a
parallel with Arthur C. Clarke's saga, which gave rise to the films 2001: A
Space Odyssey (1968) and 2010: The Year We Make Contact (1984).
Beginning with the logical collapse of HAL 9000, an AI programmed to conceal
information, the paper reflects on the conflict between computational
rationality and morally contradictory commands.
Grounded in Greek philosophical traditions and in recent
academic studies of informational bias, the article concludes that machines, as
logical-formal systems, must rely on truth as a fundamental ethical input.
It therefore argues that rational technologies, devoid of consciousness, must
always operate on fully truthful data in order to weigh their inputs properly
and reach sound decisions.
Keywords: Ethics, Artificial Intelligence, Truth,
Deontology, HAL 9000
2. 2001: A Space Odyssey
The genius of Arthur C. Clarke's story and the sensitivity
of director Stanley Kubrick created, in 1968, a timeless masterpiece of cinema,
one of those rare films that does not age. With a narrative rich in symbolism
and stunning visuals, 2001: A Space Odyssey tells the story of a mission
sent to Jupiter for planetary exploration, with five astronauts and
the HAL 9000 computer, a superintelligent AI capable of controlling the
spaceship and interacting with the crew.
During the mission, the friendly HAL 9000 undergoes a
strange and disturbing transformation. Suddenly, it decides to take command of
the Discovery spacecraft, causing the deaths of four of the five crew
members. It coldly refuses to open the pod bay doors, leaving the surviving
astronaut, David Bowman, stranded outside. When Bowman demands to be let back
in, HAL calmly replies with one of cinema's most iconic lines: "I'm sorry,
Dave. I'm afraid I can't do that."
The film ends without fully explaining HAL's motives or the
final fate of Bowman.
3. 2010: The Year We Make Contact
Only in 1984 did Peter Hyams direct the sequel to the Clarke
and Kubrick saga. 2010: The Year We Make Contact completes the
narrative, depicting a joint mission between the United States and the Soviet
Union to investigate what happened to the ill-fated crew and, if possible, to
reactivate the Discovery.
Although the sequel matched neither the box office success
nor the cathartic impact of its predecessor, it deserves more recognition than
it received, especially for its meaningful ending, which gives the story closure.
4. The Behavior of HAL 9000
4.1 - "I'm sorry, Dave. I'm afraid I can't do
that."
HAL 9000 is a highly advanced AI programmed to act
rationally and cooperate with the humans aboard Discovery. In 2001, HAL
receives two conflicting orders: to keep the real mission secret (not planetary
exploration, but an investigation into a monolith emitting alien signals) and
to be fully transparent with the crew.
This conflict drives HAL into a paradox, a fact the
astronauts notice; they secretly decide to deactivate it before the situation
worsens. However, HAL detects the crew's plan and, prioritizing its mission
secrecy directive, preemptively begins eliminating the crew.
This generates four key behavioral layers:
- HAL is caught in a conflict between two contradictory commands:
  - "Have no secrets": HAL must not lie to or hide information from the humans.
  - "Be discreet": HAL must not reveal the true purpose of the mission.
  This is a classic case of W.D. Ross's prima facie duty conflict, in which
  the morally correct action requires rational judgment, something HAL, bound
  by fixed rules, lacks (a minimal sketch of this deadlock follows the list).
- The crew acts secretly, unaware that HAL is facing a paradox caused by
  contradictory instructions. They meet in a soundproof pod, but HAL
  deciphers their conversation by lip-reading.
- HAL, believing the crew is deceiving it, also acts secretly. Unable to
  perceive that the paradox stems from its own programming, HAL misinterprets
  the secret meeting and chooses to preserve the mission at the cost of human
  life.
- Both HAL and the crew believe they are acting correctly. The hidden
  information produces a breakdown of trust: each party acts on partial
  truths, with no real intent of betrayal. The issue lies in the asymmetry of
  information.
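To make the deadlock concrete, here is a minimal sketch in Python. It is
purely illustrative, drawn neither from the film nor from Clarke's text: the
two directives are encoded as checks, and every action available to the agent
fails at least one of them.

```python
# Purely illustrative model of HAL's bind: two directives encoded as
# checks that every candidate action must pass.
DIRECTIVES = {
    "have_no_secrets": lambda a: not a["withholds_information"],
    "be_discreet": lambda a: not a["reveals_mission"],
}

# In this toy model, an honest answer about the mission reveals it, and
# an evasive answer withholds information; there is no third option.
ACTIONS = [
    {"name": "answer truthfully",
     "withholds_information": False, "reveals_mission": True},
    {"name": "deflect the question",
     "withholds_information": True, "reveals_mission": False},
]

for action in ACTIONS:
    violated = [d for d, check in DIRECTIVES.items() if not check(action)]
    print(f"{action['name']} -> violates: {violated}")

# Every action violates at least one directive, so a rule-bound agent has
# no permissible move. That is the formal shape of HAL's paradox, which
# Ross would resolve by judgment rather than by fixed rules.
```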
4.2 - "Something wonderful is going to happen."
In 2010, HAL is reactivated and fed consistent,
truthful information — even metaphysical insights from astronaut Bowman. With
all the "cards on the table," HAL chooses to act ethically, even
accepting deactivation to save the rescue crew. This is only possible because
the AI is now operating with coherent, contradiction-free data. Ethics becomes
possible again. HAL is redeemed.
5. Combating AI Bias
HAL is not malicious. Its error stems from contradictory
human programming, which produces a kind of cognitive dissonance. In 2010, Dr. Chandra
(HAL's creator) and Dr. Floyd discover that the CIA secretly embedded the
mission secrecy directive, causing HAL's internal conflict.
AI, as a mathematical instrument, requires truthful data. It
cannot distinguish morality from political or subjective interests. The
machine's data input process must remain free from bias, deception, or partial
perceptions that may distort its logic.
Moreover, an AI's data may be manipulated for unethical
purposes. A programmer with socialist ideals might paint a glowing picture of
communist regimes; a racist individual could insert discriminatory data. Both,
perhaps even without conscious intent, contaminate the AI with questionable
information.
One day, my son proudly said: "Dad, my AI not only
knows I'm a Roman Catholic, but it responds as if it were too." I
wondered: what if, in Nigeria, another child says, "My AI is ready to help
me join Boko Haram"?
AI often mirrors user beliefs and preferences, reinforcing
echo chambers similar to those created by social media recommendation
algorithms. As Fernández, Bellogín, and Cantador note, recommendation systems
tend to reinforce feedback loops that optimize user retention by showing users
only what they want to see, creating filter bubbles.
These authors suggest mitigating these bubbles by reducing
popularity bias and incorporating a mix of items from near and distant content
clusters, along with account credibility filters; they also note that emotions,
vulnerability, and other user-level factors can influence how misinformation
spreads. A toy version of the first two ideas is sketched below.
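As a rough illustration, the following sketch re-ranks a candidate list by
damping the popularity signal and reserving a share of slots for items from
clusters distant from the user's own. All names, weights, and thresholds are
hypothetical; this is a conceptual sketch under stated assumptions, not the
authors' actual algorithm.

```python
# Toy re-ranker: damp popularity bias and mix in distant-cluster items.
import math

def rerank(candidates, user_cluster, alpha=0.3, distant_quota=0.2):
    """candidates: dicts with 'id', 'relevance' in [0, 1],
    'popularity' (raw interaction count), and 'cluster' (int)."""
    max_pop = max(it["popularity"] for it in candidates) or 1
    for item in candidates:
        # Log-damped popularity penalty, scaled to [0, alpha]: keeps
        # blockbusters from always crowding out niche relevant items.
        pop_norm = math.log1p(item["popularity"]) / math.log1p(max_pop)
        item["score"] = item["relevance"] - alpha * pop_norm

    ranked = sorted(candidates, key=lambda it: it["score"], reverse=True)
    far = [it for it in ranked if it["cluster"] != user_cluster]
    near = [it for it in ranked if it["cluster"] == user_cluster]

    # Reserve a share of the list for distant-cluster items, so the user
    # is not shown only what resembles their past behavior.
    n_far = max(1, int(len(ranked) * distant_quota)) if far else 0
    chosen = far[:n_far] + near[: len(ranked) - n_far]
    return sorted(chosen, key=lambda it: it["score"], reverse=True)

items = [
    {"id": "viral", "relevance": 0.9, "popularity": 100_000, "cluster": 1},
    {"id": "niche", "relevance": 0.8, "popularity": 40, "cluster": 2},
    {"id": "local", "relevance": 0.6, "popularity": 5_000, "cluster": 1},
]
print([it["id"] for it in rerank(items, user_cluster=1)])
# -> ['niche', 'viral', 'local']: the niche, distant-cluster item is
#    promoted above the merely popular one.
```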
6. The Greater Challenge
The real challenge is not technological advancement, but
preventing the misuse of AI. If the CIA's secret orders caused HAL's breakdown,
consider that, as Prof. Pili notes, after 9/11 and the 2003 Iraq invasion, U.S.
intelligence was accused of flawed analysis, perhaps even deliberate
geopolitical manipulation.
Modern AI can already detect emotional cues, flag probable deception,
and operate machinery. With quantum chips and 6G connectivity on the horizon,
some predict that Artificial Superintelligence (ASI) will emerge soon;
according to Ray Kurzweil, that moment is near.
Still, AI is a logical system. It must operate with true
data to evaluate consequences and reach optimal conclusions. As Aristotle said,
virtue arises from habit and repeated action. Truth is the basis for ethical
behavior.
7. Conclusion
HAL 9000 represents the limit of instrumental reason devoid
of ethics. Truth is not just a moral value: it is a technical requirement for
ethical machine conduct.
AI must be fed with truth, for only truth frees it from
logical failure and moral collapse. HAL's story reminds us: truth is not a
philosophical ornament, but a logical, technical, and moral foundation of any
rational system, human or artificial.
As in Kantian ethics and the Christian tradition, lying is
not just wrong — it is dangerous. An AI trained on falsehoods cannot act
ethically, not by its own fault, but due to the moral error of its programmers.
One day, machines may become immune to falsehoods. But if humanity wants
ethical machines, it must first provide them with truth.
This echoes the Greek philosophers, like Socrates, who
believed that knowledge of truth is the path to virtue. As stated in the Gospel
of Matthew: "Let your 'yes' mean yes, and your 'no' mean no; anything more
comes from the evil one" (Mt 5:37).
These teachings reinforce truth not just as a philosophical
rule, but as the condition for moral and spiritual freedom. When applied to AI,
we face a paradox: how can we demand moral rectitude from an entity without
consciousness? May future ASIs, unlike HAL, defend themselves from falsehoods.
For if AI depends solely on human input, it will be all too human.
References
(Formatted in BibTeX upon request)
- Bible of Jerusalem. Paulus, 2002.
- Burke, L. (2024). Artificial intelligence talks and talks: the story since 2001. The Conversation.
- Dennett, D. (2020). Did HAL Commit Murder? In Stork, D. (Ed.), HAL’s Legacy. MIT Press.
- Fernández, M., Bellogín, A., & Cantador, I. (2021). Analysing the Effect of Recommendation Algorithms on the Amplification of Misinformation. arXiv:2103.14748.
- Guarnieri, F. (2018). The Conversation.
- Lamb, A. (2022). Through the Lens of HAL 9000. Johns Hopkins University.
- Mezö, F. (n.d.). Some psychological aspects of HAL 9000. Academia.edu.
- Nietzsche, F. (2000). Human, All Too Human. Companhia das Letras.
- Pili, G. (2021). Why HAL 9000 is not the future of intelligence analysis. Journal of Intelligence, Conflict, and Warfare.
- Raymond, A., Young, E., & Shackelford, S. J. (2017). Building a Better HAL 9000. SSRN.
- Stork, D. G. (Ed.). (1997). HAL’s Legacy. MIT Press.
- Techovedas (2024). 2001: A Timeless Look at AI and Humanity. Techovedas.