is the subject of my Trade Tripper column in this Friday-Saturday issue of BusinessWorld:
Nowadays, talking about morality is considered a bad thing.
It speaks to many, particularly the young, of the suppression of freedom,
self-expression, and creativity. To the simplistic-minded, morality is
synonymous with the repression of sexuality. And to ask that morality be applied in public life is taken
either as a demand born of naivety or, worse, as an attempt to impose
one’s beliefs on what is supposedly a pluralistic society.
But what is morality except a way of referring to the behavior of human beings? It
doesn’t per se refer to determinations of good or bad.
"Morality" derives from the French "moral", which in turn was taken
from the Latin "moralis". "Moralis" simply denotes "manners" and is
related to the Latin word "mos", which means manner or custom.
Today, of course, when one speaks of morality it is generally taken to
mean discussions of good or bad. But the reason "morality" has
taken on such a meaning is precisely that it refers to the acts of human
beings. Assuming you think that human beings are not merely "ideas" (i.e., that
reality consists only of thoughts or imaginings) or merely bodies (i.e.,
without intellect, ruled by passions or compulsions; the
religious would speak here of a "soul"), then you would have to
agree that we have the ability (or the freedom) to make choices (i.e., a
free will) as to how we act.
Assuming you further believe that human beings have a purpose (or an
"end", which follows logically because we are moving; only the dead are
static), whether it be the Aristotelian or Platonic belief that we are
meant to be "happy" (taken to mean truly fulfilling our
humanity; the religious, of course, would say heaven), then our acts
can be categorized by whether they bring us to that
happiness. An act that does is considered "good". An act that
does not, or that achieves a happiness at the cost of a greater
happiness, is considered "bad".
Since man has an intellect (and free will), the reasonable thing for him
to do is to choose the "good" (which allows him to be truly "happy",
truly human and not a mere animal ruled by compulsions) and avoid the
"bad" (which makes him unhappy, less human). "Ethics", by the way, simply
refers to the formal study of morality.
That, in a reasoned, logical nutshell, is what morality is (no references
to religious scripture here). Morality takes on significance simply
because we are human beings possessed of an intellect and the freedom to
make reasonable choices; we are not mere animals ruled by instinct,
passion, compulsion, or hormones.
The problem is that robots are apparently becoming… like us.
As The Economist ("Morals and the Machine", 2 June 2012) pointed
out: "As robots become more autonomous, the notion of
computer-controlled machines facing ethical decisions is moving out of
the realm of science fiction and into the real world. Society needs to
find ways to ensure that they are better equipped to make moral
judgments…"
With robots (and computers) becoming more "intelligent" and their
immersion in our lives ever more pervasive, they have now come into
positions where their calculations become a matter of life or death
for humans on a daily basis: "Should a drone fire on a house where a
target is known to be hiding, which may also be sheltering civilians?
Should a driverless car swerve to avoid pedestrians if that means
hitting other vehicles or endangering its occupants? Should a robot
involved in disaster recovery tell people the truth about what is
happening if that risks causing a panic? Such questions have led to the
emergence of the field of ‘machine ethics’, which aims to give machines
the ability to make such choices appropriately -- in other words, to
tell right from wrong."
One remedy is simply not to use robots. The problem, however, is that even in
warfare, when the choice is between the risks of using robots and putting thousands of
soldiers in harm’s way, the advantage of resorting to robots is clear.
International law has certainly not been remiss in examining this new
reality. As Dave Go of the Ateneo Law School wrote (in his 2012 paper
Weathering the Electric Storm: Analyzing the Consequences of Cyber
Warfare in Light of the Principles of International Humanitarian Law):
"Cyber warfare is a new kind of warfare. As such, the present principles
of war laid down in International Law should apply -- much more
specifically the principles of International Humanitarian Law. In
conducting war, participants must ensure that humanitarian rights are
still upheld."
The irony of it all is that while we are now worrying about how to make robots
moral, some people are still obsessed with removing morality from our
so-called pluralistic society.