Can Machines Learn Morality? The Answer
For the last few decades, artificial intelligence (AI) research has grown at an exponential pace. As AI becomes more advanced, people are beginning to wonder whether machines can learn morality. As an ethics researcher, I contend that machines are unlikely to develop morality.
 
Before discussing what makes morality so difficult for machines to learn, let us first look at how the brain works, since it is the basis of our understanding of why humans are moral.

Two regions of the human brain in particular, the anterior cingulate cortex and the medial prefrontal cortex, jointly help us feel empathy and acquire morality by developing concepts of fairness and reciprocity. How this happens is discussed in more detail below.

How Does the Brain Work?

Our brains are complex networks made up of billions of neurons that receive sensory input and send electrical signals to cells in other parts of the brain, forming complex patterns of thought. Neurons pass these signals to one another across synapses, specialized junctions that relay activity from one cell, and from one brain region, to the next.
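To make the comparison with machines concrete, here is a minimal sketch of an artificial "neuron" of the kind used in machine learning. It is an illustration, not a model of real neurons: the function name and the numbers are my own, and the only point is that a machine unit also weighs incoming signals and passes on a firing strength.

```python
import math

def artificial_neuron(inputs, weights, bias):
    """Toy analogue of a neuron: weight the incoming signals, sum them,
    and squash the result into a firing strength between 0 and 1."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

# Hypothetical example: three incoming "sensory" signals.
signals = [0.9, 0.1, 0.4]
weights = [0.8, -0.5, 0.3]   # illustrative connection strengths
print(artificial_neuron(signals, weights, bias=-0.2))  # roughly 0.64
```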

How Do Emotions Develop?

Our emotions are thought to be initiated by neurons in the amygdala, and these same neurons interact with the hippocampus, which creates memories of the emotional state. Signals from other parts of the brain can make their way to the amygdala, increasing or decreasing the emotional response. Some studies report that the amygdala is, on average, physically larger in males than in females, which has been offered as one possible, and still speculative, explanation for why males may be more prone to violent behavior.

How Does Memory Develop?

 
Memory, too, is created by neurons in the hippocampus transmitting electrical signals to various parts of the brain, where long-term memories are formed and can later be recalled when needed.
 
The anterior cingulate cortex is a part of the brain involved in feelings of empathy and has been found to be important for morality. While some people can indeed be very empathetic, they are not necessarily so all the time; neuroscientists have found that even those with high empathy can be quite selfish at times.
 
The medial prefrontal cortex (mPFC) is a region of the brain that is important for acquiring morality from birth, as it receives signals from other parts of the brain. Studies have shown that infants as young as seven months old can distinguish between morally good and bad actions before they can even speak a word.

Can a Brain, Emotions, and Memory Be Developed by a Machine?

So far, no.

How Does Morality Develop?

 
Moral judgments are formed in various ways, which is part of what makes morality so complex: many factors influence how it is acquired, including socialization, genetics, upbringing, education, environment, emotion, and instinct. Different people also have different genetics, which can influence how they develop morals.
 
One of the most important factors in moral development is socialization, which refers to how we learn from our surroundings. In this sense, we are all socialized from birth to develop morals.
 
It has been found that a mother with more kinship ties around her tends to form a stronger attachment to her baby and is more likely to show the caregiving behavior that is important for infant survival. By contrast, a mother without many kin, however much she wants to help her baby learn to survive, is less likely to pass on these good behaviors to her children. This would limit the chances of developing morality in children born into such families, unlike those born into more supported, caring families.

So, Can Machines Learn Morality?

Whether machines can acquire morality depends on how we define morality: do we consider it to be determined entirely by biology, or do we also factor in external influences such as socialization?
 
If we define morality purely by biology, there may be a chance that machines will develop moral judgment systems like humans, because they would be subject to the same rules that biology imposes on us.
 
However, if we include external factors, it is more likely that machines will not acquire morals, even though they might be capable of imitating some aspects of human-like behavior, such as empathy and fairness towards others.
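To make that distinction concrete, here is a purely illustrative sketch (the function name and values are my own) of what "imitating fairness" could look like in code. The machine reproduces a fair-looking outcome, but none of the socialization that gives the rule its meaning.

```python
def split_fairly(resource, agents):
    """Imitate 'fairness' with one fixed rule: give every agent an equal share.
    The program has no concept of why equal shares matter; it only follows
    the rule it was given."""
    share = resource / len(agents)
    return {agent: share for agent in agents}

# Hypothetical example: dividing 12 units of food among three agents.
print(split_fairly(12.0, ["ann", "bo", "chi"]))
# {'ann': 4.0, 'bo': 4.0, 'chi': 4.0}
```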
 
There are several reasons why I think this is the case:
 
1) There will always be some conflict between social norms and biological rules.
 
Socialization is essential for humans to acquire morality from birth. For example, if a mother gives her child a poisonous mushroom, she may face social sanctions from other mothers, which limits her child's chances of developing morality; on the other hand, biological rules such as survival of the fittest may motivate her to pass the lesson on anyway. Two conflicting factors can therefore co-exist in humans and affect how they acquire morals. In machines, these conflicting pressures might not be present at all, resulting in a lack of any morality.
 
2) The definition of morality will need to be expanded to include the ideas of fairness and reciprocity.
 
Because a machine can often label an action as good or bad depending on the signals it processes, we are tempted to use the term "empathy" to describe machine behavior. However, we cannot define morality on empathy alone, because it is not entirely clear what constitutes good and bad, or what influences how and why we develop empathy in the first place.
 
3) The way in which humans acquire morals is different from the way in which machines acquire morals.
 
Humans are social animals who rely heavily on socialization in order to acquire morals, and the cultural practices we are socialized into shape our morals without our even realizing it. For example, if you had to choose between two women, one of whom had children while the other did not, which one would be more likely to win your heart? Most people would not even notice that the children had tipped the balance, because children are considered valuable in almost all cultures.
 
However, if we were to ask whether killing a child for food is morally wrong, most of us would say that it is, because we could not kill our own child no matter how hungry we were. We would not even imagine doing such a thing, because our socialization rules it out. Machines, on the other hand, might acquire morals only from what they see or are trained on, which makes them different from people.
 
4) Humans are more complex than machines, and it is therefore harder to program moral judgment systems into machines.
 
A particular behavior can be characterized by its intensity or by how often it is repeated. For example, killing an animal may be judged inherently bad regardless of the intention behind it (e.g., killing to eat for survival). Yet situations change with time and place, so the same act can be more or less morally acceptable to people with different beliefs and backgrounds.

Biological drives are also not tied to one specific type of behavior; they can apply to different behaviors that humans would consider morally wrong depending on the context. Killing an animal, for example, is considered morally wrong in some cultures but not in others, and people accept or reject such behavior based on their beliefs or social norms even though it may still cause harm to animals under other circumstances. A hard-coded rule cannot keep up with that kind of context, as the sketch after this list suggests.
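Here is a minimal sketch of that problem. Everything in it is hypothetical and exaggerated for illustration: a single hard-coded "moral" rule gives the same verdict in every context, which is exactly where a programmed system falls short of human judgment.

```python
def naive_judgment(action):
    """Judge an action with one fixed rule, ignoring intent, culture,
    time, and place."""
    return "wrong" if action == "kill_animal" else "acceptable"

# Hypothetical contexts that humans would weigh very differently.
contexts = [
    {"action": "kill_animal", "intent": "eat_for_survival"},
    {"action": "kill_animal", "intent": "sport"},
    {"action": "kill_animal", "intent": "religious_ritual"},
]

for ctx in contexts:
    # The rule returns the same verdict regardless of intent or culture.
    print(ctx["intent"], "->", naive_judgment(ctx["action"]))
```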

Conclusion

 
I am moral in the sense that I know it is bad to kill animals for food; however, I believe this is because I have been socialized into believing that it is bad. On the other hand, I also know that many people will kill animals for food without any moral conflict if they are taught otherwise.
 
I believe machines may not learn morality because of the conflict between the biological rules of survival and the socialization process. It will also be very hard to fully program morality into machines, because it involves more than empathy and fairness towards others.
 
Although I do not believe that machines can acquire morals, I do know that some scientists are working on moral machines. Since these efforts are far from complete, it is too early to determine whether moral machines will become a reality. If computers do eventually develop morality, many of the questions discussed above, including whether robots should have rights, will become obsolete.
 
P.S. I'm excited to see what OpenAI, co-founded by Elon Musk, will do.
