
Verified by Psychology Today

Artificial Intelligence

How Machine Learning Differs from Human Learning

Can AI develop human values and ethics?

Key points

  • Artificial Intelligence remains very different from human intellect.
  • Human children (but not AI) develop ideas of right and wrong gradually, with morals emerging in stages.
  • Human children (but not AI) learn in the context of parents and teachers who guide their moral development.
  • As we create more human-like machines, we must train the next generation of AI to be ethical.
Image by AstroAi from Pixabay

Despite all the media buzz surrounding Artificial Intelligence (AI), these smart bots’ intelligence remains very different from human intelligence. AI may crunch gigabytes of data while simulating conversational speech, but bots do not yet come anywhere near the complexity, nuance, and multiple intelligences of a human mind.

AI bots can play chess but still cannot truly express how it feels for them to win or lose. Simulated conversations built from algorithm-driven phrases and grammar do not give chatbots the capacity for human insight or empathy. Perhaps this is why some people fear AI, or at least seem unsure whether it will save us or destroy us.

Humans develop in stages, guided by others.

To instill values and morality into AI, the programmers might try to imitate the way children learn and develop notions of right and wrong. Children’s thinking seems to emerge in stages, sometimes undergoing remarkable mental leaps and growth spurts. It takes years for children to evolve adult-like thinking, emotional intelligence, theory of mind, and metacognition.

Most importantly, humans learn in the context of parents, teachers, peers, and others who adjust their helping behaviors to each child’s level and capacity (scaffolding). Should we even expect AI to think like a human, or someday demonstrate empathy, when it is not programmed to learn gradually, in stages, with human guidance? Can we ever expect AI to learn values or develop empathy and morality unless bots are carefully guided by others to reason about right and wrong as human children are?

Furthermore, humans possess a unique natural curiosity. Children continually yearn to know more and strive to explore and understand the world and themselves. Therefore, it is not enough simply to program machines to learn. We must also endow AI with an innate curiosity: not just data hunger, but something more akin to a human child’s biological drive to understand, organize, and adapt. Programmers are already working with deep-learning models to improve AI continually, using algorithms inspired by human neurocognition.(1)

If AI machines are ever to possess ethics, empathy, conscience, or moral values, they must develop into high-functioning moral beings on their own. Where do empathy, kindness, and compassion come from? Are they innate? More likely, you acquired your values and morals through life experience. Perhaps AIs must come to higher moral reasoning through gradual experience and guidance, much as humans do.

To create ethical AI, machine learners must develop gradually, in steps, with adults and ethicists guiding the AI much the way a parent or teacher guides a child’s moral evolution. This takes time. The next generation of AI will require training beyond linguistics and data synthesis. We must teach AI to go beyond the rules of words and syntax, so that next-generation AI learns about right and wrong, perhaps starting with Asimov’s laws of robotics: “A robot (or AI) may not harm humanity, or, by inaction, allow humanity to come to harm.”(2) Can we develop an AI capable of thinking for itself, beyond merely obeying rules?

Can we program a "post-conventional" AI? Would we want to?

As humans mature, they develop higher-level "adult" moral reasoning. Psychologist Lawrence Kohlberg referred to the highest level as “post-conventional” thought.(3) The idea is that advanced moral reasoning can go beyond adherence to local laws (conventions) to discovering universal ethical principles to live by.

The question is: Do we want AIs to become sentient, post-conventional, independent thinkers capable of going beyond the rules? That is a frightening thought. As we strive to create ever more complex and human-like machines, we must consider how future programmers will design the next generation of AI to have emotional intelligence, empathy, and ethical thinking and behavior.

References

(1) A guide to machine learning algorithms and their applications https://www.sas.com/en_gb/insights/articles/analytics/machine-learning-…

(2) Asimov, Isaac (1950). "Runaround". in I, Robot (The Isaac Asimov Collection). New York City: Doubleday. ISBN 978-0-385-42304-5.

(3) Kohlberg, L. (1981). The philosophy of moral development: Moral stages and the idea of justice. San Francisco: Harper & Row.

Kohlberg, L. (1976). Moral stages and moralization: The cognitive-developmental approach. In T. Lickona (Ed.), Moral development and behavior. New York: Holt, Rinehart, & Winston.

See also: Walrath, R. (2011). Kohlberg’s Theory of Moral Development. In: Goldstein, S., Naglieri, J.A. (eds) Encyclopedia of Child Behavior and Development. Springer, Boston, MA. https://doi.org/10.1007/978-0-387-79061-9_1595

More from Jeffrey N Pickens Ph.D.