


Concerns Over AI: Moral Panic or Mindful Caution?

As AIs develop, we need to moderate our expectations of their impact.

Key points

  • As AI becomes increasingly accessible, people will see an inevitable cycle of concerns and misunderstandings.
  • Many discussions conflate generative AI with sentient forms of AI.
  • Even in news media, AI has been used for more than a decade.
  • People should be concerned about AI and other emerging technologies, but they should not conflate panic with progress.

With increasingly accessible tools bringing artificial intelligence (AI) platforms to the masses, an inevitable cycle of hype, fear, and misunderstanding of AI stands to outpace bona fide developments in the technologies themselves. For example, with a basic internet connection and a mobile device, people can generate text and content far beyond their professional skill sets (generative AI), posing a de facto threat to creative industries (De Cremer et al., 2023).

Discussions of AI tend to conjure thoughts of sentient humanoids, regardless of the technology's capabilities. Source: Andrea De Santis / Unsplash

Concerns over disruptions are necessary, especially if we accept that technological developments are disruptive by definition—as we develop tools to assist with and automate otherwise “human” processes, those tools challenge and displace established norms and routines (Chandler, 1995). At the same time, rash or unreasoned panics can overstate concerns while overlooking the benefits of technological advancement, and the process is made even more complex when we have an imprecise understanding of the emerging technology itself.

An excerpt from the Brookings Institution explains (West, 2018):

The lack of clarity around the term enables technology pessimists to warn AI will conquer humans, suppress individual freedom, and destroy personal privacy through a digital ‘1984.’

AI isn't as new as you think

If we look to the not-too-distant past, before generative AI had entered the lexicon, computer-assisted journalism capable of generating original text was introduced in sports reporting as early as 2010, with the Associated Press using AI for financial reporting a few years later (NPR, 2010; Peiser, 2019). Thus, while generative AI is still marked as an emerging technology by many, such technologies have existed in media industries for quite some time.

AI concerns and panics

Concerns over AI are relevant and needed, as it is important to distinguish problems ex nihilo (completely novel problems) from problems de novo (modern versions of old problems; Verdoux, 2009). That said, panics around generative AI can be explained by at least two processes common to media coverage of technologies: moral panic cycles and exemplification theory (Zillmann, 1999). Moral panics are understood as scenarios in which a specified entity “emerges to become defined as a threat to societal values and interests” (Cohen, 1973). They often take the form of media coverage in which a given act is framed as deviant.

For generative AI, this can be found in sensationalist coverage that often conflates various types of AI (a quick Google News search will generate several such headlines), such as Microsoft’s Bing AI chatbot professing its love for its operator, a perception held by the user but nonetheless impossible for a generative AI that lacks self-awareness (Roose, 2023; Xiang, 2023). The moral panic around (generative) AI that frames the technology as sentient does a disservice because it misrepresents the technologies in ways that further entrench those misrepresentations. In turn, these misrepresentations serve as concrete and readily accessible exemplars for media audiences: two conditions that Zillmann (1999) argues make them especially influential in shaping broader public perceptions of AI. Coverage of AI in international newspapers has become more prevalent and more critical in the last decade (Nguyen & Hekman, 2022). Conversely, when media audiences recall more nuanced portrayals of intelligences (such as those in social robots), they feel less anxiety and may develop subconscious orientations toward acceptance (Sundar et al., 2016; Banks, 2020).

It would be unwise to suggest that there are no entities interested in creating sentient and self-aware AI. That said, critiques of generative AI that rely on claims of sentience are misguided and logically flawed: they represent a classic “red herring” argument, distracting us from meaningful concerns that we can and should have about the future.

AI as a human supplement

At our recent Newhouse Summit 2023, the program featured discussions of several different tools aimed at improving productivity and unlocking the distinctly human potential of their end users. Common to these presentations was the explicitly supplemental role of the tools being discussed, with consistent reminders that in the generative AI space, human decision-making must be at the center of the final action, implementation, or deliverable. This was especially evident in revelations and reminders about the inherent biases embedded in AI training data (Srinivasan & Chander, 2021). In a sense, we are reminded of calls from computer scientists who lament that the term “intelligence” is being used so loosely (Pretz, 2021):

People are getting confused about the meaning of AI in discussions of technology trends—that there is some kind of intelligent thought in computers that is responsible for the progress and which is competing with humans… We don’t have that, but people are talking as if we do.

Moving forward, we need to recalibrate our discussions of and expectations for generative AI systems. They might well produce content that “seems almost human,” but this should not be confused with the interpretation that the systems are (a) intelligent ones that are (b) capable of replacing our own sentience (as I wrote about in an earlier post for Psychology Today). Generative AI models represent bleeding-edge technologies able to sift through patterns of our own human content and produce probabilistic inferences from that content (Bender et al., 2021). They reflect our own content back on us, and in the spirit of many other communication technologies, we likely learn more about ourselves, our processes, and our content in the end. As suggested by Bogost (2022), such programs are dumber than we think, and it could be that we just “expect so little from text.”
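To make the idea of “probabilistic inference from patterns of content” concrete, here is a deliberately tiny sketch of my own (not drawn from any of the cited works, and vastly simpler than a real language model): a toy bigram model that only counts which words tend to follow which in a sample of text and then echoes those patterns back, with no understanding anywhere in the process.

```python
import random
from collections import defaultdict

# Toy "training data" standing in for the patterns of human-written text
# that a generative model learns from.
corpus = (
    "we write the words and the model learns the patterns "
    "and the model reflects the words back to us"
).split()

# Count which words tend to follow which (a bigram model): the simplest
# possible version of probabilistic inference from patterns of content.
followers = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word].append(next_word)

def continue_text(start: str, length: int = 8) -> str:
    """Extend a prompt by repeatedly sampling a statistically likely next word."""
    words = [start]
    for _ in range(length):
        options = followers.get(words[-1])
        if not options:  # no observed continuation; stop
            break
        words.append(random.choice(options))  # sample in proportion to observed counts
    return " ".join(words)

print(continue_text("the"))  # e.g., "the model reflects the words back to us"
```

Even this toy version illustrates the point above: the output can look fluent only because it mirrors the human-written text it was given.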

What we do with those lessons will be decidedly up to us.

A version of this post also appears in the Newhouse Impact Journal.

References

Banks, J. (2020). Optimus primed: Media cultivation of robot mental models and social judgments. Frontiers in Robotics and AI, 7. https://doi.org/10.3389/frobt.2020.00062

Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610–623). Association for Computing Machinery. https://doi.org/10.1145/3442188.3445922

Bogost, I. (2022, December 7). ChatGPT is dumber than you think. The Atlantic. https://www.theatlantic.com/technology/archive/2022/12/chatgpt-openai-artificial-intelligence-writing-ethics/672386/

Chandler, D. (1995). Technological or media determinism. Retrieved August 4, 2023, from http://visual-memory.co.uk/daniel//Documents/tecdet/tdet11.html

Cohen, S. (1973). Folk devils and moral panics: The creation of the Mods and Rockers. Paladin.

De Cremer, D., Bianzino, N. M., & Falk, B. (2023, April 13). How generative AI could disrupt creative work. Harvard Business Review. https://hbr.org/2023/04/how-generative-ai-could-disrupt-creative-work

National Public Radio. (2010, January 10). Program creates computer-generated sports stories. All Things Considered. https://www.npr.org/templates/story/story.php?storyId=122424166

Nguyen, D., & Hekman, E. (2022). The news framing of artificial intelligence: A critical exploration of how media discourses make sense of automation. AI & Society. https://doi.org/10.1007/s00146-022-01511-1

Ongweso, E. (2022, December 16). Everybody please calm down about ChatGPT. Vice. https://www.vice.com/en/article/bvmk9m/everybody-please-calm-down-about-chatgpt

Peiser, J. (2019, February 5). The rise of the robot reporter. The New York Times. https://www.nytimes.com/2019/02/05/business/media/artificial-intelligence-journalism-robots.html

Pretz, K. (2021, March 31). Stop calling everything AI, machine-learning pioneer says. IEEE Spectrum. https://spectrum.ieee.org/stop-calling-everything-ai-machinelearning-pioneer-says

Roose, K. (2023, February 17). A conversation with Bing’s chatbot left me deeply unsettled. The New York Times. https://www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft-chatgpt.html

Srinivasan, R., & Chander, A. (2021). Biases in AI systems. Communications of the ACM, 64(8), 44–49. https://dl.acm.org/doi/10.1145/3464903

Sundar, S. S., Waddell, T. F., & Jung, E. H. (2016). The Hollywood robot syndrome: Media effects on older adults’ attitudes toward robots and adoption intentions. In 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (pp. 343–350). https://doi.org/10.1109/HRI.2016.7451771

Verdoux, P. (2009). Transhumanism, progress and the future. Journal of Evolution & Technology, 20(2), 49–69. https://jetpress.org/v20/verdoux.htm

West, D. M. (2018, October 4). Research: What is artificial intelligence? Brookings Institution. https://www.brookings.edu/articles/what-is-artificial-intelligence/

Xiang, C. (2023, February 16). Bing is not sentient, does not have feelings, is not alive, and does not want to be alive. Vice. https://www.vice.com/en/article/k7bmmx/bing-ai-chatbot-meltdown-sentience

Zillmann, D. (1999). Exemplification theory: Judging the whole by some of its parts. Media Psychology, 1(1), 69–94. https://doi.org/10.1207/s1532785xmep0101_5
