
Humanely Human

This new book by Justin Gregg examines anthropomorphism, our tendency to humanize our pets, inanimate objects, and even chatbots, and considers both its biological benefits and our psychological vulnerabilities.

Gregg highlights the evolutionary advantages of assuming that features such as eyes, movement, or language constitute evidence of human attributes. This tendency, which Gregg calls the Anthropo-Dial, is counterbalanced by another, the Humanity-Limiter, which reduces the chance of delusion.

These tendencies, Gregg acknowledges, can be exploited in propaganda and advertising, and they can also help explain the dehumanization at work in racism and other prejudices. Yet, especially when coupled with critical reflection, they can produce better relationships, fuller lives, and a more humane world.

Gregg effortlessly presents the debates and research surrounding anthropomorphism and offers a positive, productive perspective on it, one that can help us resist others' manipulations and avoid its worst uses. By impressively integrating quantitative and qualitative data, as well as primary and secondary sources, with a clear awareness of his audience, he produces an informed and accessible survey that frames his credible conclusions.

One of the more interesting moments is the connection to Pascal's wager, the argument for belief in God as an existential gamble. Gregg suggests that the case for anthropomorphizing is similar: humans are safer as a species if they assume that anything with eyes, independent movement, or language is human, or at least human-like. Such a wager also has other benefits, such as the better treatment of animals.

The consideration of AI, which appears near the end, is perhaps the most relevant and urgent part. Gregg argues that this human tendency allows AI companies to succeed by merely creating the impression of consciousness, a threshold he calls the Garland Test after Alex Garland's 2015 film Ex Machina.

The Garland Test suggests that AI poses an even greater risk to humanity, or at least a lower bar for AI companies to clear. At the same time, this insight seems somewhat diluted by the addition of parasocial relationships, those one-sided attachments to celebrities or fictional characters.

Gregg uses this link to explain why he apologized to an AI chatbot after chastising it for fabricating research, and in so doing illustrates how this human tendency can exact an emotional toll. Others, however, might question how interactions with AI could produce parasocial relationships at all if AI, as he suggests, only seems sentient, and wonder how the link furthers his concerns about anthropomorphizing AI, concerns he had already raised.

That, regardless, is a minor quibble with an undeniably effective book that changed the way I understand and think. It convinced me, for example, to be more tolerant of those who anthropomorphize (Gregg claims that lonelier people are more likely to anthropomorphize and that more social people are more likely to dehumanize) and showed me how to recognize when I might be doing so harmfully myself.

What more should, or could, I expect?
