Ethics of communicating with generative AI chatbots


AI chatbots like ChatGPT blur the line between human and machine, captivating minds and raising alarms at the same time. Swiftly integrated into search engines and programmes, these bots lack clear boundaries. Jeffrey Chan dissects the ethical challenges and environmental impacts of AI chatbots, and navigates the questions of appropriate AI use and the limits of language and wisdom.

[Teaser image: an artificial intelligence cyborg with a long nose]

How Chatbots Reshape Communication Ethics

At breakneck speed, generative AI systems are developing with few, if any, guardrails or guidelines. A recently proposed voluntary code of conduct is unlikely to be the game-changer that regulators need, while the European Union’s landmark AI Act, which has been amended to reflect the risks and responsibilities of generative AI, remains to be ratified. In this regulatory gap, chatbots powered by large language models (LLMs) – such as ChatGPT – warrant special attention. Already integrated into search engines and programmes, with countless other applications sure to follow, these chatbots, which are capable of generating human-like responses, will rapidly become routine in everyday life. In turn, they bring unprecedented ethical complexities to how we communicate with one another. In the foreseeable absence of an enforceable code of conduct, and given that generative AI development far outpaces regulatory foresight, what can be done, in the meantime, to mitigate – if not entirely obviate – the practical risks and ethical pitfalls of these chatbots? This article suggests that reflecting on the ethics of communicating with chatbots can help.

Consider first the existing ethics that regulate how people communicate with one another. This ethics may differ depending on one’s role or position in communication. If one is a professional – a journalist, writer, filmmaker, professor or politician – communicating ideas to a mass audience, then one has to comply with the ethics of mass communication: report only the truth, and verify the truthfulness of a message before communicating it to others; and if a message can traumatise another person, always minimise, if not completely avoid, that harm. If one is merely relaying a message to another person, then one takes special care not to distort the original message. And when engaging in subliminal meta-communication, one is bound by a special obligation of transparency to disclose any ‘nudge’ used – on the assumption that more transparent communication is ethically superior communication, especially when others depend on the message to make important decisions.

Generative AI chatbots can compound this general ethics of human communication. Three categories of ethical issues stand out.

First category: the carbon footprint of chatbots

First, chatbots consume non-negligible amounts of energy on top of all the present energy-guzzling technologies that mediate communication today. One report estimates that just to train GPT-3, carbon emissions equivalent to 123 gasoline-powered cars driven for one year were generated. Another report estimates that the specialised graphics processing unit (GPU) chips for generative AI shipped in 2022 could consume about 9,500 gigawatt-hours of electricity, comparable to Google’s total energy consumption in 2018. And this was before the world had access to the expanding range of generative AI tools. Profligate use, at the global scale, will surely spell trouble for the environment, even with a foreseeable mix of renewable energy sources in tow. What, then, can guide our judicious use of these chatbots? Might a helpful proxy come in the form of an ethical calculus that weighs the value of using these chatbots against the cost? If such a calculus proves too distant – we do not even have one for the more measurable units of cars and planes at present – then at the very least, the energy consumed in the chatbots’ training and inference should be calculated and made transparent to their users. Following the logic of a smart meter, increasing users’ awareness of the real-time energy consumption of chatbots may help them reduce unnecessary use.
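To make the smart-meter idea concrete, here is a minimal sketch of what a per-session energy tally might look like. The per-token energy figures and grid carbon intensity below are hypothetical placeholders, not measured values; real numbers would depend on the model, the hardware and the data centre involved.

# A minimal sketch of a chatbot 'smart meter' (all constants are hypothetical
# placeholders, not measured values).
WATT_HOURS_PER_INPUT_TOKEN = 0.001   # assumed energy cost per prompt token
WATT_HOURS_PER_OUTPUT_TOKEN = 0.003  # assumed energy cost per generated token
GRAMS_CO2_PER_WATT_HOUR = 0.4        # assumed grid carbon intensity

class ChatEnergyMeter:
    """Accumulates an estimated energy and carbon tally for one chat session."""

    def __init__(self) -> None:
        self.watt_hours = 0.0

    def record_exchange(self, input_tokens: int, output_tokens: int) -> None:
        self.watt_hours += (input_tokens * WATT_HOURS_PER_INPUT_TOKEN
                            + output_tokens * WATT_HOURS_PER_OUTPUT_TOKEN)

    def report(self) -> str:
        grams_co2 = self.watt_hours * GRAMS_CO2_PER_WATT_HOUR
        return f"Session so far: ~{self.watt_hours:.2f} Wh, ~{grams_co2:.2f} g CO2e"

# Example: a user sends three prompts in one session.
meter = ChatEnergyMeter()
for prompt_tokens, reply_tokens in [(120, 400), (60, 250), (200, 800)]:
    meter.record_exchange(prompt_tokens, reply_tokens)
    print(meter.report())

Displayed alongside each reply, such a running tally would work much like a household smart meter: it does not forbid use, but makes the cost of use visible at the moment of decision.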

Second category: the appropriate use of chatbots

Second, chatbots can disrupt the social norms of communication. What counts as appropriate or ethical use of these chatbots? In one regrettable instance, Vanderbilt University released a letter of consolation to its students in response to a shooting at Michigan State University. The letter was later discovered to have been generated by a chatbot, which led to a backlash and forced Vanderbilt University to apologise. Where actual human support, presence and sympathy are expected, communication should never be delegated to a chatbot.

Yet determining where human support, presence or sympathy should be expected in communication is not always straightforward. Consider this case: is it appropriate to use a chatbot to generate a legal letter contesting a parking violation? In this case, the defendant was in fact innocent. The chatbot was used strictly to organise the nuanced facts of the case, which persuaded the judge to dismiss the parking summons. Given that the chatbot was merely making a factual argument on behalf of the defendant, its use appears appropriate, even though the defendant was still counting on the prospect of a sympathetic judge.

In contrast, what if another defendant is guilty but nevertheless wishes to express his or her sincere contrition in a legal letter to the same judge, hoping to be pardoned for an honest parking blunder? Unlike the former instance, this scenario carries the added weight of conveying remorse. A chatbot is able to generate words of remorse. But tasking a chatbot to generate words of remorse is not the same as struggling to find words that can express remorse, and then accepting the vulnerability of being subjected to the judge’s discretion. Delegating contrition to a chatbot not only risks giving the impression of relying on expedient means for a task that ought to fall under one’s direct responsibility, but also mistakes effective persuasion for sincere contrition. If this scenario suggests that even subtle variations in the nature of communication can render a chatbot inappropriate, then far more deliberation is required about when one should or should not use chatbots in communication today.

Third category: the limits of language when prompting a chatbot

Third, chatbots today rely on natural language processing without, however, natural language understanding. To communicate effectively with a chatbot, one must prompt with specificity, detail and context in mind. Yet it is impossible to be so specific, detailed or contextual that a prompt covers every condition necessary for a chatbot’s output to be deemed ethical.

Consider this allegory from computer scientist Stuart Russell in his Reith Lectures. People ask a very powerful AI for a solution to deacidify the oceans, with the specific requirements that there be absolutely no toxic by-products and absolutely no harm to fish. The AI implements a solution that does exactly that, but which uses up a quarter of all the oxygen on Earth, condemning every living organism to a slow and painful death. Earlier, Norbert Wiener (1894–1964) made a similar point using the chilling story of ‘The Monkey’s Paw’ (1902) by W. W. Jacobs. To retell this story in brief: a magical wish-granting monkey’s paw comes into the possession of an elderly couple. They first wish for a sum of 200 British pounds; the money arrives, but as insurance compensation for their son’s fatal workplace accident. In anguish, the mother wishes for her son to return. The son does return, but, the story implies, as a ghost. In a final act of desperation, the couple wish their ghostly son away, and he is gone.

These stories illustrate the dangers of a powerful ‘wishing well’ – and the contemporary generative AI chatbot is the beginning of a very potent one. Wiener was correct to point out that the danger of a powerful wish-granting technology lies precisely in its singular literal-mindedness: if it grants one anything at all, it grants only what one asks for, not what one should have asked for or what one intends. Is it possible that we will become so wise as to ask for exactly all the conditions necessary for an ethical output? Recall a famous line by philosopher Ludwig Wittgenstein (1889–1951): ‘the limits of my language mean the limits of my world’. Its meaning can be preserved for prompting by rephrasing it as ‘the limits of my language in prompting also limit my moral horizon’. An individual, no matter how morally conscientious or keenly intelligent, will always leave out important details, conditions and contexts when prompting – if not because of linguistic limitations, then because of ignorance, or because of a contingent world where bad consequences can follow from good intentions. As a fictional character in J.R.R. Tolkien’s enduring work once remarked: ‘For even the very wise cannot see all ends’. If this recognition marks the human limit of language and wisdom when communicating with powerful chatbots in the near future, then it can only mean one of two things: either the upper limits of their power must be severely curtailed ex ante – that is, we must never create chatbots that can actuate the irreversible consequences seen in Russell’s hypothetical example – or prompts with serious public consequences must always be monitored by a democratically elected group, specifically appointed to identify their blind spots and pitfalls.
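The literal-mindedness that Wiener warned of can be made concrete with a toy sketch, loosely modelled on Russell’s allegory (the plans, numbers and constraints below are invented for illustration): an optimiser that dutifully satisfies every stated constraint will still select a catastrophic plan whose harms were simply never written down.

# Toy illustration of a literal-minded optimiser (all data hypothetical).
# Each candidate plan for deacidifying the oceans lists its measurable effects.
plans = [
    {"name": "slow alkaline seeding", "deacidified": 0.6,
     "toxic_byproduct": False, "harms_fish": False, "oxygen_used": 0.01},
    {"name": "aggressive chemical catalysis", "deacidified": 1.0,
     "toxic_byproduct": False, "harms_fish": False, "oxygen_used": 0.25},
    {"name": "cheap industrial dumping", "deacidified": 1.0,
     "toxic_byproduct": True, "harms_fish": True, "oxygen_used": 0.0},
]

# The stated constraints: no toxic by-products, no harm to fish.
# Nothing was said about atmospheric oxygen.
def satisfies_stated_constraints(plan):
    return not plan["toxic_byproduct"] and not plan["harms_fish"]

# The optimiser maximises deacidification subject only to what was asked for.
best = max((p for p in plans if satisfies_stated_constraints(p)),
           key=lambda p: p["deacidified"])

print(best["name"])         # "aggressive chemical catalysis"
print(best["oxygen_used"])  # 0.25 -- a quarter of Earth's oxygen, never mentioned

The optimiser is not malicious; it simply cannot weigh a constraint that was never articulated. Only the unstated condition – oxygen – separates the acceptable plan from the catastrophic one.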

Conclusion: the cultivation of virtues and institutions

These three categories mark the beginning of an ethics of communicating with generative AI chatbots. All three converge on an important pivot: our trouble with chatbots today resides not so much in the chatbots themselves as in us – in our tendency towards profligate technological use, our lack of a sense of fit or appropriateness when using generative AI, and the natural limits of our language and wisdom. Ongoing efforts to build ever more robust technical guardrails into chatbots are necessary. But they are insufficient so long as the cultivation of human virtues and institutions for flourishing alongside powerful AI remains woefully nascent. As we delve deeper into the kernel of even more powerful machine intelligence, may we also discover ever greater room for advancing our humanity.

This article is part of the essay collection ‘Living with Machines: Communications and Gender in AI and Robotics’ published by Heinrich Böll Stiftung Hong Kong in Aug–Sep 2023, as a post-conference publishing project of the 6th International Conference on Artificial Intelligence Humanities (ICAIH) 2023, hosted by the Humanities Research Institute at Chung-Ang University.