To be clear, this is not a warning about robots taking over the world and killing everyone who’s mean to them (although that meme is funny).
This is a long-winded story about why being mean to an AI can have a negative impact on you. Over the past few months, I've been working with several people who are interested in using large language models (LLMs) such as ChatGPT in their daily work. The tasks ranged from writing essays to drafting new code and debugging existing code.
For most people, their first interaction with an LLM is awe-inspiring. Interestingly, programmers who have no idea how an LLM works are often even more intrigued by it than people without a technical background.
Though imperfect, the experience of getting a computer to do things through natural language is extremely useful, and I’ve seen people incorporate the LLM into their daily work to get their jobs done faster and more efficiently.
But I have also seen a dark side to interacting with language models: on the surface, LLMs are very human-like and handle simple tasks well, which makes for a useful, almost human conversational experience.
However, when a task becomes more technical and requires back-and-forth with the model, things can get frustrating, because the model may misunderstand your intentions. I have seen several people use vulgar language and even yell at the model when they did not get the response they expected.
When you ask them why they behave this way, you usually get an answer along the lines of "It's just a stupid robot, it has no emotions." That's right. An LLM has no emotions. It has no sentience. Once trained, its parameters are fixed, and with each new chat session the model is a clean slate with no memory of past interactions (unless it has been given memory capabilities).
I’m not worried about the emotions of the models, but about the humans who normalize such behavior in their professional interactions. Unlike machine learning models, the human brain does not have separate training and inference stages. Every experience rewires the brain, and new habits are formed through repetition.
But what does this have to do with language models? LLMs are becoming more and more human-like in their responses. They have already passed the traditional text-based Turing test (though they are still a long way from replicating many aspects of human intelligence). We already assign them tasks that we would previously have done ourselves or handed to colleagues, and platforms already exist for adding AI agents to conversations so they can collaborate with humans. The role of LLMs and AI agents will only expand over time.
How long will it take for abusive behavior towards AI agents to spill over into human relationships? If humans and LLMs share the same chat application, these habits can easily carry over from conversations with bots to conversations with humans.
We've seen similar trends in the past: the pseudo-anonymity of the internet allowed people to behave in ways they never would face-to-face, and the habits formed online eventually seeped into offline life.
Here the direction is reversed: we start by dismissing the LLM as a non-sentient thing, which seems to license abusive behavior towards it, but in the process it is not the LLM we harm, it is ourselves.
I am all for tough criticism and honest feedback, but we also need a code of ethics in the workplace. This may sound funny or silly now, but I expect it will ring true within two years.
So the next time you find yourself frustrated with a language model, do yourself a favor and read the Prompt Engineering Manual — not to please your robot overlords, but for your own sake.