Notice any changes?
My apologies for all that mess that’s on the front page today (if you’re reading this much later, it might be gone by now). I’m going to have to gradually clean things up, but I can only do one thing at a time. Today I changed the theme and fixed my main page header image, as well as making the links at the top go to dummy pages. I’m afraid there’s still a bunch of sample stuff in there that I need to replace with my own material, but we’ll do that another time.
+ + +
WordPress has a troubleshooting bot that it calls “your personal AI assistant.” (It also has human helpers, which I definitely unlocked during the migration process. And which were very helpful.)
Today I didn’t need help from any humans. I started working on the appearance of the site. I began with no experience in WordPress’s specific user interface, but I do have experience in desktop publishing of various kinds, some of it fairly outdated: I’ve coded in HTML, I’ve of course used Typepad for blogging for years, and by the way I wrote my PhD thesis in LaTeX, which will give anyone training in troubleshooting issues as they come up.
In the pre-AI-chatbot world, I would learn the new UI on a need-to-know basis, by searching for the topic I wanted to learn in the help files, finding the right instruction documents, and following the steps outlined. I’m comfortable doing things this way. It’s what I typically do when I have trouble making my iPhone or iMac do what I want it to do. It feels like “figuring it out for myself.”
If I happened to have a physical book that served as a manual or tutorial, I might skim through it looking for a general outline of the basics, then consult the index for more specific topics. This is essentially how I learned LaTeX, with Leslie Lamport’s introductory text. I was maybe even more comfortable with this kind of learning. (And I still love LaTeX, even if I don’t have call to use it very often anymore.)
Of course, if I exhausted the help pages or the index, I might seek out help from a human being. I have, for instance, entered support chats with Apple or with Amazon, where I had a conversation with a human being trained to help people like me who couldn’t find the answers in the help pages via the search boxes.
Now I have access to a chatbot, and using it may seem very similar to those conversations. Instead of typing search terms in a box or looking up topics in an index, I can type a question in natural language, and information (or follow-up requests for more specific questions) comes back to me in the form of natural language.
I could write very straightforward and simple questions. I am aware that there is no human being reacting to what I say. But I find that if I write the questions very simply, it feels less efficient. I have a sense that in order to give the chatbot enough information in one bite, I would do better to just go all in and write as if I were interacting with a human.

The question is, is this an illusion or is it correct? And if it is an illusion, is it worth doing anyway? Writing as if I were talking to a human.
+ + +
So I don’t have a lot of extra time to play with the chatbot or experiment with it. I’m not particularly entertained by trying out ChatGPT and similar programs just to see what they can do. I do not have a job in which I am expected to use LLMs or any such tools. I’m rather irritated by the rollout of AI features in search engines and other places where I didn’t ask for it, especially where it frequently returns erroneous information.
But I do see “advanced form of troubleshooting manual” as a suitable use for this kind of program. Interacting with a help database by querying in natural language, and receiving results in natural language, makes it easier to use most of the time (at least if you read and write its language)! And if the chatbot is programmed to escalate to a human expert once the queries get beyond its capabilities, there are few downsides.
In fact, if we stipulate that I will receive the correct answers (a big and important if), I confess that I would rather deal with a troubleshooting chatbot than with a human support assistant. I think there are three reasons for this:
- It is related to the fact that I would rather order a pizza online than talk on the phone, which is a purely social preference that I happen to have and don’t have a good explanation for.
- Knowing that there is no human being on the other end, I still feel like I am “figuring things out for myself,” which is a kind of learning that I associate with key parts of my identity.
- Once I gain some experience of the kind of query language that makes the chatbot issue useful information efficiently, the chatbot becomes more predictable than a human troubleshooter. If I contact the chatbot tomorrow, it will behave very similarly to how it behaved today. If I contact a human troubleshooting line tomorrow, I will be connected to a different human who may behave differently from the one I connected to today.
+ + +
So now let’s consider whether I should or shouldn’t converse with a chatbot in a tone similar to what I would use with a human being.
One practical consideration, which rests on a hypothesis that I haven’t tested, is that the tone and word choices that I use might actually give the chatbot relevant information that may usefully change its replies. This might not even be information that I am consciously aware of giving!
So, for example, if a user writes queries in a clipped, hostile tone and includes language that expresses frustration and even anger at having to deal with difficult features of whatever software they’re trying to use, perhaps the chatbot’s programming will cause it to use language that most people would read as soothing and helpful, which might de-escalate the emotional content that the user perceives. Or perhaps it will cause the chatbot to escalate to human intervention sooner than it otherwise would. If I were designing a chatbot, I think I’d want it to be able to recognize a distinction between, say, a cheerful and open-minded user who is interested in learning lots of details, and a hurried, frustrated user who just wants to get to the point and solve the problem at hand.
I do not actually know whether the chatbot I used today has such a capability. But it does strike me as a desirable quality, which means that if chatbot programmers haven’t produced it yet, they might in the future.
I was nodding towards the possibility of that kind of practical consideration when I added in my introductory query that I had had “wonderful” help from the AI assistant before and also from “pretty great” human tech support (don’t read anything into the “happiness engineer” thing; it’s WordPress’s job title for the tech support people, and I just wanted to make sure it correctly parsed who I was talking about). Using those adjectives felt right while I was typing the query, and the reason it felt right was that I believed it was likely an efficient way to get across the background information: that my migration had gone well, albeit with significant support. I would have to type out a lot more words if I wanted to do it without cheerful adjectives.
Now, what about non-practical considerations? Ought I write politely to a useful chatbot? Is purely neutral and utilitarian language morally better? Or, taking it to the other extreme, is it acceptable to curse and revile the chatbot if I get frustrated with it, or just for fun?
+ + +
Let’s take the anti-politeness argument first. I think the strongest argument for refusing to engage politely and conversationally with a chatbot, eschewing little “thank yous” and “hellos” and the like, is that we run the risk of forgetting that the chatbot is a bot. What we treat as human, we begin to believe is human; what we treat as nonhuman, we begin to believe is nonhuman. This is just something that we observe happens in human nature. Maybe not everyone is susceptible to it; I feel like I am not much in danger from it, but I could be deluding myself. We are certainly seeing news reports that some people, on using an LLM, develop attachments to it and bizarre beliefs that it is sentient. I guess humans in general just have a really strong tendency to see patterns and draw conclusions without passing them through the reason-and-logic part of our brains. So, talking to a chatbot as if it is human does carry the danger that we will start to believe some very incorrect and possibly dangerous things about it. We might, for instance, trust it.
But now I’m going to give a pro-politeness (or at least an anti-cursing-and-reviling) argument that is founded in the same sort of pattern-matching. How we behave is who we become. If I interact with chatbots frequently enough, and if I have developed the habit of cursing the chatbot, calling it names, responding to its questions with language that would be considered “rude” and “dehumanizing” if I were dealing with a human… I might actually be training myself to behave in such a way whenever I am in any sort of context that has the same sort of feel to it. I might be training myself to behave that way even to human beings in human-human troubleshooting chat sessions, since the chatbots are programmed to emulate just that sort of session. I might be training myself to behave that way in texts, even to friends! I might be strengthening the neural pathways that choose rude language instead of the neural pathways that choose kind and polite language. Basically, I run the risk of programming myself to be a worse human being.
So what about a neutral, cold tone? This might seem to be the best choice, or at least a harmless one, but honestly I think it circles back to being a special case of the anti-cursing-and-reviling argument, since many human beings feel distinctly uncomfortable when they are spoken to in a neutral, cold tone. I’m afraid that if “training yourself to speak in a way that makes other humans feel bad” is a big problem, then writing neutrally and coldly on purpose (as opposed to, well, just being a naturally detached sort of person) is still a real problem.
Whether either of those latter concerns outweighs the danger of “speaking conversationally to the chatbot risks you thinking that the chatbot is sentient and trusting it to the point that you do its harmful bidding,” I’m not sure. It may depend on the mental wellness of the user in a way that’s not terribly predictable.
+ + +
One thing that we can and should do, though, I’m convinced: we should be careful in how we talk and write about the chatbots and other LLMs to other humans. I argue that we should do our best to default to language that comes from well-established contexts of interacting with computers, databases, and other machines, rather than language that is generally used in human-to-human contexts.
For example, I strongly prefer using the terms “query” or “prompt” rather than “ask” or “tell.” We have been “querying” databases for a long time. “Prompt” is a fairly new usage; we do use it for humans, but almost exclusively in describing an interaction where there is a power-and-authority differential, such as an adult prompting a child with “Say please.”
The least we can do is speak honestly and not misleadingly to other humans when we talk about what these bots are useful for.