After I graduated college, I had the good fortune of joining an R&D team at Intermec, where I was surrounded by incredibly talented engineers.

I was constantly learning from the many senior people in our department, observing and supporting their work on some incredibly impressive things.[1] What made them really unique was that they were all willing to teach. They were generous with their time and patient with their explanations, which made them invaluable to my personal growth.

But this isn’t about my time at Intermec. It is about a phrase one of the Principal Engineers on our team would often say in moments of frustration:

“Do what I want, not what I said.”

This was never aimed at a person on our team; his quip was always directed at the computer in front of him. He was badgering the compiler and code he had just written, or the command he’d just typed in a terminal window.

His frustration was that what he had in mind didn’t match what the computer actually did. He knew the computer was merely executing the request he’d provided - and that the real issue was that he’d made a mistake in communicating his desired outcome. He would have to change the structure or syntax of his code or command and try again in order to achieve his goal.

Computers always do what they’re told, right?

Command Line Tools

This post from Julia Evans showed up in my timeline this afternoon, and got me thinking about that Principal Engineer.

Julia is right - the output from dig is particularly arcane, at least in terms of being able to easily integrate with other tools.

But command line tools have a tendency to be esoteric. One of my personal favorite mistakes is usage of service on Linux:

~$ sudo service restart nginx
[sudo] password for thaddeus:
restart: unrecognized service

The astute reader will recognize that I’ve flipped the service_name and command parameters around in that command - the correct invocation is sudo service nginx restart. However… my intent is pretty clear.

I have absolutely muttered the words, “oh, just do what I want,” while using command line tools.

Does GPT Understand?

Well, it turns out there are some computers that appear to understand what I’m trying to do.

I asked ChatGPT the following question:

What am I trying to do when I type “sudo service restart ssh”

Here was the response:

When you type “sudo service restart ssh,” you are attempting to restart the SSH (Secure Shell) service on your computer.

Let’s break down the command:

  • “sudo”: It stands for “superuser do” and is used to run the command with administrative privileges. By using “sudo,” you elevate your user permissions to execute the following command as the superuser or administrator.

  • “service”: It is a command used in Unix-based systems to manage system services. It allows you to start, stop, restart, enable, disable, and manage various services running on your computer.

  • “restart”: It is an argument passed to the “service” command, indicating that you want to restart the specified service.

  • “ssh”: It refers to the SSH service, which is responsible for secure remote access and communication with your computer using the SSH protocol.

Therefore, the complete command “sudo service restart ssh” is used to restart the SSH service, which may be useful if you need to apply changes to the SSH configuration or if you’re experiencing issues with SSH and want to restart it to resolve them.

Hilariously, it doesn’t correct the mistake I’ve made, but it does correctly explain what I was attempting to do. Other incantations of similar mistakes produce similar results: ChatGPT can produce the correct syntax for my desired outcome pretty consistently.


It’s Not Understanding

If there were actual understanding, it would have also corrected my mistake.

To me, this is a really interesting application of GPT technology. Fundamentally, GPT, like other LLMs, isn’t actually understanding, but instead predicting what you want to hear.

Today’s “AI” is like the friend who lies to you and says exactly what you want to hear. It will happily tell you those shoes look great, or that you were right… even if you were wrong.

However, sometimes what you want is actually the right outcome. Here’s another take on the service restart request with ChatGPT:

Give me a bash command to restart nginx. Return it without description or other information so I can pipe it to /bin/sh


sudo service nginx restart

Dang. That’s pretty cool. It did a good job of predicting what I wanted, and gave me exactly what I needed in a format that’s usable in my described context.
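The pipe-to-/bin/sh setup that prompt describes can be sketched roughly as below. Here `echo` stands in for the model’s one-line reply, since the particular LLM client used to fetch it isn’t part of the original post - the point is only that a bare, description-free answer is something a shell can execute directly:

```shell
# A single-line answer on stdout, executed verbatim by /bin/sh.
# `echo` simulates the model's reply; in practice you'd substitute
# whatever CLI wrapper prints the model's raw response.
echo 'printf "nginx restarted (simulated)\n"' | /bin/sh
# prints: nginx restarted (simulated)
```

This is also why the “no description or other information” instruction matters: any extra prose in the reply would be interpreted by the shell as commands.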

Maybe it can do what I want, exactly the way I say it.

  1. I got my first exposure to neural networks at Intermec, in what would become foundational technology in a patent I helped author. The models required days of training, but ran on embedded hardware with a fraction of the computing capabilities of modern cell phones. ↩︎