Talking to the Ghost in the Machine: How to Interface with ChatGPT

An interface allows two or more separate components of a system to exchange information. Communication requires an interface. I hope to illustrate the lessons I've learned while interfacing with AI.

  • The Newbie Zone, or Chronic Googler Tries AI
  • Assist the Stuff You Dread or Something New
  • Conversational Benefits
  • Key Insight: Context Front-loading

I think I have some pretty strong Google-fu. I've been searching the internet for decades for video game cheat codes, obscure programming errors, ytmnd memes, and everything in between. Knowing how to use an index is crucial for finding what you're looking for in a large dataset, whether that's a textbook or the internet. Google has been the index tool for a long time. They own the verb 'to search,' and that's still the case, for now. I'll Skype you later about that.

ChatGPT and other Large Language Models (LLMs) are actively carving out their own use cases in computing. I think I'm starting to see more of the metaphorical elephant, or eldritch horror if you insist. This is kind of crazy, but it's just math. It's not human, yet it's a tool that can interface much like another human over instant messenger, only it's cracked, as the kids once said.

My first chats with ChatGPT were short one-liners, closer to Googling than conversation. This was me dipping a toe in; I wanted to see what the LLM would produce for basic questions. Here's a sample of questions I asked:

  • What is MiSTerFPGA?
  • In Python, how can I verify items in a class's dictionary?
  • What could humans do about removing microplastics from our environment?
  • How do I make a homemade custard based ice cream?
  • what is knolling

Colin Robinson knows all about knolling - What We Do In the Shadows

I didn't think it was "all that and a bag of chips," but a compelling idea changed my mind: use ChatGPT to rewrite your annual performance self-review. Totally not my idea originally, but I don't remember where I got it. I am fine at getting things done, but mediocre at remembering what I've accomplished without a paper trail. I usually try to keep a brag doc for review season, and I used mine as the basis for my first draft. I sent the draft to ChatGPT, and it rewrote it in prototypical AI style, better than I would have. I followed up with:

Okay ChatGPT, I wrote more noteworthy things of this year. Could you rewrite the following to fit into a narrative that meets the company's values?
The values are:
1) Employees come first
2) Win Today, Win Tomorrow
3) Obsessed with customers
4) Get after it
5) Thirst for growth
6) Succeed together
7) Results matter
8) Lead the way
9) Know more to be more
10) Give Back

Please rewrite the following: [redacted self-review draft]

I don't think SPS Commerce minds the world knowing their values. ChatGPT knocked it out of the park, tying appropriate value statements to items in my self-review. This is a relief. I have dreaded this corporate rigmarole in the past, and I don't think I'll dread it ever again. That was maybe the tipping point for me. I wasn't much better at using it yet, but I saw the potential value in using it more.

My use didn't change much for a while, but the frequency went up considerably. ChatGPT was starting to beat Google in usefulness for exploring Kubernetes and OpenTelemetry information. I used it to automatically format semi-structured notes from a conference and to give me information on all of the sponsors of the event. Things I hadn't thought about doing before were now faster and therefore viable.


The next big breakthrough came when I started hosting a Counter-Strike 1.6 server.

Counter-Strike 1.6
I have a home server that I'd like to host a Counter Strike 1.6 server on for LAN play. What are some of the high level ideas for doing this? I've never hosted a CS1.6 server before, but I have hosted some other game servers in the past, and you know my background with SRE. Would it be overkill to try to run multiple servers on one host with some sort of Kind cluster? Idk, what do you think?

The initial response was solid. It laid out the options, addressed the pros and cons of each with respect to my goal of potentially running multiple servers on a single host, and gave recommendations for getting things working. I decided to start with something pretty close to the docker-compose.yml it spit at me. It also generated a server.cfg to use, along with a mapcycle.txt, which is nice for deleting and replacing with de_dust2. Great, but the deploy failed. I asked for help:

I tried the 2 IP addresses idea and am getting this error on trying to bring up the docker compose: Error response from daemon: driver failed programming external connectivity on endpoint cs_office_server (674541ab1a1095fa877caa9c16835c004c1b47d4847f09eb1a022c165c54679e): Bind for 0.0.0.0:27015 failed: port is already allocated

My goal was to host two Counter-Strike servers on a single PC using multiple containers. The networking is possible, but awkward. What I was getting out of this chat was so much more valuable than anything I had gotten from previous interactions. I was starting to ask more specific, probing questions while sharing what I had done so far.
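For illustration, here is a minimal sketch of the kind of compose file that avoids the port clash in that error. The image name, volume paths, and maps are placeholders I'm assuming, not what ChatGPT actually generated; the relevant detail is that each container publishes a distinct host port instead of both trying to bind 27015.

  # Minimal sketch, assuming a generic HLDS-based CS 1.6 image; names and
  # paths are placeholders. The key fix: each service publishes a different
  # host port so they don't both try to bind 0.0.0.0:27015.
  services:
    cs_dust2_server:
      image: your-hlds-image                  # placeholder image name
      command: ["+map", "de_dust2", "+maxplayers", "12"]
      ports:
        - "27015:27015/udp"                   # first server keeps the default port
      volumes:
        - ./dust2/server.cfg:/hlds/cstrike/server.cfg
        - ./dust2/mapcycle.txt:/hlds/cstrike/mapcycle.txt
    cs_office_server:
      image: your-hlds-image
      command: ["+map", "cs_office", "+maxplayers", "12"]
      ports:
        - "27016:27015/udp"                   # second server remapped to host port 27016
      volumes:
        - ./office/server.cfg:/hlds/cstrike/server.cfg
        - ./office/mapcycle.txt:/hlds/cstrike/mapcycle.txt

LAN clients would then direct-connect to the host IP on 27015 and 27016. Depending on the image, giving each server its own internal port via +port (and publishing it one-to-one) may play nicer with the in-game server browser than remapping.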

This back-and-forth is conversational. Sometimes I'm writing sentences, sometimes I'm dumping logs. It's all context for the LLM, the tool. Unlike my initial uses of ChatGPT, the conversational feedback loop is noticeably stronger because it leverages previously added context in the same thread. It turns out the more context, the better. I did eventually get my CS server up, so I can run one whenever. I don't have a real need for the server; it was a whim.

That's a long story, Chris. Who cares about your whim? Well, stories have lessons, and this one's lesson is meta. Call it "context front-loading," and it is the last and possibly most important takeaway I have from talking to the ghost in the machine. When I use any LLM tool these days, I tell it the most relevant information up front, then refine with a conversation of sorts. I'm not the only one who feels this is a top-tier use of the interface, by the way. Chain-of-Thought prompting demonstrates the value of conversational approaches, and Context-faithful Prompting validates the benefits of context front-loading.


Try it for yourself. I encourage you to think of something that feels just out of reach and ask an LLM about it. Tell it a story about why you want to do that thing and why you don't think you can. You might be surprised.