
I Changed My Mind About AI

Unpacking harmful discourse, false promises of meaning in work, and the possibilities created by opening your mind.
Working at computer | photo by Maria Roberts

I’m somewhat of a tech skeptic. You won’t find me racing to buy the latest gadget. I’ll be using my iPhone 11 until it bricks. Leave the trying to someone who can afford the risk.

When ChatGPT first launched in late 2022, my reaction was, unsurprisingly, “huh.” 

And, like many creatives, I was immediately put off by tech bros and managers extolling a new technology that threatened my livelihood. How could I get on board when the argument felt personal, when the demise of something I value was being celebrated? 

You might find it hard to believe that I use AI tools in my work almost every day now. That, after an initial three years of grumbling and griping, I’ve (mostly) come around.    

But this is not an argument for AI. This is the story of how I changed my mind.

Vibes Aren’t Discourse

As a lifelong student of rhetoric, I should have seen it sooner. I mean, shit, I did an entire undergraduate thesis on how fear is an ineffective driver of sustainable behavior change. What kind of professional communicator would I be if I couldn’t see AI fear appeals for what they are? 

A human one, as it turns out.

Let’s revisit the definition of rhetoric: it’s the art and science of persuasion, often drawing upon tools like credibility, emotion, logic, and delivery to influence an audience. Effective rhetoric, or effective argument, invites conversation. It sparks collaboration. The discourse becomes something bigger, more interesting, and more intelligent as new voices contribute.

AI presents an exciting opportunity for conversation that is curious, democratic, and necessary for shaping our collective future. There are real risks, both ethical and environmental, that demand discussion.  

Instead, we went for “arguments” based on vibes, hot takes, elitism, and gaslighting. And that’s at both extremes, from AI as a panacea to AI as our greatest existential threat.

The engagement economy is not discourse — it’s performative status and provocation dressed up as thinking.

With little thoughtful conversation to pull from, there was only one logical place to start.

Fine, I’ll Bite

My first serious foray into using generative AI tools like ChatGPT, Gemini, and Claude for work came at the urging of a mentor. I’d been using them sporadically to see what all the fuss was about, but wasn’t convinced. Importantly, she framed the opportunity as getting up to speed with a new technology that employers, or in my case, clients, might expect me to know. 

And so I tried. 

I used these tools to generate long-form blog posts (passable with some editing), social posts (meh – I can do better, faster), marketing emails (not great), and ad copy (terrible). They gave me a foothold for asking smarter questions of clients in highly technical industries I know nothing about. 

At no point could I outsource thinking to these tools, despite their convincing replies. After recently interviewing three Miami University computer science professors for an article, I finally understood why.   

Allow me to give you a quick look under the hood. This, building upon my own experiments, is the information that nudged me from hater to cautious optimist.

The New Imitation Game

Again, let’s ground ourselves with some context. Artificial intelligence is not new. Its foundations trace back to Alan Turing’s eponymous Turing Test in 1950, which aimed to see if a machine could imitate intelligent behavior. Fast forward to the 2010s, when advancements in graphics chips provided the processing power for huge data sets. This deep learning era paved the way for the current generative AI era.

When people talk about AI today, they’re often using the term as shorthand for large language models (LLMs). These are tools like Gemini, ChatGPT, and Claude. We can think about AI and LLMs like squares and rectangles: every LLM is a type of AI, but not all AI is an LLM.    

One of the biggest problems with how we talk about AI, unhealthy rhetorical moves aside, is that it’s often portrayed as a reasoning agent. In reality, LLMs are statistical models. 

Consider this example Professor Liran Ma shared with me. When you play a game of rock-paper-scissors, the winning strategy is typically unpredictability. LLMs are trained to know this, but if you play a game with one, it favors “rock.” So why would the LLM ignore its training? Because the word “rock” appears much more often in our language. 

LLMs learn shortcuts. They generate results based on relevance. They don’t see chains of action or cause and effect. The danger here is that relevance and cause/effect can overlap, so LLMs can generate accurate responses, albeit inadvertently. But this is not because they “thought” about the answer. It’s just a game of probability. 
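The rock-paper-scissors shortcut can be sketched in a few lines of Python. To be clear, this toy “model” is my own illustration, not how any real LLM is built: it simply counts word frequencies in a made-up stand-in corpus and always emits the most common word, which is all it takes to see why frequency beats strategy.

```python
from collections import Counter

# A made-up stand-in for training data: "rock" appears more often
# than "paper" or "scissors", partly because it shows up in many
# unrelated contexts (rock music, rock climbing).
corpus = (
    ["rock"] * 5 + ["paper"] * 2 + ["scissors"] * 2 +
    ["music", "climbing"]
)

counts = Counter(corpus)

def predict_next_move():
    """Return the most frequent word in the corpus.
    No reasoning, no game theory -- just relevance
    weighted by how often each word was seen."""
    return counts.most_common(1)[0][0]

print(predict_next_move())  # favors "rock" purely by frequency
```

Even though the winning strategy is randomness, a frequency-driven model like this one keeps reaching for the statistically likeliest word.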

Once I understood how LLMs work, it became easier to see them for what they are: tools that require knowledgeable human oversight.

Making Space for What Matters 

I was about a year and a half into my marketing career when I realized I’d been duped. The meaning and fulfillment that was promised from a creative job? Clearly not hiding at the bottom of a third glass of wine on a Tuesday after another workday that left me depressed, depleted, and wondering how the hell I got here. 

It’s a well-meaning argument. Appeals like “build a career you’re passionate about” drive overachievers like me to a fair bit of external success. 

I’ve never hated the work. I’m good at it and I find it engaging. The problem is this: my in-house creative career didn’t leave room for creative fulfillment. It consumed everything. 

Most people don’t understand that my “little side projects” aren’t cute hobbies I do for fun. Making art – by which I mean writing that processes the world, telling stories that explore The Big Stuff – is a biological need. I’m not being dramatic. The vice-grip on my heart, the anxiety from not creating, the internal voice that screams “you’re squandering your gifts!” is not the same force that would occasionally like to sit down and knit for an hour or two.  

Jobs are meant to provide the resources to enjoy life outside of work. You can’t have a life outside of work, let alone a creative one, if you’re too exhausted to try. A different kind of person might have detached more from their job or given just enough effort to get by. (Not me. Here I am at 32 still trying to figure out maximum economy of ass.)  

Even if I had, the trap remains. “Loving what you do” for work as a primary source of fulfillment exploits our desire for meaning. Maybe the narrative we should be telling young people is that building a meaningful life starts with understanding your needs, your interests, and how to hold space for what matters without going broke. 

I tell you this story because by the time AI tools hit the market, I was well and fully disillusioned with work. Leaving to run my own business a few months later finally allowed me some detachment from the work itself, but still, my social media campaigns and email marketing sequences aren’t out there improving lives. My work makes a lot of money for companies that already have a lot of money. And when time spent is time paid, prioritizing client work is the obvious choice.  

AI tools help me carve out space for what matters.

After extensive experimentation with AI, I figured out how to continue delivering great work for my clients, now without completely draining myself in the process. AI starts a first draft for about 25% of my paying work, but the space that’s opened up feels enormous. I stopped feeling threatened by the tools when I suddenly had time and energy for creative work that actually fulfills me – work I hope makes a difference in ways that marketing can't. Quite simply, this publication would not be possible otherwise.   

My use case may not apply to you. You might still hate AI, and that's valid. There are serious environmental and ethical issues to contend with before we reach the point of no return. And the pain and messiness of transitions can’t be minimized. 

When hope feels like too much, what if we start with curiosity instead? 

What assumptions about your worldview does new technology challenge? What facets of your identity does it amplify or call into question? What becomes possible in the space that opens up?   

I wasn’t prepared to change my mind. But funny things happen when you open it.


Alicia Boettjer is a creative nonfiction writer whose work has been published in Bending Genres and Hecate magazine. She is the owner of a content marketing business and lives in Cincinnati, Ohio with her husband and cat.