AI and the value of thinking out loud

First, I am still conflicted about generative AI. It’s still a horrible, extractive, resource-intensive, opportunistic, hype-addled, broligarchy-enhancing opaque bullshit machine. And it’s still the elephant in every room, the sometimes-unspoken layer underneath every conversation, such that I can’t just pretend that it doesn’t exist. Hence the ongoing conflict.

The thinking-out-loud part was prompted by Audrey’s recent post on mirrors, awareness, and AI: A Better World Is Possible. Especially this bit:

But here’s what I think: most people don’t want “AI.” Most people are exhausted by the onslaught of technology “upgrades” that have consistently made everything worse.

We didn’t ask for generative AI. We’re not the ones a) spinning up unicorn companies to build and rent it, then b) hyping it beyond belief to over-inflate the capabilities and risks, to c) foment demand for the new product and make line go up so they can get more money to go back to a) and repeat until we’ve finished melting the ice caps because our energy grids are now pointed at sprawling new AI datacentres instead of decarbonization.

Which made me think of my own use of genAI, and whether there has been any value to it (beyond “literacy” and “awareness” so that I can hold informed conversations and help lead our approach to this stuff).

My own explorations of genAI have mostly been to make what are essentially disposable baubles: “haha look what I managed to get AI to do isn’t that interesting” trinkets that don’t actually get used (aside from a handful of things that have been genuinely useful). But - my process of using genAI in my work starts as some form of “here’s a draft of an idea - help me reformulate it a bit” and continues with varying versions of “no, not that. omg. not that either. wow, you don’t get this at all. hey - that gives me an idea…” and then I go and write the thing myself. The thinking out loud is what helps me formalize my thinking.

Using a chatbot forces you to think out loud in ways that many people don’t have as part of their regular practice. You have to articulate your half-formed thoughts, translate your internal muddle into something resembling coherent language, and then respond to what comes back - even when (especially when) what comes back is completely wrong.

This isn’t a new concept. In computer science, there’s the practice of “rubber duck debugging” - explaining your code line by line to an inanimate rubber duck until you spot the problem yourself. The duck doesn’t need to understand programming; it just needs to be there as a catalyst for your own thinking process.
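To make the duck concrete, here’s a toy sketch (the function and the bug are invented for illustration): the comments play the role of the line-by-line explanation, and it’s the act of articulating each line - not anything the duck says back - that surfaces the error.

```python
# A toy example of rubber duck debugging. The comments are the out-loud
# explanation to the duck; the duck contributes nothing, but narrating
# each line is what exposes the bug.

def discounted_total(prices, discount):
    """Sum a list of prices, then apply a percentage discount."""
    total = 0.0
    for price in prices:
        # "Okay, duck: for each price, I add it to the running total..."
        total += price
    # "...then I apply the discount by multiplying by the rate. So a 15%
    #  discount is total * 0.15... wait. That *keeps* 15%, it doesn't
    #  *remove* it." Saying the line out loud is what caught the bug.
    return total * (1 - discount)  # the fix: multiply by (1 - rate)


if __name__ == "__main__":
    print(discounted_total([10.0, 20.0], 0.15))  # 25.5, not 4.5
```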

Blogging has served a similar function for many of us. Writing in public becomes a way of thinking out loud, of making connections, of documenting not just conclusions but the messy process of getting there. The audience isn’t the point - sometimes it’s the act of articulation itself that does the work.

Natasha Kenny’s work on AI executive coaching gets at something similar. In her experiments, she’s found that the real value isn’t in the chatbot’s advice - which is often generic, surface-level, or just plain wrong. The value is in going through the process of articulating your questions, of making sense of the responses (even/especially the bullshit ones), and of figuring out what you really needed to know or do.

The AI “coaching” isn’t coaching at all. It’s a mirror that forces you to coach yourself.

If this is the case - if the value is in the process of thinking out loud rather than in the outputs - how can we better foster this practice without coming to rely on generative AI chatbots? How do we create structures and spaces for articulation, reflection, and iterative thinking that don’t require feeding the machine?

The machine might serve as a mirror, but it’s a funhouse mirror - distorted, artificial, reflecting back something that looks like understanding but isn’t. We see faces in toast; for the same reason, the machine feels more intelligent and real than it is. Real thinking out loud happens in relationship, in the generous and compassionate attention of another mind, in the back-and-forth that builds something neither person would have created alone.

And what really gets to me about all of this: for the billions of dollars spent every year building AI infrastructure and renting access to it all, we could have just hired actual people. As Audrey puts it:

I mean, we could actually hire 5 amazing graduate students for each classroom teacher if we had the will; but instead, powerful people want to replace human labor – all human labor – with machines. Not because it’s good. But because they profit.

Imagine what we could build with those billions of dollars invested in human connection, in genuine dialogue, in spaces for thinking together. Imagine writing circles, thinking partnerships, peer coaching networks, or simply better support for the kinds of reflective practices that help us make sense of our work and our world.

My conflicted relationship with AI has shown me how much I value the process of thinking out loud, of iterating ideas, of working through problems in dialogue. But it has also reminded me how much I’d rather do that work with other people.

And.

Our opportunities to really think out loud - and to think out loud together - can be rare. We have meetings, with agendas because that’s what meetings are. We talk about projects and problems and questions, but that’s not the same as the personal thinking-out-loud that happens when you’re talking with a trusted partner (or alone, talking into a chatbot interface). The thinking where it feels safer to have and share half-baked ideas, to take a risk on some outlandish idea that may not actually be useful - but who knows, maybe? And this is where chatbots have become useful - as a crutch, if nothing else - because it’s possible to have a low-resolution, distorted facsimile of that kind of process.

The conflict remains. But maybe that’s exactly where it should be - in the tension between what we could build and what we’re choosing to build instead, and in what we’re just abdicating ownership and agency over because there’s a shiny thing available.

Last updated: June 27, 2025