👾 Keep the Muscles That Matter in the Age of AI

Why effort, struggle, and reflection—not shortcuts—are the foundation of lasting knowledge.

The following is a guest post by Forward Future community member Mandy McLean, co-founder and CEO of ClassWaves.

We’re living in an age of effortlessness, where emails draft themselves, presentations condense into neat summaries, code completes before we’ve worked out the logic, and maps choreograph every turn and lane change. The daily frictions that once slowed us down are disappearing, one autocomplete at a time, and it feels like progress—because it is.

But every shortcut leaves something behind: durable knowledge is built on effort, and the evidence for that is overwhelming. Retrieval, generation, spacing, and testing (the unglamorous “desirable difficulties”) are the mechanisms that make learning stick. Decades of cognitive psychology research show that these difficulties create stronger, more transferable memories than passive review ever could (e.g., Bjork and Bjork). When systems strip away all struggle, they don’t just accelerate the task at hand; they risk short-circuiting the very processes that transform an answer into true understanding.

We can already see the pattern in everyday tech. When people know information will be available later, they tend to remember where to find it rather than the thing itself—a phenomenon psychologists call the “Google Effect.” Offloading knowledge isn’t inherently bad; libraries and laptops did it too. But it reshapes what lives in our heads, shifting recall from substance to source, and that means we have to plan around the trade-off. 

Navigation research tells the same story. When people rely on turn-by-turn satnav, the hippocampus and prefrontal cortex (the brain’s mapping and planning centers) show reduced activity compared with active navigation. By contrast, London taxi drivers who undergo the grueling training known as “The Knowledge” (memorizing 25,000 streets, thousands of landmarks, and hundreds of routes) demonstrate enlarged posterior hippocampi. Brain scans that followed drivers through this training found that only those who passed and earned their license showed measurable growth in hippocampal grey matter: direct evidence that the brain reshapes itself through repeated spatial problem-solving. “Use it or lose it” isn’t just a metaphor; it’s visible in brain plasticity. And again, this isn’t necessarily bad. If you’re comfortable outsourcing navigation to your phone, you don’t need to spend mental bandwidth memorizing routes—you’ve simply traded one kind of capacity for another.

Automation brings a third caution: the out-of-the-loop problem. In highly automated settings, from aviation to industrial control rooms, humans lose vigilance and situational awareness when they’re no longer actively engaged. Skills atrophy, attention drifts, and when the system fails, recovery is slower and more error-prone. This isn’t a new finding; decades of research show that over-reliance on automation erodes manual skill and judgment, and once you’re out of the loop it’s hard to get back in.

So what exactly do we want from AI? Speed on the grunt work? Yes. But for learning, we need structured effort and other people. You can’t bolt that into a UI without breaking what makes the tool useful. Let the software stay seamless, and put the effort where it belongs: in the human practices of reflection, debate, and manual reps. The tool can capture the outcome, but the growth has to happen outside it.

Six Weeks Back, Five Capacities Down

A Gallup-Walton survey made headlines this year: teachers who use AI weekly reclaim 5.9 hours per week—nearly six weeks across a school year. For a profession long frustrated by time pressure, that was hard to ignore. They even titled the report “Unlocking Six Weeks a Year With AI.” But what struck me was how this single finding on time savings became the only headline to circulate widely. I pulled together the top related news articles, from the Walton and Gallup originals to national and mainstream coverage, education outlets, and other coverage, and nearly all of them led with the time saved.

Let me explain why I was struck by this. I read the full report, not just the headlines, and what stayed with me wasn’t the six weeks of time saved. It was the chart tucked away on page 15.

That chart shows something much more sobering: majorities of teachers predict that if students use AI weekly, independent thinking (57%), critical thinking (52%), persistence (48%), the ability to build meaningful relationships (46%), and resilience (45%) will decrease. Only a thin slice thought these capacities might increase. Meanwhile, the only area where teachers foresaw meaningfully more gains than losses was grades, and we have to wonder whether that’s because AI is artificially inflating grades, not because students are truly building capacity.

What makes this even more jarring is how it was presented. This takeaway didn’t even make the header of the chart on page 15 of the 19-page report. Instead, the section was labeled: “Teachers who use AI have a more confident outlook on AI tools’ ability to improve student outcomes.” In other words, the report placed the most optimistic possible spin at the top, while the data immediately beneath it showed teachers anticipating losses in the very qualities schools exist to cultivate. That’s the disconnect: we’ve elevated the easy-to-market headline of time saved while burying the hard one, capacity lost.

Beyond Guardrails: Where the Real Work Happens

The instinct in Silicon Valley has been to build guardrails into the tools themselves in an attempt to restore the deeper learning, reflection, and judgment they know the technology can erode. If the risk is that AI makes things too easy, the thinking goes, then the fix must be to add friction: “study modes” that withhold answers, dashboards that force justifications, nudges that slow you down. You can already see this not just in edtech experiments, but also in workplace pilots and consumer apps. But here’s the problem: people come to tools for seamlessness. They don’t want sand in the gears. And when companies try to insert “desirable difficulties” into the interface, users often either work around them or abandon the feature entirely.

The deeper issue isn’t that the software is too smooth. It’s that we’re trying to outsource the parts of growth that never belonged on a screen in the first place: reflection, dialogue, perspective-taking, and the messy work of being with others. Those are not interface features; they are human practices.

In schools, the Gallup-Walton survey highlights the concern that students leaning on AI will weaken in independence, persistence, resilience, and critical thinking. In workplaces, the same pattern shows up as one-click AI suggestions flatten judgment and creativity, so the meeting ends faster while the team’s ideas get thinner. In our personal lives, AI companions promise frictionless relationships that are always listening, always affirming, and never demanding. But if digital friends absorb all our attention, we become less practiced at the slow, imperfect, and sometimes uncomfortable process of sustaining real friendships.

And at the level of society, the stakes are higher still. Civic life depends on discourse—on being able to hear another person’s point of view, tolerate disagreement, and work through conflict toward common ground. If AI serves us only smoothed, personalized feeds or agreeable chatbots, we risk losing the practice of grappling with real differences. A feed that always validates your worldview may feel efficient, but it leaves you unprepared for the rough edges of a town hall, a jury room, or even a neighborhood meeting. The same way GPS weakens your internal map of a city, algorithmic ease can weaken our shared maps of meaning, belonging, and democratic decision-making.

That doesn’t mean the tools are bad. In fact, they’re excellent at what they do best: removing administrative burden, smoothing paperwork, accelerating production, and buying back time. But if all we do is celebrate the hours saved while ignoring the capacities lost, we’ve missed the point.

The alternative is simple: let the tools stay seamless, and reinvest the time they save us in the work that can only be done with other people. In schools, that means more peer-to-peer debate and reflection. In workplaces, it means making space for structured dissent and human-led brainstorming. In personal life, it means logging off long enough to practice real friendship. And in civic life, it means deliberately engaging in forums, conversations, and disagreements where friction is the point—not a bug, but the feature that makes progress possible.

These practices don’t clutter the product, but they do strengthen the muscles that matter. They turn the AI dividend into something durable, not brittle.

Mandy McLean

Mandy McLean is the co-founder and CEO of ClassWaves, an AI-powered tool that helps teachers guide deeper student dialogue in real time. She spent over a decade studying how people learn, earning her PhD in education and statistics and leading research on working adult learners at Guild.

A former high school teacher, she’s committed to building technology that strengthens human connection in classrooms. Mandy lives in Colorado with her family and loves running, spending time in the mountains, and big questions.
