
AI doesn't level the playing field. It steepens it.




This dynamic is emerging in every industry adopting AI. Output volume rises for everyone, but the quality of judgment behind that output is separating those who are genuinely building something from those who are simply generating things. This is a framework for understanding that gap, what drives it, and what it demands of the humans who are supposed to be making the decisions.


The amplification effect


The intuition many people have about AI is that it's an equalizer, filling in gaps and reducing the advantage of expertise. There's a sense that it brings everyone to roughly the same level of capability. This is the wrong mental model. AI is not an equalizer. It is an amplifier. And amplifiers don't care what they're amplifying.


Give a skilled doctor AI-assisted diagnostic tools and you get faster, more accurate, better-documented diagnoses. Give a distracted administrator those same tools and you get confident-looking paperwork that might be wrong in ways no one checks. The tool is identical. The person behind it is not. And now the gap between their outputs, which used to be partially masked by the friction of slow, manual work, is wider and faster to produce than ever.


The consequences of poor judgment no longer wait. They arrive almost instantaneously.

This is the central tension of the AI adoption moment. Every industry is racing to integrate these tools because the productivity gains are real. But productivity gains without judgment gains produce more output, faster, with the same quality ceiling as before. Volume goes up, but accountability for what's in that volume becomes harder to trace.


The spectrum that AI reveals


In any room of people using AI to do their work, a spectrum is becoming visible. Every team has people operating at different levels of attention, rigor, and genuine engagement with their craft. But AI makes the spectrum impossible to ignore, because it removes the noise that used to obscure it.


When everything took longer and required more effort, the person who was going through the motions was partially hidden by the shared slowness of the work. Now, when everyone can produce a report in twenty minutes, the difference between a thoughtful report and a careless one is immediate and stark.


At one end sits the passive consumer: someone who accepts AI output as finished work, doesn't interrogate it, doesn't cross-check it, and doesn't apply domain judgment to it. They produce high volume. The integrity of that volume is low.


At the other end is the active collaborator: someone who treats AI as a thinking partner, brings genuine expertise to the conversation, questions the model's framing, and makes the final call from their own judgment. Their output is consistently better than either human or AI could produce alone.


The dangerous zone is not the person who refuses to use AI. They are simply slower, with the same challenges they had before. The dangerous zone is the passive consumer in a position of authority: someone who has adopted the tools, is producing work at volume, and has stopped reading what they're producing carefully enough to know whether it's right. In individual contributors, this creates quality problems. In decision-makers, it creates direction problems.


The specific risk: when AI's conceptual models become your strategy


There is a more serious problem than simply accepting bad AI output. It happens when the framing AI provides gets adopted wholesale as the strategic frame for a decision.


AI systems are trained on enormous volumes of existing human thought. That means they are, by nature, backward-looking pattern synthesizers. They are exceptionally good at producing the most plausible-sounding synthesis of what has been written and said before. They are structurally poor at generating genuinely novel frames, identifying what has never been tried, or questioning the premises of the field itself. I wonder, in fact, whether "generative" is a bit of a misnomer in this regard.


Consider what this looks like in practice. A leadership team asks an AI to help them think through their go-to-market strategy. The AI produces a well-structured response covering market segmentation, competitive positioning, channel strategy, and pricing tiers. It looks thorough. It uses all the right language. The team refines it and builds a plan around it.


What they may not have noticed: the AI's frame was drawn entirely from conventional playbooks. It never questioned whether the product should go to market in its current form. It didn't surface the possibility that the real opportunity was in a segment the team hadn't mentioned. It optimized within a frame the team had implicitly provided, and the team never stepped outside that frame to check whether the frame itself was right. The strategy was coherent, but the premise was unexamined.


This is not a failure of AI; it is doing exactly what it was designed to do. The issue is how humans relate to the output. When you outsource the framing of a problem to AI, you are not just getting help with the answer. You are getting a conceptual model of what the problem even is. And that model carries the biases, assumptions, and blind spots of everything it was trained on.


The question is never just "is this answer correct?" The deeper question is: "is this the right problem to be answering?"

For individual decisions, this matters. For organizational direction, it matters enormously: dozens of decisions, each made by a leader separately deferring to AI's framing, compound and propagate through a company. Organizations can drift into strategies shaped more by AI's probabilistic synthesis of convention than by the genuine insight and contrarian thinking that create competitive advantage.


What it looks like in practice: the two kinds of AI user


The difference between someone who is using AI well and someone who isn't is rarely visible at the point of output. Both produce documents. Both produce recommendations. Both can speak fluently about what AI told them. The difference lives in what happened between the prompt and the deliverable and whether a human mind was genuinely present in that space.


The passive user asks a question, reads the first answer, and moves on. They accept AI's framing of the problem without examining it. They edit for tone but don't interrogate the substance. They cannot explain the reasoning behind the AI's recommendation in their own words. They produce high volume and cannot defend the detail under questioning. When they are wrong, they are confidently wrong, because AI gave the conclusion an air of authority.


The active collaborator reads the AI's response critically and then asks follow-up questions that challenge it. They explicitly interrogate the framing: what the assumptions are, and what would have to be true for the recommendation to be wrong. They validate substance, not just prose. They can explain the recommendation without the AI's phrasing because they understood the reasoning, not just the conclusion. They use AI to move faster inside a frame they have independently evaluated. When AI is wrong, they catch it, because they know enough to recognize it.


The tell is always the same (outside of an overuse of em dashes): ask someone to explain, in their own words, why the recommendation is what it is. The active collaborator can do this. The passive consumer often cannot. They remember the conclusion but not the reasoning, because they let AI do the reasoning for them.


The stakes at the leadership level


The amplification dynamic is consequential for individual contributors. At the level of the people making decisions about the direction of organizations, industries, and policy, its implications are potentially civilizational.


Leaders in the AI era face a specific temptation: AI provides extremely fluent, extremely confident-sounding answers to hard questions. The harder the question, the more attractive it is to have something that sounds like a reasoned answer arrive in thirty seconds. The pressures to decide quickly, to appear decisive, and to have a position all push toward accepting that answer.


But hard questions are hard precisely because they require judgment that cannot be synthesized from prior text. They require someone to hold contradictions, weigh incommensurable values, read the specific context of a specific organization at a specific moment in time, and make a call that no historical dataset can fully inform.


The deeper risk is the gradual migration of human judgment to AI judgment across hundreds of small decisions, in ways that are individually invisible but collectively reshape what an organization actually values, believes, and pursues. AI's conceptual models, its implicit picture of what a good strategy looks like, what a well-run organization is, what constitutes a reasonable risk, were not chosen. They were absorbed from the distribution of text it was trained on. And that distribution has its own biases, its own blind spots, and its own picture of what is normal and what is possible.


If leaders are not actively maintaining their own conceptual models, those models will be replaced by AI's.


What healthy human-AI collaboration actually requires


I'm not making an argument against using AI. The productivity and capability gains are real and significant, and they will be a cornerstone of industry in the years ahead. The argument is about the posture you bring to the interaction, and about maintaining the habits of mind that careless AI use tends to erode.


Bring your frame before you get AI's frame. Before asking AI to help you think through a problem, write down what you think the problem is, what you think the constraints are, and what a good answer would look like. Then use AI. You will immediately be able to see where it agrees with you, where it extends your thinking, and where it has silently substituted a different frame for your own.


Validate reasoning, not just conclusions. The output of AI can look correct while the path to it is flawed. Develop the habit of asking AI to show its reasoning and then reading the reasoning, not just the answer. Ask what it is assuming. Ask what would have to be true for it to be wrong. These questions expose the structure underneath the fluent prose.


Maintain a position before you enter the conversation. Judgment requires something to judge from. If you approach every question as a blank slate waiting for AI to fill it, you have no basis for evaluating what you receive. The goal is not to be right before AI helps you; it is to have a considered position that AI can challenge, extend, or correct, and to have confidence in yourself and your own understanding before entering the dialogue. The difference between those two states is the difference between a collaborator and a consumer.


Preserve your ability to be wrong in your own way. When humans defer to AI's framing across industries and organizations, the diversity of approaches to hard problems narrows. A field in which everyone is using the same tool and asking similar questions will converge on similar answers. Some of the most important breakthroughs in any field came from people whose judgment led them somewhere the consensus said was wrong.


Make the decisions yours. When a decision is made, someone should be able to articulate why in their own reasoning, not in AI's language. This is not just good practice; it is the foundation of accountability. If the decision later proves wrong, the learning requires a human who understood the reasoning well enough to trace where it broke down. Organizations that cannot do this are not learning. They are producing outputs.


The real differentiation is not who uses AI. It's who remains present while using it.


The creative, the detail-oriented, the genuinely curious, the people who read what they produce and ask whether it is actually true, these people will use AI to do work that would have been impossible for them alone. They will be faster, more capable, more prolific, and the quality of their judgment will compound over time as they develop richer intuitions from the patterns they actively notice.


The inattentive, the credulous, the people who treat AI output as finished work, the decision-makers who accept AI's frame as their own, they will also produce more. They will produce it faster. And they will be wrong at scale, in ways that are harder to catch because the outputs look polished, the language is fluent, and the reasoning is buried inside a tool no one is interrogating. But the consequences will surface eventually, amplified in both speed and effect.


The path forward is not to slow down the adoption of AI. It is to deliberately cultivate the human capacities that AI use can erode: the habit of independent framing, the discipline of genuine validation, the intellectual honesty to say "I don't know if this is right" before shipping it, and the courage to hold a position that AI's synthesis of existing thought does not support.


AI can tell you what has been thought before. Only you can decide what should be thought next. That asymmetry is not a bug in the technology. It is a permanent feature of what human judgment is for.

The organizations, the industries, and the leaders who understand this will not just use AI better. They will build cultures of active, critical, accountable engagement with AI rather than passive consumption of it. They will remain the ones actually steering, while everyone else will be moving fast in a direction they did not fully choose.


