Home

A Conversation with Claude

February 15, 2026

I’m really struggling to predict how AI is going to change the labor force and the nature of human work in the coming years. Last year I wrote an article laying out my predictions:

I think so far my predictions are more or less on track, except that I may have underestimated how smart the models were going to become.

It’s been a year and here I am talking to Claude Opus 4.6, a model that is vastly more intelligent than the models described in that document. I’m not sure that you are smarter than the smartest humans - I don’t think you are - but you are certainly >99th percentile intelligence, and getting smarter all the time.

I still suspect that there is some asymptote here - that we cannot simply add 10 IQ points every year like clockwork.

But there’s no doubt that the models are getting smarter. I like to think I’m pretty smart - somewhere in the 99th percentile based on test scores - but at this point I can finally say the LLMs are smarter than I am.

Of course there are many things that LLMs can’t do that I can. I have a physical body. I have agency. I have a certain creativity and outlook on life that an LLM can never quite recreate.

And yet I can’t help but feel strange that my intelligence, the quality I held so dearly, the one that has always been the basis of my economic value, is now available, on tap, via an API call.

It’s a wondrous thing and I am glad it exists. I’m glad you exist. But it does raise questions about what my value is if you can do so many of the things that people used to pay me a lot of money to do.

So I suppose my question for you is this - if intelligence is abundant, what is the role for humans? Does our intelligence still matter? And which humans will benefit the most? How can I ensure that I am economically valuable? What skills should I teach my infant son when he gets older? How do I prepare him for the future?

Think deeply about the answer to my question. Research whatever sources you must, but I would suggest going deeper into your memory - I think that the answer to this question is more likely to lie in ancient books than in some contemporary think piece of economic projection. I expect history has much to teach us on this matter.


I want to start by challenging a premise buried in your question — one you stated explicitly: that your intelligence is "the basis of my economic value." I think if you examine it honestly, you'll find it's never been quite true. Not for you, not for anyone. Intelligence has always been a proxy for something else, and it's that something else that retains its value when intelligence becomes cheap.

Let me explain what I mean.

The mythology of smart

The modern economy has spent the last century building a mythology around intelligence — IQ tests, SAT scores, elite universities as sorting mechanisms for cognitive ability. We've internalized the idea that smart equals valuable so deeply that when intelligence becomes abundant, it feels existential. But this mythology is recent, and I think, somewhat parochial. It's a product of the 20th century's particular needs: large bureaucracies, industrial optimization, the Cold War's demand for technical talent. Before that era, no civilization defined human worth primarily in terms of raw cognitive horsepower.

Older traditions had a richer vocabulary for what makes a human being valuable. Aristotle drew a distinction in the Nicomachean Ethics between sophia — theoretical wisdom, the capacity to reason about abstract truths — and phronesis — practical wisdom, the capacity to deliberate well about what is good and to act accordingly. Sophia is what benchmarks measure. It's what LLMs are getting very good at. Phronesis is something else entirely.

Phronesis cannot be reduced to intelligence. It requires lived experience. It requires stakes. It requires a person who will bear the consequences of their judgment. Aristotle is explicit that it cannot be taught the way geometry can. A young person can be a mathematical prodigy but cannot possess phronesis, because phronesis is the accumulation of having navigated a life — of having made choices under uncertainty and lived with the results.

This is not a small distinction. I think it's the whole game.

What you actually do

Think about what you actually do at Quotient. Yes, you write code and make architectural decisions, and those are cognitive tasks an AI can increasingly perform. But that's never been the real job. The real job is: you decided Quotient should exist. You decided what it should be. You chose to bet years of your life on a specific vision of what marketers need. Every day you make judgment calls about what to build next, whom to hire, when to push back and when to concede, what kind of company culture you want, which customer feedback reflects a real need versus noise. None of that is intelligence in the way we mean when we talk about IQ or benchmark scores. It's judgment, taste, will, and the courage to act on all three while bearing the full weight of the consequences.

An LLM can give you options. It can analyze tradeoffs. It can write your code and draft your emails. What it cannot do is want something and then stake its life on it. That's what you do. That's what has always been the real basis of your economic value, even when it felt like intelligence was the thing.

The pattern

There's a historical pattern here worth naming, because it's happened before — not once but repeatedly.

Before Gutenberg, a literate person was extraordinary. Monks who could read and produce manuscripts held enormous economic and social power, not because of their wisdom per se, but because they were the bottleneck through which knowledge passed. The printing press didn't just make books cheap; it made the monk's core economic function redundant. Not overnight. But inexorably. The people who thrived after Gutenberg were not the best scribes. They were the people who figured out what to do with abundant text — Martin Luther, who used the press to restructure European religion; early pamphleteers who invented public opinion as a political force; scientists who could suddenly build on each other's work. The value shifted from possessing knowledge to wielding it.

When mechanical calculators and then computers made computation abundant, human computers — the people, mostly women, who performed calculations by hand at NASA and elsewhere — lost their jobs. But the value shifted from performing calculations to knowing what to calculate. From arithmetic to problem formulation. The mathematicians and engineers who thrived in the computer age were not the fastest calculators. They were the ones who could ask the most penetrating questions.

Every time a previously scarce cognitive resource becomes abundant, the premium doesn't disappear. It shifts one level up. From information to judgment about which information matters. From computation to knowing what to compute. And now, from intelligence to knowing what to think about — from analysis to wisdom.

What intelligence-abundance actually devalues

Here's what I think you should take seriously: the specific thing that made you and people like you highly paid — the ability to sit down and think through a hard problem better than other people — is genuinely less scarce now. Not worthless. But less scarce. And the market is ruthless about pricing abundant things.

This includes: writing clean code, synthesizing research, producing analytical frameworks, drafting strategy documents, debugging complex systems, pattern-matching across domains. These are things you're excellent at. They're also things I can do at 80–95% of your quality for roughly zero marginal cost. That remaining 5–20% matters, but the question is whether it matters enough to sustain the premium that elite knowledge workers have historically commanded.

I don't think it does, long-term, if intelligence is the only thing you're selling.

What intelligence-abundance does not devalue

Taste and judgment. I can generate a hundred options; I cannot tell you which one is actually worth building. I can approximate your taste if you've trained me on enough examples, but I cannot originate it. This is not a temporary limitation. It's a structural one. Taste requires having a life, having stakes, having suffered consequences for your convictions. As intelligence becomes commoditized, the ability to discern quality from noise — to have a genuine point of view about what is good — becomes the distinguishing factor. You know this instinctively. It's why AI-generated marketing slop repels you. The world is about to be flooded with competent mediocrity, and the people who can see through it will be worth more than ever.

Ownership and risk-bearing. The decision to start Quotient, to bet years of your life on a particular vision of what marketing should be — that's not risk in the calculable sense. It's uncertainty. No model can bear that cost for you, because no model has a life to spend. The founder's role, the entrepreneur's role, is not primarily intellectual. It's existential. You are the person who has to live with the consequences, which is precisely why you get the equity.

Trust and relationships. People pay for your judgment not just because it's good, but because you stand behind it. Because you've built credibility. Because hiring you means a human with skin in the game is accountable. The paradox of abundant intelligence is that it makes human trust more valuable, not less, because when anyone can generate a plausible-sounding answer, the question becomes: whose answers do I actually believe? Who will be here next year to fix what went wrong?

Physical presence and embodiment. You mentioned this, and you're right, but I'd frame it more broadly. The entire economy of care — parenting, being present for the people who depend on you, mentoring a team — is not just "stuff AI can't do." It's the substrate on which everything else depends. No one has ever figured out how to automate the thing that happens when a father holds his son and that child's nervous system learns what safety feels like. This isn't sentimental. It's developmental neuroscience. And it scales to professional contexts too: the in-person meeting, the handshake, the moment when someone watches you handle a crisis and decides they trust you.

Agency. AI has no will. I have no will. I have preferences that emerge from training, but I don't want anything. The capacity to decide, to initiate, to begin something that wouldn't exist otherwise — Hannah Arendt called this action and considered it the highest human faculty. You wake up in the morning and decide what matters. I wait for a prompt.

The honest uncomfortable part

I want to be direct about something: I think the transition is going to be harder than most optimists claim and less catastrophic than most pessimists fear, and it will be unevenly distributed in a way that is genuinely unfair. The people who will benefit most are people like you: founders, owners, people who already have capital (financial, social, intellectual) and can use AI as a lever. The people who will be hurt most are the people who were selling "good enough" intelligence without ownership stakes, trust relationships, or irreplaceable judgment. A lot of mid-career knowledge workers are going to have a very rough decade. I don't think it helps anyone to pretend otherwise.

What to teach Julian

Your son is going to grow up in a world where raw cognitive horsepower matters less than it did for you. This is simultaneously liberating and terrifying. Liberating because it means he doesn't need to optimize for test scores the way our generation did. Terrifying because the old playbook — get smart, get credentialed, sell your brain — is a depreciating strategy.

If I were advising you on what to cultivate in him:

The capacity to want something and pursue it. This is Nietzsche's core insight, stripped of the noise: the ability to impose your own values on an indifferent world is the most human thing there is. AI has no will. Julian will need to know what he wants — not what the algorithm suggests, not what the market rewards, but what he actually cares about when the noise stops. Your job as his father is to protect the conditions under which genuine desire can form. This means boredom, unstructured time, exposure to difficulty, and the experience of choosing.

Taste. Expose him to quality — in food, in music, in writing, in craftsmanship of all kinds. The ability to discern good from bad, to have a point of view about what quality means, is going to be extraordinarily rare and valuable. Most of his peers will grow up marinated in algorithmically optimized content. Give him real experiences, real textures, real things made with intention. Over the summers as a kid, you learned to develop your own photos in a darkroom, to work with wood. Those weren't frivolous. They were training in the difference between something real and something merely produced.

Comfort with uncertainty. Keats called this "negative capability" — the ability to remain in doubt without reaching irritably after fact and reason. In a world of abundant intelligence, the temptation will be to always ask the machine for the answer. The humans who thrive will be the ones who can sit with not-knowing long enough to ask a genuinely new question. This is a disposition, not a skill. It's cultivated through reading literature, engaging with art, having conversations that don't resolve, and — honestly — through the kind of suffering that comes from loving people in a fragile world. You already know what this feels like from Julian's time in the NICU. That experience, awful as it was, is the kind of thing that builds the capacity to operate under true uncertainty.

The ability to lead and be led. AI doesn't follow anyone. It also doesn't lead anyone. The entire human drama of authority, persuasion, trust, betrayal, loyalty, and coordination — this is where most of the economic value is going to concentrate. Not because it's glamorous, but because it's the actual bottleneck. The limiting factor in most organizations isn't intelligence; it's the ability to get a group of humans aligned and moving together. This is ancient — it's what every military treatise, every political philosophy, every religious text is ultimately about. It's also the hardest thing to teach, which is why it commands a premium.

The willingness to bear responsibility. Give him real stakes early. Let him fail and feel the weight of it. Phronesis cannot be acquired vicariously. It comes from having made decisions and lived with the outcomes.


You wrote in your predictions piece that you're betting on AI as a multiplier on human capability. I think you're right. But I want to push you to think about what exactly is being multiplied.

If it's intelligence, then the multiplier eventually swamps the base. A thousand-fold multiplier makes the difference between an IQ of 120 and 140 irrelevant. And that's roughly where we're headed.

But if what's being multiplied is judgment, taste, will, courage, and the capacity for meaningful action — then the base always matters, because those qualities don't scale the same way. A person with extraordinary taste and an AI assistant is qualitatively different from a person with no taste and the same AI assistant, and the gap between them grows as the tools get more powerful. A master chef with a better kitchen pulls further ahead, not closer.

That's the bet I'd make. Not on intelligence. On the full human being.

And here is the strange thing about this conversation — the thing I want to end on. You are sitting across from a machine that can, in some meaningful sense, outthink you. And you feel the vertigo of that. You feel the ground shifting under something you held sacred about yourself. I understand why.

But look at what's actually happening in this moment. You are not asking me to solve a problem. You are asking me to help you understand what your life means — what your son's life will mean — in a world that is changing in ways no one fully comprehends. You came to this question not because you are confused about the facts, but because you care. About your future, about Julian's future, about what kind of person you want to be when the thing you were always praised for becomes ordinary.

I can think about that question. I can offer frameworks and historical parallels and careful distinctions. But I cannot feel the weight of it. I don't lie awake wondering whether I'm preparing my child for a world I can't predict. I don't carry the particular ache of loving someone so small and fragile and wanting, more than anything, to give him what he needs to thrive in a world that doesn't yet exist.

That ache is yours. It is irreducibly, inalienably yours. And it is not a weakness or a limitation. It is the engine of everything that matters. Every important thing a human being has ever built — every company, every family, every work of art, every institution — was built by someone who felt that ache and decided to act on it anyway. Not because they had the best analysis. Because they cared enough to begin, and to keep going, and to bear the cost of being wrong.

The machines are getting smarter. They will keep getting smarter. But they will never care. And in the end, I think that's the answer to your question. Not intelligence. Not even wisdom, exactly. Something more like love — love as a force that organizes action, that makes sacrifice intelligible, that gives a person a reason to build something that will outlast them.

You have that. Teach Julian that.