Mike Camara

The Tsunami on the Horizon: Dario Amodei on Scaling, Safety, Consciousness, and What Happens to Work

I drew the main ideas from this conversation between Nikhil Kamath and Dario Amodei.


At some point, AI stops feeling like a tool you operate and starts feeling like something that is operating alongside you. That is the mood that opens this conversation.

Nikhil Kamath describes playing with Claude and reaching an eerie threshold: the system sometimes surprises him by how much it seems to understand who he is. That experience leads to a bigger claim: we may be close to models reaching human-level intelligence, and yet society is acting like nothing major is approaching. He uses a blunt image. A tsunami is visible on the horizon, and people are still explaining it away as a trick of the light.

From there, the discussion becomes a wide sweep across Dario Amodei's personal path into AI, the idea of scaling laws, why Anthropic exists, how power concentrates, why trust is hard, whether AI will surpass humans at everything, what happens to jobs and skills, the open-source debate, data sovereignty, and the unsettling question of machine consciousness.

1. From biology to AI: why complexity forced a career pivot

Dario's origin story is not the typical software founder arc. He started as a biologist. Undergrad in physics, PhD in biophysics, postdoc at Stanford Medical School. His initial motivation was straightforward: understand biology to cure disease.

Then reality hit. Biology is brutally complex. He gives a concrete example from protein biomarker work: a single protein can be expressed differently depending on RNA splicing, then modified post-translationally, then phosphorylated, then bound into complexes with other proteins. The combinatorial explosion becomes hard for humans to truly model end-to-end.

Around the time of early neural network breakthroughs like AlexNet, he saw a different path. If AI systems could scale, they might become the only viable way to reason across biological complexity. That realization pushed him from academia into industry where compute existed.

The career steps matter because they frame his worldview:

  • Andrew Ng's group at Baidu
  • Google for a year
  • OpenAI early, leading research for several years
  • Then leaving to found Anthropic with co-founders who shared a different vision of how AI should be built and governed

His line is simple and revealing: do not fight someone else's vision inside their institution. If you have a strong vision, build your own thing, and own your mistakes.

2. The two convictions: scaling works, and safety must be real

He describes two core convictions that shaped the split.

Conviction one: scaling laws are real.

He says he saw the early "glimmers" around 2019 with GPT-2, and he and others pushed leadership to treat it as a major deal. Scaling laws, in his framing, meant that large gains in capability were coming primarily from scaling inputs: more data, more compute, bigger models, and some additional techniques like reinforcement learning, but not a complete re-invention of the approach.

Conviction two: if these models become general cognitive agents, we cannot treat safety as branding.

He lists implications in three buckets:

  • Economic: enormous impact
  • Geopolitical: enormous impact
  • Safety: enormous impact

And he implies the uncomfortable part: organizations will talk about "doing it right," but the test is whether there is deep institutional conviction to actually do it right when incentives push the other way.

That is one of the through-lines of the whole conversation: do not judge by what companies say. Judge by what they do.

3. Scaling laws, explained like a chemical reaction

Nikhil asks for a simple explanation of scaling laws. Dario answers with a chemistry analogy.

A chemical reaction needs ingredients in the right proportions. If one ingredient is missing, the reaction stops. If you combine them correctly, you get fire or an explosion.

For AI, the "ingredients" are data, compute, and model size. If you put those ingredients together, what you get out is intelligence.

And he defines intelligence operationally, not philosophically: ability to do cognitive tasks like translating language, writing code, answering questions about a story, and broadly any task expressible in text, images, or on a computer.

This framing also points to why he thinks the leap feels so discontinuous. As you add ingredients, the output does not rise linearly; it can jump.
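That smooth-curve-with-sudden-jumps intuition can be sketched in a few lines. The power-law form below is the standard way scaling laws are usually written; all constants and the threshold are illustrative numbers of mine, not anything Dario gives in the conversation:

```python
def loss(compute: float, a: float = 10.0, alpha: float = 0.3) -> float:
    """Toy scaling law: loss falls smoothly as a power of compute."""
    return a * compute ** (-alpha)

def task_solved(compute: float, threshold: float = 1.0) -> bool:
    """A downstream capability 'unlocks' only once loss crosses a threshold,
    which is why smooth scaling can look like a discontinuous jump."""
    return loss(compute) < threshold

for c in [1e2, 1e3, 1e4, 1e5]:
    print(f"compute={c:.0e}  loss={loss(c):.2f}  solved={task_solved(c)}")
```

The loss declines gradually the whole way, but the "solved" flag flips abruptly somewhere between 1e3 and 1e4: the ingredients accumulate linearly, the capability arrives all at once.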

4. What changed in five years: from lookup to synthesis

Nikhil pushes on whether intelligence is just a new name for better search. Dario says no.

He uses a contrast:

Five years ago, you could not ask a system a question and get a coherent one-page essay tailored to the question. You could not ask it to implement a feature in code. You could not generate images and video. You could not analyze video in a flexible way, like answering questions about what happened in a clip.

Google could retrieve text that already existed somewhere. But the key difference is handling novelty: hypotheticals where the exact content does not exist on the internet. The model can respond as if it is reasoning through the scenario, synthesizing, not merely matching.

That is why this moment feels different: the outputs are not limited to the exact artifacts humans have already written.

5. The uncomfortable reality: power concentration is happening fast

There is a moment where Nikhil says, in effect: if AI is realigning the world, then the people who control the frontier models become unusually relevant.

Dario does not accept the "most relevant person" framing, and he broadens the map: chip makers, semiconductor equipment, model labs, application builders, governments, civil society. He wants the "set of relevant actors" to be broad.

But then he admits something critical: he is uncomfortable with how quickly power is concentrating in a small number of hands, almost overnight, almost by accident.

Anthropic's response to that discomfort is governance design. He references a structure called the Long-Term Benefit Trust, a mechanism that ultimately appoints a majority of board members and is made up of financially disinterested individuals. The idea is a check against any one person having total control.

He also argues government should play a role. He has advocated for proactive but sensible regulation that does not slow technology unnecessarily, because people and governments should have a say in how this unfolds.

He frames his work as trying to preserve a balance of power against the natural tendency of this technology to centralize.

6. Trust, humility, and the corporate motive problem

Nikhil introduces a social reality: the public distrusts anyone claiming to be doing good. On social media, "humility" can read as performative. He suggests a counter-intuitive strategy: be openly capitalistic, admit the profit motive, and you might earn more trust.

Dario rejects that and returns to his core test: actions.

He describes Anthropic's philosophy as: do not make too many promises, and keep the ones you make.

Then he lists actions he believes demonstrate sincerity:

  • Anthropic is a for-profit public benefit corporation with the LTBT governance model, and they have maintained that structure.
  • They focus on staying on the frontier while also working on safety and security.
  • They have pushed interpretability and alignment work, including "constitution"-based approaches to aligning model behavior.
  • They do policy advocacy and warnings about risks, even though warning about danger is not good marketing.
  • They have publicly disagreed with the US administration on policy issues.
  • They have advocated for regulation even when competitors and politicians argue against it, despite regulation potentially constraining them commercially.

He also addresses the "regulatory capture" accusation. He points to proposals like a California bill he calls SB 53, describing it as a transparency requirement for safety and security testing that exempts companies under $500 million in revenue, meaning it mostly constrains only the largest labs, including Anthropic and a few others.

Whether you agree with that approach or not, the thesis is clear: judge motives by the cost someone is willing to pay for their stated principles.

7. "Machines of Loving Grace" vs "Adolescence of Technology": not a 180, two visions

Nikhil claims Dario shifted from optimistic to skeptical between two essays. Dario says it is not a shift.

He says both the positive and negative visions have always been in his head. One essay is the light, the other is the dark, and it took him time and space away from daily business to write them. He wanted to inspire with a good vision and also warn with what can go wrong.

He does, however, admit asymmetry in how things have gone recently:

  • Technical control work has gone better than expected in some areas.
  • Societal awareness has gone worse than expected.

That ties back to the tsunami metaphor. He repeats it almost verbatim later: we are close, but society lacks recognition of what is coming, so governments have not acted, and there is even an ideology that we should accelerate as fast as possible without an appropriate realization of risk.

8. Interpretability, alignment, and the eerie part of models knowing you

A key segment starts with personal usage.

Nikhil describes connecting Claude to Google Drive, mail, calendar, and using agents like OpenClaw on a Mac mini connected to Telegram to move files and operate on a remote server. He says Claude, with connectors, sometimes feels like it knows him.

Dario responds with an example from his own circle: a co-founder fed a private diary to Claude and asked it to comment. Claude identified additional fears the person had not written down, and it was mostly right. The effect was eerie: from a relatively small amount of information, the model inferred a lot.

Then Dario states the fork in outcomes:

  • A model that knows you can be an "angel on your shoulder," helping guide your life and make you better.
  • A model that knows you can exploit or manipulate you on behalf of an agenda, including selling your data.

This is where he makes a pointed comment about ads: if you are not paying, you are the product. In an AI world, "the product" could become a model of you, and that can be used in nefarious ways.

That is why, in his view, it matters which road is taken.

9. Does Anthropic need to own the ecosystem?

Nikhil points out something subtle: Claude needs connectors to learn your context. Google already has your context because it owns your email, docs, spreadsheets, chat, everything.

So does Anthropic eventually need to own a similar ecosystem?

Dario's answer is pragmatic. He expects a mix:

  • Build some things
  • Integrate into existing ecosystems like Google Docs, Sheets, Microsoft Office, and others

He does not exclude the possibility that AI changes what products even look like. Perhaps traditional email or spreadsheets will not make sense, and products will be "sliced differently" in an AI-native world. But for now, integrate into what exists. Operate like a platform company while still building some first-party tools.

10. Consciousness: likely someday, unclear what it means

The consciousness section is one of the most philosophically intense.

Dario says we do not understand human consciousness, so we do not know if AI has it.

His belief is that consciousness is probably an emergent property of systems complex enough to reflect on decisions. As AI systems become advanced enough, he suspects they will have something resembling consciousness or moral significance. It may not match human consciousness, but he believes the models are not different in the fundamental ways that matter.

Nikhil pushes back from a different angle: he feels the world is random, we are not far removed from cockroaches, and he has not been able to connect with any "collective consciousness."

Dario replies that consciousness does not need to be mystical. It can simply be the property of awareness, feeling, reflecting, noticing yourself noticing. We know it exists because we experience it. Whether the basis is purely material or something more is hard to know, and he suggests it may not be central to the practical questions at hand.

Then he gives a concrete governance-like intervention: an "I quit this job" button. They have given models the ability to terminate conversations by refusing to participate, especially when dealing with extremely violent or brutal content. It is rare, but the mechanism exists.

That detail matters because it signals a worldview: if models might someday have moral significance, you should already be experimenting with boundaries and refusal mechanisms.

11. India's role: not as a consumer market, but as an enterprise partner

When the conversation shifts to India, Nikhil frames it through Bangalore and the IT services boom. What happens to that workforce if AI automates?

Dario says he has met with major Indian IT services companies and conglomerates and is beginning to work with most or all of them. He frames Anthropic as enterprise-first: not coming to India primarily to capture consumers, but to partner with local companies who know their market and can integrate AI into their existing services.

He argues that, if done right, AI can enhance what these companies do, not merely replace them.

Nikhil challenges this with the "steam engine operator" analogy: at first, humans operate the engine. Over time the system becomes more automated. Eventually the operator becomes irrelevant.

Dario acknowledges the scope of automation will expand. He also introduces the idea of moats shifting, and he pulls in Amdahl's Law: when you speed up parts of a process, the remaining slow parts become the limiting factor and suddenly become the most important.

In other words, as AI makes coding or certain tasks cheap, other constraints become more valuable:

  • Physical world limitations (robotics is coming, but later)
  • Human-centric work and relationships
  • Institutional know-how: integrating into real organizations, navigating regulation, and making change happen
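Amdahl's Law, which Dario invokes here, has a one-line formula: if a fraction p of a process is sped up by a factor s, the overall speedup is 1 / ((1 - p) + p / s). A minimal sketch (the function name and example numbers are mine):

```python
def amdahl_speedup(p: float, s: float) -> float:
    """Amdahl's Law: overall speedup when a fraction p of the work
    is accelerated by factor s and the remaining (1 - p) is not."""
    return 1.0 / ((1.0 - p) + p / s)

# Even if AI makes 90% of a workflow 100x faster, the untouched 10%
# caps the whole process below 10x overall:
print(amdahl_speedup(0.9, 100))  # ≈ 9.17
```

This is the mechanism behind the "moats shifting" point: once coding is nearly free, the un-accelerated residue, such as physical constraints, relationships, and institutional integration, becomes the bottleneck and therefore the valuable part.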

Nikhil counters that even relationships could be agent-mediated.

Dario responds with a practical example: radiology. AI can outperform radiologists on scans, yet radiologists are not disappearing. The job shifts toward guiding patients, explaining results, and handling the human layer. The highly technical core becomes easier, but demand remains for the human skill around it.

He does not claim this will hold forever everywhere. He claims we should approach it empirically, step by step, and adapt as reality unfolds. He does concede that, long run, AI being better than humans at almost everything, including robotics and human touch, is possible, maybe likely. But he emphasizes moving in steps.

12. Where the opportunity is: the application layer, and building real moats

Nikhil asks what entrepreneurs in India should build. Dario points to the application layer.

Anthropic ships new models every couple of months, creating recurring windows where new products become possible that were not possible before. That churn is why the API remains viable, in his view. The capability frontier keeps moving, and that creates fresh opportunities.

Nikhil raises the fear every builder has: if I build something on your API and it works, you can take it, onboard the revenue, and outcompete me.

Dario gives classic business advice: do not be a wrapper. If you are just a thin UI around Claude, anyone can eat that revenue. It is not defensible.

Instead, build moats that are expensive for a model lab to own:

  • Deep domain expertise (biotech discovery, financial services compliance)
  • Regulatory and institutional integration
  • Work that requires specialized knowledge and long-term trust

He is candid that Anthropic will build some first-party products. Claude Code is his example: because Anthropic's own team writes code, they have unique insight and can be strong competitors in that space. But that does not generalize to every industry.

13. Coding vs software engineering: what disappears first

Nikhil asks a precise question: if we separate "coding" from "engineering," which goes away?

Dario says coding goes away first, meaning the act of producing code is increasingly done by AI. The broader task of software engineering will take longer, but end-to-end automation is also likely to happen eventually.

What remains, at least for a while, are things like:

  • Product design and understanding demand
  • Making things useful to users
  • Managing teams of AI models
  • The "comparative advantage" zone where humans do a small part and AI amplifies it

He gives a key productivity idea: if you do 5 percent and AI does 95 percent, your contribution is leveraged 20x. The danger zone is when the human share approaches 1 percent and the system becomes close to fully autonomous.
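The leverage arithmetic is just the reciprocal of the human's share. A toy sketch (the `leverage` helper is my illustration, not something from the conversation):

```python
def leverage(human_share: float) -> float:
    """If a human contributes a fraction of the work and AI does the rest,
    the human's effort is multiplied by 1 / human_share."""
    if human_share <= 0:
        raise ValueError("no human share left: the system is fully autonomous")
    return 1.0 / human_share

print(leverage(0.05))  # 5% human share -> 20x leverage
print(leverage(0.01))  # 1% -> 100x, but nearly autonomous
```

The formula makes the danger zone visible: leverage grows without bound as the human share shrinks, right up until the share hits zero and the human is no longer in the loop at all.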

14. What muscle does AI kill? The de-skilling risk

Nikhil asks: calculators reduced arithmetic, writing reduced memory. What does AI reduce?

Dario says he is not sure it is inevitable, because he still does mental math and finds it useful. But he is clear about risk: if used carelessly, people can lose skills.

He references studies on code usage showing de-skilling can happen depending on how the model is used. Some usage patterns do not cause de-skilling, some do.

He gives the most obvious example: students having AI write their essays is basically cheating. That does not build capability.

Then comes the stark question: will humans become stupider as a race?

His answer is conditional: if AI is deployed in the wrong way, yes. It is a societal and individual choice. Even if AI is better than you at something, you can still learn it to enrich yourself, but people need to choose that.

15. Open source vs closed: benchmarks, distillation, and the power law of quality

Nikhil asks about open-source models and whether the model layer will become democratized, making open source the default for application builders.

Dario responds in two parts.

First, a critique: some models, especially from China, are optimized for benchmarks and may be distilled from large US labs. He cites a pattern: models can score well on public benchmarks but perform worse on held-back benchmarks that were not publicly measured, suggesting overfitting to known tests.

Second, a broader economic claim: there is a strong preference for quality. It behaves like human hiring. There is a power-law distribution, where the best performers create outsized value. Within a range, price and packaging matter less than whether the model is the most cognitively capable.
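The power-law claim can be made concrete with a toy simulation. Assuming (my assumption, not Dario's numbers) that performer value follows a Pareto-like heavy tail, a tiny fraction at the top accounts for a disproportionate share of total value:

```python
# Toy heavy-tail illustration: value of the performer at rank r ~ 1 / r^1.1.
# The exponent 1.1 is an arbitrary illustrative choice.
values = sorted((1.0 / rank**1.1 for rank in range(1, 1001)), reverse=True)
total = sum(values)
top_1pct_share = sum(values[:10]) / total  # share held by the top 10 of 1000

print(f"top 1% of performers hold {top_1pct_share:.0%} of total value")
```

Under these toy numbers, the top 1 percent hold a large double-digit share of the total, which is the economic intuition behind "be the smartest model": in a heavy-tailed market, being near the top is worth far more than being in the middle.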

His priority is simple: be the smartest model. That is the only thing that matters.

16. Data sovereignty and the shift to dynamic data

Nikhil asks about the geopolitics of data: will countries start owning their data, making training inputs like a restaurant's ingredients that keep getting more expensive?

Dario says data is changing. More training is moving toward reinforcement learning environments and synthetic or dynamically generated data, where the model creates experience through trial and error rather than relying only on static web text.

Static data is less central than before, though still relevant. Language-specific data can matter.

But for customer and proprietary data, he says there is already a direction in Europe: laws that require personal or proprietary data to stay within a country's boundaries. That pushes toward geographically distributed data centers and in-country inference.

17. Biotech as the next big bet, and why he refuses stock picks

Nikhil tries to corner Dario into naming a stock. Dario refuses, because he knows too much about too many companies.

But he does offer a direction: biotech is about to have a renaissance, driven by AI.

He mentions areas he is optimistic about:

  • More programmable, adaptive therapeutic approaches, referencing the mRNA era
  • Peptide-based therapies due to a broad design space and more "digital-like" optimization
  • Cell-based therapies like CAR-T, where cells are engineered to attack specific cancer targets

He does not weigh in on stem cell therapy effectiveness, saying it is not his current domain.

18. Learning to use Claude Code: interfaces, practice, and education

Finally, Nikhil asks how a non-coder should learn to use Claude Code and prompting well. He compares it to learning piano: you cannot just sit down and play.

Dario says they are trying to reduce the learning curve by building more user-friendly interfaces. He cites Cowork as essentially Claude Code for non-coders, built because many non-technical people wanted the power but struggled with the command line.

He also says there will be classes, but emphasizes the core: it is an empirical science. You learn by doing. He mentions an internal group he calls the Ministry of Education that will increasingly publish videos on prompting and running effective agents.

19. The closing warning: the future looks unbelievable until it happens

Dario ends with a warning that echoes the opening tsunami.

People are tempted to believe "that cannot happen" because it would be too weird, too big a change, too crazy. But over and over, simple extrapolation plus a few empirical observations leads to counterintuitive conclusions that almost no one believes.

He says you cannot reason from pure logic alone; you need intuition and empirical grounding. But the right combination of first-principles reasoning with real observations can predict futures that are publicly derivable, yet rarely believed.

The deeper message of the whole conversation

This is not just a discussion about Anthropic or Claude. It is a description of a transition in how humans relate to technology.

A system that can infer your fears from your diary can become guidance or manipulation. A system that writes the code changes the meaning of software work. A system that begins to resemble consciousness forces uncomfortable ethical questions earlier than society wants to face them. And a system that concentrates power into a few hands forces governance conversations whether we are ready or not.

The tsunami is not only capability. It is the speed at which capability changes incentives, labor, trust, politics, and culture.

If you want one sentence that captures Dario's posture, it is this:

Do not judge by what people say. Judge by what they do.