A few months ago (end of September), I made a difficult decision: I reset the company and pivoted away from our robotics effort.
I’m still convinced that robotics (machines performing open-ended tasks in open environments) will be one of the biggest markets in human history. But after working on it deeply, my conclusion is that the timing is wrong for us right now.
I’ll write a separate post explaining that decision. What matters is what happens next.
I went through a period of pivot hell. I tested a lot of directions. Most of them didn’t work.
But one conviction only got stronger: AI systems are getting dramatically more intelligent, yet we are still very bad at actually leveraging that intelligence to create value.
The value equation
The value created by AI systems today follows a simple equation:
Value = (AI capability) × (how much we actually leverage it)
For the last two years, AI capability has been growing exponentially. We live in a world where “Intelligence is on tap now” (Garry Tan).
Leverage is not.
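The gap between the two terms is easy to see with a toy calculation (illustrative numbers only, not data): if capability compounds but leverage stays flat, realized value is permanently capped by the leverage term.

```python
# Toy model of Value = (AI capability) x (how much we actually leverage it).
# All numbers are made up for illustration.

def value(capability: float, leverage: float) -> float:
    """Value created = raw capability times the fraction we actually use."""
    return capability * leverage

periods = range(5)
capability = [2.0 ** t for t in periods]                 # capability doubles each period
flat_leverage = [0.1] * len(capability)                  # leverage stuck at 10%
growing_leverage = [0.1 * 1.5 ** t for t in periods]     # leverage compounding too

flat = [value(c, l) for c, l in zip(capability, flat_leverage)]
grown = [value(c, l) for c, l in zip(capability, growing_leverage)]

print(flat)   # capability gains mostly wasted: value grows only as fast as capability
print(grown)  # the same capability curve, multiplied by improving leverage
```

With flat leverage, value in period 4 is 1.6; with modestly compounding leverage, it is 8.1, on identical capability. The multiplier is entirely the leverage term.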
The Human Bottleneck
I spent months building interfaces like voice, vision, and even EEG to try to increase this leverage ratio.
I built workflows for myself. I built agents to take work off my plate.
And I kept hitting the same wall: Everything still had to go through me.
Chatbots, assistants, and copilots are fundamentally interactive tools, upper-bounded by the human using them.
The moment it clicked for me was simple. Hiring the smartest employee in the world doesn’t help if you still have to tell them every step and check every move. The point of hiring is autonomy and escalation when needed.
That’s how organizations scale: we delegate work to autonomous units (people and teams) with clear boundaries, and we build escalation paths for when they hit limits.
AI will scale work the same way.
Not through more chat or UI hacks.
Through autonomy, workflows and escalation.
AI is not just a tool shift. It’s an organizational shift.
Historically, transformative technologies don’t just add productivity. They reshape the structure of organizations that can fully exploit them (see Ivan Zhang, Steam, Steel, and Infinite Minds).
AI is one of those technologies.
I believe the winning organizations of the next decade will have the following shape:
- Humans: A minimal number of high-agency generalists focusing on high-value direction.
- Systems: Massive scale of agentic workflows running asynchronously.
The winners will maximize leverage.
What this means for current organizations
Many companies are trying to “bring AI into existing workflows.”
A fraction of that will work. But most of it is the wrong approach.
I believe most current organizations won’t be able to adapt, due to cultural and structural limitations (see MIT’s “GenAI Divide” report: despite an estimated $30–40 billion in enterprise investment, 95% of organizations report zero measurable return on their GenAI initiatives).
The biggest step-change will come from new organizations designed around agents from day one.
That’s the opportunity I care about. I am building the infrastructure for the new economy. I don’t want to be “driving to the future via the rearview mirror” (Marshall McLuhan).
What this means for humans
We now have, as Dario Amodei puts it, “countries of geniuses in datacenters.”
Technical skills and raw intelligence used to be the main discriminators for success. They were never sufficient on their own, but because they were required and scarce, they became massive filters.
Mark Zuckerberg could not have created Facebook if he didn’t know how to write PHP and host it on a server. Were those abilities what made Facebook successful? Absolutely not. But without them, there was no Facebook at all.
Today, thanks to LLMs, most of those technical skills (and a lot of knowledge and raw intelligence) are one API call away. When intelligence is a commodity, it ceases to be the differentiator.
So what are the new discriminators? What talents should you grow and deploy to create value?
- Agency. The people who win will be the ones who just do things. Not wait for permission, not wait for perfect conditions, not wait for someone to tell them the next step. Initiative becomes the scarcest resource.
- Taste. Since you can now create so much, so fast, the ability to discriminate becomes critical. Which endeavors to start? Which outputs to keep? You need to become very good at estimating expected value, at sensing what will actually create value, what is beautiful, what is elegant. Taste is the new bottleneck. See OpenAI acquihiring Jony Ive.
- Moving beyond “inertial moats”. Here’s what’s changing: building complex systems used to require many intelligent people, and that requirement was a moat. Most of the time, the moat was simply writing software that worked. But it was never a very strong one. It was inertial rather than structural: a matter of time, and if you ran fast enough, you could stay ahead.
The individuals who thrive in this new world will be the ones with the agency to act, the taste to choose, and the willingness to build products that become exponentially better.
The product direction: onboarding agents like hiring employees
So what are we building?
A product that makes onboarding and managing AI agents as easy as hiring and managing humans, and that gets exponentially better with usage and with AI model progress.
A few core principles:
- Onboarding the tool is like onboarding a human
- Low-trust first, autonomy over time
- Asynchronous by default, so the organization can scale
- Humans don’t have to change their workflows; agents work where humans work
- The better the AI models get, the better the tool gets
Why now
Three reasons:
The tech
Models have crossed a threshold on tool use and reliability. They’re no longer just good at generating text. They can increasingly operate independently in real systems through tools.
The demand
People are eager to delegate. You see it everywhere:
- teams building increasingly complex automations with n8n or Gumloop
- individuals using coding agents (Cursor, Claude Code) for non-coding tasks
- workflows emerging organically even when the tooling is awkward
The emerging patterns
The software engineering world is one of the most advanced when it comes to AI adoption. First-person tools like Cursor and Claude Code are ubiquitous. Now the limits are what I described earlier: how to have agents working independently and in parallel, looping back with humans only when needed. The answer is third-person tools: products like Claude Code Web, Conductor, or Devin are pointing the way.
Won’t incumbents or AI labs do this?
They probably will, for some applications. But the market is almost infinite.
The space of “agents + context + workflows” is close to software itself in terms of size.
It’s like the early days of CPUs and assembly code: the software programs and products built on top of that foundation turned out to be almost infinite. There’s room for many (very big) winners.
What we’ll do next
I’m launching a first version of the product in the coming weeks.
The early focus is on individuals, startups and small teams. People who can move fast, adopt new workflows quickly, and design around agents instead of trying to retrofit them into legacy processes.
I’m also my own customer. I’m already using these systems daily, and I’m building from that lived constraint: the human cannot remain the bottleneck.
Conclusion
AI capability is compounding.
But the real unlock is leverage: turning capability into autonomous execution inside real organizations.
We are building the product to finally enable human organizations to leverage the raw power of AI systems.
If you want to be part of the new organization, register for early access by sending an email to [email protected].